Dissertations / Theses on the topic 'Computational science'




Consult the top 50 dissertations / theses for your research on the topic 'Computational science.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Spagnuolo, Carmine. "Scalable computational science." Doctoral thesis, Università degli Studi di Salerno, 2017. http://hdl.handle.net/10556/2581.

Full text
Abstract:
2015 - 2016
Computational science, also known as scientific computing, is a rapidly growing field that uses advanced computing to solve complex problems. This discipline combines technologies, modern computational methods and simulations to address problems too complex to be reliably predicted by theory alone and too dangerous or expensive to be reproduced in laboratories. Successes in computational science over the past twenty years have driven the demand for supercomputing, both to improve the performance of the solutions and to allow models to grow in size and quality. From a computer scientist's perspective, it is natural to distribute the computation required to study a complex system among multiple machines: it is well known that the speed of single-processor computers is reaching physical limits. For these reasons, parallel and distributed computing has become the dominant paradigm for computational scientists who need the latest computing resources to solve their problems, and scalability has been recognized as the central challenge of the field. This dissertation discusses the design and implementation of frameworks, parallel languages and architectures that improve the state of the art in Scalable Computational Science. Frameworks. The proposal of D-MASON, a distributed version of MASON, a well-known and popular Java toolkit for writing and running Agent-Based Simulations (ABSs). D-MASON introduces a framework-level parallelization so that scientists who use the framework (e.g., domain experts with limited knowledge of distributed programming) need to be only minimally aware of the distribution. D-MASON has been under development since 2011; the main purpose of the project was to overcome the limits of MASON's sequential computation by using distributed computing. D-MASON makes it possible to go beyond MASON in terms of simulation size (number of agents and complexity of agent behaviours), and it also reduces the running time of simulations written in MASON. For this reason, one of the most important features of D-MASON is that it requires only a limited number of changes to MASON code in order to execute simulations on distributed systems. D-MASON, based on the Master-Worker paradigm, was initially designed for heterogeneous computing in order to exploit unused computational resources in labs, but it also runs on homogeneous systems (such as HPC systems) as well as cloud infrastructures. The architecture of D-MASON is presented in the following three papers, which describe all D-MASON layers: • Cordasco G., Spagnuolo C. and Scarano V. Toward the new version of D-MASON: Efficiency, Effectiveness and Correctness in Parallel and Distributed Agent-based Simulations. 1st IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems. IEEE International Parallel & Distributed Processing Symposium 2016. • Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. Bringing together efficiency and effectiveness in distributed simulations: the experience with D-MASON. SIMULATION: Transactions of The Society for Modeling and Simulation International, June 11, 2013. • Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. A Framework for distributing Agent-based simulations.
Ninth International Workshop Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms of the Euro-Par 2011 conference. Much effort has been devoted, in the Communication Layer, to improving communication efficiency on homogeneous systems. D-MASON is based on the Publish/Subscribe (PS) communication paradigm and uses a centralized message broker (based on the Java Message Service standard) to deal with heterogeneous systems. Communication on homogeneous systems is also based on PS but uses the Message Passing Interface (MPI) standard. In order to use MPI within Java, D-MASON relies on a Java binding of MPI. Unfortunately, this binding is relatively new and does not provide all MPI functionalities. Several communication strategies were therefore designed, implemented and evaluated. These strategies were presented in two papers: • Cordasco G., Milone F., Spagnuolo C. and Vicidomini L. Exploiting D-MASON on Parallel Platforms: A Novel Communication Strategy. 2nd Workshop on Parallel and Distributed Agent-Based Simulations of the Euro-Par 2014 conference. • Cordasco G., Mancuso A., Milone F. and Spagnuolo C. Communication strategies in Distributed Agent-Based Simulations: the experience with D-MASON. 1st Workshop on Parallel and Distributed Agent-Based Simulations of the Euro-Par 2013 conference. D-MASON also provides mechanisms for the visualization and gathering of data in distributed simulations (available in the Visualization Layer). These solutions are presented in the paper: • Cordasco G., De Chiara R., Raia F., Scarano V., Spagnuolo C. and Vicidomini L. Designing Computational Steering Facilities for Distributed Agent Based Simulations. Proceedings of the ACM SIGSIM Conference on Principles of Advanced Discrete Simulation 2013. In distributed ABSs, one of the most complex problems is partitioning and balancing the computation. D-MASON provides, in the Distributed Simulation Layer, mechanisms for partitioning and dynamically balancing the computation. D-MASON uses a field partitioning mechanism to divide the computation across the distributed system. Field partitioning provides a good trade-off between load balancing and communication effort. Nevertheless, many ABSs are not based on 2D or 3D fields but on a communication graph that models the relationships among the agents. In this case the field partitioning mechanism does not ensure good simulation performance. Therefore D-MASON also provides specific mechanisms to manage simulations that use a graph to describe agent interactions. These solutions were presented in the following publication: • Antelmi A., Cordasco G., Spagnuolo C. and Vicidomini L. On Evaluating Graph Partitioning Algorithms for Distributed Agent Based Models on Networks. 3rd Workshop on Parallel and Distributed Agent-Based Simulations of the Euro-Par 2015 conference. The field partitioning mechanism, intuitively, enables the one- and two-dimensional partitioning of a Euclidean space; this approach is also known as uniform partitioning. In some cases, however, e.g. simulations of urban areas using a Geographical Information System (GIS), uniform partitioning degrades simulation performance, due to the unbalanced distribution of the agents on the field and consequently on the computational resources. In such cases, D-MASON provides a non-uniform partitioning mechanism (inspired by the Quad-Tree data structure), presented in the following papers: • Lettieri N., Spagnuolo C. and Vicidomini L.
Distributed Agent-based Simulation and GIS: An Experiment With the dynamics of Social Norms. 3rd Workshop on Parallel and Distributed Agent-Based Simulations of the Euro-Par 2015 conference. • G. Cordasco, C. Spagnuolo and V. Scarano. Work Partitioning on Parallel and Distributed Agent-Based Simulation. IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems of the International Parallel & Distributed Processing Symposium, 2017. The latest version of D-MASON provides a web-based System Management layer, to better use D-MASON on cloud infrastructures. D-MASON on the Amazon EC2 cloud infrastructure was compared, in terms of speed and cost, against D-MASON on an HPC environment. The results obtained, and the new System Management Layer, are presented in the following paper: • M. Carillo, G. Cordasco, F. Serrapica, C. Spagnuolo, P. Szufel, and L. Vicidomini. D-Mason on the Cloud: an Experience with Amazon Web Services. 4th Workshop on Parallel and Distributed Agent-Based Simulations of the Euro-Par 2016 conference. Parallel Languages. The proposal of an architecture that enables code running on a Java Virtual Machine (JVM) to be invoked from code written in C. Swift/T is a parallel scripting language for programming highly concurrent applications in parallel and distributed environments. Swift/T is a reimplementation of the Swift language, with a new compiler and runtime. Swift/T improves on Swift by allowing scalability beyond 500 tasks per second, load balancing, distributed data structures, and dataflow-driven concurrent task execution. Swift/T offers an interesting feature: the ability to call other languages (such as Python, R, Julia and C) easily and natively through special language functions named leaf functions. Considering the current trend of some supercomputing vendors (such as Cray Inc.) to support Java Virtual Machines (JVMs) on their processors, it is desirable to provide methods to call Java code from Swift/T as well. In particular, it is very attractive to be able to call JVM scripting languages such as Clojure, Scala, Groovy and JavaScript. For this purpose a C binding to instantiate and call a JVM was designed. This binding is used in Swift/T (since version 1.0) to develop leaf functions that call Java code. The code is publicly available on the project's GitHub page. Frameworks. The proposal of two tools that exploit the computing power of parallel systems to improve the effectiveness and efficiency of Simulation Optimization strategies. Simulation Optimization (SO) refers to techniques for ascertaining the parameters of a complex model that minimize (or maximize) one or more given criteria, which can only be computed by performing a simulation run. Due to the high dimensionality of the search space, the heterogeneity of the parameters, and the irregular shape and stochastic nature of the objective evaluation function, the tuning of such systems is extremely demanding from a computational point of view. The first framework is SOF: Zero Configuration Simulation Optimization Framework on the Cloud, designed to run SO processes in the cloud. SOF is based on the Apache Hadoop infrastructure and is presented in the following paper: • Carillo M., Cordasco G., Scarano V., Serrapica F., Spagnuolo C. and Szufel P. SOF: Zero Configuration Simulation Optimization Framework on the Cloud. Parallel, Distributed, and Network-Based Processing 2016.
The second framework is EMEWS: Extreme-scale Model Exploration with Swift/T, designed at Argonne National Laboratory (USA). EMEWS, like SOF, makes it possible to perform SO processes on distributed systems. Both frameworks are mainly designed for ABS. In particular, EMEWS was tested using the ABS toolkit Repast. Initially, EMEWS was not able to execute simulations written in MASON and NetLogo out of the box. This thesis presents new functionalities of EMEWS and solutions to easily execute MASON and NetLogo simulations on it. The EMEWS use cases are presented in the following paper: • J. Ozik, N. T. Collier, J. M. Wozniak and C. Spagnuolo. From Desktop To Large-scale Model Exploration with Swift/T. Winter Simulation Conference 2016. Architectures. The proposal of an open-source, extensible architecture for the visualization of data in HTML pages, exploiting distributed web computing. Following the Edge-centric Computing paradigm, data visualization is performed on the edge side, ensuring data trustworthiness, privacy, scalability and dynamic data loading. The architecture has been exploited in the Social Platform for Open Data (SPOD). The proposed architecture has also appeared in the following papers: • G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. A Scalable Data Web Visualization Architecture. Parallel, Distributed, and Network-Based Processing 2017. • G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An Architecture for Social Sharing and Collaboration around Open Data Visualisation. In Poster Proc. of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing 2016. • G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An extensible architecture for an ecosystem of visualization web-components for Open Data. Maximising interoperability Workshop: core vocabularies, location-aware data and more, 2015. [edited by author]
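The C binding for instantiating and calling a JVM mentioned above (the mechanism behind the Swift/T leaf functions that reach Java code) rests on the standard JNI invocation API. The following is a minimal, generic sketch of that mechanism only, not the actual Swift/T binding; the class HelloFromJava and its static run() method are hypothetical placeholders.

    /* Minimal sketch of launching a JVM from C via the JNI invocation API.
     * Not the Swift/T binding itself; the Java class used here is hypothetical.
     * Build against the JDK's jni.h and link with -ljvm. */
    #include <jni.h>
    #include <stdio.h>

    int main(void) {
        JavaVM *jvm;
        JNIEnv *env;
        JavaVMInitArgs vm_args;
        JavaVMOption options[1];

        options[0].optionString = "-Djava.class.path=.";   /* where to look for classes */
        vm_args.version = JNI_VERSION_1_8;
        vm_args.nOptions = 1;
        vm_args.options = options;
        vm_args.ignoreUnrecognized = JNI_FALSE;

        /* Instantiate the JVM inside the current C process. */
        if (JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args) != JNI_OK) {
            fprintf(stderr, "could not create JVM\n");
            return 1;
        }

        /* Look up a (hypothetical) class and call its static run() method. */
        jclass cls = (*env)->FindClass(env, "HelloFromJava");
        if (cls == NULL) {
            (*env)->ExceptionClear(env);    /* class not found: clear and continue */
        } else {
            jmethodID mid = (*env)->GetStaticMethodID(env, cls, "run", "()V");
            if (mid != NULL)
                (*env)->CallStaticVoidMethod(env, cls, mid);
        }

        (*jvm)->DestroyJavaVM(jvm);
        return 0;
    }

A wrapper of this kind is what lets a dataflow language hand work to JVM scripting languages without leaving the native process.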
APA, Harvard, Vancouver, ISO, and other styles
2

Cushing, Judith Bayard. "Computational proxies : an object-based infrastructure for computational science /." Full text open access at:, 1995. http://content.ohsu.edu/u?/etd,195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Brogliato, Marcelo Salhab. "Essays in computational management science." Repositório Institucional do FGV, 2018. http://hdl.handle.net/10438/24615.

Full text
Abstract:
This thesis presents three specific, self-contained scientific papers in the area of Computational Management Science. Modern management and high technology interact in multiple, profound ways. Professor Andrew Ng tells students at Stanford's Graduate School of Business that "AI is the new electricity", his hyperbolic way of emphasizing the transformational potential of the technology. The first paper is inspired by the possibility that there will be some form of purely digital money and studies distributed ledgers, proposing and analyzing Hathor, an alternative architecture towards a scalable cryptocurrency. The second paper may be a crucial item in understanding human decision making, perhaps bringing us a formal model of recognition-primed decisions. Lying at the intersection of cognitive psychology, computer science, neuroscience, and artificial intelligence, it presents an open-source, cross-platform, and highly parallel framework for Sparse Distributed Memory and analyzes the dynamics of the memory together with some applications. Last but not least, the third paper lies at the intersection of marketing, diffusion of technological innovation, and modeling, extending the famous Bass model to account for users who, after adopting the innovation for a while, decide to reject it later on.
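For context, the classical Bass diffusion model that the third paper extends is usually stated in its hazard form (the rejection mechanism added in the thesis is not reproduced here); in LaTeX notation:

    \frac{f(t)}{1 - F(t)} = p + q\,F(t), \qquad n(t) = \Big(p + \frac{q}{m}\,N(t)\Big)\big(m - N(t)\big)

where F(t) is the fraction of the market that has adopted by time t, f(t) = dF/dt, N(t) = m F(t) is the cumulative number of adopters, n(t) the adoption rate, p the coefficient of innovation, q the coefficient of imitation and m the market potential.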
APA, Harvard, Vancouver, ISO, and other styles
4

Chada, Daniel de Magalhães. "From cognitive science to management science: two computational contributions." Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/17053.

Full text
Abstract:
This work is composed of two contributions. One borrows from the work of Charles Kemp and Joshua Tenenbaum, concerning the discovery of structural form: their model is used to study the Business Week Rankings of U.S. Business Schools, and to investigate how other structural forms (structured visualizations) of the same information used to generate the rankings can bring insights into the space of business schools in the U.S., and into rankings in general. The other essay is purely theoretical in nature. It is a study to develop a model of human memory that does not exceed our (human) psychological short-term memory limitations. This study is based on Pentti Kanerva’s Sparse Distributed Memory, in which human memories are registered into a vast (but virtual) memory space, and this registration occurs in massively parallel and distributed fashion, in ideal neurons.
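Kanerva's Sparse Distributed Memory, on which the second essay builds, stores high-dimensional binary words in counters attached to a fixed set of random "hard locations"; a location takes part in a read or write whenever its address lies within a Hamming radius of the query address. The C sketch below is a toy illustration of that idea only; the word length, number of locations and radius are illustrative and do not reproduce the thesis's model.

    /* Toy sketch of Kanerva's Sparse Distributed Memory (illustrative sizes). */
    #include <stdlib.h>
    #include <string.h>

    #define N 256     /* word length in bits */
    #define M 1000    /* number of hard locations */
    #define R 112     /* activation radius (Hamming distance) */

    static unsigned char hard[M][N];   /* random hard-location addresses (0/1) */
    static int counters[M][N];         /* bipolar counters per location */

    static int hamming(const unsigned char *a, const unsigned char *b) {
        int d = 0;
        for (int i = 0; i < N; i++) d += (a[i] != b[i]);
        return d;
    }

    void sdm_init(void) {
        for (int m = 0; m < M; m++)
            for (int i = 0; i < N; i++)
                hard[m][i] = (unsigned char)(rand() & 1);
        memset(counters, 0, sizeof counters);
    }

    /* Write: every location within radius R of addr accumulates the data word. */
    void sdm_write(const unsigned char *addr, const unsigned char *data) {
        for (int m = 0; m < M; m++)
            if (hamming(hard[m], addr) <= R)
                for (int i = 0; i < N; i++)
                    counters[m][i] += data[i] ? 1 : -1;
    }

    /* Read: sum the counters of active locations and threshold at zero. */
    void sdm_read(const unsigned char *addr, unsigned char *out) {
        int sum[N] = {0};
        for (int m = 0; m < M; m++)
            if (hamming(hard[m], addr) <= R)
                for (int i = 0; i < N; i++)
                    sum[i] += counters[m][i];
        for (int i = 0; i < N; i++)
            out[i] = (sum[i] >= 0);
    }

Because every write touches many locations and every read averages over them, recall is robust to noisy or partial addresses, which is the property both theses exploit.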
APA, Harvard, Vancouver, ISO, and other styles
5

Anzola, David. "The philosophy of computational social science." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808102/.

Full text
Abstract:
The thesis is a collection of six stand-alone chapters aimed at setting the foundations for the philosophy of computational social science. Agent-based modelling has been used for social research since the nineties. While at the beginning it was simply conceived as a methodological alternative, recently, the notion of 'computational social science' has started to be used to denote a separate disciplinary field. There are important differences with mainstream social science and traditional social research. Yet, the literature in the field has not accounted for these differences. Computational social science is a strongly practice-oriented field, where theoretical and philosophical concerns have been pushed into the background. This thesis presents an initial analysis of the methodology, epistemology and ontology of computational social science, by examining the following topics: 1) verification and validation, 2) modelling and theorising, 3) mechanisms, 4) explanation, 5) agency, action and interaction, and 6) entities and process philosophy. Five general conclusions are drawn from the thesis. It is first argued that the wider ontological base in agent-based modelling allows for a new approach to traditional social dualisms, moving away from the methodological individualism that dominates computational social science. Second, the need to place a distinction between explanation and understanding and to make explanatory goals explicit is highlighted. Third, it is claimed that computational social science needs to pay attention to the social epistemology of the field, for this could provide important insights regarding values, ideologies and interests that underlie the practice of agent-based modelling. Fourth, it is suggested that a more robust theorisation regarding the experimental and model-based character of agent-based modelling should be developed. Finally, it is argued that the method can greatly contribute to the development of a processual account of social life.
APA, Harvard, Vancouver, ISO, and other styles
6

Cattinelli, I. "INVESTIGATIONS ON COGNITIVE COMPUTATION AND COMPUTATIONAL COGNITION." Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/155482.

Full text
Abstract:
This Thesis describes our work at the boundary between Computer Science and Cognitive (Neuro)Science. In particular, (1) we have worked on methodological improvements to clustering-based meta-analysis of neuroimaging data, a technique that allows one to collectively assess, in a quantitative way, activation peaks from several functional imaging studies, in order to extract the most robust results in the cognitive domain of interest. Hierarchical clustering is often used in this context, yet it is prone to the problem of non-uniqueness of the solution: a different permutation of the same input data might result in a different clustering result. In this Thesis, we propose a new version of hierarchical clustering that solves this problem. We also show the results of a meta-analysis, carried out using this algorithm, aimed at identifying specific cerebral circuits involved in single word reading. Moreover, (2) we describe preliminary work on a new connectionist model of single word reading, named the two-component model because it postulates a cascaded information flow from a more cognitive component, which computes a distributed internal representation for the input word, to an articulatory component, which translates this code into the corresponding sequence of phonemes. Output production starts when the internal code, which evolves in time, reaches a sufficient degree of clarity; this mechanism has been advanced as a possible explanation for behavioural effects consistently reported in the literature on reading, with a specific focus on the so-called serial effects. The model is discussed here in terms of its strengths and weaknesses. Finally, (3) we have turned to consider how features typical of human cognition can inform the design of improved artificial agents; here, we have focused on modelling concepts inspired by emotion theory. A model of emotional interaction between artificial agents, based on probabilistic finite state automata, is presented: in this model, agents have personalities and attitudes that can change through the course of interaction (e.g. by reinforcement learning) to achieve autonomous adaptation to the interaction partner. Markov chain properties are then applied to derive reliable predictions of the outcome of an interaction. Taken together, these works show how the interplay between Cognitive Science and Computer Science can be fruitful, both for advancing our knowledge of the human brain and for designing ever more intelligent artificial systems.
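The prediction step in contribution (3), in its simplest reading, amounts to propagating a distribution over interaction states through a transition matrix until it settles near the chain's stationary distribution. The sketch below is a generic illustration with made-up states and probabilities, not the emotional-interaction model from the thesis.

    /* Generic sketch: propagate a probability distribution over interaction
     * states through a transition matrix for many steps. Values are illustrative. */
    #include <stdio.h>

    #define S 3   /* number of interaction states (illustrative) */

    void step(const double P[S][S], const double in[S], double out[S]) {
        for (int j = 0; j < S; j++) {
            out[j] = 0.0;
            for (int i = 0; i < S; i++)
                out[j] += in[i] * P[i][j];     /* row-stochastic convention */
        }
    }

    int main(void) {
        /* Hypothetical transition probabilities between three states. */
        double P[S][S] = {{0.7, 0.2, 0.1},
                          {0.3, 0.4, 0.3},
                          {0.1, 0.3, 0.6}};
        double d[S] = {1.0, 0.0, 0.0}, next[S];

        for (int n = 0; n < 50; n++) {         /* long-run behaviour approximates the stationary distribution */
            step(P, d, next);
            for (int i = 0; i < S; i++) d[i] = next[i];
        }
        printf("approx. stationary distribution: %.3f %.3f %.3f\n", d[0], d[1], d[2]);
        return 0;
    }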
APA, Harvard, Vancouver, ISO, and other styles
7

Yu, Jingyuan. "Discovering Twitter through Computational Social Science Methods." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671609.

Full text
Abstract:
As Twitter has come to pervade people's daily lives, it has become one of the most important information exchange platforms and has quickly attracted scientists' attention. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analytic tools and algorithms have been designed in the last decade. The present doctoral thesis consists of three studies. First, given the 14 years of history (up to 2020) since the foundation of Twitter, an explosion of related scientific publications has been witnessed, but the current research landscape on this social media platform remained unknown. To fill this research gap, we carried out a bibliometric analysis of Twitter-related studies to analyse how Twitter research evolved over time and to provide a general, macro-level description of the Twitter research academic environment. Second, since many analytic software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To address this, we reviewed some of the integrated frameworks considered most relevant for social science research; given that junior social science researchers may face financial constraints, we narrowed our scope to free and low-cost software only. Third, given the current public health crisis, we noted that social media are among the most accessed information and news sources for the public. During a pandemic, how health issues and diseases are framed in news releases affects the public's understanding of the epidemic outbreak and their attitudes and behaviours. Hence, we used Twitter as an easily accessible news source to analyse the evolution of Spanish news frames during the COVID-19 pandemic. Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, complex network analysis and data visualization. This doctoral project has shown how people study and use Twitter at three different levels: the academic level, the practical level and the empirical level.
APA, Harvard, Vancouver, ISO, and other styles
8

Osorio, Guillén Jorge Mario. "Density Functional Theory in Computational Materials Science." Doctoral thesis, Uppsala University, Department of Physics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4496.

Full text
Abstract:

The present thesis is concerned with the application of first-principles self-consistent total-energy calculations within density functional theory to different topics in materials science.

Crystallographic phase transitions under high pressure have been studied for TiO2, FeI2, Fe3O4, Ti, the heavy alkali metals Cs and Rb, and C3N4. A new high-pressure polymorph of TiO2 has been discovered; this polymorph has an orthorhombic OI (Pbca) crystal structure, which is predicted theoretically for the pressure range 50 to 100 GPa. The crystal structures of Cs and Rb metals have also been studied under high compression. Our results confirm the recent high-pressure experimental observations of new complex crystal structures for the Cs-III and Rb-III phases. Thus, it is now certain that the famous isostructural phase transition in Cs is in fact a new crystallographic phase transition.

The elastic properties of the new superconductor MgB2 and of Al-doped MgB2 have been investigated. Values of all independent elastic constants (c11, c12, c13, c33, and c55), as well as the bulk moduli along the a and c directions (Ba and Bc respectively), are predicted. Our analysis suggests that the high anisotropy of the calculated elastic moduli is a strong indication that MgB2 should be rather brittle. Al doping decreases the elastic anisotropy of MgB2 along the a and c directions, but it does not considerably change the brittle behaviour of the material.

The three most relevant battery properties, namely average voltage, energy density and specific energy, as well as the electronic structure of the Li/LixMPO4 systems, where M is either Fe, Mn, or Co, have been calculated. The mixing of Fe and Mn in these materials is also examined. Our calculated values for these properties are in good agreement with recent experimental values. Further insight is gained from the electronic density of states of these materials, through which conclusions about the physical properties of the various phases are drawn.
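The average voltage referred to above is commonly estimated from first-principles total energies of the lithiated and delithiated phases and of metallic Li (the standard construction for intercalation electrodes; assuming that construction here), in LaTeX notation:

    \bar{V} \approx -\frac{E(\mathrm{Li}_{x_2}\mathrm{MPO_4}) - E(\mathrm{Li}_{x_1}\mathrm{MPO_4}) - (x_2 - x_1)\,E(\mathrm{Li})}{(x_2 - x_1)\,e}

where the E are DFT total energies per formula unit and e is the elementary charge; energy density and specific energy then follow from the voltage and the charge capacity per unit volume or mass.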

The electronic and magnetic properties of the dilute magnetic semiconductor Mn-doped ZnO have been calculated. We have found that, for an Mn concentration of 5.6%, the ferromagnetic configuration is energetically stable compared to the antiferromagnetic one. A half-metallic electronic structure is obtained within the GGA approximation, in which the Mn ions are in a divalent state, leading to a total magnetic moment of 5 μB per Mn atom.

APA, Harvard, Vancouver, ISO, and other styles
9

Osorio, Guillén Jorge Mario. "Density functional theory in computational materials science /." Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shimada, Yosuke. "Computational science of turbulent mixing and combustion." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/5552.

Full text
Abstract:
Implicit Large Eddy Simulation (ILES) with high-resolution, high-order computational modelling has been applied to flows with turbulent mixing and combustion. Because of their turbulent nature, the mixing of fuel and air and the subsequent combustion remain challenging for computational fluid dynamics. Recently, however, ILES, an advanced numerical approach among Large Eddy Simulation methods, has shown encouraging results in the prediction of turbulent flows. In this thesis the governing equations for single-phase compressible flow were solved with an ILES approach using a finite-volume Godunov-type method without explicit modelling of the subgrid scales. Limiters of up to ninth order were used to achieve high-order spatial accuracy. For non-reactive flows, the mean flow of a fuel burner was compared with experimental results and showed good agreement in regions of strong turbulence and recirculation. The one-dimensional kinetic energy spectrum was also examined, and an ideal k^{-5/3} decay of energy could be seen over a certain range, which widened with grid resolution and with the order of the limiter. The cut-off wavenumbers are larger than the estimated maximum wavenumbers on the grid; therefore, the numerical dissipation sufficiently accounted for the energy transfer between large and small eddies. The effect of density differences between fuel and air was investigated over a wide range of Atwood numbers. The mean flow showed that, when the fuel momentum fluxes are identical, the flow structure and the velocity fields are unchanged by the Atwood number except near the fuel jet. The results also show that the effects of the Atwood number on the flow structure can be described with a mixing parameter. For combustion flows, a non-filtered Arrhenius model was applied to the chemical source term, corresponding to the case of a chemical time scale large compared to the turbulent time scale. A methane-air shear flow simulation was performed, and the methane reaction rate showed non-zero values over the whole temperature range; small reaction rates were observed at low temperatures because of the lack of subgrid-scale modelling of the chemical source term. Simulations were also performed with a fast-chemistry approach, representing the case of a turbulent time scale large compared to the chemical time scale. The mean flow of the burner flames was compared with experimental data and fair agreement was observed.
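The k^{-5/3} decay mentioned above is the classical Kolmogorov inertial-range scaling of the kinetic energy spectrum, in LaTeX notation:

    E(k) = C_K\,\varepsilon^{2/3}\,k^{-5/3}

where \varepsilon is the turbulent dissipation rate and C_K \approx 1.5 is the Kolmogorov constant. Observing this slope over a wavenumber range that grows with grid resolution and limiter order is the usual check that an ILES scheme's numerical dissipation is playing the role of a physical subgrid model.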
APA, Harvard, Vancouver, ISO, and other styles
11

Prottsman, Christie Lee Lili. "Computational Thinking and Women in Computer Science." Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/11485.

Full text
Abstract:
x, 40 p. : col. ill.
Though the first computer programmers were female, women currently make up only a quarter of the computing industry. This lack of diversity jeopardizes technical innovation, creativity and profitability. As demand for talented computing professionals grows, both academia and industry are seeking ways to reach out to groups of individuals who are underrepresented in computer science, the largest of which is women. Women are most likely to succeed in computer science when they are introduced to computing concepts as children and are exposed over a long period of time. In this paper I show that computational thinking (the art of abstraction and automation) can be introduced earlier than has been demonstrated before. Building on ideas being developed for the state of California, I have created an entertaining and engaging educational software prototype that makes primary concepts accessible down to the third grade level.
Committee in charge: Michal Young, Chairperson; Joanna Goode, Member
APA, Harvard, Vancouver, ISO, and other styles
12

Scott-Murray, Amy. "Applications of 3D computational photography to marine science." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=233937.

Full text
Abstract:
This thesis represents the first dedicated study of the application of computational photography in marine science. It deals chiefly with the acquisition and use of photogrammetrically derived 3D organism models. The use of 3D models as 'virtual specimens' means that they may be securely archived and are accessible by anyone in any part of the world. Interactive 3D objects enhance learning by engaging the viewer in a participatory manner, and can help to clarify features that are unclear in photographs or diagrams. Measurements may be taken from these models for morphometric work, either manually or in an automated process. Digital 3D models permit the collection of novel metrics such as volume and surface area, which are very difficult to take by traditional means. These, and other metrics taken from 3D models, are a key step towards automating the species identification process. Where an organism changes over time, photogrammetry offers the ability to mathematically compare its shape before and after change. Sponge plasticity in response to stress and injury is quantified and visualised here for the first time. An array of networked underwater cameras was constructed for simultaneous capture of image sets. The philosophy of adapting simple, cheap consumer hardware is continued for the imaging and quantification of marine particulates. A restricted light field imaging system is described, together with techniques for image processing and data extraction. The techniques described are shown to be as effective as traditional instruments and methods for particulate measurement. The array cameras used a novel epoxy encapsulation technique which offers significant weight and cost advantages when compared to traditional metal pressure housings. It is also described here applied to standalone autonomous marine cameras. A fully synchronised autonomous in situ photogrammetry array is now possible. This will permit the non-invasive archiving and examination of organisms that may be damaged by recovery to the surface.
APA, Harvard, Vancouver, ISO, and other styles
13

Rinker, Robert E. "Reducing Computational Expense of Ray-Tracing Using Surface Oriented Pre-Computation." UNF Digital Commons, 1991. http://digitalcommons.unf.edu/etd/26.

Full text
Abstract:
The technique of rendering a scene using ray-tracing is known to produce excellent graphic quality, but it is also generally computationally expensive. Most of this computation involves determining intersections between objects in the scene and ray projections. Previous work to reduce this expense has been directed towards ray-oriented optimization techniques. This paper presents a different approach, one that bases pre-computation on the characteristics of the scene itself, making the results independent of the position of the observer. This means that the results of one pre-computation run can be applied to renderings of the scene from multiple viewpoints. Using this method on a scene of random triangular planar patches, impressive reductions in the number of intersection computations were realized, along with significant reductions in the time required to render the scene.
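Most of the cost discussed above comes from ray-object intersection tests; for triangular patches a standard test is the Möller-Trumbore algorithm, sketched below with a simple global counter so the number of tests can be measured. This is only a generic illustration of the per-ray expense that scene-based pre-computation tries to reduce; it is not the surface-oriented scheme proposed in the thesis.

    /* Standard Moller-Trumbore ray/triangle intersection test, with a counter
     * of tests performed. Illustrative only. */
    typedef struct { double x, y, z; } vec3;

    static vec3 v_sub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
    static vec3 v_cross(vec3 a, vec3 b) {
        return (vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    static double v_dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    unsigned long intersection_tests = 0;   /* how many times the test ran */

    /* Returns 1 and writes the ray parameter *t if origin + t*dir hits the
     * triangle (v0, v1, v2); returns 0 otherwise. */
    int ray_triangle(vec3 origin, vec3 dir, vec3 v0, vec3 v1, vec3 v2, double *t)
    {
        const double eps = 1e-9;
        intersection_tests++;

        vec3 e1 = v_sub(v1, v0);
        vec3 e2 = v_sub(v2, v0);
        vec3 h  = v_cross(dir, e2);
        double a = v_dot(e1, h);
        if (a > -eps && a < eps) return 0;      /* ray parallel to triangle plane */

        double f = 1.0 / a;
        vec3 s = v_sub(origin, v0);
        double u = f * v_dot(s, h);
        if (u < 0.0 || u > 1.0) return 0;

        vec3 q = v_cross(s, e1);
        double v = f * v_dot(dir, q);
        if (v < 0.0 || u + v > 1.0) return 0;

        *t = f * v_dot(e2, q);
        return *t > eps;                        /* hit only in front of the origin */
    }

Any pre-computation that lets the renderer skip calls to such a test for whole groups of patches reduces the dominant cost directly, independently of the observer's position.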
APA, Harvard, Vancouver, ISO, and other styles
14

Rousseau, Mathieu. "Computational modeling and analysis of chromatin structure." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116941.

Full text
Abstract:
The organization of DNA in the nucleus of a cell has long been known to play an important role in processes such as DNA replication and repair, and the regulation of gene expression. Recent advances in microarray and high-throughput sequencing technologies have enabled the creation of novel techniques for measuring certain aspects of the three-dimensional conformation of chromatin in vivo. The data generated by these methods contain both structural information and noise from the experimental procedures. Methods for modeling and analyzing these data to infer three-dimensional chromatin structure will constitute invaluable tools in the discovery of the mechanism by which chromatin structure is mediated. The overall objective of my thesis is to develop robust three-dimensional computational models of DNA and to analyze these data to gain biological insight into the role of chromatin structure on cellular processes. This thesis presents three main results, firstly, a novel computational modeling and analysis approach for the inference of three-dimensional structure from chromatin conformation capture carbon copy (5C) and Hi-C data. Our method named MCMC5C is based on Markov chain Monte Carlo sampling and can generate representative ensembles of three-dimensional models from noisy experimental data. Secondly, our investigation of the relationship between chromatin structure and gene expression during cellular differentiation shows that chromatin architecture is a dynamic structure which adopts an open conformation for actively transcribed genes and a condensed conformation for repressed genes. And thirdly, we developed a support vector machine classifier from 5C data and demonstrate a proof-of-concept that chromatin conformation signatures could be used to discriminate between human acute lymphoid and myeloid leukemias.
The organization of DNA within the nucleus of a cell is known to play an important role in processes such as DNA replication and repair and the regulation of gene expression. Recent technological advances in microarrays and high-throughput sequencing have enabled the creation of new techniques for measuring chromatin conformation in vivo. The data generated by these methods constitute an approximate measure of chromatin structure. Methods for modeling and analyzing these data in order to infer the three-dimensional structure of chromatin will constitute invaluable tools for discovering the mechanism governing chromatin structure. The overall objective of my thesis is to develop computational models analyzing the three-dimensional structure of DNA and to carry out data analysis in order to better understand the role of chromatin structure in various cellular processes. This thesis presents three main results. First, a new set of tools for the computational modeling and analysis of data from chromatin conformation capture carbon copy (5C) and Hi-C experiments. Our method, named MCMC5C, is based on Markov chain Monte Carlo sampling and can generate representative ensembles of three-dimensional models from noisy experimental data. Second, our investigation of the relationship between chromatin structure and gene expression during cellular differentiation shows that chromatin architecture is a dynamic structure which adopts an open conformation for actively transcribed genes and a condensed conformation for repressed genes. Third, we developed a support vector machine classifier from our 5C data and showed that chromatin conformation signatures could be used to discriminate between human lymphoid and myeloid leukemias.
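To give a flavour of the Markov chain Monte Carlo idea behind a method like MCMC5C, the sketch below converts contact frequencies into target distances, scores candidate 3D structures against them, and explores structures with a Metropolis step. The inverse frequency-to-distance mapping, Gaussian proposal moves and all parameter values are my own simplifying assumptions; this is not the published MCMC5C code.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_distances(contact_freq, alpha=1.0):
    # Assumed relationship: frequently contacting loci should end up close.
    return 1.0 / (contact_freq + 1e-6) ** alpha

def score(coords, wanted):
    # Negative squared error between model distances and target distances.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    return -np.sum((d[iu] - wanted[iu]) ** 2)

def metropolis_ensemble(contact_freq, n_models=10, n_steps=2000, step=0.05, temp=1.0):
    """Generate an ensemble of 3D models consistent with noisy contact data."""
    wanted = target_distances(contact_freq)
    n = contact_freq.shape[0]
    ensemble = []
    for _ in range(n_models):
        coords = rng.normal(size=(n, 3))
        s = score(coords, wanted)
        for _ in range(n_steps):
            proposal = coords + rng.normal(scale=step, size=coords.shape)
            s_new = score(proposal, wanted)
            if np.log(rng.random()) < (s_new - s) / temp:   # Metropolis acceptance
                coords, s = proposal, s_new
        ensemble.append(coords)
    return ensemble
```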
APA, Harvard, Vancouver, ISO, and other styles
15

Urgen, Burcu Aysen. "A Philosophical Analysis Of Computational Modeling In Cognitive Science." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608832/index.pdf.

Full text
Abstract:
This study analyses the methodology of computational cognitive modeling as one of the ways of conducting research in cognitive science. The aim of the study is to provide an understanding of the place of computational cognitive models in understanding human cognition. Considering the vast number of computational cognitive models which have been given just to account for some cognitive phenomenon by solely simulating some experimental study and fitting to empirical data, a practice-oriented approach is adopted in this study to understand the work of the modeler, and accordingly to discover the potential of computational cognitive models, apart from their being simulation tools. In pursuit of this aim, a framework with a practice-oriented approach from the philosophy of science literature, namely Morgan and Morrison's (1999) account, is employed on a case study. The framework emphasizes four key elements to understand the place of models in science: the construction of models, the function of models, the representation they provide, and the ways we learn from models. The case study, Q-Soar (Simon, Newell & Klahr, 1991), is a model built with the Soar cognitive architecture (Laird, Newell & Rosenbloom, 1987) which is representative of a class of computational cognitive models. Discussions are included on how to make generalizations for computational cognitive models out of this class, i.e. for models that are built with other modeling paradigms.
APA, Harvard, Vancouver, ISO, and other styles
16

Langham, A. E. "A self-organising approach to problems in computational science." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637847.

Full text
Abstract:
In this thesis, problems from computational science are tackled using a swarm-based approach. The application areas considered are related to the use of the Finite Element Method, which involves the simulation of complex fluid flow. The problem domain is first discretised into a set of geometrical elements, and the solution for quantities such as material strain or pressure is computed at the element nodes. This discretisation of the domain is known as mesh generation. The problem can then be divided for parallel execution. This is known as partitioning and must divide the task equally amongst processors such that the communication is minimised. Communication occurs when two connected nodes are assigned to different processors. Standard approaches to these problems use recursive methods in which the final solution is dependent on solutions found at higher levels. For example, partitioning into k sets is done using recursive bisection, and meshing is often performed by creating an initial coarse-grained mesh and inserting extra nodes into the existing elements to achieve the required density. The inherently parallel, distributed nature of the swarm-based approach allows us to simultaneously partition into k sets or create different parts of the mesh at the same time. Furthermore, because this approach is dependent only on the state of the local environment, it is ideal for problems of an adaptive nature which are difficult for standard approaches to tackle. Results show that this approach is superior in quality when compared to standard methods. However, it is not as efficient, and hence we outline various improvements to both speed up and improve the quality of the methods presented. A discussion of potential applications is also provided to indicate the general applicability of this approach to problems in computational science.
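Whatever procedure produces the k-way partition, its quality is judged by the two criteria named above: an even share of nodes per processor and as few cut edges (communication) as possible. The helper below evaluates both for a toy assignment; the data structures and the example are my own illustration, not the swarm algorithm itself.

```python
from collections import Counter

def partition_quality(edges, assignment, k):
    """Edge cut and load imbalance of a k-way node-to-processor assignment.

    edges      : iterable of (u, v) pairs of connected mesh nodes
    assignment : dict mapping node -> processor id in range(k)
    """
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    sizes = Counter(assignment.values())
    ideal = len(assignment) / k
    imbalance = max(sizes.get(p, 0) for p in range(k)) / ideal
    return cut, imbalance

# Toy example: a 4-node ring split across 2 processors.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assignment = {0: 0, 1: 0, 2: 1, 3: 1}
print(partition_quality(edges, assignment, k=2))   # -> (2, 1.0)
```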
APA, Harvard, Vancouver, ISO, and other styles
17

Castillo, Andrea R. (Andrea Redwing). "Assessing computational methods and science policy in systems biology." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/51655.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2009.
Includes bibliographical references (p. 109-112).
In this thesis, I discuss the development of systems biology and issues in the progression of this science discipline. Traditional molecular biology has been driven by reductionism with the belief that breaking down a biological system into the fundamental biomolecular components will elucidate such phenomena. We have reached limitations with this approach due to the complex and dynamical nature of life and our inability to intuit biological behavior from a modular perspective [37]. Mathematical modeling has been integral to current systems biology endeavors since detailed analysis would be invasive if performed on humans experimentally or in clinical trials [17]. The interspecies commonalities in systemic properties and molecular mechanisms suggest that certain behaviors transcend species differentiation and therefore lend themselves to generalization from simpler organisms to more complex organisms such as humans [7, 17]. Current methodologies in mathematical modeling and analysis have been diverse and numerous, with no standardization to progress the discipline in a collaborative manner. Without collaboration during this formative period, successful development and application of systems biology for societal welfare may be at risk. Furthermore, such collaboration has to be standardized in a fundamental approach to discover generic principles, in the manner of preceding long-standing science disciplines. This study effectively implements and analyzes a mathematical model of a three-protein biochemical network, the Synechococcus elongatus circadian clock.
(cont.) I use mass action theory expressed in Kronecker products to exploit the ability to apply numerical methods (including sensitivity analysis via boundary value formulation (BVP) and the trapezoidal integration rule) and experimental techniques (including partial reaction fitting and enzyme-driven activations) when mathematically modeling large-scale biochemical networks. Amidst other applicable methodologies, my approach is grounded in the law of mass action because it is based in experimental data and biomolecular mechanistic properties, yet provides predictive power in the complete delineation of the biological system dynamics for all future time points. The results of my research demonstrate the holistic approach that mass action methodologies have in determining emergent properties of biological systems. I further stress the necessity to enforce collaboration and standardization in future policymaking, with reconsiderations of current stakeholder incentives to redirect academia and industry focus from new molecular entities to interests in holistic understanding of the complexities and dynamics of life entities. Such redirection away from reductionism could further progress basic and applied scientific research to better our circumstances through new treatments and preventive measures for health, and development of new strains and disease control in agriculture and ecology [13].
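To make the mass-action formulation concrete, here is a minimal sketch that integrates the deterministic rate equations of a toy reversible binding/degradation network with SciPy. The species, rate constants and time span are invented for illustration; this is not the Synechococcus elongatus clock model analyzed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network:  A + B -> C (k1),  C -> A + B (k2),  C -> degraded (k3)
k1, k2, k3 = 0.5, 0.1, 0.05

def mass_action_rhs(t, y):
    A, B, C = y
    v1 = k1 * A * B   # law of mass action: rate proportional to reactant concentrations
    v2 = k2 * C
    v3 = k3 * C
    return [-v1 + v2, -v1 + v2, v1 - v2 - v3]

sol = solve_ivp(mass_action_rhs, (0.0, 100.0), [1.0, 1.0, 0.0], max_step=0.5)
print(sol.y[:, -1])   # concentrations of A, B, C at t = 100
```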
by Andrea R. Castillo.
S.M.in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
18

Rathgeber, Florian. "Productive and efficient computational science through domain-specific abstractions." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/18911.

Full text
Abstract:
In an ideal world, scientific applications are computationally efficient, maintainable and composable and allow scientists to work very productively. We argue that these goals are achievable for a specific application field by choosing suitable domain-specific abstractions that encapsulate domain knowledge with a high degree of expressiveness. This thesis demonstrates the design and composition of domain-specific abstractions by abstracting the stages a scientist goes through in formulating a problem of numerically solving a partial differential equation. Domain knowledge is used to transform this problem into a different, lower level representation and decompose it into parts which can be solved using existing tools. A system for the portable solution of partial differential equations using the finite element method on unstructured meshes is formulated, in which contributions from different scientific communities are composed to solve sophisticated problems. The concrete implementations of these domain-specific abstractions are Firedrake and PyOP2. Firedrake allows scientists to describe variational forms and discretisations for linear and non-linear finite element problems symbolically, in a notation very close to their mathematical models. PyOP2 abstracts the performance-portable parallel execution of local computations over the mesh on a range of hardware architectures, targeting multi-core CPUs, GPUs and accelerators. Thereby, a separation of concerns is achieved, in which Firedrake encapsulates domain knowledge about the finite element method separately from its efficient parallel execution in PyOP2, which in turn is completely agnostic to the higher abstraction layer. As a consequence of the composability of those abstractions, optimised implementations for different hardware architectures can be automatically generated without any changes to a single high-level source. Performance matches or exceeds what is realistically attainable by hand-written code. Firedrake and PyOP2 are combined to form a tool chain that is demonstrated to be competitive with or faster than available alternatives on a wide range of different finite element problems.
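As a flavour of the symbolic, near-mathematical notation described here, the following sketch assembles and solves a Poisson problem with Firedrake's documented high-level interface. It assumes a working Firedrake installation; it is a minimal example of the abstraction style rather than code from the thesis, and details may differ between Firedrake versions.

```python
from firedrake import *  # Firedrake re-exports the UFL symbolic language

mesh = UnitSquareMesh(32, 32)                # simple unstructured triangular mesh
V = FunctionSpace(mesh, "CG", 1)             # continuous piecewise-linear elements

u = TrialFunction(V)
v = TestFunction(V)
f = Constant(1.0)

a = dot(grad(u), grad(v)) * dx               # bilinear form, written like the maths
L = f * v * dx                               # linear form (source term)

bc = DirichletBC(V, 0.0, "on_boundary")      # homogeneous Dirichlet boundary condition
uh = Function(V, name="solution")
solve(a == L, uh, bcs=bc)                    # PyOP2 executes the generated kernels
```

The point of the layered design is that a specification like this stays unchanged while PyOP2 generates and runs the low-level parallel kernels for the chosen hardware underneath.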
APA, Harvard, Vancouver, ISO, and other styles
19

Ielina, Tetiana, Liudmyla Halavska, Daiva Mikucioniene, and Rimvidas Milasius. "Information models of knitwear in computational science and engineering." Thesis, Київський національний університет технологій та дизайну, 2021. https://er.knutd.edu.ua/handle/123456789/19105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kuhlman, Christopher J. "High Performance Computational Social Science Modeling of Networked Populations." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51175.

Full text
Abstract:
Dynamics of social processes in populations, such as the spread of emotions, influence, opinions, and mass movements (often referred to individually and collectively as contagions), are increasingly studied because of their economic, social, and political impacts. Moreover, multiple contagions may interact and hence studying their simultaneous evolution is important. Within the context of social media, large datasets involving many tens of millions of people are leading to new insights into human behavior, and these datasets continue to grow in size. Through social media, contagions can readily cross national boundaries, as evidenced by the 2011 Arab Spring. These and other observations guide our work. Our goal is to study contagion processes at scale with an approach that permits intricate descriptions of interactions among members of a population. Our contributions are a modeling environment to perform these computations and a set of approaches to predict contagion spread size and to block the spread of contagions. Since we represent populations as networks, we also provide insights into network structure effects, and present and analyze a new model of contagion dynamics that represents a person's behavior in repeatedly joining and withdrawing from collective action. We study variants of problems for different classes of social contagions, including those known as simple and complex contagions.
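The simple/complex contagion distinction mentioned at the end is often formalised with a neighbour-count threshold: a simple contagion can spread from a single active neighbour, while a complex contagion needs several. The sketch below runs such a threshold update on a small-world graph with networkx; the graph, seed set and threshold are illustrative assumptions, and the code is not the modeling environment built in the dissertation.

```python
import networkx as nx

def threshold_contagion(graph, seeds, theta=2, rounds=50):
    """Complex-contagion dynamics: a node activates once at least
    theta of its neighbours are active, and never deactivates."""
    active = set(seeds)
    for _ in range(rounds):
        newly = {n for n in graph.nodes
                 if n not in active
                 and sum(1 for m in graph.neighbors(n) if m in active) >= theta}
        if not newly:
            break
        active |= newly
    return active

g = nx.watts_strogatz_graph(1000, k=6, p=0.05, seed=1)
spread = threshold_contagion(g, seeds=range(10), theta=2)
print(len(spread))   # final spread size for this seed set
```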
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Cedeno, Vanessa Ines. "Pipelines for Computational Social Science Experiments and Model Building." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91445.

Full text
Abstract:
There has been significant growth in online social science experiments in order to understand behavior at-scale, with finer-grained data collection. Considerable work is required to perform data analytics for custom experiments. In this dissertation, we design and build composable and extensible automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has been done in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group or CI is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making. Because of this, there is interest in modeling situations that promote the creation of CI in order to learn more from the process and to predict human behavior in real life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. With all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. Also, abduction is an inference approach that uses data and observations to identify plausible (and preferably, best) explanations for phenomena. Abduction has broad application in robotics, genetics, automated systems, and image understanding, but these applications have largely been devoid of human behavior. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. In a group anagrams web-based networked game setting, we formalize an abductive loop, implement it computationally, and exercise it; we build and evaluate three agent-based models (ABMs) through a set of composable and extensible pipelines; we also analyze experimental data and develop mechanistic and data-driven models of human reasoning to predict detailed game player action. The agreement between model predictions and experimental data indicates that our models can explain behavior and provide novel experimental insights into CI.
Doctor of Philosophy
To understand individual and collective behavior, there has been significant interest in using online systems to carry out social science experiments. Considerable work is required for analyzing the data and to uncover interesting insights. In this dissertation, we design and build automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has been done in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group or CI is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making, so there is interest in modeling situations that promote the creation of CI to learn more from the process and to predict human behavior in real life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. With all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. In addition, to identify best explanations for phenomena we use abduction. Abduction is an inference approach that uses data and observations. It has broad application in robotics, genetics, automated systems, and image understanding, but these applications have largely been devoid of human behavior. In a group anagrams web-based networked game setting, we do the following. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. We build and evaluate three agent-based models (ABMs). We analyze experimental data and develop models of human reasoning to predict detailed game player action. We claim our models can explain behavior and provide novel experimental insights into CI, because there is agreement between the model predictions and the experimental data.
APA, Harvard, Vancouver, ISO, and other styles
22

Gouws, Lindsey Ann. "The role of computational thinking in introductory computer science." Thesis, Rhodes University, 2014. http://hdl.handle.net/10962/d1011152.

Full text
Abstract:
Computational thinking (CT) is gaining recognition as an important skill for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course, and there is little consensus on what exactly CT entails and how to teach and evaluate it. This research addresses the lack of resources for integrating CT into the introductory computer science curriculum. The question that we aim to answer is whether CT can be evaluated in a meaningful way. A CT framework that outlines the skills and techniques comprising CT and describes the nature of student engagement was developed; this is used as the basis for this research. An assessment (CT test) was then created to gauge the ability of incoming students, and a CT-specific computer game was developed based on the analysis of an existing game. A set of problem solving strategies and practice activities were then recommended based on criteria defined in the framework. The results revealed that the CT abilities of first year university students are relatively poor, but that the students' scores for the CT test could be used as a predictor for their future success in computer science courses. The framework developed for this research proved successful when applied to the test, computer game evaluation, and classification of strategies and activities. Through this research, we established that CT is a skill that first year computer science students are lacking, and that using CT exercises alongside traditional programming instruction can improve students' learning experiences.
APA, Harvard, Vancouver, ISO, and other styles
23

Sidiropoulos, Anastasios. "Computational metric embeddings." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44712.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 141-145).
We study the problem of computing a low-distortion embedding between two metric spaces. More precisely, given an input metric space M we are interested in computing in polynomial time an embedding into a host space M' with minimum multiplicative distortion. This problem arises naturally in many applications, including geometric optimization, visualization, multi-dimensional scaling, network spanners, and the computation of phylogenetic trees. We focus on the case where the host space is either a Euclidean space of constant dimension, such as the line and the plane, or a graph metric of simple topological structure, such as a tree. For Euclidean spaces, we present the following upper bounds. We give an approximation algorithm that, given a metric space that embeds into R^1 with distortion c, computes an embedding with distortion c^O(1) Δ^(3/4) (Δ denotes the ratio of the maximum over the minimum distance). For higher-dimensional spaces, we obtain an algorithm which, for any fixed d > 2, given an ultrametric that embeds into R^d with distortion c, computes an embedding with distortion c^O(1). We also present an algorithm achieving distortion c log^O(1) Δ for the same problem. We complement the above upper bounds by proving hardness of computing optimal, or near-optimal embeddings. When the input space is an ultrametric, we show that it is NP-hard to compute an optimal embedding into R^2 under the ... norm. Moreover, we prove that for any fixed d > 2, it is NP-hard to approximate the minimum distortion embedding of an n-point metric space into R^d within a factor of Ω(n^(1/(17d))). Finally, we consider the problem of embedding into tree metrics. We give an O(1)-approximation algorithm for the case where the input is the shortest-path metric of an unweighted graph.
(cont.) For general metric spaces, we present an algorithm which, given an n-point metric that embeds into a tree with distortion c, computes an embedding with distortion (c log n)^O(...). By composing this algorithm with an algorithm for embedding trees into R^1, we obtain an improved algorithm for embedding general metric spaces into R^1.
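For orientation, the multiplicative distortion of an embedding f from a metric d into a host metric d' is the product of its worst-case expansion, max d'(f(x), f(y)) / d(x, y), and its worst-case contraction, max d(x, y) / d'(f(x), f(y)). The helper below evaluates this quantity for a toy three-point example; the names and the example are mine and purely illustrative.

```python
import itertools

def distortion(points, d_source, d_host):
    """Multiplicative distortion of a map between two finite metric spaces."""
    expansion = contraction = 0.0
    for x, y in itertools.combinations(points, 2):
        ratio = d_host(x, y) / d_source(x, y)
        expansion = max(expansion, ratio)
        contraction = max(contraction, 1.0 / ratio)
    return expansion * contraction

# Toy example: a three-vertex path metric mapped onto the real line.
coords = {"a": 0.0, "b": 1.0, "c": 1.5}
path = {"ab": 1.0, "bc": 1.0, "ac": 2.0}
d_graph = lambda x, y: path[min(x, y) + max(x, y)]
d_line = lambda x, y: abs(coords[x] - coords[y])
print(distortion(["a", "b", "c"], d_graph, d_line))   # -> 2.0
```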
by Anastasios Sidiropoulos.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
24

Levy, Abitbol Jacobo. "Computational detection of socioeconomic inequalities." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN001.

Full text
Abstract:
We are living in a remarkable period: for the first time, we are aware of the issues of our time, we produce enough data to provide a complete description of them, and we have reasonably optimal algorithms to process them. At the center of this crossroads, a new discipline, computational social science, deeply infused with advances in artificial intelligence and algorithmics, has emerged as a sphere of knowledge in its own right. This thesis is part of that momentum and seeks to provide elements for understanding the problem of socioeconomic inequalities by processing massive data, notably from online social networks and from observation of the urban environment. The main contributions of this series of works are thus centered around 1) the study of the spatial, temporal, linguistic and network dependencies linked to inequalities and 2) the inference of socioeconomic status from these multimodal signals. The context in which this series of works is set is twofold. On the one hand, we seek to provide researchers and decision-makers with tools that will allow them to obtain a finer, more detailed picture of the distribution of wealth in the country, so that they can adopt strategies aimed at resolving two challenges of our time: poverty and socioeconomic inequalities. On the other hand, we ourselves seek to provide elements of answers to questions posed by the social sciences that have proven too intractable to be addressed without the necessary volume and quality of data.
Machine and deep learning advances have come to permeate modern sciences and have unlocked the study of numerous issues many deemed intractable. Social sciences have accordingly not been exempted from benefiting from these advances, as neural language models have been extensively used to analyze social and linguistic phenomena such as the quantification of semantic change or the detection of the ideological bias of news articles, while convolutional neural networks have been used in urban settings to explore the dynamics of urban change by determining which characteristics predict neighborhood improvement or by examining how the perception of safety affects the liveliness of neighborhoods. In light of this fact, this dissertation argues that one particular social phenomenon, socioeconomic inequalities, can be gainfully studied by means of the above. We set out to collect and combine large datasets enabling 1) the study of the spatial, temporal, linguistic and network dependencies of socioeconomic inequalities and 2) the inference of socioeconomic status (SES) from these multimodal signals. This task is one worthy of study as previous research endeavors have fallen short of providing a complete picture of how these multiple factors are intertwined with individual socioeconomic status and how the former can fuel better inference methodologies for the latter. The study of these questions is important, as much is still unclear about the root causes of SES inequalities, and the deployment of ML/DL solutions to pinpoint them is still very much in its infancy.
APA, Harvard, Vancouver, ISO, and other styles
25

Varde, Aparna S. "Graphical data mining for computational estimation in materials science applications." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-081506-152633/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Piccoli, Prisca Primavera <1991&gt. "Didactics of Computational Thinking Addressed to Non-Computer Science Learners." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10595.

Full text
Abstract:
In the course of this work, we will firstly introduce the concept of computational thinking and provide an overview of the available literature on the topic. Subsequently, we will illustrate some of the most significant initiatives that are being enacted in the United States and in Europe in favor of a didactics of computing science addressed to school-aged children, and will analyze the most popular educational tools used to introduce the students of primary and secondary schools to the basics of computing. As a conclusion, we will provide an overview of the current debate over the role of computational thinking inside primary and secondary education, by analyzing some of the most recent didactic proposals, and suggesting some possible directions for future inquiries.
APA, Harvard, Vancouver, ISO, and other styles
27

Kanade, Varun. "Computational Questions in Evolution." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10556.

Full text
Abstract:
Darwin's theory (1859) proposes that evolution progresses by the survival of those individuals in the population that have greater fitness. Modern understanding of Darwinian evolution is that variation in phenotype, or functional behavior, is caused by variation in genotype, or the DNA sequence. However, a quantitative understanding of what functional behaviors may emerge through Darwinian mechanisms, within reasonable computational and information-theoretic resources, has not been established. Valiant (2006) proposed a computational model to address the question of the complexity of functions that may be evolved through Darwinian mechanisms. In Valiant's model, the goal is to evolve a representation that computes a function that is close to some ideal function under the target distribution. While this evolution model can be simulated in the statistical query learning framework of Kearns (1993), Feldman has shown that under some constraints the reverse also holds, in the sense that learning algorithms in this framework may be cast as evolutionary mechanisms in Valiant's model. In this thesis, we present three results in Valiant's computational model of evolution. The first shows that evolutionary mechanisms in this model can be made robust to gradual drift in the ideal function, and that such drift resistance is universal, in the sense that, if some concept class is evolvable when the ideal function is stationary, it is also evolvable in the setting when the ideal function drifts at some low rate. The second result shows that under certain definitions of recombination and for certain selection mechanisms, evolution with recombination may be substantially faster. We show that in many cases polylogarithmic, rather than polynomial, generations are sufficient to evolve a concept class, whenever a suitable parallel learning algorithm exists. The third result shows that computation, and not just information, is a limiting resource for evolution. We show that when computational resources in Valiant's model are allowed to be unbounded, while requiring that the information-theoretic resources be polynomially bounded, more concept classes are evolvable. This result is based on widely believed conjectures from complexity theory.
Engineering and Applied Sciences
APA, Harvard, Vancouver, ISO, and other styles
28

Raina, Priyanka. "Architectures for computational photography." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82393.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 93-94).
Computational photography refers to a wide range of image capture and processing techniques that extend the capabilities of digital photography and allow users to take photographs that could not have been taken by a traditional camera. Since its inception less than a decade ago, the field today encompasses a wide range of techniques including high dynamic range (HDR) imaging, low light enhancement, panorama stitching, image deblurring and light field photography. These techniques have so far been software based, which leads to high energy consumption and typically no support for real-time processing. This work focuses on hardware architectures for two algorithms - (a) bilateral filtering which is commonly used in computational photography applications such as HDR imaging, low light enhancement and glare reduction and (b) image deblurring. In the first part of this work, digital circuits for three components of a multi-application bilateral filtering processor are implemented - the grid interpolation block, the HDR image creation and contrast adjustment blocks, and the shadow correction block. An on-chip implementation of the complete processor, designed with other team members, performs HDR imaging, low light enhancement and glare reduction. The 40 nm CMOS test chip operates from 98 MHz at 0.9 V to 25 MHz at 0.9 V and processes 13 megapixels/s while consuming 17.8 mW at 98 MHz and 0.9 V, achieving significant energy reduction compared to previous CPU/GPU implementations. In the second part of this work, a complete system architecture for blind image deblurring is proposed. Digital circuits for the component modules are implemented using Bluespec SystemVerilog and verified to be bit accurate with a reference software implementation. Techniques to reduce power and area cost are investigated and synthesis results in 40 nm CMOS technology are presented.
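As a software reference for what such an accelerator computes, the sketch below is a brute-force grayscale bilateral filter: each pixel becomes a weighted average of its neighbourhood, with weights that decay with both spatial distance and intensity difference so that edges are preserved. Parameters are illustrative, and the naive loop stands in for the much faster grid-based formulation implemented in the hardware.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a grayscale image with values in [0, 1]."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))      # spatial kernel
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))  # range kernel
            weights = spatial * rangew
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```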
by Priyanka Raina.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
29

Kirmani, Ghulam A. (Ghulam Ahmed). "Computational time-resolved imaging." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97803.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 151-159).
Classical photography uses steady-state illumination and light sensing with focusing optics to capture scene reflectivity as images; temporal variations of the light field are not exploited. This thesis explores the use of time-varying optical illumination and time-resolved sensing along with signal modeling and computational reconstruction. Its purpose is to create new imaging modalities, and to demonstrate high-quality imaging in cases in which traditional techniques fail to even form degraded imagery. The principal contributions in this thesis are the derivation of physically-accurate signal models for the scene's response to timevarying illumination and the photodetection statistics of the sensor, and the combining of these models with computationally tractable signal recovery algorithms leading to image formation. In active optical imaging setups, we use computational time-resolved imaging to experimentally demonstrate: non line-of-sight imaging or looking around corners, in which only diffusely scattered light was used to image a hidden plane which was completely occluded from both the light source and the sensor; single-pixel 3D imaging or compressive depth acquisition, in which accurate depth maps were obtained using a single, non-spatially resolving bucket detector in combination with a spatial light modulator; and high-photon efficiency imaging including first-photon imaging, in which high-quality 3D and reflectivity images were formed using only the first detected photon at each sensor pixel despite the presence of high levels of background light.
by Ghulam A. Kirmani.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Hanson-Smith, Victor 1981. "Error and Uncertainty in Computational Phylogenetics." Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/12151.

Full text
Abstract:
xi, 119 p. : ill. (some col.)
The evolutionary history of protein families can be difficult to study because necessary ancestral molecules are often unavailable for direct observation. As an alternative, the field of computational phylogenetics has developed statistical methods to infer the evolutionary relationships among extant molecular sequences and their ancestral sequences. Typically, the methods of computational phylogenetic inference and ancestral sequence reconstruction are combined with other non-computational techniques in a larger analysis pipeline to study the inferred forms and functions of ancient molecules. Two big problems surrounding this analysis pipeline are computational error and statistical uncertainty. In this dissertation, I use simulations and analysis of empirical systems to show that phylogenetic error can be reduced by using an alternative search heuristic. I then use similar methods to reveal the relationship between phylogenetic uncertainty and the accuracy of ancestral sequence reconstruction. Finally, I provide a case-study of a molecular machine in yeast, to demonstrate all stages of the analysis pipeline. This dissertation includes previously published co-authored material.
Committee in charge: John Conery, Chair; Daniel Lowd, Member; Sara Douglas, Member; Joseph W. Thornton, Outside Member
APA, Harvard, Vancouver, ISO, and other styles
31

James, Roshan P. "The computational content of isomorphisms." Thesis, Indiana University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587675.

Full text
Abstract:

Abstract models of computation, such as Turing machines, λ-calculus and logic gates, allow us to express computation without being concerned about the underlying technology that realizes them in the physical world. These models embrace a classical worldview wherein computation is essentially irreversible. From the perspective of quantum physics however, the physical world is one where every fundamental interaction is essentially reversible and various quantities such as energy, mass, angular momentum are conserved. Thus the irreversible abstractions we choose as the basis of our most primitive models of computing are at odds with the underlying reversible physical reality and hence our thesis: By embracing irreversible physical primitives, models of computation have also implicitly included a class of computational effects which we call information effects.

To make this precise, we develop an information preserving model of computation (in the sense of Shannon entropy) wherein the process of computing does not gain or lose information. We then express information effects in this model using an arrow meta-language, in much the same way that we model computational effects in the λ-calculus using a monadic metalanguage. A consequence of this careful treatment of information, is that we effectively capture the gap between reversible computation and irreversible computation using a type-and-effect system.

The treatment of information effects has a parallel with open and closed systems in physics. Closed physical systems conserve mass and energy and are the basic unit of study in physics. Open systems interact with their environment, possibly exchanging matter or energy. These interactions may be thought of as effects that modify the conservation properties of the system. Computations with information effects are much like open systems and they can be converted into pure computations by making explicit the surrounding information environment that they interact with.

Finally, we show how conventional irreversible computation such as the λ-calculus can be embedded into this model, such that the embedding makes the implicit information effects of the λ-calculus explicit.

APA, Harvard, Vancouver, ISO, and other styles
32

Cope, James S. "Computational methods for the classification of plants." Thesis, Kingston University, 2014. http://eprints.kingston.ac.uk/28759/.

Full text
Abstract:
Plants are of fundamental importance to life on Earth. The shapes of leaves, petals and whole plants are of great significance to plant science, as they can help to distinguish between different species, to measure plant health, and even to model climate change. The current availability of botanists is increasingly failing to meet the growing demands for their expertise. These demands range from amateurs desiring help in identifying plants, to agricultural applications such as automated weeding systems, and to the cataloguing of biodiversity for conservational purposes. This thesis aims to help fill this gap, by exploring computational techniques for the automated analysis and classification of plants from images of their leaves. The main objective is to provide novel techniques and the required framework for a robust, automated plant identification system. This involves firstly the accurate extraction of different features of the leaf and the generation of appropriate descriptors. One of the biggest challenges involved in working with plants is the high amounts of variation that may occur within a species, and high similarity that exists between some species. Algorithms are introduced which aim to allow accurate classification in spite of this. With many features of the leaf being available for use in classification, a suitable framework is required for combining them. An efficient method is proposed which selects on a leaf-by-leaf basis which of the leaf features are most likely to be of use. This decreases computational costs whilst increasing accuracy, by ignoring unsuitable features. Finally a study is carried out looking at how professional botanists view leaf images. Much can be learnt from the behaviour of experts which can be applied to the task at hand. Eye-tracking technology is used to establish the difference between how botanists and non-botanists view leaf images, and preliminary work is performed towards utilizing this information in an automated system.
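As an example of the kind of leaf features such a system might extract before classification, the sketch below computes a few generic contour descriptors (circularity, aspect ratio, solidity) from a segmented leaf mask with OpenCV. These are standard illustrative descriptors, not the specific features or combination framework proposed in the thesis.

```python
import cv2
import numpy as np

def leaf_shape_descriptors(binary_mask):
    """Simple contour descriptors for a segmented leaf (white leaf on black)."""
    found = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = found[0] if len(found) == 2 else found[1]   # OpenCV 3/4 compatibility
    c = max(contours, key=cv2.contourArea)                  # largest blob = the leaf
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    x, y, w, h = cv2.boundingRect(c)
    hull_area = cv2.contourArea(cv2.convexHull(c))
    return {
        "circularity": 4 * np.pi * area / perimeter**2,     # 1.0 for a perfect disc
        "aspect_ratio": max(w, h) / max(1, min(w, h)),
        "solidity": area / hull_area,                       # how convex the outline is
    }
```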
APA, Harvard, Vancouver, ISO, and other styles
33

Blount, Steven Michael 1958. "Computational methods for stochastic epidemics." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288714.

Full text
Abstract:
Compartmental models constructed for stochastic epidemics are usually difficult to analyze mathematically or computationally. Researchers have mostly resorted to deterministic approximations or simulation to investigate these models. This dissertation describes three original computational methods for analyzing compartmental models of stochastic epidemics. The first method is the Markov Process Method which computes the probability law for the epidemic by solving the Chapman-Kolmogorov ordinary differential equations as an initial value problem using standard numerical analysis techniques. It is limited to models with small populations and few compartments and requires sophisticated numerical analysis tools and relatively extensive computer resources. The second method is the Probability Vector Method which can estimate the first few moments of a discrete time epidemic model over a limited time period (i.e. if Y(t) is the number of individuals in a given compartment then this method can estimate E[Y(t)^r] for small positive integers r). Size restrictions limit the maximum order of the moment that can be computed. For compartmental models with a constant, homogeneous population, this method requires modest computational resources to estimate the first two moments of Y(t). The third method is the Linear Extrapolation Method, which computes the moments of a compartmental model with a large population by extrapolating from the given moments of the same model with smaller populations. This method is limited to models that have some alternate way of calculating the moments for small populations. These moments should be computed exactly from probabilistic principles. When this is not practical, any method that can produce accurate estimates of these moments for small populations can be used. Two compartmental epidemic models are analyzed using these three methods. First, the simple susceptible/infective epidemic is used to illustrate each method and serves as a benchmark for accuracy and performance. These computations show that each algorithm is capable of producing acceptably accurate solutions (at least for the specific parameters that were used). Next, an HIV/AIDS model is analyzed and the numerical results are presented and compared with the deterministic and simulation solutions. Only the probability vector method could compete with simulation on the larger (i.e. more compartments) HIV/AIDS model.
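A minimal version of the Markov Process Method for the simple susceptible/infective benchmark looks like the sketch below: the state is the number of infectives Y(t) in a closed population, and the Chapman-Kolmogorov (forward) equations for P[Y(t) = i] are integrated as an ordinary ODE system. Population size, contact rate and time horizon are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, beta = 50, 0.3                      # population size and contact rate (illustrative)

def infection_rate(i):
    # Transition rate from i to i+1 infectives in a closed SI population.
    return beta * i * (N - i) / N

def forward_equations(t, p):
    # Chapman-Kolmogorov forward equations for p[i] = P[Y(t) = i], i = 0..N.
    dp = np.zeros_like(p)
    for i in range(N + 1):
        dp[i] -= infection_rate(i) * p[i]
        if i > 0:
            dp[i] += infection_rate(i - 1) * p[i - 1]
    return dp

p0 = np.zeros(N + 1)
p0[1] = 1.0                            # epidemic starts with a single infective
sol = solve_ivp(forward_equations, (0.0, 60.0), p0, max_step=0.1)
p_final = sol.y[:, -1]
print("E[Y(60)] =", np.dot(np.arange(N + 1), p_final))
```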
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Zheng Ph D. Massachusetts Institute of Technology. "Computational Raman imaging and thermography." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130673.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 185-201).
Thermography tools that perform accurate temperature measurements with nanoscale resolution are highly desired in our modern society. Although researchers have put extensive effort into developing nanoscale thermography for more than three decades and significant achievements have been made in this field, the mainstream thermography tools have not fully met the requirements of industry and academia. In this thesis, we present our home-built Raman microscope for Raman imaging and thermography. The performance of this instrument is enhanced by computational approaches. The body of the thesis will be divided into three parts. First, the instrumentation of our setup is introduced. Second, we present the results of Raman imaging with computational super-resolution techniques. Third, this instrument is used as a thermography tool to map the temperature profile of a nanowire device. These results provide insights into combining advanced instrumentation and computational methods in Raman imaging and Raman thermography for applications in modern nano-technology.
by Zheng Li.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Materials Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
35

Lavallée-Adam, Mathieu. "Protein-protein interaction confidence assessment and network clustering computational analysis." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121237.

Full text
Abstract:
Protein-protein interactions represent a crucial source of information for the understanding of the biological mechanisms of the cell. In order to be useful, high quality protein-protein interactions must be computationally extracted from the noisy datasets produced by high-throughput experiments such as affinity purification. Even when filtered protein-protein interaction datasets are obtained, the task of analyzing the network formed by these numerous interactions remains tremendous. Protein-protein interaction networks are large, intricate, and require computational approaches to provide meaningful biological insights. The overall objective of this thesis is to explore algorithms assessing the quality of protein-protein interactions and facilitating the analysis of their networks. This work is divided into four results: 1) a novel Bayesian approach to model contaminants originating from affinity purifications, 2) a new method to identify and evaluate the quality of protein-protein interactions independently in different cell compartments, 3) an algorithm computing the statistical significance of clusterings of proteins sharing the same functional annotation in protein-protein interaction networks, and 4) a computational tool performing sequence motif discovery in 5' untranslated regions as well as evaluating the clustering of such motifs in protein-protein interaction networks.
Protein-protein interactions represent a source of information essential to understanding the various biological mechanisms of the cell. However, the high-throughput experiments that identify these interactions, such as affinity purification, produce a very large number of false positives. Computational methods are therefore required to extract high-quality protein-protein interactions from these datasets. Yet even when filtered, these datasets form networks that are very complex to analyze. These protein-protein interaction networks are large and intricate, and they require sophisticated computational approaches to extract information of real biological significance. The objective of this thesis is to explore algorithms evaluating the quality of protein-protein interactions and to facilitate the analysis of the networks they form. This research work is divided into four main results: 1) a new Bayesian approach for modeling contaminants originating from affinity purification, 2) a new method for discovering and evaluating the quality of protein-protein interactions within different compartments of the cell, 3) an algorithm detecting statistically significant clusterings of proteins sharing the same functional annotation in a protein-protein interaction network, and 4) a computational tool for discovering sequence motifs in 5' untranslated regions while evaluating the clustering of these motifs in protein-protein interaction networks.
APA, Harvard, Vancouver, ISO, and other styles
36

Gupta, Gaurav. "Computational material science of carboncarbon : composites based on carbonaceous mesophase matrices." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83865.

Full text
Abstract:
Carbon/Carbon composites belong to the generic class of fiber reinforced composites and are widely used because of their high strength as well as chemical and thermal stability. Like other fiber reinforced composites, they consist of the fibers, which act as reinforcements, and the matrix, which acts as a glue that binds the fibers. C/C composites from pitch-based precursors are unique since the matrix in this case is a liquid crystal or mesophase. This makes them remarkable in the sense that, unlike C/C composites from other precursors such as PAN, rayon, etc., they have an extremely high degree of molecular orientation and exhibit texture. An important characteristic of their textures is the presence of topological defects. It is hence of great interest to understand and elucidate the principles that govern the formation of textures so as to optimize their properties. In this work we present a computational study of structure formation in carbon-carbon composites that describes the emergence of topological defects due to the distortions in the oriented matrix created by the presence of fiber-matrix interaction. Dynamical and structural features of texture formation were characterized using gradient elasticity and defect physics.
APA, Harvard, Vancouver, ISO, and other styles
37

Qiu, Kanjun. "Developing a computational textiles curriculum to increase diversity in computer science." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85222.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 97-98).
The current culture surrounding computer science is quite narrow, resulting in a participating population that lacks diversity in both gender and interests. The field of computational textiles has shown promise as a domain for diversifying computer science culture by drawing a population with broader, less traditional interests and backgrounds into creating technology; however, little effort has been made to build resources and communities around computational textiles. This thesis presents a curriculum that teaches computer science and computer programming through a series of lessons for building and programming computational textile projects, along with systematic considerations that support the real-world implementation of such a curriculum. In 2011-12, we conducted three workshops to evaluate the impact of our curriculum methods and projects on students' technological self-efficacy. As a result of data obtained from these workshops, we conclude that working with our curriculum's structured computational textile projects both draws a gender-diverse population, and increases students' comfort with, enjoyment of, and interest in working with electronics and programming. Accordingly, we are transforming the curriculum into a published book in order to provide educational resources to support the development of a computer science culture around computational textiles.
by Kanjun Qiu.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
38

Alabdulkareem, Ahmad. "Analyzing cities' complex socioeconomic networks using computational science and machine learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119325.

Full text
Abstract:
Thesis: Ph. D. in Computational Science & Engineering, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 133-141).
By 2050, it is expected that 66% of the world population will be living in cities. The urban growth explosion in recent decades has raised many questions concerning the evolutionary advantages of urbanism, with several theories delving into the multitude of benefits of such efficient systems. This thesis focuses on one important aspect of cities: their social dimension, and in particular, the social aspect of their complex socioeconomic fabric (e.g. labor markets and social networks). Economic inequality is one of the greatest challenges facing society today, in tandem with the imminent impact of automation, which can exacerbate this issue. The social dimension plays a significant role in both, with many hypothesizing that social skills will be the last bastion of differentiation between humans and machines, and thus, jobs will become mostly dominated by social skills. Using data-driven tools from network science, machine learning, and computational science, the first question I aim to answer is the following: what role do social skills play in today's labor markets on both a micro and macro scale (e.g. individuals and cities)? Second, how could the effects of automation lead to various labor dynamics, and what role would social skills play in combating those effects? Specifically, what is social skills' relation to career mobility? Answering this would inform strategies to mitigate the negative effects of automation and off-shoring on employment. Third, given the importance of the social dimension in cities, what theoretical model can explain such results, and what are its consequences? Finally, given the vulnerabilities for invading individuals' privacy, as demonstrated in previous chapters, how does highlighting those results affect people's interest in privacy preservation, and what are some possible solutions to combat this issue?
by Ahmad Alabdulkareem.
Ph. D. in Computational Science & Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Ristad, Eric Sven. "Computational Structure of Human Language." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/7038.

Full text
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternate linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
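To make the shape of the claim concrete, the constructive complexity thesis amounts to a two-sided bound. The notation below is a paraphrase of the abstract (using SAT as the stock NP-complete problem for the reduction), not the thesis's own formalism.

```latex
% LANGUAGE stands for the decision problems underlying comprehension and production.
\[
\underbrace{\mathrm{SAT} \;\le_{p}\; \mathrm{LANGUAGE}}_{\text{NP-hardness (lower bound)}}
\qquad\text{and}\qquad
\underbrace{\mathrm{LANGUAGE} \in \mathrm{NP}}_{\text{upper bound}}
\;\;\Longrightarrow\;\;
\mathrm{LANGUAGE}\ \text{is NP-complete.}
\]
```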
APA, Harvard, Vancouver, ISO, and other styles
40

Chin, Toshio M. "Dynamic estimation in computational vision." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13072.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (leaves 213-220).
by Toshio Michael Chin.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
41

Baggett, David McAdams. "A system for computational phonology." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36535.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 127-129).
by David McAdams Baggett.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
42

Sither, Matthew C. (Matthew Christian). "Adaptive consolidation of computational perspectives." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37098.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 81).
This thesis describes the design and implementation of machine learning algorithms and real-time recommendations within EWall, a software system used for individual and collaborative information management. In the EWall workspace, users collect and arrange cards, which are compact visual abstractions of information. A significant problem that often arises when humans try to collect information is information overload. Information overload refers to the state of having too much information, and it causes difficulty in discovering relevant information. When affected by information overload, the user loses focus and spends more time filtering out irrelevant information. This thesis first presents a simple solution that uses a set of algorithms that prioritize information. Based on the information the user is working with, the algorithms search for relevant information in a database by analyzing spatial, temporal, and collaborative relationships. A second, more adaptive solution uses agents that observe user behavior and learn to apply the prioritization algorithms more effectively. Adaptive agents help to prevent information overload by removing the burden of searching and filtering from the user, and they hasten the process of discovering interesting and relevant information.
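As a rough illustration of the kind of prioritization the abstract describes, the sketch below scores a card by combining spatial, temporal, and collaborative cues. The card fields, weights, and decay constant are hypothetical choices for illustration, not EWall's actual algorithms.

```python
import math
import time

def card_priority(card, focus_card, collaborators,
                  w_spatial=0.4, w_temporal=0.3, w_social=0.3):
    """Toy relevance score for a workspace card, combining spatial,
    temporal, and collaborative relationships."""
    # Spatial: cards arranged near the card the user is working with score higher.
    dx = card["x"] - focus_card["x"]
    dy = card["y"] - focus_card["y"]
    spatial = 1.0 / (1.0 + math.hypot(dx, dy))

    # Temporal: recently touched cards score higher (exponential decay, 1-hour scale).
    age_seconds = time.time() - card["last_modified"]
    temporal = math.exp(-age_seconds / 3600.0)

    # Collaborative: cards also used by the user's collaborators score higher.
    social = len(card["users"] & collaborators) / max(1, len(collaborators))

    return w_spatial * spatial + w_temporal * temporal + w_social * social

# Example with hypothetical card records:
focus = {"x": 10, "y": 20, "last_modified": time.time(), "users": {"alice"}}
card = {"x": 14, "y": 22, "last_modified": time.time() - 600, "users": {"alice", "bob"}}
print(round(card_priority(card, focus, collaborators={"bob"}), 3))
```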
by Matthew C. Sither.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
43

Bylinskii, Zoya. "Computational understanding of image memorability." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97256.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 77-82).
Previous studies have shown that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. In this thesis we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Moreover, we demonstrate high consistency across participant populations and experiments. We show how these findings generalize to an applied visual modality: information visualizations. We separately find that intrinsic differences are already present shortly after encoding and remain apparent over time. Second, we consider two extrinsic factors: image context and observer behavior. We measure the effects of image context (the set of images from which the experimental sequence is sampled) on memorability. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can predict how changes in context change the memorability of natural images using automatically computed image features. Our results are presented on a large dataset of indoor and outdoor scene categories. We also measure the effects of observer behavior on memorability, on a trial-by-trial basis. Specifically, our proposed computational model can use an observer's eye movements on an image to predict whether or not the image will be later remembered. Apart from eye movements, we also show how two additional physiological measurements - pupil dilations and blink rates - can be predictive of image memorability, without the need for overt responses. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete model of image memorability than previously available.
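To convey the flavor of an information-theoretic distinctiveness score, the sketch below treats an image as more distinct the more surprising its feature vector is under the distribution of its context. This is a generic construction for illustration (diagonal Gaussian over arbitrary features), not the model developed in the thesis.

```python
import numpy as np

def distinctiveness(image_features, context_features, eps=1e-6):
    """Toy distinctiveness score: negative log-likelihood of an image's
    feature vector under a diagonal Gaussian fit to its context set."""
    mu = context_features.mean(axis=0)
    var = context_features.var(axis=0) + eps  # diagonal covariance, regularized

    # Per-dimension Gaussian negative log-likelihood, summed over features.
    nll = 0.5 * np.sum(np.log(2 * np.pi * var) + (image_features - mu) ** 2 / var)
    return nll  # higher = more surprising given the context = more distinct

# Example: one 128-d feature vector scored against a context of 500 images.
rng = np.random.default_rng(0)
context = rng.normal(size=(500, 128))
image = rng.normal(loc=2.0, size=128)  # deliberately off-context
print(distinctiveness(image, context))
```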
by Zoya Bylinskii.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
44

Herzog, Jonathan 1975. "Computational soundness of formal adversaries." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87334.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003.
Includes bibliographical references (p. 50-51).
by Jonathan Herzog.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
45

Banks, Eric 1976. "Computational approaches to gene finding." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Sealfon, Rachel (Rachel Sima). "Computational investigation of pathogen evolution." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99858.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-118).
Pathogen genomes, especially those of viruses, often change rapidly. Changes in pathogen genomes may have important functional implications, for example by altering adaptation to the host or conferring drug resistance. Accumulated genomic changes, many of which are functionally neutral, also serve as markers that can elucidate transmission dynamics or reveal how long a pathogen has been present in a given environment. Moreover, systematically probing portions of the pathogen genome that are changing more or less rapidly than expected can provide important clues about the function of these regions. In this thesis, I (1) examine changes in the Vibrio cholerae genome shortly after the introduction of the pathogen to Hispaniola to gain insight into genomic change and functional evolution during an epidemic. I then (2) use changes in the Lassa genome to estimate the time that the pathogen has been circulating in Nigeria and in Sierra Leone, and to pinpoint sites that have recurrent, independent mutations that may be markers for lineage-specific selection. I (3) develop a method to identify regions of overlapping function in viral genomes, and apply the approach to a wide range of viral genomes. Finally, I (4) use changes in the genome of Ebola virus to elucidate the virus' origin, evolution, and transmission dynamics at the start of the outbreak in Sierra Leone.
by Rachel Sealfon.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
47

Syed, Zeeshan Hassan 1980. "Computational methods for physiological data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54671.

Full text
Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2009.
Author is also affiliated with the MIT Dept. of Electrical Engineering and Computer Science. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-188).
Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care, by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, which are widely considered to be noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high-risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics. Furthermore, these results were independent of information in echocardiography, clinical characteristics, and biomarkers.
Our symbolic analysis techniques also identified groups of patients exhibiting a varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
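To give a feel for the symbolic-analysis idea, the sketch below discretizes a continuous waveform into a short string of symbols in the style of a generic SAX-like abstraction. It is an illustration only; the thesis's actual symbolization of ECG signals is more involved.

```python
import numpy as np

def symbolize(signal, n_segments=32, alphabet="abcd"):
    """Toy symbolic abstraction of a continuous waveform: z-normalize,
    piecewise-average into segments, then map each segment mean to a letter."""
    x = (signal - signal.mean()) / (signal.std() + 1e-8)

    # Piecewise aggregate approximation: one mean value per segment.
    means = np.array([seg.mean() for seg in np.array_split(x, n_segments)])

    # Breakpoints that split a standard normal into 4 equally likely regions,
    # matching the 4-letter default alphabet.
    breakpoints = np.array([-0.6745, 0.0, 0.6745])

    return "".join(alphabet[i] for i in np.digitize(means, breakpoints))

# Example: a noisy periodic signal becomes a short symbolic string whose
# recurring patterns can be searched for and compared across many patients.
t = np.linspace(0, 4 * np.pi, 4000)
print(symbolize(np.sin(t) + 0.1 * np.random.randn(t.size)))
```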
by Zeeshan Hassan Syed.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
48

Burke, Lauren. "Computer Science Education at The Claremont Colleges: The Building of an Intuition." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/scripps_theses/875.

Full text
Abstract:
In this thesis, I discuss how the undergraduate computer scientist is trained, and how they learn what I am calling computational intuition. Computational intuition describes the way in which computer scientists approach their problems and solve them through the use of computers. Computational intuition is a set of skills and a way of thinking about and approaching problems that students learn throughout their education. The main way that computational intuition is taught to students is through the experience they gain as they work on homework and classwork problems. To develop computational intuition, students learn explicit knowledge and techniques as well as knowledge that is tacit and harder to teach within the lectures of a classroom environment. Computational intuition includes concepts that professors and students discuss, including “computer science intuition,” “computational thinking,” general problem-solving skills or heuristics, and trained judgement. This way of learning is often social, and I draw on the pedagogy of cognitive apprenticeship to understand how the interactions between professors, tutors, and other students help learners gain an understanding of the “computer science intuition.” Computer scientists at the Claremont Colleges have identified this way of thinking as one of the most essential things to be taught and gained throughout their education, and it signals a wider understanding of computer science as a field.
APA, Harvard, Vancouver, ISO, and other styles
49

Pirzadeh, Hormoz. "Computational Geometry with the Rotating Calipers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0027/MQ50856.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Miri, Hossein. "CernoCAMAL : a probabilistic computational cognitive architecture." Thesis, University of Hull, 2012. http://hydra.hull.ac.uk/resources/hull:6887.

Full text
Abstract:
This thesis presents one possible way to develop a computational cognitive architecture, dubbed CernoCAMAL, that can be used to govern artificial minds probabilistically. The primary aim of the CernoCAMAL research project is to investigate how its predecessor architecture CAMAL can be extended to reason probabilistically about domain model objects through perception, and how the probability formalism can be integrated into its BDI (Belief-Desire-Intention) model to coalesce a number of mechanisms and processes. The motivation and impetus for extending CAMAL and developing CernoCAMAL is the considerable evidence that probabilistic thinking and reasoning is linked to cognitive development and plays a role in cognitive functions, such as decision making and learning. This leads us to believe that a probabilistic reasoning capability is an essential part of human intelligence. Thus, it should be a vital part of any system that attempts to emulate human intelligence computationally. The extensions and augmentations to CAMAL, which are the main contributions of the CernoCAMAL research project, are as follows:
- The integration of the EBS (Extended Belief Structure) that associates a probability value with every belief statement, in order to represent the degrees of belief numerically.
- The inclusion of the CPR (CernoCAMAL Probabilistic Reasoner) that reasons probabilistically over the goal- and task-oriented perceptual feedback generated by reactive sub-systems.
- The compatibility of the probabilistic BDI model with the affect and motivational models and affective and motivational valences used throughout CernoCAMAL.
A succession of experiments in simulation and robotic testbeds is carried out to demonstrate improvements and increased efficacy in CernoCAMAL’s overall cognitive performance. A discussion and critical appraisal of the experimental results, together with a summary, a number of potential future research directions, and some closing remarks conclude the thesis.
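As a rough illustration of what a probability-annotated belief structure might look like, the sketch below attaches a degree of belief to a statement and revises it with Bayes' rule when perceptual evidence arrives. The class and method names are my own hypothetical choices, not CernoCAMAL's actual EBS or CPR code.

```python
class ProbabilisticBelief:
    """A belief statement annotated with a degree of belief in [0, 1]."""

    def __init__(self, statement, probability):
        self.statement = statement
        self.probability = probability

    def bayes_update(self, likelihood_if_true, likelihood_if_false):
        """Revise the degree of belief given one piece of perceptual
        evidence, using Bayes' rule."""
        prior = self.probability
        evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
        self.probability = likelihood_if_true * prior / evidence
        return self.probability

# Example: an agent's belief that an object ahead is a target, revised after a
# noisy sensor reports "target detected" (true-positive 0.9, false-positive 0.2).
belief = ProbabilisticBelief("object_ahead_is_target", probability=0.5)
belief.bayes_update(likelihood_if_true=0.9, likelihood_if_false=0.2)
print(belief.statement, round(belief.probability, 3))  # ~0.818
```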
APA, Harvard, Vancouver, ISO, and other styles