Dissertations / Theses on the topic 'Computational science'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Computational science.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Spagnuolo, Carmine. "Scalable computational science." Doctoral thesis, Università degli Studi di Salerno, 2017. http://hdl.handle.net/10556/2581.
Full text
Computational science, also known as scientific computing, is a rapidly growing field that uses advanced computing to solve complex problems. This new discipline combines technologies, modern computational methods and simulations to address problems too complex to be reliably predicted by theory alone and too dangerous or expensive to be reproduced in laboratories. Successes in computational science over the past twenty years have driven demand for supercomputing, both to improve the performance of solutions and to allow models to grow in size and quality. From a computer scientist's perspective, it is natural to distribute the computation required to study a complex system among multiple machines: it is well known that the speed of single-processor computers is reaching its physical limits. For these reasons, parallel and distributed computing has become the dominant paradigm for computational scientists, who need the latest developments in computing resources to solve their problems, and "scalability" has been recognized as the central challenge of this science. This dissertation discusses the design and implementation of frameworks, parallel languages and architectures that improve the state of the art in scalable computational science.
Frameworks. The proposal of D-MASON, a distributed version of MASON, a well-known and popular Java toolkit for writing and running Agent-Based Simulations (ABSs). D-MASON introduces framework-level parallelization so that scientists who use the framework (e.g., a domain expert with limited knowledge of distributed programming) need only be minimally aware of the distribution. Development of D-MASON began in 2011; the main purpose of the project was to overcome the limits of MASON's sequential computation by using distributed computing.
D-MASON enables more than MASON in terms of simulation size (number of agents and complexity of agent behaviors), and it also reduces the running time of simulations written in MASON. For this reason, one of the most important features of D-MASON is that it requires only a limited number of changes to MASON code in order to execute simulations on distributed systems. D-MASON, based on the Master-Worker paradigm, was initially designed for heterogeneous computing in order to exploit unused computational resources in labs, but it also provides functionality for execution on homogeneous systems (such as HPC systems) as well as cloud infrastructures. The architecture of D-MASON is presented in the following three papers, which describe all of the D-MASON layers:
• Cordasco G., Spagnuolo C. and Scarano V. Toward the new version of D-MASON: Efficiency, Effectiveness and Correctness in Parallel and Distributed Agent-based Simulations. 1st IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems, IEEE International Parallel & Distributed Processing Symposium 2016.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. Bringing together efficiency and effectiveness in distributed simulations: the experience with D-MASON. SIMULATION: Transactions of The Society for Modeling and Simulation International, June 11, 2013.
• Cordasco G., De Chiara R., Mancuso A., Mazzeo D., Scarano V. and Spagnuolo C. A Framework for distributing Agent-based simulations. Ninth International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms, Euro-Par 2011 conference.
Much effort has been devoted, in the Communication Layer, to improving communication efficiency on homogeneous systems. D-MASON is based on the Publish/Subscribe (PS) communication paradigm and uses a centralized message broker (based on the Java Message Service standard) to deal with heterogeneous systems.
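The topic-based Publish/Subscribe paradigm that the Communication Layer builds on can be sketched as a minimal in-process dispatcher. This is an illustrative sketch only: the class, method and topic names below are invented for the example and are not D-MASON's or JMS's actual API.

```python
# Minimal topic-based Publish/Subscribe dispatcher (illustrative only).
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
# A worker subscribes to a topic carrying agents that migrate into its region.
broker.subscribe("region-1/migrated-agents", received.append)
broker.publish("region-1/migrated-agents", {"agent_id": 42, "x": 3.0, "y": 7.5})
broker.publish("region-2/migrated-agents", {"agent_id": 7, "x": 0.0, "y": 0.0})
# Only the message on the subscribed topic is delivered to this worker.
```

The design point is decoupling: publishers never address workers directly, so the same simulation code can run over a centralized JMS broker or a decentralized MPI transport.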
The communication for homogeneous systems uses the Message Passing Interface (MPI) standard and is also based on PS. In order to use MPI within Java, D-MASON uses a Java binding of MPI. Unfortunately, this binding is relatively new and does not provide all MPI functionalities. Several communication strategies were therefore designed, implemented and evaluated. These strategies were presented in two papers:
• Cordasco G., Milone F., Spagnuolo C. and Vicidomini L. Exploiting D-MASON on Parallel Platforms: A Novel Communication Strategy. 2nd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2014 conference.
• Cordasco G., Mancuso A., Milone F. and Spagnuolo C. Communication strategies in Distributed Agent-Based Simulations: the experience with D-MASON. 1st Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2013 conference.
D-MASON also provides mechanisms for the visualization and gathering of data in distributed simulations (available in the Visualization Layer). These solutions are presented in the paper:
• Cordasco G., De Chiara R., Raia F., Scarano V., Spagnuolo C. and Vicidomini L. Designing Computational Steering Facilities for Distributed Agent Based Simulations. Proceedings of the ACM SIGSIM Conference on Principles of Advanced Discrete Simulation 2013.
In distributed ABSs, one of the most complex problems is partitioning and balancing the computation. D-MASON provides, in the Distributed Simulation Layer, mechanisms for partitioning and dynamically balancing the computation. D-MASON uses a field partitioning mechanism to divide the computation across the distributed system. Field partitioning provides a good trade-off between balancing and communication effort. Nevertheless, many ABSs are not based on 2D or 3D fields but on a communication graph that models the relationships among the agents. In this case, the field partitioning mechanism does not ensure good simulation performance.
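The field partitioning idea can be illustrated with a toy uniform partition (a hypothetical sketch, not D-MASON's implementation): the field is cut into a grid of equal cells, an agent's position determines the cell, and each cell is assigned to one worker. With a skewed agent distribution, such fixed cells become unbalanced, which is exactly what motivates the non-uniform alternative.

```python
# Toy uniform field partitioning (illustrative, not D-MASON code).
# A width x height field is split into a rows x cols grid of equal cells.

def cell_of(x, y, width, height, rows, cols):
    # Map a position to the (row, col) index of the cell that owns it;
    # min() clamps points lying exactly on the far edge of the field.
    col = min(int(x / (width / cols)), cols - 1)
    row = min(int(y / (height / rows)), rows - 1)
    return row, col

def partition(agents, width, height, rows, cols):
    # agents: list of (x, y) positions. Returns cell -> list of agent indices.
    assignment = {}
    for i, (x, y) in enumerate(agents):
        cell = cell_of(x, y, width, height, rows, cols)
        assignment.setdefault(cell, []).append(i)
    return assignment

agents = [(1.0, 1.0), (9.0, 9.0), (1.5, 0.5), (6.0, 2.0)]
parts = partition(agents, width=10.0, height=10.0, rows=2, cols=2)
# Agents 0 and 2 land in the same top-left cell: if agents cluster (as in
# GIS-based urban models), fixed equal cells overload a few workers, which
# is what quad-tree-style non-uniform partitioning mitigates.
```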
Therefore, D-MASON also provides specific mechanisms to manage simulations that use a graph to describe agent interactions. These solutions were presented in the following publication:
• Antelmi A., Cordasco G., Spagnuolo C. and Vicidomini L. On Evaluating Graph Partitioning Algorithms for Distributed Agent Based Models on Networks. 3rd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2015 conference.
The field partitioning mechanism, intuitively, enables the mono- and bi-dimensional partitioning of a Euclidean space. This approach is also known as uniform partitioning. In some cases, however, e.g. simulations of urban areas using a Geographical Information System (GIS), uniform partitioning degrades simulation performance, due to the unbalanced distribution of agents over the field and, consequently, over the computational resources. In such cases, D-MASON provides a non-uniform partitioning mechanism (inspired by the Quad-Tree data structure), presented in the following papers:
• Lettieri N., Spagnuolo C. and Vicidomini L. Distributed Agent-based Simulation and GIS: An Experiment With the dynamics of Social Norms. 3rd Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2015 conference.
• Cordasco G., Spagnuolo C. and Scarano V. Work Partitioning on Parallel and Distributed Agent-Based Simulation. IEEE Workshop on Parallel and Distributed Processing for Computational Social Systems, International Parallel & Distributed Processing Symposium, 2017.
Moreover, D-MASON has been extended to provide a Simulation-as-a-Service (SIMaaS) infrastructure: its latest version offers a web-based System Management layer that simplifies the execution of distributed simulations on cloud infrastructures. D-MASON on the Amazon EC2 cloud infrastructure was compared, in terms of speed and cost, against D-MASON on an HPC environment. The obtained results and the new System Management layer are presented in the following paper:
• Carillo M., Cordasco G., Serrapica F., Spagnuolo C., Szufel P. and Vicidomini L.
D-Mason on the Cloud: an Experience with Amazon Web Services. 4th Workshop on Parallel and Distributed Agent-Based Simulations, Euro-Par 2016 conference.
Parallel Languages. The proposal of an architecture that enables code supported by a Java Virtual Machine (JVM) to be invoked from code written in the C language. Swift/T is a parallel scripting language for programming highly concurrent applications in parallel and distributed environments. Swift/T is a reimplementation of the Swift language, with a new compiler and runtime. Swift/T improves on Swift, allowing scalability beyond 500 tasks per second, load balancing, distributed data structures, and dataflow-driven concurrent task execution. Swift/T offers an interesting feature: the ability to call other languages (such as Python, R, Julia and C) easily and natively through special language functions named leaf functions. Considering the current trend of some supercomputing vendors (such as Cray Inc.) to support Java Virtual Machines (JVMs) on their processors, it is desirable to provide methods to call Java code from Swift/T as well. In particular, it is very attractive to be able to call JVM scripting languages such as Clojure, Scala, Groovy, JavaScript, etc. For this purpose, a C binding to instantiate and call a JVM was designed. This binding has been used in Swift/T (since version 1.0) to develop leaf functions that call Java code. The code is publicly available on the project's GitHub page.
Frameworks. The proposal of two tools that exploit the computing power of parallel systems to improve the effectiveness and efficiency of Simulation Optimization strategies. Simulation Optimization (SO) refers to the techniques studied for ascertaining the parameters of a complex model that minimize (or maximize) given criteria (one or many), which can only be computed by performing a simulation run.
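The SO loop just described can be sketched as a black-box search: each candidate parameter configuration is evaluated by running one simulation, and the best objective value seen is kept. This is a minimal illustration only (plain random search over an invented objective), not the strategy implemented by SOF or EMEWS.

```python
# Minimal Simulation Optimization loop: random search over a black-box
# objective that stands in for an expensive simulation run (illustrative).
import random

def simulate(params):
    # Stand-in for a simulation run; this toy objective has its minimum
    # at x = 3.0, y = -1.0 (purely an assumption for the example).
    return (params["x"] - 3.0) ** 2 + (params["y"] + 1.0) ** 2

def random_search(n_trials, seed=0):
    # Sample configurations uniformly, run one "simulation" each,
    # and keep the configuration with the best objective value.
    rng = random.Random(seed)
    best_params, best_value = None, float("inf")
    for _ in range(n_trials):
        params = {"x": rng.uniform(-10.0, 10.0), "y": rng.uniform(-10.0, 10.0)}
        value = simulate(params)
        if value < best_value:
            best_params, best_value = params, value
    return best_params, best_value

best_params, best_value = random_search(n_trials=2000)
```

Because every trial is an independent simulation run, the loop parallelizes trivially, which is why frameworks such as SOF and EMEWS can farm trials out to Hadoop or Swift/T workers.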
Due to the high dimensionality of the search space, the heterogeneity of the parameters, the irregular shape and the stochastic nature of the objective evaluation function, the tuning of such systems is extremely demanding from the computational point of view. The first framework is SOF: Zero Configuration Simulation Optimization Framework on the Cloud, designed to run the SO process in the cloud. SOF is based on the Apache Hadoop infrastructure and is presented in the following paper:
• Carillo M., Cordasco G., Scarano V., Serrapica F., Spagnuolo C. and Szufel P. SOF: Zero Configuration Simulation Optimization Framework on the Cloud. Parallel, Distributed, and Network-Based Processing 2016.
The second framework is EMEWS: Extreme-scale Model Exploration with Swift/T, designed at Argonne National Laboratory (USA). EMEWS, like SOF, allows SO processes to be performed on distributed systems. Both frameworks are mainly designed for ABSs. In particular, EMEWS was tested using the ABS toolkit Repast. Initially, EMEWS was not able to easily execute, out of the box, simulations written in MASON and NetLogo. This thesis presents new functionalities of EMEWS and solutions to easily execute MASON and NetLogo simulations on it. The EMEWS use cases are presented in the following paper:
• J. Ozik, N. T. Collier, J. M. Wozniak and C. Spagnuolo. From Desktop To Large-scale Model Exploration with Swift/T. Winter Simulation Conference 2016.
Architectures. The proposal of an open-source, extensible architecture for the visualization of data in HTML pages, exploiting distributed web computing. Following the Edge-centric Computing paradigm, data visualization is performed on the edge side, ensuring data trustiness, privacy, scalability and dynamic data loading. The architecture has been exploited in the Social Platform for Open Data (SPOD). The proposed architecture has also appeared in the following papers:
• G. Cordasco, D. Malandrino, P. Palmieri, A.
Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. A Scalable Data Web Visualization Architecture. Parallel, Distributed, and Network-Based Processing 2017.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An Architecture for Social Sharing and Collaboration around Open Data Visualisation. In Poster Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing 2016.
• G. Cordasco, D. Malandrino, P. Palmieri, A. Petta, D. Pirozzi, V. Scarano, L. Serra, C. Spagnuolo, L. Vicidomini. An extensible architecture for an ecosystem of visualization web-components for Open Data. Maximising Interoperability Workshop: core vocabularies, location-aware data and more, 2015. [edited by author]
Cushing, Judith Bayard. "Computational proxies: an object-based infrastructure for computational science." 1995. Full text open access at: http://content.ohsu.edu/u?/etd,195.
Full text
Brogliato, Marcelo Salhab. "Essays in computational management science." Repositório Institucional do FGV, 2018. http://hdl.handle.net/10438/24615.
Full text
This thesis presents three specific, self-contained, scientific papers in the Computational Management Science area. Modern management and high technology interact in multiple, profound, ways. Professor Andrew Ng tells students at Stanford’s Graduate School of Business that “AI is the new electricity”, as his hyperbolic way to emphasize the potential transformational power of the technology. The first paper is inspired by the possibility that there will be some form of purely digital money and studies distributed ledgers, proposing and analyzing Hathor, an alternative architecture towards a scalable cryptocurrency. The second paper may be a crucial item in understanding human decision making, perhaps, bringing us a formal model of recognition-primed decision. Lying at the intersection of cognitive psychology, computer science, neuroscience, and artificial intelligence, it presents an open-source, cross-platform, and highly parallel framework of the Sparse Distributed Memory and analyzes the dynamics of the memory with some applications. Last but not least, the third paper lies at the intersection of marketing, diffusion of technological innovation, and modeling, extending the famous Bass model to account for users who, after adopting the innovation for a while, decide to reject it later on.
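The classical Bass model that the third paper extends can be written in discrete time: new adopters per step combine an innovation term p and an imitation term q weighted by the adopted fraction. The rejection variant below is only a hypothetical sketch with a constant per-step rejection rate r, since the abstract does not give the thesis's actual formulation.

```python
# Discrete-time Bass diffusion model (p = innovation coefficient,
# q = imitation coefficient, m = market potential), plus a hypothetical
# rejection extension -- an illustration, not the thesis's exact model.

def bass_adopters(p, q, m, steps):
    cumulative = 0.0
    history = []
    for _ in range(steps):
        # New adopters: (p + q * adopted_fraction) * remaining market.
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        history.append(cumulative)
    return history

def bass_with_rejection(p, q, m, r, steps):
    # Hypothetical variant: each step, a fraction r of current users
    # rejects the innovation and stops using it.
    users, ever_adopted = 0.0, 0.0
    history = []
    for _ in range(steps):
        new = (p + q * users / m) * (m - ever_adopted)
        ever_adopted += new
        users += new - r * users
        history.append(users)
    return history

curve = bass_adopters(p=0.03, q=0.38, m=1000.0, steps=40)
rejected = bass_with_rejection(p=0.03, q=0.38, m=1000.0, r=0.05, steps=40)
```

With p = 0.03 and q = 0.38 (values often quoted for durable goods), the baseline curve traces the familiar S-shape toward the market potential m, while the rejection variant saturates below it.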
Chada, Daniel de Magalhães. "From cognitive science to management science: two computational contributions." Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/17053.
Full textApproved for entry into archive by Kelly Ayala (kelly.ayala@fgv.br) on 2016-09-12T12:58:17Z (GMT) No. of bitstreams: 1 Chada 2011 FINAL ENTREGUE.pdf: 579283 bytes, checksum: f463590c20f51b84ba0f9357ab1a6e08 (MD5)
Approved for entry into archive by Kelly Ayala (kelly.ayala@fgv.br) on 2016-09-12T13:00:07Z (GMT) No. of bitstreams: 1 Chada 2011 FINAL ENTREGUE.pdf: 579283 bytes, checksum: f463590c20f51b84ba0f9357ab1a6e08 (MD5)
Made available in DSpace on 2016-09-12T13:03:31Z (GMT). No. of bitstreams: 1 Chada 2011 FINAL ENTREGUE.pdf: 579283 bytes, checksum: f463590c20f51b84ba0f9357ab1a6e08 (MD5) Previous issue date: 2011
This work is composed of two contributions. The first borrows from the work of Charles Kemp and Joshua Tenenbaum on the discovery of structural form: their model is used to study the Business Week rankings of U.S. business schools, and to investigate how other structural forms (structured visualizations) of the same information used to generate the rankings can bring insights into the space of business schools in the U.S., and into rankings in general. The second essay is purely theoretical in nature. It is a study toward a model of human memory that does not exceed our (human) psychological short-term memory limitations. This study is based on Pentti Kanerva's Sparse Distributed Memory, in which human memories are registered into a vast (but virtual) memory space, and this registration occurs in a massively parallel and distributed fashion, in idealized neurons.
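Kanerva's Sparse Distributed Memory can be sketched in a few lines: a fixed set of random binary "hard locations" covers the address space; writing a datum updates signed counters at every location within a Hamming-distance radius of the target address, and reading pools those counters and thresholds them. The parameters below are tiny toy values, far smaller than a realistic memory.

```python
# Minimal sketch of Kanerva's Sparse Distributed Memory with binary
# addresses and data (illustrative; real dimensions are much larger).
import random

class SparseDistributedMemory:
    def __init__(self, n_locations, dim, radius, seed=0):
        rng = random.Random(seed)
        self.dim, self.radius = dim, radius
        # Hard locations: fixed random binary addresses.
        self.addresses = [[rng.randint(0, 1) for _ in range(dim)]
                          for _ in range(n_locations)]
        # One signed counter per bit per hard location.
        self.counters = [[0] * dim for _ in range(n_locations)]

    def _active(self, address):
        # Indices of hard locations within Hamming distance `radius`.
        for loc, stored in enumerate(self.addresses):
            if sum(a != b for a, b in zip(stored, address)) <= self.radius:
                yield loc

    def write(self, address, data):
        # Registration is distributed over every active location.
        for loc in self._active(address):
            for i, bit in enumerate(data):
                self.counters[loc][i] += 1 if bit else -1

    def read(self, address):
        # Pool counters of active locations, then threshold each bit.
        sums = [0] * self.dim
        for loc in self._active(address):
            for i, c in enumerate(self.counters[loc]):
                sums[i] += c
        return [1 if s > 0 else 0 for s in sums]

sdm = SparseDistributedMemory(n_locations=2000, dim=32, radius=12, seed=1)
rng = random.Random(2)
pattern = [rng.randint(0, 1) for _ in range(32)]
sdm.write(pattern, pattern)  # autoassociative storage
recalled = sdm.read(pattern)
```

Because each datum is smeared over hundreds of locations, recall also works from noisy cues: the active sets of nearby addresses overlap heavily, so pooled counters still vote for the stored pattern.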
Anzola, David. "The philosophy of computational social science." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808102/.
Cattinelli, I. "INVESTIGATIONS ON COGNITIVE COMPUTATION AND COMPUTATIONAL COGNITION." Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/155482.
Yu, Jingyuan. "Discovering Twitter through Computational Social Science Methods." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671609.
As Twitter has become woven into people's daily lives, it has grown into one of the most important information-exchange platforms and quickly attracted scientists' attention. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analytic tools and algorithms have been designed in the last decade. The present doctoral thesis consists of three studies. First, given the 14 years of history (up to 2020) since the foundation of Twitter, an explosion of related scientific publications has been witnessed, but the current research landscape on this social media platform remained unknown. To fill this research gap, we conducted a bibliometric analysis of Twitter-related studies to analyze how Twitter research evolved over time, and to provide a general description of the Twitter research academic environment at a macro level. Second, since many analytic software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To address this, we reviewed some of the integrated frameworks considered most relevant for social science research; given that junior social science researchers may face financial constraints, we narrowed our scope to free and low-cost software. Third, given the current public health crisis, we noted that social media are among the most accessed information and news sources for the public. During a pandemic, how health issues and diseases are framed in news releases shapes the public's understanding of the outbreak and their attitudes and behaviors. Hence, we used Twitter as an easily accessed news source to analyze the evolution of Spanish news frames during the COVID-19 pandemic.
Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, network analysis and data visualization. This doctoral project has shown how people study and use Twitter at three different levels: the academic, the practical and the empirical.
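The computational methods this record lists (data collection, text mining, network analysis) can be illustrated in miniature. A hedged sketch with invented toy tweets, counting hashtag frequencies and building a weighted co-occurrence network; this is a generic illustration, not the thesis's actual pipeline:

```python
from collections import Counter
from itertools import combinations

# Invented toy tweets standing in for collected data.
tweets = [
    "Nuevas medidas frente al #COVID19 anunciadas hoy #salud",
    "El impacto economico del #COVID19 preocupa #economia",
    "Vacunacion y #salud publica: claves frente al #COVID19",
]

def hashtags(text):
    """Lowercased hashtags in a tweet, with trailing punctuation stripped."""
    return [w.lower().strip(".,:;") for w in text.split() if w.startswith("#")]

# Text mining: term (hashtag) frequencies across the corpus.
freq = Counter(tag for t in tweets for tag in hashtags(t))

# Network analysis: weighted co-occurrence edges between hashtags
# that appear in the same tweet.
edges = Counter()
for t in tweets:
    for a, b in combinations(sorted(set(hashtags(t))), 2):
        edges[(a, b)] += 1
```

The `edges` counter is a weighted edge list; handing it to a graph library and computing centralities or communities would be the natural next network-analysis step.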
Osorio, Guillén Jorge Mario. "Density Functional Theory in Computational Materials Science." Doctoral thesis, Uppsala University, Department of Physics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4496.
The present thesis is concerned with the application of first-principles self-consistent total-energy calculations within density functional theory to different topics in materials science.
Crystallographic phase transitions under high pressure have been studied for TiO2, FeI2, Fe3O4, Ti, the heavy alkali metals Cs and Rb, and C3N4. A new high-pressure polymorph of TiO2 has been discovered; this polymorph has an orthorhombic OI (Pbca) crystal structure and is predicted theoretically for the pressure range 50 to 100 GPa. The crystal structures of Cs and Rb metals have also been studied under high compression. Our results confirm the recent high-pressure experimental observations of new complex crystal structures for the Cs-III and Rb-III phases. Thus, it is now certain that the famous isostructural phase transition in Cs is in fact a new crystallographic phase transition.
The elastic properties of the new superconductor MgB2 and of Al-doped MgB2 have been investigated. Values of all independent elastic constants (c11, c12, c13, c33, and c55), as well as the bulk moduli along the a and c directions (Ba and Bc, respectively), are predicted. Our analysis suggests that the high anisotropy of the calculated elastic moduli is a strong indication that MgB2 should be rather brittle. Al doping decreases the elastic anisotropy of MgB2 between the a and c directions, but it does not change the brittle behaviour of the material considerably.
The three most relevant battery properties, namely the average voltage, energy density and specific energy, as well as the electronic structure of the Li/LixMPO4 systems (where M is Fe, Mn, or Co), have been calculated. The mixing of Fe and Mn in these materials is also examined. Our calculated values for these properties are in good agreement with recent experimental values. Further insight is gained from the electronic density of states of these materials, from which conclusions about the physical properties of the various phases are drawn.
The electronic and magnetic properties of the dilute magnetic semiconductor Mn-doped ZnO have been calculated. We have found that at a Mn concentration of 5.6%, the ferromagnetic configuration is energetically stable with respect to the antiferromagnetic one. A half-metallic electronic structure is obtained within the GGA approximation, where the Mn ions are in a divalent state, leading to a total magnetic moment of 5 μB per Mn atom.
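The average-voltage property mentioned in this abstract is commonly estimated from a total-energy difference between the lithiated and delithiated host plus Li metal. A hedged sketch of that standard estimate, with invented energy values rather than the thesis's numbers:

```python
def average_voltage(e_lix2, e_lix1, e_li, x2=1.0, x1=0.0):
    """Average intercalation voltage (in volts) from total energies given
    in eV per formula unit: V = -[E(Li_x2 host) - E(Li_x1 host)
    - (x2 - x1) E(Li metal)] / (x2 - x1); with energies in eV the
    electron charge cancels."""
    return -(e_lix2 - e_lix1 - (x2 - x1) * e_li) / (x2 - x1)

# Invented total energies (eV per formula unit), purely illustrative.
v = average_voltage(e_lix2=-191.3, e_lix1=-186.0, e_li=-1.9)
```

With these made-up inputs the estimate comes out near 3.4 V, the order of magnitude typical for olivine phosphate cathodes.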
Osorio, Guillén Jorge Mario. "Density functional theory in computational materials science /." Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4496.
Shimada, Yosuke. "Computational science of turbulent mixing and combustion." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/5552.
Prottsman, Christie Lee Lili. "Computational Thinking and Women in Computer Science." Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/11485.
Though the first computer programmers were female, women currently make up only a quarter of the computing industry. This lack of diversity jeopardizes technical innovation, creativity and profitability. As demand for talented computing professionals grows, both academia and industry are seeking ways to reach out to groups of individuals who are underrepresented in computer science, the largest of which is women. Women are most likely to succeed in computer science when they are introduced to computing concepts as children and are exposed over a long period of time. In this paper I show that computational thinking (the art of abstraction and automation) can be introduced earlier than has been demonstrated before. Building on ideas being developed for the state of California, I have created an entertaining and engaging educational software prototype that makes primary concepts accessible down to the third grade level.
Committee in charge: Michal Young, Chairperson; Joanna Goode, Member
Scott-Murray, Amy. "Applications of 3D computational photography to marine science." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=233937.
Rinker, Robert E. "Reducing Computational Expense of Ray-Tracing Using Surface Oriented Pre-Computation." UNF Digital Commons, 1991. http://digitalcommons.unf.edu/etd/26.
Full textRousseau, Mathieu. "Computational modeling and analysis of chromatin structure." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116941.
The organization of DNA within the cell nucleus is known to play an important role in processes such as DNA replication and repair and the regulation of gene expression. Recent technological advances in DNA microarrays and high-throughput sequencing have enabled new techniques for measuring chromatin conformation in vivo. The data generated by these methods constitute an approximate measure of chromatin structure. Methods that model and analyze these data in order to infer the three-dimensional structure of chromatin will be valuable tools for discovering the mechanisms governing chromatin structure. The overall objective of my thesis is to develop computational models for analyzing the three-dimensional structure of DNA and to carry out data analyses to better understand the role of chromatin structure in various cellular processes. This thesis presents three main results. First, a new set of tools for the computational modeling and analysis of data from chromosome conformation capture carbon copy (5C) and Hi-C experiments. Our method, named MCMC5C, is based on Markov chain Monte Carlo sampling and can generate ensembles of representative three-dimensional models from noisy experimental data. Second, our investigation of the relationship between chromatin structure and gene expression during cellular differentiation shows that chromatin architecture is a dynamic structure that adopts an open conformation for actively transcribed genes and a condensed conformation for non-transcribed genes.
Third, we developed a support-vector-machine classifier from our 5C data and showed that chromatin conformation signatures could be used to distinguish between lymphoid and myeloid leukemia.
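The ensemble-generation idea behind an MCMC-based structure method can be sketched generically: propose perturbed bead coordinates, score them against target pairwise distances, and accept or reject with a Metropolis rule. The scoring function, toy distance matrix and parameters below are invented illustrations, not the published MCMC5C method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented target pairwise distances for a 4-bead chain (a toy stand-in
# for distances derived from 5C/Hi-C contact data).
target = np.array([
    [0, 1, 2, 3],
    [1, 0, 1, 2],
    [2, 1, 0, 1],
    [3, 2, 1, 0],
], dtype=float)

def score(x):
    """Negative sum of squared deviations from the target distances."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return -np.sum((d - target) ** 2)

def metropolis(n_steps=20000, step=0.1, temp=0.05):
    """Metropolis-Hastings over bead coordinates; returns the best model
    found plus an ensemble of periodically sampled models."""
    x = rng.normal(size=(4, 3))
    s = score(x)
    best_x, best_s = x, s
    ensemble = []
    for i in range(n_steps):
        prop = x + rng.normal(scale=step, size=x.shape)
        sp = score(prop)
        # Accept improving moves always, worsening moves with Boltzmann probability.
        if sp > s or rng.random() < np.exp((sp - s) / temp):
            x, s = prop, sp
            if s > best_s:
                best_x, best_s = x, s
        if (i + 1) % 1000 == 0:
            ensemble.append(x.copy())
    return best_x, best_s, ensemble

best_model, best_score, ensemble = metropolis()
```

Keeping periodic samples rather than only the single best model is what turns the chain into an ensemble of representative structures consistent with noisy data.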
Urgen, Burcu Aysen. "A Philosophical Analysis Of Computational Modeling In Cognitive Science." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608832/index.pdf.
Morrison (1999)'s account is employed on a case study. The framework emphasizes four key elements to understand the place of models in science: the construction of models, the function of models, the representation they provide, and the ways we learn from models. The case study, Q-Soar (Simon, Newell & Klahr, 1991), is a model built with the Soar cognitive architecture (Laird, Newell & Rosenbloom, 1987) and is representative of a class of computational cognitive models. Discussions are included on how to generalize from this class to computational cognitive models at large, i.e. to models built with other modeling paradigms.
Langham, A. E. "A self-organising approach to problems in computational science." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637847.
Castillo, Andrea R. (Andrea Redwing). "Assessing computational methods and science policy in systems biology." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/51655.
Full textIncludes bibliographical references (p. 109-112).
In this thesis, I discuss the development of systems biology and issues in the progression of this science discipline. Traditional molecular biology has been driven by reductionism, with the belief that breaking down a biological system into its fundamental biomolecular components will elucidate phenomena of interest. We have reached the limitations of this approach due to the complex and dynamical nature of life and our inability to intuit biological behavior from a modular perspective [37]. Mathematical modeling has been integral to current systems biology endeavors, since detailed analysis would be invasive if performed on humans experimentally or in clinical trials [17]. The interspecies commonalities in systemic properties and molecular mechanisms suggest that certain behaviors transcend species differentiation and therefore lend themselves to generalization from simpler organisms to more complex organisms such as humans [7, 17]. Current methodologies in mathematical modeling and analysis have been diverse and numerous, with no standardization to progress the discipline in a collaborative manner. Without collaboration during this formative period, the successful development and application of systems biology for societal welfare may be at risk. Furthermore, such collaboration has to be standardized in a fundamental approach to discover generic principles, in the manner of preceding long-standing science disciplines. This study effectively implements and analyzes a mathematical model of a three-protein biochemical network, the Synechococcus elongatus circadian clock.
I use mass action theory expressed in Kronecker products to exploit the ability to apply numerical methods, including sensitivity analysis via boundary value problem (BVP) formulation and the trapezoidal integration rule, and experimental techniques, including partial reaction fitting and enzyme-driven activations, when mathematically modeling large-scale biochemical networks. Amidst other applicable methodologies, my approach is grounded in the law of mass action because it is based on experimental data and biomolecular mechanistic properties, yet provides predictive power in the complete delineation of the biological system dynamics for all future time points. The results of my research demonstrate the holistic approach that mass action methodologies take in determining emergent properties of biological systems. I further stress the necessity of enforcing collaboration and standardization in future policymaking, with reconsideration of current stakeholder incentives to redirect academia and industry focus from new molecular entities toward a holistic understanding of the complexities and dynamics of living entities. Such redirection away from reductionism could further progress basic and applied scientific research to better our circumstances through new treatments and preventive measures for health, and the development of new strains and disease control in agriculture and ecology [13].
by Andrea R. Castillo.
S.M. in Technology and Policy
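The mass-action modeling with trapezoidal integration described in this record can be illustrated on a toy reaction. A hedged sketch of the general technique only (a single invented reaction A + B -> C, not the circadian-clock model):

```python
import numpy as np

def rates(y, k=1.0):
    """Mass-action rate law for the single reaction A + B -> C:
    d[A]/dt = d[B]/dt = -k[A][B], d[C]/dt = +k[A][B]."""
    a, b, c = y
    v = k * a * b
    return np.array([-v, -v, v])

def heun(y0, t_end=10.0, dt=0.01):
    """Explicit trapezoidal-rule (Heun) integration of the ODE system."""
    y = np.array(y0, dtype=float)
    for _ in range(round(t_end / dt)):
        f0 = rates(y)
        predictor = y + dt * f0                      # forward-Euler predictor
        y = y + 0.5 * dt * (f0 + rates(predictor))   # trapezoidal corrector
    return y

# Initial concentrations [A], [B], [C]; the linear conservation laws
# A + C and B + C are preserved exactly by the integrator.
y_final = heun([1.0, 0.5, 0.0])
```

Because mass-action rate laws are polynomial in the concentrations, the same pattern scales to large networks, which is what makes the Kronecker-product formulation mentioned in the abstract attractive.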
Rathgeber, Florian. "Productive and efficient computational science through domain-specific abstractions." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/18911.
Ielina, Tetiana, Liudmyla Halavska, Daiva Mikucioniene, and Rimvidas Milasius. "Information models of knitwear in computational science and engineering." Thesis, Київський національний університет технологій та дизайну, 2021. https://er.knutd.edu.ua/handle/123456789/19105.
Kuhlman, Christopher J. "High Performance Computational Social Science Modeling of Networked Populations." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51175.
Ph. D.
Cedeno, Vanessa Ines. "Pipelines for Computational Social Science Experiments and Model Building." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91445.
Doctor of Philosophy
To understand individual and collective behavior, there has been significant interest in using online systems to carry out social science experiments. Considerable work is required to analyze the data and uncover interesting insights. In this dissertation, we design and build automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has previously been pursued without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group or collective identity is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making, so there is interest in modeling situations that promote the creation of CI, to learn more about the process and to predict human behavior in real-life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. Given all of the experimental work on anagram games, it is surprising that very little work has been done on modeling them. In addition, to identify the best explanations for phenomena, we use abduction. Abduction is an inference approach driven by data and observations. It has broad application in robotics, genetics, automated systems, and image understanding, but has largely not been applied to human behavior. In a web-based networked group anagram game setting, we do the following. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. We build and evaluate three agent-based models (ABMs).
We analyze experimental data and develop models of human reasoning to predict detailed game player action. We claim our models can explain behavior and provide novel experimental insights into CI, because there is agreement between the model predictions and the experimental data.
Gouws, Lindsey Ann. "The role of computational thinking in introductory computer science." Thesis, Rhodes University, 2014. http://hdl.handle.net/10962/d1011152.
Full textSidiropoulos, Anastasios. "Computational metric embeddings." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44712.
Includes bibliographical references (p. 141-145).
We study the problem of computing a low-distortion embedding between two metric spaces. More precisely, given an input metric space M we are interested in computing in polynomial time an embedding into a host space M' with minimum multiplicative distortion. This problem arises naturally in many applications, including geometric optimization, visualization, multi-dimensional scaling, network spanners, and the computation of phylogenetic trees. We focus on the case where the host space is either a Euclidean space of constant dimension, such as the line and the plane, or a graph metric of simple topological structure, such as a tree. For Euclidean spaces, we present the following upper bounds. We give an approximation algorithm that, given a metric space that embeds into R^1 with distortion c, computes an embedding with distortion c^O(1) Δ^(3/4) (Δ denotes the ratio of the maximum over the minimum distance). For higher-dimensional spaces, we obtain an algorithm which, for any fixed d > 2, given an ultrametric that embeds into R^d with distortion c, computes an embedding with distortion c^O(1). We also present an algorithm achieving distortion c log^O(1) Δ for the same problem. We complement the above upper bounds by proving hardness of computing optimal, or near-optimal, embeddings. When the input space is an ultrametric, we show that it is NP-hard to compute an optimal embedding into R^2 under the … norm. Moreover, we prove that for any fixed d > 2, it is NP-hard to approximate the minimum-distortion embedding of an n-point metric space into R^d within a factor of Ω(n^(1/(17d))). Finally, we consider the problem of embedding into tree metrics. We give an O(1)-approximation algorithm for the case where the input is the shortest-path metric of an unweighted graph. For general metric spaces, we present an algorithm which, given an n-point metric that embeds into a tree with distortion c, computes an embedding with distortion (c log n)^O(…). By composing this algorithm with an algorithm for embedding trees into R^1, we obtain an improved algorithm for embedding general metric spaces into R^1.
by Anastasios Sidiropoulos.
Ph.D.
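The multiplicative distortion minimized throughout this abstract has a direct computational definition: the product of the worst pairwise expansion and the worst pairwise contraction of the map. A small illustrative implementation for finite point sets in Euclidean coordinates (the example embedding is invented):

```python
from itertools import combinations
from math import dist

def multiplicative_distortion(points_m, points_mprime):
    """Distortion of the map points_m[i] -> points_mprime[i] between two
    finite point sets given in Euclidean coordinates: the product of the
    worst pairwise expansion and the worst pairwise contraction."""
    expansion = contraction = 0.0
    for i, j in combinations(range(len(points_m)), 2):
        d1 = dist(points_m[i], points_m[j])
        d2 = dist(points_mprime[i], points_mprime[j])
        expansion = max(expansion, d2 / d1)
        contraction = max(contraction, d1 / d2)
    return expansion * contraction

# Embedding the unit square onto four collinear points on the line.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
c = multiplicative_distortion(square, line)
```

Minimizing this quantity over all maps into the host space is exactly the (hard) optimization problem the thesis approximates.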
Levy, Abitbol Jacobo. "Computational detection of socioeconomic inequalities." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN001.
Machine and deep learning advances have come to permeate the modern sciences and have unlocked the study of numerous issues many deemed intractable. The social sciences have accordingly benefited from these advances: neural language models have been used extensively to analyze social and linguistic phenomena, such as quantifying semantic change or detecting the ideological bias of news articles, while convolutional neural networks have been used in urban settings to explore the dynamics of urban change, by determining which characteristics predict neighborhood improvement or by examining how the perception of safety affects the liveliness of neighborhoods. In light of this, this dissertation argues that one particular social phenomenon, socioeconomic inequality, can be gainfully studied by these means. We set out to collect and combine large datasets enabling 1) the study of the spatial, temporal, linguistic and network dependencies of socioeconomic inequalities and 2) the inference of socioeconomic status (SES) from these multimodal signals. This task is worthy of study, as previous research has fallen short of providing a complete picture of how these multiple factors are intertwined with individual socioeconomic status, and of how the former can fuel better inference methodologies for the latter. These questions are important, as much is still unclear about the root causes of SES inequalities, and the deployment of ML/DL solutions to pinpoint them is still very much in its infancy.
Varde, Aparna S. "Graphical data mining for computational estimation in materials science applications." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-081506-152633/.
Piccoli, Prisca Primavera <1991>. "Didactics of Computational Thinking Addressed to Non-Computer Science Learners." Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/10595.
Kanade, Varun. "Computational Questions in Evolution." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10556.
Engineering and Applied Sciences
Raina, Priyanka. "Architectures for computational photography." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82393.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 93-94).
Computational photography refers to a wide range of image capture and processing techniques that extend the capabilities of digital photography and allow users to take photographs that could not have been taken by a traditional camera. Since its inception less than a decade ago, the field today encompasses a wide range of techniques, including high-dynamic-range (HDR) imaging, low-light enhancement, panorama stitching, image deblurring and light field photography. These techniques have so far been software based, which leads to high energy consumption and typically no support for real-time processing. This work focuses on hardware architectures for two algorithms: (a) bilateral filtering, which is commonly used in computational photography applications such as HDR imaging, low-light enhancement and glare reduction, and (b) image deblurring. In the first part of this work, digital circuits for three components of a multi-application bilateral filtering processor are implemented: the grid interpolation block, the HDR image creation and contrast adjustment blocks, and the shadow correction block. An on-chip implementation of the complete processor, designed with other team members, performs HDR imaging, low-light enhancement and glare reduction. The 40 nm CMOS test chip operates from 98 MHz at 0.9 V to 25 MHz at 0.9 V and processes 13 megapixels/s while consuming 17.8 mW at 98 MHz and 0.9 V, achieving significant energy reduction compared to previous CPU/GPU implementations. In the second part of this work, a complete system architecture for blind image deblurring is proposed. Digital circuits for the component modules are implemented using Bluespec SystemVerilog and verified to be bit-accurate against a reference software implementation. Techniques to reduce power and area cost are investigated, and synthesis results in 40 nm CMOS technology are presented.
by Priyanka Raina.
S.M.
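The bilateral filter at the heart of the processor described in this record combines a spatial Gaussian with a photometric (range) Gaussian, so it smooths flat regions while preserving edges. A brute-force software sketch for intuition only; the parameter values are invented, and per the abstract the hardware uses a grid-interpolation approach rather than this direct form:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Brute-force bilateral filter for a float grayscale image in [0, 1]:
    each output pixel is a normalized average of its neighborhood, weighted
    by spatial distance and by photometric (range) distance."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            weights = spatial * rng_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A sharp step edge: the filter smooths each side but preserves the edge,
# unlike a plain Gaussian blur.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
smoothed = bilateral_filter(img)
```

Pixels across the step get a near-zero range weight, which is why the edge survives; that edge-preserving normalization is also what makes the filter expensive and worth accelerating in hardware.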
Kirmani, Ghulam A. (Ghulam Ahmed). "Computational time-resolved imaging." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97803.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 151-159).
Classical photography uses steady-state illumination and light sensing with focusing optics to capture scene reflectivity as images; temporal variations of the light field are not exploited. This thesis explores the use of time-varying optical illumination and time-resolved sensing along with signal modeling and computational reconstruction. Its purpose is to create new imaging modalities, and to demonstrate high-quality imaging in cases in which traditional techniques fail to form even degraded imagery. The principal contributions in this thesis are the derivation of physically accurate signal models for the scene's response to time-varying illumination and for the photodetection statistics of the sensor, and the combining of these models with computationally tractable signal-recovery algorithms leading to image formation. In active optical imaging setups, we use computational time-resolved imaging to experimentally demonstrate: non-line-of-sight imaging, or looking around corners, in which only diffusely scattered light was used to image a hidden plane that was completely occluded from both the light source and the sensor; single-pixel 3D imaging, or compressive depth acquisition, in which accurate depth maps were obtained using a single, non-spatially-resolving bucket detector in combination with a spatial light modulator; and high-photon-efficiency imaging, including first-photon imaging, in which high-quality 3D and reflectivity images were formed using only the first detected photon at each sensor pixel, despite the presence of high levels of background light.
by Ghulam A. Kirmani.
Ph. D.
Hanson-Smith, Victor 1981. "Error and Uncertainty in Computational Phylogenetics." Thesis, University of Oregon, 2011. http://hdl.handle.net/1794/12151.
The evolutionary history of protein families can be difficult to study because necessary ancestral molecules are often unavailable for direct observation. As an alternative, the field of computational phylogenetics has developed statistical methods to infer the evolutionary relationships among extant molecular sequences and their ancestral sequences. Typically, the methods of computational phylogenetic inference and ancestral sequence reconstruction are combined with other non-computational techniques in a larger analysis pipeline to study the inferred forms and functions of ancient molecules. Two big problems surrounding this analysis pipeline are computational error and statistical uncertainty. In this dissertation, I use simulations and analysis of empirical systems to show that phylogenetic error can be reduced by using an alternative search heuristic. I then use similar methods to reveal the relationship between phylogenetic uncertainty and the accuracy of ancestral sequence reconstruction. Finally, I provide a case-study of a molecular machine in yeast, to demonstrate all stages of the analysis pipeline. This dissertation includes previously published co-authored material.
Committee in charge: John Conery, Chair; Daniel Lowd, Member; Sara Douglas, Member; Joseph W. Thornton, Outside Member
James, Roshan P. "The computational content of isomorphisms." Thesis, Indiana University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587675.
Abstract models of computation, such as Turing machines, λ-calculus and logic gates, allow us to express computation without being concerned about the underlying technology that realizes them in the physical world. These models embrace a classical worldview wherein computation is essentially irreversible. From the perspective of quantum physics however, the physical world is one where every fundamental interaction is essentially reversible and various quantities such as energy, mass, angular momentum are conserved. Thus the irreversible abstractions we choose as the basis of our most primitive models of computing are at odds with the underlying reversible physical reality and hence our thesis: By embracing irreversible physical primitives, models of computation have also implicitly included a class of computational effects which we call information effects.
To make this precise, we develop an information-preserving model of computation (in the sense of Shannon entropy) wherein the process of computing does not gain or lose information. We then express information effects in this model using an arrow metalanguage, in much the same way that we model computational effects in the λ-calculus using a monadic metalanguage. A consequence of this careful treatment of information is that we effectively capture the gap between reversible computation and irreversible computation using a type-and-effect system.
The treatment of information effects has a parallel with open and closed systems in physics. Closed physical systems conserve mass and energy and are the basic unit of study in physics. Open systems interact with their environment, possibly exchanging matter or energy. These interactions may be thought of as effects that modify the conservation properties of the system. Computations with information effects are much like open systems and they can be converted into pure computations by making explicit the surrounding information environment that they interact with.
Finally, we show how conventional irreversible computation such as the λ-calculus can be embedded into this model, such that the embedding makes the implicit information effects of the λ-calculus explicit.
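The gap between reversible and irreversible primitives described in the abstract above can be made concrete with a small illustration (a hedged sketch of the standard textbook observation, not code from the thesis; the gate choices and the uniform-input assumption are ours): a bijective gate such as CNOT maps distinct inputs to distinct outputs and so preserves Shannon entropy, while a many-to-one gate such as AND discards information.

```python
from collections import Counter
from math import log2

def entropy(outcomes):
    """Shannon entropy (in bits) of a list of equally likely outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Uniform distribution over two input bits: 2.0 bits of entropy.
inputs = [(a, b) for a in (0, 1) for b in (0, 1)]

cnot = [(a, a ^ b) for a, b in inputs]  # reversible: a bijection on bit pairs
and_gate = [a & b for a, b in inputs]   # irreversible: many-to-one

print(entropy(inputs))    # 2.0 bits in
print(entropy(cnot))      # 2.0 bits out: the bijection preserves information
print(entropy(and_gate))  # ~0.81 bits out: the AND gate loses information
```

The bits lost by the AND gate are roughly the kind of implicit "information effect" that the thesis proposes to track explicitly with a type-and-effect system.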
Cope, James S. "Computational methods for the classification of plants." Thesis, Kingston University, 2014. http://eprints.kingston.ac.uk/28759/.
Blount, Steven Michael 1958. "Computational methods for stochastic epidemics." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288714.
Li, Zheng Ph D. Massachusetts Institute of Technology. "Computational Raman imaging and thermography." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130673.
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 185-201).
Thermography tools that perform accurate temperature measurements with nanoscale resolution are highly desired in modern society. Although researchers have put extensive effort into developing nanoscale thermography for more than three decades, and significant achievements have been made in this field, the mainstream thermography tools have not fully met the requirements of industry and academia. In this thesis, we present our home-built Raman microscope for Raman imaging and thermography. The performance of this instrument is enhanced by computational approaches. The body of the thesis is divided into three parts. First, the instrumentation of our setup is introduced. Second, we present the results of Raman imaging with computational super-resolution techniques. Third, this instrument is used as a thermography tool to map the temperature profile of a nanowire device. These results provide insights into combining advanced instrumentation and computational methods in Raman imaging and Raman thermography for applications in modern nanotechnology.
by Zheng Li.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Materials Science and Engineering
Lavallée-Adam, Mathieu. "Protein-protein interaction confidence assessment and network clustering computational analysis." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121237.
Protein-protein interactions are an essential source of information for understanding the various biological mechanisms of the cell. However, the high-throughput experiments that identify these interactions, such as affinity purification, produce a very large number of false positives. Computational methods are therefore required to extract high-quality protein-protein interactions from these datasets. Yet even when filtered, these datasets form networks that are very complex to analyze. Such protein-protein interaction networks are large and highly complex, and they require sophisticated computational approaches to extract information of real biological significance. The objective of this thesis is to explore algorithms that assess the quality of protein-protein interactions and facilitate the analysis of the networks they form. This research is divided into four main results: 1) a novel Bayesian approach for modeling contaminants arising from affinity purification, 2) a new method for discovering protein-protein interactions within different compartments of the cell and assessing their quality, 3) an algorithm that detects statistically significant clusters of proteins sharing the same functional annotation in a protein-protein interaction network, and 4) a computational tool for discovering sequence motifs in 5' untranslated regions while assessing the clustering of these motifs in protein-protein interaction networks.
Gupta, Gaurav. "Computational material science of carbon-carbon composites based on carbonaceous mesophase matrices." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83865.
Qiu, Kanjun. "Developing a computational textiles curriculum to increase diversity in computer science." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85222.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 97-98).
The current culture surrounding computer science is quite narrow, resulting in a participating population that lacks diversity in both gender and interests. The field of computational textiles has shown promise as a domain for diversifying computer science culture by drawing a population with broader, less traditional interests and backgrounds into creating technology; however, little effort has been made to build resources and communities around computational textiles. This thesis presents a curriculum that teaches computer science and computer programming through a series of lessons for building and programming computational textile projects, along with systematic considerations that support the real-world implementation of such a curriculum. In 2011-12, we conducted three workshops to evaluate the impact of our curriculum methods and projects on students' technological self-efficacy. Based on data obtained from these workshops, we conclude that working with our curriculum's structured computational textile projects both draws a gender-diverse population and increases students' comfort with, enjoyment of, and interest in working with electronics and programming. Accordingly, we are transforming the curriculum into a published book in order to provide educational resources to support the development of a computer science culture around computational textiles.
by Kanjun Qiu.
M. Eng.
Alabdulkareem, Ahmad. "Analyzing cities' complex socioeconomic networks using computational science and machine learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119325.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 133-141).
By 2050, it is expected that 66% of the world population will be living in cities. The urban growth explosion in recent decades has raised many questions concerning the evolutionary advantages of urbanism, with several theories delving into the multitude of benefits of such efficient systems. This thesis focuses on one important aspect of cities: their social dimension, and in particular, the social aspect of their complex socioeconomic fabric (e.g. labor markets and social networks). Economic inequality is one of the greatest challenges facing society today, in tandem with the imminent impact of automation, which can exacerbate this issue. The social dimension plays a significant role in both, with many hypothesizing that social skills will be the last bastion of differentiation between humans and machines, and thus, jobs will become mostly dominated by social skills. Using data-driven tools from network science, machine learning, and computational science, the first question I aim to answer is the following: what role do social skills play in today's labor markets on both a micro and macro scale (e.g. individuals and cities)? Second, how could the effects of automation lead to various labor dynamics, and what role would social skills play in combating those effects? Specifically, what is the relation of social skills to career mobility? Answers to these questions would inform strategies to mitigate the negative effects of automation and off-shoring on employment. Third, given the importance of the social dimension in cities, what theoretical model can explain such results, and what are its consequences? Finally, given the vulnerabilities for invading individuals' privacy, as demonstrated in previous chapters, how does highlighting those results affect people's interest in privacy preservation, and what are some possible solutions to combat this issue?
by Ahmad Alabdulkareem.
Ph. D. in Computational Science & Engineering
Ristad, Eric Sven. "Computational Structure of Human Language." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/7038.
Chin, Toshio M. "Dynamic estimation in computational vision." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13072.
Includes bibliographical references (leaves 213-220).
by Toshio Michael Chin.
Ph.D.
Baggett, David McAdams. "A system for computational phonology." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36535.
Includes bibliographical references (p. 127-129).
by David McAdams Baggett.
M.S.
Sither, Matthew C. (Matthew Christian). "Adaptive consolidation of computational perspectives." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37098.
Includes bibliographical references (p. 81).
This thesis describes the design and implementation of machine learning algorithms and real-time recommendations within EWall, a software system used for individual and collaborative information management. In the EWall workspace, users collect and arrange cards, which are compact visual abstractions of information. A significant problem that often arises when humans try to collect information is information overload. Information overload refers to the state of having too much information, and it causes difficulty in discovering relevant information. When affected by information overload, the user loses focus and spends more time filtering out irrelevant information. This thesis first presents a simple solution that uses a set of algorithms that prioritize information. Based on the information the user is working with, the algorithms search for relevant information in a database by analyzing spatial, temporal, and collaborative relationships. A second, more adaptive solution uses agents that observe user behavior and learn to apply the prioritization algorithms more effectively. Adaptive agents help to prevent information overload by removing the burden of search and filter from the user, and they hasten the process of discovering interesting and relevant information.
by Matthew C. Sither.
M.Eng.
Bylinskii, Zoya. "Computational understanding of image memorability." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97256.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 77-82).
Previous studies have identified that images carry the attribute of memorability, a predictive value of whether a novel image will be later remembered or forgotten. In this thesis we investigate the interplay between intrinsic and extrinsic factors that affect image memorability. First, we find that intrinsic differences in memorability exist at a finer-grained scale than previously documented. Moreover, we demonstrate high consistency across participant populations and experiments. We show how these findings generalize to an applied visual modality - information visualizations. We separately find that intrinsic differences are already present shortly after encoding and remain apparent over time. Second, we consider two extrinsic factors: image context and observer behavior. We measure the effects of image context (the set of images from which the experimental sequence is sampled) on memorability. Building on prior findings that images that are distinct with respect to their context are better remembered, we propose an information-theoretic model of image distinctiveness. Our model can predict how changes in context change the memorability of natural images using automatically computed image features. Our results are presented on a large dataset of indoor and outdoor scene categories. We also measure the effects of observer behavior on memorability, on a trial-by-trial basis. Specifically, our proposed computational model can use an observer's eye movements on an image to predict whether or not the image will be later remembered. Apart from eye movements, we also show how two additional physiological measurements - pupil dilations and blink rates - can be predictive of image memorability, without the need for overt responses. Together, by considering both intrinsic and extrinsic effects on memorability, we arrive at a more complete model of image memorability than previously available.
by Zoya Bylinskii.
S.M.
Herzog, Jonathan 1975. "Computational soundness of formal adversaries." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87334.
Includes bibliographical references (p. 50-51).
by Jonathan Herzog.
S.M.
Banks, Eric 1976. "Computational approaches to gene finding." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81523.
Sealfon, Rachel (Rachel Sima). "Computational investigation of pathogen evolution." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99858.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-118).
Pathogen genomes, especially those of viruses, often change rapidly. Changes in pathogen genomes may have important functional implications, for example by altering adaptation to the host or conferring drug resistance. Accumulated genomic changes, many of which are functionally neutral, also serve as markers that can elucidate transmission dynamics or reveal how long a pathogen has been present in a given environment. Moreover, systematically probing portions of the pathogen genome that are changing more or less rapidly than expected can provide important clues about the function of these regions. In this thesis, I (1) examine changes in the Vibrio cholerae genome shortly after the introduction of the pathogen to Hispaniola to gain insight into genomic change and functional evolution during an epidemic. I then (2) use changes in the Lassa genome to estimate the time that the pathogen has been circulating in Nigeria and in Sierra Leone, and to pinpoint sites that have recurrent, independent mutations that may be markers for lineage-specific selection. I (3) develop a method to identify regions of overlapping function in viral genomes, and apply the approach to a wide range of viral genomes. Finally, I (4) use changes in the genome of Ebola virus to elucidate the virus' origin, evolution, and transmission dynamics at the start of the outbreak in Sierra Leone.
by Rachel Sealfon.
Ph. D.
Syed, Zeeshan Hassan 1980. "Computational methods for physiological data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54671.
Author is also affiliated with the MIT Dept. of Electrical Engineering and Computer Science. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-188).
Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care, by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, which are often considered to be noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high-risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics. Furthermore, these results were independent of information in echocardiography, clinical characteristics, and biomarkers.
Our symbolic analysis techniques also identified groups of patients exhibiting a varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
by Zeeshan Hassan Syed.
Ph.D.
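The macro-level idea in Syed's abstract above, abstracting a continuous waveform into a symbolic sequence, can be pictured with a generic quantization sketch (an illustration only; the amplitude thresholds and alphabet are made up, and this is not the thesis's actual symbolization method):

```python
def symbolize(signal, breakpoints=(-0.5, 0.5), alphabet="abc"):
    """Map each sample to a symbol according to the quantization bin it
    falls in. A generic discretization sketch with illustrative thresholds."""
    def symbol(x):
        for i, bp in enumerate(breakpoints):
            if x < bp:
                return alphabet[i]
        return alphabet[len(breakpoints)]  # sample above the last breakpoint
    return "".join(symbol(x) for x in signal)

# A toy "waveform": samples rising and then falling in amplitude.
print(symbolize([-1.0, -0.2, 0.1, 0.9, 0.3, -0.8]))  # -> "abbcba"
```

Once a signal is a string of symbols, efficient sequence algorithms (pattern counting, outlier detection across a patient population) become applicable, which is the benefit the abstract points to.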
Burke, Lauren. "Computer Science Education at The Claremont Colleges: The Building of an Intuition." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/scripps_theses/875.
Pirzadeh, Hormoz. "Computational Geometry with the Rotating Calipers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0027/MQ50856.pdf.
Full textMiri, Hossein. "CernoCAMAL : a probabilistic computational cognitive architecture." Thesis, University of Hull, 2012. http://hydra.hull.ac.uk/resources/hull:6887.