
Theses / dissertations on the topic "Microgravity Science and Applications Program"

Below are the 31 best works (theses / dissertations) on the subject "Microgravity Science and Applications Program".


1

Wendt, N. Rodney. "Applications of program understanding and rule-based quality assurance to Slam II simulation programs". Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6893.

Abstract:
With the advance of time, our inventory of simulation programs continues to accumulate. To maximize the return on our investment of time and money into these software systems, it is advantageous for us to reuse software components as much as possible. For example, previously engineered simulation models can often be reused and exercised under a new set of experimental conditions. Before a software component can be reused, the analyst must learn and understand its functionality. This learning process is often made unnecessarily difficult by incomplete documentation. Another contributing factor is the complexity brought about by interacting directly with the program code. Furthermore, when it comes time to make updates to the code, the potential arises for semantic and syntactic errors to work their way into the program. Knowledge-based program understanding systems with built-in quality assurance can be used as an environment for simplifying the learning and update processes, while ensuring an acceptable degree of quality has been maintained during the update process. This thesis discusses program understanding and quality assurance issues related to the Slam II programming language and discusses the architecture of E/Slam (Elucidation of Slam II programs). E/Slam is a knowledge-based program understanding system with built-in quality assurance ability.
2

Kang, Paul J. (Paul Ji Hwan) 1974. "A technical and economic analysis of structural composite use in automotive body-in-white applications". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/34697.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering; and, (S.M.)--Massachusetts Institute of Technology, Technology and Policy Program, 1998.
Science Library copy in pages.
Includes bibliographical references (leaves 163-170).
by Paul J. Kang.
S.M.
3

Du, Wei. "Advanced middleware support for distributed data-intensive applications". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126208308.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xix, 183 p.; also includes graphics (some col.). Includes bibliographical references (p. 170-183). Available online via OhioLINK's ETD Center.
4

Huang, Jin. "Detecting Server-Side Web Applications with Unrestricted File Upload Vulnerabilities". Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright163007760528389.

5

Ezeozue, Chidube Donald. "Large-scale consensus clustering and data ownership considerations for medical applications". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/86273.

Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2013.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-101).
An intersection of events has led to a massive increase in the amount of medical data being collected from patients inside and outside the hospital. These events include the development of new sensors, the continuous decrease in the cost of data storage, the development of Big Data algorithms in other domains and the Health Information Technology for Economic and Clinical Health (HITECH) Act's $20 billion incentive for hospitals to install and use Electronic Health Record (EHR) systems. The data being collected presents an excellent opportunity to improve patient care. However, this opportunity is not without its challenges. Some of the challenges are technical in nature, not the least of which is how to efficiently process such massive amounts of data. At the other end of the spectrum, there are policy questions that deal with data privacy, confidentiality and ownership to ensure that research continues unhindered while preserving the rights and interests of the stakeholders involved. This thesis addresses both ends of the challenge spectrum. First, we design and implement a number of methods for automatically discovering groups within large amounts of data, otherwise known as clustering. We believe this technique would prove particularly useful for identifying patient states, segregating patient cohorts, and generating hypotheses. Specifically, we scale a popular clustering algorithm, Expectation-Maximization (EM) for Gaussian Mixture Models, to run on a cloud of computers. We also devote considerable attention to the idea of Consensus Clustering, which allows multiple clusterings to be merged into a single ensemble clustering. Here, we scale one existing consensus clustering algorithm, which relies on EM for multinomial mixture models. We also develop and implement a more general framework for retrofitting any consensus clustering algorithm, making it amenable to streaming data as well as distribution on a cloud.
On the policy end of the spectrum, we argue that the issue of data ownership is essential and highlight how the law in the United States has handled this issue in the past several decades, focusing on common law and state law approaches. We proceed to identify the flaws, especially the fragmentation, in the current system and make recommendations for a more equitable and efficient policy stance. The recommendations center on codifying the policy stance in Federal Law and allocating the property rights of the data to both the healthcare provider and the patient.
by Chidube Donald Ezeozue.
S.M. in Technology and Policy
S.M.
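As a rough illustration of the EM-for-Gaussian-mixtures algorithm that this abstract scales to the cloud, here is a minimal single-machine sketch in one dimension (our own sketch: the function name, initialisation scheme and variance floor are assumptions, not the thesis's implementation):

```python
import math

def em_gmm_1d(data, k=2, iters=50):
    """Minimal EM for a 1-D Gaussian mixture model."""
    lo, hi = min(data), max(data)
    mu = [lo + (hi - lo) * (j + 0.5) / k for j in range(k)]  # spread initial means over the data range
    var = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate mixture weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(data)
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)  # floor avoids collapse
    return w, mu, var
```

Both steps reduce to sums over data points, which is what makes EM amenable to the kind of cloud distribution the thesis describes: each machine computes partial sums over its shard and the sufficient statistics are combined centrally.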
6

Sanjeepan, Vivekananthan. "A service-oriented, scalable, secure framework for Grid-enabling legacy scientific applications". [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013276.

7

Ghafoor, Sheikh Khaled. "Integrating Algorithmic and Systemic Load Balancing Strategies in Parallel Applications". MSSTATE, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11112003-113055/.

Abstract:
Load imbalance is a major source of performance degradation in parallel scientific applications. Load balancing increases the efficient use of existing resources and improves the performance of parallel applications running in distributed environments. At a coarse level of granularity, advances in runtime systems for parallel programs have been proposed in order to control available resources as efficiently as possible by utilizing idle resources and using task migration. At a finer level of granularity, advances in algorithmic strategies for dynamically balancing computational loads by data redistribution have been proposed in order to respond to variations in processor performance during the execution of a given parallel application. Algorithmic and systemic load balancing strategies have complementary sets of advantages. An integration of these two techniques is possible, and it should result in a system that delivers advantages over each technique used in isolation. This thesis presents the design and implementation of a system that combines an algorithmic fine-grained data-parallel load balancing strategy called Fractiling with a systemic coarse-grained task-parallel load balancing system called Hector. It also reports experimental results of running N-body simulations under this integrated system. The experimental results indicate that a distributed runtime environment which combines both algorithmic and systemic load balancing strategies can provide performance advantages with little overhead, underscoring the importance of this approach for large, complex scientific applications.
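The systemic (coarse-grained, task-parallel) side of the combination can be caricatured with a shared task queue: idle workers pull the next unit of work, so faster workers automatically absorb more load. This is a toy single-process sketch under our own naming, not the Hector runtime or the Fractiling strategy themselves:

```python
import queue
import threading

def run_with_dynamic_balancing(tasks, n_workers=4):
    """Run callables from a shared queue: whenever a worker goes idle it
    pulls the next task, so no static assignment of tasks is needed."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return  # no work left: this worker retires
            r = t()
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

An algorithmic strategy such as Fractiling instead re-partitions the data inside each task in response to observed processor speeds; the thesis's point is that the two levels compose rather than conflict.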
8

Cherian, Mathew Sam. "A semantic data federation engine : design, implementation & applications in educational information management". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65501.

Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 87-90).
With the advent of the World Wide Web, the amount of digital information in the world has increased exponentially. The ability to organize this deluge of data, retrieve it, and combine it with other data would bring numerous benefits to organizations that rely on the analysis of this data for their operations. The Semantic Web encompasses various technologies that support better information organization and access. This thesis proposes a data federation engine that facilitates integration of data across distributed Semantic Web data sources while maintaining appropriate access policies. After discussing existing literature in the field, the design and implementation of the system including its capabilities and limitations are thoroughly described. Moreover, a possible application of the system at the Massachusetts Department of Education is explored in detail, including an investigation of the technical and nontechnical challenges associated with its adoption at a government agency. By using the federation engine, users would be able to exploit the expressivity of the Semantic Web by querying for disparate data at a single location without having to know how it is distributed or where it is stored. Among this research's contributions to the fledgling Semantic Web are: an integrated system for executing SPARQL queries; and an optimizer that facilitates efficient querying by exploiting statistical information about the data sources.
by Mathew Sam Cherian.
S.M.
S.M. in Technology and Policy
9

Minh, Hyunsik Eugene. "Communication options for protection and control device in Smart Grid applications". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82401.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; in conjunction with the Leaders for Global Operations Program at MIT, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 74-75).
Increasing use of electricity, interest in renewable energy sources, and the need for a more reliable power grid are among the many drivers of Smart Grid technology. In order to achieve these goals, one of the critical elements is communication between systems or between the system and human beings. With the decreasing cost of various communication technologies, especially wireless devices and utilities, researchers are increasingly interested in implementing complex two-way communication infrastructures to enhance the quality of the grid. The protection and control relay at the distribution level is one of the key components in enhancing the efficiency, security and reliability of the power grid. At present, it may be premature to apply wireless devices to power electronics and to distribution automation, especially for protection and control relays at the distribution level. While fiber technology is still very attractive for protection and control applications in general, wireless technology can bring improvements in user experience applications in the future. The ABB medium voltage group needs to overcome challenges that arise from a conservative industry structure, the increasing complexity and cost of the product, and needs for higher reliability and security. However, with collaborative efforts among different product groups, the medium voltage group can successfully develop the next generation distribution feeder relay.
by Hyunsik Eugene Minh.
S.M.
M.B.A.
10

Pizarro, Oscar. "Large area underwater mosaicing for scientific applications". Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/91909.

Abstract:
Thesis (S.M.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 2003.
Thesis (S.M.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2003.
Includes bibliographical references (p. 73-79).
S.M.
11

Long, Wendy. "CATY : an ASN.1-C++ translator in support of distributed object-oriented applications /". Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-10242009-020105/.

12

Uquillas-Gomez, Verónica. "Supporting Integration Activities in Object-Oriented Applications". PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00835097.

Abstract:
More and more software is developed by teams of developers working collaboratively in parallel. Developers can modify a shared set of artifacts, and inspect and integrate code changes made by other developers. For example, bug fixes, improvements, or new features must be integrated into the final version of a software system, and this at different points in the development cycle. At a technical level, the collaborative development process is supported by version control tools (e.g., git, SVN). These tools allow developers to create their own development branches, making the merging and integration of branches an integral part of the development process. Version control systems use merge algorithms to help developers merge the changes from their branch into the common code base. However, these techniques work at a lexical level, and they do not guarantee that the resulting system is functional.
While the use of branches offers many advantages, merging and integrating the changes from one branch into another is difficult because of the lack of support for helping developers understand a change and its impact. For example, integrating a change can sometimes have an unexpected effect on the system and its behavior, leading to subtle bugs. Moreover, developers receive no assistance when assessing the impact of a change, or when selecting changes to integrate from one branch into another (cherry picking), especially when those branches have diverged. In this dissertation, we present an approach that aims to solve these problems for developers, and integrators in particular.
The approach relies on semi-automated tools and techniques that support the understanding of changes within a branch or across branches. We focus on satisfying the information needs of integrators when they must understand and integrate changes. To that end, we characterize changes and/or sequences of changes and their dependencies. These characterizations are based on a first-class representation of the system's history and of the changes made, in terms of program entities (e.g., classes and methods) and their relationships, rather than the files and text that version control tools manipulate. We propose a family of meta-models (Ring, RingH, RingS and RingC) that represent the entities of the system, its history, the changes made in the various branches, and their dependencies. Instances of these meta-models are then used by our tools for assisting integrators: Torch, a visual tool that characterizes changes, and JET, a set of tools for navigating sequences of changes. Keywords: object-oriented programming; meta-models; program history and versions; program visualization; semantic merging; program analysis.
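The abstract's observation that lexical merging does not guarantee a working system is easy to demonstrate with a toy line-wise three-way merge (our own naive sketch, not the algorithm used by git or SVN): one branch renames a function, the other adds a call under the old name, and the textually clean merge is semantically broken.

```python
# Base version and two branches, as lists of source lines.
base   = ["def f(x):", "    return x + 1", "", "print(f(1))"]
ours   = ["def g(x):", "    return x + 1", "", "print(g(1))"]                  # branch A: rename f -> g
theirs = ["def f(x):", "    return x + 1", "", "print(f(1))", "print(f(2))"]   # branch B: new call to f

def merge3(base, ours, theirs):
    """Naive line-wise three-way merge: keep whichever side changed a line."""
    out = []
    for i in range(max(len(base), len(ours), len(theirs))):
        b = base[i] if i < len(base) else None
        o = ours[i] if i < len(ours) else None
        t = theirs[i] if i < len(theirs) else None
        out.append(o if o != b else t)
    return [line for line in out if line is not None]

merged = merge3(base, ours, theirs)
# The merge succeeds textually, but the result still calls f(), which
# branch A renamed: executing the merged program raises NameError.
```

Tools like Torch and JET target exactly this gap: they reason over program entities (classes, methods) and their dependencies instead of lines of text.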
13

Ji, Katrina Yun. "ADAP: A component-based model using design patterns with applications in E-Commerce". CSUSB ScholarWorks, 2000. https://scholarworks.lib.csusb.edu/etd-project/1694.

14

Niu, Qingpeng. "Characterization and Enhancement of Data Locality and Load Balancing for Irregular Applications". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420811652.

15

Filippi, Margaux (Martin-Filippi). "Advancing the theory and applications of Lagrangian Coherent Structures methods for oceanic surface flows". Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122328.

Abstract:
Thesis: Sc. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2019.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 207-218).
Ocean surface transport is at the core of many environmental disasters, including the spread of marine plastic pollution, the Deepwater Horizon oil spill and the Fukushima nuclear contamination. Understanding and predicting flow transport, however, remains a scientific challenge, because it operates on multiple length- and time-scales that are set by the underlying dynamics. Building on the recent emergence of Lagrangian methods, this thesis investigates the present-day abilities to describe and understand the organization of flow transport at the ocean surface, including the abilities to detect the underlying key structures, the regions of stirring and regions of coherence within the flow. Over the past four years, the field of dynamical system theory has adapted several algorithms from unsupervised machine learning for the detection of Lagrangian Coherent Structures (LCS). The robustness and applicability of these tools is yet to be proven, especially for geophysical flows.
An updated, parameter-free spectral clustering approach is developed and a noise-based cluster coherence metric is proposed to evaluate the resulting clusters. The method is tested against benchmark flows of dynamical system theory: the quasi-periodic Bickley jet, the Duffing oscillator and a modified, asymmetric Duffing oscillator. The applicability of this newly developed spectral clustering method, along with several common LCS approaches, such as the Finite-Time Lyapunov Exponent, is tested in several field studies. The focus is on the ability to predict these LCS in submesoscale ocean surface flows, given all the uncertainties of the modeled and observed velocity fields, as well as the sparsity of Lagrangian data. This includes the design and execution of field experiments targeting LCS from predictive models and their subsequent Lagrangian analysis.
These experiments took place in Scott Reef, an atoll system in Western Australia, and off the coast of Martha's Vineyard, Massachusetts, two case studies with tidally-driven channel flows. The FTLE and spectral clustering analyses were particularly helpful in describing key transient flow features and how they were impacted by tidal forcing and vertical velocities. This could not have been identified from the Eulerian perspective, showing the utility of the Lagrangian approach in understanding the organization of transport.
by Margaux Filippi.
Sc. D.
Sc.D. Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution)
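For reference, the Finite-Time Lyapunov Exponent mentioned in this abstract has a standard textbook definition (stated here generically, not quoted from the thesis): for a flow map $F_{t_0}^{t_0+T}$ taking initial positions to their positions after time $T$,

```latex
\sigma_{t_0}^{T}(\mathbf{x})
  = \frac{1}{|T|}\,
    \ln \sqrt{\lambda_{\max}\!\left( C_{t_0}^{T}(\mathbf{x}) \right)},
\qquad
C_{t_0}^{T}(\mathbf{x})
  = \left( \nabla F_{t_0}^{t_0+T}(\mathbf{x}) \right)^{\top}
    \nabla F_{t_0}^{t_0+T}(\mathbf{x}),
```

where $C$ is the right Cauchy-Green deformation tensor; ridges of the $\sigma$ field are the candidate Lagrangian Coherent Structures that the field experiments target.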
16

Singh, Saurabh. "Characterizing applications by integrating and improving tools for data locality analysis and program performance". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492741656429829.

17

Jamrozik, Hervé. "Aide à la mise au point des applications parallèles et réparties à base d'objets persistants". Phd thesis, Grenoble 1, 1993. http://tel.archives-ouvertes.fr/tel-00005129.

Abstract:
The goal of this work is to support the debugging of parallel and distributed applications based on persistent objects, enabling cyclic debugging and offering observation of the execution at a high level of abstraction. The non-determinism of this kind of execution, and its sensitivity to any perturbation, make it very difficult to correct errors related to execution conditions. The limitations of static program analysis, and of dynamic approaches based on a current execution, lead us to advocate methods based on the replay of an execution, which address non-determinism by fixing an execution. Debugging then takes place in a particular context where the behavior of the execution to be corrected is already known and can be observed through views of the execution adapted to the particularities of the execution environment. In the context of object-based systems, we define a debugging system based on (control-driven) replay of an execution, enabling cyclic debugging and observation of the execution at the level of objects. We specify the replay service and the observation service, and propose a modular architecture for assembling the software components that implement these services. We then present the concrete application of these proposals to the Guide system. We built a replay kernel, structured as Guide objects, that automatically handles the recording and replay of a Guide execution.
18

Merry, Alexander. "Reasoning with !-graphs". Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:416c2e6d-2932-4220-8506-50e6b403b660.

Abstract:
The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories. String diagrams provide an intuitive method for reasoning about monoidal categories. However, this does not negate the ability for those using them to make mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project. The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called "spider notation". This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning. This dissertation firstly extends string graphs to allow for variable-arity nodes to be represented at all, and then introduces !-box notation – and structures to encode it – to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the "spider law" of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs. This work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs.
Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text.
19

Ramraj, Varun. "Exploiting whole-PDB analysis in novel bioinformatics applications". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6c59c813-2a4c-440c-940b-d334c02dd075.

Abstract:
The Protein Data Bank (PDB) is the definitive electronic repository for experimentally-derived protein structures, composed mainly of those determined by X-ray crystallography. Approximately 200 new structures are added weekly to the PDB, and at the time of writing, it contains approximately 97,000 structures. This represents an expanding wealth of high-quality information but there seem to be few bioinformatics tools that consider and analyse these data as an ensemble. This thesis explores the development of three efficient, fast algorithms and software implementations to study protein structure using the entire PDB. The first project is a crystal-form matching tool that takes a unit cell and quickly (< 1 second) retrieves the most related matches from the PDB. The unit cell matches are combined with sequence alignments using a novel Family Clustering Algorithm to display the results in a user-friendly way. The software tool, Nearest-cell, has been incorporated into the X-ray data collection pipeline at the Diamond Light Source, and is also available as a public web service. The bulk of the thesis is devoted to the study and prediction of protein disorder. Initially, trying to update and extend an existing predictor, RONN, the limitations of the method were exposed and a novel predictor (called MoreRONN) was developed that incorporates a novel sequence-based clustering approach to disorder data inferred from the PDB and DisProt. MoreRONN is now clearly the best-in-class disorder predictor and will soon be offered as a public web service. The third project explores the development of a clustering algorithm for protein structural fragments that can work on the scale of the whole PDB. While protein structures have long been clustered into loose families, there has to date been no comprehensive analytical clustering of short (~6 residue) fragments. 
A novel fragment clustering tool was built that is now leading to a public database of fragment families and representative structural fragments that should prove extremely helpful for both basic understanding and experimentation. Together, these three projects exemplify how cutting-edge computational approaches applied to extensive protein structure libraries can provide user-friendly tools that address critical everyday issues for structural biologists.
20

Pallow, Richard Brian. "Graduate Advisor System". CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2917.

Full text of the source
Abstract:
The purpose of this project is to update the architecture and design of the California State University San Bernardino Graduate Advisor System. This system allows prospective applicants to the Master of Science degree program in Computer Science to complete their applications online.
21

Ferrer, Esteban. "A high order Discontinuous Galerkin - Fourier incompressible 3D Navier-Stokes solver with rotating sliding meshes for simulating cross-flow turbines". Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:db8fe6e3-25d0-4f6a-be1b-6cde7832296d.

Full text of the source
Abstract:
This thesis details the development, verification and validation of an unsteady unstructured high order (≥ 3) h/p Discontinuous Galerkin - Fourier solver for the incompressible Navier-Stokes equations on static and rotating meshes in two and three dimensions. This general purpose solver is used to provide insight into cross-flow (wind or tidal) turbine physical phenomena. Simulation of this type of turbine for renewable energy generation needs to account for the rotational motion of the blades with respect to the fixed environment. This rotational motion implies azimuthal changes in blade aero/hydro-dynamics that result in complex flow phenomena such as stalled flows, vortex shedding and blade-vortex interactions. Simulation of these flow features necessitates the use of a high order code exhibiting low numerical errors. This thesis presents the development of such a high order solver, which has been conceived and implemented from scratch by the author during his doctoral work. To account for the relative mesh motion, the incompressible Navier-Stokes equations are written in arbitrary Lagrangian-Eulerian form and a non-conformal Discontinuous Galerkin (DG) formulation (i.e. Symmetric Interior Penalty Galerkin) is used for spatial discretisation. The DG method, together with a novel sliding mesh technique, allows direct linking of rotating and static meshes through the numerical fluxes. This technique shows spectral accuracy and no degradation of temporal convergence rates if rotational motion is applied to a region of the mesh. In addition, analytical mappings are introduced to account for curved external boundaries representing circular shapes and NACA foils. To simulate 3D flows, the 2D DG solver is parallelised and extended using Fourier series. This extension allows for laminar and turbulent regimes to be simulated through Direct Numerical Simulation and Large Eddy Simulation (LES) type approaches. Two LES methodologies are proposed. 
Various 2D and 3D cases are presented for laminar and turbulent regimes. Among others, solutions for: Stokes flows, the Taylor vortex problem, flows around square and circular cylinders, flows around static and rotating NACA foils and flows through rotating cross-flow turbines, are presented.
22

Paneth, Omer. "Foundations and applications of program obfuscation". Thesis, 2016. https://hdl.handle.net/2144/34412.

Full text of the source
Abstract:
Code is said to be obfuscated if it is intentionally difficult for humans to understand. Obfuscating a program conceals its sensitive implementation details and protects it from reverse engineering and hacking. Beyond software protection, obfuscation is also a powerful cryptographic tool, enabling a variety of advanced applications. Ideally, an obfuscated program would hide any information about the original program that cannot be obtained by simply executing it. However, Barak et al. [CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible. Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability obfuscation. In this thesis, we study the feasibility and applicability of secure obfuscation: - What notions of secure obfuscation are possible and under what assumptions? - How useful are weak notions like indistinguishability obfuscation? Our first result shows that the applications of indistinguishability obfuscation go well beyond cryptography. We study the tractability of computing a Nash equilibrium of a game, a central problem in algorithmic game theory and complexity theory. Based on indistinguishability obfuscation, we construct explicit games where a Nash equilibrium cannot be found efficiently. We also prove the following results on the feasibility of obfuscation. Our starting point is the Garg et al. obfuscator that is based on a new algebraic encoding scheme known as multilinear maps [Garg et al. EUROCRYPT 13]. 1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first rigorous security analysis for obfuscation. We give a variant of the Garg et al. obfuscator and reduce its security to that of the multilinear maps. Specifically, modeling the multilinear encodings as ideal boxes with perfect security, we prove ideal security for our obfuscator.
Our reduction shows that the obfuscator resists all generic attacks that only use the encodings' permitted interface and do not exploit their algebraic representation. 2. Going beyond generic attacks, we study the notion of virtual-gray-box obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger than indistinguishability obfuscation and has several important applications, such as obfuscating password-protected programs. We formulate a security requirement for multilinear maps which is sufficient, as well as necessary, for virtual-gray-box obfuscation. 3. Motivated by the question of basing obfuscation on ideal objects that are simpler than multilinear maps, we give a negative result showing that ideal obfuscation is impossible, even in the random oracle model, where the obfuscator is given access to an ideal random function. This is the first negative result for obfuscation in a non-trivial idealized model.
23

Ren, Xiaoxia. "Change impact analysis for Java programs and applications". 2007. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.16765.

Full text of the source
24

Li, Chang. "Garbage collection scheduling for Java applications". 2001. http://wwwlib.umi.com/cr/yorku/fullcit?pMQ67749.

Full text of the source
Abstract:
Thesis (M.Sc.)--York University, 2001. Graduate Programme in Computer Science.
Typescript. Includes bibliographical references (leaves 87-92). Also available on the Internet via web browser at: http://wwwlib.umi.com/cr/yorku/fullcit?pMQ67749.
25

Psallidas, Fotis. "Physical Plan Instrumentation in Databases: Mechanisms and Applications". Thesis, 2019. https://doi.org/10.7916/d8-vwcd-6050.

Full text of the source
Abstract:
Database management systems (DBMSs) are designed to compile SQL queries into physical plans that, when executed, produce the results of those queries. Building on this functionality, an ever-increasing number of application domains (e.g., provenance management, online query optimization, physical database design, interactive data profiling, monitoring, and interactive data visualization) seek to operate on how queries are executed by the DBMS, for a wide variety of purposes ranging from debugging and data explanation to optimization and monitoring. Unfortunately, DBMSs provide little, if any, support to facilitate the development of this class of important application domains. The effect is such that database application developers and database system architects either rewrite the database internals in ad-hoc ways; work around the SQL interface, if possible, with inevitable performance penalties; or even build new databases from scratch only to express and optimize their domain-specific application logic over how queries are executed. To address this problem in a principled manner, in this dissertation we introduce a prototype DBMS, namely, Smoke, that exposes instrumentation mechanisms in the form of a framework that allows external applications to manipulate physical plans. Intuitively, a physical plan is the underlying representation that DBMSs use to encode how a SQL query will be executed, and providing instrumentation mechanisms at this representation level allows applications to express and optimize their logic on how queries are executed. With such an instrumentation-enabled DBMS in place, we then consider how to express and optimize applications whose logic depends on how queries are executed.
To best demonstrate the expressive and optimization power of instrumentation-enabled DBMSs, we express and optimize applications across several important domains including provenance management, interactive data visualization, interactive data profiling, physical database design, online query optimization, and query discovery. Expressivity-wise, we show that Smoke can express known techniques, introduce novel semantics on known techniques, and introduce new techniques across domains. Performance-wise, we show case-by-case that Smoke is on par with or up to several orders of magnitude faster than state-of-the-art imperative and declarative implementations of important applications across domains. As such, we believe our contributions provide evidence and form the basis towards a class of instrumentation-enabled DBMSs with the goal set to express and optimize applications across important domains with core logic over how queries are executed by DBMSs.
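As a rough sketch of what plan-level instrumentation means (the operator names and hooks below are invented for illustration and are not Smoke's actual API), a physical plan in the iterator model can be wrapped so that an external application observes rows as they flow between operators:

```python
# Toy iterator-model physical plan (Scan -> Filter), with an instrumentation
# wrapper that records which rows an operator emits -- a minimal stand-in for
# the kind of plan-level hook an instrumentation-enabled DBMS could expose.
def scan(rows):
    yield from rows

def filter_op(child, predicate):
    for row in child:
        if predicate(row):
            yield row

def instrument(op_iter, log, op_name):
    # Interpose on an operator's output stream, recording lineage as a side
    # effect without modifying the operator itself.
    for row in op_iter:
        log.append((op_name, row))
        yield row

rows = [{"id": 1, "v": 5}, {"id": 2, "v": 50}, {"id": 3, "v": 7}]
lineage = []
plan = instrument(filter_op(scan(rows), lambda r: r["v"] < 10), lineage, "filter")
result = list(plan)
```

Here row-level provenance is collected from the filter operator without rewriting the database internals, which is the kind of capability the dissertation argues should be exposed at the physical-plan level.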
26

Athaiya, Snigdha. "Extending Program Analysis Techniques to Web Applications and Distributed Systems". Thesis, 2020. https://etd.iisc.ac.in/handle/2005/5523.

Full text of the source
Abstract:
Web-based applications and distributed systems are ubiquitous and indispensable today. These systems use multiple parallel machines for greater functionality, and efficient and reliable computation. At the same time they present innumerable challenges, especially in the field of program analysis. In this thesis, we address two problems in the domain of web-based applications and distributed systems relating to program analysis, and design effective solutions for those problems. The first challenge that the thesis addresses is the difficulty of analyzing a web application in an end-to-end manner using a single tool. Such an analysis is hard due to client-server interaction, user interaction, and the use of multiple types of languages and frameworks in a web application. We propose a semantics-preserving modeling technique that converts a web application into a single-language program. The model of a web application in the thesis is a Java program, as we present our modeling technique in the context of Java-based web applications. As a result of the translation, off-the-shelf tools available for Java can now be used to analyze the application. We have built a tool for the translation of applications. We evaluate our translation tool by converting 5 real-world web applications into corresponding models, and then analyzing the models using 3 popular third-party program analysis tools - Wala (static slicing), Java PathFinder (explicit-state and symbolic model checking), and Zoltar (dynamic fault localization). In all the analysis tools, we get precise results for most cases. The second challenge that the thesis addresses is the precise data flow analysis of message passing asynchronous systems. Message passing systems are distributed systems, where multiple processes execute concurrently, and communicate with each other by passing messages to the channels associated with each process.
These systems encompass the majority of real-world distributed systems, e.g., web applications, event-driven programs, reactive systems, etc. Therefore, there is a clear need for robust program analysis techniques for these systems. One such technique is data flow analysis, which statically analyzes a program and approximates the values of variables in the program due to all runs of the program, using lattices. Any precise data flow analysis needs to account for the blocking of execution in message passing systems when the required message is not present in the channel. Current data flow analysis techniques for message passing systems either over-approximate the behavior by allowing non-blocking receive operations, or they are not applicable to general data flow lattices. The thesis proposes algorithms for performing precise data flow analysis of message passing asynchronous programs, using infinite but finite-height lattices. The problem was not known to be decidable before. The algorithm builds on the concepts of parallel systems modeling theory in a novel and involved manner. We have also built a tool for the algorithm, and have studied its precision and performance by analyzing 10 well-known asynchronous systems and protocols, with encouraging results.
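The blocking-receive semantics that a precise analysis must account for can be illustrated with a toy message-passing program (a thread-and-queue stand-in for asynchronous processes, not the thesis's formal model):

```python
import queue
import threading

# Toy message-passing system: the consumer process blocks on its channel
# until the producer's message arrives. An analysis that over-approximates
# with non-blocking receives would admit spurious runs in which the consumer
# proceeds before the message is present.
channel = queue.Queue()
results = []

def consumer():
    msg = channel.get()      # blocks until a message is in the channel
    results.append(msg * 2)

t = threading.Thread(target=consumer)
t.start()
channel.put(21)              # the producer's send unblocks the consumer
t.join()
```

Because the receive blocks, the value observed by the consumer is deterministic here; modelling exactly this synchronization is what distinguishes a precise data flow analysis from an over-approximating one.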
27

Zhong, Peilin. "New Primitives for Tackling Graph Problems and Their Applications in Parallel Computing". Thesis, 2021. https://doi.org/10.7916/d8-pnyz-ck91.

Full text of the source
Abstract:
We study fundamental graph problems under parallel computing models. In particular, we consider two parallel computing models: Parallel Random Access Machine (PRAM) and Massively Parallel Computation (MPC). The PRAM model is a classic model of parallel computation. The efficiency of a PRAM algorithm is measured by its parallel time and the number of processors needed to achieve the parallel time. The MPC model is an abstraction of modern massive parallel computing systems such as MapReduce, Hadoop and Spark. The MPC model captures well coarse-grained computation on large data --- data is distributed to processors, each of which has a sublinear (in the input data) amount of local memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate an amount of data as large as the size of its memory. We usually desire fully scalable MPC algorithms, i.e., algorithms that can work for any local memory size. The efficiency of a fully scalable MPC algorithm is measured by its parallel time and the total space usage (the local memory size times the number of machines). Consider an n-vertex m-edge undirected graph G (either weighted or unweighted) with diameter D (the largest diameter of its connected components). Let N = m + n denote the size of G. We present a series of efficient (randomized) parallel graph algorithms with theoretical guarantees. Several results are listed as follows: 1) Fully scalable MPC algorithms for graph connectivity and spanning forest using O(N) total space and O(log D · loglog_{N/n} n) parallel time. 2) Fully scalable MPC algorithms for 2-edge and 2-vertex connectivity using O(N) total space, where the 2-edge connectivity algorithm needs O(log D · loglog_{N/n} n) parallel time, and the 2-vertex connectivity algorithm needs O(log D · log² log_{N/n} n + log D' · loglog_{N/n} n) parallel time. Here D' denotes the bi-diameter of G.
3) PRAM algorithms for graph connectivity and spanning forest using O(N) processors and O(log D · loglog_{N/n} n) parallel time. 4) PRAM algorithms for (1 + ε)-approximate shortest path and (1 + ε)-approximate uncapacitated minimum cost flow using O(N) processors and poly(log n) parallel time. These algorithms are built on a series of new graph algorithmic primitives which may be of independent interest.
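For reference, the connectivity and spanning-forest problem that these parallel algorithms target can be fixed with a simple sequential union-find sketch (the thesis's contribution is the MPC/PRAM versions; this snippet only pins down the problem definition):

```python
# Sequential union-find computing the number of connected components and a
# spanning forest of an undirected graph on vertices 0..n-1. The parallel
# algorithms in the thesis solve this same problem in O(log D · loglog n)
# parallel time; this sketch is only the sequential baseline definition.
def connected_components(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))          # edge merges two components
    return len({find(x) for x in range(n)}), forest

# e.g. 5 vertices with components {0, 1, 2} and {3, 4}
count, forest = connected_components(5, [(0, 1), (1, 2), (3, 4)])
```

A spanning forest always has n minus (number of components) edges, which the parallel algorithms must also produce, only distributed across machines.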
28

Côté, Hugo. "Programmes de branchement catalytiques : algorithmes et applications". Thèse, 2018. http://hdl.handle.net/1866/22123.

Full text of the source
29

Rengasamy, Vasudevan. "A Runtime Framework for Regular and Irregular Message-Driven Parallel Applications on GPU Systems". Thesis, 2014. http://etd.iisc.ac.in/handle/2005/3193.

Full text of the source
Abstract:
The effective use of GPUs for accelerating applications depends on a number of factors, including effective asynchronous use of heterogeneous resources, reducing data transfer between CPU and GPU, increasing occupancy of GPU kernels, overlapping data transfers with computations, reducing GPU idling, and kernel optimizations. Overcoming these challenges requires considerable effort on the part of the application developers. Optimization strategies are often proposed and tuned specifically for individual applications. Message-driven executions with over-decomposition of tasks constitute an important model for parallel programming and provide multiple benefits, including communication-computation overlap and reduced idling on resources. Charm++ is one such message-driven language, which employs over-decomposition of tasks, computation-communication overlap, and a measurement-based load balancer to achieve high CPU utilization. This research has developed an adaptive runtime framework for efficient executions of Charm++ message-driven parallel applications on GPU systems. In the first part of our research, we have developed a runtime framework, G-Charm, with the focus primarily on optimizing regular applications. At runtime, G-Charm automatically combines multiple small GPU tasks into a single larger kernel, which reduces the number of kernel invocations while improving CUDA occupancy. G-Charm also enables reuse of existing data in GPU global memory, performs GPU memory management, and dynamically schedules tasks across CPU and GPU in order to reduce idle time. In order to combine the partial results obtained from the computations performed on the CPU and GPU, G-Charm allows the user to specify an operator with which the partial results are combined at runtime. We also perform compile-time code generation to reduce programming overhead. For Cholesky factorization, a regular parallel application, G-Charm provides a 14% improvement over a highly tuned implementation.
In the second part of our research, we extended our runtime to overcome the challenges presented by irregular applications, such as periodic generation of tasks, irregular memory access patterns, and varying workloads during application execution. We developed models for deciding the number of tasks that can be combined into a kernel based on the rate of task generation and the GPU occupancy of the tasks. For irregular applications, data reuse results in uncoalesced GPU memory access. We evaluated the effect of altering the global memory access pattern in improving coalesced access. We have also developed adaptive methods for hybrid execution on CPU and GPU wherein we consider the varying workloads while scheduling tasks across the CPU and GPU. We demonstrate that our dynamic strategies result in an 8-38% reduction in execution times for an N-body simulation application and a molecular dynamics application over the corresponding static strategies that are amenable for regular applications.
30

Rengasamy, Vasudevan. "A Runtime Framework for Regular and Irregular Message-Driven Parallel Applications on GPU Systems". Thesis, 2014. http://hdl.handle.net/2005/3193.

Full text of the source
Abstract:
The effective use of GPUs for accelerating applications depends on a number of factors, including effective asynchronous use of heterogeneous resources, reducing data transfer between CPU and GPU, increasing occupancy of GPU kernels, overlapping data transfers with computations, reducing GPU idling, and kernel optimizations. Overcoming these challenges requires considerable effort on the part of the application developers. Optimization strategies are often proposed and tuned specifically for individual applications. Message-driven executions with over-decomposition of tasks constitute an important model for parallel programming and provide multiple benefits, including communication-computation overlap and reduced idling on resources. Charm++ is one such message-driven language, which employs over-decomposition of tasks, computation-communication overlap, and a measurement-based load balancer to achieve high CPU utilization. This research has developed an adaptive runtime framework for efficient executions of Charm++ message-driven parallel applications on GPU systems. In the first part of our research, we have developed a runtime framework, G-Charm, with the focus primarily on optimizing regular applications. At runtime, G-Charm automatically combines multiple small GPU tasks into a single larger kernel, which reduces the number of kernel invocations while improving CUDA occupancy. G-Charm also enables reuse of existing data in GPU global memory, performs GPU memory management, and dynamically schedules tasks across CPU and GPU in order to reduce idle time. In order to combine the partial results obtained from the computations performed on the CPU and GPU, G-Charm allows the user to specify an operator with which the partial results are combined at runtime. We also perform compile-time code generation to reduce programming overhead. For Cholesky factorization, a regular parallel application, G-Charm provides a 14% improvement over a highly tuned implementation.
In the second part of our research, we extended our runtime to overcome the challenges presented by irregular applications, such as periodic generation of tasks, irregular memory access patterns, and varying workloads during application execution. We developed models for deciding the number of tasks that can be combined into a kernel based on the rate of task generation and the GPU occupancy of the tasks. For irregular applications, data reuse results in uncoalesced GPU memory access. We evaluated the effect of altering the global memory access pattern in improving coalesced access. We have also developed adaptive methods for hybrid execution on CPU and GPU wherein we consider the varying workloads while scheduling tasks across the CPU and GPU. We demonstrate that our dynamic strategies result in an 8-38% reduction in execution times for an N-body simulation application and a molecular dynamics application over the corresponding static strategies that are amenable for regular applications.
31

Arora, Himanshu. "Checking Observational Purity of Procedures". Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5298.

Full text of the source
Abstract:
We provide two static analysis approaches (using theorem proving) that check if a given (recursive) procedure behaves as if it were stateless, even when it maintains state in global variables. In other words, we check if the given procedure behaves like a mathematical function. In order to eliminate the need for manual annotations, we make use of an invariant that uses uninterpreted function symbols. This invariant captures the set of reachable global states in all runs of the procedure, if the procedure is observationally pure. If the procedure is not observationally pure, this invariant has no semantics. Allowing function symbols makes it easier to generate the invariant automatically. The two static analyses are an existential checker and an impurity witness checker. The impurity witness checker outputs a formula whose unsatisfiability implies that the procedure is observationally pure, whereas the existential checker outputs a formula that constrains the definition of the function that the given procedure may implement. Satisfiability of the formula generated by the existential checker implies that the given procedure is observationally pure. The impurity witness approach works better (empirically) with SMT solvers, whereas the existential approach is more precise on paper. We illustrate our work on examples such as matrix chain multiplication. Examples such as these are not addressable by related techniques in the literature. The closest work to ours is by Barnett et al.; this work cannot handle procedures with self-recursion. We prove both our approaches to be sound. We have implemented the two static analyses using the Boogie framework and the Z3 SMT solver, and have evaluated our implementation on a number of examples.
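A minimal, hypothetical example (not taken from the thesis) of the property being checked: a procedure that maintains state in a global variable yet is observationally pure, because every call with the same argument returns the same value.

```python
# Hypothetical illustration of observational purity: fib caches results in a
# global table, so it is stateful internally, yet callers cannot distinguish
# it from a stateless implementation -- each call with the same argument
# always returns the same value. The reachable global states are exactly
# partial graphs of the mathematical function fib computes, which is the
# shape of invariant the checkers described above rely on.
_cache = {}

def fib(n):
    if n in _cache:
        return _cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _cache[n] = result
    return result
```

A procedure that, say, returned the current size of the cache would not be observationally pure: its result would depend on the call history, and the impurity witness checker would produce a satisfiable (i.e., failing) formula for it.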