
Dissertations / Theses on the topic 'Composite applications (Computer science)'

Consult the top 50 dissertations / theses for your research on the topic 'Composite applications (Computer science).'

1

Cyr, Pierre. "Development of a computer application for optimization of composite material structures." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0030/MQ64214.pdf.

2

Paydavosi, Sarah. "Study of organic molecules and nano-particle/polymer composites for flash memory and switch applications." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/75644.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 205-218).
Organic materials exhibit fascinating optical and electronic properties which motivate their hybridization with traditional silicon-based electronics in order to achieve novel functionalities and address the scaling challenges of these devices. The application of organic molecules and nano-particle/polymer composites for flash memory and switch applications is studied in this dissertation. Facilitating data storage on individual small molecules, as we approach the limits of miniaturization for ultra-high-density and low-power-consumption media, may enable an orders-of-magnitude increase in data storage capabilities. A floating gate consisting of a thin film of molecules would provide the advantage of a uniform set of identical nano-structured charge storage elements with high molecular area densities, which can result in a several-fold higher density of charge-storage sites compared to quantum dot (QD) memory and even SONOS devices. Additionally, the discrete charge storage in such nano-segmented floating gate designs limits the impact of any tunnel oxide defect to the charge stored in the proximity of the defect site. The charge retention properties of molecular films were investigated in this dissertation by injecting charges via a biased conductive atomic force microscopy (AFM) tip into molecules comprising the thin films. The Kelvin force microscopy (KFM) results revealed minimal changes in the spatial extent of the charge trapping over time after initial injection. Fabricated memory capacitors show device durability over 10⁵ program/erase cycles and a hysteresis window of up to 12.8 V, corresponding to stored charge densities as high as 5.4×10¹³ cm⁻², suggesting the potential use of organic molecules in high-storage-capacity memory cells. Also, these results demonstrate that the charge storage properties of the molecular trapping layer can be engineered by rearranging molecules and their π-orbital overlaps via the addition of dopant molecules.
Finally, the design, fabrication, testing and evaluation of a MEMS switch that employs viscoelastic organic polymers doped with nano-particles as the active material is presented in this dissertation. The conductivity of the nano-composite changes 10,000-fold as it is mechanically compressed. In this demonstration the compressive squeeze is applied with electric actuation. Since squeezing initiates the switching behavior, the device is referred to as a "squitch". The squitch is essentially a new type of FET that is compatible with large-area processing by printing or photolithography on rigid or flexible substrates, and can exhibit a large on-to-off conduction ratio.
by Sarah Paydavosi.
Ph.D.
3

Gibson, Jason. "Nano-Particles in Multi-Scale Composites and Ballistic Applications." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5745.

Abstract:
Carbon nanotubes, graphene and nano-sized core-shell rubber particles have all been extensively researched for their capability to improve the mechanical properties of thermoset resins. However, there has been a lack of research on their evaluation for energy absorption in high-velocity impact scenarios, and on the fundamental mechanics of their failure mechanisms during highly dynamic stress transfer through the matrix. This fundamental research is essential for laying the foundation for improvement in the ballistic performance of composite armor. In hard armor applications, energy absorption is largely accomplished through delamination between plies of the composite laminate. This energy absorption occurs through two mechanisms: the first is the elongation of the fiber reinforcement contained in the resin matrix, and the second is the propagation of the crack between the discrete fabric plies. This research aims to fundamentally study the energy absorption characteristics of various nano-particles as reinforcements in thermoset resin for high-velocity impact applications. Multiple morphologies are evaluated through the use of platelet, tubular and spherical nano-particles. The effect on stress transfer through the matrix of combining nano-sized particles with micro-scale milled-fiber particles is evaluated. Three different nano-particles are utilized: multi-walled carbon nanotubes, graphene, and core-shell rubber particles. The difference in surface area, aspect ratio and molecular structure between the tube, platelet and spherical nano-particles causes energy absorption through different failure mechanisms. This changes the impact performance of composite panels enhanced with the nano-particle fillers.
Composite panels, made by dispersing the various nano-particles with a non-contact planetary mixer, are evaluated through various dynamic and static tests, including unnotched cantilever beam impact, mixed-mode fracture toughness, split-Hopkinson bar, and ballistic V50 testing. The unnotched cantilever beam testing showed that the addition of milled fiber degraded the impact resistance of the samples. Addition of graphene nano-platelets uniformly degraded impact resistance in the unnotched cantilever beam testing. A 1.5% loading of MWCNT showed the greatest increase in impact resistance, with a 43% increase over baseline. Determining the critical load for mixed-mode interlaminar shear testing can be difficult for composite panels that bend without breaking. An iterative technique of optimizing the coefficient of determination, R², in linear regression is developed for objectively determining the point of non-linearity for the critical load. This allows a mathematical method of determination, thereby eliminating any subjective decision about where the data becomes non-linear. The core-shell rubber nano-particles showed the greatest strain energy release rate, with an exponential improvement over the baseline results. Synergistic effects between nano- and micro-sized particles in the resin matrix during transfer of the stress wave were created and evaluated. Loadings of 1% milled carbon fiber enhanced the V50 ballistic performance of both carbon nanotube and core-shell rubber particles in the resin matrix. However, the addition of milled carbon fiber degrades the impact resistance of all nano-particle-enhanced resin matrices. Therefore, benefits gained from the addition of micro-sized particles in combination with nano-sized particles are only seen in high-energy impact scenarios with microsecond durations.
Loadings of 1% core-shell rubber particles and 1% milled carbon fiber give an improvement of 8% in V50 ballistic performance over the baseline epoxy sample for .44 Mag single wad cutter gas check projectiles. Loadings of 1% multi-walled carbon nanotubes with 1% milled carbon fiber give an improvement of 7.3% in V50 ballistic performance over the baseline epoxy sample. The failure mechanism of the various nano-particle-enhanced resin matrices during the ballistic event is discussed with the aid of scanning electron microscope images and Raman spectroscopy of the panels after failure. The Raman spectroscopy data show a Raman shift for the fibers that had an enhancement in V50 performance through the use of nano-particles. The Raman band for Kevlar centered at 1,649 cm⁻¹, stemming from the stretching of the C=O bond of the fiber, is shown to be more sensitive to the residual axial strain, while the Raman band centered at 1,611 cm⁻¹, stemming from the C-C phenyl ring, is minimally affected for the CSR-enhanced panels due to the failure mechanism of the CSR particles during crack propagation.
Ph.D.
Doctorate
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering
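The iterative R² technique for locating the point of non-linearity described in the abstract lends itself to a short sketch. This is one plausible reading, not the thesis's exact procedure: fit a least-squares line to a growing prefix of the load-displacement data, track the coefficient of determination, and take the longest prefix whose R² stays within a tolerance of the best fit. The function names, `min_points`, and `tol` are illustrative assumptions.

```python
import numpy as np

def r2_of_line(x, y):
    # Least-squares line fit; R^2 = 1 - SS_res / SS_tot.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nonlinearity_index(x, y, min_points=10, tol=1e-4):
    # R^2 of the linear fit to each prefix x[:k], y[:k].
    n = len(x)
    scores = [r2_of_line(x[:k], y[:k]) for k in range(min_points, n + 1)]
    best = max(scores)
    # The longest prefix still within `tol` of the best fit marks
    # the last sample before the response turns non-linear.
    for i in range(len(scores) - 1, -1, -1):
        if scores[i] >= best - tol:
            return min_points + i - 1  # index of the last "linear" sample
```

On a load curve that is linear up to a known break, the returned index lands just past that break, giving an objective critical-load estimate rather than an eyeballed one.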
4

Gärdin, Marcus. "Characterization of Graphene-Based Anisotropic Conducting Adhesives : A study regarding x-ray sensing applications." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-257844.

Abstract:
A common method of cancer treatment is radiation therapy. In radiation therapy, a treatment planning system is used to specify the dose of X-rays needed to eradicate the tumor. To assure the right X-ray dosage, quality assurance is performed using a phantom containing radiation sensors. The sensors are made of semiconductor materials with heavy-metal-based contacts. Irradiating heavy elements with a high-intensity beam such as X-rays causes secondary scattering of electrons, resulting in an additional photocurrent which may distort the signal used in the quality analysis. By exchanging the heavy-metal contact material for a lighter one, such as a carbon-based material, secondary scattering is prevented and the error in the quality analysis can be minimized. In this thesis, contacts between radiation diodes and a copper substrate, made by flip-chip bonding with a reduced graphene oxide-based anisotropic conducting adhesive, are characterized. The parameters of the connections are characterized with respect to electrical, thermal and mechanical properties. The novel contact material is analysed by comparing different types of graphene-based anisotropic fillers with a commercial metal-based filler. The results obtained indicate that it is possible to exchange the metal-based fillers in an anisotropic conducting adhesive for reduced graphene oxide-coated polymer spheres as a contacting material for radiation sensing technology.
5

Gensel, Jérôme. "Contraintes et représentation de connaissances par objets : application au modèle Tropes." Phd thesis, Université Joseph Fourier (Grenoble), 1995. http://tel.archives-ouvertes.fr/tel-00005046.

Abstract:
This work shows that introducing constraints into an object-based knowledge representation model increases both its expressiveness (constraints are declarative statements of relations between attributes) and its inference capabilities (constraint maintenance and resolution take charge of the consistency and completion of knowledge bases). The repercussions of the presence of constraints within such a system are also studied. Constrained attributes are designated using the notion of access, which extends the classical notion of path to handle multivalued attributes (whose value is a set or a list of values). The representation levels considered (concept, class, instance) define the scope of a constraint as the set of objects on which it must be posed, and thereby induce inheritance of this descriptive feature between them. Likewise, maintaining a certain degree of local consistency over attribute domains requires internal management of their types. With respect to the model's inference mechanisms (instantiation, classification, procedures, etc.), an adapted behaviour of constraints is established that does not call the semantics of these mechanisms into question. These integration principles were validated in the Tropes object-based knowledge model. We implemented a constraint programming module, called Micro, which is semi-weakly coupled to Tropes. Micro meets many requirements of object-based knowledge representation by handling the maintenance and resolution of dynamic Constraint Satisfaction Problems (CSPs) defined over numeric, boolean, or multivalued variables whose domains are finite or infinite. The integration achieved also allows the knowledge representation system itself to use the expressive and computational power of constraints.
Thus, the presence of constraints makes it possible to extend and control the semantics of various advanced notions such as composite objects, tasks, relations, and filters.
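The CSP maintenance and resolution attributed to Micro can be illustrated with the classic arc-consistency propagation at the heart of most finite-domain constraint solvers. The following is a minimal AC-3-style sketch, assuming a hypothetical encoding of binary constraints as predicates; it is generic CSP machinery, not Micro's actual code.

```python
from collections import deque

def ac3(domains, constraints):
    """Prune values that have no support.

    domains: {var: set of values}
    constraints: {(x, y): predicate}, predicate(vx, vy) True when compatible.
    Returns False if some domain becomes empty (inconsistency detected).
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Values of x with no compatible value left in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # Shrinking x's domain may remove support for its neighbours.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return all(domains.values())
```

For example, with domains {a: {1,2,3}, b: {1,2,3}} and the constraint a < b posted in both directions, propagation leaves a ∈ {1,2} and b ∈ {2,3}.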
6

Abi, Lahoud Elie. "Composition dynamique de services : application à la conception et au développement de systèmes d'information dans un environnement distribué." Phd thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00560489.

Abstract:
Service orientation plays an increasingly important role in structuring complex systems. Application design and development are gradually evolving from a traditional model toward a more dynamic, service-oriented model in which reuse and adaptability play an important role. In this thesis, we propose a study of application design and development by service composition. We describe a service-sharing environment, DyCoSe: a cooperative ecosystem in which member enterprises, organised into communities, share a global consensus representing recurring business functionality and common non-functional properties. Application composition in DyCoSe rests on a three-level architecture combining a top-down approach with a bottom-up one. The top-down approach describes the application as an interaction of high-level components and refines it into one or more service orchestrations. The bottom-up approach projects the characteristics of the underlying network infrastructure up to the service level. An instantiation process aimed at realising a composite application is detailed. It formalises the choice of services, under a given set of constraints, as a cost optimisation problem. Two solutions to the instantiation problem are studied: a global solution takes into account all the services available in the ecosystem, while a local solution favours the services of certain communities. A genetic algorithm is described as an implementation of global instantiation. A stochastic simulation of the DyCoSe environment is proposed. It makes it possible to study the instantiation possibilities of a given application in an environment where service availability is not guaranteed, as well as the success rate of executing a given application instance.
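The genetic algorithm mentioned for global instantiation can be sketched as a search over provider assignments that minimises total cost. The toy version below makes simplifying assumptions (one fixed cost per task/provider pair, no availability or QoS constraints, at least two tasks); all names and parameters are illustrative, not the thesis's implementation.

```python
import random

def evolve(costs, pop_size=40, generations=60, p_mut=0.1, seed=0):
    # costs[t][p]: cost of delegating task t to candidate service p.
    rng = random.Random(seed)
    n_tasks = len(costs)
    total = lambda g: sum(costs[t][g[t]] for t in range(n_tasks))
    # A genome assigns one provider index to each task.
    pop = [[rng.randrange(len(costs[t])) for t in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=total)[:]]                 # elitism: keep best plan
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 2), key=total)    # binary tournament
            p2 = min(rng.sample(pop, 2), key=total)
            cut = rng.randrange(1, n_tasks)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            for t in range(n_tasks):                   # pointwise mutation
                if rng.random() < p_mut:
                    child[t] = rng.randrange(len(costs[t]))
            nxt.append(child)
        pop = nxt
    return min(pop, key=total)
```

Fitness here is pure cost; DyCoSe's actual instantiation would also fold in availability and community preferences, which would enter as penalty terms in the objective.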
7

Mayer, Anthony Edward. "Composite construction of high performance scientific applications." Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252520.

8

Bota, Horatiu S. "Composite web search." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/38925/.

Abstract:
The figure above shows Google’s results page for the query “taylor swift”, captured in March 2016. Assembled around the long-established list of search results is content extracted from various sources: news items and tweets merged within the results ranking; images, songs and social media profiles displayed to the right of the ranking, in an interface element that is known as an entity card. Indeed, the entire page seems more like an assembly of content extracted from various sources, rather than just a ranked list of blue links. Search engine result pages have become increasingly diverse over the past few years, with most commercial web search providers responding to user queries with different types of results, merged within a unified page. The primary reason for this diversity on the results page is that the web itself has become more diverse, given the ease with which creating and hosting different types of content on the web is possible today. This thesis investigates the aggregation of web search results retrieved from various document sources (e.g., images, tweets, Wiki pages) within information “objects” to be integrated in the results page assembled in response to user queries. We use the terms “composite objects” or “composite results” to refer to such objects, and throughout this thesis use the terminology of Composite Web Search (e.g., result composition) to distinguish our approach from other methods of aggregating diverse content within a unified results page (e.g., Aggregated Search). In our definition, the aspects that differentiate composite information objects from aggregated search blocks are that composite objects (i) contain results from multiple sources of information, (ii) are specific to a common topic or facet of a topic rather than a grouping of results of the same type, and (iii) are not a uniform ranking of results ordered only by their topical relevance to a query.
The most widely used type of composite result in web search today is the entity card. Entity cards have become extremely popular over the past few years, with some informal studies suggesting that entity cards are now shown on the majority of result pages generated by Google. As composite results are used more and more by commercial search engines to address information needs directly on the results page, understanding the properties of such objects and their influence on searchers is an essential aspect of modern web search science. The work presented throughout this thesis attempts the task of studying composite objects by exploring users’ perspectives on accessing and aggregating diverse content manually, by analysing the effect composite objects have on search behaviour and perceived workload, and by investigating different approaches to constructing such objects from diverse results. Overall, our experimental findings suggest that items which play a central role within composite objects are decisive in determining their usefulness, and that the overall properties of composite objects (i.e., relevance, diversity and coherence) play a combined role in mediating object usefulness.
9

Wei, Peiran. "PREPARATION AND APPLICATIONS OF STIMULI-RESPONSIVE COMPOSITE MATERIALS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1565317654535383.

10

Hou, Dezhi. "COMPREHENSIVE EVALUATION COMPOSITE GENE FEATURES IN CANCER OUTCOME PREDICTION." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1386952765.

11

John, Sheline Anna. "Runtime verification of composite web services." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

12

Lalani, Nisar. "Validation of Internet Applications." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-449.

Abstract:

Today, applications for the Internet (web sites and other applications) are verified using proprietary test solutions: an application is developed, and then another application is developed to test the first one. The Test Competence Centre at Ericsson AB has expertise in testing telecom applications using the TTCN-2 and TTCN-3 notations. These notations have a lot of potential and are used for testing in various areas. So far, little work has been done on using TTCN notations for testing Internet applications. This thesis was a step through which the capabilities of the TTCN notations in web testing could be determined.

This thesis presents the results of an investigation of three test technologies/tools (TTCN-2, TTCN-3, and a proprietary free tool, PureTest) to determine which is best suited for testing Internet applications and what drawbacks and benefits each technology has.

The background topics included are a brief introduction to software testing and web testing, a short introduction to the TTCN language and its versions 2 and 3, a description of the tool set representing the chosen technologies, a conceptual view of how the tools work, a short description of the HTTP protocol, and a description of the HTTP adapter (test port).

Several benefits and drawbacks were found in all three technologies, but it can be said that, at the moment, a proprietary test solution (PureTest in this case) is still the best tool for testing Internet applications. It scores over the other two technologies (TTCN-2 and TTCN-3) for reasons such as flexibility, cost effectiveness, user friendliness, and short lead times for competence development. TTCN-3 is more of a programming language and is certainly more flexible than TTCN-2. TTCN-3 is still evolving and holds promise; some features vital for testing Internet applications are missing, but it is better than TTCN-2.

13

Solovey, Edward 1979. "Simulation of composite I/O automata." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/18033.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (p. 107-108).
The IOA simulator is a tool that has been developed in the Theory of Distributed Systems group at MIT. This tool simulates the execution of automata described by the IOA language. It generates logs of execution traces and provides other pertinent information regarding the execution, such as the validity of specified invariants. Although the simulator supports paired simulation of two automata for the purpose of checking simulation relations, one of its limitations is its lack of support for the simulation of composite automata. A composite automaton represents a complex system and is made up of other automata, each representing a system component. This thesis concerns the addition of a capability to simulate composite automata in a manner that allows observing and debugging the individual system component automata. While there is work in progress on creating a tool that will translate a composite definition into a single automaton, the added ability to simulate composite automata directly will add modularity and simplicity, as well as ease of observing the behavior of individual components for the purpose of distributed debugging.
by Edward Solovey.
M.Eng.
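The composition the thesis simulates follows the standard I/O automata rule: components synchronize on shared actions and interleave on the rest. Below is a minimal sketch of that rule for two finite automata; the dict encoding is an illustrative assumption, not the IOA toolkit's representation.

```python
def compose(a, b):
    # Automaton encoding: {'start': state, 'actions': set of action names,
    #                      'delta': {(state, action): next_state}}
    shared = a['actions'] & b['actions']

    def step(state, action):
        sa, sb = state
        if action in shared:                      # both components must move
            na = a['delta'].get((sa, action))
            nb = b['delta'].get((sb, action))
            return None if na is None or nb is None else (na, nb)
        if action in a['actions']:                # only a moves
            na = a['delta'].get((sa, action))
            return None if na is None else (na, sb)
        nb = b['delta'].get((sb, action))         # only b moves
        return None if nb is None else (sa, nb)

    return {'start': (a['start'], b['start']),
            'actions': a['actions'] | b['actions'],
            'step': step}
```

Because the composite keeps a tuple of component states, each component remains individually observable during a run, which is exactly the property the thesis needs for debugging system components.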
14

Lowman, Tim. "Secure Computer Applications in an Enterprise Environment." NCSU, 1999. http://www.lib.ncsu.edu/theses/available/etd-19990401-134848.

Abstract:

Sophisticated computing environments support many of the complex tasks which arise in modern enterprises. An enterprise environment is a collective of the organization's software, hardware, networking, and data systems. Typically, many user workstations communicate with shared servers, balancing computer processing throughout the organization. In a "secure" modern enterprise, issues of authentication, private communication, and protected, shared data space must be addressed. In this thesis we present a general model for adding security to the currently popular enterprise architecture: the World Wide Web (WWW).

The results of our investigation into adding security to the general WWW architecture are reported in this document. We focus on authenticating users (Kerberos), establishing a secure communication link for private data exchange (SSL), protected space to store shared data (the AFS filesystem), and an enhanced server (Apache) to integrate these components. After presenting our secure model, we describe a prototype application, built using our approach, which addresses a common problem of secure online submission of homework assignments in a university environment.

15

Dingley, Sharon. "A composite framework for the strategic alignment of information systems development." Thesis, Aston University, 1996. http://publications.aston.ac.uk/10595/.

Abstract:
Information systems are corporate resources, therefore information systems development must be aligned with corporate strategy. This thesis proposes that effective strategic alignment of information systems requires information systems development, information systems planning and strategic management to be united. Literature in these areas is examined, breaching the academic boundaries which separate these areas, to contribute a synthesised approach to the strategic alignment of information systems development. Previous work in information systems planning has extended information systems development techniques, such as data modelling, into strategic planning activities, neglecting techniques of strategic management. Examination of strategic management in this thesis, identifies parallel trends in strategic management and information systems development; the premises of the learning school of strategic management are similar to those of soft systems approaches to information systems development. It is therefore proposed that strategic management can be supported by a soft systems approach. Strategic management tools and techniques frame individual views of a strategic situation; soft systems approaches can integrate these diverse views to explore the internal and external environments of an organisation. The information derived from strategic analysis justifies the need for an information system and provides a starting point for information systems development. This is demonstrated by a composite framework which enables each information system to be justified according to its direct contribution to corporate strategy. The proposed framework was developed through action research conducted in a number of organisations of varying types. This suggests that the framework can be widely used to support the strategic alignment of information systems development, thereby contributing to organisational success.
16

Lo, Felix Tun-Han. "A model of composite objects for information mesh." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/41012.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 87-88).
by Felix Tun-Han Lo.
M.Eng.
17

Liao, I.-En. "Fuzzy time and its applications /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487676847117812.

18

Gelbart, Michael Adam. "Constrained Bayesian Optimization and Applications." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467236.

Abstract:
Bayesian optimization is an approach for globally optimizing black-box functions that are expensive to evaluate, non-convex, and possibly noisy. Recently, Bayesian optimization has been used with great effectiveness for applications like tuning the hyperparameters of machine learning algorithms and automatic A/B testing for websites. This thesis considers Bayesian optimization in the presence of black-box constraints. Prior work on constrained Bayesian optimization consists of a variety of methods that can be used with some efficacy in specific contexts. Here, by forming a connection with multi-task Bayesian optimization, we formulate a more general class of constrained Bayesian optimization problems that we call Bayesian optimization with decoupled constraints. In this general framework, the objective and constraint functions are divided into tasks that can be evaluated independently of each other, and resources with which these tasks can be performed. We then present two methods for solving problems in this general class. The first method, an extension to a constrained variant of expected improvement, is fast and straightforward to implement but performs poorly in some circumstances and is not sufficiently flexible to address all varieties of decoupled problems. The second method, Predictive Entropy Search with Constraints (PESC), is highly effective and sufficiently flexible to address all problems in the general class of decoupled problems without any ad hoc modifications. The two weaknesses of PESC are its implementation difficulty and slow execution time. We address these issues by, respectively, providing a publicly available implementation within the popular Bayesian optimization software Spearmint, and developing an extension to PESC that achieves greater speed without significant performance losses. We demonstrate the effectiveness of these methods on real-world machine learning meta-optimization problems.
Biophysics
APA, Harvard, Vancouver, ISO, and other styles
19

Boland, Ralph Patrick. "Polygon visibility decompositions with applications." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6244.

Full text
Abstract:
Many problems in Computational Geometry involve a simple polygon P and a family of geometric objects, say sigma, contained in P. For example, if sigma is the family of chords of P then we may want to find the longest chord in P. Alternatively, given a chord of P, we may wish to determine the areas of the two subpolygons of P determined by the chord. Let pi be a polygonal decomposition of a polygon P. We call pi a visibility decomposition with respect to sigma if, for any object g ∈ sigma, we can cover g with o(|P|) of the subpolygons of pi. We investigate the application of visibility decompositions of polygons to solving problems of the forms described. Any visibility decomposition pi of a polygon P that we construct will have the property that, for some class of polygons ℘ where the polygons in ℘ have useful properties, pi ⊆ ℘. Furthermore, the properties of ℘ will be key to solving any problems we solve on P using pi. Some of the visibility decomposition classes we investigate are already known in the literature, for example weakly edge visible polygon decompositions. We make improvements relating to these decomposition classes and in some cases we also find new applications for them. We also develop several new polygon visibility decomposition classes. We then use these decomposition classes to solve a number of problems on polygons including the circular ray shooting problem and the largest axis-aligned rectangle problem. It is noteworthy that the solutions to problems that we provide are usually more efficient and always simpler than alternative solutions.
APA, Harvard, Vancouver, ISO, and other styles
20

Long, Xiang. "Numerical study on reinforcement mechanism of copper/carbon nanotubes composite." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5409.

Full text
Abstract:
Because of their high stiffness, carbon nanotubes (CNTs) are considered one of the most widely used reinforcement materials in metal matrix composites. In this thesis, finite element (FE) models were built in Ls-Dyna3D to simulate Copper/CNT composite deformation and fracture, and to explore CNT reinforcement mechanisms. Several possible mechanisms were discussed. Deformation and failure of Cu/CNT composites were studied numerically using unit cell FE models, which consist of both the metal matrix and the CNTs. The simulation results have been verified against existing experimental data reported by Chen's group. The matrix material was modeled as an elasto-plastic ductile solid. The CNT material properties were taken from literature results using molecular dynamics simulation. FE simulations showed that CNT deformation exceeds the material's elastic limit, which means that CNT plasticity should be taken into account as well. 2D unit cell models were developed using axially symmetric elements with suitable boundary conditions. Several mechanisms were found to affect the predicted CNT reinforcement. The first one is the boundary condition imposed in the models. The CNTs significantly affect the plastic flow of copper during plastic deformation, which is one important reinforcement mechanism. The second reinforcement mechanism is the hardening zone of the Cu matrix around the CNTs, which is introduced by the mismatch in coefficients of thermal expansion (CTE). A round of parametric studies was performed to investigate the effects of several modeling parameters in the FE simulations; these parameters include the volume fraction of CNTs, the aspect ratio of CNTs, the size of the hardening zone, and the residual plastic strain in the zone. A tool combining Matlab and Ls-Dyna was developed to automatically build 2D unit cell models and post-process the simulation results. With suitable parameters, the 2D unit cell model predicts the experimental results from Chen's group well.
It should be noted that the interface between Cu and CNTs was assumed to be perfect in the FE simulations, since no CNT debonding was observed in the experiments. Also, a 3D unit cell model using tetrahedral elements (with element counts up to one million) was tentatively developed to obtain more accurate results. The purpose was to explore the interface properties of Cu/CNTs, the effect of the CNT orientation distribution, and a further reinforcement mechanism arising from geometrically necessary dislocations (GND), since the Cu matrix is divided into nanoscale regions by the CNTs. The 3D unit cell models are also used to verify the 2D one, which is a simplified and efficient approach. Very interesting results were observed in this part of the study. Further work is needed to overcome the difficulties in 3D modeling and the limitations of current CPU speed.
ID: 031001326; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Adviser: Yuanli Bai.; Title from PDF title page (viewed April 8, 2013).; Thesis (M.S.M.E.)--University of Central Florida, 2012.; Includes bibliographical references (p. 59-66).
M.S.M.E.
Masters
Mechanical and Aerospace Engineering
Engineering and Computer Science
Mechanical Engineering; Thermofluids
APA, Harvard, Vancouver, ISO, and other styles
21

Ba, Shan. "Multi-layer designs and composite gaussian process models with engineering applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44751.

Full text
Abstract:
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
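The composite Gaussian process in the second chapter layers a local-detail process on top of a smooth global trend. One way to sketch the idea is a sum of two squared-exponential kernels with different length-scales (an illustrative approximation only; the thesis's actual model also incorporates a flexible variance component):

```python
import numpy as np

def rbf(x1, x2, lengthscale, variance):
    """Squared-exponential (RBF) kernel matrix for 1-D inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def composite_kernel(x1, x2):
    # Long length-scale: smooth global trend; short length-scale with
    # smaller variance: local detail superimposed on the trend.
    return (rbf(x1, x2, lengthscale=2.0, variance=1.0)
            + rbf(x1, x2, lengthscale=0.2, variance=0.1))
```

A GP with this composite kernel can follow a slowly varying surface globally while still reacting to sharp local features, which a single stationary kernel struggles to do.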
APA, Harvard, Vancouver, ISO, and other styles
22

Armacost, Andrew P. (Andrew Paul). "Composite variable formulations for express shipment service network design." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/28229.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 181-187).
In this thesis, we consider large-scale network design problems, specifically the problem of designing the air network of an express shipment (i.e., overnight) delivery operation. We focus on simultaneously determining the route structure, the assignment of fleet types to routes, and the flow of packages on aircraft. Traditional formulations for network design involve modeling both flow decisions and design decisions explicitly. The bounds provided by their linear programming relaxations are often weak. Common solution strategies strengthen the bounds by adding cuts, but the sheer size of the express shipment problem results in models that are intractable. To overcome this shortcoming, we introduce a new modeling approach that 1) removes the flow variables as explicit decisions and embeds them within the design variables and 2) combines the design variables into composite variables, which represent the selection of multiple aircraft routes that cover the demands for some subset of commodities. The resulting composite variable formulation provides tighter bounds and enables very good solutions to be found quickly. We apply this type of formulation to the express shipment operations of the United Parcel Service (UPS). Compared with existing plans, the model produces a solution that reduces the number of required aircraft by almost 11 percent and total annual cost by almost 25 percent. This translates to potential annual savings in the hundreds of millions of dollars. We establish the composite variable formulation to be at least as strong as the traditional network design formulation, even when the latter is strengthened by Chvátal-Gomory rounding, and we demonstrate cases when strength is strictly improved.
We also place the composite variable formulation in a more general setting by presenting it as a Dantzig-Wolfe decomposition of the traditional (intractable) network design formulation and by comparing composite variables to Chvátal-Gomory cuts in the dual of a related formulation. Finally, we present a composite variable formulation for the Pure Fixed Charge Transportation Problem to highlight the potential application of this approach to general network design and fixed-charge problems.
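At its core, selecting composite variables means choosing a minimum-cost set of route bundles that together cover all commodity demands, a set-covering structure. A toy brute-force sketch with hypothetical composites (real instances are solved as large integer programs, not by enumeration):

```python
from itertools import combinations

# Each composite bundles aircraft routes; it covers a set of commodities
# at a total cost. Data here is purely illustrative.
composites = {
    "A": ({"pkg1", "pkg2"}, 5.0),
    "B": ({"pkg2", "pkg3"}, 4.0),
    "C": ({"pkg1", "pkg3"}, 4.5),
    "D": ({"pkg1", "pkg2", "pkg3"}, 8.0),
}

def cheapest_cover(composites, demand):
    """Brute-force the min-cost selection of composites covering all demand."""
    best = (float("inf"), None)
    names = list(composites)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(composites[n][0] for n in combo))
            cost = sum(composites[n][1] for n in combo)
            if demand <= covered and cost < best[0]:
                best = (cost, combo)
    return best
```

Because each composite already encodes a feasible bundle of routes and flows, the LP relaxation over composites is tighter than one over individual route and flow variables, which is the source of the formulation's strength.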
by Andrew P. Armacost.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
23

Wong, Toon King. "Data connectivity for the composite information system/tool kit." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/61054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lao, Beyer Lukas C. "Multi-modal motion planning using composite pose graph optimization." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130697.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 30-31).
This work presents a motion planning framework for multi-modal vehicle dynamics. An approach for transcribing cost function, vehicle dynamics, and state and control constraints into a sparse factor graph is introduced. By formulating the motion planning problem in pose graph form, the motion planning problem can be addressed using efficient optimization techniques, similar to those already widely applied in dual estimation problems, e.g., pose graph optimization for simultaneous localization and mapping (SLAM). Optimization of trajectories for vehicles under various dynamics models is demonstrated. The motion planner is able to optimize the location of mode transitions, and is guided by the pose graph optimization process to eliminate unnecessary mode transitions, enabling efficient discovery of optimized mode sequences from rough initial guesses. This functionality is demonstrated by using our planner to optimize multi-modal trajectories for vehicles such as an airplane which can both taxi on the ground or fly. Extensive experiments validate the use of the proposed motion planning framework in both simulation and real-life flight experiments using a vertical take-off and landing (VTOL) fixed-wing aircraft that can transition between hover and horizontal flight modes.
by Lukas C. Lao Beyer.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
25

Wahab, Matthew Randall. "Multi-resolution motion textures and applications." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101660.

Full text
Abstract:
Vector fields arise from experiments and simulations in many scientific and engineering disciplines. The visualization of this data is important to understanding the inherent nature of the field-generating process as well as providing an intuitive notion of the system's behavior. This thesis addresses some of the limitations in the flow visualization technique of Langer et al. [19]. One drawback of their method is that it uses a static uniform resolution grid. It cannot add more resolution in highly varying areas of the field without increasing the resolution for the entire grid, which comes at a substantial cost. We extend the method to use an adaptive multiresolution grid. This allows the local grid resolution to be dictated by the motion field, leading to both increased accuracy in the visualization as well as a lower computational overhead. Another limitation is that it is restricted to two dimensional grids. We extend the method to curvilinear surface grids. This allows our technique to address a wider range of problems in computer graphics and scientific visualization. Finally, we illustrate the flexibility of the new method by using our motion textures to perform texture-space bump mapping. Our method is demonstrated on various two dimensional curvilinear surfaces.
APA, Harvard, Vancouver, ISO, and other styles
26

Seth, Ankush. "Isoluminant color picking and its applications." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82423.

Full text
Abstract:
Color and luminance play important roles within the field of visual arts. The reason for this lies in the complexity of the human visual perception system. Our brains experience a perceptual tension when processing isoluminant fields (i.e. fields of equal luminance value) because the luminance and color processing pathways perceive the two fields differently. Skilled artists exploit this fact to great effect. This thesis looks at how isoluminance can be applied to the field of Non-Photorealistic Rendering. Specifically, this thesis makes a novel contribution to NPR by emphasizing the importance of isoluminance. It shows how isoluminant color picking can be used to improve existing NPR image filters, and to create new ones. It presents a geometric technique for isoluminant color picking and then applies it in a pointillist filter, a new Chuck Close inspired filter, and a unique image mosaic filter.
APA, Harvard, Vancouver, ISO, and other styles
27

Wu, Sun. "Approximate pattern matching and its applications." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/185914.

Full text
Abstract:
In this thesis, we study approximate pattern matching problems. Our study is based on the Levenshtein distance model, where the errors considered are insertions, deletions, and substitutions. In general, given a text string, a pattern, and an integer k, we want to find substrings in the text that match the pattern with no more than k errors. The pattern can be a fixed string, a limited expression, or a regular expression. The problem has different variations with different levels of difficulty depending on the type of the pattern as well as the constraints imposed on the matching. We present new results both of theoretical interest and practical value. We present a new algorithm for approximate regular expression matching, which is the first to achieve a subquadratic asymptotic time for this problem. On the practical side, we present new algorithms for approximate pattern matching that are very efficient and flexible. Based on these algorithms, we developed a new software tool called 'agrep', which is the first general purpose approximate pattern matching tool in the UNIX system. 'agrep' is not only usually faster than the UNIX 'grep/egrep/fgrep' family, it also provides many new features such as searching with errors allowed, record-oriented search, AND/OR combined patterns, and mixed exact/approximate matching. 'agrep' has been made publicly available through anonymous ftp from cs.arizona.edu since June 1991.
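The core of approximate matching under this model can be sketched with the classic dynamic program that bit-parallel algorithms like agrep's accelerate: keep the top row at zero so a match may start at any text position, and report positions where the final row is at most k.

```python
def approx_find(text, pattern, k):
    """Return end positions j in text such that some substring ending at j
    matches pattern with at most k Levenshtein errors."""
    m = len(pattern)
    prev = list(range(m + 1))  # column for the empty text prefix: D[i][0] = i
    hits = []
    for j, tc in enumerate(text, 1):
        curr = [0]             # D[0][j] = 0: a match may start anywhere
        for i, pc in enumerate(pattern, 1):
            cost = 0 if pc == tc else 1
            curr.append(min(prev[i] + 1,          # text char unmatched
                            curr[i - 1] + 1,      # pattern char unmatched
                            prev[i - 1] + cost))  # substitute or match
        if curr[m] <= k:
            hits.append(j)
        prev = curr
    return hits
```

This runs in O(mn) time; the thesis's contribution includes much faster bit-parallel and filtering variants, but they compute the same answer as this reference version.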
APA, Harvard, Vancouver, ISO, and other styles
28

Homer, Patrick Thomas. "Constructing scientific applications from heterogeneous resources." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186907.

Full text
Abstract:
The computer simulation of scientific processes is playing an increasingly important role in scientific research. For example, the development of adequate flight simulation environments, numeric wind tunnels, and numeric propulsion systems is reducing the danger and expense involved in prototyping new aircraft and engine designs. One serious problem that hinders the full realization of the potential of scientific simulation is the lack of tools and techniques for dealing with the heterogeneity inherent in today's computational resources and applications. Typically, either ad hoc connection techniques, such as manual file transfer between machines, or approximation techniques, such as boundary value equations, are employed. This dissertation develops a programming model in which scientific applications are designed as heterogeneous distributed programs, or meta-computations. The central feature of the model is an interconnection system that handles the transfer of control and data among the heterogeneous components of the meta-computation, and provides configuration tools to assist the user in starting and controlling the distributed computation. Key benefits of this programming model include the ability to simulate the interactions among the physical processes being modeled through the free exchange of data between computational components. Another benefit is the possibility of improved user interaction with the meta-computation, allowing the monitoring of intermediate results during long simulations and the ability to steer the simulation, either directly by the user or through the incorporation of an expert system into the meta-computation. This dissertation describes a specific realization of this model in the Schooner interconnection system, and its use in the construction of a number of scientific meta-computations. 
Schooner uses a type specification language and an application-level remote procedure call mechanism to ease the task of the scientific programmer in building meta-computations. It also provides static and dynamic configuration management features that support the creation of meta-computations from components at runtime, and their modification during execution. Meta-computations constructed using Schooner include examples involving molecular dynamics and neural nets. Schooner is also in use in several major projects as part of a NASA effort to develop improved jet engine simulations.
APA, Harvard, Vancouver, ISO, and other styles
29

Markines, Benjamin C. "Socially induced semantic networks and applications." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3358934.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Feb. 10, 2010). Source: Dissertation Abstracts International, Volume: 70-05, Section: B, page: 3003. Adviser: Filippo Menczer.
APA, Harvard, Vancouver, ISO, and other styles
30

Shen, Yelong. "Social Network Mining and Its Applications." Kent State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=kent1426778906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Chambers, Brent Victor. "The synthesis and characterization of model interface couples for inorganic matrix composite applications." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/32135.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 1994.
Includes bibliographical references (leaves 157-164).
by Brent Victor Chambers.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
32

Masi, Barbara Ann. "Fabrication methods and costs for thermoset and thermoplastic composite processing for aerospace applications." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/72739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Mei, Yuan Ph D. Massachusetts Institute of Technology. "ZStream : a cost-based query processor for composite event detection." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44736.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 103-104).
Composite (or Complex) event processing (CEP) systems search sequences of incoming primitive events for occurrences of user-specified event patterns. Recently, they have been gaining more and more attention in a variety of areas due to their powerful and expressive query languages and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP relates events to each other. Examples include tracing a car's movement in a predefined area (where a car moves through a series of places), detecting anomalies in stock prices (where the rise and fall of the price of some stocks is monitored), detecting intrusion in network monitoring (where a specific sequence of malicious activities is detected) or catching break points in debugging systems (where a sequence of function calls is made). But even searching for a simple sequence pattern involving only equality constraints between its components is an NP-complete problem. Furthermore, simple sequentiality is not enough to express many real world patterns, which also involve conjunction (e.g., concurrent events), disjunction (e.g., a choice between two options) and negation, making the matching problem even more complex. In this thesis, we present a CEP system called ZStream to efficiently process such sequential patterns. Besides simple sequentiality, ZStream is also able to support other relations such as conjunction, disjunction, negation and Kleene Closure. ZStream uses a tree-based plan for both the logical and physical representation of query patterns. Using this tree-based infrastructure, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene Closure as variants of the join operator. A single pattern may have several equivalent physical tree plans, with different evaluation costs. Hence a cost model is proposed to estimate the computation cost of a plan.
(cont.) Experiments show that our cost model can capture the real evaluation cost of a query plan accurately. Based on this cost model and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adjust the order in which it detects patterns. In addition, we design a dynamic programming algorithm and propose equivalent transition rules to automatically search for an optimal query plan for a given pattern.
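The sequence operator at the heart of such systems can be sketched as a windowed join between buffered sub-pattern matches. A toy illustration for SEQ(A, B) with a hypothetical (timestamp, type) event format (ZStream's actual operators generalize this to full tree plans with a cost model choosing among them):

```python
from collections import deque

def seq_join(events, window):
    """Toy SEQ(A, B) operator: pair each B event with every earlier A
    event whose timestamp lies within `window`. Events are
    (timestamp, type) tuples, assumed to arrive in time order."""
    a_buffer = deque()
    matches = []
    for ts, etype in events:
        if etype == "A":
            a_buffer.append(ts)
        elif etype == "B":
            # Evict A events that have fallen out of the time window.
            while a_buffer and ts - a_buffer[0] > window:
                a_buffer.popleft()
            matches.extend((a_ts, ts) for a_ts in a_buffer)
    return matches
```

Which operand is buffered and which probes, and in what order multi-way patterns are joined, is exactly the kind of decision the thesis's cost model and plan search automate.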
by Yuan Mei.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Ellison, Charles McEwen III. "A formal semantics of C with applications." Thesis, University of Illinois at Urbana-Champaign, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3600342.

Full text
Abstract:

This dissertation shows that complex, real programming languages can be completely formalized in the [special characters omitted] Framework, yielding interpreters and analysis tools for testing and bug detection. This is demonstrated by providing, in [special characters omitted], the first complete formal semantics of the C programming language. With varying degrees of effort, tools such as interpreters, debuggers, and model-checkers, together with tools that check for memory safety, races, deadlocks, and undefined behavior are then generated from the semantics.

Being executable, the semantics has been thoroughly tested against the GCC torture test suite and successfully passes 99.2% of 776 test programs. The semantics is also evaluated against popular analysis tools, using a new test suite in addition to a third-party test suite. The semantics-based tool performs at least as well as, or better than, the other tools tested.

APA, Harvard, Vancouver, ISO, and other styles
35

Royes, Andrew. "Algorithms and applications for probabilistic relational models." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98786.

Full text
Abstract:
The vast majority of real-world data is stored using relational representations. Unfortunately, many machine learning techniques are unable to handle rich relational models. Probabilistic Relational Models (PRMs) are an extension of the Bayesian network frame work which allows relational structure to be fully exploited. They are a formalism based on relational logic for describing probabilistic models of structured data which also allow us to model uncertainty in the relationship between objects and the attributes of those objects. In this thesis we present an implementation of PRMs which allows defining conditional probability distributions over mixtures of discrete and continuous variables. This is an important new feature. We provide experimental results using our package on both synthetic and real data sets.
APA, Harvard, Vancouver, ISO, and other styles
36

Lekena, Mohato Karabo. "Designing mobile multi-touch drum sequencing applications." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/19977.

Full text
Abstract:
Digital music software can limit the forms of music we create by using interfaces that directly copy those of the analogue instruments that came before. In this study we report on a new multi-touch interface that affords a completely new form of drum sequencing. Based on ideas from avant-garde music and embodied interaction, a technology probe was created and then evaluated by a wide range of users. We found that for users with no musical training, and for users with a large amount of musical training, the software did allow them to be more creative. However, users with limited training on existing sequencing software found the new interface challenging.
APA, Harvard, Vancouver, ISO, and other styles
37

Eaglin, Todd. "Scalable, situationally aware visual analytics and applications." Thesis, The University of North Carolina at Charlotte, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10270103.

Full text
Abstract:

There is a need to understand large and complex datasets to provide better situational awareness in order to make timely, well-informed, actionable decisions in critical environments. These types of environments include emergency evacuations for large buildings, indoor routing for buildings in emergency situations, large-scale critical infrastructure for disaster planning and first responders, LiDAR analysis for coastal planning in disaster situations, and social media data for health related analysis. I introduce novel work and applications in real-time interactive visual analytics in these domains. I also detail techniques, systems and tools across a range of disciplines from GPU computing for real-time analysis to machine learning for interactive analysis on mobile and web-based platforms.

APA, Harvard, Vancouver, ISO, and other styles
38

Goudie, Robert J. B. "Bayesian structural inference with applications in social science." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/78778/.

Full text
Abstract:
Structural inference for Bayesian networks is useful in situations where the underlying relationship between the variables under study is not well understood. This is often the case in social science settings in which, whilst there are numerous theories about interdependence between factors, there is rarely a consensus view that would form a solid base upon which inference could be performed. However, there are now many social science datasets available with sample sizes large enough to allow a more exploratory structural approach, and this is the approach we investigate in this thesis. In the first part of the thesis, we apply Bayesian model selection to address a key question in empirical economics: why do some people take unnecessary risks with their lives? We investigate this question in the setting of road safety, and demonstrate that less satisfied individuals wear seatbelts less frequently. Bayesian model selection over restricted structures is a useful tool for exploratory analysis, but fuller structural inference is more appealing, especially when there is a considerable quantity of data available, but scant prior information. However, robust structural inference remains an open problem. Surprisingly, it is especially challenging for large-n problems, which are sometimes encountered in social science. In the second part of this thesis we develop a new approach that addresses this problem: a Gibbs sampler for structural inference, which we show gives robust results in many settings in which existing methods do not. In the final part of the thesis we use the sampler to investigate depression in adolescents in the US, using data from the Add Health survey. The result stresses the importance of adolescents not getting medical help even when they feel they should, an aspect that has been discussed previously, but not emphasised.
APA, Harvard, Vancouver, ISO, and other styles
39

Patel, Anup. "Pulsed field magnetization of composite superconducting bulks for magnetic bearing applications." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/256579.

Full text
Abstract:
Permanent magnets are essential components for many devices such as motors, which currently account for 45% of global electricity consumption, generators and also superconducting magnetic bearings used for applications such as flywheel energy storage. But even the most powerful rare-earth magnets are limited to a remanent field of 1.4 T, whereas superconducting materials such as YBCO in their bulk form have the extraordinary ability to trap magnetic fields an order of magnitude higher, whilst being very compact. This gives them the potential to increase efficiency and allow significant volume and weight reductions for rotating machines despite the need for cooling. A new design of superconducting magnetic bearing has been developed which uses magnetized bulks as the field source, eliminating permanent magnets. Finite element modelling shows that the bulk – bulk design can achieve much higher force densities than existing permanent magnet – bulk designs, giving it potential to be used as a compact magnetic bearing. A system was created to magnetize bulks using a pulsed magnetic field down to 10 K and then measure levitation force. In proving the concept of the proposed design, the highest levitation forces ever reported between two superconducting bulks were measured, including a levitation force of 500 N between a 1.7 T magnetized YBCO bulk and a coaxial MgB₂ bulk tube. The biggest factor limiting the use of magnetized bulks in applications is magnetizing them in the first place. Using a pulsed magnetic field is most practical but generates excessive heat dissipation leading to a loss of flux in conventional bulk superconductors, which are 100% superconductor. Although multi-pulse techniques help maximise the trapped field, the poor thermal properties of bulk (RE)BCO are a limiting factor.
New composite superconducting structures are reported which can overcome these problems by using high thermal conductivity materials, the motivation for which came from finite element modelling of the critical state coupled with heat transfer. In particular, composite structures created by cutting and stacking 12 mm wide (RE)BCO superconducting tape are shown experimentally to have exceptional field trapping ability due to superior thermal and mechanical properties compared to existing bulks. Up to 2 T was trapped in a stack of commercially available tape produced by SuperPower Inc. in the first reported pulsed magnetization of such a stack. Over 7 T was trapped between two stacks using field cooling at 4.2 K, the highest field yet trapped in such a sample.
APA, Harvard, Vancouver, ISO, and other styles
40

Chuang, Eugene (Eugene Yu) 1975. "Cyclic load resistance of reinforced concrete beams retrofitted with composite laminates." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Cleary, Alan Michael. "Computational Pan-Genomics| Algorithms and Applications." Thesis, Montana State University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10792396.

Full text
Abstract:

As the cost of sequencing DNA continues to drop, the number of sequenced genomes rapidly grows. In the recent past, the cost dropped so low that it is no longer prohibitively expensive to sequence multiple genomes for the same species. This has led to a shift from the single reference genome per species paradigm to the more comprehensive pan-genomics approach, where populations of genomes from one or more species are analyzed together.

The total genomic content of a population is vast, requiring algorithms for analysis that are more sophisticated and scalable than existing methods. In this dissertation, we explore new algorithms and their applications to pan-genome analysis, both at the nucleotide and genic resolutions. Specifically, we present the Approximate Frequent Subpaths and Frequented Regions problems as a means of mining syntenic blocks from pan-genomic de Bruijn graphs and provide efficient algorithms for mining these structures. We then explore a variety of analyses that mining synteny blocks from pan-genomic data enables, including meaningful visualization, genome classification, and multidimensional scaling. We also present a novel interactive data mining tool for pan-genome analysis, the Genome Context Viewer, which allows users to explore pan-genomic data distributed across a heterogeneous set of data providers by using gene family annotations as a unit of search and comparison. Using this approach, the tool is able to perform traditionally cumbersome analyses on demand in a federated manner.
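The de Bruijn graphs that the abstract mines for syntenic blocks can be sketched minimally: nodes are (k-1)-mers, edges are k-mers, and an edge count records how often a k-mer occurs across the input genomes. This is only an illustrative simplification of the Frequented Regions setting described above, and the function name is an assumption, not the dissertation's API:

```python
from collections import defaultdict

def de_bruijn_graph(sequences, k):
    """Build a frequency-weighted de Bruijn graph.

    Nodes are (k-1)-mers; each k-mer in an input sequence contributes
    one count to the edge between its (k-1)-mer prefix and suffix.
    """
    edges = defaultdict(int)
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            edges[(kmer[:-1], kmer[1:])] += 1
    return edges

# Two genomes sharing a region produce shared, higher-count edges.
g = de_bruijn_graph(["ACGTACG", "TCGTACG"], k=3)
```

Regions traversed by many genomes show up as chains of high-count edges, which is the intuition behind mining frequented regions from such a graph.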

APA, Harvard, Vancouver, ISO, and other styles
42

Wei, Wutao. "Model Based Clustering Algorithms with Applications." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10830711.

Full text
Abstract:

In the predictive area of machine learning, unsupervised learning is applied when the labels of the data are unavailable, laborious to obtain, or available only in limited proportion. Based on the special properties of the data, we can build models by understanding those properties and making some reasonable assumptions. In this thesis, we introduce three practical problems and discuss them in detail. This thesis produced three papers, as follows: Wei, Wutao, et al. "A Non-parametric Hidden Markov Clustering Model with Applications to Time Varying User Activity Analysis." ICMLA 2015; Wei, Wutao, et al. "Dynamic Bayesian predictive model for box office forecasting." IEEE Big Data 2017; Wei, Wutao, Bowei Xi, and Murat Kantarcioglu. "Adversarial Clustering: A Grid Based Clustering Algorithm Against Active Adversaries." Submitted.

User Profiling Clustering: Activity data of individual users on social media are easily accessible in this big data era. However, proper modeling strategies for user profiles have not been well developed in the literature. Existing methods or models usually have two limitations. The first limitation is that most methods target the population rather than individual users, and the second is that they cannot model non-stationary time-varying patterns. Different users in general demonstrate different activity modes on social media. Therefore, one population model may fail to characterize activities of individual users. Furthermore, online social media are dynamic and ever evolving, so are users’ activities. Dynamic models are needed to properly model users’ activities. In this paper, we introduce a non-parametric hidden Markov model to characterize the time-varying activities of social media users. In addition, based on the proposed model, we develop a clustering method to group users with similar activity patterns.

Adversarial Clustering: Nowadays more and more data are gathered for detecting and preventing cyber-attacks. Unique to the cyber security applications, data analytics techniques have to deal with active adversaries that try to deceive the data analytics models and avoid being detected. The existence of such adversarial behavior motivates the development of robust and resilient adversarial learning techniques for various tasks. In the past most of the work focused on adversarial classification techniques, which assumed the existence of a reasonably large amount of carefully labeled data instances. However, in real practice, labeling the data instances often requires costly and time-consuming human expertise and becomes a significant bottleneck. Meanwhile, a large number of unlabeled instances can also be used to understand the adversaries' behavior. To address the above mentioned challenges, we develop a novel grid based adversarial clustering algorithm. Our adversarial clustering algorithm is able to identify the core normal regions, and to draw defensive walls around the core positions of the normal objects utilizing game theoretic ideas. Our algorithm also identifies sub-clusters of attack objects, the overlapping areas within clusters, and outliers which may be potential anomalies.

Dynamic Bayesian Update for Profiling Clustering: The movie industry has become one of the most important consumer businesses, and it is increasingly competitive. Movie producers face large production and marketing costs, while theater owners must decide how to allocate a limited number of screens among the movies currently showing. However, current models in the movie industry can only give an estimate of the opening week. We improve the dynamic linear model with a Bayesian framework. Using this updating method, we are also able to update on streaming adversarial data and make defensive recommendations for the defensive systems.

APA, Harvard, Vancouver, ISO, and other styles
43

Williams, Bryn V. "Evolutionary neural networks : models and applications." Thesis, Aston University, 1995. http://publications.aston.ac.uk/10635/.

Full text
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX) is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
APA, Harvard, Vancouver, ISO, and other styles
44

Au, Carmen E. "Compression-based anomaly detection for video surveillance applications." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98598.

Full text
Abstract:
In light of increased demands for security, we propose a unique approach to automated video surveillance using anomaly detection. The success of this approach is dependent on the ability of the system to ascertain the novelty of a given image acquired by a video camera. We adopt a compression-based similarity measure to determine similarity between images in a video sequence. Images that are sufficiently similar to the previously-seen images are discarded; conversely, images that are sufficiently dissimilar are stored for comparison with future incoming images.
The use of a compression-based technique inherently reduces the heavy computational and storage demands that other video surveillance applications typically have placed on the system. In order to further reduce the computational and storage load, the anomaly detection algorithm is applied to edges and people, which are image features that have been extracted from the images acquired by the camera.
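Compression-based similarity measures of the kind this abstract relies on are commonly instances of the normalized compression distance (NCD). The following is a minimal sketch using Python's zlib as the compressor; the thesis's actual compressor and image encoding are not specified here, so both are assumptions:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings.

    Values near 0 mean 'very similar'; values near 1 mean 'unrelated',
    as judged by how much the concatenation compresses beyond the parts.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# A frame nearly identical to previously seen ones scores low and can be
# discarded; a sufficiently dissimilar frame scores high and is stored.
```

In a surveillance pipeline like the one described, each incoming frame would be compared against the stored set and kept only when its minimum NCD to the stored frames exceeds a novelty threshold.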
APA, Harvard, Vancouver, ISO, and other styles
45

Gao, Si Zhu 1975. "A survey of tree models in biological applications /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97956.

Full text
Abstract:
We present a survey of mathematical models explored in computational biology. Of particular interest are random tree models. The survey is intended to give an overview of the models used in this burgeoning field. We limited ourselves mainly to definitions and provide the key references.
APA, Harvard, Vancouver, ISO, and other styles
46

Leblanc, Alain 1964. "Some applications of the topological plane sweep algorithm." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59443.

Full text
Abstract:
The arrangements of lines stand at the heart of many problems in Computational Geometry, and the topological plane sweep algorithm (EG 86) permits visiting these structures in optimal time and space. This algorithm has already improved the solutions of several older problems. In this thesis we present new applications where visiting an arrangement plays a major role. In the first we solve the problem of finding the points contained on a line from each of k sets of n lines in $O(kn^{2})$ time and $O(kn)$ space. In the second the task is to find the lines that can intersect (or stab) one element from each of k sets. The sets contain line segments or polygons, and the algorithm uses $O(k^{2}n^{2})$ time and $O(k^{2}n)$ space, where n is the total size of the elements in each set. Minor modifications to the algorithm published in (EG 86) are also described. They allow the handling of degeneracies without recourse to a perturbation technique.
APA, Harvard, Vancouver, ISO, and other styles
47

Lemieux, François 1961. "Finite groupoids and their applications to computational complexity." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=40171.

Full text
Abstract:
In our Master's thesis, the notions of recognition by semigroups and by programs over semigroups were extended to groupoids. As a consequence of this transformation, we obtained context-free languages instead of regular languages with recognition by groupoids, and we obtained $SAC^{1}$ instead of $NC^{1}$ with recognition by programs over groupoids. In this thesis, we continue the investigation of the computational power of finite groupoids.
We consider different restrictions on the original model. We examine the effect of restricting the kind of groupoids used, the way parentheses are placed, and we distinguish between the case where parentheses are explicitly given and the case where they are guessed nondeterministically.
We introduce the notions of linear recognition by groupoids and by programs over groupoids. This leads to new characterizations of linear context-free languages and NL. We also prove that the algebraic structure of finite groupoids induces a strict hierarchy on the classes of languages they linearly recognize.
We investigate the classes obtained when the groupoids are restricted to be quasigroups (i.e. the multiplication table forms a latin square). We prove that languages recognized by quasigroups are regular and that programs over quasigroups characterize $NC^{1}$.
We also consider the problem of evaluating a well-parenthesized expression over a finite loop (a quasigroup with an identity). This problem is in $NC^{1}$ for any finite loop, and we give algebraic conditions for its completeness. In particular, we prove that it is sufficient that the loop be nonsolvable, extending a well-known theorem of Barrington.
Finally, we consider programs where the groupoids are allowed to grow with the input length. We study the relationship between these programs and more classical models of computation like Turing machines, pushdown automata, and owner-read owner-write PRAM. As a consequence, we find a restriction on Boolean circuits that has some interesting properties. In particular, circuits that characterize NP and NL are shown to correspond, in presence of our restriction, to P and L respectively.
APA, Harvard, Vancouver, ISO, and other styles
48

Byrd, William E. "Relational programming in miniKanren techniques, applications, and implementations /." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3380156.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Jul 20, 2010). Source: Dissertation Abstracts International, Volume: 70-12, Section: B, page: 7659. Adviser: Daniel P. Friedman.
APA, Harvard, Vancouver, ISO, and other styles
49

Barrett, Brian W. "One-sided communication for high performance computing applications." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3354909.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009.
Title from PDF t.p. (viewed on Feb. 4, 2010). Source: Dissertation Abstracts International, Volume: 70-04, Section: B, page: 2379. Adviser: Andrew Lumsdaine.
APA, Harvard, Vancouver, ISO, and other styles
50

Ma, Yu. "A composable data management architecture for scientific applications." [Bloomington, Ind.] : Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3243773.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2006.
Title from PDF t.p. (viewed Nov. 18, 2008). Source: Dissertation Abstracts International, Volume: 67-12, Section: B, page: 7170. Adviser: Randall Bramley.
APA, Harvard, Vancouver, ISO, and other styles