Dissertations / Theses on the topic 'Computational Design Theory'

To see the other types of publications on this topic, follow the link: Computational Design Theory.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computational Design Theory.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Shomali, Farid Manawel. "The gnomonic theory of architecture : a computational theory of the geneology of design." Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Haihang. "PAOFLOW-Aided Computational Materials Design." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609102/.

Full text
Abstract:
Functional materials are essential to human welfare and provide the foundations for emerging industries. As an alternative route to experimental materials discovery, computational materials design is playing an increasingly significant role in the discovery process. In this work, we use an in-house developed Python utility, PAOFLOW, which generates finite-basis Hamiltonians by projecting first-principles plane-wave pseudopotential wavefunctions onto pseudo-atomic orbitals (PAO) for post-processing calculations of various properties such as band structures, densities of states, complex dielectric constants, and diffusive and anomalous spin and charge transport coefficients. In particular, we calculated the dielectric function of Sr-, Pb-, and Bi-substituted BaSnO3 over wide concentration ranges. Together with high-throughput experimental studies, our results indicate the importance of considering the mixed-valence nature and clustering effects upon substitution of BaSnO3 with Pb and Bi. We also studied two prototypical ferroelectric Rashba semiconductors, GeTe and SnTe, and found that the spin Hall conductivity (SHC) can be large in either the ferroelectric or the paraelectric phase. Upon doping, the polar displacements in GeTe can be sustained up to a critical hole concentration, while the tiny distortions in SnTe vanish at a minimal level of doping. Moreover, we investigated the sensitivity of two-dimensional group-IV monochalcogenides to external strain and doping, which reveals for the first time a giant intrinsic SHC in these materials, providing a new route for the design of highly tunable spintronic devices based on two-dimensional materials.
APA, Harvard, Vancouver, ISO, and other styles
3

Gutierrez, Rafael. "Computational Design of Nanomaterials." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-194029.

Full text
Abstract:
The development of materials with tailored functionalities and with continuously shrinking linear dimensions towards (and below) the nanoscale is not only going to revolutionize state-of-the-art fabrication technologies, but also the computational methodologies used to model materials properties. Specifically, atomistic methodologies are becoming increasingly relevant in the field of materials science as a fundamental tool for gaining understanding of, as well as for pre-designing (in silico materials design), the behavior of nanoscale materials in response to external stimuli. The major long-term goal of atomistic modelling is to obtain structure-function relationships at the nanoscale, i.e. to correlate a definite response of a given physical system with its specific atomic conformation and, ultimately, with its chemical composition and electronic structure. This clearly has its counterpart in the development of bottom-up fabrication technologies, which also require detailed control and fine tuning of physical and chemical properties at sub-nanometer and nanometer length scales. The current work provides an overview of different applications of atomistic approaches to the study of nanoscale materials. We illustrate how the use of first-principles-based electronic structure methodologies, quantum-mechanics-based molecular dynamics, and appropriate methods to model the electrical and thermal response of nanoscale materials provides a solid starting point to shed light on the way such systems can be manipulated to control their electrical, mechanical, or thermal behavior. Typical topics addressed here include the interplay between mechanical and electronic degrees of freedom in carbon-based nanoscale materials with potential relevance for designing nanoscale switches, thermoelectric properties at the single-molecule level and their control via specific chemical functionalization, and electrical and spin-dependent properties in biomaterials. We further show how phenomenological models can be efficiently applied to gain a first insight into the behavior of complex nanoscale systems, for which first-principles electronic structure calculations become computationally expensive. This becomes especially clear in the case of biomolecular systems and organic semiconductors.
APA, Harvard, Vancouver, ISO, and other styles
4

Bruce, Neil. "Evolutionary Design for Computational Visual Attention." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/900.

Full text
Abstract:
A new framework for simulating the visual attention system in primates is introduced. The proposed architecture is an abstraction of existing approaches, influenced by the work of Koch and Ullman, and of Tompa. Each stage of the attentional hierarchy is chosen with consideration for both psychophysics and mathematical optimality. A set of attentional operators is derived that act on basic image channels of intensity, hue, and orientation to produce maps representing the perceptual importance of each image pixel. The development of such operators is realized within the context of a genetic optimization. The model includes the notion of an information domain, in which feature maps are transformed to a domain that more closely corresponds to the human visual system. A careful analysis of various issues, including feature extraction, density estimation, and data fusion, is presented within the context of the visual attention problem.
APA, Harvard, Vancouver, ISO, and other styles
5

Holliday, Glenn E. "Supporting design: a computational theory of design and its implementation in a software support tool." Thesis, Virginia Tech, 1994. http://hdl.handle.net/10919/40670.

Full text
Abstract:
Most work in knowledge acquisition and manipulation has focused on expert systems. Expert systems solve one kind of problem: heuristic classification. This thesis extends some advances in knowledge engineering to a broader class of problem: design.

Design is examined as a generic activity, found in many fields of professional practice. A theoretical framework is developed that supports the refinement of design from high-level concepts through implementation. This framework includes a computational model that is shown to be completely general (Turing-equivalent). Therefore, the theory and model are suitable for representing any design project. They are applied specifically to software development.

Practical support for software designers is offered in a prototype software design system. Existing work in automated knowledge acquisition is used to transfer knowledge about a design from the designer to the automated tool. Consistent support for refinement of design choices at any level of detail makes design a maintainable activity. This opens new possibilities for automated code generation, automated maintenance, and the more effective management of software at a higher-level design representation.
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
6

Bryant, Cari Rihan. "A computational theory for the generation of solutions during early conceptual design." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/crb5ea_09007dcc8042a16d.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed April 15, 2008). Includes bibliographical references (p. 236-249).
APA, Harvard, Vancouver, ISO, and other styles
7

Grigoriadis, Kostas. "Computational and conceptual blends : the epistemology of designing with functionally graded materials." Thesis, Royal College of Art, 2018. http://researchonline.rca.ac.uk/3401/.

Full text
Abstract:
Operating within the landscape of new materialism and considering recent advances in the field of additive manufacturing, the thesis proposes a novel method of designing with a new type of material known as functionally graded. Two of the additive manufacturing advances that are considered of radical importance, and which are central to the research, are the progressively increasing scale of 3D-printed output and the expanding palette of materials that can now be utilised in the process. Regarding the latter, various industrial research initiatives are already underway that explore how different materials can be combined to allow the additive manufacturing of multi-material (otherwise known as functionally graded material) parts or whole volumes that are continuously fused together. In light of this, and pre-empting this architectural-level integration and fusing of materials within one volume, the research initially outlines the anticipated impacts of the new way of building that this technology heralds. Of a total of six main anticipated changes, it then focuses on the impact that functionally graded materiality will have on how design is practised. In this attempt to deal with the uncertainty of a material realm that is unruly and wilful, an initial criticism of the scant existing methods for designing with multi-materials in the computer is that they do not consider the intrinsic behaviour of materials and their natural propensity to structure themselves in space. Additionally, these models essentially follow an assignment of sub-materiality within larger multi-materials that is as arbitrary as the hylomorphic imposition of form on matter. The counter design technique proposed is to computationally 'predict' the way materials will fuse and self-structure, with this self-arrangement being partially instigated by their physical properties. Correspondingly, this approach gives rise to two main objectives pursued in the thesis. The first goal is to formulate an appropriate epistemology (the epistemology of computer simulations, EOCS), directly linked to the use of computer simulations to design with (computational blending); this is effectively the creation of a methodological framework for how to set out, run, and evaluate the results of the simulations. The second goal concerns the new design methodology proposed, in which conventional material-less computer-aided design methods are replaced by a process of constructing b-rep moulds and allowing digital materials to fuse with one another within these virtual frameworks. Drawing from a specific strand of materialist and cognitive theory (conceptual blending), the theoretical objective is to demonstrate that form and material are not separate at any instance of the proposed process. The resulting original contribution of the design research is a process model, created in existing simulation software and usable on a standard laptop computer, for designing with functionally graded materials. The various 'stages' of this model are mapped as a diagrammatic design workflow at the end of the PhD, while its main parts are expanded upon extensively in corresponding chapters of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Lunmei. "Computational Material Design : Diluted Magnetic Semiconductors for Spintronics." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Raghothama, Jayanth. "Integrating Computational and Participatory Simulations for Design in Complex Systems." Doctoral thesis, KTH, Vårdlogistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208170.

Full text
Abstract:
The understanding and conceptualization of cities and their constituent systems, such as transportation and healthcare, as open and complex is shifting the debates around the technical and communicative rationales of planning. Viewing cities in a holistic manner presents methodological challenges, in which our understanding of complexity must be applied in a tangible fashion to planning processes. Bridging the two rationales in the tools and methodologies of planning is necessary for the emergence of a 'non-linear rationality' of planning, one that accounts for and is premised upon complexity. Simulations representing complex systems provide evidence and support for planning, and have the potential to serve as an interface between the more abstract and political decision making and the material city systems. Moving beyond current planning methods, this thesis explores the role of simulations in planning. Recognizing the need for holistic representations, the thesis integrates multiple disparate simulations into a holistic whole, achieving complex representations of systems. These representations are then applied and studied in an interactive environment to address planning problems in different contexts. The thesis contributes an approach towards the development of complex representations of systems; improvements on participatory methods to integrate computational simulations; a nuanced understanding of the relative value of simulation constructs; and technologies and frameworks that facilitate the easy development of integrated simulations that can support participatory planning processes. The thesis develops these contributions through experiments involving problems and stakeholders from real-world systems. The approach towards the development of integrated simulations is realized in an open-source framework. The framework creates computationally efficient, scalable, and interactive simulations of complex systems which, used in a participatory manner, deliver tangible plans and designs.


APA, Harvard, Vancouver, ISO, and other styles
10

Linder, Mats. "Computational Studies and Design of Biomolecular Diels-Alder Catalysis." Doctoral thesis, KTH, Tillämpad fysikalisk kemi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101706.

Full text
Abstract:
The Diels-Alder reaction is one of the most powerful synthetic tools in organic chemistry, and asymmetric Diels-Alder catalysis allows for rapid construction of chiral carbon scaffolds. For this reason, considerable effort has been invested in developing efficient and stereoselective organo- and biocatalysts. However, Diels-Alder is a virtually unknown reaction in Nature, and to engineer an enzyme into a Diels-Alderase is therefore a challenging task. Despite several successful designs of catalytic antibodies since the 1980s, their catalytic activities have remained low, and no true artificial 'Diels-Alderase' enzyme was reported before 2010. In this thesis, we employ state-of-the-art computational tools to study the mechanism of organocatalyzed Diels-Alder in detail, and to redesign existing enzymes into intermolecular Diels-Alder catalysts. Papers I–IV explore the mechanistic variations when employing increasingly activated reactants and the effect of catalysis. In particular, the relation between the traditionally presumed concerted mechanism and a stepwise pathway, forming one bond at a time, is probed. Papers V–X deal with enzyme design and the computational aspects of predicting catalytic activity. Four novel, computationally designed Diels-Alderase candidates are presented in Papers VI–IX. In Paper X, a new parameterization of the Linear Interaction Energy model for predicting protein-ligand affinities is presented. A general finding in this thesis is that it is difficult to attain large transition-state stabilization effects solely by hydrogen-bond catalysis. In addition, water (the preferred solvent of enzymes) is well known for catalyzing Diels-Alder by itself. Therefore, an efficient Diels-Alderase must rely on large binding affinities for the two substrates and preferential binding conformations close to the transition-state geometry. In Papers VI–VIII, we co-designed the enzyme active site and substrates in order to achieve the best possible complementarity and maximize binding affinity and pre-organization. Even so, catalysis is limited by the maximum possible stabilization offered by hydrogen bonds, and by the inherently large energy barrier associated with the [4+2] cycloaddition. The stepwise Diels-Alder pathway, proceeding via a zwitterionic intermediate, may offer a productive alternative for enzyme catalysis, since an enzyme active site may be more differentiated towards stabilizing the high-energy states than for the standard mechanism. In Papers I and III, it is demonstrated that a hydrogen-bond donor catalyst provides more stabilization of transition states having pronounced charge-transfer character, which shifts the preference towards a stepwise mechanism. Another alternative, explored in Paper IX, is to use an α,β-unsaturated ketone as a 'pro-diene', and let the enzyme generate the diene in situ by general acid/base catalysis. The results show that the potential reduction in the reaction barrier with such a mechanism is much larger than for conventional Diels-Alder. Moreover, an acid/base-mediated pathway is a better mimic of how natural enzymes function, since remarkably few catalyze their reactions solely by non-covalent interactions.


APA, Harvard, Vancouver, ISO, and other styles
11

Tuikka, T. (Tuomo). "Towards computational instruments for collaborating product concept designers." Doctoral thesis, University of Oulu, 2002. http://urn.fi/urn:isbn:9514267257.

Full text
Abstract:
The concept design of small handheld electronic and telecommunication devices is a creative and dynamic process. Interaction between the designers plays an important role in the creation of new products. This thesis addresses the communication between product concept designers. The aim of the thesis is to examine new ways of developing computer systems for remote collaboration. Multiple research methods have been used so as to enrich the view of the research subject. Product concept design has been studied in field studies and at co-located concept design workshops where the object of design was uncertain. Co-located workshops were organised to examine the moment-to-moment interaction between designers, to discover how designers collaborate when designing a common design object. By applying the concepts of activity theory, the concept of instrument is elaborated. Four types of instruments are identified that mediate between a designer and the object of design and between collaborating designers. These are the instruments used to externalize an understanding of the design object, the concrete means of interaction, the future artefact, and the hypothetical user activity. The latter two make up the design object which designers strive for, and they can also be instruments for scaffolding each other. A conceptual model was developed to describe the design action and the instruments for collaboration. This model was used to gain insight into the creation of computer support for remotely collaborating designers by posing questions for computer systems design. To develop computer systems that support designers in remote collaboration, an understanding of both the requirements set by the field and the technological feasibility is needed. Three application prototypes are presented as proof of concept and as an experiment with virtual prototyping technology. The concept of design action has been defined on the basis of activity theory. Computer-supported, geographically distributed workshops have been organised and analysed using the design action as an analytical tool for the research material. I conclude that, in order to support remote collaboration of concept designers, computer systems should support collaborative construction of the object of design. Instruments such as the future artefact, its various representations, and the conceptual construct of hypothetical user activity are potential instruments for computation.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Chunmei. "Computational discovery and design of novel materials from electronic structure engineering." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/149858/1/Chunmei_Zhang_Thesis.pdf.

Full text
Abstract:
This thesis studied the electronic structure of materials based on density functional theory calculations and theoretical tight-binding modelling. Besides the electronic structure of pure two-dimensional and three-dimensional materials, this research also explored new physics at the interfaces and surfaces of those materials. In doing so, this thesis discovered and designed a series of novel materials which can be used in nanoelectronics, optoelectronics, information storage, as well as energy storage.
APA, Harvard, Vancouver, ISO, and other styles
13

Monyoncho, Evans Angwenyi. "In-Situ and Computational Studies of Ethanol Electrooxidation Reaction: Rational Catalyst Design Strategies." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35940.

Full text
Abstract:
Fuel cells represent a promising technology for clean power generation because they convert chemical energy (fuel) into electrical energy with high efficiency and little to no emission of pollutants. Direct ethanol fuel cells (DEFCs) have several advantages compared to the most-studied hydrogen and methanol fuel cells. First and foremost, ethanol is a non-toxic liquid, which lowers the investment in handling facilities because the current infrastructure for gasoline can largely be reused. Second, ethanol can be conveniently produced from biomass and is hence carbon neutral, which mitigates increasing atmospheric CO2. Last but not least, if completely oxidized to CO2, ethanol has a higher energy density than methanol since it can deliver 12 electrons per molecule. The almost exclusive oxidation to acetic acid, however, overshadows the attractiveness of DEFCs considerably, as the energy density is divided by 3. The standard potential of acetic acid formation indicates that a reaction path including acetic acid leads to inevitable potential losses of about 0.4 V (the difference between the ideal potentials for CO2 and acetic acid "production"). The development of alkaline DEFCs had also been hampered by the lack of stable and efficient anion exchange membranes. Fortunately, this challenge has been well tackled in recent years [8,9], enabling the development of alkaline fuel cells (AFCs), which are of particular technological interest due to their simple designs and ability to operate at low temperatures (25-100 °C). In alkaline conditions, the kinetics of both the cathodic oxygen reduction and the anodic ethanol oxidation are facilitated. Furthermore, the expensive Pt catalyst can be replaced by lower-cost and more active transition metals such as Pd. The main objectives of this project are: i) to provide a detailed fundamental understanding of the ethanol oxidation reaction on transition metal surfaces in alkaline media, and ii) to propose the best rational catalyst design strategies to cleave the C–C bond during ethanol electrooxidation. To achieve these goals two methodologies are used: in-situ identification of ethanol electrooxidation products using polarization modulation infrared reflection absorption spectroscopy (PM-IRRAS), and mechanistic investigation using computational studies in the framework of density functional theory (DFT). The PM-IRRAS technique was advanced in this project to the level of distinguishing electrooxidation products at the surface of the nanoparticles (electrode) and in the bulk phase of the electrolyte. This new PM-IRRAS capability makes it possible to detect molecules such as CO2 that desorb from the catalyst surface as soon as they are formed. The DFT insights in this project provide an explanation as to why it is difficult to break the C–C bond in ethanol and are used for screening the top candidate metals for further studies.
APA, Harvard, Vancouver, ISO, and other styles
14

He, Tianwei. "Computational discovery and design of nanocatalysts for high efficiency electrochemical reactions." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/203969/1/Tianwei_He_Thesis.pdf.

Full text
Abstract:
This thesis reports the computational discovery and design of highly efficient electrocatalysts for various electrochemical reactions. The method is based on density functional theory (DFT), using the Vienna Ab initio Simulation Package (VASP). This project is a step forward in developing electrocatalysts with low cost, high activity, selectivity, stability, and scalability for electrochemical reactions, which could contribute to a global-scale green energy system for a clean and sustainable energy future.
APA, Harvard, Vancouver, ISO, and other styles
15

Ferhatosmanoglu, Nilgun. "Optimal design of experiments for emerging biological and computational applications." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1179177867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Radhakrishnan, Mala Lakshmi. "Tackling the bigger picture in computational drug design : theory, methods, and application to HIV-1 protease and erythropoietin systems." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39674.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemistry, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references.
This thesis addresses challenging aspects of drug design that require explicit consideration of more than a single drug-target interaction in an unchanging environment. In the first half, the common challenge of designing a molecule that recognizes a desired subset of target molecules amidst a large set of potential binding partners is explored. Using theoretical approaches and simulation of lattice-model molecules, relationships between binding specificity and molecular properties such as hydrophobicity, size, and conformational flexibility were established. Methods were developed to design molecules and molecular cocktails capable of recognizing multiple target variants, and some were integrated with existing methods to design drug cocktails that were predicted to inhibit seven variants of HIV-1 protease. In the second half of the thesis, computational modeling and designs that were used to understand how cytokine binding and trafficking events affect potency are described. A general cellular-level model was systematically explored to analyze how signaling and trafficking properties can help dictate a cytokine-receptor binding affinity appropriate for long-term potency. To help create an accurate cellular-level model of signaling and trafficking for one system in particular, the erythropoietin (Epo) system, we computationally designed mutant erythropoietin receptor (EpoR) molecules for use as experimental probes. By mutating a residue predicted to contribute to pH-dependent Epo-EpoR binding, reagents were designed to facilitate study of endosomal binding and trafficking. Furthermore, a pair of mutant Epo receptors was designed to form a specific, heterodimeric complex with Epo to facilitate study of each individual EpoR's role in signaling via the asymmetric Epo-(EpoR)2 complex.
by Mala Lakshmi Radhakrishnan.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
17

Zhou, Mingxia. "First-principles based micro-kinetic modeling for catalysts design." Diss., Kansas State University, 2018. http://hdl.handle.net/2097/38608.

Full text
Abstract:
Doctor of Philosophy
Department of Chemical Engineering
Bin Liu
Efficient and selective catalysis lies at the heart of many chemical reactions, enabling the synthesis of chemicals and fuels with enormous societal and technological impact. A fundamental understanding of intrinsic catalyst properties, for effective manipulation of the reactivity and selectivity of industrial catalysts, is essential to select proper catalysts that promote the reactions we want and hinder the reactions we do not want. Progress in density functional theory (DFT) makes it possible to describe interfacial catalytic reactions and predict catalytic activities from one catalyst to another. In this study, the water-gas shift reaction (WGSR) was used as a model reaction. First-principles-based micro-kinetic modeling was performed to understand in depth the interactions between competing reaction mechanisms, and their relationship with various factors such as catalyst materials, structures, promoters, and interactions between intermediates (e.g., CO self-interaction) that govern the observed catalytic behaviors. Overall, in this thesis, all relevant reaction mechanisms in the model reaction on well-defined active sites were developed with first-principles calculations. With the established mechanism, the promotional effect of a K adatom on Ni(111) on WGSR, compared to the competing methanation, was understood. Moreover, the WGSR kinetic trend, with the hydrogen production rate decreasing with increasing Ni particle diameter (due to the decreasing fraction of low-coordinated surface Ni sites), was conveniently reproduced with micro-kinetic modeling techniques. Empirical correlations such as the Brønsted-Evans-Polanyi (BEP) relationship for O-H and C-O bond formation or cleavage on Ni(111), Ni(100), and Ni(211) were incorporated to accelerate computational analysis and generate trends on other transition metals (e.g., Cu, Au, Pt). To improve the numerical quality of micro-kinetic modeling, lateral interactions of the main surface reaction intermediates were proven to be critical and were incorporated successfully into the kinetic models. Finally, evidence of the support playing a role in the enhancement of catalyst activity and the impact on future modeling are discussed. DFT is a powerful tool for understanding and even predicting catalyst performance and is shaping our approach to catalysis research. Such molecular-level information obtained from computational methods will undoubtedly guide the design of new catalyst materials with high precision.
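To make the BEP ingredient mentioned in this abstract concrete, here is a minimal Python sketch (not code from the thesis) that estimates activation barriers from reaction energies with a linear Brønsted-Evans-Polanyi correlation; the slope, intercept, metals, and reaction energies are illustrative placeholders only.

# Illustrative sketch only: a Brønsted-Evans-Polanyi (BEP) correlation,
# Ea = alpha * dE + beta, used to estimate activation energies from
# computed reaction energies. All numbers below are assumed placeholders.
alpha, beta = 0.9, 1.1          # assumed BEP slope and intercept (eV)

# assumed C-O bond-cleavage reaction energies on different metals (eV)
reaction_energies = {"Ni": -0.3, "Cu": 0.4, "Pt": -0.1, "Au": 0.8}

for metal, dE in reaction_energies.items():
    estimated_barrier = alpha * dE + beta   # BEP estimate of the barrier
    print(f"{metal}: dE = {dE:+.2f} eV, estimated Ea = {estimated_barrier:.2f} eV")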
APA, Harvard, Vancouver, ISO, and other styles
18

Kanade, Gaurav Nandkumar. "Combinatorial optimization problems in geometric settings." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1152.

Full text
Abstract:
We consider several combinatorial optimization problems in a geometric setting. The first problem we consider is the problem of clustering to minimize the sum of radii. Given a positive integer k and a set of points with interpoint distances that satisfy the definition of being a "metric", we define a ball centered at some input point and having some radius as the set of all input points that are at a distance smaller than the radius of the ball from its center. We want to cover all input points using at most k balls so that the sum of the radii of the balls chosen is minimized. We show that when the points lie in some Euclidean space and the distance measure is the standard Euclidean metric, we can find an exact solution in polynomial time under standard assumptions about the model of computation. The second problem we consider is the Network Spanner Topology Design problem. In this problem, given a set of nodes in the network, represented by points in some geometric setting - either a plane or a 1.5-D terrain, we want to compute a height assignment function h that assigns a height to a tower at every node such that the set of pairs of nodes that can form a direct link with each other under this height function forms a connected spanner. A pair of nodes can form a direct link if they are within a bounded distance B of each other and the heights of towers at the two nodes are sufficient to achieve Line-of-Sight connectivity - i.e. the straight line connecting the top of the towers lies above any obstacles. In the planar setting where the obstacles are modeled as having a certain maximum height and minimum clearance distance, we give a constant factor approximation algorithm. In the case where the points lie on a 1.5-D terrain we illustrate that it might be hard to use Computational Geometry to achieve efficient approximations. The final problem we consider is the Multiway Barrier Cut problem. Here, given a set of points in the plane and a set of unit disk sensors also in the plane such that any path in the plane between any pair of input points hits at least one of the given sensor disks, we consider the problem of finding the minimum size subset of these disks that still achieves this separation. We give a constant factor approximation algorithm for this problem.
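As a hypothetical illustration of the first problem only, the short Python sketch below brute-forces a minimum-sum-of-radii clustering for a handful of planar points; candidate balls are centered at input points with radii equal to interpoint distances, which suffices in the standard formulation. It merely restates the objective in code and is not the polynomial-time algorithm developed in the thesis.

# Illustrative sketch only (not the thesis algorithm): brute-force search for a
# minimum-sum-of-radii clustering of a few planar points using at most k balls.
from itertools import combinations
from math import dist

def min_sum_of_radii(points, k):
    n = len(points)
    # candidate balls: centered at input points, radii equal to interpoint distances
    balls = [(c, dist(points[c], points[p])) for c in range(n) for p in range(n)]
    best = None
    for subset in combinations(balls, k):
        covered = set()
        for c, r in subset:
            covered |= {i for i in range(n) if dist(points[i], points[c]) <= r}
        if len(covered) == n:
            cost = sum(r for _, r in subset)
            if best is None or cost < best:
                best = cost
    return best

print(min_sum_of_radii([(0, 0), (1, 0), (5, 0), (6, 1)], k=2))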
APA, Harvard, Vancouver, ISO, and other styles
19

Koullias, Stefanos. "Methodology for global optimization of computationally expensive design problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49085.

Full text
Abstract:
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm. This research presents a novel surrogate model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm searches the design space by constructing a fully Bayesian Gaussian process model through a set of observations and then using the model to make new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases, infeasible designs, and provides large reductions in the objective function values of design problems. Results on four sets of algebraic test problems are presented and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method is shown to solve more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and obtains the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet versus competing design methods.
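For readers unfamiliar with surrogate-based search, here is a minimal sketch of one iteration of Gaussian-process surrogate optimization with an expected-improvement acquisition function on a toy one-dimensional objective. A plain scikit-learn GP stands in for the fully Bayesian model described in the abstract, and every name and number is an illustrative assumption rather than the thesis implementation.

# Illustrative sketch only: one iteration of GP surrogate-based optimization
# with an expected-improvement acquisition function on a toy objective.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # assumed toy objective to minimize
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(6, 1))    # initial observations
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

Xc = np.linspace(-2, 2, 400).reshape(-1, 1)        # candidate design points
mu, sigma = gp.predict(Xc, return_std=True)
improvement = y.min() - mu                         # minimization convention
z = improvement / np.maximum(sigma, 1e-12)
ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement

x_next = Xc[np.argmax(ei)]             # next point to evaluate with the expensive tool
print("next design point to evaluate:", x_next)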
APA, Harvard, Vancouver, ISO, and other styles
20

Code, James Edward. "Experimental evaluation of solvents with a biological substrate based on solubility parameter theory: solubility parameter determinations by computational chemistry using a rational design process." Diss., UMK access, 2004.

Find full text
Abstract:
Thesis (Ph. D.)--School of Dentistry and Dept. of Chemistry. University of Missouri--Kansas City, 2004.
"A dissertation in oral biology and chemistry." Advisor: J. David Eick. Typescript. Vita. Description based on contents viewed Feb. 23, 2006; title from "catalog record" of the print edition. Includes bibliographical references (leaves 137-145). Online version of the print edition.
APA, Harvard, Vancouver, ISO, and other styles
21

Ismail, Arif. "First Principles and Genetic Algorithm Studies of Lanthanide Metal Oxides for Optimal Fuel Cell Electrolyte Design." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20198.

Full text
Abstract:
As the demand for clean and renewable energy sources continues to grow, much attention has been given to solid oxide fuel cells (SOFCs) due to their efficiency and low operating temperature. However, the components of SOFCs must still be improved before commercialization can be reached. Of particular interest is the solid electrolyte, which conducts oxygen ions from the cathode to the anode. Samarium-doped ceria (SDC) is the electrolyte of choice in most SOFCs today, due mostly to its high ionic conductivity at low temperatures. However, the underlying principles that contribute to high ionic conductivity in doped ceria remain unknown, and so it is difficult to improve upon the design of SOFCs. This thesis focuses on identifying the atomistic interactions in SDC which contribute to its favourable performance in the fuel cell. Unfortunately, information as basic as the structure of SDC has not yet been found, due to the difficulty in experimentally characterizing and computationally modelling the system. For instance, to evaluate 10.3% SDC, which is close to the 11.1% concentration used in fuel cells, one must investigate 194 trillion configurations, owing to the numerous ways of arranging the Sm ions and oxygen vacancies in the simulation cell. As an exhaustive search is clearly unfeasible, we develop a genetic algorithm (GA) to search the vast potential energy surface for the low-energy configurations, which will be most prevalent in the real material. With the GA, we investigate the structure of SDC for the first time at the DFT+U level of theory. Importantly, we find key differences between our results and prior calculations of this system that used less accurate methods, which demonstrates the importance of accurately modelling the system. Overall, our simulation results for the structure of SDC agree with experimental measurements. We identify the structural significance of defects in the doped ceria lattice which contribute to oxygen ion conductivity. Thus, the structure of SDC found in this work provides a basis for developing better solid electrolytes, which is of significant scientific and technological interest. Following the structure search, we perform an investigation of the electronic properties of SDC, to understand more about the material. Notably, we compare our calculated density of states plot to XPS measurements of pure and reduced SDC. This allows us to parameterize the Hubbard (U) term for Sm, which had not previously been done. Importantly, the DFT+U treatment of the Sm ions also allowed us to observe in our simulations the magnetization of SDC that has been found by experiment. Finally, we also study the SDC surface, with an emphasis on its structural similarities to the bulk. Knowledge of the surface structure is important for understanding how fuel oxidation occurs in the fuel cell, as many reaction mechanisms occur on the surface of this porous material. The groundwork for such mechanistic studies is provided in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
22

Sahai, Michelle Asha. "Computational studies of ligand-water mediated interactions in ionotropic glutamate receptors." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:b86d2f5a-3554-44c0-b985-5693241369ec.

Full text
Abstract:
Careful treatment of water molecules in ligand-protein interactions is required in many cases if the correct binding pose is to be identified for molecular docking. Water can form complex bridging networks and can play a critical role in dictating the binding mode of ligands. A particularly striking example of this can be found in the ionotropic glutamate receptors (iGluRs), a family of ligand-gated ion channels that are responsible for the majority of fast synaptic neurotransmission in the central nervous system and are thought to be essential in memory and learning. Thus, pharmacological intervention at these neuronal receptors is a valuable therapeutic strategy. This thesis relies on various computational studies and X-ray crystallography to investigate the role of ligand-water mediated interactions in iGluRs bound to glutamate and α-amino-3-hydroxy-5-methyl-4-isoxazole-propionic acid (AMPA). Comparative molecular dynamics (MD) simulations of each subtype of iGluRs bound to glutamate revealed that crystal water positions were reproduced and that all but one water molecule, W5, in the binding site can be rearranged or replaced with water molecules from the bulk. Further density functional theory (DFT) calculations have been used to confirm the MD results and characterize the energetics of W5 and of another water molecule implicated in influencing the dynamics of a proposed switch in these receptors. Additional comparative studies on the AMPA subtypes of iGluRs show that each step of the calculation must be considered carefully if the results are to be meaningful. Crystal structures of two ligands, glutamate and AMPA, revealed two distinct modes of binding to an AMPA subtype of iGluRs, GluA2. The difference is related to the position of water molecules within the binding pocket. DFT calculations investigated the interaction energies and polarisation effects, resulting in a prediction of the correct binding mode for glutamate. For AMPA, alternative modes of binding have similar interaction energies as a result of a higher internal energy than glutamate. A combined MD and X-ray crystallographic study investigated the binding of the ligand AMPA in the AMPA receptor subtypes. Analysis of the binding pocket shows that the crystal-bound mode of AMPA is not preserved and that AMPA can instead adopt an alternative mode of binding. This involves a displacement of a key water molecule, followed by AMPA adopting the pose seen for glutamate. Thus, this thesis makes use of various studies to assess the energetics and dynamics of water molecules in iGluRs. The resulting data provide additional information on the importance of water molecules in mediating ligand interactions as well as identifying key water molecules that can be useful in the de novo design of new selective drugs against iGluRs.
APA, Harvard, Vancouver, ISO, and other styles
23

Jha, Rajesh. "Combined Computational-Experimental Design of High-Temperature, High-Intensity Permanent Magnetic Alloys with Minimal Addition of Rare-Earth Elements." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2621.

Full text
Abstract:
AlNiCo magnets are known for high-temperature stability and superior corrosion resistance and have been widely used in various applications. The reported magnetic energy density (BH)max for these magnets is around 10 MGOe. Theoretical calculations show that a (BH)max of 20 MGOe is achievable, which would help close the gap between AlNiCo and rare-earth-element (REE) based magnets. The extended family of AlNiCo alloys studied in this dissertation consists of eight elements, so it is important to determine the composition-property relationships for each of the alloying elements and their influence on the bulk properties. In the present research, we proposed a novel approach to efficiently use a set of computational tools based on several concepts of artificial intelligence to address the complex problem of design and optimization of high-temperature REE-free magnetic alloys. A multi-dimensional random number generation algorithm was used to generate the initial set of chemical concentrations. These alloys were then examined for phase equilibria and associated magnetic properties as a screening tool to form the initial set of alloys. These alloys were manufactured and tested for the desired properties. The properties were fitted with a set of multi-dimensional response surfaces, and the most accurate meta-models were chosen for prediction. These properties were simultaneously extremized by utilizing a set of multi-objective optimization algorithms. This provided a set of concentrations of each of the alloying elements for optimized properties. A few of the best predicted Pareto-optimal alloy compositions were then manufactured and tested to evaluate the predicted properties. These alloys were then added to the existing data set and used to improve the accuracy of the meta-models. The multi-objective optimizer then used the new meta-models to find a new set of improved Pareto-optimized chemical concentrations. This design cycle was repeated twelve times in this work. Several of these Pareto-optimized alloys outperformed most of the candidate alloys on most of the objectives. Unsupervised learning methods such as Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were used to discover various patterns within the dataset. This proves the efficacy of the combined meta-modeling and experimental approach in design optimization of magnetic alloys.
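As a small illustration of the Pareto-selection step in the design loop described above, the Python sketch below filters a handful of candidate alloys down to the non-dominated set for two objectives to be maximized. All alloy names, objectives, and values are made-up placeholders, not data from the dissertation.

# Illustrative sketch only: picking the non-dominated (Pareto-optimal) alloys
# from a small table of candidate properties, both objectives to be maximized.
candidates = {
    "alloy_A": (10.2, 520.0),   # assumed (energy density in MGOe, coercivity in Oe)
    "alloy_B": (11.5, 480.0),
    "alloy_C": (9.8, 610.0),
    "alloy_D": (11.0, 470.0),
}

def dominates(p, q):
    # p dominates q if it is at least as good in every objective and strictly better in one
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

pareto_front = [
    name for name, p in candidates.items()
    if not any(dominates(q, p) for other, q in candidates.items() if other != name)
]
print(pareto_front)   # expected: ['alloy_A', 'alloy_B', 'alloy_C']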
APA, Harvard, Vancouver, ISO, and other styles
24

Washburn, Megan E. "Dynamic Procedural Music Generation from NPC Attributes." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2193.

Full text
Abstract:
Procedural content generation for video games (PCGG) has seen a steep increase in the past decade, aiming to foster emergent gameplay as well as to address the challenge of producing large amounts of engaging content quickly. Most work in PCGG has been focused on generating art and assets such as levels, textures, and models, or on narrative design to generate storylines and progression paths. Given the difficulty of generating harmonically pleasing and interesting music, procedural music generation for games (PMGG) has not seen as much attention during this time. Music in video games is essential for establishing the developers' intended mood and environment. Given the deficit of PMGG content, this paper aims to address the demand for high-quality PMGG. This paper describes the system developed to solve this problem, which generates thematic music for non-player characters (NPCs) based on developer-defined attributes in real time and responds to the dynamic relationship between the player and the target NPC. The system was evaluated by means of a user study: participants confront four NPC bosses, each with its own uniquely generated dynamic track based on its varying attributes in relation to the player's. The survey gathered information on the perceived quality, dynamism, and helpfulness to gameplay of the generated music. Results showed that the generated music was generally pleasing and harmonious, and that while players could not detect the details of how, they were able to detect a general relationship between themselves and the NPCs as reflected by the music.
APA, Harvard, Vancouver, ISO, and other styles
25

Sinchaisri, Wichinpong (Wichinpong Park). "Pricing with quality perception : theory and experiment." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104563.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-115).
Quality is one of the most important factors behind a decision to purchase any product. Consumers have long assumed that price and quality are highly correlated, and that as the price of a product increases, its quality also increases ("you get what you pay for"). Several researchers have studied how consumers use price to infer quality, but very few have investigated the impact of pricing strategies, particularly price markdowns, on quality perception and how a retailer should react to such behavior. Our key research questions, viewed through both an empirical and a theoretical lens, concern how markdowns with different discount levels may induce different consumer behaviors and how the firm should incorporate them when optimizing its markdown policy. We empirically elicit the relationship between a consumer's quality perception and available price information, and refine a consumer demand model to capture these insights, together with other motives: reference dependence, loss aversion, patience, and optimism. For the retailer, we characterize the structure of the market segmentation and analyze its optimal markdown strategy when consumers are sensitive to quality. We present conditions under which it is optimal for the firm to apply a markdown to its products. When consumers are more sensitive to the product's original price than to the discount, or are impatient to wait for future discounts, the retailer can earn the maximum revenue when applying a markdown strategy. Furthermore, we advocate that the firm should pre-announce information about future markdowns in order to avoid the negative effect of consumers' inaccurate estimates.
by Wichinpong Sinchaisri.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Tancred, James Anderson. "Aerodynamic Database Generation for a Complex Hypersonic Vehicle Configuration Utilizing Variable-Fidelity Kriging." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1543801033672049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Berenguer, Verdú Antonio José. "Analysis and design of efficient passive components for the millimeter-wave and THz bands." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/84004.

Full text
Abstract:
This thesis tackles issues of particular interest regarding the analysis and design of passive components in the mm-wave and Terahertz (THz) bands. Innovative analysis techniques and modeling of complex structures, design procedures, and practical implementation of advanced passive devices are presented. The first part of the thesis is dedicated to THz passive components. THz technology currently suffers from the lack of suitable waveguiding structures, since both metals and dielectrics are lossy at THz frequencies. This implies that neither the conventional closed metallic structures used at microwave frequencies nor the dielectric waveguides used in the optical regime are adequate solutions. Among a variety of new proposals, the Single Wire Waveguide (SWW) stands out due to its low attenuation and dispersion. However, this surface waveguide is difficult to excite and radiates strongly on bends. A Dielectric-Coated Single Wire Waveguide (DCSWW) can be used to alleviate these problems, but the advantages of the SWW are lost and new problems arise. Until now, the literature has not offered a proper solution to radiation on bends, and a rigorous characterization of these waveguides is still lacking. This thesis provides, for the first time, a complete modal analysis of both waveguides, appropriate for THz frequencies. This analysis is later applied to solve the problem of radiation on bends. Several structures and design procedures to alleviate radiation losses are presented and experimentally validated. The second part of the thesis is dedicated to mm-wave passive components. When implementing passive components to operate at such small, millimetric wavelengths, ensuring proper metallic contact and alignment between parts is challenging. In addition, dielectric absorption becomes significant at mm-wave frequencies. Consequently, conventional hollow metallic waveguides and planar transmission lines present high attenuation, so new topologies are being considered. Gap Waveguides (GWs), based on a periodic structure introducing an Electromagnetic Bandgap effect, are very suitable since they do not require metallic contacts and avoid dielectric losses. However, although GWs have great potential, several issues prevent GW technology from becoming consolidated and universally used. On the one hand, the topological complexity of GWs complicates the design process, since full-wave simulations are time-costly and there is a lack of appropriate analysis methods and suitable synthesis procedures. On the other hand, the benefits of using GWs instead of conventional structures need to be more clearly demonstrated with high-performance GW components and proper comparisons with conventional structures. This thesis introduces several efficient analysis methods, models, and synthesis techniques that will allow engineers without a significant background in GWs to straightforwardly implement GW devices. In addition, several high-performance narrow-band filters operating at Ka-band and V-band, as well as a rigorous comparison with rectangular waveguide topology, are presented.
Berenguer Verdú, AJ. (2017). Analysis and design of efficient passive components for the millimeter-wave and THz bands [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/84004
THESIS
APA, Harvard, Vancouver, ISO, and other styles
28

Deng, Zeyu. "Rational design of novel halide perovskites combining computations and experiments." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287932.

Full text
Abstract:
The perovskite family of materials is extremely large and provides a template for designing materials for different purposes. Among them, hybrid organic-inorganic perovskites (HOIPs) are very interesting and have been recently identified as possible next generation light harvesting materials because they combine low manufacturing cost and relatively high power conversion efficiencies (PCEs). In addition, some other applications like light emitting devices are also highly studied. This thesis starts with an introduction to the solar cell technologies that could use HOIPs. In Chapter 2, previously published results on the structural, electronic, optical and mechanical properties of HOIPs are reviewed in order to understand the background and latest developments in this field. Chapter 3 discusses the computational and experimental methods used in the following chapters. Then Chapter 4 describes the discovery of several hybrid double perovskites, with the formula (MA)$_2$M$^I$M$^{III}$X$_6$ (MA = methylammonium, CH$_3$NH$_3$, M$^I$ = K, Ag and Tl, M$^{III}$ = Bi, Y and Gd, X = Cl and Br). Chapter 5 presents studies on the variable pressure and temperature response of formamidinium lead halides FAPbBr$_3$ (FA = formamidinium, CH(NH$_2$)$_2$) as well as the mechanical properties of FAPbBr$_3$ and FAPbI$_3$, followed by a computational study connecting the mechanical properties of halide perovskites ABX$_3$ (A = K, Rb, Cs, Fr and MA, X = Cl, Br and I) to their electronic transport properties. Chapter 6 describes a study on the phase stability, transformation and electronic properties of low-dimensional hybrid perovskites containing the guanidinium cation Gua$_x$PbI$_{x+2}$ (x = 1, 2 and 3, Gua = guanidinium, C(NH$_2$)$_3$). The conclusions and possible future work are summarized in Chapter 7. These results provide theoreticians and experimentalists with insight into the design and synthesis of novel, highly efficient, stable and environmentally friendly materials for solar cell applications as well as for other purposes in the future.
APA, Harvard, Vancouver, ISO, and other styles
29

Wahlström, Dennis. "Probabilistic Multidisciplinary Design Optimization on a high-pressure sandwich wall in a rocket engine application." Thesis, Umeå universitet, Institutionen för fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-138480.

Full text
Abstract:
The space industry has always pursued better performance. Advanced technologies are developed to accomplish the goals of space exploration and space missions, to find answers and widen knowledge. In that spirit, the aim of this thesis is to understand and improve the mass of a space nozzle used in the upper stage of a space mission with an expander cycle engine. The study is carried out by creating a design of experiments using Latin Hypercube Sampling (LHS), taking into consideration the number of designs and the simulation expense. A surrogate-model-based optimization with a Multidisciplinary Design Optimization (MDO) method is used with two different approaches, Analytical Target Cascading (ATC) and Multidisciplinary Feasible (MDF), which are compared to strengthen the conclusions. In the optimization, three different limitations are investigated: the design space limit, the industrial limit, and the industrial limit with tolerance. The optimized results show an incompatibility between the two optimization approaches, ATC and MDF, which are expected to be similar but agree less well for two of the limitations, the design space limit and the industrial limit. The ATC formalism in this case is dictated by the main objective, where the children/subproblems only focus on finding a solution that satisfies the main objective and its constraints. For MDF, the main objective function is described as a single function and solved subject to all the constraints; furthermore, the problem is not divided into subproblems as in ATC. In surrogate-model-based optimization the solution is influenced by the accuracy of the model, and this is investigated with another DoE. A full factorial DoE is created in a region near the optimal solution. In that region, the surrogate models prove to be quite accurate for almost all responses, except for the maximum temperature, damage, and strain at the hottest region, with the largest common impact on the inner wall thickness of the space nozzle. The new structure of the space nozzle shows an improvement in mass of ≈ 50%, ≈ 15% and ≈ -4% for the three limitations (design space limit, industrial limit, and industrial limit with tolerance) relative to a reference value, and is ≈ 10%, ≈ 35% and ≈ 25% cheaper to manufacture according to the defined producibility model.
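The design-of-experiments step above relies on Latin Hypercube Sampling; as a loose illustration (not the thesis code, and with made-up design variables and bounds), a minimal LHS plan can be generated like this:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Generate a Latin Hypercube plan: each variable is split into n_samples
    equal strata, one point per stratum, and the strata are paired across
    variables by independent random permutations."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)               # shape (d, 2)
    n, d = n_samples, bounds.shape[0]
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # one point per stratum
    for j in range(d):                                     # shuffle strata per variable
        u[:, j] = u[rng.permutation(n), j]
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# illustrative design variables (e.g. wall thicknesses, channel width)
plan = latin_hypercube(50, bounds=[(1.0, 3.0), (0.5, 2.0), (10.0, 40.0)], rng=0)
print(plan.shape)  # (50, 3): candidate designs to feed the expensive simulations
```

Each row is one candidate design; the surrogate models are then fitted to the simulation results obtained at these points.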
APA, Harvard, Vancouver, ISO, and other styles
30

Yin, Weiwei. "The role and regulatory mechanisms of nox1 in vascular systems." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44833.

Full text
Abstract:
As an important endogenous source of reactive oxygen species (ROS), NADPH oxidase 1 (Nox1) has received tremendous attention in the past few decades. It has been identified to play a key role as the initial "kindle," whose activation is crucial for amplifying ROS production through several propagation mechanisms in the vascular system. As a consequence, Nox1 has been implicated in the initiation and genesis of many cardiovascular diseases and has therefore been the subject of detailed investigations. The literature on experimental studies of the Nox1 system is extensive. Numerous investigations have identified essential features of the Nox1 system in vasculature and characterized key components, possible regulatory signals and/or signaling pathways, potential activation mechanisms, a variety of Nox1 stimuli, and its potential physiological and pathophysiological functions. While these experimental studies have greatly enhanced our understanding of the Nox1 system, many open questions remain regarding the overall functionality and dynamic behavior of Nox1 in response to specific stimuli. Such questions include the following. What are the main regulatory and/or activation mechanisms of Nox1 systems in different types of vascular cells? Once Nox1 is activated, how does the system return to its original, unstimulated state, and how will its subunits be recycled? What are the potential disassembly pathways of Nox1? Are these pathways equally important for effectively reutilizing Nox1 subunits? How does Nox1 activity change in response to dynamic signals? Are there generic features or principles within the Nox1 system that permit optimal performance? These types of questions have not been answered by experiments, and they are indeed quite difficult to address with experiments. I demonstrate in this dissertation that one can pose such questions and at least partially answer them with mathematical and computational methods. Two specific cell types, namely endothelial cells (ECs) and vascular smooth muscle cells (VSMCs), are used as "templates" to investigate distinct modes of regulation of Nox1 in different vascular cells. By using a diverse array of modeling methods and computer simulations, this research identifies different types of regulation and their distinct roles in the activation process of Nox1. In the first study, I analyze ECs stimulated by mechanical stimuli, namely shear stresses of different types. The second study uses different analytical and simulation methods to reveal generic features of alternative disassembly mechanisms of Nox1 in VSMCs. This study leads to predictions of the overall dynamic behavior of the Nox1 system in VSMCs as it responds to extracellular stimuli, such as the hormone angiotensin II. The studies and investigations presented here improve our current understanding of the Nox1 system in the vascular system and might help us to develop potential strategies for manipulation and controlling Nox1 activity, which in turn will benefit future experimental and clinical studies.
APA, Harvard, Vancouver, ISO, and other styles
31

Sivan, D. D. "Design and structural modifications of vibratory systems to achieve prescribed modal spectra /." Title page, contents and abstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phs6238.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Anisi, David A. "Online trajectory planning and observer based control." Licentiate thesis, Stockholm : Optimization and systems theory, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Soncco, K., X. Jorge, and R. A. Arciniega. "Postbuckling Analysis of Functionally Graded Beams." Institute of Physics Publishing, 2019. http://hdl.handle.net/10757/625602.

Full text
Abstract:
This paper studies the geometrically non-linear bending behavior of functionally graded beams subjected to buckling loads using the finite element method. The computational model is based on an improved first-order shear deformation theory for beams with five independent variables. The abstract finite element formulation is derived by means of the principle of virtual work. High-order nodal-spectral interpolation functions are utilized to approximate the field variables, which minimizes the locking problem. An incremental/iterative solution technique of Newton's type is implemented to solve the nonlinear equations. The model is verified with benchmark problems available in the literature. The objective is to investigate the effect of volume fraction variation on the response of functionally graded beams made of ceramics and metals. As expected, the results show that transverse deflections vary significantly depending on the ceramic and metal combination.
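The Newton-type incremental/iterative technique mentioned above follows the standard pattern below; this is a generic sketch on a small toy system (the residual is illustrative, not the assembled FGM beam equations):

```python
import numpy as np

def newton_solve(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson loop for a nonlinear system R(u) = 0."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u
        du = np.linalg.solve(jacobian(u), -r)  # tangent system K_T du = -R
        u = u + du
    raise RuntimeError("Newton iteration did not converge")

# toy 2-DOF stand-in for the assembled finite element residual
R = lambda u: np.array([u[0]**2 + u[1]**2 - 1.0, u[0] - u[1]])
K = lambda u: np.array([[2.0 * u[0], 2.0 * u[1]], [1.0, -1.0]])
print(newton_solve(R, K, u0=[1.0, 0.5]))  # converges to [0.7071..., 0.7071...]
```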
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
34

Sarkar, Somwrita. "Acquiring symbolic design optimization problem reformulation knowledge." Connect to full text, 2009. http://hdl.handle.net/2123/5683.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed November 13, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the Faculty of Architecture, Design and Planning in the Faculty of Science. Includes graphs and tables. Includes bibliographical references. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
35

Anisi, David A. "On Cooperative Surveillance, Online Trajectory Planning and Observer Based Control." Doctoral thesis, KTH, Optimeringslära och systemteori, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-9990.

Full text
Abstract:
The main body of this thesis consists of six appended papers. In the first two, different cooperative surveillance problems are considered. The next two consider different aspects of the trajectory planning problem, while the last two deal with observer design for mobile robotic and Euler-Lagrange systems respectively. In Papers A and B, a combinatorial optimization based framework for cooperative surveillance missions using multiple Unmanned Ground Vehicles (UGVs) is proposed. In particular, Paper A considers the Minimum Time UGV Surveillance Problem (MTUSP) while Paper B treats the Connectivity Constrained UGV Surveillance Problem (CUSP). The minimum time formulation is the following: given a set of surveillance UGVs and a polyhedral area, find waypoint-paths for all UGVs such that every point of the area is visible from a point on a waypoint-path and such that the time for executing the search in parallel is minimized. The connectivity constrained formulation extends the MTUSP by additionally requiring the induced information graph to be kept recurrently connected at the time instants when the UGVs perform the surveillance mission. In these two papers, the NP-hardness of both problems is shown and decomposition techniques are proposed that allow us to find an approximative solution efficiently in an algorithmic manner. Paper C addresses the problem of designing a real-time, high-performance trajectory planner for an aerial vehicle that uses information about terrain and enemy threats to fly low and avoid radar exposure on the way to a given target. The high-level framework augments Receding Horizon Control (RHC) with a graph-based terminal cost that captures the global characteristics of the environment. An important issue with RHC is to make sure that the greedy, short-term optimization does not lead to long-term problems, which in our case boils down to two things: not getting into situations where a collision is unavoidable, and making sure that the destination is actually reached. Hence, the main contribution of this paper is to present a trajectory planner with provable safety and task completion properties. Direct methods for trajectory optimization are traditionally based on a priori temporal discretization and collocation methods. In Paper D, the problem of adaptive node distribution is formulated as a constrained optimization problem, which is to be included in the underlying nonlinear mathematical programming problem. The benefits of utilizing the suggested method for online trajectory optimization are illustrated by a missile guidance example. In Paper E, the problem of active observer design for an important class of non-uniformly observable systems, namely mobile robotic systems, is considered. The set of feasible configurations and the set of output flow equivalent states are defined. It is shown that the inter-relation between these two sets may serve as the basis for the design of active observers. The proposed observer design methodology is illustrated by considering a unicycle robot model equipped with a set of range-measuring sensors. Finally, in Paper F, a geometrically intrinsic observer for Euler-Lagrange systems is defined and analyzed. This observer is a generalization of the observer proposed by Aghannan and Rouchon. Their contractivity result is reproduced and complemented by a proof that the region of contraction is infinitely thin. Moreover, assuming a priori bounds on the velocities, convergence of the observer is shown by means of Lyapunov's direct method in the case of configuration manifolds with constant curvature.
APA, Harvard, Vancouver, ISO, and other styles
36

Mebane, Palmer. "Uniquely Solvable Puzzles and Fast Matrix Multiplication." Scholarship @ Claremont, 2012. https://scholarship.claremont.edu/hmc_theses/37.

Full text
Abstract:
In 2003 Cohn and Umans introduced a new group-theoretic framework for doing fast matrix multiplications, with several conjectures that would imply the matrix multiplication exponent $\omega$ is 2. Their methods have been used to match one of the fastest known algorithms by Coppersmith and Winograd, which runs in $O(n^{2.376})$ time and implies that $\omega \leq 2.376$. This thesis discusses the framework that Cohn and Umans came up with and presents some new results in constructing combinatorial objects called uniquely solvable puzzles that were introduced in a 2005 follow-up paper, and which play a crucial role in one of the $\omega = 2$ conjectures.
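For context, the classical route to sub-cubic matrix multiplication is block recursion with fewer multiplications, as in Strassen's algorithm (exponent log₂7 ≈ 2.807); the sketch below is that classical algorithm, not the Cohn-Umans group-theoretic construction studied in the thesis:

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication for n x n matrices, n a power of two:
    seven recursive block products instead of eight."""
    n = A.shape[0]
    if n <= 64:                 # fall back to ordinary multiplication
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```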
APA, Harvard, Vancouver, ISO, and other styles
37

Braga, Eduardo Cardoso. "Fluxo, corpo e percepção na comunicação digital." Pontifícia Universidade Católica de São Paulo, 2007. https://tede2.pucsp.br/handle/handle/4962.

Full text
Abstract:
There are different ways of conceiving the image in digital communication. When investigating some discourses regarding the digital image, we came across certain constants: the prevalence of vision (ocularcentrism) and the absence of bodily reference (disembodiment). The digital image is conceived as simulacrum (Baudrillard), as disembodied (Kittler), or as a phenomenon without reference (Mitchell). Moreover, many discourses about communication conceive a homogeneous space of absolute transparency, in which there are no obstacles, conflicts, or differences (Vattimo). In this thesis we argue that the roots of these conceptual marks are the notions of simulacrum, derived from Platonism; of representation, derived from Cartesianism; and of transparency, derived from Neoplatonism. We investigate these conceptual traditions mainly with respect to the concept of image and the epistemic status of perception. We establish a connection between certain visual experiences of the past and the new possibilities opened up by digital technologies. We then look for new philosophical bases that could free the image from its inferior epistemic condition. Bergsonian phenomenology provides the foundations for thinking of the digital image and of perception as embodied phenomena, in which the body plays a fundamental role in signification and in the construction of subjectivity. Thus, new corporeal and subjective dimensions announce themselves and point to new modes of being. Body and subjectivity, related through digital media, open new horizons for thinking about communication and the epistemic status of the digital image as a possibility of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
38

Cekl, Jakub. "Model palivového souboru tlakovodního reaktoru západní koncepce." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-376896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chvatík, Štěpán. "Asynchronní motor s vnějším rotorem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gutierrez, Laliga Rafael. "Computational Design of Nanomaterials." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A29182.

Full text
Abstract:
The development of materials with tailored functionalities and with continuously shrinking linear dimensions towards (and below) the nanoscale is not only going to revolutionize state of the art fabrication technologies, but also the computational methodologies used to model the materials properties. Specifically, atomistic methodologies are becoming increasingly relevant in the field of materials science as a fundamental tool in gaining understanding on as well as for pre-designing (in silico material design) the behavior of nanoscale materials in response to external stimuli. The major long-term goal of atomistic modelling is to obtain structure-function relationships at the nanoscale, i.e. to correlate a definite response of a given physical system with its specific atomic conformation and ultimately, with its chemical composition and electronic structure. This has clearly its pendant in the development of bottom-up fabrication technologies, which also require a detailed control and fine tuning of physical and chemical properties at sub-nanometer and nanometer length scales. The current work provides an overview of different applications of atomistic approaches to the study of nanoscale materials. We illustrate how the use of first-principle based electronic structure methodologies, quantum mechanical based molecular dynamics, and appropriate methods to model the electrical and thermal response of nanoscale materials, provides a solid starting point to shed light on the way such systems can be manipulated to control their electrical, mechanical, or thermal behavior. Thus, some typical topics addressed here include the interplay between mechanical and electronic degrees of freedom in carbon based nanoscale materials with potential relevance for designing nanoscale switches, thermoelectric properties at the single-molecule level and their control via specific chemical functionalization, and electrical and spin-dependent properties in biomaterials. We will further show how phenomenological models can be efficiently applied to get a first insight in the behavior of complex nanoscale systems, for which first principle electronic structure calculations become computationally expensive. This will become especially clear in the case of biomolecular systems and organic semiconductors.
APA, Harvard, Vancouver, ISO, and other styles
41

Arauz, Moreno Carlos Alexser, and 卡洛斯. "HAWT Design Employing Blade Element Momentum Theory and Computational Fluid Dynamics." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/67814955064521874818.

Full text
Abstract:
Master's thesis
Kun Shan University
Institute of Mechanical Engineering
103 (ROC academic year)
The present study was devised to contribute to the technical development of locally manufactured small wind turbines in Nicaragua. A small Hugh Piggott wind turbine was investigated and its rotor improved upon using Blade Element Momentum (BEM) theory. To this effect, Burton's optimum rotor was utilized as a guideline from which alternate blade profiles were derived. Thereafter, the aerodynamic performance of the blades was analyzed using the BEM equations including Prandtl's tip losses, empirical corrections for the turbulent wake state, as well as polynomial curve fits and the Viterna method to model the aerodynamic behavior of the airfoils that make up the cross-sections of the rotor blades. Upon selection of a final blade candidate, Computational Fluid Dynamics (CFD) techniques were employed to further validate the performance of the proposed blade design. The simulations were carried out in ANSYS Fluent; the wind field was solved through steady-state solutions of the Reynolds-Averaged Navier-Stokes (RANS) equations under a single frame of reference. Taking advantage of the CFD results, the structural and acoustical behavior of the blade is also briefly discussed in the present work.
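The core of a BEM analysis like the one described is a fixed-point iteration for the axial and tangential induction factors at each blade section; the sketch below shows that loop in its most stripped-down form (no Prandtl tip loss or turbulent-wake correction, and with illustrative thin-airfoil polars rather than the thesis's fitted airfoil data):

```python
import numpy as np

def bem_section(tsr_local, solidity, twist, lift, drag, tol=1e-6, max_iter=200):
    """Iterate the axial (a) and tangential (ap) induction factors
    for one blade element; the usual corrections are deliberately omitted."""
    a, ap = 0.0, 0.0
    for _ in range(max_iter):
        phi = np.arctan2(1.0 - a, (1.0 + ap) * tsr_local)   # inflow angle
        alpha = phi - twist                                  # angle of attack
        cl, cd = lift(alpha), drag(alpha)
        cn = cl * np.cos(phi) + cd * np.sin(phi)             # normal force coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)             # tangential force coefficient
        a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (solidity * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (solidity * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            return a_new, ap_new, phi
        a, ap = a_new, ap_new
    return a, ap, phi

# illustrative polars: 2*pi lift slope and constant drag
lift = lambda alpha: 2.0 * np.pi * alpha
drag = lambda alpha: 0.01
print(bem_section(tsr_local=4.0, solidity=0.05, twist=np.radians(3.0),
                  lift=lift, drag=drag))
```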
APA, Harvard, Vancouver, ISO, and other styles
42

Dufton, Lachlan Thomas. "Stochastic Mechanisms for Truthfulness and Budget Balance in Computational Social Choice." Thesis, 2013. http://hdl.handle.net/10012/7231.

Full text
Abstract:
In this thesis, we examine stochastic techniques for overcoming game theoretic and computational issues in the collective decision making process of self-interested individuals. In particular, we examine truthful, stochastic mechanisms for settings with a strong budget balance constraint (i.e. there is no net flow of money into or away from the agents). Building on past results in AI and computational social choice, we characterise affine-maximising social choice functions that are implementable in truthful mechanisms for the setting of heterogeneous item allocation with unit demand agents. We further provide a characterisation of affine maximisers with the strong budget balance constraint. These mechanisms reveal impossibility results and poor worst-case performance that motivate us to examine stochastic solutions. To adequately compare stochastic mechanisms, we introduce and discuss measures that capture the behaviour of stochastic mechanisms, based on techniques used in stochastic algorithm design. When applied to deterministic mechanisms, these measures correspond directly to existing deterministic measures. While these approaches have more general applicability, in this work we assess mechanisms based on overall agent utility (efficiency and social surplus ratio) as well as fairness (envy and envy-freeness). We observe that mechanisms can (and typically must) achieve truthfulness and strong budget balance using one of two techniques: labelling a subset of agents as "auctioneers" who cannot affect the outcome, but collect any surplus; and partitioning agents into disjoint groups, such that each partition solves a subproblem of the overall decision making process. Worst-case analysis of random-auctioneer and random-partition stochastic mechanisms shows large improvements over deterministic mechanisms for heterogeneous item allocation. In addition to this allocation problem, we apply our techniques to envy-freeness in the room assignment-rent division problem, for which no truthful deterministic mechanism is possible. We show how stochastic mechanisms give an improved probability of envy-freeness and a low expected level of envy for a truthful mechanism. The random-auctioneer technique also improves the worst-case performance of the public good (or public project) problem. Communication and computational complexity are two other important concerns of computational social choice. Both the random-auctioneer and random-partition approaches offer a flexible trade-off between low complexity of the mechanism and high overall outcome quality measured, for example, by total agent utility. They enable truthful and feasible solutions to be incrementally improved on as the mechanism receives more information and is allowed more processing time. The majority of our results are based on optimising worst-case performance, since this provides guarantees on how a mechanism will perform, regardless of the agents that use it. To complement these results, we perform empirical, average-case analyses on our mechanisms. Finally, while strong budget balance is a fixed constraint in our particular social choice problems, we show empirically that this can improve the overall utility of agents compared to a utility-maximising assignment that requires a budget imbalanced mechanism.
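As a concrete illustration of the random-auctioneer idea mentioned above (one randomly chosen agent is excluded from the allocation and absorbs all payments, so no money enters or leaves the group), here is a hedged single-item sketch; it is not one of the exact mechanisms analysed in the thesis:

```python
import random

def random_auctioneer_single_item(valuations, rng=None):
    """One agent is drawn uniformly at random as 'auctioneer': she cannot win
    but receives the winner's payment, so the transfers sum to zero. The
    remaining agents face a second-price auction, keeping truthful bidding
    optimal for them."""
    rng = rng or random.Random()
    agents = list(valuations)
    auctioneer = rng.choice(agents)
    bidders = [a for a in agents if a != auctioneer]
    ranked = sorted(bidders, key=lambda a: valuations[a], reverse=True)
    winner = ranked[0]
    price = valuations[ranked[1]] if len(ranked) > 1 else 0.0
    payments = {a: 0.0 for a in agents}
    payments[winner] = -price        # winner pays the second-highest remaining bid
    payments[auctioneer] = price     # auctioneer collects it: strong budget balance
    return winner, payments

winner, pay = random_auctioneer_single_item({"a": 7.0, "b": 5.0, "c": 9.0},
                                             rng=random.Random(1))
print(winner, pay, sum(pay.values()))  # the payments always sum to zero
```

The price of this randomisation is efficiency: whenever the excluded agent had the highest valuation, the item goes to the second-best agent, which is the kind of worst-case loss the analysis above is concerned with.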
APA, Harvard, Vancouver, ISO, and other styles
43

Pozun, Zachary David. "Materials design via tunable properties." Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-05-4997.

Full text
Abstract:
In the design of novel materials, tunable properties are parameters such as composition or structure that may be adjusted in order to enhance a desired chemical or material property. Trends in tunable properties can be accurately predicted using computational and combinatorial chemistry tools in order to optimize a desired property. I present a study of tunable properties in materials and employ a variety of algorithms that ranges from simple screening to machine learning. In the case of tuning a nanocomposite membrane for olefin/paraffin separations, I demonstrate a rational design approach based on statistical modeling followed by ab initio modeling of the interaction of olefins with various nanoparticles. My simplified model of gases diffusing on a heterogeneous lattice identifies the conditions necessary for optimal selectivity of olefins over paraffins. The ab initio modeling is then applied to identify realistic nanomaterials that will produce such conditions. The second case, α-Fe₂O₃, commonly known as hematite, is a potential solar cell material. I demonstrate the use of a screened search through chemical compound space in order to identify doped hematite-based materials with an ideal band gap for maximum solar absorption. The electronic structure of hematite is poorly treated by standard density functional theory and requires the application of Hartree-Fock exchange in order to reproduce the experimental band gap. Using this approach, several potential solar cell materials are identified based on the behavior of the dopants within the overall hematite structure. The final aspect of this work is a new method for identifying low-energy chemical processes in condensed phase materials. The gap between timescales that are attainable with standard molecular dynamics and the processes that evolve on a human timescale presents a challenge for modeling the behavior of materials. This problem is particularly severe in the case of condensed phase systems where the reaction mechanisms may be highly complicated or completely unknown. I demonstrate the use of support vector machines, a machine-learning technique, to create transition state theory dividing surfaces without a priori information about the reaction coordinate. This method can be applied to modeling the stability of novel materials.
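The last method described above trains a support vector machine to separate reactant-like from product-like configurations and uses the learned decision boundary as a dividing surface; a minimal scikit-learn sketch on synthetic 2-D data (the coordinates, labels and kernel settings here are purely illustrative) is:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# synthetic stand-ins for sampled configurations: two basins in a 2-D space,
# labelled reactant (0) or product (1)
reactants = rng.normal(loc=[-1.0, 0.0], scale=0.3, size=(200, 2))
products = rng.normal(loc=[+1.0, 0.0], scale=0.3, size=(200, 2))
X = np.vstack([reactants, products])
y = np.array([0] * 200 + [1] * 200)

# a kernel SVM learns a nonlinear boundary without an a priori reaction
# coordinate; decision_function(x) = 0 defines the dividing surface
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

trial = np.array([[0.0, 0.1], [-0.9, -0.2]])
print(clf.predict(trial))            # side of the dividing surface
print(clf.decision_function(trial))  # signed score relative to the surface
```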
APA, Harvard, Vancouver, ISO, and other styles
44

Guo, Mingyu. "Computationally Feasible Approaches to Automated Mechanism Design." Diss., 2010. http://hdl.handle.net/10161/3015.

Full text
Abstract:

In many multiagent settings, a decision must be made based on the preferences of multiple agents, and agents may lie about their preferences if this is to their benefit. In mechanism design, the goal is to design procedures (mechanisms) for making the decision that work in spite of such strategic behavior, usually by making untruthful behavior suboptimal. In automated mechanism design, the idea is to computationally search through the space of feasible mechanisms, rather than to design them analytically by hand. Unfortunately, the most straightforward approach to automated mechanism design does not scale to large instances, because it requires searching over a very large space of possible functions. In this thesis, we adopt an approach to automated mechanism design that is computationally feasible. Instead of optimizing over all feasible mechanisms, we carefully choose a parameterized subfamily of mechanisms. Then we optimize over mechanisms within this family. Finally, we analyze whether and to what extent the resulting mechanism is suboptimal outside the subfamily. We apply (computationally feasible) automated mechanism design to three resource allocation mechanism design problems: mechanisms that redistribute revenue, mechanisms that involve no payments at all, and mechanisms that guard against false-name manipulation.


Dissertation
APA, Harvard, Vancouver, ISO, and other styles
45

Zheng, Lin. "Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design." Thesis, 2012. http://hdl.handle.net/10012/6559.

Full text
Abstract:
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and has received wide industrial interest and support as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a video coding paradigm called predictive video coding, where video source frames Xᵢ, i = 1, 2, ..., N, are encoded frame by frame, and the encoder and decoder for each frame Xᵢ enlist help only from all previously encoded frames Sⱼ, j = 1, 2, ..., i-1. In this thesis, we look further beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame Xᵢ can use all previous original frames Xⱼ, j = 1, 2, ..., i-1, and all previously encoded frames Sⱼ, while the corresponding decoder can use only all previously encoded frames. We consider all studies, comparisons, and designs on causal video coding from an information theoretic point of view. Let R*c(D₁,...,D_N) (R*p(D₁,...,D_N), respectively) denote the minimum total rate required to achieve a given distortion level D₁,...,D_N > 0 in causal video coding (predictive video coding, respectively). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate of causal video coding R*c(D₁,...,D_N) required to achieve a given distortion (quality) level D₁,...,D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X₁, ..., X_N, R*c(D₁,...,D_N) is equal to the infimum of the n-th order total rate distortion function R_{c,n}(D₁,...,D_N) over all n, where R_{c,n}(D₁,...,D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D₁,...,D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*c(D₁,...,D_N) in a novel way when the N sources are an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more and less coding theorem): under some conditions on source frames and distortion, the more frames that need to be encoded and transmitted, the less data after encoding actually has to be sent. With the help of the algorithm, it is also shown by example that R*c(D₁,...,D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, by which each frame is encoded in a locally optimal manner based on all information available to the encoder of the frame. As a by-product, an extended Markov lemma is established for correlated ergodic sources. From an information theoretic point of view, it is interesting to compare causal video coding and predictive video coding, which all existing video coding standards proposed so far are based upon. In this thesis, by fixing N=3, we first derive a single-letter characterization of R*p(D₁,D₂,D₃) for an IID vector source (X₁,X₂,X₃) where X₁ and X₂ are independent, and then demonstrate the existence of such X₁,X₂,X₃ for which R*p(D₁,D₂,D₃)>R*c(D₁,D₂,D₃) under some conditions on source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimum fixed-rate causal scalar quantizers for causal video coding to minimize the total distortion among all sources. Simulation results show that in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers as large as 16% quality improvement (distortion reduction).
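The iterative computation described above generalises the alternating-minimisation idea behind the classical Blahut-Arimoto algorithm for the single-source rate-distortion function; the sketch below implements that classical algorithm only (not the causal, multi-frame algorithm developed in the thesis):

```python
import numpy as np

def blahut_arimoto(p_x, dist, slope, n_iter=500):
    """Classical Blahut-Arimoto iteration for one point on the R(D) curve.
    p_x: source distribution (nx,); dist: distortion matrix d(x, xhat) (nx, nxhat);
    slope: Lagrange parameter s < 0. Returns (rate in bits, distortion)."""
    nxhat = dist.shape[1]
    r = np.full(nxhat, 1.0 / nxhat)              # reproduction marginal r(xhat)
    for _ in range(n_iter):
        q = r[None, :] * np.exp(slope * dist)    # q(xhat|x) proportional to r(xhat) exp(s d)
        q /= q.sum(axis=1, keepdims=True)
        r = p_x @ q                              # update the marginal
    D = float(np.sum(p_x[:, None] * q * dist))
    R = float(np.sum(p_x[:, None] * q * np.log2(q / r[None, :])))
    return R, D

# binary uniform source with Hamming distortion, where R(D) = 1 - H(D)
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p_x, dist, slope=-2.0))     # one (rate, distortion) point
```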
APA, Harvard, Vancouver, ISO, and other styles
46

(9165011), Salar Safarkhani. "GAME-THEORETIC MODELING OF MULTI-AGENT SYSTEMS: APPLICATIONS IN SYSTEMS ENGINEERING AND ACQUISITION PROCESSES." Thesis, 2020.

Find full text
Abstract:

The process of acquiring large-scale complex systems is usually characterized by cost and schedule overruns. To investigate the causes of this problem, we may view the acquisition of a complex system on several different time scales. At finer time scales, one may study different stages of the acquisition process, from the intricate details of the entire systems engineering process to communication between design teams to how individual designers solve problems. At the largest time scale one may consider the acquisition process as a series of actions, namely request for bids, bidding and auctioning, contracting, and finally building and deploying the system, without resolving the fine details that occur within each step. In this work, we study the acquisition process at multiple scales. First, we develop a game-theoretic model for engineering of the systems in the building and deploying stage. We model the interactions among the systems and subsystem engineers as a principal-agent problem. We develop a one-shot shallow systems engineering process and obtain the optimum transfer functions that best incentivize the subsystem engineers to maximize the expected system-level utility. The core of the principal-agent model is the quality function, which maps the effort of the agent to the performance (quality) of the system. Therefore, we build the stochastic quality function by modeling the design process as a sequential decision-making problem. Second, we develop and evaluate a model of the acquisition process that accounts for the strategic behavior of different parties. We cast our model in terms of government-funded projects and assume the following steps. First, the government publishes a request for bids. Then, private firms offer their proposals in a bidding process, and the winning bidder enters into a contract with the government. The contract describes the system requirements and the corresponding monetary transfers for meeting them. The winning firm devotes effort to deliver a system that fulfills the requirements. This can be viewed as a game that the government plays with the bidding firms. We study how different parameters in the acquisition procedure affect the bidders' behavior and therefore the utility of the government. Using reinforcement learning, we seek to learn the optimal policies of the actors involved in this game. In particular, we study how the requirements, contract types such as cost-plus and incentive-based contracts, number of bidders, problem complexity, etc., affect the acquisition procedure. Furthermore, we study the bidding strategy of the private firms and how the contract types affect their strategic behavior.
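The reinforcement-learning step mentioned above can be pictured with a plain tabular update on a toy one-shot bidding game; everything in this sketch (payoffs, bid discretisation, the single-bidder view of the competition) is a hypothetical simplification, not the thesis model:

```python
import numpy as np

rng = np.random.default_rng(0)

bids = np.linspace(0.0, 1.0, 11)      # discretised bid levels (hypothetical)
true_cost = 0.4                       # bidder's private cost (hypothetical)

def payoff(bid):
    """Made-up one-shot game: higher bids win less often but earn more margin."""
    win_prob = max(0.0, 1.0 - bid)    # stand-in for the competing bidders
    return (bid - true_cost) if rng.random() < win_prob else 0.0

q = np.zeros(len(bids))               # single-state action-value table over bids
alpha, epsilon = 0.05, 0.1
for episode in range(20000):
    a = rng.integers(len(bids)) if rng.random() < epsilon else int(np.argmax(q))
    q[a] += alpha * (payoff(bids[a]) - q[a])  # one-shot game: no bootstrapped next state

print("learned bid:", bids[int(np.argmax(q))])  # tends toward 0.6-0.8 for these payoffs
```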

APA, Harvard, Vancouver, ISO, and other styles
47

Sivan, Dmitri D. "Design and structural modifications of vibratory systems to achieve prescribed modal spectra / Dmitri D. Sivan." Thesis, 1997. http://hdl.handle.net/2440/18916.

Full text
Abstract:
Bibliography: leaves 184-192.
xii, 198 leaves : ill. ; 30 cm.
This thesis reports on problems associated with design and structural modification of vibratory systems. Several common problems encountered in practical engineering applications are described and novel strategies for solving these problems are proposed. Mathematical formulations of these problems are generated, and solution methods are developed.
Thesis (Ph.D.)--University of Adelaide, Dept. of Mechanical Engineering, 1997
APA, Harvard, Vancouver, ISO, and other styles
48

Bombardier, William. "Symbolic Modelling and Simulation of Wheeled Vehicle Systems on Three-Dimensional Roads." Thesis, 2009. http://hdl.handle.net/10012/4818.

Full text
Abstract:
In recent years, there has been a push by automotive manufacturers to improve the efficiency of the vehicle development process. This can be accomplished by creating a computationally efficient vehicle model that has the capability of predicting the vehicle behavior in many different situations at a fast pace. This thesis presents a procedure to automatically generate the simulation code of vehicle systems rolling over three-dimensional (3-D) roads given a description of the model as input. The governing equations describing the vehicle can be formulated using either a numerical or a symbolic formulation approach. A numerical approach will re-construct numerical matrices that describe the system at each time step, whereas a symbolic approach will generate the governing equations that describe the system for all time. The latter method offers many advantages: the equations only have to be formulated once and can be simplified using symbolic simplification techniques, thus making the simulations more computationally efficient. The road model is automatically generated in the formulation stage based on the single elevation function (3-D mathematical function) that is used to represent the road. Symbolic algorithms are adopted to construct and optimize the non-linear equations that are required to determine the contact point. A Newton-Raphson iterative scheme is constructed around the optimized non-linear equations so that they can be solved at each time step. The road is represented in tabular form when it cannot be defined by a single elevation function. A simulation code structure was developed to incorporate the tire on a 3-D road in a symbolic computer implementation of vehicle systems. It was created so that the tire forces and moments that appear in the generalized force matrix can be evaluated during simulation and not during formulation. They are evaluated systematically by performing a number of procedure calls. A road model is first used to determine the contact point between the tire and the ground. Its location is used to calculate the tire intermediate variables, such as the camber angle, that are required by a tire model to evaluate the tire forces and moments. The structured simulation code was implemented in the DynaFlexPro software package by creating a linear graph representation of the tire and the road. DynaFlexPro was used to analyze a vehicle system on six different road profiles performing different braking and cornering maneuvers. The analyses were repeated in MSC.ADAMS for validation purposes and good agreement was achieved between the two software packages. The results confirmed that the symbolic computing approach presented in this thesis is more computationally efficient than the purely numerical approach. Thus, the simulation code structure increases the versatility of vehicle models by permitting them to be analyzed on 3-D trajectories while remaining computationally efficient.
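For a road given by a single elevation function z = f(x, y), the contact-point search described above reduces to a small nonlinear system solved by Newton-Raphson at every time step; a simplified sketch (nearest road point to the wheel centre, finite-difference derivatives, an illustrative road function) is:

```python
import numpy as np

def road(x, y):
    """Illustrative elevation function z = f(x, y), not a real road profile."""
    return 0.1 * np.sin(0.5 * x) + 0.05 * y ** 2

def closest_road_point(center, guess, h=1e-5, tol=1e-8, max_iter=50):
    """Newton-Raphson search for the road point nearest the wheel centre,
    using stationarity of the squared distance to the surface (x, y, f(x, y))."""
    cx, cy, cz = center

    def g(p):                          # squared distance to the wheel centre
        x, y = p
        return (x - cx) ** 2 + (y - cy) ** 2 + (road(x, y) - cz) ** 2

    def grad(p):                       # central-difference gradient of g
        e = np.eye(2) * h
        return np.array([(g(p + e[i]) - g(p - e[i])) / (2 * h) for i in range(2)])

    p = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        r = grad(p)
        if np.linalg.norm(r) < tol:
            break
        J = np.column_stack([(grad(p + np.eye(2)[i] * h) - grad(p - np.eye(2)[i] * h)) / (2 * h)
                             for i in range(2)])
        p = p + np.linalg.solve(J, -r)
    x, y = p
    return np.array([x, y, road(x, y)])

print(closest_road_point(center=(2.0, 0.3, 0.6), guess=(2.0, 0.3)))
```

In a full vehicle model this search runs inside the multibody simulation, and the camber angle and the other tire inputs are then computed from the returned contact point.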
APA, Harvard, Vancouver, ISO, and other styles
49

Kolodny, Marcos. "Análisis de estructuras de sufijos de strings." Bachelor's thesis, 2021. http://hdl.handle.net/11086/23258.

Full text
Abstract:
Thesis (Lic. en Cs. de la Computación)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2021.
The problem of developing algorithms that decide whether a given pattern or word occurs in a given text is fundamental in computer science. Several algorithms have been developed over the last decades to solve this problem (and its many variants). A detailed analysis of the time and space complexity of these algorithms shows that, in practice, brute-force solutions are not viable in most cases. In this work we present, in a formal and structured way, two data structures that are widely used across the literature and, using them, give solutions to three of the main problems in the field.
Fil: Kolodny, Marcos. Universidad Nacional de Córdoba. Facultad de Matemática, Astronomía, Física y Computación; Argentina.
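The abstract does not name the two suffix structures it formalises; the suffix array is one of the standard choices for this kind of pattern search, and the compact sketch below (a didactic O(n² log n) construction with binary-search lookup, not an optimised build) illustrates the idea:

```python
from bisect import bisect_left, bisect_right

def suffix_array(text):
    """Sorted starting positions of all suffixes (simple O(n^2 log n) build)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurrences(text, sa, pattern):
    """All positions where pattern occurs, via binary search over the sorted suffixes."""
    suffixes = [text[i:] for i in sa]          # materialised here only for clarity
    lo = bisect_left(suffixes, pattern)
    hi = bisect_right(suffixes, pattern + "\uffff")
    return sorted(sa[lo:hi])

text = "banana"
sa = suffix_array(text)
print(sa)                            # [5, 3, 1, 0, 4, 2]
print(occurrences(text, sa, "ana"))  # [1, 3]
```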
APA, Harvard, Vancouver, ISO, and other styles
50

Demarco, Vedelago Leandro. "Selección de componentes discretos para un filtro activo mediante programación por restricciones y optimización por colonia de hormigas." Bachelor's thesis, 2019. http://hdl.handle.net/11086/13414.

Full text
Abstract:
Thesis (Lic. en Ciencias de la Computación)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019.
In current active filter design, one of the possible implementations is the so-called RC topology, in which the filter is built from operational amplifiers, resistors and capacitors. To satisfy the filter specifications, the selection of the discrete components that make up the filter is of great importance. Given the wide range of values that these components can take, it is inefficient to enumerate all possible combinations and select the best one among them. In this work we use a metaheuristic called ACOR, which allows this kind of constrained combinatorial optimization problem to be solved in reasonable running time while guaranteeing that the obtained solutions satisfy all the constraints, although they might not be of the best quality (where quality is defined with respect to some characteristic that depends on the chosen values).
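ACOR keeps an archive of good solutions and samples new candidates from Gaussian kernels centred on archive members; the sketch below is a bare-bones unconstrained version on a toy objective (the real design problem adds the circuit constraints and the discrete E-series component values):

```python
import numpy as np

def acor(objective, bounds, archive_size=10, n_ants=5, q=0.5, xi=0.85,
         n_iter=300, rng=None):
    """Bare-bones ACO_R: solution archive plus Gaussian kernel sampling."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    archive = rng.uniform(bounds[:, 0], bounds[:, 1], size=(archive_size, dim))
    scores = np.array([objective(s) for s in archive])
    for _ in range(n_iter):
        order = np.argsort(scores)
        archive, scores = archive[order], scores[order]
        ranks = np.arange(archive_size)
        w = np.exp(-ranks ** 2 / (2 * (q * archive_size) ** 2))  # rank-based weights
        w /= w.sum()
        for _ in range(n_ants):
            k = rng.choice(archive_size, p=w)                    # pick a guiding solution
            sigma = xi * np.mean(np.abs(archive - archive[k]), axis=0)
            cand = np.clip(rng.normal(archive[k], sigma + 1e-12),
                           bounds[:, 0], bounds[:, 1])
            c_score = objective(cand)
            if c_score < scores[-1]:                             # replace the current worst
                archive[-1], scores[-1] = cand, c_score
                order = np.argsort(scores)
                archive, scores = archive[order], scores[order]
    return archive[0], scores[0]

# toy objective standing in for the filter-design cost
best, val = acor(lambda x: float(np.sum((x - 0.3) ** 2)), bounds=[(0, 1)] * 4, rng=0)
print(best, val)
```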
APA, Harvard, Vancouver, ISO, and other styles
