Dissertations / Theses on the topic 'Computational methods for Complex Systems'


Consult the top 50 dissertations / theses for your research on the topic 'Computational methods for Complex Systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gravino, Pietro <1984>. "Novel investigation methods in Computational Social Dynamics and Complex Systems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6765/.

Full text
Abstract:
This thesis reports on the evolution of methods for analysing techno-social systems, through the research experiences in which the author was directly involved. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: the dataset is validated and, through non-trivial modelling, a better understanding of language properties is developed. A real complex-systems experiment is then introduced: the WideNoise experiment, run in the context of the EveryAware European project. The project and the course of the experiment are illustrated, and the data analysis is presented. Next, the Experimental Tribe platform for social computation is introduced. It was conceived to help researchers implement web experiments, and it also aims to catalyse the cumulative growth of experimental methodologies and the standardisation of the tools cited above. In the last part, three further research experiences that took place on the Experimental Tribe platform are discussed in detail, from the design of each experiment to the analysis of the results and, eventually, to the modelling of the systems involved. The experiments are: CityRace, on measuring the strategies humans adopt when facing traffic; laPENSOcosì, aimed at unveiling the structure of political opinion; and AirProbe, implemented again within the EveryAware project framework, which monitored how a community's opinion of air quality shifted once it was informed about local air pollution. From these cases, the evolution of the methods for investigating techno-social systems emerges, together with the opportunities and the threats offered by this new scientific path.
APA, Harvard, Vancouver, ISO, and other styles
2

Ren, Xuchun. "Novel computational methods for stochastic design optimization of high-dimensional complex systems." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1738.

Full text
Abstract:
The primary objective of this study is to develop new computational methods for robust design optimization (RDO) and reliability-based design optimization (RBDO) of high-dimensional, complex engineering systems. Four major research directions, all anchored in polynomial dimensional decomposition (PDD), have been defined to meet the objective. They involve: (1) development of new sensitivity analysis methods for RDO and RBDO; (2) development of novel optimization methods for solving RDO problems; (3) development of novel optimization methods for solving RBDO problems; and (4) development of a novel scheme and formulation to solve stochastic design optimization problems with both distributional and structural design parameters. The major achievements are as follows. Firstly, three new computational methods were developed for calculating design sensitivities of statistical moments and reliability of high-dimensional complex systems subject to random inputs. The first method represents a novel integration of PDD of a multivariate stochastic response function and score functions, leading to analytical expressions of design sensitivities of the first two moments. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-SPA method, entailing the saddlepoint approximation (SPA) and score functions; and the PDD-MCS method, utilizing the embedded Monte Carlo simulation (MCS) of the PDD approximation and score functions. For all three methods developed, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Secondly, four new methods were developed for RDO of complex engineering systems. The methods involve PDD of a high-dimensional stochastic response for statistical moment analysis, a novel integration of PDD and score functions for calculating the second-moment sensitivities with respect to the design variables, and standard gradient-based optimization algorithms. The methods, depending on how statistical moment and sensitivity analyses are dovetailed with an optimization algorithm, encompass direct, single-step, sequential, and multi-point single-step design processes. Thirdly, two new methods were developed for RBDO of complex engineering systems. The methods involve an adaptive-sparse polynomial dimensional decomposition (AS-PDD) of a high-dimensional stochastic response for reliability analysis, a novel integration of AS-PDD and score functions for calculating the sensitivities of the failure probability with respect to design variables, and standard gradient-based optimization algorithms, resulting in a multi-point, single-step design process. The two methods, depending on how the failure probability and its design sensitivities are evaluated, exploit two distinct combinations built on AS-PDD: the AS-PDD-SPA method, entailing SPA and score functions; and the AS-PDD-MCS method, utilizing the embedded MCS of the AS-PDD approximation and score functions. In addition, a new method, named the augmented PDD method, was developed for RDO and RBDO subject to mixed design variables, comprising both distributional and structural design variables.
The method comprises a new augmented PDD of a high-dimensional stochastic response for statistical moment and reliability analyses; an integration of the augmented PDD, score functions, and finite-difference approximation for calculating the sensitivities of the first two moments and the failure probability with respect to distributional and structural design variables; and standard gradient-based optimization algorithms, leading to a multi-point, single-step design process. The innovative formulations of statistical moment and reliability analysis, design sensitivity analysis, and optimization algorithms have achieved not only highly accurate but also computationally efficient design solutions. Therefore, these new methods are capable of performing industrial-scale design optimization with numerous design variables.
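The score-function idea underlying these sensitivity methods can be illustrated with a minimal Monte Carlo sketch (an illustration of the general principle only, not the PDD machinery of the thesis): for a Gaussian design variable X ~ N(mu, sigma^2), the derivative of E[h(X)] with respect to mu equals E[h(X) s(X)], where s(x) = (x - mu)/sigma^2 is the score function, so a moment and its design sensitivity come from the same set of samples.

```python
import numpy as np

# Minimal illustration of score-function sensitivities (not the thesis' PDD code):
# for X ~ N(mu, sigma^2), d/dmu E[h(X)] = E[h(X) * (X - mu) / sigma^2],
# so the moment and its design sensitivity share one set of samples.
rng = np.random.default_rng(0)

def moment_and_sensitivity(h, mu, sigma, n=500_000):
    x = rng.normal(mu, sigma, n)
    hx = h(x)
    score = (x - mu) / sigma**2          # d log f(x; mu) / d mu for a Gaussian
    mean = hx.mean()                     # estimate of E[h(X)]
    dmean_dmu = (hx * score).mean()      # estimate of d E[h(X)] / d mu
    return mean, dmean_dmu

m, dm = moment_and_sensitivity(lambda x: x**2, mu=1.0, sigma=0.5)
print(m, dm)   # analytically: E[X^2] = mu^2 + sigma^2 = 1.25, d/dmu = 2*mu = 2.0
```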
APA, Harvard, Vancouver, ISO, and other styles
3

Abdullah, Rudwan Ali Abolgasim. "Intelligent methods for complex systems control engineering." Thesis, University of Stirling, 2007. http://hdl.handle.net/1893/257.

Full text
Abstract:
This thesis proposes an intelligent multiple-controller framework for complex systems that incorporates a fuzzy logic based switching and tuning supervisor along with a neural network based generalized learning model (GLM). The framework is designed for adaptive control of both Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) complex systems. The proposed methodology provides the designer with an automated choice of using either a conventional Proportional-Integral-Derivative (PID) controller or a PID structure based (simultaneous) Pole and Zero Placement controller. The switching decisions between the two nonlinear fixed-structure controllers are made on the basis of the required performance measure by the fuzzy logic based supervisor operating at the highest level of the system. The fuzzy supervisor is also employed to tune the parameters of the multiple-controller online in order to achieve the desired system performance. The GLM for modelling complex systems assumes that the plant is represented by an equivalent model consisting of a linear time-varying sub-model plus a learning nonlinear sub-model based on a Radial Basis Function (RBF) neural network. The proposed control design brings together the dominant advantages of PID controllers (such as simplicity in structure and implementation) and the desirable attributes of Pole and Zero Placement controllers (such as stable set-point tracking and ease of parameter tuning). Simulation experiments using real-world nonlinear SISO and MIMO plant models, including realistic nonlinear vehicle models, demonstrate the effectiveness of the intelligent multiple-controller with respect to tracking set-point changes, achieving the desired speed of response, preventing system output overshooting and maintaining minimum-variance input and output signals, whilst penalising excessive control actions.
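For reference, the conventional half of such a multiple-controller can be sketched as a textbook discrete-time PID law (generic form with hypothetical gains, not the tuned controller of the thesis); a fuzzy supervisor would sit above a loop like this, switching control laws and retuning gains online.

```python
class PID:
    """Textbook discrete-time PID controller (illustrative gains, not the thesis' tuning)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# A supervisor in the spirit of the thesis would switch between this law and a
# pole/zero-placement law, and retune kp/ki/kd online from fuzzy rules.
pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.01)
u = pid.step(setpoint=1.0, measurement=0.8)
```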
APA, Harvard, Vancouver, ISO, and other styles
4

Grill, Maximilian Josef [Verfasser], Wolfgang A. [Akademischer Betreuer] Wall, Wolfgang A. [Gutachter] Wall, and Philipp J. [Gutachter] Thurner. "Computational Models and Methods for Molecular Interactions of Deformable Fibers in Complex Biophysical Systems / Maximilian Josef Grill ; Gutachter: Wolfgang A. Wall, Philipp J. Thurner ; Betreuer: Wolfgang A. Wall." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/122267274X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Zifan. "Complex systems and health systems, computational challenges." Thesis, Versailles-St Quentin en Yvelines, 2015. http://www.theses.fr/2015VERS001V/document.

Full text
Abstract:
The eigenvalue equation intervenes in models of infectious disease propagation and could be used as an ally of vaccination campaigns in the actions carried out by health care organizations. Epidemiological modelling techniques can be considered, by analogy, as computer viral propagation, which depends on the status of the underlying graph at a given time. We point out PageRank as a method to study epidemic spread and consider its calculation in the context of the small-world phenomenon. A parallel implementation of the multiple implicitly restarted Arnoldi method (MIRAM) is proposed for calculating the dominant eigenpair of stochastic matrices derived from very large real networks. The high damping factor of this problem makes many existing algorithms less efficient, while MIRAM is promising. We also propose in this thesis a parallel graph generator that can be used to generate distributed synthesized networks that display scale-free and small-world structures. This generator could also serve as a testbed for other graph-related algorithms. MIRAM is implemented within the Trilinos framework, targeting big data and sparse matrices representing scale-free networks, also known as power-law networks. A hypergraph partitioning approach is employed to minimize the communication overhead. The algorithm is tested on Grid5000, a nation-wide cluster of clusters. Experiments on very large networks such as Twitter and Yahoo, with over 1 billion nodes, are conducted. With our parallel implementation, a speedup of 27× is achieved compared to the sequential solver.
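For intuition, the PageRank vector discussed here is the dominant eigenvector of a damped, column-stochastic transition matrix; a dense, sequential power-iteration sketch follows (purely illustrative; the thesis' point is that restarted Arnoldi methods handle the high-damping-factor regime at billion-node scale, where such naive iteration is impractical).

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration on a small dense adjacency matrix (illustration only)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Column-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * P @ r + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))  # ranks sum to 1
```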
APA, Harvard, Vancouver, ISO, and other styles
6

Ahmadi, Adl Amin. "Computational Methods for Biomarker Identification in Complex Disease." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5895.

Full text
Abstract:
In a modern systematic view of biology, cell functions arise from the interaction between molecular components. One of the challenging problems in systems biology with high-throughput measurements is discovering the important components involved in the development and progression of complex diseases, which may serve as biomarkers for accurate predictive modeling and as targets for therapeutic purposes. Due to the non-linearity and heterogeneity of these complex diseases, traditional biomarker identification approaches have had limited success at finding clinically useful biomarkers. In this dissertation we propose novel methods for biomarker identification that explicitly take into account the non-linearity and heterogeneity of complex diseases. We first focus on the methods to deal with non-linearity by taking into account the interactions among features with respect to the disease outcome of interest. We then focus on the methods for finding disease subtypes with their subtype-specific biomarkers for heterogeneous diseases, where we show how prior biological knowledge and simultaneous disease stratification and personalized biomarker identification can help achieve better performance. We develop novel computational methods for more accurate and robust biomarker identification including methods for estimating the interactive effects, a network-based feature ranking algorithm that takes into account the interactive effects between biomarkers, different approaches for finding distances between somatic mutation profiles for better disease stratification using prior knowledge, and a network-regularized bi-clique finding algorithm for simultaneous subtype and biomarker identification. Our experimental results show that our proposed methods perform better than the state-of-the-art methods for both problems.
APA, Harvard, Vancouver, ISO, and other styles
7

Miller, David J. Ghosh Avijit. "New methods in computational systems biology /." Philadelphia, Pa. : Drexel University, 2008. http://hdl.handle.net/1860/2810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gibson, Michael Andrew Bruck Jehoshua. "Computational methods for stochastic biological systems /." Diss., Pasadena, Calif. : California Institute of Technology, 2000. http://resolver.caltech.edu/CaltechETD:etd-05132005-154222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Le, Xuan Tuan. "Understanding complex systems through computational modeling and simulation." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEP003.

Full text
Abstract:
Traditional approaches are not sufficient, and sometimes impossible, for dealing with complexity issues such as emergence, self-organization, evolution and adaptation in complex systems. As illustrated in this thesis by the author's practical work on a real-life project simulating an influenza epidemic in France together with interventions on a population, the spreading of infectious disease, as well as the interventions against it, can be considered as diffusion processes on complex networks of heterogeneous individuals in a society, which is treated as a reactive system. Modelling this system requires explicitly specifying each individual's behaviours and (re)actions, and transforming them into a computational model that has to be flexible, reusable and easy to code. Statecharts, typical of model-based programming, are the solution the thesis proposes: they serve both as a model that domain experts can validate and as a specification of the code, now supported by agent-programming tools. Bottom-up agent-based simulation uncovers emergence episodes in what-if scenarios in which the rules governing agents' behaviours change, requiring the agents to learn to adapt to these changes. Decision-tree learning is proposed to bring more flexibility and legibility to the modelling of agents' autonomous decision-making at simulation runtime. Computational models such as agent-based models are complementary to traditional ones, and in some cases, due to legal and ethical issues, they are the only viable solution.
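The decision-tree idea for agent behaviour can be conveyed with a toy sketch (invented context features and actions, using scikit-learn rather than any specific library from the thesis):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy agent policy learned from observed context -> action pairs. Features and
# actions are invented: [infected_neighbours, vaccinated (0/1), alert_level].
X = [[0, 1, 0], [3, 0, 2], [1, 0, 1], [5, 0, 2], [0, 0, 0], [2, 1, 1]]
y = ["go_out", "stay_home", "go_out", "stay_home", "go_out", "go_out"]

policy = DecisionTreeClassifier(max_depth=3).fit(X, y)

# During a simulation step, each agent queries the tree with its current
# context, keeping the behaviour rules both adaptive and human-readable.
print(policy.predict([[4, 0, 2]]))   # e.g. ['stay_home']
```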
APA, Harvard, Vancouver, ISO, and other styles
10

Safa, Issam I. "Towards Topological Methods for Complex Scalar Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322457949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wu, Fei. "Parallel computational methods for constrained mechanical systems." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282561.

Full text
Abstract:
Two methods suitable for parallel computation in the study of mechanical systems with holonomic and nonholonomic constraints are presented: one is an explicit solution based on generalized inverse algebra; the second solves problems of this class through the direct application of Gauss' principle of least constraint and genetic algorithms. Algorithms for both methods are presented for sequential and parallel implementations. The method using generalized inverses is able to solve problems that involve redundant, degenerate and intermittent constraints, and can identify inconsistent constraint sets. It also allows a single program to perform pure kinematic and dynamic analyses. Its computational cost is among the lowest in comparison with other methods. In addition, constraint violation control methods are investigated to improve integration accuracy and further reduce computational cost. Constrained dynamics problems are also solved using optimization methods by applying Gauss' principle directly. An objective function that incorporates constraints is derived using a symmetric scheme, which is implemented using genetic algorithms in a parallel computing environment. It is shown that this method is capable of solving the same cases of constraints as the former method. Examples and numerical experiments demonstrating the applications of the two methods to constrained multiparticle and multibody systems are presented.
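Gauss' principle, applied directly as in the second method, turns each instant of constrained dynamics into an optimization problem: choose accelerations a minimizing (a - a_u)' M (a - a_u), where a_u is the unconstrained acceleration, subject to the constraint equations. A bare-bones sketch with a toy penalty formulation and a simple evolutionary loop follows (illustrative only, not the parallel genetic-algorithm implementation of the dissertation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup with illustrative numbers: unit mass under gravity, and one linear
# acceleration-level constraint A @ a = b (here a_x + a_y = 0).
M = np.eye(2)
a_u = np.array([0.0, -9.81])            # unconstrained acceleration M^{-1} F
A, b = np.array([[1.0, 1.0]]), np.array([0.0])

def gauss_objective(a, penalty=1e4):
    """Gauss' least constraint, with the constraint handled by a penalty."""
    d = a - a_u
    return d @ M @ d + penalty * np.sum((A @ a - b) ** 2)

# Bare-bones evolutionary loop: keep the best candidates, mutate, repeat.
pop, sigma = rng.normal(0, 10, size=(200, 2)), 2.0
for _ in range(200):
    scores = np.array([gauss_objective(a) for a in pop])
    elite = pop[np.argsort(scores)[:40]]
    pop = np.repeat(elite, 5, axis=0) + rng.normal(0, sigma, size=(200, 2))
    sigma *= 0.97                        # anneal the mutation step

best = min(pop, key=gauss_objective)
print(best)   # approaches the analytic projection (4.905, -4.905)
```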
APA, Harvard, Vancouver, ISO, and other styles
12

Vazquez, Montelongo Erik Antonio. "Computational Study of Intermolecular Interactions in Complex Chemical Systems." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703283/.

Full text
Abstract:
This work discusses applications of computational simulations to a wide variety of chemical systems, investigating intermolecular interactions to develop force field parameters and gain new insights into chemical reactivity and structure stability. First, we cover the characterization of hydrogen-bonding interactions in pyrazine tetracarboxamide complexes employing quantum topological analyses. Second, we describe the use of quantum mechanical energy decomposition analysis (EDA) and non-covalent interactions (NCI) analysis to investigate hydrogen-bonding and intermolecular interactions in a series of representative 1-butyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide ([bmim][Tf2N]) ion pairs extracted from classical equilibrium and non-equilibrium molecular dynamics simulations. Third, we describe the use of the multipolar/polarizable AMOEBA force field to study the extraction of benzene from a gasoline model employing 1,3-dimethylimidazolium tetrafluoroborate, [DMIM][BF4], and ethylmethylimidazolium tetrafluoroborate, [EMIM][BF4]. Fourth, we cover the recent improvements and new capabilities of the QM/MM code "LICHEM". Finally, we describe the use of polarizable ab initio QM/MM calculations to study the reaction mechanism of N-tert-butyloxycarbonylation of aniline in [EMIm][BF4], and ground state destabilization in uracil DNA glycosylase (UDG).
APA, Harvard, Vancouver, ISO, and other styles
13

Ding, Jiarui. "Computational methods for systems biology data of cancer." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58164.

Full text
Abstract:
High-throughput genome sequencing and other techniques provide a cost-effective way to study cancer biology and seek precision treatment options. In this dissertation I address three challenges in cancer systems biology research: 1) predicting somatic mutations, 2) interpreting mutation functions, and 3) stratifying patients into biologically meaningful groups. Somatic single nucleotide variants are frequent therapeutically actionable mutations in cancer, e.g., the ‘hotspot’ mutations in known cancer driver genes such as EGFR, KRAS, and BRAF. However, only a small proportion of cancer patients harbour these known driver mutations. Therefore, there is a great need to systematically profile a cancer genome to identify all the somatic single nucleotide variants. I develop methods to discover these somatic mutations from cancer genomic sequencing data, taking into account the noise in high-throughput sequencing data and valuable validated genuine somatic mutations and non-somatic mutations. Of the somatic alterations acquired for each cancer patient, only a few mutations ‘drive’ the initialization and progression of cancer. To better understand the evolution of cancer, as well as to apply precision treatments, we need to assess the functions of these mutations to pinpoint the driver mutations. I address this challenge by predicting the mutations correlated with gene expression dysregulation. The method is based on hierarchical Bayes modelling of the influence of mutations on gene expression, and can predict the mutations that impact gene expression in individual patients. Although probably no two cancer genomes share exactly the same set of somatic mutations because of the stochastic nature of acquired mutations across the three billion base pairs, some cancer patients share common driver mutations or disrupted pathways. These patients may have similar prognoses and potentially benefit from the same kind of treatment options. I develop an efficient clustering algorithm to cluster high-throughput and high-dimensional biological datasets, with the potential to put cancer patients into biologically meaningful groups for treatment selection.
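The flavour of probabilistic somatic-mutation calling can be conveyed with a minimal posterior sketch (a textbook toy with invented error rates and priors, not the dissertation's actual caller): compare the likelihood of the observed variant-allele read counts under a sequencing-noise model against a true-variant model.

```python
from scipy.stats import binom

# Toy posterior that a site is somatic, from alt/total read counts. All rates
# and priors are invented; real callers model matched normals, copy number, etc.
def somatic_posterior(alt, depth, error_rate=0.01, vaf=0.3, prior=1e-4):
    like_noise = binom.pmf(alt, depth, error_rate)   # sequencing-noise model
    like_somatic = binom.pmf(alt, depth, vaf)        # true-variant model
    num = prior * like_somatic
    return num / (num + (1 - prior) * like_noise)

print(somatic_posterior(alt=9, depth=60))    # ~1: strong somatic evidence
print(somatic_posterior(alt=1, depth=60))    # ~0: consistent with noise
```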
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
14

Rosario, Ricardo C. H. "Computational Methods for Feedback Control in Structural Systems." NCSU, 1998. http://www.lib.ncsu.edu/theses/available/etd-19981030-155600.

Full text
Abstract:

Numerical methods, LQR control, an abstract formulation and reduced basis techniques for a system consisting of a thin cylindrical shell with surface-mounted piezoceramic actuators are investigated. Donnell-Mushtari equations, modified to include Kelvin-Voigt damping and passive patch contributions, are used to model the system dynamics. The voltage-induced piezoceramic patch contributions, used as input in the control regime, enter the equations as external forces and moments. Existence and uniqueness of solutions are demonstrated through variational and semigroup formulations of the system equations. The semigroup formulation is also used to establish theoretical control results and illustrate convergence of the finite dimensional controls and Riccati operators. The spatial components of the state are discretized using a Galerkin expansion, resulting in an ordinary differential equation that can be readily marched in time by existing ordinary differential equation solvers. Full order approximation methods which employ standard basis elements such as cubic or linear splines result in large matrix dimensions, rendering the system computationally expensive for real-time simulations. To lessen on-line computational requirements, reduced basis methods employing snapshots of the full order model as basis functions are investigated. As a first step in validating the model, a shell with obtainable analytic natural frequencies and modes was considered. The derived frequencies and modes were then compared with numerical approximations using full order basis functions. Further testing on the static and dynamic performance of the full order model was carried out through the following steps: (i) choose true state solutions, (ii) solve for the forces in the equations that would lead to these known solutions, and (iii) compare numerical results excited by the derived forces with the true solutions. Reduced order methods employing the Lagrange and the Karhunen-Loève proper orthogonal decomposition (POD) basis functions are implemented on the model. Finally, a state feedback method was developed and investigated computationally for both the full order and reduced order models.
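The snapshot-based reduction described here can be sketched in a few lines: collect full-order states as columns, take an SVD, and keep the leading left singular vectors as the reduced (POD) basis. A generic sketch with stand-in data, not the shell model itself:

```python
import numpy as np

# Minimal POD sketch: extract a reduced basis from snapshots of a full-order
# model (random stand-in data here) and project a state onto it.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 50))   # 50 snapshots of a 1000-DOF state

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
basis = U[:, :r]                              # reduced basis, shape (1000, r)

state = snapshots[:, 0]
coeffs = basis.T @ state                      # r reduced coordinates
reconstruction = basis @ coeffs
print(r, np.linalg.norm(state - reconstruction) / np.linalg.norm(state))
```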

APA, Harvard, Vancouver, ISO, and other styles
15

Cong, Yang, and 丛阳. "Optimization models and computational methods for systems biology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47752841.

Full text
Abstract:
Systems biology is a comprehensive quantitative analysis of the manner in which all the components of a biological system interact functionally over time. Mathematical modeling and computational methods are indispensable in such studies, especially for interpreting and predicting the complex interactions among all the components so as to obtain some desirable system properties. System dynamics, system robustness and control methods are three crucial topics in systems biology. In this thesis, the above properties are studied in four different biological systems. The outbreak and spread of infectious diseases have been questioned and studied for years. Understanding the spread mechanism and predicting the course of a disease enable scientists to evaluate isolation plans that can have significant effects on a particular epidemic. A differential equation model is proposed to study the dynamics of HIV spread in a network of prisons. In prisons, screening and quarantining are both efficient control measures. An optimization model is proposed to study optimal strategies for the control of HIV spread in a prison system. A primordium (plural: primordia) is an organ or tissue in its earliest recognizable stage of development. Primordial development in plants is critical to the proper positioning and development of plant organs. An optimization model and two control mechanisms are proposed to study the dynamics and robustness of primordial systems. Probabilistic Boolean Networks (PBNs) are mathematical models for studying the switching behavior in genetic regulatory networks. An algorithm is proposed to identify singleton and small attractors in PBNs, which correspond to cell types and cell states. The underlying problem is NP-hard in general. Our algorithm is theoretically and computationally demonstrated to be much more efficient than the naive algorithm that examines all possible states. The goal of studying the long-term behavior of a genetic regulatory network is to find control strategies such that the system attains desired properties. A control method is proposed to study multiple external interventions while minimizing the control cost. Robustness is a paramount property of living organisms. The impact degree is a measure of the robustness of a metabolic system against the deletion of single or multiple reaction(s). An algorithm is proposed to study the impact degree in the Escherichia coli metabolic system. Moreover, an approximation method based on the branching process is proposed for estimating the impact degree of metabolic networks. The effectiveness of our method is assured by testing with real-world Escherichia coli, Bacillus subtilis, Saccharomyces cerevisiae and Homo sapiens metabolic systems.
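For intuition, a singleton attractor of a Boolean network is a state that maps to itself under all update functions; the naive exhaustive search that the proposed algorithm improves upon looks like this toy sketch:

```python
from itertools import product

# Naive singleton-attractor search in a 3-gene Boolean network (toy example).
# Each rule maps the full state to one gene's next value.
rules = [
    lambda s: s[1] and not s[2],   # gene 0
    lambda s: s[0],                # gene 1
    lambda s: s[0] or s[1],        # gene 2
]

def step(state):
    return tuple(int(f(state)) for f in rules)

# Exhaustive scan over all 2^n states; exactly the exponential cost that
# motivates the thesis' more efficient algorithm.
fixed_points = [s for s in product((0, 1), repeat=3) if step(s) == s]
print(fixed_points)   # [(0, 0, 0)]
```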
published_or_final_version
Mathematics
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
16

Raghothama, Jayanth. "Integrating Computational and Participatory Simulations for Design in Complex Systems." Doctoral thesis, KTH, Vårdlogistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208170.

Full text
Abstract:
The understanding and conceptualization of cities and their constituent systems, such as transportation and healthcare, as open and complex is shifting the debates around the technical and communicative rationales of planning. Viewing cities in a holistic manner presents methodological challenges, where our understanding of complexity is applied in a tangible fashion to planning processes. Bridging the two rationales in the tools and methodologies of planning is necessary for the emergence of a 'non-linear rationality' of planning, one that accounts for and is premised upon complexity. Simulations representing complex systems provide evidence and support for planning, and have the potential to serve as an interface between the more abstract and political decision making and the material city systems. Moving beyond current planning methods, this thesis explores the role of simulations in planning. Recognizing the need for holistic representations, the thesis integrates multiple disparate simulations into a holistic whole, achieving complex representations of systems. These representations are then applied and studied in an interactive environment to address planning problems in different contexts. The thesis contributes an approach towards the development of complex representations of systems; improvements on participatory methods to integrate computational simulations; a nuanced understanding of the relative value of simulation constructs; and technologies and frameworks that facilitate the easy development of integrated simulations that can support participatory planning processes. The thesis develops these contributions through experiments involving problems and stakeholders from real-world systems. The approach towards the development of integrated simulations is realized in an open-source framework. The framework creates computationally efficient, scalable and interactive simulations of complex systems, which, used in a participatory manner, deliver tangible plans and designs.


APA, Harvard, Vancouver, ISO, and other styles
17

Mey, Antonia S. J. S. "Trajectory ensemble methods for understanding complex stochastic systems." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13368/.

Full text
Abstract:
This thesis investigates the equilibrium and dynamic properties of stochastic systems of varying complexity. The dynamic properties of lattice models -- the 1-d Ising model and a 3-d protein model -- and equilibrium properties of continuous models -- particles in various potentials -- are presented. Dynamics are studied according to a large deviation formalism, by looking at non-equilibrium ensembles of trajectories, classified according to a dynamical order parameter. The phase structure of the ensembles of trajectories is deduced from the properties of large-deviation functions, representing dynamical free-energies. The 1-d Ising model is studied with Glauber dynamics, uncovering the dynamical second-order transition at critical values of the counting field 's' and confirming the analytical predictions by Jack and Sollich. Next, the dynamics in an external magnetic field are studied, allowing the construction of a dynamic phase diagram in the space of temperature, s-field and magnetic field. The dynamic phase diagram is reminiscent of that of the 2-d Ising model. In contrast, Kawasaki dynamics give rise to a dynamical phase structure similar to the one observed in kinetically constrained models. The dynamics of a lattice protein model, represented by a self-avoiding walk with three different Hamiltonians, are studied. For the uniform Go Hamiltonian all dynamics occurs between non-native and native trajectories, whereas for heterogeneous Hamiltonians and full interaction Hamiltonians a first-order dynamical transition to sets of trapping trajectories is observed in the s-ensemble. The model is studied exhaustively for a particular sequence, constructing a qualitative phase diagram, from which a more general dynamic behaviour is extrapolated. Lastly, an estimator for equilibrium expectations, represented by a transition matrix in an extended space between temperatures and a set of discrete states obtained through the discretisation of a continuous space, is proposed. It is then demonstrated that this estimator outperforms conventional multi-temperature ensemble estimates by up to three orders of magnitude, by considering three models of increasing complexity: diffusive particles in a double-well potential, a multidimensional folding potential and molecular dynamics simulations of alanine dipeptide.
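The s-ensemble construction can be made concrete with a toy sketch: run Glauber dynamics on a small 1-d Ising chain, record the activity K (the number of accepted spin flips) for each trajectory, and bias trajectories by exp(-sK). The naive reweighting below is for illustration only; the thesis evaluates the corresponding large-deviation functions properly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sweeps, beta = 20, 200, 1.0   # chain length, sweeps per trajectory, 1/T

def activity():
    """Run one Glauber trajectory; return K, the number of accepted flips."""
    s = rng.choice([-1, 1], size=N)
    K = 0
    for _ in range(sweeps * N):
        i = rng.integers(N)
        dE = 2 * s[i] * (s[(i - 1) % N] + s[(i + 1) % N])   # periodic chain, J=1
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):  # Glauber rate
            s[i] = -s[i]
            K += 1
    return K

Ks = np.array([activity() for _ in range(200)])
for s_field in (0.0, 0.001):
    # Shift by the mean before exponentiating to avoid underflow; the constant
    # factor cancels in the weighted average.
    w = np.exp(-s_field * (Ks - Ks.mean()))
    print(s_field, (w * Ks).sum() / w.sum())    # biased mean activity <K>_s
```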
APA, Harvard, Vancouver, ISO, and other styles
18

Castillo, Andrea R. (Andrea Redwing). "Assessing computational methods and science policy in systems biology." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/51655.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2009.
Includes bibliographical references (p. 109-112).
In this thesis, I discuss the development of systems biology and issues in the progression of this science discipline. Traditional molecular biology has been driven by reductionism, with the belief that breaking down a biological system into the fundamental biomolecular components will elucidate such phenomena. We have reached limitations with this approach due to the complex and dynamical nature of life and our inability to intuit biological behavior from a modular perspective [37]. Mathematical modeling has been integral to current systems biology endeavors, since detailed analysis would be invasive if performed on humans experimentally or in clinical trials [17]. The interspecies commonalities in systemic properties and molecular mechanisms suggest that certain behaviors transcend species differentiation and therefore easily lend themselves to generalizing from simpler organisms to more complex organisms such as humans [7, 17]. Current methodologies in mathematical modeling and analysis have been diverse and numerous, with no standardization to progress the discipline in a collaborative manner. Without collaboration during this formative period, successful development and application of systems biology for societal welfare may be at risk. Furthermore, such collaboration has to be standardized in a fundamental approach to discover generic principles, in the manner of preceding long-standing science disciplines. This study effectively implements and analyzes a mathematical model of a three-protein biochemical network, the Synechococcus elongatus circadian clock.
I use mass action theory expressed in Kronecker products to exploit the ability to apply numerical methods (including sensitivity analysis via boundary value problem (BVP) formulation and the trapezoidal integration rule) and experimental techniques (including partial reaction fitting and enzyme-driven activations) when mathematically modeling large-scale biochemical networks. Amidst other applicable methodologies, my approach is grounded in the law of mass action because it is based on experimental data and biomolecular mechanistic properties, yet provides predictive power in the complete delineation of the biological system dynamics for all future time points. The results of my research demonstrate the holistic approach that mass action methodologies take in determining emergent properties of biological systems. I further stress the necessity of enforcing collaboration and standardization in future policymaking, with reconsideration of current stakeholder incentives to redirect academia and industry focus from new molecular entities to interests in a holistic understanding of the complexities and dynamics of life entities. Such redirection away from reductionism could further progress basic and applied scientific research to better our circumstances through new treatments and preventive measures for health, and the development of new strains and disease control in agriculture and ecology [13].
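A mass-action model of the kind described reduces to a system of ODEs in which each reaction contributes a rate proportional to the product of its reactant concentrations. A minimal sketch for a toy two-reaction network (invented rate constants, not the S. elongatus clock model):

```python
from scipy.integrate import solve_ivp

# Toy mass-action network (illustrative, not the circadian-clock model):
#   A + B -> C   (rate k1 * [A][B])
#   C     -> A   (rate k2 * [C])
k1, k2 = 0.5, 0.1

def rhs(t, y):
    a, b, c = y
    v1 = k1 * a * b          # law of mass action for each reaction
    v2 = k2 * c
    return [-v1 + v2, -v1, v1 - v2]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0, 0.0])
print(sol.y[:, -1])          # late-time concentrations of A, B, C
```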
by Andrea R. Castillo.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
19

Elmansy, Dalia F. "Computational Methods to Characterize the Etiology of Complex Diseases at Multiple Levels." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1583416431321447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Wiltshire, Serge William. "On The Application Of Computational Modeling To Complex Food Systems Issues." ScholarWorks @ UVM, 2019. https://scholarworks.uvm.edu/graddis/1077.

Full text
Abstract:
Transdisciplinary food systems research aims to merge insights from multiple fields, often revealing confounding, complex interactions. Computational modeling offers a means to discover patterns and formulate novel solutions to such systems-level problems. The best models serve as hubs—or boundary objects—which ground and unify a collaborative, iterative, and transdisciplinary process of stakeholder engagement. This dissertation demonstrates the application of agent-based modeling, network analytics, and evolutionary computational optimization to the pressing food systems problem areas of livestock epidemiology and global food security. It is comprised of a methodological introduction, an executive summary, three journal-article formatted chapters, and an overarching discussion section. Chapter One employs an agent-based computer model (RUSH-PNBM v.1.1) developed to study the potential impact of the trend toward increased producer specialization on resilience to catastrophic epidemics within livestock production chains. In each run, an infection is introduced and may spread according to probabilities associated with the various modes of contact between hog producer, feed mill, and slaughter plant agents. Experimental data reveal that more-specialized systems are vulnerable to outbreaks at lower spatial densities, have more abrupt percolation transitions, and are characterized by less-predictable outcomes; suggesting that reworking network structures may represent a viable means to increase biosecurity. Chapter Two uses a calibrated, spatially-explicit version of RUSH-PNBM (v.1.2) to model the hog production chains within three U.S. states. Key metrics are calculated after each run, some of which pertain to overall network structures, while others describe each actor’s positionality within the network. A genetic programming algorithm is then employed to search for mathematical relationships between multiple individual indicators that effectively predict each node’s vulnerability. This “meta-metric” approach could be applied to aid livestock epidemiologists in the targeting of biosecurity interventions and may also be useful to study a wide range of complex network phenomena. Chapter Three focuses on food insecurity resulting from the projected gap between global food supply and demand over the coming decades. While no single solution has been identified, scholars suggest that investments into multiple interventions may stack together to solve the problem. However, formulating an effective plan of action requires knowledge about the level of change resulting from a given investment into each wedge, the time before that effect unfolds, the expected baseline change, and the maximum possible level of change. This chapter details an evolutionary-computational algorithm to optimize investment schedules according to the twin goals of maximizing global food security and minimizing cost. Future work will involve parameterizing the model through an expert informant advisory process to develop the existing framework into a practicable food policy decision-support tool.
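The percolation-style epidemic experiments described here can be miniaturized to a toy cascade: seed one infection on a random contact network and let it spread edge by edge with a fixed transmission probability (a generic networkx sketch, not RUSH-PNBM itself).

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def outbreak_size(G, p_transmit=0.3):
    """Toy SIR cascade: each infectious node gets one chance per neighbour."""
    seed = rng.choice(list(G.nodes))
    infected, frontier = {seed}, {seed}
    while frontier:
        new = set()
        for node in frontier:
            for nb in G.neighbors(node):
                if nb not in infected and rng.random() < p_transmit:
                    new.add(nb)
        infected |= new
        frontier = new          # the previous frontier recovers
    return len(infected)

G = nx.erdos_renyi_graph(500, 0.01, seed=1)
sizes = [outbreak_size(G) for _ in range(50)]
print(np.mean(sizes), max(sizes))   # abrupt jumps in size signal percolation
```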
APA, Harvard, Vancouver, ISO, and other styles
21

Henning, Peter Allen. "Computational Parameter Selection and Simulation of Complex Sphingolipid Pathway Metabolism." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/16202.

Full text
Abstract:
Systems biology is an emerging field of study that seeks to provide systems-level understanding of biological systems through the integration of high-throughput biological data into predictive computational models. The integrative nature of this field stands in sharp contrast to the reductionist methods that have been employed since the advent of molecular biology. Systems biology investigates not only the individual components of the biological system, such as metabolic pathways, organelles, and signaling cascades, but also considers the relationships and interactions between the components, in the hope that an understandable model of the entire system can eventually be developed. This field of study is being hailed by experts as a potentially vital technology in revolutionizing the pharmaceutical development process in the post-genomic era. This work not only provides a systems biology investigation into principles governing de novo sphingolipid metabolism, but also addresses the various computational obstacles that arise in converting high-throughput data into an insightful model.
APA, Harvard, Vancouver, ISO, and other styles
22

Kartal, Elcin. "Metamodeling Complex Systems Using Linear And Nonlinear Regression Methods." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608930/index.pdf.

Full text
Abstract:
Metamodeling is a very popular approach for the approximation of complex systems. Metamodeling techniques can be categorized, according to the type of regression method employed, into linear and nonlinear models. The Response Surface Methodology (RSM) is an example of linear regression. In classical RSM metamodels, parameters are estimated using the Least Squares (LS) method. Robust regression techniques, such as Least Absolute Deviation (LAD) and M-regression, are also considered in this study due to the outliers existing in the data sets. Artificial Neural Networks (ANN) and Multivariate Adaptive Regression Splines (MARS) are examples of nonlinear regression techniques. In this thesis these two nonlinear metamodeling techniques are constructed and their performances are compared with the performances of linear models.
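A classical RSM metamodel is simply a low-order polynomial fitted by least squares; a minimal sketch for a quadratic surface in two design variables follows (synthetic data standing in for simulation responses).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "simulation" responses on a 2-variable design (stand-in data).
X = rng.uniform(-1, 1, size=(30, 2))
y = 3 + 2*X[:, 0] - X[:, 1] + 0.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 30)

# Quadratic response-surface basis: 1, x1, x2, x1^2, x2^2, x1*x2.
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1*x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # classical LS fit
print(np.round(coef, 2))   # recovers ~[3, 2, -1, 0, 0, 0.5]

# A robust variant (LAD or M-regression) would minimize absolute or
# downweighted deviations instead, reducing the influence of outliers.
```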
APA, Harvard, Vancouver, ISO, and other styles
23

Yang, Chao. "ON PARTICLE METHODS FOR UNCERTAINTY QUANTIFICATION IN COMPLEX SYSTEMS." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511967797285962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Oliver, John M. "Multi-objective optimisation methods applied to complex engineering systems." Thesis, Cranfield University, 2014. http://dspace.lib.cranfield.ac.uk/handle/1826/11707.

Full text
Abstract:
This research proposes, implements and analyses a novel framework for multiobjective optimisation through evolutionary computing aimed at, but not restricted to, real-world problems in the engineering design domain. Evolutionary algorithms have been used to tackle a variety of non-linear multiobjective optimisation problems successfully, but their success is governed by key parameters which have been shown to be sensitive to the nature of the particular problem, incorporating concerns such as the number of objectives and variables, and the size and topology of the search space, making it hard to determine the best settings in advance. This work describes a real-encoded multi-objective optimising evolutionary algorithm framework, incorporating a genetic algorithm, that uses self-adaptive mutation and crossover in an attempt to avoid such problems, and which has been benchmarked against both standard optimisation test problems in the literature and a real-world airfoil optimisation case. For this last case, the minimisation of drag and maximisation of lift coefficients of a well documented standard airfoil, the framework is integrated with a freeform deformation tool to manage the changes to the section geometry, and XFoil, a tool which evaluates the airfoil in terms of its aerodynamic efficiency. The performance of the framework on this problem is compared with those of two other heuristic MOO algorithms known to perform well, the Multi-Objective Tabu Search (MOTS) and NSGA-II, showing that this framework achieves better or at least no worse convergence. The framework of this research is then considered as a candidate for smart (electricity) grid optimisation. Power networks can be improved in both technical and economical terms by the inclusion of distributed generation which may include renewable energy sources. The essential problem in national power networks is that of power flow and in particular, optimal power flow calculations of alternating (or possibly, direct) current. The aims of this work are to propose and investigate a method to assist in the determination of the composition of optimal or high-performing power networks in terms of the type, number and location of the distributed generators, and to analyse the multi-dimensional results of the evolutionary computation component in order to reveal relationships between the network design vector elements and to identify possible further methods of improving models in future work. The results indicate that the method used is a feasible one for the achievement of these goals, and also for determining optimal flow capacities of transmission lines connecting the bus bars in the network.
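At the core of any such multi-objective evolutionary framework sits Pareto dominance; a minimal sketch of extracting the non-dominated front from a population of objective vectors follows (a generic building block, not the self-adaptive GA of the thesis).

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated points when minimizing all columns."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # j dominates i if j is <= everywhere and < somewhere.
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Toy population: column 0 = drag (minimize), column 1 = -lift (minimize).
F = np.array([[0.020, -1.10], [0.018, -0.90], [0.025, -1.30], [0.030, -1.00]])
print(pareto_front(F))   # [0 1 2]: point 3 is dominated by point 0
```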
APA, Harvard, Vancouver, ISO, and other styles
25

Lawrence, A. Raelene. "A computational investigation of inorganic systems using ab initio methods /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9998495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Dimova, Dilyana [Verfasser]. "Computational Methods Generating High-Resolution Views of Complex Structure-Activity Relationships / Dilyana Dimova." Bonn : Universitäts- und Landesbibliothek Bonn, 2014. http://d-nb.info/1052061044/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Brocklebank, Denise. "Computational methods for genetic analysis of complex pedigrees : An application to Huntington's disease." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Uapipatanakul, Sakchai. "Development of computational methods for conjugate heat transfer analysis in complex industrial applications." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/development-of-computational-methods-for-conjugate-heat-transfer-analysis-in-complex-industrial-applications(3910eec7-601d-4da1-8c08-854404bbba3a).html.

Full text
Abstract:
Conjugate heat transfer is a crucial issue in a number of turbulent engineering fluid flow applications, particularly in nuclear engineering and heat exchanger equipment. Temperature fluctuations in the near-wall turbulent fluid lead to similar fluctuations in the temperature of the solid wall, and these fluctuations in the solid cause thermal stress in the material which may lead to fatigue and finally damage. In the present study, the Reynolds Averaged Navier-Stokes (RANS) modelling approach has been adopted, with four-equation k−ε−θ²−εθ eddy-viscosity based models employed to account for the turbulence in the fluid region. Transport equations for the mean temperature, temperature variance, θ², and its dissipation rate, εθ, have been simultaneously solved across the solid region, with suitable matching conditions for the thermal fields at the fluid/solid interface. The study started by examining the case of fully developed channel flow with heat transfer through a thick wall, for which Tiselj et al. [2001b] provide DNS data at a range of thermal activity ratios (essentially a ratio of the fluid and solid thermal material properties). Initial simulations were performed with the existing Hanjalić et al. [1996] four-equation model, extended across the solid region as described above. However, this model was found not to produce the correct sensitivity to thermal activity ratio of the near-wall θ² values in the fluid, or the decay rate of θ² across the solid wall. Therefore, a number of model refinements are proposed in order to improve predictions in both fluid and solid regions over a range of thermal activity ratios. These refinements are based on elements from a three-equation non-linear EVM designed to bring about better profiles of the variables k, ε, θ² and εθ near the wall, and their inclusion is shown to produce a good match with the DNS data of Tiselj et al. [2001b]. Thereafter, a further, more complex test case has been investigated, namely an opposed wall jet flow, in which a hot wall jet flows vertically downward into an ascending cold flow. As in the channel flow case, the thermal field is also solved across the solid walls. The modified model results are compared with results from the Hanjalić model and with the LES and experimental data of Addad et al. [2004] and He et al. [2002] respectively. In this test case, the modified model presents generally good agreement with the LES and experimental data in the dynamic flow field, particularly at the penetration point of the jet flow. In the thermal field, the modified model also shows improvements in the θ² predictions, particularly in the decay of θ² across the wall, which is consistent with the behaviour found in the simple channel flow case. Although the modified model has shown significant improvements in the conjugate heat transfer predictions, in some instances it was difficult to obtain fully-converged steady-state numerical results. In particular, the investigation of the inlet jet location showed non-convergent numerical results under the steady-state assumption. Thus, unsteady flow calculations have been performed for this case. These show large-scale unsteadiness in the jet penetration area. In the dynamic field, the total rms values of the modelled and mean fluctuations show good agreement with the LES data. In the thermal field calculation, a range of flow conditions and solid material properties have been considered, and the predicted conjugate heat transfer performance is broadly in line with the behaviour shown in the channel flow.
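The fluid/solid thermal coupling at the heart of conjugate heat transfer can be illustrated, in grossly simplified form, by steady 1-D conduction through two layers matched by continuity of temperature and heat flux at the interface (illustrative property values; the thesis of course solves the full RANS equations):

```python
# Minimal 1-D steady conjugate conduction sketch: a fluid-side layer and a
# solid wall with different conductivities, coupled by continuity of T and
# heat flux at the interface (a toy analogue of the thermal matching).
k_fluid, k_solid = 0.6, 16.0       # W/(m K), illustrative values
L1, L2 = 0.01, 0.02                # layer thicknesses (m)
T_left, T_right = 350.0, 300.0     # boundary temperatures (K)

# Series thermal resistance gives the steady flux, then the interface T.
R = L1 / k_fluid + L2 / k_solid
q = (T_left - T_right) / R         # heat flux (W/m^2)
T_interface = T_left - q * L1 / k_fluid
print(q, T_interface)
```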
APA, Harvard, Vancouver, ISO, and other styles
29

Jha, Sumit Kumar. "Model Validation and Discovery for Complex Stochastic Systems." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/10.

Full text
Abstract:
In this thesis, we study two fundamental problems that arise in the modeling of stochastic systems: (i) Validation of stochastic models against behavioral specifications such as temporal logics, and (ii) Discovery of kinetic parameters of stochastic biochemical models from behavioral specifications. We present a new Bayesian algorithm for Statistical Model Checking of stochastic systems based on a sequential version of Jeffreys’ Bayes Factor test. We argue that the Bayesian approach is more suited for application domains like systems biology modeling, where distributions on nuisance parameters and priors may be known. We prove that our Bayesian Statistical Model Checking algorithm terminates for a large subclass of prior probabilities. We also characterize the Type I/II errors associated with our algorithm. We experimentally demonstrate that this algorithm is suitable for the analysis of complex biochemical models like those written in the BioNetGen language. We then argue that i.i.d. sampling based Statistical Model Checking algorithms are not an effective way to study rare behaviors of stochastic models and present another Bayesian Statistical Model Checking algorithm that can incorporate non-i.i.d. sampling strategies. We also present algorithms for synthesis of chemical kinetic parameters of stochastic biochemical models from high level behavioral specifications. We consider the setting where a modeler knows facts that must hold on the stochastic model but is not confident about some of the kinetic parameters in her model. We suggest algorithms for discovering these kinetic parameters from facts stated in appropriate formal probabilistic specification languages. Our algorithms are based on our theoretical results characterizing the probability of a specification being true on a stochastic biochemical model. We have applied this algorithm to discover kinetic parameters for biochemical models with as many as six unknown parameters.
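The sequential Bayes-factor test can be sketched for the simplest case, deciding whether the probability p that a random trace satisfies a property exceeds a threshold theta, from i.i.d. Bernoulli samples under a uniform prior (a textbook toy, not the exact algorithm of the thesis):

```python
import random
from scipy.integrate import quad

# Toy sequential Bayes-factor test of H0: p >= theta vs H1: p < theta, where
# p is the (unknown) probability that a sampled trace satisfies the property.
def bayes_factor(successes, failures, theta):
    like = lambda p: p**successes * (1 - p)**failures
    m0, _ = quad(like, theta, 1)              # evidence for H0 (uniform prior)
    m1, _ = quad(like, 0, theta)              # evidence for H1
    return (m0 / (1 - theta)) / (m1 / theta)  # normalize by the prior mass

random.seed(0)
p_true, theta, bound = 0.9, 0.8, 100.0        # invented numbers for the demo
s = f = 0
bf = 1.0
while 1 / bound < bf < bound:                 # sample until a bound is crossed
    if random.random() < p_true:              # one simulated model trace
        s += 1
    else:
        f += 1
    bf = bayes_factor(s, f, theta)
print(s, f, bf)    # H0 is accepted once bf exceeds the bound
```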
APA, Harvard, Vancouver, ISO, and other styles
30

Jiang, Hao, and 姜昊. "Construction and computation methods for biological networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50662144.

Full text
Abstract:
Biological systems are complex in that they comprise a large number of interacting entities, and their dynamics follow mechanistic regulations for movement and the organization of biological function. Established computational modeling is a powerful approach for studying and manipulating biologically relevant systems. The inner structure and behavior of complex biological systems can be analyzed and understood through computable biological networks. In this thesis, models and computation methods are proposed for biological networks. The study of Genetic Regulatory Networks (GRNs) is an important research topic in genomic research. Several promising techniques have been proposed for capturing the behavior of gene regulation in biological systems. One of the promising models for GRNs, the Boolean Network (BN), has gained a lot of attention. However, little light has been shed on the analysis of the internal connection between the dynamics of biological molecules and network systems. Inference and completion problems of a BN from a given set of singleton attractors are considered to be important in understanding the relationship between the dynamics of biological molecules and network systems. Discrete dynamic systems models have recently been proposed to model time-course microarray measurements of genes, but delay effects may be modeled as a realistic factor in studying GRNs. A delay discrete dynamic systems model is developed to model GRNs. Inference and analysis of networks is one of the grand challenges in modern statistical biology. Machine learning methods, in particular the Support Vector Machine (SVM), have been successfully applied in predicting internal connections embedded in networks. Kernels in conjunction with SVMs demonstrate strong ability in performing various tasks such as biomedical diagnosis, function prediction and motif extraction. In biomedical diagnosis, data sets are always high dimensional, which provides a challenging research problem in the machine learning area. Novel kernels using distance metrics that are not common in the machine learning framework are proposed for the tumor differentiation discrimination problem. The protein function prediction problem is a hot topic in bioinformatics. The k-spectrum kernel is among the most popular models for describing protein sequences. Taking into consideration positive semi-definiteness in kernel construction, an eigen-matrix translation technique is introduced in a novel kernel formulation to give better prediction results. In a further step, the power of the eigen-matrix translation technique in feature selection is demonstrated through mathematical formulation. Due to the structural complexity of carbohydrates, the study of carbohydrate sugar chains has lagged behind that of DNA and proteins. A weighted q-gram kernel is constructed for classifying glycan structures, with limitations in feature extraction. A biochemically-weighted tree kernel is then proposed to enhance the ability in both classification and motif extraction. Finally, the problem of metabolite biomarker discovery is researched. Human diseases, in particular metabolic diseases, can be directly caused by the lack of essential metabolites. The identification of metabolite biomarkers has significant importance in the study of biochemical reaction and signaling networks. A promising computational approach is proposed to identify metabolic biomarkers through integrating biomedical data and disease-specific gene expression data.
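The k-spectrum kernel mentioned here counts shared length-k substrings between two sequences; a minimal sketch of the generic definition follows (not the weighted or eigen-matrix-translated variants of the thesis):

```python
from collections import Counter

def spectrum_kernel(x: str, y: str, k: int = 3) -> int:
    """k-spectrum kernel: inner product of k-mer count vectors."""
    cx = Counter(x[i:i + k] for i in range(len(x) - k + 1))
    cy = Counter(y[i:i + k] for i in range(len(y) - k + 1))
    return sum(cx[kmer] * cy[kmer] for kmer in cx.keys() & cy.keys())

# Gram-matrix entries like these feed directly into an SVM.
print(spectrum_kernel("MKTAYIAKQR", "MKTAYQAKQR", k=3))   # 5 shared 3-mers
```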
Mathematics
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
31

Repiscak, Peter. "Computational chemistry for complex systems : open-shell molecules to conjugated organic materials." Thesis, Heriot-Watt University, 2017. http://hdl.handle.net/10399/3348.

Full text
Abstract:
This thesis focuses on two different, but equally challenging, areas of computational chemistry: transition metal–organic molecule interactions and the parameterisation of organic conjugated polymers for molecular dynamics simulations. The metal-binding properties are important for understanding the biomolecular action of a type 2 diabetes drug and for developing novel protocols for redox calculations of copper systems; in this area the challenge relates mainly to the complex electronic structure of open-shell transition metals. The main challenges in parameterising conjugated polymers stem from the size of the studied systems, their conjugated nature and the inclusion of the environment. The metal-binding properties and electronic structures of copper complexes of the type 2 diabetes drug metformin (Metf), and of other similar but often inactive compounds, were examined using DFT methods. It was found that for the neutral compounds the differences in their biological effects cannot be explained solely by their copper-binding properties. The proposed mechanism potentially explaining the difference in the biomolecular mode of action involves deprotonation of the biguanide and Metf compounds at the higher mitochondrial pH, which would lead to the formation of more stable copper complexes and could affect mitochondrial copper homeostasis. In addition, the redox properties of copper–biguanide complexes could interfere with the sensitive redox chemistry of the mitochondria or interact with important metalloproteins there. Understanding copper-binding properties is also important for the systematic development and testing of computational protocols for calculating the reduction potentials of copper complexes. Copper macrocyclic complexes, previously used as model systems for redox-active metalloenzymes and for which experimentally determined redox potentials are available, were used as model systems. First, the adequacy of single-reference methods such as DFT was examined for these systems; then various DFT functionals and basis sets were tested in order to develop an accurate redox-potential protocol. Good relative correlations were obtained for several functionals, while the best absolute agreement was obtained either with the M06/cc-pVTZ functional and the SMD solvation model, or with the M06L or TPSSTPSS functionals with the cc-pVTZ basis set and the PCM solvation model. Organic conjugated polymers have great potential owing to their applications in organic optoelectronics. Various wavefunction and DFT methods are used to systematically develop a parameterisation scheme for deriving selected force-field parameters, such as the torsional potentials between monomer units, which are critical for these systems, and partial charges. Critical points of such a parameterisation are addressed in order to obtain accurate MD simulations that could provide valuable insight into material morphology and conformation, which affect optical properties and conductivity. It was shown that a two-step approach of geometry optimisation with CAM-B3LYP/6-31G* and a single-point (SP) energy scan with CAM-B3LYP/cc-pVTZ yields accurate dihedral potentials in agreement with those calculated using higher-level methods such as MP2 and CBS-limit CCSD(T).
Further, by investigating the partial charge distribution for increasing backbone lengths of fluorene and thiophene, it was found that a three-residue model of converged charge distributions can be obtained using the RESP scheme. The three partial-charge residues can then be used to build and simulate much longer polymers without the need to re-parameterise the charge distributions. For the side-chains, it was found that converged charge sets cannot be obtained for side-chain lengths of up to 10 carbons, due to the strong asymmetry between the side-chain ends. Initial validation of the derived force-field parameters, performed by simulating 32-mers of fluorene with octyl side-chains (PF8) and thiophene with hexyl side-chains (P3HT) in chloroform and calculating persistence lengths and end-to-end lengths, showed close correspondence to experimentally obtained values.
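To make the torsional-potential step concrete, here is a minimal sketch, assuming an OPLS-style cosine series and made-up placeholder scan energies (standing in for CAM-B3LYP/cc-pVTZ single points), of fitting dihedral force-field terms by least squares; it is a generic illustration, not the thesis's actual parameterisation scheme.

```python
# Least-squares fit of an OPLS-style cosine series to a dihedral scan.
# Scan energies below are fabricated placeholders for illustration only.
import numpy as np

phi = np.radians(np.arange(0.0, 181.0, 15.0))            # scan angles, 0-180 deg
e_qm = np.array([4.1, 3.2, 1.6, 0.4, 0.1, 0.6, 1.5,
                 2.1, 1.9, 1.2, 0.6, 0.3, 0.5])          # kcal/mol, placeholder

# OPLS form: E(phi) = sum_n V_n/2 * (1 + (-1)^(n+1) * cos(n*phi)), n = 1..4
basis = np.column_stack([0.5 * (1 + np.cos(phi)),
                         0.5 * (1 - np.cos(2 * phi)),
                         0.5 * (1 + np.cos(3 * phi)),
                         0.5 * (1 - np.cos(4 * phi))])
v, *_ = np.linalg.lstsq(basis, e_qm - e_qm.min(), rcond=None)
print("fitted V1..V4 [kcal/mol]:", v)
```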
APA, Harvard, Vancouver, ISO, and other styles
32

Almaatouq, Abdullah Mohammed. "Complex systems and a computational social science perspective on the labor market." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104577.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-109).
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016.
Labor market institutions are central to modern economies, and their policies can directly affect unemployment rates and economic growth. At the individual level, unemployment often has a detrimental impact on people's well-being and health. At the national level, high employment is one of the central goals of any economic policy, due to its close association with national prosperity. The main goal of this thesis is to highlight the need for frameworks that take into account the complex structure of labor market interactions. In particular, we explore the benefits of leveraging tools from computational social science, network science, and data-driven theories to measure the flow of opportunities and information in the context of the labor market. First, we investigate our key hypothesis: that opportunities and information flow through weak ties, and that this flow is a key determinant of the length of unemployment. We then extend the idea of opportunity/information flow to clusters of other economic activities, where we expect the flow within clusters of related activities to be higher than within isolated activities. This captures the intuition that related activities involve more "capitals" and require similar "capabilities." Therefore, more extensive clusters of economic activities should generate greater growth by exploiting the greater flow of opportunities and information. We quantify the opportunity/information flow using a complexity measure of two economic activities (i.e., jobs and exports).
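As a minimal sketch of the weak-tie idea, assuming a stand-in toy network rather than the labor-market data used in the thesis, neighbourhood overlap can serve as a proxy for tie strength: edges with low overlap are the bridging "weak ties" through which novel information is expected to flow.

```python
# Granovetter-style neighbourhood overlap on a toy network (illustrative only).
import networkx as nx

G = nx.karate_club_graph()  # hypothetical stand-in for a real contact network

def overlap(g, u, v):
    """Fraction of shared neighbours; low overlap marks a bridging, weak tie."""
    nu, nv = set(g[u]) - {v}, set(g[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

weakest = sorted(G.edges(), key=lambda e: overlap(G, *e))[:5]
print("most bridge-like ties:", weakest)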
by Abdullah Almaatouq.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
33

Botha, Paul Jacobus. "Detecting change in complex process systems with phase space methods." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019/508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zhao, Yang. "Computational Methods for Analyzing Chemical Graphs and Biological Networks." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Basudhar, Anirban. "Computational Optimal Design and Uncertainty Quantification of Complex Systems Using Explicit Decision Boundaries." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/201491.

Full text
Abstract:
This dissertation presents a sampling-based method that can be used for uncertainty quantification and deterministic or probabilistic optimization. The objective is to simultaneously address several difficulties faced by classical techniques based on response values and their gradients. In particular, this research addresses issues with discontinuous and binary (pass or fail) responses, and with multiple failure modes. All methods in this research are developed with the aim of addressing problems that have limited data due to the high cost of computation or experiment, e.g. vehicle crashworthiness and fluid-structure interaction. The core idea of this research is to construct an explicit boundary separating allowable and unallowable behaviors, based on classification information of responses instead of their actual values. As a result, the proposed method is naturally suited to handle discontinuities and binary states. A machine learning technique referred to as support vector machines (SVMs) is used to construct the explicit boundaries. SVM boundaries can be highly nonlinear, which allows one to use a single SVM to represent multiple failure modes. One of the major concerns in the design and uncertainty quantification communities is to reduce computational costs. To address this issue, several adaptive sampling methods have been developed as part of this dissertation; specific sampling methods have been developed for reliability assessment, deterministic optimization, and reliability-based design optimization. Adaptive sampling allows the construction of accurate SVMs with limited samples. However, like any approximation method, the construction of an SVM is subject to errors. A new method to quantify the prediction error of SVMs, based on probabilistic support vector machines (PSVMs), is also developed. It is used to provide a relatively conservative probability of failure, to mitigate some of the adverse effects of an inaccurate SVM. In the context of reliability assessment, the proposed method is presented for uncertainties represented by random variables as well as spatially varying random fields. In order to validate the developed methods, analytical problems with known solutions are used. In addition, the approach is applied to application problems, such as structural impact and tolerance optimization, to demonstrate its strengths in the context of discontinuous responses and multiple failure modes.
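The following sketch shows the core idea on a toy problem: train an SVM on pass/fail labels and use the resulting explicit boundary to classify new designs without needing response values or gradients. It assumes scikit-learn and a hypothetical limit state; it is an illustration of the approach, not the dissertation's implementation.

```python
# Explicit SVM decision boundary for binary (pass/fail) responses.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))               # two design variables
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.8).astype(int)    # hypothetical limit state

svm = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X, y)

x_new = np.array([[0.5, 0.6]])
print("predicted feasible:", bool(svm.predict(x_new)[0]))
print("signed distance to boundary:", svm.decision_function(x_new)[0])
```

Because the classifier sees only pass/fail labels, a discontinuous or binary response poses no special difficulty, and a single nonlinear boundary can envelop several failure modes at once.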
APA, Harvard, Vancouver, ISO, and other styles
36

Guan, Jinyan. "Bayesian Generative Modeling of Complex Dynamical Systems." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612950.

Full text
Abstract:
This dissertation presents a Bayesian generative modeling approach for complex dynamical systems, applied to emotion-interaction patterns within multivariate data collected in social psychology studies. While dynamical models have been used by social psychologists to study complex psychological and behavioral patterns in recent years, most of these studies have been limited by using regression methods to fit the model parameters from noisy observations. These regression methods mostly rely on estimates of derivatives from the noisy observations, and thus easily result in overfitting and fail to predict future outcomes. A Bayesian generative model solves this problem by integrating prior knowledge of where the data comes from with the observed data through posterior distributions. It allows the development of theoretical ideas and mathematical models to be independent of inference concerns. Moreover, Bayesian generative statistical modeling allows the model to be evaluated on its predictive power rather than on the residual-error reduction used in regression methods, preventing overfitting in social psychology data analysis. In the proposed Bayesian generative modeling approach, this dissertation uses the State Space Model (SSM) to model the dynamics of emotion interactions. Specifically, it tests the approach on a class of psychological models aimed at explaining the emotional dynamics of interacting couples in committed relationships. The latent states of the SSM are continuous real numbers that represent the levels of the true emotional states of both partners. One can obtain the latent states at all subsequent time points by evolving a differential equation (typically a coupled linear oscillator (CLO)) forward in time from some known initial state at the starting time. The multivariate observed states include self-reported emotional experiences and physiological measurements of both partners during the interactions. To test whether well-being factors, such as body weight, can help to predict emotion-interaction patterns, we construct functions that determine the prior distributions of the CLO parameters of individual couples based on existing emotion theories. In addition, we allow a single latent state to generate multivariate observations and learn the group-shared coefficients that specify the relationship between the latent states and the multivariate observations. Furthermore, we model the nonlinearity of the emotional interaction by allowing smooth changes (drift) in the model parameters. By restricting the stochasticity to the parameter level, the proposed approach models the dynamics of longer periods of social interaction, under the assumption that the interaction dynamics vary slowly and smoothly over time. The approach achieves this by applying Gaussian Process (GP) priors with smooth covariance functions to the CLO parameters. We also propose to model emotion regulation patterns as clusters of the dynamical parameters. To infer the parameters of the proposed Bayesian generative model from noisy experimental data, we develop a Gibbs sampler that learns the parameters of the patterns from a set of training couples. To evaluate the fitted model, we develop a multi-level cross-validation procedure that learns the group-shared parameters and distributions from training data and tests the learned models on held-out testing data.
During testing, we use the learned shared model parameters to fit the individual CLO parameters to the first 80% of the time points of the testing data by Monte Carlo sampling, and then predict the states of the last 20% of the time points. By evaluating models with cross-validation, one can estimate whether complex models are overfitted to noisy observations and fail to generalize to unseen data. We test the approach on both synthetic data generated by the generative model and real data collected in multiple social psychology experiments. The proposed approach has the potential to model other complex behavior, since the generative model is not restricted to particular forms of the underlying dynamics.
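A minimal sketch of the latent dynamics, assuming hypothetical parameter values: two coupled linear oscillators evolved forward from an initial state, as the SSM does before noisy observations are generated from the latent trajectory.

```python
# Coupled linear oscillator (CLO) toy simulation; parameters are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def clo(t, s, f1, f2, c1, c2):
    """s = [x1, v1, x2, v2]; each partner's acceleration couples to the other."""
    x1, v1, x2, v2 = s
    a1 = f1 * x1 + c1 * (x2 - x1)   # own frequency term plus coupling
    a2 = f2 * x2 + c2 * (x1 - x2)
    return [v1, a1, v2, a2]

sol = solve_ivp(clo, (0.0, 60.0), [1.0, 0.0, -0.5, 0.0],
                args=(-1.0, -1.3, 0.4, 0.2))
print(sol.y[[0, 2], -1])   # final latent emotional states of both partners
```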
APA, Harvard, Vancouver, ISO, and other styles
37

Mwanga, Alifas Yeko. "Reliability modelling of complex systems." Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-12142006-121528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Haddad, Tristan Farris. "PerFORMance: Integrating Structural Feedback into Design Processes for Complex Surface-Active Form." Thesis, Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-07102006-111810/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Casellas, Soler Josep. "Elucidating deactivation and reaction paths of photosensitive organic systems through computational methods." Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/386574.

Full text
Abstract:
This thesis presents a computational study of the processes that explain the photochemical properties of some organic compounds. Multiconfigurational ab initio methods, CASSCF and CASPT2, have been used to determine the topography of the potential energy surfaces of the low-energy states and to locate reaction paths. We first studied the mechanism of photoisomerization of azobenzene and its derivative phenylazopyridine. The reaction mechanism obtained explains satisfactorily why the isomerization quantum yield depends on the initial excitation and on the degree of restriction of internal rotation in these systems. We also analysed the performance of a computational protocol for reproducing the absorption spectrum of flexible systems in solution, using phenylazopyridine as the target system. We then studied 9-phenylphenalenone. Its parent system, phenalenone, shows a high quantum yield for singlet oxygen sensitization, but the quantum yield of this phenyl derivative for the same process is very low. Our study shows that this difference is due to a side reaction induced by the initial photoexcitation, which produces a metastable photoproduct by cyclisation that quickly reverts to the initial reactant. This study was a challenge for the computational methods used, owing to the large size of the system. Finally, we studied dehydrosqualene, a carotenoid involved in light-harvesting processes in plants. Our study determines the structure of the low-energy excited states and the influence of the flexibility of this compound on the energies of these states. We show that using only the most symmetric conformation of this flexible compound to calculate the excited-state energies gives results that agree qualitatively with experimental observations, but that molecular flexibility must be taken into account if quantitative agreement is required.
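For orientation, a minimal CASSCF calculation might look as follows. This sketch assumes the PySCF package (not necessarily the software used in the thesis) and uses trans-diazene (HN=NH) as a toy stand-in for the azo chromophore; the geometry, basis set and active-space choices are illustrative assumptions only.

```python
# Minimal CASSCF sketch with PySCF on a toy azo model (illustrative only).
from pyscf import gto, scf, mcscf

mol = gto.M(atom="""N  0.000  0.000  0.000
                    N  0.000  0.000  1.250
                    H  0.950  0.000 -0.300
                    H -0.950  0.000  1.550""",
            basis="cc-pvdz")
mf = scf.RHF(mol).run()
# CAS(6,6): six electrons in six orbitals around the N=N pi system
mc = mcscf.CASSCF(mf, 6, 6).run()
print("CASSCF energy:", mc.e_tot)
```

In production work, state-averaged CASSCF orbitals would typically be refined with CASPT2 to recover dynamic correlation along the located reaction paths.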
APA, Harvard, Vancouver, ISO, and other styles
40

Russell, Gregory B. (Gregory Brian). "A systems analysis of complex software product development dynamics and methods." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42371.

Full text
Abstract:
Includes bibliographical references (p. 64-65).
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2007.
Software development projects and products have long shouldered a reputation for missed deadlines, blown budgets, and low quality. Unfortunately, this negative reputation appears to be supported by more than just anecdotal evidence. Citing an industry study, respected software development expert and author Steve McConnell reports in his book Professional Software Development that "Roughly 25 percent of all projects fail outright, and the typical project is 100 percent over budget at the point it's canceled." What's more, notes McConnell, "Fifty percent of projects are delivered late, over-budget, or with less functionality than desired." Exactly why software development projects and products have historically performed so poorly, with arguably little if any improvement over the past 40 years, is a subject on which there is less agreement. While blame often aligns along functional (product marketing and sales) versus technical (software development) lines, the increasing popularity of different and often contradictory software development methodologies seems to suggest that no real consensus exists within the software development community itself. The goal of this thesis is twofold: 1. To describe a set of key factors to consider when analyzing software processes; 2. To outline an organizational framework that is optimized for implementing and managing software development practices.
by Gregory B. Russell.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
41

Lekscha, Jaqueline Stefanie. "Complex systems methods for detecting dynamical anomalies in past climate variability." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21047.

Full text
Abstract:
Studying palaeoclimate proxy data from archives such as tree rings, lake sediments, speleothems, and ice cores using windowed recurrence network analysis offers the possibility of characterising dynamical anomalies in past climate variability. This thesis aims at developing a more reliable framework for windowed recurrence network analysis by comparing different phase space reconstruction approaches for non-uniformly sampled noisy data and by tackling the problem of increased numbers of false positive significant points when correlations within the analysis results cannot be neglected. For this, different phase space reconstruction approaches are systematically compared, and a generalised areawise significance test, which implements a numerical estimation of the correlations within the analysis results, is introduced. In particular, the test can be used to identify patches of possibly false positive significant points. The developed analysis framework is applied to detect and characterise dynamical anomalies in past climate variability in North and South America by studying four real-world palaeoclimatic time series from different archives. Furthermore, the question of whether palaeoclimate proxy time series from different archives are equally well suited for tracking past climate dynamics with windowed recurrence network analysis is approached using the framework of proxy system modelling. This thesis promotes the use of non-linear methods for analysing palaeoclimate proxy time series, provides a detailed assessment of the potentials and limitations of windowed recurrence network analysis, and identifies future research directions that can complement the obtained results and conclusions.
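The sketch below shows the basic recurrence-network construction on a synthetic series: time-delay embedding, a distance threshold fixing the recurrence rate, and the recurrence matrix read as a network adjacency matrix. The embedding parameters and the 5% recurrence rate are illustrative choices, not those of the thesis; the windowed analysis slides this construction along the series.

```python
# Recurrence network from a time-delay embedded series (illustrative sketch).
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D series into dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(1)
x = np.sin(0.3 * np.arange(300)) + 0.1 * rng.normal(size=300)  # synthetic proxy
pts = embed(x)

dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
eps = np.quantile(dists, 0.05)        # threshold fixing a 5% recurrence rate
A = (dists < eps).astype(int)         # recurrence matrix ...
np.fill_diagonal(A, 0)                # ... read as a network adjacency matrix
print("mean degree:", A.sum(axis=1).mean())
```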
APA, Harvard, Vancouver, ISO, and other styles
42

Sage, Aled. "Observation-driven configuration of complex software systems." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/6479.

Full text
Abstract:
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies (e.g. communication, concurrency and recovery strategies) can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
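As a hedged illustration of the Taguchi machinery, assuming three two-level configuration options and made-up throughput measurements (this is not ACT itself), the standard L4 orthogonal array covers the options in four runs, and a "larger-is-better" signal-to-noise ratio summarises each run:

```python
# Taguchi L4(2^3) orthogonal array with larger-is-better S/N ratios.
import numpy as np

L4 = np.array([[0, 0, 0],     # each row = one configuration to measure
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical throughput measurements, two repeats per configuration
throughput = np.array([[410, 425], [388, 395], [502, 489], [471, 480]])
sn = -10 * np.log10(np.mean(1.0 / throughput.astype(float) ** 2, axis=1))

# Main effect of each factor on the S/N ratio, level by level
for level in (0, 1):
    print("level", level, "means:",
          [round(sn[L4[:, f] == level].mean(), 2) for f in range(3)])
```

The appeal for configuration work is economy: four measured runs estimate the main effect of all three options, instead of the eight runs a full factorial design would need.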
APA, Harvard, Vancouver, ISO, and other styles
43

Rung, Johan. "Signals and Noise in Complex Biological Systems." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7862.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Grabowicz, Przemyslaw Adam. "Complex networks approach to modeling online social systems. The emergence of computational social science." Doctoral thesis, Universitat de les Illes Balears, 2014. http://hdl.handle.net/10803/131220.

Full text
Abstract:
This thesis is devoted to the quantitative description, analysis, and modeling of complex social systems in the form of online social networks. Statistical patterns of the systems under study are unveiled and interpreted using concepts and methods of network science, social network analysis, and data mining. A long-term promise of this research is that predicting the behavior of complex techno-social systems will become possible in a way similar to contemporary weather forecasting, using statistical inference and computational modeling based on advances in the understanding of techno-social systems. Although the subjects of this study are humans, as opposed to the atoms or molecules of statistical physics, the availability of extremely large datasets on human behavior permits the use of the tools and techniques of statistical physics. This dissertation deals with large datasets from online social networks, measures statistical patterns of social behavior, and develops quantitative methods, models, and metrics for complex techno-social systems.
APA, Harvard, Vancouver, ISO, and other styles
45

Balachandran, Libish Kalathil. "Computational workflow management for conceptual design of complex systems : an air-vehicle design perspective." Thesis, Cranfield University, 2007. http://dspace.lib.cranfield.ac.uk/handle/1826/5070.

Full text
Abstract:
The decisions taken during the aircraft conceptual design stage are of paramount importance, since these commit up to eighty percent of the product life cycle costs. Thus, in order to obtain a sound baseline which can then be passed on to the subsequent design phases, various studies ought to be carried out during this stage. These include trade-off analysis and multidisciplinary optimisation performed on computational processes assembled from hundreds of relatively simple mathematical models describing the underlying physics and other relevant characteristics of the aircraft. However, the growing complexity of aircraft design in recent years has prompted engineers to substitute the conventional algebraic equations with compiled software programs (referred to as models in this thesis) which still retain the mathematical models, but allow for a controlled expansion and manipulation of the computational system. This tendency has posed the research question of how to dynamically assemble and solve a system of non-linear models. In this context, the objective of the present research has been to develop methods which significantly increase the flexibility and efficiency with which the designer is able to operate on large-scale computational multidisciplinary systems at the conceptual design stage. In order to achieve this objective, a novel computational process modelling method has been developed for generating computational plans for a system of non-linear models. The computational process modelling was subdivided into variable flow modelling, decomposition and sequencing. A novel method named the Incidence Matrix Method (IMM) was developed for variable flow modelling, which is the process of identifying the data flow between the models based on a given set of input variables. This method has the advantage of rapidly producing feasible variable flow models for a system of models with multiple outputs. In addition, criteria were derived for choosing the optimal variable flow model that would lead to faster convergence of the system. Cont/d.
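A minimal sketch of variable-flow sequencing, using hypothetical models and variables and a simple greedy scheduler rather than the thesis's IMM itself: models are scheduled as soon as their inputs are known, and any leftover, mutually coupled models form a block to be solved iteratively.

```python
# Greedy sequencing of a model system from input/output sets (illustrative).
# Model names and variables are hypothetical.
inputs = {"aero":     {"wing_area", "speed"},
          "weights":  {"wing_area", "payload"},
          "range_eq": {"lift_drag", "total_mass"}}
outputs = {"aero": {"lift_drag"}, "weights": {"total_mass"},
           "range_eq": {"range"}}

known = {"wing_area", "speed", "payload"}     # design (input) variables
plan, remaining = [], set(inputs)
while remaining:
    ready = [m for m in remaining if inputs[m] <= known]
    if not ready:                              # coupled leftovers: iterate as a block
        plan.append(("solve_together", sorted(remaining)))
        break
    for m in sorted(ready):
        plan.append(m)                         # schedule model, publish its outputs
        known |= outputs[m]
        remaining.remove(m)
print(plan)
```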
APA, Harvard, Vancouver, ISO, and other styles
46

Chow, Fung-kiu, and 鄒鳳嬌. "Modeling the minority-seeking behavior in complex adaptive systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29367487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Belyaeva, Anastasiya. "Computational methods for analyzing and modeling gene regulation and 3D genome organization." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130828.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 261-281).
Biological processes from differentiation to disease progression are governed by gene regulatory mechanisms. Currently, large-scale omics and imaging data sets are being collected to characterize gene regulation at every level. Such data sets present new opportunities and challenges for extracting biological insights and elucidating the gene regulatory logic of cells. In this thesis, I present computational methods for the analysis and integration of the various data types used for cell profiling. Specifically, I focus on analyzing and linking gene expression with the 3D organization of the genome. First, I describe methodologies for elucidating gene regulatory mechanisms by considering multiple data modalities. I design a computational framework for identifying colocalized and coregulated chromosome regions by integrating gene expression and epigenetic marks with 3D interactions using network analysis.
Then, I provide a general framework for data integration using autoencoders and apply it to the integration and translation between gene expression and chromatin images of naive T-cells. Second, I describe methods for analyzing single modalities such as contact frequency data, which measures the spatial organization of the genome, and gene expression data. Given the important role of 3D genome organization in gene regulation, I present a methodology for reconstructing the 3D diploid conformation of the genome from contact frequency data. Given the ubiquity of gene expression data, the recent advances in single-cell RNA-sequencing technologies, and the need for causal modeling of gene regulatory mechanisms, I then describe an algorithm and software tool, Difference Causal Inference (DCI), for learning causal gene regulatory networks from gene expression data.
DCI addresses the problem of directly learning differences between causal gene regulatory networks given gene expression data from two related conditions. Finally, I shift my focus from basic biology to drug discovery. Given the current COVID-19 pandemic, I present a computational drug repurposing platform that enables the identification of FDA-approved compounds for drug repurposing and the investigation of potential causal drug mechanisms. This framework relies on identifying drugs that reverse the signature of the infection in the space learned by an autoencoder, and then uses causal inference to identify putative drug mechanisms.
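As a sketch of the autoencoder component, assuming PyTorch and placeholder expression data (this is not the thesis code), a small encoder-decoder pair learns the latent space in which infection and drug signatures could then be compared:

```python
# Minimal autoencoder for expression profiles (illustrative sketch).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_genes=1000, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                 nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, n_genes))

    def forward(self, x):
        return self.dec(self.enc(x))

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 1000)                 # placeholder expression batch
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), x)  # reconstruction objective
loss.backward()
opt.step()
print(float(loss))
```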
by Anastasiya Belyaeva.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Computational and Systems Biology Program
APA, Harvard, Vancouver, ISO, and other styles
48

Huismann, Immo. "Computational fluid dynamics on wildly heterogeneous systems." TUDPress, 2018. https://tud.qucosa.de/id/qucosa%3A74002.

Full text
Abstract:
In the last decade, high-order methods have gained increased attention. These combine the convergence properties of spectral methods with the geometrical flexibility of low-order methods. However, the time step is restrictive, necessitating the implicit treatment of diffusion terms in addition to the pressure. Therefore, the efficient solution of elliptic equations is of central importance for fast flow solvers. As the operators scale with O(p · N), where N is the number of degrees of freedom and p the polynomial degree, the runtime of the best available multigrid algorithms scales with O(p · N) as well. This super-linear scaling limits the applicability of high-order methods to mid-range polynomial orders and constitutes a major road block on the way to faster flow solvers. This work reduces the super-linear scaling of elliptic solvers to a linear one. First, the static condensation method improves the condition of the system; then the associated operator is cast into matrix-free tensor-product form and factorized to linear complexity. The low increase in the condition number and the linear runtime of the operator lead to linearly scaling solvers when increasing the polynomial degree, albeit with low robustness against the number of elements. A p-multigrid with overlapping Schwarz smoothers regains the robustness, but requires inverse operators on the subdomains, and in the condensed case these are neither linearly scaling nor matrix-free. Embedding the condensed system into the full one leads to a matrix-free operator, and factorization thereof to a linearly scaling inverse. In combination with the previously gained operator, this results in a multigrid method with a constant runtime per degree of freedom, regardless of whether the polynomial degree or the number of elements is increased. Computing on heterogeneous hardware is investigated as a means to attain higher performance and future-proof the algorithms. A two-level parallelization extends the traditional hybrid programming model by using a coarse-grain layer implementing domain decomposition and a hardware-specific fine-grain parallelization. Thereafter, load balancing is investigated for a preconditioned conjugate gradient solver, and functional performance models are adapted to account for the communication barriers in the algorithm. With the new model, runtime prediction and measurement agree closely, with an error margin near 5 %. The devised methods are combined into a flow solver which attains the same throughput when computing with p = 16 as with p = 8, preserving the linear scaling. Furthermore, the multigrid method reduces the cost of implicit treatment of the pressure to that of explicit treatment of the convection terms. Lastly, benchmarks confirm that the solver outperforms established high-order codes.
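The tensor-product idea can be sketched in a few lines, assuming a placeholder 1-D operator: applying it direction by direction to a hexahedral element's degrees of freedom costs O(p^4) per element instead of the O(p^6) of an assembled element matrix. This illustrates sum factorisation generically, not the solver's actual kernels.

```python
# Matrix-free tensor-product ("sum factorisation") operator application.
import numpy as np

p = 8                                                   # polynomial degree
n = p + 1
D = np.random.default_rng(0).normal(size=(n, n))        # placeholder 1-D operator
u = np.random.default_rng(1).normal(size=(n, n, n))     # one element's DOFs

# Apply D along the x, y and z directions via tensor contractions,
# each costing O(n^4) instead of the O(n^6) of a dense element matrix.
v = (np.einsum("ai,ijk->ajk", D, u)
     + np.einsum("bj,ijk->ibk", D, u)
     + np.einsum("ck,ijk->ijc", D, u))
print(v.shape)
```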
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Wei. "A robust concept exploration method for configuring complex systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/20224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Surujon, Defne. "Computational approaches in infectious disease research: Towards improved diagnostic methods." Thesis, Boston College, 2020. http://hdl.handle.net/2345/bc-ir:109089.

Full text
Abstract:
Thesis advisor: Kenneth Williams
Due to the overuse and misuse of antibiotics, the global threat of antibiotic resistance is a growing crisis. Three critical issues surrounding antibiotic resistance are the lack of rapid testing, treatment failure, and the evolution of resistance. However, with new technology facilitating data collection and powerful advances in statistical learning, our understanding of the bacterial stress response to antibiotics is rapidly expanding. With a recent influx of omics data, it has become possible to develop powerful computational methods that make the best use of growing systems-level datasets. In this work, I present several such approaches that address the three challenges around resistance. While this body of work was motivated by the antibiotic resistance crisis, the approaches presented here favor generalization, that is, applicability beyond just one context. First, I present ShinyOmics, a web-based application that allows visualization, sharing, exploration and comparison of systems-level data. An overview of transcriptomics data in the bacterial pathogen Streptococcus pneumoniae led to the hypothesis that stress-susceptible strains have more chaotic gene expression patterns than stress-resistant ones. This hypothesis was supported by data from multiple strains, species, antibiotics and non-antibiotic stress factors, leading to the development of a transcriptomic-entropy-based general predictor for bacterial fitness. I show the potential utility of this predictor in predicting antibiotic susceptibility phenotypes and drug minimum inhibitory concentrations, which can be applied to bacterial isolates from patients in the near future. Predictors of antibiotic susceptibility are of great value when there is large phenotypic variability across isolates from the same species. Phenotypic variability is accompanied by genomic diversity harbored within a species. I address this genomic diversity by developing BFClust, a software package that for the first time enables pan-genome analysis with confidence scores. Using pan-genome-level information, I then develop predictors of essential genes unique to certain strains and predictors of genes that acquire adaptive mutations under prolonged stress exposure. Genes that are essential offer attractive drug targets, and those that are essential only in certain strains would make great targets for very narrow-spectrum antibiotics, potentially leading the way to personalized therapies in infectious disease. Finally, the prediction of adaptive outcomes can lead to predictions of future cross-resistance or collateral sensitivities. Overall, this body of work exemplifies how computational methods can complement the increasingly rapid data generation in the lab and pave the way to the development of more effective antibiotic stewardship practices.
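A minimal sketch of an entropy-style summary, assuming fabricated differential-expression profiles rather than the thesis's data or exact formula: a response concentrated in a few genes yields low entropy, while a diffuse, "chaotic" response yields high entropy.

```python
# Shannon entropy of a differential-expression profile (illustrative sketch).
import numpy as np

def transcriptomic_entropy(log2_fold_changes):
    """Entropy of the normalised magnitude distribution across genes."""
    w = np.abs(log2_fold_changes)
    p = w / w.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
focused = np.zeros(500)
focused[:20] = rng.normal(3, 1, 20)      # ordered response: 20 responsive genes
diffuse = rng.normal(0, 1, 500)          # chaotic response: noise everywhere
print(transcriptomic_entropy(focused), transcriptomic_entropy(diffuse))
```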
Thesis (PhD) — Boston College, 2020
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Biology
APA, Harvard, Vancouver, ISO, and other styles