Dissertations on the topic "Computations management"

To view other types of publications on this topic, follow the link: Computations management.

Consult the top 50 dissertations for your research on the topic "Computations management".

Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Haraburda, David. "Arithmetic Computations and Memory Management Using a Binary Tree Encoding of Natural Numbers." Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc103323/.

Full text of the source
Abstract:
Two applications of a binary tree data type based on a simple pairing function (a bijection between natural numbers and pairs of natural numbers) are explored. First, the tree is used to encode natural numbers, and algorithms that perform basic arithmetic computations are presented along with formal proofs of their correctness. Second, using this "canonical" representation as a base type, algorithms for encoding and decoding additional isomorphic data types of other mathematical constructs (sets, sequences, etc.) are also developed. An experimental application to a memory management system is constructed and explored using these isomorphic types. A practical analysis of this system's runtime complexity and space savings is provided, along with a proof-of-concept framework for both applications of the binary tree type in the Java programming language.
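To make the abstract's central construction concrete, here is a minimal Python sketch of one classic pairing bijection and the binary tree encoding of naturals it induces. The particular pairing, pair(a, b) = 2^a * (2b + 1) - 1, is an assumption for illustration; the thesis may use a different bijection.

    def pair(a: int, b: int) -> int:
        # A classic bijection N x N -> N: every positive integer factors
        # uniquely as 2**a * (2*b + 1).
        return 2**a * (2 * b + 1) - 1

    def unpair(n: int):
        n += 1
        a = 0
        while n % 2 == 0:          # strip the powers of two
            n //= 2
            a += 1
        return a, (n - 1) // 2     # n is now odd: n == 2*b + 1

    def to_tree(n: int):
        # 0 encodes a leaf; n > 0 splits into the components of unpair(n - 1).
        if n == 0:
            return None
        a, b = unpair(n - 1)
        return (to_tree(a), to_tree(b))

    def from_tree(t) -> int:
        return 0 if t is None else pair(from_tree(t[0]), from_tree(t[1])) + 1

    assert all(from_tree(to_tree(n)) == n for n in range(1000))

Arithmetic on the encoded numbers (successor, addition, and so on) can then be defined by structural recursion on these trees, which is the kind of algorithm the thesis proves correct.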
2

Bourgey, Florian. "Stochastic approximations for financial risk computations." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX052.

Full text of the source
Abstract:
In this thesis, we investigate several stochastic approximation methods for both the computation of financial risk measures and the pricing of derivatives. As closed-form expressions are scarcely available for such quantities, and because they have to be evaluated daily, the need for fast, efficient, and reliable analytic approximation formulas is of primal importance to financial institutions. We aim at giving a broad overview of such approximation methods and we focus on three distinct approaches. In the first part, we study some multilevel Monte Carlo approximation methods and apply them to two practical problems: the estimation of quantities involving nested expectations (such as the initial margin) along with the discretization of integrals arising in rough forward variance models for the pricing of VIX derivatives. For both cases, we analyze the properties of the corresponding asymptotically optimal multilevel estimators and numerically demonstrate the superiority of multilevel methods compared to a standard Monte Carlo method. In the second part, motivated by the numerous examples arising in credit risk modeling, we propose a general framework for meta-modeling large sums of weighted Bernoulli random variables which are conditionally independent given a common factor X. Our generic approach is based on a polynomial chaos expansion of the common factor together with some Gaussian approximation. L2 error estimates are given when the factor X is associated with classical orthogonal polynomials. Finally, in the last part of this dissertation, we deal with small-time asymptotics and provide asymptotic expansions for both American implied volatility and American option prices in local volatility models. We also investigate a weak approximation for the VIX index in rough forward variance models, expressed in terms of lognormal proxies, and derive expansion results for VIX derivatives with explicit coefficients.
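As a hedged illustration of the multilevel idea applied to a nested expectation (the initial-margin example in the abstract has this shape), the following Python sketch estimates E[max(E[X | Y], 0)] with the usual telescoping sum of level corrections; the payoff, coupling, and sample budgets are invented for illustration and are not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def level_correction(l: int, n_outer: int) -> np.ndarray:
        # Level l uses 2**l inner samples; the coarse estimator reuses the
        # two halves of the fine inner samples (the standard antithetic
        # coupling for nested expectations).
        y = rng.normal(size=(n_outer, 1))
        x = y + rng.normal(size=(n_outer, 2**l))
        fine = np.maximum(x.mean(axis=1), 0.0)
        if l == 0:
            return fine
        half = 2**(l - 1)
        coarse = 0.5 * (np.maximum(x[:, :half].mean(axis=1), 0.0)
                        + np.maximum(x[:, half:].mean(axis=1), 0.0))
        return fine - coarse

    def mlmc(levels: int, n0: int) -> float:
        # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
        # with fewer outer samples spent on the finer levels.
        return sum(level_correction(l, max(n0 // 2**l, 100)).mean()
                   for l in range(levels + 1))

    print(mlmc(levels=6, n0=100_000))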
3

Lee, Yau-tat Thomas, and 李猷達. "Formalisms on semi-structured and unstructured data schema computations." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43703914.

Full text of the source
4

Lee, Yau-tat Thomas. "Formalisms on semi-structured and unstructured data schema computations." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43703914.

Full text of the source
5

Ladjel, Riad. "Secure distributed computations for the personal cloud." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG043.

Full text of the source
Abstract:
Thanks to smart disclosure initiatives and new regulations like GDPR, individuals are able to take back control of their data and store them locally in a decentralized way. In parallel, personal data management system (PDMS) solutions, also called personal clouds, are flourishing. Their goal is to empower users to leverage their personal data for their own good. This decentralized way of managing personal data provides a de facto protection against massive attacks on central servers and opens new opportunities by allowing users to cross their data gathered from different sources. On the other hand, this approach prevents the crossing of data from multiple users to perform distributed computations. The goal of this thesis is to design a generic and scalable secure decentralized computing framework which allows the crossing of personal data of multiple users while answering the following two questions raised by this approach. How to preserve individuals' trust in their PDMS when performing global computations crossing data from multiple individuals? And how to guarantee the integrity of the final result when it has been computed by a myriad of collaborative but independent PDMSs?
6

Botadra, Harnish. "iC2mpi a platform for parallel execution of graph-structured iterative computations /." unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-07252006-165725/.

Full text of the source
Abstract:
Thesis (M.S.)--Georgia State University, 2006.
Title from title screen. Sushil Prasad, committee chair. Electronic text (106 p. : charts) : digital, PDF file. Description based on contents viewed June 11, 2007. Includes bibliographical references (p. 61-53).
7

Azzopardi, Marc Anthony. "Computational air traffic management." Thesis, Cranfield University, 2015. http://dspace.lib.cranfield.ac.uk/handle/1826/9200.

Full text of the source
Abstract:
World air transport has been on a steady exponential rise since the 1940s and the trend has shown remarkable resilience to external shocks. The level of air traffic has greatly exceeded the wildest expectations of the air traffic management pioneers that originally defined the basic precepts of ATM that persist to this day. This has stretched ATM to a point where it is starting to show signs of ineffectiveness in the face of ever-increasing congestion. Delays are on the rise, costs are ballooning, flights are being elongated unnecessarily, the system is becoming increasingly susceptible to disruption, and the high environmental impact of aviation is being compounded by the inability of air traffic controllers to optimise ATM operation in real time. If these trends are not reversed, ATM could eventually face instability. The conservative, self-preserving outlook of the ATM community has confined progress to relatively minor tweaks of a tired human-centric paradigm. However, the diverging gap between ATM performance and fundamental requirements indicates the need for a step change. In this work, the traditionally incremental approach to ATM research was broken to favour a more exploratory mindset. As a result, a new discipline called Computational Air Traffic Management (CATM) has been defined to address the unique set of challenges presented by the ATM problem, by taking a more objective scientific approach. A specific embodiment of a CATM system was designed, constructed, simulated and tested, and shown to be a significant step towards demonstrating the feasibility of a fully autonomous multi-agent-based air transportation system based on optimisation principles. The system offers unique advantages in terms of resilience to disruption, efficiency and future scalability. The traffic density using such a system can realistically be increased to many times current levels while significantly improving on the current levels of safety, operating cost, environmental impact and flight delays. This work advances the field of ATM as well as the fields of Computational Intelligence and Dynamic Optimisation of High-Dimensionality Non-Convex Search Spaces.
8

Brogliato, Marcelo Salhab. "Essays in computational management science." Repositório Institucional do FGV, 2018. http://hdl.handle.net/10438/24615.

Full text of the source
Abstract:
This thesis presents three specific, self-contained, scientific papers in the Computational Management Science area. Modern management and high technology interact in multiple, profound ways. Professor Andrew Ng tells students at Stanford's Graduate School of Business that "AI is the new electricity", as his hyperbolic way to emphasize the potential transformational power of the technology. The first paper is inspired by the possibility that there will be some form of purely digital money and studies distributed ledgers, proposing and analyzing Hathor, an alternative architecture towards a scalable cryptocurrency. The second paper may be a crucial item in understanding human decision making, perhaps bringing us a formal model of recognition-primed decisions. Lying at the intersection of cognitive psychology, computer science, neuroscience, and artificial intelligence, it presents an open-source, cross-platform, and highly parallel framework of the Sparse Distributed Memory and analyzes the dynamics of the memory with some applications. Last but not least, the third paper lies at the intersection of marketing, diffusion of technological innovation, and modeling, extending the famous Bass model to account for users who, after adopting the innovation for a while, decide to reject it later on.
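For the third paper's topic, a discrete-time sketch of the classic Bass diffusion model, extended with a hypothetical constant rejection rate, illustrates the kind of modification described; the actual functional form of the thesis's extension is not reproduced here.

    def bass_with_rejection(p=0.03, q=0.38, r=0.05, m=1.0, periods=40):
        # p: innovation coefficient, q: imitation coefficient, m: market size.
        # r is an assumed per-period rejection rate applied to current adopters.
        adopters, rejecters = 0.0, 0.0
        path = []
        for _ in range(periods):
            potential = m - adopters - rejecters
            new_adopters = (p + q * adopters / m) * potential
            new_rejecters = r * adopters          # adopters who drop out
            adopters += new_adopters - new_rejecters
            rejecters += new_rejecters
            path.append(adopters)
        return path

    print(bass_with_rejection()[-1])   # long-run adopter share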
9

Iserte, Agut Sergio. "High-throughput Computation through Efficient Resource Management." Doctoral thesis, Universitat Jaume I, 2018. http://hdl.handle.net/10803/664128.

Full text of the source
Abstract:
This proposal addresses, from two different approaches, the improvement of data center productivity through efficient resource management: on the one hand, the combination of remote GPU virtualization technologies with workload managers in HPC clusters and cloud computing environments; on the other hand, job reconfiguration in terms of varying the number of processes during execution. Performance evaluations reveal a non-negligible improvement not only in throughput but also in job waiting time and energy consumption.
10

Apel, Joachim, and Uwe Klaus. "Aspects of Large Scale Symbolic Computation Management." Universität Leipzig, 1998. https://ul.qucosa.de/id/qucosa%3A34525.

Full text of the source
Abstract:
The special-purpose computer algebra system FELIX is designed for computations in constructive commutative and non-commutative algebra. In this paper we discuss some features of the system supporting the computation of rather complex problems, especially standard basis computations, using standard hardware. A first aspect concerns the definition and implementation of the basic data types, which should be a good compromise between space- and time-efficient representations of the algebraic objects. Usually, rather complex computations are very time consuming (up to weeks) and often require several attempts. Therefore, FELIX includes special session-saving methods which allow the attained intermediate results to be backed up, in the form of memory images, into special session files, and the computation to be restarted later on. Finally, we describe our efforts to crunch complex problems by parallelization. The implemented interface is based on stream sockets and includes a special protocol for data exchange. It supports distributed computation on heterogeneous, loosely coupled systems.
11

Johansson, Björn. "Model management for computational system design /." Linköping : Univ, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek857s.pdf.

Full text of the source
12

Ahrens, James P. "Scientific experiment management with high-performance distributed computation /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/6974.

Full text of the source
13

Uichanco, Joline Ann Villaranda. "Data-driven revenue management." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41728.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007.
Includes bibliographical references (p. 125-127).
In this thesis, we consider the classical newsvendor model and various important extensions. We do not assume that the demand distribution is known, rather the only information available is a set of independent samples drawn from the demand distribution. In particular, the variants of the model we consider are: the classical profit-maximization newsvendor model, the risk-averse newsvendor model and the price-setting newsvendor model. If the explicit demand distribution is known, then the exact solutions to these models can be found either analytically or numerically via simulation methods. However, in most real-life settings, the demand distribution is not available, and usually there is only historical demand data from past periods. Thus, data-driven approaches are appealing in solving these problems. In this thesis, we evaluate the theoretical and empirical performance of nonparametric and parametric approaches for solving the variants of the newsvendor model assuming partial information on the distribution. For the classical profit-maximization newsvendor model and the risk-averse newsvendor model we describe general non-parametric approaches that do not make any prior assumption on the true demand distribution. We extend and significantly improve previous theoretical bounds on the number of samples required to guarantee with high probability that the data-driven approach provides a near-optimal solution. By near-optimal we mean that the approximate solution performs arbitrarily close to the optimal solution that is computed with respect to the true demand distributions. For the price-setting newsvendor problem, we analyze a previously proposed simulation-based approach for a linear-additive demand model, and again derive bounds on the number of samples required to ensure that the simulation-based approach provides a near-optimal solution. We also perform computational experiments to analyze the empirical performance of these data-driven approaches.
by Joline Ann Villaranda Uichanco.
S.M.
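The non-parametric approach the abstract describes has a particularly compact form for the classical profit-maximization newsvendor: the sample-based order quantity is an empirical quantile of demand at the critical ratio. A minimal sketch (prices and demand data invented):

    import numpy as np

    def saa_newsvendor(samples, price, cost):
        # With selling price p and unit cost c (no salvage), the optimal
        # order quantity is the demand quantile at (p - c) / p; the
        # data-driven version plugs in the empirical quantile.
        critical_ratio = (price - cost) / price
        return float(np.quantile(samples, critical_ratio))

    demand = np.random.default_rng(1).gamma(shape=4.0, scale=25.0, size=500)
    print(saa_newsvendor(demand, price=10.0, cost=6.0))

The sample-size bounds discussed in the thesis quantify how many such draws are needed before this plug-in quantity is near-optimal with high probability.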
14

West, Richard. "Adaptive real-time management of communication and computation resources." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/9237.

Full text of the source
15

Kim, Jinwoo. "Memory hierarchy management through off-line computational learning." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/8194.

Full text of the source
16

Mohammadi, Javad. "Distributed Computational Methods for Energy Management in Smart Grids." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/710.

Full text of the source
Abstract:
The grid of the future is expected to differ from the current system through the increased integration of distributed generation, distributed storage, demand response, power electronics, and communications and sensing technologies. The consequence is that the physical structure of the system becomes significantly more distributed. The existing centralized control structure is no longer suitable for operating such a highly distributed system. This thesis is dedicated to providing a promising solution to a class of energy management problems in power systems with a high penetration of distributed resources. This class includes optimal dispatch problems such as optimal power flow, security-constrained optimal dispatch, optimal power flow control, and coordinated plug-in electric vehicle charging. Our fully distributed algorithm not only handles the computational complexity of the problem, but also provides a more practical solution for these problems in the emerging smart grid environment. This distributed framework is based on iteratively solving, in a distributed fashion, the first-order optimality conditions associated with the optimization formulations. A multi-agent viewpoint of the power system is adopted, in which at each iteration every network agent updates a few local variables through simple computations and exchanges information with neighboring agents. Our proposed distributed solution is based on the consensus+innovations framework, in which the consensus term enforces agreement among agents while the innovations updates ensure that local constraints are satisfied.
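A minimal sketch of a consensus+innovations-style update for a toy economic dispatch problem is shown below; the network, quadratic costs, and gains are invented, and the thesis's exact update rules may differ. Each agent keeps a local price estimate, pulls it toward its neighbors' estimates (consensus) and corrects it by its local power imbalance (innovations).

    import numpy as np

    # Toy 4-bus system: cost_i(g) = a_i * g^2, local demand d_i, ring graph.
    a = np.array([0.10, 0.12, 0.08, 0.15])
    d = np.array([20.0, 30.0, 25.0, 25.0])
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

    lam = np.zeros(4)                 # local electricity price estimates
    alpha, beta = 0.05, 0.01          # innovations / consensus gains

    for _ in range(5000):
        g = lam / (2 * a)             # local optimality: cost'(g_i) = lam_i
        new_lam = lam.copy()
        for i in range(4):
            consensus = sum(lam[i] - lam[j] for j in neighbors[i])
            innovation = g[i] - d[i]  # local generation surplus
            new_lam[i] -= beta * consensus + alpha * innovation
        lam = new_lam

    print(lam, (lam / (2 * a)).sum(), d.sum())  # prices agree, supply = demand

At a fixed point the consensus terms force a common price and the innovations terms force total generation to meet total demand, which is exactly the first-order optimality condition of the dispatch problem.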
17

Chada, Daniel de Magalhães. "From cognitive science to management science: two computational contributions." Repositório Institucional do FGV, 2011. http://hdl.handle.net/10438/17053.

Full text of the source
Abstract:
This work is composed of two contributions. One borrows from the work of Charles Kemp and Joshua Tenenbaum, concerning the discovery of structural form: their model is used to study the Business Week Rankings of U.S. Business Schools, and to investigate how other structural forms (structured visualizations) of the same information used to generate the rankings can bring insights into the space of business schools in the U.S., and into rankings in general. The other essay is purely theoretical in nature. It is a study to develop a model of human memory that does not exceed our (human) psychological short-term memory limitations. This study is based on Pentti Kanerva’s Sparse Distributed Memory, in which human memories are registered into a vast (but virtual) memory space, and this registration occurs in massively parallel and distributed fashion, in ideal neurons.
18

Kang, Sheng. "Optimization for recipe-based, diet-planning inventory management." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61895.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 40-41).
This thesis presents a new modeling framework and research methodology for the study of recipe-based, diet-planning inventory management. The thesis begins with an exploration of a classic optimization problem, the diet problem, based upon mixed-integer linear programming. Real diet planning, however, is more sophisticated, as it involves planning recipes rather than just the raw materials for the meals. Hence, the thesis develops a modeling framework which, given the recipes and the different purchasing options for the raw materials listed in the recipes, examines the nutrition facts and calculates the purchasing decisions and the optimal minimum yearly cost of food consumption. The thesis further discusses scenarios for different groups of raw materials in terms of differences in shelf life. The modeling implementation includes a preprocessing part and an optimization part: the former converts customized selections into quantitative relations with stored recipes and measures nutrition factors; the latter solves the cost optimization problem.
by Sheng Kang.
S.M.
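The diet problem that opens the thesis has a standard linear programming form: minimize purchase cost subject to nutrient lower bounds. A minimal sketch with invented data (the recipe layer described in the abstract would sit on top of this):

    from scipy.optimize import linprog

    cost = [2.0, 3.5, 1.5]                 # price per unit of each food
    nutrition = [[4.0, 2.0, 1.0],          # protein per unit of each food
                 [1.0, 3.0, 2.0]]          # fiber per unit of each food
    requirement = [20.0, 15.0]             # minimum protein, minimum fiber

    # linprog expects A_ub @ x <= b_ub, so negate the ">=" constraints.
    res = linprog(c=cost,
                  A_ub=[[-v for v in row] for row in nutrition],
                  b_ub=[-r for r in requirement],
                  bounds=[(0, None)] * len(cost))
    print(res.x, res.fun)                  # purchase quantities, minimum cost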
19

Cole, Murray Irwin. "Algorithmic skeletons : a structured approach to the management of parallel computation." Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/11997.

Full text of the source
20

Tagner, Nikita. "Optimal Energy Management for Parallel Hybrid Electric Vehicles using Dynamic Programming." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209776.

Full text of the source
Abstract:
In this thesis, two optimal control problems for the control of hybrid electric vehicles are formulated. The first is a general formulation where both velocity and state of charge can vary. The second is a formulation where the velocity is prespecified and therefore only the state of charge can vary. The first formulation takes significantly more time to solve with dynamic programming than the second. For the most hilly drive cycle that was evaluated, fuel savings of 4.45% were obtained by using the general formulation instead of the formulation with prespecified velocity. For the least hilly cycle, this number dropped to 1.75%. When the lowest admissible velocity was lowered from 75 to 70 km/h, fuel savings of 0.52% were obtained; from 80 to 70 km/h, the figure increased to 1.92%. In conclusion, if dynamic programming is to be implemented in real time on a hybrid electric vehicle, the fuel savings for hilly roads where a low minimal velocity is allowed are potentially much greater than when using a prespecified velocity. However, for less hilly roads, and where the velocity is not allowed to vary as much, it might be more beneficial in terms of fuel consumption to use the formulation with prespecified velocity and include abilities such as shifting gears or switching the engine on or off.
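The second formulation, with prespecified velocity, reduces to a backward dynamic programming recursion over a discretized state-of-charge grid. The sketch below uses an invented power demand profile, toy fuel map, and toy battery dynamics purely to show the shape of the recursion:

    import numpy as np

    T = 100                                   # time steps of the drive cycle
    soc_grid = np.linspace(0.3, 0.8, 51)      # admissible state of charge
    u_grid = np.linspace(-10.0, 10.0, 21)     # battery power (kW), + = discharge
    demand = 15.0 + 5.0 * np.sin(np.linspace(0, 6, T))   # wheel power (kW)

    def fuel_rate(engine_kw):
        # Toy affine fuel map with an idle offset when the engine is on.
        return 0.08 * engine_kw + 0.3 if engine_kw > 0 else 0.0

    J = np.zeros(len(soc_grid))               # terminal cost: final SOC free
    for t in reversed(range(T)):
        J_new = np.full(len(soc_grid), np.inf)
        for i, soc in enumerate(soc_grid):
            for u in u_grid:
                nxt = soc - 0.002 * u          # toy SOC dynamics
                if not soc_grid[0] <= nxt <= soc_grid[-1]:
                    continue                   # SOC constraint violated
                stage = fuel_rate(demand[t] - u)
                J_new[i] = min(J_new[i], stage + np.interp(nxt, soc_grid, J))
        J = J_new

    print(J[25])   # minimum fuel from a mid-range initial SOC

In the general formulation, velocity joins the state vector, which multiplies the grid size and explains the much longer solution times reported above.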
21

Aleksic, Mario. "Incremental computation methods in valid and transaction time databases." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/8126.

Full text of the source
22

Abdallah, Mohamed E. S. M. "A Novel Computational Approach for the Management of Bioreactor Landfills." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20314.

Full text of the source
Abstract:
The bioreactor landfill is an emerging concept for solid waste management that has gained significant attention in the last decade. This technology employs specific operational practices to enhance the microbial decomposition processes in landfills. However, the unsupervised management and lack of operational guidelines for the bioreactor landfill, specifically leachate manipulation and recirculation processes, usually result in less than optimal system performance. These limitations have led to the development of SMART (Sensor-based Monitoring and Remote-control Technology), an expert control system that utilizes real-time monitoring of key system parameters in the management of bioreactor landfills. SMART replaces conventional open-loop control with a feedback control system that aids the human operator in making decisions and managing complex control issues. The target of this control system is to provide optimum conditions for the biodegradation of the refuse, and also to enhance the performance of the bioreactor in terms of biogas generation. SMART includes multiple cascading logic controllers and mathematical calculations through which the quantity and quality of the recirculated solution are determined. The expert system computes the required quantities of leachate, buffer, supplemental water, and nutritional amendments in order to provide the bioreactor landfill microbial consortia with their optimum growth requirements. Soft computing methods, particularly fuzzy logic, were incorporated in the logic controllers of SMART so as to accommodate the uncertainty, complexity, and nonlinearity of the bioreactor landfill processes. Fuzzy logic was used to solve complex operational issues in the control program of SMART, including: (1) identifying the current operational phase of the bioreactor landfill based on quantifiable parameters of the leachate generated and biogas produced, (2) evaluating the toxicological status of the leachate based on certain parameters that directly contribute to or indirectly indicate bacterial inhibition, and (3) predicting biogas generation rates based on the operational phase, leachate recirculation, and sludge addition. The latter fuzzy logic model was upgraded to a hybrid model that employs the learning algorithm of artificial neural networks to optimize the model parameters. SMART was applied to a pilot-scale bioreactor landfill prototype that incorporated the hardware components (sensors, communication devices, and control elements) and the software components (user interface and control program) of the system. During a one-year monitoring period, the feasibility and effectiveness of the SMART system were evaluated in terms of multiple leachate, biogas, and waste parameters. In addition, leachate heating was evaluated as a potential temperature control tool in bioreactor landfills. The pilot-scale implementation demonstrated the applicability of the system. SMART led to a significant improvement in the overall performance of the bioreactor landfill in terms of methane production and leachate stabilization. Temperature control via recirculation of heated leachate achieved high degradation rates of organic matter and improved the methanogenic activity.
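As an illustration of the fuzzy logic ingredient (not the thesis's actual rule base), a single rule for grading the methanogenic phase from leachate indicators might look as follows, with triangular memberships and min as the AND operator; all thresholds are invented:

    def tri(x, a, b, c):
        # Triangular membership with support [a, c] and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def methanogenic_degree(ph, cod_bod):
        # Rule: IF pH is neutral AND COD/BOD is high THEN phase is methanogenic.
        ph_neutral = tri(ph, 6.2, 7.0, 7.8)
        ratio_high = tri(cod_bod, 2.0, 6.0, 10.0)
        return min(ph_neutral, ratio_high)

    print(methanogenic_degree(ph=7.1, cod_bod=5.0))   # degree in [0, 1]

A full controller of the kind described would aggregate many such rules and defuzzify the result into recirculation quantities.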
23

Solak, Serdar. "Computational complexity management of H.264/AVC video coding standard." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=95058.

Full text of the source
Abstract:
The new H.264/AVC video coding standard achieves significantly improved compression efficiency compared to previous standards by adopting highly advanced and flexible encoding techniques at the expense of increased complexity. However, the high computational complexity of H.264/AVC is a big concern, primarily for low-power devices with limited processing capabilities. This thesis presents new techniques to reduce and/or control the computational complexity of an H.264/AVC encoder. A new prediction method is developed to estimate the Lagrangian rate-distortion cost of a macroblock. The prediction method is used in the design of two complexity reduction algorithms for H.264/AVC. The first algorithm uses the predicted rate-distortion costs to identify the SKIP-coded macroblocks prior to any INTRA or INTER mode trial. Simulation results show that the algorithm achieves significant complexity savings with negligible loss in rate-distortion performance. Similarly, the second algorithm seeks to further reduce the encoder complexity by using the predicted costs to identify not only SKIP-coded but also INTRA- and INTER-coded macroblocks at earlier stages. Results indicate greater reductions in the encoder complexity at the expense of a slightly larger loss in rate-distortion performance. A complexity-scalable encoding framework is proposed for controlling the encoder complexity at a macroblock level using a single parameter. The framework uses a special macroblock grouping technique called "wave-front macroblock scheduling". The computational resources are allocated to the macroblocks within a wave-front. The resource allocation is further developed by adopting the Lagrangian rate-distortion cost prediction into the framework. Results demonstrate significant improvements in the rate-distortion performance of the encoder operating at limited complexity. Finally, the complexity reduction algorithms are installed into the complexity-scalable encoding framework.
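The core mechanism is easy to state: each candidate mode is scored by the Lagrangian cost J = D + lambda * R, and a macroblock whose SKIP cost falls below a cost predicted from its already-coded neighbors is declared SKIP without trying the INTRA/INTER modes. The prediction rule and margin below are invented placeholders, not the thesis's estimator:

    def lagrangian_cost(distortion, rate_bits, lam):
        return distortion + lam * rate_bits        # J = D + lambda * R

    def early_skip(skip_cost, neighbor_costs, margin=1.05):
        # Bypass INTRA/INTER mode trials when the SKIP cost is already below
        # the (scaled) cost predicted from neighboring macroblocks.
        predicted = sum(neighbor_costs) / len(neighbor_costs)
        return skip_cost <= margin * predicted

    j_skip = lagrangian_cost(distortion=420.0, rate_bits=1.0, lam=85.0)
    print(early_skip(j_skip, neighbor_costs=[510.0, 560.0, 498.0]))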
24

Cavalcanti, João Marcos Bastos. "A computational logic approach for Web site synthesis and management." Thesis, University of Edinburgh, 2003. http://hdl.handle.net/1842/23294.

Full text of the source
Abstract:
This thesis concerns the development of an approach for Web site synthesis. Previous work has demonstrated the feasibility of domain-specific methods for Web site synthesis using computational logic. We are particularly interested in extending this idea, investigating domain-independent methods whilst still being able to produce practical Web site applications. The main contribution of this thesis is to propose a design approach that joins different levels of description to produce Web sites consistent with a corresponding high-level description. Our approach is based on a three-level architecture, composed of a high-level specification, intermediate representation and target language. The high-level specification corresponds to the description of a Web site application. We use an entity-relationship model for this purpose and a mapping procedure derives a corresponding intermediate representation automatically. The intermediate representation describes display units, which are sets of pieces of information, and how to navigate among them. A separate visualisation description relates each piece of information to a particular presentation style and templates to display units. From the intermediate representation and visualisation description a complete Web site is generated automatically. Experiments with HTML/CSS, XML/XSL and WML as target languages have been successfully carried out. The core of our approach is the intermediate representation. An important feature is its independence from any particular implementation, making this approach very flexible. It forms a basis for different kinds of reasoning about the application, including property and constraint checking. It also supports definitions of logic-based agents that are constructed as part of the synthesis of the Web site specification and can be employed to automate the maintenance of parts of the site. Data-intensive Web sites, such as Web portals and e-commerce sites, can all benefit from this development approach as it makes design more methodical and maintenance less time consuming.
25

Chen, Ziwei. "Workflow Management Service based on an Event-driven Computational Cloud." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-141696.

Full text of the source
Abstract:
The event-driven computing paradigm, also known as trigger computing, is widely used in computer technology. Computer systems, such as database systems, introduce trigger mechanisms to reduce repetitive human intervention. With the growing complexity of industrial use case requirements, independent and isolated triggers cannot fulfil the demands any more. Fortunately, an independent trigger can be triggered by the result produced by other triggered actions, and that enables the modelling of complex use cases, where the chains or graphs that consist of triggers are called workflows. Therefore, workflow construction and manipulation become a must for implementing complex use cases. As the development platform of this study, VISION Cloud is a computational storage system that executes small programs called storlets as independent computation units in the storage. Similar to the trigger mechanism in database systems, storlets are triggered by specific events and then execute computations. As a result, one storlet can also be triggered by the result produced by other storlets, which is called a connection between storlets. Due to the growing complexity of use case requirements, an urgent demand is to have storlet workflow management supported in the VISION system. Furthermore, because of the existence of connections between storlets in VISION, problems such as non-terminating triggering and unexpected overwriting appear as side effects. This study develops a management service that consists of an analysis engine and a multi-level visualization interface. The analysis engine checks the connections between storlets by utilizing automatic theorem proving and deterministic finite automata. The involved storlets and their connections are displayed in graphs via the multi-level visualization interface. Furthermore, the aforementioned connection problems are detected with graph theory algorithms. Finally, experimental results with practical use case examples demonstrate the correctness and comprehensiveness of the service. Algorithm performance and possible optimizations are also discussed. They lead the way for future work to create a portable framework of event-driven workflow management services.
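The non-terminating-triggering check reduces to cycle detection on the directed graph whose edges say "this storlet's output can trigger that storlet". A minimal sketch with an invented trigger graph:

    # Depth-first search with the usual white/gray/black coloring: a back
    # edge to a gray vertex closes a trigger cycle that would fire forever.
    triggers = {
        "resize": ["watermark"],
        "watermark": ["index"],
        "index": ["resize"],       # closes a cycle
        "thumbnail": [],
    }

    def find_cycle(graph):
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {v: WHITE for v in graph}

        def dfs(v, path):
            color[v] = GRAY
            for w in graph.get(v, []):
                if color[w] == GRAY:           # back edge: report the cycle
                    return path[path.index(w):] + [w]
                if color[w] == WHITE:
                    found = dfs(w, path + [w])
                    if found:
                        return found
            color[v] = BLACK
            return None

        for v in graph:
            if color[v] == WHITE:
                cycle = dfs(v, [v])
                if cycle:
                    return cycle
        return None

    print(find_cycle(triggers))   # ['resize', 'watermark', 'index', 'resize']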
26

Abramovich, Michael. "Impacts of revenue management on estimates of spilled passenger demand." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82413.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 138-140).
In the airline industry, spill refers to passenger demand turned away from a flight because demand has exceeded capacity. The accurate estimation of spill and the lost revenue it implies is an important parameter in airline fleet assignment models, where improved estimates lead to more profitable assignments. Previous models for spill estimation did not take into account the effects of passenger choice and airline revenue management. Since revenue management systems protect seats for later-arriving higher fare passengers, revenue management controls will influence the number of spilled passengers and their value because they will restrict availability to lower fare passengers even if seats on the aircraft are available. This thesis examines the effect of various revenue management systems and fare structures on spill, and, in turn, the marginal value of incremental capacity. The Passenger Origin Destination Simulator is used to simulate realistic passenger booking scenarios and to measure the value of spilled demand. A major finding of the research is that in less restricted fare structures and with traditional revenue management systems, increasing capacity on a flight leads to buy-down which can result in negative marginal revenues and therefore revenue losses. This behavior is contrary to conventional wisdom and is not considered in existing spill models. On the other hand, marginal revenues at low capacities are greater than would be predicted by first-choice-only spill models because some passengers will sell-up to higher fares to avoid spilling out. Additionally, because of passenger recapture between flights, adding capacity to one flight can lead to revenue losses on another. Therefore, the marginal value of incremental capacity is not always positive. Negative marginal revenues and associated revenue losses with increasing capacity can at least be partially mitigated by using more advanced revenue management forecasting and optimization algorithms which take into account passenger willingness to pay. The thesis also develops a heuristic analytical method for estimating spill costs which takes into account the effects of passenger sell-up, where previous models tend to underestimate the spill cost by only modeling passengers' first choices. The heuristic demonstrates improved estimates of passenger spill: in particular, in restricted fare structures and for moderate amounts of spill, the model exhibits approximate relative errors on the order of 5%, a factor of two improvement over previous models.
by Michael Abramovich.
S.M.
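The first-choice baseline that the thesis improves upon estimates spill as E[max(D - C, 0)] under some demand distribution. A minimal Monte Carlo sketch (normal demand and all numbers invented), which by construction ignores the booking-control, sell-up, and recapture effects studied in the thesis:

    import numpy as np

    def expected_spill(mean, sigma, capacity, n=200_000, seed=2):
        # First-choice spill: demand turned away once capacity is exceeded.
        rng = np.random.default_rng(seed)
        d = np.maximum(rng.normal(mean, sigma, size=n), 0.0)
        return float(np.maximum(d - capacity, 0.0).mean())

    print(expected_spill(mean=150.0, sigma=30.0, capacity=170))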
27

Cushing, Judith Bayard. "Computational proxies : an object-based infrastructure for computational science /." Full text open access at:, 1995. http://content.ohsu.edu/u?/etd,195.

Full text of the source
28

Zhao, Shouqi. "Dependent risk modelling and ruin probability : numerical computation and applications." Thesis, City University London, 2014. http://openaccess.city.ac.uk/13702/.

Full text of the source
Abstract:
In this thesis, we are concerned with the finite-time ruin probabilities in two alternative dependent risk models, the insurance risk model and the dual risk model, including the numerical evaluation of the explicit expressions for these quantities and the application of the probabilistic results obtained. We first investigate the numerical properties of the formulas for the finite-time ruin probability derived by Ignatov and Kaishev (2000, 2004) and Ignatov et al. (2001) for a generalized insurance risk model allowing dependence. Efficient numerical algorithms are proposed for computing the ruin probability with a prescribed accuracy in order to facilitate the following studies. We then propose a new definition of alarm time in the insurance risk model, which generalizes that of Das and Kratz (2012), expressed in terms of the joint distribution of the time to ruin and the deficit at ruin. The alarm time is devised to warn that the future ruin probability within a finite-time window has reached a pre-specified critical level and capital injection is required. Under our definition, the implementation of the alarm time relies heavily on the computation of the finite-time ruin probability, which utilizes the previous results on computing the ruin probability with a prescribed accuracy. The results on the ruin probability and the alarm time then transfer nicely to a generalized dual risk model, whose name stems from its duality to the insurance risk model, through an enlightening link established between the two risk models. Finally, based on the two alternative risk models, we introduce a framework for analyzing the risk of system failure based on estimating the failure probability, and illustrate how the probabilistic models and results obtained can be applied as risk analytic tools in various practical risk assessment situations, such as systems reliability, inventory management, flood control via dam management, infectious disease spread and financial insolvency.
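For orientation, the quantity at the heart of the thesis can be estimated crudely by Monte Carlo in the classical compound Poisson model (independent exponential claims, Poisson arrivals); the parameters are invented, and the thesis's contribution is precisely the exact, dependence-aware alternatives to such brute-force estimates:

    import numpy as np

    def finite_time_ruin(u, c, lam, mu, horizon, n=20_000, seed=3):
        # Surplus process: u + c*t - (sum of claims arrived by time t).
        rng = np.random.default_rng(seed)
        ruined = 0
        for _ in range(n):
            t, claims = 0.0, 0.0
            while True:
                t += rng.exponential(1.0 / lam)   # next claim arrival
                if t > horizon:
                    break
                claims += rng.exponential(mu)     # claim size with mean mu
                if u + c * t - claims < 0:
                    ruined += 1
                    break
        return ruined / n

    print(finite_time_ruin(u=10.0, c=1.2, lam=1.0, mu=1.0, horizon=50.0))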
29

Булах, Богдан Вікторович. "Інфраструктура потоків задач на основі композиції грід-сервісів для автоматизованого схемотехнічного проектування" [Task workflow infrastructure based on grid service composition for automated circuit design]. Doctoral thesis, Kyiv, 2013. https://ela.kpi.ua/handle/123456789/6242.

Full text of the source
30

Schneck, Phyllis Adele. "Dynamic management of computation and communication resources to enable secure high-performance applications." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8264.

Full text of the source
31

Phu, Thi Vu. "A comparison of discrete and flow-based models for air traffic flow management." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45287.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008.
Includes bibliographical references (leaves 73-74).
The steady increase of congestion in air traffic networks has resulted in significant economic losses and potential safety issues in air transportation. A potential way to reduce congestion is to adopt efficient air traffic management policies, such as optimally scheduling and routing air traffic throughout the network. In recent years, several models have been proposed to predict and manage air traffic. This thesis focuses on the comparison of two such approaches to air traffic flow management: (i) a discrete mixed integer programming model, and (ii) a continuous flow-based model. The continuous model is applied in a multi-commodity setting to take into account the origins and destinations of the aircraft. Sequential quadratic programming is used to optimize the continuous model. A comparison of the performance of the two models based on a set of large-scale test cases is provided. Preliminary results suggest that the linear programming relaxation of the discrete model provides results similar to the continuous flow-based model for high volumes of air traffic.
by Thi Vu Phu.
S.M.
32

Gog, Ionel Corneliu. "Flexible and efficient computation in large data centres." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/271804.

Full text of the source
Abstract:
Increasingly, online computer applications rely on large-scale data analyses to offer personalised and improved products. These large-scale analyses are performed on distributed data processing execution engines that run on thousands of networked machines housed within an individual data centre. These execution engines provide, to the programmer, the illusion of running data analysis workflows on a single machine, and offer programming interfaces that shield developers from the intricacies of implementing parallel, fault-tolerant computations. Many such execution engines exist, but they embed assumptions about the computations they execute, or only target certain types of computations. Understanding these assumptions involves substantial study and experimentation. Thus, developers find it difficult to determine which execution engine is best, and even if they did, they become “locked in” because engineering effort is required to port workflows. In this dissertation, I first argue that in order to execute data analysis computations efficiently, and to flexibly choose the best engines, the way we specify data analysis computations should be decoupled from the execution engines that run the computations. I propose an architecture for decoupling data processing, together with Musketeer, my proof-of-concept implementation of this architecture. In Musketeer, developers express data analysis computations using their preferred programming interface. These are translated into a common intermediate representation from which code is generated and executed on the most appropriate execution engine. I show that Musketeer can be used to write data analysis computations directly, and these can execute on many execution engines because Musketeer automatically generates code that is competitive with optimised hand-written implementations. The diverse execution engines cause different workflow types to coexist within a data centre, opening up both opportunities for sharing and potential pitfalls for co-location interference. However, in practice, workflows are either placed by high-quality schedulers that avoid co-location interference, but choose placements slowly, or schedulers that choose placements quickly, but with unpredictable workflow run time due to co-location interference. In this dissertation, I show that schedulers can choose high-quality placements with low latency. I develop several techniques to improve Firmament, a high-quality min-cost flow-based scheduler, to choose placements quickly in large data centres. Finally, I demonstrate that Firmament chooses placements at least as good as other sophisticated schedulers, but at the speeds associated with simple schedulers. These contributions enable more efficient and effective use of data centres for large-scale computation than current solutions.
33

FONSECA, FABIANA LANZILLOTTA DA. "STORMWATER MANAGEMENT WITH WATERCOURSE VALORIZATION: COMPUTATIONAL SIMULATION OF THE TINTAS RIVER BASIN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=34965@1.

Full text of the source
Abstract:
O crescente processo de urbanização no Brasil se iniciou de forma rápida e desordenada causando inúmeros impactos sociais e no ambiente natural. As bacias hidrográficas vêm sendo modificadas com a expansão urbana, causando impactos negativos nas cidades, como a ocorrência de catástrofes associadas a eventos pluviais de alta intensidade. De forma a mitigar os danos de ordem social e ambiental advindos de enchentes, medidas compensatórias em manejo de águas pluviais tornam-se imperativas. O presente trabalho tem por objetivo apresentar técnicas de controle para a redução do escoamento superficial, integrando ações sustentáveis de valorização de cursos d água na paisagem, e promovendo o aumento da resiliência em centros urbanos, acompanhado de um gerenciamento e monitoramento satisfatório. Em um estudo de caso na macrodrenagem, na bacia hidrográfica do rio das Tintas, localizado na zona oeste da cidade do Rio de Janeiro, foi avaliado o comportamento hidrológico-hidráulico da calha através do modelo Storm Water Management Model (SWMM), prevendo-se a implantação de reservatório para amortecimento de cheias com fins múltiplos. O reservatório projetado, off-line, promoveu uma redução do pico do hidrograma de cheia da ordem de 11,6 por cento, de 74,8m(3)/s para 66,1m(3)/s na seção de deságue, no rio Sarapuí. Associado à implantação de medidas de baixo impacto (LID) e ações de valorização de cursos d água comprovou-se um aumento da resiliência e a consequente redução dos impactos advindos de enchentes urbanas na área de intervenção proposta.
The increasing urbanization process in Brazil began in a fast and disorderly way, causing numerous social and environmental impacts. Urban sprawl has modified the watersheds, causing negative impacts on the cities, such as the occurrence of catastrophes associated with storm events. In order to mitigate the social, environmental and financial damage caused by floods, compensatory measures in stormwater management become imperative. The goal of this work is to present alternatives and control techniques applied to drainage systems, contemplating sustainable actions to value the watercourses, integrating them into the landscape and promoting increased resilience in urban centres, followed by effective management and satisfactory monitoring.
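The peak-flow evaluation described above can be outlined with the SWMM engine's Python bindings. A minimal sketch using pyswmm, assuming a hypothetical input file tintas.inp with an outfall node id "sarapui"; this is not the study's actual model:

```python
# Minimal sketch, assuming a hypothetical SWMM input file "tintas.inp"
# with an outfall node id "sarapui"; not the study's actual model.
from pyswmm import Simulation, Nodes

peak_inflow = 0.0
with Simulation("tintas.inp") as sim:
    outfall = Nodes(sim)["sarapui"]
    for _ in sim:  # step through the simulation
        peak_inflow = max(peak_inflow, outfall.total_inflow)

print(f"peak inflow at outfall: {peak_inflow:.1f} m3/s")
```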
APA, Harvard, Vancouver, ISO, and other styles
34

Alsouri, Sami [Verfasser], Stefan [Akademischer Betreuer] Katzenbeisser, and Eric [Akademischer Betreuer] Bodden. "Behavior Compliance Control for More Trustworthy Computation Outsourcing / Sami Alsouri. Betreuer: Stefan Katzenbeisser ; Eric Bodden." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1110064357/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Alsouri, Sami [Verfasser], Stefan [Akademischer Betreuer] Katzenbeisser, and Eric [Akademischer Betreuer] Bodden. "Behavior Compliance Control for More Trustworthy Computation Outsourcing / Sami Alsouri. Betreuer: Stefan Katzenbeisser ; Eric Bodden." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-35789.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
36

Brown, Mikel J. "Using natural language for database queries /." Online version of thesis, 1985. http://hdl.handle.net/1850/9044.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Guergachi, Abdelaziz. "Uncertainty management in the activated sludge process, innovative applications of computational learning theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0016/NQ58278.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
38

Krause, Thilo. "Evaluating congestion management schemes in liberalized electricity markets applying agent-based computational economics /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16928&part=abstracts.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
39

Walsh, Jonathan A. (Jonathan Alan). "Computational methods for efficient nuclear data management in Monte Carlo neutron transport simulations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95570.

Full text of the source
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 127-133).
This thesis presents the development and analysis of computational methods for efficiently accessing and utilizing nuclear data in Monte Carlo neutron transport code simulations. Using the OpenMC code, profiling studies are conducted in order to determine the types of nuclear data that are used in realistic reactor physics simulations, as well as the frequencies with which those data are accessed. The results of the profiling studies are then used to motivate the conceptualization of a nuclear data server algorithm aimed at reducing on-node memory requirements through the use of dedicated server nodes for the storage of infrequently accessed data. A communication model for this algorithm is derived and used to make performance predictions given data access frequencies and assumed system hardware parameters. Additionally, a new, accelerated approach for rejection sampling the free gas resonance elastic scattering kernel that reduces the frequency of zero-temperature elastic scattering cross section data accesses is derived and implemented. Using this new approach, the runtime overhead incurred by an exact treatment of the free gas resonance elastic scattering kernel is reduced by more than 30% relative to a standard sampling procedure used by Monte Carlo codes. Finally, various optimizations of the commonly-used binary energy grid search algorithm are developed and demonstrated. Investigated techniques include placing kinematic constraints on the range of the searchable energy grid, index lookups on unionized material energy grids, and employing energy grid hash tables. The accelerations presented routinely result in overall code speedup by factors of 1.2-1.3 for simulations of practical systems.
by Jonathan A. Walsh.
S.M.
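The energy grid hash table mentioned in this abstract is simple to illustrate: a coarse table over log-energy bins bounds the range that the binary search must cover, so each lookup scans only a few grid points. A minimal sketch with an invented grid and table size; this is not the thesis code:

```python
# Illustrative sketch of a hashed energy-grid lookup: a coarse hash table
# over log-energy bins narrows the binary search range. The grid values
# and bin count are invented for the example.
import bisect
import math

energy_grid = [1e-5 * 1.05**i for i in range(500)]  # eV, ascending
N_BINS = 64
lo, hi = math.log(energy_grid[0]), math.log(energy_grid[-1])

# hash_table[b] = index of the first grid point at or above bin b's lower edge
hash_table = []
j = 0
for b in range(N_BINS + 1):
    edge = math.exp(lo + (hi - lo) * b / N_BINS)
    while j < len(energy_grid) and energy_grid[j] < edge:
        j += 1
    hash_table.append(j)

def grid_index(E):
    """Return i with energy_grid[i] <= E < energy_grid[i + 1]."""
    b = min(int((math.log(E) - lo) / (hi - lo) * N_BINS), N_BINS - 1)
    start, stop = hash_table[b], hash_table[b + 1] + 1
    return bisect.bisect_right(energy_grid, E, start, stop) - 1

i = grid_index(1.0)
assert energy_grid[i] <= 1.0 < energy_grid[i + 1]
```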
APA, Harvard, Vancouver, ISO, and other styles
40

Leidig, Jonathan Paul. "Epidemiology Experimentation and Simulation Management through Scientific Digital Libraries." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28759.

Full text of the source
Abstract:
Advances in scientific data management, discovery, dissemination, and sharing are changing the manner in which scientific studies are being conducted and repurposed. Data-intensive scientific practices increasingly require data management related services not available in existing digital libraries. Complicating the issue are the diversity of functional requirements and content in scientific domains as well as scientists' lack of expertise in information and library sciences. Researchers that utilize simulation and experimentation systems need digital libraries to maintain datasets, input configurations, results, analyses, and related documents. A digital library may be integrated with simulation infrastructures to provide automated support for research components, e.g., simulation interfaces to models, data warehouses, simulation applications, computational resources, and storage systems. Managing and provisioning simulation content allows streamlined experimentation, collaboration, discovery, and content reuse within a simulation community. Formal definitions of this class of digital libraries provide a foundation for producing a software toolkit and the semi-automated generation of digital library instances. We present a generic, component-based SIMulation-supporting Digital Library (SimDL) framework. The framework is formally described and provides a deployable set of domain-free services, schema-based domain knowledge representations, and extensible lower and higher level service abstractions. Services in SimDL are specialized for semi-structured simulation content and large-scale data producing infrastructures, as exemplified in data storage, indexing, and retrieval service implementations. Contributions to the scientific community include previously unavailable simulation-specific services, e.g., incentivizing public contributions, semi-automated content curating, and memoizing simulation-generated data products. The practicality of SimDL is demonstrated through several case studies in computational epidemiology and network science as well as performance evaluations.
Ph. D.
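One of the services named above, memoizing simulation-generated data products, can be sketched as content-addressed caching of results keyed by the input configuration. The paths and the run_simulation callable below are placeholders, not SimDL's API:

```python
# Sketch of memoizing simulation outputs by hashing the input
# configuration, one of the services a SimDL-style library provides.
# Cache path and run_simulation are invented placeholders.
import hashlib
import json
import pathlib

CACHE = pathlib.Path("simdl_cache")
CACHE.mkdir(exist_ok=True)

def memoized_run(config: dict, run_simulation):
    # Canonical JSON makes the hash stable across key orderings.
    key = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()
    out = CACHE / f"{key}.json"
    if out.exists():  # an identical configuration was already simulated
        return json.loads(out.read_text())
    result = run_simulation(config)
    out.write_text(json.dumps(result))
    return result
```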
APA, Harvard, Vancouver, ISO, and other styles
41

Verstak, Alexandre. "Data and Computation Modeling for Scientific Problem Solving Environments." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35299.

Full text of the source
Abstract:
This thesis investigates several issues in data and computation modeling for scientific problem solving environments (PSEs). A PSE is viewed as a software system that provides (i) a library of simulation components, (ii) experiment management, (iii) reasoning about simulations and data, and (iv) problem solving abstractions. Three specific ideas, in functionalities (ii)-(iv), form the contributions of this thesis. These include the EMDAG system for experiment management, the BSML markup language for data interchange, and the use of data mining for conducting non-trivial parameter studies. This work emphasizes data modeling and management, two important aspects that have been largely neglected in modern PSE research. All studies are performed in the context of S4W, a sophisticated PSE for wireless system design.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
42

Gip, Orreborn Jakob. "Asset-Liability Management within Life Insurance." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215339.

Full text of the source
Abstract:
In recent years, new regulations and stronger competition have further increased the importance of stochastic asset-liability management (ALM) models for life insurance firms. However, the often complex nature of life insurance contracts makes modeling a challenging task, and insurance firms often struggle with models quickly becoming too complicated and inefficient. There is therefore an interest in investigating whether certain traits of financial ratios could, in fact, be exposed through a more efficient model. In this thesis, a discrete-time stochastic model framework for the simulation of simplified balance sheets of life insurance products is proposed. The model is based on a two-factor stochastic capital market model, supports the most important product characteristics, and incorporates a reserve-dependent bonus declaration. Furthermore, a first approach to endogenously model customer transitions is proposed, where realized policy returns are used for assigning transition probabilities. The model's sensitivity to different input parameters, and its ability to capture the most important behaviour patterns, are demonstrated through scenario and sensitivity analyses. Based on the findings from these analyses, suggestions for improvements and further research are also presented.
The introduction of new regulations and increased competition have made stochastic ALM models increasingly important for life insurance companies. However, the often complex structure of insurance products complicates modelling, so that many models are considered too complicated and inefficient by the insurers. There is therefore an interest in investigating whether the properties of important financial ratios can be studied with a more efficient and less complicated model. This thesis proposes a framework for stochastic modelling of a simplified version of the balance sheet of a typical life insurance company. The model is based on a stochastic capital market model, with which both stock prices and interest rate levels are simulated. Furthermore, the model supports simulation of the most essential product characteristics, and models customer bonuses as a function of the collective funding ratio. The model's ability to capture the most important properties of the balance sheet components is examined using scenario and sensitivity analyses. It is further examined whether the model is sensitive to changes in various input data, with focus mainly devoted to the parameters that require more advanced estimation methods.
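A two-factor capital market model of the kind described can be sketched as a geometric Brownian motion equity index alongside a Vasicek short rate. All parameter values below are invented, and the factors are simulated independently; the thesis's actual dynamics may differ:

```python
# Sketch of a discrete-time two-factor capital market model:
# GBM equity index plus a Vasicek short rate. All parameter values
# are invented; the thesis's actual model may differ.
import numpy as np

rng = np.random.default_rng(0)
T, dt = 10, 1 / 12                       # ten years, monthly steps
n = int(T / dt)
mu, sigma_s = 0.06, 0.15                 # equity drift / volatility
kappa, theta, sigma_r = 0.3, 0.02, 0.01  # Vasicek mean reversion parameters

S, r = np.empty(n + 1), np.empty(n + 1)
S[0], r[0] = 100.0, 0.01
for t in range(n):
    dW_s, dW_r = rng.normal(0.0, np.sqrt(dt), size=2)  # independent shocks
    S[t + 1] = S[t] * np.exp((mu - 0.5 * sigma_s**2) * dt + sigma_s * dW_s)
    r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + sigma_r * dW_r

print(f"final index level {S[-1]:.1f}, final short rate {r[-1]:.3%}")
```

Simulated paths like these drive the asset side of the balance sheet, while the reserve-dependent bonus declaration links realized returns back to the liabilities.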
APA, Harvard, Vancouver, ISO, and other styles
43

Svensson, Frida. "Scalable Distributed Reinforcement Learning for Radio Resource Management." Thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177822.

Full text of the source
Abstract:
There is large potential for automation and optimization in radio access networks (RANs) using a data-driven approach to efficiently handle the increase in complexity due to the steep growth in traffic and the new technologies introduced with the development of 5G. Reinforcement learning (RL) has natural applications in RAN control loops, such as link adaptation, interference management and power control, at the different timescales commonly occurring in the RAN context. Elevating the status of data-driven solutions in the RAN, and building a new, scalable, distributed and data-friendly RAN architecture, will be needed to competitively tackle the challenges of coming 5G networks. In this work, we propose a systematic, efficient and robust methodology for applying RL to different control problems. The proposed methodology is first evaluated on a well-known control problem, and then adapted to a real-world RAN scenario. Extensive simulation results are provided to show the effectiveness and potential of the proposed approach. The methodology was successfully created, but the results on the RAN simulator were not yet mature.
There is great potential for automation and optimization in radio access networks (RANs) by using data-driven solutions to efficiently handle the increased complexity caused by traffic growth and the new technologies introduced with 5G. Reinforcement learning (RL) has natural connections to control problems at different timescales, such as link adaptation, interference management and power control, which commonly occur in radio networks. Elevating the status of data-driven solutions in radio networks will be necessary to handle the challenges that arise with future 5G networks. In this work, we propose a systematic methodology for applying RL to a control problem. The proposed methodology is first applied to a well-known control problem, and is then adapted to a real RAN scenario. The work includes extensive simulation results to show the effectiveness and potential of the proposed method. A successful methodology was created, but the results on the RAN simulator lacked maturity.
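The first stage of such a methodology, applying RL to a well-known control problem, can be illustrated with tabular Q-learning on a toy environment. The environment and all constants below are invented; the thesis targets far larger, simulator-based problems:

```python
# Tabular Q-learning on a toy chain environment, a stand-in for the
# "well-known control problem" stage of such a methodology; not the
# thesis's distributed RAN setup. Environment and constants invented.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(s, a):
    """Toy chain: action 1 moves right, reward while at the right end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

for _ in range(2000):                    # episodes
    s = 0
    for _ in range(20):                  # steps per episode
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, reward = step(s, a)
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))  # expected learned policy: move right everywhere
```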
APA, Harvard, Vancouver, ISO, and other styles
44

Grundke, Peter. "Integrated market and credit portfolio models : risk measurement and computational aspects." Wiesbaden : Gabler, 2006. http://d-nb.info/987215159/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Mabotuwana, Thusitha Dananjaya De Silva. "ChronoMedIt : a computational quality audit framework for better management of patients with chronic disease." Thesis, University of Auckland, 2010. http://hdl.handle.net/2292/6034.

Full text of the source
Abstract:
Chronic disease is a leading cause of death worldwide, accounting for around 60% of all deaths. An important aspect of successful chronic disease management is quality audit and feedback to clinicians. However, due to the complex temporal relationships inherent in chronic disease, formulating clinically relevant queries is difficult using the querying tools often built into commercial practice management systems. The onset of this PhD research involved working with staff of a general practice clinic to develop a set of explicit quality audit indicators for blood pressure control. Eight indicators were identified as most relevant to the practice. The ability to compute these indicators reliably from routinely collected electronic medical records (EMRs) was validated by clinical panel assessment. These eight indicators informed the formulation of a model of chronic disease audit with four broad classes of indicators: (1) persistence to indicated medication; (2) timely measurement recording; (3) time to achieve target; and (4) measurement contraindicating therapy. The four broad indicator classes have been implemented within the ChronoMedIt (Chronological Medical audIt) framework as an extensible and configurable architecture. The main components of the ChronoMedIt architecture are: an XML-based specification for indicator formulation (with an associated XML-Schema), a drug and classification knowledge base maintained using Semantic Web technologies, a C#-based criteria processing engine, a SQL Server-based patient database with related stored procedures, and a graphical user interface to formulate queries and generate reports. ChronoMedIt can produce patient-specific audit reports as well as reports to benchmark an entire practice for a given evaluation period. A visualisation tool has been developed to provide an alternate representation of patient prescribing and measurement histories. By modifying the indicator specification and knowledge base, an analyst can address a wide array of chronic disease management queries as specific instances of the four broad indicator classes. The framework's core computation has been verified using redundant query implementations on a battery of simulated case data and is illustrated against the EMRs of several practices. ChronoMedIt has been applied in several real-world settings; notably, identifying patients with poor antihypertensive medication adherence profiles for a feasibility study of nurse-led adherence promotion.
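The first indicator class, persistence to indicated medication, is commonly computed as a medication possession ratio (MPR) over an audit window. A minimal sketch with invented records and threshold; this is not ChronoMedIt's XML-driven implementation:

```python
# Sketch of a persistence-to-medication indicator as a medication
# possession ratio (MPR) over an audit window. Record layout, data,
# and the 0.8 threshold are invented for the example.
from datetime import date

prescriptions = [  # (dispensing date, days supplied); invented data
    (date(2024, 1, 1), 30),
    (date(2024, 2, 5), 30),
    (date(2024, 3, 20), 30),
]
window_start, window_end = date(2024, 1, 1), date(2024, 4, 1)

covered_days = sum(days for _, days in prescriptions)
window_days = (window_end - window_start).days
mpr = covered_days / window_days
flagged = mpr < 0.8  # a commonly used adherence threshold; an assumption here
print(f"MPR = {mpr:.2f}, flag for follow-up: {flagged}")
```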
APA, Harvard, Vancouver, ISO, and other styles
46

Östberg, Per-Olov. "Virtual infrastructures for computational science: software and architectures for distributed job and resource management." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-42428.

Full text of the source
Abstract:
In computational science, the scale of problems addressed and the resolution of solutions achieved are often limited by the available computational capacity. The current methodology of scaling computational capacity to large scale (i.e. larger than individual resource site capacity) includes aggregation and federation of distributed resource systems. Regardless of how this aggregation manifests, scaling of scientific computational problems typically involves (re)formulation of computational structures and problems to exploit problem and resource parallelism. Efficient parallelization and scaling of scientific computations to large scale is difficult, and is further complicated by a number of factors introduced by resource aggregation, e.g., resource heterogeneity and coupling of computational methodology. Scaling complexity severely impacts computation enactment and necessitates the use of mechanisms that provide higher abstractions for management of computations in distributed computing environments. This work addresses the design and construction of virtual infrastructures for scientific computation that abstract computation enactment complexity, decouple computation specification from computation enactment, and facilitate large-scale use of computational resource systems. In particular, this thesis discusses job and resource management in distributed virtual scientific infrastructures intended for Grid and Cloud computing environments. The main area studied is Grid computing, which is approached using Service-Oriented Computing and Architecture methodology. Thesis contributions discuss both methodology and mechanisms for construction of virtual infrastructures, and address individual problems such as job management, application integration, scheduling and job prioritization, and service-based software development. In addition to scientific publications, this work also makes contributions in the form of software artifacts that demonstrate the concepts discussed. The Grid Job Management Framework (GJMF) abstracts job enactment complexity and provides a range of middleware-agnostic job submission, control, and monitoring interfaces. The FSGrid framework provides a generic model for specification and delegation of resource allocations in virtual organizations, and enacts allocations based on distributed fairshare job prioritization. Mechanisms such as these decouple job and resource management from computational infrastructure systems and facilitate the construction of scalable virtual infrastructures for computational science.
In computational science, the amount of available computing capacity often limits both the size of the problems that can be attempted and the quality of the solutions that can be achieved. Methodology for scaling computational capacity to large scale (i.e., beyond the capacity of individual resource centres) is currently based on aggregation and federation of distributed computational resources. Regardless of how this resource aggregation manifests itself, scaling scientific computations to large scale tends to involve reformulating problems and computational structures to better exploit problem and resource parallelism. Efficient parallelization and scaling of scientific computations is difficult, and is further complicated by factors that accompany resource aggregation, e.g., heterogeneity in resource environments and dependencies in programming models and computational methods. This illustrates the complexity of enacting computations and the need for mechanisms that offer higher levels of abstraction for managing computations in distributed computing environments. This thesis discusses the design and construction of virtual computational infrastructures that abstract the complexity of enacting computations, decouple the design of computations from their enactment, and facilitate large-scale use of computational resources for scientific computing. In particular, it addresses job and resource management in distributed virtual scientific infrastructures intended for Grid and Cloud computing environments. The main area of the thesis is Grid computing, which is approached with service-oriented computing and architecture methodology. The work discusses methodology and mechanisms for constructing virtual computational infrastructures, and makes contributions in individual areas such as job management, application integration, job prioritization and service-based software development. In addition to scientific publications, this work also contributes software systems that illustrate the methods discussed. The Grid Job Management Framework (GJMF) abstracts the complexity of managing computational jobs and offers a set of middleware-agnostic interfaces for submission, control and monitoring of computational jobs in distributed computing environments. FSGrid offers a generic model for specification and delegation of resource allocations in virtual organizations, and is based on distributed fairshare job prioritization. Mechanisms such as these decouple job and resource management from physical infrastructure systems and facilitate the construction of scalable virtual infrastructures for computational science.
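The middleware-agnostic job interfaces described can be sketched as one abstract surface with pluggable backends. The class and method names below are invented for illustration; GJMF itself is a service-based framework, not this Python API:

```python
# Sketch of a middleware-agnostic job interface in the spirit of GJMF:
# one abstract surface, pluggable backends. Names are invented.
from abc import ABC, abstractmethod
import subprocess

class JobBackend(ABC):
    """Abstract job surface; concrete middleware backends plug in below it."""
    @abstractmethod
    def submit(self, executable: str, args: list[str]) -> str: ...
    @abstractmethod
    def status(self, job_id: str) -> str: ...

class LocalBackend(JobBackend):
    """Trivial backend that runs jobs as local processes (POSIX)."""
    def __init__(self):
        self._procs = {}

    def submit(self, executable, args):
        proc = subprocess.Popen([executable, *args])
        self._procs[str(proc.pid)] = proc
        return str(proc.pid)

    def status(self, job_id):
        return "RUNNING" if self._procs[job_id].poll() is None else "DONE"

backend: JobBackend = LocalBackend()  # a Grid or Cloud backend would slot in here
job_id = backend.submit("echo", ["hello"])
print(job_id, backend.status(job_id))
```

The point of the pattern is that client code depends only on JobBackend, so the underlying middleware can change without touching the computation specification.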
APA, Harvard, Vancouver, ISO, and other styles
47

Balachandran, Libish Kalathil. "Computational workflow management for conceptual design of complex systems : an air-vehicle design perspective." Thesis, Cranfield University, 2007. http://dspace.lib.cranfield.ac.uk/handle/1826/5070.

Full text of the source
Abstract:
The decisions taken during the aircraft conceptual design stage are of paramount importance since these commit up to eighty percent of the product life cycle costs. Thus in order to obtain a sound baseline which can then be passed on to the subsequent design phases, various studies ought to be carried out during this stage. These include trade-off analysis and multidisciplinary optimisation performed on computational processes assembled from hundreds of relatively simple mathematical models describing the underlying physics and other relevant characteristics of the aircraft. However, the growing complexity of aircraft design in recent years has prompted engineers to substitute the conventional algebraic equations with compiled software programs (referred to as models in this thesis) which still retain the mathematical models, but allow for a controlled expansion and manipulation of the computational system. This tendency has posed the research question of how to dynamically assemble and solve a system of non-linear models. In this context, the objective of the present research has been to develop methods which significantly increase the flexibility and efficiency with which the designer is able to operate on large scale computational multidisciplinary systems at the conceptual design stage. In order to achieve this objective a novel computational process modelling method has been developed for generating computational plans for a system of non-linear models. The computational process modelling was subdivided into variable flow modelling, decomposition and sequencing. A novel method named Incidence Matrix Method (IMM) was developed for variable flow modelling, which is the process of identifying the data flow between the models based on a given set of input variables. This method has the advantage of rapidly producing feasible variable flow models, for a system of models with multiple outputs. In addition, criteria were derived for choosing the optimal variable flow model which would lead to faster convergence of the system. Cont/d.
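The sequencing step described above, ordering models so that each runs only after its inputs exist, can be sketched as a topological sort over a model-to-producer dependency graph. The models and variables below are invented; this illustrates the general idea behind incidence-matrix approaches, not the thesis's IMM algorithm:

```python
# Sketch of variable-flow sequencing for a system of models: map each
# variable to the model that produces it, then order models so each runs
# after its inputs are available. Models and variables are invented.
from graphlib import TopologicalSorter

models = {  # model -> (input variables, output variables)
    "aero":     ({"wing_area", "speed"}, {"lift", "drag"}),
    "weights":  ({"lift"},               {"mass"}),
    "range_eq": ({"drag", "mass"},       {"range"}),
}
given = {"wing_area", "speed"}  # design variables supplied by the user

producer = {v: m for m, (_, outs) in models.items() for v in outs}
deps = {
    m: {producer[v] for v in ins if v not in given}
    for m, (ins, _) in models.items()
}
print(list(TopologicalSorter(deps).static_order()))
# ['aero', 'weights', 'range_eq'] -- a feasible computational plan
```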
APA, Harvard, Vancouver, ISO, and other styles
48

Ghorasi, Rahim. "An intelligent data management system for computational modelling of pollutants transport in river networks." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/13342.

Full text of the source
Abstract:
Utilization of sophisticated hydroinformatic systems depends on the optimum use of appropriate input data. The main focus of the present project has been the development of a number of auxiliary data handling modules for a previously constructed IPT system. This system is an elaborate piece of software capable of modelling a wide range of hydro-environmental phenomena. The data required for the implementation of the IPT system consist of hydrographic and physical data regarding the geometry of the hydro-environmental system, together with boundary and initial conditions. Due to the inevitably high cost of these surveys, in most cases the available data are scarce, which reduces the efficiency of computer models. In this respect, developing ever more sophisticated computer modelling schemes is ineffective, and techniques for enhancing the quality of the input data must be employed. The aim of this project has been to eliminate such difficulties through the use of modern IT techniques such as the optimization of physical data by genetic algorithms, fuzzy logic procedures, intelligent databases and case-based reasoning. To demonstrate the efficiency of the developed data handling modules, a number of hydro-environmental problems in tidal river networks have been solved. It is shown that the modules developed in this project have general applicability and can be used to assist computer-aided design and systems management in a wide variety of cases.
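The genetic-algorithm optimization of physical data mentioned above can be sketched as calibrating a single parameter against an observed value. The toy model and all numbers are invented, not the thesis's system:

```python
# Tiny genetic-algorithm sketch for calibrating a physical parameter
# (say, a channel roughness coefficient) against an observed water level.
# The toy model and numbers are invented; illustrative only.
import random

random.seed(0)
observed_level = 2.4  # invented observation

def fitness(roughness):
    predicted = 1.0 + 10.0 * roughness  # toy stand-in for a hydraulic model
    return -abs(predicted - observed_level)

population = [random.uniform(0.01, 0.2) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # selection
    children = [
        (random.choice(parents) + random.choice(parents)) / 2  # crossover
        + random.gauss(0, 0.005)                               # mutation
        for _ in range(10)
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"calibrated roughness ~ {best:.3f}")  # converges near 0.14
```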
APA, Harvard, Vancouver, ISO, and other styles
49

Aladesanmi, Ereola Johnson. "Non intrusive load monitoring & identification for energy management system using computational intelligence approach." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/13561.

Full text of the source
Abstract:
Includes bibliography.
Electrical energy is the lifeline of every nation's or continent's development and economic progress. Owing to the recent growth in the demand for electricity and the shortage in production, it is indispensable to develop strategies for effective energy management and system delivery. Load monitoring, both intrusive and non-intrusive, together with the identification of domestic electrical appliances, is proposed especially at the residential level, since households are the major energy consumers. Intrusive load monitoring provides accurate results and would allow each individual appliance's energy consumption to be transmitted to a central hub. Nevertheless, there are many practical disadvantages to this method that have motivated the introduction of non-intrusive load monitoring systems. The fiscal cost of manufacturing and installing enough monitoring devices to match the number of domestic appliances is one such disadvantage. In addition, the installation of one meter per household appliance would lead to congestion in the house and thus cause inconvenience to the occupants; the non-intrusive load monitoring technique was therefore developed to alleviate the aforementioned challenges of intrusive load monitoring. Non-intrusive load monitoring (NILM) is the process of disaggregating a household's total energy consumption into its contributing appliances. The total household load is monitored via a single monitoring device such as a smart meter (SM). NILM provides a cost-effective and convenient means of load monitoring and identification. Several non-intrusive load monitoring and identification techniques are reviewed. However, the literature lacks a comprehensive system that can identify appliances with small energy consumption, appliances with overlapping energy consumption, and a group of appliance ranges at once. This has been the major setback to most of the adopted techniques. In this dissertation, we propose techniques that overcome these setbacks by combining artificial neural networks (ANN) with a developed algorithm to identify appliance ranges that contribute to the energy consumption within a given period of time, usually an hour interval.
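Framed as supervised learning, the ANN-based identification described can be sketched with synthetic aggregate loads and a scikit-learn MLP. The appliances, data, and network below are invented stand-ins for the thesis's ANN-plus-algorithm combination:

```python
# Sketch of NILM as supervised learning: classify which appliance
# combination is active from the aggregate power reading. Synthetic data
# and a scikit-learn MLP stand in for the thesis's approach.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
appliances = np.array([60.0, 150.0, 2000.0])  # watts: lamp, TV, kettle

# Random on/off combinations -> noisy aggregate meter readings
states = rng.integers(0, 2, size=(2000, 3))
aggregate = states @ appliances + rng.normal(0.0, 10.0, size=2000)
labels = states @ np.array([1, 2, 4])  # encode each combination as an int

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(aggregate.reshape(-1, 1), labels)

test = np.array([[2150.0]])   # roughly TV + kettle
print(clf.predict(test))      # expected: [6], i.e. states (lamp off, TV on, kettle on)
```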
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Zhiyong S. M. Massachusetts Institute of Technology. "A computational method and software development for make-to-order pricing optimization." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35097.

Full text of the source
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2006.
Includes bibliographical references (p. 34).
High variability of demand and inflexible capacity are inevitable in make-to-order production despite its cost savings. A computational method is proposed in this thesis to exploit pricing opportunities arising from the price elasticity of demand and up-to-date order transactions. The possibility of developing software based on this pricing-optimization method was also considered. Based on experiments conducted with a software prototype, we concluded that, using the proposed computational method and software developed following it, pricing optimization was able to increase the revenue of a make-to-order operation with acceptable performance and scalability.
by Zhiyong Wang.
S.M.
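The role of price elasticity in such a method can be illustrated with the textbook constant-elasticity case, where the profit-maximising price has a closed form. All numbers below are invented, and the thesis's computational method is more general than this:

```python
# Sketch of elasticity-based price optimisation: with demand
# D(p) = a * p**(-e) and unit cost c, profit (p - c) * D(p) is
# maximised at p* = c * e / (e - 1) for elasticity e > 1.
# Parameter values are invented for the example.
a, e, c = 5000.0, 2.5, 40.0  # demand scale, price elasticity, unit cost

p_star = c * e / (e - 1)     # first-order condition of the profit function
demand = a * p_star ** (-e)
profit = (p_star - c) * demand
print(f"optimal price {p_star:.2f}, expected profit {profit:.1f}")
```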
APA, Harvard, Vancouver, ISO, and other styles