
Dissertations / Theses on the topic 'Synthesis of Probabilistic Programs'



Consult the top 50 dissertations / theses for your research on the topic 'Synthesis of Probabilistic Programs.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Escalante, Marco Antonio. "Probabilistic timing verification and timing analysis for synthesis of digital interface controllers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0023/NQ36637.pdf.

2

Gretz, Friedrich [Verfasser], Joost-Pieter [Akademischer Betreuer] Katoen, and Sriram [Akademischer Betreuer] Sankaranarayanan. "Semantics and loop invariant synthesis for probabilistic programs / Friedrich Gretz ; Joost-Pieter Katoen, Sriram Sankaranarayanan." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1126278491/34.

3

Schoner, Bernd, 1969-. "Probabilistic characterization and synthesis of complex driven systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/62352.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Includes bibliographical references (leaves 194-204).
Real-world systems that have characteristic input-output patterns but don't provide access to their internal states are as numerous as they are difficult to model. This dissertation introduces a modeling language for estimating and emulating the behavior of such systems given time series data. As a benchmark test, a digital violin is designed from observing the performance of an instrument. Cluster-weighted modeling (CWM), a mixture density estimator around local models, is presented as a framework for function approximation and for the prediction and characterization of nonlinear time series. The general model architecture and estimation algorithm are presented and extended to system characterization tools such as estimator uncertainty, predictor uncertainty and the correlation dimension of the data set. Furthermore, a real-time implementation, a hidden Markov architecture, and function approximation under constraints are derived within the framework. CWM is then applied in the context of different problems and data sets, leading to architectures such as cluster-weighted classification, cluster-weighted estimation, and cluster-weighted sampling. Each application relies on a specific data representation, specific pre- and post-processing algorithms, and a specific hybrid of CWM. The third part of this thesis introduces data-driven modeling of acoustic instruments, a novel technique for audio synthesis. CWM is applied along with new sensor technology and various audio representations to estimate models of violin-family instruments. The approach is demonstrated by synthesizing highly accurate violin sounds given off-line input data as well as cello sounds given real-time input data from a cello player.
by Bernd Schoner.
Ph.D.
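To make the cluster-weighted modeling (CWM) idea above concrete, here is a minimal sketch of CWM prediction, in which Gaussian clusters over the input space gate local linear models. All parameter values are invented for illustration; in the thesis they are fit with EM from time-series data.

```python
import numpy as np

# Sketch of cluster-weighted prediction: K local linear models
# f_k(x) = a_k * x + b_k, gated by Gaussian input densities.
# Parameters are invented; CWM fits them with EM from data.
mu = np.array([0.0, 2.0, 4.0])   # cluster means in input space
s2 = np.array([0.5, 0.5, 0.5])   # cluster variances
w = np.array([0.3, 0.4, 0.3])    # cluster weights p(k)
a = np.array([1.0, -0.5, 2.0])   # local linear slopes
b = np.array([0.0, 3.0, -6.0])   # local linear intercepts

def predict(x):
    # responsibility of each cluster for the input x
    dens = w * np.exp(-0.5 * (x - mu) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    r = dens / dens.sum()
    # expected output: responsibility-weighted mix of the local models
    return np.sum(r * (a * x + b))

print(predict(1.0))
```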
4

Stupinský, Šimon. "Pokročilé metody pro syntézu pravděpodobnostních programů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445587.

Abstract:
Probabilistic programs play a crucial role in various engineering domains, such as computer networks, embedded systems, power-management policies, or software product lines. PAYNT is a tool for the automated synthesis of probabilistic programs satisfying given specifications. In this thesis we extend this tool, in particular with support for optimal synthesis and for synthesis against multiple specifications. We further designed and implemented a new method that can efficiently synthesize continuous-domain parameters affecting the transition probabilities alongside the synthesis of the program topology, i.e., support for synthesizing the topology and the parameters simultaneously. We demonstrate the usefulness and performance of PAYNT on a wide range of case studies from different application domains with real-world relevance. On difficult synthesis problems, PAYNT can significantly reduce the run time from days to minutes while guaranteeing the completeness of the synthesis process.
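The synthesis question PAYNT answers can be pictured as follows: a "sketch" leaves holes in a Markov chain, and the synthesizer looks for hole values meeting a specification. The toy family, probabilities and threshold below are invented, and real tools avoid this brute-force enumeration.

```python
import itertools

# Toy family of Markov chains over {s0, s1, goal}: two holes choose
# transition probabilities; keep members with P(reach goal) >= 0.8.
HOLE1 = [0.7, 0.8, 0.9]   # candidate probabilities for s0 -> s1
HOLE2 = [0.5, 0.9]        # candidate probabilities for s1 -> goal

def reach_prob(p1, p2, iters=1000):
    x0 = x1 = 0.0                     # P(reach goal) from s0 and s1
    for _ in range(iters):            # fixed-point (Jacobi) iteration
        x0, x1 = p1 * x1, p2 + (1 - p2) * x0
    return x0

for p1, p2 in itertools.product(HOLE1, HOLE2):
    if reach_prob(p1, p2) >= 0.8:
        print("satisfying family member:", p1, p2)
```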
5

Marcin, Vladimír. "GPU-akcelerovná syntéza pravděpodobnostních programů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445566.

Abstract:
In this thesis we address the problem of the automated synthesis of probabilistic programs: given a finite family of candidate programs, we want to efficiently identify a program that satisfies a given specification. Even the simplest synthesis problems of this kind are NP-hard in practice. Progress in this area has been made by the tool Paynt, which uses a novel integrated method for the synthesis of probabilistic programs. Although this approach can cope effectively with the exponential growth of the candidate families, there remains the problem caused by the exponential growth of the individual members of these families. To tackle this problem as well, we implemented GPU-oriented algorithms for the verification of candidate programs (models) that parallelize the task at the state level of the probabilistic models. Under certain conditions, the overall speedup achieved by this approach comes close to the theoretical limit of the possible acceleration of the synthesis process.
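The state-level parallelism described here can be pictured with one vectorized value-iteration step that updates the reachability values of all states at once; numpy stands in for a GPU kernel, and the four-state chain is invented.

```python
import numpy as np

# One value-iteration step updates every state simultaneously; on a GPU
# each state would map to a thread. The transition matrix is invented.
P = np.array([[0.0, 0.9, 0.1, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0, 0.0],   # target state (absorbing)
              [0.0, 0.0, 0.0, 1.0]])  # sink state (absorbing)
x = np.array([0.0, 0.0, 1.0, 0.0])    # 1.0 on the target state

for _ in range(100):
    x = P @ x                         # all states updated in parallel
print(x[0])                           # P(reach target) from state 0: 0.55
```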
6

Angelopoulos, Nicos. "Probabilistic finite domains." Thesis, City University London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342823.

7

Faria, Francisco Henrique Otte Vieira de. "Learning acyclic probabilistic logic programs from data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-27022018-090821/.

Abstract:
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for a probabilistic acyclic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel techniques that lead to orders of magnitude improvements in the current state-of-art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
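The flavor of the programs being learned can be shown on an invented ProbLog-style example; the brute-force enumeration below is only for illustration (ProbLog itself uses knowledge compilation).

```python
import itertools

# Invented acyclic probabilistic logic program:
#   0.6 :: burglary.   0.2 :: earthquake.
#   alarm :- burglary.   alarm :- earthquake.
# P(alarm) is computed by enumerating the independent probabilistic facts.
facts = {"burglary": 0.6, "earthquake": 0.2}

def alarm(world):
    return world["burglary"] or world["earthquake"]

p_alarm = 0.0
names = list(facts)
for values in itertools.product([True, False], repeat=len(names)):
    world = dict(zip(names, values))
    weight = 1.0
    for n in names:
        weight *= facts[n] if world[n] else 1.0 - facts[n]
    if alarm(world):
        p_alarm += weight
print(p_alarm)   # 1 - 0.4 * 0.8 = 0.68
```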
8

Paige, Timothy Brooks. "Automatic inference for higher-order probabilistic programs." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:d912c4de-4b08-4729-aa19-766413735e2a.

Abstract:
Probabilistic models used in quantitative sciences have historically co-evolved with methods for performing inference: specific modeling assumptions are made not because they are appropriate to the application domain, but because they are required to leverage existing software packages or inference methods. The intertwined nature of modeling and computational concerns leaves much of the promise of probabilistic modeling out of reach for data scientists, forcing practitioners to turn to off-the-shelf solutions. The emerging field of probabilistic programming aims to reduce the technical and cognitive overhead for writing and designing novel probabilistic models, by introducing a specialized programming language as an abstraction barrier between modeling and inference. The aim of this thesis is to develop inference algorithms that scale well and are applicable to broad model families. We focus particularly on methods that can be applied to models written in general-purpose higher-order probabilistic programming languages, where programs may make use of recursion, arbitrary deterministic simulation, and higher-order functions to create more accurate models of an application domain. In a probabilistic programming system, probabilistic models are defined using a modeling language; a backend implements generic inference methods applicable to any model written in this language. Probabilistic programs - models - can be written without concern for how inference will later be performed. We begin by considering several existing probabilistic programming languages, their design choices, and tradeoffs. We then demonstrate how programs written in higher-order languages can be used to define coherent probability models, describing possible approaches to inference, and providing explicit algorithms for efficient implementations of both classic and novel inference methods based on and extending sequential Monte Carlo. This is followed by an investigation into the use of variational inference methods within higher-order probabilistic programming languages, with application to policy learning, adaptive importance sampling, and amortization of inference.
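The sequential Monte Carlo methods discussed build on weighting samples from the prior by observation likelihoods; the sketch below shows the single-observation special case (plain importance sampling) on an invented Gaussian model.

```python
import math, random

# Importance sampling for a tiny probabilistic "program": sample a
# latent mu from its prior, weight by the likelihood of the observation,
# and estimate the posterior mean. SMC extends this with resampling at
# each observation point. Model and numbers are invented.
random.seed(0)
obs = 1.2
weighted_sum = total_weight = 0.0
for _ in range(100_000):
    mu = random.gauss(0.0, 1.0)                   # prior sample
    w = math.exp(-0.5 * ((obs - mu) / 0.5) ** 2)  # likelihood weight
    weighted_sum += w * mu
    total_weight += w
print(weighted_sum / total_weight)                # approx. 0.96
```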
9

Crubillé, Raphaëlle. "Behavioural distances for probabilistic higher-order programs." Thesis, Sorbonne Paris Cité, 2019. http://www.theses.fr/2019USPCC084.

Abstract:
The present thesis is devoted to the study of behavioural equivalences and distances for higher-order probabilistic programs. The manuscript is divided into three parts. In the first one, higher-order probabilistic languages are presented, as well as how to compare such programs with context equivalence and context distance. The second part follows an operational approach with the aim of building equivalences and metrics that are easier to handle than their contextual counterparts. We take as our starting point the two behavioural equivalences introduced by Dal Lago, Sangiorgi and Alberti for the probabilistic lambda-calculus equipped with a call-by-name evaluation strategy: trace equivalence and bisimulation equivalence. These authors showed that, for their language, trace equivalence completely characterizes context equivalence (i.e. is fully abstract), while probabilistic bisimulation is a sound approximation of context equivalence but is not fully abstract. In the operational part of the present thesis, we show that probabilistic bisimulation becomes fully abstract when we replace the call-by-name paradigm by the call-by-value one. The remainder of this part is devoted to a quantitative generalization of trace equivalence, i.e. a trace distance on programs. We first introduce a trace distance for an affine probabilistic lambda-calculus (where a function can use its argument at most once), and then for a more general probabilistic lambda-calculus where functions have the ability to duplicate their arguments. In both cases, we show that these trace distances are fully abstract. In the third part, two denotational models of higher-order probabilistic languages are considered: Danos and Ehrhard's model based on probabilistic coherence spaces, which interprets the language PCF enriched with discrete probabilities, and Ehrhard, Pagani and Tasson's model based on measurable cones and measurable stable functions, which interprets PCF equipped with continuous probabilities. The present thesis establishes two results on the structure of these models. We first show that the exponential comonad of the category of probabilistic coherence spaces can be expressed using the free commutative comonoid: this is a genericity result for this category seen as a model of Linear Logic. The second result clarifies the connection between these two models: we show that the category of measurable cones and measurable stable functions is a conservative extension of the co-Kleisli category of probabilistic coherence spaces. This means that the recently introduced model of Ehrhard, Pagani and Tasson can be seen as the generalization to the continuous case of the model of PCF with discrete probabilities in probabilistic coherence spaces.
10

Kwan, Victor. "A predicative model for probabilistic specifications and programs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0027/MQ40745.pdf.

11

Stuhlmüller, Andreas. "Modeling cognition with probabilistic programs : representations and algorithms." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100860.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 167-176).
This thesis develops probabilistic programming as a productive metaphor for understanding cognition, both with respect to mental representations and the manipulation of such representations. In the first half of the thesis, I demonstrate the representational power of probabilistic programs in the domains of concept learning and social reasoning. I provide examples of richly structured concepts, defined in terms of systems of relations, subparts, and recursive embeddings, that are naturally expressed as programs and show initial experimental evidence that they match human generalization patterns. I then proceed to models of reasoning about reasoning, a domain where the expressive power of probabilistic programs is necessary to formalize our intuitive domain understanding due to the fact that, unlike previous formalisms, probabilistic programs allow conditioning to be represented in a model, not just applied to a model. I illustrate this insight with programs that model nested reasoning in game theory, artificial intelligence, and linguistics. In the second half, I develop three inference algorithms with the dual intent of showing how to efficiently compute the marginal distributions defined by probabilistic programs, and providing building blocks for process-level accounts of human cognition. First, I describe a Dynamic Programming algorithm for computing the marginal distribution of discrete probabilistic programs by compiling to systems of equations and show that it can make inference in models of "reasoning about reasoning" tractable by merging and reusing subcomputations. Second, I introduce the setting of amortized inference and show how learning inverse models lets us leverage samples generated by other inference algorithms to compile probabilistic models into fast recognition functions. Third, I develop a generic approach to coarse-to-fine inference in probabilistic programs and provide evidence that it can speed up inference in models with large state spaces that have appropriate hierarchical structure. Finally, I substantiate the claim that probabilistic programming is a productive metaphor by outlining new research questions that have been opened up by this line of investigation.
by Andreas Stuhlmüller.
Ph. D.
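The first of the three algorithms can be given a minimal flavor: compute the exact marginal of a discrete probabilistic program by enumeration while memoizing shared subcomputations. The coin-flip program is an invented stand-in, not one of the thesis's examples.

```python
from fractions import Fraction
from functools import lru_cache

# Exact marginal of a discrete probabilistic program by enumeration,
# with memoization merging and reusing subcomputations.
@lru_cache(maxsize=None)
def num_heads_dist(n):
    # distribution over the number of heads in n fair coin flips
    if n == 0:
        return ((0, Fraction(1)),)
    out = {}
    for k, p in num_heads_dist(n - 1):
        for flip in (0, 1):
            out[k + flip] = out.get(k + flip, Fraction(0)) + p / 2
    return tuple(sorted(out.items()))

print(dict(num_heads_dist(3)))   # {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
```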
12

Bone, Nicholas. "Models of programs and machine learning." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244565.

13

Andriushchenko, Roman. "Computer-Aided Synthesis of Probabilistic Models." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417269.

Abstract:
This thesis deals with the problem of the automated synthesis of probabilistic systems: given a family of Markov chains, how can we efficiently identify the one that satisfies a given specification? Such families often arise in various areas of engineering when modeling systems under uncertainty, and deciding even the simplest synthesis questions is an NP-hard problem. In this work we examine existing techniques based on counterexample-guided inductive synthesis (CEGIS) and on counterexample-guided abstraction refinement (CEGAR), and we propose a novel integrated method for probabilistic synthesis. Experiments on relevant models demonstrate that the proposed technique is not only comparable with state-of-the-art methods, but in most cases can significantly outperform existing approaches, sometimes by several orders of magnitude.
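The CEGIS scheme mentioned above can be sketched generically: propose a candidate consistent with all counterexamples seen so far, verify it, and learn a new counterexample on failure. The family and specification below are toy stand-ins, not probabilistic models.

```python
# Generic CEGIS loop: synthesize a threshold t in 0..9 such that
# classify(x) = (x >= t) matches a hidden specification on inputs 0..9.
spec = lambda x: x >= 7          # stands in for the (hidden) specification

counterexamples = []
candidate = None
for t in range(10):
    # synthesis step: the candidate must agree with stored counterexamples
    if all((x >= t) == y for x, y in counterexamples):
        # verification step: look for an input where the candidate fails
        cex = next((x for x in range(10) if (x >= t) != spec(x)), None)
        if cex is None:
            candidate = t            # verified: no counterexample exists
            break
        counterexamples.append((cex, spec(cex)))
print("synthesized threshold:", candidate)   # 7
```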
14

BORGES, Mateus Araújo. "Techniques to facilitate probabilistic software analysis in real-world programs." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/14932.

Abstract:
Probabilistic software analysis aims at quantifying how likely a target event is to occur, given a probabilistic characterization of the behavior of a program or of its execution environment. Examples of target events may include an uncaught exception, the invocation of a certain method, or the access to confidential information. The technique collects constraints on the inputs that lead to the target events and analyzes them to quantify how likely it is for an input to satisfy the constraints. Current techniques either handle only linear constraints or only support continuous distributions using a “discretization” of the input domain, leading to imprecise and costly results. This work proposes an iterative distribution-aware sampling approach to support probabilistic symbolic execution for arbitrarily complex mathematical constraints and continuous input distributions. We follow a compositional approach, where the symbolic constraints are decomposed into sub-problems whose solution can be solved independently. At each iteration the convergence rate of the computation is increased by automatically refocusing the analysis on estimating the sub-problems that mostly affect the accuracy of the results, as guided by three different ranking strategies. Experiments on publicly available benchmarks show that the proposed technique improves on previous approaches in terms of scalability and accuracy of the results.
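The quantification step described here can be illustrated by a plain Monte Carlo estimate of the probability that inputs drawn from given distributions satisfy a nonlinear path constraint; the constraint and distributions are invented, and the thesis improves on such naive sampling.

```python
import math, random

# Estimate P(constraint) under the input distributions by sampling.
random.seed(1)
N = 200_000
hits = 0
for _ in range(N):
    x = random.gauss(0.0, 1.0)        # probabilistic profile of input x
    y = random.uniform(-2.0, 2.0)     # probabilistic profile of input y
    if math.sin(x) + y * y > 1.5:     # nonlinear path constraint (invented)
        hits += 1
print(hits / N)   # estimated probability of reaching the target event
```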
15

Shannon, Sean Matthew. "Probabilistic acoustic modelling for parametric speech synthesis." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708415.

16

Sakai, Shinsuke. "A Probabilistic Approach to Concatenative Speech Synthesis." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/152508.

17

Janiuk, Ludvig, and Johan Sjölén. "Probabilistic Least-violating Control Strategy Synthesis with Safety Rules." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229867.

Abstract:
We consider the problem of automatic control strategy synthesis for discrete models of robotic systems, where the goal is to travel from some region to another while obeying a given set of safety rules in an environment with uncertain properties. This is a probabilistic extension of the work by Jana Tumová et al., obtained by modifying the least-violating strategy synthesis algorithm so that it can handle uncertainty. The first novel contribution is a way of modelling uncertain events in a map as a Markov decision process with a specific structure, using what we call "Ghost States". We then introduce a way of constructing a product automaton analogous to the original work, on which a modified probabilistic version of Dijkstra's algorithm can be run to synthesize the least-violating plan. The result is a synthesis algorithm that works similarly to the original, but can handle probabilistic uncertainty. It could be used in cases where, e.g., uncertain weather conditions or the behaviour of external actors can be modelled as stochastic variables.
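A rough picture of the least-violating planning step, with invented graph and penalties: Dijkstra's algorithm run over edge weights that encode expected rule-violation cost (probability times penalty). The thesis does this on a product automaton with ghost states rather than on a bare graph as here.

```python
import heapq

# Dijkstra over expected violation costs (probability * penalty).
# Graph and costs are invented for illustration.
graph = {
    "start": [("a", 0.0), ("b", 0.2 * 5.0)],  # 20% violation chance, penalty 5
    "a": [("goal", 1.0 * 2.0)],               # certain violation, penalty 2
    "b": [("goal", 0.0)],
    "goal": [],
}
dist = {"start": 0.0}
queue = [(0.0, "start")]
while queue:
    d, u = heapq.heappop(queue)
    if d > dist.get(u, float("inf")):
        continue
    for v, w in graph[u]:
        if d + w < dist.get(v, float("inf")):
            dist[v] = d + w
            heapq.heappush(queue, (d + w, v))
print(dist["goal"])   # least expected violation cost: 1.0, via "b"
```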
18

Ujma, Mateusz. "On verification and controller synthesis for probabilistic systems at runtime." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:9433e3ed-ad05-4f4e-8dbb-507a09283a02.

Abstract:
Probabilistic model checking is a technique employed for verifying the correctness of computer systems that exhibit probabilistic behaviour. A related technique is controller synthesis, which generates controllers that guarantee the correct behaviour of the system. Not all controllers can be generated offline, as the relevant information may only be available when the system is running, for example, the reliability of services may vary over time. In this thesis, we propose a framework based on controller synthesis for stochastic games at runtime. We model systems using stochastic two-player games parameterised with data obtained from monitoring of the running system. One player represents the controllable actions of the system, while the other player represents the hostile uncontrollable environment. The goal is to synthesize, for a given property specification, a controller for the first player that wins against all possible actions of the environment player. Initially, controller synthesis is invoked for the parameterised model and the resulting controller is applied to the running system. The process is repeated at runtime when changes in the monitored parameters are detected, whereby a new controller is generated and applied. To ensure the practicality of the framework, we focus on its three important aspects: performance, robustness, and scalability. We propose an incremental model construction technique to improve performance of runtime synthesis. In many cases, changes in monitored parameters are small and models built for consecutive parameter values are similar. We exploit this and incrementally build a model for the updated parameters reusing the previous model, effectively saving time. To address robustness, we develop a technique called permissive controller synthesis. Permissive controllers generalise the classical controllers by allowing the system to choose from a set of actions instead of just one. By using a permissive controller, a computer system can quickly adapt to a situation where an action becomes temporarily unavailable while still satisfying the property of interest. We tackle the scalability of controller synthesis with a learning-based approach. We develop a technique based on real-time dynamic programming which, by generating random trajectories through a model, synthesises an approximately optimal controller. We guide the generation using heuristics and can guarantee that, even in the cases where we only explore a small part of the model, we still obtain a correct controller. We develop a full implementation of these techniques and evaluate it on a large set of case studies from the PRISM benchmark suite, demonstrating significant performance gains in most cases. We also illustrate the working of the framework on a new case study of an open-source stock monitoring application.
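Controller synthesis for stochastic games rests on computing optimal values where the controller maximizes and the environment minimizes; below is a minimal value-iteration sketch on an invented reachability game.

```python
# game[state] = (owner, list of actions); an action is a list of
# (probability, successor) pairs. The game itself is invented.
game = {
    0: ("max", [[(1.0, 1)], [(0.5, 2), (0.5, 3)]]),   # controller state
    1: ("min", [[(1.0, 2)], [(1.0, 3)]]),             # environment state
    2: ("max", [[(1.0, 2)]]),                         # target (absorbing)
    3: ("max", [[(1.0, 3)]]),                         # sink (absorbing)
}
val = {s: (1.0 if s == 2 else 0.0) for s in game}

for _ in range(100):
    for s, (owner, actions) in game.items():
        outcomes = [sum(p * val[t] for p, t in act) for act in actions]
        val[s] = max(outcomes) if owner == "max" else min(outcomes)
print(val[0])   # value the controller can guarantee from state 0: 0.5
```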
19

Matthews, Brett Alexander. "Probabilistic modeling of neural data for analysis and synthesis of speech." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/50116.

Abstract:
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research consists of a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals. We demonstrate that articulatory information can be used to decode speech content in speech audio signals. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in the speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces, and firing occurrence times for neural clusters in the vicinity of the electrode identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject designed to test explicitly for several speech articulation gestures, and decode the data offline. Finally, in the third phase of the research we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data. We apply the method to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveforms and occurrence times spike-sorting method to neurological data in the context of a neural prosthesis for speech.
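A toy version of the vowel-classification step, with one Gaussian mixture per class and a maximum-likelihood decision; the synthetic three-dimensional "firing rate" features and all settings are invented, and scikit-learn merely stands in for the thesis's classifiers.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# One Gaussian mixture per vowel class; pick the class whose mixture
# assigns the unknown frame the highest log-likelihood.
rng = np.random.default_rng(1)
classes = {v: rng.normal(loc=i * 2.0, scale=0.5, size=(100, 3))
           for i, v in enumerate("aeiou")}          # synthetic features

models = {v: GaussianMixture(n_components=2, random_state=0).fit(X)
          for v, X in classes.items()}

frame = rng.normal(loc=4.0, scale=0.5, size=(1, 3))  # unknown frame
scores = {v: m.score(frame) for v, m in models.items()}
print(max(scores, key=scores.get))   # most likely vowel ("i" here)
```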
20

Schroeder, Deborah. "Development of computer programs to aid synthesis planning." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329976.

21

Ziyuan, Jiang. "Synthesis of GPU Programs from High-Level Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230163.

Abstract:
Modern graphics processing units (GPUs) provide high-performance general-purpose computation abilities. They have massive parallel architectures that are suitable for executing parallel algorithms and operations. They are also throughput-oriented devices that are optimized to achieve high throughput for stream processing. Designing efficient GPU programs is a notoriously difficult task. The ForSyDe methodology is suitable to ease the difficulties of GPU programming. The methodology encourages software development from a high level of abstraction and then transforming the abstract model into an implementation through a series of formal methods. The existing ForSyDe models support the synchronous data flow (SDF) model of computation (MoC), which is suitable for modeling stream computations and is good for synthesizing efficient stream processing programs. There also exist high-level design models named parallel patterns that are suitable to represent parallel algorithms and operations. The thesis studies the method of modeling parallel algorithms using parallel patterns, and explores the way to synthesize efficient OpenCL implementations on GPUs for parallel patterns. The thesis also tries to enable the integration of parallel patterns into the ForSyDe SDF model in order to model stream parallel operations. An automation library that helps design stream programs for parallel algorithms targeting GPUs is proposed in the thesis project. Several experiments are performed to evaluate the effectiveness of the proposed library regarding implementations of the high-level model.
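The parallel-pattern idea can be pictured as higher-order building blocks that a backend later lowers to GPU kernels; the sketch below composes a map pattern and a reduce pattern in plain Python, with all names invented.

```python
from functools import reduce

# Parallel patterns as composable higher-order functions. "pmap" would
# be lowered to a data-parallel GPU kernel by a synthesis backend; here
# it is ordinary Python, and all names are invented.
def pmap(f):
    return lambda xs: [f(x) for x in xs]        # data-parallel map

def preduce(op, init):
    return lambda xs: reduce(op, xs, init)      # tree-reducible reduce

pipeline = lambda xs: preduce(lambda a, b: a + b, 0)(pmap(lambda x: x * x)(xs))
stream = [[1, 2, 3], [4, 5, 6]]                 # SDF-style token blocks
print([pipeline(block) for block in stream])    # [14, 77]
```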
22

Kaminski, Benjamin Lucien [Verfasser], Joost-Pieter [Akademischer Betreuer] Katoen, and Annabelle [Akademischer Betreuer] McIver. "Advanced weakest precondition calculi for probabilistic programs / Benjamin Lucien Kaminski ; Joost-Pieter Katoen, Annabelle McIver." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1191375323/34.

23

Deena, Salil Prashant. "Visual speech synthesis by learning joint probabilistic models of audio and video." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/visual-speech-synthesis-by-learning-joint-probabilistic-models-of-audio-and-video(bdd1a78b-4957-469e-8be4-34e83e676c79).html.

Abstract:
Visual speech synthesis deals with synthesising facial animation from an audio representation of speech. In the last decade or so, data-driven approaches have gained prominence with the development of Machine Learning techniques that can learn an audio-visual mapping. Many of these Machine Learning approaches learn a generative model of speech production using the framework of probabilistic graphical models, through which efficient inference algorithms can be developed for synthesis. In this work, the audio and visual parameters are assumed to be generated from an underlying latent space that captures the shared information between the two modalities. These latent points evolve through time according to a dynamical mapping and there are mappings from the latent points to the audio and visual spaces respectively. The mappings are modelled using Gaussian processes, which are non-parametric models that can represent a distribution over non-linear functions. The result is a non-linear state-space model. It turns out that the state-space model is not a very accurate generative model of speech production because it assumes a single dynamical model, whereas it is well known that speech involves multiple dynamics (e.g. different syllables) that are generally non-linear. In order to cater for this, the state-space model can be augmented with switching states to represent the multiple dynamics, thus giving a switching state-space model. A key problem is how to infer the switching states so as to model the multiple non-linear dynamics of speech, which we address by learning a variable-order Markov model on a discrete representation of audio speech. Various synthesis methods for predicting visual from audio speech are proposed for both the state-space and switching state-space models. Quantitative evaluation, involving the use of error and correlation metrics between ground truth and synthetic features, is used to evaluate our proposed method in comparison to other probabilistic models previously applied to the problem. Furthermore, qualitative evaluation with human participants has been conducted to evaluate the realism, perceptual characteristics and intelligibility of the synthesised animations. The results are encouraging and demonstrate that by having a joint probabilistic model of audio and visual speech that caters for the non-linearities in audio-visual mapping, realistic visual speech can be synthesised from audio speech.
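One ingredient above, a Gaussian-process regression from audio to visual parameters, can be sketched on synthetic data; scikit-learn's GP regressor stands in for the thesis's latent-space model, and the data and settings are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy GP regression from a 1-D audio feature to a visual parameter.
rng = np.random.default_rng(0)
audio = rng.uniform(-3, 3, size=(40, 1))
visual = np.sin(audio).ravel() + 0.1 * rng.standard_normal(40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01)
gp.fit(audio, visual)
mean, std = gp.predict([[0.5]], return_std=True)   # predictive distribution
print(mean[0], std[0])
```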
24

DIAS, DOUGLAS MOTA. "AUTOMATIC SYNTHESIS OF DIGITAL MICROCONTROLLER PROGRAMS BY GENETIC PROGRAMMING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6666@1.

Abstract:
This dissertation investigates the use of genetic programming in the automatic synthesis of assembly language programs for microcontrollers, which implement time-optimal or sub-optimal control strategies for the system to be controlled, starting from its mathematical modeling by dynamic equations. One of the issues faced in the conventional design of an optimal control system is that solutions for this kind of problem commonly involve a highly nonlinear function of the state variables of the system. As a result, frequently it is not possible to find an exact mathematical solution. On the implementation side, the difficulty comes when one has to manually program the microcontroller to run the desired control. Thus, the objective of this work was to overcome these difficulties by applying a methodology that, starting from the mathematical modeling of a plant, provides as its result an assembly language microcontroller program. The work included a study of the possible types of genetic representation for the manipulation of assembly language programs; in this regard, it has been concluded that the linear one is the most suitable. The work also included the implementation of a tool to carry out three case studies: water bath, cart centering and inverted pendulum. The control performance of the synthesized programs was compared with that of systems obtained by other methods (neural networks, fuzzy logic, neurofuzzy systems and genetic programming). The synthesized programs achieved at least the same performance as the other systems, with the additional advantage of already providing the solution in the final format of the chosen implementation platform: a microcontroller.
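Linear genetic programming manipulates programs as flat instruction lists, which is why assembly is a natural target; the register machine, instruction set and fitness task below are invented stand-ins for a microcontroller.

```python
import random

# Linear GP: programs are flat lists of register-machine instructions.
# Task (invented): make r0 equal 2*x + 1 for inputs x = 0..9.
OPS = ["inc r0", "add r0 r1", "mov r0 r1", "clr r0"]

def run(prog, x):
    r = {"r0": 0, "r1": x}
    for ins in prog:
        if ins == "inc r0":      r["r0"] += 1
        elif ins == "add r0 r1": r["r0"] += r["r1"]
        elif ins == "mov r0 r1": r["r0"] = r["r1"]
        elif ins == "clr r0":    r["r0"] = 0
    return r["r0"]

def fitness(prog):   # 0 is a perfect score
    return -sum(abs(run(prog, x) - (2 * x + 1)) for x in range(10))

random.seed(0)
pop = [[random.choice(OPS) for _ in range(5)] for _ in range(200)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:50]
    pop = parents + [[random.choice(OPS) if random.random() < 0.2 else ins
                      for ins in random.choice(parents)]   # point mutation
                     for _ in range(150)]
best = max(pop, key=fitness)
print(best, fitness(best))
```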
25

Gao, Xitong. "Structural optimization of numerical programs for high-level synthesis." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/42498.

Abstract:
This thesis introduces a new technique, and its associated tool SOAP, to automatically perform source-to-source optimization of numerical programs, specifically targeting the trade-off among numerical accuracy, latency, and resource usage as a high-level synthesis flow for FPGA implementations. A new intermediate representation, MIR, is introduced to carry out the abstraction and optimization of numerical programs. Equivalent structures in MIRs are efficiently discovered using methods based on formal semantics by taking into account axiomatic rules from real arithmetic, such as associativity, distributivity and others, in tandem with program equivalence rules that enable control-flow restructuring and eliminate redundant array accesses. For the first time, we bring rigorous approaches from software static analysis, specifically formal semantics and abstract interpretation, to bear on program transformation for high-level synthesis. New abstract semantics are developed to generate a computable subset of equivalent MIRs from an original MIR. Using formal semantics, three objectives are calculated for each MIR representing a pipelined numerical program: the accuracy of computation and an estimate of resource utilization in FPGA and the latency of program execution. The optimization of these objectives produces a Pareto frontier consisting of a set of equivalent MIRs. We thus go beyond existing literature by not only optimizing the precision requirements of an implementation, but changing the structure of the implementation itself. Using SOAP to optimize the structure of a variety of real world and artificially generated arithmetic expressions in single precision, we improve either their accuracy or the resource utilization by up to 60%. When applied to a suite of computational intensive numerical programs from PolyBench and Livermore Loops benchmarks, SOAP has generated circuits that enjoy up to a 12x speedup, with a simultaneous 7x increase in accuracy, at a cost of up to 4x more LUTs.
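The core observation SOAP exploits is that expressions equivalent over the reals differ in floating-point accuracy, so a tool can search among rewritings for a more accurate one. The sketch below scores only summation orders of an invented data set against an exact rational reference.

```python
from fractions import Fraction
import itertools

# Expressions equal over the reals differ in float accuracy: score every
# summation order against an exact reference. Values are invented.
vals = [1e16, 1.0, -1e16, 1.0]
exact = sum(Fraction(v) for v in vals)

best = None
for perm in itertools.permutations(vals):
    acc = 0.0
    for v in perm:                    # left-to-right float evaluation
        acc += v
    err = abs(Fraction(acc) - exact)
    if best is None or err < best[0]:
        best = (err, perm)
print(best)   # an ordering with zero rounding error exists here
```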
26

ISHII, Katsuya, Hiroaki TAKADA, Shinya HONDA, Hiroyuki TOMIYAMA, and Yuko HARA. "Function-Level Partitioning of Sequential Programs for Efficient Behavioral Synthesis." Institute of Electronics, Information and Communication Engineers, 2007. http://hdl.handle.net/2237/15031.

27

Lewis, Matt. "Precise verification of C programs." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:34b5ed5a-160b-4e2c-8dac-eab62a24f78c.

Abstract:
Most current approaches to software verification are one-sided -- a safety prover will try to prove that a program is safe, while a bug-finding tool will try to find bugs. It is rare to find an analyser that is optimised for both tasks, which is problematic since it is hard to know in advance whether a program you wish to analyse is safe or not. The result of taking a one-sided approach to verification is false alarms: safety provers will often claim that safe programs have errors, while bug-finders will often be unable to find errors in unsafe programs. Orthogonally, many software verifiers are designed for reasoning about idealised programming languages that may not have widespread use. A common assumption made by verification tools is that program variables can take arbitrary integer values, while programs in most common languages use fixed-width bitvectors for their variables. This can have a real impact on the verification, leading to incorrect claims by the verifier. In this thesis we will show that it is possible to analyse C programs without generating false alarms, even if they contain unbounded loops, use non-linear arithmetic and have integer overflows. To do this, we will present two classes of analysis based on underapproximate loop acceleration and second-order satisfiability respectively. Underapproximate loop acceleration addresses the problem of finding deep bugs. By finding closed forms for loops, we show that deep bugs can be detected without unwinding the program and that this can be done without introducing false positives or masking errors. We then show that programs accelerated in this way can be optimised by inlining trace automata to reduce their reachability diameter. This inlining allows acceleration to be used as a viable technique for proving safety, as well as finding bugs. In the second part of the thesis, we focus on using second-order logic for program analysis. We begin by defining second-order SAT: an extension of propositional SAT that allows quantification over functions. We show that this problem is NEXPTIME-complete, and that it is polynomial time reducible to finite-state program synthesis. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit-state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity. We conclude by using second-order SAT to precisely and directly encode several program analysis problems including superoptimisation, de-obfuscation, safety and termination for programs using bitvector arithmetic and dynamically allocated lists.
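The acceleration idea can be shown on a one-line example: a loop with a closed form lets a deep assertion violation be found by solving an equation rather than unwinding millions of iterations. The loop, constants and target are invented.

```python
# The loop "x = 7; while ...: x += 3" has closed form x(n) = 7 + 3*n,
# so "can x ever equal 3_000_007?" reduces to integer arithmetic.
x0, step, target = 7, 3, 3_000_007
n, rem = divmod(target - x0, step)
if rem == 0 and n >= 0:
    print(f"assertion violated after {n} iterations")   # n = 1_000_000
```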
28

Brown, Andrew Michael. "Development of a probabilistic dynamic synthesis method for the analysis of non-deterministic structures." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/19065.

29

Paçacı, Görkem. "Representation of Compositional Relational Programs." Doctoral thesis, Uppsala universitet, Informationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-317084.

Abstract:
Usability aspects of programming languages are often overlooked, yet have a substantial effect on programmer productivity. These issues are even more acute in the field of Inductive Synthesis, where programs are automatically generated from sample expected input and output data, and the programmer needs to be able to comprehend, and confirm or reject the suggested programs. A promising method of Inductive Synthesis, CombInduce, which is particularly suitable for synthesizing recursive programs, is a candidate for improvements in usability as the target language Combilog is not user-friendly. The method requires the target language to be strictly compositional, hence devoid of variables, yet have the expressiveness of definite clause programs. This sets up a challenging problem for establishing a user-friendly but equally expressive target language. Alternatives to Combilog, such as Quine's Predicate-functor Logic and Schönfinkel and Curry's Combinatory Logic also do not offer a practical notation: finding a more usable representation is imperative. This thesis presents two distinct approaches towards more convenient representations which still maintain compositionality. The first is Visual Combilog (VC), a system for visualizing Combilog programs. In this approach Combilog remains as the target language for synthesis, but programs can be read and modified by interacting with the equivalent diagrams instead. VC is implemented as a split-view editor that maintains the equivalent Combilog and VC representations on-the-fly, automatically transforming them as necessary. The second approach is Combilog with Name Projection (CNP), a textual iteration of Combilog that replaces numeric argument positions with argument names. The result is a language where argument names make the notation more readable, yet compositionality is preserved by avoiding variables. Compositionality is demonstrated by implementing CombInduce with CNP as the target language, revealing that programs with the same level of recursive complexity can be synthesized in CNP equally well, and establishing the underlying method of synthesis can also work with CNP. Our evaluations of the user-friendliness of both representations are supported by a range of methods from Information Visualization, Cognitive Modelling, and Human-Computer Interaction. The increased usability of both representations are confirmed by empirical user studies: an often neglected aspect of language design.
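The compositionality requirement can be illustrated with relations built only by combining other relations, never by naming variables; the family relation below is an invented example and does not use Combilog's actual operators.

```python
# Variable-free relational programming: relations are sets of tuples,
# and new relations arise only from combining operators such as compose.
parent = {("ann", "bob"), ("bob", "cid")}

def compose(r, s):
    # relational join on the shared middle argument
    return {(a, c) for a, b1 in r for b2, c in s if b1 == b2}

grandparent = compose(parent, parent)
print(grandparent)   # {('ann', 'cid')}
```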
30

Bathon, Leander Anton. "Probabilistic Determination of Failure Load Capacity Variations for Lattice Type Structures Based on Yield Strength Variations including Nonlinear Post-Buckling Member Performance." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/1225.

Abstract:
With the attempt to achieve the optimum in analysis and design, the global technological knowledge base grows more and more. Engineers all over the world continuously modify and innovate existing analysis methods and design procedures to perform the same task more efficiently and with better results. In the field of complex structural analysis many researchers pursue this challenging task. The complexity of a lattice type structure is caused by numerous parameters: the nonlinear member performance of the material, the statistical variation of member load capacities, the highly indeterminate structural composition, etc. In order to achieve a simulation approach which represents the real world problem more accurately, it is necessary to develop technologies which include these parameters in the analysis. One of the new technologies is the first order nonlinear analysis of lattice type structures including the after-failure response of individual members. Such an analysis is able to predict the failure behavior of a structural system under ultimate loads more accurately than the traditionally used linear elastic analysis or a classical first order nonlinear analysis. It is an analysis procedure which can more accurately evaluate the limit-state of a structural system. Probability Based Analysis (PBA) is another new technology. It provides the user with a tool to analyze structural systems based on statistical variations in member capacities. Current analysis techniques have shown that structural failure is sensitive to member capacity. The combination of probability based analysis and limit-state analysis will give the engineer the capability to establish a failure load distribution based on the limit-state capacity of the structure. This failure load distribution, which yields statistical properties such as the mean and variance, improves engineering judgment. The mean shows the expected value, or the mathematical expectation, of the failure load. The variance is a tool to measure the variability of the failure load distribution. For a given load case, a small variance will indicate that a few members cause the tower failure over and over again, i.e. the design is unbalanced. A large variance will indicate that many different members caused the tower failure. The failure load distribution helps in comparing and evaluating actual test results versus analytical results by locating an actual test among the possible failure loads of a tower series. Additionally, the failure load distribution allows the engineer to calculate exclusion limits, which are a measure of the probability of success, or conversely the probability of failure, for a given load condition. The exclusion limit allows engineers to redefine their judgement on the safety and usability of transmission towers. Existing transmission towers can be reanalyzed using PBA and upgraded based on a given exclusion limit for a chosen tower capacity increase relative to the elastic analysis from which the tower was designed. New transmission towers can be analyzed based on actual yield strength data and the nonlinear performance of their members. Based on this innovative analysis the engineer is able to improve tower design by using a tool which represents the real world behavior of steel transmission towers more accurately. Consequently it will improve structural safety and reduce cost.
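The probability-based analysis can be pictured as Monte Carlo propagation of yield-strength variation to a failure-load distribution, from which mean, variance and exclusion limits follow; the structural model below is a deliberately crude invented stand-in for a lattice tower.

```python
import random, statistics

# Sample member yield strengths, map each sample to a (toy) system
# failure load, and summarize the resulting distribution.
random.seed(42)
loads = []
for _ in range(10_000):
    members = [random.gauss(250.0, 20.0) for _ in range(12)]  # yield, MPa
    loads.append(min(members) * 1.8)        # invented capacity model

print("mean:", statistics.mean(loads))
print("st. dev.:", statistics.stdev(loads))
print("1% exclusion limit:", sorted(loads)[int(0.01 * len(loads))])
```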
APA, Harvard, Vancouver, ISO, and other styles
32

Tremamunno, Luca. "A framework for distributed synthesis of programs from input-output examples." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Program synthesis is the task of automatically finding a program in an underlying Domain-Specific Language (DSL) that satisfies a specification provided by the user. In recent years the topics of program synthesis and in particular programming-by-example (PBE), a branch of program synthesis that uses input-output examples as the specification, have gained significant traction, thanks also to the use of PBE tools in commercial applications such as Excel's FlashFill. Program synthesis is often treated as a difficult search problem, and one of the main focuses of synthesis algorithms is to reduce and efficiently explore the search space. In addition to these techniques, some synthesizers parallelize the search to improve performance, but the benefits of this optimization are still severely limited. In this thesis I explore the possibility of using distributed systems to further improve the performance of synthesis algorithms by increasing the number of parallel searches. I introduce an algorithm for PBE that aims to efficiently parallelize the search over a distributed system and implement it in a framework, then analyze the performance of the framework and the characteristics that influence the effectiveness of distributed synthesis, using instantiations for the string transformation and SQL domains. The evaluation of these instantiations on existing benchmarks shows that the developed algorithm achieves performance comparable to that of current state-of-the-art techniques when used serially, and can obtain a speedup of over 3x using 5 parallel searches over a small distributed system.
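To make the search-problem framing concrete, here is a minimal bottom-up enumerative PBE sketch over a toy string-transformation DSL; the operator set and the `synthesize` helper are invented for illustration and are far simpler than the thesis's framework.

```python
from itertools import product

# Toy string-transformation DSL: programs are compositions of unary ops.
OPS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "rev":   lambda s: s[::-1],
}

def synthesize(examples, max_depth=3):
    # Enumerate operator sequences, shortest first, until one is
    # consistent with every input-output example.
    for depth in range(1, max_depth + 1):
        for seq in product(OPS, repeat=depth):
            def run(s, seq=seq):
                for name in seq:
                    s = OPS[name](s)
                return s
            if all(run(i) == o for i, o in examples):
                return seq
    return None

print(synthesize([(" Foo ", "OOF"), ("bar", "RAB")]))  # ('upper', 'strip', 'rev')
```

Distributing such a search then amounts to partitioning the enumeration, for instance assigning different operator prefixes or DSL subsets to different workers, which is the kind of parallelism the thesis scales out over a distributed system.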
APA, Harvard, Vancouver, ISO, and other styles
33

Milligan, Peter. "The synthesis of parallel programs : with specific application to text processing." Thesis, Queen's University Belfast, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hark, Marcel [Verfasser], Jürgen [Akademischer Betreuer] Giesl, and Laura [Akademischer Betreuer] Kovács. "Towards complete methods for automatic complexity and termination analysis of (probabilistic) programs / Marcel Tobias Hark ; Jürgen Giesl, Laura Kovács." Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1239181108/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Baudisch, Daniel [Verfasser], and Klaus [Akademischer Betreuer] Schneider. "Synthesis of Synchronous Programs to Parallel Software Architectures / Daniel Baudisch. Betreuer: Klaus Schneider." Kaiserslautern : Technische Universität Kaiserslautern, 2014. http://d-nb.info/1053959281/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Abid, Mariem. "System-Level Hardware Synthesis of Dataflow Programs with HEVC as Study Use Case." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0002/document.

Full text
Abstract:
Image and video processing applications are characterized by the processing of a huge amount of data. Designing such complex applications with traditional low-level design methodologies causes increasing development costs. To address these challenges, Electronic System Level (ESL) synthesis, or High-Level Synthesis (HLS), tools were proposed. The basic premise is to model the behavior of the entire system using high-level specifications and to enable automatic synthesis to low-level specifications for efficient implementation on a Field-Programmable Gate Array (FPGA). However, the main downside of HLS tools is that the entire system is not yet taken into account, i.e. the establishment of the communications between the generated components to reach the system level is not considered. The purpose of this thesis is to raise the level of abstraction in the design of embedded systems to the system level. A novel design flow is proposed that enables an efficient hardware implementation of video processing applications described using a Domain-Specific Language (DSL) for dataflow programming. The design flow combines a dataflow compiler, for generating C-based HLS descriptions from a dataflow description, with a C-to-gate synthesizer, for generating Register-Transfer Level (RTL) descriptions. The challenge in implementing the communication channels of dataflow programs, which rely on a Model of Computation (MoC), on an FPGA is minimizing the communication overhead. To this end, we introduce a new interface synthesis approach that maps the large amounts of data processed by multimedia and image processing applications to shared memories on the FPGA. This leads to a tremendous decrease in latency and an increase in throughput. These results are demonstrated through the hardware synthesis of the emerging High-Efficiency Video Coding (HEVC) standard.
APA, Harvard, Vancouver, ISO, and other styles
37

Henter, Gustav Eje. "Probabilistic Sequence Models with Speech and Language Applications." Doctoral thesis, KTH, Kommunikationsteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134693.

Full text
Abstract:
Series data, sequences of measured values, are ubiquitous. Whenever observations are made along a path in space or time, a data sequence results. To comprehend nature and shape it to our will, or to make informed decisions based on what we know, we need methods to make sense of such data. Of particular interest are probabilistic descriptions, which enable us to represent uncertainty and random variation inherent to the world around us. This thesis presents and expands upon some tools for creating probabilistic models of sequences, with an eye towards applications involving speech and language. Modelling speech and language is not only of use for creating listening, reading, talking, and writing machines---for instance allowing human-friendly interfaces to future computational intelligences and smart devices of today---but probabilistic models may also ultimately tell us something about ourselves and the world we occupy. The central theme of the thesis is the creation of new or improved models more appropriate for our intended applications, by weakening limiting and questionable assumptions made by standard modelling techniques. One contribution of this thesis examines causal-state splitting reconstruction (CSSR), an algorithm for learning discrete-valued sequence models whose states are minimal sufficient statistics for prediction. Unlike many traditional techniques, CSSR does not require the number of process states to be specified a priori, but builds a pattern vocabulary from data alone, making it applicable for language acquisition and the identification of stochastic grammars. A paper in the thesis shows that CSSR handles noise and errors expected in natural data poorly, but that the learner can be extended in a simple manner to yield more robust and stable results also in the presence of corruptions. Even when the complexities of language are put aside, challenges remain. The seemingly simple task of accurately describing human speech signals, so that natural synthetic speech can be generated, has proved difficult, as humans are highly attuned to what speech should sound like. Two papers in the thesis therefore study nonparametric techniques suitable for improved acoustic modelling of speech for synthesis applications. Each of the two papers targets a known-incorrect assumption of established methods, based on the hypothesis that nonparametric techniques can better represent and recreate essential characteristics of natural speech. In the first paper of the pair, Gaussian process dynamical models (GPDMs), nonlinear, continuous state-space dynamical models based on Gaussian processes, are shown to better replicate voiced speech, without traditional dynamical features or assumptions that cepstral parameters follow linear autoregressive processes. Additional dimensions of the state-space are able to represent other salient signal aspects such as prosodic variation. The second paper, meanwhile, introduces KDE-HMMs, asymptotically-consistent Markov models for continuous-valued data based on kernel density estimation, that additionally have been extended with a fixed-cardinality discrete hidden state. This construction is shown to provide improved probabilistic descriptions of nonlinear time series, compared to reference models from different paradigms. The hidden state can be used to control process output, making KDE-HMMs compelling as a probabilistic alternative to hybrid speech-synthesis approaches. 
A final paper of the thesis discusses how models can be improved even when one is restricted to a fundamentally imperfect model class. Minimum entropy rate simplification (MERS), an information-theoretic scheme for postprocessing models for generative applications involving both speech and text, is introduced. MERS reduces the entropy rate of a model while remaining as close as possible to the starting model. This is shown to produce simplified models that concentrate on the most common and characteristic behaviours, and provides a continuum of simplifications between the original model and zero-entropy, completely predictable output. As the tails of fitted distributions may be inflated by noise or empirical variability that a model has failed to capture, MERS's ability to concentrate on high-probability output is also demonstrated to be useful for denoising models trained on disturbed data.
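As a taste of the kernel-density modelling the speech papers build on, the sketch below implements only the kernel-regression core of a continuous-valued Markov predictor; the KDE-HMM of the thesis additionally layers a discrete hidden state on top. The function name, bandwidth, and test signal are invented.

```python
import numpy as np

def kernel_predict(series, x, bandwidth=0.3):
    # Nadaraya-Watson one-step predictor built from observed
    # (x_t, x_{t+1}) pairs: continuations seen after values close to x
    # get large Gaussian kernel weights and dominate the prediction.
    past, nxt = np.asarray(series[:-1]), np.asarray(series[1:])
    w = np.exp(-0.5 * ((past - x) / bandwidth) ** 2)
    return float((w * nxt).sum() / w.sum())

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
print(kernel_predict(series, x=0.9))
```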

APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Dong [Verfasser], Martin [Akademischer Betreuer] Buss, Tamim [Gutachter] Asfour, and Martin [Gutachter] Buss. "Probabilistic Grasping for Mobile Manipulation Systems: Skills, Synthesis and Control / Dong Chen ; Gutachter: Tamim Asfour, Martin Buss ; Betreuer: Martin Buss." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1220831212/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Demetriou, Christodoulos S. "A PC implemented kinematic synthesis system for planar linkages." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/101343.

Full text
Abstract:
The purpose of this thesis is to develop a PC-implemented kinematic synthesis system for four-bar and six-bar planar linkages using Turbo Pascal. CYPRUS is an interactive program that calculates and displays graphically the designed four-bar and six-bar linkages. This package can be used for three- and four-position synthesis of path generation, path generation with input timing, body guidance, and body guidance with input timing linkages. The package can also be used for function generation linkages, where the user may enter a set of angle pairs or choose one of the following functions: tangent, cosine, sine, exponential, logarithmic, and natural logarithmic. The above syntheses can be combined to design linkages that produce more complex motion. For each kinematic synthesis case the code calculates a certain number of solutions; the designer then chooses the most suitable solution for the particular application at hand. After a mechanism is synthesized, it can be animated for a check of the mechanical action. Watching this animation allows the designer to judge criteria such as clearances, forces, velocities, and accelerations of the moving links. The software operates on an IBM PC or any other PC compatible. The language used is Turbo Pascal, an extremely effective tool and one of the fastest high-level languages in compilation and execution time.
M.S.
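For the function-generation case described above, one common textbook route (not necessarily the exact formulation used in CYPRUS) is Freudenstein's equation, which is linear in three link-ratio unknowns:

$$K_1\cos\psi_i - K_2\cos\varphi_i + K_3 = \cos(\varphi_i - \psi_i), \qquad i = 1, 2, 3,$$

where $\varphi_i, \psi_i$ are the paired input/output angles at the precision points, and $K_1 = d/a$, $K_2 = d/c$, $K_3 = (a^2 - b^2 + c^2 + d^2)/(2ac)$ for input link $a$, coupler $b$, output link $c$, and ground link $d$. Three precision points give three linear equations, so $K_1$, $K_2$, $K_3$, and hence the link-length ratios, follow from a single linear solve.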
APA, Harvard, Vancouver, ISO, and other styles
40

Cartwright, Lauren Ashley. "The Influence of Conservation Programs on Residential Water Demand: Synthesis and Analysis for Shared Vision Planning in the Rappahannock River Basin." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/30824.

Full text
Abstract:
The Rappahannock River Basin Commission is undergoing a collaborative water supply planning process for Virginia's Rappahannock River Basin. Participants in the planning process have indicated an interest in technical information about the possible impact conservation programs may have on reducing residential water demand. The potential influence of conservation programs is identified through a literature synthesis and a statistical analysis of residential water demand for a locality within the basin (Stafford County). In the literature synthesis, conservation programs are classified as voluntary or mandatory. Voluntary programs utilize financial incentives (such as water pricing and rebates) or educational incentives (such as radio ads and bill inserts) to encourage conservation, and mandatory programs utilize regulatory incentives (such as plumbing standards and bans on outdoor water use). The water demand statistical model was estimated to more specifically identify how Stafford residential water customers respond to water pricing/rate structure changes (financial incentives), imposition of federal regulations on plumbing standards (regulatory incentives), and a voluntary conservation program utilizing educational incentives. The results indicate that while many studies have found residential customers are responsive to price changes, Stafford residential water users have not significantly changed their water demand in response to price/rate structure changes. Previous literature also suggests federal plumbing standards potentially have a significant impact on water demand. The influence of new plumbing standards in the Stafford demand model was inconclusive and warrants further analysis. Consistent with the literature, voluntary conservation programs utilizing educational incentives alone did not substantially alter residential water demand in Stafford County.
Master of Science
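The price-response question in the statistical analysis is commonly posed as a constant-elasticity demand regression. A minimal sketch on synthetic data follows; every variable and coefficient is invented, and the thesis's Stafford County model naturally includes more covariates.

```python
import numpy as np

# Constant-elasticity demand model, ln Q = a + b*ln P + c*ln income + noise;
# the coefficient b on ln(price) is the price elasticity of demand.
rng = np.random.default_rng(3)
n = 200
ln_price = np.log(rng.uniform(2, 6, n))
ln_income = np.log(rng.uniform(30, 90, n))
ln_q = 2.0 - 0.3 * ln_price + 0.4 * ln_income + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), ln_price, ln_income])
beta, *_ = np.linalg.lstsq(X, ln_q, rcond=None)
print(f"estimated price elasticity: {beta[1]:.2f}")  # close to -0.3 here
```

A price coefficient that is statistically indistinguishable from zero is exactly the "no significant response to price/rate-structure changes" finding the abstract reports for Stafford residential customers.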
APA, Harvard, Vancouver, ISO, and other styles
41

DeMeo, Stephen. "Investigating chemical change in the laboratory : a curriculum resource for introductory chemistry teachers based on the synthesis, decomposition and analysis of zinc iodide /." Access Digital Full Text version, 1994. http://pocketknowledge.tc.columbia.edu/home.php/bybib/11624553.

Full text
Abstract:
Thesis (Ed.D.)--Teachers College, Columbia University, 1994.
Includes tables. Typescript; issued also on microfilm. Sponsor: Jean Lythcott. Dissertation Committee: Roger O. Anderson. Includes bibliographical references (leaves 174-186).
APA, Harvard, Vancouver, ISO, and other styles
42

Najahi, Mohamed amine. "Synthesis of certified programs in fixed-point arithmetic, and its application to linear algebra basic blocks." Thesis, Perpignan, 2014. http://www.theses.fr/2014PERP1212.

Full text
Abstract:
To be cost effective, embedded systems are shipped with low-end micro-processors. These processors are dedicated to one or a few tasks that are highly demanding on computational resources. Examples of widely deployed tasks include the fast Fourier transform, convolutions, and digital filters. For these tasks to run efficiently, embedded systems programmers favor fixed-point arithmetic over the standardized but costly floating-point arithmetic. However, they face two difficulties: First, writing fixed-point code is tedious and requires the programmer to be in charge of every arithmetical detail. Second, because of the low dynamic range of fixed-point numbers compared to floating-point numbers, there is a persistent belief that fixed-point computations are inherently inaccurate. The first part of this thesis addresses these two limitations as follows: It shows how to design and implement tools to automatically synthesize fixed-point programs. Next, to strengthen the user's confidence in the synthesized codes, analytic methods are suggested to generate certificates. These certificates can be checked using a formal verification tool, and assert that the rounding errors of the generated codes are indeed below a given threshold. The second part of the thesis is a study of the trade-offs involved when generating fixed-point code for linear algebra basic blocks. It gives experimental data on fixed-point synthesis for matrix multiplication and matrix inversion through Cholesky decomposition.
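To make the certificate idea concrete, here is a minimal fixed-point sketch for a single multiplication in one Q format; the format choice, helper names, and the deliberately coarse error bound are illustrative, not the thesis's analytic method.

```python
# Minimal Qm.f fixed-point sketch: values stored as integers scaled by 2**F.
F = 12  # fractional bits (an assumption; the synthesis tools pick formats)

def to_fix(x):   return round(x * (1 << F))
def to_float(v): return v / (1 << F)

def fx_mul(a, b):
    # The exact product has 2F fractional bits; shifting back to F bits
    # floors the result and introduces at most one unit in the last place.
    return (a * b) >> F

ulp = 2.0 ** -F
a, b = to_fix(1.7), to_fix(-0.3)
approx = to_float(fx_mul(a, b))
exact = 1.7 * -0.3

# A coarse certificate for this one operation: input quantization effects
# plus the single ulp lost in fx_mul.
bound = (abs(1.7) + abs(-0.3) + 1.0) * ulp
print(approx, exact, abs(approx - exact) <= bound)  # True
```

The certificates the thesis generates play this role at the scale of whole basic blocks: machine-checkable assertions that the accumulated rounding error stays below a stated threshold.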
APA, Harvard, Vancouver, ISO, and other styles
43

Cernetic, Linda K. "A Best Evidence Analysis and Synthesis of Research on Teacher Mentoring Programs for the Entry Year Teacher in the Public Elementary and Secondary Schools." Cedarville University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=cedar1065447034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Poernomo, Iman Hafiz 1976. "Variations on a theme of Curry and Howard : the Curry-Howard isomorphism and the proofs-as-programs paradigm adapted to imperative and structured program synthesis." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Augaitis, Darius. "Programų sintezė lygiagrečiuoju programavimo metodu." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080929_141449-26019.

Full text
Abstract:
The goals of this work are: to design, analyze, and implement a graphical scenario language that allows the user to easily create complex programs for Grid systems by drawing; to design and implement a graphical user interface for the Grid system with programming capabilities; and to analyze the limitations of the Grid system under study and to suggest an alternative solution.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Jie, Michael R. Mayton, and John J. Wheeler. "Effectiveness of Gluten-Free and Casein-Free Diets for Individuals with Autism Spectrum Disorders: An Evidence-Based Research Synthesis." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/317.

Full text
Abstract:
In order to better assist practitioners and better serve persons with autism spectrum disorders (ASD) and their families, it is vital for professionals to systematically evaluate the existing body of literature and synthesize its scientific evidence, so that the efficacy of research can be translated to evidence-based practices (EBPs) (Wheeler, 2007; Zhang & Wheeler, 2011). This research synthesis evaluated adherence to EBP standards and analyzed the effectiveness of gluten-free and casein-free (GFCF) diets for individuals with ASD. Four hundred and seventy articles from peer-reviewed English-language journals published through 2010 were screened using the Academic Search Complete database. Twenty-three studies were selected, and the researchers used a systematic analysis model developed by Mayton, Wheeler, Menendez, and Zhang (2010) to investigate the degree of adherence to specific evidence-based practice standards. In addition, the study utilized quality indicators proposed by (a) Horner et al. (2005) for single-subject design studies and (b) Gersten et al. (2005) for group experimental designs to evaluate the efficacy of GFCF diet interventions. Results of this synthesis indicated that the efficacy of GFCF diet interventions for individuals with ASD is inconclusive, and the field needs better-controlled studies to provide the scientific evidence base for the intervention.
APA, Harvard, Vancouver, ISO, and other styles
47

Dannenberg, Frits Gerrit Willem. "Modelling and verification for DNA nanotechnology." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a0b5343b-dcee-44ff-964b-bdf5a6f8a819.

Full text
Abstract:
DNA nanotechnology is a rapidly developing field that creates nanoscale devices from DNA, which enables novel interfaces with biological material. Their therapeutic use is envisioned and applications in other areas of basic science have already been found. These devices function at physiological conditions and, owing to their molecular scale, are subject to thermal fluctuations during both preparation and operation of the device. Troubleshooting a failed device is often difficult and we develop models to characterise two separate devices: DNA walkers and DNA origami. Our framework is that of continuous-time Markov chains, abstracting away much of the underlying physics. The resulting models are coarse but enable analysis of system-level performance, such as ‘the molecular computation eventually returns the correct answer with high probability’. We examine the applicability of probabilistic model checking to provide guarantees on the behaviour of nanoscale devices, and to this end we develop novel model checking methodology. We model a DNA walker that autonomously navigates a series of junctions, and we derive design principles that increase the probability of correct computational output. We also develop a novel parameter synthesis method for continuous-time Markov chains, for which the synthesised models guarantee a predetermined level of performance. Finally, we develop a novel discrete stochastic assembly model of DNA origami from first principles. DNA origami is a widespread method for creating nanoscale structures from DNA. Our model qualitatively reproduces experimentally observed behaviour and using the model we are able to rationally steer the folding pathway of a novel polymorphic DNA origami tile, controlling the eventual shape.
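The workhorse behind time-bounded properties such as "the walker reaches the correct junction by time t with high probability" is CTMC transient analysis. Below is a minimal uniformization sketch; the three-state chain and its rates are invented, and this is a generic textbook method rather than the implementation of any particular model checker.

```python
import numpy as np

def transient(Q, p0, t, eps=1e-10):
    # p(t) = p0 * exp(Q t) as a Poisson-weighted sum of DTMC powers.
    lam = max(-Q.diagonal())            # uniformization rate
    P = np.eye(len(Q)) + Q / lam        # uniformized DTMC
    term = np.asarray(p0, float)        # p0 * P^k, starting at k = 0
    poisson = np.exp(-lam * t)          # Poisson(k; lam*t) weight
    acc, mass, k = poisson * term, poisson, 0
    while mass < 1 - eps and k < 100_000:
        k += 1
        poisson *= lam * t / k
        term = term @ P
        acc += poisson * term
        mass += poisson
    return acc

# Tiny walker-like chain: 0 -> 1 -> 2, with 2 absorbing ("correct outcome").
Q = np.array([[-1.0,  1.0, 0.0],
              [ 0.2, -1.2, 1.0],
              [ 0.0,  0.0, 0.0]])
print(transient(Q, [1.0, 0.0, 0.0], t=5.0))  # last entry: P(success by t=5)
```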
APA, Harvard, Vancouver, ISO, and other styles
48

Cordier, Nicolas. "Approches multi-atlas fondées sur l'appariement de blocs de voxels pour la segmentation et la synthèse d'images par résonance magnétique de tumeurs cérébrales." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4111/document.

Full text
Abstract:
This thesis focuses on the development of automatic methods for the segmentation and synthesis of brain tumor Magnetic Resonance images. The main clinical purpose of glioma segmentation is growth-velocity monitoring for patient therapy management. To this end, the thesis builds on a formalization of multi-atlas patch-based segmentation with probabilistic graphical models. A first probabilistic model extends classical multi-atlas approaches, used for the segmentation of healthy brain structures, to the automatic segmentation of pathological cerebral regions. An approximation of the marginalization step replaces the concept of local search windows with a stratification with respect to both atlases and labels. A glioma detection model based on a spatially-varying prior and patch pre-selection criteria are introduced to obtain competitive running times despite patch matching being nonlocal. This work is validated and compared to state-of-the-art algorithms on publicly available datasets. A second probabilistic model mirrors the segmentation model in order to synthesize realistic MRI of pathological cases from a single label map. A heuristic method allows solving for the maximum a posteriori and estimating the uncertainty of the image synthesis model. Iterated patch matching reinforces the spatial coherence of the synthetic images. The realism of the synthetic images is assessed against real MRI and against the outputs of a state-of-the-art method. Coupling a tumor growth model to the proposed synthesis approach allows databases of annotated synthetic cases to be generated.
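At its core, the segmentation side reduces to nonlocal patch matching with similarity-weighted label voting. The deliberately tiny 1-D sketch below conveys only that idea; the 3-D patches, atlas/label stratification, and pre-selection criteria of the thesis are omitted, and all names and parameters are invented.

```python
import numpy as np

def patch_label_fusion(target, atlases, labels, radius=1, h=0.5):
    # For each target patch, weight every atlas patch by a Gaussian of
    # its intensity distance and average the corresponding labels.
    n, out = len(target), np.zeros(len(target))
    for i in range(radius, n - radius):
        tp = target[i - radius:i + radius + 1]
        num = den = 0.0
        for img, lab in zip(atlases, labels):
            for j in range(radius, n - radius):     # nonlocal search
                ap = img[j - radius:j + radius + 1]
                w = np.exp(-np.sum((tp - ap) ** 2) / h ** 2)
                num += w * lab[j]
                den += w
        out[i] = num / den                          # soft label in [0, 1]
    return out

rng = np.random.default_rng(1)
atlas = np.concatenate([np.zeros(10), np.ones(10)])   # step-shaped "anatomy"
lab = (atlas > 0.5).astype(float)
target = atlas + 0.1 * rng.standard_normal(20)
print(np.round(patch_label_fusion(target, [atlas], [lab]), 2))
```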
APA, Harvard, Vancouver, ISO, and other styles
49

Dickert, Jörg. "Synthese von Zeitreihen elektrischer Lasten basierend auf technischen und sozialen Kennzahlen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-204629.

Full text
Abstract:
Distributed generation and novel loads such as electric vehicles and heat pumps require the development towards active distribution networks. Load curves are needed for an appropriate design process. This thesis presents a feasible and expandable synthesis of load curves, carried out exemplarily for residential customers with a period under review of 1 year and time steps as short as 30 s. The data is collected for up-to-date appliances and current statistics on ways of living. The main focus lies on the input data for the synthesis, distinguishing between technical and social factors. Some thirty home appliances have been analyzed and are classified into five appliance classes by considering switching operations and power consumption. The active power is the key figure from the technical perspective, and the data is derived from manufacturer information. For the social perspective, six different customer types are defined; they differ in household size and housekeeping. The social key figures are the appliance penetration rate and, depending on the appliance class, the turn-on time, turn-off time, operating duration, or cycle duration. The elaborated two-stage synthesis is efficiently implemented in Matlab®. First, artificial load curves are created for each appliance of the households under consideration, according to the appliance class. In the second step, the individual load curves of the appliances are combined into load curves per line conductor. The algorithms have been validated during implementation by retracing the input data in the load curves, and the feasibility of the results is shown by comparing the key figures maximum load and power consumption to data in the literature. The generated load curves allow for unsymmetrical calculations of distribution systems and can be used for probabilistic investigations of the charging of electric vehicles, the sizing of thermal storage combined with heat pumps, or the integration of battery storage systems. A main advantage is the possibility to estimate the likelihood of operating conditions. The extension to further appliances and the changeability of the input data allow for versatile further investigations.
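The two-stage synthesis can be miniaturized as: draw per-appliance operations from class key figures, then superpose them into a household curve. Every key figure below is an invented stand-in for the thesis's data; only the 30 s time step matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)
STEPS = 24 * 120  # one day at 30 s resolution

# Hypothetical key figures: power in W, mean turn-on hour, std in hours,
# operating duration in steps, and uses per day.
APPLIANCES = {
    "kettle": (2000,  7.5, 1.0,   6,  3),
    "oven":   (1500, 18.0, 1.5,  80,  1),
    "tv":     ( 100, 20.0, 2.0, 300,  1),
    "fridge": (  90,  0.0, 0.0,  30, 48),  # cycles spread over the day
}

def household_load():
    load = np.zeros(STEPS)
    for power, mu, sigma, dur, uses in APPLIANCES.values():
        for _ in range(uses):
            if sigma > 0:   # turn-on time drawn around the mean hour
                start = int(rng.normal(mu, sigma) % 24 / 24 * STEPS)
            else:           # cycling appliance: uniform over the day
                start = int(rng.integers(0, STEPS))
            idx = (start + np.arange(dur)) % STEPS
            load[idx] += power
    return load

curve = household_load()
print(f"peak={curve.max():.0f} W  energy={curve.sum() * 30 / 3600 / 1000:.2f} kWh")
```

Summing several such households, one per line conductor, yields the kind of unsymmetrical network inputs mentioned above.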
APA, Harvard, Vancouver, ISO, and other styles
50

Slezák, Josef. "Evoluční syntéza analogových elektronických obvodů s využitím algoritmů EDA." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233666.

Full text
Abstract:
The dissertation focuses on the design of analog electronic circuits using algorithms with probabilistic models (EDA algorithms). Given the required characteristics of the target circuits, the presented methods are able to design both the parameters of the components used and their interconnection topology. Three different ways of employing EDA algorithms are proposed and tested on examples of real problems from the field of analog electronic circuits. The first method targets passive analog circuits and uses the UMDA algorithm to design both the circuit topology and the parameter values of the components used. The method is applied to the design of an admittance network with a required input impedance for a chaotic oscillator. The second method also targets passive analog circuits and uses a hybrid approach: UMDA for the topology design and a local optimization method for the component parameters. The third method additionally allows the design of analog circuits containing transistors. It uses a hybrid approach: an EDA algorithm for topology synthesis and a local optimization method for determining the parameters of the components used. In the individuals of the population, the topology information is expressed by means of graphs and hypergraphs.
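UMDA, the estimation-of-distribution algorithm behind the first two methods, is compact enough to sketch. In the thesis each bit (or bit group) encodes a topology decision and the fitness comes from circuit simulation; the bit-pattern fitness below is only a stand-in.

```python
import numpy as np

def umda(fitness, n_bits, pop=100, elite=0.5, gens=60, seed=0):
    # Univariate Marginal Distribution Algorithm: sample from independent
    # per-bit probabilities, keep the best fraction, re-estimate marginals.
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)
    for _ in range(gens):
        X = (rng.random((pop, n_bits)) < p).astype(int)
        f = np.apply_along_axis(fitness, 1, X)
        best = X[np.argsort(f)[-int(pop * elite):]]   # truncation selection
        p = best.mean(axis=0).clip(0.05, 0.95)        # avoid fixation
    return p

# Stand-in fitness: negative distance to a hidden bit pattern (in place of
# an impedance-error measure over candidate admittance networks).
target = np.tile([1, 0], 10)
p = umda(lambda x: -np.abs(x - target).sum(), n_bits=20)
print(np.round(p, 2))  # marginals should approach the target pattern
```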
APA, Harvard, Vancouver, ISO, and other styles