Doctoral dissertations on the topic "Symbolic models"

Follow this link to see other types of publications on this topic: Symbolic models.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the 50 best doctoral dissertations for your research on the topic "Symbolic models".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online whenever the relevant parameters are available in the metadata.

Browse doctoral dissertations from many different disciplines and compile appropriate bibliographies.

1

Porter, Mark A. "Evolving inferential fermentation models using symbolic annealing". Thesis, University of Newcastle Upon Tyne, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275517.

2

Devereux, Benet. "Finite-state models with multiplicities, symbolic representation and reasoning". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62961.pdf.

3

Townsend, Duncan Clarke McIntire. "Using a symbolic language parser to improve Markov language models". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100621.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 31-32).
This thesis presents a hybrid approach to natural language processing that combines an n-gram (Markov) model with a symbolic parser. In concert these two techniques are applied to the problem of sentence simplification. The n-gram system is comprised of a relational database backend with a frontend application that presents a homogeneous interface for both direct n-gram lookup and Markov approximation. The query language exposed by the frontend also applies lexical information from the START natural language system to allow queries based on part of speech. Using the START natural language system's parser, English sentences are transformed into a collection of structural, syntactic, and lexical statements that are uniquely well-suited to the process of simplification. After reducing the parse of the sentence, the resulting expressions can be processed back into English. These reduced sentences are ranked by likelihood by the n-gram model.
by Duncan Clarke McIntire Townsend.
M. Eng.
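As a rough illustration of the ranking step described in this abstract (candidate simplifications scored by an n-gram model), the sketch below trains a tiny add-one-smoothed bigram model and ranks candidates by log-likelihood. It is a hypothetical stand-in, not Townsend's database-backed n-gram frontend or the START-based parser.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def log_likelihood(sent, unigrams, bigrams):
    """Add-one smoothed bigram log-likelihood of a tokenized sentence."""
    tokens = ["<s>"] + sent + ["</s>"]
    vocab = len(unigrams)
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
               for a, b in zip(tokens, tokens[1:]))

# Rank candidate reductions of a sentence by model likelihood (toy corpus).
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
unigrams, bigrams = train_bigram(corpus)
candidates = [["the", "cat", "sat"], ["cat", "the", "sat"]]
print(max(candidates, key=lambda c: log_likelihood(c, unigrams, bigrams)))
```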
4

Zaner, Frederick Steven. "The development of symbolic models and their extension into space". The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1303495012.

5

Keyton, Michael M. (Michael Murray). "The Development and Interpretation of Several Symbolic Models of Thought". Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc331860/.

Abstract:
Philosophical and physiological investigations define thought to be the result of thinking. Psychological inquiry has mainly focused on discovery of the mechanisms and topology of thought. Philosophical inquiry has either explored the mind-body problem or analyzed the linguistics of the expression of a thought. However, neither has adequately investigated the phenomenal characteristics of thought itself, the intermediary between the production and the expression of a thought. The use of thought to analyze phenomenal characteristics of thought engenders a paradox. If the expression of thought requires finite series of linked words with rules governing syntax, then analysis of both the thought and the expression of the thought must necessarily transcend the linguistic level. During the last century many examples of logical paradoxes in the linguistics of thought have been given. The culminating difficulty of dealing with a finite structure, a characteristic of any language, is Gödel's incompleteness theorem, which says in essence that rendering all decisions about a finite system requires the use of material outside the system. Thus, a potentially complete interpretation of thought must use some technique which is basically non-linguistic. Wittgenstein proposed such a method with his "picture theory." This technique solves the major paradoxical problem generated by investigation of a reflective system using the system itself, but leaves unsolved the question of ultimate resolution. Using pictorial models with examples to assist in understanding phenomenal characteristics of thought, this paper investigates basic units of thought, attempting to identify properties of a basic unit of thought and of the collection of thoughts for a person, and analyzes relationships and interactions between units of thought.
6

Kamienny, Pierre-Alexandre. "Efficient adaptation of reinforcement learning agents : from model-free exploration to symbolic world models". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS412.

Abstract:
Reinforcement Learning (RL) encompasses a range of techniques employed to train autonomous agents to interact with environments with the purpose of maximizing their returns across various training tasks. To ensure successful deployment of RL agents in real-world scenarios, achieving generalization and adaptation to unfamiliar situations is crucial. Although neural networks have shown promise in facilitating in-domain generalization by enabling agents to interpolate desired behaviors, their limitations in generalizing beyond the training distribution often lead to suboptimal performance on out-of-distribution data. These challenges are further amplified in RL settings characterized by non-stationary environments and constant distribution shifts during deployment. This thesis presents novel strategies within the framework of meta-reinforcement learning, aiming to equip RL agents with the ability to adapt at test time to out-of-domain tasks. The first part of the thesis focuses on model-free techniques, i.e. techniques that do not explicitly model the environment, to learn effective exploration strategies. We consider two scenarios: one where the agent is provided with a set of training tasks, enabling it to explicitly model tasks and learn generalizable task representations; and another where the agent learns without rewards to maximize its coverage of the state space. In the second part, we investigate the application of symbolic regression, a powerful tool for developing predictive models that offer interpretability and exhibit enhanced robustness against distribution shifts. These models are subsequently integrated within model-based RL agents to improve their modeling of the dynamics. Furthermore, this research contributes to the field of symbolic regression by introducing a collection of techniques that leverage generative models, in particular the Transformer, enhancing their accuracy and efficiency. In summary, by addressing the challenges of adaptation and generalization in RL, this thesis develops techniques that enable meta-RL agents to adapt to out-of-domain tasks, ultimately facilitating their deployment in real-world scenarios.
7

Lampka, Kai. "A symbolic approach to the state graph based analysis of high-level Markov reward models". [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985513926.

8

Ranjan, Mukesh. "Automated Layout-Inclusive Synthesis of Analog Circuits Using Symbolic Performance Models". University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1129922496.

9

Ivanova, Elena. "Efficient Synthesis of Safety Controllers using Symbolic Models and Lazy Algorithms". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG088.

Abstract:
This thesis focuses on the development of efficient abstraction-based controller synthesis approaches for cyber-physical systems (CPS). While abstraction-based methods for CPS design have been the subject of intensive research over the last decades, the scalability of these techniques remains an issue. This thesis focuses on developing lazy synthesis algorithms for safety specifications. Safety specifications consist in maintaining the trajectory of the system inside a given safe set. This specification is of the utmost importance in many engineering problems, and is often prioritized over other performance requirements. Lazy approaches outperform the classical synthesis algorithm [Tabuada, 2009] by avoiding computations which are non-essential for the synthesis goal. Chapter 1 motivates the thesis and discusses the state of the art. Chapter 2 structures the existing lazy synthesis approaches and emphasizes three sources of efficiency: information about a priori controllable states, priorities on inputs, and states that are not reachable from the initial set. Chapter 3 proposes an algorithm which iteratively explores states on the boundary of the controllable domain while avoiding exploration of internal states, assuming that they are safely controllable a priori. A closed-loop safety controller for the original problem is then defined as follows: we use the abstract controller to push the system from a boundary state back towards the interior, while for inner states any admissible input is valid. Chapter 4 presents an algorithm that restricts the controller synthesis computations to reachable states only while prioritizing longer-duration transitions. The original system is abstracted by a symbolic model with an adaptive grid. Moreover, a novel type of time sampling is also considered: instead of using transitions of predetermined duration, the duration of the transitions is constrained by state intervals that must contain the reachable set. Chapter 5 is dedicated to monotone transition systems. The introduced lazy synthesis approach benefits from the monotonicity of the transition system, the ordered structure of the state (input) space, and the fact that directed safety specifications are considered. The considered class of specifications is then enriched by intersections of upper- and lower-closed safety requirements. Chapter 6 concludes the discussion and raises new issues for future research.
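For a concrete sense of the safety-synthesis problem this thesis speeds up, here is a minimal, non-lazy fixed-point sketch on an invented finite symbolic model: states are pruned from the safe set until every remaining state has some input whose successors all stay inside. The toy transition system is purely illustrative; the lazy algorithms of the thesis avoid much of this exhaustive exploration.

```python
def safety_synthesis(states, inputs, post, safe):
    """Maximal safe controlled-invariant set and a safety controller.

    post(x, u) returns the set of possible successors of state x under input u.
    """
    winning = set(safe)
    changed = True
    while changed:
        changed = False
        for x in list(winning):
            # keep x only if some input keeps all successors inside the winning set
            if not any(post(x, u) <= winning for u in inputs):
                winning.discard(x)
                changed = True
    controller = {x: [u for u in inputs if post(x, u) <= winning] for x in winning}
    return winning, controller

# Hypothetical 4-state abstraction with a disturbance-induced nondeterministic edge.
states = {0, 1, 2, 3}
inputs = {"a", "b"}
edges = {(0, "a"): {0}, (0, "b"): {1}, (1, "a"): {0, 2}, (1, "b"): {1},
         (2, "a"): {3}, (2, "b"): {3}, (3, "a"): {3}, (3, "b"): {3}}
post = lambda x, u: edges[(x, u)]
safe = {0, 1, 2}            # state 3 violates the safety specification

winning, controller = safety_synthesis(states, inputs, post, safe)
print(winning)      # {0, 1}: state 2 is pruned because every input leads to 3
print(controller)   # admissible inputs per winning state
```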
10

Salgado, Mauricio. "More than words : computational models of emergence and evolution of symbolic communication". Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556464.

Abstract:
The study of symbolic communication is a key research area in both the social and natural sciences. However, little has been done in order to bridge these scientific domains, so an unfortunate gulf between them still persists. Even less has been done in the field of computational sociology, in which most research using agent-based models has disregarded the importance of symbolic communication. It is this lacuna that the thesis addresses. In the thesis, it is claimed that the type of emergent properties that are inherent to social phenomena are likely to result from the unique fact that the participating entities are symbolic agents. It is proposed that symbolic communication is a threshold phenomenon that emerges in the intersections among human cognition, social interactions and human biology. A theoretical framework with which to clarify this connection is also presented. In order to test in silico some hypotheses derived from this theoretical framework, the analysis relies upon two agent-based models. Different simulation methods and techniques were used, such as reinforcement learning algorithms, genetic algorithms, graph theory, and evolutionary game theory. To investigate the simulation results, multivariate analysis techniques, social network analysis and differential equations were used. The first agent-based model was developed to study the properties of an emergent communication system, in which groups of 'speechless' agents create local lexicons and compete with each other to spread them throughout the whole population. The model results indicate that a common lexicon can emerge on the condition that a group of agents develops a communicative strategy that favours their mutual understanding and allows them to reach more recipients for their utterances. An analysis of the agents' social networks reveals that strong mutual relations among agents from the same group, high 'immunity' to external influence and high capability of speaking to agents from different groups play a fundamental role in the process of spreading lexicons. The second agent-based model was built to study the pre-linguistic stage of cooperation among individuals required for the emergence of symbolic communication. In this model, agents reproduce sexually, males and females differ in their reproductive costs and they play the iterated prisoner's dilemma. The model results show that, when male reproductive costs are less than female reproductive costs, males cooperate with females even when females do not reciprocate. This non-reciprocal cooperation, in turn, produces sustained population growth, because females can reproduce faster despite their high reproductive costs. Finally, a mathematical model of cumulative cultural evolution is used to investigate different patterns of population dynamics, and it is demonstrated that the artificial societies in which non-reciprocal cooperation emerges are able to sustain more complex cultural artefacts, such as communicative symbols. Linking computational sociology to appropriate theories of language evolution, communication, evolutionary biology and cognitive research, the thesis provides conceptually grounded mechanisms to explain the emergence and evolution of symbolic communication. In so doing, the thesis contributes both substantively and methodologically to academic work on computational sociology, as well as agent-based models of symbolic communication.
Key words: Agent-Based Modelling, Computational Sociology, Game Theory, Cooperative Breeding, Cultural Evolution, Cultural Cognition, Emergence, Lexicons, Symbolic Communication.
11

Nguyen, Ngo Minh Thang. "Test case generation for Symbolic Distributed System Models : Application to Trickle based IoT Protocol". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC092.

Abstract:
Distributed systems are composed of many subsystems that are distant from one another. In order to achieve a common task, subsystems communicate both with the environment by external messages and with other subsystems by internal messages, through a communication network. In practice, distributed systems exhibit several kinds of errors, specific to the constituent subsystems or related to the internal communications, so we need to test them in order to gain confidence in their correct operation. However, testing distributed systems is complicated due to their intrinsic characteristics: without a global clock, subsystems cannot easily synchronize their message exchanges, which leads to non-deterministic situations. Model-Based Testing (MBT) aims at checking whether the behavior of a system under test (SUT) conforms to its model, which specifies the expected behaviors. MBT involves two main steps: test case generation and verdict computation. In this thesis, we are mainly interested in test case generation for distributed systems. To specify the desired behaviors, we use Timed Input Output Symbolic Transition Systems (TIOSTS), analyzed with symbolic execution techniques to derive the behaviors of the distributed system. In our approach, the test architecture allows observing, at the level of each subsystem, both the external messages emitted towards the environment and the internal messages received and sent. Our testing framework includes several steps: selecting a global test purpose, defined as a particular behavior exhibited by symbolic execution; projecting the global test purpose onto each subsystem to obtain local test purposes; and deriving a unitary test case for each subsystem. Test execution then consists in running the local test cases on the subsystems, parameterized by the test purposes, computing on the fly the test data to submit to each subsystem according to the data observed, and computing a test verdict on the fly. Finally, we apply our testing framework to a case study describing a protocol used in the context of the IoT.
12

Bannour, Boutheina. "Symbolic analysis of scenario based timed models for component based systems : Compositionality results for testing". Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00997776.

Abstract:
In this thesis, we describe how to use UML sequence diagrams with MARTE timing constraints to specify entirely the behavior of component-based systems while abstracting as much as possible the functional roles of components composing it. We have shown how to conduct compositional analysis of such specifications. For this, we have defined operational semantics to sequence diagrams by translating them into TIOSTS which are symbolic automata with timing constraints. We have used symbolic execution techniques to compute possible executions of the system in the form of a symbolic tree. We have defined projection mechanisms to extract the execution tree associated with any distinguished component. The resulting projected tree characterizes the possible behaviors of the component with respect to the context of the whole system specification. As such, it represents a constraint to be satisfied by the component and it can be used as a correctness reference to validate the system in a compositional manner. For that purpose, we have grounded our validation framework on testing techniques. We have presented compositional results relating the correctness of a system to the correctness of components. Based on these results, we have defined an incremental approach for testing from sequence diagrams.
13

De Valk, R. "Structuring lute tablature and MIDI data : machine learning models for voice separation in symbolic music representations". Thesis, City, University of London, 2015. http://openaccess.city.ac.uk/15659/.

Abstract:
This thesis concerns the design, development, and implementation of machine learning models for voice separation in two forms of symbolic music representations: lute tablature and MIDI data. Three modelling approaches are described: MA1, a note-level classification approach using a neural network, MA2, a chord-level regression approach using a neural network, and MA3, a chord-level probabilistic approach using a hidden Markov model. Furthermore, three model extensions are presented: backward processing, modelling voice and duration simultaneously, and multi-pass processing using an extended (bidirectional) decision context. Two datasets are created for model evaluation: a tablature dataset, containing a total of 15 three-voice and four-voice intabulations (lute arrangements of polyphonic vocal works) in a custom-made tablature encoding format, tab+, as well as in MIDI format, and a Bach dataset, containing the 45 three-voice and four-voice fugues from Johann Sebastian Bach’s _Das wohltemperirte Clavier_ (BWV 846-893) in MIDI format. The datasets are made available publicly, as is the software used to implement the models and the framework for training and evaluating them. The models are evaluated on the datasets in four experiments. The first experiment, where the different modelling approaches are compared, shows that MA1 is the most effective and efficient approach. The second experiment shows that the features are effective, and it demonstrates the importance of the type and amount of context information that is encoded in the feature vectors. The third experiment, which concerns model extension, shows that modelling backward and modelling voice and duration simultaneously do not lead to the hypothesised increase in model performance, but that using a multi-pass bidirectional model does. In the last experiment, where the performance of the models is compared with that of existing state-of-the-art systems for voice separation, it is shown that the models described in this thesis can compete with these systems.
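As a hedged sketch of the note-level classification idea behind MA1 (not de Valk's actual features, network architecture, or data), each note below is described by a couple of invented context features and assigned to a voice by a small feed-forward network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy per-note features: pitch and metric position; real systems use much richer
# context vectors. Two synthetic "voices" are generated around different registers.
rng = np.random.default_rng(0)
n = 200
pitch = np.concatenate([rng.normal(72, 3, n // 2),   # upper voice around C5
                        rng.normal(48, 3, n // 2)])  # lower voice around C3
onset = rng.uniform(0, 1, n)
X = np.column_stack([pitch, onset])
y = np.array([0] * (n // 2) + [1] * (n // 2))        # voice labels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Predict the voice of two new notes (a high one and a low one).
print(clf.predict([[74, 0.25], [47, 0.5]]))  # expected: [0 1]
```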
14

Rajagopalan, Jayendar. "Symbolic and connectionist machine learning techniques for short-term electric load forecasting". Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-08222009-040506/.

15

Slattery, David P. "Archaeology of Trobriand knowledge: Foucault in the Trobriand Islands". Thesis, University of St Andrews, 1992. http://hdl.handle.net/10023/2844.

Abstract:
This thesis holds that the application of the archaeological method, developed by the French philosopher Michel Foucault, to the field of anthropology reveals a hitherto hidden primitive episteme. Such a project represents a rejection of a search for a fundamental Truth, available through the traditional figures of rationality, either vertically in history or horizontally across cultures. The form of reason posited by this project does not have a constant and universal occurrence but is given in the discontinuous figures of the episteme. The quest for a single manifestation of the conditions of validity in reason is replaced by a study of the conditions of possibility of the truths, discourses and institutions of a primitive peoples. The conditions of possibility for the emergence of the elements of primitive knowledge and practices are available through the application of the explanatory unities of the archaeological method. These unities replace the traditional explanatory role of the subject, with all of its psychological baggage, which has a central role in modern theories of rationality. The subject-knowledge link that dominates traditional anthropological analyses is replaced by a powerknowledge link that postulates the two axes of discursive and non-discursive concerns. The discursive axis is concerned with the objects, concepts, statements and discursive formations of primitive knowledge while the non-discursive axis is concerned with the systems of power that propagate and sustain those discourses. These two axes constitute the nature of the archaeology employed in this study. This thesis is sustained by both negative and positive evidence. The negative evidence takes the form of an antisubjectivist thrust where the subject-dependent explanatory unities of the tradition are replaced by the positivistic elements of archaeology. The positive evidence primarily takes the form of a detailed analysis of the presence of the guiding codes of the episteme amongst the Trobriand Islanders that give rise to their primitive knowledge and practices. In this area, I make extensive use of Malinowski's ethnographic observations for their breath of detail and application without employing his subject-dependent psychobiological conclusions. Further, I am proposing a transformative position such that orality becomes a feature of the episteme rather than its condition of possibility. The guiding codes of the Trobriand episteme take the form of enclosed oppositional figures that are everywhere related to space. The Trobriand episteme provides the conditions for the emergence of primitive discourses and orders the experiences of the Trobrianders. The guiding figures of the episteme are based in a form of complementary opposition, causation as vitality and a dogma of topological space that give rise to primitive knowledge which is a form of divination. A significant part of this dissertation is taken up with an examination of the detail and limitation of these figures where ideas from Levy-Bruhl, Hallpike, and others are employed to produce the most appropriate configuration for my project. A particular form of language as the manipulation of real signs, rather than ideational signs, has its possibility in this configuration which has consequences for the type of knowledge produced. The form of knowledge appropriate to the presence of such a model of language is magic. 
Writing has no possibility for emerging in this episteme and, therefore, there are significant consequences for the type of knowledge that can be maintained and propagated in a context which must utilise static tradition to the detriment of reflection. An archaeological analysis of the Trobriand Islanders, focusing on discourses on sex and marriage, the nature of tabooed sexual acts, economic relations arising out of marriage and the role of the polygamous chief, the nature of love-magic and magic in general, reveals a shared possibility for all of these discursive realms in the figures of the episteme. These discourses are regulated by the presence of a fundamental opposition between a brother and his sister. This opposition forms the motif for primitive problematizations and constitutes a vulnerable boundary which is the appropriate focus of taboos relating to sex and food, amongst others. This primitive episteme characterises the unity of the experiences of the Trobrianders. This experience is discontinuous with our own and does not involve a role for the individual ego. This project represents a worthwhile contribution to an understanding of human experience and knowledge in general which does not seek to reduce the natural diversity of man to just the monotonous experience of modern man. In conclusion, I tentatively speculate about the appropriateness of the Trobriand figures for primitive experience in general.
16

Syal, Manan. "Untestable Fault Identification Using Implications". Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/46173.

Abstract:
Untestable faults in circuits are defects/faults for which there exists no test pattern that can either excite the fault or propagate the fault effect to an observable point, which could be either a Primary output (PO) or a scan flip-flop. The current state-of-the-art automatic test pattern generators (ATPGs) spend a lot of time in trying to generate a test sequence for the detection of untestable faults, before aborting on them, or identifying them as untestable, given enough time. Thus, it would be beneficial to quickly identify faults that are redundant/untestable, so that tools such as ATPG engines or fault simulators do not waste time targeting these faults. Our work focuses on the identification of untestable faults at low cost in terms of both memory and execution time. A powerful and memory efficient implication engine, which is used to identify the effect(s) of asserting logic values in a circuit, is used as the basic building block of our tool. Using the knowledge provided by this implication engine, we identify untestable faults using a fault independent, conflict based analysis. We evaluated our tool against several benchmark circuits (ISCAS '85, ISCAS '89 and ISCAS '93), and found that we could identify considerably more untestable faults in sequential circuits compared to similar conflict based algorithms which have been proposed earlier.
Master of Science
17

Zhang, Ying. "Synthesis of Local Thermo-Physical Models Using Genetic Programming". Scholar Commons, 2009. https://scholarcommons.usf.edu/etd/103.

Abstract:
Local thermodynamic models are practical alternatives to computationally expensive rigorous models that involve implicit computational procedures, and often complement them to accelerate computation for real-time optimization and control. Human-centered strategies for the development of these models are based on approximation of theoretical models. A Genetic Programming (GP) system can extract knowledge from the given data in the form of symbolic expressions. This research describes a fully data-driven, automatic, self-evolving algorithm that builds appropriate approximating formulae for local models using genetic programming. No a priori information on the type of mixture (ideal/non-ideal, etc.) or other assumptions are necessary. The approach involves synthesis of models for a given set of variables and mathematical operators that may relate them. The selection of variables is automated through principal component analysis and heuristics. For each candidate model, the model parameters are optimized in the integrated inner nested loop. The trade-off between accuracy and model complexity is addressed through incorporation of the Minimum Description Length (MDL) into the fitness (objective) function. Statistical tools including residual analysis are used to evaluate the performance of models. Adjusted R-square is used to test a model's accuracy, and an F-test is used to test whether the terms in the model are necessary. The analysis of the performance of the models generated with the data-driven approach depicts the theoretically expected range of compositional dependence of partition coefficients and the limits of ideal gas as well as ideal solution behavior. Finally, the model built by GP is integrated into a steady-state and dynamic flowsheet simulator to show the benefits of using such models in simulation. The test systems were propane-propylene for ideal solutions and acetone-water for non-ideal solutions. The results show that the generated models are accurate for the whole range of data and that their performance is tunable. The generated local models can indeed be used as empirical models that go beyond eliminating local model updating procedures, further enhancing the utility of the approach for deployment in real-time applications.
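The accuracy-versus-complexity trade-off described above can be sketched as a fitness function that adds a description-length penalty to the data-fit term. The formula below is a generic MDL-style surrogate, not the exact criterion used in the thesis; the candidate models and node counts are invented.

```python
import math

def mdl_fitness(expr, data, n_nodes):
    """Toy MDL-style fitness: residual coding cost plus model coding cost.

    expr     -- callable candidate model, expr(x) -> prediction
    data     -- list of (x, y) pairs
    n_nodes  -- rough complexity of the symbolic expression (node count)
    Lower is better.
    """
    n = len(data)
    sse = sum((y - expr(x)) ** 2 for x, y in data)
    mse = max(sse / n, 1e-12)                     # avoid log(0) for perfect fits
    data_cost = 0.5 * n * math.log(mse)           # cost of encoding the residuals
    model_cost = 0.5 * n_nodes * math.log(n)      # cost of encoding the model
    return data_cost + model_cost

# Two hypothetical candidates for y = 2x: the right expression and a needlessly
# complex one with a spurious quadratic term.
data = [(x, 2.0 * x) for x in range(1, 20)]
simple = lambda x: 2.0 * x                        # 3 nodes: (* 2 x)
bloated = lambda x: 2.0 * x + 1e-3 * x ** 2       # 9 nodes, spurious extra term
print(mdl_fitness(simple, data, 3) < mdl_fitness(bloated, data, 9))  # True
```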
18

Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.

Abstract:
Many different extensions of the VAE framework have been introduced in the past. However, the vast majority of them focused on pure sub-symbolic approaches that are not sufficient for solving generative tasks that require a form of reasoning. In this thesis, we propose the probabilistic logic VAE (PLVAE), a neuro-symbolic deep generative model that combines the representational power of VAEs with the reasoning ability of probabilistic logic programming. The strength of PLVAE resides in its probabilistic logic prior, which provides an interpretable structure to the latent space that can be easily changed in order to apply the model to different scenarios. We provide empirical results of our approach by training PLVAE on a base task and then using the same model to generalize to novel tasks that involve reasoning with the same set of symbols.
19

Dernehl, Christian Michael [Verfasser], Stefan [Akademischer Betreuer] Kowalewski and Dieter [Akademischer Betreuer] Moormann. "Verification of embedded software models by combining abstract interpretation, symbolic execution and stability analysis / Christian Michael Dernehl ; Stefan Kowalewski, Dieter Moormann". Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1210710358/34.

20

Tino, Peter, Christian Schittenkopf and Georg Dorffner. "Temporal pattern recognition in noisy non-stationary time series based on quantization into symbolic streams. Lessons learned from financial volatility trading". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1680/1/document.pdf.

Abstract:
In this paper we investigate the potential of the analysis of noisy non-stationary time series by quantizing it into streams of discrete symbols and applying finite-memory symbolic predictors. The main argument is that careful quantization can reduce the noise in the time series to make model estimation more amenable given limited numbers of samples that can be drawn due to the non-stationarity in the time series. As a main application area we study the use of such an analysis in a realistic setting involving financial forecasting and trading. In particular, using historical data, we simulate the trading of straddles on the financial indexes DAX and FTSE 100 on a daily basis, based on predictions of the daily volatility differences in the underlying indexes. We propose a parametric, data-driven quantization scheme which transforms temporal patterns in the series of daily volatility changes into grammatical and statistical patterns in the corresponding symbolic streams. As symbolic predictors operating on the quantized streams we use the classical fixed-order Markov models, variable memory length Markov models and a novel variation of fractal-based predictors introduced in its original form in (Tino, 2000b). The fractal-based predictors are designed to efficiently use deep memory. We compare the symbolic models with continuous techniques such as time-delay neural networks with continuous and categorical outputs, and GARCH models. Our experiments strongly suggest that the robust information reduction achieved by quantizing the real-valued time series is highly beneficial. To deal with non-stationarity in financial daily time series, we propose two techniques that combine ``sophisticated" models fitted on the training data with a fixed set of simple-minded symbolic predictors not using older (and potentially misleading) data in the training set. Experimental results show that by quantizing the volatility differences and then using symbolic predictive models, market makers can generate a statistically significant excess profit. However, with respect to our prediction and trading techniques, the option market on the DAX does seem to be efficient for traders and non-members of the stock exchange. There is a potential for traders to make an excess profit on the FTSE 100. We also mention some interesting observations regarding the memory structure in the studied series of daily volatility differences. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
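A minimal sketch of the quantize-then-predict pipeline this abstract describes: a real-valued series is discretized into a small symbol alphabet (here with simple quantile cut-points rather than the paper's parametric, data-driven scheme) and a fixed-order Markov model predicts the next symbol. The series below is synthetic.

```python
import numpy as np
from collections import Counter, defaultdict

def quantize(series, n_symbols=3):
    """Map a real-valued series to symbols 0..n_symbols-1 using quantile cut-points."""
    cuts = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, cuts)

def fit_markov(symbols, order=2):
    """Count next-symbol frequencies for each length-`order` context."""
    model = defaultdict(Counter)
    for i in range(order, len(symbols)):
        model[tuple(symbols[i - order:i])][symbols[i]] += 1
    return model

def predict(model, context):
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

rng = np.random.default_rng(1)
diffs = rng.standard_normal(500)            # stand-in for daily volatility differences
symbols = quantize(diffs, n_symbols=3).tolist()

model = fit_markov(symbols, order=2)
print(predict(model, symbols[-2:]))         # most likely next symbol given the last context
```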
21

Swikir, Abdalla [Verfasser], Martin [Akademischer Betreuer] Buss, Majid [Gutachter] Zamani and Martin [Gutachter] Buss. "Compositional Synthesis of Symbolic Models for (In)Finite Networks of Cyber-Physical Systems / Abdalla Swikir ; Gutachter: Majid Zamani, Martin Buss ; Betreuer: Martin Buss". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1225864755/34.

22

Almeida, Edgar Luis Bezerra de 1976. "Lógicas abstratas e o primeiro teorema de Lindström". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/282036.

Abstract:
Advisor: Itala Maria Loffredo D'Ottaviano
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Filosofia e Ciências Humanas
Abstract: This thesis presents a definition of abstract logic and characterizes some well-known logical systems from the literature as particular cases of it. In particular, we show that first-order logic, second-order logic, the logic with Mostowski's operator Q1 and the infinitary logic Lω1ω are particular cases of abstract logics; moreover, we show that these logics are regular. In the analysis of each of the logics mentioned above, we examine their behavior with respect to the Löwenheim-Skolem and countable compactness properties, results that are central to model theory. Our analysis shows that, among the four cases presented, the only one that enjoys both properties is first-order logic; the others fail in one, the other, or both. We show that this is no mere coincidence but a deep result that establishes well-defined boundaries for first-order logic, known as Lindström's first theorem: if a regular logic is at least as expressive as first-order logic and satisfies both of the properties mentioned, then it is equivalent to first-order logic. We carry out a careful proof of the theorem, in which every idea and every proof strategy is established rigorously. With his work, Lindström inaugurated a new and fruitful field of study, abstract model theory, which establishes, with respect to various combinations of properties of logical systems, a stratification among logics. We present another example of such stratification through a modal version of Lindström's theorem, which characterizes basic modal logic as maximal with respect to bisimulation invariance and compactness. We conclude with some considerations about the influence of Lindström's first theorem.
Master's
Philosophy
Master in Philosophy
23

Zambonin, Giuliano. "Development of Machine Learning-based technologies for major appliances: soft sensing for drying technology applications". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3425771.

Abstract:
In this thesis, Machine Learning techniques for improving the performance of major household appliances are described. In particular, the focus is on drying technologies, and domestic dryers are the machines of interest selected as case studies. Statistical models called Soft Sensors have been developed to provide estimates of quantities that are costly or time-consuming to measure in our applications, using data that were available for other purposes. The work has been developed as an industrially driven research activity in collaboration with the Electrolux Italia S.p.a. R&D department located in Porcia, Pordenone, Italy. The thesis also discusses practical aspects of implementing the proposed approaches in a real industrial environment, as well as topics related to collaboration between industry and academia.
24

Hadjeres, Gaëtan. "Modèles génératifs profonds pour la génération interactive de musique symbolique". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS027/document.

Abstract:
This thesis discusses deep generative models applied to the automatic generation of symbolic music. We focus in particular on devising interactive generative models, that is, models that establish a dialogue between a human composer and the machine during the creative process. Recent advances in artificial intelligence make it possible to build powerful generative models capable of generating musical content without human intervention. This approach, however, seems sterile for artistic production, since human intervention and human appreciation are essential pillars of it. In contrast, the design of powerful, flexible and expressive assistants for creators of musical content appears full of promise: whether for pedagogical purposes or to stimulate artistic creativity, the development and potential of such novel computer-assisted composition tools are promising. In this manuscript, I propose several new architectures that put the human back at the centre of music creation. The proposed models share the requirement that an operator must be able to control the generated content. In order to make this interaction easy, user interfaces were developed; the possibilities for control take varied forms and hint at new compositional paradigms. In order to anchor these advances in real musical practice, this thesis concludes with the presentation of several concrete realizations (scores, concerts) resulting from the use of these new tools.
25

d'Orso, Julien. "New Directions in Symbolic Model Checking". Doctoral thesis, Uppsala University, Department of Information Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3753.

Abstract:

In today's computer engineering, requirements for generally high reliability have pushed the notion of testing to its limits. Many disciplines are moving, or have already moved, to more formal methods to ensure correctness. This is done by comparing the behavior of the system as it is implemented against a set of requirements. The ultimate goal is to create methods and tools that are able to perform this kind of verification automatically: this is called Model Checking.

Although the notion of model checking has existed for two decades, adoption by the industry has been hampered by its poor applicability to complex systems. During the 90's, researchers have introduced an approach to cope with large (even infinite) state spaces: Symbolic Model Checking. The key notion is to represent large (possibly infinite) sets of states by a small formula (as opposed to enumerating all members). In this thesis, we investigate applying symbolic methods to different types of systems:

Parameterized systems. We work within the framework of Regular Model Checking. In regular model checking, we represent a global state as a word over a finite alphabet. A transition relation is represented by a regular length-preserving transducer. An important operation is the so-called transitive closure, which characterizes composing a transition relation with itself an arbitrary number of times. Since completeness cannot be achieved, we propose methods of computing closures that work as often as possible.

Games on infinite structures. Infinite-state systems for which the transition relation is monotonic with respect to a well quasi-ordering on states can be analyzed. We lift the framework of well quasi-ordered domains toward games. We show that monotonic games are in general undecidable. We identify a subclass of monotonic games: downward-closed games. We propose an algorithm to analyze such games with a winning condition expressed as a safety property.

Probabilistic systems. We present a framework for the quantitative analysis of probabilistic systems with an infinite state-space: given an initial state sinit, a set F of final states, and a rational Θ > 0, compute a rational ρ such that the probability of reaching F from sinit is between ρ and ρ + Θ. We present a generic algorithm and sufficient conditions for termination.
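The quantitative-analysis problem in the last paragraph can be illustrated, for a concrete finite chain, by unfolding path probabilities breadth-first: mass that has already hit F gives the lower bound ρ, and mass still circulating bounds the gap by Θ. This is only an illustrative scheme on an invented toy chain, not the thesis's generic algorithm for infinite-state systems.

```python
def reach_prob_bounds(transitions, s_init, final, theta):
    """Return (rho, rho + gap) with gap <= theta, bounding the probability of reaching `final`.

    transitions[s] is a list of (successor, probability) pairs; states with no entry
    are treated as dead ends that can never reach `final`.
    """
    reached = 0.0                     # mass of paths that have already hit `final`
    frontier = {s_init: 1.0}          # mass of paths still being extended
    while sum(frontier.values()) > theta:
        new_frontier = {}
        for state, mass in frontier.items():
            for succ, p in transitions.get(state, []):
                if succ in final:
                    reached += mass * p
                else:
                    new_frontier[succ] = new_frontier.get(succ, 0.0) + mass * p
        frontier = new_frontier
    return reached, reached + sum(frontier.values())

# Hypothetical chain: from 0, hit the goal w.p. 1/3, loop w.p. 1/2, or fall into a
# dead end w.p. 1/6. The true reachability probability is (1/3) / (1/2) = 2/3.
chain = {0: [("goal", 1 / 3), (0, 1 / 2), ("dead", 1 / 6)]}
lo, hi = reach_prob_bounds(chain, 0, {"goal"}, theta=1e-6)
print(lo, hi)  # the true probability 2/3 lies between the two bounds
```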

26

Orso, Julien d'. "New directions in symbolic model checking /". Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3753.

27

Shlyakhter, Ilya 1975. "Declarative symbolic pure-logic model checking". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30184.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 173-181).
Model checking, a technique for finding errors in systems, involves building a formal model that describes possible system behaviors and correctness conditions, and using a tool to search for model behaviors violating correctness properties. Existing model checkers are well-suited for analyzing control-intensive algorithms (e.g. network protocols with simple node state). Many important analyses, however, fall outside the capabilities of existing model checkers. Examples include checking algorithms with complex state, distributed algorithms over all network topologies, and highly declarative models. This thesis addresses the problem of building an efficient model checker that overcomes these limitations. The work builds on Alloy, a relational modeling language. Previous work has defined the language and shown that it can be analyzed by translation to SAT. The primary contributions of this thesis include: a modeling paradigm for describing complex structures in Alloy; significant improvements in scalability of the analyzer; and improvements in usability of the analyzer via the addition of a debugger for overconstraints. Together, these changes make model checking practical for important new classes of analyses. While the work was done in the context of Alloy, some techniques generalize to other verification tools.
by Ilya A. Shlyakhter.
S.M.
28

Giuliani, Luca. "Extending the Moving Targets Method for Injecting Constraints in Machine Learning". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23885/.

Abstract:
Informed Machine Learning is an umbrella term that comprises a set of methodologies in which domain knowledge is injected into a data-driven system in order to improve its level of accuracy, satisfy some external constraint, and in general serve the purposes of explainability and reliability. This topic has been widely explored in the literature by means of many different techniques. Moving Targets is one such technique, particularly focused on constraint satisfaction: it is based on decomposition and bi-level optimization and proceeds by iteratively refining the target labels through a master step which is in charge of enforcing the constraints, while the training phase is delegated to a learner. In this work, we extend the algorithm in order to deal with semi-supervised learning and soft constraints. In particular, we focus our empirical evaluation on both regression and classification tasks involving monotonicity shape constraints. We demonstrate that our method is robust with respect to its hyperparameters, as well as being able to generalize very well while reducing the number of violations of the enforced constraints. Additionally, the method can even outperform, both in terms of accuracy and constraint satisfaction, other state-of-the-art techniques such as Lattice Models and Semantic-based Regularization with a Lagrangian Dual approach for automatic hyperparameter tuning.
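To make the decomposition concrete, here is a drastically simplified sketch of the learner/master alternation for a monotonicity constraint. The real Moving Targets master step solves a constrained optimization problem trading off the original labels against the learner's predictions; below it is replaced by a plain blend followed by an isotonic projection, and the data are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.isotonic import IsotonicRegression

# Noisy, roughly increasing data; we want the learner's predictions to be monotone in x.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 80))
y = 0.5 * x + rng.normal(0, 1.0, 80)
X = x.reshape(-1, 1)

learner = DecisionTreeRegressor(max_depth=4, random_state=0)
targets = y.copy()
for _ in range(5):
    # learner step: fit the current (adjusted) targets
    learner.fit(X, targets)
    pred = learner.predict(X)
    # master step (simplified): stay close to both the original labels and the
    # predictions, then project the result onto the monotonicity constraint
    blended = 0.5 * y + 0.5 * pred
    targets = IsotonicRegression().fit_transform(x, blended)

final = learner.predict(X)
print(bool(np.all(np.diff(final) >= 0)))  # True: monotone predictions on the training grid
```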
Style APA, Harvard, Vancouver, ISO itp.
29

Nizamani, Qurat Ul Ain. "Directed symbolic model checking of security protocols". Thesis, University of Leicester, 2011. http://hdl.handle.net/2381/10181.

Pełny tekst źródła
Streszczenie:
This thesis promotes the use of directed model checking for security protocol verification. In particular, we investigated the possibility of designing heuristics that can reduce the overall size of the state space and can direct the search towards states containing an attack. More precisely:
• We have designed three property-specific heuristics, namely H1, H2, and H3. The heuristics derive their hints from the security property to be verified and assign weights to states according to their possibility of leading to an attack.
• H1 is formally proved correct, i.e., the states pruned by the heuristic H1 do not contain any attack.
• An existing tool ASPASyA with a conventional model checking algorithm (i.e., depth first search) has been modified so as to integrate our heuristics into it. The resulting tool H-ASPASyA uses an informed search algorithm that is equipped with our heuristics. The heuristics evaluate the states, which are then explored in decreasing order of their weights.
• The new tool H-ASPASyA is tested against a few protocols to gauge the performance of our heuristics. The results demonstrate the efficiency of our approach.
It is worth mentioning that despite being a widely applied verification technique, model checking suffers from the state space explosion problem. Recently, directed model checking has been used to mitigate the state space explosion problem in general model checking. However, directed model checking approaches have not been studied extensively for security protocol verification. This thesis demonstrates that directed model checking can be adapted for security protocol verification in order to yield better results.
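The directed-search idea can be sketched as a best-first exploration in which a heuristic assigns weights to states; everything below (the toy "protocol", its transitions and the weighting) is invented for illustration and is unrelated to H1-H3 or H-ASPASyA.

```python
# Toy sketch of directed (heuristic) state-space search: states are explored in
# decreasing order of a heuristic weight instead of plain depth-first order.
# The "protocol", its transitions and the heuristic are invented for illustration.
import heapq

def successors(state):
    """Each state is a frozenset of facts known to the intruder."""
    step_facts = ["nonce_a", "nonce_b", "session_key", "secret"]
    for fact in step_facts:
        if fact not in state:
            yield frozenset(state | {fact})

def heuristic(state):
    """Higher weight = judged closer to an attack state (here: more intruder
    knowledge, with extra weight on facts that directly precede the secret)."""
    return len(state) + (3 if "session_key" in state else 0)

def directed_search(initial, is_attack):
    frontier = [(-heuristic(initial), 0, initial)]   # max-heap via negated weight
    seen, tie = {initial}, 1
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_attack(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-heuristic(nxt), tie, nxt))
                tie += 1
    return None

attack = directed_search(frozenset(), lambda s: "secret" in s)
print("attack state found:", sorted(attack))
```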
Style APA, Harvard, Vancouver, ISO itp.
30

Chan, Woon Chung. "Symbolic model checking for large software specifications /". Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/6869.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Olsson, Cecilia. "The kaleidoscope of communication : Different perspectives on communication involving children with severe multiple disabilities". Doctoral thesis, Stockholm : Stockholm Institute of Education Press (HLS förlag), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-1277.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Li, Bing. "Satisfiability-based abstraction refinement in symbolic model checking". Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3207696.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
33

Corchado, Rodríguez Juan Manuel. "Neuro-symbolic model for real-time forecasting problems". Thesis, University of the West of Scotland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323760.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
34

Parker, David Anthony. "Implementation of symbolic model checking for probabilistic systems". Thesis, University of Birmingham, 2003. http://etheses.bham.ac.uk//id/eprint/229/.

Pełny tekst źródła
Streszczenie:
In this thesis, we present efficient implementation techniques for probabilistic model checking, a method which can be used to analyse probabilistic systems such as randomised distributed algorithms, fault-tolerant processes and communication networks. A probabilistic model checker inputs a probabilistic model and a specification, such as "the message will be delivered with probability 1", "the probability of shutdown occurring is at most 0.02" or "the probability of a leader being elected within 5 rounds is at least 0.98", and can automatically verify if the specification is true in the model. Motivated by the success of symbolic approaches to non-probabilistic model checking, which are based on a data structure called binary decision diagrams (BDDs), we present an extension to the probabilistic case, using multi-terminal binary decision diagrams (MTBDDs). We demonstrate that MTBDDs can be used to perform probabilistic analysis of large, structured models with more than 7.5 billion states, way out of the reach of conventional, explicit techniques, based on sparse matrices. We also propose a novel, hybrid approach, combining features of both symbolic and explicit implementations and show, using results from a wide range of case studies, that this technique can almost match the speed of sparse matrix based implementations, but uses significantly less memory. This increases, by approximately an order of magnitude, the size of model which can be handled on a typical workstation.
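The following sketch only illustrates the general principle the abstract relies on, namely that exploiting structure avoids building a huge explicit matrix; it uses a Kronecker-factored transition matrix rather than PRISM's actual MTBDD or hybrid engines.

```python
# Sketch of why structured (symbolic-style) representations beat explicit ones:
# a transition matrix that factors as a Kronecker product of small per-module
# matrices is applied to a distribution vector without ever being built.
# This illustrates the general idea only; it is not PRISM's MTBDD/hybrid engine.
import numpy as np

def kron_mv(factors, v):
    """Compute (A_1 ⊗ ... ⊗ A_k) @ v using only the small factors."""
    dims = [a.shape[0] for a in factors]
    t = v.reshape(dims)
    for i, a in enumerate(factors):
        t = np.tensordot(a, t, axes=([1], [i]))   # apply factor i along axis i
        t = np.moveaxis(t, 0, i)                  # restore the axis order
    return t.reshape(-1)

# three independent 2-state modules, each with its own transition matrix
p = np.array([[0.9, 0.1], [0.2, 0.8]])
factors = [p, p, p]
dist = np.zeros(8); dist[0] = 1.0                 # start in the all-zero global state

explicit = np.kron(np.kron(p, p), p) @ dist       # explicit 8x8 matrix (for comparison)
implicit = kron_mv(factors, dist)                 # never materializes the 8x8 matrix
print(np.allclose(explicit, implicit))            # True
```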
Style APA, Harvard, Vancouver, ISO itp.
35

Klein, Joachim, Christel Baier, Philipp Chrszon, Marcus Daum, Clemens Dubslaff, Sascha Klüppelholz, Steffen Märcker i David Müller. "Advances in Symbolic Probabilistic Model Checking with PRISM". Springer, 2016. https://tud.qucosa.de/id/qucosa%3A74267.

Pełny tekst źródła
Streszczenie:
For modeling and reasoning about complex systems, symbolic methods provide a prominent way to tackle the state explosion problem. It is well known that for symbolic approaches based on binary decision diagrams (BDD), the ordering of BDD variables plays a crucial role for compact representations and efficient computations. We have extended the popular probabilistic model checker PRISM with support for automatic variable reordering in its multi-terminal-BDD-based engines and report on benchmark results. Our extensions additionally allow the user to manually control the variable ordering at a finer-grained level. Furthermore, we present our implementation of the symbolic computation of quantiles and support for multi-reward-bounded properties, automata specifications and accepting end component computations for Streett conditions.
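A toy way to see why variable ordering matters so much is to count the decision-diagram nodes of one function under two orderings; the brute-force counter below works on plain BDDs and is only meant to illustrate the effect the abstract refers to, not how PRISM reorders MTBDD variables.

```python
# Toy illustration of why BDD variable ordering matters (plain BDDs, brute force;
# PRISM's MTBDD engines are far more sophisticated than this sketch).
from itertools import product

def robdd_size(f, order):
    """Count ROBDD nodes of f under `order` by counting, per level, the distinct
    cofactors that actually depend on that level's variable."""
    n, size = len(order), 0
    for i in range(n):
        cofactors = set()
        for prefix in product((0, 1), repeat=i):
            tt = tuple(f(dict(zip(order, prefix + rest)))
                       for rest in product((0, 1), repeat=n - i))
            half = len(tt) // 2
            if tt[:half] != tt[half:]:            # subfunction depends on order[i]
                cofactors.add(tt)
        size += len(cofactors)
    return size + 2                               # plus the two terminal nodes

f = lambda v: (v['x1'] & v['y1']) | (v['x2'] & v['y2']) | (v['x3'] & v['y3'])
print(robdd_size(f, ['x1', 'y1', 'x2', 'y2', 'x3', 'y3']))  # interleaved order: small
print(robdd_size(f, ['x1', 'x2', 'x3', 'y1', 'y2', 'y3']))  # separated order: larger
```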
Style APA, Harvard, Vancouver, ISO itp.
36

Scott, Georgina. "Framing and symbolic modes in public service announcements". Diss., Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-10132005-102737.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Zhang, Ying. "Symbolic regression of thermophysical model using genetic programming". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000308.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Tino, Peter, i Georg Dorffner. "Constructing finite-context sources from fractal representations of symbolic sequences". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/1738/1/document.pdf.

Pełny tekst źródła
Streszczenie:
We propose a novel approach to constructing predictive models on long complex symbolic sequences. The models are constructed by first transforming the training sequence n-block structure into a spatial structure of points in a unit hypercube. The transformation between the symbolic and Euclidean spaces embodies a natural smoothness assumption (n-blocks with long common suffixes are likely to produce similar continuations) in that the longer the common suffix shared by any two n-blocks, the closer their point representations lie. Finding a set of prediction contexts is then formulated as a resource allocation problem solved by vector quantizing the spatial representation of the training sequence n-block structure. Our predictive models are similar in spirit to variable memory length Markov models (VLMMs). We compare the proposed models with both the classical and variable memory length Markov models on two chaotic symbolic sequences with different levels of subsequence distribution structure. Our models have equal or better modeling performance, yet their construction is more intuitive (unlike in VLMMs, we have a clear idea about the size of the model under construction) and easier to automate (construction of our models can be done in a completely self-organized manner, which is shown to be problematic in the case of VLMMs). (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
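A minimal sketch of the general scheme, under assumed choices of alphabet, contraction ratio and quantizer, might look as follows; the chaos-game-style map places n-blocks with long common suffixes close together, and k-means then plays the role of the vector quantizer that allocates prediction contexts.

```python
# Minimal sketch of the paper's general scheme: map the n-block structure of a
# symbolic sequence into the unit hypercube (chaos-game style, so blocks with long
# common suffixes lie close), then vector-quantize the points into prediction
# contexts.  Alphabet, contraction ratio and quantizer are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

corners = {'a': (0, 0), 'b': (0, 1), 'c': (1, 0), 'd': (1, 1)}

def cgr_points(sequence, n):
    """One point per n-block: iterated map x_t = x_{t-1}/2 + corner(s_t)/2."""
    pts = []
    for i in range(len(sequence) - n + 1):
        x = np.array([0.5, 0.5])
        for sym in sequence[i:i + n]:
            x = 0.5 * x + 0.5 * np.array(corners[sym])
        pts.append(x)
    return np.array(pts)

rng = np.random.default_rng(0)
seq = ''.join(rng.choice(list('abcd'), size=2000))
pts = cgr_points(seq, n=4)

# vector quantization of the point cloud = allocation of prediction contexts
contexts = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pts)
print(np.bincount(contexts.labels_))    # how many n-blocks fall into each context
```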
Style APA, Harvard, Vancouver, ISO itp.
39

Schlör, Rainer C. "Symbolic timing diagrams a visual formalism for model verification /". [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=963925326.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Siler, Todd Lael. "Architectonics of thought : a symbolic model of neuropsychological processes". Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/17200.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Architecture, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ROTCH
Bibliography: leaves 159-189.
by Todd Lael Siler.
Ph.D.
Style APA, Harvard, Vancouver, ISO itp.
41

Jesser, Alexander. "Mixed signal circuit verification using symbolic model checking techniques". München Verl. Dr. Hut, 2008. http://d-nb.info/992162858/04.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Zhang, Ying. "Symbolic Regression of Thermo-Physical Model Using Genetic Programming". Scholar Commons, 2004. https://scholarcommons.usf.edu/etd/1316.

Pełny tekst źródła
Streszczenie:
The symbolic regression problem is to find a function, in symbolic form, that fits a given data set. Symbolic regression provides a means for function identification. This research describes an adaptive hybrid system for symbolic function identification of a thermo-physical model that combines genetic programming and a modified Marquardt nonlinear regression algorithm. A Genetic Programming (GP) system can extract knowledge from the data in the form of symbolic expressions, i.e. tree structures, that are used to model and derive equations of state, mixing rules and phase behavior from the experimental data (property estimation). During the automatic evolution process of GP, the functional structure of a generated individual can become highly complicated. To ensure the convergence of the regression, a modified Marquardt regression algorithm is used: two stop criteria are attached to the traditional Marquardt algorithm to force it to repeat the regression process before it stops. Statistical analysis is applied to the fitted model: a residual plot is used to test the goodness of fit, and the χ2-test is used to test the model's adequacy. Ten experiments are run with different forms of input variables, numbers of data points, standard errors added to the data set, and fitness functions. The results show that the system is able to find models and optimize their parameters successfully.
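The hybrid inner step described above can be sketched as follows; the GP search over tree structures is omitted, and scipy's curve_fit (Levenberg-Marquardt when unconstrained) stands in for the modified Marquardt routine, so the candidate forms and data are purely illustrative.

```python
# Sketch of the hybrid step described above: for each candidate symbolic form
# (which a GP run would normally evolve), fit its constants with Levenberg-
# Marquardt nonlinear regression and keep the best-scoring form.  curve_fit is a
# stand-in for the thesis's modified Marquardt routine; the GP search is omitted.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
T = np.linspace(300, 600, 60)
p_obs = 2.5 * np.exp(-900.0 / T) + 0.01 + rng.normal(0, 1e-4, T.size)  # synthetic data

candidates = {                                    # symbolic forms a GP might propose
    "a*T + b":        lambda T, a, b:    a * T + b,
    "a*exp(b/T) + c": lambda T, a, b, c: a * np.exp(b / T) + c,
    "a/T + b*T":      lambda T, a, b:    a / T + b * T,
}

best = None
for name, form in candidates.items():
    try:
        params, _ = curve_fit(form, T, p_obs, maxfev=10000)  # LM when unconstrained
        sse = float(np.sum((form(T, *params) - p_obs) ** 2))
        if best is None or sse < best[1]:
            best = (name, sse, params)
    except RuntimeError:
        continue                                  # regression failed to converge

print("best form:", best[0], "SSE:", best[1], "params:", np.round(best[2], 4))
```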
Style APA, Harvard, Vancouver, ISO itp.
43

Rozas, Rony. "Intégration du retour d'expérience pour une stratégie de maintenance dynamique". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1112/document.

Pełny tekst źródła
Streszczenie:
The optimization of maintenance strategies is a key issue for many industrial operators: the goal is to establish a maintenance plan that guarantees high levels of safety, security and reliability at minimum cost while respecting any additional constraints. The growing number of works on the optimization of maintenance parameters, in particular on the scheduling of preventive maintenance actions, underlines the importance of this problem. A large number of maintenance studies rely on a model of the degradation process of the system under study. Probabilistic Graphical Models (PGMs), and in particular Markovian PGMs, provide a framework for modeling complex stochastic processes. The drawback of this kind of approach is that the quality of the results depends on the quality of the model. Moreover, the parameters of the system under consideration may change over time, typically as a consequence of a change of supplier for replacement parts or of a change in operating conditions. This thesis addresses the problem of dynamically adapting a maintenance strategy to a system whose parameters change. The proposed methodology relies on change-detection algorithms over a stream of sequential data and on a new probabilistic inference method specific to dynamic Bayesian networks. The algorithms proposed in this thesis are also applied within a research project with Bombardier Transport, which concerns the maintenance of the passenger access system of a new trainset intended for operation on the Ile-de-France rail network. The overall objective is to guarantee high levels of safety and reliability during train operation.
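The change-detection step can be illustrated with a simple two-sided CUSUM test over a stream of observations; the thresholds, the simulated parameter shift and the idea of triggering a model update are assumptions of this sketch, not the thesis's specific detection algorithm.

```python
# Minimal sketch of the detection step described above: a two-sided CUSUM test on
# a stream of observations (e.g. times-to-failure) flags the point where the
# system's parameters appear to have drifted, signalling that the maintenance
# model should be re-estimated.  Thresholds and the downstream update are placeholders.
import numpy as np

def cusum(stream, target_mean, slack=1.0, threshold=10.0):
    """Return the index at which a mean shift is detected, or None."""
    pos, neg = 0.0, 0.0
    for i, x in enumerate(stream):
        pos = max(0.0, pos + (x - target_mean - slack))
        neg = max(0.0, neg + (target_mean - x - slack))
        if pos > threshold or neg > threshold:
            return i
    return None

rng = np.random.default_rng(2)
before = rng.normal(100.0, 2.0, 200)   # behaviour under the original supplier
after = rng.normal(95.0, 2.0, 200)     # degraded behaviour after a parameter change
stream = np.concatenate([before, after])

change_at = cusum(stream, target_mean=100.0)
print("change detected at observation", change_at)   # expected shortly after index 200
```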
Style APA, Harvard, Vancouver, ISO itp.
44

Bouakaz, Adnan. "Real-time scheduling of dataflow graphs". Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00945453.

Pełny tekst źródła
Streszczenie:
The ever-increasing functional and nonfunctional requirements in real-time safety-critical embedded systems call for new design flows that solve the specification, validation, and synthesis problems. Ensuring key properties, such as functional determinism and temporal predictability, has been the main objective of many embedded system design models. Dataflow models of computation (such as KPN, SDF, CSDF, etc.) are widely used to model stream-based embedded systems due to their inherent functional determinism. Since the introduction of the (C)SDF model, a considerable effort has been made to solve the static-periodic scheduling problem. Ensuring boundedness and liveness is the essence of the proposed algorithms in addition to optimizing some nonfunctional performance metrics (e.g. buffer minimization, throughput maximization, etc.). However, nowadays real-time embedded systems are so complex that real-time operating systems are used to manage hardware resources and host real-time tasks. Most of real-time operating systems rely on priority-driven scheduling algorithms (e.g. RM, EDF, etc.) instead of static schedules which are inflexible and difficult to maintain. This thesis addresses the real-time scheduling problem of dataflow graph specifications; i.e. transformation of the dataflow specification to a set of independent real-time tasks w.r.t. a given priority-driven scheduling policy such that the following properties are satisfied: (1) channels are bounded and overflow/underflow-free; (2) the task set is schedulable on a given uniprocessor (or multiprocessor) architecture. This problem requires the synthesis of scheduling parameters (e.g. periods, priorities, processor allocation, etc.) and channel capacities. Furthermore, the thesis considers two performance optimization problems: buffer minimization and throughput maximization.
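Two of the ingredients mentioned above can be sketched briefly: solving the balance equations of a small (invented) SDF graph for its repetition vector, and a plain EDF utilization test for the periodic tasks derived from it; this is only an illustration, not the thesis's synthesis algorithm.

```python
# Sketch of two ingredients discussed above, on an invented SDF graph: solving the
# balance equations for the repetition vector, and a plain EDF utilization test
# for the periodic tasks derived from it.  Not the thesis's synthesis algorithm.
from fractions import Fraction
from math import lcm

# edges: (producer, consumer, production rate, consumption rate)
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
actors = ["A", "B", "C"]

# balance equations: r[p] * prod == r[c] * cons for every channel
r = {actors[0]: Fraction(1)}
for p, c, prod, cons in edges:                  # assumes edges are listed in a
    r[c] = r[p] * prod / cons                   # producer-to-consumer order
den = lcm(*(f.denominator for f in r.values()))
repetition = {a: int(f * den) for a, f in r.items()}
print("repetition vector:", repetition)         # {'A': 3, 'B': 2, 'C': 1}

# derive task loads from one graph iteration of length H and check EDF on one processor
H = 60.0                                        # assumed iteration period (time units)
wcet = {"A": 3.0, "B": 8.0, "C": 15.0}          # assumed worst-case execution times
utilization = sum(repetition[a] * wcet[a] / H for a in actors)
print("utilization:", utilization, "schedulable under EDF:", utilization <= 1.0)
```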
Style APA, Harvard, Vancouver, ISO itp.
45

Niu, Fei. "Learning-based Software Testing using Symbolic Constraint Solving Methods". Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-41932.

Pełny tekst źródła
Streszczenie:
Software testing remains one of the most important but expensive approaches to ensuring high-quality software today. In order to reduce the cost of testing, over the last several decades, various techniques such as formal verification and inductive learning have been used for test automation in previous research. In this thesis, we present a specification-based black-box testing approach, learning-based testing (LBT), which is suitable for a wide range of systems, e.g. procedural and reactive systems. In the LBT architecture, given the requirement specification of a system under test (SUT), a large number of high-quality test cases can be iteratively generated, executed and evaluated by means of combining inductive learning with constraint solving. We apply LBT to two types of systems, namely procedural and reactive systems. We specify a procedural system in Hoare logic and model it as a set of piecewise polynomials that can be locally and incrementally inferred. To automate test case generation (TCG), we use a quantifier elimination method, the Hoon-Collins cylindric algebraic decomposition (CAD), which is applied on only one local model (a bounded polynomial) at a time. On the other hand, a reactive system is specified in temporal logic formulas, and modeled as an extended Mealy automaton over abstract data types (EMA) that can be incrementally learned as a complete term rewriting system (TRS) using the congruence generator extension (CGE) algorithm. We consider TCG for a reactive system as a bounded model checking problem, which can be further reformulated into a disunification problem and solved by narrowing. The performance of the LBT frameworks is empirically evaluated against random testing for both procedural and reactive systems (executable models and programs). The results show that LBT is significantly more efficient than random testing in fault detection, i.e. fewer test cases and potentially less time are required than for random testing.
QC 20111012
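A minimal learning-based-testing loop for a procedural SUT might look as follows; the polynomial learner and the grid search stand in for the thesis's piecewise-polynomial models and CAD-based constraint solving, and the seeded fault is invented.

```python
# Minimal sketch of a learning-based testing loop for a procedural SUT: learn a
# model from executed test cases, let the model propose an input that looks like a
# counterexample to the postcondition, confirm it on the real SUT, and iterate.
# The polynomial learner and grid search stand in for the thesis's CAD-based solving.
import numpy as np

def sut(x):                            # system under test (with a seeded fault)
    return x * x if x < 3.0 else x * x - 5.0

def postcondition(x, y):               # requirement: the output equals the square of the input
    return abs(y - x * x) < 1e-6

rng = np.random.default_rng(3)
xs = list(rng.uniform(0, 10, 5))       # a few random initial test cases
ys = [sut(x) for x in xs]

for iteration in range(20):
    deg = min(3, len(xs) - 1)
    model = np.poly1d(np.polyfit(xs, ys, deg))          # learned model of the SUT
    grid = np.linspace(0, 10, 2001)
    # input where the learned model most strongly disagrees with the requirement
    candidate = float(grid[np.argmax(np.abs(model(grid) - grid ** 2))])
    y = sut(candidate)                                   # execute the real SUT on it
    if not postcondition(candidate, y):
        print(f"fault confirmed at x = {candidate:.3f} after {iteration + 1} test cases")
        break
    xs.append(candidate); ys.append(y)                   # otherwise refine the model
```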
Style APA, Harvard, Vancouver, ISO itp.
46

Leth-Steensen, Craig. "A connectionist, evidence accrual model of response times in symbolic comparison". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0020/NQ44494.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Leth-Steensen, Craig. "A connectionist, evidence accrual model of response times in symbolic comparison /". Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35000.

Pełny tekst źródła
Streszczenie:
A cognitive process model is developed that predicts the 3 major symbolic comparison response time effects (distance, end, and semantic congruity) found in the results of the linear syllogistic reasoning task. The model assumes that people generate an ordering of a finite set of symbolic stimuli on the basis of information contained in the pairwise relations between adjacent stimulus items. The learning of this ordering is simulated within a simple connectionist framework. The decision-making component of the model utilizes 2 separate evidence accrual processes operating in parallel. One process accumulates information about the positional difference between the stimulus items being compared, and the other accumulates information about the endpoint status of each of those items. A response occurs whenever enough evidence favouring it has been accumulated within either of these processes. The model also assumes that the congruencies between the positions of the stimulus items within the ordering and the form of the comparative instruction can lead to either interfering or facilitating effects on the rate of evidence accumulation within each of these accrual processes. To test the model, data are obtained from the single-session performances of a group of 16 subjects and the multiple-session performances of an additional 2 subjects. The task is a variant of the one used by Trabasso, Riley, and Wilson (1975) and involves paired comparisons of ordered symbolic stimuli (three-letter names). Simulations of the model provide an excellent account of the group mean correct response times, as well as a very good account of the full set of data obtained from the 2 additional subjects (including percentage correct and response time distributional data).
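A toy simulation of the two parallel accrual processes might look as follows; all rates, thresholds and the congruity scaling are invented values, not the fitted model parameters, and only the distance effect is exercised here.

```python
# Toy simulation of the two parallel evidence-accrual processes described above: a
# positional-difference accumulator and an endpoint accumulator race to a
# threshold, and the congruity between item positions and the comparative
# instruction scales the accrual rate.  All rates and thresholds are invented.
import numpy as np

def simulate_trial(distance, is_endpoint, congruent, rng,
                   threshold=30.0, noise=1.0, max_steps=2000):
    rate_distance = 0.15 * distance * (1.2 if congruent else 0.8)
    rate_endpoint = 0.8 if is_endpoint else 0.0
    acc_d = acc_e = 0.0
    for t in range(1, max_steps + 1):
        acc_d += rate_distance + rng.normal(0, noise)
        acc_e += rate_endpoint + rng.normal(0, noise)
        if acc_d >= threshold or acc_e >= threshold:
            return t                    # response time in accrual steps
    return max_steps

rng = np.random.default_rng(4)
for distance in (1, 3, 5):              # distance effect: larger distance -> faster response
    rts = [simulate_trial(distance, is_endpoint=False, congruent=True, rng=rng)
           for _ in range(500)]
    print(f"distance {distance}: mean RT = {np.mean(rts):.1f} steps")
```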
Style APA, Harvard, Vancouver, ISO itp.
48

Künnemann, Robert. "Foundations for analyzing security APIs in the symbolic and computational model". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2014. http://tel.archives-ouvertes.fr/tel-00942459.

Pełny tekst źródła
Streszczenie:
Security-critical applications often store keys on dedicated HSMs or key-management servers to separate highly sensitive cryptographic operations from more vulnerable parts of the network. Access to such devices is given to protocol parties by means of Security APIs, e.g., the RSA PKCS#11 standard, IBM's CCA and the TPM API, all of which protect keys by providing an API that allows keys to be addressed only indirectly. This thesis has two parts. The first part deals with formal methods that allow for the identification of secure configurations in which Security APIs improve the security of existing protocols, e.g., in scenarios where parties can be corrupted. A promising paradigm is to regard the Security API as a participant in a protocol and then use traditional protocol analysis techniques. But, in contrast to network protocols, Security APIs often rely on the state of an internal database; this is the reason why current tools for protocol analysis do not work well when it comes to an analysis of an unbounded number of keys. We make a case for the use of MSR as the back-end for verification and propose a new process calculus, which is a variant of the applied pi calculus with constructs for manipulation of a global state. We show that this language can be translated to MSR rules while preserving all security properties expressible in a dedicated first-order logic for security properties. The translation has been implemented in a prototype tool which uses the tamarin prover as a back-end. We apply the tool to several case studies, among which are a simplified fragment of PKCS#11, the Yubikey security token, and a contract signing protocol. The second part of this thesis aims at identifying security properties that a) can be established independently of the protocol, b) allow flaws on the cryptographic level to be caught, and c) facilitate the analysis of protocols using the Security API. We adapt the more general approach to API security of Kremer et al. to a framework that allows for composition in the form of a universally composable key-management functionality. The novelty, compared to other definitions, is that this functionality is parametric in the operations the Security API allows, which is only possible due to universal composability. A Security API is secure if it correctly implements both key management (according to our functionality) and all operations that depend on keys (with respect to the functionalities defining those operations). We present an implementation which is defined with respect to arbitrary functionalities (for the operations that are not concerned with key management), and hence represents a general design pattern for Security APIs.
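The multiset-rewriting back-end mentioned above can be caricatured in a few lines: the global state is a multiset of facts and a rule fires by consuming its premises and adding its conclusions; the facts and rules below are invented and the sketch is unrelated to the actual tamarin machinery.

```python
# Toy multiset-rewriting (MSR) step function: the global state is a multiset of
# facts, and a rule fires by consuming its premise facts and adding its
# conclusions.  This mirrors the MSR back-end idea at a very small scale only.
from collections import Counter

def applicable(state, rule):
    premises, _ = rule
    return all(state[f] >= n for f, n in Counter(premises).items())

def apply_rule(state, rule):
    premises, conclusions = rule
    new_state = state.copy()
    new_state.subtract(Counter(premises))
    new_state.update(Counter(conclusions))
    return +new_state                    # drop zero/negative counts

# facts model a tiny invented key-management API: create a key handle, then wrap a key
rules = {
    "create_key": ([("Fresh",)],          [("KeyHandle", "k1"), ("Attr", "k1", "sensitive")]),
    "wrap_key":   ([("KeyHandle", "k1")], [("KeyHandle", "k1"), ("Wrapped", "k1")]),
}

state = Counter([("Fresh",)])
for name in ("create_key", "wrap_key"):
    rule = rules[name]
    if applicable(state, rule):
        state = apply_rule(state, rule)
        print(f"after {name}: {sorted(state.elements())}")
```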
Style APA, Harvard, Vancouver, ISO itp.
49

Adams, Sara Elisabeth. "Abstraction discovery and refinement for model checking by symbolic trajectory evaluation". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:27276f9c-eba5-42a9-985d-1812097773f8.

Pełny tekst źródła
Streszczenie:
This dissertation documents two contributions to automating the formal verification of hardware, particularly memory-intensive circuits, by Symbolic Trajectory Evaluation (STE), a model checking technique based on symbolic simulation over abstract sets of states. The contributions focus on improvements to the use of BDD-based STE, which uses binary decision diagrams internally. We introduce a solution to one of the major hurdles in using STE: finding suitable abstractions. Our work has produced the first known algorithm that addresses this problem by automatically discovering good, non-trivial abstractions. These abstractions are computed from the specification, and essentially encode partial input combinations sufficient for determining the specification's output value. They can then be used to verify whether the hardware model meets its specification using a technique based on and significantly extending previous work by Melham and Jones [2]. Moreover, we prove that our algorithm delivers correct results by construction. We demonstrate that the abstractions received by our algorithm can greatly reduce verification costs with three example hardware designs, typical of the kind of problems faced by the semiconductor design industry. We further propose a refinement method for abstraction schemes when over-abstraction occurs, i.e., when the abstraction hides too much information of the original design to determine whether it meets its specification. The refinement algorithm we present is based on previous work by Chockler et al. [3], which selects refinement candidates by approximating which abstracted input is likely the biggest cause of the abstraction being unsuitable. We extend this work substantially, concentrating on three aspects. First, we suggest how the approach can also work for much more general abstraction schemes. This enables refining any abstraction allowed in STE, rather than just a subset. Second, Chockler et al. describe how to refine an abstraction once a refinement candidate has been identified. We present three additional variants of refining the abstraction. Third, the refinement at its core depends on evaluating circuit logic gates. The previous work offered solutions for NOT- and AND-gates. We propose a general approach to evaluating arbitrary logic gates, which improves the selection process of refinement candidates. We show the effectiveness of our work by automatically refining an abstraction for a content-addressable memory that exhibits over-abstraction, and by evaluating some common logic gates. These two contributions can be used independently to help automate hardware verification by STE, but they also complement each other. To show this, we combine both algorithms to create a fully automatic abstraction discovery and refinement loop. The only inputs required are the hardware design and the specification, which the design should meet. While only small circuits could be verified completely automatically, it clearly shows that our two contributions allow the construction of a verification framework that does not require any user interaction.
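The abstraction STE builds on can be illustrated with a three-valued (0, 1, X) gate simulation, where X stands for an abstracted, unknown value; the circuit below is invented and real STE operates on symbolic trajectories rather than single ternary vectors.

```python
# Minimal three-valued (0, 1, X) simulation in the spirit of STE's abstraction: an
# input left at X ("unknown/don't care") may still allow the output to be
# determined, which is why good abstractions can drastically shrink what must be
# simulated.  The example circuit is invented.
X = 'X'

def t_and(a, b):
    if a == 0 or b == 0: return 0
    if a == 1 and b == 1: return 1
    return X

def t_or(a, b):
    if a == 1 or b == 1: return 1
    if a == 0 and b == 0: return 0
    return X

def t_not(a):
    return X if a == X else 1 - a

def mux(sel, d0, d1):
    """2-to-1 multiplexer built from the ternary gates."""
    return t_or(t_and(t_not(sel), d0), t_and(sel, d1))

# with sel = 1 the output is d1 regardless of d0, so d0 can stay abstracted as X
print(mux(1, X, 0))   # 0  -- determined although d0 is unknown
print(mux(X, 1, 0))   # X  -- genuinely unknown: the abstraction is too coarse here
```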
Style APA, Harvard, Vancouver, ISO itp.
50

Goel, Anuj. "Symbolic model checking techniques for BDD-based planning in distributed environments". Full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077646.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.