Dissertations / Theses on the topic 'Rewriting techniques'

Consult the top 20 dissertations / theses for your research on the topic 'Rewriting techniques.'


1

Sapiña Sanchis, Julia. "Rewriting Logic Techniques for Program Analysis and Optimization." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/94044.

Full text
Abstract:
This thesis proposes a dynamic analysis methodology for improving the diagnosis of erroneous Maude programs. The key idea is to combine runtime assertion checking and dynamic trace slicing to automatically catch errors at runtime while reducing the size and complexity of the erroneous traces to be analyzed (i.e., those leading to states that fail to satisfy the assertions). In the event of an assertion violation, the slicing criterion is automatically inferred, which helps the user rapidly pinpoint the source of the error. First, a technique is formalized that aims at automatically detecting anomalous deviations from the intended program behavior (error symptoms) by using assertions that are checked at runtime. This technique supports two types of user-defined assertions: functional assertions (which constrain deterministic function calls) and system assertions (which specify system state invariants). The proposed dynamic checking is provably sound in the sense that all flagged errors definitely signal a violation of the specifications. Then, upon an assertion violation, accurate trace slices (i.e., simplified yet precise execution traces) are generated automatically, which help identify the cause of the error. Moreover, the technique also suggests a possible repair for the rules involved in the generation of the erroneous states. The proposed methodology is based on (i) a logical notation for specifying assertions that are imposed on execution runs; (ii) a runtime checking technique that dynamically tests the assertions; and (iii) a mechanism based on (equational) least general generalization that automatically derives accurate slicing criteria from falsified assertions.
Finally, an implementation of the proposed technique is presented in the assertion-based dynamic analyzer ABETS, which shows how forward and backward tracking of asserted program properties leads to a thorough trace analysis algorithm that can be used for program diagnosis and debugging.
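The assertion-checking idea in this abstract can be illustrated outside Maude. Below is a minimal, hypothetical Python sketch (none of these names come from ABETS or Maude) of functional assertions that constrain deterministic function calls, flagging the violating call as a crude slicing criterion:

```python
# Minimal sketch of runtime assertion checking in the spirit of the
# functional assertions described above: each assertion constrains the
# input/output of a deterministic function call, and a violation is
# reported together with the offending call (a crude slicing criterion).
# All names here are illustrative, not part of the ABETS tool.

def check_call(assertions, fname, args, result):
    """Return the list of violated assertions for one function call."""
    violations = []
    for pre, post in assertions.get(fname, []):
        if pre(args) and not post(args, result):
            violations.append((fname, args, result))
    return violations

# Functional assertion for a hypothetical 'sort' function:
# for any input list, the output must equal the sorted input.
assertions = {
    "sort": [(
        lambda args: True,                         # precondition
        lambda args, res: res == sorted(args[0]),  # postcondition
    )]
}

def buggy_sort(xs):
    return xs  # bug: returns the input unsorted

trace = []
xs = [3, 1, 2]
res = buggy_sort(xs)
trace += check_call(assertions, "sort", (xs,), res)
print(trace)  # the violated call is where diagnosis should start
```

The violated call recorded in `trace` plays the role of the inferred criterion from which a slice of the execution would be computed.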
Sapiña Sanchis, J. (2017). Rewriting Logic Techniques for Program Analysis and Optimization [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/94044
APA, Harvard, Vancouver, ISO, and other styles
2

Papadopoulos, George Angelos. "Parallel implementation of concurrent logic languages using graph rewriting techniques." Thesis, University of East Anglia, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329340.

3

Buth, Karl-Heinz. "Techniques for Modelling Structured Operational and Denotational Semantics Definitions with Term Rewriting Systems / Karl Heinz Buth." Kiel : Universitätsbibliothek Kiel, 1994. http://d-nb.info/1080332669/34.

4

Feliú Gabaldón, Marco Antonio. "Logic-based techniques for program analysis and specification synthesis." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/33747.

Abstract:
This thesis investigates agile techniques within the declarative paradigm to address two problems: program analysis, and the inference of specifications from programs written in multi-paradigm languages and in imperative languages with types, objects, structures, and pointers. Regarding the current state of the thesis, the program-analysis part is already consolidated, while the specification-inference part remains under active development. The first part provides solutions for executing pointer analyses specified in Datalog. Two techniques for executing Datalog specifications were developed: one uses Boolean equation system solvers, while the other uses rewriting logic as efficiently implemented in the Maude language. The second part develops techniques for inferring specifications from programs. Two inference methods were developed. The first was designed for the functional-logic language Curry and infers equational specifications by abstract interpretation of programs. The second is being developed for realistic imperative languages and has been applied to a subset of the C programming language. It infers specifications as rules that capture the relations between the properties a program state satisfies before and after execution. Moreover, these properties are expressible in terms of the program's own functional abstractions, yielding a very high-level and therefore more easily understood specification.
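The kind of Datalog pointer analysis this abstract mentions can be sketched as a naive bottom-up fixpoint computation (this is only an illustration in Python, not the BES- or Maude-based engines of the thesis; the relations and program are invented):

```python
# Illustrative naive bottom-up (least fixpoint) evaluation of a tiny
# Datalog points-to analysis. Rules:
#   pts(V, H) :- new(V, H).                 (V = new H)
#   pts(V, H) :- assign(V, W), pts(W, H).   (V = W)
# Facts are invented for the example.

new_facts = {("a", "h1"), ("b", "h2")}   # allocation sites
assign = {("c", "a"), ("d", "c")}        # copy statements

def points_to(new_facts, assign):
    pts = set(new_facts)
    changed = True
    while changed:            # iterate until nothing new is derived
        changed = False
        for (v, w) in assign:
            for (w2, h) in list(pts):
                if w2 == w and (v, h) not in pts:
                    pts.add((v, h))
                    changed = True
    return pts

print(sorted(points_to(new_facts, assign)))
```

Here `d` transitively points to `h1` through `c` and `a`, which is exactly the closure a Datalog engine computes.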
Feliú Gabaldón, MA. (2013). Logic-based techniques for program analysis and specification synthesis [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33747
5

Rusinowitch, Michaël. "Démonstration automatique par des techniques de réécritures." Nancy 1, 1987. http://www.theses.fr/1987NAN10358.

Abstract:
An introduction to first-order logic and rewriting systems. A study of some simplification orderings. Transfinite semantic trees. Paramodulation strategies. Completeness in the presence of reduction rules. Superposition strategies. Complete sets of inference rules for the regularity axioms.
6

Kamat, Niranjan Ganesh. "Sampling-based Techniques for Interactive Exploration of Large Datasets." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1523552932728325.

7

Karanasos, Konstantinos. "View-Based techniques for the efficient management of web data." PhD thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00755328.

Abstract:
Data is being published in digital formats at very high rates nowadays. A large share of this data has complex structure, typically organized as trees (Web documents such as HTML and XML being the most representative) or graphs (in particular, graph-structured Semantic Web databases, expressed in RDF). There is great interest in exploiting such complex data, whether in an Open Data access model or within companies owning it, and efficiently doing so for large data volumes remains challenging. Materialized views have long been used to obtain significant performance improvements when processing queries. The principle is that a view stores pre-computed results that can be used to evaluate (possibly part of) a query. Adapting materialized view techniques to the Web data setting we consider is particularly challenging due to the structural and semantic complexity of the data. This thesis tackles two problems in the broad context of materialized view-based management of Web data. First, we focus on the problem of view selection for RDF query workloads. We present a novel algorithm, which, based on a query workload, proposes the most appropriate views to be materialized in the database, in order to minimize the combined cost of query evaluation, view maintenance and view storage. Although RDF query workloads typically feature many joins, hampering the view selection process, our algorithm scales to hundreds of queries, a number unattained by existing approaches. Furthermore, we propose new techniques to account for the implicit data that can be derived by the RDF Schemas and which further complicate the view selection process. The second contribution of our work concerns query rewriting based on materialized XML views. We start by identifying an expressive dialect of XQuery, corresponding to tree patterns with value joins, and study some important properties for these queries, such as containment and minimization. 
Based on these notions, we consider the problem of finding minimal equivalent rewritings of a query expressed in this dialect, using materialized views expressed in the same dialect, and provide a sound and complete algorithm for that purpose. Our work extends the state of the art by allowing each pattern node to return a set of attributes, supporting value joins in the patterns, and considering rewritings which combine many views. Finally, we show how our view-based query rewriting algorithm can be applied in a distributed setting, in order to efficiently disseminate corpora of XML documents carrying RDF annotations.
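The principle stated above, that a view stores pre-computed results usable to evaluate part of a query, can be sketched with a toy relational example (the thesis actually targets tree patterns and XQuery dialects; the schemas and data below are invented):

```python
# Toy illustration of answering a query from a materialized view: the
# view pre-computes a join, and a query needing that join is rewritten
# to scan the view instead of recomputing it. Relations are invented.

author = [("p1", "Ana"), ("p2", "Bo")]           # (paper id, name)
paper  = [("p1", "XML views"), ("p2", "RDF views")]  # (paper id, title)

# Materialized view V(name, title) = author JOIN paper on paper id
view = [(name, title)
        for (pid1, name) in author
        for (pid2, title) in paper
        if pid1 == pid2]

# Query: names of authors of papers whose title mentions 'RDF'.
# View-based rewriting: filter and project the view, no join needed.
def query_via_view(view):
    return sorted(name for (name, title) in view if "RDF" in title)

print(query_via_view(view))  # -> ['Bo']
```

The rewriting is equivalent to the join-based plan but avoids re-evaluating the join, which is the performance benefit materialized views provide.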
8

Beveraggi, Marc. "Problemes combinatoires en codage algebrique." Paris 6, 1987. http://www.theses.fr/1987PA066265.

Abstract:
The thesis comprises four parts. The first deals with variable-length codes: lower and upper bounds are established for the cardinality of codes of this type that correct or detect one error, and that are in some cases prefix codes. The second studies the maximum cardinality of a set of permutations such that any two permutations in it are k-compatible. The third concerns the maximum number of rewritings of n numbers on a write-once memory of size n. The fourth treats a problem similar to the previous one, with the additional condition that the numbers written in memory are in increasing order.
9

Ferey, Gaspard. "Higher-Order Confluence and Universe Embedding in the Logical Framework." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG032.

Abstract:
In the context of the multiplicity of formal systems, there is a growing need to express formal proofs in a common logical framework. This thesis focuses on the use of higher-order term rewriting to embed complex formal systems in the simple and well-studied lambda-Pi calculus modulo. This system, commonly used as a logical framework, features dependent types and is extended here with higher-order term rewriting. We study, in a first part, criteria for the confluence of higher-order rewrite systems considered together with the usual beta reduction. In the case of left-linear systems, confluence can be reduced to the study of critical pairs, which must be provided with a decreasing diagram with respect to some rule labeling. We show that in the presence of non-linear rules, it is still possible to achieve confluence if the set of considered terms is layered. We then focus, in a second part, on the encoding of higher-order logics based on complex universe structures. The embedding of cumulativity, a limited form of subtyping, is handled with new rewriting techniques relying on private symbols and allowing some form of proof irrelevance. We then describe how algebraic universe expressions containing level variables can be represented, even in the presence of universe constraints. Finally, we introduce an embedding of universe polymorphism as defined in the core logic of the Coq system and prove the correctness of the defined translation mechanism. These results, along with other more practical techniques, allowed the implementation of a translator to Dedukti, which was used to translate several sizeable Coq developments.
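The critical-pair reasoning mentioned in this abstract can be illustrated in a far simpler first-order setting than the thesis's higher-order one: for a terminating rewrite system, local confluence amounts to every one-step divergence rejoining at a common result. The string rewrite system and all names below are illustrative only:

```python
# Toy joinability check for one-step divergences of a terminating
# string rewrite system -- a first-order stand-in for the critical-pair
# analysis described above (the thesis's actual criteria concern
# higher-order systems and decreasing diagrams).

def one_steps(s, rules):
    """All results of applying one rule at one position of s."""
    out = set()
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normal_form(s, rules):
    """Rewrite to a normal form (assumes the system terminates)."""
    while True:
        nxt = one_steps(s, rules)
        if not nxt:
            return s
        s = min(nxt)  # any deterministic choice works under termination

def locally_confluent_on(s, rules):
    """Do all one-step divergences from s rejoin at one normal form?"""
    forms = {normal_form(t, rules) for t in one_steps(s, rules)}
    return len(forms) <= 1

# 'ab' -> 'ba' moves every 'b' left; terminating and confluent.
rules = [("ab", "ba")]
print(all(locally_confluent_on(s, rules)
          for s in ["ab", "aab", "abab", "abb"]))  # prints True
```

By Newman's lemma, termination plus such local joinability gives confluence; the thesis's contribution is establishing analogous criteria when beta reduction and higher-order, possibly non-linear rules are involved.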
10

Zighem, Ismail. "Etude d'invariants de graphes planaires." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10211.

Abstract:
In the first part, we construct, from linear recurrence relations, invariants of 4-regular planar graphs taking their values in a commutative ring. These relations represent well-defined recursive rules on this class of graphs, reducing the computation of the invariant's value on a graph to a linear combination over smaller graphs. After identifying some necessary conditions for these rules to be mutually compatible, we use a result from the theory of rewriting systems to show that these conditions are also sufficient. We close this part by discussing the relation with the Kauffman polynomial and by showing that, for a particular evaluation of its variables, this polynomial can be defined from our invariant, which constitutes a new proof of the polynomial's existence. The second part addresses the problem of determining the domination number of grid-like graphs. We first determine this number for small crossed grids (cross products of two paths of lengths k and n, with k ≤ 33 and n ≤ 40). Using dynamic programming, we present, for fixed k, an algorithm linear in n for computing this number, and deduce that it satisfies simple formulas in k and n. We then show by induction, taking these values as the base case, that the formulas hold for all k = 12 or k ≥ 14 and n ≥ k. Finally, we give some bounds on the domination number of the square grid (the square product of two paths) that improve the previously known results.
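The domination number studied in the second part (the smallest set of vertices whose closed neighborhoods cover the whole graph) can be computed for tiny grids by exhaustive search. This sketch is only for illustration; the thesis's dynamic-programming algorithm over columns is what makes the computation linear in n for fixed k:

```python
# Brute-force domination number of a small k x n grid graph (vertices
# are cells, edges join horizontally/vertically adjacent cells).
# Exhaustive search: only feasible for tiny grids.
from itertools import combinations

def domination_number(k, n):
    cells = [(i, j) for i in range(k) for j in range(n)]

    def closed_nbhd(c):
        i, j = c  # off-grid members are harmless: we only test coverage
        return {(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)}

    for size in range(1, len(cells) + 1):
        for dom in combinations(cells, size):
            covered = set().union(*(closed_nbhd(c) for c in dom))
            if all(c in covered for c in cells):
                return size  # smallest dominating set found

print(domination_number(3, 3))  # known value for the 3x3 grid: 3
```

A column-by-column dynamic program replaces this exponential search by tracking, for each column, which cells are already dominated, yielding the linear-in-n algorithm the abstract mentions.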
11

Sakai, Masahiko, and Keiichirou Kusakari. "Static Dependency Pair Method for Simply-Typed Term Rewriting and Related Technique." Institute of Electronics, Information and Communication Engineers, 2009. http://hdl.handle.net/2237/14975.

12

Dunn, Jennifer Erin. "Ambiguous and ambivalent signatures : rewriting, revision, and resistance in Emma Tennant's fiction." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:6a4e8319-422a-48b9-8e43-cd05d742450f.

Abstract:
While existing criticism of Emma Tennant's work emphasizes its feminist agenda, less attention has been paid to her rewriting of different narratives and discourses. Tennant's career has centered on challenging literary values as well as generic categories, realist conventions, and gender stereotypes. Contrary to implications that rewriting is "re-vision," an "act of survival" that corrects or subverts earlier texts, this thesis argues that Tennant's characteristic resistance to categories also extends to the work of rewriting and revision. Her texts suggest that the act of "writing back" is not as straightforward as it may seem, but deeply ambiguous and ambivalent. Developing theories of the "signature" that return the writer-as-agent to the otherwise anonymous field of intertextuality, this thesis traces Tennant's figurations of writing, metafictional devices, and intertextual allusions to show how these relate to themes in the fiction. Examining groupings of the texts from different critical perspectives, each chapter shows how Tennant's rewritings destabilize notions of originality, identity, and agency, and represent political discourses and social progress in an ambivalent way. While this thesis offers very specific insights into Tennant's work, the close readings also encompass broader themes, such as feminism and postmodernism, the gothic, myths of home and exile, and the ventriloquistic techniques of pastiche and biofiction. The arguments centered on her work contribute to the larger discourse on rewriting in two ways. First, in problematizing assumptions that rewriting inherently strives toward progress or correction, this thesis argues that rewriting can dramatize the ambiguity and ambivalence that haunt acts of resistance. Second, in advancing challenges to the idea that intertextuality functions anonymously, it argues that rewriting can return agency to the text by offering representations of authorship that engage with literary and cultural history.
13

Pérution-Kihli, Guillaume. "Data Management in the Existential Rule Framework : Translation of Queries and Constraints." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS030.

Abstract:
The general context of this work is the issue of designing high-quality systems that integrate multiple data sources via a semantic layer encoded in a knowledge representation and reasoning language. We consider knowledge-based data management (KBDM) systems, which are structured in three layers: the data layer, which comprises the data sources, the knowledge (or ontological) layer, and the mappings between the two. Mappings and knowledge are expressed within the existential rule framework. One of the intrinsic difficulties in designing a KBDM system is the need to understand the content of the data sources. Data sources are often provided with typical queries and constraints, from which valuable information about their semantics can be drawn, as long as this information is made intelligible to KBDM designers. This motivates our core question: is it possible to translate data queries and constraints to the knowledge level while preserving their semantics? The main contributions of this thesis are the following. We extend previous work on data-to-ontology query translation with new techniques for the computation of perfect, minimally complete, or maximally sound query translations. Concerning data-to-ontology constraint translation, we define a general framework and apply it to several classes of constraints. Finally, we provide a sound and complete query rewriting operator for disjunctive existential rules and disjunctive mappings, as well as undecidability results, which are of independent interest.
14

Yang, Bin. "Contribution to a kernel of symbolic asymptotic modeling software." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2055/document.

Abstract:
This thesis is dedicated to developing a kernel of the symbolic asymptotic modeling software package MEMSALab, which will be used for automatic generation of asymptotic models for arrays of micro- and nanosystems. Unlike traditional software packages aimed at numerical simulation using pre-built models, the purpose of MEMSALab is to derive asymptotic models for input equations by taking into account their own features. An approach called "by extension-combination" for asymptotic modeling, which allows incremental model construction, is first proposed for homogenization model derivation. It relies on a combination of the asymptotic methods used in the field of partial differential equations with term rewriting techniques coming from computer science. This approach focuses on model derivation for families of PDEs instead of each of them individually. A homogenization model of the electrothermoelastic equation defined in a multi-layered thin domain has been derived by applying the mathematical method used in this approach. Finally, an optimization tool has been developed by combining SIMBAD, an in-house optimization software package, with COMSOL-MATLAB simulation; it has been applied to study the optimal design of a class of scanning thermal microscopy (SThM) probes and allowed general design rules to be established for them.
15

Fournial, Céline. "Imitation et création dans le" théâtre moderne" (1550-1650) : la question des cycles d’inspiration." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUL012.

Abstract:
In the second half of the sixteenth century, modern French drama developed from humanist reflection upon both past literature and the theory of imitation, considered a universal writing method by period scholars. In the first half of the seventeenth century, especially from the 1620s on, the evolving terms and stakes of such reflection produced numerous debates about drama, a genre then full of dramatic experimentation and renewal. Under these circumstances, the choice of inspiration is not insignificant. The dramatists looked not only for subjects, but also for novel literary forms from foreign writers that could feed practice as much as theory. Studying one century of dramatic creation enables us to outline several cycles of inspiration within the history of tragicomedy, tragedy, and comedy, and to observe how these cycles match the main stages of evolution of those three genres. Throughout, inventio and dramaturgy maintain close relations with each other. At a time when the central and most debated question is one of models, analyzing these cycles highlights the meaning and consequences of the practice of adaptation and rewriting, and of the choice of ancient, Italian, Spanish, or French sources. The concept of cycles makes it possible to view sources of inspiration as periodic phenomena and to show how drama evolves and finds its singularity through imitation. In conclusion, studying the cyclic relation between the French playwrights and their ancient and modern sources leads to an examination of the circulation of subjects and literary forms across Europe and of the question of European cycles.
APA, Harvard, Vancouver, ISO, and other styles
16

Santiago, Pinazo Sonia. "Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48527.

Full text
Abstract:
The area of formal analysis of cryptographic protocols has been active since the mid-1980s. The idea is to verify communication protocols that use encryption to guarantee secrecy and authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol's publication, as in the case of the well-known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency. The thesis is devoted to investigating advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool. This tool is a model checker for cryptographic protocol analysis that allows for the incorporation of different equational theories and operates in the unbounded session model without data or control abstraction. An important contribution of this thesis concerns theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support. This is the first time that a forwards rewriting-based semantics has been given for Maude-NPA. Second, we study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol's equational theory. We also study techniques to extend Maude-NPA's capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol.
Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA. Properties of this kind aim to verify that an attacker cannot distinguish between two versions of a protocol: for example, one using one secret and one using another, as happens in electronic voting protocols. Finally, this thesis contributes to improving the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space and can often yield a finite state space, so that whether the desired security property holds can in fact be decided automatically, in spite of the general undecidability of such problems.
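The forward, rewriting-based exploration mentioned in the abstract can be pictured with a toy example. The sketch below is not Maude-NPA and all rule and fact names are made up; it only illustrates breadth-first exploration of the successors of an initial state under a set of rewrite rules:

```python
from collections import deque

# States are frozensets of facts (strings); each rule maps a state
# to zero or more successor states (a toy forward rewrite relation).

def rule_send(state):
    # If A holds a key, A may put an encrypted message on the network.
    if "key(A)" in state and "sent(msg)" not in state:
        yield state | {"sent(msg)"}

def rule_intercept(state):
    # The intruder learns any message sent on the network.
    if "sent(msg)" in state and "intruder(msg)" not in state:
        yield state | {"intruder(msg)"}

RULES = [rule_send, rule_intercept]

def reachable(initial):
    """Breadth-first forward exploration of the rewrite relation."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        for rule in RULES:
            for t in rule(s):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return seen

states = reachable(frozenset({"key(A)"}))
attack = any("intruder(msg)" in s for s in states)  # attack state reached?
```

A real tool additionally works modulo an equational theory and without bounding sessions, which is exactly what makes keeping the explored state space small (and terms in normal form) the hard part.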
Santiago Pinazo, S. (2015). Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48527
TESIS
APA, Harvard, Vancouver, ISO, and other styles
17

Mohammed, Shoeb Ahmed. "Coding Techniques for Error Correction and Rewriting in Flash Memories." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-08-8476.

Full text
Abstract:
Flash memories have become the main type of non-volatile memory. They are widely used in mobile, embedded and mass-storage devices. Flash memories store data in floating-gate cells, where the amount of charge stored in a cell, called the cell level, is used to represent data. To reduce the level of any cell, a whole cell block (about 10^6 cells) must be erased together and then reprogrammed. This operation, called block erasure, is very costly and brings significant challenges to cell programming and to the rewriting of data. To address these challenges, rank modulation and rewriting codes have been proposed for reliably storing and modifying data. However, for these new schemes, many problems remain open. In this work, we study error-correcting rank-modulation codes and rewriting codes for flash memories. For the rank modulation scheme, we study a family of one-error-correcting codes, and present efficient encoding and decoding algorithms. For rewriting, we study a family of linear write-once memory (WOM) codes, and present an effective algorithm for rewriting using the codes. We analyze the performance of our solutions for both schemes.
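To make the rewriting idea concrete, the classic Rivest-Shamir <2,3> WOM code (a standard textbook example, not necessarily the linear WOM family studied in this thesis) stores 2 data bits in 3 write-once cells and supports one rewrite without an erase; cell levels only ever increase:

```python
# First-write and second-write codebooks of the Rivest-Shamir <2,3>
# WOM code: 2 data bits, stored twice, in 3 write-once cells.
GEN1 = {0b00: (0, 0, 0), 0b01: (1, 0, 0), 0b10: (0, 1, 0), 0b11: (0, 0, 1)}
GEN2 = {d: tuple(1 - b for b in c) for d, c in GEN1.items()}  # complements

def decode(cells):
    # Weight <= 1 means a first-generation codeword, else second.
    gen = GEN1 if sum(cells) <= 1 else GEN2
    return next(d for d, c in gen.items() if c == cells)

def write(cells, data):
    """Program `data` into `cells`, only ever raising cell levels."""
    if decode(cells) == data:
        return cells                       # already stores this value
    target = GEN1[data] if cells == (0, 0, 0) else GEN2[data]
    assert all(t >= c for t, c in zip(target, cells)), "cells can only go up"
    return target

cells = (0, 0, 0)
cells = write(cells, 0b01)    # first write
first = decode(cells)
cells = write(cells, 0b10)    # rewrite without a block erasure
second = decode(cells)
```

The point of the construction is that two 2-bit writes fit into 3 cells instead of the naive 4, deferring the costly block erasure.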
APA, Harvard, Vancouver, ISO, and other styles
18

Shoaran, Maryam. "Automata methods and techniques for graph-structured data." Thesis, 2011. http://hdl.handle.net/1828/3249.

Full text
Abstract:
Graph-structured data (GSD) is a popular model to represent complex information in a wide variety of applications such as social networks, biological data management, digital libraries, and traffic networks. The flexibility of this model allows the information to evolve and easily integrate with heterogeneous data from many sources. In this dissertation we study three important problems on GSD. A consistent theme of our work is the use of automata methods and techniques to process and reason about GSD. First, we address the problem of answering queries on GSD in a distributed environment. We focus on regular path queries (RPQs), given by regular expressions matching paths in graph data. RPQs are the building blocks of almost any mechanism for querying GSD. We present a fault-tolerant, message-efficient, and truly distributed algorithm for answering RPQs. Our algorithm works for the larger class of weighted RPQs on weighted GSDs. Second, we consider the problem of answering RPQs on incomplete GSD, where different data sources are represented by materialized database views. We explore the connection between "certain answers" (CAs) and answers obtained from "view-based rewritings" (VBRs) for RPQs. CAs are answers that can be obtained on each database consistent with the views. Computing all of the CAs for RPQs is NP-hard, and one has to resort to an algorithm exponential in the size of the data and view materializations. On the other hand, VBRs are query reformulations in terms of the view definitions. They can be used to obtain query answers in polynomial time in the size of the data. These answers are CAs, but unfortunately for RPQs, not all of the CAs can be obtained in this way. In this work, we show the surprising result that for RPQs under local semantics, using VBRs to answer RPQs gives all the CAs. The importance of this result is that under such semantics, the CAs can be obtained in polynomial time in the size of the data.
Third, we focus on XML, an important special case of GSD. The scenario we consider is streaming XML between exchanging parties. The problem we study is flexible validation of streaming XML under the realistic assumption that the schemas of the exchanging parties evolve, and thus diverge from one another. We represent schemas by using Visibly Pushdown Automata (VPAs), which recognize Visibly Pushdown Languages (VPLs). We model evolution for XML by defining formal language operators on VPLs. We show that VPLs are closed under the defined language operators, which enables us to expand the schemas (for XML) in order to account for flexible or constrained evolution.
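The product-automaton idea behind RPQ evaluation can be sketched in a few lines. The example below is a simplified, centralized illustration (not the distributed algorithm of the thesis, and all graph and label names are invented): it answers the RPQ `knows+` on a small labelled graph by breadth-first search over pairs of graph node and automaton state:

```python
from collections import deque

# Edge-labelled graph: node -> [(label, successor)]
GRAPH = {
    "a": [("knows", "b")],
    "b": [("knows", "c"), ("likes", "a")],
    "c": [],
}

# NFA for the RPQ `knows+`: state -> {label: {successor states}}
NFA = {0: {"knows": {1}}, 1: {"knows": {1}}}
START, FINAL = 0, {1}

def rpq(graph, nfa, start, final):
    """All pairs (x, y) joined by a path whose label sequence matches
    the RPQ, found by BFS over the graph-automaton product."""
    answers = set()
    for src in graph:
        seen = {(src, start)}
        queue = deque(seen)
        while queue:
            node, state = queue.popleft()
            if state in final:
                answers.add((src, node))
            for label, nxt in graph.get(node, []):
                for q in nfa.get(state, {}).get(label, ()):
                    if (nxt, q) not in seen:
                        seen.add((nxt, q))
                        queue.append((nxt, q))
    return answers
```

Because the product has at most |nodes| x |states| pairs per source, evaluation is polynomial in the size of the data, which is what makes the VBR result above practically useful.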
Graduate
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Li-Wei, and 王立為. "Application of DAG-Aware MIG Rewriting Technique in Logic Synthesis and Verification." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/13593673123619241948.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
104
A Majority-Inverter Graph (MIG) is a recently introduced logic representation that manipulates logic using only the 3-input majority function (MAJ) and the inversion function (INV). Its algebraic and Boolean properties enable efficient logic optimization. In particular, MIG algorithms have obtained significantly better synthesis results than state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this thesis, we integrate the DAG-aware rewriting technique, a fast greedy algorithm for circuit compression, into MIGs and apply it not only to logic synthesis but also to verification. Experimental results on logic optimization show that heavily optimized MIGs can be further reduced in network size by 20.4% while preserving depth. Experimental results on datapath verification also show the effectiveness of our algorithm. With our MIG rewriting applied, datapath analysis quality improves by a ratio of 3.16, and runtime for equivalence checking is also effectively reduced.
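The rewriting at the heart of such MIG optimization rests on the majority axioms, for example M(x, x, y) = x and M(x, !x, y) = y. The sketch below is a hypothetical, tree-level illustration of axiom-based simplification, not the DAG-aware algorithm of the thesis (which also exploits structural sharing):

```python
# Terms: a variable (str), ("not", t), or ("maj", a, b, c).

def neg(t):
    # Inversion with double-negation cancellation: !!t = t.
    return t[1] if isinstance(t, tuple) and t[0] == "not" else ("not", t)

def rewrite(t):
    """Bottom-up application of M(x, x, y) = x and M(x, !x, y) = y."""
    if isinstance(t, tuple) and t[0] == "not":
        return neg(rewrite(t[1]))
    if not (isinstance(t, tuple) and t[0] == "maj"):
        return t
    a, b, c = (rewrite(s) for s in t[1:])
    for x, y, z in ((a, b, c), (a, c, b), (b, c, a)):
        if x == y:            # M(x, x, z) = x
            return x
        if x == neg(y):       # M(x, !x, z) = z
            return z
    return ("maj", a, b, c)

simplified = rewrite(("maj", "p", ("not", "p"), "q"))
nested = rewrite(("maj", ("maj", "x", "x", "y"), "x", "z"))
```

A DAG-aware variant applies such cuts node by node while accounting for nodes shared among multiple fanouts, accepting a local rewrite only if the whole graph shrinks.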
APA, Harvard, Vancouver, ISO, and other styles
20

CHEN, HONG-CHIH, and 陳宏志. "Iterative Learning Control Technique Using G-code Rewriting Algorithm for Contour Control." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/k2ud6n.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
106
Traditional iterative learning control (ILC) provides improved position commands for machining. However, most commercial controllers cannot accept position commands to control the machining path directly, so it is hard to deploy self-developed ILC on these controllers. In this thesis, we develop a G-code rewriting algorithm for the XY plane that controls the machining path and solves this issue. The proposed algorithm translates position commands into corresponding G-code commands. To preserve the same machining time, the feed rate and the number of segmented G-code commands must be handled properly. We implement the proposed algorithm and integrate it into a customized, ILC-enabled LinuxCNC for evaluation. For the tested G-code files, the experimental results show that ILC with the proposed algorithm for contour control reaches a convergence state.
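The core of such a translation can be sketched as follows. This is a hypothetical illustration, not the thesis's algorithm: it emits linear G01 moves from timed XY positions (as an ILC pass might produce) and chooses each segment's feed rate so that the segment timing, and hence total machining time, is preserved. The G-code dialect, units, and function names are assumptions:

```python
import math

def positions_to_gcode(points):
    """points: [(t_seconds, x_mm, y_mm), ...] -> list of G-code lines.

    Each G01 segment gets feed rate F = distance / duration (in mm/min),
    so segment timing matches the original position-command timing.
    """
    lines = [f"G00 X{points[0][1]:.3f} Y{points[0][2]:.3f}"]  # rapid to start
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)   # segment length in mm
        feed = dist / (t1 - t0) * 60.0        # mm/s -> mm/min
        lines.append(f"G01 X{x1:.3f} Y{y1:.3f} F{feed:.1f}")
    return lines

program = positions_to_gcode([(0.0, 0.0, 0.0), (0.5, 10.0, 0.0), (1.0, 10.0, 5.0)])
```

In practice the segment count also matters: too few segments lose the corrections ILC computed, too many overload the controller's block-processing rate, which is the trade-off the abstract alludes to.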
APA, Harvard, Vancouver, ISO, and other styles