Dissertations / Theses on the topic 'Logical encodings'

To see the other types of publications on this topic, follow the link: Logical encodings.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 15 dissertations / theses for your research on the topic 'Logical encodings.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Dubois, De Prisque Louise. "Prétraitement compositionnel en Coq." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://theses.hal.science/tel-04696909.

Full text
Abstract:
This thesis presents a preprocessing methodology aimed at transforming certain statements from the Coq proof assistant's logic into first-order logic statements, in order to send them to automatic provers, in particular SMT solvers. This methodology involves composing small, independent, and certifying transformations, taking the form of Coq tactics. An implementation of this methodology is provided in a plugin called Sniper, which offers a "push-button" automation tactic. Furthermore, a logical transformation scheduler (called Orchestrator) allows users to add their own logical transformations and determines which transformations apply depending on the proof to be carried out.
APA, Harvard, Vancouver, ISO, and other styles
2

Sheridan, Daniel. "Temporal logic encodings for SAT-based bounded model checking." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/1467.

Full text
Abstract:
Since its introduction in 1999, bounded model checking (BMC) has quickly become a serious and indispensable tool for the formal verification of hardware designs and, more recently, software. By leveraging propositional satisfiability (SAT) solvers, BMC overcomes some of the shortcomings of more conventional model checking methods. In model checking we automatically verify whether a state transition system (STS) describing a design has some property, commonly expressed in linear temporal logic (LTL). BMC is the restriction to only checking the looping and non-looping runs of the system that have bounded descriptions. The conventional BMC approach is to translate the STS runs and LTL formulae into propositional logic and then conjunctive normal form (CNF). This CNF expression is then checked by a SAT solver. In this thesis we study the effect on the performance of BMC of changing the translation to propositional logic. One novelty is to use a normal form for LTL which originates in resolution theorem provers. We introduce the normal form conversion early on in the encoding process and examine the simplifications that it brings to the generation of propositional logic. We further enhance the encoding by specialising the normal form to take advantage of the types of runs peculiar to BMC. We also improve the conversion from propositional logic to CNF. We investigate the behaviour of the new encodings by a series of detailed experimental comparisons using both hand-crafted and industrial benchmarks from a variety of sources. These reveal that the new normal form based encodings can reduce the solving time by a half in most cases, and up to an order of magnitude in some cases, the size of the improvement corresponding to the complexity of the LTL expression. We also compare our method to the popular automata-based methods for model checking and BMC.
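The conventional pipeline the abstract describes — unroll the state transition system to bound k, translate the runs and the property into propositional logic, convert to CNF, and hand the result to a SAT solver — can be sketched on a toy system. This is an illustrative sketch, not Sheridan's encoding: the system is a hypothetical 2-bit counter, the property is bounded reachability of state 3 (LTL's F restricted to a bound), and the "solver" is brute-force enumeration.

```python
from itertools import product

def bmc_reach_cnf(k):
    """CNF asking: does a 2-bit counter starting at 0 reach state 3 within k steps?"""
    nbits = 2
    def var(t, i):                 # state bit i at time t -> DIMACS variable number
        return t * nbits + i + 1
    nstate = (k + 1) * nbits
    clauses = [[-var(0, 0)], [-var(0, 1)]]         # initial state: 00
    # transition relation next = (cur + 1) mod 4, encoded by forbidding bad rows
    for t in range(k):
        for cur, nxt in product(range(4), repeat=2):
            if nxt != (cur + 1) % 4:
                cl = []
                for i in range(nbits):             # negate the forbidden row
                    cl.append(-var(t, i) if (cur >> i) & 1 else var(t, i))
                for i in range(nbits):
                    cl.append(-var(t + 1, i) if (nxt >> i) & 1 else var(t + 1, i))
                clauses.append(cl)
    # property F(state == 3): Tseitin-style selector variables o_t
    sel = [nstate + t + 1 for t in range(k + 1)]
    for t in range(k + 1):
        clauses.append([-sel[t], var(t, 0)])       # o_t -> bit0 at time t
        clauses.append([-sel[t], var(t, 1)])       # o_t -> bit1 at time t
    clauses.append(sel[:])                         # some o_t is true
    return clauses, nstate + k + 1

def brute_sat(clauses, nvars):
    """Stand-in for a SAT solver: try every assignment."""
    for bits in product([False, True], repeat=nvars):
        assign = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(assign(l) for l in cl) for cl in clauses):
            return True
    return False
```

With k = 3 the formula is satisfiable (state 3 is reached at step 3 from state 0); with k = 2 it is not, mirroring how bounded model checking turns reachability into a per-bound SAT query.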
APA, Harvard, Vancouver, ISO, and other styles
3

Malik, Usama (Computer Science & Engineering, Faculty of Engineering, UNSW). "Configuration encoding techniques for fast FPGA reconfiguration." Awarded by: University of New South Wales, School of Computer Science and Engineering, 2006. http://handle.unsw.edu.au/1959.4/26212.

Full text
Abstract:
This thesis examines the problem of reducing reconfiguration time of an island-style FPGA at its configuration memory level. The approach followed is to examine configuration encoding techniques in order to reduce the size of the bitstream that must be loaded onto the device to perform a reconfiguration. A detailed analysis of a set of benchmark circuits on various island-style FPGAs shows that a typical circuit randomly changes a small number of bits in the null or default configuration state of the device. This feature is exploited by developing efficient encoding schemes for configuration data. For a wide set of benchmark circuits on various FPGAs, it is shown that the proposed methods outperform all previous configuration compression methods and, depending upon the relative size of the circuit to the device, compress within 5% of the fundamental information-theoretic limit. Moreover, it is shown that the corresponding decoders are simple to implement in hardware and scale well with device size and available configuration bandwidth. It is not unreasonable to expect that, with little modification to existing FPGA configuration memory systems and an acceptable increase in configuration power, a 10-fold improvement in configuration delay could be achieved. The main contribution of this thesis is that it defines the limit of configuration compression for the FPGAs under consideration and develops practical methods of overcoming this reconfiguration bottleneck. The functional density of reconfigurable devices could thereby be enhanced and the range of potential applications reasonably expanded.
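The core observation above — a typical circuit flips only a few bits of the device's null configuration — can be illustrated with a simple offset-list encoding, measured against the information-theoretic limit the thesis works with. This is a hedged sketch, not Malik's actual scheme; the field widths and framing are invented for illustration.

```python
import math

def offset_encode(bits):
    """Encode a configuration as the list of positions that differ from the
    all-zero (null) configuration, each address stored in ceil(log2 n) bits."""
    n = len(bits)
    ones = [i for i, b in enumerate(bits) if b]
    addr_bits = max(1, math.ceil(math.log2(n)))
    # one count field (same width) plus one address per flipped bit
    return ones, addr_bits * (1 + len(ones))

def entropy_bound(n, k):
    """Information-theoretic minimum bits to say which k of n bits flip."""
    return math.log2(math.comb(n, k))

def decode(ones, n):
    bits = [0] * n
    for i in ones:
        bits[i] = 1
    return bits
```

For a 1024-bit configuration with ~20 flipped bits, the offset list needs a few hundred bits instead of 1024, while always staying above the combinatorial lower bound.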
APA, Harvard, Vancouver, ISO, and other styles
4

Hamdaoui, Yann. "Concurrency, references and linear logic." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC190/document.

Full text
Abstract:
The topic of this thesis is the study of the encoding of references and concurrency in Linear Logic. Our perspective is to demonstrate the capability of Linear Logic to encode side-effects, to make it a viable, formalized and well-studied compilation target for functional languages in the future. The key notion we develop is that of routing areas: a family of proof nets which correspond to a fragment of differential linear logic and which implement communication primitives. We develop routing areas as a parametrizable device and study their theory. We then illustrate their expressivity by translating a λ-calculus featuring concurrency, references and replication to a fragment of differential nets. To this purpose, we introduce a language akin to Amadio's concurrent λ-calculus, but with explicit substitutions for both variables and references. We endow this language with a type and effect system and prove termination of well-typed terms by a mix of reducibility and a new interactive technique. This intermediate language allows us to prove a simulation theorem and an adequacy theorem for the translation.
APA, Harvard, Vancouver, ISO, and other styles
5

Karmarkar, Kedar Madhav. "SCALABLE BUS ENCODING FOR ERROR-RESILIENT HIGH-SPEED ON-CHIP COMMUNICATION." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/dissertations/720.

Full text
Abstract:
Shrinking minimum feature size in deep sub-micron has made fabrication of progressively faster devices possible. The performance of interconnects has become a bottleneck in determining the overall performance of a chip. A reliable high-speed communication technique is necessary to improve the performance of on-chip communication. Recent publications have demonstrated that the use of multiple threshold voltages improves the performance of a bus significantly. The multi-threshold capture mechanism takes advantage of the predictable temporal behavior of a tightly coupled bus to predict the next state of the bus early. However, use of multiple threshold voltages also reduces the voltage slack and consequently increases the susceptibility to noise. Reduction in supply voltage exacerbates the situation. This work proposes a novel error detection and correction encoding technique that takes advantage of the high performance of the multi-threshold capture mechanism, as well as its inbuilt redundancy, to achieve reliable high-speed communication while introducing considerably less redundancy than conventional methods. The proposed technique utilizes graph-based algorithms to produce a set of valid code words. The algorithm takes advantage of implicit set operations using binary decision diagrams to improve the scalability of the code word selection process. The code words of many crosstalk avoidance codes, including the proposed error detection and correction technique, exhibit a highly structured behavior. The sets of larger valid code words can be recursively formed using the sets of smaller valid code words. This work also presents a generalized framework for scalable on-chip code word generation. The proposed CODEC implementation strategy uses a structured graph to model the recursive nature of an encoding technique, which facilitates scalable CODEC implementation. The non-enumerative nature of the implementation strategy makes it highly scalable.
The modular nature of the CODEC also simplifies use of pipelined architecture thereby improving the throughput of the bus.
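The recursive structure claimed above for crosstalk-avoidance code word sets can be seen in miniature with forbidden-pattern codes (no "010" or "101" inside a word), a standard crosstalk-avoidance family: the n-bit set is obtained by extending the (n-1)-bit set, and its size is the sum of the two previous sizes. A generic illustration, not Karmarkar's error detection and correction code.

```python
def fpc_codewords(n):
    """Recursively build n-bit words with no '010' or '101' substring
    (a classic forbidden-pattern crosstalk-avoidance code).  Each step
    extends the valid (n-1)-bit words by one bit, rejecting any extension
    that completes a forbidden 3-bit pattern."""
    words = [""]
    for _ in range(n):
        words = [w + b for w in words for b in "01"
                 if not (w[-2:] + b).endswith(("010", "101"))]
    return words
```

For example, the 4-bit set has 10 words, the 3-bit set 6 and the 2-bit set 4, so |C(4)| = |C(3)| + |C(2)| — the Fibonacci-like recursion that makes scalable, non-enumerative CODECs possible.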
APA, Harvard, Vancouver, ISO, and other styles
6

Johnson, Justin Scott. "Initially held hypothesis does not affect encoding of event frequencies in contingency based causal judgment." Auburn, Ala., 2009. http://hdl.handle.net/10415/1948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yuan, Zeying. "Sequential Equivalence Checking of Circuits with Different State Encodings by Pruning Simulation-based Multi-Node Invariants." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56693.

Full text
Abstract:
Verification is an important step in Integrated Circuit (IC) design. In fact, the literature has reported that up to 70% of the design effort is spent on checking whether the design is functionally correct. One of the core verification tasks is Equivalence Checking (EC), which attempts to check whether two structurally different designs are functionally equivalent for all reachable states. Powerful equivalence checking can also provide opportunities for more aggressive logic optimizations, meeting different goals such as smaller area, better performance, etc. The success of Combinational Equivalence Checking (CEC) has laid a foundation for industry-level combinational logic synthesis and optimization. However, Sequential Equivalence Checking (SEC) still faces much challenge, especially for those complex circuits that have different state encodings and few internal signal equivalences. In this thesis, we propose a novel simulation-based multi-node inductive invariant generation and pruning technique to check the equivalence of sequential circuits that have different state encodings and very few equivalent signals between them. By first grouping flip-flops into smaller subsets to make it scalable for large designs, we then propose a constrained logic synthesis technique to prune potential multi-node invariants without inadvertently losing important constraints. Our pruning technique guarantees the same conclusion for different instances (proving SEC or not) compared to previous approaches, in which merging of such potential invariants might lose important relations if the merged relation does not turn out to be a true invariant. Experimental results show that the smaller invariant set can be very effective for sequential equivalence checking of such hard SEC instances. Our approach is up to 20x faster than previous mining-based methods for larger circuits.
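The simulation-then-prove flow sketched above can be caricatured in a few lines: simulate two encodings of the same machine in lockstep and keep only the small relations between their flip-flops that survive every cycle as candidate invariants. This is a toy sketch under invented details — a binary-coded versus a Gray-coded counter stands in for the two state encodings — and real use would follow with inductive proof and the thesis's pruning step.

```python
from itertools import product

def binary_next(s):
    """Abstract next-state function: a 2-bit counter."""
    return (s + 1) % 4

def bin_bits(s):
    return [(s >> i) & 1 for i in range(2)]       # binary state encoding

def gray_bits(s):
    g = s ^ (s >> 1)                              # Gray state encoding
    return [(g >> i) & 1 for i in range(2)]

def mine_two_node_invariants(steps=16):
    """Collect candidate boolean relations f(a_i, b_j) between flip-flops of
    the two encodings that hold on every simulated cycle."""
    s, traces = 0, []
    for _ in range(steps):
        traces.append((bin_bits(s), gray_bits(s)))
        s = binary_next(s)
    cands = []
    for i, j in product(range(2), repeat=2):
        for name, f in [("eq", lambda x, y: x == y),
                        ("xor", lambda x, y: x != y),
                        ("implies", lambda x, y: (not x) or y)]:
            if all(f(a[i], b[j]) for a, b in traces):
                cands.append((name, i, j))
    return cands
```

Here the only mined equivalence is between the two most significant bits (the Gray MSB equals the binary MSB), illustrating how few signal equivalences survive across different state encodings.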
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
8

Mailly, Jean-Guy. "Dynamics of argumentation frameworks." Thesis, Artois, 2015. http://www.theses.fr/2015ARTO0402/document.

Full text
Abstract:
This thesis tackles the problem of integrating a new piece of information into an abstract argumentation framework. Such a framework is a directed graph whose nodes represent arguments and whose directed edges represent attacks between arguments. There are different ways to decide which arguments are accepted by the agent who uses such a framework to represent her beliefs. An agent may be confronted with a piece of information such as "this argument should be accepted" that contradicts her current beliefs, represented by her argumentation framework. In this thesis, we have studied several approaches to incorporating a piece of information into an argumentation framework. Our first contribution is an adaptation of the AGM framework for belief revision, which was developed to characterize the incorporation of a new piece of information when the agent's beliefs are represented in a logical setting. We have adapted the rationality postulates from the AGM framework to characterize revision operators suited to argumentation frameworks, and we have identified several ways to generate the argumentation frameworks resulting from the revision. We have also shown how to use AGM revision as a tool for revising argumentation frameworks. This approach uses a logical encoding of the argumentation framework to take advantage of the classical revision operators for deriving the expected result. Finally, we have studied the problem of enforcing a set of arguments (how to change an argumentation framework so that a given set of arguments becomes an extension). We have developed a new family of operators that guarantee the success of the enforcement process, contrary to existing approaches, and we have shown that translating our approaches into Boolean satisfaction and optimization problems makes it possible to develop efficient tools for computing the result of the enforcement.
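The enforcement problem described above has a direct brute-force analogue of its logical encoding: S is a stable extension iff S is conflict-free and attacks every outside argument, a conjunction that SAT encodings express clause by clause. A small illustrative sketch; the example framework below is invented.

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """All stable extensions of the framework (args, attacks): a set S is
    stable iff it is conflict-free and attacks every argument outside S.
    Brute-force analogue of the SAT encoding used for enforcement."""
    exts = []
    for r in range(len(args) + 1):
        for S in combinations(args, r):
            s = set(S)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in set(args) - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts
```

For arguments a, b, c with attacks {(a,b), (b,a), (b,c)}, the stable extensions are {b} and {a,c}; adding the attack (a,c) enforces {a} as a stable extension, which is the kind of change-minimizing modification an enforcement operator searches for.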
APA, Harvard, Vancouver, ISO, and other styles
9

Abrahamsson, Olle. "A Gröbner basis algorithm for fast encoding of Reed-Müller codes." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132429.

Full text
Abstract:
In this thesis the relationship between Gröbner bases and algebraic coding theory is investigated, and especially applications towards linear codes, with Reed-Müller codes as an illustrative example. We prove that each linear code can be described as a binomial ideal of a polynomial ring, and that a systematic encoding algorithm for such codes is given by the remainder of the information word computed with respect to the reduced Gröbner basis. Finally we show how to apply the representation of a code by its corresponding polynomial ring ideal to construct a class of codes containing the so called primitive Reed-Müller codes, with a few examples of this result.
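To make the objects above concrete, here is a plain generator-matrix encoding of the first-order Reed-Müller code RM(1, 3) — deliberately not the thesis's Gröbner-basis remainder method, just a minimal standard encoder for the same family of codes.

```python
from itertools import product

def rm1_generator(m):
    """Generator matrix of the first-order Reed-Muller code RM(1, m):
    rows are the evaluations of the monomials 1, x1, ..., xm over all
    2^m points of F_2^m."""
    pts = list(product([0, 1], repeat=m))
    rows = [[1] * len(pts)]                       # constant function 1
    rows += [[p[i] for p in pts] for i in range(m)]   # coordinate functions
    return rows

def encode(msg, G):
    """Encode a message as a GF(2) linear combination of generator rows."""
    n = len(G[0])
    return [sum(mi * g[j] for mi, g in zip(msg, G)) % 2 for j in range(n)]
```

RM(1, 3) is the [8, 4, 4] code: 16 codewords, minimum distance 4, which the weight distribution of the encoded messages confirms.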
APA, Harvard, Vancouver, ISO, and other styles
10

Sastrawan, Dewa Ayu Dwi Damaiyanti. "The Instagram News Logic : The Encoding and Decoding of News Credibility on Instagram in the COVID-19 Infodemic in Indonesia." Thesis, Uppsala universitet, Medier och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446746.

Full text
Abstract:
With the occurrence of the COVID-19 pandemic, a trend of Instagram as a news source emerged in the Reuters Institute Digital News Report 2020 (Reuters, 2020). Instagram's visual character has made accessing news more feasible and convenient through a curated feed. Consequently, news producers are migrating to social media platforms, including Instagram, to serve news consumption needs. However, journalism on social media has been criticized for its lack of journalistic legitimacy, as media trust is challenged by the sensationalism of news to drive engagement metrics. Moreover, the COVID-19 infodemic (WHO, 2020) has escalated concern about news credibility due to the circulation of misinformation regarding the pandemic, which overwhelms citizens. Hence, the objective of this study is to analyze how a news outlet on Instagram maintains journalistic legitimacy and how Instagram users navigate an abundance of information to find credible and trustworthy news. The study examines an Indonesian news outlet on Instagram called Narasi Newsroom, through an interview with a representative of the news producer, a content analysis of its news content, and interviews with Indonesian Instagram users. The empirical findings illustrate how Narasi Newsroom revives journalistic legitimacy through an innovative approach without diminishing journalism quality on Instagram. With its principle of educating the audience to understand the news beyond factual statements, Narasi Newsroom's strategy of riding the wave to serve audience needs upholds journalistic values through critical thinking and credible sources. By updating the encoding/decoding model (Hall, 1973; 2009), the study finds that the social media logic (Dijck & Poell, 2013) can balance journalism practice on Instagram and foster critical thinking among news consumers seeking trustworthy news.
In light of the post-truth era, media trust and news credibility may be challenged; however, they are not lost when journalism on social media takes accountability for serving news consumers' needs. Consequently, it takes a critical stance from both news producers and consumers to preserve trust in the news.
APA, Harvard, Vancouver, ISO, and other styles
11

Townsend, Joseph Paul. "Artificial development of neural-symbolic networks." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15162.

Full text
Abstract:
Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. By contrast, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype, just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent.
The evolvability of the SHRUTI genomes is limited in that an evolutionary search was able to discover genomes for simple relational structures that did not include conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments were performed to understand the SHRUTI fitness landscape and demonstrated that this landscape is unsuitable for navigation using an evolutionary search. Complex SHRUTI structures require that necessary substructures must be discovered in unison and not individually in order to yield a positive change in objective fitness that informs the evolutionary search of their discovery. The requirement for multiple substructures to be in place before fitness can be improved is probably owed to the localist representation of concepts and relations in SHRUTI. Therefore this thesis concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.
APA, Harvard, Vancouver, ISO, and other styles
12

Bobot, François. "Logique de séparation et vérification déductive." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00652508.

Full text
Abstract:
This thesis is part of the programme of proving programs using deductive verification. Deductive verification consists in producing, from a program's source code (what it does) and its specification (what it is supposed to do), a conjecture which, if true, guarantees that the program and its specification agree. Automated theorem provers are mainly used to show the validity of these formulas. When proving programs that use heap-allocated data structures, it is elegant and efficient to specify the program using separation logic, which appeared about a decade ago. This entails proving conjectures involving the connectives of separation logic, whereas automated provers have mostly made progress in first-order logic, which does not contain them. This thesis proposes techniques that allow the ideas of separation logic to appear in specifications while retaining the ability to use provers for first-order logic. However, the conjectures produced are not in the same first-order logic as that of the provers. To enable greater automation, this thesis also defines new conversions between polymorphic first-order logic and the many-sorted first-order logic used by most provers. The first part led to an implementation in the Jessie tool; the second led to a substantial contribution to the Why3 tool, particularly in the architecture and in writing the transformations that implement these simplifications and conversions.
APA, Harvard, Vancouver, ISO, and other styles
13

Ruan, Shanq-Jang (阮聖彰). "Synthesis of Low Power Pipelined Logic Circuits Using Bipartition and Encoding Techniques." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/47969583754170565902.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Electrical Engineering
2002
In the last decade, power dissipation has become a critical design metric for an increasingly large number of VLSI circuits. This is largely due to the use of portable electronic appliances, which call for complex integrated systems that can be powered by lightweight batteries with long periods between recharges. Additionally, the reliability of high-performance computation in modern processors is increasingly undermined by heat, which is due to power consumption. Of particular interest in such systems is the pipelined design style. In this dissertation, we are concerned with optimizing logic-level pipelined circuits for low power. We study the power distribution of a pipeline stage and propose several architectures to achieve lower power consumption. We employ bipartition and encoding techniques for reducing power in a pipeline stage. We first propose two bipartition architectures: bipartition based on output extraction and bipartition based on Shannon expansion. The former bipartitions the circuit in terms of output clustering characteristics; the latter bipartitions the circuit by Shannon expansion with minimum-entropy consideration. To further reduce power, we apply encoding techniques to both architectures and propose two novel architectures: a bipartition single-encoding architecture and a bipartition dual-encoding architecture. These two architectures reduce the switching activity of not only the combinational logic block but also the pipeline registers. To validate the results, we employ an accurate transistor-level power estimator to estimate power dissipation. The transistor-level power estimator provides accurate power results for analyzing the effect of the bipartition and encoding techniques.
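The Shannon-expansion bipartition with minimum-entropy consideration can be sketched on a truth table: expand on each input variable and keep the split whose two cofactors have the lowest total output entropy. This is a toy heuristic under assumed details (an unweighted sum of cofactor entropies), not the dissertation's exact cost function.

```python
import math

def entropy(bits):
    """Binary entropy of a list of 0/1 outputs."""
    p = sum(bits) / len(bits) if bits else 0.0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def best_shannon_split(f, nvars):
    """Pick the variable whose Shannon cofactors f|x=0 and f|x=1 have the
    lowest total output entropy (the bipartition heuristic)."""
    table = [f(*[(i >> k) & 1 for k in range(nvars)]) for i in range(2 ** nvars)]
    best = None
    for v in range(nvars):
        co0 = [table[i] for i in range(2 ** nvars) if not (i >> v) & 1]
        co1 = [table[i] for i in range(2 ** nvars) if (i >> v) & 1]
        cost = entropy(co0) + entropy(co1)
        if best is None or cost < best[1]:
            best = (v, cost)
    return best
```

For f(a, b, c) = a AND (b OR c), splitting on a gives one constant cofactor (entropy 0) and one biased cofactor, so a is chosen; splitting on b or c leaves both halves nearly random.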
APA, Harvard, Vancouver, ISO, and other styles
14

Young, Nicholas. "Coevolution and encoding of fuzzy systems, and multiobjective optimisation." Thesis, 2007. https://figshare.com/articles/thesis/Coevolution_and_encoding_of_fuzzy_systems_and_multiobjective_optimisation/26308066.

Full text
Abstract:

This thesis covers several topics relevant to the design of fuzzy systems using evolutionary algorithms (EAs), with application to control problems and constrained multiobjective problems.

Encoding is a fundamental part of any EA. The solution encoding, and its interactions with the EA's operators, should be designed to minimise arbitrary search bias. A multidimensional encoding suitable for fully specified fuzzy logic rulebases is investigated, and is shown to have better convergence than traditional single-dimensional encoding. By comparison with 2-point and uniform crossover, the improvement is attributed to the elimination of dimensional encoding bias.

The "curse of dimensionality" is the exponential growth of the search space as the number of decision variables increases. In particular, encoding a fully specified fuzzy logic rulebase can result in a prohibitively large search space. Cooperative coevolution and hierarchical fuzzy rulebases both mitigate the curse of dimensionality through modularity, for evolutionary algorithms and fuzzy systems respectively, and these techniques are shown to be highly compatible with one another. The evolutionary convergence, hierarchical design, and opportunity for parallel computation are analysed for the combined techniques.

Most real-world problems are characterised by multiple, conflicting objectives, and are subject to multiple constraints. Multiobjective optimisation, particularly constrained multiobjective optimisation, is investigated using control and function optimisation benchmark problems.

Two multiobjective diversity measures, hypervolume and distance-to-neighbours, are quantitatively analysed: hypervolume is found to more accurately identify the Pareto front, and distance-to-neighbours is found to distribute solutions more uniformly. The inverted pendulum is used as a case study to qualitatively investigate multiobjective design and optimisation of a control problem.

This thesis presents the reconciliation of objective optimisation and constraint satisfaction as the main challenge facing any constrained multiobjective optimisation algorithm, and identifies and investigates two strategies for reconciliation: extended dominance, and blended space.

A novel blended-space algorithm, the Blended Rank Evolutionary Algorithm (BREA), is proposed. BREA dynamically maintains trade-offs between objective optimisation, constraint satisfaction, and population diversity, in order to better identify the Pareto-optimal set of solutions in difficult problems. BREA compares very favourably to the extended-dominance algorithm NSGA-II on the nonlinear crop-rotation problem, improving both solution quality and reliability.
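The hypervolume measure analysed above has a simple closed form in two objectives: sort the nondominated set by the first objective and sum the dominated rectangles against a reference point. A minimal sketch for minimisation problems; the distance-to-neighbours measure is not reproduced here.

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area dominated with respect to a reference point) of a
    nondominated set in a 2-objective minimisation problem."""
    pts = sorted(points)               # ascending in f1 => descending in f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # rectangle for this point
        prev_f2 = f2
    return hv
```

For the front {(1,3), (2,2), (3,1)} with reference point (4,4), the dominated region is three stacked rectangles of areas 3, 2 and 1, giving hypervolume 6 — larger values indicate a front closer to the Pareto-optimal set.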

APA, Harvard, Vancouver, ISO, and other styles
15

Tong, Kuo-Feng. "Simultaneous Plant/Controller Optimization of Traction Control for Electric Vehicle." Thesis, 2007. http://hdl.handle.net/10012/3194.

Full text
Abstract:
Development of electric vehicles is motivated by global concerns over the need for environmental protection. In addition to its zero-emission characteristics, an electric propulsion system enables high-performance torque control that may be used to maximize vehicle performance obtained from energy-efficient, low-rolling-resistance tires typically associated with degraded road-holding ability. A simultaneous plant/controller optimization is performed on an electric vehicle traction control system with respect to conflicting energy use and performance objectives. Due to system nonlinearities, an iterative simulation-based optimization approach is proposed using a system model and a genetic algorithm (GA) to guide search space exploration. The system model consists of: a drive cycle with a constant driver torque request and a step change in coefficient of friction, a single-wheel longitudinal vehicle model, a tire model described using the Magic Formula and a constant rolling resistance, and an adhesion-gradient fuzzy logic traction controller. Optimization is defined as the all-at-once selection of: either a performance-oriented or a low-rolling-resistance tire, the shape of a fuzzy logic controller membership function, and a set of fuzzy logic controller rule-base conclusions. A mixed-encoding, multi-chromosomal GA is implemented to represent these variables, respectively, as a binary string, a real-valued number, and a novel rule-base encoding based on the definition of a partially ordered set (poset) by delta inclusion. Simultaneous optimization results indicate that, under straight-line acceleration and unless energy concerns are completely neglected, low-rolling-resistance tires should be incorporated in a traction control system design, since the energy-saving benefits outweigh the associated degradation in road-holding ability.
The results also indicate that the proposed novel encoding enables the efficient representation of a fixed-size fuzzy logic rule base within a GA.
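A minimal sketch of the mixed, multi-chromosomal encoding described above: one binary gene, one real gene, and a discrete rule-base chromosome, each with its own mutation operator. The names, ranges and rule alphabet ("dec"/"hold"/"inc") are invented for illustration; the thesis's poset-based rule-base encoding is not reproduced here.

```python
import random

def random_individual(n_rules):
    """Mixed-encoding individual: a binary gene (tire choice), a real gene
    (membership-function shape), and a discrete rule-base chromosome."""
    return {
        "tire": random.randint(0, 1),                    # binary chromosome
        "mf_shape": random.uniform(0.1, 1.0),            # real chromosome
        "rules": [random.choice(["dec", "hold", "inc"])  # rule conclusions
                  for _ in range(n_rules)],
    }

def mutate(ind, rate=0.2, rng=random):
    """Per-chromosome mutation: bit flip, clipped Gaussian step, symbol swap."""
    out = dict(ind, rules=list(ind["rules"]))
    if rng.random() < rate:
        out["tire"] ^= 1
    if rng.random() < rate:
        out["mf_shape"] = min(1.0, max(0.1, out["mf_shape"] + rng.gauss(0, 0.05)))
    out["rules"] = [rng.choice(["dec", "hold", "inc"]) if rng.random() < rate else r
                    for r in out["rules"]]
    return out
```

Each chromosome keeps its natural representation, so crossover and mutation can be specialised per gene type instead of forcing everything through one bit string.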
APA, Harvard, Vancouver, ISO, and other styles