Theses on the topic "Assumption-Based"

To see the other types of publications on this topic, follow the link: Assumption-Based.

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 22 theses for your research on the topic "Assumption-Based".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Fan, Xiuyi. « Assumption-based argumentation dialogues ». Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10560.

Full text
Abstract:
Formal argumentation-based dialogue models have attracted research interest recently. Within this line of research, we propose a formal model for argumentation-based dialogues between agents, using assumption-based argumentation (ABA). The dialogues thus amount to conducting an argumentation process in ABA. The model is given in terms of ABA-specific utterances, debate trees and forests implicitly built during and drawn from dialogues, legal-move functions (amounting to protocols), and outcome functions. Moreover, we investigate the strategic behaviour of agents in dialogues, using strategy-move functions. We instantiate our dialogue model in a range of dialogue types studied in the literature, including information-seeking, inquiry, persuasion, conflict resolution, and discovery. Finally, we prove (1) a formal connection between dialogues and well-known argumentation semantics, and (2) soundness and completeness results for our dialogue models and the dialogue strategies used in different dialogue types.
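The ABA machinery that underlies such dialogues can be illustrated with a minimal sketch. All rules, assumptions, and contrary names below are invented for illustration; this shows only the core notion of one set of assumptions attacking another, not the dialogue model of the thesis.

```python
# A toy assumption-based argumentation (ABA) framework: rules, assumptions,
# and a contrary function. A set of assumptions attacks another set if it
# derives the contrary of one of the target's assumptions.

rules = {
    ("p", ("a",)),          # p <- a
    ("q", ("b", "p")),      # q <- b, p
    ("not_a", ("c",)),      # not_a <- c
}
assumptions = {"a", "b", "c"}
contrary = {"a": "not_a", "b": "not_b", "c": "not_c"}

def derives(asms):
    """Forward-chain the rules from a set of assumptions."""
    known = set(asms)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(s in known for s in body):
                known.add(head)
                changed = True
    return known

def attacks(attacker, target):
    """attacker attacks target if it derives the contrary of a target assumption."""
    concl = derives(attacker)
    return any(contrary[a] in concl for a in target)

print(attacks({"c"}, {"a", "b"}))   # True: {c} derives not_a
```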
APA, Harvard, Vancouver, ISO, and other styles
2

Hamlet, I. M. « Assumption based temporal reasoning in medicine ». Thesis, University of Sussex, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.235351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cyras, Kristijonas. « ABA+ : assumption-based argumentation with preferences ». Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/58340.

Full text
Abstract:
This thesis focuses on using (computational) argumentation theory to model common-sense reasoning with preferences. Common-sense reasoning entails dealing with incomplete, uncertain and conflicting information. Argumentation as a branch of Artificial Intelligence (AI) provides means to reason with such information in a formal way. An important aspect of common-sense reasoning is reasoning with preference information. As such, dealing with preferences is an important phenomenon in argumentation. Through our research, we aim to contribute to the understanding of preference information treatment in argumentation and common-sense reasoning, as well as AI at large. Our objective is to equip a well-established structured argumentation formalism - Assumption-Based Argumentation (ABA) - with a new preference handling mechanism. To this end, we propose an extension of ABA, called ABA+, where preferences are accounted for by reversing attacks. This yields a novel way of dealing with preference information in structured argumentation. We also advance a new property concerning contraposition of rules, called Weak Contraposition, applicable to ABA+ and, potentially, to generic approaches to rule-based reasoning with preferences. We argue that ABA+ (with and without Weak Contraposition) exhibits various desirable formal properties concerning argumentation and/or preference handling. We analyse ABA+ in the context of other formalisms of argumentation with preferences and contend the advantages of ABA+.
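The attack-reversal idea can be sketched in a toy form. This is not the full ABA+ definition, only an illustration of its central move: when an attack is carried out by an assumption strictly less preferred than the assumption it targets, the attack is reversed. The assumption names, derivations, and preference relation below are invented.

```python
# Toy illustration of ABA+-style attack reversal (not the full definition).
contrary = {"a": "not_a", "c": "not_c"}
# In plain ABA, {"c"} attacks {"a"} because {"c"} derives not_a:
derived = {frozenset({"c"}): {"not_a"}, frozenset({"a"}): set()}
prefer = {("a", "c")}   # a is strictly preferred to c

def aba_plus_attack(attacker, target):
    """Return (src, dst) giving the attack direction after preference handling."""
    hits = [t for t in target
            if contrary.get(t) in derived[frozenset(attacker)]]
    if not hits:
        return None
    # reversal: the attack uses an assumption less preferred than its target
    if any((t, s) in prefer for t in hits for s in attacker):
        return (tuple(target), tuple(attacker))   # reversed attack
    return (tuple(attacker), tuple(target))       # normal attack

print(aba_plus_attack({"c"}, {"a"}))   # (('a',), ('c',)): the attack is reversed
```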
APA, Harvard, Vancouver, ISO, and other styles
4

Dugdale, Julie Anne. « Cooperative problem-solving using assumption-based truth maintenance ». Thesis, University of Buckingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601371.

Full text
Abstract:
Classical expert systems have been criticised for ignoring the problem-solving ability of the user. The ramifications of this are more than just a reduced problem-solving capability. By excluding the knowledge of the user, the knowledge base of the system is incomplete, as it is infeasible to capture all the relevant factors. Furthermore, users become alienated because they do not have the opportunity to adapt the situation according to their skills. In many cases the conclusions of the expert system are rejected, or the user's responsibility is abrogated, because the user cannot influence the decision. In response to these criticisms a new type of system is emerging - the cooperative problem-solving system. Such systems provide a dynamic interactive environment in which the user and the system work together to derive a solution. A cooperative approach is appropriate in two situations. The first is when a problem can be broken down into sub-problems which can then be assigned to the various participants. The second is when the relative merits of independently derived solutions need to be investigated by participants in order to arrive at a solution that is mutually acceptable. This thesis is concerned with cooperation of the latter kind, and the cooperative system developed in this work is the first to study cooperation in this respect. The domain chosen to implement the cooperative problem-solving system is investment management. The process of investment management described in this work is based upon the approach used by the Product Operations division of International Computers Limited (ICL). Investment management is ideal because of the nature of the reasoning used and the type of cooperative interaction that takes place. Until now, the application of such a system to investment management has not been explored.
Previous methods for analysing cooperation focus on the identification and assignment of individual tasks to the various agents. These methods are therefore inappropriate for the interpretation of cooperation used in this work. The functions necessary to provide a cooperative environment have been identified by developing a new approach to analysing cooperation. Central to this approach are the transcripts obtained from management meetings. These data were supplemented by devising a case study and running simulated meetings. Seven functions that a cooperative problem-solving system should provide were identified from the transcripts: information provision, problem-solving, explanation, user-modelling, constraint recognition, problem-modelling, and confirmation. The individual identification of these functions is not new. The novelty in this work stems from the collective use of the functions for cooperation.
APA, Harvard, Vancouver, ISO, and other styles
5

Drummond, Mark Edwin. « Plan nets : a formal representation of action and belief for 'automatic planning systems' ». Thesis, University of Edinburgh, 1986. http://hdl.handle.net/1842/19705.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Waldvogel, Mark. Carleton University Dissertation, Computer Science. « Metaplanning using time-relation constraints and assumption-based reasoning ». Ottawa, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bayer, S. G. M. « Practical zero-knowledge protocols based on the discrete logarithm assumption ». Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1416402/.

Full text
Abstract:
Zero-knowledge proofs were introduced by Goldwasser, Micali, and Rackoff. A zero-knowledge proof allows a prover to demonstrate knowledge of some information, for example that they know an element which is a member of a list, or which is not a member of a list, without disclosing any further information about that element. Existing constructions of zero-knowledge proofs which can be applied to all languages in NP are impractical due to their communication and computational complexity. However, it has been known since Guillou and Quisquater's identification protocol from 1988 and Schnorr's identification protocol from 1991 that practical zero-knowledge protocols for specific problems exist. Because of this, much work has been undertaken over recent decades to find practical zero-knowledge proofs for various other specific problems, and in recent years many protocols have been published with improved communication and computational complexity. Nevertheless, finding more problems which have an efficient and practical zero-knowledge proof system, and which can be used as building blocks for other protocols, is an ongoing challenge of modern cryptography. This work addresses the challenge and constructs zero-knowledge arguments with sublinear communication complexity and achievable computational demands. The security of our protocols is based solely on the discrete logarithm assumption. Polynomial evaluation arguments are proposed for univariate polynomials, for multivariate polynomials, and for a batch of univariate polynomials. Furthermore, the polynomial evaluation argument is applied to construct practical membership and non-membership arguments. Finally, an efficient method for proving the correctness of a shuffle is proposed. The proposed protocols have been tested against current state-of-the-art versions in order to verify their practicality in terms of run-time and communication cost.
We observe that the performance of our protocols is fast enough to be practical for medium-range parameters. Furthermore, all our verifiers have better asymptotic behavior than earlier verifiers independent of the parameter range, and in real-life settings our provers perform better than the provers of existing protocols. The analysis of the results shows that the communication cost of our protocols is very small; therefore, our new protocols compare very favorably to the current state of the art.
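Schnorr's identification protocol, cited in the abstract as a classical practical zero-knowledge protocol under the discrete logarithm assumption, can be sketched in a few lines. The parameters below are toy sizes for illustration only, far too small to be secure.

```python
# One honest run of Schnorr's identification protocol (toy parameters).
import secrets

p, q, g = 167, 83, 4               # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # prover's secret key (discrete log of y)
y = pow(g, x, p)                   # public key

r = secrets.randbelow(q - 1) + 1   # prover's commitment randomness
t = pow(g, r, p)                   # commitment sent to the verifier
c = secrets.randbelow(q)           # verifier's random challenge
s = (r + c * x) % q                # prover's response

# verifier checks g^s == t * y^c (mod p); zero-knowledge follows because a
# simulator can produce (t, c, s) with the same distribution without knowing x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("accepted")
```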
APA, Harvard, Vancouver, ISO, and other styles
8

Tian, Chun. « Assumption-Based Runtime Verification of Finite- and Infinite-State Systems ». Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/357167.

Full text
Abstract:
Runtime Verification (RV) is usually considered a lightweight automatic verification technique for the dynamic analysis of systems, where a monitor observes executions produced by a system and analyzes them against a formal specification. If the monitor is synthesized not only from the monitoring specification but also from extra assumptions on the system behavior (typically described by a model such as a transition system), then it may output more precise verdicts or even be predictive; meanwhile, it may no longer be lightweight, since monitoring under assumptions has the same computational complexity as model checking. When suitable assumptions come into play, the monitor may also support partial observability, where non-observable variables in the specification can be inferred from observable ones, either present or historical. Furthermore, the monitors are resettable, i.e. able to evaluate the specification at non-initial times of the execution while keeping memory of the input history. This helps in breaking the monotonicity of monitors, which, after reaching conclusive verdicts, can still change their future outputs when their reference time is reset. The combination of these three characteristics (assumptions, partial observability and resets) in monitor synthesis is called Assumption-Based Runtime Verification, or ABRV. In this thesis, we give the formalism of the ABRV approach and a group of monitoring algorithms based on specifications expressed in Linear Temporal Logic with both future and past operators, involving Boolean and possibly other types of variables. When all involved variables have finite domains, the monitors can be synthesized as finite-state machines implemented with Binary Decision Diagrams. With infinite-domain variables, the infinite-state monitors are based on satisfiability modulo theories, first-order quantifier elimination and various model checking techniques.
In particular, Bounded Model Checking is modified to work incrementally, efficiently obtaining inconclusive verdicts before IC3-based model checkers get involved. All the monitoring algorithms in this thesis are implemented in a tool called NuRV. NuRV supports online and offline monitoring, and can also generate standalone monitor code in various programming languages. In particular, monitors can be synthesized as SMV models, whose behavioral correctness and other properties can be further verified by model checking.
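The three-valued, resettable flavor of monitoring described above can be shown in miniature. The sketch below hard-codes one property, G p ("p always holds"), and is only an illustration of inconclusive verdicts and resets; it does not reflect NuRV's actual algorithms or its assumption handling.

```python
# A minimal three-valued runtime monitor for the LTL property G p,
# with a reset that restarts evaluation at the current point of the trace.

class GloballyMonitor:
    def __init__(self):
        self.verdict = "unknown"          # inconclusive on the empty prefix

    def step(self, p):
        """Consume one observation; a single violation is conclusive for G p."""
        if self.verdict == "unknown" and not p:
            self.verdict = "false"
        return self.verdict

    def reset(self):
        """Re-evaluate the property from the current time point on."""
        self.verdict = "unknown"

m = GloballyMonitor()
print([m.step(p) for p in [True, True, False, True]])
# ['unknown', 'unknown', 'false', 'false']
m.reset()                                  # the monitor is no longer monotonic:
print(m.step(True))                        # 'unknown'
```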
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Huizeng. « Ocean color atmospheric correction based on black pixel assumption over turbid waters ». HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/623.

Full text
Abstract:
Accurate retrieval of water-leaving reflectance from the satellite-sensed signal is decisive for ocean color applications, because water-leaving radiance accounts for less than 10% of the satellite-sensed radiance. The standard atmospheric correction algorithm relies on the black pixel assumption, which assumes negligible water-leaving reflectance in the near-infrared (NIR) bands. The standard NIR-based algorithm generally works well for waters where the NIR water-leaving radiance is negligible or can be properly accounted for. However, the black pixel assumption does not hold over turbid waters, which results in biased retrievals of remote sensing reflectance (Rrs). This study therefore aimed to improve atmospheric correction over turbid waters. Based on Sentinel-3, two ways to cope with nonzero NIR water-leaving reflectance were explored. First, this study proposed to use artificial neural networks to estimate and correct NIR water-leaving reflectance at the TOA (ANN-NIR algorithm). The rationale is that hydrosol optical properties are much simpler in the NIR spectral region, where pure-water absorption is the dominant factor. The proposed algorithm outperformed the standard NIR-based algorithm over highly turbid waters. Considering the results demonstrated in this study, the ANN-NIR algorithm should be useful for ocean color sensors with fewer than two SWIR bands. Second, this study adapted the SWIR-based algorithm for atmospheric correction of Sentinel-3 OLCI by coupling it with the two SWIR bands of SLSTR. Three SWIR band combinations were tested: 1020 and 1613, 1020 and 2256, and 1613 and 2256 nm. The SWIR-based algorithm clearly performed better than the NIR-based algorithm over highly turbid waters, while the NIR-based algorithm is still preferred for clear to moderately turbid waters.
The SWIR band of 1020 nm combined with either the 1613 or the 2256 nm band is recommended for the SWIR-based algorithm except for extremely turbid waters, because the 1020 nm band has better radiometric performance. Over extremely turbid waters, the band combination of 1613 and 2256 nm should be used, since the water-leaving reflectance is still non-negligible at 1020 nm over these waters. Considering the atmospheric correction performance obtained by the NIR- and SWIR-based algorithms, the NIR-based and SWIR-based algorithms are in practice applied over clear and turbid waters, respectively. This study revisited the effectiveness of the turbidity index for the current NIR-SWIR switching scheme. The turbidity index calculated from aerosol reflectance varies from 0.7 to 2.2, which is not close to one as expected. In addition to water-leaving reflectance, its value also depends on the spectral shape of aerosol reflectance, which varies with aerosol size distributions, aerosol optical thickness, relative humidity and observing geometries. To address this problem, this study proposed a framework to determine the switching threshold for the NIR-SWIR algorithm. An Rrs threshold was determined for each MODIS land band centered at 469, 555, 645 and 859 nm, respectively; the thresholds are 0.009, 0.016, 0.009 and 0.0006 sr-1, respectively. However, Rrs(469) tends to select the SWIR-based algorithm wrongly for clear waters, while NIR-SWIR switching based on Rrs(859) tends to produce patchy patterns. By contrast, NIR-SWIR switching based on Rrs(555) with a threshold of 0.016 sr-1 and Rrs(645) with a threshold of 0.009 sr-1 produced reasonable results. Considering the contrasted estuarine and coastal waters, combined application of the NIR- and SWIR-based algorithms with the switching scheme should be useful for these waters. This study will contribute to better ocean color atmospheric corrections over turbid waters.
Atmospheric correction algorithms based on the black pixel assumption have been implemented and tested in this study, and combined application of the NIR-based and SWIR-based algorithms is recommended over contrasted transitional waters. However, further studies would still be required to further improve and validate atmospheric correction algorithms over turbid waters.
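The NIR-SWIR switching scheme suggested by the thresholds reported in this abstract can be sketched as a per-pixel decision rule. The function name is illustrative; the thresholds at 555 and 645 nm are the ones the abstract recommends, and combining them with "or" is an assumption of this sketch.

```python
# Per-pixel NIR/SWIR algorithm selection using the Rrs thresholds from the
# abstract: switch to the SWIR-based branch over turbid water, where the
# black pixel assumption fails in the NIR.

RRS_555_THRESHOLD = 0.016   # sr^-1 (from the abstract)
RRS_645_THRESHOLD = 0.009   # sr^-1 (from the abstract)

def select_algorithm(rrs_555, rrs_645):
    """Choose the atmospheric-correction branch for one pixel."""
    if rrs_555 > RRS_555_THRESHOLD or rrs_645 > RRS_645_THRESHOLD:
        return "SWIR"       # turbid water
    return "NIR"            # clear to moderately turbid water

print(select_algorithm(0.004, 0.002))   # NIR
print(select_algorithm(0.021, 0.012))   # SWIR
```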
APA, Harvard, Vancouver, ISO, and other styles
10

Schulz, Claudia. « Developments in abstract and assumption-based argumentation and their application in logic programming ». Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/56062.

Full text
Abstract:
Logic Programming (LP) and Argumentation are two paradigms for knowledge representation and reasoning under incomplete information. Even though the two paradigms share common features, they constitute mostly separate areas of research. In this thesis, we present novel developments in Argumentation, in particular in Assumption-Based Argumentation (ABA) and Abstract Argumentation (AA), and show how they can 1) extend the understanding of the relationship between the two paradigms and 2) provide solutions to problematic reasoning outcomes in LP. More precisely, we introduce assumption labellings as a novel way to express the semantics of ABA and prove a more straightforward relationship with LP semantics than found in previous work. Building upon these correspondence results, we apply methods for argument construction and conflict detection from ABA, and for conflict resolution from AA, to construct justifications of unexpected or unexplained LP solutions under the answer set semantics. We furthermore characterise reasons for the non-existence of stable semantics in AA and apply these findings to characterise different scenarios in which the computation of meaningful solutions in LP under the answer set semantics fails.
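The labelling-based view of argumentation semantics mentioned in this abstract can be illustrated with a small sketch computing the grounded labelling of an abstract argumentation framework. The framework below is invented; the sketch shows AA labellings, not the assumption labellings for ABA that the thesis introduces.

```python
# Grounded labelling of an abstract argumentation framework: an argument is
# IN if all its attackers are OUT, OUT if some attacker is IN, UNDEC otherwise.

attacks = {("a", "b"), ("b", "c"), ("c", "d")}
args = {"a", "b", "c", "d"}

def grounded_labelling(args, attacks):
    label = {x: "UNDEC" for x in args}
    changed = True
    while changed:
        changed = False
        for x in args:
            if label[x] != "UNDEC":
                continue
            attackers = {y for (y, z) in attacks if z == x}
            if all(label[y] == "OUT" for y in attackers):
                label[x] = "IN"; changed = True
            elif any(label[y] == "IN" for y in attackers):
                label[x] = "OUT"; changed = True
    return label

print(sorted(grounded_labelling(args, attacks).items()))
# [('a', 'IN'), ('b', 'OUT'), ('c', 'IN'), ('d', 'OUT')]
```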
APA, Harvard, Vancouver, ISO, and other styles
11

Dicke, Ralf. « Strategische Unternehmensplanung mit Hilfe eines Assumption-based-truth-maintenance-Systems (ATMS) : Formalisierung eines Kontingenzansatzes in Prädikatenlogik und Anpassungsplanung nach dem Net-change-Prinzip / ». Wiesbaden : Dt. Univ.-Verl, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015660763&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Göpfert, Florian [Verfasser], Johannes [Akademischer Betreuer] Buchman and Jintai [Akademischer Betreuer] Ding. « Securely Instantiating Cryptographic Schemes Based on the Learning with Errors Assumption / Florian Göpfert ; Johannes Buchman, Jintai Ding ». Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2016. http://d-nb.info/1121781764/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Sune, Dan. « Isolation of Multiple-faults with Generalized Fault-modes ». Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1480.

Full text
Abstract:

Most AI approaches for fault isolation handle only the behavioral modes OK and NOT OK. To be able to isolate faults in components with generalized behavioral modes, a new framework is needed. By introducing domain logic and assigning the behavior of a component to a behavioral mode domain, efficient representation and calculation of diagnostic information is made possible.

Diagnosing components with generalized behavioral modes also requires extending familiar characterizations. The characterizations candidate, generalized kernel candidate, and generalized minimal candidate are introduced, and it is indicated how they are deduced.

It is concluded that neither the full candidate representation nor the generalized kernel candidate representation is conclusive enough. The generalized minimal candidate representation focuses to a large extent on the interesting diagnostic statements. If further focusing is needed, it is sufficient to present the minimal candidates whose probability is close to that of the most probable minimal candidate.

The performance of the fault isolation algorithm is very good: faults are isolated as far as possible with the available diagnostic information.
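The candidate notions above can be illustrated in the classical consistency-based setting, where candidates are hitting sets of the conflict sets and minimal candidates have no proper sub-candidate. This sketch uses invented component names and collapses behavioral modes to OK / NOT-OK, so it shows only the plain (not the generalized) notion.

```python
# Candidates as hitting sets of minimal conflicts, and minimal candidates
# as the subset-minimal ones. Brute force over all subsets (toy sizes only).
from itertools import chain, combinations

conflicts = [{"valve", "pump"}, {"pump", "sensor"}]   # invented conflict sets
components = set().union(*conflicts)

def all_candidates(conflicts, components):
    """Every component set that intersects (hits) every conflict set."""
    comps = sorted(components)
    subsets = chain.from_iterable(
        combinations(comps, k) for k in range(len(comps) + 1))
    return [set(s) for s in subsets if all(c & set(s) for c in conflicts)]

def minimal_candidates(conflicts, components):
    """Candidates with no proper subset that is also a candidate."""
    cands = all_candidates(conflicts, components)
    return [c for c in cands if not any(o < c for o in cands)]

print(minimal_candidates(conflicts, components))
# two minimal candidates: {pump} and {valve, sensor}
```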

APA, Harvard, Vancouver, ISO, and other styles
14

Kraus, Daniel [Verfasser], Claudia [Akademischer Betreuer] [Gutachter] Czado, Roger [Gutachter] Cooke and Matthias [Gutachter] Fischer. « D-vine copula based quantile regression and the simplifying assumption for vine copulas / Daniel Kraus ; Gutachter : Roger Cooke, Matthias Fischer, Claudia Czado ; Betreuer : Claudia Czado ». München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1143125053/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Than, Soe. « Teaching language-based approaches to literature in Thailand : an experimental study of the effectiveness of 'elementary' stylistic analysis and language-based approaches to teaching literature to EFL students at Assumption University, Thailand ». Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kamdem, Simo Freddy. « Model-based federation of systems of modelling ». Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2374.

Full text
Abstract:
The engineering of complex systems and systems of systems often leads to complex modelling activities (MA). Some challenges exhibited by MA are: understanding the context where they are carried out and their impact on the life cycles of the models they produce, and ultimately providing support for mastering them. How to address these challenges with a formal approach is the central question of this thesis. After discussing related work from systems engineering in general, and the co-engineering of the system to be made (the product) and the system for making it (the project) specifically, we develop a methodology named MODEF that aims to master the operation of MA. MODEF consists in: (1) characterizing MA as a system (and more globally as a federation of systems) in its own right; (2) iteratively architecting this system through the modelling of the conceptual content of the models produced by MA and their life cycles, and of the tasks carried out within MA and their effects on these life cycles; (3) specifying expectations over these life cycles; and (4) analysing the models (of MA) against the expectations (and possibly task constraints) - to check how far the expectations are achievable - via the synthesis of the acceptable behaviours. On the practical side, exploiting the results of the analysis makes it possible to figure out what could happen with the modelling tasks and their impact on the whole state of the models they handle. We show on two case studies (the operation of a supermarket and the modelling of the functional coverage of a system) how this exploitation provides insightful data on how the system is operated end to end and how it can behave. Based on this information, it is possible to take preventive or corrective actions on how the MA are carried out. On the foundational side, the formal semantics of the three kinds of models involved and the expectations formalism are first discussed.
Then the analysis and exploitation algorithms are presented, and the approach is briefly compared with model checking and systems synthesis approaches. Finally, two enablers whose primary objective is to ease the implementation of MODEF are presented. The first is a modular implementation of MODEF's building blocks. The second is a federated architecture (FA) of models, which aims to ease working with formal models in practice. Although FA is formalised within the abstract framework of category theory, an attempt to bridge the gap between abstraction and implementation is sketched via basic data structures and algorithms. Several perspectives related to the different components of MODEF conclude this work.
APA, Harvard, Vancouver, ISO, and other styles
17

« Stereo vision without the scene-smoothness assumption : the homography-based approach ». 1998. http://library.cuhk.edu.hk/record=b5889749.

Full text
Abstract:
by Andrew L. Arengo.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 65-66).
Abstract also in Chinese.
Contents: Acknowledgments; List of Figures; Abstract.
1. Introduction: Motivation and Objective; Approach of This Thesis and Contributions; Organization of This Thesis.
2. Previous Work: Using Grouped Features; Applying Additional Heuristics; Homography and Related Works.
3. Theory and Problem Formulation: Overview of the Problems (Preprocessing; Establishing Correspondences; Recovering 3D Depth); Solving the Correspondence Problem (Epipolar Constraint; Surface-Continuity and Feature-Ordering Heuristics; Using the Concept of Homography); Concept of Homography (Barycentric Coordinate System; Image-to-Image Mapping of the Same Plane); Problem Formulation (Preliminaries; Case of Single Planar Surface; Case of Multiple Planar Surfaces); Subspace Clustering; Overview of the Approach.
4. Experimental Results: Synthetic Images; Aerial Images (T-shape Building; Rectangular Building; 3-layers Building; Pentagon); Indoor Scenes (Stereo Motion Pair; Hallway Scene).
5. Summary and Conclusions.
APA, Harvard, Vancouver, ISO, and other styles
18

Göpfert, Florian. « Securely Instantiating Cryptographic Schemes Based on the Learning with Errors Assumption ». PhD thesis, 2016. https://tuprints.ulb.tu-darmstadt.de/5850/1/Thesis_f_goepfert.pdf.

Full text
Abstract:
Since its proposal by Regev in 2005, the Learning With Errors (LWE) problem has been used as the underlying problem for a great variety of schemes. Its applications are manifold, reaching from basic and highly practical primitives like key exchange, public-key encryption, and signature schemes to very advanced solutions like fully homomorphic encryption, group signatures, and identity-based encryption. One of the underlying reasons for this fertility is the flexibility with which LWE can be instantiated. Unfortunately, this comes at a cost: it makes selecting parameters for cryptographic applications complicated. When selecting parameters for a new LWE-based primitive, a researcher has to take into consideration the influence of several parameters on the efficiency of the scheme and on the runtime of a variety of attacks. In fact, the missing trust in the concrete hardness of LWE is one of the main problems to overcome in bringing LWE-based schemes to practice. This thesis aims at closing the gap between the theoretical knowledge of the hardness of LWE and the concrete problem of selecting parameters for an LWE-based scheme. To this end, we analyze the existing methods to estimate the hardness of LWE, and introduce new estimation techniques where necessary. Afterwards, we show how to transfer this knowledge into instantiations that are at the same time secure and efficient. We show this process on three examples: a highly optimized public-key encryption scheme for embedded devices that is based on a variant of Ring-LWE; a practical signature scheme that served as the foundation of one of the best lattice-based signature schemes based on standard lattices; and an advanced public-key encryption scheme that enjoys the unique property of natural double hardness, based on LWE instances similar to those used for fully homomorphic encryption.
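The LWE problem itself is easy to state concretely. The sketch below generates a toy instance (A, b = A·s + e mod q); the parameters are illustrative only, and choosing them securely is exactly the problem the thesis addresses.

```python
# Generating a toy LWE instance: m samples a_i, b_i = <a_i, s> + e_i (mod q)
# with a uniform secret s and small Gaussian-like errors e_i.
import random

n, m, q = 8, 16, 97          # dimension, number of samples, modulus (toy sizes)
random.seed(0)               # fixed seed so the sketch is reproducible

s = [random.randrange(q) for _ in range(n)]                  # secret vector
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [round(random.gauss(0, 1.5)) % q for _ in range(m)]      # small errors

b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# Decision-LWE asks to distinguish (A, b) from (A, uniform); recovering s
# from (A, b) is the search problem underlying the schemes mentioned above.
print(len(b), all(0 <= bi < q for bi in b))   # 16 True
```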
APA, Harvard, Vancouver, ISO, and other styles
19

Perry, Suzanne B. « Universal service in the European Union: policy goal or market-based assumption? » Doctoral thesis, 1999. http://hdl.handle.net/1814/5652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Guan, Chi-Hao, and 管紀豪. « An EUF-CMA RSA Signature Scheme based on Phi-Hiding Assumption and Trapdoor Hash Function in the Standard Model ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/67708044707947857243.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Computer Science and Engineering
99
We propose an EUF-CMA signature scheme based on the Φ-hiding assumption [13] in the standard model. In the course of this work, we found that the RSA cryptosystem has a lossy property [35], a discovery also made by Kiltz et al. [27]. Separately, Shamir and Tauman proposed an online/offline signature scheme [39]: in the offline phase, the trapdoor hash value is fixed, and in the online phase the corresponding preimage is computed with the trapdoor key. Using this primitive, many EUF-CMA signature schemes have been proposed, such as [5], [9], [10], [11], [12], [17], [23], [26], [29], [32], [33], [41]. Combining these two general ideas, we prove that the RSA cryptosystem satisfies the EUF-CMA property in the standard model.
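The online/offline paradigm mentioned in the abstract can be sketched with a discrete-log chameleon (trapdoor) hash: commit to a hash value offline, then use the trapdoor online to open it to the actual message. The tiny group parameters and the helper `ch` below are illustrative assumptions, not the thesis's construction, and offer no security:

```python
# Toy discrete-log chameleon (trapdoor) hash; insecure toy parameters.
p, q, g = 467, 233, 4          # q | p - 1, and g has order q in Z_p*
x = 57                         # trapdoor key
h = pow(g, x, p)               # public key

def ch(m, r):
    # Chameleon hash CH(m, r) = g^m * h^r mod p
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

# Offline phase: hash a random dummy message; a signature on `digest`
# could already be precomputed here.
m0, r0 = 123, 45
digest = ch(m0, r0)

# Online phase: given the real message m, use the trapdoor to find r with
# CH(m, r) = digest, i.e. solve m0 + x*r0 = m + x*r (mod q).
m = 200
r = (r0 + (m0 - m) * pow(x, -1, q)) % q
assert ch(m, r) == digest      # the precomputed signature now covers m
```

Without the trapdoor `x`, finding such a collision is as hard as the underlying discrete-log-type problem, which is what makes the precomputed signature safe to reuse.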
APA, Harvard, Vancouver, ISO, and other styles
21

Lee, Yum-Min, and 李允民. « Parallel Solver for Three-dimensional Cartesian-grid Based Time-Dependent Schrödinger Equation and Its Applications in Laser-Molecule Interaction Study with Single-Active-Electron Assumption ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/04845407827280731085.

Full text
Abstract:
Doctoral thesis
National Chiao Tung University
Department of Mechanical Engineering
97
A parallelized three-dimensional Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for molecules under the single-active-electron assumption, with the motion of the nuclei frozen, is presented in this thesis. An explicit staggered-time algorithm is employed for time integration of the TDSE, in which the real and imaginary parts of the wave function are defined at alternating times, while a cell-centered finite-volume method is utilized for spatial discretization of the TDSE on Cartesian grids. The TDSE solver is then parallelized with a domain decomposition method on distributed-memory machines by applying a multi-level graph-partitioning technique. The solver is validated on an H2+ molecule system, both by observing conservation of total electron probability and total energy without laser interaction, and by comparing ionization rates with previous 2D-axisymmetric simulation results for an aligned incident laser pulse. The parallel efficiency of the solver is presented and discussed; it can be as high as 75% using 128 processors. Finally, examples of the temporal evolution of the probability distribution for laser incidence onto an H2+ molecule at an internuclear distance of 9 a.u. (θ = 0° and 90°), spectral intensities of harmonic generation at an internuclear distance of 2 a.u. (θ = 0°, 30°, 60° and 90°), and the effect of the laser incidence angle on the ionization rate of N2, O2 and CO2 molecules are presented to demonstrate the capability of the current TDSE solver.
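The staggered-time integration described in the abstract can be sketched in one dimension: the real and imaginary parts of the wave function advance at alternating time levels, leapfrog style. The grid, time step and free-particle Hamiltonian below are illustrative assumptions, not the thesis's 3D finite-volume solver:

```python
import numpy as np

# 1D sketch of a staggered-time (leapfrog) TDSE integrator, atomic units.
N, dx, dt = 200, 0.1, 0.001
x = (np.arange(N) - N // 2) * dx
R = np.exp(-x**2)                  # real part at t = 0 (Gaussian packet)
I = np.zeros(N)                    # imaginary part, staggered at t = dt/2

def H(phi):
    # Kinetic operator -(1/2) d^2/dx^2 via central differences,
    # with zero boundary values (packet stays far from the walls here).
    out = np.zeros_like(phi)
    out[1:-1] = -0.5 * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    return out

norm0 = np.sum(R**2 + I**2) * dx
for _ in range(100):
    I -= dt * H(R)                 # dI/dt = -H R
    R += dt * H(I)                 # dR/dt = +H I
norm1 = np.sum(R**2 + I**2) * dx   # total probability, approximately conserved
```

The alternating updates come from splitting i ∂ψ/∂t = Hψ with ψ = R + iI into ∂R/∂t = H I and ∂I/∂t = -H R; the scheme is explicit yet conserves total probability to good accuracy, which is the validation check the abstract describes.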
APA, Harvard, Vancouver, ISO, and other styles
22

Grzywacz, Norberto M., and Ellen C. Hildreth. « The Incremental Rigidity Scheme for Recovering Structure from Motion: Position vs. Velocity Based Formulations ». 1985. http://hdl.handle.net/1721.1/5610.

Full text
Abstract:
Perceptual studies suggest that the visual system uses the "rigidity" assumption to recover three-dimensional structure from motion. Ullman (1984) recently proposed a computational scheme, the incremental rigidity scheme, which uses the rigidity assumption to recover the structure of rigid and non-rigid objects in motion. The scheme assumes the input to be discrete positions of elements in motion, under orthographic projection. We present formulations of Ullman's method that use velocity information and perspective projection in the recovery of structure. Theoretical and computer analyses show that the velocity-based formulations provide a rough estimate of structure quickly, but are not robust over an extended time period. The stable long-term recovery of structure requires disparate views of moving objects. Our analysis raises interesting questions regarding the recovery of structure from motion in the human visual system.
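The incremental rigidity idea can be sketched as a small optimization: keep an internal 3D model and, when a new frame of 2D (orthographic) positions arrives, pick depths that minimize the change in pairwise 3D distances. The point set, step size and iteration budget below are illustrative assumptions, not Ullman's or the authors' actual formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(6, 3))            # current internal 3D model (6 points)

def pair_dists(pts):
    d = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((d**2).sum(-1))

d_model = pair_dists(P)                # rigid structure stored in the model

# The object rotates rigidly about the y axis; we observe only (x, y).
a = 0.1
Ry = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
xy_new = (P @ Ry.T)[:, :2]             # new frame, orthographic projection

def cost(z):
    # Deviation from rigidity: how much pairwise 3D distances change
    # if the new frame is assigned depths z.
    pts = np.column_stack([xy_new, z])
    return ((pair_dists(pts) - d_model)**2).sum()

z = 0.1 * rng.normal(size=len(P))      # small random depth guess (z = 0 is a saddle)
start = cost(z)
eps, lr = 1e-5, 0.01
for _ in range(300):                   # numerical-gradient descent on the depths
    base = cost(z)
    grad = np.array([(cost(z + eps * np.eye(len(z))[i]) - base) / eps
                     for i in range(len(z))])
    z -= lr * grad
assert cost(z) < start                 # depths move toward a more rigid interpretation
```

As the abstract notes, a single frame leaves ambiguities (e.g. depth reversal); the scheme's strength is that repeating this update over many disparate views gradually stabilizes the recovered structure.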
APA, Harvard, Vancouver, ISO, and other styles