
Doctoral dissertations on the topic "Perfect information"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 30 doctoral dissertations on the topic "Perfect information".

Next to every entry in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse doctoral dissertations from many different fields and put together a well-formed bibliography.

1

Hummelgren, Lars, and Anton Lyxell. "Using PAQ8L to play games of perfect information". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229759.

Full text of the source
Abstract:
One of the best compression algorithms to date in terms of compression ratio is PAQ8L. This thesis shows how PAQ8L can be used to predict moves in a four by four variant of tic-tac-toe. We define three agents to benchmark the performance of PAQ8L. The first agent is based on memorization, the second makes random guesses and the third uses PAQ8L to predict moves. The PAQ8L agent outperforms the other two agents in terms of prediction accuracy, but uses significantly more time and memory.
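The core mechanism, scoring each candidate move by how cheaply a general-purpose compressor encodes it given the history of past games, can be sketched in a few lines. Here zlib stands in for PAQ8L, and the move encoding and toy data are assumptions rather than the thesis's setup:

```python
import zlib

# Sketch of compression-based move prediction (zlib stands in for PAQ8L,
# which models its input far more accurately but is much slower).
# A game history is a string of board-cell labels; the predicted next move
# is the candidate whose continuation the compressor encodes most cheaply.

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

def predict_move(history: str, legal_moves: list[str]) -> str:
    # The cheapest continuation is the one the model finds most predictable
    # given the past games it has "seen" in the history.
    return min(legal_moves,
               key=lambda m: compressed_size((history + m).encode()))

# Toy training data: repeated 4x4 tic-tac-toe games, cells labelled 0-9a-f.
past_games = "048c159d26ae37bf" * 8
print(predict_move(past_games + "04", list("0123456789abcdef")))
```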
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Hidalgo, Dario. "Value of perfect information of transportation forecasting models". The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487943341526861.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Kelmendi, Edon. "Two-Player Stochastic Games with Perfect and Zero Information". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0238/document.

Full text of the source
Abstract:
We consider stochastic games that are played on finite graphs. The subject of the first part is two-player stochastic games with perfect information. In such games the two players take turns choosing actions from a finite set, for an infinite duration, resulting in an infinite play. The objective of the game is given by a Borel-measurable and bounded payoff function that maps infinite plays to real numbers. The first player wants to maximize the expected payoff, and the second player has the opposite objective, that of minimizing the expected payoff. We prove that if the payoff function is both shift-invariant and submixing then the game is half-positional. This means that the first player has an optimal strategy that is at the same time pure and memoryless. Both players have perfect information, so the actions are chosen based on the whole history. In the second part we study finite-duration games where the protagonist player has zero information. That is, he gets no feedback from the game and consequently his strategy is a finite word over the set of actions. Probabilistic finite automata can be seen as an example of such a game that has only a single player. First we compare two classes of probabilistic automata for which the value 1 problem is known to be decidable: leaktight automata and simple automata. We prove that simple automata are a strict subset of leaktight automata. Then we consider half-blind games, which are two-player games where the maximizer has zero information and the minimizer is perfectly informed. We define the class of leaktight half-blind games and prove that it has a decidable maxmin reachability problem.
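The two conditions on the payoff function can be stated compactly (standard formulations; the symbols below are assumed notation, not quoted from the thesis):

```latex
% f maps infinite plays to bounded reals and is Borel-measurable.
% Shift-invariance: deleting a finite prefix p does not change the payoff:
\[
  f(p \cdot u) = f(u) .
\]
% Submixing: any play w obtained by interleaving two plays u and v never
% pays more than the better of the two:
\[
  f(w) \le \max\bigl( f(u), f(v) \bigr) .
\]
```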
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Matras, Omolara. "In pursuit of a perfect system : Balancing usability and security in computer system development". Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-123737.

Full text of the source
Abstract:
Our society is dependent on information and the different technologies and artifacts that give us access to it. However, the technologies we have come to depend on in different aspects of our lives are imperfect, and during the past decade these imperfections have been the target of identity thieves, cyber criminals and malicious persons within and outside the organization. These malicious persons often target the networks of organizations such as hospitals, banks and other financial organizations. Access to these networks is often gained by sidestepping the security mechanisms of computer systems connected to the organization's network. Often, the goal of computer-system security mechanisms is to prevent or detect threats, or to recover from an eventual attack. However, despite huge investments in IT-security infrastructure and information security, over 95% of banks, hospitals and government agencies have at least 10 malicious infections bypass existing security mechanisms and enter their networks without being detected. This has resulted in the loss of valuable information and substantial sums of money from banks and other organizations across the globe. Early research in this area discovered that security mechanisms fail because they are often used incorrectly or not used at all. Specifically, most users find the security mechanisms on their computers too complicated and would rather not use them. Therefore, previous research has focused on making computer-system security usable, or simplifying security technology so that it is "less complicated" for all types of users, instead of designing computers that are both usable and secure. The problem with this traditional approach is that security is treated as an "add-on" to a finished computer-system design. This study is an attempt to change the traditional approach by adjusting two phases of a computer-system design model to incorporate the collection of usability as well as security requirements. Guided by the exploratory case study research design, I gained new insights into a situation that has shocked security specialists and organizational actors alike. This study resulted in the creation of a methodology for designing usable and secure computer systems. Although this method is in its rudimentary stage, it was tested using an online questionnaire. Data from the literature study was sorted using a synthesis matrix and analyzed using qualitative content analysis. Some prominent design and security models and methodologies discussed in this report include User-Centered System Design (UCSD), Appropriate and Effective Guidance for Information Security (AEGIS) and Octave Allegro.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Karlsson, Ann, and Susanne Johansson. "Den perfekta informationsspridaren? : en komparativ studie av tre organisationers intranätanvändning = [The perfect way to spread information?] : [a comparative study of the use of intranet in three organizations]". Borås : Högsk. i Borås, Bibliotekshögskolan/Biblioteks- och informationsvetenskap, 2004. http://www.hb.se/bhs/slutversioner/2004/04-08.pdf.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Capser, Shawn Patrick. "Assessing the Value of Information for Comparing Multiple, Dependent Design Alternatives". University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1520689318651851.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Öberg, Viktor. "EVOLUTIONARY AI IN BOARD GAMES : An evaluation of the performance of an evolutionary algorithm in two perfect information board games with low branching factor". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11175.

Full text of the source
Abstract:
It is well known that the branching factor of a computer-based board game affects how long a searching AI algorithm takes to search through the game tree. Less well known is that the branching factor may have an additional effect for certain types of AI algorithms. The aim of this work is to evaluate whether the win rate of an evolutionary AI algorithm is affected by the branching factor of the board game it is applied to. To do that, an experiment is performed in which an evolutionary algorithm known as "Genetic Minimax" is evaluated on the two low-branching-factor board games Othello and Gomoku (Gomoku is also known as 5 in a row). Performance here is defined as how many times the algorithm manages to win against another algorithm. The results of this experiment showed both some promising data and some data which could not be as easily interpreted. For Othello the hypothesis about this particular evolutionary algorithm appears to be valid, while for Gomoku the results were somewhat inconclusive. For Othello the performance of the genetic minimax algorithm was comparable to the alpha-beta algorithm it played against up to and including depth 4 in the game tree. After that, however, the performance declined more and more the deeper the algorithms searched. The branching factor of the game may be an indirect cause of this behaviour, because as the depth increases, the search space grows proportionally to the branching factor. This growth of the search space with depth, in combination with the settings used by the genetic minimax algorithm, may have caused the performance decline after that point.
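For reference, the baseline the genetic algorithm plays against is classical alpha-beta search; a minimal sketch follows (generic textbook form with placeholder move-generation and evaluation hooks, not the thesis's implementation):

```python
import math

# Classical alpha-beta minimax. children() and evaluate() are placeholder
# hooks; a real Othello or Gomoku engine would supply move generation and
# board evaluation here.

def children(state):
    return []            # placeholder: no successors, every state terminal

def evaluate(state):
    return 0             # placeholder static evaluation

def alphabeta(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for s in succ:
            value = max(value, alphabeta(s, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:    # beta cutoff: minimizer avoids this branch
                break
        return value
    value = math.inf
    for s in succ:
        value = min(value, alphabeta(s, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:        # alpha cutoff
            break
    return value

print(alphabeta(None, depth=4))  # 0 with the placeholder hooks
```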
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Arjonilla, Jérôme. "Sampling-Based Search Algorithms in Games". Electronic Thesis or Diss., Université Paris sciences et lettres, 2024. http://www.theses.fr/2024UPSLD031.

Full text of the source
Abstract:
Algorithm research in the context of games is a highly active field. Games are a prime application domain for search algorithms because they allow complex problems to be modeled and solved efficiently. Many algorithms were first developed for games before being extended to other domains. In this thesis, we focus on heuristic search algorithms in the context of games, particularly sampling-based heuristic search such as Monte Carlo Tree Search (MCTS) in perfect information, and determinization-based search in imperfect information. We also explore the integration of search algorithms with other types of algorithms, especially reinforcement learning algorithms. We present existing methods as well as several original contributions in this field. The first part of the thesis is dedicated to domain-independent heuristic search algorithms, which are easily testable and applicable in various contexts. Specifically, we focus on games with imperfect information, where players do not know all the details of the game state. With existing methods, certain problems arise in these games, particularly strategy fusion and the impact of information revelation. We discuss these problems in detail and present two original methods to address them. The second part of the thesis explores domain-dependent heuristic search algorithms. Domain-dependent algorithms are often more efficient than domain-independent ones because they can learn, generalize, and adapt to a specific domain. Throughout this part, we investigate the integration of heuristic search algorithms with other types of algorithms, particularly reinforcement learning algorithms. We present one original contribution in this area and another that is currently under development. The first method enhances search algorithms by integrating reinforcement learning algorithms that act as a guide. The second method aims to incorporate model-based methods into search in imperfect-information settings.
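A minimal sketch of the UCT selection and backpropagation steps at the heart of MCTS (generic textbook form, not the thesis's code; for imperfect information, determinization would first sample a full game state consistent with the player's observations and then run such a search on it):

```python
import math

# Generic UCT (Upper Confidence bounds applied to Trees) selection and
# backpropagation. Unvisited children are normally expanded before UCT
# applies; this sketch assumes every child has at least one visit.

class Node:
    def __init__(self, parent=None):
        self.parent, self.children = parent, []
        self.visits, self.total_reward = 0, 0.0

def uct_select(node, c=math.sqrt(2)):
    # Exploitation (mean reward) plus exploration (uncertainty bonus).
    return max(node.children,
               key=lambda ch: ch.total_reward / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def backpropagate(node, reward):
    # Propagate one simulation result from a leaf up to the root.
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent

root = Node()
root.children = [Node(root), Node(root)]
backpropagate(root.children[0], 1.0)   # one winning rollout through child 0
backpropagate(root.children[1], 0.0)   # one losing rollout through child 1
assert uct_select(root) is root.children[0]
```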
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Corazza, Federico Augusto. "Analysis of graph-based quantum error-correcting codes". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23801/.

Full text of the source
Abstract:
With the advent of quantum computers, there has been growing interest in the practicality of these devices. Due to the delicate conditions that surround physical qubits, one could wonder whether any useful computation could be implemented on them. As we describe in this work, it is possible to exploit concepts from classical information theory and employ quantum error-correcting techniques. Thanks to the threshold theorem, if the error probability of physical qubits is below a given threshold, the logical error probability of the encoded data qubit can be made arbitrarily low. To this end, we describe decoherence, the phenomenon quantum bits are subject to and the main source of errors in quantum memories. From the error behaviour of a single qubit, we then introduce the error models that can be used to analyze quantum error-correcting codes as a whole. The main type of code we studied comes from the family of topological codes and is called the surface code; we consider both its toric and planar structures. We then introduce a variation of the standard planar surface code which better captures the symmetries of the code architecture. Once the main properties of surface codes have been discussed, we give an overview of the working principles of the algorithm used to decode this type of topological code: minimum weight perfect matching. Finally, we show the performance of the surface codes we introduced, comparing them based on their architecture and properties. These simulations were performed with different error channel models to give a thorough description of their performance in several situations.
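The decoding step can be illustrated with a toy minimum-weight perfect matching over syndrome defects (illustrative distances only; a real surface-code decoder derives edge weights from the lattice geometry):

```python
import networkx as nx

# Toy MWPM decoding: syndrome defects become graph nodes, and edge weights
# approximate how many physical errors are needed to connect two defects.
# Pairing all defects at minimum total weight gives the likeliest correction.

defects = ["d1", "d2", "d3", "d4"]
distance = {("d1", "d2"): 1, ("d1", "d3"): 2, ("d1", "d4"): 3,
            ("d2", "d3"): 3, ("d2", "d4"): 2, ("d3", "d4"): 1}

G = nx.Graph()
for (u, v), w in distance.items():
    # max_weight_matching maximizes, so negate weights to minimize instead.
    G.add_edge(u, v, weight=-w)

pairing = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(p)) for p in pairing))  # [('d1','d2'), ('d3','d4')]
```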
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Linnusaar, Marcus. "GDPR : Jakten på den "perfekta" lösningen". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18943.

Full text of the source
Abstract:
GDPR concerns how companies handle personal data. This study examines the different solutions companies have chosen for informing users, on their websites, that their data is being collected and how it is handled. Questionnaires are used to form a picture of users' knowledge of and feelings about the subject. By observing users as they interact with different GDPR solutions, knowledge is gained about which solutions work and which do not. After interacting with the different GDPR solutions, test participants could also use a desirability matrix to express their experience with fitting adjectives. The results of this study showed a generally negative attitude towards existing GDPR solutions, and an understanding was formed of why users hold this attitude. From this knowledge, various factors could be identified that can affect the UX of a GDPR solution. These factors could then be used to create a GDPR solution that prioritizes UX.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Kline, Jeffrey Jude. "Perfect recall and the informational contents of strategies in extensive games". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/38656.

Full text of the source
Abstract:
This dissertation consists of five chapters on the informational contents of strategies and the role of the perfect recall condition for information partitions in extensive games. The first, introductory, chapter gives basic definitions of extensive games and some results known in the game theory literature. The questions that will be investigated in the remaining chapters and their significance in the literature are also described. In the second chapter it is shown that strategies defined as contingent plans may contain some information that is additional to what the information partition describes. Two types of additional information that strategies may contain when perfect recall is violated are considered. Both behavior and mixed strategies contain the first type of information, but only mixed strategies contain the second type. Addition of either type of information, however, leads to a refinement of the information partition that satisfies perfect recall. The perfect recall condition is found to be significant in demarcating the roles of strategies and information partitions in extensive games. In the third chapter the full informational content of mixed strategy spaces is explored. The informational content of mixed strategy spaces is found to be invariant over a range of information partitions. A weakening of the perfect recall condition called A-loss is obtained and found to be necessary and sufficient for the information contained in mixed strategies to be equivalent to that of a game with perfect recall. An implication of this result is that a player whose information partition satisfies A-loss can play "as if" he has perfect recall, while a player without A-loss cannot. In other words, if an information partition satisfies A-loss, every mixed strategy makes up for any lack of perfect recall described by the information partition. For behavior strategies, we never obtain informational equivalence between distinct information partitions. A-loss also turns out to be a necessary condition for a game without chance moves to have a Nash equilibrium in pure strategies for all payoff assignments. In the fourth chapter the role of the perfect recall condition in preserving some information in the transformation from an extensive game to its agent normal form is discussed. If we interpret a player as a team of agents (one at each information set), then the essential difference between an extensive game and the associated agent normal form game is that in the former the agents act cooperatively while in the latter they act independently. The perfect recall condition is shown to be necessary and sufficient for the perfect equilibria of an extensive game to coincide with those of the associated agent normal form game for all payoff assignments. The contribution of this result is necessity; sufficiency is already known. Since this is proved using pure strategies for the player with imperfect recall in question, one subtle implication is obtained: a perfect equilibrium of the agent normal form game where each agent effectively knows the actions taken and information acquired by his preceding agents may not be a perfect equilibrium in the original extensive game. This means that perfect recall implies more than just effective knowledge of what happened previously. Chapter 5 concludes.
Ph. D.
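The central condition can be stated in one standard formulation (paraphrased, not quoted from the dissertation):

```latex
% Perfect recall, one standard formulation: for every information set I of
% player i and all nodes x, y in I, the paths from the root to x and to y
% agree on the sequence of player-i information sets visited and the
% actions player i took there:
\[
  \mathrm{exp}_i(x) = \mathrm{exp}_i(y) \quad \text{for all } x, y \in I ,
\]
% where exp_i(z) denotes player i's experience (own information sets and
% own actions, in order) along the path to z.
```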
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Genuzio, Marco. "Engineering compressed static functions and minimal perfect hash functions". Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/547316.

Full text of the source
Abstract:
\emph{Static functions} are data structures meant to store arbitrary mappings from finite sets to integers; that is, given a universe of items $U$ and a set of $n \in \mathbb{N}$ pairs $(k_i,v_i)$ where $k_i \in S \subset U$, $|S|=n$, and $v_i \in \{0, 1, \ldots, m-1\}$, $m \in \mathbb{N}$, a static function will retrieve $v_i$ given $k_i$ (usually, in constant time). When every key is mapped to a different value this function is called a \emph{perfect hash function}, and when $n=m$ the data structure yields an injective numbering $S \to \{0,1,\ldots,n-1\}$; this mapping is called a \emph{minimal perfect hash function} (MPHF). Big data brought back one of the most critical challenges that computer scientists have been tackling during the last fifty years: analyzing big amounts of data that do not fit in main memory. While for small keysets these mappings can easily be implemented using hash tables, this solution does not scale well to bigger sets. Static functions and MPHFs break the information-theoretical lower bound of storing the set $S$ because they are allowed to return \emph{any} value if the queried key is not in the original keyset. The classical construction techniques for static functions achieve just $O(nb)$ bits of space, where $b=\log(m)$, and those for MPHFs $O(n)$ bits of space (always with constant access time). All these features make static functions and MPHFs powerful techniques when handling, for instance, large sets of strings, and they are essential building blocks of space-efficient data structures such as (compressed) full-text indexes, monotone MPHFs, Bloom-filter-like data structures, and prefix-search data structures. The biggest challenge of this construction technique involves lowering the multiplicative constants hidden inside the asymptotic space bounds while keeping construction times feasible. In this thesis, we take advantage of recent results in random linear systems theory regarding the ratio between the number of variables and the number of equations, and in perfect hash data structures, to achieve practical static functions with the lowest space bounds so far, and construction time comparable with widely used techniques. The new results, however, require solving linear systems that need more than a simple triangulation process, as happens in current state-of-the-art solutions. The main challenge in making such structures usable is mitigating the cubic running time of Gaussian elimination at construction time. To this purpose, we introduce novel techniques based on \emph{broadword programming} and a heuristic derived from \emph{structured Gaussian elimination}. We obtained data structures that are significantly smaller than commonly used hypergraph-based constructions while maintaining or improving the lookup times and providing still-feasible construction. We then apply these improvements to another kind of structure: \emph{compressed static hash functions}. The theoretical construction technique for this kind of data structure uses prefix-free codes with variable length to encode the set of values. Adopting this solution, we can reduce the space usage of each element to (essentially) the entropy of the list of output values of the function. Indeed, we need to solve an even bigger linear system of equations, and the time required to build the structure increases. In this thesis, we present the first engineered implementation of compressed hash functions.
For example, we were able to store a function with geometrically distributed output with parameter $p=0.5$ in just $2.28$ bits per key, independently of the key set, with a construction time double that of a state-of-the-art non-compressed function, which requires $\approx\log \log n$ bits per key, where $n$ is the number of keys, and similar lookup time. We can also store a function with output distributed following a Zipfian distribution with parameter $s=2$ and $N=10^6$ in just $2.75$ bits per key, whereas a non-compressed function would require more than $20$, with a threefold increase in construction time and significantly faster lookups.
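The lookup equation at the core of such constructions, together with a toy peeling-based construction of it, fits in a few lines (illustrative only; the thesis's engineered solvers, broadword tricks and compressed coding go far beyond this):

```python
import random

# Toy static function via the classic 3-hypergraph construction: each key
# hashes to three distinct positions of an array g, chosen so that
#     value(k) = g[p0] ^ g[p1] ^ g[p2].
# This sketch solves the linear system by "peeling": repeatedly find a
# position touched by exactly one remaining key. Uses Python's per-process
# string hashing, so the table is not persistent across runs.

def positions(key, seed, m):
    rng = random.Random(hash((seed, key)))
    ps = []
    while len(ps) < 3:                      # three distinct positions
        p = rng.randrange(m)
        if p not in ps:
            ps.append(p)
    return ps

def build(pairs, m, seed):
    remaining, order = set(pairs), []       # order holds (key, free position)
    while remaining:
        count = {}
        for k in remaining:
            for p in positions(k, seed, m):
                count[p] = count.get(p, 0) + 1
        peeled = [(k, p) for k in remaining
                  for p in positions(k, seed, m) if count[p] == 1]
        if not peeled:
            return None                     # unlucky seed: caller retries
        for k, p in peeled:
            if k in remaining:
                remaining.remove(k)
                order.append((k, p))
    g = [0] * m
    for k, p in reversed(order):            # assign in reverse peeling order
        others = [q for q in positions(k, seed, m) if q != p]
        g[p] = pairs[k] ^ g[others[0]] ^ g[others[1]]
    return g

def lookup(g, key, seed):
    p0, p1, p2 = positions(key, seed, len(g))
    return g[p0] ^ g[p1] ^ g[p2]

pairs = {"key%d" % i: i % 16 for i in range(200)}
m, seed = int(1.3 * len(pairs)) + 3, 0      # a bit above the ~1.23n threshold
g = build(pairs, m, seed)
while g is None:
    seed += 1
    g = build(pairs, m, seed)
assert all(lookup(g, k, seed) == v for k, v in pairs.items())
print("stored", len(pairs), "keys in", m, "array cells")
```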
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Kalogrias, Christos. "Performance analysis of the IEEE 802.11A WLAN standard optimum and sub-optimum receiver in frequency-selective, slowly fading Nakagami channels with AWGN and pulsed noise jamming". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FKalogrias.pdf.

Full text of the source
Abstract:
Thesis (M.S. in Electrical Engineering and M.S. in Systems Engineering)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Clark Robertson. Includes bibliographical references (p. 143). Also available online.
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Phuong, Tran Thi Thanh. "Application of economic analysis to evaluate various infectious diseases in Vietnam". Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:2452971c-e5eb-4661-8675-d76f0eca9774.

Full text of the source
Abstract:
This thesis is composed of two economic evaluations: one trial-based study and one model-based study. In a study published in Clinical Infectious Diseases in 2011, a team of OUCRU investigators found that immediate antiretroviral therapy (ART) was not associated with improved 9-month survival in HIV-associated TBM patients (HR 1.12; 95% CI 0.81 to 1.55; P = .50). An economic evaluation of this clinical trial was conducted to examine the cost-effectiveness of immediate ART (initiated within 1 week of study entry) versus deferred ART (initiated after 2 months of TB treatment) in HIV-associated TBM patients. Over 9 months, immediate ART was not different from deferred ART in terms of costs and QALYs gained. Late initiation of ART during TB and HIV treatment for HIV-positive TBM patients proved to be the most cost-effective strategy. Increasing resistance of Plasmodium falciparum malaria to artemisinin is posing a major threat to the global effort to eliminate malaria. Artemisinin combination therapies (ACTs) are currently the most efficacious first-line therapies to treat uncomplicated malaria. However, resistance to both artemisinin and partner drugs is developing, and this could result in increasing morbidity, mortality, and economic costs. One strategy advocated for delaying the development of resistance to the ACTs is the wide-scale deployment of multiple first-line therapies. A previous modelling study found that the use of multiple first-line therapies (MFT) reduced long-term treatment failures compared with strategies in which a single first-line ACT was recommended. Motivated by the results of that modelling study published in the Lancet, the cost-effectiveness of MFT versus single first-line therapies was assessed in settings of different transmission intensities, treatment coverages and fitness costs of resistance, using a previously developed model of the dynamics of malaria and a literature-based cost estimate of changing antimalarial drug policy at the national level. This study demonstrates that the MFT strategies outperform the single first-line strategies in terms of costs and benefits across the wide range of epidemiological and economic scenarios considered. The second analysis of the thesis is not only internationally relevant but also focused on healthcare practice in Vietnam. These two studies add significant new cost-effectiveness evidence in Vietnam. This thesis presents the first trial-based economic evaluation in Vietnam that considers patient-health outcome measures for participants with cognitive limitations (tuberculous meningitis), deals with missing data and the potential ways to handle this common problem through multiple imputation, and addresses the issue of censored cost data. Identifying these issues would support decision makers and stakeholders, including the pharmaceutical industry, in devising a new guideline on how to implement a well-designed trial-based economic evaluation in Vietnam in the future. Another novelty of this thesis is the detailed costing of a change of drug regimens, which economic evaluations considering drug policy change often do not include. This cost could be substantial to the healthcare system, covering retraining of staff and publication of new guidelines. This thesis documents the costs incurred by the Vietnamese government in changing the first-line treatment of malaria from a single first-line therapy (ACT) to multiple first-line therapies.
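Evaluations of this kind are conventionally summarized by the incremental cost-effectiveness ratio (standard definition, not a result of the thesis):

```latex
% Incremental cost-effectiveness ratio of strategy 1 over strategy 0,
% with C_i the expected cost and E_i the expected effect (e.g. QALYs):
\[
  \mathrm{ICER} = \frac{C_1 - C_0}{E_1 - E_0} ,
\]
% strategy 1 being cost-effective when the ICER falls below the
% willingness-to-pay threshold \lambda.
```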
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Veronese, Leonardo <1995>. "Practical non-perfect fuzzy rainbow trade-off: reference design for fast FPGA and SSD implementation". Master's Degree Thesis, Università Ca' Foscari Venezia, 2021. http://hdl.handle.net/10579/18642.

Full text of the source
Abstract:
Time/memory trade-offs are general techniques used in the cryptanalysis of hash functions, block ciphers and stream ciphers that reduce computational effort at the cost of memory usage. Among these techniques the most modern algorithm is the fuzzy rainbow trade-off, which was used to attack the GSM A5/1 cipher in 2010. Most existing analyses of trade-off algorithms only consider the main-memory model, which does not reflect the hierarchical (external) storage model of real-world systems. Moreover, to the best of our knowledge, there are no publicly available implementations or designs that show the performance level obtainable with modern off-the-shelf hardware. In this thesis we propose a reference hardware and software design for the cryptanalysis of stream ciphers and one-way functions based on FPGAs, SSDs and the fuzzy rainbow trade-off algorithm. The performance of implementations of this design can be estimated through an analytical method based on the work by Hong and Moon. We evaluate our design by building a real-world system that retrieves the key from plaintext/ciphertext pairs generated by a legacy 56-bit stream cipher. We experimentally confirm that the performance figures of our real-world implementation lie in the expected ranges, and we propose these figures as a reference for the performance level achievable with off-the-shelf components in 2020.
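The underlying precomputation idea, storing only the endpoints of long chains of cipher evaluations and recomputing the middle at attack time, can be sketched as follows (a plain Hellman-style chain over a toy 24-bit key space; the fuzzy rainbow trade-off additionally interleaves several reduction colours and distinguished points):

```python
import hashlib

# Toy Hellman-style time/memory trade-off chain (illustrative only).
KEY_BITS = 24                          # toy key space, NOT the 56-bit target

def f(key: int) -> int:
    # One-way step: hash the key and truncate back into the key space.
    digest = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") % (1 << KEY_BITS)

def reduce(value: int, colour: int) -> int:
    # Reduction function mapping an output back to a candidate key.
    return (value ^ colour) % (1 << KEY_BITS)

def build_chain(start: int, length: int, colour: int) -> int:
    # Precomputation stores only (endpoint -> start); the chain interior is
    # recomputed during the online attack, trading memory for computation.
    key = start
    for _ in range(length):
        key = reduce(f(key), colour)
    return key

table = {build_chain(s, 100, colour=0xABCDEF): s for s in range(1000)}
print(len(table), "chain endpoints stored")
```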
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Hàn, Hiêp. "Extremal hypergraph theory and algorithmic regularity lemma for sparse graphs". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2011. http://dx.doi.org/10.18452/16402.

Full text of the source
Abstract:
Once invented as an auxiliary lemma for Szemerédi's theorem, the regularity lemma has become one of the most powerful tools in graph theory over the last three decades, widely applied in several fields of mathematics and theoretical computer science. Roughly speaking, the lemma asserts that dense graphs can be approximated by a constant number of bipartite quasi-random graphs, thus narrowing the gap between deterministic and random graphs. Since the latter are much easier to handle, this information is often very useful. With the regularity lemma as the starting point, two roads diverge in this thesis, aiming at applications of the concept of regularity on the one hand and clarification of several aspects of this concept on the other. In the first part we deal with questions from extremal hypergraph theory, and foremost we use a generalised version of Szemerédi's regularity lemma for uniform hypergraphs to prove asymptotically sharp bounds on the minimum degree which ensure the existence of Hamilton cycles in uniform hypergraphs. Moreover, we derive (asymptotically sharp) bounds on minimum degrees of uniform hypergraphs which guarantee the appearance of perfect and nearly perfect matchings. In the second part a novel notion of regularity is introduced which generalises Szemerédi's original concept. Concerning this new concept we provide a polynomial-time algorithm which computes a regular partition for given graphs without too dense induced subgraphs. As an application we show that for the above-mentioned class of graphs the problem MAX-CUT can be approximated within a multiplicative factor of (1+o(1)) in polynomial time. Furthermore, pursuing the line of research of Chung, Graham and Wilson on quasi-random graphs, we study the notion of quasi-randomness resulting from the new notion of regularity, and we provide a characterisation in terms of eigenvalue separation of the normalised Laplacian matrix.
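The notion of regularity referred to throughout is, in its standard form (the textbook definition, not the thesis's generalised one):

```latex
% Edge density of a pair of disjoint vertex sets (A, B):
\[
  d(A,B) = \frac{e(A,B)}{|A|\,|B|} .
\]
% (A, B) is \varepsilon-regular if for all A' \subseteq A, B' \subseteq B
% with |A'| \ge \varepsilon |A| and |B'| \ge \varepsilon |B|:
\[
  \lvert d(A',B') - d(A,B) \rvert \le \varepsilon .
\]
```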
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Johansson, Susanne, and Ann Karlsson. "Den perfekta informationsspridaren? En komparativ studie av tre organisationers intranätanvändning". Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16352.

Full text of the source
Abstract:
This is a comparative study of the use of intranets in three organizations: an aid organization, a hospital and a business company. The intranet is a relatively new medium and may still not be accepted by all potential users, which wastes resources both for the individual employee and for the overall organization. It is therefore necessary to investigate whether organizations differ in their intranet needs. Our purpose with this study was to deepen the understanding of the use of intranets as a channel of organizational communication and thereby acknowledge the users' needs and opinions of the intranet. We wanted to know how the intranet is used and what the overall advantages and disadvantages of implementing and using an intranet are, according to the respondents. The theories we used are mainly based on the use of information systems, communication and organizational aspects, all connected to the intranet medium. We interviewed one key person in each organization to get an overview of the organizational intranet, and handed out questionnaires, personally or through our key persons, to a number of employees in the three organizations. We found that all three organizations have similar information needs and opinions concerning the intranet. The e-mail and the other services employees can actively participate in are the most frequently used parts of the intranet. The advantages of the intranet are that information can easily be shared by all employees and that the intranet gives greater insight into the entire organization. Among the disadvantages are that the intranet is too unstructured and that the large amount of information makes it hard to retrieve relevant information. All organizations requested better search functions.
Thesis level: D
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Nordholm, Miranda. "Instagrams filter - Jakten efter den perfekta bilden : En studie om hur redigering av en bild kan påverka attityd till ett motiv". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Datateknik och informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-41054.

Full text of the source
Abstract:
Purpose – The purpose of the thesis is to investigate whether any editing combination, in the form of the different filters available in the Instagram app, stands out noticeably from the others in terms of negative or positive response. Method – To answer the research question, the study has sought to anchor as many theories as possible in scientific, published material. A small-N study is applied when collecting information about the different filters the Instagram app lets the user choose from. A quantitative survey is conducted to obtain statistics on how the filters can be ranked from most to least popular. To further validate the survey answers and to answer sub-questions 1.1 and 1.2 of the thesis, three interviews are conducted, aiming for more qualitative answers that can be compared with and used to analyze the survey results. Findings – This study found that a combination of moderately colorful, normal contrast, medium opacity, neutral color tone, blue color and medium brightness makes for a good filter. Highly matte and colorless colors, very low contrast, black and white (neutral colors), cool color tones, high brightness and high opacity are qualities of a filter that gets a lot of negative response. The most popular filter of all 23 in the Instagram app is Clarendon, while the filter with the most negative response is Hefe. Implications – A risk of working deductively is that the existing theories on the subject, from which the researcher starts, may steer the results of the study. When conducting a small-N study, there is a risk that the variables analyzed are too few to answer the research question. When conducting a survey, there is always the risk that the questions are too leading, or that the respondent does not understand, misinterprets, or does not engage enough with the subject to answer truthfully. When conducting an interview, there is a risk that the interviewees are not sufficiently versed in the subject to answer the questions in detail or honestly. Limitations – The study is limited by the time and resources available. The subjects who participated in the interviews and the survey are limited to those the author could reach, just as the theory found on the subject is limited by the author's ability to search and review information, by language skills, and by the data published in libraries and databases. The results of the study are limited to the means used for the objects examined. Keywords – Color theory, Social media, Instagram, Filter, Image editing
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Reis, Luís Henrique Vecchio. "The capital structure of portuguese firms within a crisis". Master's thesis, Instituto Superior de Economia e Gestão, 2011. http://hdl.handle.net/10400.5/4565.

Full text of the source
Abstract:
Master's in Finance
In this study we review the theoretical approach behind capital structure decisions by presenting the ideas of the Modigliani and Miller (1958) theorem, which was based on a world of perfect capital markets and the argument of the law of one price. We show that there are two useful theories of the firm's financing decision: the trade-off theory, which builds on Modigliani and Miller's original arguments and identifies several relevant factors in determining a firm's capital structure (such as taxes, costs of financial distress, and agency costs and benefits of debt), and the pecking order theory of Myers and Majluf (1984). Further in this study we describe the evolution of the capital structure of the 16 largest listed non-financial Portuguese firms ("PSI-16") during the recent crisis peaking in 2008. We present a description of the level of debt (and net debt) compared to the book value and the market value of the equity of such firms (debt-to-equity ratio). We find some evidence consistent with both theories. In particular we find a cautious use of debt due to the higher risk of bankruptcy (and its costs), while still taking advantage of the interest tax shield (consistent with the trade-off view), and an increase in retained earnings with an absence of new issues (consistent with the pecking order theory). We explain that the firm's financing decision can depend on several factors pointed out by the trade-off theory, such as the tax advantages of using debt, agency costs and benefits of debt, and costs associated with financial distress. Yet, in times of crisis firms may prefer internal over external financing, mainly because of asymmetry of information.
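The benchmark result underlying the review is Modigliani-Miller Proposition II, stated here in its standard form together with the debt-to-equity ratio used in the study's description:

```latex
% Debt-to-equity ratio at market values:
\[
  D/E = \frac{\text{market value of debt}}{\text{market value of equity}} .
\]
% Modigliani--Miller Proposition II (perfect markets, no taxes): the cost
% of equity rises linearly in leverage,
\[
  r_E = r_A + \frac{D}{E}\,\bigl( r_A - r_D \bigr) ,
\]
% with r_A the unlevered cost of capital and r_D the cost of debt.
```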
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Järvelä, Andreas, and Sebastian Lindmark. "Evaluation and comparison of a RabbitMQ broker solution on Amazon Web Services and Microsoft Azure". Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158242.

Full text of the source
Abstract:
In this thesis, a scalable, highly available and reactive RabbitMQ cluster is implemented on Amazon Web Services (AWS) and Microsoft Azure. An alternative solution was created on AWS using the CloudFormation service. These solutions are performance tested using the RabbitMQ PerfTest tool by simulating high loads with varied parameters. The test results are used to analyze the throughput and price-performance ratio for a chosen set of instances on the respective cloud platforms. How performance changes between instance family types and cloud platforms is tested and discussed. Additional conclusions are presented regarding the general performance differences in infrastructure between AWS and Microsoft Azure.
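For context, a minimal publish/consume round trip against a RabbitMQ broker looks as follows (generic pika client usage against a broker assumed reachable on localhost; the thesis generates load with RabbitMQ PerfTest, not with a script like this):

```python
import pika

# Minimal RabbitMQ round trip: declare a durable queue, publish one
# persistent message, and fetch it back. Assumes a broker on localhost
# with default credentials.

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="benchmark", durable=True)  # survives restarts

channel.basic_publish(
    exchange="",
    routing_key="benchmark",
    body=b"hello",
    properties=pika.BasicProperties(delivery_mode=2),   # persistent message
)

method, properties, body = channel.basic_get(queue="benchmark", auto_ack=True)
print(body)                                             # b'hello'
connection.close()
```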
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Fedele, Dante. "Naissance de la diplomatie moderne. L'ambassadeur au croisement du droit, de l'éthique et de la politique". Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0968.

Full text of the source
Abstract:
Using a collection of texts commonly known as the "treatises on the ambassador", this research examines the birth and the development of the experience of diplomacy from the 13th to the 17th century. It aims, in particular, to explore the development of the figure of the ambassador within a field of problematization involving ethics, politics and law. After some methodological and historical remarks, the thesis deals with the development of the status of the ambassador from two perspectives, the legal and the professional. Regarding his legal status, the medieval legal conceptualisation of the role of the ambassador as a genuine public "office", and that of the diplomatic function as "representation", are examined. The way in which these conceptualisations help to define the negotiating powers conferred on the ambassador, his immunities and the honours to which he is entitled is then considered. This analysis allows for an investigation of the complex links between the exercise of diplomacy and claims to sovereignty during Europe's transition from the Middle Ages to modernity. Regarding his professional status, the thesis reconstructs the functions of the ambassador (particularly in relation to information gathering and negotiation), the means provided for the ambassador to undertake his functions (his salary and the assignment of an escort) and the objective, intellectual or moral qualities required of him. As well as illustrating the techniques which have been required for ambassadorial success since the 15th century, this analysis offers some hints for studying the professionalization of public officials and the emergence of the modern criteria of political analysis.
APA, Harvard, Vancouver, ISO, and other styles
22

Subhadarshini, Sonalin. "An Identity Based Key Exchange Scheme with Perfect Forward Security". Thesis, 2015. http://ethesis.nitrkl.ac.in/7374/1/2015_BT_Sonalin_111CS0446.pdf.

Full text source
Abstract:
Identity-based authenticated key exchange (IBAKE) with perfect forward security (PFS) is one of the major advances in the field of cryptography. Such a protocol establishes secure communication between two parties, each of whom holds a unique identity, by deriving a common secret key without the need to send and verify public-key certificates. The scheme involves a key generation centre (KGC) that provides the two parties with static keys that the parties can authenticate. Our protocol can be viewed as a variant of the protocol proposed by Xie et al. in 2012 [8], and it does not rely on bilinear pairings. We compare the existing protocol with the proposed one and show that ours is more efficient. We also provide proofs that our protocol is secure against attacks and is not forgeable.
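As a rough illustration of where perfect forward security comes from in such protocols, the sketch below mixes a static (long-term) Diffie-Hellman secret with a per-session ephemeral one. It is a generic, toy-parameter sketch, not the thesis's pairing-free IBAKE construction, and all names and parameters in it are illustrative assumptions.

import hashlib
import secrets

P = 0xFFFFFFFB  # toy 32-bit prime; real deployments use 2048-bit+ groups
G = 5           # toy generator

def keygen():
    x = secrets.randbelow(P - 2) + 1      # private exponent in [1, P-2]
    return x, pow(G, x, P)                # (private, public)

# Long-term static keys. In an IBAKE these would be issued by the KGC and
# bound to each party's identity; plain DH pairs stand in for them here.
a_stat, A_stat = keygen()
b_stat, B_stat = keygen()

# Fresh ephemeral keys, generated per session and erased afterwards.
a_eph, A_eph = keygen()
b_eph, B_eph = keygen()

def session_key(my_stat, my_eph, peer_stat_pub, peer_eph_pub):
    s_static = pow(peer_stat_pub, my_stat, P)   # authenticates the peer
    s_ephem = pow(peer_eph_pub, my_eph, P)      # provides forward security
    return hashlib.sha256(f"{s_static}|{s_ephem}".encode()).hexdigest()

k_alice = session_key(a_stat, a_eph, B_stat, B_eph)
k_bob = session_key(b_stat, b_eph, A_stat, A_eph)
assert k_alice == k_bob
# Once a_eph and b_eph are deleted, an attacker who later steals the static
# keys can recompute s_static but not s_ephem, so past session keys stay safe.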
APA, Harvard, Vancouver, ISO, and other styles
23

"Intelligent strategy for two-person non-random perfect information zero-sum game". 2003. http://library.cuhk.edu.hk/record=b5891609.

Full text source
Abstract:
Tong Kwong-Bun.
Thesis submitted in: December 2002.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 77-[80]).
Abstracts in English and Chinese.
Chapter 1: Introduction
  1.1 An Overview
  1.2 Tree Search
    1.2.1 Minimax Algorithm
    1.2.2 The Alpha-Beta Algorithm
    1.2.3 Alpha-Beta Enhancements
    1.2.4 Selective Search
  1.3 Construction of Evaluation Function
  1.4 Contribution of the Thesis
  1.5 Structure of the Thesis
Chapter 2: The Probabilistic Forward Pruning Framework
  2.1 Introduction
  2.2 The Generalized Probabilistic Forward Cuts Heuristic
  2.3 The GPC Framework
    2.3.1 The Alpha-Beta Algorithm
    2.3.2 The NegaScout Algorithm
    2.3.3 The Memory-enhanced Test Algorithm
  2.4 Summary
Chapter 3: The Fast Probabilistic Forward Pruning Framework
  3.1 Introduction
  3.2 The Fast GPC Heuristic
    3.2.1 The Alpha-Beta Algorithm
    3.2.2 The NegaScout Algorithm
    3.2.3 The Memory-enhanced Test Algorithm
  3.3 Performance Evaluation
    3.3.1 Determination of the Parameters
    3.3.2 Result of Experiments
  3.4 Summary
Chapter 4: The Node-Cutting Heuristic
  4.1 Introduction
  4.2 Move Ordering
    4.2.1 Quality of Move Ordering
  4.3 Node-Cutting Heuristic
  4.4 Performance Evaluation
    4.4.1 Determination of the Parameters
    4.4.2 Result of Experiments
  4.5 Summary
Chapter 5: The Integrated Strategy
  5.1 Introduction
  5.2 Combination of GPC, FGPC and Node-Cutting Heuristic
  5.3 Performance Evaluation
  5.4 Summary
Chapter 6: Conclusions and Future Works
  6.1 Conclusions
  6.2 Future Works
Appendix A: Examples
Appendix B: The Rules of Chinese Checkers
Appendix C: Application to Chinese Checkers
Bibliography
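The chapter list above revolves around alpha-beta search and forward pruning. As a rough illustration of that family of techniques, here is a minimal negamax alpha-beta sketch with a static forward cut on a toy game; the toy game, evaluation function and pruning margin are illustrative assumptions, not the thesis's GPC or FGPC schemes.

class Nim:
    """Toy game: players alternately take 1-3 stones; taking the last wins."""
    def __init__(self, stones):
        self.stones = stones

    def is_terminal(self):
        return self.stones == 0

    def evaluate(self):
        # Cheap static score from the side to move; with 0 stones the
        # previous player took the last stone, so the side to move lost.
        return -100 if self.stones == 0 else self.stones % 4

    def children(self):
        return [Nim(self.stones - k) for k in (1, 2, 3) if k <= self.stones]

def alphabeta_fc(state, depth, alpha=-1000, beta=1000, margin=2):
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    best = -1000
    for i, child in enumerate(state.children()):
        # Forward cut: skip non-first children whose static estimate falls
        # far below alpha; a probabilistic cut would also weigh how likely
        # the estimate is to err by more than the margin.
        if i > 0 and depth >= 2 and -child.evaluate() < alpha - margin:
            continue
        score = -alphabeta_fc(child, depth - 1, -beta, -alpha, margin)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

print(alphabeta_fc(Nim(7), depth=6))  # positive: the side to move wins from 7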
APA, Harvard, Vancouver, ISO, and other styles
24

Hsueh, Chu-Hsuan, and 薛筑軒. "On Strength Analyses of Computer Programs for Stochastic Games with Perfect Information". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ku48z7.

Full text source
Abstract:
PhD
National Chiao Tung University
Institute of Computer Science and Engineering
107
The field of computer games is important to research in artificial intelligence. According to the two different roles that elements of chance can play, games can be classified as deterministic vs. stochastic and as perfect-information vs. imperfect-information. Since many real-world problems involve uncertainty, stochastic games and imperfect-information games are worth studying. This thesis targets stochastic games with perfect information, since games in this category are easier to model than imperfect-information games. Chinese dark chess (CDC) and a reduced, solved variant, 2×4 CDC, are two games in this category on which the thesis mainly focuses. The thesis first enhances a game-playing program for CDC based on Monte-Carlo tree search (MCTS) with several existing techniques that incorporate additional knowledge. The additional knowledge is manually designed and is incorporated through four techniques: early playout terminations, implicit minimax backups, quality-based rewards, and progressive bias. Combining all four yields a win rate of 84.75% (±1.90%) against the original program. In addition, the thesis investigates three strength-analysis metrics on 2×4 CDC: win rates against other players, prediction rates on expert actions, and mean squared errors against position values. Experiments show that win rates are indeed good indicators of programs' strengths; the other two metrics are also good indicators, though not as good as win rates. Another analysis on 2×4 CDC applies the AlphaZero algorithm, a reinforcement learning algorithm that achieved superhuman play in chess, shogi, and Go. Experiments show that the algorithm can learn the theoretical values and optimal plays even in stochastic games. Finally, the thesis studies two more stochastic games with perfect information: EinStein Würfelt Nicht! (EWN) and 2048-like games. Another kind of reinforcement learning algorithm, temporal difference learning, is applied to both. For EWN, a program combining three techniques that use the learned knowledge (progressive bias, prior knowledge, and epsilon-greedy playouts) achieves a win rate of 62.25% (±2.12%) against the original program. For 2048-like games, a multistage variant of temporal difference learning improves the learned knowledge.
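Of the techniques listed in the abstract, progressive bias is the easiest to state compactly: a heuristic prior is added to the UCT selection score with a weight that decays as a move accumulates visits, so hand-crafted knowledge guides the search early and fades as statistics accumulate. The sketch below illustrates the idea; the constants, move names and statistics are illustrative assumptions, not the thesis's tuned values.

import math

def uct_with_progressive_bias(child_value, child_visits, parent_visits,
                              heuristic, c=1.4, w=1.0):
    exploration = c * math.sqrt(math.log(parent_visits) / (child_visits + 1))
    bias = w * heuristic / (child_visits + 1)   # progressive bias term
    return child_value + exploration + bias

# Example: pick the move maximizing the biased UCT score.
stats = {  # move -> (mean value, visits, heuristic prior)
    "capture": (0.55, 40, 0.9),
    "advance": (0.60, 25, 0.2),
    "retreat": (0.40, 10, 0.1),
}
parent_n = sum(n for _, n, _ in stats.values())
best = max(stats, key=lambda m: uct_with_progressive_bias(
    stats[m][0], stats[m][1], parent_n, stats[m][2]))
print(best)  # move chosen for the next simulation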
APA, Harvard, Vancouver, ISO, and other styles
25

Montazeri, Zarrin. "Achieving Perfect Location Privacy in Wireless Devices Using Anonymization". 2017. https://scholarworks.umass.edu/masters_theses_2/478.

Full text source
Abstract:
The popularity of mobile devices and location-based services (LBS) has created great concern regarding the location privacy of the users of such devices and services. Anonymization is a common technique used to protect the location privacy of LBS users. It assigns a random pseudonym to each user, and these pseudonyms can change over time. Here, we provide a general information-theoretic definition of perfect location privacy and prove that perfect location privacy is achievable for mobile devices when the anonymization technique is used appropriately. First, we assume that a user's current location is independent of her past locations. Using this i.i.d. model, we show that if the user's pseudonym is changed before O(n^(2/(r−1))) anonymized observations are made by the adversary for that user, then she has perfect location privacy, where n is the number of users in the network and r is the number of possible locations the user might occupy. Then, we model each user's movement by a Markov chain, so that a user's current location depends on her previous locations, which is a more realistic model for approximating real-world data. We show that perfect location privacy is achievable in this model if the user's pseudonym is changed before O(n^(2/(|E|−r))) anonymized observations are collected by the adversary for that user, where |E| is the number of edges in the user's Markov model.
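As a rough sketch of the scheme described above: all users are re-labelled by a fresh random permutation of pseudonyms before the adversary can collect on the order of n^(2/(r−1)) anonymized observations per user (i.i.d. model). Since the bound is asymptotic, the constant factor and helper names below are illustrative assumptions.

import random

def observation_budget(n, r, c=1.0):
    # Observations per user the adversary may collect before pseudonyms
    # must change, following the i.i.d.-model bound O(n^(2/(r-1))).
    return int(c * n ** (2.0 / (r - 1)))

def fresh_pseudonyms(user_ids):
    # Re-label all users with a fresh random permutation of identifiers.
    shuffled = list(user_ids)
    random.shuffle(shuffled)
    return dict(zip(user_ids, shuffled))

n, r = 10_000, 5
print(observation_budget(n, r))          # n^(2/(r-1)) = 10000^0.5 = 100
pseudonyms = fresh_pseudonyms(range(n))  # change before the budget is hit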
APA, Harvard, Vancouver, ISO, and other styles
26

Lin, Yi-Mu. "Simultaneous Bandwidth Allocation Design for Traffic Signal Timing Plans in Urban Grid Traffic Networks under Perfect Traffic Information". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2707200616441500.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Yi-Mu, and 林沂穆. "Simultaneous Bandwidth Allocation Design for Traffic Signal Timing Plans in Urban Grid Traffic Networks under Perfect Traffic Information". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/70432513344242572682.

Full text source
Abstract:
Master's
National Taiwan University
Graduate Institute of Electrical Engineering
94
In the past, traffic signal control strategies have used gathered traffic patterns as inputs to formulate their signal timing plans. As Intelligent Transportation Systems (ITS) develop, travelers' information can be collected through modern communication technology, changing the types of traffic information available. To make efficient use of this prospective advanced traffic information, a new real-time traffic signal control scheme, the simultaneous bandwidth allocation (SBA) design, is proposed. A future scenario with perfect traffic information for both the traffic signal controller and travelers is considered. The SBA design takes the queued vehicles at each intersection of a street as inputs and tries to maximize the utility of the given bandwidth on a local urban grid network. Several system performance indexes (PIs) are presented to examine the performance of the bandwidth selection, and the bandwidth-selection problem arising in SBA is solved by different PI-based selection mechanisms. To test the feasibility of the dynamic SBA design, a simple flow-changing algorithm is used to illustrate the performance of the proposed bandwidth-selection strategies. Applying different total flow rates, it is found that these bandwidth-selection approaches produce the same results once the flow rate equals or exceeds the dispersal rate of queued vehicles. In addition, the dynamic SBA performs best on the PIs when the incoming flow rate equals the dispersal rate.
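As a rough sketch of the PI-based bandwidth selection described above: each candidate bandwidth is scored by a performance index built from the queued vehicles it would leave unserved plus the delay it imposes on cross traffic, and the best-scoring candidate is chosen. The candidate set, dispersal model and index below are illustrative assumptions, not the thesis's design.

def performance_index(queues, band, dispersal_rate=0.5, cross_penalty=0.3):
    # PI: main-street vehicles still queued after one green band passes,
    # plus a delay term for the cross traffic that the band holds back.
    served = band * dispersal_rate
    residual = sum(max(q - served, 0) for q in queues)
    return residual + cross_penalty * band * len(queues)

def select_bandwidth(queues, candidates):
    # Pick the candidate bandwidth (seconds of green band) with the best PI.
    return min(candidates, key=lambda b: performance_index(queues, b))

queues = [12, 30, 8, 22]  # queued vehicles at four intersections
print(select_bandwidth(queues, [10, 20, 30, 40, 50]))  # -> 20 here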
APA, Harvard, Vancouver, ISO, and other styles
28

Pinmanee, Saichon. "Logistics Integration for Improving Distribution Performance: in the Context of Thai Egg Industry". Thesis, 2016. https://vuir.vu.edu.au/30149/.

Full text source
Abstract:
Agricultural products are mostly perishable and require special logistics operations for storage, transportation and distribution to guarantee food safety and freshness. Logistics integration is critical for improving perishable food distribution. Although successful logistics integration has offered competitive advantage to firms operating in a wide range of industries, it has not yet reached its full potential in the Thai agricultural sector. In Thailand, the semi-industrial (commonly referred to as small and medium sized in the extant literature) egg industry is an important agricultural sector. However, the industry presently faces critical issues stemming primarily from inadequate logistics, resulting in suboptimal performance such as unreliable delivery of goods and long or unpredictable order-fulfilment lead times. Empirical evidence indicates that the lack of a comprehensive logistics supply chain and the absence of full integration of all related processes cause these issues. In the extant studies in this field, factors such as information integration, logistics operations coordination, organisational relationships, and institutional support are posited to play the main role in logistics integration. Hence, the present study examines the role of these logistics integration factors in improving logistics performance (specifically perfect order fulfilment and order-fulfilment lead times) and identifies the factors that can significantly affect these relationships. The findings of this study will assist in better integrating egg distribution logistics and will thus benefit the egg farmers, wholesalers, and retailers operating in the chain, with the potential to improve distribution performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Martirosyan, Sosina [Verfasser]. "Perfect hash families, identifiable parent property codes and covering arrays / vorgelegt von Sosina Martirosyan". 2003. http://d-nb.info/970934955/34.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
30

Bossé, Éric-Olivier. "Transfert d'information quantique et intrication sur réseaux photoniques". Thèse, 2017. http://hdl.handle.net/1866/20307.

Full text source
APA, Harvard, Vancouver, ISO, and other styles