Dissertations / Theses on the topic 'Dynamic updates'


Consult the top 50 dissertations / theses for your research on the topic 'Dynamic updates.'


1

Stoyle, Gareth Paul. "A theory of dynamic software updates." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612746.

2

Spetz-Nyström, Simon. "Dynamic updates of mobile apps using JavaScript." Thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119351.

Abstract:
Updates are a natural part of the life cycle of an application. The traditional way of updating an application, by stopping it, replacing it with the new version and restarting it, is lacking in many ways. There has been previous research in the field of dynamic software updates (DSU) that attempts to solve this problem by updating the app while it is running. Most of the previous research has focused on static languages like C and Java; research on dynamic languages has been lacking. This thesis takes advantage of the dynamic features of JavaScript to allow for dynamic updates of applications for mobile devices. The solution is implemented and used to answer questions about how correctness can be ensured and what state transfer needs to be manually written by a programmer. The conclusion is that most failures that occur as the result of an update and are in need of a manually written state transfer can be put into one of three categories. To verify the correctness of an update, tests for these types of failures should be performed.
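The state-transfer problem the abstract describes can be illustrated with a minimal, hypothetical sketch (in Python rather than JavaScript; the class and function names are invented): when an update changes an object's layout, a manually written transfer function must map the live state of the old version onto the new one.

```python
class CounterV1:
    def __init__(self):
        self.count = 0
    def increment(self):
        self.count += 1

class CounterV2:
    """New version renames the field and adds a history list."""
    def __init__(self):
        self.total = 0
        self.history = []
    def increment(self):
        self.total += 1
        self.history.append(self.total)

def migrate_state(old, new):
    # Manually written state transfer: map old fields onto the new layout.
    new.total = old.count
    new.history = list(range(1, old.count + 1))
    return new

old = CounterV1()
for _ in range(3):
    old.increment()

new = migrate_state(old, CounterV2())  # dynamic update keeps live state
new.increment()
print(new.total)    # 4
print(new.history)  # [1, 2, 3, 4]
```

The failure categories the thesis identifies arise exactly when such a mapping is missing or wrong, e.g. if `migrate_state` forgot to reconstruct `history`.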
3

Kim, Dong Kwan. "Applying Dynamic Software Updates to Computationally-Intensive Applications." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28206.

Abstract:
Dynamic software updates change the code of a computer program while it runs, thus saving the programmer's time and using computing resources more productively. This dissertation establishes the value of and recommends practices for applying dynamic software updates to computationally-intensive applications, a computing domain characterized by long-running computations, expensive computing resources, and a tedious deployment process. This dissertation argues that updating computationally-intensive applications dynamically can reduce their time-to-discovery metrics (the total time it takes from posing a problem to arriving at a solution) and, as such, should become an intrinsic part of their software lifecycle. To support this claim, this dissertation presents the following technical contributions: (1) a distributed consistency algorithm for synchronizing dynamic software updates in a parallel HPC application, (2) an implementation of the Proxy design pattern that is more efficient than the existing implementations, and (3) a dynamic update approach for Java Virtual Machine (JVM)-based applications using the Proxy pattern to offer flexibility and efficiency advantages, making it suitable for computationally-intensive applications. The contributions of this dissertation are validated through performance benchmarks and case studies involving computationally-intensive applications from the bioinformatics and molecular dynamics simulation domains.
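The Proxy-based indirection the dissertation builds on can be sketched minimally (illustrative Python, not the dissertation's JVM implementation; all names are invented): clients keep a stable reference to a proxy, and only the wrapped target is swapped at update time.

```python
class Proxy:
    """Stable handle held by clients; the implementation behind it can change."""
    def __init__(self, target):
        self._target = target
    def swap(self, new_target):
        self._target = new_target   # the dynamic update point
    def __getattr__(self, name):
        # Forward everything else to the current implementation.
        return getattr(self._target, name)

class SimulationV1:
    def step(self):
        return "v1 step"

class SimulationV2:
    def step(self):
        return "v2 step (optimized kernel)"

sim = Proxy(SimulationV1())
print(sim.step())           # v1 step
sim.swap(SimulationV2())    # long-running client keeps its reference
print(sim.step())           # v2 step (optimized kernel)
```

The efficiency concern the dissertation addresses is precisely the cost of this extra forwarding hop on every call.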
Ph. D.
4

Weißbach, Martin, Nguonly Taing, Markus Wutzler, Thomas Springer, Alexander Schill, and Siobhán Clarke. "Decentralized Coordination of Dynamic Software Updates in the Internet of Things." IEEE, 2016. https://tud.qucosa.de/id/qucosa%3A75282.

Abstract:
Large scale IoT service deployments run on a high number of distributed, interconnected computing nodes comprising sensors, actuators, gateways and cloud infrastructure. Since IoT is a fast-growing, dynamic domain, the implementations of software components are subject to frequent changes addressing bug fixes, quality assurance or changed requirements. To ensure the continuous monitoring and control of processes, software updates have to be conducted while the nodes are operating, without losing any sensed data or actuator instructions. Current IoT solutions usually support the centralized management and automated deployment of updates but are restricted to broadcasting the updates and running local update processes at all nodes. In this paper we propose an update mechanism for IoT deployments that considers dependencies between services across multiple nodes involved in a common service and supports a coordinated update of component instances on distributed nodes. We rely on LyRT on all IoT nodes as the runtime supporting local disruption-minimal software updates. Our proposed middleware layer coordinates updates on a set of distributed nodes. We evaluated our approach using a demand response scenario from the smart grid domain.
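The coordination problem the paper tackles can be caricatured with a two-phase-style sketch (illustrative Python; the `Node` class and protocol are invented and do not reflect the paper's LyRT-based middleware): every node hosting part of the shared service stages the new version first, and activation happens only once all nodes are ready.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.version = 1
        self.ready = False
    def prepare(self, new_version):
        self.staged = new_version   # stage the update without disruption
        self.ready = True
    def commit(self):
        self.version = self.staged  # local, disruption-minimal swap

def coordinated_update(nodes, new_version):
    for n in nodes:                 # phase 1: stage on every dependent node
        n.prepare(new_version)
    if all(n.ready for n in nodes): # phase 2: activate only when all are ready
        for n in nodes:
            n.commit()

nodes = [Node("sensor"), Node("gateway"), Node("cloud")]
coordinated_update(nodes, 2)
print([n.version for n in nodes])  # [2, 2, 2]
```

Without such coordination, a broadcast-and-update-locally scheme can leave nodes of one service running incompatible component versions mid-update.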
5

Pukall, Mario. "JAVADAPTOR: Unrestricted dynamic updates of Java applications." Supervisor: Gunter Saake. Magdeburg: Universitätsbibliothek, 2012. http://d-nb.info/1051445507/34.

6

Narra, Hemanth, and Egemen K. Çetinkaya. "Performance Analysis of AeroRP with Ground Station Updates in Highly-Dynamic Airborne Telemetry Networks." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595669.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Highly dynamic airborne telemetry networks pose unique challenges for data transmission. Domain-specific multi-hop routing protocols are necessary to cope with these challenges and AeroRP is one such protocol. In this paper, we discuss the operation of various AeroRP modes and analyse their performance using the ns-3 network simulator. We compare the performance of beacon, beaconless, and ground station (GS) modes of AeroRP. The simulation results show the advantages of having a domain-specific routing protocol and also highlight the importance of ground station updates in discovering routes.
7

Keppeler, Jens. "Answering Conjunctive Queries and FO+MOD Queries under Updates." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21483.

Abstract:
This thesis investigates the query evaluation problem for fixed queries over fully dynamic databases, where tuples can be inserted or deleted. The task is to design a dynamic algorithm that immediately reports the new result of a fixed query after every database update. In particular, the goal is to construct a data structure that allows us to support the following scenario. After every database update, the data structure can be updated in constant time such that afterwards we are able * to test within constant time for a given tuple whether or not it belongs to the query result, * to output the number of tuples in the query result, * to enumerate all tuples in the new query result with constant delay and * to enumerate the difference between the old and the new query result with constant delay. In the first part, conjunctive queries and unions of conjunctive queries on arbitrary relational databases are considered. The notion of q-hierarchical conjunctive queries (and t-hierarchical conjunctive queries for testing) is introduced and it is shown that the result of each such query on a dynamic database can be maintained efficiently in the sense described above. Moreover, this notion is extended to aggregate queries. It is shown that the preparation of learning a polynomial regression function can be done in constant time if the training data are taken (and maintained under updates) from the query result of a q-hierarchical query. With logarithmic update time the following routine is supported: upon input of a natural number j, output the j-th tuple that will be enumerated. In the second part, queries in first-order logic (FO) and its extension with modulo-counting quantifiers (FO+MOD) are considered, and it is shown that they can be efficiently evaluated under updates, provided that the dynamic database does not exceed a certain degree bound, and the counting, testing, enumeration and difference routines are supported.
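The maintained-result interface described above can be caricatured with a small sketch (illustrative Python; real q-hierarchical maintenance does not simply materialise the result set, but the operations it must support are these):

```python
class MaintainedResult:
    """Toy stand-in for a dynamically maintained query result."""
    def __init__(self):
        self._tuples = set()
    def insert(self, t):
        self._tuples.add(t)        # O(1) update on tuple insertion
    def delete(self, t):
        self._tuples.discard(t)    # O(1) update on tuple deletion
    def test(self, t):
        return t in self._tuples   # constant-time membership test
    def count(self):
        return len(self._tuples)   # constant-time count
    def enumerate(self):
        yield from self._tuples    # constant delay between tuples

r = MaintainedResult()
r.insert(("alice", "db"))
r.insert(("bob", "db"))
r.delete(("bob", "db"))
print(r.test(("alice", "db")), r.count())  # True 1
```

The thesis's contribution is achieving these guarantees without materialisation, for exactly the q-hierarchical (and t-hierarchical) query classes.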
8

Bergmark, Max. "Designing a performant real-time modular dynamic pricing system : Studying the performance of a dynamic pricing system which updates in real-time, and its application within the golfing industry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276241.

Abstract:
In many industries, the prices of products are dynamically calculated to maximize revenue while maintaining customer satisfaction. In this paper, an approach of online calculation of prices is investigated, where customers always receive an up-to-date price. The most common method used today is to update prices at some interval and present the latest price to the customer. This standard method provides fast responses, and accurate prices for the most part. However, if the dynamic pricing model could benefit from very fast price updates, an online calculation approach might provide better price accuracy. The main advantage of this approach is the combination of short-term accuracy and long-term stability. Short-term behaviour is handled by the online price calculation with real-time updates, while long-term behaviour is handled by statistical analysis of booking behaviour, which is condensed into a demand curve. In this paper, the long-term statistical analysis for calculating demand curves is described, along with the benefit of short-term price adjustments, which can be beneficial both for producers and consumers.
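The combination the abstract describes might look roughly like this (a toy illustration; the function, its parameters and the 0.5 occupancy weighting are all invented, not the thesis's model): a demand factor derived offline from historical bookings, adjusted online by current occupancy.

```python
def price(base, demand_factor, booked, capacity):
    """Online price: long-term demand curve value times a real-time adjustment.

    base          -- list price for the slot (e.g. a tee time)
    demand_factor -- from the statistically derived demand curve (long term)
    booked        -- bookings so far for this slot (real-time signal)
    capacity      -- total slots available
    """
    occupancy = booked / capacity
    # Illustrative rule: up to +50% when the slot is nearly fully booked.
    return round(base * demand_factor * (1 + 0.5 * occupancy), 2)

print(price(40.0, 1.2, 10, 20))  # 60.0
```

The point of computing this per request, rather than on a timer, is that the occupancy term reflects bookings made seconds ago.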
9

Helbig, Marde. "Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation." Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/28161.

Abstract:
Most optimisation problems in everyday life are not static in nature, have multiple objectives, and have at least two objectives that are in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs) and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOO efficiently, functions are required that resemble real-world DMOOPs, called benchmark functions, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is a lack of standard benchmark functions and performance measures. To address this problem, an overview of the current literature is provided and shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. Guides steer the optimisation process of DVEPSO; therefore, various guide update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on the performance of DVEPSO. The investigated parameters include approaches to manage boundary constraint violations, approaches to share knowledge between the sub-swarms, and responses to changes in the environment that are applied either to the particles of the sub-swarms or to the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms.
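For readers unfamiliar with PSO, the canonical velocity and position update that algorithms such as DVEPSO build on looks roughly like this (a generic sketch with conventional parameter values; DVEPSO's guide selection and multi-swarm structure are not shown):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO step for a single particle.

    x, v   -- current position and velocity vectors
    pbest  -- the particle's personal best position
    gbest  -- the swarm's (or guide's) best position
    w      -- inertia weight; c1, c2 -- cognitive/social coefficients
    """
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (p - xi) + c2 * r2 * (g - xi)
         for xi, vi, p, g in zip(x, v, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

random.seed(42)
x, v = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [2.0, 2.0])
print(x, v)  # new position and velocity after one step
```

In a vector-evaluated, multi-swarm setting each sub-swarm optimises one objective, and the guide (`gbest` here) is exchanged between sub-swarms, which is exactly where the thesis's guide update approaches come in.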
Thesis (PhD)--University of Pretoria, 2012.
10

Liang, Weifa. "Designing Efficient Parallel Algorithms for Graph Problems." The Australian National University, Department of Computer Science, 1997. http://thesis.anu.edu.au./public/adt-ANU20010829.114536.

Abstract:
Graph algorithms are concerned with the algorithmic aspects of solving graph problems. The problems are motivated from and have application to diverse areas of computer science, engineering and other disciplines. Problems arising from these areas of application are good candidates for parallelization since they often have both intense computational needs and stringent response time requirements. Motivated by these concerns, this thesis investigates parallel algorithms for these kinds of graph problems that have at least one of the following properties: the problems involve some type of dynamic updates; the sparsification technique is applicable; or the problems are closely related to communications network issues. The models of parallel computation used in our studies are the Parallel Random Access Machine (PRAM) model and the practical interconnection network models such as meshes and hypercubes.

Consider a communications network which can be represented by a graph G = (V, E), where V is a set of sites (processors), and E is a set of links which are used to connect the sites (processors). In some cases, we also assign weights and/or directions to the edges in E. Associated with this network, there are many problems such as (i) whether the network is k-edge (k-vertex) connected with fixed k; (ii) whether there are k-edge (k-vertex) disjoint paths between u and v for a pair of given vertices u and v after the network is dynamically updated by adding and/or deleting an edge etc; (iii) whether the sites in the network can communicate with each other when some sites and links fail; (iv) identifying the first k edges in the network whose deletion will result in the maximum increase in the routing cost in the resulting network for fixed k; (v) how to augment the network at optimal cost with a given feasible set of weighted edges such that the augmented network is k-edge (k-vertex) connected; (vi) how to route messages through the network efficiently.

In this thesis we answer the problems mentioned above by presenting efficient parallel algorithms to solve them. As far as we know, most of the proposed algorithms are the first ones in the parallel setting.

Even though most of the problems concerned in this thesis are related to communications networks, we also study the classic edge-coloring problem. The outstanding difficulty in solving this problem in parallel is that we do not yet know whether or not it is in NC. In this thesis we present an improved parallel algorithm for the problem which needs O(Δ^4.5 log^3 Δ log n + Δ^4 log^4 n) time using O(n^2 Δ + n Δ^3) processors, where n is the number of vertices and Δ is the maximum vertex degree. Compared with a previously known result on the same model, we improved by an O(Δ^1.5) factor in time. The non-trivial part is to reduce this problem to the edge-coloring update problem. We also generalize this problem to the approximate edge-coloring problem by giving a faster parallel algorithm for the latter case.

Throughout the design and analysis of parallel graph algorithms, we also find that a technique called the sparsification technique is very powerful in the design of efficient sequential and parallel algorithms on dense undirected graphs. We believe that this technique may be useful in its own right for guiding the design of efficient sequential and parallel algorithms for problems in other areas as well as in graph theory.
11

Baumann, Andrew. "Dynamic update for operating systems." University of New South Wales, Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28356.

Abstract:
Patches to modern operating systems, including bug fixes and security updates, and the reboots and downtime they require, cause tremendous problems for system users and administrators. The aim of this research is to develop a model for dynamic update of operating systems, allowing a system to be patched without the need for a reboot or other service interruption. In this work, a model for dynamic update based on operating system modularity is developed and evaluated using a prototype implementation for the K42 operating system. The prototype is able to update kernel code and data structures, even when the interfaces between kernel modules change. When applying an update, at no point is the system's entire execution blocked, and there is no additional overhead after an update has been applied. The base runtime overhead is also very low. An analysis of the K42 revision history shows that approximately 79% of past performance and bug-fix changes to K42 could be converted to dynamic updates, and the proportion would be even higher if the changes were being developed for dynamic update. The model also extends to other systems such as Linux and BSD, which, although structured modularly, are not strictly object-oriented like K42. The experience with this approach shows that dynamic update for operating systems is feasible given a sufficiently modular system structure, allows maintenance patches and updates to be applied without disruption, and need not constrain system performance.
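The modularity-based update model can be caricatured in a few lines (an illustrative sketch, not K42's actual mechanism; the table and module names are invented): all calls go through an indirection table, so a module's implementation can be replaced between invocations without restarting the system.

```python
MODULES = {}

def register(name, impl):
    """Install or replace a module implementation at runtime."""
    MODULES[name] = impl

def call(name, *args):
    """All clients invoke modules through this indirection point."""
    return MODULES[name](*args)

register("sched_pick", lambda q: q[0])        # old policy: FIFO
print(call("sched_pick", [3, 1, 2]))          # 3
register("sched_pick", lambda q: min(q))      # dynamic update: new policy
print(call("sched_pick", [3, 1, 2]))          # 1
```

The hard parts the thesis actually solves, such as quiescing in-flight calls and transforming module state when interfaces change, sit behind this simple swap.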
12

CAMARA, EDUARDO CASTRO MOTA. "A STUDY OF DYNAMIC UPDATE FOR SOFTWARE COMPONENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23529@1.

Abstract:
The component-based development of software systems consists of composing systems from ready-made, reusable software units. Many software component systems in production need to be available 24 hours a day, 7 days a week. Dynamic updates allow systems to be upgraded without interrupting the execution of their services, applying the update at runtime. Many dynamic software update techniques in the literature use applications specifically implemented to cover the presented points, and only a few draw on the historical needs of a real system. This work studies the main cases of updates that occur in a widely used component system, Openbus, an integration infrastructure responsible for the communication of various applications for the acquisition, processing and interpretation of data. In addition to this study, we implement a dynamic software update solution to accommodate the needs of this system. Then, using the implemented solution, we present an overhead test and some update applications on Openbus.
13

Tesone, Pablo. "Dynamic Software Update for Production and Live Programming Environments." Thesis, Ecole nationale supérieure Mines-Télécom Lille Douai, 2018. http://www.theses.fr/2018MTLD0012/document.

Abstract:
Updating applications during their execution is used both in production, to minimize application downtime, and in integrated development environments, to provide live programming support. Nevertheless, these two scenarios present different challenges, so Dynamic Software Update (DSU) solutions tend to be designed for only one of these use cases. For example, DSUs for live programming typically do not implement safe point detection or instance migration, while production DSUs require manual generation of patches and lack IDE integration. These solutions also have a limited ability to update themselves or the language core libraries, and some of them incur execution penalties outside the update window. In this PhD, we propose a unified DSU named gDSU for both live programming and production environments. gDSU provides safe update point detection using call stack manipulation and a reusable instance migration mechanism to minimize manual intervention in patch generation. It also supports updating the core language libraries as well as the update mechanism itself, thanks to its incremental copy of the modified objects and its atomic commit operation. gDSU does not affect the global performance of the application and presents only a run-time penalty during the update window. For example, gDSU is able to apply an update impacting 100,000 instances in 1 second, making the application unresponsive for only 250 milliseconds. The rest of the time the application runs normally while gDSU looks for a safe update point, during which modified elements will be copied. We also present extensions of gDSU to support transactional live programming and atomic automatic refactorings, which increase the usability of live programming environments.
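The safe-update-point idea (apply the patch only when no thread is executing code the patch replaces) can be sketched as follows. This is an illustrative Python approximation of call-stack analysis, not gDSU's implementation; `old_handler` and `AFFECTED` are invented names.

```python
import sys
import threading
import time

AFFECTED = {"old_handler"}          # functions the patch would replace

def stacks_are_safe():
    """True when no thread's call stack contains a frame from AFFECTED code."""
    for frame in sys._current_frames().values():
        f = frame
        while f is not None:
            if f.f_code.co_name in AFFECTED:
                return False        # a thread is mid-execution: unsafe point
            f = f.f_back
    return True

def old_handler():
    time.sleep(0.3)                 # simulates a thread busy in old code

t = threading.Thread(target=old_handler)
t.start()
time.sleep(0.05)
print(stacks_are_safe())  # False: a thread is still inside old_handler
t.join()
print(stacks_are_safe())  # True: now safe to apply the update
```

gDSU's approach additionally manipulates the stack to reach such a point quickly instead of merely polling for one.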
14

Pham, Thanh H. "Dynamic Update Techniques for Online Maps and Attributes Data." NSUWorks, 2001. http://nsuworks.nova.edu/gscis_etd/771.

Abstract:
Online databases containing geographic and related tabular data for maps and attributes often require continuous updates from widely distributed sources. For some applications, these data are dynamic, and thus are of little value if they do not reflect the latest information or changes. A status map that graphically depicts temporal data affecting accountability is an example of this type of data. How can perpetual data updates in the database be accommodated alongside the need to deliver online information in real time, without compromising either? The goal of the dissertation was to analyze and evaluate techniques and technology for data collection and storage, online data delivery, and real-time upload. The result of this analysis culminated in the design and prototype of a system that allowed real-time delivery of up-to-date maps and attributes information. A literature review revealed that an ample amount of research material existed on the theory and practice of developing dynamic update techniques. Despite that fact, no research literature was available that specifically dealt with dynamic update techniques that provide for real-time delivery of up-to-date maps while allowing online update of attributes information. This dissertation was the first attempt at providing research material in this important area. The procedure consisted of five major steps encompassing a number of smaller steps, and culminated in the development of a prototype. The steps included gathering data collection and storage information, investigating technological advances in data delivery and access, studying dynamic update techniques, assessing the feasibility of an implementation solution, and developing a prototype. The results revealed that the dynamic update technique as implemented in the prototype met the need for timely delivery of accountability, geospatial, and metadata information within an infrastructure.
15

Anderson, Gabrielle. "Behavioural properties and dynamic software update for concurrent programmes." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/353281/.

Abstract:
Software maintenance is a major part of the development cycle. The traditional methodology for rolling out an update to existing programs is to shut down the system, modify the binary, and restart the program. Downtime has significant disadvantages. In response to such concerns, researchers and practitioners have investigated how to perform updates on running programs whilst maintaining various desired properties. In a multi-threaded setting this is further complicated by the interleaving of different threads' actions. In this thesis we investigate how to prove that safety and liveness are preserved when updating a program. We present two possible approaches; the main intuition behind each of these is to find quiescent points where updates are safe. The first approach requires global synchronisation, and is more generally applicable, but can delay updates indefinitely. The second restricts the class of programs that can be updated, but permits update without global synchronisation, and guarantees that updates are applied. We provide full proofs of all relevant properties.
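The first approach the abstract describes, global synchronisation at a quiescent point, can be sketched as follows (illustrative Python; the lock-based protocol and all names are invented, and a real system would use finer-grained synchronisation): worker threads hold a shared lock while inside update-sensitive code, so the updater, taking the same lock, swaps versions only when every thread is quiescent.

```python
import threading

state_lock = threading.Lock()
handler = lambda x: x + 1          # version 1 of the updatable code

def worker(x, results):
    with state_lock:               # quiescence protocol: no update mid-call
        results.append(handler(x))

def apply_update(new_handler):
    global handler
    with state_lock:               # blocks until all workers are quiescent
        handler = new_handler      # atomic swap at the quiescent point

results = []
threads = [threading.Thread(target=worker, args=(i, results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

apply_update(lambda x: x * 10)     # version 2
worker(5, results)
print(sorted(results[:3]), results[3])  # [1, 2, 3] 50
```

The indefinite-delay problem the thesis identifies is visible here: if some worker never releases the lock, `apply_update` never runs, which motivates the second, synchronisation-free approach.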
APA, Harvard, Vancouver, ISO, and other styles
16

SOARES, Rodrigo Hernandez. "Gerenciamento Dinâmico de Modelos de Contexto: Estudo de Caso Baseado em CEP." Universidade Federal de Goiás, 2012. http://repositorio.bc.ufg.br/tede/handle/tde/521.

Full text
Abstract:
Context models that describe dynamic context-aware scenarios usually need to be frequently updated. Some examples of situations that motivate these updates are the appearance of new services and context providers, the mobility of the entities described in these models, among others. Generally, updates on models imply redevelopment of the architectural components of context-aware systems based on these models. However, as these updates in dynamic scenarios tend to be more frequent, it is desirable that they occur at runtime. This dissertation presents an infrastructure for dynamic management of context models based on the fundamentals of complex event processing, or CEP. This infrastructure allows the fundamental abstractions from which a model is built to be updated at runtime. As these updates can impact systems based on the updated models, this dissertation identifies and analyzes these impacts, which are reproduced in a case study that aims to evaluate the proposed infrastructure by demonstrating how it deals with the impacts mentioned.
APA, Harvard, Vancouver, ISO, and other styles
17

Tumati, Pradeep. "Software Hot Swapping." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31362.

Full text
Abstract:
The emergence of the Internet has sparked a tremendous explosion in the special class of systems called mission-critical systems. These systems are so vital to their intended tasks that they must operate continuously. Two problems affect them: unplanned, and therefore disastrous, downtime and planned downtime for software maintenance. As the pressure to keep these systems operating continuously increases, scheduling downtime becomes complex. However, dynamically modifying mission-critical systems without disruption can reduce the need for planned downtime. Every executing process has executing code tightly coupled with an associated state, which continuously changes as the code executes. A dynamic modification at this juncture involves modifying the executable code and the state present within the binary image of the associated process. An ill-timed modification can create runtime incompatibilities that are hard to rectify and eventually cause a system crash. The purpose of the research in this thesis is to examine the causes of these incompatibilities and propose the design of a dynamic modification technique: Software Hot Swapping. To achieve these objectives, the researcher proposes mechanisms that can prevent these incompatibilities, examines the characteristics and implementation issues of such mechanisms, and demonstrates dynamic modification with a simple prototype Hot Swapping program.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
18

Mensah, Pernelle. "Generation and Dynamic Update of Attack Graphs in Cloud Providers Infrastructures." Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0011.

Full text
Abstract:
In traditional environments, attack graphs can paint a picture of the security exposure of the environment. Indeed, they represent a model depicting the many steps an attacker can take to compromise an asset. They can form a basis for automated risk assessment, relying on an identification and valuation of critical assets in the network. This allows the design of proactive and reactive counter-measures for risk mitigation and can be leveraged for security monitoring and network hardening. Our thesis aims to apply a similar approach in Cloud environments, which implies considering the new challenges incurred by these modern infrastructures, since the majority of attack graph methods were designed with traditional environments in mind. Novel virtualization attack scenarios, as well as inherent properties of the Cloud, namely elasticity and dynamism, are a cause for concern. To realize this objective, a thorough inventory of virtualization vulnerabilities was performed, for the extension of existing vulnerability templates. Based on an attack graph representation model suitable to the Cloud scale, we were able to leverage Cloud and SDN technologies for the purpose of building Cloud attack graphs and maintaining them in an up-to-date state. Algorithms able to cope with the frequent rate of change occurring in virtualized environments were designed and extensively tested on a real-scale Cloud platform for performance evaluation, confirming the validity of the methods proposed in this thesis and enabling the Cloud administrator to maintain an up-to-date attack graph of this environment.
APA, Harvard, Vancouver, ISO, and other styles
19

Yin, Li. "Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5040.

Full text
Abstract:
In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model to achieve the goal of removing dynamic objects in a video sequence. The process of estimating the background model with temporal changes in the absence of foreground moving objects is called adaptive background modeling. In this thesis, three adaptive background modeling approaches were presented for the purpose of developing "teacher removal" algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
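The common core of such adaptive background models, a temporal update that blends background pixels into the estimate while excluding detected foreground, can be sketched in a few lines. This is an illustrative single-step update on flat pixel lists, not one of the thesis's three algorithms; the function name and parameters are assumptions:

```python
def update_background(background, frame, alpha=0.05, fg_threshold=30):
    """One step of a simple adaptive background model: pixels close to the
    current background estimate are blended in with learning rate alpha;
    pixels differing by more than fg_threshold are treated as foreground
    and left out of the update, so moving objects do not contaminate the
    background estimate."""
    new_bg, fg_mask = [], []
    for b, f in zip(background, frame):
        if abs(f - b) > fg_threshold:
            new_bg.append(b)          # dynamic foreground: keep old estimate
            fg_mask.append(True)
        else:
            new_bg.append((1 - alpha) * b + alpha * f)  # temporal update
            fg_mask.append(False)
    return new_bg, fg_mask
```

Applied frame after frame, the estimate tracks gradual illumination changes while the mask flags the dynamic objects to be removed.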
APA, Harvard, Vancouver, ISO, and other styles
20

Moosa, Naseera. "An updated model of the krill-predator dynamics of the Antarctic ecosystem." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25490.

Full text
Abstract:
The objective of this thesis is to update the Mori-Butterworth (2006) model of the krill-predator dynamics of the Antarctic ecosystem. Their analysis aimed to determine whether predator-prey interactions alone could broadly explain the observed population trends of the species considered in their model. In this thesis, the Antarctic ecosystem is outlined briefly and details are given of the main krill-eating predators including whales, seals, fish and penguins, together with an historical record of the human harvesting in the region. The abundances and per capita krill consumption of the krill-predators are calculated and used to determine the main krill-predators to be used in the updated model developed. These predators are found to be the blue, fin, humpback and minke whales and crabeater and Antarctic fur seals. The three main ship surveys (IDCR/SOWER, JARPA and JSV) used to estimate whale abundance, and the abundance estimation method itself (called distance sampling), are summarised. Updated estimates of abundance and trends are listed for the main krill-predators. Updated estimates for the biological parameters needed for the ecosystem model are also reported, and include some differences in approaches to those adopted for the Mori-Butterworth model. The background to the hypothesis of a krill-surplus during the mid-20th century is discussed as well as the effects of environmental change in the context of possible causes of the population changes of the main krill-feeding predators over the last century. Key features of the results of the updated model are the inclusion of a depensatory effect for Antarctic fur seals in the krill and predator dynamics, and the imposition of bounds on Ka (the carrying capacity of krill in Region a, in the absence of its predators); these lead to a better fit overall.
A particular difference in results compared to those from the Mori-Butterworth model is more oscillatory behaviour in the trajectories for krill and some of its main predators. This likely results from the different approach to modelling natural mortality for krill and warrants further investigation. That may in turn resolve a key mismatch in the model which predicts minke oscillations in the Indo-Pacific region to be out of phase with results from a SCAA assessment of these whales. A number of other areas for suggested future research are listed. The updated model presented in this thesis requires further development before it might be considered sufficiently reliable for providing advice for the regulation and implementation of suitable conservation and harvesting strategies in the Antarctic.
APA, Harvard, Vancouver, ISO, and other styles
21

Sornil, Ohm. "Parallel Inverted Indices for Large-Scale, Dynamic Digital Libraries." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/26131.

Full text
Abstract:
The dramatic increase in the amount of content available in digital forms gives rise to large-scale digital libraries, targeted to support millions of users and terabytes of data. Retrieving information from a system of this scale in an efficient manner is a challenging task due to the size of the collection as well as the index. This research deals with the design and implementation of an inverted index that supports searching for information in a large-scale digital library, implemented atop a massively parallel storage system. Inverted index partitioning is studied in a simulation environment, aiming at a terabyte of text. As a result, a high performance partitioning scheme is proposed. It combines the best qualities of the term and document partitioning approaches in a new Hybrid Partitioning Scheme. Simulation experiments show that this organization provides good performance over a wide range of conditions. Further, the issues of creation and incremental updates of the index are considered. A disk-based inversion algorithm and an extensible inverted index architecture are described, and experimental results with actual collections are presented. Finally, distributed algorithms to create a parallel inverted index partitioned according to the hybrid scheme are proposed, and performance is measured on a portion of the equipment that normally makes up the 100 node Virginia Tech PetaPlex™ system.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
22

Datta, Arijeet Suryadeep. "Self-organised critical system : Bak-Sneppen model of evolution with simultaneous update." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395826.

Full text
Abstract:
Many chaotic and complicated systems cannot be analysed by traditional methods. In 1987 P. Bak, C. Tang, and K. A. Wiesenfeld developed a new concept called Self-Organised Criticality (SOC) to explain the behaviour of composite systems containing a large number of elements that interact over a short range. In general this theory applies to complex systems that naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. It was later shown that many complex phenomena such as flux pinning in superconductors, dynamics of granular systems, earthquakes, droplet formation and biological evolution show signs of SOC. The dynamics of complex systems in nature often occurs in punctuations, or avalanches, rather than following a smooth, gradual path. Extremal dynamics is used to model the temporal evolution of many different complex systems: specifically the Bak-Sneppen evolution model, the Sneppen interface depinning model, the Zaitsev flux creep model, invasion percolation, and several other depinning models. This thesis considers extremal dynamics at constant flux, where the M>1 smallest barriers are simultaneously updated, as opposed to models in the limit of zero flux where only the smallest barrier is updated. For concreteness, we study the Bak-Sneppen (BS) evolution model [Phys. Rev. Lett. 71, 4083 (1993)]. M=1 corresponds to the original BS model. The aim of the present work is to understand analytically, through mean field theory, the random neighbour version of the generalised BS model and verify the results against computer simulations. This is done in order to scrutinise the trustworthiness of our numerical simulations. The computer simulations are found to be identical with results obtained from the analytical approach. Due to this agreement, we know that our simulations will produce reliable results for the nearest neighbour version of the generalised BS model.
Since the nearest neighbour version of the generalised BS model cannot be solved analytically, we have to rely on simulations. We investigate the critical behaviour of both versions of the model using scaling theory. We look at various distributions and their scaling properties, and also measure the critical exponents accurately, verifying whether the scaling relations hold. The effect of increasing from M=1 to M>1 is surprising, with a dramatic decrease in the size of the scaling regime.
APA, Harvard, Vancouver, ISO, and other styles
23

Khanna, Nikita. "A Novel Update to Dynamic Q Algorithm and a Frequency-fold Analysis for Aloha-based RFID Anti-Collision Protocols." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5844.

Full text
Abstract:
Radio frequency identification (RFID) systems are increasingly used for a wide range of applications from supply chain management to mobile payment systems. In a typical RFID system, there is a reader/interrogator and multiple tags/transponders, which can communicate with the reader. If more than one tag tries to communicate with the reader at the same time, a collision occurs resulting in failed communications, which becomes a significantly more important challenge as the number of tags in the environment increases. Collision reduction has been studied extensively in the literature with a variety of algorithm designs specifically tailored for low-power RFID systems. In this study, we provide an extensive review of existing state-of-the-art time domain anti-collision protocols which can generally be divided into two main categories: 1) aloha based and 2) tree based. We explore the maximum theoretical gain in efficiency with a 2-fold frequency division in the ultra-high frequency (UHF) band of 902-928 MHz used for RFID systems in the United States. We analyze how such a modification would change the total number of collisions and improve efficiency for two different anti-collision algorithms in the literature: a relatively basic framed-slotted aloha and a more advanced reservation slot with multi-bits aloha. We also explore how a 2-fold frequency division can be implemented using analog filters for semi-passive RFID tags. Our results indicate significant gains in efficiency for both aloha algorithms especially for midsize populations of tags up to 50. Finally, we propose two modifications to the Q-algorithm, which is currently used as part of the industry standard EPC Class 1 Generation 2 (Gen 2) protocol. 
The Q-Slot-Collision-Counter (QSCC) and Q-Frame-Collision-Counter (QFCC) algorithms change the size of the frame more dynamically, depending on the number of colliding tags in each time slot, with the help of a radar cross-section technique, whereas the standard Q-algorithm uses a fixed parameter for frame adjustment. In fact, the QFCC algorithm is completely independent of the variable "C" which is used in the standard protocol for modifying the frame size. Through computer simulations, we show that the QFCC algorithm is more robust and provides an average efficiency gain of more than 6% on large populations of tags compared to the existing standard.
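For contrast with the QSCC/QFCC variants above, the fixed-parameter slot-count adjustment of the standard Gen 2 Q-algorithm can be sketched as follows. This is a simplified illustration, not code from the thesis; the function name and the string encoding of slot outcomes are invented:

```python
def q_algorithm_round(slot_outcomes, q_init=4, c=0.3):
    """Slot-count (Q) adaptation as in the standard EPC Class 1 Gen 2
    Q-algorithm: the floating-point estimate Qfp grows by a fixed C on
    collided slots, shrinks by C on empty slots, and is unchanged on
    singly-occupied slots. Outcomes are 'collision', 'empty', or 'single'."""
    qfp = float(q_init)
    for outcome in slot_outcomes:
        if outcome == 'collision':
            qfp = min(15.0, qfp + c)   # too many tags answering: enlarge frame
        elif outcome == 'empty':
            qfp = max(0.0, qfp - c)    # too few tags answering: shrink frame
        # 'single': successful read, Qfp unchanged
    return round(qfp)                   # Q used for the next query round
```

The QFCC modification described in the abstract removes the dependence on the fixed constant C entirely, sizing the frame from a per-slot estimate of the number of colliding tags instead.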
APA, Harvard, Vancouver, ISO, and other styles
24

Stephens, Sonia. "Placing birds on a dynamic evolutionary map: Using digital tools to update the evolutionary metaphor of the "tree of life"." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5519.

Full text
Abstract:
This dissertation describes and presents a new type of interactive visualization for communicating about evolutionary biology, the dynamic evolutionary map. This web-based tool utilizes a novel map-based metaphor to visualize evolution, rather than the traditional “tree of life.” The dissertation begins with an analysis of the conceptual affordances of the traditional tree of life as the dominant metaphor for evolution. Next, theories from digital media, visualization, and cognitive science research are synthesized to support the assertion that digital media tools can extend the types of visual metaphors we use in science communication in order to overcome conceptual limitations of traditional metaphors. These theories are then applied to a specific problem of science communication, resulting in the dynamic evolutionary map. Metaphor is a crucial part of scientific communication, and metaphor-based scientific visualizations, models, and analogies play a profound role in shaping our ideas about the world around us. Users of the dynamic evolutionary map interact with evolution in two ways: by observing the diversification of bird orders over time and by examining the evidence for avian evolution at several places in evolutionary history. By combining these two types of interaction with a non-traditional map metaphor, evolution is framed in a novel way that supplements traditional metaphors for communicating about evolution. This reframing in turn suggests new conceptual affordances to users who are learning about evolution. Empirical testing of the dynamic evolutionary map by biology novices suggests that this approach is successful in communicating evolution differently than in existing tree-based visualization methods. Results of evaluation of the map by biology experts suggest possibilities for future enhancement and testing of this visualization that would help refine these successes. 
This dissertation represents an important step forward in the synthesis of scientific, design, and metaphor theory, as applied to a specific problem of science communication. The dynamic evolutionary map demonstrates that these theories can be used to guide the construction of a visualization for communicating a scientific concept in a way that is both novel and grounded in theory. There are several potential applications in the fields of informal science education, formal education, and evolutionary biology for the visualization created in this dissertation. Moreover, the approach suggested in this dissertation can potentially be extended into other areas of science and science communication. By placing birds onto the dynamic evolutionary map, this dissertation points to a way forward for visualizing science communication in the future.
Ph.D.
Doctorate
Arts and Humanities
Texts and Technology
APA, Harvard, Vancouver, ISO, and other styles
25

Bondsman, Benjamin. "Numerical modeling and experimental investigation of large deformation under static and dynamic loading." Thesis, Linnéuniversitetet, Institutionen för byggteknik (BY), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104227.

Full text
Abstract:
Small-kinematics assumptions in classical engineering have been at the center of structural analysis for decades. In recent years the interest in sustainable and optimized structures, lightweight structures and new materials has grown rapidly as a consequence of the desire to achieve economic sustainability. These issues involve non-linear constitutive response of materials and can only be assessed on the basis of geometrically and materially non-linear analysis. Numerical simulations have become a conventional tool in modern engineering, have proven accuracy in computation, and are on the verge of superseding time-consuming and costly experiments. Consequently, this work presents a numerical computational framework for modeling geometrically non-linear large deformation of isotropic and orthotropic materials under static and dynamic loading. The numerical model is applied to isotropic steel in plane strain and orthotropic wood in 3D under static and dynamic loading. In plane strain, the Total Lagrangian, Updated Lagrangian, Newmark-β and Energy Conserving Algorithm time-integration methods are compared and evaluated. In 3D, a Total Lagrangian static approach and a Total Lagrangian based dynamic approach with the Newmark-β time-integration method are proposed to numerically predict deformation of wood under static and dynamic loading. The numerical model's accuracy is validated through an experiment in which a knot-free pine wood board under large deformation is studied. The results indicate the accuracy and capability of the numerical model in predicting the static and dynamic behaviour of wood under large deformation. By contrast, the classical engineering solution proves inaccurate and incapable of predicting the kinematics of the wood board under the studied conditions.
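The Newmark-β time-integration scheme compared in the abstract can be illustrated for a linear single-degree-of-freedom system. This is a minimal sketch using the standard average-acceleration parameters (β = 1/4, γ = 1/2); the thesis applies the method inside geometrically non-linear Lagrangian FE formulations, which this toy version does not attempt:

```python
def newmark_beta(m, c, k, p, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark-beta time integration for a linear SDOF system
    m*u'' + c*u' + k*u = p(t). p is the list of force samples per step;
    returns the displacement history."""
    u, v = u0, v0
    a = (p[0] - c * v - k * u) / m                  # initial acceleration
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    us = [u]
    for i in range(1, len(p)):
        # effective load assembled from the known state at step i-1
        dp = (p[i]
              + m * (u / (beta * dt ** 2) + v / (beta * dt)
                     + (1 / (2 * beta) - 1) * a)
              + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                     + dt * (gamma / (2 * beta) - 1) * a))
        u_new = dp / k_eff
        v_new = (gamma / (beta * dt)) * (u_new - u) \
                + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) \
                - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us
```

With these default parameters the scheme is unconditionally stable and introduces no algorithmic damping, which is why it is a common baseline against energy-conserving algorithms of the kind the thesis evaluates.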
APA, Harvard, Vancouver, ISO, and other styles
26

Momenan, Bahareh. "Development of a Thick Continuum-Based Shell Finite Element for Soft Tissue Dynamics." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35908.

Full text
Abstract:
The goal of the present doctoral research is to create a theoretical framework and develop a numerical implementation for a shell finite element that can potentially achieve higher performance (i.e. combination of speed and accuracy) than current Continuum-based (CB) shell finite elements (FE), in particular in applications related to soft biological tissue dynamics. Specifically, this means complex and irregular geometries, large distortions and large bending deformations, and anisotropic incompressible hyperelastic material properties. The critical review of the underlying theories, formulations, and capabilities of the existing CB shell FE revealed that a general nonlinear CB shell FE with the abovementioned capabilities needs to be developed. Herein, we propose the theoretical framework of a new such CB shell FE for dynamic analysis using the total and the incremental updated Lagrangian (UL) formulations and explicit time integration. Specifically, we introduce the geometry and the kinematics of the proposed CB shell FE, as well as the matrices and constitutive relations which need to be evaluated for the total and the incremental UL formulations of the dynamic equilibrium equation. To verify the accuracy and efficiency of the proposed CB shell element, its large bending and distortion capabilities, as well as the accuracy of three different techniques presented for large strain analysis, we implemented the element in Matlab and tested its application in various geometries, with different material properties and loading conditions. The new high performance and accuracy element is shown to be insensitive to shear and membrane locking, and to initially irregular elements.
APA, Harvard, Vancouver, ISO, and other styles
27

Kocina, Karel. "Studie návrhu vhodného tvaru membránových konstrukcí." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2012. http://www.nusl.cz/ntk/nusl-225357.

Full text
Abstract:
This diploma thesis deals with methods for designing the shape of membrane structures. Its main purpose is to analyze topology designs produced by Formfinder and Rhinoceros in RFEM, compare the results, and test the possibility of designing the shape directly in RFEM.
APA, Harvard, Vancouver, ISO, and other styles
28

Tanoh, Henry-Gertrude. "Implementation of Post-Build Configuration for Gateway Electronic Control Unit : Gateway ECU to enable third-party update." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231545.

Full text
Abstract:
The development of embedded software in the automotive industry has reached a level of complexity that is unmaintainable by traditional approaches. The AUTomotive Open System Architecture (AUTOSAR) was created to standardize automotive software. In this architecture, the development of software is spread, in general, between three different entities: Original Equipment Manufacturers (OEMs), e.g. Volvo; Tier-1 suppliers, such as Vector; and Tier-2 suppliers, for example, Renesas Microelectronics. Another methodology that has emerged is to develop Electronic Control Units (ECUs) domain-wise: infotainment, chassis & safety, powertrain, and body and security. For fast and reliable inter-domain communication, the state of the art is to use a gateway ECU. The gateway ECU is a crucial component in the electrical/electronic (E/E) architecture of a vehicle. In AUTOSAR, a third party, different from the car manufacturer, typically implements the gateway ECU. A major feature of a gateway ECU is to provide highly flexible configuration. This flexibility allows the car manufacturer (OEM) to fit the gateway ECU to different requirements and product derivations. This thesis investigates the implementation of post-build configuration for a gateway ECU. First, the thesis provides the reader with some background on AUTOSAR and the current E/E architecture of the gateway ECU. The protocols used by the gateway are explained. The design of a potential solution and its implementation are discussed. The implementation is evaluated through regression tests of the routing functionality. Processing time, memory use, and scaling of the solution are also taken into account. The results of the design and the implementation, if judged adequate, could be used as a springboard to allow post-build configuration in an existing gateway ECU architecture. The results could consolidate the path towards full conformance to AUTOSAR.
Inbyggda system har okat i fordonsindustrin. Utvecklingen av dessa inbyggda programvara har varit komplex och ar inte genomforbar per ett enhet. Idag ar utvecklingen gjort av tre foretag: en OEM (Original Equipement Manufacturer), Tier-1 leverantorer som tillhandahaller mjukvara till OEMs, Tier-2 leverantorer som tillhandahaller elektroniska styrenheter (ECU) hardvaror. Förmedlingsnod ECU är en viktig komponent i ett fordons elektriska/elektroniska (E/E) arkitektur. En tredje part implementerar, som skiljer sig från OEM, de flesta funktionerna av den förmedlingsnod ECU. En viktig egenskap för en förmedlingsnod är att tillhandahålla en mycket flexibel konfiguration. Denna flexibilitet tillåter (OEM) att anpassa förmedlingsnod till olika kraven och fordonarkitekturer. Denna avhandling undersöker genomförandet av Post-build konfigurationen, ocksa kallad dynamisk konfigurationen för en förmedlingsnod ECU. För det första gers bakgrund på AUTOSAR och den nuvarande E/E arkitekturen för den ECU. De kommunikation protokoll som används förklaras. Utformningen av en potentiell lösning och dess genomförande diskuteras. Implementeringen utvärderas genom regressionstest av routingsfunktionaliteten. Behandlingstid, minneseffektivitet och skalning av lösningen beaktas också. Resultaten av konstruktionen och genomförandet om det bedömdes som lämpligt skulle kunna användas som ett springbräda för att möjliggöra postbyggnad i en befintlig förmedlingsnod arkitektur. Resultaten kan konsolidera vägen mot full överensstämmelse med AUTOSAR.
The development of embedded systems in the automotive industry has reached a very high level of complexity, hence the need for new methodologies. The AUTomotive Open System Architecture (AUTOSAR) was created to establish standards for development in the automotive industry. In the AUTOSAR architecture, the development of embedded software is, in general, split between three parties: Original Equipment Manufacturers (OEMs), Renault for example; at the second level, the suppliers of software and tools, for example Elektrobit; and, in third position, the Tier-2 suppliers, providers of automotive electronics, such as Renesas ST. ECU development is separated by domain: multimedia, chassis, powertrain, and body. Inter-domain communication goes through a gateway ECU. The gateway ECU is essential in the electronic architecture of the vehicle. In AUTOSAR, this ECU is supplied by a third party, different from the car manufacturer. The manufacturer therefore needs to be able to configure the gateway ECU without going back to the vendor; for example, upon receiving the software, the manufacturer may decide to add a new route to the gateway. This aspect is known as post-build configuration in AUTOSAR. The goal of this internship is the design and implementation of the post-build configuration of a gateway ECU. First, AUTOSAR and the electronic architecture of a gateway ECU are detailed, and the communication protocols are described. Then, the design and implementation choices are discussed. The implementation is evaluated with regression tests on the routing functionality, and the final solution is assessed on routing performance, memory consumption, and its ability to be integrated into a final product.
APA, Harvard, Vancouver, ISO, and other styles
29

Lounas, Razika. "Validation des spécifications formelles de la mise à jour dynamique des applications Java Card." Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0085/document.

Full text
Abstract:
Dynamic program updating consists in modifying programs without stopping their execution. This feature is essential for critical applications that evolve continuously and require high availability. The aim of our work is the formal verification of the correctness of dynamic updates of Java Card applications through the study of the EmbedDSU system. To do so, we first established the correctness of code updates by defining a formal semantics of the update operations on Java Card bytecode, in order to establish the type safety of updates. We then proposed an approach to verify the semantics of updated code through the definition of a predicate transformation. We next addressed the verification of correctness concerning the detection of safe update points, using model checking. This verification first allowed us to fix a deadlock in the system, and then to establish further correctness properties: activeness safety and updatability. Data updates are performed through state transfer functions. For this aspect, we proposed a solution that applies state transfer functions while preserving the consistency of the Java Card virtual machine heap and allowing high expressiveness in writing them.
Dynamic Software Updating (DSU) consists in updating running programs on the fly without any downtime. This feature is interesting in critical applications that are in continual evolution and that require high availability. The aim of our work is to perform formal verification of the correctness of dynamic software updating in Java Card applications by studying the system EmbedDSU. To do so, we first established the correctness of code update. We achieved this by defining formal semantics for update operations on Java Card bytecode in order to ensure type safety. Then, we proposed an approach to verify the semantics of updated programs by defining a predicate transformation. Afterward, we were interested in the verification of correctness concerning safe update point detection, using model checking. This verification allowed us first to fix a deadlock situation in the system and then to establish other correctness properties: activeness safety and updatability. Data update is performed through the application of state transfer functions. For this aspect, we proposed a solution to apply state transfer functions with the preservation of the Java Card virtual machine heap consistency and with a high expressiveness when writing state transfer functions.
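As a purely illustrative picture of a state transfer function, the Python sketch below shows how an object's state might be mapped when an update splits one field into two. The class shape and field names are invented; real EmbedDSU transfer functions operate on Java Card virtual machine heap state, not dictionaries.

```python
# Illustrative sketch (not EmbedDSU's actual API): a state transfer
# function maps an object's old-version fields onto the new version's
# layout when a dynamic update changes a class definition.

def transfer_account_state(old_fields: dict) -> dict:
    """Hypothetical transfer for a class whose 'balance' (in cents)
    is split into 'units' and 'cents' in the new version."""
    balance = old_fields["balance"]
    return {
        "owner": old_fields["owner"],   # unchanged field is copied as-is
        "units": balance // 100,        # new fields derived from old state
        "cents": balance % 100,
    }

old_obj = {"owner": "alice", "balance": 1234}
new_obj = transfer_account_state(old_obj)
```

Preserving heap consistency then amounts to applying such a function atomically to every live instance of the updated class.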
APA, Harvard, Vancouver, ISO, and other styles
30

Loyet, Raphaël. "Dynamic sound rendering of complex environments." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995328.

Full text
Abstract:
Many studies have been conducted over the last twenty years in the field of auralization, which consists in making the results of an acoustic simulation audible. These studies have mostly focused on propagation algorithms and on the rendering of the acoustic field in complex environments. Currently, much work addresses real-time sound rendering. This thesis tackles the problem of dynamic sound rendering of complex environments along four axes: sound wave propagation, signal processing, spatial sound perception, and computational optimization. In the field of propagation, a method for analysing the variety of algorithms found in the literature is proposed. From this analysis method, two algorithms dedicated to the real-time rendering of the specular and diffuse fields were extracted. In the field of signal processing, rendering is performed with an optimized binaural spatialization algorithm for the most significant specular paths and a convolution algorithm on the graphics card for rendering the diffuse field. The most significant paths are extracted using a perceptual model based on the temporal and spatial masking of specular contributions. Finally, the implementation of these algorithms on recent parallel architectures, taking into account new multi-core processors and new graphics cards, is presented.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Yuwei. "Evolution of microservice-based applications : Modelling and safe dynamic updating." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS009.

Full text
Abstract:
Microservice architectures make it possible to build complex distributed systems composed of independent microservices. The decoupling and modularity of microservices facilitate their independent replacement and updating. Since the emergence of agile development and continuous integration (DevOps and CI/CD), the trend is towards more frequent version changes applied to running applications. A version change is carried out by an evolution process that moves from the current version of the application to a new version. However, the maintenance and evolution costs of these distributed systems increase rapidly with the number of microservices. The objective of this thesis is to answer the following questions: How can engineers be helped to set up unified and efficient version management for microservices, and how can version changes in microservice-based applications be traced? When can microservice-based applications, in particular those with long-running activities, be dynamically updated without stopping the execution of the whole system? How should the update be performed to ensure service continuity and maintain system consistency? In response to these questions, this thesis proposes two main contributions. The first contribution consists of architectural models and an evolution graph for modelling and tracing the version management of microservices. These models are built at design time and used at runtime. This contribution helps engineers abstract architectural evolution in order to manage deployments during reconfiguration, and provides the knowledge base needed by an autonomic middleware managing evolution activities.
The second contribution is a snapshot-based approach for dynamic software updating (DSU) of microservice-based applications. Consistent distributed snapshots of the running application are constructed and used to specify service continuity, evaluate safe update conditions, and implement update strategies. The message complexity of the DSU algorithm is then not that of the distributed application, but that of the algorithm for constructing a consistent distributed snapshot.
Microservice architectures contribute to building complex distributed systems as sets of independent microservices. The decoupling and modularity of distributed microservices facilitate their independent replacement and upgradeability. Since the emergence of agile DevOps and CI/CD, there is a trend towards more frequent and rapid evolutionary changes to running microservice-based applications in response to various evolution requirements. Applying changes to microservice architectures is performed by an evolution process of moving from the current application version to a new version. The maintenance and evolution costs of these distributed systems increase rapidly with the number of microservices. The objective of this thesis is to address the following issues: How to help engineers build a unified and efficient version management for microservices, and how to trace changes in microservice-based applications? When can microservice-based applications, especially those with long-running activities, be dynamically updated without stopping the execution of the whole system? How should the safe updating be performed to ensure service continuity and maintain system consistency? In response to these questions, this thesis proposes two main contributions. The first contribution is a set of runtime models and an evolution graph for modelling and tracing version management of microservices. These models are built at design time and used at runtime. They help engineers abstract architectural evolution in order to manage reconfiguration deployments, and they provide the knowledge base to be manipulated by an autonomic manager middleware in various evolution activities. The second contribution is a snapshot-based approach for dynamic software updating (DSU) of microservices. Consistent distributed snapshots of microservice-based applications are constructed to be used for specifying continuity of service, evaluating the safe update conditions and realising the update strategies. The message complexity of the DSU algorithm is not the message complexity of the distributed application, but the complexity of the consistent distributed snapshot algorithm.
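A toy illustration of a quiescence-style safe-update condition evaluated over a consistent snapshot is sketched below; the snapshot format and the rule itself are simplifying assumptions for illustration, not the thesis's actual algorithm.

```python
# Minimal sketch of a safe-update check over a consistent distributed
# snapshot: a service is considered updatable when the snapshot records
# neither running transactions involving it nor in-flight messages
# addressed to it. The snapshot structure below is an assumption.

def safe_to_update(service, snapshot):
    busy = any(service in txn["participants"]
               for txn in snapshot["active_transactions"])
    pending = any(msg["to"] == service
                  for msg in snapshot["in_flight_messages"])
    return not busy and not pending

snapshot = {
    "active_transactions": [{"participants": {"orders", "payments"}}],
    "in_flight_messages": [{"to": "orders", "body": "..."}],
}
assert not safe_to_update("orders", snapshot)   # involved in a transaction
assert safe_to_update("inventory", snapshot)    # quiescent: update may proceed
```

The point of using a snapshot is that this check is evaluated on one consistent global state rather than by coordinating every message of the application.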
APA, Harvard, Vancouver, ISO, and other styles
32

Lülf, Fritz Adrian. "An integrated method for the transient solution of reduced order models of geometrically nonlinear structural dynamic systems." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00957455.

Full text
Abstract:
For repeated transient solutions of geometrically nonlinear structures, the numerical effort often poses a major obstacle. Thus, the introduction of a reduced order model, which takes the nonlinear effects into account and accelerates the calculations considerably, is often necessary. This work yields a method that allows for rapid, accurate and parameterisable solutions by means of a reduced model of the original structure. The structure is discretised and its dynamic equilibrium described by a matrix equation. The projection on a reduced basis is introduced to obtain the reduced model. A comprehensive numerical study of several common reduced bases shows that the simple introduction of a constant basis is not sufficient to account for the nonlinear behaviour. Three requirements for a rapid, accurate and parameterisable solution are derived: the solution algorithm has to take into account the nonlinear evolution of the solution, the solution has to be independent of the nonlinear finite element terms, and the basis has to be adapted to external parameters. Three approaches are provided, each responding to one requirement: the update and augmentation of the basis, the polynomial formulation of the nonlinear terms, and the interpolation of the basis. These approaches are assembled into the integrated method, framed by a Newmark-type time-marching algorithm. The application of the integrated method to test cases with geometrically nonlinear finite elements confirms that it achieves the initial aim of a rapid, accurate and parameterisable transient solution.
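The projection step can be sketched in a few lines of Python; the matrices, dimensions and load below are invented, and only a linear static solve is shown (no nonlinear terms or Newmark time marching).

```python
import numpy as np

# Toy Galerkin projection: the full-order solution u is approximated as
# u ≈ V q with an orthonormal reduced basis V, and the system matrix is
# projected onto that basis. All data here is synthetic.

n, r = 200, 5
rng = np.random.default_rng(0)
K = np.diag(np.linspace(1.0, 10.0, n))            # full-order "stiffness" (n x n)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal reduced basis (n x r)

K_red = V.T @ K @ V                               # reduced matrix (r x r)
f = rng.standard_normal(n)                        # full-order load
q = np.linalg.solve(K_red, V.T @ f)               # small reduced solve
u_approx = V @ q                                  # lift back to full space
```

The solve now costs a factorization of an r x r matrix instead of n x n, which is the source of the acceleration; the thesis's contribution is making such a basis track nonlinear behaviour and external parameters.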
APA, Harvard, Vancouver, ISO, and other styles
33

Quan, Nguyen. "Distributed Game Environment : A Software Product Line for Education and Research." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-29077.

Full text
Abstract:
A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or demand. Software product lines capitalize on commonality and manage variation to reduce the time, effort, cost and complexity of creating and maintaining products in a product line. Therefore, by reusing core assets, a software product line can address problems such as cost, time-to-market, quality, the complexity of developing and maintaining variants, and the need to respond quickly to market demands. The development of a software product line is different from conventional software development, and in the area of product line education and research there is a lack of a suitable, purposefully designed and developed software product line (SPL) that can be used for educational or research purposes. In this thesis we have developed a software product line for a turn-based, two-player distributed board game environment that can be used for educational and research purposes. The software product line supports dynamic runtime updates, including games, chat, and security features, via the OSGi framework. Furthermore, it supports remote gameplay over a local area network and dynamic runtime activity recovery. We delivered a product configuration tool that derives and configures products from the core assets based on feature selection. We have also modeled the software product line's features and documented its requirements, architecture and user guides. Furthermore, we performed functional and integration tests of the software product line to ensure that the requirements are met according to the requirements specification prescribed by the stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
34

Pisani, Paulo Henrique. "Biometrics in a data stream context." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08052017-141153/.

Full text
Abstract:
The growing presence of the Internet in day-to-day tasks, along with the evolution of computational systems, has contributed to increased data exposure. This scenario highlights the need for safer user authentication systems. One alternative is the use of biometric systems. However, biometric features may change over time, an issue that can degrade recognition performance due to an outdated biometric reference. This effect is known as template ageing in the area of biometrics and as concept drift in machine learning. It raises the need to automatically adapt the biometric reference over time, a task performed by adaptive biometric systems. This thesis studied adaptive biometric systems considering biometrics in a data stream context. In this context, the test is performed on a biometric data stream, in which the query samples are presented one after another to the biometric system. An adaptive biometric system then has to classify each query and adapt the biometric reference; the decision to perform the adaptation is taken by the biometric system. Among the biometric modalities, this thesis focused on behavioural biometrics, particularly on keystroke dynamics and on accelerometer biometrics. Behavioural modalities tend to be subject to faster changes over time than physical modalities. Nevertheless, there have been few studies dealing with adaptive biometric systems for behavioural modalities, highlighting a gap to be explored. Throughout the thesis, several aspects to enhance the design of adaptive biometric systems for behavioural modalities in a data stream context were discussed: the proposal of adaptation strategies for the immune-based classification algorithm Self-Detector, the combination of genuine and impostor models in the Enhanced Template Update framework, and the application of score normalization to adaptive biometric systems. Based on the investigation of these aspects, it was observed that the best choice for each studied aspect of an adaptive biometric system can differ depending on the dataset and, furthermore, on the users in the dataset. The different user characteristics, including the way the biometric features change over time, suggest that adaptation strategies should be chosen per user. This motivated the proposal of a modular adaptive biometric system, named ModBioS, which can choose each of these aspects per user. ModBioS is capable of generalizing several baselines and proposals into a single modular framework, along with the possibility of assigning different adaptation strategies per user. Experimental results showed that the modular adaptive biometric system can outperform several baseline systems, while opening a number of new opportunities for future work.
The growing presence of the Internet in day-to-day tasks, together with the evolution of computational systems, has contributed to increased data exposure. This scenario highlights the need for safer user authentication systems. One alternative is the use of biometric systems. However, biometric features may change over time, which can affect recognition performance due to an outdated biometric reference. This effect is called template ageing in the area of adaptive biometric systems, or concept drift in machine learning. It raises the need to automatically adapt the biometric reference over time, a task performed by adaptive biometric systems. This thesis studied adaptive biometric systems considering biometrics in a data stream context. In this context, testing is performed on a biometric data stream, in which query samples are presented one after another to the biometric system. An adaptive biometric system must then classify each query and adapt the biometric reference; the decision to perform the adaptation is taken by the biometric system. Among the biometric modalities, this thesis focuses on behavioural biometrics, in particular keystroke dynamics and accelerometer biometrics. Behavioural modalities tend to be subject to faster changes than physical modalities. However, there were few studies dealing with adaptive biometric systems for behavioural modalities, highlighting a gap to be explored.
Throughout the thesis, several aspects of the design of adaptive biometric systems for behavioural modalities in a data stream context were discussed: the proposal of adaptation strategies for the immune-based classification algorithm Self-Detector, the combination of genuine and impostor models in the Enhanced Template Update framework, and the application of score normalization to adaptive biometric systems. Based on the investigation of these aspects, it was observed that the best choice for each studied aspect can differ depending on the dataset and, moreover, on the users in the dataset. The different user characteristics, including the way biometric features change over time, suggest that adaptation strategies should be chosen per user. This motivated the proposal of a modular adaptive biometric system, named ModBioS, which can choose each of these aspects per user. ModBioS is capable of generalizing several baseline systems and the proposals presented in this thesis into a single modular framework, together with the possibility of assigning different adaptation strategies per user. Experimental results showed that the modular adaptive biometric system can outperform several baseline systems, while opening a number of new opportunities for future work.
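A generic baseline for the kind of reference adaptation studied here can be sketched as a thresholded self-update over a sliding window. The distance-based score, threshold and window size below are invented for illustration; this is a common baseline, not ModBioS itself.

```python
import numpy as np

# Sketch of a self-update adaptation strategy: the biometric reference
# (a gallery of feature vectors) is adapted only when a query matches
# confidently, keeping at most the last max_samples samples.

def match_score(reference, query):
    # negative mean distance to stored samples (higher = better match)
    return -float(np.mean([np.linalg.norm(query - s) for s in reference]))

def self_update(reference, query, threshold, max_samples=5):
    """Adapt the reference only for confidently genuine queries."""
    if match_score(reference, query) >= threshold:
        reference = (reference + [query])[-max_samples:]  # sliding window
    return reference

ref = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]
ref = self_update(ref, np.array([0.05, 0.02]), threshold=-0.5)  # accepted
ref = self_update(ref, np.array([5.0, 5.0]), threshold=-0.5)    # rejected
```

Choosing such a strategy (and its threshold) per user, rather than globally, is precisely the design freedom the modular system above argues for.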
APA, Harvard, Vancouver, ISO, and other styles
35

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Full text
Abstract:
In this thesis we have tackled several scheduling problems under energy constraints, since the energy issue is becoming crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network, trying to minimize the energy consumption in a dynamic setting. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, this time in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications to minimize energy consumption.
APA, Harvard, Vancouver, ISO, and other styles
36

Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.

Full text
Abstract:
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. 
As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
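The adaptive idea can be caricatured in a few lines of Python: a table of pulse parameters, one per displacement range, is nudged after each pulse by the residual error scaled by a learning gain and a sensitivity estimate. The binning, gain, numbers and toy plant model below are all invented; the actual AIC update distribution across parameters is more involved.

```python
# Hedged sketch of multi-parameter adaptive pulse control (not the
# thesis's exact algorithm). Each displacement range has its own pulse
# parameter; only the parameter used for the last pulse is updated.

def pick_bin(distance, bin_edges):
    for i, edge in enumerate(bin_edges):
        if distance <= edge:
            return i
    return len(bin_edges)

def adapt(params, bin_idx, residual_error, sensitivity, gain=0.5):
    """Nudge the used parameter by the residual error, scaled by a
    learning gain and an estimated displacement-per-duration sensitivity."""
    params[bin_idx] += gain * residual_error / sensitivity
    return params

bin_edges = [1.0, 10.0]       # three displacement ranges (toy units)
params = [0.2, 1.0, 4.0]      # pulse durations per range
sensitivity = 2.0             # assumed displacement per unit duration

target = 5.0
idx = pick_bin(target, bin_edges)
achieved = sensitivity * params[idx]          # toy plant: displacement ∝ duration
params = adapt(params, idx, target - achieved, sensitivity)
```

After the update the parameter for the middle range moves toward the duration that would have reached the target, so the next pulse in that range lands closer; real friction makes the pulse-displacement relationship far less linear, which is why several parameters are tuned instead of one.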
APA, Harvard, Vancouver, ISO, and other styles
37

von, Wenckstern Michael. "Web applications using the Google Web Toolkit." Master's thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Full text
Abstract:
This diploma thesis describes how to create desktop-like rich internet applications with the Google Web Toolkit, or convert traditional Java programs into them. The Google Web Toolkit is an open-source development environment that translates Java code into browser- and device-independent HTML and JavaScript. Most parts of the GWT framework, including the Java-to-JavaScript compiler, as well as important security issues of websites, will be introduced. The famous Agricola board game will be implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. The Google Web Toolkit framework will be compared with JavaServer Faces to find out which toolkit is the right one for the next web project.
This diploma thesis describes the creation of desktop-like applications with the Google Web Toolkit and the conversion of classical Java programs into them. The Google Web Toolkit is an open-source development environment that translates Java code into browser-independent and device-independent HTML and JavaScript. The majority of the GWT framework is presented, including the Java-to-JavaScript compiler, as well as important security aspects of websites. To show that complex graphical user interfaces can also be created with the Google Web Toolkit, the well-known board game Agricola is implemented using the Model-View-Presenter design pattern. To determine the right technology for the next web project, the Google Web Toolkit is compared with JavaServer Faces.
APA, Harvard, Vancouver, ISO, and other styles
38

"Client-Driven Dynamic Database Updates." Master's thesis, 2011. http://hdl.handle.net/2286/R.I.9521.

Full text
Abstract:
This thesis addresses the problem of online schema updates, where the goal is to update relational database schemas without reducing the database system's availability. Unlike some other work in this area, this thesis presents an approach that is completely client-driven and does not require a specialized database management system (DBMS). Also, unlike other client-driven work, this approach supports a richer set of schema updates, including vertical split (normalization), horizontal split, vertical and horizontal merge (union), difference, and intersection. The update process automatically generates a runtime update client from a mapping between the old and the new schemas. The solution has been validated by testing it on a relatively small database of around 300,000 records per table and less than 1 GB, but with a limited memory buffer size of 24 MB. This thesis presents a study of the overhead of the update process as a function of the transaction rate and the batch size used to copy data from the old to the new schema. It shows that the overhead introduced is minimal for medium-size applications and that the update can be achieved with no more than one minute of downtime.
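The batched copy at the heart of such a client-driven update can be sketched as follows. sqlite3 and the table layout are used purely for illustration (the thesis targets ordinary DBMSs through a generated update client), and the vertical split shown is just one of the supported update kinds.

```python
import sqlite3

# Sketch of a client-driven online schema change: rows are copied from
# the old table into a new, vertically split (normalized) schema in
# small batches, with a commit between batches so regular transactions
# keep running. Table names and sizes are invented.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person_old (id INTEGER, name TEXT, city TEXT)")
db.execute("CREATE TABLE person (id INTEGER, name TEXT)")    # vertical split:
db.execute("CREATE TABLE address (id INTEGER, city TEXT)")   # normalization

db.executemany("INSERT INTO person_old VALUES (?, ?, ?)",
               [(i, f"name{i}", f"city{i}") for i in range(10)])

BATCH = 4
copied = 0
while True:
    rows = db.execute("SELECT id, name, city FROM person_old LIMIT ? OFFSET ?",
                      (BATCH, copied)).fetchall()
    if not rows:
        break
    db.executemany("INSERT INTO person VALUES (?, ?)", [(r[0], r[1]) for r in rows])
    db.executemany("INSERT INTO address VALUES (?, ?)", [(r[0], r[2]) for r in rows])
    db.commit()      # short transactions bound the blocking time per batch
    copied += len(rows)
```

Tuning BATCH trades copy throughput against interference with concurrent transactions, which is exactly the overhead-vs-batch-size trade-off the thesis measures.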
Dissertation/Thesis
M.S. Computer Science 2011
APA, Harvard, Vancouver, ISO, and other styles
39

Subramanian, Suriya. "Dynamic software updates : a VM-centric approach." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-1436.

Full text
Abstract:
Because software systems are imperfect, developers are forced to fix bugs and add new features. The common way of applying changes to a running system is to stop the application or machine and restart with the new version. Stopping and restarting causes a disruption in service that is at best inconvenient and at worst causes revenue loss and compromises safety. Dynamic software updating (DSU) addresses these problems by updating programs while they execute. Prior DSU systems for managed languages like Java and C# lack necessary functionality: they are inefficient and do not support updates that occur commonly in practice. This dissertation presents the design and implementation of Jvolve, a DSU system for Java. Jvolve's combination of flexibility, safety, and efficiency is a significant advance over prior approaches. Our key contribution is the extension and integration of existing Virtual Machine services with safe, flexible, and efficient dynamic updating functionality. Our approach is flexible enough to support a large class of updates, guarantees type-safety, and imposes no space or time overheads on steady-state execution. Jvolve supports many common updates. Users can add, delete, and change existing classes. Changes may add or remove fields and methods, replace existing ones, and change type signatures. Changes may occur at any level of the class hierarchy. To initialize new fields and update existing ones, Jvolve applies class and object transformer functions, the former for static fields and the latter for object instance fields. These features cover many updates seen in practice. Jvolve supports 20 of 22 updates to three open-source programs---Jetty web server, JavaEmailServer, and CrossFTP server---based on actual releases occurring over a one to two year period. This support is substantially more flexible than prior systems. Jvolve is safe. It relies on bytecode verification to statically type-check updated classes. 
To avoid dynamic type errors due to the timing of an update, Jvolve stops the executing threads at a DSU safe point and then applies the update. DSU safe points are a subset of VM safe points, where it is safe to perform garbage collection and thread scheduling. DSU safe points further restrict the methods that may be on each thread's stack, depending on the update. Restricted methods include updated methods for code consistency and safety, and user-specified methods for semantic safety. Jvolve installs return barriers and uses on-stack replacement to speed up reaching a safe point when necessary. While Jvolve does not guarantee that it will reach a DSU safe point, in our multithreaded benchmarks it almost always does. Jvolve includes a tool that automatically generates default object transformers which initialize new and changed fields to default values and retain values of unchanged fields in heap objects. If needed, programmers may customize the default transformers. Jvolve is the first dynamic updating system to extend the garbage collector to identify and transform all object instances of updated types. This dissertation introduces the concept of object-specific state transformers to repair application heap state for certain classes of bugs that corrupt part of the heap, and a novel methodology that employs dynamic analysis to automatically generate these transformers. Jvolve's eager object transformation design and implementation supports the widest class of updates to date. Finally, Jvolve is efficient. It imposes no overhead during steady-state execution. During an update, it imposes overheads on classloading and garbage collection. After an update, the adaptive compilation system will incrementally optimize the updated code in its usual fashion. Jvolve is the first full-featured dynamic updating system that imposes no steady-state overhead.
In summary, Jvolve is the most-featured, most flexible, safest, and best-performing dynamic updating system for Java and marks a significant step towards practical support for dynamic updates in managed language virtual machines.
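The default object transformers described above (retain values of unchanged fields, default-initialize added ones) can be illustrated with a minimal Python sketch. The `OldPoint`/`NewPoint` classes and `default_transformer` function here are hypothetical stand-ins: Jvolve itself performs this transformation inside the JVM, during garbage collection, on real heap objects.

```python
# Hypothetical sketch of a Jvolve-style default object transformer:
# unchanged fields are copied over, new fields keep their defaults.

class OldPoint:          # version 1 of the class
    def __init__(self, x, y):
        self.x, self.y = x, y

class NewPoint:          # version 2 adds a field z
    def __init__(self):
        self.x, self.y, self.z = 0, 0, 0

def default_transformer(old_obj, new_cls):
    """Copy every field that survives the update; leave added fields
    at their defaults (a programmer may customize this if needed)."""
    new_obj = new_cls()
    for name, value in vars(old_obj).items():
        if hasattr(new_obj, name):   # field survived the update
            setattr(new_obj, name, value)
    return new_obj

p2 = default_transformer(OldPoint(3, 4), NewPoint)
```

A customized transformer would replace the generic copy loop with update-specific logic, e.g. computing `z` from the old `x` and `y`.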
APA, Harvard, Vancouver, ISO, and other styles
40

You, Cheng-Chi, and 游正志. "Fast IP Routing Lookups and Dynamic Updates in Hardware." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/74043014598786671354.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
89
The rapid growth of the Internet has created a demand for routers that can process heavy traffic at gigabit speeds. Currently, IP lookup is done in software and has become a major performance bottleneck for routers. Many fast IP routing lookup mechanisms have been proposed to address this problem, but most of them focus only on lookups. In this thesis, we propose a fast and complete hardware architecture whose functions include searching, updating, insertion, and deletion. Implemented in a pipelined fashion, the lookup speed can reach one memory access per IP lookup. The architecture also supports fast update, insertion, and deletion: because the region of memory affected by each route update is small, it is well suited to routers whose tables are updated frequently.
APA, Harvard, Vancouver, ISO, and other styles
41

Vlk, Marek. "Dynamic Scheduling." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-336598.

Full text
Abstract:
One of the problems of real-life production scheduling is the dynamics of manufacturing environments, with new production demands arriving and machines breaking down during schedule execution. Simply rescheduling from scratch in response to unexpected events on the shop floor may require excessive computation time; moreover, the recovered schedule may deviate prohibitively from the ongoing schedule. This thesis reviews existing approaches in the field of dynamic scheduling and proposes techniques for modifying a schedule to accommodate disturbances such as a resource failure, a hot order arrival, or an order cancellation. Emphasis is placed on the speed of the suggested procedures as well as on minimal modification of the original schedule. The scheduling model is motivated by the FlowOpt project, which is based on Temporal Networks with Alternatives. The algorithms are written in the C# language.
APA, Harvard, Vancouver, ISO, and other styles
42

Crescini, Vino Fernando, University of Western Sydney, College of Health and Science, and School of Computing and Mathematics. "Implementation of a logic-based access control system with dynamic policy updates and temporal constraints." 2006. http://handle.uws.edu.au:8081/1959.7/15207.

Full text
Abstract:
As information systems evolve to cope with the ever increasing demand of today’s digital world, so does the need for more effective means of protecting information. In the early days of computing, information security started out as a branch of information technology. Over the years, several advances in information security have been made and, as a result, it is now considered a discipline in its own right. The most fundamental function of information security is to ensure that information flows to authorised entities, and at the same time, prevent unauthorised entities from accessing the protected information. In a typical information system, an access control system provides this function. Several advances in the field of information security have produced several access control models and implementations. However, as information technology evolves, the need for a better access control system increases. This dissertation proposes an effective, yet flexible access control system: the Policy Updater access control system. Policy Updater is a fully-implemented access control system that provides policy evaluations as well as dynamic policy updates. These functions are provided by the use of a logic-based language, L, to represent the underlying access control policies, constraints and policy update rules. The system performs authorisation query evaluations, as well as conditional and dynamic policy updates by translating language L policies to normal logic programs in a form suitable for evaluation using the well-known Stable Model semantics. In this thesis, we show the underlying mechanisms that make up the Policy Updater system, including the theoretical foundations of its formal language, the system structure, a full discussion of implementation issues and a performance analysis. Lastly, the thesis also proposes a non-trivial extension of the Policy Updater system that is capable of supporting temporal constraints. 
This is made possible by the integration of the well-established Temporal Interval Algebra into the extended authorisation language LT, which can also be translated into a normal logic program for evaluation. The formalisation of this extension, together with the full implementation details, is included in this dissertation.
Doctor of Philosophy (PhD)
APA, Harvard, Vancouver, ISO, and other styles
43

Liang, Weifa. "Designing Efficient Parallel Algorithms for Graph Problems." PhD thesis, 1997. http://hdl.handle.net/1885/47660.

Full text
Abstract:
Graph algorithms are concerned with the algorithmic aspects of solving graph problems. The problems are motivated from and have application to diverse areas of computer science, engineering and other disciplines. Problems arising from these areas of application are good candidates for parallelization since they often have both intense computational needs and stringent response time requirements. Motivated by these concerns, this thesis investigates parallel algorithms for these kinds of graph problems that have at least one of the following properties: the problems involve some type of dynamic updates; the sparsification technique is applicable; or the problems are closely related to communications network issues. The models of parallel computation used in our studies are the Parallel Random Access Machine (PRAM) model and the practical interconnection network models such as meshes and hypercubes. ¶ ...
APA, Harvard, Vancouver, ISO, and other styles
44

Fecteau, Anthony R. Acharya Rajgopal Sundaraj. "BDI plan update in dynamic environments." 2009. http://etda.libraries.psu.edu/theses/approved/PSUonlyIndex/ETD-4511/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

"A Study of Backward Compatible Dynamic Software Update." Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.36032.

Full text
Abstract:
Dynamic software update (DSU) enables a program to be updated while it is running, minimizing the loss due to program downtime. DSU is usually done in three steps: suspending the execution of the old program, mapping the execution state from the old program to the new one, and resuming execution of the new program with the mapped state. The semantic correctness of DSU depends largely on the state mapping, which today is mostly composed manually by developers; manual construction, however, does not necessarily ensure a sound and dependable state mapping. This dissertation presents a methodology that assists developers by automating the construction of a partial state mapping with a guarantee of correctness. It includes a detailed study of DSU correctness and automatic state mapping for server programs with an established user base. The dissertation first presents a formal treatment of DSU correctness and the state mapping problem. It then argues that, for programs with an established user base, dynamic updates must be backward compatible. Next, it presents a general definition of backward compatibility that specifies the allowed changes in program interaction between an old version and a new version, and identifies patterns of code evolution that result in backward compatible behavior. It then formally defines these patterns and proves that any change to a program following these patterns results in a backward compatible update. To show the applicability of the results, the dissertation presents SitBack, a program analysis tool that takes an old version of a program and a new one as input and computes a partial state mapping under the assumption that the new version is backward compatible with the old version. SitBack does not handle all kinds of changes; it reports the incomplete parts of a state mapping to the user.
A detailed evaluation of SitBack shows that the methodology of automatic state mapping is promising for dealing with real-world program updates: for example, SitBack produces state mappings for 17-75% of the changed functions, and the generated state mappings lead to successful dynamic updates. In conclusion, the study presented in this dissertation assists developers in writing state mappings for DSU by automating their construction with a correctness guarantee, which ultimately helps the adoption of DSU.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2015
APA, Harvard, Vancouver, ISO, and other styles
46

Ho, Hui-Chung, and 何慧忠. "Dynamic Key Update & Delegation In CP-ABE." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/75738404481846857496.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
104
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a useful asymmetric encryption algorithm compared to traditional asymmetric key systems. It enables encrypted data to be stored on a cloud server, with each item retaining its own access permissions, without the need to additionally define access control permissions on the server. In a highly dynamic and heterogeneous cloud environment, it is challenging to maintain data protection using only the fine-grained access policies of CP-ABE. User rights management is difficult to implement on such systems without user intervention, and currently no cryptosystem-level solution supports efficient, direct key updates and user revocation. Moreover, backward secrecy and forward secrecy are not supported in the CP-ABE cryptosystem. Existing revocation methods are impractical to deploy in large cloud environments due to their high key-processing overhead whenever a user joins, is revoked, or is assigned a new group key. In this thesis, we propose a method to dynamically authorize users. The key feature of our model is that users do not have to be involved in the key revocation process. Our model uses different user authentication sessions to restrict keys to a particular session, an approach that achieves direct user revocation within a group. The operation does not require re-encryption of existing ciphertext. Our method supports backward and (perfect) forward secrecy and is escrow-free. Lastly, we show that our method is efficient in situations where users change groups frequently, and that it is secure under chosen-identity key attacks.
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yuhsiang, and 黃昱翔. "On Efficient Update Delivery For Dynamic Web Object Reconstructions." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75586645284283849845.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
100
With the continuous evolution of Internet technologies, web-based systems prevail at almost every level of Internet applications. Studies have shown that, instead of constructing whole web applications from scratch, delivering only necessary updates of web objects can enhance performance significantly. This work investigates the deployment of web objects over a chain of servers in which each server can reconstruct an object locally, or via a network transmission from a certain upstream server where the object is made available. This work formulates the cost of construction of web objects based on the global supply-demand integration among servers, and develops an efficient polynomial-time algorithm for determining an effective strategy.
APA, Harvard, Vancouver, ISO, and other styles
48

(10710867), Yucong Pan. "Dynamic Update of Sparse Voxel Octree Based on Morton Code." Thesis, 2021.

Find full text
Abstract:

Real-time global illumination has been a very important topic and is widely used in the game industry. Offline rendering requires a large amount of time to converge and to reduce the noise generated by the Monte Carlo method, so it cannot easily be adapted to real-time rendering. Using voxels for global illumination has become a popular approach. While a naïve voxel grid occupies a huge amount of video memory, a data structure called a sparse voxel octree is often implemented to reduce the memory cost of voxels and to achieve efficient ray-casting performance at interactive frame rates.

However, rendering voxels directly can cause blocky artifacts due to the nature of voxels. One solution is to increase the voxel resolution so that one voxel is smaller than a pixel on screen, but this is usually not feasible because higher resolution results in higher memory consumption. Thus, most global illumination methods based on an SVO (sparse voxel octree) use it only for visibility tests and radiance storage rather than rendering it directly. Previous research has incorporated SVOs into ray tracing, radiosity methods, and voxel cone tracing, all achieving real-time frame rates in complex scenes. However, most of this work focuses only on static scenes and does not consider dynamic updates of the SVO or their influence on performance.

In this thesis, we discuss the tradeoffs of multiple classic real-time global illumination methods and their implementations using an SVO. We also propose an efficient approach to dynamically update the SVO in animated scenes. The deliverables are implemented in CUDA 11.0 and OpenGL.
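The Morton code in the title refers to the standard Z-order encoding: the bits of a voxel's x, y, z coordinates are interleaved into a single key, so spatially nearby voxels get numerically nearby keys, which makes sorting-based octree construction and localized updates cheap. A minimal sketch of the generic bit-interleaving (not the thesis's CUDA implementation) for 10 bits per axis:

```python
def _part1by2(n):
    """Spread the low 10 bits of n so each bit is followed by two zeros."""
    n &= 0x000003FF
    n = (n ^ (n << 16)) & 0xFF0000FF
    n = (n ^ (n << 8)) & 0x0300F00F
    n = (n ^ (n << 4)) & 0x030C30C3
    n = (n ^ (n << 2)) & 0x09249249
    return n

def morton3d(x, y, z):
    """Interleave the bits of (x, y, z), each < 1024, into a 30-bit Morton code."""
    return _part1by2(x) | (_part1by2(y) << 1) | (_part1by2(z) << 2)
```

Sorting leaf voxels by this key groups spatial neighbors together; an octree node then corresponds to a contiguous run of keys sharing a common prefix.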

APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Yu-Sen, and 王煜森. "Fast Update of the Best Carpool Groups in Dynamic Environment." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/60591225126839157636.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Engineering
98
Traditional carpool websites only find users who have similar starting points or destinations for a ridesharing search; the users must then choose the best candidate by themselves. As a result, many ridesharing opportunities may be lost, and users may become unwilling to use the ridesharing system. Previous work proposed a floating share scheme, in which ridesharing partners share the payment according to the distances of their routes, together with a method for finding the best passenger group for a driver. In this thesis, given the set of ridesharing groups, we further consider the problem of finding the best group for a new passenger: in addition to increasing the driver's savings, the returned group must also lead to the lowest expense for the new passenger. We design a segment-based indexing method to store and compress the current set of ridesharing groups. A new passenger can find the best group via the index, which can then be quickly updated. In the experiments, our method achieves an 84% speedup in query processing time and an average compression ratio of 22.52%, reducing the amount of data to be processed.
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Rong-Ren, and 黃榮仁. "Incremental TCAM Update for Packet Classification Table Using Dynamic Segment Tree." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18142602863204794283.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and PhD Program)
96
Packet classification is extensively applied in a variety of Internet applications, including network security, quality of service, and multimedia communications, and is therefore attracting more and more attention. Traditionally, a standard Ternary Content Addressable Memory (TCAM) is used as a hardware classification engine. However, this approach stores classification tables inefficiently because the port-range fields of a rule have to be broken into prefixes before being stored in TCAM. To address this problem, a novel multifield classification scheme has been proposed [2]. To reduce cost, we do not want to waste TCAM space when storing classification tables, since the port fields of the tables are arbitrary ranges; adopting encoding schemes for the ranges can greatly reduce TCAM space usage. We noticed, however, that these encoding schemes are time-consuming to update, because they need to pre-compute results and encode the ranges in the classification tables. In this thesis, we improve these encoding schemes so that ranges can be mapped into TCAM with incremental updates using a dynamic segment tree (DST), a segment tree data structure for dynamic table lookup problems. Our schemes can update dynamically, modifying only part of the TCAM entries rather than all of them.
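For context, the prefix expansion that makes arbitrary port ranges expensive in a standard TCAM can be sketched as follows. This is the generic textbook splitting of a range into aligned power-of-two blocks, not the thesis's DST-based encoding:

```python
def range_to_prefixes(lo, hi, bits=16):
    """Split the inclusive range [lo, hi] into the minimal list of
    (value, prefix_length) pairs -- the expansion a standard TCAM
    needs to store a port-range field as ternary entries."""
    prefixes = []
    while lo <= hi:
        # Largest power-of-two block that is aligned at lo
        # and does not overshoot hi.
        size = lo & -lo if lo else 1 << bits
        while size > hi - lo + 1:
            size >>= 1
        plen = bits - size.bit_length() + 1  # size == 2**(bits - plen)
        prefixes.append((lo, plen))
        lo += size
    return prefixes
```

For example, the common range [1024, 65535] ("all non-well-known ports") expands to six prefixes, and in the worst case a single 16-bit range needs up to 2*16 - 2 = 30 entries, which is why range-encoding schemes like those the thesis improves are attractive.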
APA, Harvard, Vancouver, ISO, and other styles