Doctoral dissertations on the topic "Dynamic update"

Click this link to see other types of publications on this topic: Dynamic update.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Check out the top 50 doctoral dissertations on the topic "Dynamic update".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, if the relevant parameters are available in the metadata.

Browse doctoral dissertations from many different fields and create appropriate bibliographies.

1

Baumann, Andrew Computer Science & Engineering Faculty of Engineering UNSW. "Dynamic update for operating systems". Awarded by: University of New South Wales. Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28356.

Full text of the source
Abstract:
Patches to modern operating systems, including bug fixes and security updates, and the reboots and downtime they require, cause tremendous problems for system users and administrators. The aim of this research is to develop a model for dynamic update of operating systems, allowing a system to be patched without the need for a reboot or other service interruption. In this work, a model for dynamic update based on operating system modularity is developed and evaluated using a prototype implementation for the K42 operating system. The prototype is able to update kernel code and data structures, even when the interfaces between kernel modules change. When applying an update, at no point is the system's entire execution blocked, and there is no additional overhead after an update has been applied. The base runtime overhead is also very low. An analysis of the K42 revision history shows that approximately 79% of past performance and bug-fix changes to K42 could be converted to dynamic updates, and the proportion would be even higher if the changes were being developed for dynamic update. The model also extends to other systems such as Linux and BSD, which, although structured modularly, are not strictly object-oriented like K42. The experience with this approach shows that dynamic update for operating systems is feasible given a sufficiently modular system structure, allows maintenance patches and updates to be applied without disruption, and need not constrain system performance.
APA, Harvard, Vancouver, ISO, and other styles
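The update model described above rests on modular indirection plus state transfer: callers reach a component only through an indirection layer, so its implementation can be swapped at runtime while its state is migrated. As a language-neutral sketch of that general idea (names are hypothetical; the actual K42 prototype operates on C++ kernel objects, not Python classes):

```python
class CounterV1:
    """Original component: counts calls."""
    def __init__(self):
        self.count = 0
    def handle(self):
        self.count += 1
        return self.count

class CounterV2:
    """Updated component: same interface, adds a step size."""
    def __init__(self, count=0, step=2):
        self.count = count
        self.step = step
    def handle(self):
        self.count += self.step
        return self.count

# All callers go through the indirection table; none hold direct references.
components = {"counter": CounterV1()}

def call(name):
    return components[name].handle()

def update(name, new_cls, migrate):
    """Swap a component at a quiescent point, migrating its state."""
    old = components[name]
    components[name] = new_cls(**migrate(old))

call("counter")            # -> 1
call("counter")            # -> 2
update("counter", CounterV2, lambda old: {"count": old.count})
print(call("counter"))     # -> 4: state survived the swap (2 + step of 2)
```

The hard parts the dissertation addresses, such as finding a moment when no thread is inside the old code and handling changed interfaces, are exactly what this toy omits.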
2

CAMARA, EDUARDO CASTRO MOTA. "A STUDY OF DYNAMIC UPDATE FOR SOFTWARE COMPONENTS". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23529@1.

Full text of the source
Abstract:
The component-based development of software systems consists of composing systems from ready-made, reusable software units. Many component-based software systems in production need to be available 24 hours a day, 7 days a week. Dynamic updates allow systems to be upgraded without interrupting the execution of their services, by applying the update at runtime. Many dynamic software update techniques in the literature use applications implemented specifically to cover the points presented, and only a few use the historical needs of a real system. This work studies the main cases of updates that occur in an extensively used component system, Openbus, an integration infrastructure responsible for the communication of various applications for data acquisition, processing and interpretation. In addition to this study, we implement a dynamic software update solution to accommodate the needs of this system. Afterwards, using the implemented solution, we present an overhead test and some update applications on Openbus.
APA, Harvard, Vancouver, ISO, and other styles
3

Tesone, Pablo. "Dynamic Software Update for Production and Live Programming Environments". Thesis, Ecole nationale supérieure Mines-Télécom Lille Douai, 2018. http://www.theses.fr/2018MTLD0012/document.

Full text of the source
Abstract:
Updating applications during their execution is used both in production, to minimize application downtime, and in integrated development environments, to provide live programming support. Nevertheless, these two scenarios present different challenges, so Dynamic Software Update (DSU) solutions tend to be designed specifically for only one of these use cases. For example, DSUs for live programming typically do not implement safe-point detection or instance migration, while production DSUs require manual generation of patches and lack IDE integration. These solutions also have a limited ability to update themselves or the language core libraries, and some of them impose execution penalties outside the update window. In this PhD, we propose a unified DSU named gDSU for both live programming and production environments. gDSU provides safe update point detection using call-stack manipulation and a reusable instance migration mechanism to minimize manual intervention in patch generation. It also supports updating the core language libraries as well as the update mechanism itself, thanks to its incremental copy of the modified objects and its atomic commit operation. gDSU does not affect the global performance of the application and presents only a run-time penalty during the update window. For example, gDSU is able to apply an update impacting 100,000 instances in 1 second, making the application unresponsive for only 250 milliseconds. The rest of the time the application runs normally while gDSU looks for a safe update point, at which the modified elements are copied. We also present two extensions of gDSU to support transactional live programming and atomic automatic refactorings, which increase the usability of live programming environments.
APA, Harvard, Vancouver, ISO, and other styles
4

Pham, Thanh H. "Dynamic Update Techniques for Online Maps and Attributes Data". NSUWorks, 2001. http://nsuworks.nova.edu/gscis_etd/771.

Full text of the source
Abstract:
Online databases containing geographic and related tabular data for maps and attributes often require continuous updates from widely distributed sources afield. For some applications, these data are dynamic, and thus are of little value if they do not reflect the latest information or changes. A status map that depicts graphically temporal data affecting accountability is an example of this type of data. How can accommodations be made collectively for the perpetual data updates in the database and the need to deliver online information in real time without making concessions? The goal of the dissertation was to analyze and evaluate techniques and technology for data collection and storage, online data delivery, and real-time upload. The result of this analysis culminated in the design and prototype of a system that allowed real-time delivery of up-to-date maps and attributes information. A literature review revealed that an ample amount of research material existed on the theory and practice of developing dynamic update techniques. Despite that fact, no research literature was available that specifically dealt with dynamic update techniques that provide for real-time delivery of up-to-date maps while allowing online update of attributes information. This dissertation was the first attempt at providing research material in this important area. The procedure consisted of five major steps encompassing a number of small steps, and culminated in the development of a prototype. The steps included gathering data collection and storage information, investigating technological advances in data delivery and access, studying dynamic update techniques, assessing the feasibility of an implementation solution, and developing a prototype. The results revealed that the dynamic update technique as implemented in the prototype met the need for timely delivery of accountability, geospatial, and metadata information within an infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
5

Anderson, Gabrielle. "Behavioural properties and dynamic software update for concurrent programmes". Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/353281/.

Full text of the source
Abstract:
Software maintenance is a major part of the development cycle. The traditional methodology for rolling out an update to existing programs is to shut down the system, modify the binary, and restart the program. Downtime has significant disadvantages. In response to such concerns, researchers and practitioners have investigated how to perform updates on running programs whilst maintaining various desired properties. In a multi-threaded setting this is further complicated by the interleaving of different threads' actions. In this thesis we investigate how to prove that safety and liveness are preserved when updating a program. We present two possible approaches; the main intuition behind each is to find quiescent points where updates are safe. The first approach requires global synchronisation, and is more generally applicable, but can delay updates indefinitely. The second restricts the class of programs that can be updated, but permits update without global synchronisation and guarantees that updates are applied. We provide full proofs of all relevant properties.
APA, Harvard, Vancouver, ISO, and other styles
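The first approach summarised above (global synchronisation at quiescent points) can be illustrated with a toy sketch; this is illustrative only, the thesis works with formal behavioural properties, and the names below are invented:

```python
import threading

# Toy sketch: each unit of work runs under a global lock, so the points
# between units are quiescent and an update may safely be applied there.

update_lock = threading.Lock()
handler = lambda x: x + 1        # the code an update will replace

def worker(values, out):
    for v in values:
        with update_lock:        # between iterations the thread is quiescent
            out.append(handler(v))

def apply_update(new_handler):
    global handler
    with update_lock:            # blocks until no unit of work is in flight
        handler = new_handler

out = []
worker([1, 2], out)              # old handler: increments
apply_update(lambda x: x * 10)   # safe: applied at a quiescent point
worker([3], out)                 # new handler: multiplies
print(out)                       # [2, 3, 30]
```

The cost of this scheme matches the trade-off the abstract names: the global lock serialises workers, and an updater can be delayed indefinitely if some thread never reaches a quiescent point.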
6

Mensah, Pernelle. "Generation and Dynamic Update of Attack Graphs in Cloud Providers Infrastructures". Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0011.

Full text of the source
Abstract:
In traditional environments, attack graphs can paint a picture of the security exposure of the environment. Indeed, they represent a model depicting the many steps an attacker can take to compromise an asset. They can serve as a basis for automated risk assessment, relying on the identification and valuation of critical assets in the network. This makes it possible to design proactive and reactive counter-measures for risk mitigation, and can be leveraged for security monitoring and network hardening. Our thesis aims to apply a similar approach in Cloud environments, which implies considering the new challenges incurred by these modern infrastructures, since the majority of attack graph methods were designed with traditional environments in mind. Novel virtualization attack scenarios, as well as inherent properties of the Cloud, namely elasticity and dynamism, are a cause for concern. To realize this objective, a thorough inventory of virtualization vulnerabilities was performed for the extension of existing vulnerability templates. Based on an attack graph representation model suited to the Cloud scale, we were able to leverage Cloud and SDN technologies to build Cloud attack graphs and maintain them in an up-to-date state. Algorithms able to cope with the frequent rate of change occurring in virtualized environments were designed and extensively tested on a real-scale Cloud platform for performance evaluation, confirming the validity of the methods proposed in this thesis and enabling Cloud administrators to maintain an up-to-date Cloud attack graph.
APA, Harvard, Vancouver, ISO, and other styles
7

Tumati, Pradeep. "Software Hot Swapping". Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31362.

Full text of the source
Abstract:
The emergence of the Internet has sparked a tremendous explosion in the special class of systems called mission-critical systems. These systems are so vital to their intended tasks that they must operate continuously. Two problems affect them: unplanned, and therefore disastrous, downtime, and planned downtime for software maintenance. As the pressure to keep these systems operating continuously increases, scheduling downtime becomes complex. However, dynamically modifying mission-critical systems without disruption can reduce the need for planned downtime. Every executing process has executable code tightly coupled with an associated state, which continuously changes as the code executes. A dynamic modification at this juncture involves modifying the executable code and the state present within the binary image of the associated process. An ill-timed modification can create runtime incompatibilities that are hard to rectify and eventually cause a system crash. The purpose of the research in this thesis is to examine the causes of incompatibilities and propose the design of a dynamic modification technique: Software Hot Swapping. To achieve these objectives, the researcher proposes mechanisms that can prevent these incompatibilities, examines the characteristics and implementation issues of such mechanisms, and demonstrates dynamic modification with a simple prototype Hot Swapping program.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
8

Yin, Li. "Adaptive Background Modeling with Temporal Feature Update for Dynamic Foreground Object Removal". DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5040.

Full text of the source
Abstract:
In the study of computer vision, background modeling is a fundamental and critical task in many conventional applications. This thesis presents an introduction to background modeling and various computer vision techniques for estimating the background model to achieve the goal of removing dynamic objects in a video sequence. The process of estimating the background model with temporal changes in the absence of foreground moving objects is called adaptive background modeling. In this thesis, three adaptive background modeling approaches are presented for the purpose of developing "teacher removal" algorithms. First, an adaptive background modeling algorithm based on linear adaptive prediction is presented. Second, an adaptive background modeling algorithm based on statistical dispersion is presented. Third, a novel adaptive background modeling algorithm based on low rank and sparsity constraints is presented. The design and implementation of these algorithms are discussed in detail, and the experimental results produced by each algorithm are presented. Lastly, the results of this research are generalized and potential future research is discussed.
APA, Harvard, Vancouver, ISO, and other styles
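A common baseline behind adaptive background modeling is a per-pixel running average that slowly absorbs temporal change while transient foreground objects barely perturb it. The sketch below shows only that baseline (the thesis's three algorithms, based on linear adaptive prediction, statistical dispersion, and low-rank plus sparsity constraints, are considerably more sophisticated):

```python
def update_background(background, frame, alpha=0.05):
    """Blend a new frame into the background estimate, pixel by pixel
    (exponential moving average); alpha controls adaptation speed."""
    return [[(1 - alpha) * b + alpha * f
             for b, f in zip(bg_row, fr_row)]
            for bg_row, fr_row in zip(background, frame)]

bg = [[10.0, 10.0]]                           # 1x2 toy "image"
bg = update_background(bg, [[10.0, 110.0]])   # a bright object appears
print(bg)                                     # approx. [[10.0, 15.0]]
```

A moving object must persist for many frames before it bleeds into the model, which is what lets such a model estimate the static scene behind, say, a lecturer.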
9

Weißbach, Martin, Nguonly Taing, Markus Wutzler, Thomas Springer, Alexander Schill and Siobhán Clarke. "Decentralized Coordination of Dynamic Software Updates in the Internet of Things". IEEE, 2016. https://tud.qucosa.de/id/qucosa%3A75282.

Full text of the source
Abstract:
Large scale IoT service deployments run on a high number of distributed, interconnected computing nodes comprising sensors, actuators, gateways and cloud infrastructure. Since IoT is a fast-growing, dynamic domain, the implementations of software components are subject to frequent changes addressing bug fixes, quality assurance or changed requirements. To ensure the continuous monitoring and control of processes, software updates have to be conducted while the nodes are operating, without losing any sensed data or actuator instructions. Current IoT solutions usually support the centralized management and automated deployment of updates but are restricted to broadcasting the updates and local update processes at all nodes. In this paper we propose an update mechanism for IoT deployments that considers dependencies between services across multiple nodes involved in a common service and supports a coordinated update of component instances on distributed nodes. We rely on LyRT on all IoT nodes as the runtime supporting local disruption-minimal software updates. Our proposed middleware layer coordinates updates on a set of distributed nodes. We evaluated our approach using a demand response scenario from the smart grid domain.
APA, Harvard, Vancouver, ISO, and other styles
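At its simplest, coordinating an update across dependent nodes reduces to updating components in dependency order, so a caller never runs new code against a service still exposing the old interface. The sketch below shows only that ordering step, with invented component names; the paper's LyRT-based middleware additionally handles disruption-minimal local swaps and in-flight messages:

```python
from graphlib import TopologicalSorter

def update_order(depends_on):
    """depends_on maps each component to the components it calls;
    returns an update order with dependencies first."""
    return list(TopologicalSorter(depends_on).static_order())

# The cloud service calls the gateway; the gateway calls sensor and actuator.
deps = {"cloud": {"gateway"}, "gateway": {"sensor", "actuator"}}
order = update_order(deps)
print(order)  # dependencies first: "gateway" precedes "cloud"
```

A decentralized variant, as in the paper, must reach the same ordering without a single coordinator holding the whole graph.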
10

Sornil, Ohm. "Parallel Inverted Indices for Large-Scale, Dynamic Digital Libraries". Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/26131.

Full text of the source
Abstract:
The dramatic increase in the amount of content available in digital forms gives rise to large-scale digital libraries, targeted to support millions of users and terabytes of data. Retrieving information from a system of this scale in an efficient manner is a challenging task due to the size of the collection as well as the index. This research deals with the design and implementation of an inverted index that supports searching for information in a large-scale digital library, implemented atop a massively parallel storage system. Inverted index partitioning is studied in a simulation environment, aiming at a terabyte of text. As a result, a high performance partitioning scheme is proposed. It combines the best qualities of the term and document partitioning approaches in a new Hybrid Partitioning Scheme. Simulation experiments show that this organization provides good performance over a wide range of conditions. Further, the issues of creation and incremental updates of the index are considered. A disk-based inversion algorithm and an extensible inverted index architecture are described, and experimental results with actual collections are presented. Finally, distributed algorithms to create a parallel inverted index partitioned according to the hybrid scheme are proposed, and performance is measured on a portion of the equipment that normally makes up the 100 node Virginia Tech PetaPlex™ system.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
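The hybrid scheme mentioned above combines the two classic ways of splitting an inverted index across nodes. A minimal sketch of those two baselines, with toy data structures rather than the dissertation's disk-based implementation:

```python
def _term_node(term, n_nodes):
    # Deterministic toy hash so the example is reproducible.
    return sum(map(ord, term)) % n_nodes

def document_partition(docs, n_nodes):
    """Each node indexes a disjoint subset of the documents; any term may
    appear on every node, so a query fans out to all nodes."""
    parts = [dict() for _ in range(n_nodes)]
    for doc_id, text in docs.items():
        part = parts[doc_id % n_nodes]
        for term in text.split():
            part.setdefault(term, set()).add(doc_id)
    return parts

def term_partition(docs, n_nodes):
    """Each node owns the complete posting list for a disjoint subset of
    terms; a query touches only the nodes owning its terms."""
    parts = [dict() for _ in range(n_nodes)]
    for doc_id, text in docs.items():
        for term in text.split():
            parts[_term_node(term, n_nodes)].setdefault(term, set()).add(doc_id)
    return parts

docs = {0: "apple banana", 1: "banana cherry"}
tp = term_partition(docs, 2)
# With term partitioning, the full posting list for "banana" lives on one node:
assert {0, 1} in [p.get("banana") for p in tp]
```

Each baseline trades query fan-out against load balance and update locality, which is why a hybrid of the two can outperform both.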
11

Khanna, Nikita. "A Novel Update to Dynamic Q Algorithm and a Frequency-fold Analysis for Aloha-based RFID Anti-Collision Protocols". Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5844.

Full text of the source
Abstract:
Radio frequency identification (RFID) systems are increasingly used for a wide range of applications from supply chain management to mobile payment systems. In a typical RFID system, there is a reader/interrogator and multiple tags/transponders, which can communicate with the reader. If more than one tag tries to communicate with the reader at the same time, a collision occurs resulting in failed communications, which becomes a significantly more important challenge as the number of tags in the environment increases. Collision reduction has been studied extensively in the literature with a variety of algorithm designs specifically tailored for low-power RFID systems. In this study, we provide an extensive review of existing state-of-the-art time domain anti-collision protocols which can generally be divided into two main categories: 1) aloha based and 2) tree based. We explore the maximum theoretical gain in efficiency with a 2-fold frequency division in the ultra-high frequency (UHF) band of 902-928 MHz used for RFID systems in the United States. We analyze how such a modification would change the total number of collisions and improve efficiency for two different anti-collision algorithms in the literature: a relatively basic framed-slotted aloha and a more advanced reservation slot with multi-bits aloha. We also explore how a 2-fold frequency division can be implemented using analog filters for semi-passive RFID tags. Our results indicate significant gains in efficiency for both aloha algorithms especially for midsize populations of tags up to 50. Finally, we propose two modifications to the Q-algorithm, which is currently used as part of the industry standard EPC Class 1 Generation 2 (Gen 2) protocol. 
The Q-Slot-Collision-Counter (QSCC) and Q-Frame-Collision-Counter (QFCC) algorithms change the size of the frame more dynamically, depending on the number of colliding tags in each time slot, estimated with the help of a radar cross-section technique, whereas the standard Q-algorithm uses a fixed parameter for frame adjustment. In fact, the QFCC algorithm is completely independent of the variable "C", which is used in the standard protocol for modifying the frame size. Through computer simulations, we show that the QFCC algorithm is more robust and provides an average efficiency gain of more than 6% on large populations of tags compared to the existing standard.
APA, Harvard, Vancouver, ISO, and other styles
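For context, the standard Gen 2 Q-algorithm that QSCC and QFCC modify keeps a floating-point parameter Qfp and nudges it by a fixed step C after idle or collision slots; the frame size is 2^Q with Q = round(Qfp). A simplified single-reader simulation of that baseline (parameter values here are illustrative, not taken from the thesis):

```python
import random

def q_algorithm(n_tags, q_fp=4.0, c=0.3, seed=42):
    """Simulate slotted-Aloha singulation under the standard Q-algorithm;
    return the number of slots needed to read every tag."""
    rng = random.Random(seed)
    pending, slots = n_tags, 0
    while pending > 0:
        frame = 2 ** round(q_fp)
        # Each pending tag draws a slot counter; those drawing 0 reply now.
        replies = sum(1 for _ in range(pending) if rng.randrange(frame) == 0)
        slots += 1
        if replies == 1:                  # exactly one tag: successful read
            pending -= 1
        elif replies == 0:                # idle slot: shrink the frame
            q_fp = max(0.0, q_fp - c)
        else:                             # collision: grow the frame (fixed C)
            q_fp = min(15.0, q_fp + c)
    return slots

print(q_algorithm(20))  # at least 20 slots, since each read consumes one
```

The thesis's QSCC and QFCC variants replace the fixed-C adjustment in the collision branch with a step driven by the estimated number of colliding tags.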
12

Stephens, Sonia. "Placing birds on a dynamic evolutionary map: Using digital tools to update the evolutionary metaphor of the "tree of life"". Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5519.

Full text of the source
Abstract:
This dissertation describes and presents a new type of interactive visualization for communicating about evolutionary biology, the dynamic evolutionary map. This web-based tool utilizes a novel map-based metaphor to visualize evolution, rather than the traditional “tree of life.” The dissertation begins with an analysis of the conceptual affordances of the traditional tree of life as the dominant metaphor for evolution. Next, theories from digital media, visualization, and cognitive science research are synthesized to support the assertion that digital media tools can extend the types of visual metaphors we use in science communication in order to overcome conceptual limitations of traditional metaphors. These theories are then applied to a specific problem of science communication, resulting in the dynamic evolutionary map. Metaphor is a crucial part of scientific communication, and metaphor-based scientific visualizations, models, and analogies play a profound role in shaping our ideas about the world around us. Users of the dynamic evolutionary map interact with evolution in two ways: by observing the diversification of bird orders over time and by examining the evidence for avian evolution at several places in evolutionary history. By combining these two types of interaction with a non-traditional map metaphor, evolution is framed in a novel way that supplements traditional metaphors for communicating about evolution. This reframing in turn suggests new conceptual affordances to users who are learning about evolution. Empirical testing of the dynamic evolutionary map by biology novices suggests that this approach is successful in communicating evolution differently than in existing tree-based visualization methods. Results of evaluation of the map by biology experts suggest possibilities for future enhancement and testing of this visualization that would help refine these successes. 
This dissertation represents an important step forward in the synthesis of scientific, design, and metaphor theory, as applied to a specific problem of science communication. The dynamic evolutionary map demonstrates that these theories can be used to guide the construction of a visualization for communicating a scientific concept in a way that is both novel and grounded in theory. There are several potential applications in the fields of informal science education, formal education, and evolutionary biology for the visualization created in this dissertation. Moreover, the approach suggested in this dissertation can potentially be extended into other areas of science and science communication. By placing birds onto the dynamic evolutionary map, this dissertation points to a way forward for visualizing science communication in the future.
Ph.D.
Doctorate
Arts and Humanities
Texts and Technology
APA, Harvard, Vancouver, ISO, and other styles
13

Tanoh, Henry-Gertrude. "Implementation of Post-Build Configuration for Gateway Electronic Control Unit : Gateway ECU to enable third-party update". Thesis, KTH, Radio Systems Laboratory (RS Lab), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231545.

Full text of the source
Abstract:
The development of embedded software in the automotive industry has reached a level of complexity which is unmaintainable by traditional approaches. The AUTomotive Open System Architecture (AUTOSAR) was created to standardize automotive software. In this architecture, the development of software is spread, in general, between three different entities: Original Equipment Manufacturers (OEMs), e.g. Volvo; Tier-1 Suppliers, such as Vector; and Tier-2 Suppliers, for example, Renesas Microelectronics. Another methodology that has emerged is to develop Electronic Control Units (ECUs) domain-wise: infotainment, chassis & safety, powertrain, and body & security. To allow inter-domain communication, the state of the art for fast and reliable communication is to use a gateway ECU. The gateway ECU is a crucial component in the electrical/electronic (E/E) architecture of a vehicle. In AUTOSAR, a third party, different from the car manufacturer, typically implements the gateway ECU. A major feature of a gateway ECU is to provide highly flexible configuration. This flexibility allows the car manufacturer (OEM) to fit the gateway ECU to different requirements and product derivations. This thesis investigates the implementation of post-build configuration for a gateway ECU. First, the thesis provides the reader with some background on AUTOSAR and the current E/E architecture of the gateway ECU. The protocols used by the gateway are explained. The design of a potential solution and its implementation are discussed. The implementation is evaluated through regression tests of the routing functionality. Processing time, memory use, and scaling of the solution are also taken into account. The results of the design and the implementation, if judged adequate, could be used as a springboard to enable post-build configuration in an existing gateway ECU architecture. The results could consolidate the path towards full conformance to AUTOSAR.
Inbyggda system har okat i fordonsindustrin. Utvecklingen av dessa inbyggda programvara har varit komplex och ar inte genomforbar per ett enhet. Idag ar utvecklingen gjort av tre foretag: en OEM (Original Equipement Manufacturer), Tier-1 leverantorer som tillhandahaller mjukvara till OEMs, Tier-2 leverantorer som tillhandahaller elektroniska styrenheter (ECU) hardvaror. Förmedlingsnod ECU är en viktig komponent i ett fordons elektriska/elektroniska (E/E) arkitektur. En tredje part implementerar, som skiljer sig från OEM, de flesta funktionerna av den förmedlingsnod ECU. En viktig egenskap för en förmedlingsnod är att tillhandahålla en mycket flexibel konfiguration. Denna flexibilitet tillåter (OEM) att anpassa förmedlingsnod till olika kraven och fordonarkitekturer. Denna avhandling undersöker genomförandet av Post-build konfigurationen, ocksa kallad dynamisk konfigurationen för en förmedlingsnod ECU. För det första gers bakgrund på AUTOSAR och den nuvarande E/E arkitekturen för den ECU. De kommunikation protokoll som används förklaras. Utformningen av en potentiell lösning och dess genomförande diskuteras. Implementeringen utvärderas genom regressionstest av routingsfunktionaliteten. Behandlingstid, minneseffektivitet och skalning av lösningen beaktas också. Resultaten av konstruktionen och genomförandet om det bedömdes som lämpligt skulle kunna användas som ett springbräda för att möjliggöra postbyggnad i en befintlig förmedlingsnod arkitektur. Resultaten kan konsolidera vägen mot full överensstämmelse med AUTOSAR.
The development of embedded systems in the automotive industry has reached a very high level of complexity, hence the need for new methodologies. The AUTomotive Open System ARchitecture (AUTOSAR) was created to establish standards for development in the automotive industry. In the AUTOSAR architecture, the development of embedded software is, in general, divided between three parties: Original Equipment Manufacturers (OEMs), for example Renault; Tier-1 suppliers of software and tools, for example Elektrobit; and Tier-2 suppliers of electronic boards for the automotive industry, such as Renesas ST. ECU development is separated by domain: multimedia, chassis, powertrain and interior. Inter-domain communication passes through a gateway ECU. The gateway ECU is essential in the electronic architecture of the vehicle. In AUTOSAR, this ECU is supplied by a third party, different from the car manufacturer. It is therefore necessary for the manufacturer to be able to configure the gateway ECU without going back to the vendor; for example, the manufacturer may decide, upon receiving the software, to add a new route to the gateway. This aspect is known as post-build configuration in AUTOSAR. The goal of this internship was the design and implementation of post-build configuration for a gateway ECU. First, AUTOSAR and the electronic architecture of a gateway ECU are detailed, and the communication protocols are also described. Then, the design and implementation choices are discussed. The implementation is evaluated with regression tests of the routing functionality. Finally, the solution is evaluated on the criteria of routing performance, memory-consumption efficiency, and its ability to be integrated into a final product.
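A toy sketch of the post-build idea described above: the routing table is data loaded at startup, so the OEM can change routes without recompiling the gateway code. All field names and the JSON layout are invented for illustration; a real AUTOSAR gateway stores the configuration as a binary blob in a dedicated flash section, not JSON.

```python
import json

def load_routing_table(config_json):
    """Build a frame routing table from a post-build configuration document."""
    cfg = json.loads(config_json)
    return {(r["src_bus"], r["src_id"]): (r["dst_bus"], r["dst_id"])
            for r in cfg["routes"]}

def route(table, bus, frame_id, payload):
    """Forward a frame if a route exists for (bus, id); drop it otherwise."""
    dst = table.get((bus, frame_id))
    return (dst[0], dst[1], payload) if dst else None

# The configuration can be replaced after the gateway software is built:
config = '{"routes": [{"src_bus": "CAN0", "src_id": 256, "dst_bus": "CAN1", "dst_id": 512}]}'
table = load_routing_table(config)
```

Swapping in a different `config` string changes the gateway's behaviour without touching its code, which is the essence of post-build configurability.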
14

Lounas, Razika. "Validation des spécifications formelles de la mise à jour dynamique des applications Java Card". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0085/document.

Abstract:
Dynamic Software Updating (DSU) consists in updating running programs on the fly without any downtime. This feature is interesting for critical applications that are in continual evolution and that require high availability. The aim of our work is to formally verify the correctness of dynamic software updating in Java Card applications by studying the system EmbedDSU. To do so, we first established the correctness of code updates. We achieved this by defining formal semantics for update operations on Java Card bytecode in order to ensure the type safety of updates. Then, we proposed an approach to verify the semantics of updated programs by defining a predicate transformation. Afterwards, we turned to the verification of correctness concerning safe update point detection, using model checking. This verification first allowed us to fix a deadlock situation in the system and then to establish other correctness properties: activeness safety and updatability. Data update is performed through the application of state transfer functions. For this aspect, we proposed a solution to apply state transfer functions while preserving the consistency of the Java Card virtual machine heap and allowing high expressiveness when writing state transfer functions.
15

Loyet, Raphaël. "Dynamic sound rendering of complex environments". Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995328.

Abstract:
Many studies have been carried out over the last twenty years in the field of auralization, which consists in making the results of an acoustic simulation audible. These studies have mainly focused on propagation algorithms and the reproduction of the acoustic field in complex environments. Currently, much work concerns real-time sound rendering. This thesis addresses the problem of dynamic sound rendering of complex environments along four axes: sound-wave propagation, signal processing, spatial perception of sound, and software optimization. In the field of propagation, a method for analysing the variety of algorithms found in the literature is proposed. From this analysis, two algorithms dedicated to real-time reproduction of the specular and diffuse fields were extracted. In the field of signal processing, reproduction is performed using an optimized binaural spatialization algorithm for the most significant specular paths and a convolution algorithm on the graphics card for the reproduction of the diffuse field. The most significant paths are extracted using a perceptual model based on temporal and spatial masking of the specular contributions. Finally, the implementation of these algorithms on recent parallel architectures, taking into account new multi-core architectures and new graphics cards, is presented.
16

Lülf, Fritz Adrian. "An integrated method for the transient solution of reduced order models of geometrically nonlinear structural dynamic systems". Phd thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00957455.

Abstract:
For repeated transient solutions of geometrically nonlinear structures the numerical effort often poses a major obstacle. Thus, the introduction of a reduced order model, which takes the nonlinear effects into account and accelerates the calculations considerably, is often necessary. This work yields a method that allows for rapid, accurate and parameterisable solutions by means of a reduced model of the original structure. The structure is discretised and its dynamic equilibrium described by a matrix equation. The projection on a reduced basis is introduced to obtain the reduced model. A comprehensive numerical study on several common reduced bases shows that the simple introduction of a constant basis is not sufficient to account for the nonlinear behaviour. Three requirements for a rapid, accurate and parameterisable solution are derived: the solution algorithm has to take into account the nonlinear evolution of the solution, the solution has to be independent of the nonlinear finite element terms, and the basis has to be adapted to external parameters. Three approaches are provided, each responding to one requirement: the update and augmentation of the basis, the polynomial formulation of the nonlinear terms, and the interpolation of the basis. These approaches are assembled into the integrated method, framed by a Newmark-type time-marching algorithm. The application of the integrated method to test cases with geometrically nonlinear finite elements confirms that this method attains the initial aim of a rapid, accurate and parameterisable transient solution.
17

Quan, Nguyen. "Distributed Game Environment : A Software Product Line for Education and Research". Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-29077.

Abstract:
A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or demand. Software product lines capitalize on commonality and manage variation to reduce the time, effort, cost and complexity of creating and maintaining products in a product line. Therefore, by reusing core assets, a software product line can address problems such as cost, time-to-market, quality, the complexity of developing and maintaining variants, and the need to respond quickly to market demands. The development of a software product line is different from conventional software development, and in the area of product-line education and research there is a lack of a suitable, purposefully designed and developed software product line (SPL) that can be used for educational or research purposes. In this thesis we have developed a software product line for a turn-based, two-player distributed board game environment that can be used for educational and research purposes. The software product line supports dynamic runtime updates, including games, chat, and security features, via the OSGi framework. Furthermore, it supports remote gameplay via a local area network and dynamic runtime activity recovery. We delivered a product configuration tool that is used to derive and configure products from the core assets based on feature selection. We have also modeled the software product line's features and documented its requirements, architecture and user guides. Furthermore, we performed functional and integration tests of the software product line to ensure that the requirements are met according to the requirements specification prescribed by the stakeholders.
18

Stoyle, Gareth Paul. "A theory of dynamic software updates". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612746.

19

Spetz-Nyström, Simon. "Dynamic updates of mobile apps using JavaScript". Thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119351.

Abstract:
Updates are a natural part of the life cycle of an application. The traditional way of updating an application, by stopping it, replacing it with the new version and restarting it, is lacking in many ways. There has been previous research in the field of dynamic software updates (DSU) that attempts to address this problem by updating the app while it is running. Most of the previous research has focused on static languages like C and Java; research with dynamic languages has been lacking. This thesis takes advantage of the dynamic features of JavaScript in order to allow dynamic updates of applications for mobile devices. The solution is implemented and used to answer questions about how correctness can be ensured and what state transfer needs to be manually written by a programmer. The conclusion is that most failures that occur as the result of an update and are in need of a manually written state transfer can be put into one of three categories. To verify the correctness of an update, tests for these types of failures should be performed.
20

Kim, Dong Kwan. "Applying Dynamic Software Updates to Computationally-Intensive Applications". Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28206.

Abstract:
Dynamic software updates change the code of a computer program while it runs, thus saving the programmer's time and using computing resources more productively. This dissertation establishes the value of, and recommends practices for, applying dynamic software updates to computationally-intensive applications, a computing domain characterized by long-running computations, expensive computing resources, and a tedious deployment process. This dissertation argues that updating computationally-intensive applications dynamically can reduce their time-to-discovery metrics (the total time it takes from posing a problem to arriving at a solution) and, as such, should become an intrinsic part of their software lifecycle. To support this claim, this dissertation presents the following technical contributions: (1) a distributed consistency algorithm for synchronizing dynamic software updates in a parallel HPC application, (2) an implementation of the Proxy design pattern that is more efficient than the existing implementations, and (3) a dynamic update approach for Java Virtual Machine (JVM)-based applications using the Proxy pattern to offer flexibility and efficiency advantages, making it suitable for computationally-intensive applications. The contributions of this dissertation are validated through performance benchmarks and case studies involving computationally-intensive applications from the bioinformatics and molecular dynamics simulation domains.
Ph. D.
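The core of a Proxy-based dynamic update can be sketched in a few lines: clients hold a stable proxy object, and the implementation behind it is swapped at runtime after a state-transfer function carries the computation's progress across versions. This is only an illustrative Python sketch, not the dissertation's JVM implementation; the class names and the state-transfer function are invented.

```python
class Proxy:
    """Clients hold the proxy; the implementation behind it can be swapped at runtime."""
    def __init__(self, impl):
        self._impl = impl

    def __getattr__(self, name):
        # Delegate every attribute access to the current implementation.
        return getattr(self._impl, name)

    def hot_swap(self, new_impl, transfer_state):
        # Apply the state-transfer function, then switch implementations.
        transfer_state(self._impl, new_impl)
        self._impl = new_impl


class SimulationV1:
    def __init__(self):
        self.steps_done = 0
    def step(self):
        self.steps_done += 1
        return self.steps_done * 2          # hypothetical v1 result


class SimulationV2:
    def __init__(self):
        self.steps_done = 0
    def step(self):
        self.steps_done += 1
        return self.steps_done * 2 + 1      # hypothetical improved result


def carry_over(old, new):
    new.steps_done = old.steps_done         # preserve progress across the update


sim = Proxy(SimulationV1())
sim.step(); sim.step()                      # long-running computation in progress
sim.hot_swap(SimulationV2(), carry_over)    # update without restarting
print(sim.step())                           # prints 7: step 3 under v2 code
```

The point for computationally-intensive applications is the last three lines: the computation keeps its accumulated state and never restarts from scratch.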
21

Pukall, Mario [Verfasser], i Gunter [Akademischer Betreuer] Saake. "JAVADAPTOR : unrestricted dynamic updates of Java applications / Mario Pukall. Betreuer: Gunter Saake". Magdeburg : Universitätsbibliothek, 2012. http://d-nb.info/1051445507/34.

22

Datta, Arijeet Suryadeep. "Self-organised critical system : Bak-Sneppen model of evolution with simultaneous update". Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395826.

Abstract:
Many chaotic and complicated systems cannot be analysed by traditional methods. In 1987 P. Bak, C. Tang, and K. A. Wiesenfeld developed a new concept called Self-Organised Criticality (SOC) to explain the behaviour of composite systems containing a large number of elements that interact over a short range. In general this theory applies to complex systems that naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system. It was later shown that many complex phenomena, such as flux pinning in superconductors, the dynamics of granular systems, earthquakes, droplet formation and biological evolution, show signs of SOC. The dynamics of complex systems in nature often occurs in terms of punctuations, or avalanches, rather than following a smooth, gradual path. Extremal dynamics is used to model the temporal evolution of many different complex systems; specific examples include the Bak-Sneppen evolution model, the Sneppen interface depinning model, the Zaitsev flux creep model, invasion percolation, and several other depinning models. This thesis considers extremal dynamics at constant flux, where the M>1 smallest barriers are simultaneously updated, as opposed to models in the limit of zero flux where only the smallest barrier is updated. For concreteness, we study the Bak-Sneppen (BS) evolution model [Phys. Rev. Lett. 71, 4083 (1993)]; M=1 corresponds to the original BS model. The aim of the present work is to understand analytically, through mean field theory, the random neighbour version of the generalised BS model and to verify the results against computer simulations. This is done in order to scrutinise the trustworthiness of our numerical simulations. The computer simulations are found to agree with the results obtained from the analytical approach. Due to this agreement, we know that our simulations will produce reliable results for the nearest neighbour version of the generalised BS model.
Since the nearest neighbour version of the generalised BS model cannot be solved analytically, we have to rely on simulations. We investigate the critical behaviour of both versions of the model using scaling theory. We look at various distributions and their scaling properties, and also measure the critical exponents accurately, verifying whether the scaling relations hold. The effect of increasing from M=1 to M>1 is surprising, with a dramatic decrease in the size of the scaling regime.
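The generalised nearest-neighbour dynamics described above fits in a few lines: at each step the M smallest fitness barriers, together with their nearest neighbours on a ring, are replaced by fresh random values. The sketch below is a minimal illustration; the system size, M, step count and seed are arbitrary choices, and M=1 recovers the original Bak-Sneppen model.

```python
import random

def bak_sneppen(n=200, m=2, steps=10_000, seed=1):
    """Generalised Bak-Sneppen model on a ring of n sites: at each step the
    m smallest fitness barriers and their nearest neighbours are redrawn."""
    rng = random.Random(seed)
    barriers = [rng.random() for _ in range(n)]
    for _ in range(steps):
        # indices of the m smallest barriers (m=1: original extremal dynamics)
        worst = sorted(range(n), key=barriers.__getitem__)[:m]
        for i in worst:
            for j in (i - 1, i, (i + 1) % n):   # site and its two neighbours
                barriers[j] = rng.random()      # Python's -1 wraps the ring
    return barriers
```

After a long run the barrier distribution self-organises so that almost all barriers sit above a critical threshold, which is the signature of SOC the thesis analyses quantitatively.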
23

Moosa, Naseera. "An updated model of the krill-predator dynamics of the Antarctic ecosystem". Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25490.

Abstract:
The objective of this thesis is to update the Mori-Butterworth (2006) model of the krill-predator dynamics of the Antarctic ecosystem. Their analysis aimed to determine whether predator-prey interactions alone could broadly explain the observed population trends of the species considered in their model. In this thesis, the Antarctic ecosystem is outlined briefly and details are given of the main krill-eating predators, including whales, seals, fish and penguins, together with a historical record of human harvesting in the region. The abundances and per capita krill consumption of the krill predators are calculated and used to determine the main krill predators to be used in the updated model. These predators are found to be the blue, fin, humpback and minke whales and the crabeater and Antarctic fur seals. The three main ship surveys (IDCR/SOWER, JARPA and JSV) used to estimate whale abundance, and the abundance estimation method itself (called distance sampling), are summarised. Updated estimates of abundance and trends are listed for the main krill predators. Updated estimates for the biological parameters needed for the ecosystem model are also reported, and include some differences in approach from those adopted for the Mori-Butterworth model. The background to the hypothesis of a krill surplus during the mid-20th century is discussed, as well as the effects of environmental change, in the context of possible causes of the population changes of the main krill-feeding predators over the last century. Key features of the results of the updated model are the inclusion of a depensatory effect for Antarctic fur seals in the krill and predator dynamics, and the imposition of bounds on Ka (the carrying capacity of krill in Region a, in the absence of its predators); these lead to a better fit overall.
A particular difference in results compared to those from the Mori-Butterworth model is more oscillatory behaviour in the trajectories for krill and some of its main predators. This likely results from the different approach to modelling natural mortality for krill and warrants further investigation. That may in turn resolve a key mismatch in the model, which predicts minke oscillations in the Indo-Pacific region to be out of phase with results from a SCAA assessment of these whales. A number of other areas for suggested future research are listed. The updated model presented in this thesis requires further development before it might be considered sufficiently reliable for providing advice for the regulation and implementation of suitable conservation and harvesting strategies in the Antarctic.
24

Narra, Hemanth, i Egemen K. Çetinkaya. "Performance Analysis of AeroRP with Ground Station Updates in Highly-Dynamic Airborne Telemetry Networks". International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595669.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Highly dynamic airborne telemetry networks pose unique challenges for data transmission. Domain-specific multi-hop routing protocols are necessary to cope with these challenges and AeroRP is one such protocol. In this paper, we discuss the operation of various AeroRP modes and analyse their performance using the ns-3 network simulator. We compare the performance of beacon, beaconless, and ground station (GS) modes of AeroRP. The simulation results show the advantages of having a domain-specific routing protocol and also highlight the importance of ground station updates in discovering routes.
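The next-hop decision at the heart of such a domain-specific protocol can be illustrated with a greedy geographic forwarding sketch. This is a deliberate simplification written for illustration only: AeroRP additionally weighs node velocity so as to favour neighbours flying toward the destination, which this position-only sketch omits.

```python
import math

def next_hop(node_pos, neighbors, dest_pos):
    """Greedy geographic forwarding: pick the neighbor making the most
    progress toward the destination; return None if no neighbor is closer
    than we are (avoids forwarding loops)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(neighbors[n], dest_pos))
    if dist(neighbors[best], dest_pos) < dist(node_pos, dest_pos):
        return best
    return None
```

Ground-station updates matter in this setting because they refresh the position (and, in AeroRP, velocity) information this decision depends on.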
25

Keppeler, Jens. "Answering Conjunctive Queries and FO+MOD Queries under Updates". Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21483.

Abstract:
This thesis investigates the query evaluation problem for fixed queries over fully dynamic databases, where tuples can be inserted or deleted. The task is to design a dynamic algorithm that immediately reports the new result of a fixed query after every database update. In particular, the goal is to construct a data structure that can be updated in constant time after every database update, such that afterwards we are able: to test in constant time whether a given tuple belongs to the query result; to output the number of tuples in the query result; to enumerate all tuples in the new query result with constant delay; and to enumerate the difference between the old and the new query result with constant delay. In the first part, conjunctive queries and unions of conjunctive queries on arbitrary relational databases are considered. The notion of q-hierarchical conjunctive queries (and t-hierarchical conjunctive queries for testing) is introduced, and it is shown that the result of each such query on a dynamic database can be maintained efficiently in the sense described above. Moreover, this notion is extended to aggregate queries. It is shown that the preparation for learning a polynomial regression function can be done in constant time if the training data are taken (and maintained under updates) from the query result of a q-hierarchical query. With logarithmic update time the following routine is supported: upon input of a natural number j, output the j-th tuple that will be enumerated. In the second part, queries in first-order logic (FO) and its extension with modulo-counting quantifiers (FO+MOD) are considered, and it is shown that they can be evaluated efficiently under updates, provided that the dynamic database does not exceed a certain degree bound, and that the counting, testing, enumeration and difference routines are supported.
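A toy illustration of the flavour of constant-time maintenance (not the thesis's actual construction): the sketch below maintains the size of the join of R(x) and S(x) on a shared attribute, one of the simplest hierarchical query shapes, under single-tuple insertions and deletions. Each update costs O(1), and both the count and the membership test are O(1).

```python
from collections import defaultdict

class JoinCounter:
    """Maintains |R join S| on the join attribute under single-tuple updates."""
    def __init__(self):
        self.r = defaultdict(int)   # multiplicity of each key in R
        self.s = defaultdict(int)   # multiplicity of each key in S
        self.count = 0              # current size of the join result

    def insert_r(self, key):
        self.count += self.s[key]   # new R-tuple pairs with every matching S-tuple
        self.r[key] += 1

    def delete_r(self, key):
        self.r[key] -= 1
        self.count -= self.s[key]

    def insert_s(self, key):
        self.count += self.r[key]
        self.s[key] += 1

    def delete_s(self, key):
        self.s[key] -= 1
        self.count -= self.r[key]

    def contains(self, key):
        # constant-time test: does the key contribute to the join result?
        return self.r[key] > 0 and self.s[key] > 0
```

The thesis's q-hierarchical machinery generalises this bookkeeping idea to whole classes of conjunctive queries, with constant-delay enumeration on top.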
26

Bergmark, Max. "Designing a performant real-time modular dynamic pricing system : Studying the performance of a dynamic pricing system which updates in real-time, and its application within the golfing industry". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276241.

Abstract:
In many industries, the prices of products are calculated dynamically to maximize revenue while maintaining customer satisfaction. In this thesis, an approach based on online calculation of prices is investigated, in which customers always receive an up-to-date price. The most common method used today is to update prices at some interval and present the most recently computed price to the customer. This standard method provides fast responses, and mostly accurate ones. However, if the dynamic pricing model can benefit from very fast price updates, an online calculation approach may provide better price accuracy. The main advantage of this approach is the combination of short-term accuracy and long-term stability. Short-term behaviour is handled by the online price calculation with real-time updates, while long-term behaviour is handled by statistical analysis of booking behaviour, which is condensed into a demand curve. This thesis describes the long-term statistical analysis for calculating demand curves, together with the short-term price adjustments, which can benefit both producers and consumers.
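A minimal sketch of the online side of such a system, assuming the long-term demand curve has already been condensed into an expected number of bookings for the moment of the query. All names, parameters and constants below are illustrative, not taken from the thesis.

```python
def dynamic_price(base_price, bookings_so_far, expected_bookings,
                  sensitivity=0.5, floor=0.7, cap=1.3):
    """Adjust the price in real time from the ratio of observed demand
    to the demand curve's expectation; the band [floor, cap] keeps the
    short-term correction within reasonable limits."""
    if expected_bookings <= 0:
        return base_price
    demand_ratio = bookings_so_far / expected_bookings
    multiplier = 1.0 + sensitivity * (demand_ratio - 1.0)
    multiplier = max(floor, min(cap, multiplier))
    return round(base_price * multiplier, 2)
```

Because the function is pure and cheap, it can be evaluated on every request, which is exactly what distinguishes the online approach from periodic batch repricing.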
27

Bondsman, Benjamin. "Numerical modeling and experimental investigation of large deformation under static and dynamic loading". Thesis, Linnéuniversitetet, Institutionen för byggteknik (BY), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104227.

Abstract:
The small-kinematics assumption of classical engineering has been at the centre of structural analysis for decades. In recent years the interest in sustainable and optimized structures, lightweight structures and new materials has grown rapidly as a consequence of the desire to achieve economic sustainability. These problems involve the nonlinear constitutive response of materials and can only be addressed on the basis of geometrically and materially nonlinear analysis. Numerical simulations have become a conventional tool in modern engineering, have proven computational accuracy, and are on the verge of superseding time-consuming and costly experiments. Consequently, this work presents a numerical computational framework for modeling geometrically nonlinear large deformation of isotropic and orthotropic materials under static and dynamic loading. The numerical model is applied to isotropic steel in plane strain and to orthotropic wood in 3D under static and dynamic loading. In plane strain, the Total Lagrangian, Updated Lagrangian, Newmark-β and Energy Conserving Algorithm time-integration methods are compared and evaluated. In 3D, a Total Lagrangian static approach and a Total Lagrangian-based dynamic approach with the Newmark-β time-integration method are proposed to numerically predict the deformation of wood under static and dynamic loading. The numerical model's accuracy is validated through an experiment in which a knot-free pine wood board under large deformation is studied. The results indicate the accuracy and capability of the numerical model in predicting the static and dynamic behaviour of wood under large deformation. By contrast, the classical engineering solution proves inaccurate and incapable of predicting the kinematics of the wood board under the studied conditions.
28

Helbig, Marde. "Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation". Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/28161.

Abstract:
Most optimisation problems in everyday life are not static in nature, have multiple objectives, and have at least two objectives in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs), and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOOPs efficiently, functions are required that resemble real-world DMOOPs, called benchmark functions, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is a lack of standard benchmark functions and performance measures. To address this problem, an overview of the current literature is provided and the shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. The optimisation process of DVEPSO is driven by guides; therefore, various guide update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on its performance. The investigated parameters include approaches to manage boundary constraint violations, approaches to share knowledge between the sub-swarms, and responses to changes in the environment that are applied either to the particles of the sub-swarms or to the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms.
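As a rough illustration of the particle swarm machinery that DVEPSO builds on, here is a minimal single-objective, single-swarm PSO minimising a toy sphere function. The inertia weight and acceleration coefficients are common textbook values; the multi-swarm, multi-objective and dynamic-environment aspects of DVEPSO are omitted.

```python
import random

def sphere(x):
    """Toy objective: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso_minimise(f, dim=2, n_particles=20, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Canonical global-best PSO (minimisation). Returns (best_pos, best_val)."""
    rnd = random.Random(0)  # fixed seed so the sketch is reproducible
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull to pbest + social pull to gbest
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```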
Thesis (PhD)--University of Pretoria, 2012.
Computer Science
29

Momenan, Bahareh. "Development of a Thick Continuum-Based Shell Finite Element for Soft Tissue Dynamics". Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35908.

Abstract:
The goal of the present doctoral research is to create a theoretical framework and develop a numerical implementation for a shell finite element that can potentially achieve higher performance (i.e. a combination of speed and accuracy) than current continuum-based (CB) shell finite elements (FE), in particular in applications related to soft biological tissue dynamics. Specifically, this means complex and irregular geometries, large distortions and large bending deformations, and anisotropic incompressible hyperelastic material properties. A critical review of the underlying theories, formulations, and capabilities of existing CB shell FE revealed that a general nonlinear CB shell FE with the abovementioned capabilities needs to be developed. Herein, we propose the theoretical framework of such a new CB shell FE for dynamic analysis using the total and the incremental updated Lagrangian (UL) formulations and explicit time integration. Specifically, we introduce the geometry and the kinematics of the proposed CB shell FE, as well as the matrices and constitutive relations which need to be evaluated for the total and the incremental UL formulations of the dynamic equilibrium equation. To verify the accuracy and efficiency of the proposed CB shell element, its large bending and distortion capabilities, as well as the accuracy of three different techniques presented for large-strain analysis, we implemented the element in Matlab and tested its application on various geometries, with different material properties and loading conditions. The new high-performance, high-accuracy element is shown to be insensitive to shear and membrane locking, and to initially irregular element shapes.
30

Kocina, Karel. "Studie návrhu vhodného tvaru membránových konstrukcí". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2012. http://www.nusl.cz/ntk/nusl-225357.

Abstract:
This diploma thesis deals with methods for designing the shape of membrane structures. Its main purpose is to analyze topology designs produced by Formfinder and Rhinoceros in RFEM and compare the results, and to test the possibility of designing the shape directly in RFEM.
31

Liang, Weifa (wliang@cs.anu.edu.au). "Designing Efficient Parallel Algorithms for Graph Problems". The Australian National University. Department of Computer Science, 1997. http://thesis.anu.edu.au./public/adt-ANU20010829.114536.

Abstract:
Graph algorithms are concerned with the algorithmic aspects of solving graph problems. The problems are motivated by, and have applications in, diverse areas of computer science, engineering and other disciplines. Problems arising from these areas of application are good candidates for parallelization, since they often have both intense computational needs and stringent response-time requirements. Motivated by these concerns, this thesis investigates parallel algorithms for graph problems that have at least one of the following properties: the problems involve some type of dynamic updates; the sparsification technique is applicable; or the problems are closely related to communications network issues. The models of parallel computation used in our studies are the Parallel Random Access Machine (PRAM) model and practical interconnection network models such as meshes and hypercubes.

Consider a communications network which can be represented by a graph G = (V, E), where V is a set of sites (processors) and E is a set of links connecting the sites (processors). In some cases, we also assign weights and/or directions to the edges in E. Associated with this network are many problems, such as: (i) whether the network is k-edge (k-vertex) connected for fixed k; (ii) whether there are k edge-disjoint (vertex-disjoint) paths between u and v for a given pair of vertices u and v after the network is dynamically updated by adding and/or deleting an edge; (iii) whether the sites in the network can communicate with each other when some sites and links fail; (iv) identifying the first k edges in the network whose deletion results in the maximum increase in routing cost in the resulting network, for fixed k; (v) how to augment the network at optimal cost with a given feasible set of weighted edges so that the augmented network is k-edge (k-vertex) connected; (vi) how to route messages through the network efficiently.
In this thesis we answer the problems mentioned above by presenting efficient parallel algorithms to solve them. As far as we know, most of the proposed algorithms are the first ones in the parallel setting.

Even though most of the problems considered in this thesis are related to communications networks, we also study the classic edge-coloring problem. The outstanding difficulty in solving this problem in parallel is that we do not yet know whether or not it is in NC. In this thesis we present an improved parallel algorithm for the problem which needs O(Δ^{4.5} log^3 Δ log n + Δ^4 log^4 n) time using O(n^2 Δ + n Δ^3) processors, where n is the number of vertices and Δ is the maximum vertex degree. Compared with a previously known result on the same model, we improve the running time by a factor of O(Δ^{1.5}). The non-trivial part is reducing this problem to the edge-coloring update problem. We also generalize this problem to the approximate edge-coloring problem, giving a faster parallel algorithm for the latter case.

Throughout the design and analysis of parallel graph algorithms, we also find that the sparsification technique is very powerful in the design of efficient sequential and parallel algorithms on dense undirected graphs. We believe that this technique may be useful in its own right for guiding the design of efficient sequential and parallel algorithms for problems in other areas as well as in graph theory.
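For context, here is a sequential greedy sketch of the edge-colouring problem discussed in this entry (our illustration, not the thesis's parallel algorithm): greedily assigning the smallest free colour uses at most 2Δ - 1 colours, while the real difficulty is reaching Vizing's Δ + 1 bound, and doing so in parallel.

```python
# Greedy edge-colouring sketch (illustration only): each edge receives the
# smallest colour not already used on an edge sharing one of its endpoints.
# For maximum degree Delta this needs at most 2*Delta - 1 colours.

def greedy_edge_coloring(edges):
    used = {}     # vertex -> set of colours already on its incident edges
    colour = {}   # edge -> assigned colour
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = 0
        while c in taken:   # smallest colour free at both endpoints
            c += 1
        colour[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return colour
```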
32

Pisani, Paulo Henrique. "Biometrics in a data stream context". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08052017-141153/.

Abstract:
The growing presence of the Internet in day-to-day tasks, along with the evolution of computational systems, has contributed to increased data exposure. This scenario highlights the need for safer user authentication systems. One alternative is the use of biometric systems. However, biometric features may change over time, an issue that can degrade recognition performance due to an outdated biometric reference. This effect is called template ageing in biometrics and concept drift in machine learning. It raises the need to automatically adapt the biometric reference over time, a task performed by adaptive biometric systems. This thesis studied adaptive biometric systems considering biometrics in a data stream context. In this context, the test is performed on a biometric data stream, in which the query samples are presented one after another to the biometric system. An adaptive biometric system then has to classify each query and adapt the biometric reference; the decision to perform the adaptation is taken by the biometric system. Among the biometric modalities, this thesis focused on behavioural biometrics, particularly keystroke dynamics and accelerometer biometrics. Behavioural modalities tend to be subject to faster changes over time than physical modalities. Nevertheless, there were few studies dealing with adaptive biometric systems for behavioural modalities, highlighting a gap to be explored. Throughout the thesis, several aspects to enhance the design of adaptive biometric systems for behavioural modalities in a data stream context were discussed: the proposal of adaptation strategies for the immune-based classification algorithm Self-Detector, the combination of genuine and impostor models in the Enhanced Template Update framework, and the application of score normalization to adaptive biometric systems.
Based on the investigation of these aspects, it was observed that the best choice for each studied aspect of the adaptive biometric systems can be different depending on the dataset and, furthermore, depending on the users in the dataset. The different user characteristics, including the way that the biometric features change over time, suggests that adaptation strategies should be chosen per user. This motivated the proposal of a modular adaptive biometric system, named ModBioS, which can choose each of these aspects per user. ModBioS is capable of generalizing several baselines and proposals into a single modular framework, along with the possibility of assigning different adaptation strategies per user. Experimental results showed that the modular adaptive biometric system can outperform several baseline systems, while opening a number of new opportunities for future work.
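A toy sketch of the reference-adaptation idea described in this entry, with an invented distance measure, threshold and learning rate (not the ModBioS configuration): each accepted query sample is folded into the stored reference so that the reference can track gradual template ageing.

```python
# Hypothetical sketch of template update in an adaptive biometric system
# operating on a data stream: accepted queries are blended into the stored
# reference so it follows gradual changes in the user's biometric features.
# Distance measure, threshold and learning rate are illustrative only.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class AdaptiveReference:
    def __init__(self, enrolment_sample, threshold=1.0, rate=0.2):
        self.ref = list(enrolment_sample)
        self.threshold = threshold
        self.rate = rate

    def query(self, sample):
        """Classify the query sample; adapt the reference only on acceptance."""
        accepted = euclidean(self.ref, sample) <= self.threshold
        if accepted:
            # exponential moving average pulls the reference toward the sample
            self.ref = [(1 - self.rate) * r + self.rate * s
                        for r, s in zip(self.ref, sample)]
        return accepted
```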
33

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms". PhD thesis, École normale supérieure de Lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Abstract:
In this thesis we tackle several scheduling problems under energy constraints, since the energy issue is becoming crucial for both economic and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network. We try to minimize the energy consumption in a dynamic framework. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, but in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications to minimize energy consumption.
34

SOARES, Rodrigo Hernandez. "Gerenciamento Dinâmico de Modelos de Contexto: Estudo de Caso Baseado em CEP". Universidade Federal de Goiás, 2012. http://repositorio.bc.ufg.br/tede/handle/tde/521.

Abstract:
Context models that describe dynamic context-aware scenarios usually need to be frequently updated. Some examples of situations that motivate these updates are the appearance of new services and context providers, the mobility of the entities described in these models, among others. Generally, updates on models imply redevelopment of the architectural components of context-aware systems based on these models. However, as these updates in dynamic scenarios tend to be more frequent, it is desirable that they occur at runtime. This dissertation presents an infrastructure for dynamic management of context models based on the fundamentals of complex event processing, or CEP. This infrastructure allows the fundamental abstractions from which a model is built to be updated at runtime. As these updates can impact systems based on the updated models, this dissertation identifies and analyzes these impacts, which are reproduced in a case study that aims to evaluate the proposed infrastructure by demonstrating how it deals with the impacts mentioned.
35

Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control". BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.

Abstract:
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanometers without limit cycles or steady-state error in machines subject to dry sliding friction. These displacements are achieved through the application of short duration, high intensity pulses. The relationship between pulse duration and displacement is seldom a simple function. The most dependable practical methods for control are self-tuning; they learn from online experience by adapting an internal control parameter until precise position control is achieved. To date, the best known adaptive pulse control methods adapt a single control parameter. While effective, the single parameter methods suffer from sub-optimal settling times and poor parameter convergence. To improve performance while maintaining the capacity for ultimate precision, a new control method referred to as Adaptive Impulse Control (AIC) has been developed. To better fit the nonlinear relationship between pulses and displacements, AIC adaptively tunes a set of parameters. Each parameter affects a different range of displacements. Online updates depend on the residual control error following each pulse, an estimate of pulse sensitivity, and a learning gain. After an update is calculated, it is distributed among the parameters that were used to calculate the most recent pulse. 
As the stored relationship converges to the actual relationship of the machine, pulses become more accurate and fewer pulses are needed to reach each desired destination. When fewer pulses are needed, settling time improves and efficiency increases. AIC is experimentally compared to conventional PID control and other adaptive pulse control methods on a rotary system with a position measurement resolution of 16000 encoder counts per revolution of the load wheel. The friction in the test system is nonlinear and irregular with a position dependent break-away torque that varies by a factor of more than 1.8 to 1. AIC is shown to improve settling times by as much as a factor of two when compared to other adaptive pulse control methods while maintaining precise control tolerances.
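A minimal sketch of the single-parameter adaptive pulse idea that AIC generalises, under an invented linear plant model and illustrative gains: the controller stores one sensitivity parameter (displacement per unit pulse duration) and corrects it from the residual error after each pulse.

```python
# Illustrative sketch of single-parameter adaptive pulse positioning (the
# baseline AIC improves upon). The plant model, gains and units here are
# invented for the example; real friction is nonlinear and irregular.

def adaptive_pulse_positioning(plant, target, gain=1.0, learn=0.5,
                               tol=1e-3, max_pulses=50):
    """Drive `plant(duration) -> displacement` to `target` with short pulses,
    adapting the stored sensitivity `gain` from each observed displacement."""
    position, pulses = 0.0, 0
    while abs(target - position) > tol and pulses < max_pulses:
        error = target - position
        duration = error / gain            # inverse model: pulse for this error
        moved = plant(duration)
        position += moved
        if abs(duration) > 1e-12 and moved / duration > 0:
            # online correction of displacement-per-duration sensitivity
            gain += learn * (moved / duration - gain)
        pulses += 1
    return position, pulses
```

With a plant whose true sensitivity is 2.5 and an initial guess of 1.0, the stored gain converges and the position settles within tolerance in a handful of pulses.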
36

von Wenckstern, Michael. "Web applications using the Google Web Toolkit". Master's thesis, Technische Universitaet Bergakademie Freiberg, Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Abstract:
This diploma thesis describes how to create desktop-like rich internet applications with the Google Web Toolkit, and how to convert traditional Java programs into them. The Google Web Toolkit is an open-source development environment which translates Java code to browser- and device-independent HTML and JavaScript. Most parts of the GWT framework, including the Java-to-JavaScript compiler as well as important security issues of websites, are introduced. The well-known Agricola board game is implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. Finally, the Google Web Toolkit framework is compared with JavaServer Faces to find out which toolkit is the right one for the next web project.
37

Fecteau, Anthony R. Acharya Rajgopal Sundaraj. "BDI plan update in dynamic environments". 2009. http://etda.libraries.psu.edu/theses/approved/PSUonlyIndex/ETD-4511/index.html.

38

"A Study of Backward Compatible Dynamic Software Update". Doctoral diss., 2015. http://hdl.handle.net/2286/R.I.36032.

Abstract:
Dynamic software update (DSU) enables a program to be updated while it is running. DSU aims to minimize the loss due to program downtime for updates. Usually DSU is done in three steps: suspending the execution of an old program, mapping the execution state from the old program to a new one, and resuming execution of the new program with the mapped state. The semantic correctness of DSU depends largely on the state mapping, which nowadays is mostly composed manually by developers. However, manual construction does not necessarily ensure a sound and dependable state mapping. This dissertation presents a methodology to assist developers by automating the construction of a partial state mapping with a guarantee of correctness. It includes a detailed study of DSU correctness and automatic state mapping for server programs with an established user base. At first, the dissertation presents a formal treatment of DSU correctness and the state mapping problem. It then argues that for programs with an established user base, dynamic updates must be backward compatible. Next, it presents a general definition of backward compatibility that specifies the allowed changes in program interaction between an old version and a new version, and identifies patterns of code evolution that result in backward compatible behavior. Thereafter, it presents formal definitions of these patterns, together with proofs that any changes to programs following these patterns will result in a backward compatible update. To show the applicability of the results, the dissertation presents SitBack, a program analysis tool that takes an old version and a new version of a program as input and computes a partial state mapping under the assumption that the new version is backward compatible with the old version. SitBack does not handle all kinds of changes, and it reports any incomplete part of a state mapping to the user.
The dissertation presents a detailed evaluation of SitBack which shows that the methodology of automatic state mapping is promising for dealing with real-world program updates. For example, SitBack produces state mappings for 17-75% of the changed functions. Furthermore, SitBack generates automatic state mappings that lead to successful DSU. In conclusion, the study presented in this dissertation assists developers in creating state mappings for DSU by automating their construction with a correctness guarantee, which ultimately helps the adoption of DSU.
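A hedged sketch of what a state mapping looks like, with invented field names and versions: unchanged fields are copied, renamed or unit-converted fields are carried over and transformed, and fields new to the new version get safe defaults. A tool like the SitBack analysis described in this entry derives such mappings automatically for backward-compatible changes.

```python
# Hypothetical state-mapping step of a dynamic software update: the old
# version's suspended state is carried into the new version's layout.
# All field names, the unit change and the new field are invented examples.

OLD_STATE = {"conn_count": 3, "buf": b"pending", "timeout": 30}

def map_state(old):
    """Build the new version's state dict from the old version's state."""
    return {
        "conn_count": old["conn_count"],      # unchanged field: copied as-is
        "buffer": old["buf"],                 # renamed field: value carried over
        "timeout_ms": old["timeout"] * 1000,  # changed unit: seconds -> ms
        "retries": 0,                         # field new in this version: default
    }
```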
Doctoral Dissertation Computer Science 2015
39

Ho, Hui-Chung (何慧忠). "Dynamic Key Update & Delegation In CP-ABE". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/75738404481846857496.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 104
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is a useful asymmetric encryption scheme compared to traditional asymmetric key cryptosystems. It enables encrypted data to be stored on a cloud server with each item retaining its own access permissions, without the need to additionally define access-control permissions on the cloud server. In a highly dynamic and heterogeneous cloud environment, it is a challenging task to maintain data protection using only the fine-grained access policies of CP-ABE. User rights management is hard to implement on such systems without user intervention. Currently there is no solution within the cryptosystem that supports efficient and direct key update and user revocation. Moreover, backward secrecy and forward secrecy are not supported in the CP-ABE cryptosystem. Existing revocation methods are discouraged in large cloud environments due to their high key-processing overhead when a new user joins, is revoked, or is assigned a new group key. In this thesis, we propose a method to dynamically authorize users. The key feature of our model is that users do not have to be involved in the key revocation process. Our model utilizes different user authentication sessions to restrict keys to a particular session, and this approach achieves direct user revocation within a group. The operation does not require re-encryption of existing ciphertext. Our method supports backward and (perfect) forward secrecy and is escrow-free. Lastly, we show that our method is efficient in situations where users change groups frequently, and that it is secure under chosen-identity key attacks.
40

Huang, Yuhsiang (黃昱翔). "On Efficient Update Delivery For Dynamic Web Object Reconstructions". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75586645284283849845.

Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
Academic year 100
With the continuous evolution of Internet technologies, web-based systems prevail at almost every level of Internet applications. Studies have shown that, instead of constructing whole web applications from scratch, delivering only necessary updates of web objects can enhance performance significantly. This work investigates the deployment of web objects over a chain of servers in which each server can reconstruct an object locally, or via a network transmission from a certain upstream server where the object is made available. This work formulates the cost of construction of web objects based on the global supply-demand integration among servers, and develops an efficient polynomial-time algorithm for determining an effective strategy.
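The chain-deployment optimisation described in this entry can be illustrated with a small dynamic programme under an assumed cost model (local build costs plus per-hop transfer costs back to the nearest upstream copy); the thesis's actual formulation integrates global supply and demand, so this is a simplified stand-in.

```python
# Illustrative dynamic programme for a chain of servers s_0..s_{n-1}:
# build[i] is the cost of reconstructing the object locally at server i,
# hop[k] the transfer cost of the link between servers k and k+1, and a
# fetching server pays the summed link costs back to the nearest upstream
# copy. Server 0 always builds. Cost model invented for the example.

def min_total_cost(build, hop):
    n = len(build)
    pre = [0.0]                       # pre[i] = cumulative link cost 0 -> i
    for h in hop:
        pre.append(pre[-1] + h)

    def dist(i, j):                   # transfer cost from server i to server j
        return pre[j] - pre[i]

    INF = float("inf")
    # dp[j] = min cost to serve servers 0..j when j holds the latest copy
    dp = [INF] * n
    dp[0] = build[0]
    for j in range(1, n):
        for i in range(j):            # previous copy sits at server i
            between = sum(dist(i, k) for k in range(i + 1, j))
            dp[j] = min(dp[j], dp[i] + between + build[j])
    # remaining downstream servers all fetch from the last copy
    return min(dp[j] + sum(dist(j, k) for k in range(j + 1, n))
               for j in range(n))
```

For cheap building at the head and expensive building downstream, the optimum is a single build plus transfers; when links are expensive, every server builds locally.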
41

Wang, Yu-Sen (王煜森). "Fast Update of the Best Carpool Groups in Dynamic Environment". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/60591225126839157636.

Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
Academic year 98
Traditional carpool websites only find users who have similar starting points or destinations for a ridesharing search; the users must then choose the best match among all candidates by themselves. As a result, many ridesharing opportunities may be lost, and users may become unwilling to keep using the ridesharing system. Previous work proposed a floating share scheme in which ridesharing partners share the payment according to the distances of their routes, together with a method for finding the best passenger group for a driver. In this thesis, given the set of ridesharing groups, we further consider the problem of finding the best group for a new passenger. In addition to increasing the driver's saving, the returned group must also lead to the lowest expense for the new passenger. We design a segment-based indexing method to keep and compress the current set of ridesharing groups. A new passenger can find the best group via the index, which can then be quickly updated. In the experiments, our method achieves an 84% speedup in query processing time and a 22.52% compression ratio on average, reducing the data to be processed.
42

Pan, Yucong. "Dynamic Update of Sparse Voxel Octree Based on Morton Code". Thesis, 2021.

Abstract:

Real-time global illumination has been a very important topic and is widely used in the game industry. Offline rendering requires a large amount of time to converge and to reduce the noise generated by the Monte Carlo method, so it cannot easily be adapted to real-time rendering. Using voxels for global illumination has become a popular approach. While a naive voxel grid occupies a huge amount of video memory, a data structure called the sparse voxel octree is often implemented in order to reduce the memory cost of voxels and achieve efficient ray casting performance at an interactive frame rate.

However, rendering voxels directly can cause blocky artifacts due to the nature of voxels. One solution is to increase the voxel resolution so that one voxel is smaller than a pixel on screen, but this is usually not feasible because higher resolution results in higher memory consumption. Thus, most global illumination methods based on the SVO (sparse voxel octree) use it only for visibility tests and radiance storage rather than rendering it directly. Previous research has incorporated SVOs into ray tracing, radiosity methods, and voxel cone tracing, all achieving real-time frame rates in complex scenes. However, most of this work focuses on static scenes and does not consider dynamic updates of the SVO or their influence on performance.

In this thesis, we discuss the tradeoffs of several classic real-time global illumination methods and their implementations using an SVO. We also propose an efficient approach to dynamically updating the SVO in animated scenes. The deliverables are implemented in CUDA 11.0 and OpenGL.
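The Morton code that gives the thesis its title is the standard bit-interleaving trick; a minimal CPU-side sketch (the thesis's CUDA implementation is not reproduced here) looks like this:

```python
def morton3(x, y, z, bits=10):
    # interleave the bits of (x, y, z) so that spatially close voxels get
    # numerically close codes; all eight children of an octree node share
    # the same code prefix, which is what makes sorted batch updates cheap
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code
```

Sorting voxels by Morton code lays octree siblings out contiguously in memory, so inserting or removing a moving object's voxels touches contiguous runs of the array instead of scattered tree nodes.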

APA, Harvard, Vancouver, ISO, etc. styles
43

Lu, Tsung-Lin, and 呂宗霖. "Research on Improving the Efficiency of Dynamic Update in WSN Platform". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/06571364571024076784.

Full text of the source
Abstract:
Master's thesis
National Chi Nan University
Department of Information Management
99
With the advance of wireless communication and embedded system technologies, wireless sensor networks (WSNs), which consist of many sensor nodes with various sensing capabilities, now have versatile applications in many fields and have become a rapidly developing area of information technology. Supporting a dynamic software update mechanism is an important and challenging topic in WSN research. If a WSN platform supports remote dynamic update, the software of sensor nodes can be dynamically updated or enhanced to adapt to changed environmental conditions or different application requirements. Much research on dynamic update uses diff-based approaches, which do not transmit the whole new program image to the sensor nodes; only the code difference needs to be transmitted. Compared with approaches based on full image replacement, diff-based approaches are more efficient but more complicated in processing. However, sensor nodes often have limited energy, memory, processing power, and communication bandwidth. How to efficiently provide dynamic update in WSNs while reducing the resource consumption of sensor nodes is the main concern of this thesis. For resource-limited WSN environments, we propose a new and more effective diff-based approach, named Two-Stage Diff, to dynamically update software components on sensor nodes. This mechanism aims at reducing the transmission size of the update data and improving the efficiency of the update processing. In particular, the erase and write characteristics of flash memory are considered in our design. Besides, the Diff Script format is optimized to further decrease the size of the transmitted update data. In this way, our mechanism reduces the resource consumption incurred by dynamic update and is more suitable for resource-limited WSN environments.
We have implemented the proposed Two-Stage Diff mechanism in TinyOS for component update, mainly by modifying the Deluge dynamic update mechanism to support our diff-based component update. Experiments show that when performing updates, Two-Stage Diff effectively reduces the size of the transmitted update data, requiring only 0.2%-49% of the packet transmissions of Deluge. At the same time, the update processing time is largely reduced as well; for example, Two-Stage Diff performs 22.64 times better than Deluge for small updates.
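The core diff-based idea (send a script of copy/insert operations instead of the whole image) can be sketched as a single-stage toy. The `COPY`/`INSERT` opcodes and greedy block matching here are illustrative assumptions; the actual Two-Stage Diff script format and its flash-aware layout are not reproduced.

```python
def make_diff(old: bytes, new: bytes, block: int = 4):
    # index fixed-size blocks of the old image, then greedily emit
    # COPY ops for matches and INSERT ops for unmatched literal bytes
    index = {old[i:i + block]: i for i in range(len(old) - block + 1)}
    script, i, lit = [], 0, bytearray()
    while i < len(new):
        j = index.get(new[i:i + block])
        if j is not None:
            if lit:
                script.append(("INSERT", bytes(lit)))
                lit = bytearray()
            script.append(("COPY", j, block))
            i += block
        else:
            lit.append(new[i])
            i += 1
    if lit:
        script.append(("INSERT", bytes(lit)))
    return script

def apply_diff(old: bytes, script) -> bytes:
    # the sensor node reconstructs the new image from the old one
    out = bytearray()
    for op in script:
        if op[0] == "COPY":
            _, off, n = op
            out += old[off:off + n]
        else:
            out += op[1]
    return bytes(out)
```

The script is usually far smaller than the new image, which is exactly the bandwidth saving over full-image replacement that the thesis measures against Deluge.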
APA, Harvard, Vancouver, ISO, etc. styles
44

Huang, Rong-Ren, and 黃榮仁. "Incremental TCAM Update for Packet Classification Table Using Dynamic Segment Tree". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18142602863204794283.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering, Master's and PhD Programs
96
Packet classification is extensively applied in a variety of Internet applications, including network security, quality of service, multimedia communications, and so on, and is therefore gaining more and more attention. Traditionally, standard Ternary Content Addressable Memory (TCAM) is used as a hardware classification engine. However, this approach stores classification tables inefficiently, because the port-range fields of a rule have to be broken into prefixes before being stored in TCAM. To solve this problem, a novel multifield classification scheme has been proposed [2]. To reduce cost, we want to avoid wasting TCAM space when storing classification tables whose port fields are arbitrary ranges, so we adopt encoding schemes for the ranges, which can greatly reduce TCAM space usage. We noticed, however, that these encoding schemes are time-consuming to update, because they need to pre-compute results and encode the ranges in the classification tables. In this thesis, we improve these encoding schemes so that the ranges can be mapped into TCAM with incremental updates by using a dynamic segment tree (DST), a segment tree data structure for dynamic table lookup problems. The resulting schemes can update dynamically, modifying only some of the TCAM entries rather than all of them.
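The TCAM-space problem the abstract starts from is the classic range-to-prefix expansion; a short sketch (standard technique, not the thesis's DST-based encoding) makes the blow-up concrete:

```python
def range_to_prefixes(lo, hi, bits=16):
    # split [lo, hi] into the minimal set of (value, prefix_len) TCAM
    # prefixes; a w-bit range can need up to 2w - 2 prefixes, which is
    # why storing raw port ranges wastes TCAM entries
    prefixes = []
    while lo <= hi:
        size = lo & -lo or 1 << bits        # largest power of two aligned at lo
        while size > hi - lo + 1:           # ...that still fits in the range
            size >>= 1
        prefixes.append((lo, bits - size.bit_length() + 1))
        lo += size
    return prefixes
```

For example, the 4-bit range [1, 14] already expands to six prefixes, the worst case 2w - 2 for w = 4; range-encoding schemes exist precisely to avoid paying this expansion per rule.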
APA, Harvard, Vancouver, ISO, etc. styles
45

Shyu, Ruey-Cheng, and 徐瑞成. "A Study of Dynamic Location Update Strategies for Wireless Personal Communication Systems". Thesis, 1996. http://ndltd.ncl.edu.tw/handle/09626214425376880512.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Institute of Electrical Engineering
84
Location management, accomplished through wireless and wireline links, is an important issue in a personal communication system (PCS). The wireline network tracks the current location of a user and can therefore route messages to him regardless of his location. In addition to its impact on signaling within the wireline network, the location management strategy also influences the expenditure of wireless resources and the power consumption of portable terminals. Ideally, the location tracking strategy of each user should depend on the user's current mobility behavior and call arrival pattern in order to minimize the consumption of wireless resources. In this thesis, we present a performance analysis of several dynamic location update strategies, including distance-based, movement-based, and time-based strategies, based on a Markovian mobility model over a two-dimensional hexagonal cellular topology. In our analysis, the performance measures of each strategy can be evaluated efficiently. Our system formulation leads to observations that differ from those obtained under a one-dimensional mobility model: the performance difference among the three investigated dynamic strategies is rather small, and the performance improvement of these dynamic strategies relative to the fixed update strategy is limited unless the variation in a user's mobility behavior and call arrival pattern, or the variation among users, is large enough.
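The update-versus-paging tradeoff behind all three strategies can be sketched with a toy cost model for the movement-based variant on a hexagonal grid. This is a simplified illustration, not the thesis's Markovian analysis: the threshold M, the unit costs, and the one-call-per-period paging assumption are all hypothetical.

```python
def movement_based_cost(crossings, M, c_update, c_page):
    # movement-based strategy: the terminal updates after every M
    # cell-boundary crossings, so when a call arrives the network must
    # page every hexagonal cell within distance M-1 of the last update
    updates = crossings // M
    paging_area = 1 + 3 * M * (M - 1)   # hex cells within radius M-1
    return updates * c_update + paging_area * c_page
```

Sweeping M trades update signaling against paging load; the distance-based and time-based strategies differ only in which counter (distance traveled, cell crossings, or elapsed time) triggers the update.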
APA, Harvard, Vancouver, ISO, etc. styles
46

Wasim, Omer. "Preserving large cuts in fully dynamic graphs". Thesis, 2020. http://hdl.handle.net/1828/11764.

Full text of the source
Abstract:
This thesis initiates the study of the MAX-CUT problem in fully dynamic graphs. Given a graph $G=(V,E)$, we present the first fully dynamic algorithms to maintain a $\frac{1}{2}$-approximate cut in sublinear update time under edge insertions and deletions to $G$. Our results include the following deterministic algorithms: i) an $O(\Delta)$ \textit{worst-case} update time algorithm, where $\Delta$ denotes the maximum degree of $G$, and ii) an $O(m^{1/2})$ amortized update time algorithm, where $m$ denotes the maximum number of edges in $G$ during any sequence of updates. We also give the following randomized algorithms when edge updates come from an oblivious adversary: i) a $\tilde{O}(n^{2/3})$ update time algorithm\footnote{Throughout this thesis, $\tilde{O}$ hides a $O(\text{polylog}(n))$ factor.} to maintain a $\frac{1}{2}$-approximate cut, and ii) a $\min\{\tilde{O}(n^{2/3}), \tilde{O}(\frac{n^{{3/2}+2c_0}}{m^{1/2}})\}$ worst-case update time algorithm which maintains a $(\frac{1}{2}-o(1))$-approximate cut for any constant $c_0>0$ with high probability. The latter algorithm is obtained by designing a fully dynamic algorithm to maintain a sparse subgraph with sublinear (in $n$) maximum degree which approximates all large cuts in $G$ with high probability.
Graduate
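The $\frac{1}{2}$-approximation guarantee rests on a local-flip invariant: every vertex has at least half its edges crossing the cut. A static sketch of that invariant (the thesis's dynamic algorithms maintain it under edge updates; this toy version just computes it from scratch) is:

```python
def half_approx_cut(adj):
    # local search: move any vertex with more same-side neighbors than
    # cross neighbors; each flip strictly grows the cut, so the loop
    # terminates at a local optimum where every vertex has at least half
    # its edges crossing -- hence the cut is a 1/2-approximation
    side = {v: 0 for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            same = sum(side[u] == side[v] for u in adj[v])
            if 2 * same > len(adj[v]):
                side[v] ^= 1
                changed = True
    return side
```

The dynamic algorithms' challenge is restoring this invariant after each edge insertion or deletion without re-running the full local search.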
APA, Harvard, Vancouver, ISO, etc. styles
47

Hsu, Hsiang-Yu, and 徐翔宇. "A Platform for Supporting Dynamic Update and Resource Protection in an Embedded Operating System". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/08732699571228292139.

Full text of the source
Abstract:
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
98
With the rapid development of hardware and the maturity of related technologies, the functions of embedded systems have become more and more versatile and complex. In recent years, much research has focused on providing dynamic update functionality in embedded systems. The advantage of dynamic update is that a system's functionality can be upgraded without rebooting the whole system, so the update neither corrupts system state nor stops any system services. A dynamic update mechanism is very important for embedded systems such as wireless sensor nodes, which cannot be reclaimed for upgrades once they are deployed or sold. In this thesis, we have implemented a platform that can dynamically upgrade the LyraOS [2-7] embedded operating system without rebooting the whole system. Although the original LyraOS already supports a dynamic update mechanism [6,7], its aim is to reduce energy consumption while upgrading the system's functionality, and it only supports demand loading. We have therefore further implemented a platform that supports a dynamic update dissemination mechanism and provides system resource protection. A component manager is developed to maintain the downloaded components and their dependencies; downloaded components can invoke the component manager's exported API to download their dependent components into our platform. Embedded systems' resources, such as memory and energy, are usually limited; if the platform provided no resource protection, downloaded components could misuse system resources. The original LyraOS already supports a memory protection mechanism that uses ARM's hardware protection domains to restrict the memory access permissions of each downloaded component, so downloaded components cannot corrupt the memory spaces of other components or of the kernel.
However, downloaded components can still arbitrarily acquire system resources through the system call service. In this thesis, we have therefore designed and implemented a system resource protection mechanism. Through this mechanism, the embedded client records information about each system resource allocated to a component. If the system detects that a faulty component is misusing a system resource, it reclaims the wasted resource and removes the component from the embedded client. Currently, our platform can reclaim lost memory space, ensure the normal execution of critical sections, and prevent null pointer accesses. Experimental results demonstrate that our platform can effectively support dynamic update and prevent careless components from misusing system resources. Our work increases the size of the LyraOS kernel image by about 10% in total. The extra overhead of garbage collection is less than 5 microseconds; ensuring the normal execution of a critical section costs less than 11 microseconds; handling a null pointer access costs about 13915 microseconds; downloading a component into the embedded client costs about 66 microseconds; and removing a component costs about 190 microseconds.
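The per-component bookkeeping described above can be sketched as follows. This is a hypothetical illustration of the record-and-reclaim idea, not LyraOS code; the class and method names are made up.

```python
class ResourceTracker:
    # the embedded client records every resource handed to a component so
    # that a misbehaving component can be evicted and its leftovers reclaimed
    def __init__(self):
        self.owned = {}                    # component id -> set of resources

    def allocate(self, comp, resource):
        self.owned.setdefault(comp, set()).add(resource)

    def release(self, comp, resource):
        self.owned.get(comp, set()).discard(resource)

    def evict(self, comp):
        # on detected misuse: reclaim everything the component still holds
        return self.owned.pop(comp, set())
```

In the platform described above, `evict` would correspond to reclaiming the lost memory and removing the faulty component from the embedded client.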
APA, Harvard, Vancouver, ISO, etc. styles
48

Shi, Guangfu/S G. F. "Efficient Rendering of Scenes with Dynamic Lighting Using a Photons Queue and Incremental Update Algorithm". Thesis, 2012. http://spectrum.library.concordia.ca/974959/1/main.pdf.

Full text of the source
Abstract:
Photon mapping is a popular extension to the classic ray tracing algorithm in the field of realistic image synthesis. Moreover, it benefits from the massively parallel computational power brought by recent developments in graphics processor hardware and programming models. However, rendering scenes with dynamic lights still greatly limits performance, because a kd-tree over the photons must be rebuilt for each rendered frame. We developed a novel approach based on storing the photon data along with the kd-tree leaf node data, and implemented a new incremental update scheme to improve performance under dynamic lighting. The implementation is GPU-based and fully parallelized. A series of benchmarks against the prevalent existing GPU photon mapping technique shows that our new technique is faster when handling scenes with dynamic lights while producing the same image quality.
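The queue aspect of the approach can be sketched as a fixed-capacity ring buffer of photons. This is a guess at the data-structure shape implied by the title, not the thesis's GPU implementation: the class and its fields are hypothetical.

```python
class PhotonQueue:
    # a fixed-capacity ring buffer in which newly traced photons overwrite
    # the oldest ones, so a light change refreshes the photon set
    # incrementally instead of rebuilding it (and its kd-tree) from scratch
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0                      # next slot to overwrite
        self.count = 0

    def push(self, photon):
        self.buf[self.head] = photon
        self.head = (self.head + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def photons(self):
        return [p for p in self.buf if p is not None]
```

Because each frame only replaces a bounded slice of the buffer, only the kd-tree leaves holding the replaced photons need touching.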
APA, Harvard, Vancouver, ISO, etc. styles
49

Chiang, Shih-Ying, and 蔣世英. "Round-Robin Load Balance with Dynamic Update DNS to Improve the Performance of Cluster WEB Server". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/08052182982535001746.

Full text of the source
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
94
With the fast development of electronic commerce, the quality demanded of network server services has risen accordingly, and keeping an existing network server providing highly efficient, uninterrupted service is an important issue. Round-Robin DNS is one of the load balancing methods used by web server clusters to boost web site service. However, the original Round-Robin DNS structure cannot handle dead nodes in the cluster or the overload of individual servers. This thesis integrates the Dynamic Update DNS standard with Round-Robin DNS technology. By adding an agent script to the dynamic update client, the agent monitors the web cluster servers, filters out overloaded or dead nodes, and updates the DNS servers. The result fixes the shortcomings of the original Round-Robin DNS structure and boosts the web site's service utilization ratio as well as its commercial value.
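The agent's role can be sketched in two steps: filter the server pool by health, then let round-robin rotation serve what remains. This is a toy illustration; the load representation and threshold are assumptions, and the actual DNS zone update (RFC 2136 dynamic update) is not shown.

```python
import itertools

def healthy_pool(servers, load, max_load):
    # the agent's filtering step: drop dead nodes (no load report) and
    # overloaded nodes before pushing the list to DNS via dynamic update
    return [s for s in servers
            if load.get(s) is not None and load[s] < max_load]

def round_robin(pool):
    # DNS then answers queries by rotating through the remaining servers
    return itertools.cycle(pool)
```

Without the filtering step, plain Round-Robin DNS keeps handing out the addresses of dead or overloaded servers, which is exactly the problem the thesis addresses.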
APA, Harvard, Vancouver, ISO, etc. styles
50

Shen, Bor-Yeh, and 沈柏曄. "Design and Implementation of Dynamic Component Update and Memory Protection Mechanisms in an Embedded Operating System". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/77343641551681282185.

Full text of the source
Abstract:
Master's thesis
National Chi Nan University
Department of Information Management
94
Dynamic component update is a mechanism that allows component-based operating systems to update components on the fly: an operating system can be patched or extended with extra functionality without rebooting the machine. We have implemented an extensible, flexible, and efficient dynamic component update mechanism in the LyraOS [1-4] embedded operating system. In our system, updates can change not only kernel/user code and data structures but also the interfaces exported by components. Additionally, we use a server-side pre-linking mechanism that keeps the added overhead minimal while providing dynamic update in an embedded operating system. However, dynamic component update carries hidden security risks. Since many embedded operating systems, including LyraOS, use a single privilege mode, an operating system may crash when a vicious or unverified component is downloaded and installed. For this reason, providing a memory protection mechanism within the operating system is very important. To minimize overhead and make our system more flexible, we designed the following features: (a) updatable components are separated into two groups, trusted (kernel) components and untrusted (user) components, which run in different protection domains enforced by hardware memory protection; (b) a system call interface divides the originally single-mode kernel into a kernel with user mode and kernel mode; and (c) a component communication interface is provided. In order to prove the system's availability and measure its performance in embedded environments, we have also ported LyraOS from the x86 PC to the ARM Integrator/CP920T development board. In this thesis, we present our experiences in porting LyraOS, describe the design and implementation of our system, and show our performance evaluation results.
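The user/kernel split introduced by feature (b) can be sketched as a call gate: untrusted components reach kernel services only through a registered system-call table, never by direct call. This is a hypothetical illustration; the table, decorator, and `k_alloc` service are made-up stand-ins, not LyraOS interfaces.

```python
# registry of services the kernel chooses to export across the boundary
SYSCALLS = {}

def syscall(name):
    # decorator used by trusted (kernel) components to export a service
    def register(fn):
        SYSCALLS[name] = fn
        return fn
    return register

@syscall("alloc")
def k_alloc(size):
    # a stand-in kernel service; real LyraOS services are not shown here
    return bytearray(size)

def invoke(name, *args):
    # the call gate untrusted components must go through: anything the
    # kernel did not export is rejected instead of executed
    if name not in SYSCALLS:
        raise PermissionError("service not exported: " + name)
    return SYSCALLS[name](*args)
```

In the real system the gate is a hardware mode switch, so an untrusted component physically cannot bypass it; the table only determines which services exist on the other side.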
APA, Harvard, Vancouver, ISO, etc. styles