Dissertations / Theses on the topic 'Information update system'


Consult the top 26 dissertations / theses for your research on the topic 'Information update system.'


1

Abrahamsson, David. "Security Enhanced Firmware Update Procedures in Embedded Systems." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16914.

Abstract:

Many embedded systems are complex, and it is often required that the firmware in these systems is updatable by the end-user. For economic and confidentiality reasons, it is important that these systems only accept firmware approved by the firmware producer.

This thesis focuses on creating a security-enhanced firmware update procedure suitable for use in embedded systems. The common elements of embedded systems are described, and various algorithms are compared as candidates for firmware verification. Patents are used as a basis for the proposal of a security-enhanced update procedure, and attack trees are used to perform a threat analysis of an update procedure.

The results are a threat analysis of a home office router and the proposal of an update procedure. The update procedure accepts only approved firmware and prevents reversion to old, vulnerable firmware versions. Firmware verification is performed using the hash function SHA-224 and the digital signature algorithm RSA with a key length of 2048 bits. The selection of algorithms and key lengths mitigates the threat of brute-force and cryptanalysis attacks on the verification algorithms and is believed to be secure through 2030.
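As an illustration of the kind of check described (an RSA-2048 signature over a SHA-224 digest, plus rollback prevention), here is a minimal sketch in Python; the function shape and version fields are assumptions for illustration, not the thesis' actual update format:

    # Minimal sketch: verify an RSA signature over the SHA-224 digest of a
    # firmware image and refuse downgrades. Field names are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_firmware(image: bytes, signature: bytes, version: int,
                        installed_version: int, pubkey_pem: bytes) -> bool:
        # Reject rollback to older (potentially vulnerable) firmware.
        if version <= installed_version:
            return False
        public_key = serialization.load_pem_public_key(pubkey_pem)
        try:
            # RSA PKCS#1 v1.5 signature over the SHA-224 digest of the image.
            public_key.verify(signature, image,
                              padding.PKCS1v15(), hashes.SHA224())
        except InvalidSignature:
            return False
        return True

Only firmware that carries a valid producer signature and a strictly newer version number would be accepted, which matches the two properties the abstract claims for the procedure.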

2

Law, King Yiu. "Two routing strategies with cost update in integrated automated storage and retrieval system /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?IELM%202007%20LAW.

3

Kuhn, Olivier. "Methodology for knowledge-based engineering template update : focus on decision support and instances update." PhD thesis, Université Claude Bernard - Lyon I, 2010. http://tel.archives-ouvertes.fr/tel-00713174.

Abstract:
This Ph.D. thesis addresses the problem of knowledge-based engineering template update in product design. The reuse of design knowledge has become a key asset for a company's competitiveness. Knowledge-based engineering templates make it possible to store best practices and know-how via formulas, rules, scripts, etc. This design knowledge can then be reused by instantiating the template; instantiation creates an instance of the template in the specified context. In the scope of large, complex products, such as cars or aircraft, the maintenance of knowledge-based engineering templates is a challenging task. Several engineers from various disciplines work together and evolve the templates in order to extend their capabilities or to fix bugs. Furthermore, in some cases the modifications applied to templates should be forwarded to their instances so that the instances benefit from the changes. These issues slow down the adoption of template technologies at a large scale within companies.

The objective of this work is to propose an approach that supports engineers in template update tasks. A process supporting these tasks is defined, and a framework is proposed that helps design engineers during the template update process by providing a decision support system and a strategy for updating template instances. The former is a system designed to ease collaboration between various experts in solving template-related problems. The latter provides a sequence of updates to follow in order to forward the templates' modifications to their instances. This sequence is computed from data extracted from models and templates, stored in an ontology designed for this purpose. The ontology is used to represent and infer knowledge about templates, products, and their relations, which facilitates the construction of update sequences by providing an efficient overview of relationships, even implicit ones.
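To illustrate the idea of an update sequence that respects template/instance dependencies, here is a minimal sketch using a topological order; the dependency graph is a made-up example, not data extracted from the thesis' ontology:

    # Derive an instance-update sequence from template/instance dependencies,
    # assuming the dependencies were already extracted (e.g. from an ontology).
    from graphlib import TopologicalSorter

    # item -> set of things it depends on (templates or other instances)
    dependencies = {
        "bracket_instance": {"bracket_template"},
        "panel_instance": {"panel_template", "bracket_instance"},
        "assembly_instance": {"panel_instance", "bracket_instance"},
    }

    # A topological order guarantees every instance is updated only after
    # everything it depends on has already been updated.
    update_sequence = list(TopologicalSorter(dependencies).static_order())
    print(update_sequence)
    # e.g. ['bracket_template', 'panel_template', 'bracket_instance',
    #       'panel_instance', 'assembly_instance']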
4

Moore, Jennifer Anne. "Image Integration and On-Screen Digitizing Method of Geographic Information System Update and Maintenance Applied to the Hofmann Forest." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020419-104500.

Abstract:

The Hofmann Forest is a self-sustaining forest that provides the North Carolina State University College of Natural Resources with support for research, education, and extension service. Management of the Hofmann Forest requires historical records, a complete and current resource inventory, and the ability to model future forest conditions. A geographic information system (GIS) database was created for the Hofmann Forest in 1992 to facilitate these data objectives, but the database was not maintained or used regularly, while ongoing forestry research and silvicultural activities constantly change the resource conditions on the forest. This research examined a practical and accurate method for keeping the GIS database current. Digital imagery was integrated into the original GIS database, and silvicultural records were used to update the existing data layers. Digital orthophotography, in the form of USGS Digital Orthophoto Quarter-Quads (DOQQs), was the primary source of imagery; where that imagery was unavailable or contained insufficient spatial detail, unrectified aerial photographs were scanned, registered, and substituted.

For the vegetation data layer of the GIS, spatial and attribute updates were completed and evaluated for silvicultural operations covering over three thousand acres. Some updates involved only changes in attributes. Spatial updates were completed with the digital orthophotography or digital aerial photographs; of these, some involved fairly simple spatial editing and others more complex spatial editing. The updates that required the digital aerial photographs were all spatially complex edits. Acreage estimates accompanied the silvicultural records, and GIS-derived area measurements were compared with them. There was no significant difference between the two measures of area; however, some discrepancies were present. A series of comparison tests was designed and performed to identify the potential sources of the area discrepancies: the spatial complexity of the editing procedure, the source of digital imagery, and the size of the updated vegetation polygons were all examined. The degree of spatial complexity in the updates did not significantly contribute to area discrepancies, and there was no significant difference in area discrepancies between the DOQQs and the digital aerial photographs. The size of the updated vegetation polygons was significantly negatively correlated with the discrepancies, showing that small absolute differences in area in small polygons result in large relative discrepancy values.

Differentially corrected global positioning system (GPS) data were used to assess the horizontal positional accuracy of the GIS data layers. Following National Map Accuracy Standard (NMAS) guidelines, a sample of 25 'well-defined' locations was collected using a Trimble GPS Pathfinder ProXR receiver with real-time differential correction capabilities. The same locations were identified on the Roads layer of the GIS database, the DOQQs, and the digital aerial photography. Root mean-square error (RMSE) was calculated for each data layer, using the GPS data as reference locations. Only the DOQQ-derived points met the NMAS Class 2 horizontal positional accuracy standard; RMSE for the aerial photography and the Roads layer exceeded the limiting RMSE for the NMAS Class 3 standard.

Based on these results, it can be concluded that DOQQs possess greater horizontal accuracy than the digital aerial photography and are the preferred imagery source for on-screen digitizing. Should greater resolution be required for a database update, orthorectification of the digital aerial photography could be used to correct horizontal positional errors; software packages have recently become available to orthorectify aerial photographs effectively and affordably. The presence of extraneous features in the vegetation layer of the GIS database almost certainly contributes to the area discrepancies: features such as windrows, fire ponds, and logging decks are included in vegetation polygon area but not in the silvicultural records' area estimates. Future database improvements should consider subtracting these features (and their associated areas) from the vegetation layer and creating separate database layers for each type of feature. A methodology report was developed to accompany the GIS database as a reference for future updating. Continuous maintenance of the Hofmann Forest GIS database is necessary to provide timely information for on-site forest managers and research activities, and to preserve an account of forest conditions that may be useful in present and future management decisions. On-screen digitizing with integrated digital imagery proved to be a feasible method for updating and maintaining the Hofmann Forest GIS database.
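The RMSE computation used for the horizontal accuracy assessment can be sketched as follows; the checkpoint coordinates and the limiting RMSE are made up, since the actual NMAS threshold depends on map scale and accuracy class:

    # Horizontal RMSE of GIS-layer points against differentially corrected
    # GPS reference coordinates; sample values are illustrative only.
    import numpy as np

    gps = np.array([[2101.4, 855.2], [2133.9, 810.7], [2089.0, 872.3]])
    gis = np.array([[2102.1, 854.0], [2135.2, 812.1], [2088.1, 871.0]])

    # RMSE over the checkpoint pairs (straight-line horizontal error).
    rmse = np.sqrt(np.mean(np.sum((gis - gps) ** 2, axis=1)))

    limiting_rmse = 2.0  # hypothetical threshold for the chosen NMAS class
    print(f"RMSE = {rmse:.2f} (passes: {rmse <= limiting_rmse})")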

5

Sommerlot, Andrew Richard. "Coupling Physical and Machine Learning Models with High Resolution Information Transfer and Rapid Update Frameworks for Environmental Applications." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89893.

Abstract:
Few current modeling tools are designed to predict short-term, high-risk runoff from critical source areas (CSAs) in watersheds, which are significant sources of nonpoint source (NPS) pollution. This study couples the Soil and Water Assessment Tool-Variable Source Area (SWAT-VSA) model with the Climate Forecast System Reanalysis (CFSR) model and the Global Forecast System (GFS) short-term weather forecast to develop a CSA prediction tool designed to assist producers, landowners, and planners in identifying high-risk areas generating storm runoff and pollution. Short-term predictions for streamflow, runoff probability, and soil moisture levels were estimated in the South Fork of the Shenandoah river watershed in Virginia. In order to give land managers access to the CSA predictions, a web application based on free and open-source software was developed. The forecast system consists of three primary components: (1) the model, which preprocesses the necessary hydrologic forcings, runs the watershed model, and outputs spatially distributed VSA forecasts; (2) a data management structure, which converts high-resolution rasters into overlay web map tiles; and (3) the user interface component, a web page that allows the user to interact with the processed output. The resulting framework satisfied most design requirements with free and open-source software and scored better than similar tools in usability metrics. One potential problem is that the CSA model, which uses physically based modeling techniques, requires significant computational time to execute and process. Thus, as an alternative, a deep learning (DL) model was developed and trained on the process-based model output. The DL model resulted in a 9% increase in predictive power compared to the physically based model and a ten-fold decrease in run time. Additionally, DL interpretation methods applicable beyond this study are described, including hidden layer visualization and equation extractions describing a quantifiable amount of variance in hidden layer values. Finally, a large-scale analysis of soil phosphorus (P) levels was conducted in the Chesapeake Bay watershed, a current location of several short-term forecast tools. Based on Bayesian inference methodologies, 31 years of soil P history at the county scale were estimated, with the associated uncertainty for each estimate. These data will assist in the planning and implementation of short-term forecast tools with P management goals. The short-term modeling and communication tools developed in this work contribute to filling a gap in scientific tools aimed at improving water quality by informing land managers' decisions.
Ph.D.
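The surrogate idea above (a learned model standing in for an expensive physical model) can be shown with a toy sketch; the stand-in "physical model", features, and hyperparameters below are illustrative assumptions, not the dissertation's actual setup:

    # Train a small neural network on input/output pairs generated by a
    # (here, synthetic) physical model so later predictions become cheap.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def physical_model(x):
        # Stand-in for an expensive process-based simulation.
        return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

    X = rng.uniform(-2, 2, size=(5000, 2))   # e.g. rainfall, soil moisture
    y = physical_model(X)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                             random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out data:", surrogate.score(X_te, y_te))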
6

Bedewy, Ahmed M. "OPTIMIZING DATA FRESHNESS IN INFORMATION UPDATE SYSTEMS." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618573325086709.

7

Sang, Yu. "INFORMATION-UPDATE SYSTEMS: MODELS, ALGORITHMS, AND ANALYSIS." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/576162.

Abstract:
Computer and Information Science
Ph.D.
Age of information (AoI) has been proposed as a new metric to measure the staleness of data. For time-sensitive information, it is critical to keep the AoI at a low level, and a great deal of work has been done on the analysis and optimization of AoI in information-update systems. Prior studies on AoI optimization often consider a push model, which is concerned with when and how to "push" (i.e., generate and transmit) updated information to the user. In stark contrast, we introduce a new pull model, which is more relevant for certain applications (such as real-time stock quote services), where a user sends requests to the servers to proactively "pull" the information of interest. Moreover, we propose to employ request replication to reduce the AoI. Interestingly, we find that under this new pull model, replication schemes capture a novel tradeoff between different levels of information freshness and different response times across the servers, which can be exploited to minimize the expected AoI at the user's side. Specifically, assuming a Poisson updating process at the servers and exponentially distributed response times with known expectation, we derive a closed-form formula for the expected AoI and obtain the optimal number of responses to wait for in order to minimize it. We then extend our analysis to the setting where the user aims to maximize a utility, an exponential function of the negative AoI that represents the user's satisfaction with the timeliness of the received information; we similarly derive a closed-form formula for the expected utility and find the optimal number of responses to wait for. Further, we consider a more realistic scenario where the updating rate and the mean response time of the servers are unknown to the user. In this case, we formulate the utility maximization problem as a stochastic multi-armed bandit (MAB) problem. The formulated MAB problem has a special linear feedback graph, which can be leveraged to design policies with an improved regret upper bound. We also notice that one factor has been missing in most previous work on AoI minimization: the cost of performing updates. We therefore focus on the tradeoff between the AoI and the update cost, which is of significant importance in time-sensitive data-driven applications. We consider applications where the information provider is directly connected to the data source and the clients need to obtain the data from the provider in real time (such as a real-time environmental monitoring system). The provider needs to update the data so that it can reply to the clients' requests with fresh information; however, the update cost limits the frequency with which the server can refresh the data, making it important to design an efficient policy with an optimal tradeoff between data freshness and update cost. We define a staleness cost, which reflects the AoI of the data, and formulate the problem as minimizing the sum of the update cost and the staleness cost. We first propose important guidelines for designing update policies in such information-update systems that can be applied to arbitrary request arrival processes. Then, we design an update policy with a simple threshold-based structure, which is easy to implement.
Under the assumption of a Poisson request arrival process, we derive a closed-form expression for the average cost of the threshold-based policy and prove its optimality among all online update policies. In almost all prior works, the analysis and optimization are based on traditional queueing models and probabilistic approaches. However, in the traditional probabilistic study of general queueing models, the analysis depends heavily on the properties of specific distributions, such as the memoryless property of the Poisson distribution, and it is usually hard to handle distributions with heavy-tailed behavior. To that end, we take an alternative new approach and focus on the Peak Age of Information (PAoI), the largest age of each update shown to the end users. Specifically, we employ a recently developed analysis framework based on robust optimization and model the uncertainty in the stochastic arrival and service processes by uncertainty sets. This robust queueing framework enables us to approximate the steady-state PAoI performance of information-update systems with very general arrival and service processes, including those exhibiting heavy-tailed behavior. We first propose a new bound on the PAoI in the single-source system that performs much better than previous results, especially under light traffic. Then, we generalize it to multi-source systems with symmetric arrivals, which involves new technical challenges.
Temple University--Theses
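For intuition about the replication tradeoff in the pull model, the following Monte Carlo sketch estimates the expected AoI when the user waits for the first k of n replicated responses. It is a simplified stand-in for the dissertation's model, with made-up rates: each server's copy has an exponentially distributed age at request time (a consequence of Poisson refreshes), responses arrive after i.i.d. exponential delays, and all received copies keep aging until the k-th reply arrives.

    import numpy as np

    def expected_aoi(k, n=10, update_rate=1.0, resp_rate=2.0, trials=100_000):
        rng = np.random.default_rng(0)
        # Age of each server's copy when the request is issued.
        ages = rng.exponential(1.0 / update_rate, size=(trials, n))
        resp = rng.exponential(1.0 / resp_rate, size=(trials, n))
        order = np.argsort(resp, axis=1)[:, :k]           # first k responders
        t_k = np.take_along_axis(resp, order[:, -1:], 1)  # k-th reply arrives
        got = np.take_along_axis(ages, order, 1)
        # User keeps the freshest of the k received copies.
        return (got.min(axis=1) + t_k.ravel()).mean()

    for k in range(1, 11):
        print(k, round(expected_aoi(k), 3))
    # Waiting for more replies lowers the freshest-copy age but delays
    # delivery, so the expected AoI is minimized at an intermediate k.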
8

Weinlichová, Jana. "Návrh algoritmů pro modul informačního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-228040.

Abstract:
This Master's thesis deals with the design of algorithms for a new module of a company information system. The beginning of the thesis characterizes ways of describing information systems; to situate the system under discussion, the IBM Lotus Notes environment is briefly defined. The next chapter covers the object-oriented analysis and design of the information system module using UML diagrams in the Enterprise Architect modeling tool. The third chapter analyzes and designs the module's connection with the current system, specifically the updating of data in a form. The thesis then presents the designed algorithms in the Lotus Domino Designer environment, using the LotusScript and SQL languages and the Lotus Domino Connector to access the database via ODBC. The last part of the thesis proposes using a mapping tool to map the ICT infrastructure, applying a Change Management process according to the ITIL method so that all changes in the developing system can be managed effectively.
9

Thuresson, Marcus. "Wrapping XML-Sources to Support Update Awareness." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-488.

Abstract:

Data warehousing is a generally accepted method of providing corporate decision support. Today, the majority of information in these warehouses originates from sources within a company, although changes often occur from the outside. Companies need to look outside their enterprises for valuable information, increasing their knowledge of customers, suppliers, competitors etc.

The largest and most frequently accessed information source today is the Web, which holds more and more useful business information. Today, the Web primarily relies on HTML, making mechanical extraction of information a difficult task. In the near future, XML is expected to replace HTML as the language of the Web, bringing more structure and content focus.

One problem when considering XML-sources in a data warehouse context is their lack of update awareness capabilities, which restricts eligible data warehouse maintenance policies. In this work, we wrap XML-sources in order to provide update awareness capabilities.

We have implemented a wrapper prototype that provides update awareness capabilities for autonomous XML-sources, especially change awareness, change activeness, and delta awareness. The prototype wrapper complies with recommendations and working drafts proposed by W3C, thereby being compliant with most off-the-shelf XML tools. In particular, change information produced by the wrapper is based on methods defined by the DOM, implying that any DOM-compliant software, including most off-the-shelf XML processing tools, can be used to incorporate identified changes in a source into an older version of it.

For the delta awareness capability we have investigated the possibility of using change detection algorithms proposed for semi-structured data. We have identified similarities and differences between XML and semi-structured data that affect delta awareness for XML-sources. As a result of this effort, we propose an algorithm for change detection in XML-sources. We also propose matching criteria for XML-documents, to which documents must conform in order to be subject to change-awareness extension.
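As a simple illustration of change detection over XML trees, here is a naive positional diff; it is far simpler than the matching-based algorithm the thesis proposes (no handling of moves, inserts, or deletes), and serves only to show the idea:

    import xml.etree.ElementTree as ET

    def diff(old, new, path=""):
        changes = []
        here = f"{path}/{old.tag}"
        if old.attrib != new.attrib:
            changes.append((here, "attributes", old.attrib, new.attrib))
        if (old.text or "").strip() != (new.text or "").strip():
            changes.append((here, "text", old.text, new.text))
        # Naive positional matching of children.
        for i, (o, n) in enumerate(zip(old, new)):
            changes += diff(o, n, f"{here}[{i}]")
        return changes

    v1 = ET.fromstring("<order><item qty='1'>bolt</item></order>")
    v2 = ET.fromstring("<order><item qty='3'>bolt</item></order>")
    print(diff(v1, v2))
    # [('/order[0]/item', 'attributes', {'qty': '1'}, {'qty': '3'})]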

10

Biondi, Mattia. "An Updated Emulated Architecture to Support the Study of Operating Systems." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20751/.

Abstract:
One of the most effective ways to learn something new is by actively practising it, and there is perhaps no better way to study an Operating Systems course than by building your own OS. However, it is important to emphasize that realizing an operating system capable of running on a real hardware machine can be an overly complex and unsuitable task for an undergraduate student. Nonetheless, it is possible to use a simplified computer system simulator to achieve the goal of teaching Computer Science foundations in the university environment, thus allowing students to experience a quite realistic representation of an operating system. µMPS was created for this purpose: a pedagogically appropriate machine emulator, based around the MIPS R2/3000 microprocessor, which features an accessible architecture that includes a rich set of easily programmable devices. µMPS has an almost two-decade history of development, and the outcome of this thesis is the third version of the software, dubbed µMPS3. This second major revision aims to simplify the emulator even further, in order to lighten the workload required of students during OS design and implementation. Two of these simplifications are the removal of the virtual memory bit, which allowed address translation to be turned on and off, and the replacement of the tape device, used as a storage device, with a new flash drive device, certainly something more familiar to the new generation of students. Thanks to the employment of this software and the feedback received over the last decade, it has been possible to realize not just this thesis but also several major improvements, covering everything from the project build tools to the front end, making µMPS a modern and reliable piece of educational software.
11

Farazi, Shahab. "Age of Information in Multi-Hop Status Update Systems: Fundamental Bounds and Scheduling Policy Design." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-dissertations/593.

Abstract:
Freshness of information has become highly important with the emergence of many real-time applications like monitoring systems and communication networks. The main idea behind all of these scenarios is the same: there exists at least one monitor of some process, and the monitor does not have direct access to that process. Rather, the monitor indirectly receives updates over time from a source that can observe the process directly. The common goal in these scenarios is to guarantee that the updates at the monitor side are as fresh as possible. However, due to contention among the nodes in the network over limited channel resources, it takes some random time for updates to reach the monitor. These applications have motivated a line of research studying the Age of Information (AoI) as a new performance metric that captures the timeliness of information. The first part of this dissertation focuses on the AoI problem in general multi-source multi-hop status update networks with slotted transmissions. Fundamental lower bounds on the instantaneous peak and average AoI are derived under general interference constraints. Explicit algorithms are developed that generate scheduling policies for status update dissemination throughout the network for the class of minimum-length periodic schedules under global interference constraints. Next, we study AoI in multi-access channels, where a number of sources share the same server, with exponentially distributed service times, to communicate with a monitor. Two cases are considered, depending on the status update arrival rates at the sources: (i) random arrivals based on a Poisson point process, and (ii) active arrivals, where each source can generate an update at any point in time. For each case, closed-form expressions are derived for the average AoI as a function of the system parameters. Next, the effect of energy harvesting on the age is considered in a single-source single-monitor status update system whose server has a finite battery capacity. Depending on the server's ability to harvest energy while a packet is in service, and on whether newly arriving packets may preempt a packet in service, average AoI expressions are derived. The results show that preemption of packets in service is sub-optimal when the energy arrival rate is lower than the status update arrival rate. Finally, the age of channel state information (CSI) is studied in fully-connected wireless networks with time-slotted transmissions and time-varying channels. A framework is developed that accounts for the amount of data and overhead in each packet and the CSI disseminated in the packet. Lower bounds on the peak and average AoI are derived, and a greedy protocol that schedules status updates by minimizing the instantaneous average AoI is developed. The achievable average AoI is derived for the class of randomized CSI dissemination schedules.
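For context, the kind of closed-form average-AoI expression referred to here can be illustrated by the classical single-source result for a first-come first-served M/M/1 update queue with arrival rate \lambda, service rate \mu, and load \rho = \lambda/\mu (Kaul, Yates, and Gruteser, 2012); the dissertation's multi-access, energy-harvesting, and CSI settings generalize well beyond this case:

    \Delta_{\text{M/M/1}} = \frac{1}{\mu}\left(1 + \frac{1}{\rho} + \frac{\rho^{2}}{1-\rho}\right)

Minimizing this expression over \rho yields the well-known observation that the age-optimal load is neither 0 (rare updates are stale) nor 1 (queueing delay dominates), but an intermediate utilization.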
12

Guo, Yuxin. "Automatic Updates in the Sensible Things Platform." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25502.

Abstract:
The Internet-of-Things is an open architecture for enabling information sharing between globally connected devices, something existing systems do not offer. However, typical deployments suffer from single points of failure and long communication delays; thus the Sensible Things platform, a fully distributed system, is proposed. So far, it has produced components to share sensor and actuator information on the Internet. In the past, manual maintenance was problematic, since physical access can be difficult at remote locations, and it was also difficult to detect whether devices were actually working properly. This thesis therefore focuses on functionality for checking device status, updating software automatically, and restarting devices with the new software. The thesis first analyzes the mechanism of automatic updating and describes methods for it. A demonstrator of the automatic updates is implemented in MyEclipse. Finally, the thesis evaluates automatic updating in the Sensible Things platform.
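The check-status / update / restart cycle described above might look like the following minimal sketch; the version-file layout, URL, and restart command are illustrative assumptions, not the Sensible Things platform's actual interfaces:

    import json
    import subprocess
    import sys
    import urllib.request

    UPDATE_URL = "https://example.org/sensiblethings/latest.json"  # hypothetical
    CURRENT_VERSION = (1, 4, 0)

    def check_and_update():
        with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
            meta = json.load(resp)            # {"version": "1.5.0", "url": ...}
        latest = tuple(int(x) for x in meta["version"].split("."))
        if latest <= CURRENT_VERSION:
            return  # already up to date
        path, _ = urllib.request.urlretrieve(meta["url"], "update.jar")
        # Hand over to the new software and exit the old process.
        subprocess.Popen(["java", "-jar", path])
        sys.exit(0)

    if __name__ == "__main__":
        check_and_update()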
13

Wang, Xiwei. "Data Privacy Preservation in Collaborative Filtering Based Recommender Systems." UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/35.

Abstract:
This dissertation studies data privacy preservation in collaborative filtering based recommender systems and proposes several collaborative filtering models that aim at preserving user privacy from different perspectives. An empirical study of multiple classical recommendation algorithms presents the basic idea of the models and explores their performance on real-world datasets. The algorithms investigated in this study include a popularity-based model, an item-similarity-based model, a singular value decomposition based model, and a bipartite graph model. Top-N recommendations are evaluated to examine prediction accuracy. It is apparent that with more customer preference data, recommender systems can better profile customers' shopping patterns, which in turn produces product recommendations with higher accuracy. Precautions should be taken to address the privacy issues that arise during data sharing between two vendors. The study shows that matrix factorization techniques are, by their nature, good choices for data privacy preservation. In this dissertation, singular value decomposition (SVD) and nonnegative matrix factorization (NMF) are adopted as the fundamental collaborative filtering techniques for making privacy-preserving recommendations. The proposed SVD-based model uses missing-value imputation, a randomization technique, and truncated SVD to perturb the raw rating data. The NMF-based models, namely iAux-NMF and iCluster-NMF, take into account auxiliary information about users and items to help with missing-value imputation and privacy preservation; additionally, these models support efficient incremental data updates. Many online vendors allow people to leave feedback on products, which can be considered users' public preferences. However, due to the connections between users' public and private preferences, if a recommender system fails to distinguish real customers from attackers, the private preferences of real customers can be exposed. This dissertation addresses an attack model in which an attacker holds real customers' partial ratings and tries to obtain their private preferences by cheating the recommender system. To resolve this problem, trustworthiness information is incorporated into the NMF-based collaborative filtering techniques to detect attackers and make reasonably different recommendations to normal users and attackers. By doing so, users' private preferences can be effectively protected.
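The SVD-based perturbation pipeline described above (imputation, randomization, truncated SVD) can be sketched as follows; the rating matrix, noise scale, and rank are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    R = np.array([[5, 3, 0, 1],      # 0 marks a missing rating
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 4, 4]], dtype=float)

    mask = R > 0
    col_means = (R * mask).sum(0) / np.maximum(mask.sum(0), 1)
    filled = np.where(mask, R, col_means)        # mean imputation
    noisy = filled + rng.normal(0, 0.5, R.shape) # randomization

    k = 2                                        # truncated SVD rank
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    R_private = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(R_private, 2))  # perturbed matrix shared instead of raw data

The low-rank truncation keeps the dominant taste structure needed for collaborative filtering while the imputation and noise obscure individual raw ratings.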
14

Leng, Christof [Verfasser], Alejandro P. [Akademischer Betreuer] Buchmann, Bettina [Akademischer Betreuer] Kemme, and Klaus [Akademischer Betreuer] Wehrle. "BubbleStorm: Replication, Updates, and Consistency in Rendezvous Information Systems / Christof Leng. Betreuer: Alejandro P. Buchmann ; Bettina Kemme ; Klaus Wehrle." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://d-nb.info/110611731X/34.

15

Bergmark, Max. "Designing a performant real-time modular dynamic pricing system : Studying the performance of a dynamic pricing system which updates in real-time, and its application within the golfing industry." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276241.

Abstract:
In many industries, the prices of products are calculated dynamically to maximize revenue while maintaining customer satisfaction. In this thesis, an approach based on online calculation of prices is investigated, where customers always receive an up-to-date price. The most common method used today is to update prices at some interval and present the latest computed price to the customer. This standard method provides fast responses, and accurate ones for the most part. However, if the dynamic pricing model can benefit from very fast price updates, an online calculation approach may provide better price accuracy. The main advantage of this approach is the combination of short-term accuracy and long-term stability: short-term behaviour is handled by online price calculation with real-time updates, while long-term behaviour is handled by statistical analysis of past booking behaviour, condensed into a demand curve. This thesis describes the long-term statistical analysis for calculating demand curves, along with the benefit of short-term price adjustments, which can be advantageous both for producers and consumers.
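A minimal sketch of the scheme described above, assuming a demand curve fitted offline from booking statistics and a per-request adjustment computed online; the functional form and coefficients are made up for illustration (the thesis targets golf tee-time bookings):

    import math

    BASE_PRICE = 40.0          # long-term level from the demand curve
    ELASTICITY = 0.8           # made-up demand-curve coefficient

    def demand_factor(hours_to_teetime: float) -> float:
        # Long-term statistics: expected demand rises as the slot nears.
        return math.exp(-hours_to_teetime / 48.0)

    def price_now(booked_share: float, hours_to_teetime: float) -> float:
        # Short-term correction: current occupancy shifts the price up or
        # down relative to the statistically expected demand.
        expected = demand_factor(hours_to_teetime)
        return BASE_PRICE * (1.0 + ELASTICITY * (booked_share - expected))

    # Computed online for every request, so the customer always sees a price
    # reflecting the latest bookings rather than a cached interval value.
    print(round(price_now(booked_share=0.7, hours_to_teetime=24.0), 2))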
16

Larkin, Devitt. "Aligning with the rapidly shifting technological goalposts : the review and update of the RIMPA technology survey." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2012. https://ro.ecu.edu.au/theses/516.

Abstract:
In 2008 the Records and Information Management Professionals Australasia (RIMPA) organisation (then known as the Records Management Association of Australasia, RMAA) launched its Technology Survey. The survey attempted to capture a snapshot, predominantly but not exclusively in Australia and New Zealand, and to gain empirical evidence about technology adoption trends, RIM capabilities in host organisations, and the role of RIM personnel in technology selection and adoption. The survey had a particular focus on Records Management (RM) and Enterprise Content Management (ECM) systems and processes, but also featured questions on the demographics of the participants, organisational policies and processes around these technologies, and peripheral devices. In 2010 the survey was repeated. Consequently, the survey became more than a one-off cross-sectional snapshot and could lay claim to being a longitudinal study; however, as a longitudinal study instrument the current survey lacks validity and reliability. A consensus exists that changes are required if the survey is to continue. This consensus is based on issues that emerged from analysis of the two iterations of the current instrument, identified by Brogan and Roberts in their analyses of the 2008 and 2010 data (2009, 2011 and 2012). The issues that need to be addressed are:

• Low participation rate
• The relatively high number of questions skipped
• The overall length of the survey
• Ensuring the survey has a clear and distinct aim
• Ensuring what is captured is core to the survey's aim
• Ensuring what is captured is relevant to the RIM profession
• The ambiguity of questions
• Misunderstanding of questions
• Scope: expansion of the instrument to encompass technology learning, knowledge and skills of RIM professionals

This study is an examination and revision of the current technology survey instrument, aimed at ensuring that issues of relevancy, currency, usability, design, and clarity of terms and definitions are all addressed, resulting in a valid and reliable longitudinal study instrument. The research design employed involved: a) investigation of the peer-reviewed literature on survey participation and instrument design; b) investigation of peer-reviewed and non-peer-reviewed literature on technology in the RIM space; c) convening a panel of experts (focus group) to provide feedback on the existing instrument; d) re-design of the existing instrument taking into account outcomes from a-c; and e) validation of the re-designed instrument via the focus group. The focus group review involved six highly regarded and knowledgeable participants active in the RIM profession, who trialled the instrument in a subsequent pilot test and provided additional feedback on scope and usability from a user perspective. The final survey will enable RIMPA to stay informed about the technology education and training needs of its members, as well as to continue tracking technology adoption and RIM program trends in the workplace.
17

Ersson, Lucas. "Facilitating More Frequent Updates: Towards Evergreen : A Case Study of an Enterprise Software Vendor’s Response to the Emerging DevOps Trend, Drawing on Neo-Institutional Theory." Thesis, Linköpings universitet, Informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155785.

Abstract:
Over the last couple of years, the trend within the software industry has been to release smaller software updates more frequently, to overcome challenges, increase flexibility, and align with the swiftly changing industry environment. As an effect, we now see companies moving over to capitalizing on subscriptions and incremental releases instead of charging for upgrades. By utilizing neo-institutional theory and Oliver's (1991) strategic response theory, an enterprise systems vendor's response to the emerging DevOps trend can be determined.
18

Idris, Muhammad. "Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates." Doctoral thesis, Université Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/284705/5/contratMI.pdf.

Abstract:
Responsive analytics are rapidly taking over the traditional data analytics dominated by post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, to react to updates occurring at high speed and to detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence, and online Machine Learning, among others. Such applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results, or their basic elements, in a query language, where the main task is then to maintain query results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems.

In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, based on the relational incremental view maintenance model, mostly focusing on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that feature comparisons of temporal attributes (e.g. timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur high memory footprint. We are interested in investigating the possibility of building a model that addresses this trade-off. In particular, we overcome it by investigating a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and that can be maintained efficiently under updates.
We call this representation the Dynamic Constant-Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and we present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. We then generalize DYN to the class of acyclic queries featuring multi-way theta-joins with projections, and call the result Generalized DYN (GDYN). The working of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To this end, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections, and we further extend it to generate GJTs for queries that are acyclic.

GDYN is hence a unified framework, based on DCLRs, that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN for the particular case where all theta-joins involve only equalities and inequalities, and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language, providing all the necessary data structures together with their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks, and show that they outperform existing systems by over an order of magnitude in both memory footprint and update processing time.
Doctorate in Engineering Sciences and Technology
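The incremental-evaluation paradigm discussed above can be illustrated with a textbook sketch of delta maintenance for a two-relation equi-join under inserts. This shows the general idea only, not the DYN algorithm itself; DYN specifically avoids materializing the join result, which this sketch materializes for clarity:

    from collections import defaultdict

    R_by_key = defaultdict(list)   # join-key -> R tuples
    S_by_key = defaultdict(list)   # join-key -> S tuples
    join_result = []               # materialized R join S (illustration only)

    def insert_R(key, payload):
        R_by_key[key].append(payload)
        # Delta rule: the change to (R join S) for an R-insert is {r} join S.
        for s in S_by_key[key]:
            join_result.append((key, payload, s))

    def insert_S(key, payload):
        S_by_key[key].append(payload)
        # Symmetric rule: R join {s}.
        for r in R_by_key[key]:
            join_result.append((key, r, payload))

    insert_R(1, "r1")
    insert_S(1, "s1")   # emits (1, 'r1', 's1') immediately
    insert_S(2, "s2")   # no match yet, nothing emitted
    print(join_result)

Because each insert touches only the tuples sharing its join key, the update cost is proportional to the new matches rather than to the full relations, which is the trade-off the thesis pushes much further with its non-materializing representations.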
19

Fu, Ju-Hsien, and 傅茹舷. "English Vocabulary Learning System Based on Memory Cycle Update and Contextual Information." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/38451219201948375514.

Abstract:
Master's thesis
National Cheng Kung University
Department of Engineering Science
Academic year 97 (2008-09)
Vocabulary learning is one of the major factors affecting success in English learning. One way to approach this learning task is extensive reading of English articles. For promoting vocabulary learning in an extensive reading environment, providing multiple-choice glosses while reading is a widely studied technique, because multiple-choice glosses combine the advantages of vocabulary glosses and meaning inference. In addition, many scholars have investigated using feedback from vocabulary tests to update the memory cycle of each vocabulary item and thereby improve vocabulary learning. This thesis proposes an English vocabulary learning system with an extensive reading environment. The system successfully updates the memory cycles of a learner's vocabulary through the feedback of multiple-choice glosses and promotes the learner's vocabulary abilities within a limited period. Furthermore, an indirect memory cycle update method is also presented, which adjusts memory cycles according to contextual information; this method decreases the number of test repetitions and allows the vocabulary items that need review to appear with more precise timing.
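The direct and indirect memory-cycle updates described above can be sketched as follows; the doubling rule and the contextual bonus factor are illustrative assumptions, not the thesis' actual formulas:

    from dataclasses import dataclass

    @dataclass
    class WordState:
        interval_days: float = 1.0   # current memory cycle

    def direct_update(w: WordState, answered_correctly: bool) -> None:
        # Feedback from a multiple-choice gloss test: lengthen the cycle
        # on success, reset it on failure.
        w.interval_days = w.interval_days * 2 if answered_correctly else 1.0

    def indirect_update(w: WordState, inferred_from_context: bool) -> None:
        # The word was met (and its meaning inferred) during reading;
        # extend the cycle slightly without consuming a test.
        if inferred_from_context:
            w.interval_days *= 1.2

    w = WordState()
    direct_update(w, True)      # 1 -> 2 days
    indirect_update(w, True)    # 2 -> 2.4 days
    direct_update(w, False)     # miss: reset to 1 day
    print(w.interval_days)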
20

Leng, Christof. "BubbleStorm: Replication, Updates, and Consistency in Rendezvous Information Systems." Phd thesis, 2012. http://tuprints.ulb.tu-darmstadt.de/3078/1/phd-thesis-leng-tuprints.pdf.

Abstract:
As distributed systems are getting more and more complex, search facilities for finding services and data within the system become crucial. Users expect search engines to deal with complex query languages like keyword search, SQL, or XPath. At the same time, application developers cannot be expected to come up with distributed versions of those query languages from scratch. Rendezvous search systems are a very scalable solution to this problem. By separating the query processing from the network communication, existing libraries for query processing can be easily reused. A wide range of rendezvous search systems for different scenarios has been proposed in the past. Their scalability and resilience make them an excellent choice for search in large-scale and dynamic peer-to-peer environments. The resilience stems mainly from the high number of replicas per datum, which however makes replica maintenance difficult. Unfortunately, most rendezvous search systems lack maintenance algorithms to sustain the desired replica count under node churn. Replica maintenance is closely related to update mechanisms for mutable data. The highly distributed nature of peer-to-peer systems in general and the high replica count of rendezvous search systems in particular require carefully designed mechanisms for consistent updates with concurrent accesses. In this thesis, replica maintenance and update mechanisms for the BubbleStorm peer-to-peer overlay and related rendezvous search systems are introduced. After analyzing the design space of replica maintenance for peer-to-peer systems, a complete solution covering all identified use cases is presented. This includes a maintainer-based mechanism for data managed by a single node and a collective mechanism for data that shall be persistent beyond any particular node’s session time. The algorithms are evaluated in BubbleStorm’s sophisticated testbed, which allows prototype experiments and simulations with the same source code.
21

CHANG, YI-CHING, and 張倚菁. "Applying Updated Information System Success Model (Updated ISSM) to Explore the Benefit Analysis of Taiwan Enterprises on ERP Implementation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/54uxv4.

Abstract:
Master's thesis
Chung Hua University
Department of Information Management
Academic year 105 (2016-17)
The purpose of this study is to construct a framework to explore the benefits that Taiwanese enterprises obtain from ERP (Enterprise Resource Planning) implementation. Previous ERP research has focused on user satisfaction, case studies, and key factors influencing ERP implementation. As the competitive business environment changes rapidly, the information systems of most enterprises must be modified accordingly; nevertheless, few studies have explored the benefits of ERP through the updated ISSM (updated Information System Success Model). This study adopts the updated ISSM and Expectation Confirmation Theory to explore the benefits of ERP implementation for Taiwanese enterprises. Data were collected from 206 ERP system users and analyzed using SPSS. The results demonstrate that system quality has a positive impact on both information quality and user satisfaction; that service quality has a positive impact on information quality and user satisfaction; that the degree of users' expectation confirmation has a positive impact on user satisfaction; and that user satisfaction has a positive impact on both individual and organizational benefit. This study recommends that the reliability and stability of the ERP system, along with the professional image and problem-solving ability of the ERP supplier, be treated as priorities when implementing an ERP system. Furthermore, to encourage users' continued use, ERP suppliers should increase the dependability of the system and foster close interaction between information supervisors and users while strengthening problem-solving capability.
22

ThuNguyen, Phuong Ha, and 阮芳夏書. "The Impact of Updated AIS (Accounting Information System)’s Appropriation on Accounting Process Performance under New IFRS." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/5ep9d9.

23

Wu, Ssu-wei, and 吳斯維. "A Study on Job Performance of Second-Generation Document Management System Based on Updated Information Systems Success Model for Civil Servants in New Taipei City." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/32365736954392234201.

Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information and Communication (including the in-service master's program)
Academic year 103 (2014-15)
The New Taipei City Government, in line with central government policy, built a second-generation document management system in order to handle official documents electronically. The system changed the way civil servants deal with official business. This study aims to understand the influence of system quality on user satisfaction and to analyze the correlation between user satisfaction and job performance. The study surveyed civil servants in the district offices of New Taipei City through questionnaires, collecting 457 valid responses. The data were analyzed with SPSS version 21.0 using descriptive statistics, independent-samples t tests, one-way ANOVA, Pearson product-moment correlation, and regression analysis. The findings are that system quality, service quality, and information quality have a positive impact on user satisfaction, and that user satisfaction has a positive correlation with the job performance of civil servants.
24

Van der Westhuizen, Petra Laura. "Control room agents : an information-theoretic approach." Thesis, 2007. http://hdl.handle.net/10500/2211.

Abstract:
In this thesis, a particular class of agent is singled out for examination. In order to provide a guiding metaphor, we speak of control room agents. Our focus is on rational decision-making by such agents, where the circumstances obtaining are such that rationality is bounded. Control room agents, whether human or non-human, need to reason and act in a changing environment with only limited information available to them. Determining the current state of the environment is a central concern for control room agents if they are to reason and act sensibly. A control room agent cannot plan its actions without having an internal representation (epistemic state) of its environment, and cannot make rational decisions unless this representation, to some level of accuracy, reflects the state of its environment. The focus of this thesis is on three aspects of the epistemic functioning of a control room agent:

1. How should the epistemic state of a control room agent be represented in order to facilitate logical analysis?
2. How should a control room agent change its epistemic state upon receiving new information?
3. How should a control room agent combine available information from different sources?

In describing the class of control room agents as first-order intentional systems having both informational and motivational attitudes, an agent-oriented view is adopted. The central construct used in the information-theoretic approach, which is qualitative in nature, is the concept of a templated ordering. Representing the epistemic state of a control room agent by a (special form of) templated ordering signals a departure from the many approaches in which only the beliefs of an agent are represented; templated orderings allow for the representation of both knowledge and belief. A control room agent changes its epistemic state according to a proposed epistemic change algorithm, which allows the agent to select between two well-established forms of belief change operation, namely belief revision and belief update. The combination of (possibly conflicting) information from different sources has received a lot of attention in recent years. Using templated orderings for the semantic representation of information, a new family of purely qualitative merging operations is developed.
School of Computing
Ph.D. (Computer Science)
25

Idris, Muhammad. "Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates." 2018. https://tud.qucosa.de/id/qucosa%3A33726.

Abstract:
Responsive analytics are rapidly taking over the traditional data analytics dominated by post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system, to react to updates occurring at high speed and to detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence, and online Machine Learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task is then to maintain these results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation, based on the relational incremental view maintenance model, mostly focusing on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur high memory footprint. We are interested in investigating the possibility of building a model that can address this trade-off. In particular, we overcome this trade-off by investigating a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above-described properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present the dynamic evaluation algorithm. Then, we present the generalization of this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. The working of the dynamic algorithms over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do this, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections for acyclicity, and we further extend the GYO algorithm to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code: a trigger program to process queries and maintain them under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation results show that our DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost. We have also shown that enumerating query results without materialization from DCLRs is comparable to (and in some cases more efficient than) enumerating from materialized query results.
26

Schenk, Franz. "An Active Domain Node Architecture for the Semantic Web." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B7-3.
