
Dissertations / Theses on the topic 'Agent-based data simulation'


Consult the top 18 dissertations / theses for your research on the topic 'Agent-based data simulation.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Martignoni, Robert Antonio. "Evaluation of the business model for mobile data services: an agent-based simulation approach." [S.l.]: [s.n.], 2009. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000293550.

2

Tufail, M. "The extraction and usage of patterns from video data to support multi-agent based simulation." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3008120/.

Abstract:
The research work presented in this thesis is directed at addressing the knowledge acquisition bottleneck frequently encountered in computer simulation. The central idea is to extract the required knowledge from video data and use this to drive a computer simulation, instead of the more conventional approach of interviewing domain experts and somehow encapsulating this knowledge in a manner whereby it can be used in the context of computer simulation. More specifically, the idea presented in this thesis is to extract object location information from video data, mine this information to identify Movement Patterns (MPs), and then utilise these MPs in the context of computer simulation. Rodent behaviour simulation was chosen as a focus for the work, partly because video data concerning rodent behaviour was relatively easy to obtain and partly because there is a genuine need to achieve a better understanding of rodent behaviour, especially in the context of crop damage. There are a variety of computer simulation frameworks; one that naturally lends itself to rodent simulation is Multi Agent Based Simulation (MABS), whereby the objects to be simulated (rodents) are encapsulated in terms of software agents. In more detail, the work presented is directed at a number of research issues in this context: (i) mechanisms to identify a moving object in video data and extract associated location information, (ii) the mining of MPs from the extracted location information, (iii) the representation of MPs in such a way that they are compatible with computer simulation frameworks, especially MABS frameworks, and (iv) mechanisms whereby MPs can be utilised and interacted with so as to drive a MABS. Overall, two types of mechanism are considered: Absolute and Relative. The operation of rodent MABSs, driven using the proposed MP concept, is fully illustrated in the context of different categories of scenario. The proposed MP-driven MABSs were evaluated by comparing real-world scenarios to parallel simulated scenarios. The results presented in the thesis demonstrate that the proposed mechanisms for extracting locations, and consequently mining MPs, from video data to drive a MABS provide a useful approach to effective computer simulation that will have wide-ranging benefits.
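As a rough illustration of the Absolute variant of this idea, the sketch below mines grid-cell transition counts from extracted (x, y) tracks and uses them to drive a simulated agent. It is a minimal reconstruction assuming a hypothetical track format and cell size, not the thesis's actual MP representation:

```python
import random
from collections import Counter, defaultdict

CELL = 20  # grid cell size in pixels (assumed)

def to_cells(track, cell=CELL):
    """Map a list of (x, y) locations to a list of grid cells."""
    return [(int(x) // cell, int(y) // cell) for x, y in track]

def mine_movement_patterns(tracks):
    """Count cell-to-cell transitions over all extracted tracks."""
    patterns = defaultdict(Counter)
    for track in tracks:
        cells = to_cells(track)
        for a, b in zip(cells, cells[1:]):
            if a != b:
                patterns[a][b] += 1
    return patterns

def step_agent(cell, patterns):
    """Drive a simulated agent by sampling the mined transitions."""
    moves = patterns.get(cell)
    if not moves:
        return cell  # no pattern mined for this cell: stay put
    targets, weights = zip(*moves.items())
    return random.choices(targets, weights=weights)[0]

# toy usage: two short tracks, then a five-step simulated walk
tracks = [[(0, 0), (25, 5), (45, 10)], [(2, 3), (28, 6), (50, 12)]]
mps = mine_movement_patterns(tracks)
pos = (0, 0)
for _ in range(5):
    pos = step_agent(pos, mps)
print(pos)
```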
3

Zheng, Jiaqi. "Interactive Visual Analytics for Agent-Based simulation : Street-Crossing Behavior at Signalized Pedestrian Crossing." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264991.

Abstract:
Designing a pedestrian crossing area well can be a demanding task for traffic planners. There are several challenges, including determining the appropriate dimensions and ensuring that pedestrians are exposed to as little risk as possible. Pedestrian safety is especially difficult to analyze, given that many people in Stockholm cross the street illegally by running against the red light. To cope with these challenges, computational approaches to trajectory-data visual analytics can be used to support the analytical reasoning process. However, how to visualize and communicate street-crossing spatio-temporal data effectively remains an unexplored field. Moreover, the rendering also needs to cope with data sizes that grow with the number of people. This thesis proposes a web-based interactive visual analytics tool for pedestrians' street-crossing behavior under various flow rates. The visualization methodology is also presented and is evaluated as achieving satisfactory communication and rendering effectiveness for a maximum of 180 agents over 100 seconds. In the visualization scenario, pedestrians either wait for the red light or cross the street illegally; all people can choose to stop at a buffer island before they finish crossing. The visualization enables analysis under multiple flow rates of 1) pedestrian movement, 2) space utilization, 3) crossing frequency in time series, and 4) illegal-crossing frequency. Additionally, to acquire the initial trajectory data, the Optimal Reciprocal Collision Avoidance (ORCA) algorithm is used for the crowd simulation. Different visualization techniques are then applied to meet user demands, including map animation, data aggregation, and time-series graphs.
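One of the four views listed above, crossing frequency in time series, reduces to binning the moments at which simulated trajectories cross the kerb line. A minimal sketch, assuming a hypothetical trajectory format of (time, x, y) samples per agent rather than the thesis's actual data model:

```python
from collections import Counter

def crossing_counts(trajectories, crossing_y=0.0, bin_s=10.0):
    """Bin the moments at which agents cross the line y = crossing_y.

    trajectories: dict agent_id -> list of (t_seconds, x, y) samples.
    Returns a Counter mapping time-bin index -> number of crossings.
    """
    counts = Counter()
    for samples in trajectories.values():
        for (t0, _, y0), (t1, _, y1) in zip(samples, samples[1:]):
            if (y0 - crossing_y) * (y1 - crossing_y) < 0:  # sign change: crossed
                counts[int(t1 // bin_s)] += 1
    return counts

# toy usage: one agent crossing y = 0 around t = 12 s
traj = {1: [(10, 0.0, -1.0), (12, 0.0, 0.5), (14, 0.0, 1.5)]}
print(crossing_counts(traj))  # Counter({1: 1})
```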
4

Kratz, Jakob, and Viktor Luthman. "Comparison of spatial partitioning data structures in crowd simulations." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302340.

Abstract:
This report investigates how the construction and query times of several spatial partitioning data structures are affected by the spatial distribution and the number of agents in a crowd simulation. In addition, a method is investigated for updating the data structures less frequently at the cost of increasing the queried radius, without affecting the correctness of the queries. The data structures are tested in a simulation using a Boids model, and update and query times are measured. It is found that the performance of the grid is better than that of the quadtree and the kd-tree for low numbers of agents, but deteriorates more quickly as the number of agents increases. It is also found that this approach can decrease the combined time spent updating and querying in the simulation. The effectiveness of the method depends strongly on the update time of the data structure.
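A minimal sketch of the two ingredients discussed here: a uniform grid for neighbour queries, and the stale-structure trick of rebuilding only every k steps while padding the queried radius by the maximum distance agents can travel in between. The names and the padding formula are illustrative assumptions, not code from the report:

```python
from collections import defaultdict
from math import floor, hypot

class UniformGrid:
    """Uniform grid for neighbour queries in a 2-D crowd simulation."""
    def __init__(self, cell_size):
        self.cell = cell_size
        self.bins = defaultdict(list)

    def build(self, positions):            # positions: id -> (x, y)
        self.bins.clear()
        self.positions = dict(positions)
        for aid, (x, y) in positions.items():
            self.bins[(floor(x / self.cell), floor(y / self.cell))].append(aid)

    def query(self, x, y, r):
        """All agents within r of (x, y), scanning only the covering cells."""
        c, out = self.cell, []
        for cx in range(floor((x - r) / c), floor((x + r) / c) + 1):
            for cy in range(floor((y - r) / c), floor((y + r) / c) + 1):
                for aid in self.bins.get((cx, cy), ()):
                    px, py = self.positions[aid]
                    if hypot(px - x, py - y) <= r:
                        out.append(aid)
        return out

def safe_radius(r, k, v_max, dt):
    """Radius guaranteeing correct results when the grid is rebuilt only
    every k steps and agents move at most v_max * dt per step."""
    return r + k * v_max * dt

grid = UniformGrid(cell_size=2.0)
grid.build({1: (0.0, 0.0), 2: (1.5, 0.5), 3: (8.0, 8.0)})
print(grid.query(0.0, 0.0, safe_radius(2.0, k=3, v_max=0.5, dt=0.1)))
```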
5

Hassouna, Mohammed Bassam. "Agent based modelling and simulation : an examination of customer retention in the UK mobile market." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6344.

Abstract:
Customer retention is an important issue for any business, especially in mature markets such as the UK mobile market, where new customers can only be acquired from competitors. Different methods and techniques have been used to investigate customer retention, including statistical methods and data mining. However, due to the increasing complexity of the mobile market, the effectiveness of these techniques is questionable. This study proposes Agent-Based Modelling and Simulation (ABMS) as a novel approach to investigate customer retention. ABMS is an emerging means of simulating behaviour and examining behavioural consequences. In outline, agents represent customers and agent relationships represent processes of agent interaction. This study follows the design science paradigm to build and evaluate a generic, reusable, agent-based (CubSim) model to examine the factors affecting customer retention, based on data extracted from a UK mobile operator. From these data, two data mining models are built to gain a better understanding of the problem domain and to identify the main limitations of data mining. This is followed by two interrelated development cycles: (1) build the CubSim model, starting with modelling customer interaction with the market, including interaction with the service provider and other competing operators; and (2) extend the CubSim model by incorporating interaction among customers. The key contribution of this study lies in using ABMS to identify and model the key factors that affect customer retention simultaneously and jointly. In this manner, the CubSim model is better suited to account for the dynamics of customer churn behaviour in the UK mobile market than existing models. Another important contribution is that the study provides empirical, actionable insight into customer retention. In particular, and most interestingly, the experimental results show that a mixed customer retention strategy targeting both high-value customers and customers with a large personal network outperforms traditional customer retention strategies, which focus only on the customer's value.
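The headline result, that targeting value plus network size beats targeting value alone, can be illustrated with a deliberately crude agent sketch. All distributions, the offer effect, and the social-influence term below are invented for illustration and are not calibrated to the thesis's operator data:

```python
import random

random.seed(1)

class Customer:
    def __init__(self):
        self.value = random.uniform(10, 100)       # monthly revenue (assumed scale)
        self.friends = random.randint(0, 20)       # personal network size
        self.churn_p = random.uniform(0.01, 0.10)  # monthly churn probability

def expected_loss(customers, targeted):
    """Expected one-month revenue loss; a retention offer halves churn_p."""
    loss = 0.0
    for c in customers:
        p = c.churn_p * (0.5 if c in targeted else 1.0)
        # churners also nudge their network: crude social-influence term
        loss += p * c.value * (1 + 0.02 * c.friends)
    return loss

pop = [Customer() for _ in range(1000)]
budget = 100  # retention offers available
by_value = set(sorted(pop, key=lambda c: -c.value)[:budget])
mixed = set(sorted(pop, key=lambda c: -(c.value + 3 * c.friends))[:budget])
print(expected_loss(pop, by_value), expected_loss(pop, mixed))
```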
6

Zangeneh, L. "Investigating the challenges of data, pricing and modelling to enable agent based simulation of the Credit Default Swap market." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1435662/.

Abstract:
The Global Financial Crisis of 2007-2008 is considered by three top economists to be the worst financial crisis since the Great Depression of the 1930s [Pendery, 2009]. The crisis played a major role in the failure of key businesses, declines in consumer wealth, and a significant downturn in economic activity, leading to the 2008-2012 global recession and contributing to the European sovereign-debt crisis [Baily and Elliott, 2009] [Williams, 2012]. More importantly, the serious limitations of existing conventional tools and models, as well as a vital need to develop complementary tools that improve the robustness of the overall framework, immediately became apparent. This thesis details three proposed solutions drawn from three main subject areas, Statistics, Genetic Programming (GP), and Agent-Based Modeling (ABM), to help enable agent-based simulation of the Credit Default Swap (CDS) market. This is accomplished by tackling three challenges faced by designers of CDS investigation tools: the lack of sufficient data to support research, the lack of an efficient CDS pricing technique that can be integrated into an agent-based model, and the lack of a practical CDS market experimental model. In particular, a general data-generative model is presented for simulating financial data, a novel price calculator is proposed for pricing CDS contracts, and a unique CDS agent-based model is designed to enable investigation of the market. The solutions presented can be seen as modular building blocks that can be applied to a variety of applications. Ultimately, a unified general framework is presented for integrating these three solutions. The motivation for the methods is to offer viable tools that address these challenges and thus enable future realistic simulation of the CDS market using the limited real data at hand. A series of experiments was carried out, and a comparative evaluation and discussion are provided. In particular, we present the advantages of realistic artificial data in enabling open-ended simulation and the design of various scenarios, the effectiveness of Cartesian Genetic Programming (CGP) as a bio-inspired evolutionary method for a complex real-world financial problem, and the capability of Agent-Based (AB) models for investigating the CDS market. These experiments demonstrate the efficiency and viability of the proposed approaches and highlight interesting directions for future research.
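To make the data-generative idea concrete, here is one conventional way to fabricate a CDS-spread-like series: a mean-reverting process with rare stress jumps. This is a generic placeholder with invented parameters, not the generative model or CGP pricer developed in the thesis:

```python
import random

random.seed(7)

def simulate_cds_spread(s0=100.0, days=250, kappa=0.05, mu=100.0,
                        sigma=2.0, jump_p=0.01, jump_size=25.0):
    """Mean-reverting spread (in basis points) with occasional stress jumps."""
    s, path = s0, [s0]
    for _ in range(days):
        s += kappa * (mu - s) + random.gauss(0.0, sigma)
        if random.random() < jump_p:          # rare credit-stress event
            s += jump_size
        path.append(max(s, 1.0))              # spreads stay positive
    return path

path = simulate_cds_spread()
print(min(path), max(path))
```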
7

Elmir, Ahmad. "PaySim Financial Simulator : PaySim Financial Simulator." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14061.

Abstract:
The lack of legitimate datasets on mobile money transactions on which to perform research in the domain of fraud detection is a big problem today in the scientific community. Part of the problem is the intrinsically private nature of mobile transactions: not much information can be exploited. This leaves researchers with the burden of first harnessing a dataset before performing the actual research on it; the dataset corresponds to the set of data on which the research is to be performed. This thesis discusses a solution to this problem, namely the PaySim simulator. PaySim is a financial simulator that simulates mobile money transactions based on an original dataset. We present a solution that ultimately makes it possible to simulate mobile money transactions in such a way that they become similar to the original dataset. The similarity, or congruity, is measured by calculating the error rate between the synthetic dataset and the original dataset. With technology frameworks such as agent-based simulation techniques, and the application of mathematical statistics, it can be demonstrated that the synthetic data is as prudent as the original dataset. The aim of this thesis is to demonstrate with statistical models that PaySim can be used as a tool for the purposes of financial simulation.
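The similarity measure mentioned here, an error rate between synthetic and original datasets, can be sketched as a relative error over per-type aggregates. The transaction-log format below is a guess for illustration; PaySim's actual fields and metric may differ:

```python
def error_rate(original, synthetic, field="amount"):
    """Mean relative error of per-type totals between two transaction logs.

    Each log is a list of dicts like {"type": "CASH_OUT", "amount": 350.0}.
    """
    def totals(log):
        t = {}
        for tx in log:
            t[tx["type"]] = t.get(tx["type"], 0.0) + tx[field]
        return t

    orig, synth = totals(original), totals(synthetic)
    errs = [abs(orig[k] - synth.get(k, 0.0)) / orig[k] for k in orig]
    return sum(errs) / len(errs)

orig = [{"type": "CASH_OUT", "amount": 100.0}, {"type": "PAYMENT", "amount": 50.0}]
synth = [{"type": "CASH_OUT", "amount": 90.0}, {"type": "PAYMENT", "amount": 55.0}]
print(error_rate(orig, synth))  # (0.10 + 0.10) / 2 = 0.10
```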
8

Lopez-Rojas, Edgar Alonso. "Applying Simulation to the Problem of Detecting Financial Fraud." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12932.

Abstract:
This thesis introduces a financial simulation model covering two related financial domains: mobile payment and retail store systems. The problem we address in these domains is fraud of different types, and we limit ourselves to isolated cases of relatively straightforward fraud. The ultimate aim of this thesis, however, is to introduce our approach to the use of computer simulation for fraud detection and its applications in financial domains. Fraud is an important problem that impacts the whole economy. Currently, there is a lack of public research into the detection of fraud; one important reason is the lack of transaction data, which is often sensitive. To address this problem we present a mobile money Payment Simulator (PaySim) and a Retail Store Simulator (RetSim), which allow us to generate synthetic transactional data containing both normal customer behaviour and fraudulent behaviour. These simulators are Multi Agent-Based Simulations (MABS) and were calibrated using real data from financial transactions. We developed agents that represent the clients and merchants in PaySim and the customers and salesmen in RetSim. The normal behaviour was based on behaviour observed in data from the field and is codified in the agents as rules of transactions and interaction between clients and merchants, or customers and salesmen. Some of these agents were intentionally designed to act fraudulently, based on observed patterns of real fraud. We introduced known signatures of fraud into our model and simulations to test and evaluate our fraud detection methods. The resulting behaviour of the agents generates a synthetic log of all transactions produced by the simulation. This synthetic data can be used to further advance fraud detection research without leaking sensitive information about the underlying data or breaking any non-disclosure agreements. Using statistics and social network analysis (SNA) on real data, we calibrated the relations between our agents and generated realistic synthetic data sets that were verified against the domain and validated statistically against the original source. We then used the simulation tools to model common fraud scenarios to ascertain exactly how effective fraud detection techniques are, starting with the simplest form of statistical threshold detection, which is perhaps the most common in use. The preliminary results show that threshold detection is effective enough at keeping fraud losses at a set level, which means that there seems to be little economic room for improved fraud detection techniques. We also implemented other applications of the simulator tools, such as the set-up of a triage model and the measurement of the cost of fraud. This proved to be an important help for managers who aim to prioritise fraud detection and want to know how much they should invest to keep losses below a desired limit under different experimental and expected fraud scenarios.
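The simplest form of statistical threshold detection evaluated in the thesis can be sketched in a few lines: flag any transaction whose amount exceeds a high percentile of historical behaviour. The nearest-rank percentile and the toy amounts below are illustrative assumptions:

```python
def percentile(values, q):
    """Nearest-rank percentile, q in [0, 100]."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, int(round(q / 100.0 * len(s))) - 1))
    return s[idx]

def threshold_detector(history, new_amount, q=99.0):
    """Flag a transaction whose amount exceeds the q-th percentile of history."""
    return new_amount > percentile(history, q)

history = [20, 35, 12, 50, 48, 30, 25, 60, 40, 33]
print(threshold_detector(history, 500))  # True: well above normal behaviour
print(threshold_detector(history, 45))   # False
```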
9

Alshammari, Sultanah. "A Data-Driven Computational Framework to Assess the Risk of Epidemics at Global Mass Gatherings." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1505145/.

Abstract:
This dissertation presents a data-driven computational epidemic framework to simulate disease epidemics at global mass gatherings. The annual Muslim pilgrimage to Makkah, Saudi Arabia, is used to demonstrate the simulation and analysis of various disease transmission scenarios throughout the different stages of the event, from the arrival to the departure of international participants. The proposed agent-based epidemic model efficiently captures the demographic, spatial, and temporal heterogeneity at each stage of the global event of Hajj. Experimental results indicate the substantial impact of the demographic and mobility patterns of the heterogeneous population of pilgrims on the progression of the disease spread at the different stages of Hajj. In addition, these simulations suggest that the differences in the spatial and temporal settings of each stage can significantly affect the dynamics of the disease. Finally, the epidemic simulations conducted at the different stages illustrate the impact of the differences between the duration of each stage and the lengths of the infectious and latent periods. This research contributes to a better understanding of epidemic modeling in the context of global mass gatherings, to predict the risk of disease pandemics caused by associated international travel. Computational modeling and disease spread simulation at global mass gatherings provide public health authorities with powerful tools to assess the implications of these events at different scales and to evaluate the efficacy of control strategies to reduce their potential impacts.
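The role of latent and infectious periods relative to stage duration, noted above, can be illustrated with a minimal agent-level SEIR update. The transmission rate, period lengths, and population below are invented, and the sketch ignores the demographic and spatial heterogeneity the dissertation models:

```python
import random

random.seed(3)
S, E, I, R = "S", "E", "I", "R"

def step(agents, beta, latent=2, infectious=5):
    """One day: random mixing, exposure, and stage progression."""
    infectious_n = sum(a["state"] == I for a in agents)
    force = beta * infectious_n / len(agents)   # per-susceptible infection risk
    for a in agents:
        if a["state"] == S and random.random() < force:
            a["state"], a["days"] = E, 0
        elif a["state"] == E:
            a["days"] += 1
            if a["days"] >= latent:
                a["state"], a["days"] = I, 0
        elif a["state"] == I:
            a["days"] += 1
            if a["days"] >= infectious:
                a["state"] = R

agents = [{"state": S, "days": 0} for _ in range(999)] + [{"state": I, "days": 0}]
for day in range(30):       # e.g. one stage of an event
    step(agents, beta=0.4)
print(sum(a["state"] == R for a in agents))   # total recovered after the stage
```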
10

MERICO, DAVIDE. "Tracking with high-density, large-scale wireless sensor networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/7785.

Abstract:
Given the continuous technological advances in computing and communication, we seem to be rapidly heading towards the realization of paradigms commonly described as ubiquitous computing, pervasive computing, ambient intelligence, or, more recently, "everyware". These paradigms envision living environments pervaded by a large number of invisible technological devices affecting and improving all aspects of our lives. It is therefore easy to justify the need to know the physical location of users. Outdoor location-aware applications are already widespread today, and their growing popularity shows that location awareness is indeed a very useful functionality. Less obvious is how the growing availability of these locations and tracks will be exploited to provide more intelligent "situation-understanding" services that help people. My work is motivated by the fact that, thanks to location-awareness systems, we are increasingly aware of the exact positions of users but rarely capable of understanding exactly what they are doing. Location awareness should rapidly evolve into "situation awareness"; otherwise the ubiquitous-computing vision will become impracticable. The goal of this thesis is to devise alternative and innovative approaches to the problem of indoor position estimation/assessment and to evaluate them in real environments. These approaches are based on: (i) a low-cost and energy-aware localization infrastructure; (ii) multi-sensor, statistically based localization algorithms; (iii) logic-based situation assessment techniques. The algorithms and techniques that are the outcome of this thesis have all been tested by implementing them and measuring their performance in the field, both quantitatively and qualitatively.
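As a stand-in for the multi-sensor, statistically based localization algorithms of point (ii), here is one of the simplest estimators in that family: an RSSI-weighted centroid over fixed anchor nodes. The path-loss weighting is a common heuristic, not the thesis's actual algorithm:

```python
def weighted_centroid(readings):
    """Estimate a tag position from anchor readings.

    readings: list of ((x, y), rssi_dbm); stronger (less negative) RSSI
    gets more weight via a simple inverse-path-loss heuristic.
    """
    weights = [((x, y), 10 ** (rssi / 20.0)) for (x, y), rssi in readings]
    total = sum(w for _, w in weights)
    ex = sum(x * w for (x, _), w in weights) / total
    ey = sum(y * w for (_, y), w in weights) / total
    return ex, ey

# three anchors; the strongest signal comes from the anchor at the origin
print(weighted_centroid([((0, 0), -40), ((10, 0), -60), ((0, 10), -60)]))
```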
11

Schlitt, James Thomas. "Applying Time-Valued Knowledge for Public Health Outbreak Response." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90399.

Abstract:
During the early stages of any epidemic, simple interventions such as quarantine and isolation may be sufficient to halt the spread of a novel pathogen. However, should this opportunity be missed, substantially more resource-intensive, complex, and societally intrusive interventions may be required to achieve an acceptable outcome. These disparities place a differential on the value of a given unit of knowledge across the time-domains of an epidemic. Within this dissertation we explore these value differentials by extending the business concept of the time-value of knowledge, and we propose the C4 Response Model for organizing the research response to novel pathogenic outbreaks. First, we define the C4 Response Model as a progression from an initial data-hungry collect stage, through iteration between open-science-centric connect stages and machine-learning-centric calibrate stages, to a final visualization-centric convey stage. Secondly, we analyze the trends in knowledge-building across the stages of epidemics with regard to open- and closed-access article publication, referencing, and citation. Thirdly, we demonstrate a Twitter message-mapping application to assess the virality of tweets as a function of their source-profile category, message category, timing, urban context, tone, and use of bots. Finally, we apply an agent-based model of influenza transmission to explore the efficacy of combined antiviral, sequestration, and vaccination interventions in mitigating an outbreak of an influenza-like illness (ILI) within a simulated military base population. We find that while closed-access outbreak response articles use more recent citations and see higher mean citation counts, open-access articles are published and referenced in significantly greater numbers and are growing in proportion. We observe that tweet virality showed distinct heterogeneities across message and profile type pairings, that tweets dissipated rapidly across time and space, and that tweets published before high-tweet-volume time periods showed higher virality. Finally, we saw that while timely responses and strong pharmaceutical interventions showed the greatest impact in mitigating ILI transmission within a military base, even optimistic scenarios failed to prevent the majority of new cases. This body of work offers significant methodological contributions to the practice of computational epidemiology as well as a theoretical grounding for further use of the C4 Response Model.
During the early stages of an outbreak of disease, simple interventions such as isolating those infected may be sufficient to prevent further cases. However, should this opportunity be missed, substantially more complex interventions, such as the development of novel pharmaceuticals, may be required. This results in a differential value for specific knowledge across the early, middle, and late stages of an epidemic. Within this dissertation we explore these differentials by extending the business concept of the time-value of knowledge, whereby key findings may yield greater benefits during early epidemics, and we propose the C4 Response Model for organizing research around this time-value. First, we define the C4 Response Model as a progression from an initial knowledge collection stage, through iteration between knowledge connection stages and machine-learning-centric calibration stages, to a final conveyance stage. Secondly, we analyze the trends in knowledge-building across the stages of epidemics with regard to open- and closed-access scientific article publication, referencing, and citation. Thirdly, we demonstrate a Twitter application for improving public health messaging campaigns by identifying optimal combinations of source-profile categories, message categories, timing, urban origination, tone, and use of bots. Finally, we apply an agent-based model of influenza transmission to explore the efficacy of combined antiviral, isolation, and vaccination interventions in mitigating an outbreak of an influenza-like illness (ILI) within a simulated military base population. We find that while closed-access outbreak response articles use more recent citations and see higher mean citation counts, open-access articles are growing in use and are published and referenced in significantly greater numbers. We observe that certain message and profile type pairings showed distinct virality benefits, that tweets faded rapidly across time and space, and that tweets published before high-tweet-volume time periods were retweeted more. Finally, we saw that while early responses and strong pharmaceuticals showed the greatest impact in preventing influenza transmission within military base populations, even optimistic scenarios failed to prevent the majority of new cases. This body of work offers significant methodological contributions to the practice of computational epidemiology as well as a theoretical grounding for the C4 Response Model.
12

Colliri, Tiago Santos. "Avaliação de preços de ações: proposta de um índice baseado nos preços históricos ponderados pelo volume, por meio do uso de modelagem computacional." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/100/100132/tde-07072013-015903/.

Abstract:
The importance of considering volumes when analyzing stock price movements is a well-accepted practice in the financial area. However, when we look at the scientific production in this field, we still cannot find a unified model that includes volume and price variations for stock price assessment purposes. In this work we present a computational model that could fill this gap, proposing a new index to evaluate stock prices based on their historical prices and traded volumes. The aim of the model is to estimate the current proportions of the total volume of a stock's shares available in the market (the free float), distributed according to the prices at which they were bought in the past. To do so, we make use of dynamic financial modeling and apply it to real financial data from the Sao Paulo Stock Exchange (Bovespa) and also to simulated data generated through an order book model. The value of the index varies based on the difference between the current proportion of shares traded in the past at a price above the current price of the stock and its counterpart, the proportion of shares traded in the past at a price below the current price. Although the model is mathematically very simple, it was able to significantly improve the financial performance of agents operating with real market data and with simulated data, which helps demonstrate its rationale and applicability. Based on the results obtained, and on the very intuitive logic behind the model, we believe the index proposed here can be very useful in helping investors determine ideal price ranges for buying and selling stocks in the financial market.
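The index described here reduces to comparing how much of the free float was last bought above versus below the current price. A minimal sketch, under the simplifying assumption that each (price, volume) buy is still held, ignoring the thesis's dynamic reallocation of the float across prices:

```python
def volume_index(trades, current_price, free_float):
    """Index in [-1, 1]: positive when more of the float was bought above
    the current price (potential selling pressure overhead) than below.

    trades: list of (buy_price, volume) pairs summing to <= free_float.
    """
    above = sum(v for p, v in trades if p > current_price)
    below = sum(v for p, v in trades if p < current_price)
    return (above - below) / free_float

trades = [(10.0, 400), (12.0, 300), (9.0, 300)]   # float fully distributed
print(volume_index(trades, current_price=11.0, free_float=1000))  # -0.4
```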
13

Malmström, David, and Stefan Kaalen. "An Investigation of an Example-Based Method for Crowd Simulations." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208896.

Abstract:
The problem of simulating a crowd, whether to see how it would behave in certain situations or simply to create a realistic-looking scene for a movie or video game, is important and complex, and there are many different methods to solve it. This project is primarily an investigation of the example-based crowd simulation method described in the article "Crowds by Example" by Lerner et al. In the article, traced video footage of crowds is used to create a data set. The simulation program continuously finds situations in the data set that resemble the current situations in the simulation and updates the simulation accordingly. We implemented this for around 10 agents using Unity 3D. Example-based crowd simulation does not only take collision avoidance into account, as some other crowd simulation methods do (for example ORCA), but also captures the more complex ways in which the human mind thinks and therefore does not always behave as one would predict. The main conclusion is that this method of simulating crowds has the potential to create more realistic simulations than other forms of crowd simulation. The downsides are that the time the program spends creating simulations can quickly become very long, and that realistic simulations require a lot of video footage to be filmed and then traced.
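The core step of the example-based method, looking up the traced example whose situation most resembles an agent's current surroundings and copying its observed motion, can be sketched as a nearest-neighbour query. The situation encoding below (own velocity plus the two nearest neighbour offsets) is an invented simplification of the paper's; it requires Python 3.8+ for math.dist:

```python
from math import dist  # Python 3.8+

def situation(agent_v, neighbours):
    """Crude situation encoding: own velocity plus the two nearest offsets."""
    near = sorted(neighbours, key=lambda d: d[0] ** 2 + d[1] ** 2)[:2]
    flat = [agent_v[0], agent_v[1]]
    for dx, dy in near:
        flat += [dx, dy]
    while len(flat) < 6:      # pad if fewer than two neighbours
        flat += [0.0, 0.0]
    return flat

def next_velocity(query, examples):
    """examples: list of (situation_vector, observed_next_velocity)."""
    best = min(examples, key=lambda ex: dist(ex[0], query))
    return best[1]

examples = [([1, 0, 2, 0, 0, 2], (0.8, 0.2)),   # traced from video (toy values)
            ([1, 0, -1, 0, 0, 0], (1.0, 0.0))]
print(next_velocity(situation((1, 0), [(2.1, 0.1), (0, 1.9)]), examples))
```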
14

Aldabbas, Hamza. "Securing data dissemination in vehicular ad hoc networks." Thesis, De Montfort University, 2012. http://hdl.handle.net/2086/7987.

Abstract:
Vehicular ad hoc networks (VANETs) are a subclass of mobile ad hoc networks (MANETs) in which the mobile nodes are vehicles; these vehicles are autonomous systems connected by wireless communication on a peer-to-peer basis. They are self-organized, self-configured and self-controlled infrastructure-less networks. This kind of network has the advantage that it can be set up and deployed anywhere at any time, because it requires no infrastructure set-up and no central administration. Distributing information between these vehicles over long ranges in such networks, however, is a very challenging task, since sharing information always carries a risk, especially when the information is confidential. The disclosure of such information to anyone other than the intended parties could be extremely damaging, particularly in military applications, where controlling the dissemination of messages is essential. This thesis therefore reviews the issue of security in VANETs and MANETs and surveys existing solutions for dissemination control, highlighting a particular area not adequately addressed until now: controlling information flow in VANETs. The thesis contributes a policy-based framework to control the dissemination of messages communicated between nodes, in order to ensure that a message remains confidential not only during transmission but also after it has been communicated to another peer, and to keep the message contents private to an originator-defined subset of nodes in the VANET. It presents a novel framework to control data dissemination in vehicular ad hoc networks in which policies are attached to messages as they are sent between peers: policies are automatically attached along with messages to specify how the information can be used by the receiver, so as to prevent disclosure of the messages other than in accordance with the requirements of the originator. These requirements are represented as a set of policy rules that explicitly instruct recipients how the information contained in messages can be disseminated to other nodes, in order to avoid unintended disclosure. The thesis describes the data dissemination policy language used in this work and further describes the policy rules, so as to provide a suitable and understandable language for the framework that ensures the confidentiality requirements of the originator. The thesis also contributes a policy conflict resolution mechanism that allows the originator to be asked for up-to-date policies and preferences. The framework was evaluated using the Network Simulator (NS-2) to check whether the privacy and confidentiality requirements of the originators' messages were met. A policy-based agent protocol and a new packet structure were implemented in this work to manage and enforce the policies attached to packets at every node in the VANET. Several case studies are presented to show how data dissemination can be controlled based on the policy of the originator; their results show the feasibility of this research for controlling data dissemination between nodes in VANETs. NS-2 is also used to test the performance of the proposed policy-based agent protocol and to demonstrate its effectiveness using various network performance metrics (average delay and overhead).
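The idea of originator-defined policies travelling with each message can be sketched as a rule check that every relaying node runs before disclosing a message further. The rule vocabulary (allowed_roles, max_hops) is invented for illustration and is far simpler than the thesis's policy language:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    payload: str
    origin: str
    policy: dict = field(default_factory=dict)  # originator-defined rules

def may_forward(msg, receiver_role, hops_so_far):
    """Enforce the attached policy at a relaying node before disclosure."""
    rules = msg.policy
    if receiver_role not in rules.get("allowed_roles", [receiver_role]):
        return False                     # receiver's role is not authorised
    if hops_so_far >= rules.get("max_hops", float("inf")):
        return False                     # message has travelled far enough
    return True

msg = Message("convoy route update", origin="unit-7",
              policy={"allowed_roles": ["military"], "max_hops": 3})
print(may_forward(msg, "military", hops_so_far=1))  # True
print(may_forward(msg, "civilian", hops_so_far=1))  # False
```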
15

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Abstract:
Supply Chain Event Management (SCEM) denotes a subdiscipline of supply chain management and offers companies a starting point for optimising logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. In this light, after covering the essential foundations, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting previous SCEM architecture concepts, it sets out design options for a system architecture based on the design principles of service orientation; in this context, SCEM-relevant business services are identified, among other things. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems: after presenting approaches suitable for determining this benefit, the benefit is demonstrated by means of a practical example and, together with the results of a literature review, consolidated into a set of SCEM benefit effects. The dissertation also examines which additional advantages a service-oriented architecture design offers companies. The concluding section summarises the key findings and provides an outlook both on the relevance of the results for meeting future challenges and on the starting points they offer for subsequent research.
16

Mendoza, Silva Germán Martín. "Agent-based parking occupancy simulation." Master's thesis, 2015. http://hdl.handle.net/10362/14569.

Abstract:
Existing parking simulations, like most simulations, are intended to provide insight into a system or to make predictions. The knowledge they provide has built up over the years, and several research works have devised detailed parking system models. This thesis describes the use of an agent-based parking simulation in the context of a larger parking system development. It focuses more on flexibility than on fidelity, showing a case where it is relevant for a parking simulation to consume dynamically changing GIS data from external, online sources, and how to address this case. The simulation generates the parking occupancy information that sensing technologies should eventually produce and supplies it to the larger parking system. It is built as a Java application based on the MASON toolkit and consumes GIS data from an ArcGIS Server. The application context of the implemented parking simulation is a university campus with free, on-street parking places.
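To give a flavour of the occupancy information such a simulation emits (the thesis itself uses Java, the MASON toolkit and ArcGIS Server), here is a deliberately small stand-in in Python: agents arrive as a random stream, take a free on-street place, and leave after an exponentially distributed stay. The arrival rate, stay length, and capacity are invented:

```python
import random

random.seed(5)

def simulate_parking(n_places=50, steps=600, p_arrival=0.3, mean_stay=90):
    """Minute-by-minute occupancy of a set of on-street parking places."""
    leave_at = {}                       # place index -> departure minute
    occupancy = []
    for t in range(steps):
        for p, t_leave in list(leave_at.items()):
            if t >= t_leave:
                del leave_at[p]         # place freed
        if random.random() < p_arrival and len(leave_at) < n_places:
            free = next(p for p in range(n_places) if p not in leave_at)
            leave_at[free] = t + max(1, int(random.expovariate(1 / mean_stay)))
        occupancy.append(len(leave_at))
    return occupancy

occ = simulate_parking()
print(max(occ), occ[-1])   # peak occupancy and occupancy at the end
```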
17

Wang, Minghao. "Data Assimilation for Agent-Based Simulation of Smart Environment." 2014. http://scholarworks.gsu.edu/cs_diss/91.

Abstract:
Agent-based simulation of smart environments finds application in studying people's movement to help the design of a variety of applications such as energy utilization, HVAC control, and egress strategies in emergency situations. Traditionally, agent-based simulations are not dynamically data-driven: they run offline and do not assimilate real sensor data about the environment. As more and more buildings are equipped with various sensors, it becomes possible to use real-time sensor data to inform the simulation. To incorporate real sensor data into the simulation, we introduce the method of data assimilation, whose goal is to provide inference about the system state, based on incomplete, ambiguous, and uncertain sensor data, using a computer model. A typical data assimilation framework consists of a computer model, a series of sensors, and a melding scheme. The purpose of this dissertation is to develop a data assimilation framework for agent-based simulation of smart environments. With the developed framework, we demonstrate a building occupancy estimation application that focuses on position estimation. We build an agent-based model to simulate the occupants' movements in the building and use this model in the data assimilation framework. The melding scheme we use to incorporate sensor data into the model is the particle filter algorithm, a set of statistical methods that aims to compute the posterior distribution of the underlying system using a set of samples. It has the benefit of making no assumption about the target distribution and not requiring the target system to be written in analytic form. To overcome the high-dimensional state-space problem that arises as the number of agents increases, we develop a new resampling method, named component set resampling, and evaluate its effectiveness in data assimilation. We also developed a graph-based model for simulating building occupancy. The developed model will be used to carry out building occupancy estimation with extremely large numbers of agents in the future.
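A minimal bootstrap particle filter conveys the melding scheme described above: predict each particle with the agent model, weight by sensor likelihood, and resample. The sketch uses plain multinomial resampling on a 1-D corridor; the dissertation's component set resampling and building model are not reproduced here:

```python
import math
import random

random.seed(2)

def particle_filter(particles, move, sense, observation, n=None):
    """One bootstrap-filter step: predict, weight by likelihood, resample."""
    n = n or len(particles)
    predicted = [move(p) for p in particles]                 # predict
    weights = [sense(p, observation) for p in predicted]     # weight
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    return random.choices(predicted, weights=weights, k=n)   # resample

# toy 1-D corridor: position drifts right, the sensor reads position with noise
move = lambda x: x + random.gauss(1.0, 0.3)
sense = lambda x, z: math.exp(-((x - z) ** 2) / 2.0)         # Gaussian likelihood
particles = [random.uniform(0, 10) for _ in range(500)]
for z in [3.0, 4.1, 4.9, 6.2]:                               # sensor readings
    particles = particle_filter(particles, move, sense, z)
print(sum(particles) / len(particles))                       # posterior mean
```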
18

Makinde, O., Daniel Neagu, and Marian Gheorghe. "Agent based micro-simulation of a passenger rail system using customer survey data and an activity based approach." 2018. http://hdl.handle.net/10454/16761.

Abstract:
Passenger rail overcrowding is fast becoming a problem in major cities worldwide. The problem therefore calls for efficient, cheap and prompt solutions and policies, which in turn require accurate modelling tools to forecast the impact of transit demand management policies effectively. To this end, we developed an agent-based model of a particular passenger rail system, using an activity-based simulation approach, to predict the impact of public transport demand management pricing strategies. Our agent population was created using a customer/passenger mobility survey dataset. We modelled the temporal flexibility of passengers based on patterns observed in the departure and arrival behaviour of real travellers. The model was validated using real-life passenger count data from the passenger rail transit company, after which we evaluated the use of peak demand management instruments, such as ticket fare strategies, to influence peak demand on a passenger rail transport system. Our results suggest that agent-based simulation is effective in predicting passenger behaviour in a transportation system and can be used to predict the impact of demand management policies.
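The kind of experiment described here, testing whether a peak fare surcharge shifts temporally flexible passengers off the busiest service, can be sketched with agents choosing a departure slot by trading fare against schedule deviation. The preferences, flexibility values, and cost function below are invented:

```python
import random

random.seed(4)

def choose_slot(preferred, flexibility, fares, slots=range(6, 11)):
    """Pick an hourly slot minimising fare plus schedule-deviation cost."""
    def cost(h):
        return fares.get(h, 1.0) + abs(h - preferred) / max(flexibility, 0.1)
    return min(slots, key=cost)

def peak_load(passengers, fares):
    """Count how many agents choose each hourly slot under a fare schedule."""
    counts = {}
    for preferred, flex in passengers:
        h = choose_slot(preferred, flex, fares)
        counts[h] = counts.get(h, 0) + 1
    return counts

passengers = [(8, random.uniform(0.1, 3.0)) for _ in range(1000)]  # 8 am preference
flat = peak_load(passengers, fares={})                  # uniform fare
surcharged = peak_load(passengers, fares={8: 2.0})      # peak-hour surcharge
print(flat.get(8, 0), surcharged.get(8, 0))             # peak count drops
```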