Dissertations / Theses on the topic 'Sales management Data processing'

To see the other types of publications on this topic, follow the link: Sales management Data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Sales management Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

李安敏 and On-man Andrew Lee. "Some thoughts on the applications of management science in sales and marketing activities on the professional products." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31267397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ponelis, S. R. (Shana Rachel). "Data marts as management information delivery mechanisms: utilisation in manufacturing organisations with third party distribution." Thesis, University of Pretoria, 2002. http://hdl.handle.net/2263/27061.

Full text
Abstract:
Customer knowledge plays a vital part in organisations today, particularly in sales and marketing processes, where customers can either be channel partners or final consumers. Managing customer data and/or information across business units, departments, and functions is essential. Frequently, channel partners gather and capture data about downstream customers and consumers that organisations further upstream in the channel require to be incorporated into their information systems in order to allow for management information delivery to their users. In this study, the focus is placed on manufacturing organisations using third party distribution, since the flow of information between channel partner organisations in a supply chain (in contrast to the flow of products) provides an important link between organisations and increasingly represents a source of competitive advantage in the marketplace. The purpose of this study is to determine whether there is a significant difference in the use of sales and marketing data marts as management information delivery mechanisms in manufacturing organisations in different industries, particularly the pharmaceutical and branded consumer products industries. The case studies presented in this dissertation indicate that there are significant differences between the use of sales and marketing data marts in different manufacturing industries, which can be ascribed to the industry, both directly and indirectly.
Thesis (MIS(Information Science))--University of Pretoria, 2002.
Information Science
MIS
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
APA, Harvard, Vancouver, ISO, and other styles
5

Aronsson, Henrik. "Modeling strategies using predictive analytics : Forecasting future sales and churn management." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167130.

Full text
Abstract:
This project was carried out for a company named Attollo, a consulting firm specialized in Business Intelligence and Corporate Performance Management. The project aims to explore a new area for Attollo, predictive analytics, which is then applied to Klarna, a client of Attollo. Attollo has a partnership with IBM, which sells services for predictive analytics. The tool with which this project was carried out is a software product from IBM: SPSS Modeler. Five examples are given of what the predictive work carried out at Klarna consisted of and how it was done. From these examples, the functionality of the different predictive models is described. The result of this project demonstrates how predictive models can be created using predictive analytics. The conclusion is that predictive analytics enables companies to understand their customers better and hence make better decisions.
APA, Harvard, Vancouver, ISO, and other styles
6

Griffin, Alan R., and R. Stephen Wooten. "AUTOMATED DATA MANAGEMENT IN A HIGH-VOLUME TELEMETRY DATA PROCESSING ENVIRONMENT." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608908.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
The vast amount of data telemetered from space probe experiments requires careful management and tracking from initial receipt through acquisition, archiving, and distribution. This paper presents the automated system used at the Phillips Laboratory, Geophysics Directorate, for tracking telemetry data from its receipt at the facility to its distribution on various media to the research community. Features of the system include computerized databases, automated generation of media labels, automated generation of reports, and automated archiving.
APA, Harvard, Vancouver, ISO, and other styles
7

Karlsson, Mathias. "Sales Information Provider." Thesis, Linköping University, Department of Science and Technology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7174.

Full text
Abstract:

This report investigates the possibility of loading large amounts of data into a database and performing aggregations on it, in order to then deliver a set of data in a convenient way to a client that will process it. The work extends from the database to an API that can be implemented in any application wishing to retrieve the information, and amounts to intelligent retrieval of data for visualisation. It is one of two degree projects underlying a visualisation of sales data for the sports retail chain Stadium AB. Stadium AB today has about 80 stores, which means a large volume of sales per week. The idea is that this project, together with the parallel degree project, will help Stadium AB when purchasing products for coming seasons. The parallel project visualises the quantity of products sold at a given point in time, which lets Stadium see at which times they have too few products in the store and when they have too many. This project supplies the visualisation application with the information it requires. Sales Data Provider, as the application is called, is built on a data warehouse solution. It contains precomputed sales data at different levels, making it easy to drill down through the hierarchy and see how different products are selling. As the transport from data warehouse to client it uses Web Services with XML as the medium, allowing the data warehouse and the client to be separated. It also contains a logical client that handles all calls to the Web Service and exposes an API that the visualisation application can use. The client contains logic both for retrieving data from the Web Service and for exposing the data through an object model.

APA, Harvard, Vancouver, ISO, and other styles
8

容勁 and King Stanley Yung. "Application of multi-agent technology to supply chain management." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bashir, Omar. "Management and processing of network performance information." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/10361.

Full text
Abstract:
Intrusive monitoring systems monitor the performance of data communication networks by transmitting and receiving test packets on the network being monitored. Even relatively small periods of monitoring can generate significantly large amounts of data. Primitive network performance data are details of test packets that are transmitted and received over the network under test. Network performance information is then derived by significantly processing the primitive performance data. This information may need to be correlated with information regarding the configuration and status of various network elements and the test stations. This thesis suggests that efficient processing of the collected data may be achieved by reusing and recycling the derived information in the data warehouses and information systems. This can be accomplished by pre-processing the primitive performance data to generate Intermediate Information. In addition to being able to efficiently fulfil multiple information requirements, different Intermediate Information elements at finer levels of granularity may be recycled to generate Intermediate Information elements at coarser levels of granularity. The application of these concepts in processing packet delay information from the primitive performance data has been studied. Different Intermediate Information structures possess different characteristics. Information systems can exploit these characteristics to efficiently re-cycle elements of these structures to derive the required information elements. Information systems can also dynamically select appropriate Intermediate Information structures on the basis of queries posted to the information system as well as the number of suitable Intermediate Information elements available to efficiently answer these queries. Packet loss and duplication summaries derived for different analysis windows also provide information regarding the network performance characteristics. Due to their additive nature, suitable finer granularity packet loss and duplication summaries can be added to provide coarser granularity packet loss and duplication summaries.
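The additive property the abstract relies on is easy to make concrete. Below is a minimal sketch (not from the thesis; the field names are illustrative) of recycling finer-granularity packet loss and duplication summaries into a coarser-granularity one:

```python
from dataclasses import dataclass

@dataclass
class WindowSummary:
    """Packet loss/duplication counts for one analysis window (illustrative)."""
    sent: int
    lost: int
    duplicated: int

def coarsen(fine_summaries):
    """Add finer-granularity window summaries into one coarser summary.

    This works because the counts are additive, as the abstract notes:
    a one-hour summary is the sum of its six ten-minute summaries.
    """
    return WindowSummary(
        sent=sum(w.sent for w in fine_summaries),
        lost=sum(w.lost for w in fine_summaries),
        duplicated=sum(w.duplicated for w in fine_summaries),
    )

# Six 10-minute windows recycled into one 1-hour window.
ten_min = [WindowSummary(1000, 12, 3) for _ in range(6)]
hour = coarsen(ten_min)
print(hour, f"loss rate = {hour.lost / hour.sent:.2%}")
```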
APA, Harvard, Vancouver, ISO, and other styles
10

Agarwalla, Bikash Kumar. "Resource management for data streaming applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34836.

Full text
Abstract:
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
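The abstract describes DFuse only at the level of its programming abstraction. As a loose illustration of that shape, and not of the actual DFuse API, a fusion channel can be thought of as a function over input streams, with the domain expert supplying only the inputs and the fusion function:

```python
# Hypothetical shape of a DFuse-style fusion channel: placement and
# resource management are left to the runtime (not modeled here).
def fusion_channel(inputs, fuse):
    """Combine one item from each input stream into a higher-level item."""
    for items in zip(*inputs):
        yield fuse(items)

temps = iter([20.1, 20.3, 20.7])
humid = iter([0.50, 0.52, 0.55])
for reading in fusion_channel([temps, humid],
                              fuse=lambda xs: {"t": xs[0], "h": xs[1]}):
    print(reading)
```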
APA, Harvard, Vancouver, ISO, and other styles
11

Mousavi, Bamdad. "Scalable Stream Processing and Management for Time Series Data." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42295.

Full text
Abstract:
There has been an enormous growth in the generation of time series data in the past decade. This trend is caused by the widespread adoption of IoT technologies, the data generated by monitoring of cloud computing resources, and cyber-physical systems. Although time series data have been a topic of discussion in the domain of data management for several decades, this recent growth has brought the topic to the forefront. Many of the time series management systems available today lack the necessary features to successfully manage and process the sheer amount of time series being generated today. In this thesis we strive to examine the field and study the prior work in time series management. We then propose a large system capable of handling time series management end to end, from generation to consumption by the end user. Our system is composed of open-source data processing frameworks. It has the capability to collect time series data, perform stream processing over it, store it for immediate and future processing, and create the necessary visualizations. We present the implementation of the system and perform experiments to show its scalability in handling growing pipelines of incoming data from various sources.
APA, Harvard, Vancouver, ISO, and other styles
12

Stein, Oliver. "Intelligent Resource Management for Large-scale Data Stream Processing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391927.

Full text
Abstract:
With the increasing trend of using cloud computing resources, the efficient utilization of these resources becomes more and more important. Working with data stream processing is a paradigm gaining in popularity, with tools such as Apache Spark Streaming or Kafka widely available, and companies are shifting towards real-time monitoring of data such as sensor networks, financial data or anomaly detection. However, it is difficult for users to efficiently make use of cloud computing resources and studies show that a lot of energy and compute hardware is wasted. We propose an approach to optimizing resource usage in cloud computing environments designed for data stream processing frameworks, based on bin packing algorithms. Test results show that the resource usage is substantially improved as a result, with future improvements suggested to further increase this. The solution was implemented as an extension of the HarmonicIO data stream processing framework and evaluated through simulated workloads.
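The thesis names bin packing as the basis of its optimizer. Below is a minimal first-fit-decreasing sketch of the general idea; the actual HarmonicIO extension may use a different variant, and the task loads and host capacity here are assumed inputs:

```python
def first_fit_decreasing(task_loads, host_capacity):
    """Pack stream-processing tasks onto few equally-sized hosts.

    Classic first-fit-decreasing heuristic: sort tasks by load, place each
    on the first host with enough spare capacity, open a new host otherwise.
    """
    hosts = []  # each host is [remaining_capacity, [assigned loads]]
    for load in sorted(task_loads, reverse=True):
        for host in hosts:
            if host[0] >= load:
                host[0] -= load
                host[1].append(load)
                break
        else:
            hosts.append([host_capacity - load, [load]])
    return [assigned for _, assigned in hosts]

# Ten tasks with CPU-share loads packed onto hosts of capacity 1.0.
print(first_fit_decreasing([0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.2, 0.1, 0.1], 1.0))
```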
APA, Harvard, Vancouver, ISO, and other styles
13

Reinhard, Erik. "Scheduling and data management for parallel ray tracing." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wilke, Achim. "Data-processing development in German design offices." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Tucker, Peter A. "Punctuated data streams /." Full text open access at:, 2005. http://content.ohsu.edu/u?/etd,255.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Full text
Abstract:
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications are increasingly running on distributed infrastructures (multiple clusters). The two main reasons for such a trend are that 1) moving computation to data sources can eliminate the latency of data transmission, and 2) storing data on one site may not be feasible given the continuous increase of data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. Hence, it is important to enable fast provisioning of virtualization services to reduce the waiting time of new running services or applications. Different from previous work, during the first part of this thesis we worked on optimizing data retrieval and placement, considering challenging issues including the continuous increase of the number and size of VMIs and container images, and the limited bandwidth and heterogeneity of wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but it has become expensive and even infeasible with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replacing replication with the storage-efficient erasure coding (EC) technique.
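The storage argument against replication can be made concrete with a standard back-of-the-envelope comparison (not taken from the thesis): triple replication stores three raw bytes per user byte and survives the loss of any two copies, while a Reed-Solomon style code with six data and three parity blocks stores only 1.5 and still survives the loss of any three blocks:

```python
def storage_overhead(replicas=None, data_blocks=None, parity_blocks=None):
    """Raw bytes stored per byte of user data, for replication or RS-style EC."""
    if replicas is not None:
        return replicas
    return (data_blocks + parity_blocks) / data_blocks

print(storage_overhead(replicas=3))                      # 3.0x for 3-way replication
print(storage_overhead(data_blocks=6, parity_blocks=3))  # 1.5x for RS(6,3)
```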
APA, Harvard, Vancouver, ISO, and other styles
17

Monk, Kitty A. "Data management in MARRS." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Cyvoct, Alexandra, and Shirin Fathi. "Artificial Intelligence in Business-to-Business Sales Processes : The impact on the sales representatives and management implications." Thesis, Linköpings universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157988.

Full text
Abstract:
Background: The sales representatives in B2B companies are experiencing several changes in their environment, which have already altered the activities they perform. In order to meet new customer needs, Artificial Intelligence (AI) enables effective usage of the large amount of complex data that is available, known as Big Data. AI is developing human-like intelligence and is expected to impact occupational roles while threatening to automate tasks typically performed by humans. Previous technologies have already impacted sales representatives in the performance of their sales activities; however, it is still uncertain how AI will impact and benefit them. Previous empirical findings and the lack of studies centered on the individual impact of AI confirm the need for more academic reports. Purpose: The aim of this research is to explore how the implementation of Artificial Intelligence and the usage of Big Data in Business-to-Business selling processes are impacting sales representatives, in terms of performed activities. Further, the aim is also to explore the management of individuals during the implementation of AI. Methodology: This qualitative study is based on a realistic perspective with an inductive research approach. The empirical data has been collected through semi-structured interviews with six AI providers and two consulting firms that have proven experience in working with AI and sales in B2B companies. Conclusion: AI is characterized by its adaptive capability as well as its ability to process and combine a large amount of real-time, online and historical data. As a result, the selling process is constantly provided with more accurate, faster and more original insights. Through the analytical capacity of AI, the sales representatives gain extensive knowledge about the customer and the external world. AI also simplifies the creation and maintenance of long-lasting customer relationships by providing specific and valuable content. Administrative tasks and non-sales activities can also be automated through the usage of AI, which enables sales representatives to focus on their core tasks, for instance relationship building and value-adding activities. The threat of automation and the elimination of jobs should be reframed as the possibility to augment human capabilities. By adopting this approach, the importance of human-machine collaboration is strongly emphasized. In order to increase the willingness to change working procedures at the individual level, communication during the process of change should be centered on creating a positive perception and understanding of AI. It is also important to create trust in AI and promote a data-driven culture in order to ensure systematic usage of the system.
APA, Harvard, Vancouver, ISO, and other styles
19

Xie, Tian, and 謝天. "Development of a XML-based distributed service architecture for product development in enterprise clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tidmus, Jonathan Paul. "Task and data management for parallel particle tracing." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Görlitz, Olaf [Verfasser]. "Distributed query processing for federated RDF data management / Olaf Görlitz." Koblenz : Universitätsbibliothek Koblenz, 2015. http://d-nb.info/1065246986/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Pitts, David Vernon. "A storage management system for a reliable distributed operating system." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/16895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tao, Yufei. "Indexing and query processing of spatio-temporal data /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20TAO.

Full text
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 208-215). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
24

Chitondo, Pepukayi David Junior. "Data policies for big health data and personal health data." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2479.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
Health information policies are constantly becoming a key feature in directing information usage in healthcare. After the passing of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 and the Affordable Care Act (ACA) in 2010 in the United States, there has been an increase in health systems innovations. Coupling this health systems hype is the current buzz concept in Information Technology, 'Big data'. The prospects of big data are full of potential, even more so in the healthcare field where the accuracy of data is life critical. How big health data can be used to achieve improved health is now the goal of the current health informatics practitioner. Even more exciting is the amount of health data being generated by patients via personal handheld devices and other forms of technology that exclude the healthcare practitioner. This patient-generated data is also known as Personal Health Records (PHR). To achieve meaningful use of PHRs and healthcare data in general through big data, a couple of hurdles have to be overcome. First and foremost is the issue of privacy and confidentiality of the patients whose data is concerned. Second is the perceived trustworthiness of PHRs by healthcare practitioners. Other issues to take into account are data rights and ownership, data suppression, IP protection, data anonymisation and re-identification, information flow and regulations, as well as consent biases. This study sought to understand the role of data policies in the process of data utilisation in the healthcare sector, with added interest in PHR utilisation as part of big health data.
APA, Harvard, Vancouver, ISO, and other styles
25

Laribi, Atika. "A protection model for distributed data base management systems." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53883.

Full text
Abstract:
Security is important for Centralized Data Base Management Systems (CDBMS) and becomes crucial for Distributed Data Base Management Systems (DDBMS) when different organizations share information. Secure cooperation can be achieved only if each participating organization is assured that the data it makes available will not be abused by other users. In this work differences between CDBMS and DDBMS that characterize the nature of the protection problem in DDBMS are identified. These differences are translated into basic protection requirements. Policies that a distributed data base management protection system should allow are described. The system proposed in this work is powerful enough to satisfy the stated requirements and allow for variations on the policies. This system is a hybrid one where both authorizations and constraints can be defined. The system is termed hybrid because it combines features of both open and closed protection systems. In addition the hybrid system, although designed to offer the flexibility of discretionary systems, incorporates the flow control of information between users, a feature found only in some nondiscretionary systems. Furthermore, the proposed system is said to be integrated because authorizations and constraints can be defined on any of the data bases supported by the system including the data bases containing the authorizations, and the constraints themselves. The hybrid system is incorporated in a general model of DDBMS protection. A modular approach is taken for the design of the model. This approach allows us to represent the different options for the model depending on the set of policy choices taken. Three levels of abstraction describing different aspects of DDBMS protection problems are defined. The conceptual level describes the protection control of the DDBMS transactions and information flows. The logical level is concerned with the interaction between the different organizations participating in the DDBMS. The physical level is involved with the architectural implementation of the logical level.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhao, Jianbin, and 趙建賓. "A portalet-based DIY approach to collaborative product commerce." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B27769793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Tacic, Ivan. "Efficient Synchronized Data Distribution Management in Distributed Simulations." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6822.

Full text
Abstract:
Data distribution management (DDM) is a mechanism to interconnect data producers and data consumers in a distributed application. Data producers provide useful data to consumers in the form of messages. For each message produced, DDM determines the set of data consumers interested in receiving the message and delivers it to those consumers. We are particularly interested in DDM techniques for parallel and distributed discrete event simulations. Thus far, researchers have treated synchronization of events (i.e., time management) and DDM independently of each other. This research focuses on how to realize time-managed DDM mechanisms. The main reason for time-managed DDM is to ensure that changes in the routing of messages from producers to consumers occur in a correct sequence. Time-managed DDM also avoids non-determinism in the federation execution, which may result in non-repeatable executions. An optimistic approach to time-managed DDM is proposed where one allows DDM events to be processed out of time stamp order, but a detection and recovery procedure is used to recover from such errors. These mechanisms are tailored to the semantics of the DDM operations to ensure an efficient realization. A correctness proof is presented to verify that the algorithm correctly synchronizes DDM events. We have developed a fully distributed implementation of the algorithm within the framework of the Georgia Tech Federated Simulation Development Kit (FDK) software. A performance evaluation of the synchronized DDM mechanism has been completed in a loosely coupled distributed system consisting of a network of workstations connected over a local area network (LAN). We compare time-managed versus unsynchronized DDM for two applications that exercise different mobility patterns: one based on a military simulation and a second utilizing a synthetic workload. The experiments and analysis illustrate that synchronized DDM performance depends on several factors: the simulation model (e.g., lookahead), the application's mobility patterns, and the network hardware (e.g., the size of network buffers). Under certain mobility patterns, time-managed DDM is as efficient as unsynchronized DDM. There are also mobility patterns where time-managed DDM overheads become significant, and we show how they can be reduced.
APA, Harvard, Vancouver, ISO, and other styles
28

Oelofse, Andries Johannes. "Development of a MAIME-compliant microarray data management system for functional genomics data integration." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08222007-135249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Persson, Mathias. "Simultaneous Data Management in Sensor-Based Systems using Disaggregation and Processing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188856.

Full text
Abstract:
To enable high-performance data management for sensor-based systems, the system components in an architecture have to be tailored to the situation at hand. Therefore, each component has to handle a massive amount of data independently, and at the same time cooperate with the other components within the system. To facilitate rapid data processing between components, a model detailing the flow of information and specifying internal component structures will assist in faster and more reliable system designs. This thesis presents a model for a scalable, safe, reliable and high-performing system for managing sensor-based data. Based on the model, a prototype is developed that can be used to handle a large number of messages from various distributed sensors. The different components within the prototype are evaluated and their advantages and disadvantages are presented. The results support the architecture of the prototype and validate the initial requirements of how it should operate to achieve high performance. By combining components with individual advantages, a system can be designed that allows a large amount of simultaneous data to be disaggregated into its respective categories, processed to make the information usable, and stored in a database for easy access by interested parties.
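As a loose sketch of the disaggregation step the abstract describes (not the thesis's actual components; the message layout is assumed), incoming sensor messages can be routed to per-category queues so that each category is processed and stored independently:

```python
import queue
from collections import defaultdict

# Per-category queues: each category can then be processed and stored
# by its own component, which is what lets the components scale separately.
category_queues = defaultdict(queue.Queue)

def disaggregate(message):
    """Route a raw sensor message to the queue for its category."""
    category_queues[message["type"]].put(message)

for msg in [
    {"sensor_id": "s17", "type": "temperature", "value": 21.4},
    {"sensor_id": "s02", "type": "humidity", "value": 0.55},
    {"sensor_id": "s17", "type": "temperature", "value": 21.6},
]:
    disaggregate(msg)

print({k: q.qsize() for k, q in category_queues.items()})
# {'temperature': 2, 'humidity': 1}
```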
APA, Harvard, Vancouver, ISO, and other styles
30

Slingsby, T. P. "An investigation into the development of a facilities management system for the University of Cape Town." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/5585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Fernández, Moctezuma Rafael J. "A Data-Descriptive Feedback Framework for Data Stream Management Systems." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/116.

Full text
Abstract:
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams provide processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to the substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular, state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. The research also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
APA, Harvard, Vancouver, ISO, and other styles
32

Mohamad, Baraa. "Medical Data Management on the cloud." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22582.

Full text
Abstract:
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format where one file may contain regular data, multimedia data and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and have large data sizes. The characteristics of DICOM files, added to the requirements of medical data management in general in terms of availability and accessibility, have led us to construct our research question as follows: Is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities and physicians' practices), (3) enables the storage of extremely huge and ever-increasing data, (4) provides expressive access, and (5) is cost-effective? In order to answer this question we have built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts, in a way that provides the best of the row-oriented and column-oriented storage models in one system, all while exploiting the features of the cloud that enable us to ensure the availability and portability of medical data. Storing data on such a hybrid data layout opens the door to a second research question: how to process queries efficiently over this hybrid data storage while enabling new and more efficient query plans. The originality of our proposal comes from the fact that there is currently no system that stores data in such a hybrid layout (i.e., an attribute resides either in the row-oriented database or in the column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and research questions.
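The abstract does not give the dispersion rule itself; the following is a purely hypothetical sketch of the kind of characteristics-based placement it describes, where an attribute's presence rate and query frequency decide which layout it lands in:

```python
def choose_layout(attribute_stats, presence_threshold=0.9, query_threshold=0.5):
    """Assign each DICOM attribute to the row or column store (illustrative rule:
    frequently present, frequently queried attributes go to the row store;
    sparse or rarely queried ones go to the column store)."""
    placement = {}
    for attr, (presence, query_freq) in attribute_stats.items():
        if presence >= presence_threshold and query_freq >= query_threshold:
            placement[attr] = "row"
        else:
            placement[attr] = "column"
    return placement

stats = {
    "PatientID":   (1.00, 0.95),  # always present, almost always queried
    "StudyDate":   (0.98, 0.70),
    "PrivateTag1": (0.05, 0.01),  # rare, vendor-specific
}
print(choose_layout(stats))
# {'PatientID': 'row', 'StudyDate': 'row', 'PrivateTag1': 'column'}
```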
APA, Harvard, Vancouver, ISO, and other styles
33

Paul, Daniel. "Decision models for on-line adaptive resource management." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hoffman, A. R. "Information technology decision making in South Africa : a framework for company-wide strategic IT management." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15854.

Full text
Abstract:
Includes bibliography.
The area of interest in which this Study is set is the linking of a company's business strategies with its strategic planning for IT (information technology). The objectives of the Study are: to investigate how the IT planning environment is changing for business enterprises in South Africa; to establish how successfully South African companies are managing IT strategically; to propose a new approach to strategic IT decision making that will help South African management deal with the major issues; to propose a way of implementing the approach. In Chapter 2, conclusions are drawn from an examination of the key strategic IT planning literature. It appears that fundamental changes are indeed taking place, and are producing significant shifts in the way researchers, consultants and managers think about IT. The survey of South African management opinion is described in Chapter 3. The opinions analyzed range over environmental trends, strategic decision making practices, and what an acceptable strategic IT decision making framework would look like. The need for a new, comprehensive approach to strategic IT decision making in South Africa is clearly established. In Chapter 4, a theoretical Framework is proposed as a new, comprehensive approach to strategic IT decision making. The Framework covers five strategic tasks: analysing the key environmental issues; determining the purposes and uses of IT in competitive strategy and organizational designs; developing the IT infrastructure, human systems, information systems, and human resources to achieve these purposes and uses; implementing the strategic IT decisions; and learning to make better strategic IT decisions. In Chapter 5, ways of implementing the Framework in practice are identified. A means of evaluating its acceptability in a specific company is also proposed. The general conclusions of the Study are presented in Chapter 6. The Framework developed in this Study is intended for use, not directly by the IT decision makers themselves, but by the persons responsible for designing the IT decision making processes of the company. It is not, however, offered as a theory or a methodology. The aim is simply to provide a conceptual "filing system", to help designers uncover and classify the IT strategy problems of their own company, to identify the tools their decision makers need, and to put appropriate problem solving processes in place.
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Deji. "Real-time data management in the distributed environment /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lynch, Kevin John. "Data manipulation in collaborative research systems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Full text
Abstract:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS, as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS, in light of the requirements of Chapter 3, and the range of data management solutions described in Chapter 4. Chapter 5 demonstrates the generic query processing approach as a viable one, for both the requirements of Chapter 3 and the DBMSs of Chapter 4. The significance of this research takes several forms. First, Chapters 1 and 3 provide detailed views of a current collaborative research system, and of a set of requirements for next-generation systems based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming language to database interface, ranging across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
APA, Harvard, Vancouver, ISO, and other styles
37

Benatar, Gil. "Thermal/structural integration through relational database management." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nehme, Rimma V. "Continuous query processing on spatio-temporal data streams." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082305-154035/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Roy, Amber Joyce. "Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2699/.

Full text
Abstract:
Distributed simulation is an enabling concept to support the networked interaction of models and real world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of the data exchanged during a distributed simulation, and reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and it significantly reduces both the number of multicast groups used, and the message overhead, when compared to previous grid-based allocation schemes using large-scale and real-world scenarios.
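A static grid makes the mechanism behind the abstract easy to see: each grid cell is backed by a multicast group, and a federate joins the groups of every cell its region overlaps, so messages are relayed only where producer and consumer regions share a cell. A minimal sketch follows (illustrative only; the thesis's dynamic scheme additionally adapts the cell-to-group allocation at run time):

```python
def cells_for_region(xmin, xmax, ymin, ymax, cell_size):
    """Map a rectangular subscription/update region to the grid cells it overlaps."""
    x0, x1 = int(xmin // cell_size), int(xmax // cell_size)
    y0, y1 = int(ymin // cell_size), int(ymax // cell_size)
    return {(cx, cy) for cx in range(x0, x1 + 1) for cy in range(y0, y1 + 1)}

# A sensor region and a publication region overlap in cell (2, 1),
# so the two federates end up in one shared multicast group.
sensor = cells_for_region(180, 260, 90, 140, cell_size=100)
target = cells_for_region(250, 300, 100, 120, cell_size=100)
print(sensor & target)  # {(2, 1)}
```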
APA, Harvard, Vancouver, ISO, and other styles
40

Paul, Debashis. "A methodology for assessing computer software applicability to inventory and facility management." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/43085.

Full text
Abstract:
Computer applications have become popular and widespread in architecture and other related fields. While the architect uses a computer for the design and construction of a building, the user takes advantage of the computer for the maintenance of the building. Inventory and facility management are two such fields where computer applications have become predominant. The project has investigated the use and application of different commercially available computer software in the above-mentioned fields. A set of user requirements for inventory and facility management was established for different organizations. Four different types of software were chosen to examine their capabilities for fulfilling the requirements. Software packages from different vendors were chosen to compare and study the feasibility of applying each. The process of evaluation has been developed as a methodology for assessing different computer software applications in inventory and facility management. Special software applications and hardware considerations for developing computer-aided inventory and facility management have also been discussed. The documentation and evaluation of software shall provide a person with the basic knowledge of computer applications in inventory and facility management. The study shall also help building managers and facility managers develop their own criteria for choosing computer software to fulfill their particular requirements.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
41

Chan, Sze-hang, and 陳思行. "Competitive online job scheduling algorithms under different energy management models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/206690.

Full text
Abstract:
Online flow-time scheduling is a fundamental problem in computer science and has been extensively studied for years. It is about how to design a scheduler to serve computer jobs with unpredictable arrival times and varying sizes and priorities so as to minimize the total flow time (better understood as response time) of jobs. It has many applications, most notably in the operating of server farms. As energy has become an important issue, the design of the scheduler also has to take power management into consideration, for example, how to scale the speed of the processors dynamically. The objectives are orthogonal, as one would prefer lower processor speed to save energy, yet a good quality of service must be retained. In this thesis, I study a few scheduling problems for energy and flow time in depth and give new algorithms to tackle them. The competitiveness of our algorithms is guaranteed with worst-case mathematical analysis against the best possible or hypothetical solutions. In the speed scaling model, the power of a processor increases with its speed according to a certain function (e.g., a cubic function of speed). Among all online scheduling problems with speed scaling, the nonclairvoyant setting (in which the size of a job is not known during its execution) with arbitrary priorities is perhaps the most challenging. This thesis gives the first competitive algorithm, called WLAPS, for this setting. In reality, it is not uncommon that during the peak-load period, some (low-priority) users have their jobs rejected by the servers. This triggered me to study more complicated scheduling algorithms that can strike a good balance among speed scaling, flow time and rejection penalty. Two new algorithms, UPUW and HDFAC, for different models of rejection penalty have been proposed and analyzed. Last, but perhaps most interesting, we study power management in a large server farm environment in which the primary energy saving mechanism is to put some processors to sleep. Two new algorithms, POOL and SATA, have been designed to tackle jobs that cannot and can migrate among the processors, respectively. They are integrated algorithms that consider speed scaling, job scheduling and processor sleep management together to optimize the energy usage and flow time simultaneously. These algorithms are again proven mathematically to be competitive even in the worst case.
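The speed scaling trade-off described above can be shown with a small worked example, assuming the commonly used power function P(s) = s^alpha with alpha = 3 (the cubic case the abstract mentions):

```python
def flow_time_and_energy(job_size, speed, power_exponent=3.0):
    """Running a job of size w at speed s finishes in w/s time while drawing
    power s**alpha, so the energy used is (w/s) * s**alpha = w * s**(alpha-1)."""
    t = job_size / speed
    energy = t * speed ** power_exponent
    return t, energy

# One job of size 10 at three speeds: for alpha = 3, halving the speed
# quarters the energy but doubles the completion (flow) time.
for s in (2.0, 1.0, 0.5):
    t, e = flow_time_and_energy(10.0, s)
    print(f"speed {s}: flow time {t:.1f}, energy {e:.1f}")
```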
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Junxian. "Online hotel booking system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3083.

Full text
Abstract:
The Online Hotel Booking System was developed to allow customers to use a web browser to book a hotel, change the booking details, cancel the booking, change the personal profile, view the booking history, or view the hotel information through a GUI (graphical user interface). The system is implemented in PHP (Hypertext Preprocessor) and HTML (Hyper Text Markup Language).
APA, Harvard, Vancouver, ISO, and other styles
43

Hatchell, Brian. "Data base design for integrated computer-aided engineering." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kebede, Selamawit. "Utilisation of MIS in manufacturing industries." Thesis, Cape Technikon, 2001. http://hdl.handle.net/20.500.11838/2294.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Technikon, 2001.
Management information systems can be defined as information systems using formalised procedures to provide managers at all levels, in all functions, with appropriate information from all relevant sources, to enable them to make timely and effective decisions for which they are responsible. There is, and continues to be, an awareness in society that accurate and timely information is a vital resource of any organisation, and that an effective management information system is a means of providing the needed information. Many top management people are finding that information is a source of competitive power. It gives them the ability to out-manoeuvre their rivals at critical times, especially when introducing new products. Effective management information systems allow the decision-maker (i.e., the manager) to combine his or her subjective experience with computerised objective output to produce meaningful information for decision making (Thierauf, 1984:22). Managers must also learn how to state their wishes with precision. Management information systems (MIS) produce only what is asked, which may not be at all what is required. For effective use of information technology, managers must be able to define their information requirements as well as understand computer capabilities and limitations (Hussain and Hussain, 1995:8). The primary objective of this research was to establish the impact of utilising management information systems (MIS) and applying information technology on the success of manufacturing industries. The other aim of the study was to investigate the extent of utilising management information systems and applying information technology in these industries. The study focused on medium- and large-scale chemical manufacturing companies in the Cape Metropole area that have operated for at least the past five years.
APA, Harvard, Vancouver, ISO, and other styles
45

Vossough, Ehsan. "Processing of continuous queries over infinite data streams." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050112.154300/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zuriekat, Faris Nabeeh. "Parallel remote interactive management model." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3222.

Full text
Abstract:
This thesis discusses PRIMM, which stands for Parallel Remote Interactive Management Model. PRIMM is a framework for object-oriented applications that relies on grid computing. It works as an interface between the remote applications and the parallel computing system. The thesis shows the capabilities that can be achieved with the PRIMM architecture.
APA, Harvard, Vancouver, ISO, and other styles
47

Vykunta, Venkateswara Rao. "Class management in a distributed actor system." Master's thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-02022010-020159/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bew, M. D. "Engineering better social outcomes through requirements management & integrated asset data processing." Thesis, University of Salford, 2017. http://usir.salford.ac.uk/42341/.

Full text
Abstract:
The needs of society are no longer serviceable using the traditional methods of infrastructure providers and operators. Urbanisation, pressure on global resources, population growth and migration across borders are placing new demands which traditional methods can no longer adequately serve. The emergence of data and digital technology has enabled new skills to emerge, offered new possibilities, and set much higher expectations for the younger generation, who have only known lives in the digital age. The data describing the physical properties of built assets are well understood, and digital methods such as Building Information Modelling are providing levels of access and quality historically unknown. The concepts of human perception are not so well understood, with research only being documented over the last forty years or so, but the understanding of human needs and the impact of poor infrastructure and services has now been linked to poor perception and social outcomes. This research has developed and instantiated a methodology which uses data from the delivery and operational phases of a built asset and, with the aid of an understanding of the user community's perceptions, creates intelligence that can optimise the asset's performance for the benefit of its users. The instantiation was accomplished by experiment in an educational environment, using the "Test Bench" to gather physical asset data and social perception data, and using analytics to implement comparative measurements and double-loop feedback to identify actionable interventions. The scientific contributions of this research are the identification of methods which provide valuable and effective relationships between physical and social data to produce "actionable" interventions for performance improvement, and the instantiation of this discovery through the development and application of the "Test Bench". The major implication has been to develop a testable relationship between social outcomes and physical assets, which with further development could provide a valid challenge to the least-cost build option taken by the vast number of asset owners, by better understanding the full implications for people's perceptions and social outcomes. The cost of operational staff and resources rapidly outweighs the cost of the assets themselves, and the right environment can improve, while the wrong one can inhibit, staff motivation, productivity and social outcomes.
APA, Harvard, Vancouver, ISO, and other styles
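The thesis publishes no data schema, so the following is only a hedged illustration of the comparative-measurement idea described in the abstract above: hypothetical physical readings (temp_c) are joined with hypothetical perception scores (comfort), and an intervention is flagged only where both data sources corroborate a problem. Column names and thresholds are invented for the example.

```python
# Illustrative only: join physical asset data with social perception data
# and flag candidate interventions where the two agree.
import pandas as pd

# Hypothetical daily traces from a "Test Bench"-style setup: room
# temperature (deg C) and a 1-5 user comfort rating gathered by survey.
physical = pd.DataFrame({
    "day": ["Mon", "Tue", "Wed"],
    "temp_c": [21.0, 27.5, 19.5],
})
perception = pd.DataFrame({
    "day": ["Mon", "Tue", "Wed"],
    "comfort": [4.2, 2.1, 3.9],
})

merged = physical.merge(perception, on="day")

# Double-loop feedback, crudely: propose an intervention only on days
# where the physical reading leaves a comfort band AND perception drops,
# i.e. the physical and social data corroborate each other.
out_of_band = (merged["temp_c"] < 20.0) | (merged["temp_c"] > 25.0)
poor_perception = merged["comfort"] < 3.0
merged["intervene"] = out_of_band & poor_perception

print(merged)
```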
49

Lampi, J. (Jaakko). "Large-scale distributed data management and processing using R, Hadoop and MapReduce." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201406191771.

Full text
Abstract:
The exponential growth of raw, i.e. unstructured, data collected by various methods has forced companies to change their business strategies and operational approaches. The revenue strategies of a growing number of companies are based solely on the information gained from data and its utilization. Managing and processing large-scale data sets, also known as Big Data, requires new methods and techniques, while storing and transporting the ever-growing amount of data also creates new technological challenges. Wireless sensor networks monitor their clients and track their behavior; a client on a wireless sensor network can be anything from an inanimate object to a living being, and the Internet of Things binds these clients together into a single, massive network. Data is increasingly produced and collected by, for example, research projects, commercial products, and governments for different purposes. This thesis presents the theory of managing large-scale data sets, introduces existing techniques and technologies, and analyzes the situation vis-a-vis the growing amount of data. As an implementation, a Hadoop cluster running R and Matlab is built, and sample data sets collected from different sources are stored and analyzed using the cluster. The datasets comprise the cellular band of the long-term spectral occupancy measurements from the IIT (Illinois Institute of Technology) observatory and open weather data from weatherunderground.com. An R software environment running on the master node is used as the main tool for calculations and for controlling the data flow between the different software components: Hadoop's HDFS and MapReduce for storage and analysis, and a Matlab server for processing sample data and pipelining it to R. The hypothesis is that the cold weather front and snowfall in the Chicago (IL, US) area should be visible in the cellular band occupancy. As a result of the implementation, thorough step-by-step guides for setting up and managing a Hadoop cluster and using it via an R environment are produced, along with worked examples and calculations. An analysis of the datasets and a comparison of performance between R and MapReduce are presented and discussed. The analysis results correlate somewhat with the weather, but the dataset used for the performance comparison would clearly have needed to be larger to produce meaningful results through distributed computing.
APA, Harvard, Vancouver, ISO, and other styles
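The thesis drives Hadoop from R; as a language-neutral illustration of the same kind of MapReduce step, here is a minimal Python Hadoop Streaming job that averages a per-day occupancy value. The input format (date, tab, occupancy) and the script name are assumptions for the sketch, not the thesis setup.

```python
# occupancy.py - hypothetical Hadoop Streaming job averaging occupancy per day.
import sys


def mapper() -> None:
    # Parse and re-emit (date, occupancy); Hadoop's shuffle then sorts
    # and groups the pairs by date before the reducer runs.
    for line in sys.stdin:
        date, occupancy = line.rstrip("\n").split("\t")
        print(f"{date}\t{occupancy}")


def reducer() -> None:
    # Input arrives sorted by key, so equal dates are consecutive.
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        date, value = line.rstrip("\n").split("\t")
        if date != current:
            if current is not None:
                print(f"{current}\t{total / count:.4f}")
            current, total, count = date, 0.0, 0
        total += float(value)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.4f}")


if __name__ == "__main__":
    # Run as, e.g.:
    #   hadoop jar hadoop-streaming.jar -file occupancy.py \
    #     -mapper "python occupancy.py map" \
    #     -reducer "python occupancy.py reduce" \
    #     -input <hdfs input> -output <hdfs output>
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        reducer()
    else:
        mapper()
```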
50

Obeso, Duque Aleksandra. "Performance Prediction for Enabling Intelligent Resource Management on Big Data Processing Workflows." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-372178.

Full text
Abstract:
Mobile cloud computing offers an augmented infrastructure that allows resource-constrained devices to use remote computational resources as an enabler for highly intensive computation, thus improving end users' experience. Being able to efficiently manage cloud elasticity represents a big challenge for dynamic resource scaling on demand. In this sense, the development of intelligent tools that ease the understanding of the behavior of a highly dynamic system and detect resource bottlenecks under given service-level constraints represents an interesting case study. In this project, a comparative study has been carried out for different distributed services, taking into account the tools that are available for load generation, benchmarking, and sensing of key performance indicators. Based on that, the big data processing framework Hadoop MapReduce has been deployed as a virtualized service on top of a distributed environment. Experiments for different cluster setups using different benchmarks have been conducted on this testbed in order to collect traces for both resource usage statistics at the infrastructure level and performance metrics at the platform level. Different machine learning approaches have been applied to the collected traces, generating prediction and classification models whose performance is then evaluated and compared. The highly accurate results, namely a Normalized Mean Absolute Error below 10.3% for the regressor and an accuracy score above 99.9% for the classifier, show the feasibility of the generated models for service performance prediction and resource bottleneck detection, which could further be used to trigger auto-scaling processes in cloud environments under dynamic loads in order to fulfill service-level requirements.
APA, Harvard, Vancouver, ISO, and other styles
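As a sketch of the evaluation metric reported in the abstract above, the snippet below trains a regressor on synthetic resource-usage features and computes a Normalized Mean Absolute Error (MAE divided by the mean of the true values). The data, feature names, and model choice are assumptions for illustration; the thesis does not specify them here.

```python
# Hedged sketch: performance prediction scored with a Normalized MAE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical traces: columns = CPU %, memory %, input size (GB).
X = rng.uniform(0, 100, size=(500, 3))
# Hypothetical target: job runtime in seconds, with noise.
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# NMAE: mean absolute error normalized by the mean of the true values.
nmae = mean_absolute_error(y_te, pred) / np.abs(y_te).mean()
print(f"NMAE: {nmae:.1%}")
```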