Follow this link to see other types of publications on the topic: Personnel management – Data processing.

Theses / dissertations on the topic "Personnel management – Data processing"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 50 best works (theses / dissertations) for research on the topic "Personnel management – Data processing".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.

1

Udekwe, Emmanuel. "The impact of human resources information systems in selected retail outlets in Western Cape". Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2348.

Full text of the source
Abstract:
Thesis (MTech (Business Administration))--Cape Peninsula University of Technology, 2016.
Human Resource Information Systems (HRISs) are systems that merge Human Resources (HR) and Information Systems (ISs) for a fast, easy, and convenient way of operating and reporting on the human and material resources in an organisation. The retail sector is an important and active sector in terms of job creation and a major contributor to the economy. This research focuses on the level of impact HRISs have in the retail sector by reassessing their functions, problems, prospects, and benefits to the retail industry. The research further focuses on two retail outlets that use HRISs in order to explore how effective HRIS implementation is, the benefits these systems are able to offer, and their contribution to the organisation. A multiple case study was used as the research strategy. Interviews and semi-structured questionnaires were used to collect the data. Data was analysed using summarising, categorising and thematic analysis. The problem statement is that HRISs are difficult to implement and maintain, and as a result organisations cannot effectively utilise these systems to their benefit. The aim of this research is to explore how HRISs can be implemented and maintained in order for organisations to gain the expected benefits of the system. The contribution of the study is a proposed guideline to assist retail organisations in the effective implementation and maintenance of their preferred HRISs. All ethical standards required by CPUT were followed. Consent was obtained in writing from the companies as well as the interviewees.
2

Van Heerden, Jeanne-Marie. "The impact of the implementation of E-HRM on the human resource management function". Thesis, Nelson Mandela Metropolitan University, 2011. http://hdl.handle.net/10948/d1021239.

Full text of the source
Abstract:
The purpose of the research was to improve the use of electronic human resource management (e-HRM) in South African businesses by investigating whether implementing e-HRM has a positive impact on the human resource management function. The research was carried out within a South African business whose parent concern is based overseas and which has branches operating within South Africa. The research was significant in that it addressed the researcher's concern as to whether electronic human resource management would be beneficial to a South African business should its leadership decide to implement e-HRM, and what impact it would have. The methodological components that guided the research were a structured questionnaire distributed using a combination of convenience, snowball, and judgemental sampling techniques. Certain aspects highlighted in the literature review were used as the framework for the development of a questionnaire to assess how people perceive the implementation of e-HR in their working environment and whether e-HR has helped the business run more efficiently and effectively. Six hypotheses were tested and all were accepted. The potential for generalisation of the findings is that, given the potential that e-HRM has for the transformation of human resources, it is reasonable to expect that the sizeable changes required, both in organisation and in mindset, are likely to provoke resistance from various end users. What was learned was that HR is often hindered by a multitude of manual, paper-based processes and transactions, such as tax, payroll and benefits information, that are costly, prone to errors and time-consuming to manage. This makes it difficult for HR organisations to focus on higher-value business initiatives that may help to drive the profitability and efficiency of the organisation.
The implication of the findings about the impact of the implementation of e-HR on the human resource management function was that firms need to figure out how to make technology feasible and productive: managers and human resource professionals are responsible for redefining how work flows at their firms, and they need to stay ahead of the information curve and learn how to leverage information for business results so as to be more efficient and effective. The theoretical and practical implications of the findings are discussed and recommendations based on these findings are provided.
3

Marazanye, Joram. "The perceived meaning and benefits of people analytics in selected organisations in South Africa". Thesis, University of Fort Hare, 2017. http://hdl.handle.net/10353/4480.

Full text of the source
Abstract:
Despite the widespread application of analytics to a variety of business measurements, it is noteworthy that the use of people analytics is still nowhere near where it could be. The main aim of this study is to examine the perceived meaning and benefits of people analytics in selected South African organisations. People analytics is an emerging topic in the HR field, aimed at using data to make organisational decisions, and little has been done in this area, especially in the South African context. The study employed a qualitative, exploratory design comprising 10 senior HR officers from selected organisations in South Africa. The findings show that the employment of people analytics in the South African context is in its early stages and that its conception and repercussions are little understood. In addition, there is agreement on its usefulness; however, a shortage of workforce-analytics skills was found to be the major difficulty hindering its successful implementation and adoption by organisations. Because of its qualitative nature, this study has the limitation that it lacks representativeness; hence the findings cannot be generalised. Future research could be quantitative and longitudinal in order to objectively ascertain the extent of future applicability of people analytics.
4

Chitondo, Pepukayi David Junior. "Data policies for big health data and personal health data". Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2479.

Full text of the source
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
Health information policies are increasingly becoming a key feature in directing information usage in healthcare. After the passing of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 and the Affordable Care Act (ACA) in 2010 in the United States, there has been an increase in health systems innovations. Coupled with this health systems hype is the current buzz concept in Information Technology, "big data". The prospects of big data are full of potential, even more so in the healthcare field, where the accuracy of data is life-critical. How big health data can be used to achieve improved health is now the goal of the current health informatics practitioner. Even more exciting is the amount of health data being generated by patients via personal handheld devices and other forms of technology that exclude the healthcare practitioner. This patient-generated data is also known as Personal Health Records (PHRs). To achieve meaningful use of PHRs, and of healthcare data in general, through big data, a couple of hurdles have to be overcome. First and foremost is the issue of the privacy and confidentiality of the patients whose data are concerned. Second is the perceived trustworthiness of PHRs by healthcare practitioners. Other issues to take into account are data rights and ownership, data suppression, IP protection, data anonymisation and re-identification, information flow and regulations, as well as consent biases. This study sought to understand the role of data policies in the process of data utilisation in the healthcare sector, with added interest in the utilisation of PHRs as part of big health data.
5

Ngai, Kin-fai, and 魏建輝. "An appraisal of computer-based management information systems in Hong Kong secondary schools with emphasis on human resource factors". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1992. http://hub.hku.hk/bib/B31956154.

Full text of the source
6

Ceccucci, Wency A. "Decision support systems design: a nursing scheduling application". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40303.

Full text of the source
7

Honniger, Werner. "Networking the enterprise : a solution for HBR personnel". Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/16481.

Full text of the source
Abstract:
Thesis (MPhil)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: This Extended Research Assignment discusses the information systems found at HBR Personnel. The discussion, based on the research problems, proposes steps by which the systems of HBR can be integrated so that they add the most value. Furthermore, a review of corporate portals is undertaken to show the potential impact they may have on organisational efficiency and knowledge. Following the methodologies given, the Assignment analyses the HBR information system for system incompatibilities and bottlenecks and proposes solutions for these problems. The solutions include changing core system databases and computer systems, together with a portal to fully integrate HBR Personnel's information systems.
8

Harmse, Magda Susanna. "Physicians' perspectives on personal health records: a descriptive study". Thesis, Nelson Mandela Metropolitan University, 2016. http://hdl.handle.net/10948/6876.

Full text of the source
Abstract:
A Personal Health Record (PHR) is an electronic record of a patient’s health-related information that is managed by the patient. The patient can give access to other parties, such as healthcare providers and family members, as they see fit. These parties can use the information in emergency situations, in order to help improve the patient’s healthcare. PHRs have an important role to play in ensuring that a patient’s complete health history is available to his healthcare providers at the point of care. This is especially true in South Africa, where the majority of healthcare organizations still rely on paper-based methods of record-keeping. Research indicates that physicians play an important role in encouraging the adoption of PHRs amongst patients. Whilst various studies have focused on the perceptions of South African citizens towards PHRs, to date no research has focused on the perceptions of South African physicians. Considering the importance of physicians in encouraging the adoption of PHRs, the problem being addressed by this research project thus relates to the lack of information relating to the perceptions of South African physicians of PHRs. Physicians with private practices at private hospitals in Port Elizabeth, South Africa were surveyed in order to determine their perceptions towards PHRs. Results indicate perceptions regarding benefits to the physician and the patient, as well as concerns to the physician and the patient. The levels of trust in various potential PHR providers and the potential uses of a PHR for the physician were also explored. The results of the survey were compared with the results of relevant international literature in order to describe the perceptions of physicians towards PHRs.
9

Tsegaye, Melekam Asrat. "A model for a context aware machine-based personal memory manager and its implementation using a visual programming environment". Thesis, Rhodes University, 2007. http://hdl.handle.net/10962/d1006563.

Full text of the source
Abstract:
Memory is a part of cognition. It is essential for an individual to function normally in society. It encompasses an individual's lifetime experience, thus defining his identity. This thesis develops the concept of a machine-based personal memory manager which captures and manages an individual's day-to-day external memories. Rather than accumulating large amounts of data which has to be mined for useful memories, the machine-based memory manager automatically organizes memories as they are captured to enable their quick retrieval and use. The main functions of the machine-based memory manager envisioned in this thesis are the support and the augmentation of an individual's biological memory system. In the thesis, a model for a machine-based memory manager is developed. A visual programming environment, which can be used to build context aware applications as well as a proof-of-concept machine-based memory manager, is conceptualized and implemented. An experimental machine-based memory manager is implemented and evaluated. The model describes a machine-based memory manager which manages an individual's external memories by context. It addresses the management of external memories which accumulate over long periods of time by proposing a context aware file system which automatically organizes external memories by context. It describes how personal memory management can be facilitated by machine using six entities (life streams, memory producers, memory consumers, a memory manager, memory fragments and context descriptors) and the processes in which these entities participate (memory capture, memory encoding and decoding, memory decoding and retrieval). The visual programming environment represents a development tool which contains facilities that support context aware application programming. For example, it provides facilities which enable the definition and use of virtual sensors. 
It enables rapid programming with a focus on component re-use and dynamic composition of applications through a visual interface. The experimental machine-based memory manager serves as an example implementation of the machine-based memory manager which is described by the model developed in this thesis. The hardware used in its implementation consists of widely available components such as a camera, microphone and sub-notebook computer which are assembled in the form of a wearable computer. The software is constructed using the visual programming environment developed in this thesis. It contains multiple sensor drivers, context interpreters, a context aware file system as well as memory retrieval and presentation interfaces. The evaluation of the machine-based memory manager shows that it is possible to create a machine which monitors the states of an individual and his environment, and manages his external memories, thus supporting and augmenting his biological memory.
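The six-entity model summarised in this abstract lends itself to a small illustration. The sketch below uses hypothetical names and structure (it is not the thesis's actual implementation) to show how memory fragments tagged with context descriptors can be organised by a memory manager for retrieval by context:

```python
from dataclasses import dataclass

@dataclass
class ContextDescriptor:
    """Context under which a memory was captured (illustrative fields)."""
    time: str
    location: str

@dataclass
class MemoryFragment:
    """One captured external memory plus its context."""
    content: bytes
    context: ContextDescriptor

class MemoryManager:
    """Organises captured fragments by context for quick retrieval."""
    def __init__(self):
        self._by_location = {}

    def capture(self, fragment):
        # Index the fragment by its context as it arrives, so no later
        # mining step is needed to find it again.
        self._by_location.setdefault(fragment.context.location, []).append(fragment)

    def retrieve(self, location):
        return self._by_location.get(location, [])

mgr = MemoryManager()
mgr.capture(MemoryFragment(b"photo", ContextDescriptor("2007-03-01T09:00", "office")))
mgr.capture(MemoryFragment(b"audio", ContextDescriptor("2007-03-01T12:30", "cafe")))
print(len(mgr.retrieve("office")))  # 1
```

The key design point mirrored here is that organisation happens at capture time, which is what enables the quick retrieval the abstract describes.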
10

Van der Walt, J. C. "The strategy and approach with the use of open-source software in Sanlam Personal Finance (SPF)". Thesis, Stellenbosch : Stellenbosch University, 2006. http://hdl.handle.net/10019.1/21123.

Full text of the source
Abstract:
Thesis (MBA)--Stellenbosch University, 2006.
ENGLISH ABSTRACT: Open-source software (OSS) refers to software collaboratively developed by developers across the globe, which embraces the philosophy of sharing. The fundamental idea behind open source is that when programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. The Internet plays an extremely important role in the distribution of the software, and today many OSS products can be downloaded free from the Internet. Despite the inherent challenges, the research organisation Gartner predicts that the majority of mainstream IT organisations will successfully adopt formal open-source management strategies as core IT disciplines. What is more, IT organisations and technology vendors who ignore the potential threats and opportunities of OSS will increasingly find themselves at a competitive disadvantage. However, organisations are not always clear on the appropriate strategy, direction, and approach to take when deciding on the role of OSS in their organisations. There is so much hype surrounding the use and the risks of open source that it can be difficult for organisations to know what is real and what is not. Furthermore, organisations are intrigued but also stymied by the myths about the costs, support, and risks of OSS. In South Africa, too, organisations and the South African Government are asking themselves how relevant the benefits and risks of the software are to them. Consequently, the aim of the study is to broaden the existing knowledge of OSS in South Africa by investigating a South African organisation's approach and decisions regarding the use of OSS in the organisation.
11

Bantom, Simlindile Abongile. "Accessibility to patients’ own health information: a case in rural Eastern Cape, South Africa". Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2411.

Full text of the source
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2016.
Access to healthcare is regarded as a basic and essential human right. It is widely known that ICT solutions have potential to improve access to healthcare, reduce healthcare cost, reduce medical errors, and bridge the digital divide between rural and urban healthcare centres. The access to personal healthcare records is, however, an astounding challenge for both patients and healthcare professionals alike, particularly within resource-restricted environments (such as rural communities). Most rural healthcare institutions have limited or non-existent access to electronic patient healthcare records. This study explored the accessibility of personal healthcare records by patients and healthcare professionals within a rural community hospital in the Eastern Cape Province of South Africa. The case study was conducted at the St. Barnabas Hospital with the support and permission from the Faculty of Informatics and Design, Cape Peninsula University of Technology and the Eastern Cape Department of Health. Semi-structured interviews, observations, and interactive co-design sessions and focus groups served as the main data collection methods used to determine the accessibility of personal healthcare records by the relevant stakeholders. The data was qualitatively interpreted using thematic analysis. The study highlighted the various challenges experienced by healthcare professionals and patients, including time-consuming manual processes, lack of infrastructure, illegible hand-written records, missing records and illiteracy. A number of recommendations for improved access to personal healthcare records are discussed. The significance of the study articulates the imperative need for seamless and secure access to personal healthcare records, not only within rural areas but within all communities.
12

Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.

Full text of the source
13

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems". [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Full text of the source
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
14

Emerson, Glen D. "Projected performance requirements for personnel entering information processing jobs for the federal government /". Full-text version available from OU Domain via ProQuest Digital Dissertations, 1985.

Find the full text of the source
15

Griffin, Alan R., and R. Stephen Wooten. "AUTOMATED DATA MANAGEMENT IN A HIGH-VOLUME TELEMETRY DATA PROCESSING ENVIRONMENT". International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608908.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
The vast amount of data telemetered from space probe experiments requires careful management and tracking from initial receipt through acquisition, archiving, and distribution. This paper presents the automated system used at the Phillips Laboratory, Geophysics Directorate, for tracking telemetry data from its receipt at the facility to its distribution on various media to the research community. Features of the system include computerized databases, automated generation of media labels, automated generation of reports, and automated archiving.
16

容勁 and King Stanley Yung. "Application of multi-agent technology to supply chain management". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223886.

Full text of the source
17

Bashir, Omar. "Management and processing of network performance information". Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/10361.

Full text of the source
Abstract:
Intrusive monitoring systems monitor the performance of data communication networks by transmitting and receiving test packets on the network being monitored. Even relatively small periods of monitoring can generate significantly large amounts of data. Primitive network performance data are details of test packets that are transmitted and received over the network under test. Network performance information is then derived by significantly processing the primitive performance data. This information may need to be correlated with information regarding the configuration and status of various network elements and the test stations. This thesis suggests that efficient processing of the collected data may be achieved by reusing and recycling the derived information in the data warehouses and information systems. This can be accomplished by pre-processing the primitive performance data to generate Intermediate Information. In addition to being able to efficiently fulfil multiple information requirements, different Intermediate Information elements at finer levels of granularity may be recycled to generate Intermediate Information elements at coarser levels of granularity. The application of these concepts in processing packet delay information from the primitive performance data has been studied. Different Intermediate Information structures possess different characteristics. Information systems can exploit these characteristics to efficiently re-cycle elements of these structures to derive the required information elements. Information systems can also dynamically select appropriate Intermediate Information structures on the basis of queries posted to the information system as well as the number of suitable Intermediate Information elements available to efficiently answer these queries. Packet loss and duplication summaries derived for different analysis windows also provide information regarding the network performance characteristics. 
Due to their additive nature, suitable finer granularity packet loss and duplication summaries can be added to provide coarser granularity packet loss and duplication summaries.
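The additive property described in this abstract can be sketched in a few lines of code. The structure below is a hypothetical illustration of the idea (it is not taken from the thesis): per-window loss and duplication counts are simply summed to turn finer-granularity summaries into a coarser-granularity one.

```python
from dataclasses import dataclass

@dataclass
class WindowSummary:
    """Packet counts for one analysis window (illustrative fields)."""
    sent: int
    lost: int
    duplicated: int

def merge(summaries):
    """Add finer-granularity summaries to form a coarser-granularity summary."""
    return WindowSummary(
        sent=sum(s.sent for s in summaries),
        lost=sum(s.lost for s in summaries),
        duplicated=sum(s.duplicated for s in summaries),
    )

# Four 15-minute windows combine into one hourly summary, with no need
# to reprocess the primitive per-packet data.
quarter_hours = [
    WindowSummary(1000, 5, 1),
    WindowSummary(1200, 2, 0),
    WindowSummary(900, 7, 2),
    WindowSummary(1100, 0, 0),
]
hourly = merge(quarter_hours)
print(hourly.sent, hourly.lost, hourly.duplicated)  # 4200 14 3
```

This is exactly the recycling the thesis argues for: coarse summaries are derived from already-computed fine ones rather than from the raw test-packet records.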
18

Agarwalla, Bikash Kumar. "Resource management for data streaming applications". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34836.

Full text of the source
Abstract:
This dissertation investigates novel middleware mechanisms for building streaming applications. Developing streaming applications is a challenging task because (i) they are continuous in nature; (ii) they require fusion of data coming from multiple sources to derive higher level information; (iii) they require efficient transport of data from/to distributed sources and sinks; (iv) they need access to heterogeneous resources spanning sensor networks and high performance computing; and (v) they are time critical in nature. My thesis is that an intuitive programming abstraction will make it easier to build dynamic, distributed, and ubiquitous data streaming applications. Moreover, such an abstraction will enable an efficient allocation of shared and heterogeneous computational resources thereby making it easier for domain experts to build these applications. In support of the thesis, I present a novel programming abstraction, called DFuse, that makes it easier to develop these applications. A domain expert only needs to specify the input and output connections to fusion channels, and the fusion functions. The subsystems developed in this dissertation take care of instantiating the application, allocating resources for the application (via the scheduling heuristic developed in this dissertation) and dynamically managing the resources (via the dynamic scheduling algorithm presented in this dissertation). Through extensive performance evaluation, I demonstrate that the resources are allocated efficiently to optimize the throughput and latency constraints of an application.
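The DFuse abstraction described in this abstract, in which a domain expert specifies only the input and output connections of fusion channels plus the fusion functions, can be caricatured in a few lines. The class and names below are illustrative assumptions, not DFuse's real interface:

```python
# A toy fusion-channel abstraction in the spirit of the description above.
class FusionChannel:
    """Combines items pulled from its inputs using a user-supplied function."""
    def __init__(self, inputs, fusion_fn):
        self.inputs = inputs        # upstream channels or data sources (callables)
        self.fusion_fn = fusion_fn  # fuses one item from each input

    def pull(self):
        # Gather the latest item from each input and fuse them into
        # higher-level information.
        return self.fusion_fn([src() for src in self.inputs])

# Two sensor sources fused into an average temperature reading; the domain
# expert wrote only the wiring and the fusion function.
temp_a = lambda: 21.0
temp_b = lambda: 23.0
avg_temp = FusionChannel([temp_a, temp_b], lambda xs: sum(xs) / len(xs))
print(avg_temp.pull())  # 22.0
```

Resource allocation and dynamic scheduling, which the dissertation handles in separate subsystems, are deliberately absent from this sketch.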
19

Mousavi, Bamdad. "Scalable Stream Processing and Management for Time Series Data". Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42295.

Full text of the source
Abstract:
There has been enormous growth in the generation of time series data in the past decade. This trend is caused by the widespread adoption of IoT technologies, the data generated by the monitoring of cloud computing resources, and cyber-physical systems. Although time series data have been a topic of discussion in the domain of data management for several decades, this recent growth has brought the topic to the forefront. Many of the time series management systems available today lack the features necessary to successfully manage and process the sheer amount of time series data being generated. In this thesis we strive to examine the field and study the prior work in time series management. We then propose a large system capable of handling time series management end to end, from generation to consumption by the end user. Our system is composed of open-source data processing frameworks. It has the capability to collect time series data, perform stream processing over it, store it for immediate and future processing, and create the necessary visualisations. We present the implementation of the system and perform experiments to show its scalability in handling growing pipelines of incoming data from various sources.
20

Stein, Oliver. "Intelligent Resource Management for Large-scale Data Stream Processing". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391927.

Full text of the source
Abstract:
With the increasing trend of using cloud computing resources, the efficient utilization of these resources becomes more and more important. Working with data stream processing is a paradigm gaining in popularity, with tools such as Apache Spark Streaming or Kafka widely available, and companies are shifting towards real-time monitoring of data such as sensor networks, financial data or anomaly detection. However, it is difficult for users to efficiently make use of cloud computing resources and studies show that a lot of energy and compute hardware is wasted. We propose an approach to optimizing resource usage in cloud computing environments designed for data stream processing frameworks, based on bin packing algorithms. Test results show that the resource usage is substantially improved as a result, with future improvements suggested to further increase this. The solution was implemented as an extension of the HarmonicIO data stream processing framework and evaluated through simulated workloads.
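The bin-packing idea behind the approach in this abstract can be illustrated with a first-fit-decreasing sketch. The units and the packing heuristic below are assumptions for illustration; the thesis's actual algorithm and its HarmonicIO integration are not reproduced here:

```python
def first_fit_decreasing(tasks, capacity):
    """Pack task resource demands onto as few equal-capacity nodes as possible.

    tasks: list of resource demands (e.g. fractional CPU shares).
    capacity: per-node resource limit.
    Returns a list of bins, each a list of the demands placed on one node.
    """
    bins = []
    # Placing the largest demands first is the classic first-fit-decreasing
    # heuristic, which tends to leave less capacity stranded.
    for demand in sorted(tasks, reverse=True):
        for b in bins:
            if sum(b) + demand <= capacity:
                b.append(demand)
                break
        else:
            bins.append([demand])  # no node had room: provision a new one
    return bins

# Six stream-processing tasks packed onto nodes of capacity 1.0.
packing = first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.1, 0.6], capacity=1.0)
print(len(packing))  # nodes actually needed, instead of one node per task
```

Consolidating tasks this way is one concrete route to the reduction in wasted cloud capacity that the abstract reports.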
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

Reinhard, Erik. "Scheduling and data management for parallel ray tracing". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302169.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Wilke, Achim. "Data-processing development in German design offices". Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292979.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Fritz, Godfried. "The relationship of sense of coherence to health and work in data processing personnel". Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/16845.

Texto completo da fonte
Resumo:
Bibliography: pages 80-86.
The aim of the present study was to test a model of stress and to examine whether the theoretical construct of sense of coherence (SOC) moderated the relationship between stressors and health-related and work-related outcomes. The construct of SOC was identified by the Israeli medical sociologist Antonovsky. He maintained that current research on stress is largely pathogenic in nature and suggested that it would be of value to shift research towards identifying the origins of health. He consequently developed the term "salutogenesis", which directs attention to those factors which promote well-being. He also argued that people are not either sick or well, but rather are located on a continuum between health-ease and dis-ease. With respect to their health, persons will find themselves somewhere along this continuum, where they may shift between the two poles. He suggests that certain factors facilitate movement along this continuum; together these factors form a construct which he calls the SOC. The SOC comprises core components. He hypothesizes that someone with a strong SOC is likely to make better sense of the world around him/her, thereby engendering resilience towards impinging stressors. The person with a weak SOC is likely to capitulate to these stressors more readily and, by succumbing to them, increases the likelihood of moving to the dis-ease end of the continuum. This study investigated the following research questions: whether (1) the stressors were related to the stress outcomes, (2) the SOC was related to the stressors and outcomes, and (3) the SOC moderated the relationships between stressors and outcomes. In the present study the subjects were drawn from all data processing professionals in a large financial organisation.
The respondents (N = 194) replied to a questionnaire containing scales which measured a variety of job-related stressors, an SOC scale, and job-related and health-related outcome variables. Intercorrelations between the stressor, moderator and outcome variables were calculated. The other statistical procedures utilized were subgroup analyses and moderated multiple regression analyses. Partial support for all three research questions was obtained. Four of the six stressors were found to correlate significantly with somatic complaints, suggesting that stressors result in persons feeling the effects of stress and reporting them physically. The SOC was found to relate to some of the stressors and outcome variables, lending partial support to an interpretation of the SOC as having a main-effect relationship to stressor and outcome variables. The subgroup analyses showed that, out of a possible 54 relationships, the SOC moderated only seven, while the moderated multiple regression (MMR) analyses showed that, out of the 54 possible relationships, the SOC moderated 12 involving health-related variables. Furthermore, the SOC moderated relationships between the six stressors and six work-related outcomes. These findings partially support research question 3, which examined whether the SOC would moderate relationships between stressors and outcome variables. The study concludes with a discussion of the findings, their implications, and the limitations of this research.
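Moderated multiple regression of the kind used in the study tests moderation by adding a stressor × SOC interaction term to the model. A minimal sketch on synthetic data follows; the coefficients, noise level and use of ordinary least squares are assumptions for illustration, not the study's actual data or software:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 194  # sample size matching the study
stressor = rng.normal(size=n)
soc = rng.normal(size=n)
# Synthetic outcome with a built-in interaction (moderation) effect of -0.3:
# a stronger SOC weakens the stressor-outcome relationship.
outcome = 0.5 * stressor - 0.4 * soc - 0.3 * stressor * soc \
          + rng.normal(scale=0.1, size=n)

# Moderated multiple regression: outcome ~ stressor + SOC + stressor*SOC
X = np.column_stack([np.ones(n), stressor, soc, stressor * soc])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(beta[3])  # close to -0.3: a nonzero interaction coefficient indicates moderation
```

If the interaction coefficient were near zero, the SOC would have at most a main effect rather than a moderating one, which mirrors the distinction the study draws.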
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Tucker, Peter A. "Punctuated data streams /". Full text open access at:, 2005. http://content.ohsu.edu/u?/etd,255.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Darrous, Jad. "Scalable and Efficient Data Management in Distributed Clouds : Service Provisioning and Data Processing". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN077.

Texto completo da fonte
Resumo:
Cette thèse porte sur des solutions pour la gestion de données afin d'accélérer l'exécution efficace d'applications de type « Big Data » (très consommatrices en données) dans des centres de calculs distribués à grande échelle. Les applications de type « Big Data » sont de plus en plus souvent exécutées sur plusieurs sites. Les deux principales raisons de cette tendance sont 1) le déplacement des calculs vers les sources de données pour éliminer la latence due à leur transmission et 2) le stockage de données sur un site peut ne pas être réalisable à cause de leurs tailles de plus en plus importantes. La plupart des applications s'exécutent sur des clusters virtuels et nécessitent donc des images de machines virtuelles (VMI) ou des conteneurs d’application. Par conséquent, il est important de permettre l’approvisionnement rapide de ces services afin de réduire le temps d'attente avant l’exécution de nouveaux services ou applications. Dans la première partie de cette thèse, nous avons travaillé sur la récupération et le placement des données, en tenant compte de problèmes difficiles, notamment l'hétérogénéité des connexions au réseau étendu (WAN) et les besoins croissants en stockage pour les VMIs et les conteneurs d’application. Par ailleurs, les applications de type « Big Data » reposent sur la réplication pour fournir des services fiables et rapides, mais le surcoût devient de plus en plus grand. La seconde partie de cette thèse constitue l'une des premières études sur la compréhension et l'amélioration des performances des applications utilisant la technique, moins coûteuse en stockage, des codes d'effacement (erasure coding), en remplacement de la réplication.
This thesis focuses on scalable data management solutions to accelerate service provisioning and enable efficient execution of data-intensive applications in large-scale distributed clouds. Data-intensive applications are increasingly running on distributed infrastructures (multiple clusters). The main two reasons for such a trend are 1) moving computation to data sources can eliminate the latency of data transmission, and 2) storing data on one site may not be feasible given the continuous increase of data size. On the one hand, most applications run on virtual clusters to provide isolated services, and require virtual machine images (VMIs) or container images to provision such services. Hence, it is important to enable fast provisioning of virtualization services to reduce the waiting time of new running services or applications. Different from previous work, during the first part of this thesis, we worked on optimizing data retrieval and placement considering challenging issues including the continuous increase of the number and size of VMIs and container images, and the limited bandwidth and heterogeneity of the wide area network (WAN) connections. On the other hand, data-intensive applications rely on replication to provide dependable and fast services, but it became expensive and even infeasible with the unprecedented growth of data size. The second part of this thesis provides one of the first studies on understanding and improving the performance of data-intensive applications when replacing replication with the storage-efficient erasure coding (EC) technique.
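The storage saving behind replacing replication with erasure coding is easy to quantify. A small sketch follows; the RS(6, 3) layout is a common example configuration, not necessarily the one studied in the thesis:

```python
def storage_factor(data_blocks: int, parity_blocks: int) -> float:
    """Storage used per unit of user data for an RS(data+parity, data) erasure code."""
    return (data_blocks + parity_blocks) / data_blocks

# 3-way replication stores every byte three times and survives losing 2 copies.
replication = 3.0
# RS(6, 3) splits data into 6 blocks plus 3 parity blocks: it survives losing
# any 3 of the 9 blocks while using only 1.5x storage.
erasure_coding = storage_factor(data_blocks=6, parity_blocks=3)
print(replication, erasure_coding)  # 3.0 1.5
```

The trade-off, which motivates the performance study in the thesis, is that reads under failure require fetching and decoding multiple blocks instead of a single replica.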
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Monk, Kitty A. "Data management in MARRS". Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9939.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Xie, Tian, e 謝天. "Development of a XML-based distributed service architecture for product development in enterprise clusters". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Tidmus, Jonathan Paul. "Task and data management for parallel particle tracing". Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387936.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Görlitz, Olaf [Verfasser]. "Distributed query processing for federated RDF data management / Olaf Görlitz". Koblenz : Universitätsbibliothek Koblenz, 2015. http://d-nb.info/1065246986/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Pitts, David Vernon. "A storage management system for a reliable distributed operating system". Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/16895.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Tao, Yufei. "Indexing and query processing of spatio-temporal data /". View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20TAO.

Texto completo da fonte
Resumo:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 208-215). Also available in electronic version. Access restricted to campus users.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Laribi, Atika. "A protection model for distributed data base management systems". Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53883.

Texto completo da fonte
Resumo:
Security is important for Centralized Data Base Management Systems (CDBMS) and becomes crucial for Distributed Data Base Management Systems (DDBMS) when different organizations share information. Secure cooperation can be achieved only if each participating organization is assured that the data it makes available will not be abused by other users. In this work differences between CDBMS and DDBMS that characterize the nature of the protection problem in DDBMS are identified. These differences are translated into basic protection requirements. Policies that a distributed data base management protection system should allow are described. The system proposed in this work is powerful enough to satisfy the stated requirements and allow for variations on the policies. This system is a hybrid one where both authorizations and constraints can be defined. The system is termed hybrid because it combines features of both open and closed protection systems. In addition the hybrid system, although designed to offer the flexibility of discretionary systems, incorporates the flow control of information between users, a feature found only in some nondiscretionary systems. Furthermore, the proposed system is said to be integrated because authorizations and constraints can be defined on any of the data bases supported by the system including the data bases containing the authorizations, and the constraints themselves. The hybrid system is incorporated in a general model of DDBMS protection. A modular approach is taken for the design of the model. This approach allows us to represent the different options for the model depending on the set of policy choices taken. Three levels of abstraction describing different aspects of DDBMS protection problems are defined. The conceptual level describes the protection control of the DDBMS transactions and information flows. The logical level is concerned with the interaction between the different organizations participating in the DDBMS. 
The physical level is involved with the architectural implementation of the logical level.
Ph. D.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Zhao, Jianbin, e 趙建賓. "A portalet-based DIY approach to collaborative product commerce". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B27769793.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Tacic, Ivan. "Efficient Synchronized Data Distribution Management in Distributed Simulations". Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6822.

Texto completo da fonte
Resumo:
Data distribution management (DDM) is a mechanism to interconnect data producers and data consumers in a distributed application. Data producers provide useful data to consumers in the form of messages. For each message produced, DDM determines the set of data consumers interested in receiving the message and delivers it to those consumers. We are particularly interested in DDM techniques for parallel and distributed discrete event simulations. Thus far, researchers have treated synchronization of events (i.e. time management) and DDM independently of each other. This research focuses on how to realize time-managed DDM mechanisms. The main reason for time-managed DDM is to ensure that changes in the routing of messages from producers to consumers occur in a correct sequence. Time-managed DDM also avoids non-determinism in the federation execution, which may result in non-repeatable executions. An optimistic approach to time-managed DDM is proposed, where DDM events are allowed to be processed out of time stamp order, but a detection and recovery procedure is used to recover from such errors. These mechanisms are tailored to the semantics of the DDM operations to ensure an efficient realization. A correctness proof is presented to verify that the algorithm correctly synchronizes DDM events. We have developed a fully distributed implementation of the algorithm within the framework of the Georgia Tech Federated Simulation Development Kit (FDK) software. A performance evaluation of the synchronized DDM mechanism has been completed in a loosely coupled distributed system consisting of a network of workstations connected over a local area network (LAN). We compare time-managed versus unsynchronized DDM for two applications that exercise different mobility patterns: one based on a military simulation and a second utilizing a synthetic workload. The experiments and analysis illustrate that synchronized DDM performance depends on several factors: the simulation model (e.g. lookahead), application mobility patterns, and the network hardware (e.g. size of network buffers). Under certain mobility patterns, time-managed DDM is as efficient as unsynchronized DDM. There are also mobility patterns where time-managed DDM overheads become significant, and we show how they can be reduced.
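The optimistic approach can be illustrated with a toy sketch: apply DDM routing events as they arrive, detect a straggler that violates timestamp order, and recover by re-establishing order. The rollback-by-resort recovery here is a deliberate simplification of the mechanisms tailored to DDM semantics in the dissertation:

```python
class OptimisticRouter:
    """Toy optimistic processor for timestamped routing-change events."""

    def __init__(self):
        self.events = []   # (timestamp, subscriber, region)
        self.rollbacks = 0

    def on_event(self, ts, subscriber, region):
        if self.events and ts < self.events[-1][0]:
            # Straggler detected: an earlier-timestamped event arrived late.
            self.rollbacks += 1
            self.events.append((ts, subscriber, region))
            self.events.sort(key=lambda e: e[0])  # naive recovery: replay in order
        else:
            self.events.append((ts, subscriber, region))

    def routing(self):
        """Current routing table: last write per subscriber, in timestamp order."""
        table = {}
        for _, sub, region in self.events:
            table[sub] = region
        return table

r = OptimisticRouter()
r.on_event(1, "A", "north")
r.on_event(3, "A", "south")
r.on_event(2, "A", "east")  # straggler: arrives after the ts=3 event
print(r.rollbacks, r.routing())  # 1 {'A': 'south'}
```

Without the recovery step, the late ts=2 event would incorrectly overwrite the ts=3 routing, which is precisely the out-of-sequence hazard time-managed DDM guards against.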
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Baxter, David. "Perception of organisational politics and workplace innovation : an investigation of the perceptions and behaviour of staff in an Australian IT services organisation /". Swinburne Research Bank, 2004. http://hdl.handle.net/1959.3/46062.

Texto completo da fonte
Resumo:
Thesis (D.B.A.)--Swinburne University of Technology, Australian Graduate School of Entrepreneurship, 2004.
A thesis submitted to the fulfilment of the requirements for the degree of Doctor of Philosophy, Australian Graduate School of Entrepreneurship, Swinburne University of Technology, 2004. Typescript. Includes bibliographical references (p. 229-230).
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Oelofse, Andries Johannes. "Development of a MIAME-compliant microarray data management system for functional genomics data integration". Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08222007-135249.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Persson, Mathias. "Simultaneous Data Management in Sensor-Based Systems using Disaggregation and Processing". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188856.

Texto completo da fonte
Resumo:
To enable high-performance data management for sensor-based systems, the system components in an architecture have to be tailored to the situation at hand. Each component has to handle a massive amount of data independently, and at the same time cooperate with the other components within a system. To facilitate rapid data processing between components, a model detailing the flow of information and specifying internal component structures will assist in faster and more reliable system designs. This thesis presents a model for a scalable, safe, reliable and high-performing system for managing sensor-based data. Based on the model, a prototype is developed that can handle a large number of messages from various distributed sensors. The different components within the prototype are evaluated and their advantages and disadvantages are presented. The result supports the architecture of the prototype and validates the initial requirements of how it should operate to achieve high performance. By combining components with individual advantages, a system can be designed that allows a large amount of simultaneous data to be disaggregated into its respective categories, processed to make the information usable, and stored in a database for easy access by interested parties.
Om ett system som hanterar sensorbaserad data ska kunna prestera bra måste komponenterna som ingår i systemet vara skräddarsydda för att hantera olika situationer. Detta betyder att varje enskild komponent måste individuellt kunna hantera stora simultana datamängder, samtidigt som de måste samarbeta med de andra komponenterna i systemet. För att underlätta snabb bearbetning av data mellan komponenter kan en modell, som specificerar informationsflödet och interna strukturer hos komponenterna, assistera i skapande av snabbare och mer tillförlitliga systemarkitekturer. I denna uppsats presenteras en modell för skapande av skalbara, säkra, tillförlitliga och bra presterande system som hanterar sensor-baserad data. En prototyp utvecklas, baserad på modellen, som kan hantera en stor mängd meddelanden från distribuerade sensorer. De olika komponenterna som används i prototypen utvärderas och deras för- och nackdelar presenteras. Resultatet visar att arkitekturen hos prototypen fungerar enligt de initiala kraven om hur bra systemet ska prestera. Genom att kombinera individuella styrkor hos komponenterna kan ett system skapas som tillåter stora mängder data att bli fördelat enligt deras typ, behandlat för att få fram relevant information och lagrat i en databas för enkel tillgång.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Slingsby, T. P. "An investigation into the development of a facilities management system for the University of Cape Town". Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/5585.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Fernández, Moctezuma Rafael J. "A Data-Descriptive Feedback Framework for Data Stream Management Systems". PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/116.

Texto completo da fonte
Resumo:
Data Stream Management Systems (DSMSs) provide support for continuous query evaluation over data streams. Data streams provide processing challenges due to their unbounded nature and varying characteristics, such as rate and density fluctuations. DSMSs need to adapt stream processing to these changes within certain constraints, such as available computational resources and minimum latency requirements in producing results. The proposed research develops an inter-operator feedback framework, where opportunities for run-time adaptation of stream processing are expressed in terms of descriptions of substreams and actions applicable to the substreams, called feedback punctuations. Both the discovery of adaptation opportunities and the exploitation of these opportunities are performed in the query operators. DSMSs are also concerned with state management, in particular, state derived from tuple processing. The proposed research also introduces the Contracts Framework, which provides execution guarantees about state purging in continuous query evaluation for systems with and without inter-operator feedback. This research provides both theoretical and design contributions. The research also includes an implementation and evaluation of the feedback techniques in the NiagaraST DSMS, and a reference implementation of the Contracts Framework.
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Bennett, Sandra M. "Exploring the relationship between continuing professional education and job satisfaction for information technology professionals in higher education". Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5296/.

Texto completo da fonte
Resumo:
The study had four main hypotheses that examined the relationships between job satisfaction and the reasons for attending continuing professional education (CPE). The purpose of this study was to examine the relationships between training and job satisfaction, with the objective of adding to the body of knowledge related to both job satisfaction and training and development. The Participation Reasons Scale was used to measure the reasons for attending CPE activities, and the Job in General Scale and the Job Descriptive Index were used to measure job satisfaction. The surveys were administered over the Internet to information technology professionals working in higher education. The participants were contacted by email with a message explaining the purpose of the research and a Web link that took them directly to the survey. After collection, the data were exported into SPSS and analyzed using Spearman rho and Mann-Whitney U statistics and a simple-structure exploratory factor analysis to determine any underlying structures between job satisfaction and CPE.
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Mohamad, Baraa. "Medical Data Management on the cloud". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22582.

Texto completo da fonte
Resumo:
Résumé indisponible
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format in which one file may contain regular data, multimedia data and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and large. The characteristics of DICOM files, added to the requirements of medical data management in general in terms of availability and accessibility, have led us to formulate our research question as follows: Is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities and physicians’ practices), (3) can store extremely huge and ever-increasing data, (4) provides expressive access and (5) is cost-effective? To answer this question we have built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts in a way that combines the best of the row-oriented and column-oriented storage models in one system, while exploiting features of the cloud that enable us to ensure the availability and portability of medical data. Storing data in such a hybrid layout opens the door to a second research question: how to process queries efficiently over this hybrid storage while enabling new and more efficient query plans. The originality of our proposal comes from the fact that no current system stores data in such a hybrid layout (i.e. an attribute resides either in the row-oriented database or in the column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and research questions.
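A placement rule for dispersing attributes over the two layouts might look like the following sketch. The presence-ratio threshold and the rule itself are hypothetical illustrations, not the criteria defined in the thesis:

```python
# Hypothetical rule: frequently-present, fixed-schema attributes go to the
# column store (good for analytical scans over many files); rare or free-form
# attributes go to the row store (good for retrieving one heterogeneous record).
def place_attribute(presence_ratio: float, fixed_schema: bool) -> str:
    """Decide which layout stores a DICOM attribute (illustrative heuristic)."""
    if fixed_schema and presence_ratio >= 0.9:
        return "column-store"
    return "row-store"

print(place_attribute(presence_ratio=0.99, fixed_schema=True))   # column-store
print(place_attribute(presence_ratio=0.10, fixed_schema=False))  # row-store
```

A query planner over such a system must then know, per attribute, which layout to scan, and may need to join results from both, which is the query-processing question the thesis raises.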
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Chui, Ka-lam Elsa, e 徐嘉琳. "A semantic web architecture for personalized profiles". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2961336X.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Paul, Daniel. "Decision models for on-line adaptive resource management". Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13559.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Hoffman, A. R. "Information technology decision making in South Africa : a framework for company-wide strategic IT management". Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/15854.

Texto completo da fonte
Resumo:
Includes bibliography.
The area of interest in which this Study is set is the linking of a company's business strategies with its strategic planning for IT (information technology). The objectives of the Study are: to investigate how the IT planning environment is changing for business enterprises in South Africa; to establish how successfully South African companies are managing IT strategically; to propose a new approach to strategic IT decision making that will help South African management deal with the major issues; to propose a way of implementing the approach. In Chapter 2, conclusions are drawn from an examination of the key strategic IT planning literature. It appears that fundamental changes are indeed taking place, and are producing significant shifts in the way researchers, consultants and managers think about IT. The survey of South African management opinion is described in Chapter 3. The opinions analyzed range over environmental trends, strategic decision making practices, and what an acceptable strategic IT decision making framework would look like. The need for a new, comprehensive approach to strategic IT decision making in South Africa is clearly established. In Chapter 4, a theoretical Framework is proposed as a new, comprehensive approach to strategic IT decision making. The Framework covers five strategic tasks: analysing the key environmental issues; determining the purposes and uses of IT in competitive strategy and organizational designs; developing the IT infrastructure, human systems, information systems, and human resources to achieve these purposes and uses; implementing the strategic IT decisions; and learning to make better strategic IT decisions. In Chapter 5, ways of implementing the Framework in practice are identified. A means of evaluating its acceptability in a specific company is also proposed. The general conclusions of the Study are presented in Chapter 6.
The Framework developed in this Study is intended for use, not directly by the IT decision makers themselves, but by the persons responsible for designing the IT decision making processes of the company. It is not, however, offered as a theory or a methodology. The aim is simply to provide a conceptual "filing system", to help designers uncover and classify the IT strategy problems of their own company, to identify the tools their decision makers need, and to put appropriate problem solving processes in place.
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Chen, Deji. "Real-time data management in the distributed environment /". Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Lynch, Kevin John. "Data manipulation in collaborative research systems". Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Texto completo da fonte
Resumo:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS, as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS, in light of the requirements of Chapter 3, and the range of data management solutions described in Chapter 4. Chapter 5 demonstrates the generic query processing approach as a viable one, for both the requirements of Chapter 3 and the DBMSs of Chapter 4. The significance of this research takes several forms. 
First, Chapters 1 and 3 provide detailed views of a current collaborative research system, and of a set of requirements for next-generation systems based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming language to database interface, ranging across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Shakya, Sujan. "Web-based employment application & processing support system /". Connect to title online, 2008. http://minds.wisconsin.edu/handle/1793/34222.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Benatar, Gil. "Thermal/structural integration through relational database management". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/19484.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Nehme, Rimma V. "Continuous query processing on spatio-temporal data streams". Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082305-154035/.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Roy, Amber Joyce. "Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations". Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2699/.

Texto completo da fonte
Resumo:
Distributed simulation is an enabling concept to support the networked interaction of models and real world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of the data exchanged during a distributed simulation, and reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and it significantly reduces both the number of multicast groups used, and the message overhead, when compared to previous grid-based allocation schemes using large-scale and real-world scenarios.
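The core of a grid-based DDM scheme, mapping rectangular interest regions onto grid cells that each back a multicast group, can be sketched as follows; the region coordinates and cell size are illustrative assumptions, not values from the thesis:

```python
def cells_for_region(x0, y0, x1, y1, cell_size):
    """Grid cells overlapped by an axis-aligned region (one multicast group per cell)."""
    cells = set()
    cx = int(x0 // cell_size)
    while cx * cell_size <= x1:
        cy = int(y0 // cell_size)
        while cy * cell_size <= y1:
            cells.add((cx, cy))
            cy += 1
        cx += 1
    return cells

publisher = cells_for_region(5, 5, 15, 15, cell_size=10)
subscriber = cells_for_region(12, 12, 30, 30, cell_size=10)
# A message is relayed only to subscribers sharing a grid cell with the publisher.
print(sorted(publisher & subscriber))  # [(1, 1)]
```

A dynamic variant, as proposed in the thesis, additionally creates and tears down multicast groups as regions move, rather than preallocating a group for every cell.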
Estilos ABNT, Harvard, Vancouver, APA, etc.