Dissertations / Theses on the topic 'Managed data'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Managed data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Deb, Debzani. "Achieving self-managed deployment in a distributed environment via utility functions." Thesis, Montana State University, 2008. http://etd.lib.montana.edu/etd/2008/deb/DebD0508.pdf.

Abstract:
This dissertation presents algorithms and mechanisms that enable self-managed, scalable and efficient deployment of large-scale scientific and engineering applications in a highly dynamic and unpredictable distributed environment. Typically these applications are composed of a large number of distributed components, and it is important to meet the computational power and network bandwidth requirements of those components and their interactions. However, satisfying these requirements in a large-scale, shared, heterogeneous, and highly dynamic distributed environment is a significant challenge. As systems and applications grow in scale and complexity, attaining the desired level of performance in this uncertain environment using current approaches based on global knowledge, centralized scheduling and manual reallocation becomes infeasible. This dissertation focuses on modeling the application and the underlying architecture into a common abstraction and on incorporating autonomic features into those abstractions to achieve self-managed deployment. In particular, we developed techniques for automatically identifying application components and their estimated resource requirements within an application, and used them to model the application as a graph abstraction. We also developed techniques that allow the distributed resources to self-organize in a utility-aware way while assuming minimal knowledge about the system. Finally, to achieve self-managed deployment of application components to the distributed nodes, we designed a scalable and adaptive scheduling algorithm governed by a utility function. The utility function, which combines several application- and system-level attributes, governs both the initial deployment of the application components and their reconfigurations despite the dynamism and uncertainty associated with the computing environment. The experimental results show that it is possible to achieve and maintain efficient deployment by applying the utility function derived in this work based solely on locally available information and without costly global communication or synchronization. The self-management is therefore decentralized and provides better adaptability, scalability and robustness.
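The abstract does not spell out the utility function's form; purely as an illustration of the idea, a node-local scoring rule over normalized attributes (the attribute names, weights and threshold below are invented, not taken from the dissertation) could look like this in Python:

```python
# Hypothetical sketch of a node-local deployment utility: each node scores a
# candidate component placement from locally observable, normalized attributes,
# and migrates the component only when another node advertises a clearly
# higher utility.
def utility(cpu_free, bw_free, load, w_cpu=0.5, w_bw=0.3, w_load=0.2):
    """Combine normalized attributes (all in [0, 1]) into a single score."""
    return w_cpu * cpu_free + w_bw * bw_free + w_load * (1.0 - load)

def should_migrate(local, remote, threshold=0.15):
    # Reconfigure only when the gain clearly exceeds the migration cost.
    return utility(**remote) - utility(**local) > threshold

local = {"cpu_free": 0.2, "bw_free": 0.6, "load": 0.9}
remote = {"cpu_free": 0.8, "bw_free": 0.7, "load": 0.3}
print(should_migrate(local, remote))  # True: the remote node is a better host
```

Because each node evaluates such a function on locally observable attributes only, reconfiguration decisions need no global coordination, which matches the decentralized behaviour the abstract describes.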
2

Loudon, Melissa. "Data management and reporting for drinking water quality monitoring in community-managed supplies." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/5031.

Abstract:
Includes bibliographical references (leaves 111-126).
Water Service Authorities, which may be district municipalities with hundreds of community-managed supplies under their jurisdiction, are legally responsible for ensuring the quality of water supplied to all consumers. Without the assistance of communities, this requirement, which would involve regular testing in many remote and inaccessible supplies, is extremely difficult to fulfil. Water Service Authorities also struggle to respond timeously to problems in remote supplies, as they are often unaware of the problem for some days. Two-way communication between the Water Service Authority and the Community-based Water Services Provider is therefore essential to an effective monitoring programme. Information and communication technologies, particularly mobile phones on the cellular network, offer a potential solution to the challenge of supporting community-managed supplies. Following an investigation into the information needs of various stakeholders in community management, a prototype drinking water quality information system for community-managed supplies was developed.
3

Yeom, Jae-seung. "Optimizing Data Accesses for Scaling Data-intensive Scientific Applications." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64180.

Abstract:
Data-intensive scientific applications often process an enormous amount of data. The scalability of such applications depends critically on how to manage the locality of data. Our study explores two common types of applications that are vastly different in terms of memory access pattern and workload variation. One includes those with multi-stride accesses in regular nested parallel loops. The other is for processing large-scale irregular social network graphs. In the former case, the memory location or the data item accessed in a loop is predictable and the load on processing a unit work (an array element) is relatively uniform with no significant variation. On the other hand, in the latter case, the data access per unit work (a vertex) is highly irregular in terms of the number of accesses and the locations being accessed. This property is further tied to the load and presents significant challenges in the scalability of the application performance. Designing platforms to support extreme performance scaling requires understanding of how application specific information can be used to control the locality and improve the performance. Such insights are necessary to determine which control and which abstraction to provide for interfacing an underlying system and an application as well as for designing a new system. Our goal is to expose common requirements of data-intensive scientific applications for scalability. For the former type of applications, those with regular accesses and uniform workload, we contribute new methods to improve the temporal locality of software-managed local memories, and optimize the critical path of scheduling data transfers for multi-dimensional arrays in nested loops. In particular, we provide a runtime framework allowing transparent optimization by source-to-source compilers or automatic fine tuning by programmers. Finally, we demonstrate the effectiveness of the approach by comparing against a state-of-the-art language-based framework. For the latter type, those with irregular accesses and non-uniform workload, we analyze how the heavy-tailed property of input graphs limits the scalability of the application. Then, we introduce an application-specific workload model as well as a decomposition method that allows us to optimize locality with the custom load balancing constraints of the application. Finally, we demonstrate unprecedented strong scaling of a contagion simulation on two state-of-the-art high performance computing platforms.
Ph. D.
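As a sketch of the setting of the first contribution (not code from the dissertation), double-buffered tiling keeps a software-managed local buffer busy by fetching the next tile while the current one is processed:

```python
# Schematic double-buffered tiling: the copy standing in for a DMA transfer of
# tile i+1 is issued before tile i is processed, so transfer and compute can
# overlap on hardware with software-managed local memory.
import numpy as np

def process(tile):
    return tile.sum()              # stand-in for the real per-tile kernel

def tiled_sum(a, tile_size=4):
    tiles = [a[i:i + tile_size] for i in range(0, len(a), tile_size)]
    local = tiles[0].copy()        # initial fill of the local memory
    total = 0.0
    for i in range(len(tiles)):
        prefetched = tiles[i + 1].copy() if i + 1 < len(tiles) else None
        total += process(local)    # compute on the current tile
        local = prefetched         # buffer swap: prefetched tile becomes current
    return total

print(tiled_sum(np.arange(16.0)))  # 120.0
```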
4

Johansson, Tobias. "Managed Distributed TensorFlow with YARN : Enabling Large-Scale Machine Learning on Hadoop Clusters." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248007.

Abstract:
Apache Hadoop is the dominant open source platform for the storage and processing of Big Data. With the data stored in Hadoop clusters, it is advantageous to be able to run TensorFlow applications on the same cluster that holds the input data sets for training machine learning models. TensorFlow supports distributed execution, in which Deep Neural Networks can be trained using a large number of compute nodes. Configuring and launching distributed TensorFlow applications manually is complex and impractical, and gets worse with more nodes. This project presents a framework that uses Hadoop's resource manager YARN to manage distributed TensorFlow applications. The proposal is a native YARN application with one ApplicationMaster (AM) per job, using the AM as a registry for discovery prior to job execution. Adapting TensorFlow code to the framework typically takes only a few lines of code. In comparison to TensorFlowOnSpark, the user experience is very similar, and collected performance data indicates an advantage to running TensorFlow directly on YARN with no extra layer in between.
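For context, the per-node boilerplate that the AM registry is meant to eliminate looks roughly like the following TensorFlow 1.x snippet; the host names and task assignment are hypothetical, and in the proposed framework they would be discovered through the ApplicationMaster rather than hard-coded:

```python
import tensorflow as tf

# Manual cluster configuration that every node of a distributed TF 1.x job
# needs; the thesis's framework fills these values in via the AM registry.
cluster = tf.train.ClusterSpec({
    "ps":     ["node1.example.com:2222"],
    "worker": ["node2.example.com:2222", "node3.example.com:2222"],
})
server = tf.train.Server(cluster, job_name="ps", task_index=0)
server.join()  # a parameter server simply serves variables until the job ends
```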
5

Lienhard, Jasper Z. (Jasper Zebulon). "What is measured is managed : statistical analysis of compositional data towards improved materials recovery." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98661.

Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 35-36).
As materials consumption increases globally, minimizing the end-of-life impact of solid waste has become a critical challenge. Cost-effective methods of quantifying and tracking municipal solid waste contents and disposal processes are necessary to drive and track increases in material recovery and recycling. This work presents an algorithm for estimating the average quantity and composition of municipal waste produced by individual locations. Mass fraction confidence intervals for different types of waste were calculated from data collected by sorting and weighing waste samples from municipal sites. This algorithm recognizes the compositional nature of mass fraction waste data. The algorithm developed in this work also evaluated the value of additional waste samples in refining mass fraction confidence intervals. Additionally, a greenhouse gas emissions model compared carbon dioxide emissions for different disposal methods of waste, in particular landfilling and recycling, based on the waste stream. This allowed for identification of recycling opportunities based on carbon dioxide emission savings from offsetting the need for primary materials extraction. Casework was conducted with this methodology using site-specific waste audit data from industry. The waste streams and carbon dioxide emissions of three categories of municipal waste producers, retail, commercial, and industrial, were compared. Paper and plastic products, whose mass fraction averages ranged from 40% to 52% and 26% to 29%, respectively, dominated the waste streams of these three industries. Average carbon dioxide emissions in each of these three industries ranged from 2.18 kg of CO₂ to 2.5 kg of CO₂ per kilogram of waste thrown away. On average, Americans throw away about 2 kilograms per person per day of solid waste.
by Jasper Z. Lienhard.
S.B.
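The thesis's algorithm is not reproduced here; a minimal sketch of the underlying idea, bootstrap confidence intervals for average mass fractions computed from per-sample sorted weights (all numbers below are invented), might be:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical audit data: each row is one waste sample, columns are sorted
# category weights in kg (paper, plastic, other).
samples = np.array([[12.0, 7.5, 6.0],
                    [10.5, 8.0, 4.5],
                    [14.0, 6.5, 7.0],
                    [11.0, 9.0, 5.0]])

def mass_fractions(rows):
    totals = rows.sum(axis=1, keepdims=True)
    return (rows / totals).mean(axis=0)   # average composition across samples

boots = np.array([
    mass_fractions(samples[rng.integers(0, len(samples), len(samples))])
    for _ in range(2000)])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
print(mass_fractions(samples), lo, hi)    # point estimate and 95% interval
```

Re-running such a bootstrap with one extra sample row shows how much the intervals tighten, which is essentially how the value of an additional waste sample can be judged.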
6

Nilsson, Maximiliam, and Gusten Hansson. "Are Mutual Fund Managers’ Compensation Reasonable In Relation To Their Contributions? : - A study regarding actively managed mutual funds." Thesis, Linnéuniversitetet, Institutionen för nationalekonomi och statistik (NS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96945.

Abstract:
This thesis investigates fund managers' salaries in relation to their contributions. The study is conducted on the Swedish fund market over a five-year period, 2014-2018, and includes 332 funds. The results show a positive relation between salaries and risk-adjusted performance. They also show that fund managers are able to outperform the market on average, which, according to the efficient market hypothesis, should not be possible to do systematically over time. Salary also turns out to have a positive relationship with assets under management, which indicates that fund managers are employed and compensated for more reasons than generating a high return, namely to contribute to more significant inflows of cash to the fund company. Interpretations of fund managers' salaries are primarily linked to agency theory and the economics of superstars. The agency problem takes a different form in the fund industry since the setting is two-fold. It could be mitigated by implementing a performance-based compensation structure that aligns the ambitions of investors, management and fund managers. The results show signs that performance-based salaries are present in the fund industry. A fund manager's salary is assumed to be based on his or her skill, but could also reflect an individual's stardom. To conclude, the thesis states that fund managers deserve their salaries, which in relative terms are fairly high, since they procure additional benefits for the fund company.
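The abstract does not state the regression specification; a stylized version of the reported relation, salary against risk-adjusted performance and assets under management, could be estimated as follows (the file and column names are hypothetical):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel of the 332 Swedish funds over 2014-2018.
df = pd.read_csv("fund_panel.csv")
model = smf.ols("log_salary ~ risk_adjusted_return + log_aum", data=df).fit()
print(model.summary())  # the thesis reports both slopes as positive
```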
7

Salim, Christian. "Data Reduction based energy-efficient approaches for secure priority-based managed wireless video sensor networks." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD052/document.

Abstract:
The huge amount of data in Wireless Video Sensor Networks (WVSNs), handled by tiny, resource-limited sensor nodes, increases the energy and bandwidth consumption challenges. Controlling the network is one of the challenges in WVSNs due to the huge number of images sent at the same time from the sensors to the coordinator. In this thesis, several contributions have been made to overcome these problems, each concentrating on one or two challenges, as follows. In the first contribution, to reduce energy consumption, a new approach for data aggregation in WVSNs based on shot similarity functions is proposed. It is deployed on two levels: the video-sensor node level and the coordinator level. At the sensor node level, we propose a frame rate adaptation technique and a similarity function to reduce the number of frames sensed by the sensor nodes and sent to the coordinator. At the coordinator level, after receiving shots from different neighboring sensor nodes, the similarity between these shots is computed to eliminate redundancies. In the second contribution, some processing and analysis are added based on the similarity between frames at the sensor-node level, so that only the important frames are sent to the coordinator. Kinematic functions are defined to predict the next step of the intrusion and to schedule the monitoring system accordingly. In the third contribution, concerning the transmission phase at the sensor-node level, a new algorithm to extract the differences between two images is proposed. This contribution also takes the security challenge into account by adapting an efficient ciphering algorithm at the sensor node level. In the last contribution, to avoid slower detection of intrusions leading to slower reactions from the coordinator, a MAC-layer protocol based on the S-MAC protocol has been proposed to control the network. This solution consists of adding a priority bit to the S-MAC protocol to give priority to critical data.
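The similarity function itself is not given in the abstract; one simple function consistent with the description, dropping frames at the sensor node when they are too similar to the last transmitted frame, can be sketched as follows (the pixel tolerance and threshold are invented):

```python
import numpy as np

def similarity(a, b):
    """Fraction of pixels that are near-identical between two grayscale frames."""
    return np.mean(np.abs(a.astype(int) - b.astype(int)) < 10)

def should_send(frame, last_sent, threshold=0.9):
    # Transmit a frame only if it is less than 90% similar to the last frame
    # actually sent; redundant frames are dropped at the node to save energy
    # and bandwidth, as the aggregation approach above intends.
    return last_sent is None or similarity(frame, last_sent) < threshold
```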
8

Muthuswamy, Sunil. "System implementation of a real-time, content based application router for a managed publish-subscribe system." Online access for everyone, 2008. http://www.dissertations.wsu.edu/Thesis/Summer2008/S_Muthuswamy_080408.pdf.

9

Luong, Johannes [Verfasser], Wolfgang [Gutachter] Lehner, and Ziawasch [Gutachter] Abedjan. "A Common Programming Interface for Managed Heterogeneous Data Analysis / Johannes Luong ; Gutachter: Wolfgang Lehner, Ziawasch Abedjan." Dresden : Technische Universität Dresden, 2021. http://d-nb.info/1238140599/34.

10

Boland, Samuel James. "Hydrologic and biogeochemical signatures in intensively managed catchments: data synthesis and development of a passive surfacewater quality sampler." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/1205.

Abstract:
This study presents a conjunctive data synthesis and technology development approach to aid in enhancing understanding of the scaling behavior of water flow and nutrient transport in intensively managed agricultural catchments. Anthropogenic modifications to the landscape, with agricultural activities being a primary driver, have resulted in significant alterations to hydrologic and biogeochemical cycles. Significant research has been directed towards understanding and predicting the changes in these cycles in an effort to mitigate the associated adverse effects. Typical modeling efforts suffer from scaling issues associated with heterogeneities that arise at the catchment scale. New parsimonious approaches that rely on emergent patterns in data have been proposed to aid current modeling efforts. This study adopts a data synthesis approach to identify emergent patterns in hydrologic and nitrogen solute behavior in the context of agricultural activities at the catchment scale. The results of the synthesis indicate a strong anthropogenic signature in agricultural landscapes through (a) decrease in variability in the streamflow distribution with increase in the proportion of the catchment that is artificially drained and (b) relatively low variability in nitrogen concentration relative to discharge. Due to the dependence of such data synthesis methods on reliable data, a new method of data collection, through the use of an innovative passive sampling device, is developed to aid in future data synthesis and subsequent modeling efforts. Initial laboratory studies towards the development of the device achieved in this thesis indicate its ability to capture flow-averaged solute concentration over a specified deployment period. Future work involves testing the device under various field deployment conditions. The relatively low cost of the device would enable the estimation of spatially distributed flow-averaged concentrations that would complement existing costlier measurement methods, and significantly aid future modeling efforts and management decisions.
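The quantity the passive sampler is designed to capture, a flow-averaged solute concentration over the deployment period, is conventionally defined (in standard notation, not necessarily the thesis's) as

```latex
\bar{C}_{fw}
  = \frac{\int_{t_0}^{t_1} Q(t)\,C(t)\,dt}{\int_{t_0}^{t_1} Q(t)\,dt}
  \approx \frac{\sum_i Q_i\,C_i\,\Delta t_i}{\sum_i Q_i\,\Delta t_i},
```

where Q is discharge and C the solute concentration; a single low-cost deployment thus stands in for the many discrete samples the summation would otherwise require.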
11

Floriano, Sanchez Sergio. "A Self-organized Wireless Sensor Network (WSN) for a Home-event Managed System : Design of a cost efficient 6LoWPAN-USB Gateway with RFID security." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186384.

Abstract:
Wireless Sensor Networks (WSNs) have existed for many years in industry applications for different purposes, but their use has not been fully extended to global consumers. Sensor networks have lately proved greatly helpful to people in everyday life, especially in home automation applications for monitoring events, security, and control of devices and different elements in the house using actuators. Among the main barriers to overcome in order to increase their popularity and achieve worldwide deployment are cost and integration with other networks. This thesis investigates the most appropriate choices for avoiding those impediments from a hardware and software design perspective, seeking a cost-efficient solution for the implementation of a simple and scalable wireless sensor network. The present work studies the elements that form part of a constrained network and focuses on the design by analysing several network protocol alternatives, radio transmission mechanisms, hardware devices and software implementations. Once an optimal solution is found, the main target of this document is the construction of a gateway board that starts and coordinates a sensor network, including the development of an application that manages the sensors. The network is designed to be compliant with the TCP/IP stack by means of 6LoWPAN, an adaptation-layer protocol used for compressing IPv6 headers over IEEE 802.15.4 radio links in constrained networks. In addition, a small implementation of CoAP (Constrained Application Protocol) is developed that allows interoperability with the sensor nodes on the application layer, much as HTTP does in IP networks. The controller device (gateway) acts as a client for the remote sensor devices (nodes), which behave as servers in the CoAP application. The gateway exchanges data and is managed from outside the WSN through a USB interface that can be connected to a computer. Security mechanisms are also considered, providing packet encryption and a method for the identification of nodes. The authorization of new nodes entering the network is performed by an RFID reader connected to the gateway. An RFID tag with stored authentication information is attached to each sensor node; the gateway reads that information through the RFID modules and handles it internally to grant access to the node. The implemented gateway and the conclusions of the study show that inexpensive, self-managed, scalable WSNs with a robust security mechanism can be achieved and easily deployed. The work presented in this document is part of a larger project that also includes the design of sensor boards and the acquisition and analysis of sensor data; these works are mentioned and referenced in the related parts of this text.
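As an illustration of the gateway-as-client pattern the thesis describes, a minimal CoAP GET with the aiocoap Python library (the node address and resource path are hypothetical) might read:

```python
import asyncio
from aiocoap import Context, Message, GET

async def read_sensor(uri):
    # The gateway acts as the CoAP client; each sensor node runs a CoAP server.
    ctx = await Context.create_client_context()
    response = await ctx.request(Message(code=GET, uri=uri)).response
    return response.payload

payload = asyncio.run(read_sensor("coap://[fd00::ab:1]/sensors/temperature"))
print(payload.decode())
```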
12

Galletti, Thomas. "Data breach - services for threat detection, analysis and response." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7026/.

Abstract:
In the world of information security, technologies evolve to keep up with threats. Prevention cannot be dispensed with, but one must accept that no barrier will prove impenetrable, and that detection, combined with a prompt response, represents an extremely critical line of defense, yet the only truly practicable one for gaining as much time as possible and limiting the damage. We therefore introduce a new operational model, composed of procedures capable of meeting the new challenges that malware constantly presents while relieving IT departments of onerous and increasingly complex activities, optimizing their communication and response processes.
13

Miwa, Masato. "Physical and Hydrologic Responses of an Intensively Managed Loblolly Pine Plantation to Forest Harvesting and Site Preparation." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/29049.

Abstract:
The Southeastern Lower Coastal Plain wet pine flats include thousands of acres of jurisdictional wetlands that are economically, socially, and environmentally important. These highly productive forests have been intensively managed as pine plantations for the past few decades. More recently, harvesting and site preparation practices have become a concern among natural resource managers because intensive forestry practices may alter soil physical properties and site hydrology. These alterations could decrease seedling survival, growth, and future site productivity. However, the effects of soil disturbance on long-term site productivity and the effects of amelioration techniques on site hydrology are uncertain. The overall objectives of this study were (1) to characterize disturbed forest soil morphology and physical properties, (2) to assess their impact on the processes that control site hydrology and site productivity, (3) to determine effects of harvesting and site preparation on site hydrology, specifically on the overall hydrological balance and on spatial and temporal patterns of surface water storage. The study site is located in an intensively managed loblolly pine (Pinus taeda L.) plantation in the lower coastal plain of South Carolina. This study was established in winter 1991, and dry- and wet-weather harvesting treatments were installed in summer 1993 and winter 1994, respectively. Bedding and mole channel/bedding treatments were installed in both dry- and wet-harvested plots in fall 1995. Soil profiles were described for a recently disturbed, deeply-rutted area, and 2-year-old deeply-rutted and churned areas, bedded and undisturbed areas. Intact soil core samples and composite loose soil samples were collected from each morphological section for soil physical characterizations. Automated weather station and wells were used to collect continuous climatic and surface water level data since 1996. Surface water levels were monitored monthly on a 20 x 20 m grid of 1-m wells since 1992. Total groundwater heads were determined from differential piezometer measurements at high and low elevation places in each treatment plot. Soil profile descriptions and soil physical property measurements indicated that significant amounts of organic debris were incorporated into the surface horizons, and subsurface soil horizons showed significant soil structural changes and increased redoximorphic features caused by soil disturbance. The disturbed soil layers in recently created traffic ruts consisted of exposed and severely disturbed subsurface soils, but this layer was naturally ameliorated 2 years after the disturbance. Bedding site preparation had little amelioration effects on the physical properties of surface soil horizons because the surface horizons already had some incorporation of organic debris. Overall, the main consequence of bedding in a disturbed wet site was to increase the aerated soil volume. The bedding appeared to have little effect on disturbed subsurface horizons. Groundwater head in the study site was constantly higher than -25 cm during the study period, which caused groundwater inflow when the surface water level was low. Frequent fluctuation of the surface water level and constant water supply from the groundwater probably explain the high productivity of the study site. Results of the annual water balance showed that surface soil water storage changes were very small, and annual precipitation and potential evapotranspiration were approximately equal. 
Silvicultural practices and minor topography on the study site had significant effects on the water balance because they influenced surface water level. Surface water hydraulic gradient evaluation and multivariate cluster analysis indicated that micro-site hydrology and water flow patterns were significantly altered by wet-weather harvesting and bedding site preparation, but overall site hydrology was not altered. Evaluation of predicted surface water level indicated that micro-topography and precipitation patterns had significant influences on surface water levels during the site establishment period. These results revealed that the hydrologic components of wetland delineation are complex in the wet pine flatwoods.
Ph. D.
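The annual water balance referred to above is the standard identity (generic notation, not the dissertation's):

```latex
P = ET + R + G_{net} + \Delta S
```

with precipitation, evapotranspiration, runoff, net groundwater exchange and soil water storage change as the terms; the reported findings amount to a negligible storage term and precipitation approximately balancing potential evapotranspiration.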
14

Colley, Mary Sue Huckaby. "Assessing the Integration of Technology into the Academic Administrative Environment: College Administrators and Microcomputers." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc331397/.

Abstract:
This study was conducted to determine the administrative functions that community college academic administrators perform with microcomputers; to identify demographic characteristics that distinguish administrators who rate their overall use of the microcomputer higher than others; and to ascertain whether the importance placed on (1) microcomputer uses, (2) computer training, and (3) non-training conditions affecting computer use differed from the perceived current uses, training, and adequacy of conditions. Data for this study were collected through a survey instrument that was devised and evaluated for use in the study. The survey instrument was delivered during the fall 1984 semester to the forty-two division chairs serving at the seven colleges that comprise the Dallas County Community College District. Thirty-five division chairs responded to the survey, for an 83.33 per cent return rate, and thirty-four of the survey forms returned were usable for analysis.
15

Adhikari, Ramesh. "Two Essays in Finance: “Selection Biases and Long-run Abnormal Returns” And “The Impact of Financialization on the Benefits of Incorporating Commodity Futures in Actively Managed Portfolios”." ScholarWorks@UNO, 2015. http://scholarworks.uno.edu/td/2050.

Abstract:
This dissertation consists of two essays. The first essay investigates the implications of researchers' data requirements for the risk-adjusted returns of firms. Using monthly CRSP data from 1925 to 2013, I present evidence that firms which survive longer have higher average returns and lower standard deviations of annualized returns than firms which do not. I further demonstrate that there is a positive relation between firms' survival and average performance. In order to account for the positive correlation between survival and average performance, I model the relation of survival and pricing errors using a Farlie-Gumbel-Morgenstern joint distribution function and fit the resulting moment conditions to the data. The results show that even a low correlation between firm survival time and pricing errors can lead to a much higher correlation between the survival time and average pricing errors. Failure to adjust for this data selection bias can result in over- or underestimates of abnormal returns by 5.73% in studies that require at least five years of returns data. The second essay examines the diversification benefits of commodity futures portfolios in light of the rapid increase in investor participation in the commodity futures market since 2000. Many actively managed portfolios outperform traditional buy-and-hold portfolios for the sample period from January 1986 to October 2013. The evidence documented through a traditional intersection test and a stochastic discount factor based spanning test indicates that financialization has reduced the segmentation of the commodity market from the equity and bond markets and has increased the riskiness of investing in commodity futures markets. However, the diversifying properties of commodity portfolios have not disappeared despite the increased correlation between commodity portfolio returns and equity index returns.
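For reference, the Farlie-Gumbel-Morgenstern family named above has the standard copula form

```latex
C_\theta(u, v) = uv\,\bigl[1 + \theta\,(1-u)(1-v)\bigr], \qquad |\theta| \le 1,
```

a family that only admits modest dependence, which fits the essay's point that even a low correlation between survival time and pricing errors can translate into a much larger correlation with average pricing errors.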
16

Omar, Ebrahim. "Educators' access, training and use of computer-based technology at selected primary schools in the Cape Town suburb of Athlone, Western Cape." Thesis, University of the Western Cape, 2003. http://etd.uwc.ac.za/index.php?module=etd&amp.

Abstract:
This research study determines designated primary school educators' use of computer technology for accomplishing teaching-related tasks such as using the computer to create instructional material; administrative record keeping; and accessing information via CD-ROM and the Internet for best-practice teaching, model lesson plans and e-mail communication. In addition, the research also investigates factors influencing designated primary schools' ability to become ICT ready and the purposes for which primary school educators use computer technology.
17

Duarte, David. "A profile of changes in vehicle characteristics following the I-85 HOV-to-HOT conversion." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47689.

Abstract:
A 15.5-mile portion of the I-85 high-occupancy vehicle (HOV) lane in the metropolitan area of Atlanta, GA was converted to a high-occupancy toll (HOT) lane as part of a federal demonstration project designed to provide a reliable travel option through this congested corridor. Results from the I-85 demonstration project provided insight into the results that may follow the Georgia Department of Transportation's planned implementation of a $16 billion HOT lane network along metropolitan Atlanta's other major roadways [2]. To evaluate the impacts of the conversion, it was necessary to measure changes in corridor travel speed, reliability, vehicle throughput, passenger throughput, lane weaving, and user demographics. To measure such performance, a monitoring project, led by the Georgia Institute of Technology collected various forms of data through on-site field deployments, GDOT video, and cooperation from the State Road and Toll Authority (SRTA). Changes in the HOT lane's speed, reliability or other performance measure can affect the demographic and vehicle characteristics of those who utilize the corridor. The purpose of this particular study was to analyze the changes to the vehicle characteristics by comparing vehicle occupancy, vehicle classifications, and vehicle registration data to their counterparts from before the HOV-to-HOT conversion. As part of the monitoring project, the Georgia Tech research team organized a two-year deployment effort to collect data along the corridor during morning and afternoon peak hours. One year of data collection occurred before the conversion date to establish a control and a basis from which to compare any changes. The second year of data collection occurred after the conversion to track those changes and observe the progress of the lane's performance. While on-site, researchers collected data elements including visually-observed vehicle occupancy, license plate numbers, and vehicle classification [25]. The research team obtained vehicle records by submitting the license plate tag entries to a registration database [26]. In previous work, vehicle occupancy data were collected independently of license plate records used to establish the commuter shed. For the analyses reported in this thesis, license plate data and occupancy data were collected concurrently, providing a link between occupancy records of specific vehicles and relevant demographic characteristics based upon census data. The vehicle records also provided characteristics of the users' vehicles (light-duty vehicle vs. sport utility vehicle, model year, etc.) that the researchers aggregated to identify general trends in fleet characteristics. The analysis reported in this thesis focuses on identifying changes in vehicle characteristics that resulted from the HOV-to-HOT conversion. The data collected from post-conversion are compared to pre-conversion data, revealing changes in vehicle characteristics and occupancy distributions that most likely resulted from the implementation of the HOT lane. Plausible reasons affecting the vehicle characteristics alterations will be identified and further demographic research will enhance the data currently available to better pinpoint the cause and effect relationship between implementation and the current status of the I-85 corridor. Preliminary data collection outliers were identified by using vehicle occupancy data. However, future analysis will reveal the degree of their impact on the project as a whole. 
Matched occupancy and license plate data revealed vehicle characteristics for HOT lane users, as well as indications that the data collectors were largely synchronized when concurrently collecting data, supporting the validity of the data collection methods. Chapter two explains why HOT lanes were sought out to replace I-85's HOV lanes, provides details on how the HOT lanes function, and describes the role the Georgia Institute of Technology played in the assessment of the HOV-to-HOT conversion. Chapter three presents the methodologies used to complete this document, while chapter four provides results and analysis for the one-year periods before and after the conversion.
18

Smith, Katie S. "A profile of HOV lane vehicle characteristics on I-85 prior to HOV-to-HOT conversion." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42923.

Abstract:
The conversion of high-occupancy vehicle (HOV) lanes to high-occupancy toll (HOT) lanes is currently being implemented in metro Atlanta on a demonstration basis and is under consideration for more widespread adoption throughout the metro region. Further conversion of HOV lanes to HOT lanes is a major policy decision that depends on knowledge of the likely impacts, including the equity of the new HOT lane. Rather than estimating these impacts using modeling or surveys, this study collects revealed preference data in the form of observed vehicle license plate data and vehicle occupancy data from users of the HOV corridor. Building on a methodology created in Spring 2011, researchers created a new methodology for matching license plate data to vehicle occupancy data that required extensive post-processing of the data. The new methodology also presented an opportunity to take an in-depth look at errors in both occupancy and license plate data (in terms of data collection efforts, processing, and the vehicle registration database). Characteristics of individual vehicles were determined from vehicle registration records associated with the license plate data collected during AM and PM peak periods immediately prior to the HOV lanes' conversion to HOT lanes. More than 70,000 individual vehicle license plates were collected for analysis, and over 3,500 records were matched to occupancy values. Analysis of these data has shown that government and commercial vehicles were more prevalent in the HOV lane, while hybrid and alternative fuel vehicles were much less common in either lane than expected. Vehicle occupancy data from the first four quarters of data collection were used to create the distribution of occupancy on the HOV and general purpose lanes, and then the matched occupancy and license plate data were examined. A sensitivity analysis of the occupancy data established that the current use of uncertain occupancy values is acceptable and that bus and vanpool occupancy should be considered when determining the average occupancy of all vehicles on the HOV lane. Using a bootstrap analysis, vehicle values were compared to vehicle occupancy values, and the results found no correlation between vehicle value and vehicle occupancy. A conclusions section suggests possible impacts of the findings on policy decisions as Georgia considers expanding the HOT network. Further research using these data, and additional data that will be collected after the HOT lane opens, will include emissions modeling and a study of changes in vehicle characteristics associated with the HOT lane conversion.
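A bootstrap of the kind described, resampling matched (vehicle value, occupancy) pairs to interval-estimate their correlation, can be sketched as follows (the data file and column names are hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("matched_records.csv")          # hypothetical matched data
pairs = df[["vehicle_value", "occupancy"]].to_numpy()

rng = np.random.default_rng(42)
corrs = []
for _ in range(5000):
    s = pairs[rng.integers(0, len(pairs), len(pairs))]  # resample with replacement
    corrs.append(np.corrcoef(s[:, 0], s[:, 1])[0, 1])
lo, hi = np.percentile(corrs, [2.5, 97.5])
print(lo, hi)  # an interval containing 0 supports the "no correlation" finding
```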
19

Abdiu, Daniel, Mikael Strandberg, and Martin Stridsberg. "The impact of a real-time IT-Logistics solution : Implementation effects and consequences." Thesis, Jönköping University, Jönköping International Business School, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-122.

Abstract:

Today's business market is highly competitive; therefore, companies need to be constantly updated and to change the way they operate their business in order to survive and remain competitive. The situation on today's market requires that companies have the ability to quickly respond to market changes and new customer demands within short product lifecycles. In order to deal with this new market situation, companies need to improve their integration with other companies within their business. This integration facilitates the companies' ability to quickly adapt to new market situations and survive on a fast-changing market. One of the main underlying concepts of this collaborative commerce is Supply Chain Management (SCM), which integrates and coordinates a company's processes both internally and externally. Information Technology (IT) can improve the effectiveness of SCM: IT solutions make business processes more effective and improve integration with other actors within the supply chain. The purpose of this thesis is to describe and explain the effects for businesses and the consequences for their processes when implementing a real-time IT-Logistics solution, together with identifying the critical success factors. The thesis has been conducted by studying theory regarding supply chain management, business renewal and implementation effects. Further, a case study has been conducted in which three actors were interviewed: a manufacturer (Volvo Powertrain), a subcontractor (Metallfabriken Ljunghäll AB) and a system developer (PipeChain). The analysis of the theoretical framework and the empirical research has contributed an identification of the major effects and consequences of implementing a real-time IT-Logistics solution. Some of the effects are inventory reduction, higher delivery accuracy, improved relations and increased flexibility. Examples of consequences these effects have caused are more accurate planning and production, effective production processes and an improved delivery process. Additionally, success factors for an implementation have been identified, such as mutual trust, understanding of change and evaluation.



20

Kelley, Antoinette Cutler. "The prevalence of computer programming in teacher education coursework: A California State University profile." CSUSB ScholarWorks, 1993. https://scholarworks.lib.csusb.edu/etd-project/662.

21

Miller, Sally Anne. "A guide for technology coordinators." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1273.

22

Altabba, Abdulrahman, and Lina Karlsson. "A framework for implementing the VMI model in an MRO partnership." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-21961.

Abstract:
Purpose - The purpose of this paper is to investigate the feasibility of implementing the Vendor Managed Inventory (VMI) model in an MRO (maintenance, repair, and operations) partnership, and highlight its potential economic, environmental, and organizational benefits, as well as limitations. Approach - First, a comprehensive literature review was conducted on fields relevant to VMI. Second, empirical data was gathered from a single exploratory case study with Momentum Industrial, and its customer Stora Enso. Semi-structured interviews were used to gather data from the case companies. Findings - Results suggest that VMI results in benefits for the supply chain in general, such as reduced administration and inventory costs, improved service levels, reduced information distortion, and improved relationship among partners. For the particular case of VMI in an MRO partnership, improved service levels can be obtained by a reduced risk of production downtime for the customer. Moreover, the implementation of VMI has potential environmental benefits, such as reduced paper use, and higher transportation fill rate. Limitations of implementing VMI include the difficulty in system integration, and information sharing. Trust could be a potential issue that limits information sharing amongst supply chain partners. Moreover, the difference in organizational cultures and policies of partners should be taken into consideration. Limitations - The study is limited to opinions from one MRO customer in the paper and packaging industry. Even though the questions asked to informants in Momentum and Stora Enso tackled benefits to MRO customers in general, a broader image could have been achieved by interviewing customers from different industries. Moreover, the case companies do not currently adopt VMI in their partnership, so the case study results are based on what they think would be the potential benefits and limitations of implementing VMI in an MRO partnership. Practical Implications - This paper can serve as a guideline for logistics managers who are considering VMI in an MRO partnership specifically, as it provides them with the benefits and limitations associated with VMI. More generally, any company considering VMI can also benefit from the theoretical framework presented.
23

Asgharzadeh, Shishavan Reza. "Nonlinear Estimation and Control with Application to Upstream Processes." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5291.

Abstract:
Subsea development and production of hydrocarbons is challenging due to remote and harsh conditions. Recent technology development with high speed communication to subsea and downhole equipment has created a new opportunity to both monitor and control abnormal or undesirable events with a proactive and preventative approach rather than a reactive approach. Two specific technology developments are high speed, long-distance fiber optic sensing for production and completion systems and wired pipe for drilling communications. Both of these communication systems offer unprecedented high speed and accurate sensing of equipment and processes that are susceptible to uncontrolled well situations, leaks, issues with flow assurance, structural integrity, and platform stability, as well as other critical monitoring and control issues. The scope of this dissertation is to design monitoring and control systems with new theoretical developments and practical applications. For estimators, a novel ℓ1-norm method is proposed that is less sensitive to data with outliers, noise, and drift in recovering the true value of unmeasured parameters. For controllers, a similar ℓ1-norm strategy is used to design optimal control strategies that utilize a comprehensive design with multivariate control and nonlinear dynamic optimization. A framework for solving large scale dynamic optimization problems with differential and algebraic equations is detailed for estimation and control. A first area of application is in fiber optic sensing and automation for subsea equipment. A post-installable fiber optic clamp is used to transmit structural information for a tension leg platform. A proposed controller automatically performs ballast operations that both stabilize the floating structure and minimize fatigue damage to the tendons that hold the structure in place. A second area of application is with managed pressure drilling with moving horizon estimation and nonlinear model predictive control. The purpose of this application is to maximize rate of drilling penetration, maintain pressure in the borehole, respond to unexpected gas influx, detect cuttings loading and pack-off, and better manage abnormal events with the drilling process through automation. The benefit of high speed data accessibility is quantified as well as the potential benefit from a combined control strategy versus separate controllers.
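The estimator's exact formulation is not given in the abstract; the generic shape of an ℓ1-norm horizon objective of this kind is

```latex
\min_{x,\;\theta} \;\; \sum_{k=1}^{N} \left\lVert y_k - \hat{y}_k \right\rVert_1
\qquad \text{subject to} \qquad 0 = f(\dot{x},\, x,\, u,\, \theta),
```

where the absolute-value penalty grows only linearly in each residual, which is why such estimators are less sensitive to outliers, noise and drift than squared-error (ℓ2) formulations.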
24

Fohlin, Johan. "Home Storage Manager." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17494.

25

Bocca, Jorge B. "A desk-top information manager." Thesis, University of Southampton, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357192.

26

Weir, Mitchell Drake. "Attitudes and Perceptions of Community College Educators Toward the Implementation of Computers for Administrative and Instructional Purposes." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc332373/.

Abstract:
This study examines the main research hypothesis that there is significant interaction between the effects of computer use/non-use and level of computer training among community college educators in the state of Texas regarding attitudes toward the implementation of administrative and instructional computing. A statewide survey was conducted with deans of instruction and full-time faculty members who represented the three academic transfer departments of natural/physical sciences, social science, and humanities/fine arts. Fifty-five deans of instruction and three hundred fifty-six faculty members participated in the study. A factor analysis of data from the questionnaires revealed four factors which were identified and labeled: Factor One: Computer Applications: Advantages and Disadvantages; Factor Two: Administrative Computer Applications: Advantages and Disadvantages; Factor Three: Apprehensions About Educational Computing; Factor Four: Situational Factors Associated With Computer Applications in Education. A 4x3x2 (professional position x level of computer training x level of computer experience) multivariate analysis of variance of both main and interaction effects was then performed within and across these factors.
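A 4x3x2 factorial MANOVA of the kind reported can be run with statsmodels roughly as follows; the file and column names are hypothetical, with the four dependent variables standing in for the four factor scores:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey_factors.csv")  # hypothetical scored survey data
mv = MANOVA.from_formula(
    "f1 + f2 + f3 + f4 ~ position * training * experience", data=df)
print(mv.mv_test())  # Wilks' lambda etc. for main and interaction effects
```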
APA, Harvard, Vancouver, ISO, and other styles
27

Rasmussen, Arthur N. "AN INTELLIGENT MANAGER FOR A DISTRIBUTED TELEMETRY SYSTEM." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/608865.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A number of efforts at NASA's Johnson Space Center are exploring ways of improving operational efficiency and effectiveness of telemetry data distribution. An important component of this is the Real-Time Data System project in the Shuttle Mission Control Center. This project's telemetry system is based on a network of engineering workstations that acquire, distribute, analyze, and display the data. Telemetry data is acquired and partially processed through a commercial programmable telemetry processor. The data is then transferred into workstations where the remaining decommutation, conversion and calibration steps are performed. The results are sent over the network to applications operating within end user workstations. This complex distributed environment is managed by PILOT, an intelligent system that monitors data flow and process integrity with the goal of providing a very high level of availability requiring minimal human involvement. PILOT is a rule-based expert system that oversees the operation of the system. It interacts with agents that operate in the local environment of each workstation and advises the local agents of system status and configuration. This enables each local agent to manage its local environment and provides a resource to which it can come with issues that need a global view for resolution. PILOT is implemented using a commercially available real-time expert system shell and operates in a heterogeneous set of hardware platforms.
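As a loose illustration of PILOT's advisory style (the real system is built on a commercial real-time expert system shell, so everything below is invented), a rule-based monitor can be reduced to a list of status-inspecting rules:

```python
# Toy rule-based monitor in the spirit of PILOT, not its actual rules.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Status:
    host: str
    data_rate_hz: float        # telemetry frames per second reaching the host
    decom_process_alive: bool  # local decommutation process health

Rule = Callable[[Status], Optional[str]]

rules: list[Rule] = [
    lambda s: f"{s.host}: restart decom process"
        if not s.decom_process_alive else None,
    lambda s: f"{s.host}: data flow stalled, re-route acquisition"
        if s.data_rate_hz < 1.0 else None,
]

def advise(snapshot: list[Status]) -> list[str]:
    """Run every rule against every workstation status; collect advisories."""
    return [msg for s in snapshot for rule in rules if (msg := rule(s))]

print(advise([Status("ws1", 0.0, True), Status("ws2", 25.0, False)]))
```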
APA, Harvard, Vancouver, ISO, and other styles
28

Roger, Kathleen Mary Louise. "A nursing workload manager for a patient data management system /." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61047.

Full text
Abstract:
This thesis presents the design and implementation of a Nursing Workload Manager module for a Patient Data Management System in an intensive care unit. The Nursing Workload Manager aids in the planning and documentation of the nurse's workload. It automates the generation of the nursing care plan and automatically assigns a score to the care plan based on a nursing workload measurement system. In the thesis a literature survey of patient data management systems, nursing workload measurement systems and system evaluation methods is presented. This is followed by an overview of the work environment of an intensive care unit. The functionality of the Nursing Workload Manager is described and details of the software environment and application implementation are discussed. Finally, the results of a user evaluation of the module are presented, and future work on the module is discussed.
APA, Harvard, Vancouver, ISO, and other styles
29

Lönneborg, Rickard. "Extending an MPEG-21 viewer to manage access rights." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.124836/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kakantousis, Theofilos. "Scaling YARN: A Distributed Resource Manager for Hadoop." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177200.

Full text
Abstract:
In recent years, there has been a growing need for computer systems that are capable of handling unprecedented amounts of data. To this end, Hadoop HDFS and Hadoop YARN have become the de facto standard for meeting demanding storage requirements and for managing applications that can process this data. Although YARN is a major advancement from its predecessor MapReduce in terms of scalability and fault-tolerance, its Resource Manager component that performs resource allocation introduces a potential single point of failure and a performance bottleneck due to its centralized architecture. This thesis presents a novel architecture in which the Resource Manager runs on a distributed network of stateless commodity machines as its state is migrated to MySQL Cluster, a relational write-scalable and highly available in-memory database. By doing so, the Resource Manager becomes more scalable as it can now run on multiple nodes as well as more fault-tolerant as arbitrary node failures do not result in state loss. In this work we implemented the proposed architecture for the Resource Tracker service which performs cluster node management for the Resource Manager. Experimental results validate the correctness of our proposal, demonstrate how it scales well by utilizing stateless Resource Manager machines and evaluate its performance in terms of request throughput, system resource and database utilization.
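The core architectural move, stateless managers with state externalized to a shared database, can be sketched as follows; sqlite3 stands in for MySQL Cluster, and the schema is invented:

```python
# Sketch of "stateless manager, external state": node-tracker state lives
# in a shared database, so any manager instance can handle any heartbeat
# and a crash of a manager node loses no state.
import sqlite3, time

db = sqlite3.connect("rm_state.db")
db.execute("""CREATE TABLE IF NOT EXISTS nodes
              (node_id TEXT PRIMARY KEY, last_heartbeat REAL)""")

def handle_heartbeat(node_id: str) -> None:
    """Any manager instance may run this; the state goes to the DB."""
    db.execute("INSERT OR REPLACE INTO nodes VALUES (?, ?)",
               (node_id, time.time()))
    db.commit()

def live_nodes(timeout_s: float = 10.0) -> list[str]:
    cutoff = time.time() - timeout_s
    rows = db.execute("SELECT node_id FROM nodes WHERE last_heartbeat > ?",
                      (cutoff,))
    return [r[0] for r in rows]

handle_heartbeat("worker-1")
print(live_nodes())
```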
APA, Harvard, Vancouver, ISO, and other styles
31

Wurtz, Joshua. "A geographic information system application to visualize and manage data." Kansas State University, 2015. http://hdl.handle.net/2097/19126.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Scott A. DeLoach
A geographic information system (GIS) allows an individual to map, model, query, and analyze large quantities of data from a database according to their spatial locations. This project uses the ArcGIS Java Software Development Kit (SDK) to visualize, manipulate, and comprehend large amounts of publicly available information relevant to a spatial location. The application developed uses a graphical user interface to examine the public data of Riley County, Kansas. The user is able to load shapefiles through the interface and then examine the mapped spatial locations. By examining a spatial location, the user is able to view the associated attribute information, manipulate it, and add additional attributes. Beyond viewing information at selected geometric locations, a user can also query the layer(s) to return the spatial locations that fit the query. These abilities can allow a user to understand and visualize patterns that they would not have been able to easily see from looking at the raw data. Increasing users' understanding of the environment they are working with improves their likelihood of success in their desired objectives.
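The load-query-inspect workflow described above was built with the ArcGIS Java SDK; as a rough analogue (with a placeholder shapefile path and invented attribute names), the same steps look like this in geopandas:

```python
# Rough geopandas analogue of the load/query/inspect workflow; the
# shapefile path and attribute names are placeholders.
import geopandas as gpd

parcels = gpd.read_file("riley_county_parcels.shp")  # load a layer
print(parcels.columns)                   # inspect available attributes

# Attribute query: keep only the spatial features matching a condition
residential = parcels[parcels["ZONING"] == "R-1"]

# Derive and store an additional attribute for the selected features
residential = residential.assign(area_m2=residential.geometry.area)
print(residential[["ZONING", "area_m2"]].head())
```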
APA, Harvard, Vancouver, ISO, and other styles
32

Francis, Alexandra Michelle. "REST API to Access and Manage Geospatial Pipeline Integrity Data." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1496.

Full text
Abstract:
Today’s economy and infrastructure are dependent on raw natural resources, like crude oil and natural gases, that are optimally transported through a network of hundreds of thousands of miles of pipelines throughout America[28]. A damaged pipe can negatively affect thousands of homes and businesses, so it is vital that they are monitored and quickly repaired[1]. Ideally, pipeline operators are able to detect damage before it occurs, but ensuring the integrity of the vast number of pipes is unrealistic and would take an impractical amount of time and manpower[1]. Natural disasters, like earthquakes, as well as construction are just two of the events that could potentially threaten the integrity of pipelines. Due to the diverse collection of data sources, the necessary geospatial data is scattered across different physical locations, stored in different formats, and owned by different organizations. Pipeline companies do not have the resources to manually gather all input factors to make a meaningful analysis of the land surrounding a pipe. Our solution to this problem involves creating a single, centralized system that can be queried to get all necessary geospatial data and related information in a standardized and desirable format. The service simplifies client-side computation time by allowing our system to find, ingest, parse, and store the data from potentially hundreds of repositories in varying formats. An online web service fulfills all of the requirements and allows for easy remote access to do critical analysis of the data through computer-based decision support systems (DSS). Our system, REST API for Pipeline Integrity Data (RAPID), is a multi-tenant REST API that utilizes the HTTP protocol to provide an online and intuitive set of functions for DSS. RAPID’s API allows DSS to access and manage data stored in a geospatial database with a supported Django web framework. Full documentation of the design and implementation of RAPID’s API is detailed in this thesis document, supplemented with some background and validation of the completed system.
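RAPID itself is built on Django, so the following Flask sketch with an in-memory store is only illustrative of the style of endpoint a DSS would call; the resource names and fields are invented:

```python
# Illustrative REST endpoints of the kind a DSS might query; not RAPID's
# actual API. In-memory store stands in for the geospatial database.
from flask import Flask, jsonify, request

app = Flask(__name__)
FEATURES = {1: {"id": 1, "type": "fault_line", "lat": 35.3, "lon": -120.7}}

@app.route("/features", methods=["GET"])
def list_features():
    # e.g. /features?type=fault_line filters by geospatial feature type
    kind = request.args.get("type")
    rows = [f for f in FEATURES.values() if kind in (None, f["type"])]
    return jsonify(rows)

@app.route("/features/<int:fid>", methods=["GET"])
def get_feature(fid):
    f = FEATURES.get(fid)
    return (jsonify(f), 200) if f else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run(port=8000)
```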
APA, Harvard, Vancouver, ISO, and other styles
33

Gureya, Daharewa David. "Self-trained Proactive Elasticity Manager for Cloud-based Storage Services." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187353.

Full text
Abstract:
The pay-as-you-go pricing model and the illusion of unlimited resources make cloud computing a conducive environment for the provision of elastic services, where different resources are dynamically requested and released in response to changes in demand. The benefit of elastic resource allocation in cloud systems is to minimize resource provisioning costs while meeting service level objectives (SLOs). With the emergence of elastic services, and more particularly elastic key-value stores that can scale horizontally by adding/removing servers, organizations perceive potential in being able to reduce the cost and complexity of large-scale Web 2.0 applications. A well-designed elasticity controller helps reduce the cost of hosting services using dynamic resource provisioning and, at the same time, does not compromise service quality. An elasticity controller often needs to be trained, either online or offline, in order to make it intelligent enough to decide on spawning or removing instances when workload increases or decreases. However, there are two main issues in the process of control model training. A significant amount of recent work trains the models offline and applies them to an online system. This approach may lead the elasticity controller to make inaccurate decisions, since not all parameters can be considered when building the model offline. Complete training of the model requires considerable effort, including modifying system setups and changing system configurations; worse, some models can include several dimensions of system parameters. To overcome these limitations, we present the design and evaluation of a self-trained proactive elasticity manager for cloud-based elastic key-value stores. Our elasticity controller uses online profiling and support vector machines (SVM) to provide a black-box performance model of an application’s SLO violation for a given resource demand. The model is dynamically updated to adapt to operating environment changes such as workload pattern variations, data rebalancing, changes in data size, etc. We have implemented and evaluated our controller using the Apache Cassandra key-value store in an OpenStack cloud environment. Our experiments with artificial workload traces show that our controller guarantees a high level of SLO commitment while keeping overall resource utilization optimal.
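The controller's decision core, an SVM performance model consulted before scaling, might look like the following sketch; the features, the toy labeling rule, and the training data are all invented, and the real controller trains and updates its model online:

```python
# Sketch of an SVM-backed scaling decision on synthetic data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# features: [requests_per_sec, num_servers]; label: 1 = SLO violated
X = rng.uniform([0, 3], [5000, 20], size=(300, 2))
y = (X[:, 0] / X[:, 1] > 400).astype(int)  # toy rule standing in for reality

model = SVC(kernel="rbf").fit(X, y)

def servers_needed(predicted_load: float, current: int, cap: int = 50) -> int:
    """Scale out until the model predicts no SLO violation (proactive step)."""
    for n in range(current, cap):
        if model.predict([[predicted_load, n]])[0] == 0:
            return n
    return cap

print(servers_needed(predicted_load=4000, current=5))
```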
APA, Harvard, Vancouver, ISO, and other styles
34

Yu, Hong. "A data-driven approach for personalized drama management." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53851.

Full text
Abstract:
An interactive narrative is a form of digital entertainment in which players can create or influence a dramatic storyline through actions, typically by assuming the role of a character in a fictional virtual world. Interactive narrative systems usually employ a drama manager (DM), an omniscient background agent that monitors the fictional world and determines what will happen next in the players' story experience. Prevailing approaches to drama management choose successive story plot points based on a set of criteria given by the game designers; in other words, the DM is a surrogate for the game designers. In this dissertation, I create a data-driven personalized drama manager that takes players' preferences into consideration. The personalized drama manager is capable of (1) modeling the players' preferences over successive plot points from the players' feedback; (2) guiding the players towards selected plot points without sacrificing the players' agency; and (3) choosing target successive plot points that simultaneously increase the players' story preference ratings and the probability of the players selecting those plot points. To address the first problem, I develop a collaborative filtering algorithm that takes into account the specific sequence (or history) of experienced plot points when modeling players' preferences for future plot points. Unlike traditional collaborative filtering algorithms that make one-shot recommendations of complete story artifacts (e.g., books, movies), the algorithm I develop is a sequential recommendation algorithm that makes every successive recommendation based on all previous recommendations. To address the second problem, I create a multi-option branching story graph that allows multiple options to point to each plot point. The personalized DM working in the multi-option branching story graph can influence the players to make choices that coincide with the trajectories selected by the DM, while giving the players full agency to make any selection that leads to any plot point in their own judgement. To address the third problem, the personalized DM models the probability of the players transitioning to each full-length story and selects target stories that achieve the highest expected preference ratings at every branching point in the story space. The personalized DM is implemented in an interactive narrative system built with choose-your-own-adventure stories. Human study results show that the personalized DM can achieve significantly higher preference ratings than non-personalized DMs or DMs with pre-defined player types, while preserving the players' sense of agency.
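The third contribution, choosing targets by expected preference, reduces at each branching point to maximizing the predicted rating weighted by the probability the player will actually take that branch. A toy version with invented numbers:

```python
# Toy expected-value target selection at one branching point.
candidates = {
    # plot_point: (predicted_rating, P(player selects it given DM guidance))
    "rescue":   (4.5, 0.50),
    "betrayal": (4.8, 0.30),
    "escape":   (3.9, 0.90),
}

def pick_target(options: dict[str, tuple[float, float]]) -> str:
    """Maximize rating * selection probability, not rating alone."""
    return max(options, key=lambda k: options[k][0] * options[k][1])

print(pick_target(candidates))  # 'escape': modest rating but likely chosen
```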
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Xiaobo. "Collaboration Instance Manager of UbiCollab 2008 : Collaboration Instance Synchronization and Management in P2P network." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9714.

Full text
Abstract:

This report presents my research on the Collaboration Instance Manager of the UbiCollab project. UbiCollab aims to be a platform for ubiquitous collaborative activities. The UbiCollab project develops a distributed collaborative platform that enables people in distributed spaces to collaborate ubiquitously with friends and colleagues. The Collaboration Instance Manager (CIM) is a core component of the UbiCollab platform that manages such collaborative activities. My research topics for CIM include P2P network development using JXME, data synchronization over this P2P network, and management of the synchronized data in a local file system. The result of my research is a CIM system deployed as an OSGi bundle, which users can employ for collaborative activities. The CIM system provides service-level data synchronization, so other modules and applications can handle data synchronization between each other without knowing the details of its implementation. For that purpose I first reviewed related theories of distributed systems, ubiquitous systems, mobile systems, and CSCW. After that review I investigated alternatives for developing such a system and chose the candidate technologies for my prototype. Secondly, I analyzed the requirements of UbiCollab and designed the prototype. Based on that design, I implemented and tested the CIM system against agreed common scenarios and developed a simple GUI to demonstrate its utility. Finally, I evaluated the system by analyzing system requirements and scenario criteria.

APA, Harvard, Vancouver, ISO, and other styles
36

Alexander, Kristy, Mike Bender, Rick Boisvert, and Mike Gibson. "Cost Benefit Analysis of Options to Manage E-2C Hawkeye Aircraft Technical Data." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/7067.

Full text
Abstract:
EMBA Project Report
EXECUTIVE SUMMARY: Program Management Aircraft Office 231 (PMA231) is tasked with providing “cradle to grave” acquisition support to the U.S. Navy’s fleet of E-2C Hawkeye airborne early warning aircraft. A major portion of this support centers on providing and updating integrated logistics support (ILS) elements, which include technical data. Effective and efficient management of aircraft technical data ensures that the Navy’s aviation maintenance personnel have the most accurate, up-to-date technical manuals available. Availability of these manuals forms a critical link in providing safe, full mission capable (FMC) aircraft ready for immediate tasking, as well as in ensuring the safety of the maintenance personnel. Source data for both E-2C and C-2A technical manuals is generated by the In-Service Support Center at Naval Air Station North Island (ISSC NI). This source data is then incorporated into the manuals and published for use. Incorporation of this validated source data forms the bulk of the technical data management process. Northrop Grumman Corporation (NGC), Bethpage, New York, currently manages E-2C technical publications; however, prior to 2007, many of these publications were maintained at ISSC NI. ISSC NI currently manages the C-2A technical publications. Given the vital nature of these technical publications, CAPT Gahagan, Program Manager PMA231, has tasked Pax River Consulting (PRC) to provide the program office with a cost benefit analysis based on two options: • Option 1: Retain E-2C technical publications management with Northrop Grumman Corporation. • Option 2: Transfer E-2C technical publication management to the E-2C/C-2A ISSC at NAS North Island, CA. PRC obtained data from both ISSC NI and NGC and evaluated the data in three key areas: quality, cost, and schedule. Based on these criteria, no quality advantage was found between NGC and ISSC NI; however, there was a clear advantage in favor of ISSC NI in both the cost and schedule areas. Based on this data analysis, PRC recommends that PMA231 transfer the management of the E-2C technical manuals to ISSC North Island.
APA, Harvard, Vancouver, ISO, and other styles
37

Sanchez, Jimmy Kraimer Martin Valverde. "Distributed data analysis over meteorological datasets using the actor model." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169958.

Full text
Abstract:
Because of the continuous and overwhelming growth of scientific data in the last few years, data-intensive analysis of this vast amount of scientific data is very important for extracting valuable scientific information. The GRIB (GRIdded Binary) scientific data format is widely used within the meteorological community to store historical meteorological data and weather forecast simulation results. However, current libraries for processing GRIB files do not perform the computation in a distributed environment. This situation limits the analytical capabilities of scientists who need to perform analysis on large data sets in order to obtain information in the shortest time possible using all available resources. In this context, this work presents an alternative for data processing in the GRIB format using the well-known Manager-Worker pattern, implemented with the Actor model provided by the Akka toolkit. We also compare our proposal with other mechanisms, such as round-robin, random, and adaptive load balancing, as well as with one of the main frameworks currently available for big data processing, Apache Spark. The methodology used considers several factors to evaluate the processing of the GRIB files. The experiments were conducted on a cluster on the Microsoft Azure platform. The results show that our proposal scales well as the number of worker nodes increases. Our work achieved better performance than the other mechanisms used for the comparison, particularly when eight worker virtual machines were used. Using metadata, our proposal achieved gains of 53.88%, 62.42%, 62.97%, 61.92%, 62.44% and 59.36% relative to round-robin, random, adaptive load balancing using CPU, JVM heap, and mixed metrics, and Apache Spark, respectively, in a scenario where a search criterion is applied to select 2 of the 27 total parameters found in the dataset used in the experiments.
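The thesis implements the Manager-Worker pattern with Akka actors on the JVM; the following Python multiprocessing sketch mirrors only the message flow, with a stand-in for the GRIB decoding step:

```python
# Language-neutral sketch of Manager-Worker over GRIB chunks; the real
# implementation uses Akka actors, and process_chunk is a fake decoder.
from multiprocessing import Pool

def process_chunk(chunk_id: int) -> tuple[int, float]:
    """Stand-in for decoding one GRIB message and extracting a parameter."""
    return chunk_id, float(chunk_id) * 0.5   # fake extracted value

if __name__ == "__main__":
    chunks = range(16)                  # the manager partitions the file
    with Pool(processes=4) as workers:  # workers pull tasks as they free up
        for cid, value in workers.imap_unordered(process_chunk, chunks):
            print(f"chunk {cid}: {value}")  # manager aggregates results
```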
APA, Harvard, Vancouver, ISO, and other styles
38

Fitzgerald, Amy Lynn. "An exercise in database customized programming to compare the Smart Data Manager and dBaseIII." Thesis, Kansas State University, 1985. http://hdl.handle.net/2097/9838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Braun, Sebastian, and Eicke Leffers. "Service Data Management : How data around services can help to manage services internally - a case study at the Volvo Group." Thesis, Linköpings universitet, Logistik- och kvalitetsutveckling, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122815.

Full text
Abstract:
Traditional manufacturing firms today are extending their product portfolios with services in order to broaden their offering and to strengthen their relationship with customers. New services are developed, old services are substituted and replaced, and existing services are adapted, while the total number of services tends to increase. As the number of services grows, so do the administrative work and complexity that come along with them. This thesis investigates how to handle the rising amount of data created internally around services and which service data need to be managed to describe a service. The objective is to find an approach that allows Volvo to focus on the development and adaptation of services rather than on administration of data around services, concentrating instead on value-creating work. Different types of metadata are identified that describe a service along its lifecycle, e.g. versions, lifecycle stages, dependencies on other services, etc. The approach is summarized in a framework that illustrates which data are needed and how this information can be managed. Besides the literature study, a benchmarking study is conducted within three other globally operating companies that have made their way from manufacturer to service provider or are still on the way to becoming one. The objective is to analyze their ways of handling service data, compare them to Volvo's, and use the gathered information to provide a recommendation that suits Volvo's way of working. The benchmarking study investigated the current status and future objectives of the firms through interviews with service experts. The results show great variation among the firms, including Volvo. While the third benchmarking firm has a mature way of dealing with services in its administration, Volvo and benchmarking firm 2 are at an early stage in the servitization process. Benchmarking firm 1, however, is at an intermediate stage that is strongly supported by an existing ERP system capable of registering service data. The Volvo Group acts in a strongly competitive market that demands fleet management services, maintenance agreements, and repair contracts in addition to the product offering. Lean, efficient service data management allows Volvo to compete successfully in this market: services can be developed more quickly by reusing existing modules, change tracking lets users see the evolution of a particular service, and the impact of a change can be estimated with a system that considers dependencies among services. A solution for improving service data management is presented in the framework developed in this thesis. It is one way to account for the growing amount of data and number of services and to simplify work with services through a software-based solution, both in the daily work environment and from a more holistic portfolio management perspective.
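One plausible shape for the per-service metadata the framework calls for (versions, lifecycle stage, dependencies) is sketched below; the field names are illustrative, not Volvo's actual schema:

```python
# Illustrative service metadata record with dependency-aware impact check.
from dataclasses import dataclass, field

@dataclass
class ServiceRecord:
    service_id: str
    version: str          # e.g. "2.1"
    lifecycle_stage: str  # e.g. "development" | "active" | "retired"
    depends_on: list[str] = field(default_factory=list)

    def impact_of_change(self, catalog: dict[str, "ServiceRecord"]) -> list[str]:
        """Which services in the catalog would a change to this one affect?"""
        return [s.service_id for s in catalog.values()
                if self.service_id in s.depends_on]

fleet = ServiceRecord("fleet-mgmt", "2.1", "active", depends_on=["telematics"])
tele = ServiceRecord("telematics", "1.4", "active")
catalog = {s.service_id: s for s in (fleet, tele)}
print(tele.impact_of_change(catalog))   # ['fleet-mgmt']
```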
APA, Harvard, Vancouver, ISO, and other styles
40

Campos, Carlos Rogério de Rezende. "A importância do Big Data e do CRM para o gestor de produto." Master's thesis, Instituto Superior de Economia e Gestão, 2018. http://hdl.handle.net/10400.5/17658.

Full text
Abstract:
Mestrado em Gestão e Estratégia Industrial
The recent evolution in technology has allowed an enormous amount and variety of data to become available to companies at a growing pace. Within this context, companies can draw on more information when reaching decisions. However, Big Data and CRM (Customer Relationship Management), and the systems used to work with this data, are still fairly recent, and companies are still trying to understand in which ways this data can add real value. Considering the variety of data, the profession of Product Manager was chosen for analysis, as this role interacts with different areas across the firm and is often considered a mini-CEO. Consequently, the question under focus here is "In what way do Big Data and CRM influence the decision-making process of the Product Manager?" This study aims to understand how important two data systems, Big Data and CRM, are to a Product Manager and how they contribute to the role's functions. Hence, an investigation was conducted to understand the research context and the processes being developed. This was achieved through qualitative research based on primary data obtained from semi-structured interviews with experienced professionals within the industry. Finally, the main results indicate the importance of companies' use of CRM systems and Big Data. The results show that data contribute to more informed decision making, thus allowing the Product Manager to make better decisions.
APA, Harvard, Vancouver, ISO, and other styles
41

Skuza, Patrik. "Exploring the perspectives of managers on data presentation in software analytics tools." Thesis, Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43087.

Full text
Abstract:
There is a lack of research on the perspectives of different managerial roles on data about software projects in software analytics tools, such as the perspectives of chief financial officers (CFOs), chief executive officers (CEOs) and compliance officers. Today, software analytics tools are mainly developed to address the needs of technical stakeholders such as developers, but research shows that there is potential in expanding this focus on technical users to also include higher-level stakeholders, such as managers. The goal of this study is to explore what managers working in software development organizations consider to be useful data to have about software projects in software analytics tools, as well as to examine how they want data about software projects to be presented to them in such tools. This study was done in four steps. First, a literature review was conducted. Second, a questionnaire was administered to four CFOs, one CEO and one compliance officer working in six different Swedish software development organizations. Third, semi-structured interviews were conducted with three CFOs, one CEO and one compliance officer working in five different Swedish software development organizations. Fourth, a visual prototype simulating a software analytics tool was constructed based on the data gathered from the interviews. The results of this study show that abstraction, limitation, and visualization of data about software projects, as well as presentation of useful data in software analytics tools that support the work tasks of managers, are helpful in addressing the perspectives and views of the target group.
APA, Harvard, Vancouver, ISO, and other styles
42

Fulton, Neale Leslie Aerospace &amp Mechanical Engineering Australian Defence Force Academy UNSW. "Regional airspace design: a structured systems engineering approach." Awarded by:University of New South Wales - Australian Defence Force Academy. School of Aerospace and Mechanical Engineering, 2002. http://handle.unsw.edu.au/1959.4/38722.

Full text
Abstract:
There have been almost fifteen years of political controversy surrounding changes to the rules and procedures by which aircraft conduct their flight within regional Australia. Decisions based on a predominantly heuristic (rule-of-thumb) approach to design have had many adverse consequences for the integrity of the proximity warning function. A sound mathematical model is required to establish this function on a mature engineering foundation. To achieve this, the proximity warning function has been investigated as a hybrid system. This approach recognises the dual nature of the design: aircraft dynamics give rise to continuous mathematical models, while the communication protocols controlling proximity require discrete mathematical approaches. Blending the two has yielded a deeper insight into the operational limitations and failure modes of this function. The presentation of the thesis follows a design thread through the function. It begins with a description of existing standards and implementations. Risk models are then developed. The pilot interface is recognised as a primary design constraint. Mathematical models are then developed to describe the topology of flow, proximity dynamics, and the scheduling constraints associated with the visual, voice, and data-link communications required by the proximity warning function. These analyses show that many aspects of design can be bounded by analytical formulae that bring new robustness to the design and resolve some of the misconceptions arising from often inaccurate perceptions of present airspace operations. Failure modes unaccounted for in existing designs are found to aggravate failure in the very situations in which the airspace design should be robust and should act to prevent collisions. In particular, there are divergences between the performance demanded by the system design and the performance the pilot is able to deliver. In some cases, these failures may be traced to policy decisions such as the service between Instrument Flight Rule and Visual Flight Rule category aircraft. On the basis of the conclusions of this research, a formal engineering review of the proximity warning function is required to assure containment of the likelihood of mid-air collision for all future operations.
APA, Harvard, Vancouver, ISO, and other styles
43

Park, Hee Yong. "Peer List Update Manager (PLUM) implementation in Open Computing Exchanging and Arbitration Network (OCEAN)." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000151.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains ix, 44 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
44

MacLean, Roger R. "A trans-disciplinary approach integrating farm system data to better manage and predict Striga infestations /." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38228.

Full text
Abstract:
The following research developed an approach and methodology to simultaneously gather and integrate social and natural science farm system data from developing countries into one database. The overall approach was based on Weber's theory of abstraction, which requires the identification of the broadest possible set of variables. The first step in understanding the farm system was to overview a number of key variables representing key farm components; the second step was to juxtapose and blend the various forms of data in linear form against a test variable of Striga infestation levels; the third step was to evaluate whether the amount of knowledge gained in predicting Striga infestation levels was statistically significant by cross-correlating soil nutrient levels, crop management approaches, farmers' perceptions of Striga infestation, and spatial distances; the fourth step was to use parametric and non-parametric analytical tools in conjunction with data compression to locate the best combination of parameters to better manage Striga. The final part of the process was to identify and integrate the crop, field, and social data into a profile of farmers who have the highest and lowest likelihood of being infested by Striga, using a soil nutrient concentration baseline as the indicator. The results showed that natural and social science data could be successfully combined and integrated, with statistically significant cross-correlations. These correlations indicate that specific spatial parameters combined with specific soil components, farmers' management, and crop placement could be used as predictors of Striga infestation levels. The farmers' perceptions could also be validated using natural science data.
APA, Harvard, Vancouver, ISO, and other styles
45

Conte, Simone Ivan. "The Sea of Stuff : a model to manage shared mutable data in a distributed environment." Thesis, University of St Andrews, 2019. http://hdl.handle.net/10023/16827.

Full text
Abstract:
Managing data is one of the main challenges in distributed systems and computer science in general. Data is created, shared, and managed across heterogeneous distributed systems of users, services, applications, and devices without a clear and comprehensive data model. This technological fragmentation and lack of a common data model result in a poor understanding of what data is, how it evolves over time, how it should be managed in a distributed system, and how it should be protected and shared. From a user perspective, for example, backing up data over multiple devices is a hard and error-prone process, or synchronising data with a cloud storage service can result in conflicts and unpredictable behaviours. This thesis identifies three challenges in data management: (1) how to extend the current data abstractions so that content, for example, is accessible irrespective of its location, versionable, and easy to distribute; (2) how to enable transparent data storage relative to locations, users, applications, and services; and (3) how to allow data owners to protect data against malicious users and automatically control content over a distributed system. These challenges are studied in detail in relation to the current state of the art and addressed throughout the rest of the thesis. The artefact of this work is the Sea of Stuff (SOS), a generic data model of immutable self-describing location-independent entities that allow the construction of a distributed system where data is accessible and organised irrespective of its location, easy to protect, and can be automatically managed according to a set of user-defined rules. The evaluation of this thesis demonstrates the viability of the SOS model for managing data in a distributed system and using user-defined rules to automatically manage data across multiple nodes.
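The flavor of the SOS model's immutable, location-independent entities can be conveyed with a toy content-addressed store (this is not the actual SOS implementation or API):

```python
# Toy content-addressed store: identity derives from content, so entities
# are immutable by construction, and a version chain is just more data.
import hashlib, json

store: dict[str, bytes] = {}

def put(data: bytes) -> str:
    guid = hashlib.sha256(data).hexdigest()  # self-describing identity
    store[guid] = data
    return guid

def new_version(content: bytes, previous=None) -> str:
    """A version is itself an immutable entity referencing its predecessor."""
    return put(json.dumps({
        "content": put(content),
        "previous": previous,
    }).encode())

v1 = new_version(b"draft", previous=None)
v2 = new_version(b"final", previous=v1)
print(v2, json.loads(store[v2]))
```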
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Kun. "THE RELATIONSHIP BETWEEN EMBEDDEDNESS AND ORGANIZATIONAL SOCIAL PERFORMANCE IN A COMMUNITY MENTAL HEALTH NETWORK UNDER MANAGED CARE." Diss., Tucson, Ariz. : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1211%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Niklasson, Karl, and Joakim Skog. "Prediktiv modellering av fotbollsspelares utveckling baserat på semifiktiv data." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-17796.

Full text
Abstract:
There is a need to find and recruit skilled players for a football club in a cost-effective way. The current process of relying on talent scouts is time-consuming and expensive. Automated data analysis can facilitate the search for the desired players. Since automated data analysis on semi-fictitious data has not previously been examined to any great extent, it is interesting to find out how well common data mining techniques work when applied to semi-fictitious data. The purpose of the study was to use quantitative experiments to create predictive models that forecast whether a football player will increase his market value in the future. The study also intended to find out whether, using semi-fictitious data, it was possible to create interpretable models that provide general insights into important attributes of football players in different positions. The research method of the study was quantitative research, as it is a method that values structure and objectivity, which was important for the study and its research questions. The research strategy used during the study was experimentation, which suited the quantitative data collection and data analysis well. Three experiments were performed in the study. The first experiment aimed to create, with as high performance as possible, classification models that predict whether a player will develop positively. The second experiment aimed to find out whether interpretable classification models could be created to draw general conclusions about the characteristics of football players. The third experiment aimed to find out which football players will develop positively in the future. The results of the first experiment show that the models perform well, which means that classification models can be created that predict whether a player will develop positively in the future. The results of the second experiment, however, show that interpretable models providing general insights into important attributes for specific positions could not be created, owing to the low performance of those models, which greatly reduced confidence in the decision trees. The third experiment yielded some interesting results that can be verified at the earliest at the end of 2014. Since the results of the study are positive and original, the implication is that football clubs should open their eyes to more quantitative approaches, such as data analysis, when it comes to player recruitment. Researchers can also benefit from the study, as it provides a foundation that can be extended in future studies.
Program: Systemarkitekturutbildningen
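The first experiment's classification task can be sketched as follows; the attributes, data, and algorithm below are invented stand-ins, and the study's actual semi-fictitious dataset and chosen models may differ:

```python
# Illustrative "will this player develop positively?" classifier on
# synthetic data; not the study's actual features or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.integers(16, 35, n),   # age
    rng.uniform(0, 100, n),    # pace rating
    rng.uniform(0, 100, n),    # passing rating
    rng.uniform(0, 3000, n),   # minutes played
]).astype(float)
y = ((X[:, 0] < 24) & (X[:, 2] > 60)).astype(int)  # toy "positive development"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # honest accuracy estimate
```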
APA, Harvard, Vancouver, ISO, and other styles
48

Beu, Jesse Garrett. "Design of heterogeneous coherence hierarchies using manager-client pairing." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47710.

Full text
Abstract:
Over the past ten years, the architecture community has witnessed the end of single-threaded performance scaling and a subsequent shift in focus toward multicore and manycore processing. While this is an exciting time for architects, with many new opportunities and design spaces to explore, this brings with it some new challenges. One area that is especially impacted is the memory subsystem. Specifically, the design, verification, and evaluation of cache coherence protocols becomes very challenging as cores become more numerous and more diverse. This dissertation examines these issues and presents Manager-Client Pairing as a solution to the challenges facing next-generation coherence protocol design. By defining a standardized coherence communication interface and permissions checking algorithm, Manager-Client Pairing enables coherence hierarchies to be constructed and evaluated quickly without the high design-cost previously associated with hierarchical composition. Further, Manager-Client Pairing also allows for verification composition, even in the presence of protocol heterogeneity. As a result, this rapid development of diverse protocols is ensured to be bug-free, enabling architects to focus on performance optimization, rather than debugging and correctness concerns, while comparing diverse coherence configurations for use in future heterogeneous systems.
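The composability idea, a fixed request/grant interface where a manager is itself a client of the level above, can be caricatured in a few lines; all details below are invented and the real pairing interface and permission semantics are far richer:

```python
# Hand-wavy sketch of hierarchical manager-client composition: every level
# speaks the same request interface, and a manager is also a client.
class Client:
    def __init__(self, manager=None):
        self.manager = manager
        self.permission = "none"   # none -> read -> write

    def acquire(self, level: str) -> None:
        if self.manager is not None:
            self.manager.request(self, level)  # standardized interface
        self.permission = level

class Manager(Client):             # a manager is also a client of its parent
    def __init__(self, manager=None):
        super().__init__(manager)
        self.children: list[Client] = []

    def request(self, child: Client, level: str) -> None:
        if level == "write":
            for c in self.children:   # invalidate the other sharers
                if c is not child:
                    c.permission = "none"
        self.acquire(level)           # escalate to the level above

root = Manager()                      # e.g. a directory protocol at the top
l2 = Manager(manager=root); root.children.append(l2)
core = Client(manager=l2); l2.children.append(core)
core.acquire("write")
print(core.permission, l2.permission)   # write write
```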
APA, Harvard, Vancouver, ISO, and other styles
49

Schmid, Wolfgang. "A farm package for MODFLOW-2000 simulation of irrigation demand and conjunctively managed surface-water and ground-water supply /." Diss., The University of Arizona, 2004. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_2004_287_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Schwarz, Lisa Kimberley. "Survival rate estimates of Florida manatees (Trichechus manatus latirostris) using carcass recovery data." Diss., Montana State University, 2007. http://etd.lib.montana.edu/etd/2007/schwarz/SchwarzL1207.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles