Dissertations / Theses on the topic 'Computing approach'

To see the other types of publications on this topic, follow the link: Computing approach.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Computing approach.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Constantinescu-Fuløp, Zoran. "A Desktop Grid Computing Approach for Scientific Computing and Visualization." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2191.

Full text
Abstract:

Scientific Computing is the collection of tools, techniques, and theories required to solve, on a computer, mathematical models of problems from science and engineering, and its main goal is to gain insight into such problems. Generally, it is difficult to understand or communicate information from complex or large datasets generated by Scientific Computing methods and techniques (computational simulations, complex experiments, observational instruments, etc.). Therefore, the support of Scientific Visualization is needed to provide the techniques, algorithms, and software tools required to extract and appropriately display important information from numerical data.

Usually, complex computational and visualization algorithms require large amounts of computational power. The computing power of a single desktop computer is insufficient for running such complex algorithms, and, traditionally, large parallel supercomputers or dedicated clusters were used for this job. However, very high initial investments and maintenance costs limit the availability of such systems. A more convenient solution, which is becoming more and more popular, is based on the use of non-dedicated desktop PCs in a Desktop Grid Computing environment. This is done by harnessing the idle CPU cycles, storage space, and other resources of networked computers to work together on a particularly computationally intensive application. The increasing power and communication bandwidth of desktop computers make this solution practical.

In a desktop grid system, the execution of an application is orchestrated by a central scheduler node, which distributes the tasks amongst the worker nodes and awaits the workers' results. An application only finishes when all tasks have been completed. The attractiveness of exploiting desktop grids is further reinforced by the fact that costs are highly distributed: every volunteer supports the costs of her own resources (hardware, power and Internet connection), while the benefiting entity provides the management infrastructure, namely network bandwidth, servers and management services, receiving in exchange massive and otherwise unaffordable computing power. The usefulness of desktop grid computing is not limited to major high-throughput public computing projects. Many institutions, ranging from academia to enterprises, hold vast numbers of desktop machines and could benefit from exploiting the idle cycles of their local machines.
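
To make the orchestration concrete, the following is a minimal sketch of the master-worker pattern described above, in Python; it is illustrative only and not QADPZ code, and all names in it are hypothetical.

```python
# Minimal master-worker sketch (illustrative only; not QADPZ code).
# A central scheduler puts work units on a queue, worker processes pull
# them, compute, and return results; the job ends when all tasks are done.
from multiprocessing import Process, Queue

def worker(tasks: Queue, results: Queue) -> None:
    while True:
        unit = tasks.get()
        if unit is None:                       # sentinel: no more work
            break
        task_id, payload = unit
        results.put((task_id, payload ** 2))   # stand-in for a real computation

def run_master(payloads, n_workers=4):
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for task_id, payload in enumerate(payloads):
        tasks.put((task_id, payload))
    for _ in procs:                            # one sentinel per worker
        tasks.put(None)
    collected = [results.get() for _ in payloads]   # the application finishes only
    for p in procs:                                  # when every task has returned
        p.join()
    return dict(collected)

if __name__ == "__main__":
    print(run_master(range(10)))
```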

In the work presented in this thesis, the central idea has been to provide a desktop grid computing framework and to prove its viability by testing it in some Scientific Computing and Visualization experiments. We present here QADPZ, an open source system for desktop grid computing that has been developed to meet the needs presented above. QADPZ enables users from a local network or the Internet to share their resources. It is a multi-platform, heterogeneous system, in which different computing resources from inside an organization can be used. It can also be used for volunteer computing, where the communication infrastructure is the Internet. QADPZ natively supports the following operating systems: Linux, Windows, MacOS and Unix variants. The reason behind natively supporting multiple operating systems, rather than only one (Unix or Windows, as other systems do), is that, in practice, such a limitation often severely restricts the usability of desktop grid computing.

QADPZ provides a flexible object-oriented software framework that makes it easy for programmers to write various applications, and for researchers to address issues such as adaptive parallelism, fault-tolerance, and scalability. The framework also supports the execution of legacy applications, which for various reasons cannot be rewritten, and that makes it suitable for other domains, such as business. It supports applications written in low-level programming languages such as C/C++ as well as in high-level languages (e.g. Lisp, Python, and Java), and provides the necessary mechanisms to use such applications in a computation. Consequently, users with various backgrounds can benefit from using QADPZ. The flexible object-oriented structure and the modularity allow easy improvements and further extensions to other programming languages.

We have developed a general-purpose runtime and an API to support new kinds of high performance computing applications, and therefore to benefit from the advantages offered by desktop grid computing. This API directly supports the C/C++ programming language. We have shown how distributed computing extends beyond the master-worker paradigm (typical for such systems) and provided QADPZ with an extended API that additionally supports lightweight tasks and parallel computing (using the message passing paradigm, MPI). This extends the range of supported applications to already existing MPI-based applications, e.g. parallel numerical solvers used in computational science, or parallel visualization algorithms.

Another restriction of existing systems, especially middleware-based ones, is that each resource provider needs to install a runtime module with administrator privileges. This poses some issues regarding data integrity and accessibility on providers' computers. The QADPZ system tries to overcome this by allowing the middleware module to run as a non-privileged user, even one with restricted access to the local system.

QADPZ also provides low-level optimizations, such as on-the-fly compression and encryption for communication. The user can choose from different algorithms depending on the application, both reducing the communication overhead imposed by large data transfers and preserving the privacy of the data. The system goes further by providing an experimental adaptive compression algorithm, which can transparently switch between algorithms to better serve the application. QADPZ supports two different protocols (UDP and TCP/IP) in order to improve the efficiency of communication.

The freely available source code allows flexible installation and modification according to the particular needs of research projects and institutions. In addition to being a very powerful tool for computationally intensive research, its open-source nature makes QADPZ a flexible educational platform for numerous small student projects in the areas of operating systems, distributed systems, mobile agents, parallel algorithms, etc. Open source software is also a natural choice for modern research because it effectively encourages integration, cooperation and the emergence of new ideas.

This thesis also proposes an improved conceptual model (based on the master-worker paradigm), which makes contributions in several directions: pull versus push of work-units, pipelining of work-units, sending more work-units at a time, an adaptive number of workers, an adaptive time-out interval for work-units, and multithreading. Through specific experiments, we have also demonstrated that the use of desktop grids should not be limited to master-worker applications, but can be extended to more fine-grained parallel Scientific Computing and Visualization applications. This thesis makes supplementary contributions: a hierarchical taxonomy of the main existing desktop grids, and an adaptive compression algorithm for remote visualization. QADPZ has also pioneered the autonomic computing approach for desktop grids and presents specific self-management features: self-knowledge, self-configuration, self-optimization and self-healing. It is worth mentioning that QADPZ has to date over a thousand users who have downloaded it (since July 2001, when it was uploaded to sourceforge.net), and many of them use it for their daily tasks (see the appendix). Many of the results have been published or are in the course of publication, as can be seen from the references.
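
One of the model's directions, the adaptive time-out interval for work-units, can be illustrated with a small sketch; the policy below (an exponential moving average of completion times) is an assumption chosen for illustration, not the formula used in the thesis.

```python
# Sketch of an adaptive work-unit time-out (illustrative; the thesis's actual
# policy is not reproduced here). The time-out tracks an exponential moving
# average and variance of observed completion times, so slow work-units are
# re-issued without waiting for a fixed worst-case interval.
class AdaptiveTimeout:
    def __init__(self, initial=30.0, alpha=0.2, k=3.0):
        self.mean = initial   # EMA of completion times (seconds)
        self.var = 0.0        # EMA of squared deviation
        self.alpha = alpha    # smoothing factor
        self.k = k            # tolerance in standard deviations

    def observe(self, completion_time: float) -> None:
        diff = completion_time - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)

    def timeout(self) -> float:
        return self.mean + self.k * self.var ** 0.5

tracker = AdaptiveTimeout()
for t in [12.0, 15.5, 11.8, 40.2, 13.1]:
    tracker.observe(t)
print(f"re-issue a work-unit after {tracker.timeout():.1f} s without a result")
```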

APA, Harvard, Vancouver, ISO, and other styles
2

Abukmail, Ahmed Ahed. "Pervasive computing approach to energy management." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Al-Shammaa, Mohammed. "Granular computing approach for intelligent classifier design." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13686.

Full text
Abstract:
Granular computing facilitates dealing with information by providing a theoretical framework in which information is treated as granules at different levels of granularity (different levels of specificity/abstraction). It aims to provide an abstract, explainable description of the data by forming granules that represent the features or the underlying structure of corresponding subsets of the data. In this thesis, a granular computing approach to the design of intelligent classification systems is proposed. The proposed approach is employed for different classification systems to investigate its efficiency. Fuzzy inference systems, neural networks, neuro-fuzzy systems and classifier ensembles are considered to evaluate the efficiency of the proposed approach. Each of the considered systems is designed using the proposed approach and its classification performance is evaluated and compared to that of the standard system. The proposed approach is based on constructing information granules from data at multiple levels of granularity. The granulation process is performed using a modified fuzzy c-means algorithm that takes the classification problem into account. Clustering is followed by a coarsening process that involves merging small clusters into large ones to form a lower granularity level. The resulting granules are used to build each of the considered binary classifiers in different settings and approaches. Granules produced by the proposed granulation method are used to build a fuzzy classifier for each granulation level or set of levels. The performance of the classifiers is evaluated using real-life data sets and measured by two classification performance measures: accuracy and area under the receiver operating characteristic curve. Experimental results show that fuzzy systems constructed using the proposed method achieve better classification performance. In addition, the proposed approach is used for the design of neural network classifiers. Granules resulting from one or more granulation levels are used to train the classifiers at different levels of specificity/abstraction. Using this approach, the classification problem is broken down into the modelling of classification rules represented by the information granules, resulting in a more interpretable system. Experimental results show that neural network classifiers trained using the proposed approach have better classification performance for most of the data sets. In a similar manner, the proposed approach is used for the training of neuro-fuzzy systems, resulting in a similar improvement in classification performance. Lastly, neural networks built using the proposed approach are used to construct a classifier ensemble. Information granules are used to generate and train the base classifiers. The final ensemble output is produced by a weighted sum combiner. Based on the experimental results, the proposed approach improves the classification performance of the base classifiers for most of the data sets. Furthermore, a genetic algorithm is used to determine the combiner weights automatically.
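
The granulation step described above can be sketched as plain fuzzy c-means followed by a simple coarsening (merging of small clusters); note that the thesis uses a modified, class-aware fuzzy c-means and a more elaborate merging rule, so this is only an approximation of the idea.

```python
# Sketch of granulation by fuzzy c-means followed by a coarsening step
# (standard FCM shown here; the thesis uses a modified, class-aware variant
# and a more elaborate merging rule).
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                   # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def coarsen(centers, U, min_size):
    """Merge granules whose (crisp) support is small into the nearest large one."""
    labels = U.argmax(axis=1)
    sizes = np.bincount(labels, minlength=len(centers))
    keep = np.where(sizes >= min_size)[0]
    mapping = {j: keep[np.argmin(np.linalg.norm(centers[keep] - centers[j], axis=1))]
               for j in range(len(centers))}
    return np.array([mapping[l] for l in labels])

X = np.random.default_rng(1).normal(size=(200, 2))
centers, U = fuzzy_c_means(X, c=6)
granules = coarsen(centers, U, min_size=20)              # lower-granularity level
print(np.unique(granules))
```
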
APA, Harvard, Vancouver, ISO, and other styles
4

Ingram, Colin. "Computing education in FE : a systems approach." Thesis, University of the West of Scotland, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hiziroglu, Abdulkadir. "A soft computing approach to customer segmentation." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503072.

Full text
Abstract:
Improper selection of segmentation variables and tools may affect segmentation results and can cause a negative financial impact (Tsai & Chiu, 2004). With regard to the selection of segmentation variables, although general segmentation variables such as demographics are frequently utilised, based on the assumption that customers with similar demographics and lifestyles tend to exhibit similar purchasing behaviours (Tsai & Chiu, 2004), it is believed that the behavioural variables of customers are more suitable to use as segmentation bases (Hsieh, 2004). As far as segmentation techniques are concerned, two conclusions can be made. First, cluster-based segmentation methods, particularly hierarchical and non-hierarchical methods, have been widely used in the related literature. However, the hierarchical methods are criticised for non-recovery, while the non-hierarchical ones are not able to determine the initial number of clusters (Lien, 2005). Hence, the integration of hierarchical and partitional methods (as a two-stage approach) is suggested to make the clustering results powerful in large databases (Kuo, Ho & Hu, 2002b). Second, none of those traditional approaches has the ability to establish non-strict customer segments, which are crucially important for today's competitive consumer markets. One area that can meet this requirement is known as soft computing. Although there have been studies on the use of soft computing techniques for segmentation problems, they are not based on the effective two-stage methodology. The aim of this study is to propose a soft computing model for customer segmentation using the purchasing behaviours of customers in a data mining framework. The segmentation process in this study includes segmentation (clustering and profiling) of existing customers and classification-prediction of segments for existing and new customers. Both a combination and an integration of soft computing techniques were used in the proposed model. Clustering was performed via a proposed neuro-fuzzy two-stage clustering approach, and classification-prediction was performed using a supervised artificial neural network method. Customers were segmented according to their purchasing behaviours based on RFM (Recency, Frequency, Monetary) values, which can be considered an important variable set in identifying customer value. The model was also compared with other two-stage methods (i.e., Ward's method followed by k-means, and self-organising maps followed by k-means) based on selected segmentability criteria. The proposed model was applied to a secondary data set from a UK retail company. The data set included more than 300,000 unique customer records, and a random sample of approximately 1% of it was used for conducting the analyses. The findings indicated that the proposed model provided better insights and managerial implications in comparison with the traditional two-stage methods with respect to the selected segmentability criteria. The main contribution of this study is threefold. Firstly, it offers the potential benefits and implications of fuzzy segments, which enable flexible segmentation through the availability of each customer's membership degrees in the corresponding segments. Secondly, the newly developed two-stage clustering model can be considered superior to its peers in terms of computational ability. Finally, through the classification phase of the model it was possible to extract knowledge regarding segment stability, which was utilised to calculate customer retention or churn rate over time for the corresponding segments.
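
The RFM-based, two-stage clustering idea can be sketched as follows; this illustrates only the generic scheme (a coarse first stage followed by k-means, similar to the baselines the model is compared against), not the proposed neuro-fuzzy model itself, and all data values are toy examples.

```python
# Sketch of RFM-based segmentation with a two-stage clustering scheme
# (a coarse first stage followed by k-means). The thesis's neuro-fuzzy
# two-stage model itself is not reproduced here.
import numpy as np

def rfm_table(transactions, now):
    """transactions: list of (customer_id, day, amount); returns standardised R, F, M."""
    agg = {}
    for cust, day, amount in transactions:
        r, f, m = agg.get(cust, (0, 0, 0.0))
        agg[cust] = (max(r, day), f + 1, m + amount)
    ids = sorted(agg)
    rfm = np.array([[now - agg[c][0], agg[c][1], agg[c][2]] for c in ids], float)
    return ids, (rfm - rfm.mean(0)) / (rfm.std(0) + 1e-9)

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

# Stage 1: over-cluster into small prototypes; Stage 2: cluster the prototypes.
ids, X = rfm_table([(1, 100, 30.0), (1, 180, 55.0), (2, 20, 5.0),
                    (3, 190, 80.0), (3, 195, 60.0), (4, 15, 9.0)], now=200)
_, prototypes = kmeans(X, k=4, seed=1)          # coarse first stage
proto_labels, _ = kmeans(prototypes, k=2)       # final segments from prototypes
segments = proto_labels[np.linalg.norm(X[:, None] - prototypes[None], axis=2).argmin(1)]
print(dict(zip(ids, segments)))
```
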
APA, Harvard, Vancouver, ISO, and other styles
6

Mallett, Jacky 1963. "Kami : an anarchic approach to distributed computing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/61847.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000.
Includes bibliographical references (p. 83-84).
This thesis presents a distributed computing system, Kami, which provides support for applications running in an environment of heterogeneous workstations linked together by a high-speed network. It enables users to easily create distributed applications by providing a backbone infrastructure of localized daemons which operate in a peer-to-peer networking environment, providing support for software distribution, network communication, and data streaming suitable for use by coarse-grained distributed applications. As a collective entity, kami daemons, each running on a single machine, form a cooperating anarchy of processes. These support their applications using adaptive algorithms with no form of centralized control. Instead of attempting to provide a controlled environment, this thesis assumes a heterogeneous and uncontrolled environment and presents a model for distributed computation that is completely decentralized and uses multicast communication between workstations to form an ecology of co-operating processes, which actively attempt to maintain an equilibrium between the demands of their users and the capabilities of the workstations on which they are running.
by Jacky Mallett.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Millard, Ian C. "Contextually aware pervasive computing : a semantic approach." Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/266002/.

Full text
Abstract:
We live in a world which is becoming increasingly rich in technology, with a wide array of portable and embedded devices being readily available and surrounding us in everyday use. Similarly, advances in communications technologies and the explosive growth of data being published on the Internet have provided access to information on an unparalleled scale. However, device interoperability is often poor at best, and accessing data which is relevant to any given situation can be difficult due to the sheer quantity of information which is available. A contextually aware environment is envisioned as one in which integrated computer systems have an understanding or representation of not only the physical space and the resources within it, but also the activities, interests, actions and intent of the human occupants at any given time. Given such knowledge, a contextually aware and technology rich pervasive environment may offer services and applications which attempt to adapt the surroundings in a manner which assists its users, such as by configuring devices or assimilating information which is relevant to activities currently being undertaken. The research presented in this thesis combines the fields of knowledge management, semantic technologies, logic and reasoning with those from the predominantly hardware and communications oriented field of pervasive computing, in order to facilitate the creation of contextually aware environments. Requirements for such a system are discussed in detail, resulting in the development of a generic framework of components and data representations from which domain specific deployments can be created. To demonstrate and test the proposed framework, experimentation has been conducted in the example domain of an academic environment, including the development of two contextually aware applications. The experiences and lessons learned during this research are documented throughout, and have influenced the proposed avenues for future related research in this area.
APA, Harvard, Vancouver, ISO, and other styles
8

Craven, Stephen Douglas. "Structured Approach to Dynamic Computing Application Development." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/27730.

Full text
Abstract:
The ability of some configurable logic devices to modify their hardware during operation has long held great potential to increase performance and reduce device cost. However, despite many research projects and a decade of research, the dynamic reconfiguration of Field Programmable Gate Arrays (FPGAs) is still very much an art practiced by few. Previous attempts to automate the many low-level details that complicate Run-Time Reconfigurable (RTR) application development suffer severe limitations. This dissertation describes a comprehensive approach to dynamic hardware development, providing a designer with appropriate models for computation, communication, and reconfiguration integrated with a high-level design environment. In this way, many manual and time consuming tasks associated with partial reconfiguration are hidden, permitting a designer to focus instead on a design's behavior. This design and implementation environment has been validated on a variety of relevant applications, quantifying the effects of high-level design.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Taylor, Daniel Kyle. "A Model-Based Approach to Reconfigurable Computing." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/36202.

Full text
Abstract:
Throughout the history of software development, advances have been made that improve the ability of developers to create systems by enabling them to work closer to their application domain. These advances have given programmers higher level abstractions with which to reason about problems. A separation of concerns between logic and implementation allows for reuse of components, portability between implementation platforms, and higher productivity. Parallels can be drawn between the challenges that the field of reconfigurable computing (RC) is facing today and what the field of software engineering has gone through in the past. Most RC work is done in low level hardware description languages (HDLs) at the circuit level. A large productivity gap exists between the ability of RC developers and the potential of the technology. The small number of RC experts is not enough to meet the demands for RC applications. Model-based engineering principles provide a way to reason about RC devices at a higher level, allowing for greater productivity, reuse, and portability. Higher level abstractions allow developers to deal with larger and more complex systems. A modeling environment has been developed to aid users in creating models, storing, reusing and generating hardware implementation code for their system. This environment serves as a starting point to apply model-based techniques to the field of RC to tighten the productivity gap. Future work can build on this model-based framework to take advantage of the unique features of reconfigurable devices, optimize their performance, and further open the field to a wider audience.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
10

Andersson, Casper. "Reservoir Computing Approach for Network Intrusion Detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54983.

Full text
Abstract:
Identifying intrusions in computer networks is important in order to protect the network. The network is the entry point that attackers use in an attempt to gain access to valuable information from a company or organization, or simply to destroy digital property. Many good methods already exist, but there is always room for improvement. This thesis proposes using reservoir computing as a feature extractor on network traffic data, treated as a time series, to train machine learning models for anomaly detection. The models used in this thesis are a neural network, a support vector machine, and linear discriminant analysis. The performance is measured in terms of detection rate, false alarm rate, and overall accuracy of the identification of attacks in the test data. The results show that the neural network generally improved with the use of a reservoir network. The support vector machine was not greatly affected by the reservoir, while linear discriminant analysis consistently performed worse. Overall, the temporal aspect of the reservoir did not have a large effect. The performance in these experiments is inferior to that of previous works, but it might improve if a separate feature selection or extraction step is performed first. Collapsing a sequence into a single vector and determining whether it contained any attacks worked very well when the sequences contained several attacks, but less well otherwise.
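
The reservoir-as-feature-extractor idea can be sketched with a minimal echo-state-style network feeding a linear readout; sizes, hyperparameters and the toy data below are assumptions for illustration, not the thesis's settings.

```python
# Minimal echo-state-style reservoir used as a feature extractor for a
# downstream classifier (all sizes, hyperparameters and data are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res=100, spectral_radius=0.9, seed=0):
    r = np.random.default_rng(seed)
    W_in = r.uniform(-0.5, 0.5, (n_res, n_in))
    W = r.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # scale toward echo-state property
    return W_in, W

def reservoir_state(X_seq, W_in, W):
    """Drive the reservoir with a (T, n_in) sequence and return the final state."""
    x = np.zeros(W.shape[0])
    for u in X_seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Toy data: sequences of 20 'network feature' vectors, label 1 = contains an attack.
W_in, W = make_reservoir(n_in=5)
sequences = [rng.normal(loc=lab, size=(20, 5)) for lab in (0, 1) * 20]
labels = np.array([0, 1] * 20)
features = np.array([reservoir_state(s, W_in, W) for s in sequences])

# Linear readout (least-squares classifier) on the extracted reservoir features.
Wout = np.linalg.lstsq(features, 2 * labels - 1, rcond=None)[0]
pred = (features @ Wout > 0).astype(int)
print("training accuracy:", (pred == labels).mean())
```
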
APA, Harvard, Vancouver, ISO, and other styles
11

Stoicescu, Miruna. "Architecting Resilient Computing Systems : a Component-Based Approach." Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0120/document.

Full text
Abstract:
Evolution during service life is mandatory, particularly for long-lived systems. Dependable systems, which continuously deliver trustworthy services, must evolve to accommodate changes, e.g., new fault tolerance requirements or variations in available resources. The addition of this evolutionary dimension to dependability leads to the notion of resilient computing. Among the various aspects of resilience, we focus on adaptivity. Dependability relies on fault tolerant computing at runtime, applications being augmented with fault tolerance mechanisms (FTMs). As such, on-line adaptation of FTMs is a key challenge towards resilience. In related work, on-line adaptation of FTMs is most often performed in a preprogrammed manner or consists in tuning some parameters. Besides, FTMs are replaced monolithically. All the envisaged FTMs must be known at design time and deployed from the beginning. However, dynamics occurs along multiple dimensions and developing a system for the worst-case scenario is impossible. According to runtime observations, new FTMs can be developed off-line but integrated on-line. We denote this ability as agile adaptation, as opposed to the preprogrammed one. In this thesis, we present an approach for developing flexible fault-tolerant systems in which FTMs can be adapted at runtime in an agile manner through fine-grained modifications that minimize the impact on the initial architecture. We first propose a classification of a set of existing FTMs based on criteria such as fault model, application characteristics and necessary resources. Next, we analyze these FTMs and extract a generic execution scheme which pinpoints the common parts and the variable features between them. Then, we demonstrate the use of state-of-the-art tools and concepts from the field of software engineering, such as component-based software engineering and reflective component-based middleware, for developing a library of fine-grained adaptive FTMs. We evaluate the agility of the approach and illustrate its usability through two examples of integration of the library: first, in a design-driven development process for applications in pervasive computing and, second, in a toolkit for developing applications for wireless sensor networks (WSNs).
APA, Harvard, Vancouver, ISO, and other styles
12

Dziallas, Sebastian. "Characterising graduateness in computing education : a narrative approach." Thesis, University of Kent, 2018. https://kar.kent.ac.uk/69292/.

Full text
Abstract:
This thesis examines the concept of graduateness in computing education. Graduateness is related to efforts to articulate the outcomes of a university education. It is commonly defined as the attributes all graduates should develop by the time they graduate regardless of university attended or discipline studied (Glover, Law and Youngman 2002). This work takes a different perspective grounded in disciplinary and institutional contexts. It aims to explore how graduates make sense of their experiences studying computing within their wider learning trajectories. The research presented here uses a narrative approach. Whilst narrative methodologies are not commonly used in computing education, people construct stories both to make sense of their experiences and to integrate the "past, present, and an anticipated future" (McAdams 1985, p.120). Stories are then a particularly appropriate way of examining the sense people make of their learning experiences. This work draws on narrative interviews with graduates from the School of Computing at the University of Kent and Olin College of Engineering in the United States. It contributes a new perspective about the effect of a computing education beyond short-term outcome measures and proposes several analytic constructs that expose significant aspects in participants' learning experiences. In this, it describes themes related to students' acquisition of disciplinary knowledge and examines the evolution of their stories of learning computing over time.
APA, Harvard, Vancouver, ISO, and other styles
13

Wang, Xuan. "A creative computing approach to poetry as data." Thesis, Bath Spa University, 2018. http://researchspace.bathspa.ac.uk/11560/.

Full text
Abstract:
With the rapid advent of emerging services such as cloud computing, mobile technology, and social media, more and more people prefer posting their literary creations, such as poems, on the Internet instead of on traditional paper. The era of Digital Humanities has truly arrived. With ever-growing amounts of literary data, how to utilise and manage them has become a major concern. Many researchers have worked on this and proposed different solutions. However, owing to new challenges and creative requirements, traditional methods need adjustments. For example, most poetry data collection methods, such as surveys, are based on single-target searching, that is, relying only upon keywords and themes. Thus, the results can be monotonous. Moreover, the accuracy of algorithms for poetry data analysis is no longer the only benchmark; the underlying meaning of poetry data has drawn people's attention. Meanwhile, traditional poetry data presentation methods need to be enhanced to reflect diversity and media richness. The aim of this research is to present a Creative Computing approach to poetry data collection, analysis and presentation. The thesis demonstrates the feasibility of and details the proposed methods in the following phases. Firstly, poetry data is creatively regarded, from an interdisciplinary perspective, as an object with mass, volume and resistance. A novel data relevancy rule, adapted from Newton's law of universal gravitation in physics, is proposed to retrieve the data closely related to an input. In this way, a broader variety of data is searched using a web crawler based on multi-purpose rules. Then, the search results are filtered on the basis of the buoyancy phenomenon and Ohm's law. Secondly, with reference to chemical principles, this research carries out innovative poetry data analysis based on the notion that chemical reactions always bring about brand new outcomes despite involving exactly the same elements. The mood, theme and personal reflection after reading a piece of literature present difficulties for traditional data analysis. In this work, they have been investigated relying on acidity estimation, organic abstraction and oxidation-reduction reactions. Lastly, presenting the poetry analysis results through creative visualisation has been studied, thanks to the elegant mathematical expressions of curves and shapes, which are believed to effectively convey the underlying emotions of poetry data. To illustrate this idea, a rainbow of variable spectrum and diverse types of trajectories are proposed as background and rolling titles, respectively. In summary, the proposed approach carries out manipulations on traditional poetry data processing based on models and algorithms of Creative Computing. The proposed approach was evaluated by a selected case study, in which a prototype system was built for poetry analysis. Conclusions are drawn and future research is also discussed. Initial experimental results show that this work contributes an effective and Creative Computing approach to poetry data manipulation. This research has potential applications in the academic study of texts and in making word recommendations that help users better comprehend literature such as poetry, novels or drama. Furthermore, it sees the possibility of inspiring creative thinking for human art creation.
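
The gravitation-inspired relevancy rule can be sketched as follows; the definitions of "mass" and "distance" below are assumptions chosen for illustration and do not reproduce the thesis's actual formulation.

```python
# Illustrative sketch of a gravitation-style relevancy score between two texts:
# relevance ~ (mass_a * mass_b) / distance^2, with "mass" taken as text length
# and "distance" as one minus vocabulary overlap. All choices below are assumptions.
def mass(text: str) -> float:
    return float(len(text.split()))          # a crude stand-in for "mass"

def distance(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    jaccard = len(wa & wb) / len(wa | wb) if wa | wb else 0.0
    return max(1.0 - jaccard, 1e-3)          # closer vocabularies -> smaller distance

def relevancy(a: str, b: str, G: float = 1.0) -> float:
    return G * mass(a) * mass(b) / distance(a, b) ** 2

query = "autumn moon over the quiet river"
poems = ["the moon rises over a quiet autumn river",
         "engines roar along the midnight highway"]
for p in poems:
    print(f"{relevancy(query, p):8.1f}  {p}")
```
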
APA, Harvard, Vancouver, ISO, and other styles
14

Simon, Gordon Peter. "The intermediate machine approach to distributed computing system design." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26072.

Full text
Abstract:
This thesis proposes that an intermediate machine be viewed as the software base of a distributed operating system. In this role the services provided to the operating system by the intermediate machine are similar to those provided by a security kernel. Such an intermediate machine software base differs from the traditional security kernel in that it additionally provides an interpreted instruction set. The advantage of this approach is that software extension of a machine architecture, whereby multi-processing and device abstraction are provided, is implemented in a homogeneous manner across all nodes of the distributed computer system. The objective of the thesis is to determine the feasibility of an intermediate machine approach to distributed computer system design. Through investigation and experimentation, an intermediate machine based distributed computer system is developed and evaluated. This paper describes the system and its evaluation. As well, the merits of an intermediate machine software base are considered and the approach is contrasted with popular contemporary distributed system designs. Since it is expected that an interpretative intermediate machine would slow the execution of system software, the focus of the project is on exploring this weakness in an effort to determine its extent. These efforts result in suggestions for a workable design of an intermediate machine based distributed computer.
Faculty of Science
Department of Computer Science
Graduate
APA, Harvard, Vancouver, ISO, and other styles
15

Shah, ShairBaz. "Using P2P approach for resource discovery in Grid Computing." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3088.

Full text
Abstract:
One of the fundamental requirements of Grid computing is an efficient and effective resource discovery mechanism. Resource discovery involves discovering the appropriate resources required by user applications. In this regard, various resource discovery mechanisms have been proposed in recent years. These mechanisms range from centralized to hierarchical information server approaches. Most of the techniques developed on the basis of these approaches have scalability and fault tolerance limitations. To overcome these limitations, Peer-to-Peer based discovery mechanisms are proposed.
APA, Harvard, Vancouver, ISO, and other styles
16

Kokkinos, Andreas Filippos, and D'Cruze Ricky Stanley. "Cloud Computing: a new approach for Hallstahammar’s IT companies." Thesis, Mälardalens högskola, Akademin för hållbar samhälls- och teknikutveckling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11859.

Full text
Abstract:
Thesis Purpose: Examine the possibility of small IT companies benefiting from a Cloud Computing transition. Through a case study of a software development company and interviews with five of Hallstahammar's IT companies, we show how Cloud Computing can enable organizations to decrease IT investments and related costs. We also critically analyze some drawbacks of this new concept. Methodology: Primary and secondary data have been collected based on a qualitative method and a structured approach. The secondary data were drawn mainly from recent journals. The interviews have been recorded and summarized. Theoretical Perspective: We have used theories on various aspects of business related to Cloud Computing (e.g. innovation and Cloud Computing, business models and Cloud Computing) in order to acquire a complete knowledge base for analyzing our empirical data. Empirical Foundation: A case study of TotalAssist, interview data from LifeCenter AB and interviews with four IT companies in Hallstahammar form the empirical foundation of the research. Conclusion: The IT companies of Hallstahammar may adopt the Cloud Computing paradigm, yet this new concept has its risks; security remains a concern among many CIOs. In addition, we recommend measures that a company can pursue while implementing a Cloud Computing transition.
APA, Harvard, Vancouver, ISO, and other styles
17

Islam, Nilufar. "Evaluating source water protection strategies : a soft computing approach." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/30842.

Full text
Abstract:
Source water protection is an important step in the implementation of a multi-barrier approach that ensures the cost-effective delivery of safe drinking water. However, implementing source water protection strategies can be a challenging task due to technical and administrative issues. Many decision support tools are currently available, but they mainly use complex mathematical formulations and require large data sets to conduct the analysis, which makes their use very limited. A simple soft-computing model is proposed in this research that can estimate and predict a reduction in pollutant loads based on selected source water protection strategies, including storm water management ponds, vegetated filter strips, and pollution control through agricultural practice. The proposed model uses an export coefficient approach and the number of animals to calculate the pollutant loads generated from different land uses (e.g., agricultural lands, forests, roads, livestock, and pasture). A surrogate measure, a water quality index, is used for the water assessment after the pollutant loads are discharged into the source water. To demonstrate the proof of concept of the proposed model, a Page Creek case study in the Clayburn Watershed (British Columbia, Canada) was conducted. The results show that rapid urban development and improperly managed agricultural areas have the most adverse effects on source water quality. On the other hand, forests were found to be the best land use around the source water, ensuring acceptable drinking water quality with a minimal requirement for treatment. The proposed model can help decision-makers at different levels of government (federal/provincial/municipal) to make informed decisions related to land use, resource allocation and capital investment.
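
The export coefficient calculation described above can be sketched as a simple weighted sum over land uses and livestock; the coefficient values below are illustrative placeholders, not the thesis's calibrated numbers.

```python
# Sketch of the export coefficient approach: the annual pollutant load is the
# sum of land-use areas times per-area export coefficients plus livestock
# counts times per-animal coefficients. All coefficient values are assumed
# placeholders, not the thesis's calibrated numbers.
EXPORT_COEFF = {          # kg of pollutant per hectare per year (assumed)
    "agriculture": 2.3,
    "forest": 0.1,
    "urban_roads": 1.1,
    "pasture": 0.8,
}
ANIMAL_COEFF = {"cattle": 6.0, "poultry": 0.02}   # kg per animal per year (assumed)

def pollutant_load(areas_ha: dict, animals: dict) -> float:
    land = sum(EXPORT_COEFF[use] * ha for use, ha in areas_ha.items())
    livestock = sum(ANIMAL_COEFF[kind] * n for kind, n in animals.items())
    return land + livestock

baseline = pollutant_load({"agriculture": 120, "forest": 400, "urban_roads": 30,
                           "pasture": 60}, {"cattle": 150, "poultry": 2000})
# A protection strategy (e.g. vegetated filter strips) can be modelled as a
# fractional reduction applied to the affected source category.
with_strips = baseline - 0.4 * EXPORT_COEFF["agriculture"] * 120
print(f"baseline load: {baseline:.0f} kg/yr, with filter strips: {with_strips:.0f} kg/yr")
```
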
APA, Harvard, Vancouver, ISO, and other styles
18

Guan, Jian. "A semi-supervised learning approach to interactive visual computing." Thesis, University of Nottingham, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493123.

Full text
Abstract:
In many computer vision and computer graphics applications, it is often very difficult, or even impossible, to develop fully automatic solutions. On the other hand, humans have remarkable abilities in distinguishing different image regions and separating different classes of objects. Moreover, users may have different intentions in different application scenarios.
APA, Harvard, Vancouver, ISO, and other styles
19

Tracy, Judd. "AN APPROACH FOR COMPUTING INTERVISIBILITY USING GRAPHICAL PROCESSING UNITS." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2505.

Full text
Abstract:
In large-scale entity-level military force-on-force simulations it is essential to know when one entity can visibly see another entity. This visibility determination plays an important role in the simulation and can affect the outcome of the simulation. When virtual Computer Generated Forces (CGF) are introduced into the simulation, these intervisibilities must be calculated by the virtual entities on the battlefield. But as the simulation size increases, so does the complexity of calculating visibility between entities. This thesis presents an algorithm for performing these visibility calculations using Graphical Processing Units (GPU) instead of the Central Processing Units (CPU) that have traditionally been used in CGF simulations. This algorithm can be distributed across multiple GPUs in a cluster, and its scalability exceeds that of CGF-based algorithms. The poor correlation of the two visibility algorithms is demonstrated, showing that the GPU algorithm provides a necessary condition for a "Fair Fight" when paired with visual simulations.
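
A CPU reference sketch of the entity-to-entity intervisibility test (sampling the sight line against a terrain height grid) is shown below; the thesis maps this computation onto GPUs, and the grid and sampling density here are illustrative assumptions.

```python
# CPU reference sketch of an entity-to-entity intervisibility test over a
# terrain height grid: sample points along the sight line and check whether
# the terrain rises above it (the thesis moves this computation to GPUs).
import numpy as np

def visible(terrain, a, b, eye_height=2.0, samples=200):
    """a, b: (row, col) grid positions; terrain: 2-D array of elevations."""
    (r0, c0), (r1, c1) = a, b
    z0 = terrain[r0, c0] + eye_height
    z1 = terrain[r1, c1] + eye_height
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        ground = terrain[int(round(r)), int(round(c))]
        line_of_sight = z0 + t * (z1 - z0)
        if ground > line_of_sight:           # terrain blocks the sight line
            return False
    return True

terrain = np.zeros((100, 100))
terrain[40:60, 40:60] = 50.0                 # a hill between the two entities
print(visible(terrain, (10, 10), (90, 90)))  # sight line crosses the hill -> False
print(visible(terrain, (10, 10), (10, 90)))  # flat path -> True
```
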
M.S.Cp.E.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
20

Harkin, James. "Hardware software partitioning : a reconfigurable and evolutionary computing approach." Thesis, University of Ulster, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Michaelides, Danius Takis. "Exact tests via complete enumeration : a distributed computing approach." Thesis, University of Southampton, 1997. https://eprints.soton.ac.uk/250749/.

Full text
Abstract:
The analysis of categorical data often leads to the analysis of a contingency table. For large samples, asymptotic approximations are sufficient when calculating p-values, but for small samples the tests can be unreliable. In these situations an exact test should be considered. This bases the test on the exact distribution of the test statistic. Sampling techniques can be used to estimate the distribution. Alternatively, the distribution can be found by complete enumeration. A new algorithm is developed that enables a model to be defined by a model matrix, and all tables that satisfy the model are found. This provides a more efficient enumeration mechanism for complex models and extends the range of models that can be tested. The technique can lead to large calculations and a distributed version of the algorithm is developed that enables a number of machines to work efficiently on the same problem.
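
For the simplest case, a 2x2 table, the complete-enumeration idea reduces to Fisher's exact test; the sketch below enumerates all tables with the observed margins, which only hints at the thesis's more general model-matrix-driven enumeration.

```python
# Sketch of an exact test by complete enumeration for a 2x2 contingency table
# (Fisher's exact test): enumerate every table with the observed margins,
# compute its probability under the hypergeometric null, and sum the
# probabilities of tables at least as extreme as the observed one. The thesis
# generalises this enumeration to arbitrary models defined by a model matrix;
# only the simplest case is sketched here.
from math import comb

def fisher_exact_p(table):
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):                      # P(top-left cell = x) with fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

print(fisher_exact_p([[8, 2], [1, 5]]))   # two-sided exact p-value
```
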
APA, Harvard, Vancouver, ISO, and other styles
22

Grein, Ederson Augusto. "A parallel computing approach applied to petroleum reservoir simulation." reponame:Repositório Institucional da UFSC, 2015. https://repositorio.ufsc.br/xmlui/handle/123456789/160633.

Full text
Abstract:
Dissertation (Master's) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Mecânica, Florianópolis, 2015.
Numerical simulation is an extremely relevant tool for the oil and gas industry. It makes it feasible to predict the production scenario of a given reservoir and to design more advantageous exploitation strategies from its results. However, in order to obtain reliable numerical results, it is essential to employ reliable numerical models and an accurate geometrical characterization of the reservoir. This leads to a high computational load and, consequently, obtaining the solution of the corresponding numerical model may require an exceedingly long simulation time. Evidently, reducing this time is of great interest to reservoir engineering. Among the techniques for boosting performance, parallel computing is one of the most promising. In this technique, the computational load is split across a set of processors. Ideally, the computational load is split evenly, so that if N is the number of processors then the computational time is N times smaller. In this study, parallel computing was applied to two distinct numerical simulators: UTCHEM and EFVLib. UTCHEM is a chemical-compositional reservoir simulator developed at The University of Texas at Austin. EFVLib, in turn, is a computational library developed at SINMEC, a laboratory of the Mechanical Engineering Department of the Federal University of Santa Catarina, with the aim of supporting the use of the Element-based Finite Volume Method. In both cases, the parallelization methodology is based on domain decomposition.
APA, Harvard, Vancouver, ISO, and other styles
23

Goettel, Colby. "A Cognitive Approach to Predicting Academic Success in Computing." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/6732.

Full text
Abstract:
This research examines the possible correlations between a computing student's learning preference and their academic success, as well as their overall satisfaction with their major. CS and IT seniors at BYU were surveyed about their learning preferences and satisfaction with their major. The research found that IT students who are more reflective in their learning preference tend to have higher grades in their major. Additionally, it found that student age and parents' education level were significant factors in academic success. However, no correlations were found between major satisfaction and academic performance.
APA, Harvard, Vancouver, ISO, and other styles
24

Gergeleit, Martin. "A monitoring based approach to object oriented real time computing." [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=964150719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Jakob, Henner. "Towards securing pervasive computing systems by design: a language approach." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00719170.

Full text
Abstract:
In many domains, a growing number of applications that interact with communicating entities are appearing in the environment to facilitate everyday activities (home automation and telemedicine). Their impact on users' daily lives makes these applications critical: their failure can endanger people and their property. Although the impact of such failures can be major, security is often considered a secondary concern in the development process and is handled with ad hoc approaches. This thesis proposes to integrate security aspects into the development cycle of pervasive computing systems. Security is specified at design time through dedicated, high-level declarations. These declarations are used to generate programming support that facilitates the implementation of security mechanisms, while separating these security aspects from the application logic. Our approach focuses on access control to entities and on privacy protection. Our work has been implemented and leverages an existing tool suite covering the software development cycle.
APA, Harvard, Vancouver, ISO, and other styles
26

Ralph, Scott K. "A constraint-based approach for computing fault tolerant robot programs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0017/NQ46408.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

McKeon, Sean Patrick. "A GPU Stream Computing Approach to Terrain Database Integrity Monitoring." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_theses/65.

Full text
Abstract:
Synthetic Vision Systems (SVS) provide an aircraft pilot with a virtual 3-D image of surrounding terrain which is generated from a digital elevation model stored in an onboard database. SVS improves the pilot's situational awareness at night and in inclement weather, thus reducing the chance of accidents such as controlled flight into terrain. A terrain database integrity monitor is needed to verify the accuracy of the displayed image due to potential database and navigational system errors. Previous research has used existing aircraft sensors to compare the real terrain position with the predicted position. We propose an improvement to one of these models by leveraging the stream computing capabilities of commercial graphics hardware. "Brook for GPUs," a system for implementing stream computing applications on programmable graphics processors, is used to execute a streaming ray-casting algorithm that correctly simulates the beam characteristics of a radar altimeter during all phases of flight.
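
The integrity-monitoring idea can be sketched by predicting the altimeter return from the stored elevation model and flagging a disparity with the measured value; the nadir-only beam and the threshold below are simplifying assumptions, and the thesis implements the full ray casting as a GPU streaming algorithm.

```python
# CPU sketch of the integrity-monitor idea: predict the radar altimeter return
# from the aircraft's believed position and the onboard elevation model, then
# flag a disparity with the measured value. Thresholds and the nadir-only beam
# are simplifying assumptions; the thesis uses GPU stream ray casting.
import numpy as np

def predicted_agl(dem, cell_size, pos_xy, altitude_msl):
    """Height above ground predicted from the stored DEM at position pos_xy."""
    col = int(pos_xy[0] / cell_size)
    row = int(pos_xy[1] / cell_size)
    return altitude_msl - dem[row, col]

def integrity_alert(dem, cell_size, pos_xy, altitude_msl, measured_agl,
                    threshold=30.0):
    disparity = abs(predicted_agl(dem, cell_size, pos_xy, altitude_msl) - measured_agl)
    return disparity > threshold          # True -> possible database/navigation error

dem = np.full((100, 100), 200.0)          # stored terrain elevations (m MSL)
dem[50:, :] = 450.0                        # a ridge present in the database
print(integrity_alert(dem, 30.0, pos_xy=(900.0, 600.0),
                      altitude_msl=1500.0, measured_agl=1295.0))   # consistent -> False
print(integrity_alert(dem, 30.0, pos_xy=(900.0, 1800.0),
                      altitude_msl=1500.0, measured_agl=1290.0))   # mismatch -> True
```
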
APA, Harvard, Vancouver, ISO, and other styles
28

Mathias, Elton. "Hierarchical multi-domain computing based upon a component-oriented approach." Nice, 2010. http://www.theses.fr/2010NICE4068.

Full text
Abstract:
In this thesis, we introduce a modular middleware for multi-domain Grid and Cloud computing that allows issues related to deployment, resource access and communication in heterogeneous networks to be handled externally to applications. The main idea behind this middleware is to offer a modular infrastructure that can be composed hierarchically, according to the resource topology, and dynamically, according to resource availability. This middleware works as glue between application processes running in different administrative domains, featuring mechanisms such as topology-aware point-to-point and collective communication. Our middleware is grounded in GCM (the Grid Component Model) and the ProActive Grid middleware, which we improved with features such as: generic gathercast (Mx1) and multicast (1xN) communication semantics, gather-multicast (MxN) component interfaces with support for shortcuts enabling direct MxN communication between components, automated deployment, and communication tunnelling and forwarding. All along this thesis, we motivate our work by putting in perspective two highly communicating multi-domain frameworks, which we present as use-cases of our middleware: an HPC runtime which allows the coupling of domain-decomposition applications in heterogeneous environments through MPI-like SPMD programming (the DiscoGrid Runtime), and an Internet-wide federation of distributed Enterprise Service Buses (ESBs), which allows independent service buses to be federated according to partnership relations between service providers and consumers. Experimental results obtained in the context of both use-cases show that the proposed approach is promising, not only in terms of programming approach but also in terms of performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Lee, Tai-Chun. "An event-based approach to demand-driven dynamic reconfigurable computing." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin990821256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Stephens, Richard Sturge. "The Hough Transform : a probabilistic approach." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/251579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nugroho, Lukito Edi 1966. "A context-based approach for mobile application development." Monash University, School of Computer Science and Software Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Varghese, Blesson. "Swarm-array computing : a swarm robotics inspired approach to achieve automated fault tolerance in high-performance computing systems." Thesis, University of Reading, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.559260.

Full text
Abstract:
Fault tolerance is an important area of research in high-performance computing. Traditional fault tolerant methods, which require human administrator intervention, suffer from many drawbacks and hence pose a constraint in achieving efficient fault tolerance for high-performance computer systems. The research presented in this dissertation is motivated towards the development of automated fault tolerant methods for high-performance computing. To this end, four questions are addressed: (1) How can autonomic computing concepts be applied to parallel computing? (2) How can a bridge between multi-agent systems and parallel computing systems be built for achieving fault tolerance? (3) How can processor virtualization for process migration be extended for achieving fault tolerance in parallel computing systems? (4) How can traditional fault tolerant methods be replaced to achieve efficient fault tolerance in high-performance computing systems? In this dissertation, Swarm-Array Computing, a novel framework inspired by the concept of multi-agents in swarm robotics and built on the foundations of parallel and autonomic computing, is proposed to address these questions. The framework comprises three approaches: firstly, intelligent agents; secondly, intelligent cores; and thirdly, a combination of these, as a means of achieving automated fault tolerance in line with the goals of autonomic computing. The feasibility of the framework is evaluated using simulation and practical experimental studies. The simulation studies were performed by emulating a field programmable gate array on a multi-agent simulator. The practical studies involved the implementation of a parallel reduction algorithm using message passing interfaces on a computer cluster. The statistics gathered from the experiments confirm that the swarm-array computing approaches improve the fault tolerance of high-performance computing systems over traditional fault tolerant mechanisms. The agent concepts within the framework are formalised by mapping a layered architecture onto both intelligent agents and intelligent cores. Elements of the work reported in this dissertation have been published as journal and conference papers (Appendix A) and presented as public lectures, conference presentations and posters (Appendix B).
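The practical study mentions a parallel reduction implemented with message passing interfaces. As a rough illustration of that experimental setting only, and not code from the dissertation, a reduction of this kind can be sketched with mpi4py:

```python
# Minimal mpi4py sketch of a message-passing parallel reduction, shown only to
# make the experimental setting concrete; not the dissertation's implementation.
# Run with e.g.: mpirun -n 4 python reduce.py (assumes mpi4py is installed).

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_value = rank + 1                                   # each core contributes a partial value
total = comm.reduce(local_value, op=MPI.SUM, root=0)     # reduction gathered on core 0

if rank == 0:
    print(f"sum over {comm.Get_size()} cores:", total)
```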
APA, Harvard, Vancouver, ISO, and other styles
33

Ragsdale, Scott. "Pursuing and Completing an Undergraduate Computing Degree from a Female Perspective: A Quantitative and Qualitative Approach." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/279.

Full text
Abstract:
The computing profession in the United States would benefit from an increasingly diverse workforce, specifically a larger female presence, because a more gender-balanced workforce would likely result in better technological solutions to difficulties in many areas of American life. However, to achieve this balance, more women with a solid educational foundation in computing need to enter the computing workplace. Yet a common problem is that most colleges and universities offering computer-related degrees have found it challenging to attract females to their programs. Also, the women who begin a computing major have shown a higher tendency than men to leave the major. The combination of these factors has resulted in a low percentage of females graduating with a computing degree, providing one plausible explanation for the current gender imbalance in the computing profession. It is readily apparent that female enrollment and retention must be improved to increase female graduation percentages. Although recruiting women into computing and keeping them in it has been problematic, there are some who decide to pursue a computer-related degree and successfully finish. The study focused on this special group of women, who provided their insight into the pursuit and completion of an undergraduate computing degree. It is hoped that the knowledge acquired from this research will inspire and encourage more women to consider the field of computing and to seek an education in it. Also, the information gathered in this study may prove valuable to recruiters, professors, and administrators in computing academia. Recruiters will have a better awareness of the factors that direct women toward computing, which may lead to better recruitment strategies. A better awareness of the factors that contribute to persistence will provide professors and administrators with information that can help create better methods of encouraging females to continue rather than leave. The investigation used a sequential explanatory methodology to explore how women decided to pursue an undergraduate computing major and persevered within it until attaining a degree.
APA, Harvard, Vancouver, ISO, and other styles
34

Hinze, Thomas, and Monika Sturm. "A universal functional approach to DNA computing and its experimental practicability." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-100882.

Full text
Abstract:
The rapid developments in the field of DNA computing reflect two substantial questions: 1. Which models for DNA based computation are really universal? 2. Which model fulfills the requirements for a universal, lab-practicable, programmable DNA computer based on one of these models? This paper introduces the functional model DNA-HASKELL, focusing on its lab-practicability. This aim could be reached by specifying the DNA based operations in accordance with an analysis of molecular biological processes. The specification is determined by an abstraction level that includes nucleotides and strand end labels like 5'-phosphate. Our model is able to describe DNA algorithms for any NP-complete problem - here exemplified by the knapsack problem - as well as to simulate some established mathematical models for computation. We point out the splicing operation as an example. The computational completeness of DNA-HASKELL can be conjectured. This paper is based on discussions about the potential and limits of DNA computing, in particular the practicability of a universal DNA computer.
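DNA computing algorithms for NP-complete problems are typically of a generate-and-filter kind: a large solution space is synthesised in parallel and then narrowed by selection operations. The following sketch mimics that style in Python for a small, hypothetical knapsack instance; it is an illustration of the paradigm, not of the DNA-HASKELL model itself.

```python
# Illustration only (not DNA-HASKELL): generate-and-filter solution of a small
# knapsack instance, mirroring the synthesise / extract / detect style of DNA
# computing algorithms. The instance data are hypothetical.

from itertools import product

weights = [3, 4, 5, 2]          # hypothetical knapsack instance
values = [4, 5, 6, 3]
capacity = 9

# "Synthesis": enumerate every candidate strand (each subset encoded as a bit tuple).
candidates = product([0, 1], repeat=len(weights))

# "Extraction": keep only strands whose encoded weight fits the capacity.
feasible = [c for c in candidates
            if sum(w * b for w, b in zip(weights, c)) <= capacity]

# "Detection": read out the best surviving strand.
best = max(feasible, key=lambda c: sum(v * b for v, b in zip(values, c)))
print("best subset:", best, "value:", sum(v * b for v, b in zip(values, best)))
```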
APA, Harvard, Vancouver, ISO, and other styles
35

Garcia, Raymond Christopher. "A soft computing approach to anomaly detection with real-time applicability." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/21808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zeileis, Achim. "Implementing a class of structural change tests: An econometric computing approach." Institut für Statistik und Mathematik, WU Vienna University of Economics and Business, 2004. http://epub.wu.ac.at/1316/1/document.pdf.

Full text
Abstract:
The implementation of a recently suggested class of structural change tests, which test for parameter instability in general parametric models, in the R language for statistical computing is described. The focus is on how the conceptual tools can be translated into computational tools that reflect the properties and flexibility of the underlying econometric methodology while being numerically reliable and easy to use. More precisely, the class of generalized M-fluctuation tests (Zeileis & Hornik, 2003) is implemented in the package strucchange, providing easily extensible functions for computing empirical fluctuation processes and automatic tabulation of critical values for a functional capturing excessive fluctuations. Traditional significance tests are supplemented by graphical methods which not only visualize the result of the testing procedure but also convey information about the nature and timing of the structural change and which component of the parametric model is affected by it.
Series: Research Report Series / Department of Statistics and Mathematics
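For orientation, an empirical fluctuation process of the kind the abstract describes can be sketched outside R. The snippet below is not the strucchange implementation; it computes a simple OLS-based CUSUM process for a mean model with NumPy, where large excursions of the process suggest a structural change.

```python
# Not the strucchange implementation: a minimal NumPy sketch of an OLS-based
# CUSUM empirical fluctuation process for a model with a constant mean.

import numpy as np


def ols_cusum(y: np.ndarray) -> np.ndarray:
    """Cumulative sums of standardized OLS residuals; large excursions hint at a break."""
    n = len(y)
    residuals = y - y.mean()
    sigma = residuals.std(ddof=1)
    return np.cumsum(residuals) / (sigma * np.sqrt(n))


rng = np.random.default_rng(0)
# Simulated series with a mean shift at t = 100.
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
process = ols_cusum(y)
# In a real test this maximum would be compared against a boundary / critical value.
print("max |fluctuation|:", np.abs(process).max())
```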
APA, Harvard, Vancouver, ISO, and other styles
37

Hinze, Thomas, and Monika Sturm. "A universal functional approach to DNA computing and its experimental practicability." Technische Universität Dresden, 2000. https://tud.qucosa.de/id/qucosa%3A26319.

Full text
Abstract:
The rapid developments in the field of DNA computing reflect two substantial questions: 1. Which models for DNA based computation are really universal? 2. Which model fulfills the requirements for a universal, lab-practicable, programmable DNA computer based on one of these models? This paper introduces the functional model DNA-HASKELL, focusing on its lab-practicability. This aim could be reached by specifying the DNA based operations in accordance with an analysis of molecular biological processes. The specification is determined by an abstraction level that includes nucleotides and strand end labels like 5'-phosphate. Our model is able to describe DNA algorithms for any NP-complete problem - here exemplified by the knapsack problem - as well as to simulate some established mathematical models for computation. We point out the splicing operation as an example. The computational completeness of DNA-HASKELL can be conjectured. This paper is based on discussions about the potential and limits of DNA computing, in particular the practicability of a universal DNA computer.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Yinlin. "A High-quality Digital Library Supporting Computing Education: The Ensemble Approach." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78750.

Full text
Abstract:
Educational Digital Libraries (DLs) are complex information systems which are designed to support individuals' information needs and information seeking behavior. To have a broad impact on the communities in education and to serve for a long period, DLs need to structure and organize the resources in a way that facilitates the dissemination and the reuse of resources. Such a digital library should meet defined quality dimensions in the 5S (Societies, Scenarios, Spaces, Structures, Streams) framework - including completeness, consistency, efficiency, extensibility, and reliability - to ensure that a good quality DL is built. In this research, we addressed both external and internal quality aspects of DLs. For internal qualities, we focused on completeness and consistency of the collection, catalog, and repository. We developed an application pipeline to acquire user-generated computing-related resources from YouTube and SlideShare for an educational DL. We applied machine learning techniques to transfer what we learned from the ACM Digital Library dataset. We built classifiers to catalog resources according to the ACM Computing Classification System from the two new domains that were evaluated using Amazon Mechanical Turk. For external qualities, we focused on efficiency, scalability, and reliability in DL services. We proposed cloud-based designs and applications to ensure and improve these qualities in DL services using cloud computing. The experimental results show that our proposed methods are promising for enhancing and enriching an educational digital library. This work received support from ACM, as well as the National Science Foundation under Grant Numbers DUE-0836940, DUE-0937863, and DUE-0840719, and IMLS LG-71-16-0037-16.
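As a rough illustration of the cataloguing step, and not the dissertation's code, a text classifier of the kind described can be sketched with scikit-learn; the documents and CCS-style labels below are hypothetical.

```python
# Illustrative sketch only: a generic scikit-learn text-classification pipeline
# for assigning computing-related resources to ACM-CCS-style categories.
# The training documents and category labels are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "lecture on sorting algorithms and complexity",
    "tutorial video about TCP congestion control",
    "slides introducing relational database normalization",
]
train_labels = ["Theory of computation", "Networks", "Information systems"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_docs, train_labels)

# Predict a category for a new user-generated resource.
print(model.predict(["screencast explaining B-tree indexes"]))
```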
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Bhupatiraju, Murali K. "Direct and inverse models in metal forming : a soft computing approach /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488190595941775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Inggs, Gordon. "Portable, predictable and partitionable : a domain specific approach to heterogeneous computing." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/31595.

Full text
Abstract:
Computing is increasingly heterogeneous. Beyond Central Processing Units (CPUs), different architectures such as massively parallel Graphics Processing Units (GPUs) and reconfigurable Field Programmable Gate Arrays (FPGAs) are seeing widespread adoption. However, the failure of conventional programming approaches to support portable execution, predict runtime characteristics and partition workloads optimally is hindering the realisation of heterogeneous computing. By narrowing the scope of expression in a natural manner, using a domain specific approach, these three challenges can be addressed. A domain specific heterogeneous computing methodology enables three features: Portability, Prediction and Partitioning. Portable, efficient execution is enabled by a domain specific approach because only a subset of domain functions needs to be supported across the heterogeneous computing platforms. Predictive models of runtime characteristics are enabled as the structure of the domain functions may be analysed a priori. Finally, optimal partitioning is possible because the metric models can be used to form an optimisation program that can be solved by heuristic, machine learning or Mixed Integer Linear Programming (MILP) approaches. Using the example of the application domain of financial derivatives pricing, a domain specific application framework, the Forward Financial Framework (F^3), can execute a single pricing task upon a diverse range of CPU, GPU and FPGA platforms from many different vendors. Not only do these portable implementations exhibit strong parallel scaling, but they are also competitive with state-of-the-art, expert-created implementations of the same option pricing problems. Furthermore, F^3 can model the crucial runtime metrics of latency and accuracy for these heterogeneous platforms, using a small benchmarking procedure, to within 10% of the run-time value of these metrics. Finally, the framework can optimally partition work across heterogeneous platforms, using a MILP framework that is up to 270 times more efficient than a heuristic approach.
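The MILP partitioning idea can be sketched concretely. The snippet below is not F^3; it is a small PuLP model, with hypothetical task counts and per-task latencies, that assigns tasks to heterogeneous platforms so that the latency of the most loaded platform is minimised.

```python
# Not F^3 itself: a small PuLP MILP that partitions pricing tasks across
# heterogeneous platforms to minimise the makespan (latency of the slowest
# platform). Task counts and per-task latencies are hypothetical.

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

tasks = range(6)
platforms = {"cpu": 1.0, "gpu": 0.2, "fpga": 0.4}   # predicted seconds per task (from a metric model)

prob = LpProblem("partition", LpMinimize)
assign = {(t, p): LpVariable(f"x_{t}_{p}", cat=LpBinary) for t in tasks for p in platforms}
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                     # objective: minimise the slowest platform's latency
for t in tasks:
    prob += lpSum(assign[t, p] for p in platforms) == 1              # each task runs exactly once
for p, cost in platforms.items():
    prob += lpSum(assign[t, p] for t in tasks) * cost <= makespan    # platform load bounds the makespan

prob.solve()
for p in platforms:
    print(p, [t for t in tasks if assign[t, p].value() == 1])
```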
APA, Harvard, Vancouver, ISO, and other styles
41

Coons, Samuel W. "Virtual thin client : a scalable service discovery approach for pervasive computing." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/anp4316.

Full text
Abstract:
Thesis (M.E.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains xi, 68 p.; also contains graphics. Vita. Includes bibliographical references (p. 66-67).
APA, Harvard, Vancouver, ISO, and other styles
42

Grahn, Cecilia, and Martin Sund. "Cloud computing - Moving to the cloud." Thesis, Högskolan Dalarna, Informatik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:du-12916.

Full text
Abstract:
Cloud computing involves the use of computer resources that are available through a network, usually the Internet, and it is an area that has grown rapidly in recent years. More and more companies move all or part of their operations to the cloud. Sogeti in Borlänge needs to move their development environments to a cloud service, as operating and maintaining them is costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's service for cloud computing, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti and they do not have any descriptions of how this process works. Our mission was to develop an approach for the migration of an IT solution to the cloud. Part of the mission included mapping out cloud computing, its components, benefits and drawbacks, which gave us basic knowledge of the subject. To develop an approach to migration, we performed several migrations of virtual machines to Windows Azure, and based on these migrations, literature studies and interviews we drew conclusions that resulted in an overall approach for migration to the cloud. The results have shown that it is difficult to make a general but detailed description of an approach to migration, as the scenario looks different depending on what is to be migrated and what type of cloud service is used. However, based on our experiences from our migrations, along with literature, documents and interviews, we have lifted our knowledge to a general level. From this knowledge, we have compiled a general approach with greater focus on the preparatory activities that an organization should carry out before migration. Our studies also resulted in an in-depth description of cloud computing. In our studies we did not find previous work describing critical success factors in the context of cloud computing. In our empirical work, we identified three critical success factors for cloud computing and in doing so covered part of that knowledge gap.
APA, Harvard, Vancouver, ISO, and other styles
43

Schnizler, Björn. "Resource allocation in the Grid : a market engineering approach." Karlsruhe : Univ.-Verl. Karlsruhe, 2007. http://nbn-resolving.de/urn:nbn:de:0072-67769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Witt, Hendrik. "Human computer interfaces for wearable computers : a systematic approach to development and evaluation." 2007. http://deposit.d-nb.de/cgi-bin/dokserv?idn=987607065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Santhana, Krishnan Archanaa. "Top-down Approach To Securing Intermittent Embedded Systems." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/105128.

Full text
Abstract:
The conventional computing techniques are based on the assumption of a near constant source of input power. While this assumption is reasonable for high-end devices such as servers and mobile phones, it does not always hold in embedded devices. An increasing number of Internet of Things (IoT) devices are powered by intermittent power supplies which harvest energy from ambient resources, such as vibrations. While the energy harvesters provide energy autonomy, they introduce uncertainty in input power. Intermittent computing techniques were proposed as a coping mechanism to ensure forward progress even with frequent power loss. They utilize non-volatile memory to store a snapshot of the system state as a checkpoint. The conventional security mechanisms do not always hold in intermittent computing. This research takes a top-down approach to designing secure intermittent systems. To that end, we identify security threats, design a secure intermittent system, optimize its performance, and evaluate our design using embedded benchmarks. First, we identify vulnerabilities that arise from checkpoints and demonstrate potential attacks that exploit them. Then, we identify the minimum security requirements for protecting intermittent computing and propose a generic protocol to satisfy them. We then propose different security levels to configure checkpoint security based on application needs, realizing configurable intermittent security that optimizes our generic secure intermittent computing protocol and reduces the overhead of introducing security to intermittent computing. Finally, we study the role of the application in intermittent computing and the various factors that affect the forward progress of applications in secure intermittent systems. This research highlights that power loss is a threat vector even in embedded devices and establishes the foundation for security in intermittent computing.
Doctor of Philosophy
Embedded systems are present in every aspect of life. They are available in watches, mobile phones, tablets, servers, health aids, home security, and other everyday useful technology. To meet the demand for powering a rising number of embedded devices, energy harvesters emerged as an autonomous way to power low-power devices. With energy autonomy came energy scarcity, which introduced intermittent computing, where embedded systems operate intermittently because of the lack of constant input power. Intermittent systems store snapshots of their progress as checkpoints in non-volatile memory and restore the checkpoints to resume progress. On the whole, intermittent systems are an emerging area of research being deployed in critical settings such as bridge health monitoring. This research is focused on securing intermittent systems comprehensively. We perform a top-down analysis to identify threats, mitigate them, optimize the mitigation techniques, and evaluate the implementation to arrive at secure intermittent systems. We identify security vulnerabilities that arise from checkpoints to demonstrate the weaknesses in intermittent systems. To mitigate the identified vulnerabilities, we propose secure intermittent solutions that protect intermittent systems using a generic protocol. Based on the implementation of the generic protocol and its performance, we propose several optimizations, driven by the needs of the application, for securing intermittent systems. Finally, we benchmark the security properties using the two-way relation between security and the application in intermittent systems. With this research, we create a foundation for designing secure intermittent systems.
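One common way to protect a checkpoint, shown here purely as an illustration and not as the dissertation's protocol, is to bind the snapshot to a device key and a monotonic counter so that tampered or replayed checkpoints are rejected on restore; all names and fields in the sketch are hypothetical.

```python
# Illustration only, not the dissertation's protocol: an HMAC-authenticated
# checkpoint with a monotonic counter, so tampered or replayed snapshots are
# rejected on restore. Field names and the state layout are hypothetical.

import hashlib
import hmac
import json
import os

KEY = os.urandom(32)   # device secret; on a real MCU this would live in protected storage


def save_checkpoint(state: dict, counter: int) -> bytes:
    blob = json.dumps({"counter": counter, "state": state}, sort_keys=True).encode()
    tag = hmac.new(KEY, blob, hashlib.sha256).digest()
    return tag + blob   # record written to non-volatile memory


def restore_checkpoint(record: bytes, expected_counter: int) -> dict:
    tag, blob = record[:32], record[32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, blob, hashlib.sha256).digest()):
        raise ValueError("checkpoint integrity check failed")
    payload = json.loads(blob)
    if payload["counter"] < expected_counter:
        raise ValueError("stale checkpoint (possible replay)")
    return payload["state"]


record = save_checkpoint({"pc": 120, "acc": 7}, counter=3)
print(restore_checkpoint(record, expected_counter=3))
```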
APA, Harvard, Vancouver, ISO, and other styles
46

Kannan, Vijayasarathy. "A Distributed Approach to EpiFast using Apache Spark." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/55272.

Full text
Abstract:
EpiFast is a parallel algorithm for large-scale epidemic simulations, based on an interpretation of stochastic disease propagation in a contact network. The original EpiFast implementation is based on a master-slave computation model with a focus on distributed memory using the message passing interface (MPI). However, it suffers from a few shortcomings with respect to the scale of the networks being studied. This thesis addresses these shortcomings and provides two different implementations: Spark-EpiFast, based on the Apache Spark big data processing engine, and Charm-EpiFast, based on the Charm++ parallel programming framework. The study focuses on exploiting features of both systems that we believe could potentially benefit performance and scalability. We present models of EpiFast specific to each system and relate algorithm specifics to several optimization techniques. We also provide a detailed analysis of these optimizations through a range of experiments that consider the scale of the networks and the environment settings used. Our analysis shows that the Spark-based version is more efficient than the Charm++ and MPI-based counterparts. To the best of our knowledge, ours is one of the earliest efforts to use Apache Spark for epidemic simulations. We believe that our proposed model could act as a reference for similar large-scale epidemiological simulations exploring non-MPI or MapReduce-like approaches.
Master of Science
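To make the computation style concrete, one time step of stochastic propagation over a contact-network edge list can be sketched in PySpark. This toy snippet is not the Spark-EpiFast implementation; the graph and transmission probability are hypothetical.

```python
# Not the Spark-EpiFast implementation: a toy PySpark sketch of one time step
# of stochastic disease propagation over a contact-network edge list.
# The network and transmission probability are hypothetical.

import random

from pyspark import SparkContext

sc = SparkContext(appName="toy-epidemic")

edges = sc.parallelize([(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)])   # contact network (src, dst)
infected = {1}                                                      # currently infected nodes
p_transmit = 0.5

# Contacts whose source is infected may transmit with probability p_transmit.
newly_infected = (edges.filter(lambda e: e[0] in infected)
                       .filter(lambda e: random.random() < p_transmit)
                       .map(lambda e: e[1])
                       .distinct()
                       .collect())

print("newly infected this step:", set(newly_infected) - infected)
sc.stop()
```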
APA, Harvard, Vancouver, ISO, and other styles
47

Costa, Philipp Bernardino. "An approach for Mobile Multiplatform Offloading System." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13884.

Full text
Abstract:
Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico
Mobile devices, particularly smartphones and tablets, have evolved considerably in computational terms in recent years and are increasingly present in everyday life. Despite these technological advances, their main limitations remain energy constraints and low computing performance compared with a notebook or desktop computer. In this context, the Mobile Cloud Computing (MCC) paradigm emerged, which studies ways to extend the computational and energy resources of mobile devices through offloading techniques. A literature survey of MCC frameworks showed an absence of offloading solutions that address the heterogeneity of mobile platforms. In response, this dissertation presents a framework called MpOS (Multiplatform Offloading System), which supports the offloading technique in application development for different mobile platforms, initially Android and Windows Phone. For validation, two mobile applications, BenchImage and Collision, were developed for each platform to demonstrate the offloading technique in several scenarios. The BenchImage experiment analyzed the performance of the mobile application with respect to local execution, execution on a cloudlet server and execution in a public cloud on the Internet, while the Collision experiment (a real-time application) analyzed offloading performance using different data serialization systems. In both experiments there were situations in which it was more advantageous to execute locally on the smartphone than to perform the offloading operation, and vice versa, because of several factors associated with network quality and the amount of processing required by the operation.
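The trade-off observed in both experiments, offload only when the remote speed-up outweighs the network cost, can be captured in a simple decision sketch. This is an illustration, not MpOS code, and all numbers and names are hypothetical.

```python
# Illustration only, not MpOS code: a simple offload-or-local decision based on
# estimated execution and transfer times. All figures and names are hypothetical.

def should_offload(local_time_s: float,
                   remote_time_s: float,
                   payload_bytes: int,
                   bandwidth_bytes_per_s: float,
                   rtt_s: float) -> bool:
    """Offload only if remote execution plus serialization/transfer beats local execution."""
    transfer_time = payload_bytes / bandwidth_bytes_per_s + rtt_s
    return remote_time_s + transfer_time < local_time_s


# Cloudlet on the local network: cheap transfer, offloading wins.
print(should_offload(2.0, 0.4, 500_000, 2_000_000, 0.01))   # True
# Distant public cloud with a large payload: transfer dominates, run locally.
print(should_offload(2.0, 0.4, 5_000_000, 500_000, 0.15))   # False
```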
APA, Harvard, Vancouver, ISO, and other styles
48

Foresta, Francesco. "Integration of SDN frameworks and Cloud Computing platforms: an Open Source approach." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14271/.

Full text
Abstract:
As a result of the explosion in the number of services offered over the Internet, network traffic has increased remarkably and is expected to increase even more over the next few years. Therefore, Telco operators are investigating new solutions for managing this traffic efficiently and transparently, to guarantee users the needed Quality of Service. The most viable solution is a paradigm shift in the networking field: legacy routing will be replaced by something more dynamic, through the use of Software Defined Networking. In addition, Network Functions Virtualization will play a key role, making it possible to virtualize the intermediate nodes implementing network functions, also called middle-boxes, on general purpose hardware. The most suitable environment in which to understand their potential is the Cloud, where resources such as computational power, storage, development platforms, etc. are outsourced and provided to the user as a service on a pay-per-use model. All of this is done in a completely dynamic way, thanks to the implementation of the paradigms cited above. However, when it comes to strict requirements, Telecommunication Networks are still underperforming: one of the causes is the weak integration among these paradigms for reacting to users' needs. It is therefore remarkably important to properly evaluate solutions where SDN and NFV cooperate actively inside the Cloud, leading to more adaptive systems. In this document, after a description of the state of the art in networking, the deployment of an OpenStack Cloud platform on a high-performance cluster is shown. In addition, its networking capabilities are improved via a careful cloud firewalling configuration; moreover, the cluster is integrated with Open Source SDN frameworks to enhance its services. Finally, some measurements showing the potential of this approach are provided.
APA, Harvard, Vancouver, ISO, and other styles
49

Bambini, Alberto. "Combining Active Learning and Mathematical Programming: a hybrid approach for Transprecision Computing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19664/.

Full text
Abstract:
This thesis explores the possibility of applying a hybrid approach combining Active Learning and Mathematical Programming to Transprecision Computing. This entails embedding a machine learning model, trained by means of an Active Learning approach, into an optimization model in order to automatically and intelligently tweak the representation of floating-point numerical data. The project aims to lower the energy expenditure of every single intermediate computation in a given program, while also controlling the errors that are systematically introduced when manipulating variables with this technique and ensuring that they do not exceed a maximum acceptable error rate decided in advance.
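An active learning loop of the kind combined with the optimization model can be sketched generically. The snippet below is only an assumption-laden illustration: measure_error stands in for actually running a program under a given precision configuration, and the model and candidate pool are hypothetical.

```python
# Sketch only, under assumed interfaces: a generic uncertainty-sampling active
# learning loop. `measure_error` is a hypothetical oracle standing in for running
# a program at a given floating-point precision configuration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor


def measure_error(config: np.ndarray) -> float:
    # Hypothetical oracle: true (unknown) error of a precision configuration.
    return float(np.sin(config.sum()) ** 2)


rng = np.random.default_rng(0)
pool = rng.uniform(0, 3, size=(200, 4))          # candidate precision configurations
X = pool[:5].copy()                              # small initial training set
y = np.array([measure_error(c) for c in X])

model = GaussianProcessRegressor()
for _ in range(10):                              # active learning iterations
    model.fit(X, y)
    mean, std = model.predict(pool, return_std=True)
    pick = int(np.argmax(std))                   # query the most uncertain configuration
    X = np.vstack([X, pool[pick]])
    y = np.append(y, measure_error(pool[pick]))

print("trained on", len(X), "measured configurations")
```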
APA, Harvard, Vancouver, ISO, and other styles
50

Volonnino, Chiara. "A Reinforcement Learning approach to discriminate unsafe devices in aggregate computing systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20488/.

Full text
Abstract:
Reinforcement learning is a machine learning approach that has been studied for many years, but interest in the topic has grown exponentially in recent times. Its purpose is to create autonomous agents able to sense and act in their environment. They should learn to choose optimal actions to achieve their goals, in order to maximise a cumulative reward. Aggregate programming is a paradigm that supports the large-scale programming of adaptive systems by focusing on the behaviour of the whole cluster instead of individual devices. One promising aggregate programming approach is based on the field calculus, which allows the definition of aggregate programs by the functional composition of computational fields. A topic of interest related to Aggregate Computing is computer security. Aggregate Computing systems are, in fact, vulnerable to security threats due to their distributed nature, situatedness and openness, which allow participant nodes to leave and join the computation at any time. A solution combining reinforcement learning, aggregate computing and security would be an interesting and innovative approach, especially because no experiments so far include this combination. The goal of this thesis is to implement a Scala library for reinforcement learning which can be easily integrated with the aggregate computing context. Starting from an existing work on trust computation in aggregate applications, we want to train a network, via reinforcement learning, which through the calculation of the gradient, a fundamental pattern of collective coordination, is able to identify and discriminate compromised nodes. The dissertation work focused on: 1. development of a generic Scala library that implements the reinforcement approach, in accordance with an aggregate computing model; 2. development of a reinforcement learning based solution; 3. integration of the solution that allows us to compute the trust gradient.
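The thesis implements its learning library in Scala, but the tabular Q-learning update at its core can be sketched in a few lines. The toy environment below, flagging a node as trusted or compromised from an observed gradient deviation, is hypothetical and only illustrates the update rule.

```python
# Illustration only: tabular Q-learning on a toy, single-step decision problem
# (flag a node as trusted or compromised from an observed gradient deviation).
# The environment is hypothetical; the thesis library itself is written in Scala.

import random
from collections import defaultdict

actions = ["trust", "flag_compromised"]
Q = defaultdict(float)                 # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2


def reward(state: str, action: str) -> float:
    # Hypothetical: flagging a node whose reported gradient deviates a lot is rewarded.
    correct = (state == "large_deviation") == (action == "flag_compromised")
    return 1.0 if correct else -1.0


for _ in range(5000):
    state = random.choice(["small_deviation", "large_deviation"])
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    # Single-step episodes: the next state is terminal, so its value is 0.
    Q[(state, action)] += alpha * (r + gamma * 0.0 - Q[(state, action)])

print({k: round(v, 2) for k, v in Q.items()})
```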
APA, Harvard, Vancouver, ISO, and other styles