Dissertations / Theses on the topic 'Computing and Mathematical Sciences'

Consult the top 50 dissertations / theses for your research on the topic 'Computing and Mathematical Sciences.'


1

Theron, Piet. "Criteria for the evaluation of private cloud computing." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85858.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Cloud computing is seen by leading research analysts as one of the top 10 disruptive changes in IT for the next decade. Consequently, enterprises are starting to investigate the effect it will have on the strategic direction of their businesses and technology stacks. Because of the disruptive nature of the paradigm shift it introduces, as well as its strategic impact, a structured approach with regard to risk, value and operational cost must be followed when deciding on its relevance and, if needed, selecting a platform. The purpose of this thesis is to provide a reference model, and an associated framework, that can be used to evaluate private cloud management platforms and the technologies associated with them.
AFRIKAANSE OPSOMMING (translated from Afrikaans): Cloud computing is regarded by leading research analysts as one of the top 10 disruptive changes for IT in the next decade. Consequently, corporate enterprises are beginning to investigate what its influence on their strategic direction and technologies will be. The disruptive nature of the paradigm shift, as well as its strategic impact, necessitates a structured investigation, with regard to risk, value and operational cost, into its applicability and the choice of a platform, if needed. The purpose of this thesis is to compile a reference model, and a framework that implements it, which can then be used to evaluate private cloud computing platforms.
APA, Harvard, Vancouver, ISO, and other styles
2

Nivens, Ryan Andrew. "Computing in STEM." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/239.

Full text
3

Burlutskiy, Nikolay. "Prediction of user behaviour on the Web." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/7ad2ede5-c7e3-4f99-ba68-ef257dc2387a.

Full text
Abstract:
The Web has become a ubiquitous environment for human interaction, communication, and data sharing. As a result, large amounts of data are produced. This data can be utilised to build predictive models of user behaviour in order to support business decisions. However, the fast pace of modern businesses is putting pressure on industry to provide faster and better decisions. This thesis addresses this challenge by proposing a novel methodology for efficient prediction of user behaviour. The problems concerned are: (i) modelling user behaviour on the Web, (ii) choosing and extracting features from data generated by user behaviour, and (iii) choosing a Machine Learning (ML) set-up for efficient prediction. First, a novel Time-Varying Attributed Graph (TVAG) is introduced and a TVAG-based model for modelling user behaviour on the Web is proposed. TVAGs capture temporal properties of user behaviour through the time-varying component of the features of their nodes and edges. Second, the proposed model allows the extraction of features for further ML predictions. However, extracting the features and building the model may be an unacceptably hard and long process. Thus, a guideline for efficient feature extraction from the TVAG-based model is proposed. Third, a method for choosing an ML set-up to build an accurate and fast predictive model is proposed and evaluated. Finally, a deep learning architecture for predicting user behaviour on the Web is proposed and evaluated. To sum up, the main contribution to knowledge of this work is in developing the methodology for fast and efficient predictions of user behaviour on the Web. The methodology is evaluated on datasets from several Web platforms, namely Stack Exchange, Twitter, and Facebook.
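The thesis's formal TVAG definition is not reproduced in this listing; as a rough illustrative sketch (all names and the encoding are invented for illustration, not taken from the thesis), a time-varying attributed graph can be modelled as a graph whose node attributes and edges are time-stamped, so that a feature snapshot can be taken at any time t:

```python
import bisect
from collections import defaultdict

class TVAG:
    """Sketch of a Time-Varying Attributed Graph: nodes and edges carry
    attributes whose values change over time."""

    def __init__(self):
        # (node, attr) -> time-ordered list of (timestamp, value)
        self._node_attrs = defaultdict(list)
        self._edges = set()          # (u, v, creation_time)

    def set_node_attr(self, node, attr, t, value):
        history = self._node_attrs[(node, attr)]
        history.append((t, value))
        history.sort(key=lambda p: p[0])

    def add_edge(self, u, v, t):
        self._edges.add((u, v, t))

    def node_attr_at(self, node, attr, t):
        """Most recent value of a node attribute at or before time t."""
        history = self._node_attrs[(node, attr)]
        times = [ts for ts, _ in history]
        i = bisect.bisect_right(times, t) - 1
        return history[i][1] if i >= 0 else None

    def degree_at(self, node, t):
        """Number of incident edges created at or before time t."""
        return sum(1 for (u, v, ts) in self._edges
                   if ts <= t and node in (u, v))
```

Features such as `node_attr_at` and `degree_at`, evaluated at a sequence of timestamps, would then feed a downstream ML predictor.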
4

Akhir, Emelia Akashah Patah. "The implementation of information strategies to support sustainable procurement." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/7d32bc16-2943-4228-8aef-3bbe9d0a58c8.

Full text
Abstract:
In our research context, sustainable procurement can be seen as a process to reduce damage to the environment by integrating certain aspects into procurement decisions, such as value for money throughout the whole life cycle and benefit to society and the economy. This research has found more than one way of interpreting the 'sustainable system', for example, 'green-friendly' versus remaining effective in the long term. Sustainable procurement requires specific information to support the procurement process. The study reported in this thesis aimed to investigate the type of information organisations need in order to make sound sustainable procurement decisions. From these findings, an information architecture for sustainable procurement in UK universities has been derived. While the initial focus has been on the information needed to make informed decisions in purchasing sustainable information technology (IT) equipment, it is believed that the framework would also be more widely applicable to other types of purchases. To ensure that these findings would support universities' aspirations in terms of sustainability practices, a goal-context modelling technique called VMOST/B-SCP was chosen to analyse the sustainable procurement strategy in order to evaluate the alignment of IT strategy and business strategy. A goal-context model using VMOST/B-SCP was produced to evaluate the procurement strategy, and was validated by procurement staff. This research helps to improve the way that goals and context are identified by integrating another technique, namely social network analysis (SNA), to produce actor network diagrams. The VMOST/B-SCP technique is transferrable to the mapping of action strategies. The findings from goal-context modelling show that a goal-context model is not static: it changes as external circumstances and organisational priorities change.
Most changes to the strategy occurred where external entities on which the change programme depended did not act as planned. The actor networks produced in our version of VMOST/B-SCP can be used to identify such risks. This research was pioneering in its use of VMOST/B-SCP in examining a business change while it was actually taking place rather than after it had been completed (and thus needed to accommodate changes in objectives and strategies). In addition, the research analysed a system with some IT support but where human-operated procedures predominated. The original B-SCP framework used Jackson's problem frames, which focus on possible software components: in our scenario, SNA-inspired actor diagrams were found to be more appropriate.
5

Webber, Thomas. "Methods for the improvement of power resource prediction and residual range estimation for offroad unmanned ground vehicles." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/0fa4a3b9-bb71-413a-9b0e-ed0e1574225a.

Full text
Abstract:
Unmanned Ground Vehicles (UGVs) are becoming more widespread in their deployment. Advances in technology have improved not only their reliability but also their ability to perform complex tasks. UGVs are particularly attractive for operations that are considered unsuitable for human operatives. These include dangerous operations such as explosive ordnance disarmament, as well as situations where human access is limited, including planetary exploration or search and rescue missions involving physically small spaces. As technology advances, UGVs are gaining increased capabilities and commensurate increased complexity, allowing them to participate in an increasingly wide range of scenarios. UGVs have limited power reserves that can restrict a UGV's mission duration and also the range of capabilities that it can deploy. As UGVs tend towards increased capabilities and complexity, extra burden is placed on the already stretched power resources. Electric drives and an increasing array of processors, sensors and effectors all need sufficient power to operate. Accurate prediction of mission power requirements is therefore of utmost importance, especially in safety-critical scenarios where the UGV must complete an atomic task or risk the creation of an unsafe environment due to failure caused by depleted power. Live energy prediction for vehicles that traverse typical road surfaces is a well-researched topic. However, this is not sufficient for modern UGVs as they are required to traverse a wide variety of terrains that may change considerably with prevailing environmental conditions. This thesis addresses the gap by presenting a novel approach to both off- and on-line energy prediction that considers the effects of weather conditions on a wide variety of terrains. The prediction is based upon nonlinear polynomial regression using live sensor data to improve upon the accuracy provided by current methods.
The new approach is evaluated and compared to existing algorithms using a custom 'UGV mission power' simulation tool. The tool allows the user to test the accuracy of various mission energy prediction algorithms over specified mission routes that include a variety of terrains and prevailing weather conditions. A series of experiments that test and record the 'real world' power use of a typical small electric-drive UGV are also performed. The tests are conducted for a variety of terrains and weather conditions and the empirical results are used to validate the results of the simulation tool. The new algorithm showed a significant improvement compared with current methods, which will allow UGVs deployed in real-world scenarios, where they must contend with a variety of terrains and changeable weather conditions, to make accurate energy-use predictions. This enables more capabilities to be deployed with a known impact on remaining mission power requirements, more efficient mission durations through avoiding the need to maintain excessive estimated power reserves, and increased safety through reduced risk of aborting atomic operations in safety-critical scenarios. As a supplementary contribution, this work created a power resource usage and prediction test-bed UGV and resulting datasets, as well as a novel simulation tool for UGV mission energy prediction. The tool implements a UGV model with accurate power use characteristics, confirmed by an empirical test series. The tool can be used to test a wide variety of scenarios and power prediction algorithms and could be used for the development of further mission energy prediction technology or as a mission energy planning tool.
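The thesis's actual regression models and sensor features are not given in this listing; a minimal sketch of the underlying idea — fitting a polynomial power model to observed samples by least squares and using it for prediction — might look like the following, where the (speed, power) data and the degree are invented purely for illustration:

```python
def design_row(x, degree):
    # Polynomial feature vector [1, x, x^2, ..., x^degree].
    return [x ** d for d in range(degree + 1)]

def fit_polynomial(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (X^T X) a = X^T y, solved by Gaussian elimination."""
    n = degree + 1
    X = [design_row(x, degree) for x in xs]
    A = [[sum(X[k][i] * X[k][j] for k in range(len(xs)))
          for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / A[i][i]
    return coeffs

def predict(coeffs, x):
    return sum(c * x ** d for d, c in enumerate(coeffs))

# Invented samples: power draw (W) at various speeds (m/s) on one terrain.
speeds = [0.0, 1.0, 2.0, 3.0, 4.0]
power = [2.0, 5.5, 10.0, 15.5, 22.0]
model = fit_polynomial(speeds, power, degree=2)
```

A live on-line variant would refit (or update) the coefficients as new sensor samples arrive, one model per terrain/weather combination.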
6

Peacock, Chloe. "'Double distinction' : an analysis of consumer participation in Apple branding." Thesis, University of Brighton, 2013. https://research.brighton.ac.uk/en/studentTheses/6c41abda-4d97-40c4-8b34-ae18a9655ceb.

Full text
Abstract:
This thesis aimed to understand the relationship between the Apple brand and Apple consumers. It presents a historical semiotic analysis of a selection of Apple brand materials from 1978 to 2009, together with in-depth interviews with Apple consumers. The interviews were analysed thematically, looking at the ways participants employed Apple in the construction of identity. The thesis extends theoretical critical approaches to branding with the inclusion of participant interviews. Approaches to branding consider the role of consumers in brand production and ownership, but this thesis moves the focus beyond abstraction to interrogate how much of consumer participation is predetermined by the brand. This was achieved by examining the ways in which brand consumers articulate the brand. The findings showed that Apple consumers distinguish themselves from non-Apple consumers but, significantly, they also made a second distinction. For the first distinction, Apple consumers articulated emotional investment, superior aesthetic taste, and feelings of being part of an exclusive community. The second distinction is an articulation of uniqueness within the Apple community. This is achieved by creating a sense of critical distance from consumption via individual lifestyle and taste.
7

Fallahkhair, Sanaz. "Development of a cross platform support system for language learners via interactive television and mobile phone." Thesis, University of Brighton, 2009. https://research.brighton.ac.uk/en/studentTheses/9c5e53bd-2010-4edc-af61-f2e0ba77bda8.

Full text
Abstract:
This thesis explores and develops the potential of interactive television (iTV) technology for language learning. Through a modified form of the socio-cognitive engineering approach (Sharples et al., 2002a), a range of learner-centred design activities were carried out and a system was developed to provide cross-platform support, blending iTV and mobile phones, for adult language learners.
8

Hulshof, Ana Vitoria Joly. "Interactive television for young children : developing design principles." Thesis, University of Brighton, 2010. https://research.brighton.ac.uk/en/studentTheses/c20c561b-b374-460d-b48b-66ec3cc58729.

Full text
Abstract:
The research reported in this thesis investigates preschoolers' interactions with interactive television applications. The study involved the development of an electronic programme guide prototype and the empirical evaluation thereof. There were three main aims. The first aim was to analyse children's interactions and illustrate them in a framework to further understanding of the way preschoolers interact with the television. The second aim was to contribute design principles for preschool interactive television, and the third aim was to refine methods and add to the knowledge of design and evaluation techniques involving young children.
9

de Souza Pereira Candello, Heloisa Caroline. "Design for outdoor mobile multimedia : representation, content and interactivity for mobile tourist guides." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/0de623b2-11d7-462b-aa8b-06433c9f78e7.

Full text
Abstract:
The research reported in this thesis explores issues of information design for mobile devices, in particular those relating to the selection and presentation of on-screen information and interactive functionality for users of mobile phones. The example domain is that of mobile tour guides for tourists, local people, students and families. Central to the research is the issue of multimodality, particularly the graphic and interaction design issues involved in viewing video, in combination with other media, on a mobile device, in an outdoor context. The study produced three main results: 1. An analytical framework for user-experience concerns in cultural heritage settings, 2. Design recommendations for outdoor mobile multimedia guides and 3. Refinements in methods for collecting and analysing data from fieldwork with visitors in cultural heritage settings. Those results were formulated for the use of mobile guide designers. The methodology used to inform and structure the work was Design Research, involving literature review and empirical work, including user trials of a prototype tourist guide developed in the project. The literature review covered areas of tourism, multimedia design, mobile HCI and existing mobile guides. Outdoor fieldwork exercises were carried out with three different cultural information sources - human tour guide, paper-based guide and mobile guide app - in order to identify any problems that visitors might have and to gather requirements for the development of a mobile cultural guide. Qualitative analysis was applied to the video observations and questionnaires completed during the tours. Requirements were grouped and analysed to give substantial information for a conceptual design. Personas and scenarios were created based on real participants and situations that occurred on the tours. A mobile guide prototype was developed and evaluated in the field with visitors.
Qualitative analysis and descriptive statistics were used to analyse the data. Visitors were asked about their preferences among various multimedia design elements and answered a questionnaire on their experience. The elements that affect the user experience with outdoor mobile guides were categorised and organised into a framework. It became apparent that users' experience of technology (in this case the mobile tourist guide) and environment is affected by context, content and look-and-feel elements. This framework of user experience generated a design toolkit with a collection of recommendations for designers of such systems. The recommendations are described in the context of usage and have a rating system with strength of evidence and confidence based on how often they appeared in the fieldwork and the solutions tested.
10

Delaney, Aidan. "Defining star-free regular languages using diagrammatic logic." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/d1c53bda-f520-4807-9de9-8de12eda3d9e.

Full text
Abstract:
Spider diagrams are a recently developed visual logic that make statements about relationships between sets, their members and their cardinalities. By contrast, the study of regular languages is one of the oldest active branches of computer science research. The work in this thesis examines the previously unstudied relationship between spider diagrams and regular languages. In this thesis, the existing spider diagram logic and the underlying semantic theory is extended to allow direct comparison of spider diagrams and star-free regular languages. Thus it is established that each spider diagram defines a commutative star-free regular language. Moreover, we establish that every commutative star-free regular language is definable by a spider diagram. From the study of relationships between spider diagrams and commutative star-free regular languages, an extension of spider diagrams is provided. This logic, called spider diagrams of order, increases the expressiveness of spider diagrams such that the language of every spider diagram of order is star-free and regular, but not necessarily commutative. Further results concerning the expressive power of spider diagrams of order are gained through the use of a normal form for the diagrams. Sound reasoning rules which take a spider diagram of order and produce a semantically equivalent diagram in the normal form are provided. A proof that spider diagrams of order define precisely the star-free regular languages is subsequently presented. Further insight into the structure and use of spider diagrams of order is demonstrated by restricting the syntax of the logic. Specifically, we remove spiders from spider diagrams of order. We compare the expressiveness of this restricted fragment of spider diagrams of order with the unrestricted logic.
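The diagrammatic syntax itself cannot be reproduced in this listing, but the flavour of statement a spider diagram makes — lower bounds on the cardinality of zones, i.e. regions built from set intersections and complements — can be sketched abstractly. The encoding below is a deliberate simplification invented for illustration, not the thesis's formalism:

```python
def zone_region(universe, sets, in_names, out_names):
    """Elements inside every 'in' set and outside every 'out' set."""
    region = set(universe)
    for name in in_names:
        region &= sets[name]
    for name in out_names:
        region -= sets[name]
    return region

def satisfies(universe, sets, constraints):
    """Read a (simplified) diagram as a conjunction of lower bounds:
    each constraint (ins, outs, k) asserts |zone(ins, outs)| >= k,
    roughly what placing k spiders in that zone would assert."""
    return all(
        len(zone_region(universe, sets, ins, outs)) >= k
        for (ins, outs, k) in constraints
    )
```

For example, with A = {1, 2} and B = {2}, a diagram placing one spider in "A but not B" and one in "A and B" is satisfied, whereas one demanding an element in "B but not A" is not.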
11

Lourenco Cardosa, Tiago José Peres. "Port fuel injection strategies for a lean burn gasoline engine." Thesis, University of Brighton, 2011. https://research.brighton.ac.uk/en/studentTheses/70452fa8-4e63-42dd-8403-05dc6d8d4d60.

Full text
Abstract:
A spark ignition (SI) engine operating with a lean burn has the potential for higher thermal efficiency and lower nitrogen oxide emissions than stoichiometric operation. However, a lean or highly diluted mixture leads to poor combustion stability, impacting detrimentally upon engine performance. An experimental investigation was carried out on a 4-valve single-cylinder gasoline engine with a split intake tract and two identical production port-fuel injectors installed, allowing independent fuel delivery to each intake valve. The main objective of the study was to extend the limit of lean combustion through the introduction of charge stratification. Novel port fuel injection strategies, such as dual split injection, multiple injections and phased injection, were developed to achieve this goal. In parallel, a model of the engine was developed in the Ricardo WAVE software. The model was used to calculate parameters such as in-cylinder residual gas for different test points. Combustion stability was improved for the engine conditions tested. At 1000 rpm and 1.0 bar gross indicated mean effective pressure (GIMEP), the lean combustion limit was extended from a 14:1 air-to-fuel ratio (AFR) to 17.5:1. At 1500 rpm and 1.5 bar GIMEP the lean combustion limit was extended from 17.5:1 to approximately 21:1 AFR. Finally, for 1800 rpm and 1.8 bar GIMEP, lean combustion was improved from 21:1 AFR to 22:1. An experimental spark plug, with an infrared detector, was used to measure the variation in fuel distribution at the spark plug gap. It showed that the different fuel injection strategies generated different levels of fuel concentration. It was identified that injections in a single port created fuel stratification in the spark plug area but were more prone to cycle-to-cycle variations in fuel concentration. These variations did not correlate with combustion stability or flame propagation speed at the speeds and loads tested.
The most important parameter to influence the flame propagation speed was found to be the variation in local lambda with crank angle just after the ignition timing. It was shown that the fastest flame propagation speeds did not necessarily result in the lowest CoV in GIMEP. Finally the fuel injection strategies were investigated for highly dilute conditions, achieved by means of internal residual gas trapping, with the aim of promoting (spark-assisted) compression ignition combustion conditions.
12

Yin, Ling. "The theory of extended topic and its application in information retrieval." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/957adc51-7be2-45a3-8207-f07831f7310e.

Full text
Abstract:
This thesis analyses the structure of natural language queries to document repositories, with the aim of finding better methods for information retrieval. The exponential increase of information on the Web and in other large document repositories during recent decades motivates research on facilitating the process of finding relevant information to meet end users' information needs. A shared problem among several related research areas, such as information retrieval, text summarisation and question answering, is to derive concise textual expressions to describe what a document is about, to function as the bridge between queries and the document content. In current approaches, such textual expressions are typically generated by shallow features, for example, by simply selecting a few of the most frequently occurring key words. However, such approaches are inadequate to generate expressions that truly resemble user queries. The study of what a document is about is closely related to the widely discussed notion of topic, which is defined in many different ways in theoretical linguistics as well as in practical natural language processing research. We compare these different definitions and analyse how they differ from user queries. The main function of a query is that it defines which facts are relevant in some underlying knowledge base. We show that, to serve this purpose, queries are typically formulated by first (a) specifying a focused entity and then (b) defining a perspective from which the entity is approached. For example, in the query 'history of Britain', 'Britain' is the focused entity and 'history' is the perspective. Existing theories of topic often focus on (a) and leave out (b). We develop a theory of extended topic to formalise this distinction. We demonstrate the distinction in experiments with real life topic expressions, such as WH-questions and phrases describing plans of academic papers.
The theory of extended topic could be applied to help various application areas, including knowledge organisation and generating titles, etc. We focus on applying the theory to the problem of information retrieval from a document repository. Currently typical information retrieval systems retrieve relevant documents to a query by counting numbers of key word matches between a document and the query. This approach is better suited to retrieving the focused entities than the perspectives. We aim to improve the performance of information retrieval by providing better support for perspectives. To do so, we further subdivide the perspectives into different types and present different approaches to addressing each type. We illustrate our approaches with three example perspectives: 'cause', 'procedure' and 'biography'. Experiments on retrieving causal, procedural and biographical questions achieve better results than the traditional key-word-matching-based approach.
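The retrieval systems discussed are not specified in this listing, but the baseline the thesis aims to improve on — scoring documents by counting keyword matches, which cannot tell a focused entity ('Britain') apart from a perspective ('history') — can be sketched as follows. The cue-word perspective detector is an invented toy stand-in, not the thesis's method:

```python
def keyword_score(query, document):
    """Baseline retrieval: count how many query terms occur in the document."""
    doc_terms = set(document.lower().split())
    return sum(1 for term in query.lower().split() if term in doc_terms)

# Toy cue words for the three example perspectives named in the abstract.
PERSPECTIVE_CUES = {
    "cause": {"why", "cause", "causes", "reason"},
    "procedure": {"how", "steps", "procedure", "method"},
    "biography": {"who", "biography", "life", "born"},
}

def detect_perspective(query):
    """Guess the perspective of a query from surface cue words,
    returning None when no cue is present."""
    terms = set(query.lower().split())
    for perspective, cues in PERSPECTIVE_CUES.items():
        if terms & cues:
            return perspective
    return None
```

A perspective-aware retriever could then route a query classified as 'cause', 'procedure' or 'biography' to a type-specific answering strategy rather than relying on term overlap alone.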
13

Rodriguez, Triguero Camino. "Low-spin states in 102-108Zr in the Interacting Boson Model context." Thesis, University of Brighton, 2013. https://research.brighton.ac.uk/en/studentTheses/a58d0300-c783-4b7d-b266-6d3bca37b2ad.

Full text
Abstract:
The region of the nuclear chart around A~100 is an area of structural change where different shapes coexist and is therefore an interesting place to study structural evolution and test nuclear models. Among the elements that populate this region, zirconium is one which is expected to present well-deformed states, but for which little experimental data has been measured so far. The structure of the 102-108Zr nuclei has been studied using the Interacting Boson Model (IBM). Energy states and transition probabilities have been predicted and tested using the limited amount of existing experimental data. However, the results of these calculations produced several possibilities, so knowledge about non-yrast states is needed in order to deepen the understanding of the structural changes in zirconium nuclei. Therefore a series of experiments to measure non-yrast states of 102-108Zr is required. A new technique for separating different states of nuclei has been developed and tested at the University of Jyvaskyla, using the IGISOL III facility, for the known case of 100Nb β-decay into 100Mo. This technique has been successfully extended to allow the separate study of the gamma-ray decay of states populated by the different parent states. Lower spin states of 102-108Zr are populated via beta-decay from 102-108Y. In order to measure the non-yrast states of 102-108Zr, post-trap online spectroscopy will be used at IGISOL IV. IGISOL IV is the improved version of IGISOL III and is currently under construction. Part of my Ph.D. consisted of helping with the development of IGISOL IV; the improvements of this facility are explained in this thesis alongside its operation and several tests performed during 2012.
14

Clark, Robin Philip. "Failure mode modular de-composition." Thesis, University of Brighton, 2013. https://research.brighton.ac.uk/en/studentTheses/b42594c5-2ed1-4d78-a481-0ed91bbf7943.

Full text
Abstract:
The certification process of safety-critical products for European and other international standards typically demands environmental stress, endurance and electromagnetic compatibility testing. Theoretical, or 'static', testing is also a requirement. Failure Mode Effects Analysis (FMEA) is a tool used for static testing. FMEA is a bottom-up technique that aims to assess the effects of all component failure modes in a system. Its use is traditionally limited to hardware systems. With the growing complexity of modern electronics, traditional FMEA is suffering from state explosion and from problems with re-use of analysis. Also, with the now ubiquitous use of microcontrollers in smart instruments and control systems, software is increasingly being seen as a 'missing factor' for FMEA. This thesis presents a new modular variant of FMEA, Failure Mode Modular De-composition (FMMD). FMMD has been designed to integrate mechanical/electronic and software failure models by treating them all as components in terms of their failure modes. For instance, software functions, electronic and mechanical components can all be assigned sets of failure modes. FMMD builds failure mode models from the bottom up by incrementally analysing functional groupings of components, using the results of analysis to create higher-level derived components, which in turn can be used to build further functional groupings. In this way a hierarchical failure mode model is built. Software functions are treated as components by FMMD and can thus be incorporated seamlessly into the hierarchical failure mode model. A selection of examples, electronic circuits and hardware/software hybrids, are analysed using this new methodology. The results of these analyses are then discussed from the perspective of safety-critical application. Performance in terms of test efficiency is greatly improved by FMMD, and the examples analysed and theoretical models are used to demonstrate this.
This thesis presents a methodology that mitigates the state explosion problems of FMEA; provides integrated hardware and software failure mode models; facilitates multiple failure mode analysis; encourages re-use of analysis work and can be used to produce traditional format FMEA reports.
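FMMD's exact analysis rules are not given in this listing; as a hedged sketch of the bottom-up idea — treating hardware and software alike as components with failure-mode sets, and analysing a functional grouping into a derived component whose failure modes are the grouping's symptoms — one might write (all names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """Anything with a set of failure modes: a resistor, an op-amp,
    a software function, or a derived component from earlier analysis."""
    name: str
    failure_modes: frozenset

def analyse_group(name, components, symptom_map):
    """Analyse a functional grouping: every failure mode of every member
    must be mapped to a symptom of the grouping (enforcing completeness);
    the symptoms become the failure modes of the new derived component."""
    for c in components:
        for fm in c.failure_modes:
            if (c.name, fm) not in symptom_map:
                raise ValueError(f"unanalysed failure mode: {c.name}/{fm}")
    symptoms = frozenset(symptom_map.values())
    return Component(name, symptoms)

# A resistor and a software function analysed together into one
# derived component with fewer failure modes than its members combined.
r1 = Component("R1", frozenset({"OPEN", "SHORT"}))
adc = Component("adc_read", frozenset({"BAD_VALUE"}))
stage = analyse_group("ADC_STAGE", [r1, adc], {
    ("R1", "OPEN"): "NO_READING",
    ("R1", "SHORT"): "OUT_OF_RANGE",
    ("adc_read", "BAD_VALUE"): "OUT_OF_RANGE",
})
```

Because the derived component is itself a `Component`, it can join later groupings, which is what makes the model hierarchical and the per-stage analyses re-usable.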
15

Andone, Diana Maria. "Designing elearning spaces for higher education students of the digital generation." Thesis, University of Brighton, 2011. https://research.brighton.ac.uk/en/studentTheses/e6957a8f-9f3c-4323-ac1c-1bc7661cbbfe.

Full text
Abstract:
The main aim of this research project is to investigate the relationship between students and their electronic learning environments and, in particular, how eLearning spaces influence and are influenced by the adaptable and adaptive learning attitudes of the new student generation. In particular, it focuses on what I define as 'digital students': young adult students who have grown up with active participation in technology as an everyday feature of their lives. The characteristics of these technologically confident digital students were found to include a strong need for instantaneity, a desire to control their environment, and the channelling of their social life via extensive use of technology.
16

Denis Bacelar, Ana Maria. "Isomeric ratios of high-spin states in neutron-deficient N≈126 nuclei produced in projectile fragmentation reactions." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/62edb7eb-7e42-4e1e-be42-6926ccf600d0.

Full text
Abstract:
The population of high-spin isomeric states in neutron-deficient N≈126 nuclei has been studied in order to further understand the reaction mechanism of projectile fragmentation. The nuclei of interest were populated following projectile-fragmentation of a 1 GeV/A 238U beam on a 9Be target at GSI, Germany. The reaction products were selected and separated in the FRS FRagment Separator and brought to rest in an 8 mm plastic stopper placed at the focus of the RISING gamma-ray detector array. The results on the development of an add-back method for the RISING array are presented and discussed for source and in-beam data.
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Huan. "Survival analysis for censored data under referral bias." Thesis, University of Brighton, 2014. https://research.brighton.ac.uk/en/studentTheses/5b39ddc3-1c64-4dd2-8182-a4014c6b97b6.

Full text
Abstract:
This work arises from a hepatitis C cohort study and focuses on estimating the effects of covariates on progression to cirrhosis. In hepatitis C cohort studies, patients may be recruited to the cohort with referral bias because clinically the patients with more rapid disease progression are preferentially referred to liver clinics. This referral bias can lead to significantly biased estimates of the effects of covariates on progression to cirrhosis.
APA, Harvard, Vancouver, ISO, and other styles
18

Ahmedshareef, Zana. "Controlling schedule duration during software project execution." Thesis, University of Brighton, 2015. https://research.brighton.ac.uk/en/studentTheses/819fb81b-e3c1-40ce-bad9-f44308fdbc79.

Full text
Abstract:
This thesis describes a method of identifying the influences on schedule delays in projects that develop large software systems. Controlling schedule duration is a fundamental aspect of managing projects because of the financial losses associated with late projects. While challenges with controlling software projects have been investigated, there still seemed to be more to learn about the interplay of factors during project execution that affect project duration when developing and integrating software systems within an enterprise architecture environment.
APA, Harvard, Vancouver, ISO, and other styles
19

Thomeczek, Gregor. "Data centric resource and capability management in modern network enabled vehicle fleets." Thesis, University of Brighton, 2015. https://research.brighton.ac.uk/en/studentTheses/36663467-e75e-4c04-bb60-fe5c2062d404.

Full text
Abstract:
The objective of this thesis is to improve battlefield communications capability through improved management of existing platform and fleet level resources. Communication is a critical capability for any platform node deployed on a modern battlefield and enables vital Network Enabled Capabilities (NEC). However, the dynamicity and unpredictability of wireless battlefield networks, as well as the constant threat of equipment damage, make wireless battlefield networks inherently unreliable, and as such the provision of stable communication represents a significant technology management challenge. Fulfilling the increasingly complex communications requirements of diverse platform types in a chaotic and changing battlefield environment requires the use of novel Resource and Capability Management Algorithms (RCMA), informed by application level context data, to manage limited heterogeneous resources at the platform and the fleet level while fulfilling current mission goals.
APA, Harvard, Vancouver, ISO, and other styles
20

Winter, Marcus. "A design space for social object labels in museums." Thesis, University of Brighton, 2016. https://research.brighton.ac.uk/en/studentTheses/73a2c271-e987-4226-a7b0-3031a824f6d7.

Full text
Abstract:
Taking a problematic user experience with ubiquitous annotation as its point of departure, this thesis defines and explores the design space for Social Object Labels (SOLs), small interactive displays aiming to support users' in-situ engagement with digital annotations of physical objects and places by providing up-to-date information before, during and after interaction. While the concept of ubiquitous annotation has potential applications in a wide range of domains, the research focuses in particular on SOLs in a museum context, where they can support the institution's educational goals by engaging visitors in the interpretation of exhibits and providing a platform for public discourse to complement official interpretations provided on traditional object labels. The thesis defines and structures the design space for SOLs, investigates how they can support social interpretation in museums and develops empirically validated design recommendations. Reflecting the developmental character of the research, it employs Design Research as a methodological framework, which involves the iterative development and evaluation of design artefacts together with users and other stakeholders. The research identifies the particular characteristics of SOLs and structures their design space into ten high-level aspects, synthesised from taxonomies and heuristics for similar display concepts and complemented with aspects emerging from the iterative design and evaluation of prototypes. It presents findings from a survey exploring visitors' mental models, preferences and expectations of commenting in museums and translates them into requirements for SOLs. It reports on scenario-based design activities, expert interviews with museum professionals, formative user studies and co-design sessions, and two empirical evaluations of SOL prototypes in a gallery environment. 
Pulling together findings from these research activities, it then formulates design recommendations for SOLs and supports them with related evidence and implementation examples. The main contributions are (i) to delineate and structure the design space for SOLs, which helps to ground SOLs in the literature and understand them as a distinct display concept with its own characteristics; (ii) to explore, for the first time, a visitor perspective on commenting in museums, which can inform research, development and policies on user-generated content in museums and the wider cultural heritage sector; (iii) to develop empirically validated design recommendations, which can inform future research and development into SOLs and related display concepts. The thesis concludes by summarising findings in relation to its stated research questions, restating its contributions from ubiquitous computing, domain and methodology perspectives, and discussing open issues and future work.
APA, Harvard, Vancouver, ISO, and other styles
21

Burton, James. "Generalized constraint diagrams : the classical decision problem in a diagrammatic reasoning system." Thesis, University of Brighton, 2011. https://research.brighton.ac.uk/en/studentTheses/e3e0410c-eba7-41b7-9baa-867ce6125e6b.

Full text
Abstract:
Constraint diagrams are part of the family of visual logics based on Euler diagrams. They have been studied since the 1990s, when they were first proposed by Kent as a means of describing formal constraints within software models. Since that time, constraint diagrams have evolved in a number of ways; a crucial refinement came with the recognition of the need to impose a reading order on the quantifiers represented by diagrammatic syntax. This resulted first in augmented constraint diagrams and, most recently, generalized constraint diagrams (GCDs), which are composed of one or more unitary diagrams in a connected graph. The design of GCDs includes several syntactic features that bring increased expressivity but which also make their metatheory more complex than is the case with preceding constraint diagram notations. In particular, GCDs are given a second order semantics.
APA, Harvard, Vancouver, ISO, and other styles
22

Newman, Philip Ryan. "Ion trajectories at collisionless shocks in space plasmas." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/7517e6c2-ff20-41bd-9203-5dca3cf60e2f.

Full text
Abstract:
The thesis investigates ion behaviour at collisionless shocks, with a focus on two areas of interest. The first area concerns the reflection of particles from collisionless shocks, a necessary mechanism for thermalization at a shock at sufficiently high Mach numbers such as ordinarily prevail at the Earth's bow shock. Previous studies have examined the trajectories of reflected ions with the assumption of a planar shock. In this study, a general framework is developed to describe the trajectory of an ion after reflection, with application to a variety of shock geometries. The conditions allowing an ion to return to the shock after reflection and to return with an increased normal velocity are studied, with three primary parameters considered: the radius of curvature, the magnetic field orientation, and the incident velocity in the shock normal direction. Each of these parameters depends on the shape of the shock and the location of incidence. Results are reported for cylindrical, spherical, and parabolic shock geometries, over ranges of shock curvatures, magnetic field orientations, and incident velocities. Second, we consider the thermalization of the ion distribution initially transmitted through the shock under low Mach number conditions, where reflection is a less significant contributor to thermalization. Previous work has considered the phase area invariant in an exactly perpendicular case. This is generalized to a quasi-perpendicular shock, and invariants of the flow are determined for a Hamiltonian formulation. The evolution of the distribution through the shock is then studied analytically and numerically. Results regarding the shape of phase shells of constant probability, the phase volume within these shells, and the temperature of the distribution are given.
APA, Harvard, Vancouver, ISO, and other styles
23

Holm, Marcus. "Scientific computing on hybrid architectures." Licentiate thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-200242.

Full text
Abstract:
Modern computer architectures, with multicore CPUs and GPUs or other accelerators, make stronger demands than ever on writers of scientific code. As a rule of thumb, the fastest, most efficient program consists of labor-intensive code written by expert programmers for a certain application on a particular computer. This thesis deals with several algorithmic and technical approaches towards effectively satisfying the demand for high-performance parallel programming without incurring such a high cost in expert programmer time. Effective programming is accomplished by writing performance-portable code in which performance-critical functionality is provided by external software, or at least a balance is struck between maintainability/generality and efficiency.
UPMARC
eSSENCE
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Full text
Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects - it can be continuous, stochastic and adaptive - and retains TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of a "trainable information processor" (TIP) - a parameterised stochastic mapping with a rule to change the parameters - is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network.
Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
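As a rough illustration of the role simulated annealing plays in such a learning rule (a toy one-dimensional sketch, not Zhu's actual network formulation), Metropolis-style acceptance lets occasional uphill moves carry the search out of the local optima that trap pure gradient following:

```python
# Toy illustration (not the thesis's rule): minimise a multimodal "loss"
# over a single weight by simulated annealing with Metropolis acceptance.
import math, random

random.seed(0)

def loss(w):                       # toy multimodal loss, global minimum near w = 2.2
    return math.sin(5 * w) + (w - 2) ** 2

w, T = -1.0, 2.0                   # start far from the optimum, at high temperature
best_w, best_l = w, loss(w)
for _ in range(2000):
    cand = w + random.gauss(0, 0.3)
    dE = loss(cand) - loss(w)
    if dE < 0 or random.random() < math.exp(-dE / T):  # Metropolis rule
        w = cand                   # uphill moves accepted with probability exp(-dE/T)
    if loss(w) < best_l:
        best_w, best_l = w, loss(w)
    T = max(1e-3, T * 0.995)       # geometric annealing schedule with a floor
print(round(best_w, 2), round(best_l, 2))
```

The loss function, schedule and constants are all invented for illustration; the design point is only that early high-temperature acceptance provides the "global search" the abstract refers to, while cooling recovers greedy descent.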
APA, Harvard, Vancouver, ISO, and other styles
25

Tekiner, Firat. "Distributed and intelligent routing algorithm." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/178/.

Full text
Abstract:
A network's topology and its routing algorithm are the key factors in determining network performance. Therefore, in this thesis a generic model for implementing logical interconnection topologies in the software domain has been proposed to investigate the performance of logical topologies and their routing algorithms for packet-switched synchronous networks. A number of topologies are investigated using this model, and a simple priority rule is developed to improve the utilisation of the asymmetric 2 x 2 optical node. Although logical topologies are ideal for optical (or any other) networks because of their relatively simple routing algorithms, there is a requirement for much more flexible algorithms that can be applied to arbitrary network topologies. Antnet is a software-agent-based routing algorithm that is influenced by the unsophisticated, individual ant's emergent behaviour. In this work a modified antnet algorithm for packet-switched networks has been proposed that improves packet throughput and average delay time. Link usage information known as "evaporation" has also been introduced as an additional feedback signal to the algorithm to prevent stagnation within the network, for the first time in the literature to the best of our knowledge. Results show that, with "evaporation", the average delay experienced by data packets is reduced by nearly 30% compared to the original antnet routing algorithm in all cases when a non-uniform traffic model is employed. The multiple ant colonies concept is also introduced and applied to packet-switched networks for the first time, which increased packet throughput; however, no improvement in the average packet delay is observed in this case. Furthermore, for the first time an extensive analysis of the effect of a confidence parameter is produced here.
A novel scheme which provides a more realistic implementation of the algorithms and flexibility to the programmer for simulating communication networks is proposed and used to implement these algorithms.
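The two feedback signals described above can be sketched generically (the table layout and constants are invented for illustration, not the thesis's exact update rules): backward ants reinforce the pheromone entry for the next hop they used, while usage-driven "evaporation" decays entries so that one heavily used route cannot make the routing table stagnate.

```python
# Illustrative ant-colony routing-table update: each destination has a
# probability table over next hops; ants reinforce, usage evaporates.
def reinforce(table, next_hop, r=0.3):
    for hop in table:                        # reinforce the winner, shrink the rest
        if hop == next_hop:
            table[hop] += r * (1 - table[hop])
        else:
            table[hop] -= r * table[hop]

def evaporate(table, usage, rho=0.05):
    for hop in table:                        # heavily used links lose pheromone
        table[hop] *= (1 - rho * usage[hop])
    s = sum(table.values())
    for hop in table:
        table[hop] /= s                      # renormalise to a probability distribution

table = {"B": 0.5, "C": 0.5}                 # next-hop probabilities for one destination
reinforce(table, "B")                        # a backward ant arrived via B
evaporate(table, {"B": 1.0, "C": 0.1})       # B's link is much busier than C's
print(table)
```

The reinforcement constant `r` and evaporation rate `rho` are hypothetical; the design point is that evaporation pushes probability back toward under-used alternatives, which is the anti-stagnation effect the abstract reports.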
APA, Harvard, Vancouver, ISO, and other styles
26

Pandit, Diptandshu. "Intelligent ECG processing and abnormality detection using adaptive ensemble models." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/36139/.

Full text
Abstract:
This thesis explores automated Electrocardiogram (ECG) signal analysis and the feasibility of using a set of computationally inexpensive algorithms to process raw ECG signals for abnormality detection. The work is divided into three main stages which serve the main aim of this research, i.e. abnormality detection from single channel raw ECG signals. In the first stage, a lightweight baseline correction algorithm is proposed along with a modified moving window average method for real-time noise reduction. Additionally, for further offline analysis, a wavelet transform and adaptive thresholding based method is proposed for noise reduction to improve the signal-to-noise ratio. In the second stage, a sliding window based lightweight algorithm is proposed for real-time heartbeat detection on the raw ECG signals. It includes max-min curve and dynamic (adaptive) threshold generation, and error correction. The thresholds are adapted automatically. Moreover, a sliding window based search strategy is also proposed for real-time feature extraction. Subsequently, a hybrid classifier is proposed, which embeds multiple ensemble methods, for abnormality classification in the final stage. It works as a meta classifier which generates multiple instances of base models to improve the overall classification accuracy. The proposed hybrid classifier is superior in performance; however, it is dedicated to offline processing owing to its high computational complexity. The hybrid classifier is also further extended to conduct novel class detection (i.e. of unknown, newly appeared abnormality types). A modified firefly algorithm is also proposed for parameter optimization to further improve the performance of novel class detection. The overall proposed system is evaluated using benchmark ECG databases to prove its efficiency.
To illustrate the advantage of each key component, the proposed feature extraction, classification and optimization algorithms are compared with diverse state-of-the-art techniques. The empirical results indicate that the proposed algorithms show great superiority over existing methods.
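The flavour of such a lightweight pipeline can be sketched as follows; the window sizes, threshold rule and data here are my assumptions for illustration, not the thesis's exact algorithms. A moving-window average smooths the raw trace, and a sliding window flags beats where a sample is a local maximum exceeding an adaptive mean-plus-k-standard-deviations threshold:

```python
# Toy smoothing + adaptive-threshold beat detection on an invented trace.
import statistics

def moving_average(x, w=3):
    half = w // 2
    return [statistics.mean(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

def detect_beats(x, win=8, k=1.0):
    peaks = []
    for i in range(1, len(x) - 1):
        lo = max(0, i - win)
        mu = statistics.mean(x[lo:i + 1])          # threshold adapts to a
        sd = statistics.pstdev(x[lo:i + 1]) or 1e-9  # trailing window
        if x[i] > mu + k * sd and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            peaks.append(i)                        # local maximum above threshold
    return peaks

raw = [0, 0, 1, 9, 1, 0, 0, 0, 1, 10, 1, 0, 0]     # two invented "QRS" spikes
smooth = moving_average(raw)
print(detect_beats(smooth))
```

Both beats are picked out despite their different amplitudes, which is the practical benefit of adapting the threshold rather than fixing it; the real system additionally performs baseline correction and error correction stages not sketched here.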
APA, Harvard, Vancouver, ISO, and other styles
27

Akutekwe, Arinze. "Development of dynamic Bayesian network for the analysis of high-dimensional biomedical data." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/36183/.

Full text
Abstract:
Inferring gene regulatory networks (GRNs) from time-course expression data is a major challenge in Bioinformatics. Advances in microarray technology have given rise to cheap and easy production of high-dimensional biological datasets; however, accurate analysis and prediction have been hampered by the curse of dimensionality, whereby the number of features is exponentially larger than the number of samples. Therefore, the need for the development of better statistical and predictive methods is continually on the increase. The main aim of this thesis is to develop dynamic Bayesian network (DBN) methods for the analysis and prediction of temporal biomedical data. A two-stage computational bionetwork discovery approach is proposed. In the ovarian cancer case study, 39 out of 592 metabolomic features were selected by the Least Absolute Shrinkage and Selection Operator (LASSO) with a highest accuracy of 93%, and 21 chemical compounds were identified. The proposed approach is further improved by the application of swarm optimisation methods for parameter optimization. The improved method was applied to colorectal cancer diagnosis with a 1.8% improvement in total accuracy, achieved with much smaller feature subsets of clinical importance than the thousands of features used in previous studies. In order to address the modelling inefficiencies in inferring GRNs from time-course data, two nonlinear hybrid algorithms were proposed, using support vector regression with DBN and recurrent neural network with DBN. Experiments showed that the proposed method was better at predicting nonlinearities in GRNs than previous methods. Stratified analysis using ovarian cancer time-course data further showed that the expression levels of the Prostate differentiation factor and BTG family member 2 genes were significantly increased by the cisplatin and oxaliplatin platinum drugs, while the expression levels of the Polo-like kinase and Cyclin B1 genes were both decreased by the platinum drugs.
The methods and results obtained may be useful in the designing of drugs and vaccines.
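LASSO's ability to select 39 of 592 features comes from its soft-thresholding operator, which drives weak coefficients exactly to zero. A minimal coordinate-descent toy (my own illustration, unrelated to the thesis's actual pipeline or data) shows the effect:

```python
# Toy coordinate descent for (1/2)*||y - Xw||^2 + lam*||w||_1.
def soft_threshold(z, g):
    """Shrink z toward zero by g; zero it out inside [-g, g]."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, iters=50):
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's own contribution removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z
    return w

# Invented data: y = 2 * (constant first feature) + a weak 0.2 ripple on the second
X = [[1, 1], [1, -1], [1, 1], [1, -1]]
y = [2.2, 1.8, 2.2, 1.8]
w = lasso_cd(X, y, lam=1.0)
print(w)
```

The weak second coefficient is driven exactly to zero while the strong first one survives (shrunk slightly by the penalty), which is the feature-selection behaviour exploited in the case study.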
APA, Harvard, Vancouver, ISO, and other styles
28

Morshed, Md Monzur. "Effective protocols for privacy and security in RFID systems applications." Thesis, Staffordshire University, 2012. http://eprints.staffs.ac.uk/1896/.

Full text
Abstract:
Radio Frequency Identification (RFID) is a technology to identify objects or people automatically and has found many applications in recent years. An RFID tag is a small and low-priced device consisting of a microchip with limited functionality and data storage, and an antenna for wireless communication with readers. RFID tags can be passive, active or semi-active depending on the powering technique. In general passive tags are inexpensive; they have no on-board power and draw power from the signal of the interrogating reader. Active tags contain batteries for their transmission. Low-cost passive RFID tags are expected to become pervasive devices in commerce. Each RFID tag contains a unique identifier to serve as an object identity, so that this identity can be used as a link to information about the corresponding object. Due to this unique serial number it is possible to track an RFID tag uniquely. The challenge raised by RFID systems for certain applications is that the information in them is vulnerable to an adversary. People who carry an object with an RFID tag could be tracked by an adversary without their knowledge. Also, implementation of conventional cryptography is not possible in a low-cost RFID tag due to its limited processing capability and memory. There are various types of RFID authentication protocols for the privacy and security of RFID systems, and a number of proposals for secure RFID systems using one-way hash functions and random numbers. A few researchers have proposed privacy and security protocols for RFID systems using varying identifiers. These are secure against most attacks. Because the identifiers vary, they also include recovery from desynchronization due to an incomplete authentication process. However, due to the hash function of the identifier, if one authentication process is unsuccessful an adversary can use the responses in the subsequent phase to break the security.
In this case the adversary can use the response for impersonation and replay attacks and can also break location privacy. Some protocols protect privacy and security using a static tag identifier with varying responses, so that they can work in a pervasive computing environment. Most of these protocols work with computationally expensive hash functions and large storage. Since 2001 a number of lightweight protocols have been proposed by several researchers. This thesis proposes seven protocols for the privacy and security of RFID systems. Five of them use a hash function and a static identifier, such as SUAP1, SUAP2, SUAP3 and EMAP. These protocols are based on a challenge-response method using a one-way hash function, hash-address and randomized hash function. The protocols are operable in a pervasive environment since the identifier of the tag is static. Another protocol, named ESAP, also works with a static identifier but updates a timestamp that is used with a random number to make the response unidentifiable. The protocol GAPVI uses a varying identifier with a hash function to ensure the privacy and security of the tag. It is based on a challenge-response method using a one-way hash function and a randomized hash function. Another proposed protocol, EHB-MP, is a lightweight encryption protocol more suitable for low-cost RFID tags because it does not require a comparatively expensive hash function. Since Hopper and Blum developed the lightweight HB protocol for RFID systems in 2001, a number of lightweight protocols have been proposed by several researchers. This work investigates possible attacks on the existing lightweight protocols HB, HB+ and HB-MP of RFID systems and proposes a new lightweight authentication protocol that improves on the HB-MP protocol and provides the identified privacy and security in an efficient manner for pervasive computing environments.
The validity and performance of the hash-based protocols are tested using analysis and simulation programs, and in some cases mathematical proofs are given to demonstrate the protection, particularly against the man-in-the-middle attack in the EHB-MP protocol. Finally, this research work investigates the privacy and security problems in a few of the most promising application areas for RFID implementation: the e-passport, healthcare systems and baggage handling in airports. Suitable RFID authentication protocols are also proposed for these systems to ensure the privacy and security of the users. This thesis uses symmetric cryptography for its privacy and security protocols. In the future, asymmetric protocols may be an important research consideration for this area, and ownership transfer of the tag could be a potential area for research.
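The hash-based challenge-response pattern these protocols build on can be sketched generically (the names, hash construction and message layout here are illustrative, not the SUAP/EMAP/ESAP specifications): the reader issues a nonce, the tag answers with a keyed hash, and the back-end verifies by recomputing it.

```python
# Generic one-way-hash challenge-response sketch for an RFID tag.
import hashlib, hmac, os

def h(*parts):
    """One-way hash over concatenated byte strings."""
    return hashlib.sha256(b"|".join(parts)).hexdigest()

SECRET = b"tag-0042-key"              # shared between tag and back-end server

def tag_respond(challenge):
    return h(SECRET, challenge)       # tag never transmits its identifier in clear

def server_verify(challenge, response):
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(h(SECRET, challenge), response)

nonce = os.urandom(16)                # reader's fresh challenge
resp = tag_respond(nonce)
print(server_verify(nonce, resp))             # genuine response accepted
print(server_verify(os.urandom(16), resp))    # replay against a fresh nonce fails
```

Freshness of the nonce is what defeats the replay attack mentioned above; the thesis's protocols additionally address desynchronization recovery and lightweight (HB-family) settings where even this hash is too expensive, which this sketch does not capture.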
APA, Harvard, Vancouver, ISO, and other styles
29

Wonders, Martin. "Activity recognition in monitored environments using utility meter disaggregation." Thesis, Northumbria University, 2016. http://nrl.northumbria.ac.uk/31614/.

Full text
Abstract:
Activity recognition in monitored environments where the occupants are elderly or disabled is currently a popular research topic and is being proposed as a possible solution that may help maintain the independence of an aging population within their homes, where these homes are adapted as monitored environments. Current activity recognition systems implement ubiquitous sensing or video surveillance techniques which inherently, to varying degrees, impinge on the privacy of the occupants of these environments. The research presented in this thesis investigates the use of ubiquitous sensors within a smart home setting with a view to establishing whether activity recognition is possible with a reduced, less intrusive subset of sensors that can be realised using utility meter disaggregation techniques. The thesis considers the selection of sensors as a feature selection problem and concludes that data produced from water, electricity and PIR sensors contribute significantly to the recognition of selected activities. With an established method of activity recognition that implements a reduced number of sensors it can be argued that occupants of the monitored environment maintain a greater level of privacy. This level of privacy, however, is dependent on such systems being practically implementable into homes that are designed to assist and monitor the residents, and as such configuration and maintenance of these systems are also considered here. The utility meter disaggregation technique presented proves to perform exceptionally well when trained with large quantities of data, but gathering and labelling this data is, in itself, an intrusive process that requires significant effort and could compromise the practicality of such promising systems.
This thesis considers methods for implementing synthesised, labelled training data for both disaggregation and activity recognition systems and shows that such techniques can significantly reduce the quantity of labelled training data required. The work presented shows a significant contribution, in the areas of sensor selection and the use of utility meter disaggregation for activity recognition, and also the use of synthesised labelled training data to reduce significant system training times. The work is carried out using a combination of publicly available datasets and data collected from a purpose built smart home which includes water and electricity meter disaggregation. It is shown that a system for non-intrusive monitoring within an ambient environment, occupied by a single resident, is achievable using repurposed versions of the standard domestic infrastructure. More specifically it is demonstrated that a minimum baseline accuracy of 93.45% and F1-measure of 91.22 can be achieved using disaggregation at the water and electricity meters combined with locality context provided by home security PIR sensors. Methods of speeding up the deployment and commissioning process are proven to be viable, further demonstrating the potential practical application of the proposed system.
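The headline figures above are accuracy and F1-measure; as a reminder of what is being reported, a small sketch (with toy activity labels invented for illustration) computes both for a labelled sequence:

```python
# Accuracy and F1 for one positive class of a multi-class label sequence.
def scores(true, pred, positive):
    tp = sum(t == p == positive for t, p in zip(true, pred))   # true positives
    fp = sum(p == positive != t for t, p in zip(true, pred))   # false positives
    fn = sum(t == positive != p for t, p in zip(true, pred))   # false negatives
    acc = sum(t == p for t, p in zip(true, pred)) / len(true)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)   # harmonic mean of precision and recall
    return acc, f1

true = ["shower", "cook", "cook", "idle", "cook", "shower"]
pred = ["shower", "cook", "idle", "idle", "cook", "cook"]
acc, f1 = scores(true, pred, positive="cook")
print(round(acc, 2), round(f1, 2))
```

Unlike raw accuracy, F1 is insensitive to how dominant the "idle" class is, which is why both numbers are quoted for activity recognition work of this kind.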
APA, Harvard, Vancouver, ISO, and other styles
30

Alajlan, Hayat Abdulrahman. "Mobile learning in Saudi higher education." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/243abf65-8e6c-4994-ab76-61c0cad6c738.

Full text
Abstract:
This study investigated female students’ practices and experiences of using mobile technology for learning in Saudi higher education during the period of 2014-2017, and built a theoretical framework for mobile learning in this context. The rapid expansion of higher education in Saudi Arabia, coupled with the rapid increase in student numbers, is raising the need to find more effective ways to teach, reach and communicate with such a large student body. Mobile technology has been widely used in the context of Saudi higher education by both students and university teachers, but little is known about female students’ experiences of using mobile technology to support their learning. A better understanding of the context of mobile use in higher education in Saudi Arabia might help in exploiting the affordances of mobile technology for learning purposes and uses. As a contribution to innovations in Saudi higher education, this study explored the mobile learning experiences of Saudi female students at one of the universities in Saudi Arabia, King Saud University. The study implemented a case study methodology and used a qualitative-led mixed methods design. A large-scale online survey of 7,865 female students provided information about the ownership and practices of mobile technology among higher education students; the extent of Internet access via mobile technology; and the times, locations, and purposes of the use. The study also investigated the opportunities provided by mobile technology that enhance and foster learning experiences for higher education students through an in-depth investigation of 52 participants through personal diaries, group interviews and in-depth, semi-structured interviews. The contribution to knowledge lies in the development of a theoretical framework for mobile learning to describe contemporary practices and experiences in Saudi higher education.
Themes of mobile learners’ ubiquitous use, mobile learners’ movement, and mobile learners’ strategies for achieving learning goals emerged through the analysis. One major conclusion of the research is that, as a country with a gender segregated education system and very strong cultural demands on women, mobile learning enables Saudi females to negotiate their way through the different constraints, restrictions and boundaries that prevent or hinder them in their learning process, while maintaining their own cultural values, principles and traditions. The research concluded that the mobile learning framework, in the context of Saudi females in higher education, is about active learners showing their agency through appropriating tools and resources, crossing boundaries of contexts, and personalizing their learning with and through the use of their mobile technology as a cultural resource and boundary-crossing tool to accomplish learning tasks, purposes and goals.
APA, Harvard, Vancouver, ISO, and other styles
31

Rattray, Magnus. "Modelling the dynamics of genetic algorithms using statistical mechanics." Thesis, University of Manchester, 1996. http://publications.aston.ac.uk/598/.

Full text
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
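The macroscopic quantities this formalism tracks can be made concrete with a toy sketch. The code below (illustrative only, not from the thesis) computes the first cumulants of a population's fitness distribution and the mean pairwise Hamming distance used as the correlation macroscopic:

```python
import itertools

def cumulants(fitnesses):
    """First three cumulants of a sample: mean, variance, third central moment."""
    n = len(fitnesses)
    mean = sum(fitnesses) / n
    k2 = sum((f - mean) ** 2 for f in fitnesses) / n
    k3 = sum((f - mean) ** 3 for f in fitnesses) / n
    return mean, k2, k3

def mean_hamming(population):
    """Mean pairwise Hamming distance within a population of bit-strings."""
    pairs = list(itertools.combinations(population, 2))
    total = sum(sum(a != b for a, b in zip(x, y)) for x, y in pairs)
    return total / len(pairs)
```

Averaging such macroscopics over runs, rather than tracking every genotype, is what keeps the description compact for realistically sized problems.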
APA, Harvard, Vancouver, ISO, and other styles
32

Svénsen, Johan F. M. "GTM: the generative topographic mapping." Thesis, Aston University, 1998. http://publications.aston.ac.uk/1245/.

Full text
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come in conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
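The core construction (a grid of latent points mapped through a fixed non-linear basis into data space) can be sketched in a few lines. The grid sizes, basis width and random weights below are illustrative choices, not values from the thesis:

```python
import numpy as np

def rbf_design(latent, centres, width):
    """Gaussian basis activations: phi[i, j] = exp(-(x_i - mu_j)^2 / (2 width^2))."""
    d2 = (latent[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

# A 1-D latent grid mapped into 2-D data space: y(x; W) = W phi(x)
latent = np.linspace(-1.0, 1.0, 20)                 # latent sample points
centres = np.linspace(-1.0, 1.0, 5)                 # basis function centres
phi = rbf_design(latent, centres, width=0.3)        # design matrix, shape (20, 5)
W = np.random.default_rng(0).normal(size=(2, 5))    # data-space weight matrix
Y = phi @ W.T                                       # embedded 1-D manifold in 2-D, (20, 2)
```

In the full GTM, `W` (and a noise variance) would be fitted by an EM algorithm so that Gaussian noise around the curve `Y` models the observed data density.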
APA, Harvard, Vancouver, ISO, and other styles
33

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Full text
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector, or BV, set and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
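The flavour of basis-vector selection can be conveyed with a toy greedy scheme: a point joins the basis set only if its kernel-space residual (its "novelty" relative to the span of the current basis) exceeds a tolerance. This is a simplified sketch, not the thesis's KL-based algorithm; the RBF kernel, lengthscale and tolerance are illustrative:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """RBF kernel matrix between two sets of scalar inputs."""
    d2 = (np.asarray(a)[:, None] - np.asarray(b)[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls ** 2)

def select_bv(xs, tol=1e-2, ls=1.0):
    """Greedily keep points whose residual kernel variance exceeds tol."""
    bv = [xs[0]]
    for x in xs[1:]:
        k_xx = 1.0                          # k(x, x) for an RBF kernel
        k_xb = rbf([x], bv, ls)             # cross-covariances, shape (1, m)
        K_bb = rbf(bv, bv, ls)              # basis Gram matrix, shape (m, m)
        gamma = k_xx - k_xb @ np.linalg.solve(K_bb, k_xb.T)
        if gamma[0, 0] > tol:               # 'novel' enough to represent
            bv.append(x)
    return bv
```

Near-duplicate inputs contribute almost nothing in kernel space, so they are absorbed rather than stored, which is what breaks the quadratic parameter growth.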
APA, Harvard, Vancouver, ISO, and other styles
34

Stewart, Sean. "Deploying a CMS Tier-3 Computing Cluster with Grid-enabled Computing Infrastructure." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2564.

Full text
Abstract:
The Large Hadron Collider (LHC), whose experiments include the Compact Muon Solenoid (CMS), produces over 30 million gigabytes of data annually, and implements a distributed computing architecture—a tiered hierarchy, from Tier-0 through Tier-3—in order to process and store all of this data. Out of all of the computing tiers, Tier-3 clusters allow scientists the most freedom and flexibility to perform their analyses of LHC data. Tier-3 clusters also provide local services such as login and storage services, provide a means to locally host and analyze LHC data, and allow both remote and local users to submit grid-based jobs. Using the Rocks cluster distribution software version 6.1.1, along with the Open Science Grid (OSG) roll version 3.2.35, a grid-enabled CMS Tier-3 computing cluster was deployed at Florida International University’s Modesto A. Maidique campus. Validation metric results from Ganglia, MyOSG, and CMS Dashboard verified a successful deployment.
APA, Harvard, Vancouver, ISO, and other styles
35

Muhamed, Abera Ayalew. "Moduli spaces of topological solitons." Thesis, University of Kent, 2015. https://kar.kent.ac.uk/47961/.

Full text
Abstract:
This thesis presents a detailed study of phenomena related to topological solitons (in 2 dimensions). Topological solitons are smooth, localised, finite-energy solutions in non-linear field theories. The problems are about the moduli spaces of lumps in the projective plane and vortices on compact Riemann surfaces. Harmonic maps that minimize the Dirichlet energy in their homotopy classes are known as lumps. Lump solutions in real projective space are explicitly given by rational maps subject to a certain symmetry requirement. This has consequences for the behaviour of lumps and their symmetries. An interesting feature is that the moduli space of charge 3 lumps is a 7-dimensional manifold of cohomogeneity one. In this thesis, we discuss the charge 3 moduli space, calculate its metric and find explicit formulae for various geometric quantities. We discuss the moment of inertia (or angular integral) of moduli spaces of charge 3 lumps. We also discuss the implications for lump decay. We discuss interesting families of moduli spaces of charge 5 lumps using the symmetry property and the Riemann-Hurwitz formula. We discuss the Kähler potential for lumps and find an explicit formula on the 1-dimensional charge 3 lumps. The metric on the moduli spaces of vortices on compact Riemann surfaces where the fields have zeros of positive multiplicity is evaluated. We calculate the metric, Kähler potential and scalar curvature on the moduli spaces of hyperbolic 3- and some submanifolds of 4-vortices. We construct collinear hyperbolic 3- and 4-vortices and derive explicit formulae for their corresponding metrics. We find interesting subspaces in both 3- and 4-vortices on the hyperbolic plane and find an explicit formula for their respective metrics and scalar curvatures. We first investigate the metric on the totally geodesic submanifold Σ_{n,m}, n+m=N, of the moduli space M_N of hyperbolic N-vortices.
In this thesis, we discuss the Kähler potential on Σ_{n,m}, and an explicit formula is derived by three different approaches. The first uses the direct definition of the Kähler potential. The second is based on the regularized action in Liouville theory. The third applies a scaling argument. All three methods give the same result. We discuss the geometry of Σ_{n,m}, in particular when n=m=2 and when m=n-1. We evaluate the vortex scattering angle-impact parameter relation and discuss the π/2 vortex scattering of the space Σ_{2,2}. Moreover, we study the π/n vortex scattering of the space Σ_{n,n-1}. We also compute the scalar curvature of Σ_{n,m}. Finally, we discuss vortices with impurities and calculate explicit metrics in the presence of impurities.
APA, Harvard, Vancouver, ISO, and other styles
36

Oger, Benoit. "Soot characterisation in diesel engines using laser-induced incandescence." Thesis, University of Brighton, 2012. https://research.brighton.ac.uk/en/studentTheses/f6833b2f-0a5b-44b2-9fbe-ad3953909c01.

Full text
Abstract:
Nowadays, the European automotive market is dominated by Diesel engines. Despite their high efficiency, these produce significant levels of pollutants. Among the various pollutants released, nitrogen oxides and soot are the main issues. Their formation is linked to the combustion process and attempts to reduce one often lead to an increase of the other. Laser diagnostics are among the best tools for experimental, non-intrusive studies inside combustion chambers for a better understanding of the complex combustion processes. Depending on the optical diagnostic, numerous combustion characteristics and processes can be investigated. The work presented here intends initially to develop a quantitative laser technique for characterising soot and, secondly, to further the knowledge on soot formation in Diesel engines by the application of this technique in an optical combustion chamber. Some of the main characteristics describing soot formation are the soot volume fraction, number density and particle sizes. Soot volume fraction is the major one as it is representative of the volume of soot produced. Planar characterisation of soot volume fraction, number density and particle size was achieved for the first time by simultaneously recording laser-induced incandescence (LII), laser scattering and two-colour time-resolved (2C-TiRe) LII signals. Qualitative planar distributions of particle diameter and soot volume fraction were derived from the image ratio of scattering and incandescence signals. The 2C-TiRe LII technique allowed the simultaneous recording of the temporal LII signal for two different wavelengths in order to obtain quantitative values of the laser-heated particle temperature, soot volume fraction and particle size for a local or global part of the flame. These were used to recalibrate relative size and soot volume distributions.
An initial development of the technique was performed on a laminar diffusion flame (Santoro burner) to validate its viability and performance. Equivalent temperature, soot volume fraction and particle diameter were determined throughout the flame. The results were found to be in good agreement with the ones published in the literature. The diagnostic was subsequently applied to an optical Diesel rapid compression machine, and further refinements were undertaken to cope with the higher soot concentration and lower LII signal. Tests were conducted for in-cylinder pressures ranging from 4 to 10 MPa, and injection pressures up to 160 MPa. A fixed injection timing and injected fuel quantity were used. Effects of in-cylinder pressure, fuel injection pressure and cetane number on soot formation and characteristics were observed. High injection pressure, cetane number and in-cylinder pressure caused a reduction of soot particle size and volume fraction but an increase of the soot particle density.
APA, Harvard, Vancouver, ISO, and other styles
37

Cavaglia, Gabriela Maria Chiara. "Measuring the homogeneity and similarity of language corpora." Thesis, University of Brighton, 2005. https://research.brighton.ac.uk/en/studentTheses/8b46265d-65c5-477e-9296-412fbb053ed0.

Full text
Abstract:
Corpus-based methods are now dominant in Natural Language Processing (NLP). Creating big corpora is no longer difficult, and the technology for analyzing them is becoming faster, more robust and more accurate. However, when an NLP application performs well on one corpus, it is unclear whether this level of performance would be maintained on others. To make progress on these questions, we need methods for comparing corpora. This thesis investigates comparison methods based on the notions of corpus homogeneity and similarity.
APA, Harvard, Vancouver, ISO, and other styles
38

Cardno, Elizabeth Jayne. "The PlaceToBe.Net : forced delivery of a community 'health' information initiative." Thesis, University of Brighton, 2009. https://research.brighton.ac.uk/en/studentTheses/73590299-10c9-4bea-b479-d3ef2e0f812e.

Full text
Abstract:
This doctoral research is propelled by a single question: when partners from public and private sectors unify with the aim of increasing access to quality community ‘health’ information, what factors shape the selection of technological platforms? In monitoring the processes of planning and decision making, the choice of platform reveals the interests, ideologies and values of groups given labels such as ‘stakeholders.’
APA, Harvard, Vancouver, ISO, and other styles
39

Ahwidy, Mansour. "The development and implementation of e-health services for the Libyan NHS : case studies of hospitals and clinics in both urban and rural areas." Thesis, University of Brighton, 2016. https://research.brighton.ac.uk/en/studentTheses/0c0b3d75-0ee6-484a-9c3c-f34f38f612d2.

Full text
Abstract:
This thesis provides an assessment of the readiness levels within both urban and rural hospitals and clinics in Libya for the implementation of E-health systems. This then enabled the construction of a framework for E-health implementation in the Libyan National Health Service (LNHS). The E-health readiness study assessed how medications were prescribed, how patients were referred, how information communication technology (ICT) was utilised in recording patient records, how healthcare staff were trained to use ICT, and the ways in which consultations were carried out by healthcare staff. The research was done in five rural clinics and five urban medical centres and focused on the E-health readiness levels of the technology, social attitudes, engagement levels and any other needs that were apparent. Collection of the data was carried out using a mixed methods approach with qualitative interviews and quantitative questionnaires. The study indicated that any IT equipment present was not being utilised for clinical purposes and there was no evidence of any E-health technologies being employed. This implies that the maturity level of the healthcare institutions studied was at level zero in the E-health maturity model used in this thesis. In order for the LNHS to raise its maturity levels for the implementation of E-health systems, it needs to persuade LNHS staff and patients to adopt E-health systems. This can be carried out at a local level throughout the LNHS, though it will need to be coordinated at a national level through training, education and programmes that encourage compliance and provide incentives. In order to move E-health technology usage in the participating Libyan healthcare institutions from Level 0 to Level 2 in the E-health Maturity Model levels, an E-health framework was created that is based on the findings of this research study.
The primary aim of the LNHS E-Health Framework is the integration of E-health services for improving the delivery of healthcare within the LNHS. To construct the framework and ensure that it was creditable and applicable, work on it was informed directly by the findings from document analysis, literature review, and expert feedback, in conjunction with the primary research findings presented in Chapter Five. When the LNHS E-Health Framework was compiled there were several things taken into consideration, such as: the abilities of healthcare staff, the needs of healthcare institutions and the existing ICT infrastructure that had been recorded in the E-readiness assessment which was carried out in the healthcare institutions (Chapter 5). The framework also provides proposals for E-health systems based on the infrastructure network that will be developed. The processes addressed are electronic health records, E-consultations, E-prescriptions, E-referrals and E-training. The researcher has received very positive, even enthusiastic, feedback from the LNHS and other officials, who expect the framework to be further developed and implemented by the LNHS in the near future.
APA, Harvard, Vancouver, ISO, and other styles
40

Meiners, Justin. "Computing the Rank of Braids." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/8947.

Full text
Abstract:
We describe a method for computing rank (and determining quasipositivity) in the free group using dynamic programming. The algorithm is adapted to computing upper bounds on the rank for braids. We test our method on a table of knots by identifying quasipositive knots and calculating the ribbon genus. We consider the possibility that rank is not theoretically computable and prove some partial results that would classify its computational complexity. We then present a method for effectively brute-force searching band presentations of small rank and conjugate length.
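Rank computations in the free group build on free reduction, cancelling adjacent inverse pairs. The thesis's dynamic-programming algorithm is not reproduced here, but the basic reduction step can be sketched as follows, with an uppercase letter standing for the inverse of its lowercase generator:

```python
def free_reduce(word):
    """Freely reduce a word in the free group on {a, b, c, ...};
    an uppercase letter denotes the inverse of its lowercase form."""
    out = []
    for g in word:
        if out and out[-1] == g.swapcase():
            out.pop()            # cancel the adjacent inverse pair
        else:
            out.append(g)
    return "".join(out)
```

A single stack-based pass suffices because each cancellation can only expose a new cancellable pair at the top of the stack.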
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Yulong. "TOWARDS AN INCENTIVE COMPATIBLE FRAMEWORK OF SECURE CLOUD COMPUTING." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2739.

Full text
Abstract:
Cloud computing has changed how services are provided and supported through the computing infrastructure. It has advantages such as flexibility, scalability, compatibility and availability. However, the current architecture design also brings in some troublesome problems, like the balance of cooperation benefits and privacy concerns between the cloud provider and the cloud users, and the balance of cooperation benefits and free-rider concerns between different cloud users. These two problems together form the incentive problem in the cloud environment. The first conflict lies between the reliance on services and the concerns over the secrets of cloud users. To solve it, we propose a novel architecture, NeuCloud, to enable partial, trusted, transparent and accountable privacy manipulation and revelation. With the help of this architecture, privacy-sensitive users can be more confident about moving to public clouds. A trusted computing base is not enough; in order to stimulate incentive-compatible privacy trading, we present a theoretical framework and provide guidelines for the cloud provider to compensate the cloud user's privacy-risk-aversion. We implement NeuCloud and evaluate it. Moreover, an improved model of NeuCloud is discussed. The second part of this thesis strives to solve the free-rider problem in the cloud environment. For example, VM-colocation attacks have become serious threats to the cloud environment. We propose to construct an incentive-compatible moving-target defense by periodically migrating VMs, making it much harder for adversaries to locate the target VMs. We developed theories about whether the migration of VMs is worthwhile and how the optimal migration interval can be determined. To the best of our knowledge, our work is the first effort to develop a formal and quantified model to guide the migration strategy of clouds to improve security.
Our analysis shows that our placement-based defense can significantly improve the security level of the cloud with acceptable costs. In summary, the main objective of this study is to provide an incentive-compatible framework to eliminate the cloud user's privacy and cooperation concerns. The proposed methodology can be applied directly in commercial clouds and help this new computing paradigm develop further. The theoretical part of this work can be extended to other fields where privacy and free-rider concerns exist.
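A toy version of the migration-interval question: if an adversary locates a target at a constant rate, breach risk grows with the interval while amortised migration cost shrinks with it, so the optimum balances the two. The model and numbers below are illustrative only, not the thesis's formal model:

```python
import math

def expected_loss(interval, locate_rate, breach_cost, migrate_cost):
    """Toy model: P(located before the next migration) = 1 - exp(-rate * interval);
    migration cost is amortised over the interval."""
    p_hit = 1.0 - math.exp(-locate_rate * interval)
    return breach_cost * p_hit + migrate_cost / interval

def best_interval(locate_rate, breach_cost, migrate_cost, grid):
    """Pick the interval on a candidate grid that minimises expected loss."""
    return min(grid, key=lambda t: expected_loss(t, locate_rate, breach_cost,
                                                 migrate_cost))
```

Migrating too often wastes resources; migrating too rarely leaves the VM exposed, which is the trade-off a quantified model makes explicit.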
APA, Harvard, Vancouver, ISO, and other styles
42

Fasan, Mary Oluwasola. "Distributed binary decision diagrams." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5411.

Full text
Abstract:
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Binary Decision Diagrams (BDDs) are data structures that have been used to solve various problems in different aspects of computer aided design and formal verification. The large memory and time requirements of BDD applications are the major constraints that usually prevent the use of BDDs since there is a limited amount of memory available on a machine. One way of overcoming this resource limitation problem is to utilize the memory available on a network of workstations (NOW). This requires the distribution of the computation and memory requirements involved in the manipulation of BDDs over a NOW. In this thesis, an algorithm for manipulating BDDs on a NOW is presented. The algorithm makes use of the breadth-first technique to manipulate BDDs so that various BDD operations can be started concurrently on the different workstations on the NOW. The design and implementation details of the distributed BDD package are described. The various approaches considered in order to optimize the performance of the algorithm are also discussed. Experimental results demonstrating the performance and capabilities of the distributed package and the benefits of the different optimization approaches are given.
AFRIKAANSE OPSOMMING: Binêre besluitnemingsbome (BBBs) is data strukture wat gebruik word om probleme in verskillende areas van Rekenaarwetenskap, soos by voorbeeld rekenaargesteunde ontwerp en formele verifikasie, op te los. Die tyd- en spasiekoste van BBB-gebaseerde toepassings is die hoofrede waarom BBBs nie altyd gebruik kan word nie; die geheue van ’n enkele masjien is ongelukkig te beperkend. Een manier om hierdie hulpbronprobleem te omseil, is om die gedeelde geheue van die werkstasies in ’n netwerk van werkstasies (Engels: “network of workstations”, oftewel, ’n NOW) te benut. Dit is dus nodig om die berekening en geheuevoorvereistes van die BBB bewerking oor die NOW te versprei. Hierdie tesis bied ’n algoritme aan om BBBs op ’n NOW te hanteer. Die algoritme gebruik die breedte-eerste soektegniek, sodat BBB operasies gelyklopend kan uitvoer. Die details van die ontwerp en implementasie van die verspreide BBB biblioteek word beskryf. Verskeie benaderings om die gedrag van die biblioteek te optimeer word ook aangespreek. Empiriese resultate wat die werkverrigting en kapasiteit van die biblioteek meet, en wat die uitwerking van die onderskeie optimerings aantoon, word verskaf.
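The distributed breadth-first algorithm itself is beyond a short sketch, but the shared-node BDD representation and the apply operation it distributes can be illustrated (depth-first here for brevity; the memoisation of intermediate results that real packages use is omitted):

```python
# Nodes are (var, low, high) tuples; terminals are True/False. A unique
# table gives hash-consing, so structurally equal sub-BDDs share one node.
unique = {}

def node(var, low, high):
    """Return the canonical reduced node for (var, low, high)."""
    if low == high:                 # redundant-test elimination
        return low
    key = (var, low, high)
    if key not in unique:
        unique[key] = key
    return unique[key]

def bdd_and(u, v):
    """Apply AND to two reduced, ordered BDDs by Shannon expansion."""
    if u is False or v is False:
        return False
    if u is True:
        return v
    if v is True:
        return u
    uvar, ulo, uhi = u
    vvar, vlo, vhi = v
    var = min(uvar, vvar)           # expand on the top (smallest) variable
    ulo_, uhi_ = (ulo, uhi) if uvar == var else (u, u)
    vlo_, vhi_ = (vlo, vhi) if vvar == var else (v, v)
    return node(var, bdd_and(ulo_, vlo_), bdd_and(uhi_, vhi_))
```

A breadth-first variant processes all nodes at one variable level before the next, which is what lets operations at the same level be dispatched concurrently across workstations.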
APA, Harvard, Vancouver, ISO, and other styles
43

Turkedjiev, Emil. "Hybrid neural network analysis of short-term financial shares trading." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/36122/.

Full text
Abstract:
Recent advances in machine intelligence, particularly Artificial Neural Networks (ANNs) and Particle Swarm Optimisation (PSO), have introduced conceptually advanced technologies that can be utilised for financial market share trading analysis. The primary goal of the present research is to model short-term daily trading in Financial Times Stock Exchange 100 Index (FTSE 100) shares to make forecasts with certain levels of confidence and associated risk. The hypothesis to be tested is that financial shares time series contain significant non-linearity and that ANN, either separately or in conjunction with PSO, could be utilised effectively. Validation of the proposed model shows that nonlinear models are likely to be better choices than traditional linear regression for short-term trading. Some periodicity and trend lines were apparent in short- and long-term trading. Experiments showed that a model using an ANN with the Discrete Fourier Transform (DFT) and Discrete Wavelet Transform (DWT) model features performed significantly better than analysis in the time domain. Mathematical analysis of the PSO algorithm from a systemic point of view along with stability analysis was performed to determine the choice of parameters, and a possible proportional, integral and derivative (PID) algorithm extension was recommended. The proposed extension was found to perform better than traditional PSO. Furthermore, a chaotic local search operator and exponentially varying inertia weight factor algorithm considering constraints were proposed that gave better ability to converge to a high quality solution without oscillations. A hybrid example combining an ANN with the PSO forecasting regression model significantly outperformed the original ANN and PSO approaches in accuracy and computational complexity. 
The evaluation of statistical confidence for the models gave good results, which is encouraging for further experimentation considering model cross-validation for generalisation to show how accurately the predictive models perform in practice.
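A minimal PSO with an exponentially decaying inertia weight, in the spirit of (but much simpler than) the variants studied in the thesis; all constants, bounds and the test function are illustrative:

```python
import random

def pso(f, dim=2, n=20, iters=200, w0=0.9, w_end=0.4, c1=1.5, c2=1.5, seed=1):
    """Minimise f over [-5, 5]^dim; inertia decays exponentially from w0 to w_end."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                     # personal bests
    gbest = min(pbest, key=f)[:]                   # global best
    for t in range(iters):
        w = w_end + (w0 - w_end) * (0.99 ** t)     # exponentially varying inertia
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
```

Large early inertia favours exploration; as it decays, the swarm contracts around the best solution found, which is the convergence behaviour the stability analysis above characterises.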
APA, Harvard, Vancouver, ISO, and other styles
44

Walker, Philip Raymond. "How does website design in the e-banking sector affect customer attitudes and behaviour?" Thesis, Northumbria University, 2011. http://nrl.northumbria.ac.uk/5849/.

Full text
Abstract:
This thesis researches the interface between e-banks and their customers. In an industry traditionally based upon personal contact, the rise of e-banking has changed this relationship such that transactions are now mainly conducted via website interfaces. The resultant loss of personal contact between bank and customer has removed many of the cues available to customers upon which judgments of service, reliability and trust were made. The question raised by this change is: what factors influence consumer choice when viewing bank websites? The arguments of this thesis are that user evaluation of websites and their willingness to use those websites is based not only on user-centred factors such as motivation, experience and knowledge but also upon their appraisal of website structure and content.
APA, Harvard, Vancouver, ISO, and other styles
45

Leitner, Michael. "Mobile interaction trajectories : a design focused approach for generative mobile interaction design research." Thesis, Northumbria University, 2015. http://nrl.northumbria.ac.uk/32700/.

Full text
Abstract:
Mobile HCI’s (Human-Computer Interaction) understanding of mobility can benefit from novel theoretical perspectives that have been largely underexploited. This thesis develops and applies a novel middle range theory for mobile interaction design called mobile interaction trajectories, demonstrating the theory’s use and value in practical design settings. Mobile interaction trajectories offer a new theoretical perspective for mobile interaction design, considering people’s everyday trajectories as a baseline for mediated communication, with foci on practices and experiences of changing states of connectedness, chronologies of mediated communication, and mobile communication routines. Following a research through design methodology, probing was used as a creative research method. Two probing experiments informed the theory’s development. A new Probe resource was designed and applied, called the Hankie Probe. It was used to collect instances of mobile interaction trajectories and informed a range of design workshops. The Hankie Probe is based on a fabric format and expresses everyday trajectories, and mobile communication practice and experience, via stitched and drawn handmade space-time diaries. Research about design analysed the design processes with the completed Probes, revealing the middle range theory’s value. The theory’s distinctive characteristics have been shown to inform generative design processes. The trajectory-based perspective inspired design concepts for contextually adaptive services that enable new communication experiences and alter the chronology of social interaction. The thesis contributes to knowledge by underpinning generative design work with novel mobility theories via a new Probe format for mobile interaction design research.
The following additional discoveries were made: there are three basic probing functions in generative design workshops; designers’ experiences and subjective interpretation augment insights about users and contexts in design workshops; and the fabric-based handmade Probes influenced design work, offering a captivating, authentic format that requires subjective interpretation.
APA, Harvard, Vancouver, ISO, and other styles
46

Matsui, Kazunori. "Asymptotic analysis of an ε-Stokes problem with Dirichlet boundary conditions." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-71938.

Full text
Abstract:
In this thesis, we propose an ε-Stokes problem connecting the Stokes problem and the corresponding pressure-Poisson equation using one parameter ε > 0. We prove that the solution to the ε-Stokes problem converges, as ε tends to 0 or ∞, to the Stokes and pressure-Poisson problems, respectively. Most of these results are new. The precise statements of the new results are given in Proposition 3.5, Theorem 4.1, Theorem 5.2, and Theorem 5.3. Numerical results illustrating our mathematical results are also presented.
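For orientation, the two limiting problems that the parameter ε connects can be written in their standard classical forms (the precise ε-coupling and boundary treatment are those stated in the thesis's propositions and theorems; the sketch below shows only the familiar endpoints):

```latex
% Stokes problem with Dirichlet boundary data on a domain $\Omega$:
\begin{aligned}
  -\Delta u + \nabla p &= f && \text{in } \Omega,\\
  \nabla \cdot u &= 0 && \text{in } \Omega,\\
  u &= u_b && \text{on } \partial\Omega.
\end{aligned}
% Taking the divergence of the momentum equation and using
% $\nabla \cdot u = 0$ yields the associated pressure-Poisson equation:
\Delta p = \nabla \cdot f \quad \text{in } \Omega.
```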
STINT (DD2017-6936) "Mathematics Bachelor Program for Efficient Computations"
47

Nakrani, Sunil. "Biomimetic and autonomic server ensemble orchestration." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.534214.

Full text
Abstract:
This thesis addresses the orchestration of servers amongst multiple co-hosted internet services, such as e-Banking, e-Auction and e-Retail, in hosting centres. The hosting paradigm entails levying fees for hosting third-party internet services on servers at guaranteed levels of service performance. The orchestration of the server ensemble in hosting centres is considered in the context of maximising the hosting centre's revenue over a lengthy time horizon. The inspiration for the server orchestration approach proposed in this thesis is drawn from nature and is generally classed as swarm intelligence: specifically, the sophisticated collective behaviour of social insects, borne out of primitive interactions amongst members of the group, that solves problems beyond the capability of individual members. Consequently, the approach is self-organising, adaptive and robust. A new scheme for server ensemble orchestration is introduced in this thesis. This scheme exploits the many similarities between server orchestration in an internet hosting centre and forager allocation in a honeybee (Apis mellifera) colony. The scheme mimics the way a honeybee colony distributes foragers amongst flower patches to maximise nectar influx, orchestrating servers amongst hosted internet services to maximise revenue. The scheme is extended by further exploiting inherent feedback loops within the colony to introduce self-tuning and energy-aware server ensemble orchestration. In order to evaluate the new server ensemble orchestration scheme, a collection of server ensemble orchestration methods is developed, including a classical technique that relies on past history to make time-varying orchestration decisions and two theoretical techniques that omnisciently make optimal time-varying orchestration decisions, or an optimal static orchestration decision, based on complete knowledge of the future. The efficacy of the new biomimetic scheme is assessed in terms of adaptiveness and versatility.
The performance study uses representative classes of internet traffic stream behaviour, service users' behaviour, demand intensity, co-hosting of multiple services, and differentiated hosting fee schedules. The biomimetic orchestration scheme is compared with the classical and the theoretical optimal orchestration techniques in terms of revenue stream. This study reveals that the new server ensemble orchestration approach is adaptive in widely varying external internet environments. The study also highlights the versatility of the biomimetic approach over the classical technique. The self-tuning scheme improves on the original performance, and the energy-aware scheme is able to conserve significant energy with minimal degradation in revenue performance. The simulation results also indicate that the new scheme is competitive with, or better than, the classical and static methods.
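The forager-allocation analogy lends itself to a compact sketch. The following toy version is illustrative only, not the thesis's actual algorithm: each hosted service advertises a revenue rate (the "nectar" signal), and in each round every server either stays with its current service or, with some probability, re-reads the advert board and joins a service with probability proportional to its profitability. All names and parameter values here are hypothetical.

```python
import random

def reallocate(allocation, revenue_rate, switch_prob=0.3, rng=random):
    """One round of honeybee-style server reallocation (illustrative sketch).

    allocation:   dict service -> number of servers currently posted to it
    revenue_rate: dict service -> revenue per server (the 'nectar' signal)
    Each server reconsiders its posting with probability switch_prob and,
    if it does, joins a service with probability proportional to that
    service's advertised revenue rate; otherwise it stays put.
    """
    total = sum(revenue_rate.values())
    services = list(allocation)
    new_alloc = {s: 0 for s in services}
    for s in services:
        for _ in range(allocation[s]):
            if rng.random() < switch_prob and total > 0:
                # 'waggle dance': pick a service weighted by profitability
                r = rng.random() * total
                for t in services:
                    r -= revenue_rate[t]
                    if r <= 0:
                        new_alloc[t] += 1
                        break
            else:
                new_alloc[s] += 1  # stay with the current service
    return new_alloc
```

Iterating this rule conserves the server count while drifting the allocation toward the more profitable service, mirroring how forager recruitment concentrates bees on richer flower patches.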
48

Kunkhet, Arus. "Harmonised shape grammar in design practice." Thesis, Staffordshire University, 2015. http://eprints.staffs.ac.uk/2209/.

Full text
Abstract:
The aim of this thesis is to address the contextual and harmony issues in shape grammar (SG) by applying knowledge from the field of natural language processing (NLP). Current shape grammars are designed for static models (Ilčík et al., 2010), are restricted to limited domains (Chau et al., 2004), involve a time-consuming process (Halatsch, 2008), demand high user skills (Lee and Tang, 2009), and cannot guarantee aesthetic results (Huang et al., 2009). Current approaches to shape grammar produce infinite designs and often meaningless shapes. This thesis addresses this problem by proposing a harmonised shape grammar framework that applies five levels of analysis, namely the morphological, lexical, syntactic, semantic, and pragmatic levels, to enhance the overall design process. By ensuring that shapes are semantically and pragmatically well formed, the generated shapes can be contextual and harmonious. The semantic analysis level focuses on a character's anatomy, body function, and habitat in order to produce meaningful designs, whereas the pragmatic level achieves harmony in design by selecting relevant character attributes, characteristics, and behaviour. In order to test the framework, this research applies the five natural language processing levels to a set of 3D humanoid characters. To validate the framework, a set of criteria related to aesthetic requisites has been applied to the generated humanoid characters; these include the principles of design (i.e. contrast, emphasis, balance, unity, pattern, and rhythm) and aspects of human perception in design (i.e. visceral, behavioural and reflective). The framework has ensured that the interrelationships between each design part are mutually beneficial and that all elements of the humanoid characters are combined to accentuate their similarities and bind the picture parts into a whole.
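The general idea of constraining a generative grammar with higher levels of analysis can be illustrated with a toy sketch. This is not the thesis's framework: the rewrite rules and the single "semantic" validator below are hypothetical, standing in for the morphological-to-pragmatic pipeline that filters an otherwise unbounded rule set down to meaningful shapes.

```python
# Illustrative sketch only: a toy context-free rewrite grammar whose raw
# output is filtered by a validator, mirroring how semantic/pragmatic
# well-formedness constrains generation. All names are hypothetical.

RULES = {"body": ["body limb", "torso"], "limb": ["arm", "leg"]}

def expand(symbol, depth=3):
    """Enumerate derivations of the rewrite grammar up to a given depth."""
    if depth == 0 or symbol not in RULES:
        return [symbol]  # terminal symbol, or depth budget exhausted
    results = []
    for production in RULES[symbol]:
        # expand each token of the production, then take all combinations
        parts = [expand(tok, depth - 1) for tok in production.split()]
        combos = [""]
        for options in parts:
            combos = [c + (" " if c else "") + o
                      for c in combos for o in options]
        results.extend(combos)
    return results

def semantically_valid(shape):
    # hypothetical semantic constraint: a figure needs a torso
    return "torso" in shape

candidates = expand("body")                              # raw, unconstrained
meaningful = [s for s in candidates if semantically_valid(s)]  # filtered
```

The point of the sketch is the shape of the pipeline: generation is cheap and over-produces, and the higher analysis levels act as successive filters over the candidate set.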
49

George, Gary R. "New methods of mathematical modeling of human behavior in the manual tracking task." Diss., Online access via UMI:, 2008.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Mechanical Engineering, 2008.
Includes bibliographical references.
50

Gkolias, Theodoros. "Shape analysis in protein structure alignment." Thesis, University of Kent, 2018. https://kar.kent.ac.uk/66682/.

Full text
Abstract:
In this thesis we explore the problem of structural alignment of protein molecules using statistical shape analysis techniques. The structural alignment problem can be divided into three smaller ones: the representation of protein structures, the sampling of possible alignments between the molecules, and the evaluation of a given alignment. Previous work in this field can be divided into two approaches: an ad hoc algorithmic approach from the bioinformatics literature, and an approach using statistical methods in either a likelihood or a Bayesian framework. Each addresses the problem from a different scope: the algorithmic approach is easy to implement but lacks an overall modelling framework, while the Bayesian approach addresses this issue but its implementation is sometimes not straightforward. We develop a method that is easy to implement and is based on statistical assumptions. In order to assess the quality of a given alignment we use a size-and-shape likelihood density based on the structural information of the molecules. This likelihood density is also extended to include sequence information and gap penalty parameters so that biologically meaningful solutions can be produced. Furthermore, we develop a search algorithm to explore possible alignments from a given starting point. The results suggest that our approach produces better or equal alignments when compared to the most recent structural alignment methods. In most cases we managed to achieve a higher number of matched atoms combined with a high TM-score. Moreover, we extended our method using Bayesian techniques to perform alignments based on posterior modes. In our approach, we estimate directly the mode of the posterior distribution, which provides the final alignment between two molecules. We also choose a different approach for treating the mean parameter: in previous methods the mean was either integrated out of the likelihood density or considered fixed.
We choose to assign a prior over it and obtain its posterior mode. Finally, we consider an extension of the likelihood model assuming a Normal density for both the matched and unmatched parts of a molecule and a diagonal covariance structure. We explore two variants: in the first we consider a fixed zero mean for the unmatched parts of the molecules, and in the second we consider a common mean for both the matched and unmatched parts. Based on simulated and real results, both models seem to perform well, obtaining a high number of matched atoms and a high TM-score.
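The TM-score used above as an evaluation criterion has a standard closed form (Zhang and Skolnick's length-normalised score): it sums a distance-dependent weight over matched atom pairs and divides by the target length, so unmatched residues contribute zero. A minimal sketch, using the common floor of 0.5 on the normalising distance d0:

```python
def tm_score(distances, l_target):
    """TM-score of an alignment (standard Zhang-Skolnick formula).

    distances: distances (in angstroms) between matched atom pairs
    l_target:  length (number of residues) of the target molecule
    Returns a value in (0, 1]; ~1 indicates near-identical structures.
    """
    # d0 scales with target length; the cube-root formula assumes
    # l_target > 15, and d0 is conventionally floored at 0.5
    d0 = 1.24 * (l_target - 15) ** (1.0 / 3.0) - 1.8 if l_target > 15 else 0.5
    d0 = max(d0, 0.5)
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in distances) / l_target
```

Because the score is normalised by the target length rather than the number of matched pairs, matching more atoms (at reasonable distances) raises the score, which is why the abstract reports the number of matched atoms and the TM-score together.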