
Dissertations / Theses on the topic 'Computer modelling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Leinala, Tommi J. "Computer modelling of landslides." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0008/MQ34119.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Li Li. "Computer modelling of magnetrons." Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Petric, J. "Computer modelling of landscapes." Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ching, Peter Ho Yin. "Modelling pipelined computer architectures." Thesis, Imperial College London, 1988. http://hdl.handle.net/10044/1/46997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Yufei (趙宇飛). "Computer modelling of cloth objects." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31236431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Reed, Melissa. "Computer modelling of mammoth extinction." Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Raine, A. R. C. "Computer modelling of protein structure." Thesis, University of York, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mills, Glen E. "Computer modelling of polymer electrolytes." Thesis, Keele University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anderson, Thomas R. "Computer modelling of agroforestry systems." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/13429.

Full text
Abstract:
The potential of agroforestry in the British uplands depends largely on the ability of system components to efficiently use resources for which they compete. A typical system would comprise conifers planted at wide spacing, with sheep grazing pasture beneath. Computer models were developed to investigate the growth of trees and pasture in a British upland agroforest system, assuming that growth is primarily a function of light intercepted. Some of the implications of growing trees at wide spacing compared to conventional spacings, and the impact of trees on the spatial and annual production of pasture, were examined. Competition for environmental resources between trees and pasture was assumed to be exclusively for light: below-ground interactions were ignored. Empirical methods were used to try and predict timber production in agroforest stands based on data for conventional forest stands, and data for widely-spaced radiata pine grown in South Africa. These methods attempted to relate stem volume increment to stand density, age, and derived competition measures. Inadequacy of the data base prevented successful extrapolation of growth trends of British stands, although direct extrapolation of the South African data did permit predictions to be made. A mechanistic individual-tree growth model was developed, both to investigate the mechanisms of tree growth at wide spacings, and to provide an interface for a pasture model to examine pasture growth under the shading conditions imposed by a tree canopy. The process of light interception as influenced by radiation geometry and stand architecture was treated in detail. Other features given detailed consideration include carbon partitioning, respiration, the dynamics of foliage and crown dimensions, and wood density within tree stems. The predictive ability of the model was considered poor, resulting from inadequate knowledge and data on various aspects of tree growth. The model highlighted the need for further research into the dynamics of crown dimensions, foliage dynamics, carbon partitioning patterns and wood density within stems, and how these are affected by wide spacing. A pasture model was developed to investigate growth beneath the heterogeneous light environment created by an agroforest tree canopy. Pasture growth was closely related to light impinging on the crop, with temperature having only a minor effect. The model highlighted the fact that significant physiological adaptation (increased specific leaf area, decreased carbon partitioned below-ground and changes in the nitrogen cycle) is likely to occur in pasture shaded by a tree canopy.
APA, Harvard, Vancouver, ISO, and other styles
10

Taylor, Steven John. "Computer modelling of pyrotechnic combustion." Thesis, Rhodes University, 1996. http://hdl.handle.net/10962/d1006913.

Full text
Abstract:
One of the most important industrial uses of pyrotechnic compositions is as delay fuses in electric detonators. Many factors influence the rate of burning of such fuses. These include (a) the primary choice of chemical components, followed by (b) the physical properties of these components, particularly the particle-size and distribution of the fuel, (c) the composition of the system chosen and (d) the presence of additives and/or impurities. A full experimental study of the influences of even a few of these factors, while attempting to hold other potential variables constant, would be extremely time-consuming and hence attention has been focused on the possibilities of modelling pyrotechnic combustion. Various approaches to the modelling of pyrotechnic combustion are discussed. These include: (i) one-dimensional finite-difference models; (ii) two-dimensional finite-element models; (iii) particle-packing considerations; (iv) Monte Carlo models. Predicted behaviour is compared with extensive experimental information for the widely-used antimony/potassium permanganate pyrotechnic system, and the tungsten/potassium dichromate pyrotechnic system. The one-dimensional finite-difference model was investigated to give a simple means of investigating the effects of some parameters on the combustion of a pyrotechnic. The two-dimensional finite-difference model used similar inputs but, at the expense of considerably more computer power, gave more extensive information such as the shape of the burning front and the temperature gradients throughout the column and within the casing material. Both these models gave improved results when allowance was made for autocatalytic kinetics in place of the usual assumption of an "order-of-reaction", n ≤ 1. The particle-packing model investigated the qualitative relationship between the maximum burning rate of a pyrotechnic system and the maximum number of contact points (per 1.00 g composition) calculated for that system. Qualitative agreement was found for those systems which are presumed to burn mainly via solid-solid reactions. The Monte Carlo model investigated the effect of the random packing of fuel and oxidant particles on the variability of the burning rate of a pyrotechnic composition.
APA, Harvard, Vancouver, ISO, and other styles
11

Zhao, Yufei. "Computer modelling of cloth objects /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18614061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Fleming, Sean D. "Computer modelling of gibbsite crystallization." Thesis, Curtin University, 1999. http://hdl.handle.net/20.500.11937/1274.

Full text
Abstract:
This thesis documents the development and application of a computer model for gibbsite, an aluminium tri-hydroxide polymorph. In particular, the work has emphasized the idea of computer modelling techniques combining with experimental observations to provide greater insight than either could separately. Chapter One provides an overview and introduction to the fields of solid state chemistry, crystallization and computer modelling. These ideas are extended in Chapter Two to include a more detailed discussion of the theoretical principles behind the modelling in this project. The development of transferable oxalate and hydroxide potential models, intended primarily for sodium oxalate and gibbsite, is described in Chapter Three. Both ab initio hypersurface fitting and lattice fitting techniques were utilized, with an average structural fitting error of under two percent. In addition, the potentials were used to successfully reproduce several (related) crystal structures, thus establishing the quality of the model. In Chapter Four, the model for gibbsite was employed in generating equilibrium and growth morphologies. The equilibrium morphology was found to give excellent agreement with experiment, with all observed faces present. However, the importance of the prismatic planes is underestimated. Also discussed in the chapter is a method for predicting the phenomenon of crystalline twinning. This technique was successfully applied to a number of systems, including gibbsite and sodium oxalate. In Chapter Five, the equilibrium morphology calculations performed earlier were extended by probing the effects of cation incorporation on the habit of gibbsite. This study was conducted in order to provide a first step in estimating the role of the crystallizing solution. Calculations of the change in surface energy caused by the replacement of a surface proton with a cation from solution were made. Different crystal habits were constructed by applying a range of defect surface coverage values to each of the faces appearing in the morphology. The resulting defect morphologies were in excellent agreement with crystal habits commonly observed by experimentalists. Also, the work provided an explanation for the earlier underestimation of the prismatic faces. Chapter Six documents molecular simulations of solutions containing the major species known to be present in industrial and experimental Bayer liquors. The structuring in two solutions, one containing sodium hydroxide and the other potassium hydroxide, was probed by constructing graphs of the radial distribution functions. These plots indicated that a significant degree of ion pairing was occurring between the alkali metal cations (Na+ and K+) and the aluminate monomer ([Al(OH)4]-). Furthermore, these cations were found to be acting as 'bridges' which stabilize multiple aluminate monomers, allowing them to form clusters. This data was used to assist in explaining vibrational spectra, and to postulate that clustering may be the origin of the fine particle suspensions noted during the induction period.
APA, Harvard, Vancouver, ISO, and other styles
13

Fleming, Sean D. "Computer modelling of gibbsite crystallization." Curtin University of Technology, School of Applied Chemistry, 1999. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10273.

Full text
Abstract:
This thesis documents the development and application of a computer model for gibbsite, an aluminium tri-hydroxide polymorph. In particular, the work has emphasized the idea of computer modelling techniques combining with experimental observations to provide greater insight than either could separately. Chapter One provides an overview and introduction to the fields of solid state chemistry, crystallization and computer modelling. These ideas are extended in Chapter Two to include a more detailed discussion of the theoretical principles behind the modelling in this project. The development of transferable oxalate and hydroxide potential models, intended primarily for sodium oxalate and gibbsite, is described in Chapter Three. Both ab initio hypersurface fitting and lattice fitting techniques were utilized, with an average structural fitting error of under two percent. In addition, the potentials were used to successfully reproduce several (related) crystal structures, thus establishing the quality of the model. In Chapter Four, the model for gibbsite was employed in generating equilibrium and growth morphologies. The equilibrium morphology was found to give excellent agreement with experiment, with all observed faces present. However, the importance of the prismatic planes is underestimated. Also discussed in the chapter is a method for predicting the phenomenon of crystalline twinning. This technique was successfully applied to a number of systems, including gibbsite and sodium oxalate. In Chapter Five, the equilibrium morphology calculations performed earlier were extended by probing the effects of cation incorporation on the habit of gibbsite. This study was conducted in order to provide a first step in estimating the role of the crystallizing solution. Calculations of the change in surface energy caused by the replacement of a surface proton with a cation from solution were made. Different crystal habits were constructed by applying a range of defect surface coverage values to each of the faces appearing in the morphology. The resulting defect morphologies were in excellent agreement with crystal habits commonly observed by experimentalists. Also, the work provided an explanation for the earlier underestimation of the prismatic faces. Chapter Six documents molecular simulations of solutions containing the major species known to be present in industrial and experimental Bayer liquors. The structuring in two solutions, one containing sodium hydroxide and the other potassium hydroxide, was probed by constructing graphs of the radial distribution functions. These plots indicated that a significant degree of ion pairing was occurring between the alkali metal cations (Na+ and K+) and the aluminate monomer ([Al(OH)4]-). Furthermore, these cations were found to be acting as 'bridges' which stabilize multiple aluminate monomers, allowing them to form clusters. This data was used to assist in explaining vibrational spectra, and to postulate that clustering may be the origin of the fine particle suspensions noted during the induction period.
APA, Harvard, Vancouver, ISO, and other styles
14

Care, Charles. "From analogy-making to modelling : the history of analog computing as a modelling technology." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/2381/.

Full text
Abstract:
Today, modern computers are based on digital technology. However, during the decades after 1940, digital computers were complemented by the separate technology of analog computing. But what was analog computing, what were its merits, and who were its users? This thesis investigates the conceptual and technological history of analog computing. As a concept, analog computing represents the entwinement of a complex pre-history of meanings, including calculation, modelling, continuity and analogy. These themes are not only landmarks of analog's etymology, but also represent the blend of practices, ways of thinking, and social ties that together comprise an 'analog culture'. The first half of this thesis identifies how the history of this technology can be understood in terms of the two parallel themes of calculation and modelling. Structuring the history around these themes demonstrates that technologies associated with modelling have less representation in the historiography. Basing the investigation around modelling applications, the thesis investigates the formation of analog culture. The second half of this thesis applies the themes of modelling and information generation to understand analog use in context. Through looking at examples of analog use in academic research, oil reservoir modelling, aeronautical design, and meteorology, the thesis explores why certain communities used analog and considers the relationship between analog and digital in these contexts. This study demonstrates that analog modelling is an example of information generation rather than information processing. Rather than focusing on the categories of analog and digital, it is argued that future historical scholarship in this field should give greater prominence to the more general theme of modelling.
APA, Harvard, Vancouver, ISO, and other styles
15

Clippingdale, Simon. "Multiresolution image modelling and estimation." Thesis, University of Warwick, 1988. http://wrap.warwick.ac.uk/109834/.

Full text
Abstract:
Multiresolution representations make explicit the notion of scale in images, and facilitate the combination of information from different scales. To date, however, image modelling and estimation schemes have not exploited such representations and tend rather to be derived from two-dimensional extensions of traditional one-dimensional signal processing techniques. In the causal case, autoregressive (AR) and ARMA models lead to minimum mean square error (MMSE) estimators which are two-dimensional variants of the well-established Kalman filter. Noncausal approaches tend to be transform-based and the MMSE estimator is the two-dimensional Wiener filter. However, images contain profound nonstationarities such as edges, which are beyond the descriptive capacity of such signal models, and defects such as blurring (and streaking in the causal case) are apparent in the results obtained by the associated estimators. This thesis introduces a new multiresolution image model, defined on the quadtree data structure. The model is a one-dimensional, first-order gaussian martingale process causal in the scale dimension. The generated image, however, is noncausal and exhibits correlations at all scales unlike those generated by traditional models. The model is capable of nonstationary behaviour in all three dimensions (two position and one scale) and behaves isomorphically but independently at each scale, in keeping with the notion of scale invariance in natural images. The optimal (MMSE) estimator is derived for the case of corruption by additive white gaussian noise (AWGN). The estimator is a one-dimensional, first-order linear recursive filter with a computational burden far lower than that of traditional estimators. However, the simple quadtree data structure leads to aliasing and 'block' artifacts in the estimated images. This could be overcome by spatial filtering, but a faster method is introduced which requires no additional multiplications but involves the insertion of some extra nodes into the quadtree. Nonstationarity is introduced by a fast, scale-invariant activity detector defined on the quadtree. Activity at all scales is combined in order to achieve noise rejection. The estimator is modified at each scale and position by the detector output such that less smoothing is applied near edges and more in smooth regions. Results demonstrate performance superior to that of existing methods, and at drastically lower computational cost. The estimation scheme is further extended to include anisotropic processing, which has produced good results in image restoration. An orientation estimator controls anisotropic filtering, the output of which is made available to the image estimator.
APA, Harvard, Vancouver, ISO, and other styles
16

LoVetri, Joseph (Joe). "Computer techniques for electromagnetic interaction modelling." Thesis, University of Ottawa (Canada), 1991. http://hdl.handle.net/10393/7619.

Full text
Abstract:
Computer techniques for the modelling of complex electromagnetic interactions are explored. The main thesis is that these techniques, or methods, can be divided into two types: non-algorithmic and algorithmic techniques. Approximate algorithmic methods for the modelling of electromagnetic interactions have undergone great advances in the past twenty years but they are still only feasible for relatively small problems (i.e. where the space-time discretization produces and requires only a relatively small number of unknowns). The computer implementation of non-algorithmic methods has recently become a reality with the maturing of expert system technology and knowledge-based engineering. In Part I of this thesis, a knowledge-based approach for the modelling of electromagnetic (EM) interactions in a system is described. The purpose is to determine any unwanted EM effects which could jeopardize the safety and operation of the system. Modelling the interactions in a system requires the examination of the compounded and propagated effects of the electromagnetic fields. A useful EM modelling approach is one which is incremental and constraint-based. The approach taken here subdivides the modelling task into two parts: (a) the definition of the related physical topology, and (b) the propagation of the electromagnetic constraints. A prototype of some of the EM constraints has been implemented in Quintus Prolog under NeWS on a Sun workstation. User interaction is through a topology drawing tool and a stack-based attribute interface similar to the HyperCard™ interface of the Apple Macintosh computer. In Part II, numerical methods which discretize the space-time region of interest and provide a solution to the electromagnetics problem, given appropriate initial and boundary conditions, are investigated. Specifically, time-domain finite difference methods as applied to Maxwell's equations are analyzed, compared and implemented. As the basis of this analysis, Maxwell's equations are expressed as a system of hyperbolic conservation laws. Analytical properties of these systems, based on the method of characteristics, are used to study the numerical solution of Maxwell's equations. Practical issues, such as computational efficiency and memory requirements, are discussed for the implementation of the finite difference schemes. Advanced programming techniques are used to implement all the finite difference schemes discussed. The schemes are used to solve the problem of the penetration of electromagnetic energy through a shield with a thick gap. A two-dimensional time-domain finite element method, implemented as the software package PDE/PROTRAN, is also applied to shielding problems. The software package is first validated for simple hyperbolic problems and is then applied to perfectly conducting shields with apertures.
APA, Harvard, Vancouver, ISO, and other styles
17

Bose, Sonia Manjusri. "Computer simulation modelling of polymer ageing." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/843495/.

Full text
Abstract:
Detailed information on the underlying mechanisms of macromolecular disintegration processes is not always fully available from lab-based experiments and GPC. A powerful computer simulation technique is therefore indispensable in this respect, while saving time, labour and expense. This project aims to develop an interactive computer program to capture the behaviour of a complex reactor and offer the following functionality: mathematical calculations, graph/chart generation, simulation experiments based on user-given scission and environmental characteristics, and data saving and re-loading. Windows-style menu-driven interfaces provide templates for easy implementation of complex mathematical algorithms - a new simulation technique (Slider interfaces) presented in the thesis, based on a cellulose-ageing study in electrical transformers (Heywood, 2000). A novel statistical concept was introduced to significantly improve the real-time performance of the mathematical calculations used to simulate polymer chain fragmentation, enabling transformation of the simple iterative algorithm into a semi-iterative and instant calculation algorithm. Three new mathematical functions were constructed - (a) Monte Carlo Dynamic (Slider), (b) Algebraic Exact, and (c) Markov Statistical models - initially using an arbitrary time scale for degradation. Real-time simulation was developed using three time-model variants that included the interpretation of deviations of the order reaction rate from linearity to an exponential-type function. The above transformation enhanced the reproducibility and accuracy of degraded MWD curve sampling while improving their graphical display and clarity via a 'Cubic B-Spline' smoothing algorithm. Complex models were created from a ranking ensemble of single scission mechanisms, structured with levels of probability constructs to effectively simulate GPC-like curve deformities and side-shifts. The simulation results provided new information in the following key areas: the temporal shift patterns of MWD/PCLD under different ageing conditions; graphical comparisons between simulated and observed kinetic and scission parameters; the dominant types of scission strategy at different reactor conditions; and the dependence of reaction behaviour on the polymer structural order. An alternative way of predicting the life expectancy of an ageing polymer, independent of DP, relates time-temperature to the magnitude of intermediate MWD curve shifts; DP is an average value and subject to errors, and an equation was derived for this. A binary tree "death time" algorithm was introduced for calculating the life expectancy of different categories of polymer chain species. The non-iterative techniques developed here open up new avenues for further research. The developed algorithms and computer program may provide ample scope for investigating the ageing of other industrially important polymers and can be utilised, with little modification, in other areas of polymer research where a probability distribution is sought.
APA, Harvard, Vancouver, ISO, and other styles
18

Fearing, Paul. "The computer modelling of fallen snow." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/NQ56540.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Chow, Hon-nin. "Computer aided modelling of porous structures." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B39848929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Roll, Sebastian. "Ontology-based Computer-aided modelling Tool." Thesis, Norges Teknisk-Naturvitenskaplige Universitet, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-21118.

Full text
Abstract:
A long-lived software project for using the advantages of a computer in the modelling of physical-chemical-biological processes has been under development by supervisor Prof Heinz A. Preisig. The software is designed to assist the modeller in making consistent and structurally solvable models, reducing the time frame and the number of random mistakes. The model is built as a basic graph at the bottom, with subsequent layering of information onto the graph describing its physical nature. The finished models are meant to support automatic code generation for existing solvers, so that they can be used for simulation, system identification or control optimization problems. The scope of the project has been to assist Prof Preisig with software design and overall strategy. The biggest concepts faced were how to deal with phase change in models, what physical descriptions should be available for the graph, and how these physical descriptions are best structured to prevent infeasible systems. The current version of the computer-aided modelling project is written in Python, using the graphical user interface environment PyQt4.
APA, Harvard, Vancouver, ISO, and other styles
21

Crossland, J. D. "Computer modelling of laser-plasma ablation." Thesis, University of Hull, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cox, I. J. "Advanced computer modelling in steel making." Thesis, Swansea University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636309.

Full text
Abstract:
The most critical factors to control during steel making are carbon composition, steel temperature and time. At Corus Port Talbot Steel Works the priority is to achieve the aim temperature and carbon target ‘window’ at the end of the steel making process. An analysis of the historical end-point carbon and temperature performance revealed that in many cases the aim temperature and aim carbon composition were exceeded and, worse still, fell outside acceptable limits with expensive side effects. The use of computer control in the steel making process is essential to obtain accurate end-point temperature and carbon control in liquid steel. The current computer model employed to execute this task at Corus Port Talbot Steel Works is a procedural model that must be maintained by a person with considerable steel making knowledge. An initial investigation into the use of Artificial Neural Networks (ANNs) for prediction of oxygen and coolant requirements during the last few minutes of steel making proved that this technique can accurately model oxygen but has difficulty when modelling coolant requirements. An analysis of historical steel data on the carbon-temperature diagram was performed and a relationship between the position of the steel data (on the carbon-temperature diagram) at the in-blow sample point and values of the end-blow oxygen and coolant addition was discovered. Several further ANN models were created using steel process data where the target window was achieved. These models displayed improved accuracy when compared to linear and polynomial models. An assessment of each ANN model output concluded that the ANN models would provide improved end-point carbon and temperature control with more steel heats arriving in the end-point target window. The next stage is to investigate a strategy for providing feedback and updating the ANN model. Finally, when the ANN model is deployed, the control should be closed-loop so as to minimise the need for human intervention.
APA, Harvard, Vancouver, ISO, and other styles
23

Chow, Hon-nin (周漢年). "Computer aided modelling of porous structures." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39848929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Darwish, Mona. "Computer modelling of clinical pharmacokinetic data." Thesis, Queen's University Belfast, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Croghan, James A. "Recursive computer modelling for mine design." Thesis, University of Nottingham, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Candan, Cevza. "Computer modelling of warp knit structures." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Azoff, Eitan Michael. "Computer modelling of heterojunction bipolar transistors." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hanna, Jonathan Robert Paul. "Preserving relationship information in computer modelling." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Anwar, Najm. "Computer modelling of electro-optic modulators." Thesis, City University London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Deng, Hui-Fang. "Computer modelling of liquid crystal displays." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Alshahid, Kuteiba. "Computer modelling of the human hand." Thesis, University of Sussex, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Milazzo, Lorenzo. "Computer modelling of diamond radiation detectors." Thesis, King's College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Bennett, Victoria Jane. "Computer modelling the Serengeti-Mara ecosystem." Thesis, University of Leeds, 2003. http://etheses.whiterose.ac.uk/1553/.

Full text
Abstract:
At present, the viability of biodiversity in most of the remaining natural areas of the world is primarily threatened by human encroachment. This has led to an increased demand for active conservation. However, in order to devise and implement appropriate management strategies for a particular area, a specific understanding of ecosystem function is required. Creating a simulation model using available research data may provide a way to achieve this. In this thesis, the construction of a comprehensive model delineating the dynamics of the Serengeti-Mara ecosystem is initiated. Using the abundance of research data collected on this ecosystem over the last 40 years, the processes involved in setting-up such a model are investigated. First, a basic foundation, accommodating the spatial and temporal variation in climate and physiography across the Serengeti region, is established. The relationship between grass growth and rainfall is then incorporated, along with the mechanisms concerned with limiting grass availability, the subsequent survival and recruitment of grazing herbivores and finally, the influence of predation upon those herbivores. The model, even in these early stages of development, adequately depicted dynamics equivalent to those in the Serengeti-Mara ecosystem, indicating that the methods used were appropriate. It was found that grass availability was not the primary factor influencing the overall dynamics of grazing herbivores within the ecosystem, and only migratory wildebeest appeared to be strongly influenced by this factor throughout the time-scale of the model. It was suggested that other factors were responsible for regulating the majority of herbivore populations. By identifying where further research is required to increase our understanding of this particular ecosystem's function, the model demonstrates its effectiveness as an analytical tool. For the long-term conservation of the Serengeti-Mara ecosystem and other similar ecosystems, this reveals that the construction of such models is certainly beneficial, if not essential.
APA, Harvard, Vancouver, ISO, and other styles
34

Kelsey, E. T. "Computer modelling of complex oxide surfaces." Thesis, University of Bath, 1993. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Andersson, Tim. "Computer Modeling of Thermodynamic Flows in Reactors for Activated Carbon Production." Thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-33282.

Full text
Abstract:
There is a big demand for activated carbon in Ghana; it is used in the country's mining industry as well as in a multitude of other applications. Currently all activated carbon is imported, despite the fact that the country has a large supply of agricultural waste that could be used for its production. This study focuses on activated carbon production from oil palm kernel shells from the nation's palm oil industry. Earlier research points to a set of specific conditions needed for the production. The pyrolysis process produces biochar from the biomass; the process is set to take place for 2 h at 600 °C after an initial heating rate of 10 °C/min. The activation process then produces the activated carbon from the biochar and is set to take place for 2 h at 850 °C with a heating rate of 11.6 °C/min. Two reactors are designed to meet the desired conditions. The reactors are both set up to use secondary gases from diesel burners to heat the biomass. The heating is accomplished by leading the hot gases in an enclosure around a rotating steel drum that holds the biomass. To improve the ability to control the temperature profile in the biomass, two outlet pipes are set up on top of the reactor, one above the biomass inlet and one above the biomass outlet. By controlling how much gas flows to each outlet, both the heating rate and the stability of the temperature profile can be controlled. The secondary gas inlet is set up facing downwards at the transition between the heating zone (area of initial heating) and the maintaining zone (area of constant temperature). The two reactors are modeled in the physics simulation software COMSOL Multiphysics. Reference operating parameters are established, and these parameters, as well as parts of the design, are then changed to evaluate how the temperature profile in the biomass and biochar can be controlled. A goal area was set up for the profile in the biomass, where it was required to maintain a temperature of between 571.5 and 628.5 °C after the initial heating to be seen as acceptable. Similarly, a goal area was set for the biochar between 809 °C and 891 °C after the initial heating. It is found from the simulations that the initial design of the reactors works well and can be used to produce the desired temperature profiles in the biomass and biochar. Furthermore, it is concluded that the initial design for the pyrolysis reactor can be improved by having the gas outlet pipe situated by the biomass inlet face downwards instead of upwards. The redesign improves the overall efficiency of the reactor by increasing the heating rate and the maintained temperature. The evaluation of the operating parameters led to the conclusion that the secondary gas inlet temperature affects the temperature profile to a greater extent than the gas mass flow in both reactors, thereby making them more energy efficient. The increase in efficiency comes with the drawback of a more unstable temperature profile. If the temperature profile becomes too unstable, it will include temperatures that are too high or too low to be seen as acceptable.
There is a large demand for activated carbon in Ghana; it is used in the country's mining industry as well as in a range of other applications. Today all activated carbon is imported, even though the country has large amounts of agricultural waste products that could be used to produce activated carbon. This work focuses on the production of activated carbon from oil palm kernels from the country's palm oil industry. Earlier research identifies a set of specific conditions required for the production. The pyrolysis process produces biochar from biomass, and the target for the process is set to hold 600 °C for two hours after a heating rate of 10 °C/min. For the activation process, which then produces activated carbon from the biochar, the target is set to hold a temperature of 850 °C with a heating rate of 11.6 °C/min. Two reactors are designed to create the required conditions. The reactors are heated by secondary gas from diesel burners to heat the biomass and the biochar. The heating is accomplished by leading the hot secondary gas around a rotating steel drum through which the biomass flows. To maintain good control of the temperature profile in the biomass, two gas outlet pipes are placed on top of the reactors. By controlling the gas flow to each outlet pipe, both the heating rate and the stability of the temperature can be adjusted. The secondary gas inlet pipe is placed on the underside of the reactor and directed towards the transition zone between heating and stabilisation. The reactors are modelled in the physics simulation software COMSOL Multiphysics 4.3b. In COMSOL the operation is simulated, and the parameters that affect it are evaluated by varying them against a reference value. The target range for the temperature profile in the pyrolysis reactor is set to a temperature between 571.5 and 628.5 °C during pyrolysis and after the heating; if the temperature profile falls outside the target range it is classed as unacceptable. For the biochar in the activation reactor a similar target is set: it should stay between 809 °C and 891 °C after the heating. The results from the simulations show that the reactor designs work as intended and that they can produce the desired temperature profiles. It is also shown that the design of the pyrolysis reactor can be improved further by placing the front outlet pipe for the secondary gas on the underside of the reactor instead of its top. The change leads to more efficient heat transfer to the biomass and raises its temperature throughout the reactor. The analysis of operating parameters, such as the flow and temperature of the secondary gas, shows that the gas temperature affects the processes to a much greater degree than its mass flow. By raising the temperature the flow can be reduced and the whole process becomes more energy efficient; however, this leads to increased instability within the target range, and if the instability becomes too large the temperature profile starts to drift out of the target range.
APA, Harvard, Vancouver, ISO, and other styles
36

Ness, Paul Edward. "Creative software development : an empirical modelling framework." Thesis, University of Warwick, 1997. http://wrap.warwick.ac.uk/3059/.

Full text
Abstract:
The commercial success of software development depends on innovation [Nar93a]. However, conventional approaches inhibit the development of innovative products that embody novel concepts. This thesis argues that this limitation of conventional software development is largely due to its use of analytical artefacts, and that other activities, notably Empirical Modelling and product design, avoid the same limitation by using creative artefacts. Analytical artefacts promote the methodical representation of familiar subjects whereas creative artefacts promote the exploratory representation of novel subjects. The subjects, constraints, environments and knowledge associated with a design activity are determined by the nature of its artefacts. The importance of artefacts was discovered by examining the representation of different kinds of lift system in respect of Empirical Modelling, product design and software development. The artefacts were examined by identifying creative properties, as characterized in the theory of creative cognition [FWS92], together with their analytical counterparts. The processes of construction were examined by identifying generative and exploratory actions. It was found that, in software development, the artefacts were analytical and the processes transformational, whereas, in Empirical Modelling and product design, the artefacts were both creative and analytical, and the processes exploratory. A creative approach to software development using both creative and analytical artefacts is proposed for the development of innovative products. This new approach would require a radical departure from the established ideas and principles of software development. The existing paradigm would be replaced by a framework based on Empirical Modelling. Empirical Modelling can be thought of as a situated approach to modelling that uses the computer in exploratory ways to construct artefacts. The likelihood of the new paradigm being adopted is assessed by considering how it addresses the topical issues in software development.
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Yih-Chang. "Empirical modelling for participative business process reengineering." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/4204/.

Full text
Abstract:
The purpose of this thesis is to introduce a new broad approach to computing - Empirical Modelling (EM) - and to propose a way of applying this approach for system development so as to avoid the limitations of conventional approaches and integrate system development with business process reengineering (BPR). Based on the concepts of agency, observable and dependency, EM is an experience-based approach to modelling with computers in which the modeller interacts with an artefact through continuous observations and experiments. It is a natural way of working for business process modelling because the modeller is involved in, and takes account of, the real world context. It is also adaptable to a rapidly changing environment as the computer-based models serve as creative artefacts with which the modeller can interact in a situated and open-ended manner. This thesis motivates and illustrates the EM approach to new concepts of participative BPR and participative process modelling. That is, different groups of people, with different perceptions, competencies and requirements, can be involved during the process of system development and BPR, rather than just being involved at an early stage. This concept aims to address the well-known high failure rate of BPR. A framework SPORE (situated process of requirements engineering), which has been proposed to guide the process of cultivating requirements in a situated manner, is extended to participative BPR (i.e. to support many users in a distributed environment). Two levels of modelling are proposed for the integration of contextual understanding and system development. A comparison between EM and object-orientation is also provided to give insight into how EM differs from current methodologies and to point out the potential of EM in system development and BPR. The ISMs (interactive situation models), built using the principles and tools of EM, are used to form artefacts during the modelling process. A warehouse and logistics management system is taken as an illustrative case study for applying this framework.
APA, Harvard, Vancouver, ISO, and other styles
38

Hawkins, James David. "Computer simulation of trachoma." Thesis, University of Southampton, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.255761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Weir, Daryl. "Modelling uncertainty in touch interaction." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/6318/.

Full text
Abstract:
Touch interaction is an increasingly ubiquitous input modality on modern devices. It appears on devices including phones, tablets, smartwatches and even some recent laptops. Despite its popularity, touch as an input technology suffers from a high level of measurement uncertainty. This stems from issues such as the ‘fat finger problem’, where the soft pad of the finger creates an ambiguous contact region with the screen that must be approximated by a single touch point. In addition to these physical uncertainties, there are issues of uncertainty of intent when the user is unsure of the goal of a touch. Perhaps the most common example is when typing a word, the user may be unsure of the spelling leading to touches on the wrong keys. The uncertainty of touch leads to an offset between the user’s intended target and the touch position recorded by the device. While numerous models have been proposed to model and correct for these offsets, existing techniques in general have assumed that the offset is a deterministic function of the input. We observe that this is not the case — touch also exhibits a random component. We propose in this dissertation that this property makes touch an excellent target for analysis using probabilistic techniques from machine learning. These techniques allow us to quantify the uncertainty expressed by a given touch, and the core assertion of our work is that this allows useful improvements to touch interaction to be obtained. We show this through a number of studies. In Chapter 4, we apply Gaussian Process regression to the touch offset problem, producing models which allow very accurate selection of small targets. In the process, we observe that offsets are both highly non-linear and highly user-specific. In Chapter 5, we make use of the predictive uncertainty of the GP model when applied to a soft keyboard — this allows us to obtain key press probabilities which we combine with a language model to perform autocorrection. In Chapter 6, we introduce an extension to this framework in which users are given direct control over the level of uncertainty they express. We show that not only can users control such a system successfully, they can use it to improve their performance when typing words not known to the language model. Finally, in Chapter 7 we show that users’ touch behaviour is significantly different across different tasks, particularly for typing compared to pointing tasks. We use this to motivate an investigation of the use of a sparse regression algorithm, the Relevance Vector Machine, to train offset models using small amounts of data.
APA, Harvard, Vancouver, ISO, and other styles
40

Kaya, Eylem. "Computer modelling of reinjection in geothermal fields." Thesis, University of Auckland, 2010. http://hdl.handle.net/2292/5929.

Full text
Abstract:
This thesis describes a computer modelling study of reinjection into geothermal systems. The aim of this work was to decide on optimum reinjection strategies for various types of geothermal systems. First, an idealized 3D closed model used by Sigurdsson et al. (1995) is extended to examine the effect of the natural recharge from groundwater, from the basement and laterally from the boundaries of the system. The results show that injection increases steam flow if recharge is small because the reservoir is acting as a closed system, or if the caprock is permeable and allows groundwater recharge. Otherwise injection may cause a decrease in steam production by suppressing hot recharge from depth or replacing lateral recharge by colder injected water. For hot-water reservoirs the effect of different well configurations on the production performance is examined with a model of the East Mesa field, and the results show that deep far-infield reinjection provides an optimum strategy that supports reservoir pressures without causing an early thermal breakthrough. The impacts of different rates of infield and outfield reinjection on two-phase liquid-dominated reservoirs are investigated by using a model of Wairakei-Tauhara. The results show that outfield reinjection is a safe method for disposing of water. A high rate of infield reinjection prevents boiling in the reservoir and causes a drop in the production enthalpies. A significant decline occurs in the surface features which are close to the injection zones. Reinjection infield of 25% of the separated geothermal water appears to be a good strategy since it does not cause a significant pressure or temperature decrease. For two-phase vapour-dominated reservoirs reinjection impacts on steam production are investigated by using a model of Darajat. Investigation of various production/reinjection schemes shows that reinjecting 50% of the condensate above the production zones increases steam production significantly. However, for higher reinjection rates, the steam production rate may decline owing to the breakthrough of cold reinjected water. If the production zones are deeper, reinjection is much more beneficial. Introducing a larger number of production and reinjection wells scattered throughout the field increases the reservoir life.
APA, Harvard, Vancouver, ISO, and other styles
41

Rotaru-Varga, Adam. "Computer modelling of humpback whale foraging behaviours." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0012/MQ61489.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dandre, C. A. "Computer modelling of fatigue in titanium alloys." Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636342.

Full text
Abstract:
A computer package has been developed that models the inter-granular stress distributions that are considered to be responsible for the fatigue crack initiation and short crack growth stages in near-α titanium alloys. The computer package incorporates the finite element method, and by modelling stress distributions at the microstructural level, this research is placed at the forefront of the field. A computer program generates a hypothetical uniform grain structure consisting of hexagonal grains. However, in order to model the anisotropic nature that is inherent in titanium alloys, a texture is developed for the computer generated structure. The directional variations for elastic and plastic properties are incorporated into the model by allocating crystallographic orientations to each grain individually. Since failure in near-α titanium alloys has been attributed to slip on the basal plane, the grain orientations describe the inclination of the basal plane to the direction of applied stress. The computer package models the principal inter-granular stress redistributions that occur at grain discontinuities, where 'weak' grains off-load stress onto adjacent 'strong' grains. Certain grains that are suitably orientated for slip experience an increased stress. When resolved onto the basal plane, this is evident as a unique combination of tensile and shear stresses that are considered to activate the separation of slip bands that have formed. In order to support the theoretical model, a limited material testing programme was devised which was considered to provide important information regarding the failure mechanisms. Four-point bend tests were performed on IMI 829 barstock material which was heat-treated to produce a coarse grain structure consisting of colonies of aligned α-platelets. SEM measurements were taken to determine the texture at crack initiation sites. The computer package was implemented to model the inter-granular stresses for these local textures. Stress contour plots indicated that significant inter-granular stress distributions existed which were unique to each initiation site. These results were supported by the fatigue observations in the bend specimen.
APA, Harvard, Vancouver, ISO, and other styles
43

Hayward, L. R. "Computer modelling of fluid flow and solidification." Thesis, Swansea University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637252.

Full text
Abstract:
Mould filling is an important part of the casting process and cannot be neglected if an accurate analysis of casting solidification is desired. A numerical technique has been developed to model heat transfer and solidification of metal during and after the filling of a mould. The simulation incorporates a macroscopic fluid flow and heat transfer analysis from the initial stages of filling until filling is completed. Solidification is accounted for as is the temperature dependence of the fluid velocity as solid forms at the mould walls. The computing technique is based on the SOLA-VOF finite difference method for two dimensional mould geometries. Heat conduction and energy transport are modelled using coupled heat conduction-heat convection equations. Limitations of the computing technique are demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
44

Harrington, Liam. "Computer modelling of night-time natural ventilation." Thesis, Loughborough University, 2001. https://dspace.lboro.ac.uk/2134/7535.

Full text
Abstract:
Wind-induced ventilation has the potential to reduce cooling energy use in buildings. One way this can be achieved is through the use of night-time ventilation to cool down the structure of a building, resulting in lower air and radiant temperatures during the day. To design effective naturally ventilated buildings, evaluation tools are needed that are able to assess the performance of a building. The primary goal of this work was to develop such a tool that is suitable for use in annual building energy simulation. The model presented is intermediate in complexity between a CFD numerical model and current single air node models, having seven nodes. The thesis describes how numerical and experimental data have been used to develop the structure and define the parameters of the simplified nodal model. Numerical calculations of the flow and temperature fields have been made with a coupled flow and radiant exchange CFD code. Numerically derived velocity dependent convective heat transfer coefficients are compared with experimental measurements made in a room ventilated by cross-flow means, and with empirical correlations cited by other studies. Bulk convection between the air nodes of the simplified nodal model has been derived from a numerical study of contaminant dispersal. The performance of the model is demonstrated by making comparisons with the predictions of a single air node model.
APA, Harvard, Vancouver, ISO, and other styles
45

Grau-Crespo, Ricardo. "Computer modelling studies of iron antimonate FeSbO₄." Thesis, Birkbeck (University of London), 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Jassim, A. R. "Performance modelling of fault tolerant computer networks." Thesis, University of Nottingham, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Calvert, M. "Computer modelling of comminution and classification plant." Thesis, University of Leeds, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gatcombe, Christopher Peter. "Computer modelling of high resolution ultrasonic transducers." Thesis, City University London, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Gillespie, Jennifer L. "Modelling and computer simulation of patient flow." Thesis, Ulster University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646847.

Full text
Abstract:
The population of the United Kingdom is increasingly ageing, and diseases like cancer and stroke are becoming more common in our society. This is having a detrimental effect on the performance of the National Health Service. Various schemes and services have been introduced to increase efficiency, and key performance indicators help to identify areas of best practice. By realistically modelling healthcare facilities with analytic and simulation models, based on queueing theory, we can provide detailed information to healthcare managers and clinicians. These models can help to identify issues and cost inefficiencies for early intervention. Analytic models are less data and computationally intensive, and provide results in a quicker time frame than simulation models. However, they tend to be mathematically complex, which means healthcare managers can find them difficult to understand and are more reluctant to implement the solutions. Simulations are more data and computationally intensive than analytic models, but they are much easier to explain to healthcare managers when they are built in a user-friendly environment. This means that managers tend to be more willing to introduce the results of the model into their department. Therefore, we use both analytic and simulation models in this work to utilise the benefits of both techniques. In this body of work a novel analytic cost model has been presented for a system which can be regarded as a network of M/M/∞ queues. The model considers the flow of patients through primary and secondary care, and is based on a mixture of Coxian phase-type models with multiple absorbing states. Costs are attached to each state of the model, allowing the average cost per patient in the system to be calculated. We also provide a model which assesses whether the implementation of a new intervention is cost-effective. The model calculates the maximum cost the intervention can incur before the benefits no longer outweigh the cost of administering it. These analytic models have been applied to stroke patients deemed eligible for thrombolysis in order to assess the cost-effectiveness of thrombolytic therapy. We also present a novel simulation model for stroke patients who are eligible for thrombolysis, in order to validate our analytic models. 'What-If' scenarios and Probabilistic Sensitivity Analysis have also been carried out to provide healthcare managers with more confidence in our models. An analytic model has been presented for a complex system of M/M/c queues in steady state. The model analyses the system to find bottlenecks and assesses whether the staff are being efficiently utilised. Two resource allocation models have then been defined: the first determines the minimum number of resources required within the department, and the second efficiently distributes the resources throughout the department. These resource allocation models have been applied to orthopaedic Integrated Clinical Assessment and Treatment Service (ICATS) data to reduce the current queues within the department. A novel simulation model has also been created for orthopaedic ICATS which includes extra variation and realistic features. This allows us to assess how robust and reliable our analytic models are, as the results are applied to our simulation model, which has different assumptions. The novel analytic models provide very similar results to the simulation models built for each healthcare environment. This implies that our analytic models are robust and reliable even when applied to a department which includes different assumptions. Therefore, our analytic models will provide reliable results when healthcare managers need to make decisions in a short time frame. Simulation models have been found to be a good validation technique for analytic models, as healthcare managers understand them better. Extra components can also be easily included within a simulation model, such as complex distributions to represent the inter-arrival and service rates, and realistic features such as shift patterns.
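The thesis's own resource allocation models are not given in the abstract, but the first of them (finding the minimum number of resources required) has a well-known M/M/c counterpart: choose the smallest number of servers that keeps the queue stable and the probability of waiting below a target. The Python sketch below shows this standard Erlang-C calculation; the clinic figures in the example are hypothetical.

from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving customer must wait in an M/M/c queue (Erlang C)."""
    a = arrival_rate / service_rate              # offered load in Erlangs
    rho = a / servers                            # server utilisation, must be < 1
    if rho >= 1:
        return 1.0
    summation = sum(a**k / factorial(k) for k in range(servers))
    top = a**servers / (factorial(servers) * (1 - rho))
    return top / (summation + top)

def minimum_servers(arrival_rate, service_rate, max_wait_probability):
    """Smallest number of servers keeping the queue stable and the probability
    of waiting at or below the target."""
    c = 1
    while (arrival_rate / (c * service_rate) >= 1
           or erlang_c(arrival_rate, service_rate, c) > max_wait_probability):
        c += 1
    return c

# Hypothetical clinic: 30 referrals/day, each clinician assesses 4/day,
# aim for no more than a 20% chance that a patient has to queue.
print(minimum_servers(30, 4, 0.20))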
APA, Harvard, Vancouver, ISO, and other styles
50

Poolman, Mark Graham. "Computer modelling applied to the Calvin Cycle." Thesis, Oxford Brookes University, 1999. http://radar.brookes.ac.uk/radar/items/23a3e616-7ec2-6ace-9ab0-9b266a9a61f8/1.

Full text
Abstract:
This thesis develops computer modelling techniques and their use in the investigation of biochemical systems, principally the photosynthetic Calvin cycle. A set of metabolic modelling software tools, "Scampi", constructed as part of this project is presented. A unique feature of Scampi is that it allows the user to make a particular model the subject of arbitrary algorithms. This provides much greater flexibility than is available with other metabolic modelling software, and is necessary for work on models of (or approaching) realistic complexity. A detailed model of the Calvin cycle is introduced. It differs from previously published models of this system in that all reactions are assigned explicit rate equations (no equilibrium assumptions are made), and it includes the degradation, as well as the synthesis, of starch. The model is later extended to include aspects of the thioredoxin system and the oxidative pentose phosphate pathway. Much of the observed behaviour is consistent with experimental observation. In particular, Metabolic Control Analysis of the model shows that control of assimilation flux is likely to be shared between two enzymes, rubisco and sedoheptulose bisphosphatase (SBPase), and can readily be transferred between them. This appears to offer an explanation of experimental evidence, obtained by genetic manipulation, that both of these enzymes can exert high control over assimilation. A further finding is that the output fluxes from the cycle (to starch and the cytosol) show markedly different patterns of control from assimilation, and from each other. A novel observation in the behaviour of the Calvin cycle model is that, under certain circumstances, particularly at low light levels, the model has two steady states and can be induced to switch between them. Although this exact behaviour has not been described experimentally, published results show characteristics suggesting the potential is there in vivo. An explanation of all the observed behaviour is proposed, based upon the topology of the model. If this is correct then it may be concluded that the qualitative behaviour observed in the model is to be expected in vivo, although the quantitative detail may vary considerably.
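Metabolic Control Analysis of the kind referred to above can be illustrated on a toy pathway (this is not the Scampi software or the thesis's Calvin cycle model): a flux control coefficient is the fractional change in steady-state flux per fractional change in an enzyme's activity, and for a linear two-step pathway the two coefficients should sum to one. The rate constants below are arbitrary illustrative values.

import math

def steady_state_flux(e1, e2, S=10.0, k1=1.0, k1r=0.5, k2=1.0):
    """Steady-state flux of a toy two-step pathway  S <-> X -> P  with
    v1 = e1*(k1*S - k1r*X) and v2 = e2*k2*X.  The intermediate X is found
    where v1 = v2 by bisection (v1 - v2 decreases monotonically in X)."""
    lo, hi = 0.0, k1 * S / k1r              # X lies between 0 and its equilibrium value
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if e1 * (k1 * S - k1r * mid) > e2 * k2 * mid:
            lo = mid
        else:
            hi = mid
    X = 0.5 * (lo + hi)
    return e2 * k2 * X

def flux_control_coefficient(enzyme, e1=1.0, e2=1.0, delta=1e-4):
    """Numerical flux control coefficient C^J_Ei = d(ln J) / d(ln Ei),
    estimated by a small fractional perturbation of one enzyme activity."""
    J0 = steady_state_flux(e1, e2)
    if enzyme == 1:
        J1 = steady_state_flux(e1 * (1 + delta), e2)
    else:
        J1 = steady_state_flux(e1, e2 * (1 + delta))
    return (math.log(J1) - math.log(J0)) / math.log(1 + delta)

# The two coefficients should sum to ~1 (the MCA flux summation theorem)
print(flux_control_coefficient(1), flux_control_coefficient(2))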
APA, Harvard, Vancouver, ISO, and other styles
