Dissertations / Theses on the topic 'Modelling'

Consult the top 50 dissertations / theses for your research on the topic 'Modelling.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations and theses on a wide variety of disciplines and organise your bibliography correctly.

1

Singh, Peterson Lila. "Modelling secondary organic aerosol formation: from chemical mechanistic modelling to empirical modelling." Physics, Faculty of Science, University of New South Wales, 2007. http://handle.unsw.edu.au/1959.4/40466.

Full text
Abstract:
The work presented in this thesis is primarily concerned with modelling the formation of secondary organic aerosols (SOAs). SOAs cannot easily be measured with direct analytical chemical methods; indirect methods, such as applying organic carbon to elemental carbon ratios and using computer models, have been employed to provide an estimate of SOA mass concentrations in ambient air. The five models presented in this work were either developed or assessed using environmental chamber data. Chamber experiments were undertaken using initial isoprene concentrations in the range of 22 ppb to 343 ppb, with reactive organic carbon (ROC) to NOx ratios in the range of 2.0 to about 18. Chamber experiments were also performed for the α-pinene / NOx system, with initial α-pinene concentrations ranging from 79 ppb to 225 ppb and ROC/NOx ratios varying from 5.5 to about 41. All of the experiments were performed without the addition of propene or seed aerosol. Background aerosol levels were very low for the experiments presented in the thesis, so homogeneous nucleation processes were considered to occur in the chamber in addition to absorption and oligomerisation processes. Initial nucleation events resulting from the photooxidation of isoprene could be detected once the aerosol diameter was greater than 12 nm. In the α-pinene system, new particles formed via homogeneous nucleation processes were detectable in the 100-200 nm diameter range. The models presented range in complexity from the near-explicit Master Chemical Mechanism to an empirical model whose key feature is its simplicity. The mechanistic model provides insight into the SOA formation pathways and the influence of varying the initial experimental conditions and the duration of photooxidation on the simulated SOA composition. The aim of the empirical model is to simulate the SOA mass concentration produced during a chamber experiment. The model is intentionally simple so that it can be applied to any hydrocarbon, and it has been applied successfully to isoprene and α-pinene chamber experiments. In this way, the empirical model is presented as an alternative approach to predicting the temporal variation in SOA mass concentrations. An analysis of the partitioning absorption models developed by Odum et al. (1996) and Hoffmann et al. (1997) has informed the development of the SOA module, which has been coupled to a 3D atmospheric model. Embodied within the SOA module is the gas/aerosol partitioning theory, which includes the model proposed initially by Pankow et al. (1994) and by Odum et al. (1996).
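
The absorptive gas/particle partitioning theory cited here is commonly summarised by the Odum two-product yield expression, Y = Mo * sum_i alpha_i*K_i / (1 + K_i*Mo). The sketch below shows that textbook relation only, with illustrative (not thesis-fitted) parameter values; it is not the thesis's own SOA module.

    def soa_yield(m_o, products):
        """Odum et al. (1996) two-product SOA yield.
        m_o: absorbing organic aerosol mass concentration (ug/m3);
        products: (alpha_i, K_i) pairs of mass stoichiometric yields
        and partitioning coefficients (m3/ug)."""
        return m_o * sum(a * k / (1.0 + k * m_o) for a, k in products)

    # Illustrative two-product parameters (not fitted to the thesis data):
    products = [(0.04, 0.17), (0.17, 0.004)]
    for m_o in (1.0, 10.0, 50.0):
        # Yield grows with absorbing mass, as absorptive partitioning predicts.
        print(m_o, round(soa_yield(m_o, products), 4))
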
2

Romanowska, Izabela Anna. "Modelling hominin dispersals with agent-based modelling." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/420372/.

Full text
Abstract:
The early dispersals of hominins have been a major focus of Palaeolithic research in the last 50 years. In this dissertation I have applied a formal modelling technique, simulation, to a selection of archaeological research topics concerning hominin dispersals in order to test the suitability of this tool for formal theory development and hypothesis testing. The aims of this research are therefore twofold. The archaeological aim is to improve our understanding of hominin dispersals in the Pleistocene, whilst the methodological contribution comes from employing a computational tool that is relatively unknown and underused in archaeology. An overview of the existing data pertinent to the topic of Pleistocene hominin dispersals demonstrated that the current methodology used for researching hominin dispersals, that is, data analysis paired with conceptual modelling, is unlikely to solve many of the existing research questions, highlighting the need to assess a wider range of scientific tools in order to progress. To that end, two case studies were developed using a specific simulation technique: agent-based modelling. The first case study was used to evaluate if, and under what conditions, the early ‘Out of Africa’ dispersal could lead to a specific demographic pattern reflected in a disparity between two regions with different stone tool traditions known as the Movius line. The model comprises a dynamic environmental reconstruction of Old World vegetation in the timeframe 2.5-0.25 Ma, coupled with standard biological models of population growth and dispersal. It is demonstrated that, under a wide sweep of both environmental and behavioural parameter values, the demographic consequence of dispersal is not a gradual attenuation of the population size away from the point of origin but a pattern of ecologically driven local variation in population density. The second case study looks at the relationships between climate change, migration and the evolution of behavioural plasticity or versatility among hominins. The agent-based model investigates the dynamics between individuals with different adaptations (including ‘versatilist’ individuals) within a non-homogeneous population. The results show that dispersal accelerates the evolution of versatilism in the population, therefore promoting a more flexible range of adaptations. In addition, a set of scenarios was tested in which a dispersal wave crosses an environmental barrier. The results do not support the common intuition that such barriers shape the composition of hominin populations. The methodology presented here opens a new route to understanding large-scale spatiotemporal patterns in the archaeological record, as well as to testing many of the previously proposed factors affecting early hominins’ lives. It has the potential to provide new insights into a number of ongoing debates, in particular the relationship between different processes involved in shaping the past, such as demographics and cultural complexity. This study also highlights the potential of simulation studies for testing complex conceptual models, and the importance of building reference frameworks based on known proxies in order to achieve more rigorous model development in Palaeolithic archaeology and beyond.
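
For readers unfamiliar with the technique, the first case study's core mechanism, population growth coupled to dispersal over a heterogeneous environment, can be sketched in a few lines. The grid, rates and carrying capacities below are invented for illustration and are not the thesis model.

    import numpy as np

    rng = np.random.default_rng(0)
    K = rng.uniform(0.2, 1.0, (40, 40))   # patchy carrying capacity ('vegetation')
    N = np.zeros_like(K)
    N[20, 0] = 0.1                        # seed population at a point of origin
    r, m = 0.1, 0.05                      # growth rate and dispersal fraction

    for _ in range(3000):
        N += r * N * (1.0 - N / K)        # local logistic growth
        out = m * N                       # migrants leave each cell...
        N += 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)     # ...and split
                     + np.roll(out, 1, 1) + np.roll(out, -1, 1)) - out
        # (np.roll gives toroidal edges, for brevity only)

    # Equilibrium density tracks local ecology rather than distance from origin:
    print(round(float(np.corrcoef(N.ravel(), K.ravel())[0, 1]), 3))
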
3

Griesser, Martin Thomas. "The modelling of fabric behaviour for automated modelling." Thesis, University of Hull, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262413.

Full text
4

Tilgner, Hagen 1980. "Modelling splicing." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/85056.

Full text
Abstract:
Splicing of RNA molecules is the process by which intervening sequences (“introns”) in the primary transcript are excised and the remaining sequences (“exons”) are concatenated to form the mature RNA. Recent evidence shows that almost all spliced genes are affected by alternative splicing. Here, we define the minimal length of RNA oligomers that can sensibly be called splicing factor binding sites. Then, we explore the capacity of these oligomers to predict complete exon-intron structures. We highlight those oligomers that are most informative for this and show that accuracy equal to previous approaches can be achieved with fewer RNA oligomers. The observation that this approach falls short of accurately predicting the entire exon-intron structure led us to investigate determinants linked to co-transcriptional splicing. We show that nucleosomes are preferentially positioned on exons and hypothesize that they play a role in splicing decisions. We then introduce the “completed splicing index” and conclude that co-transcriptional splicing is very widespread in humans. Furthermore, co-transcriptional splicing exhibits links to chromatin organization. In the light of these results, we go on to monitor chromatin changes on differentially included exons in pairwise tissue comparisons. We find a variety of histone marks, but not all, showing significantly different behavior on up- and downregulated exons. The most prominent marks are H3K9ac and two lysine 4 methylation states.
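
As one simplified reading of a "completed splicing index" (the thesis's exact definition may differ), the fraction of reads over an intron that already show completed splicing can be computed as follows; the read counts are invented for illustration.

    def completed_splicing_index(spliced, unspliced_5ss, unspliced_3ss):
        """spliced: exon-exon junction (split) reads over the intron;
        unspliced_*: reads spanning the exon-intron boundaries, i.e.
        splicing not yet completed. Returns None without coverage."""
        total = spliced + unspliced_5ss + unspliced_3ss
        return spliced / total if total else None

    # 80 spliced vs 20 intron-boundary reads: splicing is largely completed
    # while the polymerase is still transcribing downstream.
    print(completed_splicing_index(80, 12, 8))
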
5

Spackman, Roger A. "Electrochemical modelling." Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316899.

Full text
6

Mackie, Jo. "Situation modelling." Thesis, Lancaster University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527167.

Full text
7

Cox, Sander. "Dynamical modelling." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262477.

Full text
8

Guruprasad, Kunchur. "Protein modelling." Thesis, Birkbeck (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283204.

Full text
9

Vaidya, A. M. "Ultrafiltration modelling." Thesis, University of Manchester, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.237688.

Full text
10

Diamantopoulos, Panayiotis. "Biomedical modelling." Thesis, University of Sussex, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391860.

Full text
11

Finbow, Gerard Mark. "Modelling nanostructures." Thesis, University of Cambridge, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624252.

Full text
12

Ammari-Allahyari, Mojtaba. "Modelling recursion." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/3602/.

Full text
Abstract:
The purpose of my research is to examine and explore the ways that undergraduate students understand the concept of recursion. In order to do this, I have designed computer-based software which provides students with a virtual and interactive environment where they can explore the concept of recursion, and demonstrate and develop their knowledge of recursion through active engagement. I designed this computer-based software environment with the aim of investigating how students think about recursion. My approach is to design digital tools that facilitate students' understanding of recursion and expose that thinking. My research investigates students' understanding of the hidden layers and inherent complexity of recursion, including how they apply it within relevant contexts. The software design embedded the idea of functional abstraction around two basic principles: 'functioning' and 'functionality'. The functionality principle focuses on what recursion achieves, and the functioning dimension concerns how recursion is operationalised. I wanted to answer the following crucial question: How does the recursive thinking of university students evolve through using carefully designed digital tools? In the process of exploring this main question, other questions emerged:
1. Do students understand the difference between recursion and iteration?
2. How are tail and embedded recursion understood by students?
3. To what extent does prior knowledge of the concept of iteration influence students' understanding of tail and embedded recursion?
4. Why is it important to have a clear understanding of the control passing mechanisms in order to understand recursion?
5. What is the role of functional abstraction in both the design of computer-based tools and the students' understanding of recursion?
6. How are students' mental models of recursion shaped by their engagement with computer-based tools?
From a functional abstraction point of view, almost all previous research into the concept of recursion has focused on the functionality dimension. Typically, it has focused on procedures for the calculation of the factorial of a natural number, and students were tested to see if they were able to work out the values of a function recursively (Wiedenbeck, 1988; Anzai and Uesato, 1982) or to recognise a recursive structure (Sooriamurthi, 2001; Kurland and Pea, 1985). I also developed the Animative Visualisation in the Domain of Abstraction (AVDA), which combines the functioning and functionality principles for the concept of recursion. In the AVDA environment, students are given the opportunity to explore the hidden layers and the complicated behaviour of the control passing mechanisms of recursion. In addition, most textbooks in mathematics and computer science fail to explain how to use recursion to solve a problem. Although it is also true that textbooks do not typically explain how to use iteration to solve problems, students have prior experience of iteration that they are able to draw on when solving iterative problems (Pirolli et al., 1988). My approach is inspired by how recursion can be found in everyday life and in real-world phenomena, such as fractal-shaped objects like trees and spirals. This research adheres strictly to a Design Based Research (DBR) methodology, which is founded on the cycle of designing, testing (observing the students' experiments with the design), analysing, and modifying (Barab and Squire, 2004; Cobb and diSessa, 2003).
My study was implemented throughout three iterations. The results showed that in the AVDA (Animative Visualisation in the Domain of Abstraction) environment, students' thinking about the concept of recursion changed significantly. In the AVDA environment they were able to see and experience the complicated control passing mechanism of tail and embedded recursion, referred to as delegatory control passing. This complicated control passing mechanism is a kind of generalisation of flow in iterative procedures, which is discussed later in the thesis. My results show that, to model a spiral, students prefer to use iterative techniques rather than tail recursion. The AVDA environment helped students to appreciate the delegatory control passing of tail recursive procedures. However, they still demonstrated difficulties in understanding embedded recursive procedures when modelling binary and ternary trees, particularly regarding the transition of flow between recursive calls. Based on the results of my research, I have devised a model of the evolution of students' mental model of recursion, which I have called the quasi-pyramid model. This model was derived from applying functional abstraction, including both the functionality and functioning principles. Pedagogic implications are discussed. For example, the teaching of recursion might adopt 'animative' visualisation, which is vitally important for students' understanding of the latent layers of recursion.
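
The tail/embedded distinction at the heart of this study is easy to state in code. The following generic illustration is not taken from the AVDA software: it uses a spiral-like accumulation for tail recursion and a binary tree for embedded recursion.

    def spiral_length(n, step, acc=0.0):
        # Tail recursion: the recursive call is the final action, so control
        # is delegated forward much as in an iterative loop.
        if n == 0:
            return acc
        return spiral_length(n - 1, step * 0.9, acc + step)

    def tree_nodes(depth):
        # Embedded recursion: work remains after the calls return, so flow
        # must pass back up through every pending call.
        if depth == 0:
            return 1
        return 1 + tree_nodes(depth - 1) + tree_nodes(depth - 1)

    print(spiral_length(10, 1.0), tree_nodes(4))
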
13

Roseburgh, David Stewart. "Epioptical modelling." Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/14327.

Full text
Abstract:
We have modelled epioptic systems using semi-empirical methods. We apply one such theoretical technique, Berreman’s 4x4 matrix formalism, to simulate and compare the results of various epioptic spectroscopies when applied to model systems. We place particular emphasis on 45° reflectivity spectroscopy (45DR), and demonstrate its potential for the study of systems with surface regions of mixed isotropic and anisotropic character. We also demonstrate the superior surface sensitivity of 45DR when compared to spectroscopic ellipsometry (SE), and show that 45DR is inherently more stable with regard to inaccuracies in bulk data. However, we also reveal that practical considerations lessen the potential impact of this technique. We continue the application of this matrix formalism to simulate the adsorption of molecular species onto model surfaces. We show that reflection anisotropy spectroscopy (RAS) is sensitive to both molecular orientation and substrate properties, and that a simple polarisable dipole model, omitting local field contributions, enables us to reproduce the experimental spectra of Goletti et al. and Weightman et al., who studied porphyrin and DNA base molecule adsorption respectively. We also assess the effect of optical activity on RAS spectra. Finally, we apply the derivative model, in which surface optical spectra are related to perturbations of bulk electronic transitions, to noble metal (110) RAS spectra. We demonstrate that this model describes, consistently and accurately, the spectra of these intrinsically anisotropic systems. Furthermore, we show that the observed transition broadening in ion-bombarded systems can be explained in terms of quantum mechanical uncertainty derived from roughness-induced localisation of near-surface electrons.
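
Berreman's 4x4 formalism generalises to anisotropic layers the standard 2x2 characteristic-matrix method for isotropic films. The minimal isotropic sketch below (normal incidence, non-absorbing media, illustrative indices) shows the matrix idea only and is far simpler than the anisotropic machinery used in the thesis.

    import numpy as np

    def reflectance(n_film, d, n_sub, wavelength, n_inc=1.0):
        """Normal-incidence reflectance of one isotropic film on a substrate
        via the 2x2 characteristic (Abeles) matrix."""
        delta = 2.0 * np.pi * n_film * d / wavelength     # film phase thickness
        m = np.array([[np.cos(delta), 1j * np.sin(delta) / n_film],
                      [1j * n_film * np.sin(delta), np.cos(delta)]])
        b, c = m @ np.array([1.0, n_sub])                 # surface fields
        r = (n_inc * b - c) / (n_inc * b + c)             # amplitude reflectance
        return float(abs(r) ** 2)

    # Quarter-wave MgF2-like coating on glass at 550 nm: reflectance drops
    # from ~4.3% (bare glass) to ~1.3%.
    print(reflectance(1.38, 550.0 / (4 * 1.38), 1.52, 550.0))
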
14

Cole, Ian R. "Modelling CPV." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/18050.

Full text
Abstract:
A methodology for the simulation of CPV systems is presented in four distinct sections: input, optics, uncertainty and electrical output. In the input section, existing methods of describing the solar irradiation incident at the primary optical element of a CPV system are discussed, the inadequacies of the existing methods are explored, and conditions of validity for their use are drawn. An improved and spectrally extended model for a variable, spatially resolved solar image is arrived at. The model is used to analyse losses at the primary concentration device under varying solar profiles and air masses. A contextual analysis of an example Seattle-based CPV system operating with constant solar tracking errors of 0.3-0.4° shows a corresponding loss in insolation available to the optical system of 5-20%, respectively. In the optics section, an optical ray trace model is developed specifically for this work. The optical ray trace model is capable of spectrally resolved ray tracing of all the insolation input models discussed above. Plano-convex and Fresnel lenses are designed, investigated and compared using each of the insolation models described in the input section. Common CPV component material samples for the plano-convex and Fresnel lenses are analysed for their spectrally resolved optical properties. The computational expense of high-resolution spatial and spectral modelling is addressed by means of a spectrally weighted banding method. The optical-property spectral weighting method can be applied to any arbitrary spectral band; the bands used herein correspond to the active ranges of a typical triple-junction solar cell, and each band shows a different spectral dependency. Banded beam irradiation proportions are shown to change by as much as 10% in absolute terms within the air mass range of 1 to 3. Significant variations in spectrally banded illumination profiles are found with the extended light source insolation model. These banded variations are mostly unaccounted for when approximated insolation models are used, further compounding the argument for extended light source Sun models in CPV system simulations. In the uncertainty section, the limitations of the manufacturing process are explored. Manufacturing tolerance errors from manufacturer datasheets are presented. These production uncertainties are used in the design of an erroneous plano-convex lens, which is then analysed with the optical model presented in the optics section and compared to the ideal design specification. A 15% variation in maximum intensity value is found, alongside a linear shift in the focal crossover point of approximately 0.2 mm, although the optical efficiency of the lens remains the same. Framing manufacture errors are investigated for a square Fresnel lens system, resulting in a linear shift of the focal centre of approximately 0.85 mm. A process for the calculation of wind loading force on a CPV array is also presented. The process uses real 2-second-resolution wind data and highlights the chaotic nature of the loading force. A maximum force of 1.4 kN was found on an example day for a 3 m by 3 m by 0.1 m cuboid (i.e. CPV array), corresponding to a wind speed of approximately 13 m/s, which is well within the typical operating range of a CPV tracking system. In the electrical output section, a spatially resolved solar cell model is identified and used to investigate solar cell performance under the inhomogeneous cell illumination profiles produced in the uncertainty section.
Significant differences in the maximum power point of the cell I-V curves are found for the ideal and erroneous system illumination profiles: a variation of approximately 15% in the plano-convex lens example, with a relative difference of 4% attributable to illumination profile distortion, and a 6% variation in the module framing component example. These results further highlight the need to consider production uncertainties in CPV system simulation.
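
The spectrally weighted banding idea, collapsing a wavelength-resolved optical property into one effective value per junction-matched band, can be sketched generically. The spectrum, property and band edges below are stand-ins, not the thesis data.

    import numpy as np

    def banded_property(wl, irradiance, prop, bands):
        """One irradiance-weighted effective value of 'prop' per band."""
        out = []
        for lo, hi in bands:
            sel = (wl >= lo) & (wl < hi)
            out.append(float(np.average(prop[sel], weights=irradiance[sel])))
        return out

    wl = np.linspace(300.0, 1800.0, 1501)                 # nm
    irradiance = np.exp(-((wl - 800.0) / 500.0) ** 2)     # stand-in spectrum shape
    transmittance = 0.95 - 1e-5 * (wl - 300.0)            # stand-in lens property
    bands = [(300, 660), (660, 890), (890, 1800)]         # ~triple-junction ranges
    print(banded_property(wl, irradiance, transmittance, bands))
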
15

Hebeler, Felix. "Modelling topographic uncertainty: impacts on large scale environmental modelling." Zürich, 2008. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000253809.

Full text
16

Headey, Jonathan Mark. "Modelling of river corridors: modelling urban particulate transport processes." Thesis, University of Birmingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.289714.

Full text
17

Wang, Wang. "Empirical Investigations of Conceptual Modelling and the Modelling Process." Thesis, Lancaster University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518147.

Full text
18

Kjellsdotter, Frida. "MBS-modelling of a heavy truck: Modelling and model validation." Thesis, KTH, Fordonsdynamik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-32076.

Full text
Abstract:
As a result of accelerating demands for faster development within the heavy vehicle industry, computer-aided simulation has become an increasingly important tool in the development process. Simulations can offer faster evaluation of the loads acting on the vehicle and more cost-effective fatigue life predictions than physical testing, since physical prototypes are not needed for load measurements or fatigue tests. However, accurate fatigue life prediction without physical verification remains a difficult task with many uncertainties; even so, simulations are an important part of modern product development. The objective of this work is to investigate the accuracy of a virtual model of a physical truck. The thesis focuses only on load simulation accuracy, leaving material uncertainties aside. The vehicle model is built using Adams/Car with two different complexities of the frame model; part of the work is to investigate how the frame model complexity affects the accuracy of the results. The virtual truck is simulated in a virtual test rig that excites the model with displacements at the wheel hubs to represent the forces induced when the truck is driven on the test track. The process of making a drive signal for the test rig is iterative. Simulations are also performed with the virtual model equipped with tires and driven on a virtual 3D road. Model performance is evaluated using the TDDI (Time Domain Discrepancy Index) and pseudo-damage. The TDDI evaluates the results in the time domain, while the pseudo-damage considers the potential fatigue damage in the time series. A TDDI value below 0.3, and a pseudo-damage value between 0.5 and 2, are considered good; this accuracy is approximately the same as is obtained when different test engineers drive the same test schedule with the same vehicle. When iterating using the cab and the front and rear ends of the frame as response feedback, the model with the simple frame shows good values of TDDI and pseudo-damage for the front end of the frame and the cab, though the axles and the middle of the frame show poor results. The rear end of the frame does not reach the model performance targets, with too low a pseudo-damage value even though the TDDI value is good. The vehicle model with the complex frame shows similar results when using the same response feedback, although the frame model is not optimized. The full vehicle model driving on a 3D road does not, at present, deliver accurate results. However, the relative damping of the beams representing the leaf springs has turned out to affect the results strongly, so the leaf spring model needs to be optimized. The complex frame model does not show results good enough to justify the extra modelling time. The accuracy of the full-vehicle model can be considerably improved by optimizing the models of the wheel suspension and the complex frame.
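
The company's exact TDDI and pseudo-damage definitions are not given in the abstract. As a rough stand-in only, a normalised time-domain error and a crude turning-point damage surrogate (real pseudo-damage uses rainflow cycle counting) might look like this:

    import numpy as np

    def tddi_like(measured, simulated):
        """Normalised RMS discrepancy between signals (a TDDI stand-in)."""
        return float(np.sqrt(np.mean((simulated - measured) ** 2)
                             / np.mean(measured ** 2)))

    def crude_damage(signal, exponent=5.0):
        """Sum of |range|^b over successive turning points: a crude
        surrogate for rainflow-based pseudo-damage."""
        keep = np.r_[True, np.diff(np.sign(np.diff(signal))) != 0, True]
        return float(np.sum(np.abs(np.diff(signal[keep])) ** exponent))

    t = np.linspace(0.0, 10.0, 2001)
    measured = np.sin(2 * np.pi * t) + 0.2 * np.sin(9 * np.pi * t)
    simulated = 0.95 * measured            # a slightly 'soft' model
    print(tddi_like(measured, simulated),                    # ~0.05, below 0.3
          crude_damage(simulated) / crude_damage(measured))  # ~0.77, within 0.5-2
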
19

Strathern, M. "Modelling Organisational Evolution and Change - a Complex Systems Modelling Perspective." Thesis, Cranfield University, 2008. http://hdl.handle.net/1826/3527.

Full text
Abstract:
The cumulative output of these papers emphasises that modelling organisational evolution and change from a complex systems perspective makes a significant contribution to organisational studies and brings new insight and understanding both to theory and practice. It is also true that the studies and modelling presented in these papers have pushed forward the boundaries of complex systems science, again both in theory and practice. The papers have produced new findings and understandings of the processes, drivers and outcomes of the evolution of social systems and organisations through the development of new evolutionary models and frameworks that contribute both to the organisational and complexity sciences. Through a number of innovations based in complexity science, they have addressed questions in organisational science concerning the importance of knowledge and learning, together with questions about the evolution and survival of organisations and industries. These innovations have in turn played back into and developed complexity science.
20

Lu, Zhengxin. "Modelling geotechnical uncertainty using numerical modelling techniques." Thesis, University of Nottingham, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446386.

Full text
21

Meng, Xuesong. "Modelling multi-scale problems in the transmission line modelling method." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/14461/.

Full text
Abstract:
Modern electromagnetic problems are becoming increasingly complex, and their simulation must take into account geometrical features that are both large and small compared to the wavelength of interest. These multi-scale problems lead to a heavy computational burden in a discretised simulation approach, since the small features require a fine mesh, resulting in long run times and large memory requirements. To overcome such problems, this thesis presents an efficient and versatile method for embedding small features into an otherwise coarse mesh. The embedded model eliminates the need to discretise the small features and allows a relatively large mesh size to be used, thus saving computational cost. The subject of the thesis is embedding a thin film as a small feature into the numerical Transmission Line Modelling (TLM) method, although any small feature with a known analytical response can be implemented in practice. In the embedded model, the thin film is treated as a section of transmission line, whose admittance matrix is used to describe the frequency response of the thin film. The admittance matrix is manipulated by expanding the constituent cotangent and cosecant functions analytically, and then transforming them from the frequency domain to the time domain using the inverse Z transform and general digital filter theory. In this way the frequency responses of the thin film are embedded into the TLM algorithm. The embedded thin film model can be applied to both single and multiple thin film layers. The embedded thin film model has been implemented in the one-dimensional (1D) and two-dimensional (2D) TLM method. In the 1D TLM method, the embedded thin film model is used to investigate the reflection and transmission properties of lossy, anisotropic and lossless thin films, e.g. carbon fibre composite (CFC) panels, titanium panels, antireflection (AR) coatings and fibre Bragg gratings (FBGs). The shielding performance of CFC panels is also discussed. In the 2D TLM method, the embedded thin film model is extended to handle arbitrary excitations and curved thin films. The electromagnetic behaviour of infinitely long CFC panels under oblique incidence, and of a CFC panel of finite length with a point source excitation, is studied using the embedded thin film model. The resonant effects of CFC circular and elliptical resonators and the shielding performance of a CFC airfoil with the NACA2415 profile are investigated using the embedded curved thin film model. In addition, the effects of small gaps in the airfoil structure on the shielding performance are reported. All the examples discussed in the thesis validate the accuracy, stability, convergence and efficiency of the embedded thin film model developed. At the same time, the embedded thin film model has been shown to offer significant savings in computational overhead.
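
The starting point, the frequency-domain admittance matrix of a lossless transmission-line section with its cotangent and cosecant terms, is standard two-port theory. The sketch below uses illustrative values; the thesis's actual contribution, expanding these terms into time-domain digital filters via the inverse Z transform, is not reproduced here.

    import numpy as np

    def film_admittance(f, d, eps_r, z0=376.73):
        """Two-port Y-matrix of a lossless line section of length d (m)
        representing a thin layer of relative permittivity eps_r at
        frequency f (Hz), embedded in free space (impedance z0)."""
        c0 = 299792458.0
        beta_d = 2 * np.pi * f * np.sqrt(eps_r) * d / c0  # electrical length
        y0 = np.sqrt(eps_r) / z0                          # line admittance
        cot, csc = 1.0 / np.tan(beta_d), 1.0 / np.sin(beta_d)
        return y0 * np.array([[-1j * cot, 1j * csc],
                              [1j * csc, -1j * cot]])

    print(film_admittance(1e9, 1e-3, 4.0))  # 1 mm, eps_r = 4 layer at 1 GHz
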
22

Gebrehiwet, Yemane Fisseha. "Modelling agricultural input expenditure in a multi-market modelling framework." Thesis, University of Pretoria, 2010. http://hdl.handle.net/2263/25139.

Full text
Abstract:
Agricultural input expenditures have not been widely incorporated in most partial equilibrium models. Investigating the effect of economic policies and other exogenous factors on the agricultural sector will therefore produce only a partial analysis, since the simultaneous impact of these factors on input expenditures is excluded. This study extends the existing partial equilibrium multi-market model of the South African agricultural sector (the BFAP model) by incorporating agricultural input expenditure. The analysis of the impact of economic policies on the agricultural sector, previously limited to gross income (production, area planted and prices), is thus extended to assess their effects on input expenditures, gross value added and the net farming income of the sector. In addition, the analysis is further extended to evaluate the financial and economic position of the agricultural sector by investigating the implications of the policies for the sector's asset and debt values. A comparison of shocks from a crude oil and world fertiliser price rise shows that, because the recursive effects between the output and input sides of the sector are included and input costs are endogenised, the effect of a shock on gross value added and net farming income converges slowly and cyclically in the recursively linked model, whereas in the unlinked model the effect halts abruptly after a single year. The recursively linked integrated model thus replicates the dynamics experienced by the agricultural sector better than the recursively unlinked integrated model. In addition, the endogenisation of domestic input costs in the integrated model allows a comprehensive analysis of the effect of macroeconomic variables on the agricultural sector by considering their impact on both outputs and inputs. Using the recursively linked model, a fifty percent depreciation of the exchange rate is assessed. The result shows that the depreciation yields a net benefit for the sector, as the gain in agricultural income outweighs the rise in expenditure. Excluding the simultaneous impact on input expenditure would have overestimated the benefit by looking only at the effect on gross income. The integrated model was also used to project a baseline for the South African agricultural sector's main aggregate variables for the medium term (2010-2015) under status quo policy assumptions and forecast values of exogenous variables. The baseline projections of gross income, intermediate input expenditure and gross value added show a modest average annual growth rate during the baseline period. Net farming income, however, shows relatively lower growth, because agricultural gross income rises only modestly compared with total input expenditure. Based on the projected values of the main aggregate variables, several financial and economic performance indicators for the agricultural sector are also projected. In general, the economic performance indicators of the sector, measured by the net return on the sector's investment and equity, show good performance when compared with the average cost of borrowing during the baseline period.
This study shows that integrating input expenditure in a multi-market output model, by recursively linking both sides and endogenising domestic input costs, improves on the standard partial equilibrium result: it generates projections for several key aggregate variables, provides the net effect of economic policies on the agricultural sector, and replicates the sector's dynamics better than models that have few or no input components or that assess the effects separately and ignore the recursive linkage. This study thus provides a powerful modelling tool for policy makers to comprehensively investigate the net effects of economic policies on the agricultural sector and to answer several ‘what if’ questions.
Thesis (DCom)--University of Pretoria, 2011.
23

Deshmukh, Pushkaraj M. "Modelling error estimation and adaptive modelling of functionally graded materials." Cincinnati, Ohio: University of Cincinnati, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=ucin1096036755.

Full text
24

Nilsson, Cecilia. "Modelling of Dynamically Loaded Shotcrete." Thesis, KTH, Betongbyggnad, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-36923.

Full text
25

Brignoni, Jimmy. "Modelling transfer processes." [S.l.]: [s.n.], 2009. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Full text
26

Lamberti, Giuseppe. "Modelling with heterogeneity." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/309295.

Full text
Abstract:
When collecting survey data for a specific study, it is usual to have some background information, for example in the form of socio-demographic variables. In our context, these variables may be useful in identifying potential sources of heterogeneity. Resolving the heterogeneity may mean performing distinct analyses, based on the main variables, for distinct and homogeneous segments of the data, defined in terms of the segmentation variables. In 2009 Gastón Sánchez proposed the PATHMOX algorithm with the aim of automatically detecting heterogeneous segments within the PLS-PM methodology. This technique, based on recursive partitioning, produces a segmentation tree with a distinct path model in each node. At each node, PATHMOX searches among all splits based on the segmentation variables and chooses the one resulting in the maximal difference between the PLS-PM models in the children nodes. Starting from the work of Sánchez, the purpose of this thesis is to extend PATHMOX on the following points:
1. Extend PATHMOX to detect which constructs differentiate segments. PATHMOX uses a global F test (sketched in the code after this list) to identify the best split into heterogeneous segments. Following the same approach, it is possible to extend the testing to find which endogenous constructs, and which relationships between constructs, are responsible for the difference between segments.
2. Extend PATHMOX to deal with the factor invariance problem. Originally, PATHMOX adapted the estimation of constructs to each detected segment: once a split is performed, the PLS-PM model is recalculated in every child node. This leads to the problem of invariance: if the estimation of the latent variables is recalculated in each terminal node of the tree, we cannot be sure of comparing the distinct behaviour of two individuals who belong to two different terminal nodes. To solve this problem we propose an invariance test based on the chi-squared distribution, whose goal is to test whether the measurement models of the terminal nodes can be considered equal.
3. Extend PATHMOX to overcome the parametric hypothesis of the F test. One criticism of PATHMOX, applied in the context of partial least squares path modelling, is that it utilizes a parametric test, based on the hypothesis that the residuals have a normal distribution, to compare two structural models. PLS-PM is generally used to model data from survey analysis, which are characterised by asymmetric, skewed distributions; moreover, the PLS-PM methodology itself makes no assumptions about the distribution of the data. Hence the parametric F test used in PATHMOX may represent a limitation of the methodology. To overcome this limit, we extend the test to the context of LAD robust regression.
4. Generalise the PATHMOX algorithm to any type of modelling methodology. PATHMOX was proposed to analyse heterogeneity in the context of partial least squares path modelling; however, it can be applied to many other kinds of methodology given an appropriate split criterion. To generalise PATHMOX we consider distinct scenarios: regression analysis (OLS, LAD and GLM regression) and principal component analysis.
5. Implement the methodology as a dedicated library in the R software.
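
The global F test in point 1 is, in spirit, a Chow-type comparison of a pooled model against two segment models. The minimal OLS sketch below shows that comparison only; PATHMOX applies it to the path coefficients of PLS-PM models, not to a single regression, and the data here are simulated for illustration.

    import numpy as np
    from scipy import stats

    def f_global(x, y, mask):
        """Chow-type test: does splitting the sample by 'mask' fit
        significantly better than one pooled regression?"""
        def rss(xs, ys):
            beta = np.linalg.lstsq(xs, ys, rcond=None)[0]
            return np.sum((ys - xs @ beta) ** 2)
        n, k = x.shape
        rss_split = rss(x[mask], y[mask]) + rss(x[~mask], y[~mask])
        f = ((rss(x, y) - rss_split) / k) / (rss_split / (n - 2 * k))
        return f, stats.f.sf(f, k, n - 2 * k)

    rng = np.random.default_rng(1)
    x = np.column_stack([np.ones(200), rng.normal(size=200)])
    seg = np.arange(200) < 100                  # a binary segmentation variable
    y = np.where(seg, 1 + 2 * x[:, 1], 1 - x[:, 1]) + rng.normal(0, 0.3, 200)
    print(f_global(x, y, seg))                  # large F, tiny p: split the node
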
27

Larsson, Johan. "Modelling a memory." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1028.

Full text
Abstract:

The purpose of this master's thesis is to describe the work behind the building of a scalable model of a memory designed at Zarlink Semiconductor AB. This model is to be part of a memory generator and will be used to extract timing parameters for all available memory sizes instead of simulating the layout, as layout simulation takes too much time.

The report starts with the basic theory of the passive circuit elements that have to be considered in a model, and the effect these elements have on the functionality and robustness of the design. There is also a short chapter on how to lay out for optimisation towards high speed, minimal area or low power consumption.

After that, the work behind three different models of a memory is described: a skeletal model, a mixed-mode model and a digital model. The skeletal model was the only one that could be finished; this model is evaluated and compared to a simulation made on the original layout of the memory.

Included in the description of the mixed-mode and digital models is a description of how to characterise cells and how to include power information in digital simulation.

At the end there is a short chapter on the future of model making.

28

Björklund, Oscar. "Modelling of failure." Thesis, Linköping University, Department of Management and Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11466.

Full text
Abstract:

This report is a review of some failure models used today to determine failure in thin sheets of high-strength steel. Focus has been placed on phenomenological models, and only some simple simulations have been carried out. The phenomenological models summarised here fall into four categories: stress based, strain based, combined stress and strain based, and damage models. However, simulations have only been performed for some of the models.

29

Meier, Stefanie. "Identifying Modelling Tasks." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80629.

Full text
Abstract:
The Comenius Network Project “Developing Quality in Mathematics Education II”, funded by the European Commission, consists of partners from schools, universities and teacher training centres in eleven European countries. One advantage of the project is the mutual exchange between teachers, teacher trainers and researchers in developing learning material. To support the teachers most effectively, the researchers asked the teachers what they wanted the researchers to do. The answer was itself a question: How can we identify (good) modelling tasks? A discussion ensued in the research group of this project, which resulted in a list of descriptors characterising modelling tasks. This paper focuses on the theoretical background of mathematical modelling and thereby substantiates the list of descriptors for modelling tasks.
30

Ozkan, Erhun. "Stochastic Inventory Modelling." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12612097/index.pdf.

Full text
Abstract:
In this master's thesis, new inventory control mechanisms are developed for the repairables at NedTrain. There is a multi-item, multi-echelon system with a continuous review, one-for-one replenishment policy, and there are different demand supply options in each control mechanism. There is an aggregate mean waiting time constraint in each local warehouse, and the objective is to minimize the total system cost. The base stock levels in each warehouse are determined with an approximation method, and the different demand supply options are then compared with each other.
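
The building block behind such continuous-review, one-for-one policies is the base-stock evaluation with Poisson lead-time demand (the METRIC-style logic). The parameter values below are illustrative, not NedTrain's.

    from math import exp

    def expected_backorders(s, mu):
        """Expected backorders under an (S-1, S) base-stock policy with
        Poisson lead-time demand of mean mu (series truncated)."""
        p, ebo = exp(-mu), 0.0              # p = P(demand = 0)
        for k in range(1, s + 100):
            p *= mu / k                     # Poisson recurrence: P(demand = k)
            if k > s:
                ebo += (k - s) * p
        return ebo

    lam, lead = 0.5, 4.0                    # demand rate (/week), lead time (weeks)
    for s in range(6):
        ebo = expected_backorders(s, lam * lead)
        # Little's law turns backorders into the mean waiting time that
        # the aggregate constraint bounds.
        print(s, round(ebo, 4), round(ebo / lam, 4))
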
31

Brännström, Åke. "Modelling animal populations." Doctoral thesis, Umeå universitet, Matematik och matematisk statistik, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-205.

Full text
Abstract:
This thesis consists of four papers: three about modelling animal populations and one about an area integral estimate for solutions of partial differential equations on non-smooth domains. The papers are:
I. Å. Brännström, Single species population models from first principles.
II. Å. Brännström and D. J. T. Sumpter, Stochastic analogues of deterministic single species population models.
III. Å. Brännström and D. J. T. Sumpter, Coupled map lattice approximations for spatially explicit individual-based models of ecology.
IV. Å. Brännström, An area integral estimate for higher order parabolic equations.
In the first paper we derive deterministic discrete single-species population models with first-order feedback, such as the Hassell and Beverton-Holt models, from first principles. The derivations build on the site-based method of Sumpter & Broomhead (2001) and Johansson & Sumpter (2003). A three-parameter generalisation of the Beverton-Holt model is also derived, and one of the parameters is shown to correspond directly to the underlying distribution of individuals. The second paper is about constructing stochastic population models that incorporate a given deterministic skeleton. Using the Ricker model as an example, we construct several stochastic analogues and fit them to data using the method of maximum likelihood. The results show that an accurate stochastic population model is most important when the dynamics are periodic or chaotic, and that the two most common ways of constructing stochastic analogues, using additive normally distributed noise or multiplicative lognormally distributed noise, give models that fit the data well. The latter is also motivated on theoretical grounds. In the third paper we approximate a spatially explicit individual-based model with a stochastic coupled map lattice. The approximation effectively disentangles the deterministic and stochastic components of the model. Based on this approximation we argue that the stable population dynamics seen for short dispersal ranges is a consequence of increased stochasticity from local interactions and dispersal. Finally, the fourth paper contains a proof that, for solutions of higher order real homogeneous constant coefficient parabolic operators on Lipschitz cylinders, the area integral dominates the maximal function in the L2-norm.
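
The stochastic analogues compared in paper II can be illustrated with the Ricker skeleton plus multiplicative lognormal noise; the parameter values below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    def ricker(n, r, k=100.0):
        return n * np.exp(r * (1.0 - n / k))

    def ricker_lognormal(n, r, sigma=0.1, k=100.0):
        # Multiplicative lognormally distributed noise around the
        # deterministic skeleton -- one of the analogues compared.
        return ricker(n, r, k) * rng.lognormal(0.0, sigma)

    n_det = n_sto = 50.0
    for _ in range(20):
        n_det, n_sto = ricker(n_det, 3.0), ricker_lognormal(n_sto, 3.0)
    # In the chaotic regime (r = 3.0) the stochastic path diverges quickly
    # from the skeleton, which is where the noise model matters most.
    print(round(float(n_det), 2), round(float(n_sto), 2))
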
32

Sprague, Daniel Alexander. "Modelling health behaviour." Thesis, University of Warwick, 2015. http://wrap.warwick.ac.uk/77458/.

Full text
Abstract:
Many diseases can be prevented or mitigated through behaviour change, but we lack a quantitative model that can accurately predict these changes and inform policies designed to promote them. Here we introduce a quantitative model of health behaviour that takes into account individual-level barriers, the health system, and spread between individuals. We investigate limits of the model where each of these determining factors is dominant, and use them to predict behaviour from data. We apply the model to individual-level geographic barriers to mothers giving birth in a health facility, and find evidence that ease-of-access is a major determinant of delivery location. The geographic barriers allow us to explain the observed spatial distribution of this behaviour, and to accurately predict low prevalence regions. We then apply the model to the role of the health system in determining health facility usage by mothers of sick children. We show that local health facility quality does predict usage, but that this predictive power is significantly less than that gained by including unaccounted-for spatial correlation such as social influence. We also show evidence that results-based funding, rather than traditional input-based funding, increases usage. We develop a psychologically-motivated ‘complex contagion’ model for social influence and incorporate it into a general model of behaviour spread. We apply this model to short-lived behavioural fads, and show that ‘nudges’ can be very effective in systems with social influence. We successfully fit the model to data for the online spread of real-world behaviour, and use it to predict the peak time and duration of a fad before the peak occurred. Finally, we discuss ways to incorporate disease state into the model, and to relax the limits used in the rest of the thesis. We consider a model which links health behaviour to disease, and show that complex contagion leads to a feature that is not present in traditional models of disease: the survival of an epidemic depends non-trivially on the initial fraction of the population that is infected. We then introduce two possible models that include both social influence and an inhomogeneous population, and discuss the type of data that might be required to use them predictively. The model introduced here can be used to understand and predict health behaviours, and we therefore believe that it provides a valuable tool for informing policies to combat disease.
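
The 'complex contagion' idea, adoption requiring reinforcement from more than one contact, unlike simple disease-like spread, can be sketched on a ring lattice. The network, threshold and seeding below are illustrative and are not the thesis's fitted model.

    import numpy as np

    n, k, threshold = 400, 8, 2      # people, contacts each, adoption threshold
    neigh = [[(i + d) % n for d in range(-k // 2, k // 2 + 1) if d]
             for i in range(n)]      # clustered ring-lattice contacts
    adopted = np.zeros(n, dtype=bool)
    adopted[:5] = True               # a small seeded cluster (the 'nudge')

    for _ in range(50):
        exposure = np.array([int(adopted[js].sum()) for js in neigh])
        adopted |= exposure >= threshold   # adopt only with >=2 adopting contacts

    print(int(adopted.sum()), "adopters after 50 steps")
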
33

Sheikh, Irfan. "Modelling power prices." Thesis, Imperial College London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529355.

Full text
34

Yeung, William Wai-Hung. "Modelling stalled airfoils." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/31120.

Full text
Abstract:
The thesis deals with some new applications of the wake source model, a two-dimensional incompressible potential flow model used for bodies experiencing flow separation. The body contour is conformally mapped to a circle, for which the flow problem is solved using source singularities to create free streamlines simulating the separating shear layers. In common with other inviscid theories, it generally requires the pressure in the separated flow region, and the location of separation if boundary-layer controlled. Different mapping sequences and flow models have been constructed for the following five problems:
1. trailing-edge stall for single-element airfoils,
2. flat plates with separation bubbles,
3. separation bubbles upstream of spoilers with downstream wakes,
4. spoiler/slotted flap combinations, in which the spoiler inclination is arbitrary, and
5. two-element airfoils near (trailing-edge) stall.
Predictions of pressure distribution are compared with wind tunnel measurements, and good agreement is found in cases 1 and 5. The initial shape of the separation streamlines also appears to be satisfactory. Results in cases 2 and 3 are promising, although more work is needed to improve the bubble shapes and their pressure distributions. Partial success has been achieved on spoiler/slotted flap configurations, depending on the spoiler inclination. For strong wake effect on the flap (e.g. δ = 90°), the model predicts a very high suction peak over it, whereas the experimental data resemble a stalled distribution even though flow visualization indicates the flap to be unstalled. This may be related to a limitation of the method, also noted in the separation-bubble problems, that it cannot specify a complete boundary condition on a free streamline. This discrepancy diminishes as the spoiler angle becomes smaller (e.g. δ = 30°) at higher incidences, so that the wake boundary tugs away from the flap sooner.
35

Lewis-Bowen, J. "Modelling grid architecture." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1445669/.

Full text
Abstract:
This thesis evaluates software engineering methods, especially event modelling of distributed systems architecture, by applying them to specific data-grid projects. Other methods evaluated include requirements analysis, formal architectural definition and discrete event simulation. A novel technique for matching architectural styles to requirements is introduced. Data-grids are a new class of networked information systems arising from e-science, itself an emergent method for computer-based collaborative research in the physical sciences. The tools used in general grid systems, which federate distributed resources, are reviewed, showing that they do not clearly guide architecture. The data-grid projects, which specifically join heterogeneous data stores, put required qualities at risk. Such risk of failure is mitigated in the EGSO and AstroGrid solar physics data-grid projects' designs by modelling. Design errors are trapped by rapidly encoding and evaluating informal concepts, architecture, component interaction and objects. The success of software engineering modelling techniques depends on the models' accuracy, ability to demonstrate the required properties, and clarity (so project managers and developers can act on findings). The formal event modelling language chosen, FSP, meets these criteria at the diverse early lifecycle stages (unlike some techniques trialled). Models permit very early testing, finding hidden complexity, gaps in designed protocols and risks of unreliability. However, simulation is shown to be more suitable for evaluating qualities like scalability, which emerge when there are many component instances. Design patterns (which may be reused in other data-grids to resolve commonly encountered challenges) are exposed in these models. A method for generating useful models rapidly, introducing the strength of iterative lifecycles to sequential projects, also arises. Despite reported resistance to innovation in industry, the software engineering techniques demonstrated may benefit commercial information systems too.
APA, Harvard, Vancouver, ISO, and other styles
36

Mickael, Michel Edwar Khalil. "Modelling baroreceptors function." Thesis, Durham University, 2012. http://etheses.dur.ac.uk/3901/.

Full text
Abstract:
Cardiovascular diseases are among the most dangerous threats to human life. They are usually the result of high blood pressure, so controlling blood pressure within patient-specific healthy limits is a goal we must target. There are two control loops for blood pressure homeostasis in the body, one long term and one short term. Baroreceptors control short-term blood pressure regulation. They are nerve endings located at certain sites within blood vessel walls, and they report blood pressure to the brain and the central nervous system. However, the basics of their function are not yet known. We propose here that baroreceptors work by converting circumferential and axial pressure into stresses in the respective directions, and that they start to send nerve signals once the strain energy of the location in which they are embedded exceeds a threshold. Thus A fibres are highly likely to reside in the stiffer adventitia, while the media will contain C fibres; this explains why apparently identical fibres have different thresholds. We arrived at this result by relating the stress–strain behaviour of the whole wall to that of the individual arterial layers. These findings are quite significant, as they provide a method for identifying the different stresses in the arterial wall layers using whole-wall experimental data, and as they differentiate between fibre types based on their locations inside the arterial wall. A complete model of baroreceptor function might lead to biosynthetic materials that interact with the body at the cellular level, giving humans the means to control short-term blood regulation and so prevent hypertension and its accompanying diseases, such as atherosclerosis.
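As a hedged sketch of the proposed mechanism (the notation is illustrative, not the thesis's own): for a pressurised vessel, the mean circumferential stress in a thin-walled approximation is

\sigma_\theta \approx \frac{P r}{h},

and a receptor embedded in layer \ell would fire when the local strain energy density exceeds a threshold, \Psi_\ell(\varepsilon_\theta, \varepsilon_z) \ge \Psi_{\mathrm{th}}. Because the adventitia is stiffer than the media, the same pressure produces a different \Psi_\ell in each layer, so identical nerve endings acquire different effective pressure thresholds depending on where they sit.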
APA, Harvard, Vancouver, ISO, and other styles
37

Robbins, Lynda E. "Modelling cryptographic systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ31163.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bredenberg, Johan. "Modelling biomolecular interactions." Stockholm, 2003. http://diss.kib.ki.se/2003/91-7349-571-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hutcheson, Kenneth Andrew. "Geomagnetic field modelling." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Leith, Miguel Francis. "Modelling linguistic events." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Marchant, Ben. "Modelling cell invasion." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Höfer, Thomas. "Modelling Dictyostelium aggregation." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hall, R. J. "Modelling fungicide resistance." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599864.

Full text
Abstract:
Fungicide resistance, whereby a mutation conferring reduced sensitivity to chemical control arises and spreads through a fungal population, severely inhibits the successful control of crop disease. Mathematical models play a vital role in assessing the risk of invasion of fungicide-resistant pathogens, and in the design of effective resistance management strategies. In this thesis, I investigate the factors affecting the invasion of resistance in heterogeneous crop environments. I develop a simple, nonlinear model for fungicide resistance which, improving on existing work, incorporates the dynamics of the host crop and quantifies how the amount, decay and timing of a fungicide dose affect selection for resistance. The model structure is similar to those used to describe antibiotic resistance, and hence much of the analysis presented here applies more generally to drug and pesticide resistance. I identify a threshold for the invasion of resistance in terms of two key parameters, both of which are amenable to estimation in the field. These are the fitness of the resistant strain relative to the wild-type, and treatment efficacy (which summarises how control inhibits pathogen survival and reproduction). Using a discrete, stochastic formulation of the model, I demonstrate that this threshold is robust to the effects of demographic stochasticity, and estimate the probabilities of resistance pre-existing or emerging during treatment. In the final section of the thesis, I extend the simple model to examine the dynamics of multiple pathogen strains, the effects of seasonal disturbance to the host (through planting and harvesting) on persistence of the resistant pathogen, and how the scale of pathogen dispersal affects the spatial propagation of resistance.
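As a hedged illustration of the general shape of such a threshold (the symbols here are invented, not the thesis's notation): if the resistant strain pays a fitness cost, reproducing at a fraction \phi of the wild-type rate, and a treatment of efficacy \epsilon removes a fraction \epsilon of sensitive-strain reproduction, then comparing effective reproduction under treatment, R_r = \phi R_0 against R_s = (1 - \epsilon) R_0, gives invasion of resistance when

\phi > 1 - \epsilon,

i.e. resistance spreads once the treatment's suppression of the sensitive strain outweighs the resistant strain's fitness cost. Both \phi and \epsilon correspond to the two field-measurable parameters named in the abstract.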
APA, Harvard, Vancouver, ISO, and other styles
44

Biswas, P. "Inclusive user modelling." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596672.

Full text
Abstract:
I have investigated how the physical capabilities of users with a wide range of abilities are reflected in their interactions with digital devices. I have formulated a simulator that embodies both the internal state of a computer application and the perceptual, cognitive and motor processes of its user. The simulator can predict the likely interaction patterns when undertaking a task using a variety of input devices, and estimate the time to complete the task in the presence of different disabilities and for different levels of skill. The simulator consists of a perception model, a cognitive model and a motor behaviour model. The perception model simulates the phenomena of visual perception (such as focussing and shifting attention) and can also simulate the effects of different visual impairments on interaction. It has predicted the visual search time and eye gaze pattern of able-bodied people and of a few types of visually impaired users with statistically significant accuracy. The cognitive model simulates expert performance using the CPM-GOMS model, and can also simulate the performance of novices using a dual-space model. The motor behaviour model is based on statistical analysis of cursor traces from motor-impaired users. As part of the model, I have also developed a new scale for characterizing the extent of users' disability by measuring their grip strength. I have evaluated the simulator through an icon-searching task undertaken by visually and motor impaired people, and have used the simulator to develop a new assistive interaction technique. My studies have already been used to design an accessible game, and the University has been awarded EU funding for a project that will build on the results of my PhD research.
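The thesis's motor behaviour model is fitted to cursor traces from motor-impaired users; purely as a hedged sketch of how such a predictor might be shaped (the constants and the impairment multiplier below are invented, and Fitts' law stands in for the fitted model), task time can be framed as visual search time plus scaled pointing time:

import math

def fitts_mt(distance_px, width_px, a=0.1, b=0.15):
    """Fitts' law movement time in seconds (a, b are illustrative constants)."""
    return a + b * math.log2(distance_px / width_px + 1)

def predicted_task_time(distance_px, width_px, search_s, impairment=1.0):
    """Visual search time plus pointing time, scaled by a hypothetical
    impairment factor (>1 slows movement); not the thesis's fitted model."""
    return search_s + impairment * fitts_mt(distance_px, width_px)

print(predicted_task_time(600, 48, search_s=1.2))                   # able-bodied baseline
print(predicted_task_time(600, 48, search_s=1.2, impairment=2.5))   # slowed pointing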
APA, Harvard, Vancouver, ISO, and other styles
45

Tse, Jonathan. "Market microstructure modelling." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.540272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bennett, Donald John. "Semiconductor process modelling." Thesis, University of the West of Scotland, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Axt, Konstantin. "Modelling pre-rRNA." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/7746.

Full text
Abstract:
In this project rRNA maturation was investigated with the help of mathematical models of the processing pathways from pre-rRNA to mature rRNA species. Previously described models were transferred from Excel to Mathematica. Additionally, two Mathematica-based software applications were created, which help to analyse metabolic [3H]-uracil labelling of pre-rRNA species. The first program, M Fit, helps to visualize dependencies in pre-rRNA processing. The second, S Fit, finds a best fit of the model response to labelling time-course data, thereby optimizing parameter values. To validate the model, anything that influences co-transcriptional cleavage is of interest, as this would have distinct effects on the 20S pre-rRNA labelling curve. A list of proteins that might play a role in A2 cleavage of the 35S was compiled, and Rat1 was selected as the first candidate to investigate. All pre-rRNA species except the 35S pre-rRNA consist of two populations: one created by nascent-transcript cleavage (the 35S is cleaved during transcription) and one created by released-transcript cleavage (a fully transcribed 35S pre-rRNA is released before cleavage). These two populations are not usually distinguishable on gels; however, with the help of the models they can be differentiated. This allows useful predictions to be made about [3H]-uracil labelling courses in cases of high or low co-transcriptional cleavage. Experimental data for Rat1 depletion strains indeed showed an inhibition of co-transcriptional cleavage, with a curve pattern as predicted by the models. Loss of another ribosome synthesis factor, Srp40, was predicted to inhibit co-transcriptional pre-rRNA methylation. Of particular interest here was the effect on the 20S, as this species is supposed to be mostly methylated co-transcriptionally. When labelling with [3H]-methionine, the 20S curve for the Srp40 deletion mutant should have an earlier onset than the 20S curve from the corresponding wild-type strain. A higher tritium response was indeed shown for srp40Δ compared to wild type; this might prove a loss of co-transcriptional methylation.
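The thesis's models are implemented in Mathematica; as a hedged Python analogue (the species chain, rate constants and values below are invented for illustration, not taken from the thesis), the same style of first-order processing model with label inflow can be written as a small ODE system:

import numpy as np
from scipy.integrate import odeint

# Hypothetical two-step chain: labelled 35S -> 20S -> 18S with first-order rates.
k_synth, k1, k2 = 1.0, 0.05, 0.08   # illustrative rate constants

def chain(y, t):
    p35, p20, m18 = y
    return [k_synth - k1 * p35,      # labelled 35S synthesised, then cleaved
            k1 * p35 - k2 * p20,     # 20S produced from 35S, matured to 18S
            k2 * p20]                # mature 18S accumulates label

t = np.linspace(0, 300, 100)
labelled = odeint(chain, [0.0, 0.0, 0.0], t)
print(labelled[-1])  # label in each species at the end of the time course

In a model of this shape, splitting each intermediate into nascent-cleavage and released-cleavage pools simply adds parallel branches with different rate constants, which is how the two gel-indistinguishable populations can be separated computationally.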
APA, Harvard, Vancouver, ISO, and other styles
48

Crapper, Martin. "Fluid mud modelling." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Morrell, Kevin. "Modelling employee turnover." Thesis, Loughborough University, 2002. https://dspace.lboro.ac.uk/2134/6794.

Full text
Abstract:
This thesis reports the first independent test of an influential model of employee turnover (Lee, Mitchell, Holtom, McDaniel and Hill 1999). The context for this test is nurse turnover in the National Health Service (NHS). There have been many hundreds of turnover studies in the last fifty years, and many ways of understanding the turnover phenomenon. The thesis organises this literature by selectively analysing and discussing the more influential of these studies. This selective, critical review allows the model tested here to be placed in a theoretical and historical context. A critique of the model signalled the need for theoretical development prior to operationalisation. However, the relative paucity of empirical evidence in support of the model suggested that replicating the authors' basic findings would also be desirable. Accordingly, the case for a critical test was clear, and an outline of the role of this type of replication facilitated it. The research involved eight NHS trusts in three regions. In total, 352 full-time nurse leavers participated. Data relating to their decision to leave were collected via an eight-page survey comprising both closed and open items. Analysis and interpretation of these data challenge the current formulation of the model tested, as well as contributing to the understanding of employee turnover and nursing turnover. Note: the term 'model' is used here to mean 'conceptual framework'.
APA, Harvard, Vancouver, ISO, and other styles
50

Tighe, Stephen Patrick. "Macroscopic dislocation modelling." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:e4f00d59-71b0-43c0-b141-0bb7e6108fff.

Full text
Abstract:
Work-hardened metals typically possess large numbers of dislocations in complex three-dimensional configurations about which little is known theoretically. Here these large numbers of dislocations are accounted for by means of a dislocation density tensor, obtained by applying an averaging process to families of discrete dislocations. Some simple continuous distributions are examined and an analogy is drawn with solenoids in electromagnetism before the question of the equilibrium of dislocation configurations is studied. It is then proved that the only finite, simply-connected distributions of dislocations in equilibrium in the absence of applied stresses are ones in which all components of stress vanish everywhere. Some examples of these zero-stress-everywhere (ZSE) distributions are then given, and the concept of 'plastic distortion' is used to facilitate their interpretation as rotations of the crystal lattice. Plastic distortion can also be understood as a distribution of infinitesimal dislocation loops ('Kroupa loops'), and this idea is used in Chapter 4 to investigate the dislocation distributions which correspond to elastic inclusions. The evolution, under an applied stress, of some simple ZSEs is analysed, and the idea of 'polarisation' is introduced, again in analogy with electromagnetism. Finally, a mechanism is conjectured for the onset of plastic flow.
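As a hedged aside on the standard notation behind such a description (this is the usual textbook form, up to sign and index conventions, not necessarily the thesis's own): the dislocation density tensor \alpha_{ij} gives the net Burgers vector of all dislocation lines threading a surface S,

B_i = \int_S \alpha_{ij}\, n_j \, \mathrm{d}S, \qquad \partial_j \alpha_{ij} = 0,

and the divergence-free condition, which states that dislocation lines cannot end inside the crystal, is the formal counterpart of the solenoid analogy drawn in the thesis.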
APA, Harvard, Vancouver, ISO, and other styles