
Doctoral dissertations on the topic "RE-MODELLING"

Create an accurate reference in APA, MLA, Chicago, Harvard and many other citation styles


Consult the top 50 doctoral dissertations on the topic "RE-MODELLING".

An "Add to bibliography" button is available next to each work in the list. Use it and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant details are included in the work's metadata.

Browse doctoral dissertations from a variety of disciplines and compile an accurate bibliography.

1

Dimitrova, Dimitrina S. "Dependent risk modelling in (re)insurance and ruin". Thesis, City, University of London, 2007. http://openaccess.city.ac.uk/18910/.

Full text source
Abstract:
The work presented in this dissertation is motivated by the observation that the classical (re)insurance risk modelling assumptions of independent and identically distributed claim amounts, Poisson claim arrivals and premium income accumulating linearly at a certain rate, starting from possibly non-zero initial capital, are often not realistic and violated in practice. There is an abundance of examples in which dependence is observed at various levels of the underlying risk model. Developing risk models which are more general than the classical one and can successfully incorporate dependence between claim amounts, consecutively arriving at the insurance company, and/or dependence between the claim inter-arrival times, is at the heart of this dissertation. The main objective is to consider such general models and to address the problem of (non-)ruin within a finite-time horizon of an insurance company. Furthermore, the aim is to consider general risk and performance measures in the context of a risk sharing arrangement such as an excess of loss (XL) reinsurance contract. There are two parties involved in an XL reinsurance contract and their interests are contradictory, as first noted by Karl Borch in the 1960s. Therefore, we define joint, between the cedent and the reinsurer, risk and performance measures, both based on the probability of ruin, and show how the latter can be used to optimally set the parameters of an XL reinsurance treaty. Explicit expressions for the proposed risk and performance measures are derived and are used efficiently in numerical illustrations.
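For readers unfamiliar with the classical setting that this dissertation generalises, a minimal Monte Carlo sketch of the finite-time ruin probability in a compound Poisson model is given below. The parameter names, the exponential claim distribution and the premium loading are illustrative assumptions only; they are not the dependence structures or the explicit expressions derived in the thesis.

```python
import numpy as np

def finite_time_ruin_prob(u, c, lam, claim_sampler, horizon, n_sims=20_000, seed=0):
    """Monte Carlo estimate of P(ruin before `horizon`) for the classical
    compound Poisson risk process R(t) = u + c*t - S(t). Ruin can only occur
    at claim instants, so the reserve is checked claim by claim."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_sims):
        t, reserve = 0.0, float(u)
        while True:
            dt = rng.exponential(1.0 / lam)   # i.i.d. exponential inter-arrival time
            if t + dt > horizon:
                break                          # survived the finite horizon
            t += dt
            reserve += c * dt - claim_sampler(rng)
            if reserve < 0.0:
                ruined += 1
                break
    return ruined / n_sims

# Example: exponential claims with mean 10, arrival rate 1, 20% premium loading.
psi = finite_time_ruin_prob(u=50.0, c=1.2 * 1.0 * 10.0, lam=1.0,
                            claim_sampler=lambda rng: rng.exponential(10.0),
                            horizon=100.0)
```

In an XL treaty with retention M, the same simulation can be run for the cedent by replacing each claim X with min(X, M) and adjusting the premium, which is the kind of joint cedent/reinsurer view the abstract refers to.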
2

Patrick, A. C. "The dentist-patient relationship : re-modelling autonomy for dentistry". Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/8302/.

Full text source
Abstract:
Previous work in the field of the clinician-patient relationship has relied on a generalized understanding of the ethical structure of the clinical relationship. This thesis seeks to rebut that presumption, by claiming that differing clinical relationships raise diverse ethical issues that call for specific ethical solutions. By looking closely at the primary dental care relationship this thesis will propose three specific instances where the dentist-patient relationship faces unique challenges. The thesis will also go on to establish the claim that the current reliance on a rational notion of autonomy, one that is firmly attached to the consent process, is unable to theoretically address and adequately support the issues raised in relation to the dentist-patient relationship. This work considers, through philosophical enquiry, a number of theoretical alternatives and examines in detail the extent to which an alternative way of understanding the dentist-patient relationship might be more effective in addressing the matters of ethical concern raised and, as a consequence, be more ethically robust. The thesis concludes that a separation between our understanding of promoting and protecting autonomy enables us to re-visit and develop a more appropriate model of autonomy for the dentist-patient relationship that relies on a moderated, negative libertarian view. This transforms and simplifies obligations to the patient by providing an account that operates as a constraint in the clinical setting, with our wish to promote autonomy being understood as the action of restoring health itself.
3

Tajtehranifard, Hasti. "Incident duration modelling and system optimal traffic re-routing". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/110525/1/Hasti_Tajtehranifard_Thesis.pdf.

Full text source
Abstract:
Traffic incidents are among the most significant contributory factors to congestion, particularly in metropolitan areas. In this dissertation, we have developed state-of-the-art statistical models to provide in-depth insights into how various incident-specific characteristics and the associated temporal and spatial determinants impact freeway incident durations. Next, we have proposed, developed and tested two novel and computationally efficient System Optimal incident traffic re-routing algorithms that provide optimal traffic flow patterns, for minimized total system travel time. Specifically, a single-destination System Optimal Dynamic Traffic Assignment model and a multi-destination System Optimal Quasi-Dynamic Traffic Assignment model are proposed, developed and demonstrated to improve total system travel times, both under incident-free and incident scenarios.
4

Tuta, Navajas Gilmar. "Modelling and Pitch Control of a Re-Configurable Unmanned Airship". Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41998.

Full text source
Abstract:
Lighter than air (LTA) vehicles have many advantageous capabilities over other aircraft, including low power consumption, high payload capacity, and long endurance. However, they exhibit manoeuvrability and control reliability challenges, and these limitations are particularly significant for smaller unmanned LTA. In this thesis, a 4 m length autonomous airship with a sliding gondola is presented. A rigid keel, mounted to the helium envelope, follows the helium envelope profile from the midsection to the nose of the vehicle. Moving the gondola along the keel produces upwards of 90-degree changes in pitch angle, thereby improving manoeuvrability and allowing for rapid changes in altitude. The longitudinal multi-body equations of motion were developed for this prototype using the Boltzmann–Hamel method. An adaptive PID controller was then designed to control the pitch inclination using the gondola’s position. This control system is capable of self-tuning the controller gains in real time by minimizing a pre-defined sliding condition. Experimental flight tests were carried out to evaluate the controller’s performance on the prototype.
5

Yan, Wai Man. "Experimental study and constitutive modelling of re-compacted completely decomposed granite /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?CIVL%202003%20YAN.

Full text source
Abstract:
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 168-177). Also available in electronic version. Access restricted to campus users.
6

Fuchs, Philippe [Verfasser]. "Re-modelling of Mitochondrial Respiration in Arabidopsis during Drought / Philippe Fuchs". Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1208937510/34.

Full text source
7

Ward, Heidi. "Modelling the re-design decision utilizing warranty data and consumer claim behaviour". Thesis, University of Salford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248941.

Full text source
8

German, Laura. "Academic research data re-usage in a digital age : modelling best practice". Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/383481/.

Full text source
Abstract:
Recent high-profile retractions – such as the case of Woo Suk Hwang and others – demonstrate that there are still significant issues regarding the reliability of published academic research data. While technological advances offer the potential for greater data re-usability on the Web, models of best practice are yet to be fully re-purposed for a digital age. Employing interdisciplinary web science practices, this thesis asks what makes for excellent-quality academic research across the sciences, social sciences and humanities. This thesis uses a case study approach to explore five existing digital data platforms within chemistry, marine environmental sciences and modern languages research. It evaluates their provenance metadata, legal, technological and socio-cultural frameworks. This thesis further draws on data collected from semi-structured interviews conducted with eighteen individuals connected to these five data platforms. The participants have a wide range of expertise in the following areas: data management, data policy, academia, law and technology. Through the interdisciplinary literature review and cross-comparison of the three case studies, this thesis identifies the five main principles for improved modelling of best practice for academic research data re-usage both now and in the future. These principles are: (1) sustainability, (2) working towards a common understanding, (3) accreditation, (4) discoverability, and (5) a good user experience. It also reveals nine key grey areas that require further investigation.
9

Berry, Daniel. "The analysis, modelling and simulation of a re-engineered PC supply chain". Thesis, Online version, 1994. http://ethos.bl.uk/OrderDetails.do?did=1&uin=uk.bl.ethos.364254c.

Full text source
10

Breen, L. "Re-modelling clay : ceramic practice and the museum in Britain (1970-2014)". Thesis, University of Westminster, 2016. https://westminsterresearch.westminster.ac.uk/item/9x230/re-modelling-clay-ceramic-practice-and-the-museum-in-britain-1970-2014.

Full text source
Abstract:
This thesis analyses how the dialogue between ceramic practice and museum practice has contributed to the discourse on ceramics. Taking Mieke Bal’s theory of exposition as a starting point, it explores how ‘gestures of showing’ have been used to frame art‑oriented ceramic practice. Examining the gaps between the statements these gestures have made about and through ceramics, and the objects they seek to expose, it challenges the idea that ceramics as a category of artistic practice has ‘expanded.’ Instead, it forwards the idea that ceramics is an integrative practice, through which practitioners produce works that can be read within a range of artistic (and non-artistic) frameworks. Focusing on activity in British museums between 1970 and 2014, it takes a thematic and broadly chronological approach, interrogating the interrelationship of ceramic practice, museum practice and political and critical shifts at different points in time. Revealing an ambiguity at the core of the category ‘ceramics,’ it outlines numerous instances in which ‘gestures of showing’ have brought the logic of this categorisation into question, only to be returned to the discourse on ‘ceramics’ as a distinct category through acts of institutional recuperation. Suggesting that ceramics practitioners who wish to move beyond this category need to make their vitae as dialogic as their works, it indicates that many of those trying to raise the profile of ‘ceramics’ have also been complicit in separating it from broader artistic practice. Acknowledging that those working within institutions that sustain this distinction are likely to re-make, rather than reconsider ceramics, it leaves the ball in their court.
11

Viladegut, Farran Alan. "Assessment of gas-surface interaction modelling for lifting body re-entry flight design". Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/461893.

Full text source
Abstract:
Space re-entry is a challenging endeavor due to the harsh thermo-chemical environment around the vehicle. Heat flux being the reference parameter for Thermal Protection System (TPS) design, the total energy transfer can significantly increase due to the exothermic atomic recombination enhanced by TPS catalytic properties. The modelling of the catalytic recombination coefficient is critical for heat flux computation during TPS design. This work assesses the ability to determine the recombination coefficient at the von Karman Institute's (VKI) plasma wind tunnel (Plasmatron) as a step towards future validation of catalytic models: from the development of a reference catalytic model for enthalpy characterization of the facility, to the identification of the most influential parameters found in non-equilibrium boundary layers. Plasmatron test results encourage the development of a flight extrapolation strategy in order to link the catalysis measured on the ground to the catalysis appearing in flight. The strategy, focused on off-stagnation-point conditions, shall contribute to future post-flight activities of the CATalytic Experiment (CATE) on board the Intermediate eXperimental Vehicle (IXV). Relevant data from IXV and CATE are also presented, laying the foundation for future developments at VKI.
12

Platts, Louise Ann Marie. "The L-arginine/nitric oxide pathway in bone (re)modelling and articular inflammation". Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391503.

Full text source
13

Yu, Shih Bun. "Modelling of complexity in manufacturing networks and its application to system re-engineering". Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427639.

Full text source
14

Adhya, Sima. "Thermal re-radiation modelling for the precise prediction and determination of spacecraft orbits". Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1445243/.

Full text source
Abstract:
Thermal re-radiation (TRR) affects spacecraft orbits when a net recoil force results from the uneven emission of radiation from the spacecraft surface; these forces can perturb spacecraft trajectories by several metres over a few hours. The mis-modelling of TRR limits the accuracy with which some spacecraft orbits can be computed, and in turn limits the capabilities of applications where satellite positioning is key. These range from real-time navigation to geodetic measurements using low earth orbiting spacecraft. Approaches for the precise analytical modelling of TRR forces are presented. These include methods for the treatment of spacecraft multilayer insulation (MLI), solar panels and other spacecraft components. Techniques for determining eclipse boundary crossing times for an oblate earth and modelling penumbral fluxes are also described. These affect both the thermal force and the considerably larger solar radiation pressure (SRP) force. These methods are implemented for the Global Positioning System (GPS) Block IIR spacecraft and the altimetry satellite Jason-1. For GPS Block IIR, model accuracy is assessed by orbit prediction through numerical integration of the spacecraft force model. Orbits were predicted over 12 hours and compared to precise orbits before and after the thermal and eclipse-related models were included. When the solar panel model was included, mean orbit prediction errors dropped from 3.3m to 3.0m over one orbit; inclusion of the MLI model reduced errors further to 0.6m. For eclipsing satellites, the penumbral flux model reduced errors from 0.7m to 0.56m. The Jason-1 models were tested by incorporation into GIPSY-OASIS II, the Jet Propulsion Laboratory's (JPL) orbit determination software. A combined SRP and TRR model yielded significant improvements in orbit determination over all other existing models and is now used routinely by JPL in the operational orbit determination of Jason-1.
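As a rough illustration of the physics behind a TRR recoil force, the sketch below evaluates the standard expression for the force produced by a single flat, diffusely (Lambertian) emitting surface element. The panel area, temperature and emissivity are illustrative assumptions, not GPS Block IIR or Jason-1 values, and the thesis's models treat MLI, solar panels and eclipse geometry in far more detail.

```python
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
C_LIGHT = 299_792_458.0   # speed of light, m s^-1

def lambertian_recoil_force(area_m2, temperature_k, emissivity):
    """Recoil force magnitude (N) from one flat, Lambertian-emitting surface.
    The 2/3 factor comes from integrating the cosine-weighted photon momentum
    flux over the emission hemisphere; the force acts along the inward normal."""
    radiant_exitance = emissivity * SIGMA * temperature_k ** 4   # W m^-2
    return (2.0 / 3.0) * radiant_exitance * area_m2 / C_LIGHT

# Illustrative only: a 1 m^2 panel at 300 K with emissivity 0.8 gives roughly 8e-7 N.
force_n = lambertian_recoil_force(area_m2=1.0, temperature_k=300.0, emissivity=0.8)
```

Summing such contributions over all surface elements, each along its own surface normal, and dividing by the spacecraft mass gives a net TRR acceleration of the kind these force models produce.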
15

Dorigatti, Ilaria. "Mathematical modelling of emerging and re-emerging infectious diseases in human and animal populations". Doctoral thesis, Università degli studi di Trento, 2011. https://hdl.handle.net/11572/369140.

Full text source
Abstract:
The works presented in this thesis are very different one from the other, but they all deal with the mathematical modelling of emerging infectious diseases which, beyond being the leitmotiv of this thesis, is an important research area in the field of epidemiology and public health. A minor but significant part of the thesis has a theoretical flavour. This part is dedicated to the mathematical analysis of the competition model between two HIV subtypes in the presence of vaccination and cross-immunity proposed by Porco and Blower (1998). We find the sharp conditions under which vaccination leads to the coexistence of the strains and, using arguments from bifurcation theory, draw conclusions on the stability of the equilibria and find that a rather unusual hysteresis-type behaviour might emerge after repeated variations of the vaccination rate within a certain range. Most of this thesis has been inspired by real outbreaks that occurred in Italy over the last 10 years and concerns the modelling of the 1999-2000 H7N1 avian influenza outbreak and of the 2009-2010 H1N1 pandemic influenza. From an applied perspective, parameter estimation is a key part of the modelling process, and in this thesis statistical inference has been performed within both a classical framework (i.e. by maximum likelihood and least squares methods) and a Bayesian setting (i.e. by Markov Chain Monte Carlo techniques). However, my contribution goes beyond the application of inferential techniques to specific case studies. The stochastic, spatially explicit, between-farm transmission model developed for the transmission of the H7N1 virus has indeed been used to simulate different control strategies and assess their relative effectiveness. The modelling framework presented here for the H1N1 pandemic in Italy constitutes a novel approach that can be applied to a variety of different infections detected by surveillance systems in many countries. We have coupled a deterministic compartmental model with a statistical description of the reporting process and have taken into account the presence of stochasticity in the surveillance system. We thus tackled some challenging statistical issues (such as the estimation of the fraction of H1N1 cases reporting influenza-like-illness symptoms) that had not been addressed before. Last, we apply different estimation methods usually adopted in epidemiology to real and simulated school outbreaks, in an attempt to explore the suitability of a specific individual-based model at reproducing empirically observed epidemics in specific social contexts.
16

Dorigatti, Ilaria. "Mathematical modelling of emerging and re-emerging infectious diseases in human and animal populations". Doctoral thesis, University of Trento, 2011. http://eprints-phd.biblio.unitn.it/458/2/thesis_Dorigatti_2.pdf.

Full text source
Abstract:
The works presented in this thesis are very different one from the other, but they all deal with the mathematical modelling of emerging infectious diseases which, beyond being the leitmotiv of this thesis, is an important research area in the field of epidemiology and public health. A minor but significant part of the thesis has a theoretical flavour. This part is dedicated to the mathematical analysis of the competition model between two HIV subtypes in the presence of vaccination and cross-immunity proposed by Porco and Blower (1998). We find the sharp conditions under which vaccination leads to the coexistence of the strains and, using arguments from bifurcation theory, draw conclusions on the stability of the equilibria and find that a rather unusual hysteresis-type behaviour might emerge after repeated variations of the vaccination rate within a certain range. Most of this thesis has been inspired by real outbreaks that occurred in Italy over the last 10 years and concerns the modelling of the 1999-2000 H7N1 avian influenza outbreak and of the 2009-2010 H1N1 pandemic influenza. From an applied perspective, parameter estimation is a key part of the modelling process, and in this thesis statistical inference has been performed within both a classical framework (i.e. by maximum likelihood and least squares methods) and a Bayesian setting (i.e. by Markov Chain Monte Carlo techniques). However, my contribution goes beyond the application of inferential techniques to specific case studies. The stochastic, spatially explicit, between-farm transmission model developed for the transmission of the H7N1 virus has indeed been used to simulate different control strategies and assess their relative effectiveness. The modelling framework presented here for the H1N1 pandemic in Italy constitutes a novel approach that can be applied to a variety of different infections detected by surveillance systems in many countries. We have coupled a deterministic compartmental model with a statistical description of the reporting process and have taken into account the presence of stochasticity in the surveillance system. We thus tackled some challenging statistical issues (such as the estimation of the fraction of H1N1 cases reporting influenza-like-illness symptoms) that had not been addressed before. Last, we apply different estimation methods usually adopted in epidemiology to real and simulated school outbreaks, in an attempt to explore the suitability of a specific individual-based model at reproducing empirically observed epidemics in specific social contexts.
17

ROLLE, MATTEO. "Modelling of water balance and crop growth based on Earth Observation and re-analysis data". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972001.

Full text source
18

Rogers, Benjamin James. "ATP dependent chromatin re-modelling factors regulate expression of genes involved in Dictyostelium discoideum development and chemotaxis". Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/54125/.

Full text source
Abstract:
ATP dependent chromatin re-modelling factors have previously been shown to play a pivotal role in the regulation of gene expression in several model organisms, including yeast, fruit fly and human. When faced with a nutrient-depleted environment, Dictyostelium discoideum enters a process of multicellular development which requires the correct temporal and spatial expression of a large subset of genes. Here it is shown that two of these ATP dependent chromatin re-modelling factors, INO80 and CHDC, are required for the correct expression of developmental genes of Dictyostelium discoideum and subsequent multicellular morphogenesis. These factors are identified as having a key role in the earlier stage of aggregation and cellular chemotaxis towards the developmental chemoattractant cAMP. Genetic disruption of the genes encoding major subunits of these complexes, arp8 and chdC, results in each case in a decreased ability to form correct fruiting bodies, also showing a marked decrease in chemotactic ability. In each case, these defects are seen to occur through different mechanisms, indicating the role of multiple pathways in the regulation of Dictyostelium chemotaxis. Interestingly, both mutant cell lines are also responsive to the neuropsychiatric treatment drug lithium and are shown to affect elements of the inositol signaling pathway.
19

Rollason, Edward David. "Re-evaluating participatory catchment management : integrating mapping, modelling, and participatory action to deliver more effective risk management". Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12857/.

Full text source
Abstract:
Recent policy changes, such as the EU Water Framework Directive, have transformed catchment management to consider connected socio-ecological systems at the catchment scale, and to integrate the concept of public participation. However, there is relatively little research exploring how effective these changes have been in altering existing practices of management. Adopting a transdisciplinary approach, this thesis investigates a range of perspectives to explore existing participatory practices in current catchment management, and to understand how we can integrate alternative knowledges and perspectives. The research employs diverse social and physical science methods, including participant-led interviews and participatory mapping, numerical flood modelling, and the creation of a participatory competency group. The research finds that, despite the participatory policy turn, established supra-catchment-scale drivers continue to dictate top-down practices of everyday catchment management, excluding local communities from decision-making power. In contrast, participation in managing extreme events is actively encouraged, with the development of community resilience a key objective for management agencies. However, the research findings suggest that a similar lack of meaningful participation in knowledge creation and decision-making restricts resilience building. Based on these findings, the research explores practical ways in which participation and resilience can be embedded in integrated catchment management (ICM), using the typically expert-led practice of numerical flood modelling to show how existing practices of knowledge creation can be enhanced. The thesis also demonstrates how new practices of knowledge creation, based on social learning, can be used to develop new, more effective ways of communicating flood risk and building local resilience. The thesis proposes a new framework for the management of connected socio-ecological catchment systems, embedding evolutionary resilience as a practical mechanism by which public participation and the management of everyday and extreme events could be unified to develop more effective and sustainable catchment management and more resilient communities.
20

Shin, Dongkyu. "A study of re-ignition phenomena and arc modelling to evaluate switching performance of low-voltage switching devices". Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/423475/.

Full text source
Abstract:
This thesis deals with the prediction, improvement and simulation of the switching performance of low-voltage switching devices (LVSDs). A literature review of arc characteristics, interruption principles, switching performance and arc modelling of LVSDs has been conducted. The experimental investigations of switching tests, arc imaging measurement and arc spectra measurement are also discussed. Switching tests have been carried out with 10kA, 20kA, 55kA and 100kA test circuits using either miniature circuit breakers or moulded case circuit breakers to investigate re-ignition phenomena and re-ignition evaluators. It is found that the ratio of the recovery voltage to the exit arc voltage, where the exit arc voltage is defined as the value of the arc voltage immediately prior to the current zero point, is a reliable evaluator for the prediction of re-ignition in the switching tests of LVSDs. It is also noted that there are no occurrences of instantaneous re-ignition where this voltage ratio lies in the range of 1.0 to -1.0, and that there is a threshold of the voltage ratio at approximately -2.0 which can distinguish successful interruption from instantaneous re-ignition. Arc imaging measurement has been conducted through an array of 109 optical fibres in total to allow observation of the overall quenching chamber of the flexible test apparatus. This experiment reveals that arc motion fluctuation (repeated backward and forward motion) in the splitter plate region leads to instability of the arc voltage. Moreover, the arc moves further as well as more quickly in the case of the larger vent size. A well-distributed vent contributes to an increase in arc motion velocity and a reduction in total arc duration. The arc spectrum is captured by a spectrometer to calculate the arc temperature when the arc is ignited by a copper wire in a narrow enclosed chamber. It is found that the arc light intensity measured by the arc imaging system is directly related to the arc temperature: the light intensity increases as the arc temperature rises. 3-D arc modelling has been implemented, based on magnetohydrodynamic theory. The Lorentz force, plasma properties depending on temperature as well as pressure, contact motion, radiation loss, arc root voltage, and the external circuit are considered in this modelling. It is observed that the simulated results show a similar trend to the experimental data, and the model is able to predict the current limitation and exit arc voltage, which are key features of switching performance.
21

Baslyman, Malak. "Activity-based Process Integration Framework to Improve User Satisfaction and Decision Support in Healthcare". Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38104.

Full text source
Abstract:
Requirements Engineering (RE) approaches are widely used in several domains such as telecommunications systems, information systems, and even regulatory compliance. However, they are rarely applied in healthcare beyond requirements elicitation. Healthcare is a multidisciplinary environment in which clinical processes are often performed across multiple units. Introducing a new Information Technology (IT) system or a new process in such an environment is a very challenging task, especially in the absence of recognized RE practices. Currently, many IT systems are not welcomed by caregivers and are considered to be failures because they change what caregivers are familiar with and bring new tasks that often consume additional time. This thesis introduces a new RE-based approach aiming to evaluate and estimate the potential impact of new system integrations on current practices, organizational goals, and user satisfaction using goal modelling and process modelling techniques. This approach is validated with two case studies conducted in real hospitals and a usability study involving healthcare practitioners. The contributions of the thesis are: • Major: a novel Activity-based Process Integration (AbPI) framework that enables the integration of a new process into existing practices incrementally, in a way that permits continuous analysis and evaluation. AbPI also provides several alternatives to a given integration to ensure effective flow and minimal disturbance to current practices. AbPI has a Goal Integration Method to integrate new goals, an Integration Method to integrate new processes, and an Alternative Evaluation Method exploiting multi-criteria decision-making algorithms to select among strategies. The modelling concepts of AbPI are supported by a profile of the User Requirements Notation augmented with a new distance-based goal-oriented approach to alternative selection and a new data-quality-driven algorithm for the propagation of confidence levels in goal models. • Minor: a usability study of AbPI to investigate the usefulness of the framework in a healthcare context. This usability study is part of the validation and is also a minor contribution due to: 1) the lack of usability studies when proposing requirements engineering frameworks, and 2) an intent to discover the potential usefulness of the framework in a context where recognized RE practices are seldom used.
22

Hamilton, William Derek. "The use of radiocarbon and Bayesian modelling to (re)write later Iron Age settlement histories in east-central Britain". Thesis, University of Leicester, 2011. http://hdl.handle.net/2381/9066.

Full text source
Abstract:
This thesis focuses on the use of radiocarbon dating and Bayesian modelling to develop more precise settlement chronologies for later prehistoric settlements over an area extending from the Tees valley in the south to the Firth of Forth in Scotland and bounded by the Pennines to the west. The project has produced a corpus of 168 new radiocarbon dates from nine sites and used these, together with dates that were already available for another 10 sites, to develop new chronological models for 18 settlements representative of different parts of the study area. The results of the modelling underline the dynamic character of later prehistoric social organization and processes of change in east-central Britain over a period of several centuries. A widespread shift from nucleated settlements to dispersed farmsteads apparently occurred over a period of no more than a generation on either side of 200 cal BC, with a subsequent move back to open sites in the period following Caesar's invasions in 55/54 BC. It is not yet clear why the settlement pattern became more focused on enclosed settlements around 200 cal BC, but whatever the cause, this seems to form a single archaeological horizon all the way from the Forth to the Tees. The shift to open settlement around 50 cal BC seems, however, to be tied to new economic forces developing in the region as southern England became more focused on economic and diplomatic relations with Rome in the century leading up to the Roman occupation of northern England shortly after AD 70. Questions of duration are also explored, related more specifically to the lifespan of settlements and even of individual structures or enclosure ditches. These questions lead to ones of tempo, whereby the cycle of rebuilding a roundhouse or re-digging a ditch is examined.
23

Sondermann, Martin [Verfasser], and Daniel [Akademischer Betreuer] Hering. "Modelling the spatial dispersal of aquatic invertebrates to predict (re-)colonisation processes within river catchments / Martin Sondermann ; Betreuer: Daniel Hering". Duisburg, 2018. http://d-nb.info/1154385876/34.

Full text source
24

Davies, Owen. "Measuring and modelling Sitka spruce (Picea sitchensis (Bong.) Carrière) and birch (Betula spp.) crowns, with special reference to terrestrial photogrammetry". Thesis, Bangor University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431791.

Full text source
25

Alalshuhai, Ahmed. "Requirements engineering of context-aware applications". Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/12487.

Full text source
Abstract:
Context-aware computing envisions a new generation of smart applications that have the ability to perpetually sense the user's context and use these data to make adaptation decisions in response to changes in the user's context, so as to provide timely and personalized services anytime and anywhere. Unlike traditional distributed systems, where the network topology is fixed and wired, context-aware computing systems are mostly based on wireless communication due to the mobility of the network nodes; hence the network topology is not fixed but changes dynamically in an unpredictable manner as nodes join and leave the network, in addition to the fact that wireless communication is unstable. These factors make the design and development of context-aware computing systems much more challenging, as the system requirements change depending on the context of use. The Unified Modelling Language (UML) is a graphical language commonly used to specify, visualize, construct, and document the artefacts of software-intensive systems. However, UML is an all-purpose modelling language and does not have notations to distinguish context-awareness requirements from other system requirements. This is critical for the specification, visualization, construction and documentation of context-aware computing systems because context-awareness requirements are highly important in these systems. This thesis proposes an extension of UML diagrams to cater for the specification, visualization, construction and documentation of context-aware computing systems, where new notations are introduced to model context-awareness requirements distinctively from other system requirements. The contributions of this work can be summarized as follows: (i) A context-aware use case diagram is a new notion which merges into a single diagram the traditional use case diagram (that describes the functions of an application) and the use context diagram, which specifies the context information upon which the behaviours of these functions depend. (ii) A novel notion known as a context-aware activity diagram is presented, which extends the traditional UML activity diagrams to enable the representation of context objects, context constraints and adaptation activities. Context constraints express conditions upon context object attributes that trigger adaptation activities; adaptation activities are activities that must be performed in response to specific changes in the system's context. (iii) A novel notion known as the context-aware class diagram is presented, which extends the traditional UML class diagrams to enable the representation of context information that affects the behaviours of a class. A new relationship, called utilisation, between a UML class and a context class is used to model context objects, meaning that the behaviours of the UML class depend upon the context information represented by the context class. Hence a context-aware class diagram is a rich and expressive language that distinctively depicts both the structure of classes and that of the contexts upon which they depend. The pragmatics of the proposed approach are demonstrated using two real-world case studies.
26

Currie, Andrew E. "Modelling surface runoff using TOPMODEL to determine the feasibility of re-forestation as a form of flood defence in the Upper River Calder Valley /". Leeds : University of Leeds, School of Geography, 2006. http://0-www.leeds.ac.uk.wam.leeds.ac.uk/library/secure/counter/geogbsc/200506/currie.pdf.

Full text source
27

Clegg, Sally Ann. "The changing role of the supply teacher : assessing the impact of the national strategies, professionalisation, globalisation and workforce re-modelling on the traditions of supply teachers' work". Thesis, Manchester Metropolitan University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442697.

Full text source
28

Zabanoot, Zaid Ahmed Said. "Modelling and Analysis of Resource Management Schemes in Wireless Networks. Analytical Models and Performance Evaluation of Handoff Schemes and Resource Re-Allocation in Homogeneous and Heterogeneous Wireless Cellular Networks". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5383.

Full text source
Abstract:
Over recent years, wireless communication systems have been experiencing a dramatic and continuous growth in the number of subscribers, thus placing extra demands on system capacity. At the same time, keeping Quality of Service (QoS) at an acceptable level is a critical concern and a challenge to the wireless network designer. In this sense, performance analysis must be the first step in designing or improving a network. Thus, powerful mathematical tools for analysing most of the performance metrics in the network are required. Good modelling and analysis of wireless cellular networks will lead to a high level of QoS. In this thesis, different analytical models of various handoff schemes and resource re-allocation in homogeneous and heterogeneous wireless cellular networks are developed and investigated. The sustained increase in users and the request for advanced services are some of the key motivations for considering the design of Hierarchical Cellular Networks (HCN). In this type of system, calls blocked in a microcell can flow over to an overlay macrocell. Microcells in the HCN can be replaced by WLANs, as these provide high bandwidth and their users have limited mobility. Efficient sharing of resources between wireless cellular networks and WLANs will improve the capacity as well as QoS metrics. This thesis first presents an analytical model for priority handoff mechanisms, where new calls and handoff calls are captured by two different traffic arrival processes, respectively. Using this analytical model, the optimised number of channels assigned to handover calls, with the aim of minimising the drop probability under given network scenarios, has been investigated. Also, an analytical model of a network containing two cells has been developed to measure the different performance parameters for each of the cells in the network, as well as for the network as a whole. Secondly, a new solution is proposed to manage the bandwidth and re-allocate it in a proper way to maintain the QoS for all types of calls. Thirdly, performance models for microcells and macrocells in hierarchical cellular networks have been developed by using a combination of different handoff schemes. Finally, the microcell in the HCN is replaced by WLANs and a prioritised vertical handoff scheme in an integrated UMTS/WLAN network has been developed. Simulation experiments have been conducted to validate the accuracy of these analytical models. The models have then been used to investigate the performance of the networks under different scenarios.
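The priority handoff mechanism described above reserves capacity for handover traffic; as a point of reference, the sketch below evaluates the standard guard-channel birth-death model, in which new calls are admitted only while a threshold number of channels is free. The channel counts and traffic rates are illustrative assumptions, not values or models taken from the thesis.

```python
import numpy as np

def guard_channel_probs(c_total, g_guard, lam_new, lam_ho, mu):
    """Stationary blocking/dropping probabilities for the classical guard-channel
    scheme, modelled as a birth-death process on the number of busy channels.
    New calls are admitted only while fewer than (c_total - g_guard) channels are
    busy; handoff calls are admitted while any channel is free."""
    threshold = c_total - g_guard
    pi = np.zeros(c_total + 1)            # unnormalised stationary probabilities
    pi[0] = 1.0
    for k in range(c_total):
        arrival = lam_new + lam_ho if k < threshold else lam_ho
        pi[k + 1] = pi[k] * arrival / ((k + 1) * mu)
    pi /= pi.sum()
    p_block_new = pi[threshold:].sum()    # new call arrives with >= threshold busy
    p_drop_handoff = pi[c_total]          # handoff arrives with all channels busy
    return p_block_new, p_drop_handoff

# Illustrative numbers only: 20 channels, 2 guard channels, new-call rate 8/min,
# handoff rate 2/min, mean call holding time 1.5 min.
pb, pd = guard_channel_probs(20, 2, lam_new=8.0, lam_ho=2.0, mu=1 / 1.5)
```

Sweeping the number of guard channels in such a model shows the basic trade-off the thesis optimises: a lower handoff dropping probability at the cost of a higher new-call blocking probability.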
29

Saba, R. "Cochlear implant modelling : stimulation and power consumption". Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/348818/.

Full text source
Abstract:
Cochlear implants have been shown to successfully restore hearing to the profoundly deaf. Despite this achievement, issues remain concerning the power consumption and the accuracy of stimulation. This thesis is mainly concerned with investigating the spread of stimulation voltage within the cochlea. The power required to generate the stimulus is also investigated, as is the feasibility of powering a fully implanted cochlear implant by harvesting energy from head motion. Several different models have been used to study the voltage distribution within the cochlea due to electrical stimulation from individual electrodes of a cochlear implant. A resistive cable model is first used to illustrate the fall-off of the voltage with distance at the electrode positions along the cochlea. A three-dimensional finite element model of the cochlea is then developed to obtain the voltage distribution at positions closer to the site of neural stimulation. This model is used to demonstrate the way that the voltage distribution varies with the geometry of the cochlea and the electrode array. It was found that placing the return electrode of the implant within the modiolus, as opposed to outside the cochlea, resulted in higher stimulation for the same current input, which reduces the power requirements. The model has also been used to investigate the consequences of a current-steering, or stimulation-focussing, strategy that has previously been proposed. A generalisation of this strategy is suggested, whereby impedance information at the neural level, along the path of the spiral ganglion, is used to optimise the focussed voltage distribution at the target neurons. The power consumption of various stimulation strategies is then estimated in order to assess their energy efficiency. Strategies are defined by parameters such as stimulation rate and number of active channels. The feasibility of harvesting electrical energy from head motion to power a fully implanted cochlear implant has also been investigated. It was demonstrated that more power could be harvested from higher harmonics, but that this would be sensitive to walking speed. The practical approach is to have a heavily damped device that is insensitive to walking speed.
30

Pitton, Anne-Cécile. "Contribution à la ré-identification de véhicules par analyse de signatures magnétiques tri-axiales mesurées par une matrice de capteurs". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT005/document.

Full text source
Abstract:
Vehicle re-identification gives access to two essential inputs for dynamic traffic management: travel times and origin-destination matrices. In this thesis, we chose to re-identify vehicles by analysing their magnetic signatures measured with several 3-axis magnetic sensors located on the road. A magnetic signature is created by the vehicle magnetization. Therefore, the vehicle orientation to the Earth's magnetic field (which determines the induced magnetization) and the variation of the lateral position of the vehicle relative to the sensors might both have an impact on the magnetic signature. We gathered our experiments' results into a database of magnetic signatures that we used to evaluate the performances of the two vehicle re-identification methods we developed. The first method is a direct comparison of pairs of magnetic signatures measured by the sensors. Distances between pairs of signatures are computed using classic algorithms such as the Euclidean distance. This method's results are very positive, and the vehicle's change of orientation has only a slight impact on them. However, the distortion of signals due to a lateral offset in the vehicle position has a strong impact on the results. As a consequence, sensors have to be placed every 0.20m over the road's entire width. The second proposed method compares pairs of vehicles' magnetic models. Those models are composed of several magnetic dipoles and are determined from the measured signatures. Magnetic modelling aims to suppress the influence of the vehicle lateral position on the results by assessing the relative position of the vehicle above the sensors. Although the vehicle orientation has slightly more impact on the performances than with the first method, the overall results are more promising. This method also allows us to halve the number of sensors used.
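The first method in the abstract compares measured signatures directly using classic distances such as the Euclidean distance; the sketch below shows that idea in its simplest form. The fixed-length, time-aligned signature arrays are an assumption made for illustration; the thesis works with raw multi-sensor measurements and evaluates several distance algorithms.

```python
import numpy as np

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two tri-axial magnetic signatures.
    Each signature is an (n_samples, 3) array of Bx, By, Bz values, assumed
    here to be resampled to the same length and aligned in time."""
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    return float(np.linalg.norm(sig_a - sig_b))

def re_identify(query_sig, candidate_sigs):
    """Return the index of the stored signature closest to the query signature."""
    distances = [signature_distance(query_sig, c) for c in candidate_sigs]
    return int(np.argmin(distances))
```

A downstream station would compute `re_identify` against the signatures recorded upstream; the time difference between the matched detections then yields a travel-time sample of the kind used for the origin-destination estimation mentioned above.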
31

Macdougall, Lindsey C. "Mathematical modelling of retinal metabolism". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/30615/.

Full text source
Abstract:
Age-related macular degeneration and diabetic retinopathy, in which the cells at the back of the eye degrade due to age and diabetes respectively, are prevalent causes of vision loss in adults. We formulate mathematical models of retinal metabolic regulation to investigate defects that may be responsible for pathology. Continuum PDE models are developed to test whether rod photoreceptors, light detecting cells in the eye, may regulate their energy demand by adapting their length under light and dark conditions. These models assume photoreceptor length depends on the availability of nutrients, such as oxygen, which diffuse and are consumed within the photoreceptor. Our results suggest that the length is limited by oxygen and phosphocreatine shuttle-derived ATP under dark and light conditions respectively. Parameter sensitivity analysis indicates that lowered mitochondrial efficiency due to ageing may be responsible for the damage to and death of photoreceptors that are characteristic of age-related macular degeneration. In the latter part of this thesis we shift our focus to the inner retina and examine how metabolite levels in the tissue surrounding the neurons (highly sensitive, excitable cells that transmit electrical signals) are regulated by glial cells. For instance, stimulated neurons activate their neighbours via the release of the neurotransmitter glutamate, while glial cells regulate neuronal activity via glutamate uptake. Diabetes produces large fluctuations in blood glucose levels, and eventually results in neuronal cell death, causing vision loss. We generate an ODE model for the exchange of key metabolites between neurons and surrounding cells. Using numerical and analytical techniques, we use the model to show that the fluctuations in blood glucose and metabolic changes associated with diabetes may result in abnormally high glutamate levels in the inner retina, which could lead to neuronal damage via excitotoxicity (unregulated neuronal stimulation).
32

Hill, Lisa Jayne. "Modelling and treating dysregulated fibrosis in primary open angle glaucoma". Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5683/.

Full text source
Abstract:
Introduction: A risk factor for POAG is elevated intraocular pressure induced by reduced outflow of aqueous humour through the fibrosed trabecular meshwork, causing retinal ganglion cell death. This study aimed to: 1) create a rodent model of increased intraocular pressure by inducing trabecular meshwork fibrosis; 2) evaluate the ability of the anti-fibrotic agent Decorin to diminish trabecular meshwork fibrosis and lower intraocular pressure; 3) correlate these findings to retinal ganglion cell (RGC) survival; and 4) investigate alternative translational methods for delivery of Decorin into the eye. Methods: Fibrotic agents, kaolin or TGF-β, were intracamerally injected to induce fibrosis and intraocular pressure elevation. Following this, human recombinant Decorin was administered intracamerally. Fibrosis was measured by immunohistochemistry and electron microscopy. Retinal ganglion cell counts were performed from retinal whole mounts. Results: TGF-β induced fibrosis of the trabecular meshwork, leading to IOP elevation and retinal ganglion cell death, thus providing a reliable in vivo model of ocular fibrosis. Decorin reversed the TGF-β induced fibrosis in the trabecular meshwork, lowered intraocular pressure and indirectly prevented retinal ganglion cell death. Decorin was successfully delivered to the anterior chamber in an eye drop formulation. Conclusion: Decorin may be an effective treatment for preserving visual function in patients with POAG.
33

Anilkumar, A. K. "NEW PERSPECTIVES FOR ANALYZING THE BREAKUP, ENVIRONMENT, EVOLUTION, COLLISION RISK AND REENTRY OF SPACE DEBRIS OBJECTS". Thesis, Indian Institute of Science, 2004. https://etd.iisc.ac.in/handle/2005/80.

Full text source
Abstract:
In the space surrounding the earth there are two major regions where orbital debris causes concern. They are the Low Earth Orbits (LEO) up to about 2000 km, and Geosynchronous Orbits (GEO) at an altitude of around 36000 km. The impact of the debris accumulations is in principle the same in the two regions; nevertheless they require different approaches and solutions, because the dominant perturbations differ: orbital decay due to atmospheric drag effects predominates in LEO, while gravitational forces, including the earth's oblateness and luni-solar effects, dominate in GEO. In LEO it is generally known that the debris population dominates even the natural meteoroid population for object sizes 1 mm and larger. This thesis focuses mainly on the LEO region. Between the first satellite breakup in 1961 and 01 January 2003, more than 180 spacecraft and rocket bodies are known to have fragmented in orbit. The resulting debris fragments constitute nearly 40% of the 9000 or more objects presently tracked and catalogued by USSPACECOM. The catalogued fragment count does not include the much more numerous fragments which are too small to be detected from the ground. Hence, in order to describe the trackable orbital debris environment, it is important to develop mathematical models to simulate the trackable fragments and later expand them to untrackable objects. Apart from the need to better characterize the orbital debris environment down to sub-millimeter particles, there is also a pressing need for simulation tools able to model in a realistic way the long-term evolution of space debris, to highlight areas which require further investigation, and to study the actual mitigation effects of space policy measures. The present thesis has provided newer perspectives for five major issues in space debris modeling studies. The issues are (i) breakup modeling, (ii) environment modeling, (iii) evolution of the debris environment, (iv) collision probability analysis and (v) reentry prediction. Chapter 1 briefly gives an overview of the space debris environment and the issues associated with the growing space debris populations. A literature survey of important earlier work on the above-mentioned five issues is provided in Chapter 2. The new contributions of the thesis commence from Chapter 3, which proposes a new breakup model to simulate the creation of debris objects by explosion in LEO, named "A Semi Stochastic Environment Modeling for Breakup in LEO" (ASSEMBLE). This model is based on a study of the characteristics of the fragments from on-orbit breakups as provided in the TLE sets for the Indian PSLV-TES mission spent upper stage breakup. It turned out that, based on the physical mechanisms in the breakup process, the apogee and perigee heights (limited by the breakup altitude) closely fit suitable Laplace distributions and the eccentricity follows a lognormal distribution. The location parameters of these depend on the orbit of the parent body at the time of breakup and their scale parameters on the intensity of the explosion. The distribution of the ballistic coefficient in the catalogue was also found to follow a lognormal distribution. These observations were used to arrive at the proper physical, aerodynamic, and orbital characteristics of the fragments. Subsequently the model has been applied as an inverse problem to simulate, and further validate it against, some typical well-known historical on-orbit fragmentation events.
All the simulated results compare quite well with the observations, both at the time of breakup and at a later epoch. The model is called semi-stochastic in nature since the size and mass characteristics have to be obtained from empirical relations; it is capable of simulating the complete scenario of the breakup. A new stochastic environment model of the debris scenario in LEO that is simple and impressionistic in nature, named SIMPLE, is proposed in Chapter 4. Firstly, the distributions of the orbital elements of the debris, namely the altitude, perigee height, eccentricity and ballistic coefficient values in the TLE sets for each of the years, were analyzed to arrive at their characteristic probability distributions. It is observed that the altitude distribution for the number of fragments exhibits peaks, and it turned out that such a feature can best be modeled with a tertiary mixture of Laplace distributions with eight parameters. It was noticed that no statistically significant variations could be observed for the parameters across the years. Hence it is concluded that the probability density function of the altitude distribution of the debris objects has some kind of equilibrium and follows a three-component mixture of Laplace distributions. For the eccentricity ‘e’ and the ballistic parameter ‘B’ values, the present analysis showed that they could be fitted acceptably well by lognormal distributions with two parameters. In the case of eccentricity, too, the describing parameter values do not vary much across the years. But for the parameters of the B distribution there is some trend across the years, which may perhaps be attributed to causes such as the decay effect, the miniaturization of space systems and even the uncertainty in the measurement data of B. However, in the absence of a definitive cause for the variation across the years, it turns out to be best to take the most recent value as the model value. Lastly, the same kind of analysis has also been carried out with respect to the various inclination bands. Here the orbital parameters are analyzed with respect to the inclination bands, as is done in ORDEM (Kessler et al 1997, Liou et al 2001), for near-circular orbits in LEO. The five inclination bands considered here are 0-36 deg (in ORDEM they consider 19-36 deg, and did not consider 0-19 deg), 36-61 deg, 61-73 deg, 73-91 deg and 91-180 deg, and corresponding to each band, the altitude, eccentricity and B values were modeled. It is found that for the third band the models with a single Laplace distribution for altitude and lognormal distributions for eccentricity and B fit quite well. The altitude of the other bands is modeled using a tertiary mixture of Laplace distributions, with ‘e’ and ‘B’ once again following lognormal distributions. The number of parameter values in SIMPLE is, in general, just 8 for each description of the altitude or perigee distributions, whereas in ORDEM96 it is more. The present SIMPLE model closely captures all the peak densities without losing accuracy at other altitudes. Chapter 5 treats the evolution of the debris objects generated by on-orbit breakups. A novel approach based on the propagation of an equivalent fragment in a three-dimensional bin of semi-major axis, eccentricity, and ballistic coefficient (a, e, B), together with a constant-gain Kalman filter technique, is described in this chapter.
This new approach propagates the number density in a bin of ‘a’ and ‘e’ rapidly and accurately without propagating each and every one of the space debris objects in the bin. It is able to assimilate the information from other breakups as well with the passage of time. Further, this approach expands the scenario to provide suitable equivalent ballistic coefficient values for the conglomeration of the fragments in the various bins. The heart of the technique is to use a constant Kalman gain filter, which is optimal for tracking the dynamically evolving fragment scenario, and further to expand the scenario to provide time-varying equivalent ballistic coefficients for the various bins. In Chapter 6, a new approach for collision probability assessment is presented, utilizing the closed-form solution of Wiesel (1989) by way of a three-dimensional look-up table, which takes into account only the air drag effect and an exponential model of the atmosphere. This approach can serve as a reference collision probability assessment tool for the LEO debris cloud environment. It takes into account the dynamical behavior of the debris objects' propagation, and the model utilizes a simple propagation for quick assessment of collision probability. This chapter also brings out a comparison of presently available collision probability assessment algorithms based on their complexities, application areas and the sample spaces on which they operate. Further, a quantitative assessment of the collision probability estimates from the different presently available methods is carried out, and the obtained collision probabilities match qualitatively. Chapter 7 once again utilizes the efficient and robust constant Kalman gain filter approach, which is able to handle the many uncertain, variable, and complex features existing in the scenario, to predict the reentry time of risk objects. The constant gain, obtained by using only a simple orbit propagator considering drag alone, is capable of handling the other modeling errors in a real-life situation. A detailed validation of the approach was carried out based on a few recently reentered objects, and comparisons of the results with the predictions of other agencies during IADC reentry campaigns are also presented. The final Chapter 8 provides the conclusions based on the present work, together with suggestions for future efforts needed in the study of space debris. The application of the techniques evolved in the present work to other areas, such as atmospheric data assimilation and forecasting, has also been suggested.
Vikram Sarabhai Space Centre, Trivandrum
Style APA, Harvard, Vancouver, ISO itp.
34

Anilkumar, A. K. "NEW PERSPECTIVES FOR ANALYZING THE BREAKUP, ENVIRONMENT, EVOLUTION, COLLISION RISK AND REENTRY OF SPACE DEBRIS OBJECTS". Thesis, Indian Institute of Science, 2004. http://hdl.handle.net/2005/80.

Pełny tekst źródła
Streszczenie:
Vikram Sarabhai Space Centre, Trivandrum
In the space surrounding the earth there are two major regions where orbital debris causes concern: the Low Earth Orbits (LEO) up to about 2000 km, and the Geosynchronous Orbits (GEO) at an altitude of around 36000 km. The impact of debris accumulation is in principle the same in the two regions; nevertheless they require different approaches and solutions, because the dominant perturbations differ: orbital decay due to atmospheric drag predominates in LEO, whereas gravitational forces, including the earth's oblateness and luni-solar effects, dominate in GEO. In LEO the debris population is known to exceed even the natural meteoroid population for object sizes of 1 mm and larger. This thesis focuses mainly on the LEO region. Between the first satellite breakup in 1961 and 1 January 2003, more than 180 spacecraft and rocket bodies are known to have fragmented in orbit. The resulting debris fragments constitute nearly 40% of the 9000 or more objects presently tracked and catalogued by USSPACECOM. The catalogued fragment count does not include the much more numerous fragments that are too small to be detected from the ground. Hence, in order to describe the trackable orbital debris environment, it is important to develop mathematical models that simulate the trackable fragments and can later be extended to untrackable objects. Apart from the need to better characterize the orbital debris environment down to sub-millimetre particles, there is also a pressing need for simulation tools able to model the long-term evolution of space debris realistically, to highlight areas that require further investigation, and to study the actual mitigation effects of space policy measures. The present thesis provides new perspectives on five major issues in space debris modeling studies: (i) breakup modeling, (ii) environment modeling, (iii) evolution of the debris environment, (iv) collision probability analysis and (v) reentry prediction. Chapter 1 gives a brief overview of the space debris environment and the issues associated with the growing debris population. A literature survey of important earlier work on the five issues mentioned above is provided in Chapter 2. The new contributions of the thesis commence in Chapter 3, which proposes a new model to simulate the creation of debris objects by explosion in LEO, named "A Semi Stochastic Environment Modeling for Breakup in LEO" (ASSEMBLE). This model is based on a study of the characteristics of the fragments from on-orbit breakups as provided in the TLE sets for the Indian PSLV-TES mission spent upper stage breakup. Consistent with the physical mechanisms of the breakup process, the apogee and perigee heights (limited by the breakup altitude) closely fit suitable Laplace distributions and the eccentricity follows a lognormal distribution; the location parameters of these distributions depend on the orbit of the parent body at the time of breakup, and their scale parameters on the intensity of the explosion. The distribution of the ballistic coefficient in the catalogue was also found to follow a lognormal distribution. These observations were used to arrive at the proper physical, aerodynamic, and orbital characteristics of the fragments. The model was subsequently applied as an inverse problem to simulate, and thereby further validate, several well-known historical on-orbit fragmentation events.
All the simulated results compare quite well with the observations, both at the time of breakup and at a later epoch. The model is termed semi-stochastic because the size and mass characteristics have to be obtained from empirical relations; it is capable of simulating the complete breakup scenario. Chapter 4 proposes SIMPLE, a new stochastic environment model of the debris scenario in LEO that is simple and impressionistic in nature. First, the distributions of the orbital elements of the debris, namely altitude, perigee height, eccentricity and ballistic coefficient, were analyzed from the TLE data sets of each year to arrive at their characteristic probability distributions. The altitude distribution of the number of fragments exhibits peaks, and this feature is best modeled by a three-component mixture of Laplace distributions with eight parameters. No statistically significant variation in these parameters was observed across the years; hence it is concluded that the probability density function of the altitude distribution of the debris objects has reached a kind of equilibrium and follows a three-component mixture of Laplace distributions. The eccentricity 'e' and ballistic parameter 'B' values are acceptably well fitted by two-parameter lognormal distributions. For eccentricity the describing parameters likewise vary little across the years. The parameters of the B distribution, however, show some trend across the years, which may be attributed to causes such as decay effects, miniaturization of space systems and uncertainty in the B measurement data; in the absence of a definitive cause for this variation, the most recent values are taken as the model values. Lastly, the same analysis was carried out for the various inclination bands, following the treatment of near-circular orbits in LEO used in ORDEM (Kessler et al. 1997, Liou et al. 2001). The five inclination bands considered here are 0-36 deg (ORDEM considers 19-36 deg and omits 0-19 deg), 36-61 deg, 61-73 deg, 73-91 deg and 91-180 deg, and for each band the altitude, eccentricity and B values were modeled. For the third band a single Laplace distribution for altitude and lognormal distributions for eccentricity and B fit quite well; the altitude in the other bands is modeled using a three-component mixture of Laplace distributions, with 'e' and 'B' once again following lognormal distributions. The number of parameter values in SIMPLE is, in general, just eight for each altitude or perigee distribution, whereas ORDEM96 requires more. The SIMPLE model captures all the peak densities closely without losing accuracy at other altitudes. Chapter 5 treats the evolution of the debris objects generated by on-orbit breakups. It describes a novel approach based on the propagation of an equivalent fragment in a three-dimensional bin of semi-major axis, eccentricity and ballistic coefficient (a, e, B), together with a constant-gain Kalman filter technique.
This new approach propagates the number density in a bin of 'a' and 'e' rapidly and accurately without propagating each and every debris object in the bin. It is also able to assimilate information from further breakups with the passage of time. The approach additionally provides suitable equivalent ballistic coefficient values for the conglomeration of fragments in the various bins. The heart of the technique is a constant-gain Kalman filter, which is well suited to tracking the dynamically evolving fragment scenario and to providing time-varying equivalent ballistic coefficients for the various bins. Chapter 6 presents a new approach for collision probability assessment utilizing the closed-form solution of Wiesel (1989) by way of a three-dimensional look-up table, which accounts only for the air drag effect with an exponential model of the atmosphere. This approach can serve as a reference collision probability assessment tool for the LEO debris cloud environment; it takes into account the dynamical behavior of the propagating debris objects and uses a simple propagation for quick assessment of collision probability. The chapter also compares presently available collision probability assessment algorithms in terms of their complexity, application areas and the sample spaces on which they operate. A quantitative assessment of the collision probability estimates from the different presently available methods is carried out, and the obtained collision probabilities are found to match qualitatively. Chapter 7 again utilizes the efficient and robust constant-gain Kalman filter approach, which is able to handle the many uncertain, variable and complex features of the scenario, to predict the reentry time of risk objects. The constant gain, obtained using only a simple orbit propagator that considers drag alone, is capable of handling the other modeling errors in a real-life situation. A detailed validation of the approach was carried out based on a few recently reentered objects, and comparisons of the results with the predictions of other agencies during IADC reentry campaigns are also presented. The final Chapter 8 provides the conclusions of the present work together with suggestions for future efforts needed in the study of space debris. The application of the techniques developed in this work to other areas, such as atmospheric data assimilation and forecasting, is also suggested.
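To make the statistical core of the ASSEMBLE idea concrete, the following is a minimal sketch (not taken from the thesis) of how fragment orbits might be sampled: apogee and perigee heights drawn from Laplace distributions centred on the parent orbit, a scale parameter standing in for the explosion intensity, and a lognormal ballistic coefficient. All numerical values and the function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
R_EARTH_KM = 6378.0

def sample_breakup_fragments(n, parent_perigee_km, parent_apogee_km,
                             breakup_alt_km, scale_km=25.0,
                             b_mu=-3.5, b_sigma=0.8):
    """Illustrative ASSEMBLE-style sampling of fragment orbits.

    Apogee and perigee heights follow Laplace distributions located at the
    parent orbit, with the scale standing in for explosion intensity; the
    ballistic coefficient follows a lognormal.  The parameter values here
    are made-up placeholders, not those identified in the thesis."""
    apogee = rng.laplace(parent_apogee_km, scale_km, n)
    perigee = rng.laplace(parent_perigee_km, scale_km, n)
    # Every fragment passes through the breakup point, so geometrically
    # perigee <= breakup altitude <= apogee.
    perigee = np.minimum(perigee, breakup_alt_km)
    apogee = np.maximum(apogee, breakup_alt_km)
    r_a, r_p = R_EARTH_KM + apogee, R_EARTH_KM + perigee
    a = 0.5 * (r_a + r_p)                    # semi-major axis [km]
    e = (r_a - r_p) / (r_a + r_p)            # eccentricity from the heights
    B = rng.lognormal(mean=b_mu, sigma=b_sigma, size=n)  # ballistic coefficient
    return a, e, B

a, e, B = sample_breakup_fragments(500, parent_perigee_km=550.0,
                                   parent_apogee_km=675.0, breakup_alt_km=670.0)
print(a.mean(), e.mean(), B.mean())
```

The eccentricities derived from the Laplace-distributed heights come out small and right-skewed, which is qualitatively consistent with the lognormal eccentricity behaviour reported in the abstract.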
Style APA, Harvard, Vancouver, ISO itp.
35

Weerheim, Marieke S. "Distribution patterns and habitat use of black cockatoos (Calyptorhynchus spp.) in modified landscapes in the south-west of Western Australia". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/126.

Pełny tekst źródła
Streszczenie:
Conservation planning for animal species inhabiting modified landscapes requires understanding of where animals occur and how they utilise both natural and modified habitats. In this study the distribution and foraging behaviour of the forest red-tailed cockatoo (Calyptorhynchus banksii naso), Baudin’s cockatoo (C. baudinii) and Carnaby’s cockatoo (C. latirostris) were investigated in three study areas which each contained a different combination of modified habitats. Pickering Brook contained native forest and orchards, Wungong contained a mosaic of native forest and revegetation, while Karnet contained primarily native forest and paddocks. The relationship between cockatoo distribution and land use types was examined by constructing Generalised Linear Models based on bird counts and land use data along 90.5 km of road transects. The Akaike Information Criterion (AIC) was used to select a set of the most parsimonious candidate models. Models were constructed at two scales: regional models incorporated the datasets of all three study areas, while study area models used the datasets of single study areas. Models for the forest red-tailed cockatoo indicated selection against young post-1988 revegetation. This response was apparent at both the regional scale and within the Wungong study area. Baudin’s cockatoo selected in favour of orchards at the regional scale, but its distribution was unrelated to any land use variable within the (orchard-rich) Pickering Brook study area. No models were constructed for Carnaby’s cockatoo due to a limited number of observations for this species. Feeding observations demonstrated the importance of the native eucalypts marri (Corymbia calophylla) and jarrah (Eucalyptus marginata) as a food source for the forest red-tailed cockatoo and Baudin’s cockatoo. In contrast, Carnaby’s cockatoo fed most frequently in plantations of introduced pine (Pinus spp.). Contrary to model predictions, Baudin’s cockatoo was never observed feeding in apple orchards during the study. This discrepancy may be due to the timing of the surveys outside the hours when Baudin’s cockatoo feeds in orchards, or it could indicate that orchards are of limited importance as a food source. Forest red-tailed cockatoos consistently fed on particular marri trees while ignoring others, but this selectivity was unrelated to fruit morphology or seed nutrient content. Instead, foraging patterns may have been driven by ingrained habits, or by variation in the concentration of secondary compounds. In conservation efforts, identification of critical habitats is an important first step. This study highlighted the importance of studying habitat selection and constructing management plans at an appropriate scale, relative to the range of the target species. Wide-ranging species like black cockatoos require regional-scale protection of important broad vegetation types such as the northern jarrah forest, combined with landscape-scale protection and restoration – for instance during post-mining revegetation – of specific feeding habitat and food species, such as pine for Carnaby’s cockatoo and possibly Fraser’s sheoak (Allocasuarina fraseriana) for the forest red-tailed cockatoo.
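As a rough illustration of the modelling approach described above (and not of the thesis’s actual data), the sketch below fits Poisson GLMs of cockatoo counts against hypothetical land-use proportions with statsmodels and compares candidate models by AIC; the column names and numbers are invented.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical transect table: cockatoo counts per transect segment plus the
# proportion of each land use type along that segment (all values invented).
df = pd.DataFrame({
    "count":         [0, 3, 1, 7, 2, 0, 5, 4],
    "native_forest": [0.9, 0.6, 0.8, 0.2, 0.5, 0.95, 0.3, 0.4],
    "revegetation":  [0.0, 0.3, 0.1, 0.1, 0.4, 0.0, 0.2, 0.1],
    "orchard":       [0.1, 0.1, 0.1, 0.7, 0.1, 0.05, 0.5, 0.5],
})

def fit_poisson_glm(data, predictors):
    """Fit a Poisson GLM of bird counts against the chosen land-use predictors."""
    X = sm.add_constant(data[predictors])
    return sm.GLM(data["count"], X, family=sm.families.Poisson()).fit()

# Candidate models compared by AIC, in the spirit of the study's model selection.
candidates = {
    "forest only":           ["native_forest"],
    "forest + revegetation": ["native_forest", "revegetation"],
    "orchard only":          ["orchard"],
}
for name, preds in candidates.items():
    res = fit_poisson_glm(df, preds)
    print(f"{name:25s} AIC = {res.aic:.1f}")
```

The model with the lowest AIC would be retained as the most parsimonious description of the count data, mirroring the selection procedure described in the abstract.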
Style APA, Harvard, Vancouver, ISO itp.
36

Blyth, Andrew John Charles. "Enterprise modelling and its application to organisational requirements, capture and definition". Thesis, University of Newcastle Upon Tyne, 1995. http://hdl.handle.net/10443/1954.

Pełny tekst źródła
Streszczenie:
Computers have gone from being solely large number crunching machines to small devices capable of performing a myriad of functions in a very small space of time. Computers are now used to control just about every facet of daily life; they can now be found in automobiles, washing machines and home heating systems. This rapid diversification brings a great many problems. Traditional software engineering methodologies are failing to meet and address these new problems. The goal of this thesis is to develop a new approach to organisational requirements engineering. A new modelling approach to representing organisations will be developed which will draw upon the concepts of a systems architecture, modelling the life cycle of responsibilities and the execution of conversations. Using this architecture an organisation will be able to embed social and cultural aspects within the modelling notation. From the modelling of responsibilities a clearer picture of the organisation's aims, objectives and policies will be developed along with a definition of what objects and access rights are required in order for the organisation to function. Using speech act and Petri net based models to model conversations a clearer understanding of the dynamics and constraints governing organisational behaviour can be developed. These modelling approaches are then applied to two real life case studies in order to demonstrate and evaluate their performance and usefulness.
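To give a flavour of the Petri-net conversation modelling mentioned above, here is a minimal place/transition-net sketch. The request–promise–perform–declare loop and all names are illustrative assumptions, not the notation developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    """A tiny place/transition net used to illustrate how a conversation
    (request -> promise -> perform -> declare complete) can be modelled."""
    marking: dict = field(default_factory=dict)       # place -> token count
    transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(marking={"requested": 1})
net.add_transition("promise", ["requested"], ["promised"])
net.add_transition("perform", ["promised"],  ["performed"])
net.add_transition("declare", ["performed"], ["complete"])
for t in ("promise", "perform", "declare"):
    net.fire(t)
print(net.marking)   # the conversation ends with one token in 'complete'
```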
Style APA, Harvard, Vancouver, ISO itp.
37

Wade, Andrew John. "Assessment and modelling of water chemistry in a large catchment, River Dee, NE Scotland". Thesis, University of Aberdeen, 1999. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU133417.

Pełny tekst źródła
Streszczenie:
This thesis describes the water chemistry of the River Dee and its tributaries, and the potential water chemistry changes that may occur under acid deposition and land use change scenarios. Historic water quality and flow records were collated and supplemented with new water chemistry data. These data were analysed in relation to catchment geography and river flow using both mathematical modelling and novel, GIS based techniques. This analysis established the importance of diffuse inputs and highlighted differences between upland and lowland regions in the catchment. In headwater streams, different geological types create hydrochemical source areas that strongly influence stream chemistry whilst in lowland tributaries, agricultural sources are particularly important. In the upland region most major ions were diluted as flows increased, further emphasizing the influence of deeper geological sources on baseflow chemistry, but showing soilwater controls on stormflow composition. The headwaters, which drain predominantly acid rocks, are presently oligotrophic but threatened by the impact of acid deposition and land use change (re-afforestation). In some of the lowland tributaries, increased NO3-N concentrations have resulted from more intensive land management. The potential impacts of acid deposition and land use change were simulated in both upland and lowland catchments by considering existing and new models within a Functional Unit Network. For upland regions this consisted of developing a new, two component hydrochemical mixing model to simulate the spatial and flow-related variations in streamwater acidity. The mixing model was based on End Member Mixing Analysis (EMMA), and site specific end members (alkalinity and Ca) could be predicted from emergent catchment characteristics (soil and land use) using linear regression.
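The two-component mixing idea described above can be sketched in a few lines, assuming alkalinity as the conservative tracer and invented end-member values; this is not the calibrated EMMA model from the thesis.

```python
def two_component_mix(c_stream, c_end1, c_end2):
    """Fraction of end member 1 in a stream sample from a conservative tracer:
        c_stream = f * c_end1 + (1 - f) * c_end2
        =>  f = (c_stream - c_end2) / (c_end1 - c_end2)
    """
    return (c_stream - c_end2) / (c_end1 - c_end2)

# Hypothetical end members (ueq/l): groundwater-fed baseflow vs acidic soil water.
baseflow_alk, soilwater_alk = 400.0, 20.0
for alk in (350.0, 150.0, 60.0):
    f = two_component_mix(alk, baseflow_alk, soilwater_alk)
    print(f"stream alkalinity {alk:5.0f} -> baseflow fraction {f:.2f}")
```

In the thesis the site-specific end-member concentrations are themselves predicted from catchment characteristics by linear regression; here they are simply fixed numbers for illustration.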
Style APA, Harvard, Vancouver, ISO itp.
38

TOBON, VASQUEZ JORGE ALBERTO. "Efficient Electromagnetic Modelling of Complex Structures". Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2555144.

Pełny tekst źródła
Streszczenie:
Part 1. Space vehicles re-entering the earth's atmosphere produce a shock wave, which in turn results in a bow of plasma around the vehicle body. This plasma significantly affects all radio links between the vehicle and the ground, since the electron plasma frequency reaches beyond several GHz. In this work, a model of the radio propagation through this plasma is developed. The radio-frequency propagation from/to antennae installed aboard the vehicle to the ground stations (or Data Relay Satellites) can be predicted, and the positions of these antennae improved before a mission launch. Part 2. The Surface Integral Equation is one of the most widely used methods in the simulation of electromagnetic problems. The method uses a discretized description of the surface, on which a number of basis functions are defined. In the case of multi-scale structures, the test object has regions with fine detail that require a fine mesh, together with flat surfaces where the current can be properly described with a coarser mesh. The goal of this work is to develop an automatic tool that identifies the regions to be refined in an initial coarse mesh (defined only by geometry) using the electromagnetic characteristics of the problem. This avoids using more unknowns than are actually needed (reducing the computational cost) and permits the same geometric mesh to be used as a base for different problems, adapting to each electromagnetic incidence automatically.
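As a small worked example related to the blackout problem of Part 1 (and not the propagation model developed in the thesis), the sketch below evaluates the standard electron plasma frequency and checks whether a carrier frequency falls below it; the electron density value is an assumption.

```python
import math

E_CHARGE = 1.602176634e-19     # C
E_MASS   = 9.1093837015e-31    # kg
EPS0     = 8.8541878128e-12    # F/m

def plasma_frequency_hz(n_e_per_m3):
    """Electron plasma frequency f_p = (1/2*pi) * sqrt(n_e * e^2 / (eps0 * m_e))."""
    return math.sqrt(n_e_per_m3 * E_CHARGE**2 / (EPS0 * E_MASS)) / (2.0 * math.pi)

def link_blocked(n_e_per_m3, carrier_hz):
    """A link is (roughly) cut off when the carrier sits below f_p."""
    return carrier_hz < plasma_frequency_hz(n_e_per_m3)

n_e = 1.0e18   # electrons/m^3, an assumed re-entry sheath value for illustration
print(f"f_p = {plasma_frequency_hz(n_e) / 1e9:.2f} GHz")
print("S-band link at 2.2 GHz blocked:", link_blocked(n_e, 2.2e9))
```

With the assumed density the plasma frequency is around 9 GHz, which is consistent with the abstract's remark that it can exceed several GHz and disrupt typical telemetry bands.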
Style APA, Harvard, Vancouver, ISO itp.
39

Brümmer, Anneke. "Mathematical modelling of DNA replication". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2010. http://dx.doi.org/10.18452/16212.

Pełny tekst źródła
Streszczenie:
Bevor sich eine Zelle teilt muss sie ihr gesamtes genetisches Material verdoppeln. Eukaryotische Genome werden von einer Vielzahl von Replikationsstartpunkten, den sogenannten Origins, aus repliziert, die über das gesamte Genome verteilt sind. In dieser Arbeit wird der zugrundeliegende molekulare Mechanismus quantitativ analysiert, der für die nahezu simultane Initiierung der Origins exakt ein Mal pro Zellzyklus verantwortlich ist. Basierend auf umfangreichen experimentellen Studien, wird zunächst ein molekulares regulatorisches Netzwerk rekonstruiert, welches das Binden von Molekülen an die Origins beschreibt, an denen sich schließlich komplette Replikationskomplexe (RKs) bilden. Die molekularen Reaktionen werden dann in ein Differentialgleichungssystem übersetzt. Um dieses mathematische Modell zu parametrisieren, werden gemessene Proteinkonzentrationen als Anfangswerte verwendet, während kinetische Parametersätze in einen Optimierungsverfahren erzeugt werden, in welchem die Dauer, in der sich eine Mindestanzahl von RKs gebildet hat, minimiert wird. Das Modell identifiziert einen Konflikt zwischen einer schnellen Initiierung der Origins und einer effizienten Verhinderung der DNA Rereplikation. Modellanalysen deuten darauf hin, dass eine zeitlich verzögerte Origininitiierung verursacht durch die multiple Phosphorylierung der Proteine Sic1 und Sld2 durch Cyclin-abhängige Kinasen, G1-Cdk bzw. S-Cdk, essentiell für die Lösung dieses Konfliktes ist. Insbesondere verschafft die Mehrfach-Phosphorylierung von Sld2 durch S-Cdk eine zeitliche Verzögerung, die robust gegenüber Veränderungen in der S-Cdk Aktivierungskinetik ist und außerdem eine nahezu simultane Aktivierung der Origins ermöglicht. Die berechnete Verteilung der Fertigstellungszeiten der RKs, oder die Verteilung der Originaktivierungszeiten, wird auch genutzt, um die Konsequenzen bestimmter Mutationen im Assemblierungsprozess auf das Kopieren des genetischen Materials in der S Phase des Zellzyklus zu simulieren.
Before a cell divides it has to duplicate its entire genetic material. Eukaryotic genomes are replicated from multiple replication origins across the genome. This work is focused on the quantitative analysis of the underlying molecular mechanism that allows these origins to initiate DNA replication almost simultaneously and exactly once per cell cycle. Based on a vast amount of experimental findings, a molecular regulatory network is constructed that describes the assembly of the molecules at the replication origins that finally form complete replication complexes. Using mass–action kinetics, the molecular reactions are translated into a system of differential equations. To parameterize the mathematical model, the initial protein concentrations are taken from experimental data, while kinetic parameter sets are determined using an optimization approach, in particular a minimization of the duration, in which a minimum number of replication complexes has formed. The model identifies a conflict between the rapid initiation of replication origins and the efficient inhibition of DNA rereplication. Analyses of the model suggest that a time delay before the initiation of DNA replication provided by the multiple phosphorylations of the proteins Sic1 and Sld2 by cyclin-dependent kinases in G1 and S phase, G1-Cdk and S-Cdk, respectively, may be essential to solve this conflict. In particular, multisite phosphorylation of Sld2 by S-Cdk creates a time delay that is robust to changes in the S-Cdk activation kinetics and additionally allows the near-simultaneous activation of multiple replication origins. The calculated distribution of the assembly times of replication complexes, that is also the distribution of origin activation times, is then used to simulate the consequences of certain mutations in the assembly process on the copying of the genetic material in S phase of the cell cycle.
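To illustrate how mass-action reactions translate into ODEs and how an "assembly time" objective can be read off the solution, here is a toy sketch with an invented two-step licensing/firing scheme; it is not the regulatory network reconstructed in the thesis, and all rate constants and copy numbers are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action scheme (NOT the thesis network): a licensing factor L binds an
# origin O to form a pre-replicative complex P, which kinase activity S converts
# into a fired replication complex R.
#   O + L -> P       (rate k1)
#   P + S -> R + S   (rate k2, S acts catalytically)
k1, k2 = 0.05, 0.02

def rhs(t, y):
    O, L, P, R, S = y
    v1 = k1 * O * L
    v2 = k2 * P * S
    return [-v1, -v1, v1 - v2, v2, 0.0]

y0 = [100.0, 120.0, 0.0, 0.0, 5.0]          # initial copy numbers (arbitrary)
sol = solve_ivp(rhs, (0.0, 200.0), y0, dense_output=True)

# Time at which a minimum number of replication complexes has formed -- the kind
# of quantity minimised in the parameter-optimisation step described above.
t_grid = np.linspace(0.0, 200.0, 2001)
R_t = sol.sol(t_grid)[3]
threshold = 50.0
t_min = t_grid[np.argmax(R_t >= threshold)] if (R_t >= threshold).any() else None
print("time to reach", threshold, "complexes:", t_min)
```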
Style APA, Harvard, Vancouver, ISO itp.
40

Fong, Sharon Mei Chan. "Examining re-patronising intentions formation : the intention-as-wants model". University of Western Australia. Graduate School of Management, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0020.

Pełny tekst źródła
Streszczenie:
Competition in the mobile services industry is intense, with players in the industry offering generally similar subscription plans. Opportunities are few for differentiating one service provider from another. In the light of prior research suggesting value is multi-dimensional, the present study, which examines how these dimensions impact customer satisfaction and repurchase intentions, provides differentiation opportunities for mobile service providers through focusing on value dimensions that are important to customers. Of six perceived value dimensions examined in the present research, value for money, reputation and social value dimensions had significant effects on customer satisfaction and repurchase intentions. One way for companies in the highly commoditized mobile service industry to minimize customer defection is to enhance their relationships with customers. However, as relationship building comes with a cost, it is of interest for companies to know whether certain customer groups will reciprocate more than others with loyalty if they are satisfied. The results from the present study show customer relationship inclination, the customer attribute examined, did not moderate the relationship between customer satisfaction and repurchase intention. Finally, recent studies have differentiated measures of repurchase intentions on the basis of volition levels and have suggested that better model fit can be achieved when higher volition measures are used. Intentions-as-expectations represents the lower volition end and intentions-as-wants represents the higher volition end of intention measure. However, the present study did not find any significant differences in model fit with the different intention measures used.
Style APA, Harvard, Vancouver, ISO itp.
41

Prashant, Prashant. "Development and Assessment of Re-Fleet Assignment Model under Environmental Considerations". Thesis, KTH, Optimeringslära och systemteori, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288864.

Pełny tekst źródła
Streszczenie:
The imminent threat of global catastrophe due to climate change becomes more real with each passing year. The aviation trade association, IATA, claims that aviation accounts for approximately 2% of the greenhouse gases (GHG) caused by human activities, and 3.5% of the total radiative forcing. With the continuous growth of the aviation industry and the accompanying drop in fossil fuel prices, these numbers are only expected to go up with time. In addition, they do not include the effects of the altitude of emission, and many environmentalists believe that the figure for some pollutants could be at least 2-3 times larger than the IATA estimates. This rising concern is prompting the aviation industry to investigate possible methods of alleviating its environmental impact.  The first part of this thesis provides a framework to support airlines in monitoring their current environmental footprint during the scheduling process. This objective is realised by developing a robust system for estimating the fuel consumed (and hence the quantity of major greenhouse gases emitted) by a particular fleet type operating a certain leg, which is then employed in a Fleet Assignment (FA) operation to reduce emissions and increase the contribution. An emissions estimation model for turbojet aeroplane fleets is created for Industrial Optimizers AB's MP2 software. The emissions estimation model uses historic fuel consumption data provided by ICAO for a given fleet type to estimate the quantity (in kg) of environmental pollutants during the landing and take-off operation (below 3000 ft) and the cruise, climb and descent operation (above 3000 ft).  The second part of this thesis concerns assigning monetary weights to the pollutant estimates to calculate an emission cost. This emission cost is then added to MP2's Fleet Assignment objective function as an additional operational cost, to perform a contribution-maximization optimization subject to the legality constraints. The effects of these monetary weights on the results of the Fleet Assignment are studied, and, utilizing curve-fitting and mathematical optimization, monetary weights are estimated for the desired reduction in GHG emissions.  Finally, a recursive algorithm based on the Newton-Raphson method is designed and tested for calculating pollutant weights for untested schedules.
Det omedelbara hotet om en global katastrof pga klimatförändringar blir mer och mer tydligt för varje år som går. IATA, den internationella flyghandelsorganisationen, hävdar att flyget står för runt 2% av växthusgaserna (GHG) som kommer från människans aktiviteter, och 3.5% av den totala avstrålningen. Med den kontinuerliga tillväxten av flygindustrin och prisminskningar av fossila bränslen så förväntas dessa andelar att öka. Dessutom så inkluderar inte dessa siffror effekten av att utsläppen sker på hög höjd, och många miljöaktivister tror att siffrorna för vissa utsläpp kan vara åtminstone 2-3 gånger högre än IATAs uppskattningar. Denna växande oro motiverar flygindustrin till att undersöka metoder för att begränsa dess miljöpåverkan.  Den första delen av denna rapport ger ett ramverk för att hjälpa flygbolag med att bevaka deras aktuella miljöavtryck under schemaläggningsprocessen. Detta mål realiseras genom att utveckla ett robust system för att uppskatta bränsleförbrukningen (och därmed kvantiteten av växthusgasutsläpp) av en specifik flygplanstyp på en given etapp, som sedan kan användas för att allokera flygplanstyper för att minska utsläppen och bidra till att förbättra miljön. En modell för att uppskatta utsläpp för flottor av turbojetflygplan har skapats för Industrial Optimizers AB programvara MP2. Modellen för att uppskatta utsläppen baseras på historiska data om bränsleförbrukning som tillhandahållits av ICAO för en given flygplanstyp som använts för att uppskatta kvantiteten (i kg) av föroreningar vid start (under 3000 fot) och vid sträckflygning, stigning och inflygning (över 3000 fot). Den andra delen av denna rapport handlar om att bestämma monetära vikter till föroreningsskattningarna för att beräkna utsläppskostnader som ska användas i MP2 s målfunktion för allokering av flygplanstyper. Detta ger en ytterligare driftskostnad att beakta i optimeringen för att få med miljöaspekterna och tillåtna lösningar. Effekten som dessa monetära vikter har på resultaten från optimeringen studeras, och genom att använda kurvanpassning och matematisk optimering, de monetära vikterna anpassas för att få den önskade minskningen i växthusgasutsläpp. Slutligen så har en rekursiv algoritm, baserad på Newon-Raphsons metod, designats och testats för att beräkna utsläppsvikter för scheman som inte använts för att beräkna vikterna
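The Newton-Raphson weight search mentioned above can be sketched as follows. The emission-versus-weight response here is an invented smooth curve standing in for re-running the fleet assignment (in the thesis this relationship comes from curve-fitting actual optimisation runs), so all numbers are illustrative.

```python
def newton_weight(target_emissions, emissions_for_weight, w0=1.0,
                  tol=1e-6, h=1e-4, max_iter=50):
    """Newton-Raphson search for the monetary weight w at which the fleet
    assignment reaches the target emission level.  The derivative is taken
    numerically because each evaluation would normally be a full optimisation run."""
    w = w0
    for _ in range(max_iter):
        f = emissions_for_weight(w) - target_emissions
        if abs(f) < tol:
            return w
        dfdw = (emissions_for_weight(w + h) - emissions_for_weight(w - h)) / (2 * h)
        w -= f / dfdw
    return w

# Stand-in response curve: total emissions (tonnes CO2) fall as the weight grows.
def emissions_for_weight(w):
    return 900.0 + 300.0 / (1.0 + w)

w_star = newton_weight(target_emissions=1000.0,
                       emissions_for_weight=emissions_for_weight)
print(f"weight achieving 1000 t: {w_star:.3f}, "
      f"check: {emissions_for_weight(w_star):.1f} t")
```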
Style APA, Harvard, Vancouver, ISO itp.
42

Nyman, Jonas. "Faster Environment Modelling and Integration into Virtual Reality Simulations". Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19800.

Pełny tekst źródła
Streszczenie:
The use of virtual reality in engineering tasks, such as virtual commissioning, has increased steadily in recent years; a robot, machine or other object of interest can be simulated and visualized. Yet, for a more immersive experience, an environment for the object in question needs to be constructed. However, the process of creating an accurate environment for a virtual simulation has remained a costly and lengthy endeavour. Because of this, many digital simulations are performed either with no environment at all or with a very basic and abstract representation of the intended environment. The aim of this thesis is to investigate whether technologies such as LiDAR and digital photogrammetry could shorten the environment creation process. A demonstrative virtual environment was therefore created and analysed, in which the different technologies were investigated and presented in the form of a comprehensive review of the current state of the technologies within digital recreation. Lastly, a technique-specific evaluation of the time requirement, cost and user difficulty was conducted. As the field of LiDAR and digital photogrammetry is too vast to investigate all forms thereof within one project, this thesis is limited to the investigation of static laser scanners and wide-lens camera photogrammetry. A semi-industrial locale was chosen for digital replication, from which static laser scans and photographs would generate semi-automated 3D models. The resulting 3D models leave much to be desired, as large holes were present throughout, since certain surfaces are not suitable for either replication process. Transparent and reflective surfaces lead to ripple effects within the 3D models' geometry and textures. Moreover, certain surfaces, such as blank areas for photogrammetry or black coloration for laser scanners, led to missing features and model distortions. Yet despite these abnormalities, the majority of the test environment was successfully re-created. An evaluation of the created environments was performed, which lists and illustrates with tables and figures the attributes, strengths and weaknesses of each technique. Moreover, technique-specific limitations were identified and a spatial analysis was carried out. The results seemingly illustrate that photogrammetry creates more visually accurate 3D models than the laser scanner, while the laser scanner produces a more spatially accurate result; as such, a selective combination of the techniques can be suggested. Observations and interviews seem to indicate that a full-scale application, in which an accurate 3D model is re-created without much effort, does not currently exist, as both photogrammetry and static laser scanning require great effort, skill and time in order to create a seemingly perfect solid model. Yet utilizing either or both techniques as a template for 3D object creation could reduce the time to create an environment significantly. Furthermore, methods such as digital 3D sculpting could be used to remove imperfections and create what is missing from the digitally constructed 3D models, thereby achieving an accurate result.
Style APA, Harvard, Vancouver, ISO itp.
43

Alzaili, Jafar S. L. "Semi-empirical approach to characterize thin water film behaviour in relation to droplet splashing in modelling aircraft icing". Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7849.

Pełny tekst źródła
Streszczenie:
Modelling the ice accretion in glaze regime for the supercooled large droplets is one of the most challenging problems in the aircraft icing field. The difficulties are related to the presence of the liquid water film on the surface in the glaze regime and also the phenomena associated with SLD conditions, specifically the splashing and re-impingement. The steady improvement of simulation methods and the increasing demand for highly optimised aircraft performance, make it worthwhile to try to get beyond the current level of modelling accuracy. A semi-empirical method has been presented to characterize the thin water film in the icing problem based on both analytical and experimental approaches. The experiments have been performed at the Cranfield icing facilities. Imaging techniques have been used to observe and measure the features of the thin water film in the different conditions. A series of numerical simulations based on an inviscid VOF model have been performed to characterize the splashing process for different water film to droplet size ratios and impact angles. Based on these numerical simulations and the proposed methods to estimate the thin water film thickness, a framework has been presented to model the effects of the splashing in the icing simulation. These effects are the lost mass from the water film due to the splashing and the re-impingement of the ejected droplets. Finally, a new framework to study the solidification process of the thin water film has been explored. This framework is based on the lattice Boltzmann method and the preliminary results showed the capabilities of the method to model the dynamics, thermodynamics and the solidification of the thin water film.
Style APA, Harvard, Vancouver, ISO itp.
44

Pedley, Katherine Louise. "Modelling Submarine Landscape Evolution in Response to Subduction Processes, Northern Hikurangi Margin, New Zealand". Thesis, University of Canterbury. Geological Sciences, 2010. http://hdl.handle.net/10092/4648.

Pełny tekst źródła
Streszczenie:
The steep forearc slope along the northern sector of the obliquely convergent Hikurangi subduction zone is characteristic of non-accretionary and tectonically eroding continental margins, with reduced sediment supply in the trench relative to further south, and the presence of seamount relief on the Hikurangi Plateau. These seamounts influence the subduction process and the structurally-driven geomorphic development of the over-riding margin of the Australian Plate frontal wedge. The Poverty Indentation represents an unusual, especially challenging and therefore exciting location to investigate the tectonic and eustatic effects on this sedimentary system because of: (i) the geometry and obliquity of the subducting seamounts; (ii) the influence of multiple repeated seamount impacts; (iii) the effects of structurally-driven over-steeping and associated widespread occurrence of gravitational collapse and mass movements; and (iv) the development of a large canyon system down the axis of the indentation. High quality bathymetric and backscatter images of the Poverty Indentation submarine re-entrant across the northern part of the Hikurangi margin were obtained by scientists from the National Institute of Water and Atmospheric Research (NIWA) (Lewis, 2001) using a SIMRAD EM300 multibeam swath-mapping system, hull-mounted on NIWA’s research vessel Tangaroa. The entire accretionary slope of the re-entrant was mapped, at depths ranging from 100 to 3500 metres. The level of seafloor morphologic resolution is comparable with some of the most detailed Digital Elevation Maps (DEM) onshore. The detailed digital swath images are complemented by the availability of excellent high-quality processed multi-channel seismic reflection data, single channel high-resolution 3.5 kHz seismic reflection data, as well as core samples. Combined, these data support this study of the complex interactions of tectonic deformation with slope sedimentary processes and slope submarine geomorphic evolution at a convergent margin. The origin of the Poverty Indentation, on the inboard trench-slope at the transition from the northern to central sectors of the Hikurangi margin, is attributed to multiple seamount impacts over the last c. 2 Myr period. This has been accompanied by canyon incision, thrust fault propagation into the trench fill, and numerous large-scale gravitational collapse structures with multiple debris flow and avalanche deposits ranging in down-slope length from a few hundred metres to more than 40 km. The indentation is directly offshore of the Waipaoa River which is currently estimated to have a high sediment yield into the marine system. The indentation is recognised as the “Sink” for sediments derived from the Waipaoa River catchment, one of two target river systems chosen for the US National Science Foundation (NSF)-funded MARGINS “Source-to-Sink” initiative. The Poverty Canyon stretches 70 km from the continental shelf edge directly offshore from the Waipaoa to the trench floor, incising into the axis of the indentation. The sediment delivered to the margin from the Waipaoa catchment and elsewhere during sea-level high-stands, including the Holocene, has remained largely trapped in a large depocentre on the Poverty shelf, while during low-stand cycles, sediment bypassed the shelf to develop a prograding clinoform sequence out onto the upper slope. 
The formation of the indentation and the development of the upper branches of the Poverty Canyon system have led to the progressive removal of a substantial part of this prograding wedge by mass movements and gully incision. Sediment has also accumulated in the head of the Poverty Canyon and episodic mass flows contribute significantly to continued modification of the indentation by driving canyon incision and triggering instability in the adjacent slopes. Prograding clinoforms lying seaward of active faults beneath the shelf, and overlying a buried inactive thrust system beneath the upper slope, reveal a history of deformation accompanied by the creation of accommodation space. There is some more recent activity on shelf faults (i.e. Lachlan Fault) and at the transition into the lower margin, but reduced (~2 %) or no evidence of recent deformation for the majority of the upper to mid-slope. This is in contrast to current activity (approximately 24 to 47% shortening) across the lower slope and frontal wedge regions of the indentation. The middle to lower Poverty Canyon represents a structural transition zone within the indentation coincident with the indentation axis. The lower to mid-slope south of the canyon conforms more closely to a classic accretionary slope deformation style with a series of east-facing thrust-propagated asymmetric anticlines separated by early-stage slope basins. North of the canyon system, sediment starvation and seamount impact has resulted in frontal tectonic erosion associated with the development of an over-steepened lower to mid-slope margin, fault reactivation and structural inversion and over-printing. Evidence points to at least three main seamount subduction events within the Poverty Indentation, each with different margin responses: i) older substantial seamount impact that drove the first-order perturbation in the margin, since approximately ~1-2 Ma ii) subducted seamount(s) now beneath Pantin and Paritu Ridge complexes, initially impacting on the margin approximately ~0.5 Ma, and iii) incipient seamount subduction of the Puke Seamount at the current deformation front. The overall geometry and geomorphology of the wider indentation appears to conform to the geometry accompanying the structure observed in sandbox models after the seamount has passed completely through the deformation front. The main morphological features correlating with sandbox models include: i) the axial re-entrant down which the Poverty Canyon now incises; ii) the re-establishment of an accretionary wedge to the south of the indentation axis, accompanied by out-stepping, deformation front propagation into the trench fill sequence, particularly towards the mouth of the canyon; iii) the linear north margin of the indentation with respect to the more arcuate shape of the southern accretionary wedge; and, iv) the set of faults cutting obliquely across the deformation front near the mouth of the canyon. Many of the observed structural and geomorphic features of the Poverty Indentation also correlate well both with other sediment-rich convergent margins where seamount subduction is prevalent particularly the Nankai and Sumatra margins, and the sediment-starved Costa Rican margin. While submarine canyon systems are certainly present on other convergent margins undergoing seamount subduction there appears to be no other documented shelf to trench extending canyon system developing in the axis of such a re-entrant, as is dominating the Poverty Indentation. 
Ongoing modification of the Indentation appears to be driven by: i) continued smaller seamount impacts at the deformation front, and currently subducting beneath the mid-lower slope, ii) low and high sea-level stands accompanied by variations on sediment flux from the continental shelf, iii) over-steepening of the deformation front and mass movement, particularly from the shelf edge and upper slope.
Style APA, Harvard, Vancouver, ISO itp.
45

Gerl, Armin. "Modelling of a privacy language and efficient policy-based de-identification". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI105.

Pełny tekst źródła
Streszczenie:
De nos jours, les informations personnelles des utilisateurs intéressent énormément les annonceurs et les industriels qui les utilisent pour mieux cibler leurs clients et pour amééliorer leurs offres. Ces informations, souvent trés sensibles, nécessitent d’être protégées pour réguler leur utilisation. Le RGPD est la législation européenne, récemment entrée en vigueur en Mai 2018 et qui vise à renforcer les droits de l’utilisateur quant au traitement de ses données personnelles. Parmi les concepts phares du RGPD, la définition des règles régissant la protection de la vie privée par défaut (privacy by default) et dès la conception (privacy by design). La possibilité pour chaque utilisateur, d’établir un consentement personnalisé sur la manière de consommer ses données personnelles constitue un de ces concepts. Ces règles, malgré qu’elles soient bien explicitées dans les textes juridiques, sont difficiles à mettre en oeuvre du fait de l’absence d’outils permettant de les exprimer et de les appliquer de manière systématique – et de manière différente – à chaque fois que les informations personnelles d’un utilisateur sont sollicitées pour une tâche donnée, par une organisation donnée. L’application de ces règles conduit à adapter l’utilisation des données personnelles aux exigences de chaque utilisateur, en appliquant des méthodes empêchant de révéler plus d’information que souhaité (par exemple : des méthodes d’anonymisation ou de pseudo-anonymisation). Le problème tend cependant à se complexifier quand il s’agit d’accéder aux informations personnelles de plusieurs utilisateurs, en provenance de sources différentes et respectant des normes hétérogènes, où il s’agit de surcroit de respecter individuellement les consentements de chaque utilisateur. L’objectif de cette thèse est donc de proposer un framework permettant de définir et d’appliquer des règles protégeant la vie privée de l’utilisateur selon le RGPD. La première contribution de ce travail consiste à définir le langage LPL (Layered Privacy Language) permettant d’exprimer, de personnaliser (pour un utilisateur) et de guider l’application de politiques de consommation des données personnelles, respectueuses de la vie privée. LPL présente la particularité d’être compréhensible pour un utilisateur ce qui facilite la négociation puis la mise en place de versions personnalisées des politiques de respect de la vie privée. La seconde contribution de la thèse est une méthode appelée Policy-based De-identification. Cette méthode permet l’application efficace des règles de protection de la vie privée dans un contexte de données multi-utilisateurs, régies par des normes hétérogènes de respect de la vie privée et tout en respectant les choix de protection arrêtés par chaque utilisateur. L’évaluation des performances de la méthode proposée montre un extra-temps de calcul négligeable par rapport au temps nécessaire à l’application des méthodes de protection des données
The processing of personal information is omnipresent in our data-driven society, enabling personalized services that are regulated by privacy policies. Although privacy policies are strictly defined by the General Data Protection Regulation (GDPR), no systematic mechanism is in place to enforce them. Especially when data is merged from several sources into a data-set with different privacy policies attached, managing and complying with all privacy requirements during the processing of the data-set is challenging. Privacy policies can vary because each source has its own policy or because individual users personalize their privacy policies. Thus there is a risk of negligent or malicious processing of personal data in defiance of privacy policies. To tackle this challenge, a privacy-preserving framework is proposed. Within this framework privacy policies are expressed in the proposed Layered Privacy Language (LPL), which allows legal privacy policies and privacy-preserving de-identification methods to be specified. The policies are enforced by a Policy-based De-identification (PD) process. The PD process enables efficient compliance with various privacy policies simultaneously while applying pseudonymization, personal privacy anonymization and privacy models for de-identification of the data-set. Thus the privacy requirements of each individual privacy policy are enforced, filling the gap between legal privacy policies and their technical enforcement.
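The layered-policy idea can be sketched as a small data structure plus a per-field de-identification pass. This is an illustrative stand-in, not LPL syntax or the actual PD process; the scopes, methods and record fields are invented.

```python
import hashlib

# Illustrative stand-in for a layered privacy policy: each layer names a data
# source (or an individual user's personalisation) and maps attributes to the
# de-identification method that must be applied.
policy_layers = [
    {"scope": "hospital_A", "rules": {"name": "pseudonymise",
                                      "age": "generalise",
                                      "zip": "suppress"}},
    {"scope": "user:42",    "rules": {"age": "suppress"}},   # user override wins
]

def effective_rules(layers):
    """Later (more specific) layers override earlier ones, attribute by attribute."""
    rules = {}
    for layer in layers:
        rules.update(layer["rules"])
    return rules

def de_identify(record, rules, salt="demo-salt"):
    out = {}
    for key, value in record.items():
        method = rules.get(key, "keep")
        if method == "keep":
            out[key] = value
        elif method == "pseudonymise":
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        elif method == "generalise":
            out[key] = (int(value) // 10) * 10     # e.g. age 37 -> age band 30
        elif method == "suppress":
            out[key] = None
    return out

record = {"name": "Alice Example", "age": 37, "zip": "94107", "diagnosis": "J45"}
print(de_identify(record, effective_rules(policy_layers)))
```

In this toy run the name is pseudonymised, the age is suppressed because the user-level layer overrides the source-level rule, the zip is suppressed, and the diagnosis is kept, mirroring how per-user personalisation narrows what a processor may see.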
Style APA, Harvard, Vancouver, ISO itp.
46

McLucas, Alan Charles Civil Engineering Australian Defence Force Academy UNSW. "An investigation into the integration of qualitative and quantitative techniques for addressing systemic complexity in the context of organisational strategic decision-making". Awarded by:University of New South Wales - Australian Defence Force Academy. School of Civil Engineering, 2001. http://handle.unsw.edu.au/1959.4/38744.

Pełny tekst źródła
Streszczenie:
System dynamics modelling has been used for around 40 years to address complex, systemic, dynamic problems, those often described as wicked. But system dynamics modelling is not an exact science, and arguments about the most suitable techniques to use in which circumstances continue. The nature of these wicked problems is investigated through a series of case studies where poor situational awareness among stakeholders was identified. This was found to be an underlying cause of management failure, suggesting a need for better ways of recognising and managing wicked problem situations. Human cognition is considered both as a limitation and an enabler to decision-making in wicked problem environments. Naturalistic and deliberate decision-making are reviewed. The thesis identifies the need for integration of qualitative and quantitative techniques. Case study results and a review of the literature led to the identification of a set of principles of method to be applied in an integrated framework, the aim being to develop an improved way of addressing wicked problems. These principles were applied to a series of cases in an action research setting. However, organisational and political barriers were encountered, which limited the exploitation and investigation of cases to varying degrees. In response to a need identified in the literature review and the case studies, a tool is designed to facilitate analysis of multi-factorial, non-linear causality. This unique tool and its use to assist in problem conceptualisation and as an aid to testing alternative strategies are demonstrated. Further investigation is needed in relation to the veracity of combining causal influences using this tool and system dynamics broadly. System dynamics modelling was found to have the utility needed to support analysis of wicked problems. However, failure in a particular modelling project occurred when it was found necessary to rely on human judgement in estimating values to be input into the models; this was found to be problematic and unacceptably risky for sponsors of the modelling effort. Finally, this work has identified that further study is required into the use of human judgement in decision-making and the validity of system dynamics models that rely on the quantification of human judgement.
Style APA, Harvard, Vancouver, ISO itp.
47

Rafael-Palou, Xavier. "Detection, quantification, malignancy prediction and growth forecasting of pulmonary nodules using deep learning in follow-up CT scans". Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672964.

Pełny tekst źródła
Streszczenie:
Nowadays, lung cancer assessment is a complex and tedious task mainly performed by radiological visual inspection of suspicious pulmonary nodules, using computed tomography (CT) scan images acquired from patients over time. Several computational tools relying on conventional artificial intelligence and computer vision algorithms have been proposed to support lung cancer detection and classification. These solutions mostly rely on the analysis of individual lung CT images of patients and on the use of hand-crafted image descriptors. Unfortunately, this makes them unable to cope with the complexity and variability of the problem. Recently, the advent of deep learning has led to a major breakthrough in the medical imaging domain, outperforming conventional approaches. Despite recent promising achievements in nodule detection, segmentation and lung cancer classification, radiologists are still reluctant to use these solutions in their day-to-day clinical practice. One of the main reasons is that current solutions do not support automatic analysis of the temporal evolution of lung tumours. The difficulty of collecting and annotating longitudinal lung CT cases to train models may partially explain the lack of deep learning studies that address this issue.

In this dissertation, we investigate how to provide automatic lung cancer assessment through deep learning algorithms and computer vision pipelines, taking particular account of the temporal evolution of pulmonary nodules. To this end, our first goal was to obtain accurate methods for lung cancer assessment (diagnostic ground truth) based on individual lung CT images. Since these types of labels are expensive and difficult to collect (e.g. usually obtained after biopsy), we proposed to train different deep learning models, based on 3D convolutional neural networks (CNN), to predict nodule malignancy from radiologists' visual inspection annotations (which are reasonable to obtain). These classifiers were built on ground truth consisting of the malignancy, position and size of the nodules to classify. Next, we evaluated different ways of synthesizing the knowledge embedded in the nodule malignancy network into an end-to-end pipeline designed to detect pulmonary nodules and predict lung cancer at the patient level, given a lung CT image. The positive results confirmed the suitability of CNNs for modelling nodule malignancy, as assessed by radiologists, for the automatic prediction of lung cancer.

Next, we focused on the analysis of lung CT image series. We first addressed the problem of automatically re-identifying pulmonary nodules across different lung CT scans of the same patient. To do this, we present a novel method based on a Siamese neural network (SNN) that ranks similarity between nodules, bypassing the need for image registration. This change of paradigm avoids introducing potentially erroneous image deformations and provides computationally faster results. Different configurations of the SNN were examined, including the application of transfer learning, the use of different loss functions, and the combination of feature maps from several network levels. This method obtained state-of-the-art performance for nodule matching, both in isolation and embedded in an end-to-end nodule growth detection pipeline. Afterwards, we moved to the core problem of supporting radiologists in the longitudinal management of lung cancer.

For this purpose, we created a novel end-to-end deep learning pipeline, composed of four stages, that fully automates the workflow from nodule detection to cancer classification, via the detection of nodule growth. The pipeline integrates a novel approach for nodule growth detection, which relies on a recent hierarchical probabilistic segmentation network adapted to report uncertainty estimates. A second novel method was introduced for lung cancer nodule classification, integrating into a two-stream 3D-CNN the nodule malignancy probabilities estimated by a pre-trained nodule malignancy network. The pipeline was evaluated on a longitudinal cohort, and the reported outcomes (i.e. nodule detection, re-identification, growth quantification and malignancy prediction) were comparable with state-of-the-art work focused on solving one or a few of the functionalities of our pipeline.

Thereafter, we also investigated how to help clinicians prescribe more accurate tumour treatments and surgical planning. We created a novel method to forecast nodule growth given a single image of the nodule. In particular, the method relies on a hierarchical, probabilistic and generative deep neural network able to produce multiple consistent future segmentations of the nodule at a given time. To do this, the network learns to model the multimodal posterior distribution of future lung tumour segmentations by using variational inference and injecting the posterior latent features. Finally, by applying Monte Carlo sampling to the outputs of the trained network, we estimate the expected tumour growth mean and the uncertainty associated with the prediction.

Although further evaluation on a larger cohort would be highly recommended, the proposed methods reported sufficiently accurate results to support the radiological workflow of pulmonary nodule follow-up. Beyond this specific application, the outlined innovations, such as the methods for integrating CNNs into computer vision pipelines, the re-identification of suspicious regions over time based on SNNs without the need to warp the inherent image structure, or the proposed deep generative and probabilistic network to model tumour growth under ambiguous images and label uncertainty, could be readily applicable to other types of cancer (e.g. pancreas), clinical diseases (e.g. Covid-19) or medical applications (e.g. therapy follow-up).
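To make the Siamese re-identification idea above concrete, the following is a minimal sketch, not the author's implementation: two weight-sharing 3D CNN encoders embed a pair of nodule patches, and the distance between the embeddings ranks their similarity, so a baseline nodule can be matched to candidates in a follow-up scan without image registration. All layer sizes, names and the contrastive loss are illustrative assumptions.

# Minimal Siamese-network sketch for nodule re-identification (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoduleEncoder(nn.Module):
    """Small 3D CNN that maps a nodule patch to a normalized embedding vector."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)

class SiameseNoduleMatcher(nn.Module):
    """Shares one encoder between both patches and returns their embedding distance."""
    def __init__(self):
        super().__init__()
        self.encoder = NoduleEncoder()

    def forward(self, patch_a, patch_b):
        emb_a, emb_b = self.encoder(patch_a), self.encoder(patch_b)
        return F.pairwise_distance(emb_a, emb_b)  # small distance => likely the same nodule

def contrastive_loss(distance, is_same, margin=1.0):
    """is_same is a float tensor of 1s (matching pair) and 0s (non-matching pair)."""
    pos = is_same * distance.pow(2)
    neg = (1 - is_same) * F.relu(margin - distance).pow(2)
    return (pos + neg).mean()

# Example: rank candidate nodules in a follow-up scan against one baseline nodule.
model = SiameseNoduleMatcher()
baseline = torch.randn(1, 1, 32, 32, 32)       # baseline nodule patch
candidates = torch.randn(5, 1, 32, 32, 32)     # candidate patches in the new scan
dists = model(baseline.expand(5, -1, -1, -1, -1), candidates)
best_match = dists.argmin()                    # index of the most similar candidate

In practice, the ranking-by-distance design is what removes the need for deformable registration: only local patches are compared, so no global image warping is introduced.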
Style APA, Harvard, Vancouver, ISO itp.
48

ANKIT. "ARTIFICIAL INTELLIGENCE FOR THE RE-MODELLING OF HEALTHCARE SYSTEM". Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19053.

Pełny tekst źródła
Streszczenie:
Artificial intelligence has re-modelled many fields and eased various difficult tasks that were hard to accomplish with traditional approaches. AI has impacted entertainment, IT, manufacturing and many other industries, and has changed lifestyles around the world. The healthcare sector is not left behind: there are many areas of healthcare in which AI plays a crucial role and supports the healthcare infrastructure. AI can improve the workflow of diagnostic and treatment facilities. AI can analyse images, a capability with robust applications in pathology, and pathologists can now visualize histopathology images directly on a computer screen. AI can also be used in drug discovery and design, as models can predict how a drug will react with a particular molecule. AI still has more to contribute in the healthcare sector, and this thesis was undertaken to provide evidence of the positive results of the synergy between these two fields.
Style APA, Harvard, Vancouver, ISO itp.
49

Graddon, Andrew. "The modelling of integrated urban water management schemes from the allotment to the town scale". Thesis, 2015. http://hdl.handle.net/1959.13/1059162.

Pełny tekst źródła
Streszczenie:
Research Doctorate - Doctor of Philosophy (PhD)
Population growth in urban areas coupled with a potentially drier future climate is likely to stress existing water resources. One way to address this is to augment existing centralised water supply systems. An alternative is to make better use of urban water resources which, inter alia, involves stormwater and rainwater harvesting and wastewater recycling. The basic proposition is that any augmentation of water supply that can reduce the amount of water drawn from existing centralised reservoirs will be of benefit to the whole supply region, especially in terms of drought security.

This thesis describes a versatile modelling framework that can simulate a wide variety of Integrated Urban Water Management (IUWM) schemes from the allotment to the town scale. The framework combines two modelling approaches. The first, named urbanCycle, simulates water supply and demand, stormwater and wastewater using allotments as the basic building block. Although urbanCycle can simulate allotment processes in great detail, it assumes that the network forms a directed acyclic graph. This simplifies the connectivity logic but precludes investigation of systems with multiple storages and multiple supply paths. To overcome this, a second model, a network linear programming based modelling environment, WathNet5, is embedded in the urbanCycle framework to enable the modelling of cluster and town scale recycling and harvesting options, as well as supply and demand decision making, based on objectives rather than pre-set operating rules. This combined modelling environment has been named UrbanNet.

The UrbanNet framework is demonstrated with the aid of hypothetical case studies. These case studies focus on three different aspects of the modelling framework:
1. A series of cluster scale scenarios demonstrates the flexibility in modelling cluster scale topologies.
2. A large multi-cluster case study demonstrates the design detail and flexibility from the allotment scale up to the town scale.
3. A multi-objective optimisation case study demonstrates how key variables within a particular IUWM topology can be optimized.
These case studies show UrbanNet to be capable of a high degree of detail and flexibility in the design, simulation and analysis of complex Integrated Urban Water Management schemes.
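To illustrate the allotment-as-building-block idea described above, the following is a minimal sketch, not the urbanCycle, WathNet5 or UrbanNet code: a single allotment's daily water balance in which roof runoff fills a rainwater tank, indoor demand is met from the tank first and topped up from the centralised mains supply, and tank overflow becomes stormwater. The runoff coefficient, tank size and demand values are illustrative assumptions.

# Minimal allotment-scale water balance sketch (illustrative only).
from dataclasses import dataclass

@dataclass
class Allotment:
    roof_area_m2: float        # contributing roof area
    tank_capacity_kl: float    # rainwater tank size (kilolitres)
    tank_volume_kl: float = 0.0

    def step(self, rain_mm: float, demand_kl: float):
        """Advance one day; return (mains_draw_kl, stormwater_kl)."""
        runoff_kl = 0.9 * self.roof_area_m2 * rain_mm / 1000.0  # assumed 90% runoff coefficient
        self.tank_volume_kl += runoff_kl
        overflow = max(0.0, self.tank_volume_kl - self.tank_capacity_kl)  # spills to stormwater
        self.tank_volume_kl -= overflow

        from_tank = min(demand_kl, self.tank_volume_kl)
        self.tank_volume_kl -= from_tank
        mains_draw = demand_kl - from_tank   # shortfall met by the centralised supply
        return mains_draw, overflow

# Example: total mains draw for one allotment over a short dry/wet sequence.
allotment = Allotment(roof_area_m2=200.0, tank_capacity_kl=5.0)
rainfall = [0.0, 12.0, 0.0, 30.0, 0.0]       # mm/day
total_mains = sum(allotment.step(r, demand_kl=0.6)[0] for r in rainfall)
print(f"Mains water drawn over 5 days: {total_mains:.2f} kL")

In a framework like the one described, many such allotment objects would be linked into a directed acyclic graph so that stormwater and wastewater flows accumulate downstream, while the objective-based allocation across multiple storages would be handled by the embedded network linear programming model.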
Style APA, Harvard, Vancouver, ISO itp.
50

Linnenlucke, Lauren. "Chronological modelling of the Torres Strait: a re-evaluation of occupation trends, and expansion of village and ritual sites". Thesis, 2022. https://researchonline.jcu.edu.au/75567/1/JCU_75567_Linnenlucke_2022_thesis.pdf.

Pełny tekst źródła
Streszczenie:
Lauren Linnenlucke re-evaluated the archaeological dating evidence for occupation, village establishment and use, and seascape ritualisation of dugong bone mounds and shell arrangements across the Torres Strait, northeast Australia.
Style APA, Harvard, Vancouver, ISO itp.