Dissertations / Theses on the topic 'Instrumental modelling'


Consult the top 40 dissertations / theses for your research on the topic 'Instrumental modelling.'


1

Ellis-Braithwaite, Richard. "Modelling the instrumental value of software requirements." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/21803.

Abstract:
Numerous studies have concluded that roughly half of all implemented software requirements are never or rarely used in practice, and that failure to realise expected benefits is a major cause of software project failure. This thesis presents an exploration of these concepts, claims, and causes. It evaluates the literature's proposed solutions to them, and then presents a unified framework that covers additional concerns not previously considered. The value of a requirement is assessed often during the requirements engineering (RE) process, e.g., in requirement prioritisation, release planning, and trade-off analysis. In order to support these activities, and hence to support the decisions that lead to the aforementioned waste, this thesis proposes a framework built on the modelling languages of Goal Oriented Requirements Engineering (GORE), and on the principles of Value Based Software Engineering (VBSE). The framework guides the elicitation of a requirement's value using philosophy and business theory, and aims to quantitatively model chains of instrumental value that are expected to be generated for a system's stakeholders by a proposed software capability. The framework enriches the description of the individual links comprising these chains with descriptions of probabilistic degrees of causation, non-linear dose-response and utility functions, and credibility and confidence. A software tool to support the framework's implementation is presented, employing novel features such as automated visualisation, and information retrieval and machine learning (recommendation system) techniques. These software capabilities provide more than just usability improvements to the framework. For example, they enable visual comprehension of the implications of 'what-if?' questions, and enable re-use of previous models in order to suggest modifications to a project's requirements set, and reduce uncertainty in its value propositions.
Two case studies in real-world industry contexts are presented, which explore the problem and the viability of the proposed framework for alleviating it. The thesis research questions are answered by various methods, including practitioner surveys, interviews, expert opinion, real-world examples and proofs of concept, as well as less-common methods such as natural language processing analysis of real requirements specifications (e.g., using TF-IDF to measure the proportion of software requirement traceability links that do not describe the requirement's value or problem-to-be-solved). The thesis found that, in general, there is a disconnect between the state of best practice as proposed by the literature and current industry practice in requirements engineering. The surveyed practitioners supported the notion that the aforementioned value realisation problems do exist in current practice, that they would be treatable by better requirements engineering practice, and that this thesis's proposed framework would be useful and usable in projects whose complexity warrants the overhead of requirements modelling (e.g., for projects with many stakeholders, competing desires, or having high costs of deploying incorrect increments of software functionality).
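The TF-IDF technique mentioned in this abstract can be sketched in a few lines. The sketch below is illustrative only (toy requirement texts, pure-Python weighting), not the thesis's actual analysis pipeline; a term that occurs in every document receives zero weight, which is what makes TF-IDF useful for spotting distinctive vocabulary in traceability links.

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenised documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)               # raw term counts in this document
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return weights

# Toy requirement texts (hypothetical): 'user' appears in every document,
# so it carries zero weight; rarer terms stand out.
reqs = [["user", "login", "value"],
        ["user", "export", "report"],
        ["user", "report", "value"]]
w = tfidf(reqs)
```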
2

McKenna, Paul. "Delta operator : modelling, forecasting and control." Thesis, Lancaster University, 1997. http://eprints.lancs.ac.uk/65542/.

Abstract:
Interest in the delta operator as a tool in the development of robust approaches to modelling and control has been revived in the last decade, principally following the work of Goodwin (1985). The use of this discrete differential operator provides improved numerical properties particularly when modelling or implementing control at high sampling frequencies or under finite wordlength restraints. The delta operator also provides for the alliance of continuous time designs and discrete time application, linking traditional control theory with modern implementation through digital computing. In this thesis, a delta operator Simplified Refined Instrumental Variable (SRIV) approach to model estimation is employed, together with model order identification tools, to provide delta operator models for use in control and forecasting. The True Digital Control (TDC) design theory is adopted to develop a delta operator Proportional-Integral-Plus (PIP) controller. The construction of realisable control filters enables implementation of the PIP controller, the structure of which can prove operationally significant. A number of refinements to the standard PIP controller are developed and applications are presented for engineering and environmental examples. The development of a recursive delta operator Kalman filter is presented and incorporated within a forecasting framework. The resulting algorithm is applied to historical data to generate real time stochastic forecasts of river flows from an effective rainfall-flow model.
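The numerical advantage of the delta operator at high sampling frequencies can be seen in a small sketch. Assuming the simple system dx/dt = -a*x (an illustrative example, not taken from the thesis), the shift-operator pole e^(-a*dt) crowds toward 1 as the sample interval shrinks, while the delta-operator coefficient (e^(-a*dt) - 1)/dt converges to the continuous-time parameter -a, preserving the link between discrete model and continuous design.

```python
import math

def shift_pole(a, dt):
    """Shift-operator (q) pole of the sampled system dx/dt = -a*x."""
    return math.exp(-a * dt)

def delta_coeff(a, dt):
    """Delta-operator coefficient, delta = (q - 1)/dt, applied to that pole."""
    return (math.exp(-a * dt) - 1.0) / dt

a = 2.0
for dt in (0.1, 0.01, 0.001):
    q_pole = shift_pole(a, dt)    # crowds toward 1.0 at fast sampling
    d_coef = delta_coeff(a, dt)   # approaches the continuous parameter -a
```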
3

Goldsmith, Kimberley. "Instrumental variable and longitudinal structural equation modelling methods for causal mediation : the PACE trial of treatments for chronic fatigue syndrome." Thesis, King's College London (University of London), 2014. https://kclpure.kcl.ac.uk/portal/en/theses/instrumental-variable-and-longitudinal-structural-equation-modelling-methods-for-causal-mediation-the-pace-trial-of-treatments-for-chronic-fatigue-syndrome(413e5fb0-03b9-40bc-b993-0465b1bcbdee).html.

Abstract:
Background: Understanding complex psychological treatment mechanisms is important in order to refine and improve treatment. Mechanistic theories can be evaluated using mediation analysis methods. The Pacing, Graded Activity, and Cognitive Behaviour Therapy: A Randomised Evaluation (PACE) trial studied complex therapies for the treatment of chronic fatigue syndrome. The aim of the project was to study different mediation analysis methods using PACE trial data, and to make trial design recommendations based upon the findings. Methods: PACE trial data were described using summary statistics and correlation analyses. Mediation estimates were derived using: the product of coefficients approach, instrumental variable (IV) methods with randomisation-by-baseline-variable interactions as IVs, and dual process longitudinal structural equation models (SEM). Monte Carlo simulation studies were done to further explore the behaviour of IV estimators and to examine aspects of the SEM. Results: Cognitive and behavioural measures were mediators of the cognitive behavioural and graded exercise therapies in PACE. Results were robust when accounting for correlated measurement error and different SEM structures. The randomisation-by-baseline interaction IVs were weak, giving imprecise and sometimes extreme estimates, leaving their utility unclear. A flexible version of a latent change SEM with contemporaneous mediation effects and contemporaneous correlated measurement errors was the most appropriate longitudinal model. Conclusions: IV methods using interaction IVs are unlikely to be useful; designs with a randomised IV might be more suitable. Longitudinal SEM for mediation in clinical trials seems a promising approach. Mediation estimates from SEM were generally robust when allowing for correlated measurement error and for different model classes. Mediation analysis in trials should be longitudinal and should consider the number and timing of measures at the design stage.
Using appropriate methods for studying mediation in trials will help clarify treatment mechanisms of action and allow for their refinement, which would maximize the information gained from trials and benefit patients.
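The product-of-coefficients approach named in this abstract can be sketched on simulated trial data. Everything below is a toy illustration (true coefficients 0.5 and 0.8, pure-Python OLS), not the PACE analysis: the indirect (mediated) effect is estimated as a*b, where a is the treatment-to-mediator slope and b is the mediator-to-outcome slope adjusted for treatment.

```python
import random

def ols2(y, x1, x2):
    """OLS slopes for y = b1*x1 + b2*x2 + intercept, via normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
n = 5000
x = [random.randint(0, 1) for _ in range(n)]        # randomised treatment
m = [0.5 * xi + random.gauss(0, 1) for xi in x]     # mediator: true a = 0.5
y = [0.8 * mi + 0.3 * xi + random.gauss(0, 1)       # outcome: true b = 0.8
     for mi, xi in zip(m, x)]

mean_x = sum(x) / n
a_hat = (sum((xi - mean_x) * mi for xi, mi in zip(x, m))
         / sum((xi - mean_x) ** 2 for xi in x))
b_hat, _ = ols2(y, m, x)                            # adjusts for treatment
indirect = a_hat * b_hat   # product-of-coefficients estimate, true value 0.4
```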
4

Turner, Alexander James. "Economic analysis of the causes and consequences of social and emotional well-being in childhood." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/economic-analysis-of-the-causes-and-consequences-of-social-and-emotional-wellbeing-in-childhood(e59a2e05-d842-46e0-9c63-17c84ffc564a).html.

Abstract:
The upward trend in the prevalence of childhood mental disorders observed in the UK over the previous two decades, together with the UK's poor performance in recent international comparisons of child well-being, has brought childhood social and emotional well-being (SEW) to the forefront of policy. Key to tackling this issue is to understand what causes SEW in childhood, what interventions are successful in improving it, and what its late-life consequences are. This thesis furthers the literature in each of these areas. Firstly, we examine whether foetal (or in-utero) exposure to influenza hampers the development of childhood SEW. To do so, we examine the use of an instrumental variables approach, whereby the severity of the 1957 Asian Flu epidemic in the local authority of birth is used as an instrument for whether mothers self-report contracting influenza during pregnancy. We establish that exposure has little effect on childhood SEW, but that it results in a 60% increase in the risk of being stillborn, suggesting an increasing focus on influenza vaccination during pregnancy is needed. Secondly, we investigate the long-term effectiveness of school-based interventions to improve SEW. In order to overcome the absence of long follow-up in trial datasets, we develop a new modelling approach which involves the matching of trial participants to individuals in birth cohort datasets. An application of this method found that a Promoting Alternative Thinking Strategies (PATHS) intervention implemented in Greater Manchester schools led to a statistically significant improvement in childhood SEW, and had a positive, although statistically insignificant, effect on health across the life-course. Finally, we address the paucity of studies examining the effects of childhood SEW on late-life health and labour market outcomes.
To do so, we develop a method for generating predictions of the effects of childhood characteristics beyond the currently available follow-up periods in birth cohort datasets, adapting an existing mediation analysis framework. Applying this method, we establish that a one-standard deviation improvement in childhood SEW leads to an increase of up to 0.18 accumulated quality-adjusted-life-years in late-life, and an increase in pre-tax labour income in late-life of up to £23,850. Both of these effects are primarily driven by large positive effects of childhood SEW on educational attainment, employment, income and health in mid-life. Childhood SEW is a predictor of important outcomes throughout the lifecourse. More research is needed to identify its causes and interventions to successfully improve it.
5

Laird, Joel Augustus. "The physical modelling of drums using digital waveguides." Thesis, University of Bristol, 2001. http://hdl.handle.net/1983/ebd75b4b-bcdd-4cc7-b153-a6e0007682aa.

6

Torres-Echeverria, Alejandro C. "Modelling and optimization of Safety Instrumented Systems based on dependability and cost measures." Thesis, University of Sheffield, 2009. http://etheses.whiterose.ac.uk/106/.

Abstract:
This thesis is centred on modelling and multi-objective optimization of Safety Instrumented Systems (SIS) in compliance with the standard IEC 61508. SIS are in charge of monitoring that the operating conditions of a plant remain under safe limits and free of hazards. Their performance is, therefore, critical for the integrity of people around the plant, the environment, assets and production. A large part of this work is devoted to modelling of SIS. Safety integrity and reliability measures, used as optimization objectives, are quantified by the Average Probability of Failure on Demand (PFDavg) and the Spurious Trip Rate (STR). The third objective is the Lifecycle Cost (LCC), ensuring system cost-effectiveness. The optimization strategies include design and testing policies. This encompasses optimization of design by redundancy and reliability allocation, use of diverse redundancy, inclusion of MooN voting systems and optimization of testing frequency and strategies. The project implements truly multi-objective optimization using Genetic Algorithms. A comprehensive analysis is presented and diverse applications to optimization of SIS are developed. Graphical techniques for presentation of results that aid the analysis are also presented. A practical approach is intended. The modelling and optimization algorithms include the level of modelling detail needed to meet the requirements of IEC 61508. The focus is on systems working in low-demand mode. It is largely based on the requirements of the process industry but applicable to a wide range of other processes. Novel contributions include a model for quantification of time-dependent Probability of Failure on Demand; an approximation for STR; implementation of modelling by Fault Trees with flexibility for evaluation of multiple solutions; and the integration of system modelling with optimization by Genetic Algorithms.
Thus, this work intends to widen the state-of-the-art in modelling of Probability of Failure on Demand, Spurious Trip Rate and solution of multi-optimization of design and testing of safety systems with Genetic Algorithms.
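The PFDavg measure used as an optimization objective above has well-known simplified forms. The sketch below uses the textbook low-demand approximations for single-channel (1oo1) and one-out-of-two (1oo2) architectures, neglecting common-cause failures and repair times, with an illustrative failure rate and proof-test interval; it is not the thesis's time-dependent model.

```python
def pfd_1oo1(lam_du, ti):
    """Simplified average PFD for a single channel; ti = proof-test interval (h)."""
    return lam_du * ti / 2.0

def pfd_1oo2(lam_du, ti):
    """Simplified average PFD for a 1-out-of-2 redundant pair (no common cause)."""
    return (lam_du * ti) ** 2 / 3.0

def sil_band(pfd):
    """Map a PFDavg value to its IEC 61508 SIL band (low-demand mode)."""
    for sil, lower in ((4, 1e-5), (3, 1e-4), (2, 1e-3), (1, 1e-2)):
        if lower <= pfd < lower * 10:
            return sil
    return 0   # outside the SIL 1-4 bands

lam_du = 2e-6    # dangerous undetected failure rate per hour (assumed)
ti = 8760.0      # yearly proof test, in hours
single = pfd_1oo1(lam_du, ti)   # ~8.8e-3 -> SIL 2
dual = pfd_1oo2(lam_du, ti)     # ~1.0e-4 -> SIL 3
```

Redundancy buys roughly one SIL band here, which is exactly the kind of dependability-versus-cost trade-off the multi-objective optimization explores.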
7

Desvages, Charlotte Genevieve Micheline. "Physical modelling of the bowed string and applications to sound synthesis." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31273.

Abstract:
This work outlines the design and implementation of an algorithm to simulate two-polarisation bowed string motion, for the purpose of realistic sound synthesis. The algorithm is based on a physical model of a linear string, coupled with a bow, stopping fingers, and a rigid, distributed fingerboard. In one polarisation, the normal interaction forces are based on a nonlinear impact model. In the other polarisation, the tangential forces between the string and the bow, fingers, and fingerboard are based on a force-velocity friction curve model, also nonlinear. The linear string model includes accurate time-domain reproduction of frequency-dependent decay times. The equations of motion for the full system are discretised with an energy-balanced finite difference scheme, and integrated in the discrete time domain. Control parameters are dynamically updated, allowing for the simulation of a wide range of bowed string gestures. The playability range of the proposed algorithm is explored, and example synthesised gestures are demonstrated.
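The kind of finite difference scheme described here can be illustrated, in a much simplified linear setting, by the classic explicit scheme for the 1D wave equation (no bow, damping, or fingerboard; all parameters below are arbitrary). The Courant number c*dt/dx must not exceed 1 for stability.

```python
def simulate_string(nx=50, nt=200, courant=1.0):
    """Explicit finite difference scheme for the 1D wave equation, fixed ends:

        u[n+1,l] = 2*u[n,l] - u[n-1,l]
                   + courant**2 * (u[n,l+1] - 2*u[n,l] + u[n,l-1])

    Stable for courant = c*dt/dx <= 1."""
    u = [min(l, nx - l) / (nx / 2) for l in range(nx + 1)]  # triangular pluck
    u_prev = u[:]                                           # zero initial velocity
    for _ in range(nt):
        u_next = [0.0] * (nx + 1)                           # ends clamped at zero
        for l in range(1, nx):
            u_next[l] = (2 * u[l] - u_prev[l]
                         + courant ** 2 * (u[l + 1] - 2 * u[l] + u[l - 1]))
        u_prev, u = u, u_next
    return u

state = simulate_string()   # displacement profile after nt time steps
```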
8

Noreland, Daniel. "Numerical Techniques for Acoustic Modelling and Design of Brass Wind Instruments." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3507.

9

Torin, Alberto. "Percussion instrument modelling in 3D : sound synthesis through time domain numerical simulation." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/31029.

Abstract:
This work is concerned with the numerical simulation of percussion instruments based on physical principles. Three novel modular environments for sound synthesis are presented: a system composed of various plates vibrating under nonlinear conditions, a model for a nonlinear double membrane drum and a snare drum. All are embedded in a 3D acoustic environment. The approach adopted is based on the finite difference method, and extends recent results in the field. Starting from simple models, the modular instruments can be created by combining different components in order to obtain virtual environments with increasing complexity. The resulting numerical codes can be used by composers and musicians to create music by specifying the parameters and a score for the systems. Stability is a major concern in numerical simulation. In this work, energy techniques are employed in order to guarantee the stability of the numerical schemes for the virtual instruments, by imposing suitable coupling conditions between the various components of the system. Before presenting the virtual instruments, the various components are individually analysed. Plates are the main elements of the multiple plate system, and they represent the first approximation to the simulation of gongs and cymbals. Similarly to plates, membranes are important in the simulation of drums. Linear and nonlinear plate/membrane vibration is thus the starting point of this work. An important aspect of percussion instruments is the modelling of collisions. A novel approach based on penalty methods is adopted here to describe lumped collisions with a mallet and distributed collisions with a string in the case of a membrane. Another point discussed in the present work is the coupling between 2D structures like plates and membranes with the 3D acoustic field, in order to obtain an integrated system. It is demonstrated how the air coupling can be implemented when nonlinearities and collisions are present. 
Finally, some attention is devoted to the experimental validation of the numerical simulation in the case of tom tom drums. Preliminary results comparing different types of nonlinear models for membrane vibration are presented.
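The penalty approach to collisions mentioned above is commonly realised as a one-sided power-law force that is zero while the bodies are separated and grows steeply under interpenetration. The sketch below is a generic illustration with arbitrary stiffness and exponent, not the thesis's exact collision potential.

```python
def penalty_force(gap, stiffness=1e8, alpha=1.5):
    """One-sided power-law penalty force.

    gap > 0: mallet and membrane are separated -> no force.
    gap < 0: interpenetration of |gap| -> repulsive force of
             stiffness * |gap|**alpha pushing the bodies apart."""
    return stiffness * max(0.0, -gap) ** alpha

# No contact: zero force; small interpenetration: large repulsive force.
f_apart = penalty_force(0.002)     # bodies separated
f_contact = penalty_force(-0.001)  # grows steeply with overlap
```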
10

Harrison-Harsley, Reginald Langford. "Physical modelling of brass instruments using finite-difference time-domain methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31272.

Abstract:
This work considers the synthesis of brass instrument sounds using time-domain numerical methods. The operation of such a brass instrument is as follows. The player's lips are set into motion by forcing air through them, which in turn creates a pressure disturbance in the instrument mouthpiece. These disturbances produce waves that propagate along the air column, here described using one spatial dimension, to set up a series of resonances that interact with the vibrating lips of the player. Accurate description of these resonances requires the inclusion of attenuation of the wave during propagation, due to the boundary layer effects in the tube, along with how sound radiates from the instrument. A musically interesting instrument must also be flexible in the control of the available resonances, achieved, for example, by the manipulation of valves in trumpet-like instruments. These features are incorporated into a synthesis framework that allows the user to design and play a virtual instrument. This is all achieved using the finite-difference time-domain method. Robustness of simulations is vital, so a global energy measure is employed, where possible, to ensure numerical stability of the algorithms. A new passive model of viscothermal losses is proposed using tools from electrical network theory. An embedded system is also presented that couples a one-dimensional tube to the three-dimensional wave equation to model sound radiation. Additional control of the instrument using a simple lip model as well as a time-varying valve model to modify the instrument resonances is presented, and the range of the virtual instrument is explored. Looking towards extensions of this tool, three nonlinear propagation models are compared, and differences related to distortion and response to changing bore profiles are highlighted. A preliminary experimental investigation into the effects of partially open valve configurations is also performed.
11

Theissen, Nikolas Alexander. "Physics-based modelling and measurement of advanced manufacturing machinery’s positioning accuracy : Machine tools, industrial manipulators and their positioning accuracy." Licentiate thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263700.

Abstract:
Advanced manufacturing machinery is a cornerstone of essential industries of technologically developed societies. Its accuracy permits the production of complex products according to tight geometric dimensions and tolerances for high efficiency, interchangeability and sustainability. The accuracy of advanced manufacturing machinery can be quantified by the performance measure of positioning accuracy. Positioning accuracy measures the closeness between a commanded and an attained position on a machine tool or industrial manipulator, and it is ruled by laws of physics in classical mechanics and thermodynamics. These laws can be applied to model how much the machinery deflects due to gravity, expands due to a change in temperature, and how much and how long it vibrates due to process forces; hence, one can quantify how much the accuracy decreases. Thus, to produce machinery with ever higher accuracy and precision, one can design machines which deflect, expand and vibrate less, or one can understand and model the actual behaviour of the machinery to compensate for it. This licentiate thesis uses physics-based modelling to quantify the positioning accuracy of machine tools and industrial robots. The work investigates the potential increase in positioning accuracy because of the simultaneous modelling of the kinematics, static deflections, vibrations and thermo-elasticity as a lumped-parameter model of the machinery. Consequently, the models can be used to quantify the change of the accuracy throughout the workspace. The lumped-parameter models presented in this work require empirical model calibration and validation. The success of both calibration and validation depends on the availability of the right measurement instruments, as these need to be able to capture the actual positioning accuracy of machinery.
This thesis focuses on the importance of measurement instruments in industry and metrology and creates a catalogue of requirements and trends to identify the features of the measurement instruments required for the factories of the future. These novel measurement instruments shall be able to improve model calibration and validation for an improved overall equipment effectiveness, improved product quality, reduced costs, improved safety and sustainability as a result of physics-based modelling and measurement of advanced manufacturing machinery.
12

Fougere, Nicolas, K. Altwegg, J. J. Berthelier, A. Bieler, D. Bockelée-Morvan, U. Calmonte, F. Capaccioni, et al. "Direct Simulation Monte Carlo modelling of the major species in the coma of comet 67P/Churyumov-Gerasimenko." OXFORD UNIV PRESS, 2016. http://hdl.handle.net/10150/624746.

Abstract:
We analyse the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) Double Focusing Mass Spectrometer data between 2014 August and 2016 February to examine the effect of seasonal variations on the four major species within the coma of 67P/Churyumov-Gerasimenko (H2O, CO2, CO, and O2), resulting from the tilt in the orientation of the comet's spin axis. Using a numerical data inversion, we derive the non-uniform activity distribution at the surface of the nucleus for these species, suggesting that the activity distribution at the surface of the nucleus has not changed significantly and that the differences observed in the coma are solely due to the variations in illumination conditions. A three-dimensional Direct Simulation Monte Carlo model is applied where the boundary conditions are computed with a coupling of the surface activity distributions and the local illumination. The model is able to reproduce the evolution of the densities observed by ROSINA, including the changes happening at equinox. While O2 stays correlated with H2O as it was before equinox, CO2 and CO, which had a poor correlation with respect to H2O pre-equinox, also became well correlated with H2O post-equinox. The integration of the densities from the model along the line of sight results in column densities directly comparable to the VIRTIS-H observations. Also, the evolution of the volatiles' production rates is derived from the coma model, showing a steepening in the production rate curves after equinox. The model/data comparison suggests that the seasonal effects result in the Northern hemisphere of 67P's nucleus being more processed with a layered structure, while the Southern hemisphere constantly exposes new material.
13

Young, Melissa Marie. "Consumer Identity." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-16844.

Abstract:
The purpose of this thesis is to show that, despite consumers' impression that they alone decide their consumption choices, they are mistaken. Consumers are manipulated on various levels by marketers. It is the marketer who decides what consumer identities should be created. Consumers are persuaded by marketers on different levels, beginning with consumers' needs. Marketers begin by appealing to consumer drives, motivations and emotions to persuade consumers to purchase their brand. On a deeper level, marketers manipulate consumers by using a variety of human-behaviour learning strategies to sway purchasing decisions. In addition, marketers use various environmental and social-environmental influences to control their consumers. Lastly, a practical example based on the multinational corporation Nike is used to show that marketers are aware of these different methods and use them to manipulate consumers. By the end of this paper it is clear that consumers are easily persuaded by marketers. The consumer is only the puppet; the marketer pulls the strings.
14

Gunawan, David Oon Tao. "Musical instrument sound source separation." Awarded by: University of New South Wales, Electrical Engineering & Telecommunications, Faculty of Engineering, 2009. http://handle.unsw.edu.au/1959.4/41751.

Abstract:
The structured arrangement of sounds in musical pieces results in the unique creation of complex acoustic mixtures. The analysis of these mixtures, with the objective of estimating the individual sounds which constitute them, is known as musical instrument sound source separation, and has applications in audio coding, audio restoration, music production, music information retrieval and music education. This thesis principally addresses the issues related to the separation of harmonic musical instrument sound sources in single-channel mixtures. The contributions presented in this work include novel separation methods which exploit the characteristic structure and inherent correlations of pitched sound sources, as well as an exploration of the musical timbre space for the development of an objective distortion metric to evaluate the perceptual quality of separated sources. The separation methods presented in this work address the concordant nature of musical mixtures using a model-based paradigm. Model parameters are estimated for each source, beginning with a novel, computationally efficient algorithm for the refinement of frequency estimates of the detected harmonics. Harmonic tracks are formed, and overlapping components are resolved by exploiting spectro-temporal intra-instrument dependencies, integrating the spectral and temporal approaches which are currently employed in a mutually exclusive manner in existing systems. Subsequent to the harmonic magnitude extraction using this method, a unique, closed-loop approach to source synthesis is presented, separating sources by iteratively minimizing the aggregate error of the sources, constraining the minimization to a set of estimated parameters. The proposed methods are evaluated independently, and then are placed within the context of a source separation system, which is evaluated using objective and subjective measures.
The evaluation of music source separation systems is presently limited by the simplicity of objective measures, and the extensive effort required to conduct subjective evaluations. To contribute to the development of perceptually relevant evaluations, three psychoacoustic experiments are also presented, exploring the perceptual sensitivity of timbre for the development of an objective distortion metric for timbre. The experiments investigate spectral envelope sensitivity, spectral envelope morphing and noise sensitivity.
15

Lysér, Oskar, and Viktor Sylvén. "Ger IFRS 9 bättre beslutsunderlag? : En dokumentstudie ur en investerares perspektiv." Thesis, Karlstads universitet, Handelshögskolan (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79139.

Abstract:
IFRS 9 is a new standard intended to improve the accounting of financial instruments. Its precursor, IAS 39, was heavily criticised during the 2008 financial crisis for recognising credit losses too late. IFRS 9 therefore introduces a new forward-looking impairment model that requires companies to take future macroeconomic factors into account. Previous research has mainly studied banks, or estimated future effects of the standard before it was implemented in 2018, and has found no material effects within the banking sector. What distinguishes our study is that we try to interpret the effects of the implementation across different sectors and to analyse whether IFRS 9 has affected investors' decision-making. To achieve this purpose, a document study of companies in various industries was conducted from an investor's perspective, grounded in the IASB's qualitative characteristics. The study finds no material changes in the companies' financial reports with respect to valuation. In terms of the qualitative characteristics, the information has become more useful to an investor and the relevance of the financial reports has increased, which has affected investors' decision-making positively. The observable difference is that, since the implementation, companies across industries have increased their annual provisions to the reserve for expected credit losses. However, we believe the real difference made by IFRS 9 will only become apparent in a recession.
APA, Harvard, Vancouver, ISO, and other styles
16

Seo, Kyungwoon. "Representation as a language of scientific practice: exploring students’ views on the use of representation and the linkage to understanding of scientific models." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/6281.

Full text
Abstract:
The purpose of this study was to explore how students view the use of representation in the science classroom. Representation, as a disciplinary language of science, has long been promoted as a way to develop students' scientific literacy and is closely linked to engaging students in scientific practices through the use of models. However, previous research has mostly treated the use of representation and models as an outcome measure of an implementation task, and little is known about the learner's perspective. This study aimed to fill that gap by investigating how students view the use of representation in the science classroom and how these perceptions are linked to the epistemic and cognitive/conceptual practices of science learning. To that end, the study involved (1) developing an instrument, the Representation Survey, to assess students' views on the use of representation and (2) examining the relationship between students' views on representation and their understanding of models in science, science content knowledge, and critical thinking skills. The Representation Survey was developed in three phases as a pencil-and-paper questionnaire with 1-5 Likert scales, grounded in empirical data and a literature review. An exploratory factor analysis of the Representation Survey with 619 middle school students identified two distinct ways students view the use of representation: multiple modes of representation and a uni-mode of representation. Correlation analysis with a modified version of the Students' Understanding of Models in Science (SUMS) survey revealed a strong relationship between students' perceptions of using multiple modes of representation and their understanding of models in science, while students' perceptions of uni-modal representation were related to their performance on assessments of science content knowledge.
Lastly, students' critical thinking skills, as measured by the Cornell Critical Thinking Test, showed no evident relationship with their perceptions of the use of representation. A validity argument for the newly developed Representation Survey and the modified SUMS instrument is presented, followed by a discussion of the broader implications and limitations of the study.
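The exploratory factor analysis step described above can be sketched in a few lines. The simulated item responses, loadings, and Kaiser retention rule below are illustrative assumptions, not the actual Representation Survey data or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 619  # respondents, matching the study's sample size

# Simulate Likert-style responses driven by two latent factors
# (a hypothetical stand-in for the Representation Survey data).
latent = rng.normal(size=(n, 2))
loadings = np.zeros((10, 2))
loadings[:5, 0] = 0.8   # items 1-5 load on "multiple modes"
loadings[5:, 1] = 0.8   # items 6-10 load on "uni-mode"
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 10))

# Exploratory factor extraction via eigendecomposition of the
# correlation matrix (principal-axis style, unrotated).
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain factors with eigenvalue > 1.
n_factors = int(np.sum(eigvals > 1.0))
explained = eigvals[:n_factors].sum() / eigvals.sum()
```

With two cleanly separated factors the Kaiser rule recovers a two-factor solution, mirroring the two perception types the survey identified.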
APA, Harvard, Vancouver, ISO, and other styles
17

Viglund, Kerstin. "Inner strength among old people : a resource for experience of health, despite disease and adversities." Doctoral thesis, Umeå universitet, Institutionen för omvårdnad, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-82719.

Full text
Abstract:
Background: Inner strength has been described as an important phenomenon in association with disease management, health, and ageing. To increase knowledge of the phenomenon of inner strength, a meta-theoretical analysis was performed, resulting in a Model of Inner Strength in which inner strength comprises four interrelated and interacting dimensions: connectedness, creativity, flexibility, and firmness. The model was used in this thesis as a theoretical framework. Aim: The overall purpose of this thesis was to develop and validate an inner strength scale, describe inner strength in an older population, and elucidate its significance for the experience of health, despite disease and adversities. Methods: The studies had quantitative approaches with cross-sectional designs (I-III) and a qualitative approach with narrative interviews (IV). Studies I-IV were part of the GErontological Regional DAtabase (GERDA) Botnia project. In study I, the participants (n = 391, 19-90 years old) were mostly from northern Sweden. In studies II and III, the participants (n = 6119; 65, 70, 75, and 80 years old) were from Sweden and Finland, and in study IV the participants (n = 12, 67-82 years old) were from Västerbotten County. Data were analysed using principal component analysis and confirmatory factor analysis (CFA), various statistics, structural equation modelling, and qualitative content analysis. Results: In study I, the Inner Strength Scale (ISS) was developed and psychometrically tested. An initial 63-item ISS was reduced to a final 20-item ISS. A four-factor solution based on the four dimensions of inner strength was supported, explaining 51% of the variance, and the CFA showed satisfactory goodness of fit. In study II, ISS scores in relation to age, gender, and culture showed the highest mean ISS score among the 65-year-olds, with a decrease in mean score for each subsequent age (70, 75, and 80 years).
Women had slightly higher mean ISS scores than men, and there were only minor differences between the regions in Sweden and Finland. In study III, a hypothesis was proposed and subsequently supported: inner strength was found to partially mediate the relationship between disease and self-rated health. The bias-corrected bootstrap estimate of the mediating indirect effect was significant, and the test of goodness of fit was satisfactory. In study IV, the narratives showed that inner strength comprises feelings of being connected and finding life worth living. Having faith in oneself and one's possibilities, and facing and taking an active part in the situation, were also expressed. Finally, coming back and finding ways to go forward in life were found to be essential aspects of inner strength. Conclusions: The newly developed ISS is a reliable and valid instrument that captures a broad perspective of inner strength. Basic data about inner strength in a large population of old people in Sweden and Finland are provided, showing the highest mean ISS score among the 65-year-olds. Inner strength among old people is a resource for the experience of health, despite disease and adversities. This thesis contributes to increased knowledge of the phenomenon of inner strength and provides evidence of the importance of inner strength for old people's wellbeing. Increased knowledge of the four dimensions of inner strength (connectedness, creativity, flexibility, and firmness) is proposed as an aid for health care professionals in their efforts to identify where the need for support is greatest and to find interventions that promote and strengthen inner strength.
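Study III's mediation test can be illustrated with a simple bootstrap of the indirect effect. The simulated variables and the percentile (rather than bias-corrected) interval below are assumptions made for the sketch, not the GERDA data or the thesis's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # illustrative sample, not the GERDA cohort

# Hypothetical variables: disease burden (x), inner strength (m),
# self-rated health (y); true indirect effect a*b = -0.4 * 0.5 = -0.2.
x = rng.normal(size=n)
m = -0.4 * x + rng.normal(scale=0.8, size=n)
y = -0.3 * x + 0.5 * m + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    # a-path: regress m on x; b-path: regress y on x and m.
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    b = coef[2]
    return a * b

# Percentile bootstrap of the indirect effect (the thesis uses the
# bias-corrected variant; the percentile version is a simplification).
boot = np.empty(1000)
for i in range(1000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
significant = not (lo <= 0.0 <= hi)
```

A confidence interval that excludes zero is the criterion for a significant mediating effect, as reported in the study.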
The GErontological Regional DAtabase (GERDA) Botnia project
APA, Harvard, Vancouver, ISO, and other styles
18

Orelli, Paiva Guilherme. "Vibroacoustic Characterization and Sound Synthesis of the Viola Caipira." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1045/document.

Full text
Abstract:
The viola caipira is a type of Brazilian guitar widely used in popular music. It has ten metallic strings arranged in five pairs, tuned in unison or octave. The thesis focuses on analysing the specificities of the musical sounds produced by this instrument, which has been little studied in the literature. The analysis of the motions of plucked strings using a high-speed camera shows the existence of sympathetic vibrations, which result in a sound halo constituting an important perceptive feature. These measurements also reveal shocks between strings, which have clearly audible consequences. Bridge mobilities are also measured using the wire-breaking method, which is simple to apply and inexpensive since it does not require a force sensor. Combined with high-resolution modal analysis (the ESPRIT method), these measurements make it possible to determine the modal shapes at the string/body coupling points and thus to characterise the instrument. A physical model based on a modal approach is developed for sound synthesis purposes. It takes into account the string motions in two polarisations, the couplings with the body, and the collisions between strings. It is called a hybrid model because it combines an analytical description of the string vibrations with experimental data describing the body. Simulations in the time domain reproduce the main identified characteristics of the viola caipira.
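The modal synthesis approach described above amounts to summing exponentially damped sinusoids whose frequencies, damping rates, and amplitudes come from the string/body model. The parameters below are generic plucked-string values, not measured viola caipira data:

```python
import numpy as np

fs = 44100           # sample rate (Hz)
dur = 1.0            # seconds of sound
t = np.arange(int(fs * dur)) / fs

# Illustrative modal parameters for one plucked string: harmonic
# partials of a 220 Hz fundamental, with damping increasing and
# amplitude decreasing with mode number (assumed values).
f0 = 220.0
modes = np.arange(1, 21)
freqs = f0 * modes
damping = 1.5 + 0.4 * modes        # decay rates (1/s)
amps = 1.0 / modes                 # plucked-string-like spectrum

# Modal superposition: each mode is an exponentially damped sinusoid.
signal = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
             for a, d, f in zip(amps, damping, freqs))
signal /= np.abs(signal).max()
```

In the hybrid model of the thesis, the modal frequencies and decay rates would come from the measured bridge mobilities rather than from the closed-form values assumed here.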
APA, Harvard, Vancouver, ISO, and other styles
19

Wisheart, M. "Impact properties and finite element analysis of a pultruded composite system." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/12474.

Full text
Abstract:
This project was sponsored by two companies interested in promoting the use of pultruded glass fibre/polyester composites in the construction of freight containers. The aim of the research was therefore to understand and quantify the damage mechanisms caused by low-velocity impact on the composite system, and to produce a finite element impact model to further the understanding of these events.
APA, Harvard, Vancouver, ISO, and other styles
20

Robinson, Matthew. "Development of planar technology for focal planes of future radio to sub-millimetre astronomical instruments." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/development-of-planar-technology-for-focal-planes-of-future-radio-to-submillimetre-astronomical-instruments(dd2876aa-ff1a-4ae7-903f-a8228f3fc85f).html.

Full text
Abstract:
Receiver systems utilising planar technologies are prevalent in telescopes observing at radio to sub-millimetre wavelengths. Receiver components using planar technologies are generally smaller, have reduced mass and are cheaper to manufacture than waveguide-based alternatives. Given that modern-day detectors are capable of reaching the fundamental photon noise limit, increases in the sensitivity of telescopes are frequently attained by increasing the total number of detectors in the receivers. The development of components utilising planar technologies meets the demand for large numbers of detectors, whilst minimising the size, mass and manufacturing cost of the receiver. After a review and study of existing concepts in radio to sub-mm telescopes and their receivers, this thesis develops planar components that couple the radiation from the telescope's optics onto the focal plane. Two components are developed: a W-band (75-110 GHz) planar antenna-coupled flat mesh lens designed for the receiver of a Cosmic Microwave Background (CMB) B-mode experiment, and an L-band (1-2 GHz) horn-coupled planar orthomode transducer designed for the receiver of the FAST telescope. The first developments of a planar antenna-coupled flat mesh lens are presented. The design is driven by the requirement to mitigate beam systematics to prevent pollution of the CMB B-mode signal. In the first instance, a waveguide-coupled mesh lens is characterised. The radiation patterns of the waveguide-coupled mesh lens have -3 dB beam widths between 26 and 19 degrees, beam ellipticity <10%, and cross-polarisation.
APA, Harvard, Vancouver, ISO, and other styles
21

Oliveira, Luiz Claudio Marangoni de 1975. "Contribuições para melhoria do desempenho e viabilidade de fabricação de scanners indutivos." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263395.

Full text
Abstract:
Advisor: Luiz Otavio Saraiva Ferreira
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Abstract: Scanners are devices that deflect a light beam and convert a spot of light into a scan line with well-controlled amplitude and frequency. Several applications use the generated pattern to code or decode data; common examples are barcode readers and laser printers. A light beam can be deflected by different means. In this work, the scanner deflects the light by reflection from a mirror in resonant harmonic motion, subjected to forces of electromagnetic nature. Such forces are generated by the interaction between a current induced in the armature and a magnetic field generated by permanent magnets. The main advantage of this kind of scanner is the absence of electrical connections between the mobile and fixed parts of the device, which simplifies the fabrication process and makes it more reliable and less susceptible to faults. Some of the similar devices available today are complex electro-mechanical devices manufactured by serial processes. Earlier work established that the planar geometry, and the use of batch fabrication processes derived from microelectronics, are feasible for this kind of device. Although functional, the earlier prototypes presented high power consumption, which showed the need for an improved design. The Silicon-based fabrication process adopted makes use of materials and methods that are not readily accessible to the Brazilian industry. In this work, improvements to the induction-actuated planar resonant scanner technology were proposed. The goal was to make its performance compatible with that of similar devices, and to enable its fabrication using materials and methods available to the Brazilian industry. A design methodology and a set of model contributions were proposed and validated. The use of phosphor bronze as structural material, and of photofabrication as the machining method, were proposed as an alternative to the Silicon-based fabrication method.
The contributions of this work enabled the reduction of the power consumption from 2.2 W to about 5 mW per optical degree, and an increase in the working frequency from 1 kHz to 4 kHz, with an optical deflection angle of about 20° peak-to-peak. These parameters are fully compatible with those of similar devices that are mechanically more complex and manufactured by serial processes.
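The resonant operation of such a scanner is often modelled as a driven, damped harmonic oscillator. The sketch below computes the standard amplitude response around an assumed 4 kHz resonance; the quality factor is a made-up illustrative value, not a property of the thesis device:

```python
import numpy as np

# Steady-state amplitude of a driven, damped torsional oscillator: a
# textbook model of a resonant scanner mirror (illustrative parameters).
f0 = 4000.0                 # resonant frequency (Hz), per the abstract
Q = 100.0                   # quality factor (assumed)
w0 = 2 * np.pi * f0

f = np.linspace(3000.0, 5000.0, 2001)   # 1 Hz grid around resonance
w = 2 * np.pi * f
# |theta(w)| is proportional to 1/sqrt((w0^2 - w^2)^2 + (w0*w/Q)^2)
amp = 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (w0 * w / Q) ** 2)

peak_freq = f[np.argmax(amp)]       # deflection peaks at resonance
gain = amp.max() / amp[0]           # amplification vs. off-resonance drive
```

Driving the mirror at resonance is what lets a few milliwatts sustain a large optical deflection angle.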
Doctorate
Solid Mechanics and Mechanical Design
Doctor of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
22

Samaan, Mariam. "La photogrammétrie rapprochée au service de l'archéologie préventive." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1068/document.

Full text
Abstract:
The development of digital cameras and of computing power, together with research in photogrammetry and computer vision, has recently led to operational solutions for automatically building 3D models from sets of images with multiple overlaps (multi-stereoscopic coverage). For example, by taking the "right" photos, it is now possible to produce, with a few hours of computation and a few minutes of operator interaction, rigorous ortho-photos that a few years ago would have required days of restitution. These methods are starting to spread among some actors in heritage surveying (architects and archaeologists), and an economy is even beginning to build up around 3D modelling services. However, these methods are still far from fully accepted by the majority of potential users. Among the obstacles to the dissemination of these techniques among heritage scientists is ignorance of the rules of photographic acquisition that allow optimum advantage to be taken of photo-based modelling tools. The objective of this thesis is to carry out an effective transfer of tools from the world of technology and computing to users in the heritage field. More specifically, the chosen field of application is preventive archaeology, in which budget and schedule constraints on excavations make photo-based survey methods particularly attractive. Our work focused exclusively on the development of photogrammetric methods based on reliable and lightweight image-acquisition protocols, together with processing adapted to each stage of the computation chain. The choice to treat one type of object or another in our work is independent of any classification of the many specialities of archaeology; it is instead linked to a methodological framing, preferring to multiply experimental protocols for documenting small artefacts rather than to diversify the types of remains documented.
Beyond the case of small artefacts, the issues raised by documenting an archaeological excavation as a "living" site were also partially addressed. Survey methods capable of exhaustively recording all the objects discovered, while linking them to a particular stratigraphy, were indeed studied.
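At the geometric core of any photogrammetric pipeline is the pinhole projection of a 3D point into pixel coordinates through the camera intrinsics and pose. The intrinsics and pose below are arbitrary illustrative values, not calibration data from the thesis:

```python
import numpy as np

# Pinhole camera model: pixel = dehomogenize(K @ (R @ X + t)).
# K: intrinsics (focal lengths in pixels, principal point); R, t: pose.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                     # camera aligned with scene axes
t = np.array([0.0, 0.0, 2.0])     # camera 2 m from the scene origin

X = np.array([0.1, -0.05, 0.0])   # a 3D point in scene coordinates
x_cam = R @ X + t                 # point in camera coordinates
u, v, w = K @ x_cam
pixel = np.array([u / w, v / w])  # homogeneous division
```

Multi-view reconstruction inverts this mapping: given many such pixel observations across overlapping photos, bundle adjustment recovers the poses and the 3D points.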
APA, Harvard, Vancouver, ISO, and other styles
23

Pískatá, Petra. "Vliv nejistoty modelů projektů na investiční rozhodování." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-433595.

Full text
Abstract:
This doctoral thesis broadly analyses the process of investment decision-making. Its individual parts examine the models used for planning, analysing, and evaluating investment projects, as well as the models used for the final decision on realising the investment. Investing activity is present in all phases of the world economic cycle. The capital sources used to finance investment projects are scarce and must be handled with care. For this reason, many supporting methodologies and models are employed in managing investments, and instruments have been developed to mitigate potential project risks. However, even the use of these instruments and models cannot guarantee the expected results. There are uncertainties, errors, and inaccuracies in the process that can thwart investment decisions. The aim of the thesis is to analyse the investment decision-making process (from the initial idea to the realisation of the investment project) and to identify the main uncertainties, i.e. the factors influencing the success/error rate of models for investment project planning as well as the decision on their realisation. The main outcome of the thesis is an overview of these factors and recommendations on how to work with them to make the process as effective as possible. A further output is an analysis of, and recommendations for, the use of financing sources and the mix of instruments that should be used to mitigate the potential impact of the risks connected to all investment projects.
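One of the standard evaluation models for investment projects surveyed here is net present value. A minimal sketch with invented cash flows and an assumed discount rate (none of these figures come from the thesis):

```python
import numpy as np

# Net present value: discount each year's cash flow back to year 0
# and sum. Illustrative figures only.
rate = 0.08                                                   # assumed discount rate
cash_flows = np.array([-1000.0, 300.0, 400.0, 400.0, 300.0])  # years 0..4
years = np.arange(len(cash_flows))

npv = float(np.sum(cash_flows / (1 + rate) ** years))
# npv > 0 -> the project creates value at this discount rate
```

The uncertainties the thesis studies enter exactly here: errors in the forecast cash flows or in the discount rate can flip the sign of the decision.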
APA, Harvard, Vancouver, ISO, and other styles
24

Lafarge, David. "Analyse didactique de l'enseignement-apprentissage de la chimie organique jusqu'à bac+2 pour envisager sa restructuration." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00578419.

Full text
Abstract:
Organic chemistry is often recognised as a discipline that is difficult to teach and to learn: for example, students favour rote memorisation at the expense of reasoning that articulates models of chemical reactivity. The aim of our research was to produce knowledge useful for improving the teaching and learning of organic chemistry during the first two years of higher education. We focused our work on the effects of the current structuring of this teaching (in order to consider modifying it while keeping the same disciplinary content), on the teaching of organic synthesis, models, and reaction mechanisms, and on the activity of organic chemistry teachers. Three studies were carried out. The first study aimed to determine whether students could reason using models rather than memorise and restitute: we analysed the tasks given to students in examinations and competitive entrance tests, then inferred the strategies for solving them in the light of an analysis of how the models function; we also studied the relevance of the structuring of the curricula and of organic chemistry textbooks. The second study consisted of analysing the declared practices of nine organic chemistry teachers in order to interpret their activity. The results obtained up to that point allowed us to design a first version of a didactic instrument intended for teachers. This instrument was tested in the final study with four pairs of teachers placed in a simulated lesson-preparation setting. Our results show that the tasks given to students, and the structuring by functional group currently in place, encourage students to adopt a memorise-and-restitute strategy at the expense of a modelling strategy.
We propose, for example, globally restructuring the content by increasing levels of modelling around the questions of the organic chemist, using a reaction database ("réactiothèque"), or introducing the teaching of modelling and organic synthesis from the first years onwards. Teachers seem constrained by the institutional prescriptions in force and by their professional genre; they also appear to lack PCK (Pedagogical Content Knowledge) concerning modelling, organic synthesis in the first years, and the management of conclusion phases. Teachers are divided about the didactic instrument; they nevertheless managed to implement certain aspects of it by adapting their activity, enabling us to envisage future improvement of the instrument.
APA, Harvard, Vancouver, ISO, and other styles
25

Barth, Thierry. "Vers une observation inter-disciplinaire des phénomènes naturels sur les bassins versants de montagne : Hydrogéologie à coût limité du bassin du Vorz (Massif de Belledonne, Isère)." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00764344.

Full text
Abstract:
On 22 August 2005, an intense flood occurred on the Vorz watershed, partially destroying the hamlet of La Gorge. This event highlighted the difficulty of anticipating hydrometeorological conditions in mountain areas, where they are extremely variable in space and time and often poorly instrumented. From this observation was born the project of installing an original hydrometeorological instrumentation network on the Vorz watershed, in order to observe the natural and hydrological phenomena occurring there, to understand them better, and to build the tools and methods needed to model them. After two seasons of measurements, the first results showed that the network provides information at high spatial and temporal resolution on hydrometeorological processes. Despite its installation in the difficult mountain environment (accessibility, cold, power supply, ...), very good reliability was demonstrated, along with prospects for transposition to other watersheds, all at a low financial cost. The originality of the network lies in the multi-sampling of numerous hydrometeorological parameters (precipitation, temperature, snow, insolation, ...), at spatial (10 to 50 metres) and temporal (hourly or finer) resolutions that make it possible to envisage hydrological modelling at different scales, both for water resource management (long term) and for flood prevention (short term). The sensors installed form a complementary and inseparable set of measuring instruments: iButtons (air and soil), totalizers, rain gauges, cameras. An innovative sensor for automatic mapping of snow cover (SnoDEC), based on ordinary photographic images taken at regular time steps (5 to 7 images per day), was developed during this work. 
It quantifies the spatial and temporal heterogeneity of snow-cover phenomena on the slope, which dominate its hydrology given the persistence of snow (5 to 10 months). The whole system provides a large database and makes it possible to implement various interpolation techniques for hydrometeorological variables over the entire watershed. Accurate maps of the temperature and precipitation fields will thus be available at a daily time step. In addition, the SnoDEC sensor will make it possible to analyse and quantify the spatio-temporal heterogeneity (altitude, aspect, melt rate, ...) of the snow cover. From these data, the hydrological mechanisms at play on the site can be better understood and the outlines of future modelling drawn. At the same time, the available data can be combined to reveal phenomena that are difficult to measure directly (rain/snow limit, thermal inversions, ...), which will in future serve to constrain snow and hydrological models precisely. Through the various instrumented parameters, and thanks to the use of imagery, this network can measure variables relevant to many disciplinary fields (glacier dynamics, vegetative cycle, ...). Through its interdisciplinary approach, it is thus part of an effort to set up a low-cost measurement network intended for all those involved in the study and research of mountain environments.
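One of the interpolation steps such a dense network enables, distributing point temperature readings over a slope with an elevation lapse rate, can be sketched as follows. The station values and the lapse rate are illustrative assumptions, not data from the Vorz network:

```python
# Hypothetical sketch: spreading station temperatures over a catchment
# using a constant lapse rate. Stations and the lapse rate are invented.

LAPSE_RATE = -0.0065  # deg C per metre, standard-atmosphere assumption

def to_sea_level(temp_c, elevation_m, lapse=LAPSE_RATE):
    """Reduce a station reading to an equivalent sea-level temperature."""
    return temp_c - lapse * elevation_m

def interpolate(stations, target_elevation_m, lapse=LAPSE_RATE):
    """Average the sea-level-reduced readings, then re-apply the lapse
    rate at the target elevation."""
    reduced = [to_sea_level(t, z) for (z, t) in stations]
    mean_sl = sum(reduced) / len(reduced)
    return mean_sl + lapse * target_elevation_m

# Two fictitious stations: (elevation in m, measured deg C)
stations = [(1000.0, 10.0), (2000.0, 3.5)]
print(round(interpolate(stations, 1500.0), 2))  # -> 6.75
```

Real distributed models would of course add aspect, radiation and inversion effects; this only shows the elevation term.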
APA, Harvard, Vancouver, ISO, and other styles
26

Yoo, Thomas. "Application of a Multimodal Polarimetric Imager to Study the Polarimetric Response of Scattering Media and Microstructures." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX106/document.

Full text
Abstract:
The work carried out during this thesis aimed to study the interaction of polarized light with scattering media and particles. This work is part of a strong collaborative context between the LPICM and various private and public laboratories. A wide variety of aspects were treated in depth, including instrumental development, advanced numerical simulation and the creation of measurement protocols for the interpretation of complex data. The instrumental part of the thesis was devoted to the development of an innovative instrument, suitable for taking polarimetric images at different scales (from millimetres to microns), which can be quickly reconfigured to offer different imaging modes of the same sample. The two main aspects that characterize the instrument are i) the possibility of obtaining real polarimetric images of the sample as well as images of the angular distribution of light scattered by a zone of the sample whose size and position can be selected by the user at will, and ii) total control of the polarization state, size and divergence of the beams used to illuminate and image the sample. These two aspects are not combined in any other commercial or experimental apparatus today. The first study carried out with the multimodal imaging polarimeter concerned the effect of the thickness of a scattering medium on its optical response. In medical imaging, there is a broad consensus on the benefits of using different polarimetric properties to improve the effectiveness of optical screening techniques for various diseases. Despite these advantages, the interpretation of polarimetric observables in terms of the physiological properties of tissues is often obscured by the influence of the sample's thickness, which is frequently unknown. The objective of the work was therefore to better understand how the polarimetric properties of different scattering materials depend on their thickness. 
In conclusion, it was possible to show, in a fairly universal way, that the polarimetric properties of scattering media vary proportionally with the optical path that the light has travelled inside the medium, whereas the degree of polarization depends quadratically on that path. This finding was then used to develop a data analysis method that overcomes the effect of thickness variations, making the measurements very robust and related only to the intrinsic properties of the samples studied. The second subject of study was the polarimetric response of micrometre-sized particles, selected by analogy with the size of the cells that form biological tissues and are responsible for light scattering. Through polarimetric measurements, it was discovered that when the microparticles are illuminated at oblique incidence with respect to the optical axis of the microscope, they appear to behave as if they were optically active. Moreover, the value of this apparent optical activity was found to depend on the shape of the particles. The explanation of this phenomenon rests on the appearance of a topological phase in the light beam, which depends on the path of the scattered light inside the microscope. This unprecedented observation of a topological phase was made possible by the fact that the multimodal polarimetric imager allows illumination of the samples at oblique incidence. This discovery can significantly improve the efficiency of optical methods for determining the shape of micro-objects.
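The reported scaling law, polarimetric properties linear in the optical path while the loss of the degree of polarization grows quadratically, can be illustrated with a minimal numerical sketch. The coefficients below are invented for illustration, not fitted to the thesis data:

```python
# Toy model of the scaling reported in the abstract: a polarimetric
# property (here "retardance") grows linearly with the optical path L,
# while the DOP loss grows quadratically. Coefficients are assumptions.

K_RET = 0.08      # retardance per unit path (invented)
K_DEPOL = 0.002   # quadratic depolarization coefficient (invented)

def retardance(path):
    return K_RET * path                  # linear in optical path

def dop(path):
    return 1.0 - K_DEPOL * path ** 2     # quadratic DOP decay (small-path regime)

# Doubling the path doubles the retardance...
print(round(retardance(10.0) / retardance(5.0), 2))       # -> 2.0
# ...but quadruples the loss of the degree of polarization.
print(round((1.0 - dop(10.0)) / (1.0 - dop(5.0)), 2))     # -> 4.0
```

Normalizing an observable by the path (or the DOP loss by its square) is what makes a thickness-independent quantity possible in such a model.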
27

Budi, Wibowo Sandy. "Approches multiscalaires de l'érosion du volcan Merapi, Indonésie : contribution à la compréhension du déclenchement et de la dynamique des lahars." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01H044/document.

Full text
Abstract:
The erosion of volcanic edifices results from a series of geomorphological processes that occur during, before or without an eruption. These processes also involve lahars: dense mixtures of volcanic materials and water that flow rapidly down from a volcano, with important spatio-temporal rheological changes. The erosion of volcanic edifices is still poorly understood, particularly because data collection in the field is difficult. Yet lahars alone caused at least 44,250 deaths between 1600 and 2010, 52% of which were due to a single event in 1985 (Nevado del Ruiz, Colombia). This study proposes a multi-scalar approach to better understand the nature of the erosion of volcanic edifices, especially lahar initiation processes and dynamics. The 2010 eruption of Merapi volcano (Indonesia) was an opportunity to produce new data. The first part of this thesis, focused on lahar initiation, was based on field data and laboratory experiments. The field work compared a watershed disturbed by the 2010 eruption with an undisturbed watershed, through in-situ observations and field instrumentation. In the laboratory, an experimental approach was carried out using 8 different scenarios on a flume. The second part of the thesis, on the dynamics of lahars in motion, coupled video footage with seismic signals. Lahar deposits were also analyzed against the chronology of the flows. Three years after the 2010 eruption of Merapi, the frequency of lahar occurrence had decreased. However, juvenile ash-fall deposits from the eruption of a nearby volcano (Kelud in East Java) led to a significant increase in lahar occurrence from February 2014. Lahar triggering was also favored by landslides connected to thalwegs, such as the one that occurred during the night of 6 to 7 December 2012, which we studied in detail. 
The dynamics of two lahars observed and filmed on 28 February and 18 March 2014 were divided into four phases: (1) hyperconcentrated flow, (2) debris-flow peak, (3) lahar body, and (4) lahar tail. Video analysis and in-situ observation of active lahars allowed us to create detailed hydrographs indicating flow depth, velocity, discharge and the number of floating metre-sized boulders. Lahar dynamics over the different channel topographies produced markedly different seismic frequencies. The formation of lahar deposits was correlated with the flow dynamics, and in-situ observation was required to validate the interpretation.
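Assembling a hydrograph from video-derived depth and surface-velocity readings can be sketched as follows. The channel width and the sample readings are hypothetical values, not measurements from the Merapi channels:

```python
# Hedged sketch: instantaneous discharge from video-derived flow depth
# and surface velocity, assuming a rectangular cross-section. All
# numbers are invented for illustration.

CHANNEL_WIDTH_M = 12.0  # assumed constant channel width

def discharge(depth_m, velocity_ms, width_m=CHANNEL_WIDTH_M):
    """Q = A * v for a rectangular section of the given width."""
    return width_m * depth_m * velocity_ms

# (time in s, depth in m, surface velocity in m/s) read off the footage
frames = [(0, 0.5, 2.0), (60, 1.8, 4.5), (120, 1.1, 3.0)]
hydrograph = [(t, discharge(h, v)) for (t, h, v) in frames]

peak_time, peak_q = max(hydrograph, key=lambda p: p[1])
print(peak_time, round(peak_q, 1))
```

A real analysis would use a surveyed cross-section and a depth-averaged velocity correction, but the frame-by-frame structure is the same.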
28

Harb, Étienne Gebran. "Risques liés de crédit et dérivés de crédit." Thesis, Paris 2, 2011. http://www.theses.fr/2011PA020098.

Full text
Abstract:
The first part of this thesis deals with the valuation of credit risk. After an introductory chapter providing a technical synthesis of risk models, we model the dependence between default risks with copulas, which give credit risk measures a firmer foundation. This tool provides a full description of the dependence structure: any joint distribution function can be written as a copula taking the marginal distributions as arguments. We approach copulas in the probabilistic terms in which they are now familiar, but also from an algebraic perspective, which is in some respects more inclusive than the probabilistic one. We then present a general credit derivative pricing model based on Cherubini and Luciano (2003) and Luciano (2003). We price a "vulnerable" Credit Default Swap that takes counterparty risk into account, and incorporate the Credit Valuation Adjustment (CVA) advocated by Basel III to optimize the allocation of economic capital. We recover the general pricing representation of a product with counterparty risk, which goes back to Sorensen and Bollier (1994); unlike the papers mentioned above, the protection payment does not necessarily occur at the end of the contract. The dependence between counterparty risk and that of the reference entity is approached with copulas. We study the sensitivity of the CDS in extreme dependence cases using a mixture copula defined in terms of the "extreme" copulas. By varying Spearman's rho, the mixture copula sweeps the whole range of positive and negative association while providing closed-form prices. The resulting model is close to market practice and easy to calibrate, and we provide a numerical application on credit market data. We then highlight the role of credit derivatives as hedging instruments, but also as risk factors, since they are accused of being responsible for the subprime crisis. Finally, we analyze both the subprime crisis and the sovereign debt crisis, which also arose from the collapse of the U.S. 
mortgage market. We then study the public debt sustainability of the heavily indebted peripheral countries of the eurozone through 2016.
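A mixture copula of the kind described, built here from the Fréchet bounds and the independence copula as a generic stand-in rather than the thesis's exact specification, can be sketched as follows (the weights are illustrative):

```python
# Illustrative sketch, not the thesis model: a mixture copula in the
# Frechet family, combining the countermonotonic bound W, the
# independence copula P and the comonotonic bound M. Varying the
# weights sweeps Spearman's rho over [-1, 1] in closed form.

def mixture_copula(u, v, p, q, r):
    """C(u, v) = p*W + q*P + r*M, with p + q + r = 1."""
    assert abs(p + q + r - 1.0) < 1e-12
    w = max(u + v - 1.0, 0.0)   # countermonotonic (lower Frechet bound)
    pi = u * v                  # independence
    m = min(u, v)               # comonotonic (upper Frechet bound)
    return p * w + q * pi + r * m

def spearman_rho(p, q, r):
    """Rho is linear in the weights: W -> -1, P -> 0, M -> 1."""
    return r - p

print(round(spearman_rho(0.1, 0.6, 0.3), 2))   # -> 0.2, mild positive dependence
print(mixture_copula(0.5, 0.5, 0.1, 0.6, 0.3))
```

Because rho is linear in the weights, calibrating the mixture to a target rank correlation reduces to solving a linear constraint, which is what keeps prices in closed form.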
29

Assadollahi, Tejaragh Hossein. "L’impact des événements climatiques et de la sécheresse sur le phénomène du retrait gonflement des argiles en interaction avec les constructions." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD011/document.

Full text
Abstract:
Climate change and severe climatic events such as long drought/rehydration periods are at the origin of the shrinkage and swelling phenomenon in expansive soils. This phenomenon is affected by soil-vegetation-atmosphere (SVA) interactions and can cause severe structural damage to lightly loaded constructions such as residential buildings. The objective of this research work is to simulate the in-situ shrink-swell behavior of expansive soils in an SVA context using numerical tools. A soil-atmosphere interaction method is first presented, along with a coupled hydro-thermal soil model. This approach was established to determine, first, the time-variable natural boundary conditions at the soil surface, based on the mass and energy balance concept, and second, the spatio-temporal changes in soil suction, water content and temperature. The approach was validated using in-situ observations from monitored sites. Thereafter, the influence of water uptake by vegetation was incorporated into the source term of the unsaturated water flow theory, using an existing root water uptake model. Subsequently, the temporal variations of soil suction were related to the volume change behavior through a simple approach developed from the experimental results of drying/wetting tests reported in the literature. The associated volumetric indices in the void ratio versus log suction plane, along with the complementary parameters of the linear model, were correlated with basic geotechnical parameters. The proposed approach was validated with in-situ data from an experimental site: the Roaillan site was instrumented to monitor the soil's physical changes along with the structural behavior of the building. Comparisons between the simulated and observed soil suction, water content, temperature and soil movements, in time and with depth, showed an acceptable predictive performance. 
The approach was then extended to study the influence of future climate projections (2050) on the soil's physical variables and movements. Three RCP climate change scenarios were considered in this analysis, revealing different possible behaviors in both the short and the long term. Finally, the developed approach was applied to the French territory by dividing it into six climatic regions. Different soil parameters were attributed to each of these regions in order to set reference conditions, and the influence of different external factors on soil movements was analyzed over a chosen period. The study finally suggests adequate actions to minimize the amplitude of the shrinkage and swelling phenomenon in an SVA context.
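The linear void-ratio versus log-suction relation used above to link suction changes to ground movement can be sketched as follows. The shrinkage index and layer properties are invented values, not the thesis calibration:

```python
# Hedged sketch of a linear void-ratio / log-suction volume-change model.
# All parameters (shrinkage index, initial void ratio, layer thickness)
# are invented for illustration.
import math

def void_ratio_change(suction_0_kpa, suction_1_kpa, shrink_index):
    """delta_e = -index * log10(s1 / s0): drying (s1 > s0) reduces e."""
    return -shrink_index * math.log10(suction_1_kpa / suction_0_kpa)

def surface_movement(delta_e, e0, layer_thickness_m):
    """1D vertical movement from the void-ratio change of one layer."""
    return layer_thickness_m * delta_e / (1.0 + e0)

# One decade of drying (100 kPa -> 1000 kPa) with an assumed index of 0.05
de = void_ratio_change(100.0, 1000.0, shrink_index=0.05)
print(round(surface_movement(de, e0=0.8, layer_thickness_m=2.0) * 1000, 1))  # mm, negative = settlement
```

A field model would sum this layer-by-layer over the suction profile delivered by the soil-atmosphere boundary condition.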
30

Sanadzadeh, Iran. "A Musicological Study of the Japanese Koto using Heuristic Finite Element Models." Thesis, 2019. http://hdl.handle.net/2440/124192.

Full text
Abstract:
Vol. 1 A Musicological Study of the Japanese Koto using Heuristic Finite Element Models -- Vol. 2 Datasets
This musicological study investigates the sound of the Japanese koto, a 13-string zither, using heuristic finite element models. It aims firstly to test a new integrated analytical approach with finite element methods; these methods have become more accessible to scholars across many disciplines including systematic musicology in recent decades. This thesis demonstrates how these methods can provide powerful analytical tools for technical studies of musical instruments as part of organological research. Secondly, it applies this method in a heuristic study of the koto to characterise its sound envelope by using a series of models; these models range from a simple box to a more complex and geometrically accurate lofted model developed as part of this study. These models permitted the continual development of the integrated analytical approach during the period of investigation. COMSOL Multiphysics®, the finite element method software used to develop the models, also enabled specialist analysis of sound from the instrument including its qualitative visual representation. Results of these models in turn were validated by comparison with the limited existing literature on the koto’s acoustics and additional physical experiments. During this process initial tests on a plank of paulownia wood were undertaken in order to understand the paulownia wood from which the koto is made. These results then informed more complex, subsequent models. Findings from the study reveal that the anisotropic nature of paulownia significantly influenced predicted resonances when compared to a simple isotropic model. Key characteristics of the koto body that help to explain the relationship between sound production and geometry of the instrument were also identified, for example, the significant influence of the curvature of the top plate and the arching down the length of the instrument on the sound envelope produced. 
These findings contribute to the understanding of the acoustical behaviour of the koto in particular and East Asian zithers in general. The methods identified and validated in this study also serve more broadly as a template for future organological and acoustical investigations of geometrically complex wooden musical instruments.
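The heuristic finite element approach described above can be illustrated in its most reduced form. The sketch below is illustrative only (the thesis used full 3-D COMSOL Multiphysics® models of the koto body; every parameter here is an assumption, not taken from the thesis): it assembles linear elements for a taut string and recovers the natural frequencies from the resulting generalized eigenvalue problem.

```python
import numpy as np

# Minimal 1-D finite element modal analysis of a taut string
# (all parameters assumed for illustration, not from the thesis).
L, T, rho = 1.0, 100.0, 0.01      # length (m), tension (N), linear density (kg/m)
n_el = 100                        # number of linear elements
h = L / n_el
n_nodes = n_el + 1

K = np.zeros((n_nodes, n_nodes))  # global stiffness matrix
M = np.zeros((n_nodes, n_nodes))  # global (consistent) mass matrix
ke = (T / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = (rho * h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
for e in range(n_el):             # assemble element matrices
    K[e:e + 2, e:e + 2] += ke
    M[e:e + 2, e:e + 2] += me

# Fixed-fixed boundaries: drop the end nodes, then solve K v = w^2 M v.
Ki, Mi = K[1:-1, 1:-1], M[1:-1, 1:-1]
w2 = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(Mi, Ki))))
freqs = np.sqrt(w2[:3]) / (2.0 * np.pi)
print(np.round(freqs, 2))         # close to the analytical 50, 100, 150 Hz
```

For a fixed-fixed string the analytical harmonics are n/(2L)·√(T/ρ) = 50, 100, 150 Hz, which the discretisation reproduces closely; making the element stiffness direction-dependent is the 1-D analogue of the anisotropic paulownia effect discussed above.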
Thesis (Ph.D.) -- University of Adelaide, Elder Conservatorium of Music, 2020
APA, Harvard, Vancouver, ISO, and other styles
31

Silva, Anaïs Andrea Forte. "Efeito do género e da idade no suporte social em adultos idosos." Master's thesis, 2009. http://hdl.handle.net/10400.12/4445.

Full text
Abstract:
Master's dissertation presented to ISPA - Instituto Universitário
This study set out to determine whether gender and age predict Social Support (SS) in a sample of autonomous, non-institutionalised older adults, screened for dementia and depression, living in the Greater Lisbon area (N = 240; age range = 65-90 years, M = 72). Using LISREL 8, a structural validation of the Instrumental and Emotional Support Questionnaire (QSIE; Feijão, 2009) was carried out, confirming a hierarchical three-dimensional structure for the instrument. A Structural Predictor Model of SS was also tested; its results showed that only gender influences Support. These results are discussed, together with the absence of a statistically significant relationship between age and SS.
ABSTRACT: In this study we set out to determine whether Social Support (SS) could be predicted from gender and age in a sample of autonomous and non-institutionalised older adults drawn from the Greater Lisbon area (N = 240; age range = 65-90 years, M = 72). Participants were first screened for dementia and depression. Using the LISREL 8 program, a structural validation of the Emotional and Instrumental Support Inventory (QSIE; Feijão, 2009) was carried out, revealing a hierarchical three-dimensional structure. A Predictor Structural Model of SS was also tested; its results showed that only gender influences support, while no statistically significant relationship was found between age and SS.
APA, Harvard, Vancouver, ISO, and other styles
32

Dickens, Paul Physics Faculty of Science UNSW. "Flute acoustics: measurement, modelling and design." 2007. http://handle.unsw.edu.au/1959.4/40607.

Full text
Abstract:
A well-made flute is always a compromise, and the job of flute makers is to achieve a musically and aesthetically satisfying compromise; a task that involves much trial and error. The practical aim of this thesis is to develop a mathematical model of the flute and a computer program that assists in the flute design process. Many musical qualities of a woodwind instrument may be calculated from the acoustic impedance spectrum of the instrument. A technique for fast and accurate measurement of this quantity is developed. It is based on the multiple-microphone method, and uses resonance-free impedance loads to calibrate the system and spectral shaping to improve the precision at impedance extrema. The impedance spectra of the flute and clarinet are measured over a wide range of fingerings, yielding a comprehensive and accurate database. The impedance properties of single finger holes are measured using a related technique, and fit formulae are derived for the length corrections of closed finger holes for a typical range of hole sizes and lengths. The bore surface of wooden instruments can change over time with playing, and this can affect the acoustic impedance and therefore the playing quality. Such changes in acoustic impedance are explored using wooden test pipes. To account for the effect of a typical player on flute tuning, an empirical correction is determined from the measured tuning of both modern and classical flutes as played by several professional and semi-professional players. By combining the measured impedance database with the player effects and various results in the literature, a mathematical model of the input impedance of flutes is developed and implemented in command-line programs written in the C programming language. A user-friendly graphical interface is created using the flute impedance model for the purposes of flute acoustical design and analysis.
The program calculates the tuning and other acoustical properties for any given geometry. The program is applied to a modern flute and a classical flute. The capabilities and limitations of the software are thereby illustrated and possible contributions of the program to contemporary flute design are explored.
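The impedance modelling described above can be sketched in its simplest idealised form. The example below is a hedged illustration, not the thesis model: it evaluates the input impedance of a lossless cylinder with an ideally open far end, Z_in = jZ0·tan(kL), ignoring radiation, wall losses, finger holes and player effects, with all dimensions assumed; a flute-like pipe sounds near the impedance minima.

```python
import numpy as np

# Input impedance of an idealised lossless cylindrical pipe
# (assumed dimensions; the thesis model is far more detailed).
c = 343.0            # speed of sound (m/s)
rho = 1.2            # air density (kg/m^3)
L = 0.6              # assumed acoustic length (m)
a = 0.0095           # assumed bore radius (m)
Z0 = rho * c / (np.pi * a**2)   # characteristic impedance

f = np.linspace(50.0, 2000.0, 20000)
k = 2.0 * np.pi * f / c
Z_in = 1j * Z0 * np.tan(k * L)   # open far end: Z_L = 0

# Impedance minima = playing frequencies of the open-open pipe, n*c/(2L).
mag = np.abs(Z_in)
minima = f[1:-1][(mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:])]
print(np.round(minima[:3], 1))   # near 285.8, 571.7, 857.5 Hz here
```

The minima fall at the harmonics n·c/(2L); the corrections the thesis develops (losses, hole networks, player effects) shift and reshape these idealised resonances.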
APA, Harvard, Vancouver, ISO, and other styles
33

Vyasarayani, Chandrika Prakash. "Transient Dynamics of Continuous Systems with Impact and Friction, with Applications to Musical Instruments." Thesis, 2009. http://hdl.handle.net/10012/4723.

Full text
Abstract:
The objective of this work is to develop mathematical simulation models for predicting the transient behaviour of strings and beams subjected to impacts. The developed models are applied to study the dynamics of the piano and the sitar. For simulating rigid point impacts on continuous systems, a new method is proposed based on the unit impulse response. The developed method allows one to relate modal velocities before and after impact, without requiring the integration of the system equations of motion during impact. The proposed method has been used to model the impact of a pinned-pinned beam with a rigid obstacle. Numerical simulations are presented to illustrate the inability of the collocation-based coefficient of restitution method to predict an accurate and energy-consistent response. The results using the unit-impulse-based coefficient of restitution method are also compared to those obtained with a penalty approach, with good agreement. A new moving boundary formulation is presented to simulate wrapping contacts in continuous systems impacting rigid distributed obstacles. The free vibration response of an ideal string impacting a distributed parabolic obstacle located at its boundary is analyzed to understand and simulate a sitar string. The portion of the string in contact with the obstacle is governed by a different partial differential equation (PDE) from the free portion represented by the classical string equation. These two PDEs and corresponding boundary conditions, along with the transversality condition that governs the dynamics of the moving boundary, are obtained using Hamilton's principle. A Galerkin approximation is used to convert them into a system of nonlinear ordinary differential equations, with time-dependent mode-shapes as basis functions. The advantages and disadvantages of the proposed method are discussed in comparison to the penalty approach for simulating wrapping contacts.
Finally, the model is used to investigate the mechanism behind the generation of the buzzing tone in a sitar. An alternate formulation using the penalty approach is also proposed, and the results are contrasted with those obtained using the moving boundary approach. A model for studying the interaction between a flexible beam and a string at a point including friction has also been developed. This model is used to study the interaction between a piano hammer and the string. A realistic model of the piano hammer-string interaction must treat both the action mechanism and the string. An elastic stiff string model is integrated with a dynamic model of a compliant piano action mechanism with a flexible hammer shank. Simulations have been used to compare the mechanism response for impact on an elastic string and a rigid stop. Hammer head scuffing along the string, as well as length of time in contact, were found to increase where an elastic string was used, while hammer shank vibration amplitude and peak contact force decreased. Introducing hammer-string friction decreases the duration of contact and reduces the extent of scuffing. Finally, significant differences in hammer and string motion were predicted for a highly flexible hammer shank. Initial contact time and location, length of contact period, peak contact force, hammer vibration amplitude, scuffing extent, and string spectral content were all influenced.
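The idea of relating modal velocities across an impact through a unit impulse admits a compact sketch. The example below is an assumed simple form, not the thesis implementation: a point impulse P applied at x0 on a pinned-pinned string updates every modal velocity in closed form, with P chosen so that the contact-point velocity reverses with a coefficient of restitution e.

```python
import numpy as np

# Point impulse on a pinned-pinned string: update modal velocities in
# closed form (illustrative sketch under assumed parameters).
L, rho, N = 1.0, 0.01, 20             # length, linear density, modes kept
x0, e = 0.3, 0.8                      # impact point, coefficient of restitution

phi = np.array([np.sin(n * np.pi * x0 / L) for n in range(1, N + 1)])
m = np.full(N, rho * L / 2.0)         # modal masses for sine modes

qdot = np.random.default_rng(0).normal(size=N)   # modal velocities before impact
u_minus = phi @ qdot                  # contact-point velocity before impact

# From u+ = u- + P * sum(phi^2 / m) and the restitution law u+ = -e u-:
P = -(1.0 + e) * u_minus / np.sum(phi**2 / m)
qdot_plus = qdot + P * phi / m        # modal velocities just after impact

u_plus = phi @ qdot_plus
print(u_plus, -e * u_minus)           # equal by construction
```

No time integration through the contact is needed; the impulse distributes across the modes via the mode-shape values at the impact point, which is the appeal of the approach summarised above.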
APA, Harvard, Vancouver, ISO, and other styles
34

Ghebreab, Tesfalidet Alem. "Modelling the soil water balance and applications using a decision support system (DSSAT v3.5)." 2003. http://hdl.handle.net/10413/3579.

Full text
Abstract:
Water is a scarce resource used by various stakeholders. Agriculture is one of the users of this resource, especially for growing plants. Plants need to take up carbon dioxide to prepare their own food, and for this purpose they have stomatal openings. These same openings are used for transpiration. Quantifying transpiration is important for efficient water resource management and crop production because it is closely related to dry matter production. Transpiration can be measured using a number of methods or calculated indirectly through quantification of the soil water balance components using environmental instruments. The use of models such as the Decision Support System for Agrotechnology Transfer (DSSAT v3.5) is, however, much easier than environmental instruments. Nowadays, with the increased capabilities of computers, crop simulation modelling has become common practice for various applications. But it is important that models such as DSSAT v3.5 be calibrated and verified before being used for applications such as long-term risk assessment and the evaluation of cultural practices. In this study the model inputs were collected first. The model was then calibrated and verified. Next, a sensitivity analysis was carried out to observe the model's behaviour in response to changes in inputs. Finally, the model was applied to long-term risk assessment and the evaluation of cultural practices. The data collected formed the basis for the minimum dataset needed for running the DSSAT v3.5 model. In addition, the factory-given transmission of shading material over a tomato crop was compared to actual measurements. Missing weather data (solar irradiance, minimum and maximum air temperature and rainfall) were completed after checking that they were homogeneous with measurements from a nearby automatic weather station. It was found that the factory-given transmission value of 0.7 for the shade cloth differed from the measured value of 0.765.
This measured value was therefore used to convert solar irradiance measured outside the shade cloth to solar irradiance inside it. Conventional laboratory procedures were used for the analysis of soil physical and chemical properties. Soil water content limits were determined using regression equations based on texture and bulk density. Other model inputs were calculated using the DSSAT model. Crop management inputs were also documented for creation of the experimental details file. The DSSAT v3.5 soil water balance model was calibrated for soil, plant and weather conditions at Ukulinga by modifying some of its inputs, and simulations of the soil water balance components were then evaluated against actual measurements. For this purpose half of the available data was used for calibration and the other half for verification. Model simulations of soil water content (150 to 300 mm and 450 to 600 mm) improved significantly after calibration. In addition, simulations of leaf area index (LAI) were satisfactory. Simulated evapotranspiration (ET) deviated somewhat from the measured ET because the latter was calculated by multiplying the potential ET by a constant crop multiplier, the so-called crop coefficient. Sensitivity analyses and long-term risk assessments for yield, runoff, drainage and other model outputs were carried out for soil, plant and weather conditions at Ukulinga. For this purpose, some of the input parameters were varied individually to determine the effect on seven model output parameters. In addition, long-term weather data were used to simulate yield, biomass at harvest, runoff and drainage for various initial soil water content values. The sensitivity analysis gave results that conform to the current understanding of the soil-plant-atmosphere system.
The long-term assessment showed that it is risky to grow tomatoes during the winter season at Ukulinga irrespective of the initial soil water content, unless certain measures are taken, such as mulching to protect the plants from frost. The CROPGRO-Soybean model was used to evaluate the soil water balance and growth routines for soil, plant and weather conditions at Cedara. In addition, cultural practices such as row spacing, seeding rate and cultivars were evaluated using long-term weather data. Simulations of soil water content were unsatisfactory even after calibration of some of the model parameters. Other model outputs, such as LAI, yield and flowering date, agreed satisfactorily with observed values. Results from this study suggest that the model is sensitive to weather and to cultural practices such as seeding rates, row spacing and cultivar maturity groups. The general use of decision support systems is limited by various factors, including: unclear definition of clients/end users; no end-user input prior to or during the development of the DSS; a DSS that does not solve the problems the client is experiencing or does not match their decision-making style; producers seeing no reason to change current management practices; a DSS providing no benefit over the current decision-making system; limited computer ownership amongst producers; lack of field testing; producers not trusting the output owing to a lack of understanding of the underlying theories of the models used; inability to access the necessary data inputs; lack of technical support; lack of training in the development of DSS software; marketing and support constraints; institutional resistance; the short shelf-life of DSS software; and technical, user and other constraints. For successful use of DSS, these constraints have to be resolved before their useful impacts on farming systems can be realised.
This study has shown that the DSSAT v3.5 model simulations of soil water balance components such as evapotranspiration and soil water content were unsatisfactory, while plant parameters such as leaf area index, yield and phenological stages were simulated to a satisfactory standard. Sensitivity analysis gave results that conform to the current understanding of the soil-plant-atmosphere system. Model outputs such as yield and phenological stages were found to be sensitive to weather and to cultural practices such as seeding rates, row spacing and cultivar maturity groups. It was further shown that the model can be used for risk assessment of various crop management practices and for the evaluation of cultural practices. However, before farmers can use DSSAT v3.5, several constraints have to be resolved.
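The soil water balance bookkeeping that a model like DSSAT performs can be caricatured with a daily "bucket" model. The sketch below is illustrative only (DSSAT's routines are layered and far richer; every parameter and rule here is an assumption): storage gains rain, sheds a fixed fraction of rain as runoff when the soil is already saturated, drains any excess above capacity, and supplies actual ET limited by available water.

```python
# Minimal daily "bucket" soil water balance (illustrative assumptions only).
def water_balance(rain, et_pot, capacity=100.0, s0=50.0, runoff_curve=0.3):
    """Return per-day tuples (storage, runoff, drainage, actual_et)."""
    s, out = s0, []
    for r, pet in zip(rain, et_pot):
        runoff = runoff_curve * r if s >= capacity else 0.0  # saturated soil sheds rain
        s += r - runoff
        drainage = max(0.0, s - capacity)                    # excess above capacity drains
        s -= drainage
        et = min(pet, s)                                     # actual ET limited by storage
        s -= et
        out.append((s, runoff, drainage, et))
    return out

# Five assumed days: a dry day, two rain events, then drying down.
days = water_balance([0, 20, 60, 0, 5], [4, 4, 4, 4, 4], capacity=60.0, s0=50.0)
print(days[-1])   # final storage 53.0 mm under these assumptions
```

Calibration in this caricature would mean adjusting `capacity` and `runoff_curve` until simulated storage tracks measurements, which is the spirit, though not the detail, of the calibration and verification described above.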
Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 2003.
APA, Harvard, Vancouver, ISO, and other styles
35

Hartig, Florian. "Metapopulations, Markets and the Individual: Refining incentive-based approaches for biodiversity conservation on private lands." Doctoral thesis, 2010. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2010012932.

Full text
Abstract:
When designing financial incentives for voluntary conservation of threatened habitats and ecosystems, we are faced with the problem that there is no single indicator for "biodiversity value". The value of a habitat depends on multiple factors such as habitat type, area, and spatial and temporal connectivity. Moreover, not only are there local trade-offs between these indicators, but land use changes at one location may also change the value of sites in the vicinity. This doctoral thesis analyzes the consequences of including trade-offs and interactions between sites in market-based conservation schemes. We ask the following questions: How can trade-offs between the survival of different species be quantified? How can spatial processes and temporal processes be included in market-based conservation, in particular the value of spatial and temporal connectivity? And how do underlying economic dynamics relate to the spatio-temporal allocation of conservation measures in market-based conservation schemes?
APA, Harvard, Vancouver, ISO, and other styles
36

Herold, Maria. "Investigation of contaminant mass fluxes and reactive transport modelling of heterocyclic hydrocarbons at former gasworks sites." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B2EC-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Klapwijk, Jonathan Menno. "A validation of the Visual Perceptual Aspects Test using a bifactor exploratory structural equation modelling approach." Diss., 2018. http://hdl.handle.net/10500/25693.

Full text
Abstract:
Visual perception is a psychological construct that describes the awareness of visual sensations; it arises from the interactions of the individual or observer with the external environment, together with the physiology of the observer's visual system. A variety of theories of the development of visual perception have led to the development of different psychometric measures aimed at quantifying the cognitive construct. The Visual Perceptual Aspects Test (VPAT) was developed by Clutten (2009) to measure nine different constructs of visual perception. The original VPAT was validated for content and construct validity using a Western Cape sample. However, to the researcher's knowledge, a factor analysis had not yet been conducted on the VPAT to determine the factor validity of the test, and no measures of validity or reliability had been obtained for the VPAT using a sample outside the Western Cape. The aim of this research is to validate the hypothesised nine-factor structure of the Visual Perceptual Aspects Test using a confirmatory factor analysis, an exploratory structural equation model, a bifactor confirmatory factor analysis and a bifactor exploratory structural equation model. The results of the analysis showed marginal model fit of the VPAT with the sample data, with sufficient levels of reliability for certain sub-tests. However, the VPAT did not meet significant levels of validity or reliability for the proposed model structure for the sample group of learners based in the Eastern Cape.
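The factor-validity question at the heart of the study can be illustrated with synthetic data. The sketch below does not reproduce the CFA or bifactor ESEM analyses used in the thesis; it simply simulates item scores driven by two latent factors (all loadings assumed) and applies a Kaiser-style eigenvalue screen of the correlation matrix to show that the factor structure is detectable.

```python
import numpy as np

# Simulate six items loading on two latent factors (assumed loadings),
# then count eigenvalues of the item correlation matrix above 1.
rng = np.random.default_rng(42)
n = 1000
factors = rng.normal(size=(n, 2))                 # latent factor scores
loadings = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.0],
                     [0.0, 0.9], [0.0, 0.8], [0.0, 0.7]])
items = factors @ loadings.T + 0.4 * rng.normal(size=(n, 6))

corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(eigvals, 2))   # two eigenvalues well above the remainder
```

A confirmatory or bifactor analysis, as used in the thesis, goes further by testing a hypothesised loading pattern against the data rather than merely counting dominant eigenvalues.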
Psychology
M.A. (Research Psychology)
APA, Harvard, Vancouver, ISO, and other styles
38

Kulkarni, Milind Anant. "Small Angle Measurement Using Optical Caustics From Hollow Cylinders - Few Investingations." Thesis, 2007. http://hdl.handle.net/2005/621.

Full text
Abstract:
‘Optical caustics’ represent some of the most visually striking patterns of light in nature. They occur when light rays from a source, such as the sun, are refracted or reflected by curved media so as to bend and alter their path. They are ubiquitous and signify the regions of space in which many rays intersect to form bright singularities along a two- or three-dimensional surface. The associated 2-D patterns (caustic patterns) can be simple or complex in shape and size, depending upon the optical arrangement used to produce them. Such patterns exhibit either static or dynamic behaviour, which can be controlled sensitively by the medium or the device used to produce them. The present thesis concerns a few novel contributions to the utilisation of such optical caustics for the measurement of small angular rotation or tilt of objects. Utilising a hollow cylinder as a novel device for the generation of optical caustics, the author proposes and demonstrates three new schemes for realising position-dependent behaviour of optical caustic patterns. This behaviour is investigated both analytically and experimentally. The results of the investigation are then used to propose and demonstrate three methods of magnifying the angular displacement of the hollow cylinder. The salient feature of the principle behind each of these methods is illustrated in the figures below. The patterns in each of the pictures correspond to two different positions of the hollow cylinder: the pattern in white corresponds to the initial position, while that in red corresponds to the new angular position of the cylinder. Defining S1 = ƒ(LΔΦ), S2 = ƒ(TΔΦ) and S3 = ƒ(ξΔΦ) as new signals from the proposed methods, it is shown that each of them represents a magnified measure of the change in the angular position of the cylinder, ΔΦ.
Further, if a plane mirror is used in place of the cylinder in the proposed methods, the corresponding signal S for the same change in angular position ΔΦ is represented by ΔD. For a chosen set of experimental conditions, it is shown that for a unit change in ΔΦ, the values of S1, S2 and S3 change 30, 37 and 62 times faster than ΔD. The investigations clearly demonstrate that hollow cylinders can be advantageously used as position-magnifying angle-sensing devices. The results also suggest that in application areas such as autocollimation, torsion pendulums and the design of motion control stages, this device is expected to bring new advances.
APA, Harvard, Vancouver, ISO, and other styles
39

Maguraushe, Kudakwashe. "Development of a diagnostic instrument and privacy model for student personal information privacy perceptions at a Zimbabwean university." Thesis, 2021. http://hdl.handle.net/10500/27557.

Full text
Abstract:
Orientation: The safety of any natural person with respect to the processing of their personal information is an essential human right, as specified in the Zimbabwe Data Protection Act (ZDPA) bill. Once enacted, the ZDPA bill will affect universities as public entities and will directly impact how personal information is collected and processed. The bill will be fundamental to understanding the privacy perceptions of students in relation to privacy awareness, privacy expectations and confidence within the university. These need to be understood in order to give universities guidelines on implementing the ZDPA. Problem statement: The current constitution and the ZDPA are not sufficient to give organisations guidelines on ensuring personal information privacy. There is a need for guidelines to help organisations and institutions implement and comply with the provisions of the ZDPA in the Zimbabwean context. Three concepts drawn from the privacy regulations (awareness, expectations and confidence) were used to determine student perceptions. These three concepts had not been researched before in the privacy context, and the relationships among them had not yet been established. Research purpose: The main aim of the study was to develop and validate an Information Privacy Perception Survey (IPPS) diagnostic tool and a Student Personal Information Privacy Perception (SPIPP) model, to give universities guidelines on how to implement the ZDPA and to aid them in comprehending student privacy perceptions so as to safeguard personal information and give effect to students' constitutional right to privacy. Research methodology: A quantitative research method was used in a deductive research approach, in which a survey research strategy was applied using the IPPS instrument for data collection. The IPPS instrument was designed with 54 items developed from the literature.
The preliminary instrument was taken through both an expert review and a pilot study. Using the non-probability convenience sampling method, 287 students participated in the final survey. SPSS version 25 was used for data analysis, and both descriptive and inferential statistics were computed. Exploratory factor analysis (EFA) was used to validate the instrument, while confirmatory factor analysis (CFA) and structural equation modelling (SEM) were used to validate the model. Main findings: The diagnostic instrument was validated and resulted in seven new factors, namely university confidence (UC), privacy expectations (PE), individual awareness (IA), external awareness (EA), privacy awareness (PA), practice confidence (PC) and correctness expectations (CE). Students indicated that they had high expectations of the university regarding privacy. They showed a high level of privacy awareness but low confidence in the university safeguarding the privacy of their personal information. A SPIPP empirical model was also validated using structural equation modelling (SEM), and it indicated, on average, a good overall fit between the proposed SPIPP conceptual model and the empirically derived SPIPP model. Contribution: A diagnostic instrument that measures students' perceptions (privacy awareness, expectations and confidence) was developed and validated. The study further contributed a model for information privacy perceptions that illustrates the relationships among the three concepts (awareness, expectations and confidence). Other universities can use the model to ascertain students' perceptions of privacy. This research also contributes to improving the protection of students' personal information processed by universities. The results will aid university management and information regulators in implementing measures to create a culture of privacy and to protect student data in line with regulatory requirements and best practice.
School of Computing
Ph. D. (Information Systems)
APA, Harvard, Vancouver, ISO, and other styles
40

Afagbegee, Gabriel Lionel. "A theoretical sociocultural assessment instrument for health communication campaigns." Thesis, 2016. http://hdl.handle.net/10500/21173.

Full text
Abstract:
Text in English
Health communication campaigns are one of the strategies used to face the challenges of the spread and effects of the HIV/AIDS epidemic, which is not only a health issue but also has sociocultural implications and consequences. Although some models and research tools are available to guide the planning, design, implementation, monitoring and evaluation of health communication campaigns, the study was premised on two assumptions. First, most available models that guide the planning and execution of HIV/AIDS communication campaigns do not sufficiently highlight sociocultural variables; and second, because of this, the instruments designed to assess such campaigns are not sufficiently geared towards identifying and assessing their sociocultural variables. In light of these assumptions, the study was undertaken for three reasons. Firstly, to construct a sociocultural health communication campaign conceptual model that incorporates and highlights sociocultural variables to guide the planning and implementation of health communication campaigns, particularly HIV/AIDS communication campaigns. Secondly, to develop an instrument for assessing the presence or absence of sociocultural variables in the planning and implementation of health communication campaigns. Thirdly, to test the theoretical sociocultural assessment instrument developed in the study on an HIV/AIDS communication campaign of the Ekurhuleni Metropolitan Municipality's HIV/AIDS Unit. The results indicated that the instrument is a functional sociocultural assessment tool that can be used to determine three main aspects. Firstly, whether, and at what level, there is or was active involvement and participation of the target audience in the communication campaign process.
Secondly, whether, and at what level, the sociocultural context was taken into consideration in the planning and execution of a campaign, and the relevant elements of that context incorporated in the campaign process. Thirdly, whether, and at what level, relevant theories or models underpinned the whole health communication campaign process at the planning, design, implementation, monitoring and evaluation stages. The sociocultural assessment instrument is therefore not meant for assessing the effectiveness of health communication campaigns per se. Rather, it is meant to ascertain the presence or absence of these three aspects, on the assumption that if they are taken care of in the planning and implementation of such campaigns, the campaigns will probably be more socioculturally appropriate. The implications of this study are that, for health communication campaigns to be socioculturally appropriate, they should display continuous community interactivity and participation (ensuring a mutual relationship between campaign planners and the target audience) in their planning, implementation and evaluation, making the whole campaign process strategic and integrative: their management should be strategic, their implementation creative, and their monitoring and evaluation continuous.
Communication Science
D. Litt. et Phil. (Communication)
APA, Harvard, Vancouver, ISO, and other styles