
Dissertations / Theses on the topic 'Modes of measurement'


Consult the top 50 dissertations / theses for your research on the topic 'Modes of measurement.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Forno, Alexandre de Oliveira Dal. "Measurement of Near Field Propagation Modes in Optical Waveguides." Pontifícia Universidade Católica do Rio de Janeiro, 1995. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8747@1.

Full text
Abstract:
TELECOMUNICAÇÕES BRASILEIRAS S/A
Near-field characterization of optical waveguides is essential when fabricating monomode waveguides. In this work, a measurement system based on the near-field measurement was used to characterize the modal content of waveguides. Near-field measurements were made on two GaAs/AlGaAs waveguides with different structures in order to verify their modal characteristics. Important parameters in this measurement, such as the alignment and the focal adjustment of the optical system, were also analyzed. From the measured data, the spot size of the waveguides was calculated in the horizontal and vertical directions, and the coupling efficiency with monomode fibers was verified. To assess the precision of the measurement system, a calibrated monomode fiber was measured, showing an error of approximately 1.5% in the spot-size measurements.
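The spot-size figure reported above can be illustrated with a second-moment estimate on a one-dimensional near-field intensity profile. This is a generic sketch on synthetic Gaussian data, not the thesis's measurement code; the grid, width, and units are assumptions.

```python
import numpy as np

# Synthetic 1-D near-field intensity profile: a Gaussian with a known
# 1/e^2 intensity radius (all values are assumptions for the sketch).
x = np.linspace(-10.0, 10.0, 2001)           # transverse position, micrometres
w_true = 3.0                                  # 1/e^2 intensity radius
intensity = np.exp(-2.0 * (x / w_true) ** 2)

# Second-moment estimate: for a Gaussian beam, the 1/e^2 radius equals
# twice the intensity-weighted standard deviation of the profile.
centroid = np.sum(x * intensity) / np.sum(intensity)
sigma = np.sqrt(np.sum((x - centroid) ** 2 * intensity) / np.sum(intensity))
w_est = 2.0 * sigma

error_pct = abs(w_est - w_true) / w_true * 100.0
```

On clean synthetic data the recovered width matches the true one to well under the ~1.5% error quoted for the calibrated fiber; real profiles add background and alignment errors on top.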
APA, Harvard, Vancouver, ISO, and other styles
2

Hupf, Christina [Verfasser], and Joachim [Akademischer Betreuer] Wegener. "Impedance-based analysis of 3D tissue models: A novel measurement setup for novel measurement modes / Christina Hupf ; Betreuer: Joachim Wegener." Regensburg : Universitätsbibliothek Regensburg, 2018. http://d-nb.info/1156008387/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Truitt, Patrick A. "Measurement of coupling between the electron gas and nanomechanical modes." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7738.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Dept. of Physics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
4

Georgas, Peter. "Simultaneous measurement of full-field vibration modes using Electronic Speckle Pattern Interferometry (ESPI)." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42841.

Full text
Abstract:
The natural frequencies and vibration mode shapes of flat plates are measured using ESPI, even when multiple modes are simultaneously present. The method involves measuring the surface shape of a vibrating plate at high frame rate using a Michelson interferometer and high-speed camera. The vibration is either excited by white (random) noise or by impact. Fourier analysis of the acquired data gives the natural frequencies and associated mode shapes. The analytical procedure used has the advantage that it simultaneously identifies all vibration modes with frequencies up to half the sampling frequency. In comparison, the ESPI Time-Averaged method and the traditional Chladni method both require that the plate be sinusoidally excited at each natural frequency to allow separate measurements of the associated mode shapes. Example measurements are presented to illustrate the use and capabilities of the proposed plate natural frequency and mode shape measurement method.
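The Fourier step described above can be sketched as a per-pixel FFT over a stack of synthetic "camera frames"; the mode shapes, frequencies, grid, and frame rate below are invented for illustration.

```python
import numpy as np

# Synthetic high-speed-camera data: two plate modes present simultaneously.
fs = 2000.0                                   # frame rate, Hz
t = np.arange(1000) / fs                      # 0.5 s record
ny, nx = 8, 8                                 # coarse pixel grid on the plate
X, Y = np.meshgrid(np.linspace(0, np.pi, nx), np.linspace(0, np.pi, ny))
mode1 = np.sin(X) * np.sin(Y)                 # assumed (1,1) mode shape
mode2 = np.sin(2 * X) * np.sin(Y)             # assumed (2,1) mode shape
f1, f2 = 120.0, 310.0                         # assumed natural frequencies
frames = (mode1[..., None] * np.sin(2 * np.pi * f1 * t)
          + 0.5 * mode2[..., None] * np.sin(2 * np.pi * f2 * t))

# Per-pixel FFT over time: each spectral peak is a natural frequency, and the
# amplitude map at that bin is the associated mode shape.  Everything up to
# half the frame rate (here 1 kHz) is identified from a single record.
spectrum = np.fft.rfft(frames, axis=-1)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
mean_mag = np.abs(spectrum).mean(axis=(0, 1))
peak_bins = np.argsort(mean_mag)[-2:]         # two dominant spectral bins
found = sorted(float(f) for f in freqs[peak_bins])
```

Both frequencies are recovered from one record, which is exactly the advantage over exciting each mode sinusoidally in turn.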
APA, Harvard, Vancouver, ISO, and other styles
5

Taniguchi, Nanae. "Measurement of branching fractions, isospin and CP-violating asymmetries for exclusive b → dγ modes." 京都大学 (Kyoto University), 2008. http://hdl.handle.net/2433/124374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Baselt, Tobias, Christopher Taudt, Bryan Nelsen, Andrés Fabián Lasagni, and Peter Hartmann. "Experimental measurement and numerical analysis of group velocity dispersion in cladding modes of an endlessly single-mode photonic crystal fiber." SPIE, 2017. https://tud.qucosa.de/id/qucosa%3A35152.

Full text
Abstract:
The optical properties of the guided modes in the core of photonic crystal fibers (PCFs) can be easily manipulated by changing the air-hole structure in the cladding. Special properties can be achieved in this way, such as endlessly single-mode operation. Endlessly single-mode fibers, which enable single-mode guidance over a wide spectral range, are indispensable in the field of fiber technology. A two-dimensional photonic crystal with a silica central core and a micrometer-spaced hexagonal array of air holes is an established way to achieve endlessly single-mode properties. In addition to the guidance of light in the core, different cladding modes occur. The coupling between the core and the cladding modes can affect the endlessly single-mode guidance. There are two possible ways to determine the dispersion: measurement and calculation. We calculate the group velocity dispersion (GVD) of different cladding modes based on the measurement of the fiber structure parameters, the hole diameter and the pitch of a presumed homogeneous hexagonal array. Based on a scanning electron image, the optical guiding properties of the microstructured cladding were calculated. We compare the calculation with a method to measure the wavelength-dependent time delay. We measure the time delay of defined cladding modes with a homemade supercontinuum light source in a white-light interferometric setup. To measure the dispersion of cladding modes of optical fibers with high accuracy, a time-domain white-light interferometer based on a Mach-Zehnder interferometer is used. The experimental setup allows the determination of the wavelength-dependent differential group delay of light travelling through a thirty-centimeter piece of test fiber in the wavelength range from VIS to NIR. The determination of the GVD using different methods enables the evaluation of the individual methods for characterizing the cladding modes of an endlessly single-mode fiber.
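One way to picture the measurement side: once the differential group delay τ(λ) of a cladding mode is in hand, the dispersion parameter follows as D = (1/L)·dτ/dλ. The sketch below differentiates a synthetic quadratic delay curve, not the paper's data.

```python
import numpy as np

# Synthetic wavelength-dependent group delay for one cladding mode of a
# thirty-centimetre test fibre; the quadratic delay curve is an assumption.
L_fiber = 0.30                                  # fibre length, m
lam = np.linspace(600e-9, 1500e-9, 181)         # wavelength grid, m (VIS-NIR)
tau = 1e-12 * 5.0 * ((lam - 1.0e-6) / 1e-7) ** 2    # "measured" delay, s

# Dispersion parameter D = (1/L) * d(tau)/d(lambda), converted to the
# customary ps/(nm km): 1 s/m^2 = 1e6 ps/(nm km).
D = np.gradient(tau, lam) / L_fiber
D_ps_nm_km = D * 1e6
```

With this synthetic curve the zero-dispersion point sits at 1000 nm and D grows linearly away from it; real cladding-mode data would of course be noisier and require smoothing before differentiation.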
APA, Harvard, Vancouver, ISO, and other styles
7

Jones, Cameron Bennion. "Development and Validation of a Vibration-Based Sound Power Measurement Method." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/9260.

Full text
Abstract:
The International Organization for Standardization (ISO) provides no vibration-based sound power measurement standard that yields Precision (Grade 1) results. Current standards that provide Precision (Grade 1) results require known acoustic environments or complex setups. This thesis details the Vibration Based Radiation Mode (VBRM) method as one approach that could potentially be used to develop a Precision (Grade 1) standard. The VBRM method uses measured surface velocities of a structure and combines them with the radiation resistance matrix to calculate sound power. In this thesis the VBRM method is used to measure the sound power of a single-plate and a multiple-plate system. The results are compared to sound power measurements using ISO 3741, and good agreement between the 200 Hz and 4 kHz one-third octave bands is shown. It is also shown that, in the case of two plates separated by a distance and driven by uncorrelated sources, the contribution of each individual plate to the sound power can be calculated while both are simultaneously excited. The VBRM method is then extended to account for acoustically radiating cylindrical geometries. The mathematical formulations of the radiation resistance matrix and the accompanying acoustic radiation modes of a baffled cylinder are developed. Numerical sound power calculations using the VBRM method and a boundary element method (BEM) are compared and show good agreement. Experimental surface velocity measurements of a cylinder are taken using a scanning laser Doppler vibrometer (SLDV), and the VBRM method is used to calculate the sound power of the cylinder experimentally. The results are compared to sound power measurements taken using ISO 3741.
APA, Harvard, Vancouver, ISO, and other styles
8

Hensley, Kiersten Kenning. "Examining the effects of paper-based and computer-based modes of assessment on mathematics curriculum-based measurement." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1627.

Full text
Abstract:
The computer-to-pupil ratio has changed drastically in the past decades, from 125:1 in 1983 to less than 2:1 in 2009 (Gray, Thomas, and Lewis, 2010), allowing teachers and students to integrate technology throughout the educational experience. The area of educational assessment has adapted to the increased use of technology. Trends in assessment and technology include a movement from paper-based to computer-based testing for all types of assessments, from large-scale assessments to teacher-created classroom tests. Computer-based testing comes with many benefits when compared to paper-based testing, but it is necessary to determine whether results are comparable, especially in situations where computer-based and paper-based tests can be used interchangeably. The main purpose of this study was to expand upon the base of research comparing paper-based and computer-based testing, specifically with elementary students and mathematical fluency. The study was designed to answer the following research questions: (1) Are there differences in fluency-based performance on math computation problems presented on paper versus on the computer? (2) Are there differential mode effects on computer-based tests based on sex, grade level, or ability level? A mixed-factorial design with both within- and between-subject variables was used to investigate the differences between performance on paper-based and computer-based tests of mathematical fluency. Participants completed both paper- and computer-based tests, as well as the Group Math Assessment and Diagnostic Evaluation as a measure of general math ability. Overall findings indicate that performance on paper- and computer-based tests of mathematical fluency is not comparable and that student grade level may be a contributing factor in that difference.
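The within-subject mode comparison at the heart of this design can be illustrated with a paired t statistic on synthetic scores; the sample size and the assumed two-digit mode effect are invented for the sketch.

```python
import numpy as np

# Synthetic paired fluency scores for the same students under the two modes.
rng = np.random.default_rng(42)
n = 40
paper = rng.normal(30.0, 6.0, n)                    # digits correct, paper
computer = paper - 2.0 + rng.normal(0.0, 3.0, n)    # assumed mode effect of -2

# Paired t statistic on the within-student differences.
d = computer - paper
t_stat = float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))
```

A clearly negative t statistic here mirrors the study's finding that the two modes are not interchangeable; the actual analysis used a mixed-factorial model to test mode by sex, grade, and ability interactions as well.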
APA, Harvard, Vancouver, ISO, and other styles
9

Ikeda, Tatsunori. "Improvement of surface wave methods for constructing subsurface S-wave velocity structures." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ghosh, Aparna. "Fabrication and characterization of solid-state gas sensors." Thesis, University of North Bengal, 2020. http://ir.nbu.ac.in/handle/123456789/4338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Moussou, Julien. "Caractérisation expérimentale du flux thermique transitoire pariétal pour différents modes de combustion." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0010/document.

Full text
Abstract:
CO2 emissions in internal combustion engines are linked with inefficiencies due to wall heat losses during combustion. Those losses exhibit a sharp peak of a few MW/m2 close to top dead center and are linked to complex flame-wall interaction phenomena that vary with the combustion mode. A fine understanding of the associated phenomena requires experimental characterization of the wall heat flux with a time resolution better than a millisecond. In this PhD work, a rapid compression machine and a constant-volume precombustion cell are used to reproduce engine combustion phenomena. Thin-junction thermocouples allow an instantaneous measurement of the wall heat flux with a time resolution of 0.1 ms. Three combustion modes are generated: propagation flame, diffusion flame and auto-ignition. Different possible measurement technologies and procedures (thermocouples, thermoresistances and rapid phosphor thermometry) are compared and benchmarked against the metrological requirements imposed by the speed of the combustion phenomena. The wall heat flux reaches values of a few MW/m2 and its shape varies with the combustion mode. During premixed flame propagation, the flux is dominated by a peak during flame-wall interaction of about 5 MW/m2 in amplitude and 0.5 ms in duration. During Diesel combustion, the heat flux is approximately constant over the injection duration; its evolution is driven by an increase of the convective heat transfer coefficient up to 10 kW/m2/K, which is attributed to air entrainment by the spray; the temperature increase from combustion is a second-order effect. During combustion presenting pressure-wave propagation (e.g. knock in some spark-ignition cases or HCCI with high energy content), the intensity of the pressure oscillations and the wall heat flux are shown to be correlated, independently of the phenomenon creating the pressure wave; the heat flux during knock is 3 to 5 times higher than for a comparable premixed propagation flame.
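Thin-junction thermocouple traces like those described above are commonly converted to heat flux by assuming one-dimensional semi-infinite conduction; one standard discretisation is the Cook-Felderman formula. The thesis does not state which scheme it uses, so this is a generic sketch with assumed wall properties, checked against the analytic constant-flux solution.

```python
import numpy as np

def cook_felderman(T, dt, rho_c_k):
    """Surface heat flux from a wall temperature history, assuming 1-D
    semi-infinite conduction (Cook-Felderman discretisation)."""
    n = T.size
    q = np.zeros(n)
    coef = 2.0 * np.sqrt(rho_c_k / np.pi)
    t = np.arange(n) * dt
    for j in range(1, n):
        dT = T[1:j + 1] - T[:j]
        denom = np.sqrt(t[j] - t[:j]) + np.sqrt(t[j] - t[1:j + 1])
        q[j] = coef * np.sum(dT / denom)
    return q

# Sanity check against a known analytic pair: a constant flux q0 into a
# semi-infinite wall gives T(t) = 2*q0*sqrt(t/pi)/sqrt(rho*c*k).
rho_c_k = 500.0 * 400.0 * 50.0      # assumed rho*c*k product of the wall
q0 = 5e6                             # 5 MW/m^2, the order quoted above
dt = 1e-5                            # 0.01 ms sampling
t = np.arange(200) * dt
T = 2.0 * q0 * np.sqrt(t / np.pi) / np.sqrt(rho_c_k)
q = cook_felderman(T, dt, rho_c_k)
```

The reconstructed flux converges to q0 after the first few samples, illustrating why a 0.1 ms resolution is enough to resolve a 0.5 ms flame-wall interaction peak.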
APA, Harvard, Vancouver, ISO, and other styles
12

Bank, Jason Noah. "Propagation of Electromechanical Disturbances across Large Interconnected Power Systems and Extraction of Associated Modal Content from Measurement Data." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/73008.

Full text
Abstract:
Changes in power system operating conditions cause dynamic changes in angle and frequency. These disturbances propagate throughout the system area with finite speed. This propagation takes the form of a traveling wave whose arrival time at a particular point in the system can be observed using a wide-area measurement system (WAMS). Observations of these waves, both through simulation and measurement data, have demonstrated several factors that influence the speed at which a disturbance propagates through a system. Results of this testing are presented which demonstrate dependence on generator inertia, damping and line impedance. Considering a power system as an area with an uneven distribution of these parameters, it is observed that a disturbance will propagate through a system at different rates in differing directions. This knowledge has applications in locating the originating point of a system disturbance, understanding the overall dynamic response of a power system, and determining the dependencies between various parts of that system. A simplified power system simulator is developed using the swing equation and system power flow equations. This simplified modeling technique captures the phenomenon of traveling electromechanical waves and demonstrates the same dependencies as data derived from measurements and commercial power system simulation packages. The ultimate goal of this research is to develop a methodology to approximate a real system with this simplified wave propagation model. In this architecture each measurement point would represent a pseudo-bus in the model. This procedure effectively lumps areas of the system into one equivalent bus with appropriately sized generators and loads. With the architecture of this reduced network determined, its parameters may be estimated so as to provide a best fit to the measurement data. Doing this effectively derives a data-driven equivalent system model.
With an appropriate equivalent model for a given system determined, incoming measurement data can be processed in real time to provide an indication of the system operating point. Additionally, as the system state is read in through measurement data, future measurement values along the same trajectory can be estimated. These estimates of future system values can provide information for advanced control and protection schemes. Finally, a procedure for the identification and extraction of inter-area oscillations is developed. The dominant oscillatory frequency is identified from an event region and then fit across the surrounding dataset. For each segment of this dataset, values of amplitude, phase and damping are derived for each measurement vector. Doing this builds up a picture of how the oscillation evolves over time and responds to system conditions. These results are presented in a graphical format as a movie tracking the modal phasors over time. Examples derived from real world measurement data are presented.
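The simplified swing-equation simulator described above can be sketched as a chain of machines with nearest-neighbour power flows; all parameter values below are invented for illustration. A short power pulse at one end produces threshold-crossing ("arrival") times that grow with distance, the traveling-wave behaviour the dissertation studies.

```python
import numpy as np

# Chain-of-machines sketch: semi-implicit Euler on the swing equations
#   M * domega/dt = Pm - Pe - D * omega,   ddelta/dt = omega,
# with Pe from nearest-neighbour flows b*sin(delta_i - delta_j).
n = 50                          # buses in the chain
M = np.full(n, 0.2)             # inertia constants (uniform here)
Dmp = np.full(n, 0.05)          # damping coefficients
b = 50.0                        # line susceptance between neighbours
dt = 1e-3
delta = np.zeros(n)
omega = np.zeros(n)
arrival = np.full(n, -1.0)      # first time each bus crosses a small threshold

for s in range(4000):
    flow = b * np.sin(delta[:-1] - delta[1:])
    Pe = np.zeros(n)
    Pe[:-1] += flow             # power leaving bus i toward i+1
    Pe[1:] -= flow              # power arriving at bus i+1
    Pm = np.zeros(n)
    Pm[0] = 1.0 if s * dt < 0.05 else 0.0   # short power pulse at bus 0
    omega += dt * (Pm - Pe - Dmp * omega) / M
    delta += dt * omega
    newly = (arrival < 0.0) & (np.abs(omega) > 1e-4)
    arrival[newly] = s * dt
```

In the continuum limit the front speed scales as sqrt(b/M), so raising inertia slows the wave and stiffer lines speed it up, matching the dependencies reported above.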
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Fatemi-Ghomi, Nasim. "Measurement of the double beta decay half-life of 150Nd and search for neutrinoless decay modes with the NEMO-3 detector." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496229.

Full text
Abstract:
The half-life for two-neutrino double beta decay of ¹⁵⁰Nd has been measured with data taken by the NEMO-3 experiment at the Modane Underground Laboratory. Using 924.7 days of data recorded with 36.55 g of ¹⁵⁰Nd, the half-life of this process is measured ... this result has significantly improved the previous limit on neutrinoless double beta decay of this isotope. Limits are also set on several other neutrinoless double beta decay modes.
APA, Harvard, Vancouver, ISO, and other styles
14

Müller, Vanessa [Verfasser], Bernhard [Akademischer Betreuer] Spaan, and Kevin [Gutachter] Kröninger. "Improvement of the LHCb measurement of sin(2β) in additional B0 decay modes / Vanessa Müller ; Gutachter: Kevin Kröninger ; Betreuer: Bernhard Spaan." Dortmund : Universitätsbibliothek Dortmund, 2019. http://d-nb.info/1195215940/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wardrope, David Robert. "Preparations for measurement of electroweak vector boson production cross-sections using the electron decay modes, with the Compact Muon Solenoid detector." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/5517.

Full text
Abstract:
The Compact Muon Solenoid was designed to make discoveries at the TeV scale: to elucidate the nature of electroweak symmetry breaking and to search for physics beyond the Standard Model. For any such discovery to be credible, it must first be demonstrated that the CMS detector is understood. One mechanism to make this demonstration is to measure "standard candle" processes, such as W and Z production. This thesis describes preparations undertaken to make these measurements using the electron decay modes, with ∫L dt = 10 pb⁻¹ of collision data. The energy resolution of the electromagnetic calorimeter was measured in test beam data. An improved method of deriving the optimised weights necessary for amplitude reconstruction is described. The measurement of electron charge using tracks is impaired by the electron showering in the tracker material. A novel charge measurement technique that is complementary to the existing method was assessed. Missing transverse energy is a powerful discriminating variable for the selection of W± → e±νe events; however, it is difficult to simulate accurately due to its global nature. The Ersatz Missing Energy method was developed to provide reliable and accurate descriptions of missing energy from data using readily reconstructible γ*/Z → e⁺e⁻ events. The method is described and evaluated. Finally, the measurement strategy for W and Z boson production cross-sections in early data is outlined and analysed using simulated data. Significant results can be obtained with only ∫L dt = 10 pb⁻¹.
APA, Harvard, Vancouver, ISO, and other styles
16

Leary, Cody Collin 1981. "Measurement and control of transverse photonic degrees of freedom via parity sorting and spin-orbit interaction." Thesis, University of Oregon, 2010. http://hdl.handle.net/1794/10910.

Full text
Abstract:
xv, 215 p. : ill. (some col.) A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
In this dissertation, several new methods for the measurement and control of transverse photonic degrees of freedom are developed. We demonstrate a mode sorter for two-dimensional (2-D) parity of transverse spatial states of light based on an out-of-plane Sagnac interferometer. The first experimental 2-D parity sorting measurements of Hermite-Gauss transverse spatial modes are presented. Due to the inherent phase stability of this type of interferometer, it provides a promising tool for the manipulation of higher order transverse spatial modes for the purposes of quantum information processing. We propose two such applications: the production of both spatial-mode entangled Bell states and heralded single photons, tailored to cover the entire Poincaré sphere of first-order transverse modes. In addition to the aforementioned transverse spatial manipulation based on free-space parity sorting, we introduce several more such techniques involving photons propagating in optical fibers. We show that when a photon propagates in a cylindrically symmetric waveguide, its spin angular momentum and its orbital angular momentum (OAM) interact. This spin-orbit interaction (SOI) leads to the prediction of several novel rotational effects: the spatial or time evolution of the photonic polarization vector is controlled by its OAM quantum number or, conversely, its spatial wave function is controlled by its spin. We demonstrate how these phenomena can be used to reversibly transfer entanglement between the spin and OAM degrees of freedom of two-particle states. In order to provide a deeper insight into the cause of the SOI for photons, we also investigate an analogous interaction for electrons in a cylindrical waveguide and find that each of the SOI effects mentioned above remain manifest for the electron case. 
We show that the SOI dynamics are quantitatively described by a single expression applying to both electrons and photons and explain their common origin in terms of a universal geometric phase associated with the interplay between either particle's spin and OAM. This implies that these SOI-based effects occur for any particle with spin and thereby exist independently of whether or not the particle has mass, charge, or magnetic moment.
Committee in charge: Daniel Steck, Chairperson, Physics; Michael Raymer, Member, Physics; Jens Noeckel, Member, Physics; Steven van Enk, Member, Physics; Andrew Marcus, Outside Member, Chemistry
APA, Harvard, Vancouver, ISO, and other styles
17

Diakite, Cheick. "Identification d’un modèle de mesure au sol du diagramme de rayonnement des antennes installées sur aéronefs." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4007.

Full text
Abstract:
The radiation pattern of an antenna installed on an aircraft must be determined in order to validate the RF systems associated with that antenna. When the antenna is mounted on the aircraft, the radiating structure is formed by the antenna and the aircraft together. The pattern must be determined at a large distance from the source, in the far-field region, which depends on the operating frequency and the maximum size of the radiating structure. For example, to characterize at 88 MHz a radiating structure with a maximum dimension of 12 m, the far-field distance is of the order of 85 m. Currently, this far-field distance is obtained through flight tests during which the aircraft is 80 km from the ground station recording the pattern. This characterization method is not sustainable in the long term, because of its cost and measurement time, which is why Airbus Helicopters wants to implement a ground-based measurement; the ground, however, strongly disturbs the radiation. The method adopted allows measurements to be made on the ground in a compact volume while cancelling out the influence of the ground. It relies on near-field characterization in a cylindrical geometry, combined with a radiation-pattern correction technique based on mode decomposition. Near-field data are scanned over a cylindrical surface, and a transformation algorithm developed in the MATLAB environment performs the near-field to far-field transformation. The far field is decomposed into cylindrical modes, and modal filtering is applied to remove the influence of the ground. The proposed solution has been validated numerically through comparison with results from electromagnetic simulation software, and validated experimentally on a scale-model prototype in the laboratory antenna measurement range.
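The modal-filtering idea can be illustrated in its simplest form: a field sampled on a circle is expanded in azimuthal modes exp(inφ), and orders above roughly k·a (the largest order a source of radius a radiates efficiently) are discarded as environment clutter. The pattern, clutter term, and radius below are assumptions, and this one-dimensional sketch omits the full cylindrical (axial) expansion.

```python
import numpy as np

# Azimuthal mode filtering on a circle of samples around the source.
lam = 3.4                          # wavelength at ~88 MHz, m
k = 2.0 * np.pi / lam
a = 6.0                            # assumed radius enclosing the source, m
n_max = int(np.ceil(k * a))        # highest efficiently radiating order

phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
clean = np.cos(2 * phi) + 0.3 * np.sin(5 * phi)     # hypothetical pattern
clutter = 0.2 * np.cos(40 * phi)                    # high-order ground artefact
measured = clean + clutter

coeffs = np.fft.fft(measured) / phi.size            # modal coefficients c_n
orders = np.fft.fftfreq(phi.size, d=1.0 / phi.size) # mode order of each bin
coeffs[np.abs(orders) > n_max] = 0.0                # modal filtering
filtered = np.real(np.fft.ifft(coeffs) * phi.size)
```

Because the physical pattern occupies only low-order modes while reflections alias into high orders, the truncation removes the artefact exactly in this idealised case.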
APA, Harvard, Vancouver, ISO, and other styles
18

Karavdina, Anastasia [Verfasser]. "Preparation for the accurate luminosity measurement by antiproton-proton elastic scattering and feasibility study for observation of h_c hadronic decay modes at the P̄ANDA experiment / Anastasia Karavdina." Mainz : Universitätsbibliothek Mainz, 2016. http://d-nb.info/1100175938/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Yixi. "Application of Item Response Tree (IRTree) Models on Testing Data: Comparing Its Performance with Binary and Polytomous Item Response Models." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587481533999313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Rezaei, Hengameh. "Models complexity measurement." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68701.

Full text
Abstract:
The demand for measuring quality aspects and the need for higher maintainability and understandability of models are increasing within the field of software engineering and management. Among these, complex models are of special interest for designers, as they are more correlated with the eventual reliability of the system and are therefore considered very important. This study presents a method for measuring the complexity of existing software models at Ericsson, seeking to raise the maintainability and understandability of the software engineering project in progress. A literature survey was performed in order to find a list of all potentially useful metrics. The long list of metrics was narrowed down through interviews with designers at Ericsson. The next step was a statistical analysis of the interview results. In addition, workshops were used to evaluate the reliability of the preliminary data analysis, and an empirical formula was generated for predicting model complexity. Metrics such as "non-self-transitions", "transitions per state", and "state depth" are the most important for calculating a model's complexity score (rank), and threshold values were set for these metrics. Challenges and experiences gained in this study demonstrate the importance of incorporating user-generated feedback in empirical complexity modeling studies.
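A threshold-based complexity score of the kind described can be sketched as a weighted sum of metric excesses; the weights and thresholds below are invented for illustration and are not the empirical formula derived in the thesis.

```python
# Hypothetical threshold-based model-complexity score.  Metric names follow
# the abstract; all weights and thresholds here are assumptions.
METRICS = {
    "non_self_transitions": {"weight": 0.5, "threshold": 30},
    "transitions_per_state": {"weight": 0.3, "threshold": 4},
    "state_depth": {"weight": 0.2, "threshold": 5},
}

def complexity_score(model):
    """Weighted sum of how far each metric exceeds its threshold."""
    score = 0.0
    for name, cfg in METRICS.items():
        value = model.get(name, 0)
        excess = max(0.0, value / cfg["threshold"] - 1.0)
        score += cfg["weight"] * excess
    return score

simple_model = {"non_self_transitions": 12, "transitions_per_state": 2, "state_depth": 3}
complex_model = {"non_self_transitions": 90, "transitions_per_state": 9, "state_depth": 8}
```

A model below every threshold scores zero, so the score directly ranks which models exceed the agreed limits and by how much.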
APA, Harvard, Vancouver, ISO, and other styles
21

Sbaibi, Ahmed. "Contribution à l'étude du comportement statique et dynamique des capteurs thermiques (fils froids, fils chauds et thermocouples) en présence de différents modes de transferts thermiques." Rouen, 1987. http://www.theses.fr/1987ROUES030.

Full text
Abstract:
A combined theoretical and experimental study of subsonic gas flows. The experimental results concerning the time constants of the sensors in convective or radiative situations, and the transfer functions of hot wires and cold wires, are in good agreement with the theoretical analysis presented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
22

Si, Ching-Fung B. "Ability Estimation Under Different Item Parameterization and Scoring Models." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3116/.

Full text
Abstract:
A Monte Carlo simulation study investigated the effect of scoring format, item parameterization, threshold configuration, and prior ability distribution on the accuracy of ability estimation given various IRT models. Item response data on 30 items from 1,000 examinees was simulated using known item parameters and ability estimates. The item response data sets were submitted to seven dichotomous or polytomous IRT models with different item parameterization to estimate examinee ability. The accuracy of the ability estimation for a given IRT model was assessed by the recovery rate and the root mean square errors. The results indicated that polytomous models produced more accurate ability estimates than the dichotomous models, under all combinations of research conditions, as indicated by higher recovery rates and lower root mean square errors. For the item parameterization models, the one-parameter model out-performed the two-parameter and three-parameter models under all research conditions. Among the polytomous models, the partial credit model had more accurate ability estimation than the other three polytomous models. The nominal categories model performed better than the general partial credit model and the multiple-choice model with the multiple-choice model the least accurate. The results further indicated that certain prior ability distributions had an effect on the accuracy of ability estimation; however, no clear order of accuracy among the four prior distribution groups was identified due to an interaction between prior ability distribution and threshold configuration. The recovery rate was lower when the test items had categories with unequal threshold distances, were close at one end of the ability/difficulty continuum, and were administered to a sample of examinees whose population ability distribution was skewed to the same end of the ability continuum.
APA, Harvard, Vancouver, ISO, and other styles
23

Smith, Austin. "Agreement Level of Running Temporal Measurements, Kinetics, and Force-Time Curves Calculated from Inertial Measurement Units." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3861.

Full text
Abstract:
Inertial measurement units (IMUs) and wearable sensors have enabled athlete monitoring and research to become more ecologically valid due to their small size and low cost. IMUs and accelerometers that are placed on the body close to the point of impact and that record at sufficiently high frequencies have demonstrated the highest validity when measuring temporal gait events such as ground contact time (GCT) and flight time (FT), as well as peak forces (PF), during upright running. While the use of IMUs has increased in the sport performance and athlete monitoring realm, the technology’s potential to estimate running force-time curves utilizing the two-mass model (TMM) remains unexplored. The purpose of this study was twofold: first, to determine the validity of measuring temporal gait events and peak forces utilizing a commercially available shank-mounted inertial measurement unit; second, to determine the validity of force-time curves generated from the TMM utilizing data from shank-mounted inertial measurement units. Ten subjects voluntarily completed submaximal tests on a treadmill equipped with a force plate while wearing shank-mounted IMUs on each leg. Using the raw data from the IMUs, GCT, FT, total step time (ST), PF, and two-mass-model-based force-time (F-t) curves were generated for 25 steps at 8 different speeds. Paired-sample t-tests were performed on the gait events and peak force between the IMU and treadmill, with both individual step comparisons and averages per speed. 95% confidence intervals were calculated for each timepoint of the force-time curves. No statistically significant differences (p > 0.05) and nearly perfect relationships were observed for the step averages at each speed for FT, ST, and PF. Confidence intervals of the corrected mean difference suggest that F-t curves calculated from the TMM may not be valid when assessing the running population as a whole.
When performing a sub-group analysis of skilled runners and recreational runners, F-t curves derived from shank-mounted IMUs may be more valid in skilled runners than in recreational runners. In skilled runners, the 95% CI for the mean difference contained zero within the first 60% of the GCT duration, whereas the 95% CI for recreational runners contained a zero value in a smaller percentage of the GCT, located only in the middle of the GCT at the curve peak height. The results of this study suggest that interchangeability between shank-mounted IMUs and force plates may be very limited when estimating temporal gait events and kinetics. While agreement was low between F-t curves after the peak in skilled runners, use of shank-mounted IMUs to estimate F-t curves may still have several benefits in skilled runners when assessing peak forces and force development from initial contact until peak force.
APA, Harvard, Vancouver, ISO, and other styles
24

Machekhin, Yu P. "Measurement of non physical quantities." Thesis, XXI IMEKO World Congress, Prague, Czech Republic, 2015. http://openarchive.nure.ua/handle/document/8669.

Full text
Abstract:
The work focuses on the development of measurement methods in non-physical areas of human activity. It analyzes non-technical human activities lying in the social, biological, economic and other areas where there are no standards or measurement procedures built on mathematical and physical models. It is shown that for measuring non-physical quantities it is necessary to use methods that provide both the measurements and the processing of measurement results. Measurement of quantities for which there are no reference values is possible only when a connection, having the nature of a scale, is established between the quantity under test and operating parameters. The basis of these scales are measured intervals of time during which the dynamic system returns to a stable state.
APA, Harvard, Vancouver, ISO, and other styles
25

Ferria, Hakim. "Contribution to Numerical and Experimental Studies of Flutter in Space Turbines. Aerodynamic Analysis of Subsonic or Supersonic Flows in Response to a Prescribed Vibratory Mode of the Structure." Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00677648.

Full text
Abstract:
Modern turbomachines are designed towards thinner, lighter and highly loaded blades. This gives rise to increased sensitivity to flow-induced vibrations such as flutter, which leads to structural failure in a short period of time if not sufficiently damped. Although numerical tools are more and more reliable, flutter prediction still depends to a large degree on simplified models. In addition, the critical nature of flutter, resulting in few well-documented real cases in the open literature, and the lack of experimental databases typical of engine flows make its understanding even more challenging. In that context, the present thesis is dedicated to studying flutter in recent turbines through aerodynamic analysis of subsonic or supersonic flows in response to a prescribed vibratory mode of the structure. The objective is to highlight some mechanisms potentially responsible for flutter in order to be in a better position when designing blades. The strategy consists in leading both experimental and numerical investigations. The experimental part is based on a worldwide unique annular turbine sector cascade employed for measuring the aeroelastic response by means of the aerodynamic influence coefficient technique. The cascade comprises seven low-pressure gas turbine blades, one of which can oscillate in a controlled way as a rigid body. Aeroelastic responses are measured at various mechanical and aerodynamic parameters: pure and combined modeshapes, reduced frequency, Mach number, incidence angle. In addition to turbulence level measurements, the database aims at assessing the influence of these parameters on the aerodynamic damping, at validating the linear combination principle and at providing input for numerical tools. The numerical part is based on unsteady computations linearized in the frequency domain and performed in the traveling wave mode.
The focus is put on two industrial space turbines. 2D computations are performed on an integrally bladed disk, also called a blisk; its very low viscous material damping results in complex motions with combined modes and extremely high reduced frequency. The blisk operates at low subsonic conditions without strong non-linearities. Although the blades have been predicted aeroelastically stable, an original methodology based on elementary decompositions of the blade motion is presented to identify the destabilizing movements. The results suggest that the so-called classical flutter is surprisingly prone to occur. Moreover, the aerodynamic damping has been found extremely sensitive to the interblade phase angle and cut-on/cut-off conditions. 3D computations are then performed on a supersonic turbine, which features shock waves and boundary layer separation. In contrast, the blade motion is of elementary nature, i.e. purely axial. The blades have been predicted aeroelastically unstable for backward traveling waves and stable for forward traveling waves. The low reduced frequencies allow quasi-steady analysis, which still accounts for flutter mechanisms: the shock wave motion establishes the boundary between stable and unstable configurations.
APA, Harvard, Vancouver, ISO, and other styles
26

Li, Qie. "A Bayesian Hierarchical Model for Multiple Comparisons in Mixed Models." Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1342530994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kwong, Sunny Kai-Sun. "Price-sensitive inequality measurement." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25807.

Full text
Abstract:
The existing inequality indexes in the economics literature (including the more sophisticated indexes of Muellbauer (1974) and Jorgenson-Slesnick (1984)), are found to be insensitive to relative price changes or are unjustifiable in terms of social evaluation ethics or both. The present research fills this gap in the literature by proposing a new index, named the Individual Equivalent Income (IEI) index. A household indirect utility function is hypothesized which incorporates certain attribute parameters in the form of equivalence scales. These attributes are demographic and environmental characteristics specific to a given household. This indirect utility function gives a number which represents the utility of each member of the household. A particular level of interpersonal comparison of utilities is assumed which gives rise to an exact individual utility indicator named equivalent income. A distribution of these equivalent incomes forms the basis of a price-sensitive relative inequality index. This index can be implemented in the Canadian context. Preferences are assumed to be nonhomothetic translog and demand data are derived from cross-section surveys and time-series aggregates. Based on demand data, the translog equivalent income function can be estimated and equivalent incomes imputed to all individuals in society. An Atkinson index of equivalent incomes is then computed to indicate the actual degree of inequality in Canada. The new IEI index is compared with other indexes based on a common data set. The main findings are: conventional indexes give bad estimates of the true extent of inequality and the IEI index, while providing a more accurate estimate, indicates distributive price impact in a predictable manner, i.e., food price inflation aggravates while transportation price inflation ameliorates the inequality problem.
Arts, Faculty of
Vancouver School of Economics
Graduate
APA, Harvard, Vancouver, ISO, and other styles
28

Ang, Keng Cheng. "A computational fluid dynamic study of blood flow through stenosed arteries /." Title page, table of contents and summary only, 1996. http://web4.library.adelaide.edu.au/theses/09PH/09pha5808.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Linder, Jonas. "Indirect System Identification for Unknown Input Problems : With Applications to Ships." Doctoral thesis, Linköpings universitet, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-134126.

Full text
Abstract:
System identification is used in engineering sciences to build mathematical models from data. A common issue in system identification problems is that the true inputs to the system are not fully known. In this thesis, existing approaches to unknown input problems are classified and some of their properties are analyzed.  A new indirect framework is proposed to treat system identification problems with unknown inputs. The effects of the unknown inputs are assumed to be measured through possibly unknown dynamics. Furthermore, the measurements may also be dependent on other known or measured inputs and can in these cases be called indirect input measurements. Typically, these indirect input measurements can arise when a subsystem of a larger system is of interest and only a limited set of sensors is available. Two examples are when it is desired to estimate parts of a mechanical system or parts of a dynamic network without full knowledge of the signals in the system. The input measurements can be used to eliminate the unknown inputs from a mathematical model of the system through algebraic manipulations. The resulting indirect model structure only depends on known and measured signals and can be used to estimate the desired dynamics or properties. The effects of using the input measurements are analyzed in terms of identifiability, consistency and variance properties. It is shown that cancelation of shared dynamics can occur and that the resulting estimation problem is similar to errors-in-variables and closed-loop estimation problems because of the noisy inputs used in the model. In fact, the indirect framework unifies a number of already existing system identification problems that are contained as special cases. For completeness, an instrumental variable method is proposed as one possibility for estimating the indirect model. 
It is shown that multiple datasets can be used to overcome certain identifiability issues and two approaches, the multi-stage and the joint identification approach, are suggested to utilize multiple datasets for estimation of models. Furthermore, the benefits of using the indirect model in filtering and for control synthesis are briefly discussed.  To show the applicability, the framework is applied to the roll dynamics of a ship for tracking of the loading conditions. The roll dynamics is very sensitive to changes in these conditions and a worst-case scenario is that the ship will capsize.  It is assumed that only motion measurements from an inertial measurement unit (IMU) together with measurements of the rudder angle are available. The true inputs are thus not available, but the measurements from the IMU can be used to form an indirect model from a well-established ship model. It is shown that only a subset of the unknown parameters can be estimated simultaneously. Data was collected in experiments with a scale ship model in a basin and the joint identification approach was selected for this application due to the properties of the model. The approach was applied to the collected data and gave promising results.
Unlike many other industries, where advanced control systems have undergone extensive development over the past decades, control systems for the shipping and marine industry have not developed to the same extent. It is primarily during the last 10 years that legal requirements and rising operating costs have increased interest in efficiency and safety through the use of control systems. Shipping companies and the marine industry are now interested in how the advanced control systems used in other fields can be applied for marine purposes. The main goal is typically to reduce total energy use, and thereby fuel consumption, by continuously replanning how the ship is to be used with the help of new information, and by controlling the ship and its subsystems in a way that maximizes efficiency. For many of these advanced control systems, a good understanding of the behavior of the system to be controlled is fundamental. Mathematical models of the system are often used for this purpose. Such models can be created by observing how the system reacts to external influences and using these observations to find, or estimate, the model that best describes the observations. The observations are measurements made with so-called sensors, and the technique of creating models from the measurements is called system identification. This is fundamentally a challenging problem, and it can be made even harder if the sensors needed are not available or are too expensive to install. In this thesis a new technique is proposed in which the available measurements are used in a new and different way. This can reduce the number of sensors required or enable the use of alternative sensors in the model building. With this new technique, simple sensors can be used to estimate a mathematical model of a part of the ship in a way that is not possible with traditional methods.
By estimating this model, physical properties of the ship, such as its mass and how the mass is distributed over the ship, can be monitored to detect changes. These two properties have a large influence on how the ship behaves, and if the ship is incorrectly loaded it may, in the worst case, capsize. Knowledge of these physical properties can thus, in addition to improving efficiency, be used to warn the crew or to influence the control systems so that dangerous maneuvers are avoided. To show that the technique works in practice, it has been applied to measurements collected from a scale model ship. The experiments were performed in a basin and the results show that the technique works. This new technique is not specific to marine use but can also be useful in other types of applications; in these applications, too, it enables the use of fewer or alternative sensors for model estimation. The technique can be especially useful when a model of a system or process operating in a network of many systems is of interest, something that is also discussed in the thesis.
APA, Harvard, Vancouver, ISO, and other styles
30

Kasperskaya, Yulia. "Essays on causal performance measurement models." Doctoral thesis, Universitat Pompeu Fabra, 2007. http://hdl.handle.net/10803/7348.

Full text
Abstract:
The thesis deals with modern causal management models, most notably the Balanced Scorecard (BSC). First, a conceptual analysis of its central assumption, the cause-and-effect relationships, is presented. Second, a statistical analysis of the BSC assumptions is offered. Third, drawing on two case studies of city councils, the implementation of the BSC in organizations is compared and analyzed. The conceptual analysis indicates that the complexity, uncertainty and dynamic character of the environment can hinder the elaboration of valid cause-and-effect models. The results of the statistical analysis do not confirm the hypothesis that stable relationships exist in the BSC. Finally, the case analysis suggests that organizations may resort to diverse tactics and scenarios to implement the BSC, and it emphasizes the influence that established organizational routines and norms exert on this process.
This dissertation address a number of research questions related to causal performance measurement models, such as the Balanced Scorecard (BSC). First, we provide an analytical discussion on the topic of the cause-and-effect relationships in these models. Second, we test of the BSC assumptions on empirical data coming from the dynamic enterprises. Third, we compare and analyze the BSC adoption by two city councils. The conceptual analysis indicates that dynamic, complex and uncertain environment and the cognitive limitations of managers may greatly challenge the elaboration of valid causal models. Our statistical results give little support to the existence of stable BSC links. Our case studies findings show diverse organizational rationalities behind the BSC adoption in organizations and the importance of the existing rules and routines.
APA, Harvard, Vancouver, ISO, and other styles
31

Lysko, Meena Devi. "Measurement and Models of Solar Irradiance." Doctoral thesis, Norwegian University of Science and Technology, Department of Physics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-779.

Full text
Abstract:

Solar radiation is the main driving force for the Earth’s weather and climate. It is also the prime source for renewable energy technologies. Solar energy technologies can satisfy the current energy demand and at the same time reduce anthropogenic greenhouse gas emissions. It follows from scientific and engineering research within the field that accurate terrestrial solar radiation data and derived models can improve the detection of long-term climate change, the validation of Earth radiation budget estimates, and the deployment of solar energy systems. But accurate assessment of solar energy at the Earth’s surface is difficult due to spatial, temporal and angular variability. These variations emphasise the need for localised solar radiation measurements and models.

In light of the above, this work has aimed at improving the quality of radiation data at two specific locations; one in Trondheim, Norway and the other in Durban, South Africa. This has entailed the development of an automatic data logging system, and frequent radiometer calibrations with traceability to the World Radiometric Reference. In addition, a radiometer called the Direct-Diffuse Solar Radiometer has been developed for the mapping of sky radiance. Supplementary work has been done to re-affirm the need for quality ground-based data.

The work done is twofold: the first part concerns the measurement of solar irradiance and the second the implementation of solar radiation models. So the presentation of the work is divided into two parts. Chapters 1 to 5 are included in Part A under the heading “Measurement of Solar Irradiance”. Chapter 6 is given in Part B, “Data Modelling”.

Chapter 1 describes the concept of solar irradiance, discusses the components of solar radiation and introduces the radiometers in common use for the measurements of solar irradiance. Motivations for the development of the Direct-Diffuse Solar Radiometer are also included.

Chapter 2 describes the radiation mapping at the two sites, with particular emphasis on methods implemented for higher accuracy sampling, data reliability and continuity.

Regular radiometer calibration is necessary to ensure the data quality. Therefore the methodology for and results from calibration of the site radiometers are given in Chapter 3.

The acquired direct and global data for the sites are presented in Chapter 4. Diurnal 1-minute variations are discussed along with estimates of monthly and annual trends, with particular emphasis on atmospheric transmittance and the clearness index.

The last chapter in Part A, i.e. Chapter 5, presents the work done on the Direct-Diffuse Solar Radiometer (DDSR-1B). The instrument is designed for measuring sky irradiance as well as direct solar irradiance. The DDSR-1B is a prototype ground based instrument which is mounted on a remotely controlled solar tracker/sky positioner. The Chapter discusses the DDSR- 1B’s design, control, calibration and field tests in detail.

Chapter 6 in Part B considers two available software products that can generate estimates of direct normal solar irradiance for any ground location. This chapter also concludes the work for this thesis by re-emphasising the need for localised solar radiation measurements and models.

Finally, Chapter 7 collects the major conclusions given through the course of this text.

APA, Harvard, Vancouver, ISO, and other styles
32

Weingarten, Leopold. "Physical Hybrid Model : Measurement - Experiment - Simulation." Thesis, Uppsala universitet, Fasta tillståndets fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176412.

Full text
Abstract:
A method, the Physical Hybrid Model, has been developed to investigate the physical large-scale electrical effects of a Battery Energy Storage System (BESS) on a distribution grid by scaling the response from a small-size Research, Development and Demonstration (RD&D) platform. In order to realize the model, the control system of an existing RD&D platform was refurbished and the stability of components ensured. The Physical Hybrid Model proceeds as follows: data from a distribution grid are collected; a BESS cycle curve is produced based on the analyzed measurements; the BESS power and capacity required in the investigated grid are scaled down by a factor k to those of the physical test installation of the RD&D platform; the scaled BESS cycle is sent as input to the battery cycling control of the RD&D platform; the response from the RD&D platform is scaled up and used in a simulation of the distribution grid to find the impact of a BESS. The model was successfully implemented on a regional distribution grid in southern Sweden.
APA, Harvard, Vancouver, ISO, and other styles
33

Klotins, Eriks. "Usability and user experience : measurement model." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4722.

Full text
Abstract:
Software quality is one of the success factors in software development. Usability and user experience (U&UX), as a part of software quality, are becoming more and more important. Although there have been successful attempts to formalize specific parts of U&UX, there is still a need for a systematic framework for U&UX evaluation. This thesis studies the state of the art in U&UX evaluation in order to develop a single framework that comprises existing knowledge on the topic. Furthermore, the U&UX evaluation framework is aimed to support product development in industry and provide a versatile guide for U&UX practitioners. The study is based on a reference-based systematic review. The literature review covers both scientific publications and industrial-grade papers. The papers to be reviewed were selected by their relevance to the study goals and the credibility of the source. The result is a three-layer U&UX evaluation framework. The first layer of the model features a breakdown structure of usability and user experience; the total number of usability and context-of-use attributes is 217. The second layer of the model contains guidelines on how to perform usability evaluation. The third layer features validation strategies and guidelines on how to expand the model. In order to enable practical use of the model, both static and dynamic validation should take place. There are many models in place attempting to formalize U&UX evaluation. However, most of them focus on a particular branch of usability or are too broad to be applied practically without adaptation. Furthermore, there are many resources offering practical usability and user experience checklists or guidelines. However, most of them lack a connection with industry standards such as ISO/IEC 9126. The model presented in this thesis attempts to fill the gap between high-level industry standards and cookbook-style U&UX guidelines.
APA, Harvard, Vancouver, ISO, and other styles
34

Bristol, Nikki. "Shared mental models : conceptualisation & measurement." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Scholtz, Reginald. "The manufacturing performance measurement matrix model." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/969.

Full text
Abstract:
Thesis (MBA (Graduate School of Business))--Stellenbosch University, 2008.
ENGLISH ABSTRACT: Globalisation, constant innovation and well-informed customers have made modern business environments dynamic and complex. Organisations, with their ever-changing business models, are striving to improve the quality of their products and services. Congruently, several management theories have evolved: Total Quality Management (TQM), Just in Time (JIT), Benchmarking, Lean Management, the Balanced Scorecard and Six Sigma are some of the better known. Each movement, with its own life span, its share of popularity and its level of acceptance, has the same basic goal: improvement in quality by measuring performance. Recent years have seen an upsurge in the approaches and contributions to the field of performance measurement. Are these management theories just a flavour of the month, or do they add value? The object of this study is to provide industry-specific manufacturing businesses with a Manufacturing Performance Measurement Matrix Model, which can be used over time to measure improvement and serve as a basis for Knowledge Management.
AFRIKAANSE OPSOMMING: Globalisation, the constant introduction of change and well-informed clients have made the modern business environment dynamic and complex. Organisations, with their ever-changing business models, strive to improve the quality of their products and service delivery. Various management theories have developed as a result. The best-known management theories include Total Quality Management (TQM), Just in Time (JIT), Benchmarking, Lean Management, the Balanced Scorecard and Six Sigma. Each of these approaches, with its own lifespan, share of popularity and level of acceptance, has the same basic goal: improvement of quality through the measurement of performance. Recent years have seen a surge in the approaches and contributions to the field of performance measurement. Are these management theories just a novelty, or are they valuable to the business world? The aim of this study is to provide industry-specific manufacturing businesses with a Manufacturing Performance Measurement Matrix Model that can measure improvements over time and serve as a basis for knowledge management.
APA, Harvard, Vancouver, ISO, and other styles
36

Dixon, Diane. "Conceptual and measurement models of disability." Thesis, University of Aberdeen, 2006. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU206100.

Full text
Abstract:
Numerous theoretical frameworks have been applied to further our understanding of the correlates, causes and consequences of disability, and each is accompanied by a particular conceptualisation of disability. In this thesis, disability was conceptualised as behaviour. The concept of disability as behaviour is accompanied by psychological theories of behaviour and behaviour change within which behaviour is viewed as a product of motivational factors. A repertory grid study indicated that people with mobility disability use motivational concepts to distinguish between physical activities typically used to index their disability. These data lent support to the suitability of the disability as behaviour concept in this population. Consequently, psychological theory, in the form of the theory of planned behaviour (TPB) and social cognitive theory (SCT), was used to account for walking disability in an orthopaedic sample. The empirical application of psychological theories requires the operationalisation of the constructs within those theories. An investigation of the content validity of existing measures of the perceived control constructs from the TPB and SCT indicated current measurement items do not show discriminant content validity. These data were used to select perceived control items into the study of disability in an orthopaedic sample. The content validity of the perceived control items was investigated using confirmatory factor analyses applied to the responses of the orthopaedic sample. Structural equation modelling indicated that both the TPB and SCT could account for walking disability in the orthopaedic sample. Finally, the ability of the TPB and SCT to mediate between the central constructs of the main medical model of disability, namely the International Classification of Functioning Disability and Health (ICF), was assessed using structural modelling. 
This integrated model accounted for a greater proportion of the variance in walking disability than did the ICF alone, suggesting psychological theory can be used to improve the ICF model.
APA, Harvard, Vancouver, ISO, and other styles
37

Lo, Sau Yee. "Measurement error in logistic regression model." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20LO.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 82-83). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
38

Sedera, Darshana. "Enterprise systems success: A measurement model." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16269/3/Darshana%20Sedera%20Thesis.pdf.

Full text
Abstract:
Organizations make large investments in Information Systems (IS) expecting positive impacts on the organisation and its functions. Yet there exists much controversy surrounding the 'potential' impacts of these systems, with some studies reporting broadly positive impacts of IS across organizations (Barua, Kriebel and Mukhopadhyay 1995; Barua and Lee 1997; Brynjolfsson and Hitt 1996; Lehr and Lichtenberg 1999; Mukherjee, Ray and Miller 2001), while others have shown nil or detrimental impacts (Attewell and Rule 1984; Brynjolfsson and Yang 1996; Cameron and Quinn 1988; Wilson 1993). Various authors have suggested that these conflicting results may be due to poor measurement: (1) incomplete or inappropriate measures of success (DeLone and McLean 1992; Gable 1996; Melone 1990), (2) lack of theoretical grounding, and hence agreement, on appropriate measures of success (Bonner 1995; Myers, Kappelman and Prybutok 1998), (3) a myopic focus on financial performance indicators (Ballantine, Bonner, Levy, Martin, Munro and Powell 1996; Kaplan and Norton 1996), (4) weaknesses in the survey instruments employed (Gable, Sedera and Chan 2003), e.g. constructs lacking validity, or (5) an inappropriate data collection approach (Seddon, Staples, Patnayakuni and Bowtell 1999; Sedera and Gable 2004), e.g. asking the wrong people or using an unrepresentative sample. Enterprise Systems (ES) have over the past decade emerged as one of the most important developments in the corporate use of information technology. Anecdotal evidence reveals discontent with these large application software packages. Yet Enterprise System investments are seldom systematically evaluated post-implementation; the review process and measures are typically idiosyncratic and lacking credibility. 
Impacts resulting from Enterprise Systems are particularly difficult to measure: an Enterprise System entails many users, ranging from top executives to data entry operators; many applications that span the organization; and a diversity of capabilities and functionality. Despite the substantial investments made by organizations and the anecdotal evidence of discontent, systematic attempts to measure their success have been few. The primary objective of this research is to develop and test a standardized instrument for measuring ES-Success. Other related objectives are: (1) to identify the dimensions and measures of ES-Success; (2) to validate a maximally generalizable measurement model and survey instrument for gauging ES-Success; (3) to develop an understanding of the state of Enterprise Systems using descriptive/comparative statistics; and (4) to identify and test an antecedent of ES-Success. With the above objectives, and with attention to the weaknesses identified in past IS-success research, this study follows and extends the 'research cycle' guidelines of Mackenzie and House (1979) and McGrath (1979). The research cycle entails two main phases: (1) an exploratory phase, to develop the hypothesized measurement model, and (2) a confirmatory phase, to test the hypothesized measurement model against new data. The two surveys conducted in the exploratory phase (termed the identification-survey and the specification-survey) go beyond the activities recommended by Mackenzie and House (1979) and McGrath (1979). A third, confirmation-survey was completed in the confirmatory phase of the research cycle. The three surveys gathered and analyzed data from six hundred (600) respondents. The purpose of the identification-survey was to discover the salient ES-Success dimensions and measures to include in an a-priori ES-Success model. 
Data from 137 respondents representing 27 Australian State Government Agencies that had implemented SAP R/3 in the late 1990s were analyzed. The analysis of identification-survey data yielded an a-priori model with 41 measures of 5 dimensions of ES-Success that provide a holistic view across the organization, from strategic to operational levels. The specification-survey was employed to validate the a-priori ES-Success measurement model derived in the preceding identification-survey. Employing 310 responses from the same 27 public sector organizations, exploratory data analysis validated 27 measures of success pertaining to 4 dimensions: information quality, system quality, individual impact and organizational impact. Data for testing the influence of an antecedent of ES-Success were gathered simultaneously during the specification-survey. This analysis, based on Adaptive Structuration Theory (AST), investigated the influence of Knowledge Management Structures Adequacy (KMSA) on ES-Success; preliminary results indicate a strong relationship between the two. The purpose of the confirmation-survey was to further validate the dimensions and measures of the ES-Success model against new data, employing confirmatory statistical techniques. Data were gathered from 153 respondents across a large university that had implemented the Oracle Enterprise System, and the construct validity of the ES-Success measurement instrument was further established using Structural Equation Modeling (SEM).
APA, Harvard, Vancouver, ISO, and other styles
39

Sedera, Darshana. "Enterprise systems success: a measurement model." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16269/.

Full text
Abstract:
Organizations make large investments in Information Systems (IS) expecting positive impacts on the organisation and its functions. Yet there exists much controversy surrounding the 'potential' impacts of these systems, with some studies reporting broadly positive impacts of IS across organizations (Barua, Kriebel and Mukhopadhyay 1995; Barua and Lee 1997; Brynjolfsson and Hitt 1996; Lehr and Lichtenberg 1999; Mukherjee, Ray and Miller 2001), while others have shown nil or detrimental impacts (Attewell and Rule 1984; Brynjolfsson and Yang 1996; Cameron and Quinn 1988; Wilson 1993). Various authors have suggested that these conflicting results may be due to poor measurement: (1) incomplete or inappropriate measures of success (DeLone and McLean 1992; Gable 1996; Melone 1990), (2) lack of theoretical grounding, and hence agreement, on appropriate measures of success (Bonner 1995; Myers, Kappelman and Prybutok 1998), (3) a myopic focus on financial performance indicators (Ballantine, Bonner, Levy, Martin, Munro and Powell 1996; Kaplan and Norton 1996), (4) weaknesses in the survey instruments employed (Gable, Sedera and Chan 2003), e.g. constructs lacking validity, or (5) an inappropriate data collection approach (Seddon, Staples, Patnayakuni and Bowtell 1999; Sedera and Gable 2004), e.g. asking the wrong people or using an unrepresentative sample. Enterprise Systems (ES) have over the past decade emerged as one of the most important developments in the corporate use of information technology. Anecdotal evidence reveals discontent with these large application software packages. Yet Enterprise System investments are seldom systematically evaluated post-implementation; the review process and measures are typically idiosyncratic and lacking credibility. 
Impacts resulting from Enterprise Systems are particularly difficult to measure: an Enterprise System entails many users, ranging from top executives to data entry operators; many applications that span the organization; and a diversity of capabilities and functionality. Despite the substantial investments made by organizations and the anecdotal evidence of discontent, systematic attempts to measure their success have been few. The primary objective of this research is to develop and test a standardized instrument for measuring ES-Success. Other related objectives are: (1) to identify the dimensions and measures of ES-Success; (2) to validate a maximally generalizable measurement model and survey instrument for gauging ES-Success; (3) to develop an understanding of the state of Enterprise Systems using descriptive/comparative statistics; and (4) to identify and test an antecedent of ES-Success. With the above objectives, and with attention to the weaknesses identified in past IS-success research, this study follows and extends the 'research cycle' guidelines of Mackenzie and House (1979) and McGrath (1979). The research cycle entails two main phases: (1) an exploratory phase, to develop the hypothesized measurement model, and (2) a confirmatory phase, to test the hypothesized measurement model against new data. The two surveys conducted in the exploratory phase (termed the identification-survey and the specification-survey) go beyond the activities recommended by Mackenzie and House (1979) and McGrath (1979). A third, confirmation-survey was completed in the confirmatory phase of the research cycle. The three surveys gathered and analyzed data from six hundred (600) respondents. The purpose of the identification-survey was to discover the salient ES-Success dimensions and measures to include in an a-priori ES-Success model. 
Data from 137 respondents representing 27 Australian State Government Agencies that had implemented SAP R/3 in the late 1990s were analyzed. The analysis of identification-survey data yielded an a-priori model with 41 measures of 5 dimensions of ES-Success that provide a holistic view across the organization, from strategic to operational levels. The specification-survey was employed to validate the a-priori ES-Success measurement model derived in the preceding identification-survey. Employing 310 responses from the same 27 public sector organizations, exploratory data analysis validated 27 measures of success pertaining to 4 dimensions: information quality, system quality, individual impact and organizational impact. Data for testing the influence of an antecedent of ES-Success were gathered simultaneously during the specification-survey. This analysis, based on Adaptive Structuration Theory (AST), investigated the influence of Knowledge Management Structures Adequacy (KMSA) on ES-Success; preliminary results indicate a strong relationship between the two. The purpose of the confirmation-survey was to further validate the dimensions and measures of the ES-Success model against new data, employing confirmatory statistical techniques. Data were gathered from 153 respondents across a large university that had implemented the Oracle Enterprise System, and the construct validity of the ES-Success measurement instrument was further established using Structural Equation Modeling (SEM).
APA, Harvard, Vancouver, ISO, and other styles
40

Randström, Monica. "Deltack, Blanket Study." Thesis, Högskolan Dalarna, Grafisk teknik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:du-3537.

Full text
Abstract:
There are two objectives of this study. The first was to study the new instrument Deltack from Prufbau and, with the new knowledge, to study the interaction between blanket and paper. Deltack is a new instrument for measuring the ink-splitting force. Blankets from an earlier study done by GATF were mounted on 12 different Deltack print forms, and those, together with Deltack, formed the basis of this project. This study, together with information from GATF's study, shows that some of the blankets have a higher capability to absorb ink. Those blankets generate a sharper shape on the force curve and a lower dot gain, but they do not print solids as well.
APA, Harvard, Vancouver, ISO, and other styles
41

Eckstein, Johannes [Verfasser], and P. [Akademischer Betreuer] Braesicke. "Atmospheric Models and Aircraft Measurements: Representativeness and Model Validation / Johannes Eckstein ; Betreuer: P. Braesicke." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1138708690/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Braungarten, René. "The SMPI model: a stepwise process model to facilitate software measurement process improvement along the measurement paradigms." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985144955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Erman, David. "BitTorrent Traffic Measurements and Models." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2005. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/5ece344a70455472c125709f00347e3b?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Offenbacher, Carolina, and Ellen Thornström. "Ionospheric Models for GNSS Measurements." Thesis, KTH, Fastigheter och byggande, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277711.

Full text
Abstract:
There is an increasing demand for higher precision when using Global Navigation Satellite Systems (GNSS) for positioning. The measurement uncertainty depends on multiple factors, one of which is the ionosphere. Because the ionosphere is ionized and contains free electrons, satellite signals propagating through it are affected by the total electron content (TEC). The errors caused by the ionosphere cannot be reduced for single-frequency measurements, but they can be modelled out when two frequencies are used. The objective of this study was to compare different websites' modelling results for disturbances at ground level and for ionospheric turbulence. Three websites were used in the comparison: the Swedish SWEPOS, the Norwegian seSolstorm and the German IMPC. Because the websites differ in content, SWEPOS was compared with seSolstorm and IMPC was compared with seSolstorm on five different dates; in total, 10 comparisons were made. The websites were evaluated on four criteria designed from the point of view of a land surveyor: user friendliness, graphical representation, knowledge requirement and mobile website adaptation. Each criterion was graded on a scale from 1 to 5, where 5 was considered the best. The study showed that the modelling results from the websites differed and that the difference cannot be explained solely by different graphical representations or measurement scales. The results of the evaluation and grading of the websites were as follows: SWEPOS 16, seSolstorm 13 and IMPC 12. This makes SWEPOS the website best suited for use by a land surveyor.
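The dual-frequency correction mentioned in this abstract is conventionally realized as the ionosphere-free linear combination, which exploits the fact that the first-order ionospheric delay scales with 1/f². A minimal illustrative sketch follows; the pseudorange and delay values are hypothetical, not taken from the thesis:

```python
# Ionosphere-free linear combination of dual-frequency GPS pseudoranges.
# The first-order ionospheric delay scales with 1/f**2, so a weighted
# combination of measurements on two frequencies cancels it.

F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def ionosphere_free(p1: float, p2: float) -> float:
    """Return the ionosphere-free pseudorange in metres."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

# Hypothetical pseudoranges: a true range of 20 000 000 m plus an
# ionospheric delay of 5 m on L1, scaled by (F1/F2)**2 on L2.
true_range = 20_000_000.0
i1 = 5.0
p1 = true_range + i1
p2 = true_range + i1 * (F1 / F2) ** 2

print(ionosphere_free(p1, p2))  # recovers ~20_000_000.0
```

The combination removes only the first-order (1/f²) term; higher-order ionospheric effects remain, which is one reason the modelled TEC products compared in the thesis are still of interest.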
APA, Harvard, Vancouver, ISO, and other styles
45

Grossi, Peter. "A stakeholder-based organisation performance model." Thesis, University of South Wales, 2007. https://pure.southwales.ac.uk/en/studentthesis/a-stakeholderbased-organisation-performance-model(ef0e5e3e-3325-43dd-92e2-e37d605c6e0a).html.

Full text
Abstract:
Organisations exist in a vast range of types and sizes. While it is generally known that start-ups suffer a high rate of infant mortality for various reasons, it is clear that there are many different successful approaches to achieving stability and worthwhile contribution. Even the fortunes of large companies are not immune to ebb and flow, and these phenomena are manifested in the effects they have, both on the communities in which they are placed, and on their trading partners. Therefore there is more to sustainable success than size or other traditional financial measures such as turnover, profit, return on investment etc. This thesis is inspired by the need to identify a way of characterising the contributions of organisations as a framework of performance measurement that is meaningful to all organisations regardless of type or size, and systematically relating that view of contributions to organisations' strategic and operational activities. Business processes are used within organisations to control productive activity and therefore are at the root of all aspects of an organisation's output. There are, however, a number of reasons for processes to be disconnected from the strategic intentions of an organisation, whereupon the processes, and the activities within them, become less efficient and effective in serving the needs of the organisation than they should be. Traditional methods of performance measurement do not adequately address this problem, so a new model for the measurement and improvement of organisational performance is required. In reviewing theories and empirical viewpoints concerning stakeholders it is found that there are several distinct topics of interest within that field. These are brought together in the form of a standardised list of stakeholder groups, which is then field tested for general applicability. 
A method for expressing the strategic intentions of an organisation, based on this standardised list, is then developed and is also field tested. The group structure is extended by identifying a number of factors that determine the satisfaction of stakeholders, and these are also field tested for applicability. Using the structured analysis of stakeholders by groups, and the factors that determine their satisfaction, a model is proposed (the Performance Boundary Model) that shows stakeholders and the organisation itself as distinct but connected domains. This concept is developed, by building on established theory and the findings of the field research, into a representation or model. This model provides a structured connection between strategic intentions and measured operational performance, and these are connected into the organisation through its processes. The model thus provides structured links between organisational strategy, operational processes and objective performance measures.
APA, Harvard, Vancouver, ISO, and other styles
46

Xin, Xin. "Using Posterior Predictive Checking of Item Response Theory Models to Study Invariance Violations." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984130/.

Full text
Abstract:
The common practice for testing measurement invariance is to constrain parameters to be equal over groups, and then evaluate the model-data fit to reject or fail to reject the restrictive model. Posterior predictive checking (PPC) provides an alternative approach to evaluating model-data discrepancy. This paper explores the utility of PPC in estimating measurement invariance. The simulation results show that the posterior predictive p (PP p) values of item parameter estimates respond to various invariance violations, whereas the PP p values of item-fit index may fail to detect such violations. The current paper suggests comparing group estimates and restrictive model estimates with posterior predictive distributions in order to demonstrate the pattern of misfit graphically.
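The general recipe behind a posterior predictive check can be sketched in a few lines: draw parameters from (an approximation of) the posterior, simulate a replicated data set for each draw, and report the posterior predictive p-value as the proportion of replicated discrepancies at least as extreme as the observed one. The toy sketch below uses a simple normal model as a stand-in for an IRT model; the data and the choice of discrepancy are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data (hypothetical): scores for one group.
y_obs = rng.normal(loc=0.5, scale=1.0, size=200)

# Stand-in "posterior" draws of the model parameters (mean, sd).
# In a real IRT application these would come from an MCMC sampler.
post_mu = rng.normal(y_obs.mean(), y_obs.std() / np.sqrt(len(y_obs)), size=1000)
post_sd = np.full(1000, y_obs.std())

def discrepancy(y):
    # Test statistic compared between observed and replicated data.
    return y.var()

t_obs = discrepancy(y_obs)

# For each posterior draw, simulate a replicated data set and record
# whether its discrepancy exceeds the observed one.
exceed = []
for mu, sd in zip(post_mu, post_sd):
    y_rep = rng.normal(mu, sd, size=len(y_obs))
    exceed.append(discrepancy(y_rep) >= t_obs)

pp_p = np.mean(exceed)  # values near 0 or 1 signal misfit
print(round(pp_p, 2))
```

PP p-values near 0.5 indicate that the observed statistic is typical of data the fitted model generates; values near 0 or 1 flag the kind of model-data discrepancy the thesis uses to detect invariance violations.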
APA, Harvard, Vancouver, ISO, and other styles
47

Spank, Uwe. "Site Water Budget: Influences of Measurement Uncertainties on Measurement Results and Model Results." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-62557.

Full text
Abstract:
The exact quantification of the site water budget is a necessary precondition for successful and sustainable management of forests, agriculture and water resources. In this study the water balance was investigated at the spatial scale of canopies and at different temporal scales, with focus on the monthly time scale. The estimation of the individual water balance components was primarily based on micrometeorological measurement methods. Evapotranspiration was assessed by the eddy-covariance (EC) method, while sap flow measurements were used to estimate transpiration. Interception was assessed by a combination of canopy drip, stem flow and precipitation (gross rainfall) measurements, and soil moisture measurements were used to estimate the soil water storage. The combination of different measurement methods and the derivation of water balance components that are not directly measurable, e.g. seepage and soil evaporation, is a very complex task due to the different scales of measurement, measurement uncertainties and the superposition of these effects. The quantification of uncertainties is a core point of the present study. Uncertainties were quantified for the water balance components as well as for meteorological variables (e.g. wind speed, temperature, global radiation, net radiation and precipitation) that served as input data in water balance models. Furthermore, the influence of uncertainties was investigated in relation to numerical water balance simulations; here, both the effects of uncertainties in input data and in reference data were analysed and evaluated. The study addresses three main topics. The first topic was the provision of reference data on evapotranspiration by EC measurements. Here, the processing of EC raw data was of main concern, with focus on the correction of spectral attenuation. Four different methods of spectral correction were tested and compared. The estimated correction coefficients differed significantly between the methods. 
However, the effects on absolute values at the half-hourly time scale were small. In contrast, the method had a significant influence on the estimated monthly totals of evapotranspiration. The second main topic dealt with the comparison of the water balances of a spruce (Picea abies) and a beech (Fagus sylvatica) site. Both sites are located in the Tharandter Wald (Germany), and abiotic conditions are very similar at both. The comparison therefore offered the opportunity to reveal differences in the water balance caused by different dominant tree species. The aim was to estimate and compare all individual components of the water balance by a combination of the above-mentioned measurement methods. A major challenge was to overcome problems due to the different scales of measurement. Significant differences between the water balances of the two sites occurred only under untypical weather conditions; under typical conditions the sites showed similar behaviour. Here, the involved uncertainties deserved special attention: the results showed that differences in the water balance between the sites were blurred by uncertainties. The third main topic dealt with the effects of uncertainties on simulations of water balances with numerical models. These analyses were based on data from three sites (a spruce, a grassland and an agricultural site). A Monte Carlo simulation (uncertainty model) was used to simulate the effects of measurement uncertainties. Furthermore, the effects of model complexity and of uncertainties in reference data on the evaluation of simulation results were investigated. The results showed that complex water balance models such as BROOK90 are able to describe the general behaviour and tendencies of a water balance. However, satisfying quantitative results were reached only under typical weather conditions. Under untypical weather, e.g. 
droughts or extreme precipitation, the results differed significantly from the actual (measured) values. In contrast to the complex models, it was demonstrated that simple black-box models (e.g. HPTFs) are not suited for water balance simulations at the three sites tested here.
APA, Harvard, Vancouver, ISO, and other styles
48

Breitner, Susanne. "Time-varying coefficient models and measurement error." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lundgren, Urban. "EMC barriers : measurement methodologies and model verification." Licentiate thesis, Luleå tekniska universitet, EISLAB, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-18035.

Full text
Abstract:
The subject of Electromagnetic Compatibility (EMC) is introduced. A short overview of the regulatory EMC requirements in Europe and in the United States is presented. An explanation is given why it is important for people in the electronics business to understand the need for EMC regulations. Electromagnetic shielding is a often used as an electromagnetic barrier for attenuation of disturbances from radiating sources. An orientation over the concept of electromagnetic barriers is presented. Ways to characterize barriers are presented and measurement methods are reviewed. Results from transfer impedance measurements show that conductive gaskets made of tin coated stainless steel, can give as good shielding performance as gaskets made of beryllium copper in the entire frequency interval of the investigation. The results show that the difference in shielding performance with gaskets made of tin coated stainless steel and of beryllium copper can be insignificant compared to the influence of a very small change in contact pressure. The difference between gaskets with and without tin coating is small with beryllium copper as base material. With stainless steel as base material the difference is about 10dB at 2MHz and less at higher frequencies. A method to describe electromagnetic barriers such as filters, shielded cables and connectors etc. with circuits consisting of linear discrete components is presented. By comparing a barrier with a multi-conductor transmission line a lumped circuit can be constructed where component values are designated by the per-unit length transmission line parameters. For barriers that cannot be viewed as transmission lines a method for determining equivalent circuits outgoing from measured S-parameters has been developed. Different measurement fixtures were constructed in order to obtain accurate S-parameter values. Derived models have been used in SPICE simulations and validated by comparison with measurements. 
The current distribution on the surface of a printed log-periodic dipole antenna (LPDA) has been investigated. A position-scanned magnetic field probe was used to obtain the magnitude and phase of the magnetic field at each point of the scan. From the measurements, the time-varying instantaneous values of the magnetic field and the instantaneous currents on the LPDA were derived. The wave nature of the current distribution can be readily observed, and design problems such as standing waves on the feeder lines are highlighted. Measured current distributions are compared with predicted distributions obtained from a Method of Moments (MoM) analysis of the LPDA structure. Measured and predicted far-field radiation patterns are also compared.
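The equivalent-circuit extraction described in the abstract starts from measured S-parameters. As a rough illustration of one conversion step commonly involved (a standard textbook transformation, not the thesis's actual procedure), a two-port S-parameter matrix measured in a 50 Ω system can be converted to impedance (Z) parameters, from which lumped element values might then be fitted. The example S-matrix below is hypothetical:

```python
import numpy as np

def s_to_z(S, z0=50.0):
    """Convert a 2x2 S-parameter matrix to Z-parameters,
    assuming the same real reference impedance z0 on both ports:
    Z = z0 * (I + S) @ inv(I - S)."""
    I = np.eye(2)
    return z0 * (I + S) @ np.linalg.inv(I - S)

# Hypothetical S-parameters of a symmetric, reciprocal barrier
# (e.g. a feed-through filter) at a single frequency point
S = np.array([[0.10 + 0.05j, 0.85 - 0.10j],
              [0.85 - 0.10j, 0.10 + 0.05j]])
Z = s_to_z(S)
print(Z)
```

For a reciprocal device the resulting Z-matrix is symmetric (Z12 = Z21), which is one quick sanity check on measured data before fitting a lumped model.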

Approved; 2000; 20070318 (ysko)

APA, Harvard, Vancouver, ISO, and other styles
50

Trubilowicz, Joel William. "Using motes for high resolution hydrological measurement." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/633.

Full text
Abstract:
Low-cost, low-power wireless sensors (motes) promise to revolutionize environmental data collection, but are they currently refined enough for widespread use by hydrologists? Their viability as a replacement for traditional data collection techniques was investigated in a 7 ha forested watershed in south-western British Columbia. The watershed included 41 instrument clusters measuring air and soil temperature, humidity, throughfall, soil moisture content, overland flow, and groundwater head. The foundation of each cluster was a data box containing an MDA300 data acquisition board and a MICA2 processor board from Crossbow Technologies, Inc.™ that allowed for short-range wireless data collection. Each of the 41 motes recorded data every 15 minutes from July 2006 to April 2007. In addition to reporting on the reliability of the motes and sensors during the 10-month deployment, the high spatial and temporal resolution of the data collected in this study gave the opportunity for many analyses of catchment processes. As soil moisture and throughfall are two influential processes in the exchange of water between the earth and the atmosphere, these were the focus of the data analysis. The first analysis was a resampling experiment on seven different events selected from the full data set. Comparing 100 different subsamples each of 5, 10, and 20 points for throughfall and soil moisture showed whether increasing the sample size eventually produced diminishing returns in the ability to reproduce the true catchment mean. With significant differences in prediction ability for both soil moisture and throughfall at times of differing hydrologic activity, this analysis provides further support for theories of changing soil moisture states and threshold values for throughfall. The second analysis described how the organization of soil moisture and throughfall changed over a range of weather conditions and timescales.
Spatial representation of normalized values and Pearson correlation coefficients showed that there were distinct differences between wet and dry periods for soil moisture and between long and short analysis periods for throughfall.
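The resampling experiment described in the abstract can be sketched in a few lines: draw repeated random subsamples of the sensor network, compare each subsample mean against the full-network ("true") mean, and see how the error shrinks as the subsample grows. The throughfall values below are synthetic stand-ins, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for one event's throughfall totals at the 41 clusters (mm)
throughfall = rng.gamma(shape=2.0, scale=5.0, size=41)
true_mean = throughfall.mean()

def resample_error(values, n, replicates=100, rng=rng):
    """RMS error of the subsample mean vs. the full-network mean,
    over `replicates` random subsamples of size n (without replacement)."""
    errors = [rng.choice(values, size=n, replace=False).mean() - values.mean()
              for _ in range(replicates)]
    return float(np.sqrt(np.mean(np.square(errors))))

for n in (5, 10, 20):
    print(f"n={n:2d}  RMS error = {resample_error(throughfall, n):.3f} mm")
```

Plotting RMS error against subsample size makes the "diminishing returns" question concrete: once the curve flattens, adding sensors buys little extra accuracy for estimating the catchment mean.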
APA, Harvard, Vancouver, ISO, and other styles
