To see the other types of publications on this topic, follow the link: Measuring applications.

Dissertations / Theses on the topic 'Measuring applications'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Measuring applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Conombo, Blanchard. "Measuring «Correlation Neglect» : experimental procedures and applications." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/28198.

Abstract:
Honour roll of the Faculté des études supérieures et postdoctorales, 2017-2018
Recent economic studies have identified the difficulty people have in making optimal decisions when there is correlation between (random) state variables, now referred to in the literature as “correlation neglect.” In this article, we presume correlation neglect to be an individual trait of a person and propose different measures of this characteristic. We compare the different measures in terms of their correlation based on results from laboratory experiments. We present applications of the measures in the field. Keywords: heuristics and biases, correlation neglect, measurement.
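The abstract compares alternative neglect measures by their correlation across subjects. As a minimal illustration of that comparison step, the sketch below computes a rank correlation between two hypothetical per-subject measures; the data and the [0, 1] scaling are invented for illustration and are not the thesis's actual elicitation procedures.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects = 60
# Hypothetical per-subject neglect scores in [0, 1] from two elicitation
# procedures that share a common latent trait plus procedure-specific noise.
latent = rng.uniform(0, 1, n_subjects)
measure_a = np.clip(latent + rng.normal(0, 0.10, n_subjects), 0, 1)
measure_b = np.clip(latent + rng.normal(0, 0.20, n_subjects), 0, 1)

rho, pval = spearmanr(measure_a, measure_b)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3g})")
```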
2

Klingler, Emily L. "Measuring Student Understanding of Density, with Geological Applications." Fogler Library, University of Maine, 2006. http://www.library.umaine.edu/theses/pdf/KlinglerEL2006.pdf.

3

Permi, Perttu. "Applications for measuring scalar and residual dipolar couplings in proteins /." Oulu : Oulun Yliopisto, 2000. http://herkules.oulu.fi/isbn9514258223.

4

Banerjee, Anirban. "Measuring and modeling applications for content distribution in the Internet." Diss., [Riverside, Calif.] : University of California, Riverside, 2008. http://proquest.umi.com/pqdweb?index=0&did=1663077961&SrchMode=2&sid=1&Fmt=6&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1265225554&clientId=48051.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2008.
Includes abstract. Title from first page of PDF file (viewed February 3, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references. Also issued in print.
5

Wang, Jiamin. "Measuring the Functionality of Amazon Alexa and Google Home Applications." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/97316.

Abstract:
Voice Personal Assistant (VPA) is a software agent that can interpret the user's voice commands and respond with appropriate information or action. Users can operate the VPA by voice to complete multiple tasks, such as reading messages, ordering coffee, sending emails, checking the news, and so on. Although this new technique brings interesting and useful features, it also poses new privacy and security risks. Current research has focused on proof-of-concept attacks, pointing out potential ways of launching attacks, e.g., crafting hidden voice commands to trigger malicious actions without the user noticing, or fooling the VPA into invoking the wrong applications. However, the lack of a comprehensive understanding of the functionality of skills and their commands prevents us from analyzing the potential threats of these attacks systematically. In this project, we developed convolutional neural networks with active learning and a keyword-based approach to investigate commands according to their capability (information retrieval or action injection) and sensitivity (sensitive or nonsensitive). Through these two levels of analysis, we provide a complete view of VPA skills and their susceptibility to the existing attacks.
M.S.
Voice Personal Assistant (VPA) is a software agent that can interpret users' voice commands and respond with appropriate information or action. The current popular VPAs are Amazon Alexa, Google Home, Apple Siri, and Microsoft Cortana. Developers can build and publish third-party applications, called skills on Amazon Alexa and actions on Google Home, on the VPA server. Users simply "talk" to the VPA devices to complete different tasks, like reading messages, ordering coffee, sending emails, checking the news, and so on. Although this new technique brings interesting and useful features, it also poses new potential security threats. Recent research revealed that vulnerabilities exist in the VPA ecosystems. Users can incorrectly invoke a malicious skill whose name has a similar pronunciation to the user-intended skill. Inaudible voice commands can trigger unintended actions without users noticing. Current research has focused on the potential ways of launching these attacks. The lack of a comprehensive understanding of the functionality of skills and their commands prevents us from analyzing the potential consequences of these attacks systematically. In this project, we carried out an extensive analysis of third-party applications from Amazon Alexa and Google Home to characterize the attack surfaces. First, we developed a convolutional neural network with an active learning framework to categorize commands according to their capability, i.e., whether they are information retrieval or action injection commands. Second, we employed a keyword-based approach to classify the commands into sensitive and nonsensitive classes. Through these two levels of analysis, we provide a complete view of VPA skills' functionality and their susceptibility to the existing attacks.
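As a rough illustration of the keyword-based second stage described above, the sketch below flags commands as sensitive or nonsensitive by lexicon lookup. The keyword list and example commands are hypothetical, not the thesis's actual lexicon or dataset.

```python
# Hypothetical sensitivity lexicon; a real one would be curated from skill corpora.
SENSITIVE_KEYWORDS = {"password", "unlock", "pay", "order", "email",
                      "address", "bank", "camera", "door"}

def classify_sensitivity(command: str) -> str:
    tokens = set(command.lower().split())
    return "sensitive" if tokens & SENSITIVE_KEYWORDS else "nonsensitive"

for cmd in ["unlock the front door", "what's the weather today",
            "order a large coffee", "read my latest email"]:
    print(f"{cmd!r} -> {classify_sensitivity(cmd)}")
```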
6

Permi, P. (Perttu). "Applications for measuring scalar and residual dipolar couplings in proteins." Doctoral thesis, University of Oulu, 2000. http://urn.fi/urn:isbn:9514258223.

Abstract:
Abstract Nuclear magnetic resonance spectroscopic structure determination of proteins has been under rapid development during the last decade. The size limitation impeding structural studies of biological macromolecules in solution has increased from 10 kDa to 30 kDa thanks to exploitation of 15N/13C enrichment. Perdeuteration of non-exchangeable protons has pushed this limit even further, allowing backbone resonance assignment of 40 to 50 kDa proteins. Most recently, transverse relaxation optimized spectroscopy (TROSY) has been demonstrated to lengthen 15N and 1HN spin transverse relaxation times significantly, especially in large perdeuterated proteins, thus extending the size limit beyond 100 kDa systems. However, determination of structurally important nuclear Overhauser enhancements (NOE) suffers from perdeuteration, due to the lower density of proton spins available, eventually leading to imprecise protein structures. Very recently, residual dipolar couplings have been used to supplement NOE information, enabling accurate molecular structures to also be obtained with perdeuterated proteins. This thesis focuses on the measurement of the structurally important 3J-coupling between 1HN and 1Hα spins, and determination of residual dipolar couplings by utilizing the novel spin-state-selective subspectral editing together with the TROSY methodology. This approach allows precise measurement of a large number of dipolar couplings in larger protonated or perdeuterated proteins.
7

Siddiqui, Feroz Ahmed. "Understanding and measuring systems flexibility : a case of object-oriented development environment." Thesis, Brunel University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268857.

8

Popp, Raluca-Florica. "Political representation in different electoral settings : measuring issue congruence with VAA-generated data." Thesis, University of Exeter, 2018. http://hdl.handle.net/10871/34377.

Abstract:
The long line of representation studies posits that proportional representation systems, with larger electoral districts, have a representational advantage over majoritarian systems. However, over the last decade, scholars have challenged this longstanding finding (Blais & Bodet 2006, Golder & Stramski 2007). Additionally, Golder & Stramski (2007) initiated a debate over the conceptualization and measurement of congruence, arguing that the most common practice of assessing congruence is flawed. They call for an improved measure of congruence. In light of these recent debates, the purpose of this thesis is to inspect the relationship between institutional designs and political representation in the European context, using Voting Advice Application generated data. Three main research questions are explored. The first relates to institutional designs such as district magnitude, and electoral system characteristics such as disproportionality or polarization, investigating the conditions necessary for a country to present high levels of congruence between its citizens and their representatives. Looking at party-level characteristics, I investigate the effects of niche party status and governmental status on issue congruence in European democracies. Last but not least, what is the role of individual characteristics? These questions are addressed by studying the impact that different features of electoral systems, party characteristics, and individual characteristics have on political representation conceptualized as issue congruence. Congruence is measured as the degree of matching of the common policy preferences of citizens and parties as indicated by the Voting Advice Applications EU Profiler 2009 and EUvox 2014. The present work contributes to the stream of research on political representation understood as congruence. The strength of this work lies in its comparative approach and the use of VAA-generated data to measure congruence. While most studies on political representation using congruence focus on the Left-Right dimension, this thesis uses the concept of issue congruence. Based on the 28 common statements of the VAA tool, the measure of congruence is metric-free, allowing for cross-country comparisons. Although there is a wide range of research on the effects of electoral systems on political representation, most of these studies are limited in their use of comparative approaches. The lack of extensive comparative research on issue congruence is due to insufficient data. The 2009 EU Profiler and 2014 EUvox address this issue, providing the necessary framework for testing the predictors of congruence at the system, party, and individual level. Political representation can be operationalized through congruence, as the distance between the citizen and the representative (Huber & Powell 1994, Powell 2004). Issue congruence is the correspondence between party electorates and their representatives across a set of salient policy dimensions (Powell 2004). VAA-generated data provide a new means of measuring congruence. I propose two new measures of congruence, based on the distance between the citizen and the party the citizen intends to vote for. Unlike other comparative studies that measure congruence with the help of the Left-Right scale, the present work focuses on issue policies. Issue congruence is the outcome of the match between the citizen and the party she intends to vote for on a series of 28 and 22 political statements.
Additionally, the focus on issue congruence is important because issue representation is mostly inferred from the alternative interpretations of congruence. The measures of issue congruence therefore contribute to a better understanding of political representation in the EU political space, tackling the recurrent crisis of representation.
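A minimal sketch of a metric-free issue-congruence score of the kind described: the share of VAA statements on which a citizen and her intended party match. The 8-item vectors and the exact-match rule are illustrative assumptions, not the thesis's precise operationalisation of the 28- and 22-statement sets.

```python
import numpy as np

def issue_congruence(citizen: np.ndarray, party: np.ndarray) -> float:
    """Fraction of statements (e.g. the common VAA items) on which the
    citizen's and the party's coded positions agree exactly."""
    assert citizen.shape == party.shape
    return float(np.mean(citizen == party))

citizen = np.array([1, 2, 5, 4, 3, 1, 2, 2])   # answers on 8 illustrative items
party   = np.array([1, 2, 4, 4, 3, 2, 2, 2])
print(f"congruence = {issue_congruence(citizen, party):.2f}")  # 0.75
```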
9

Wu, Yanqi. "New methods for measuring CSA tensors : applications to nucleotides and nucleosides." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/11859/.

Abstract:
A novel version of the CSA (Chemical Shift Anisotropy) amplification experiment which results in large amplification factors is introduced. Large amplification factors xa (up to 48) are achieved by sequences which are efficient in terms of the number of π pulses and total duration compared to the modification due to Orr et al. (2005), and greater flexibility in the choice of amplification factor is possible than in our most recent version. Furthermore, the incorporation of XiX decoupling ensures that the overall sensitivity of the experiment is optimal. This advantage has been demonstrated by extracting the CSA tensors for a novel vinylphosphonate-linked nucleotide. The application of the CSA amplification experiment to six nucleosides is also discussed. The measured principal tensor values are compared with those calculated using recently developed first-principles methods. Throughout this work, the NMR parameters of all nucleosides are presented. Finally, high-resolution multi-nuclear solid-state NMR experiments are used to study some novel vinylphosphonate-linked oligonucleotides.
10

Basham, Matthew John George. "Cognitive applications of personality testing measuring entrepreneurialism in America's community colleges /." [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021040.

11

Nilsson, Tobias. "Construction and development of a multifunctional measuring device for biomedical applications." Thesis, Umeå universitet, Institutionen för fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-126023.

Abstract:
Lab-on-a-chip technology is a rapidly growing research area, joining together several disciplines such as physics, biology and several branches of nanotechnology. The aim of this research is mainly to produce chips that can perform the same types of measurements as large lab equipment and measurement systems, but at a fraction of the size and cost. In this work a multifunctional measuring device has been developed. It can measure optical absorbance and fluorescence while performing a range of electrochemical techniques, including chronoamperometry and linear and cyclic voltammetry. From all these measurements it is possible to calculate particle concentrations in fluid samples. The aim is to bring simpler and cheaper point-of-care devices to the public without larger losses in the accuracy and reliability of the medical test. To do this, our device is intended to be used with lab-chips, which are capable of amplifying the signals while reducing the sample size. Lab-chips could be used in several areas, but the ones being designed with this device are made for biomedical purposes, applying suitable nanostructures and reagents to measure the presence of biomarkers. With these techniques, medical diagnostics can be made a few minutes after samples have been collected from patients, much quicker and more direct than sending the samples to a lab and waiting hours if not days for the results. The measuring device, or lab-chip reader, will use two different lab-chips in the future: one optimised for optical absorbance and the other for fluorescence. Both will work with electrochemical measurements, but at present only the absorbance chip has been available for testing, and that without any signal-enhancing techniques. Assessment of the reader's capabilities was made with solutions of gold nanoparticles, TMB (tetramethylbenzidine), iron dissolved in PBS (phosphate-buffered saline), and with a film made of PPV (poly(para-phenylenevinylene)). The first two were used to test absorbance, the iron in PBS was used to test the electrochemical system, and the PPV was coated on a glass substrate and used to test fluorescence. During the optical absorption tests, it was found that the reader can distinguish between different concentrations of the various solutions. The results are promising, and further removal of signal drifts will improve the signals considerably. Fluorescence can be induced and measured with the device; this part of the system is, however, largely untested, and future work will show if it is sufficient. The iron solution was tested with three different methods: chronopotentiometry, linear sweep voltammetry and cyclic voltammetry. It was found, however, that our measurements were distorted in comparison with the expected voltammogram for iron in PBS. Additional peaks were found in the voltammogram, and it is believed that these are a result of oxidation of the electrodes on the lab-chip.
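For readers unfamiliar with the absorbance step such a lab-chip reader performs, the sketch below applies the Beer-Lambert relation A = -log10(I/I0) and a linear calibration fitted on known standards to estimate a concentration. All numbers are illustrative, not measurements from this device.

```python
import numpy as np

def absorbance(intensity: float, reference: float) -> float:
    # Beer-Lambert: A = -log10(I / I0)
    return -np.log10(intensity / reference)

# Hypothetical calibration standards: concentration (uM) vs. measured A.
conc = np.array([0.0, 5.0, 10.0, 20.0])
A_std = np.array([0.01, 0.11, 0.22, 0.43])
slope, intercept = np.polyfit(conc, A_std, 1)

A_sample = absorbance(intensity=7200.0, reference=10000.0)  # A ~ 0.143
c_sample = (A_sample - intercept) / slope
print(f"A = {A_sample:.3f}, c = {c_sample:.1f} uM")
```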
12

Blundell, Emma L. C. J. "Measuring zeta potential using tunable resistive pulse sensing : applications in biosensing." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/25557.

Abstract:
The aim of this PhD was to develop and optimise an analytical method that incorporates zeta potential measurements within tunable resistive pulse sensing (TRPS) for biosensing. Modern society is dependent upon the accurate and rapid quantification of biological analytes within solution (biological or environmental) and on materials (clothing, skin, food). If the characterisation of particles within biological samples such as blood, plasma and serum is done simply by optical methods such as light scattering or microscopy, the various particulates and molecules, many of which are similar in size, may not be identified. TRPS is a label-free, non-optical technique that can complete size, concentration and, more recently, aided by the work in this thesis, zeta potential measurements in real time. Zeta potential could be a powerful analytical tool, as it is related to the charge on an analyte and can be measured by monitoring the velocities of analytes as they traverse a nanopore in an electric field. Monitoring translocation velocities through the pore, and thus zeta potentials, could provide an extra signal to help characterise analytes. Following a literature review in chapter 1, which focuses on the use of nanoparticles and their characterisation within bioassays, a general theory chapter (chapter 2) covers common theory and the experimental setup used throughout the research. Chapter 3 contains theory specific to zeta potential measurements using TRPS, developed with an industrial sponsor, and chapter 4 is the application of this theory. It contains details on applying the method of inferring zeta potential from particle velocities to measure the change in zeta potential of nanoparticles as their surfaces are functionalised with DNA of varying packing density, length, structure and hybridisation times, and to determine the sensitivity of the method. As described, the zeta potential is determined via the particle velocities as they traverse a pore, which are in turn determined from the signal produced by a TRPS measurement, a blockade. The blockade gives information on the particle velocities at relative positions within the pore as well as information on the size and charge of the particle. TRPS is an evolving analytical platform that can differentiate samples of similar and the same size by their charge in a range of electrolyte solutions. This is important for whole blood and biological samples, for example, as there will always be other biomolecules or contaminants present, of similar size, that may not be the target of interest. A large part of this PhD was the incorporation of DNA aptamers onto nanoparticles as recognition elements for a specific target. They were of particular interest as aptamers are ssDNA (single-stranded DNA) strands with high affinity and specificity for a target analyte. Nanoparticles can be functionalised with DNA aptamers or proteins as a means to capture a target analyte. TRPS was used to monitor the binding of DNA aptamers to their target proteins, aided by zeta potential measurements. The results showed that a smaller zeta potential value was observed when a target protein was bound to the aptamer-modified particles. As well as protein detection and quantification, a new assay using nanoparticles as tags was investigated in chapter 5. TRPS was used to monitor controlled particle aggregation in the presence of target bioparticles, mimicking a streptavidin-biotin assay at first.
It was found that when two differently sized particle sets were mixed, one functionalised with biotin and the other with streptavidin (70 nm and 115 nm at a 10:1 ratio), the particles in excess saturated the larger particles, resulting in a large change in size and zeta potential that could be monitored using the tunable pores. This method was then applied to nanoparticles in complex biological media, including plasma, serum, and biological buffers used to suspend bacteriophage samples; two examples are given in the thesis, the first in chapter 5 and the second in chapter 7. In chapter 5, as well as sub-150 nm particles, bacteriophages of similar sizes were investigated to extend the technique to biologically relevant particles. State-of-the-art methods of counting bacteriophage via optical techniques have proven difficult or inconsistent. In preliminary work shown in chapter 6, the characterisation of phage samples in their respective media is demonstrated. TRPS has overcome some of these challenges, and preliminary data have been obtained for the size and charge characteristics of different phage types, including Salmonella phage and coliphage. The study also progressed to the size and concentration analysis of Clostridium difficile phage, which has gained interest in recent decades due to its uses in therapeutics. As an alternative to nanoparticle-based assays, the pores themselves were modified with DNA aptamers (see chapter 6) for direct detection of a target analyte without the need for a particle label. Pore surface modifications were completed to enable pores to be easily functionalised with DNA, and this work enabled the current rectification properties of conically shaped pores to be explored. Limits of detection for DNA-modified pores were found to be similar to those of a particle-based assay (5 pM and 18 pM, respectively), but the particle assays are more versatile and may be used in future for multiplexing experiments. Finally, in chapter 7, the technique and methodology were used to monitor changes in the behaviour of nanoparticles as they were immersed in protein-rich solutions, to mimic an in vivo environment. Here the protein corona around the nanoparticles was investigated as a function of temperature (25 °C and 37 °C). The kinetics and binding mechanism of high- and low-affinity proteins forming a protein corona could be monitored in real time, as well as displacement reactions between various proteins, showing the advantages of TRPS technology. In summary, from working with a commercial partner and collaborating with other institutions, we have delivered 4 papers (plus one JoVE paper), including a review of applications of TRPS technology and work detailed in this thesis, presented at 14 conferences and user meetings, and facilitated the development and implementation of zeta potential into bioassays.
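As a hedged illustration of the velocity-to-zeta-potential step, the sketch below applies the Smoluchowski approximation zeta = eta*mu/eps with electrophoretic mobility mu = v/E. The TRPS-specific calibration against standards described in the thesis is omitted, and the velocity and field values are invented.

```python
ETA = 8.9e-4             # viscosity of water, Pa*s
EPS = 78.5 * 8.854e-12   # permittivity of water, F/m

def zeta_potential(velocity_m_s: float, field_V_m: float) -> float:
    mobility = velocity_m_s / field_V_m   # electrophoretic mobility, m^2/(V*s)
    return ETA * mobility / EPS           # Smoluchowski zeta, volts

v = 150e-6   # particle velocity in the pore, m/s (illustrative)
E = 30e3     # local electric field, V/m (illustrative)
print(f"zeta = {zeta_potential(v, E)*1e3:.1f} mV")
```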
13

Walker, Ian. "Applications of solid modelling to component inspection with coordinate measuring machines." Thesis, University of Bath, 1991. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314607.

14

Temporão, Mickael. "Measuring public opinion using Big Data : applications in computational social sciences." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34438.

Abstract:
Democracy is predicated on the idea that governments are responsive to the publics which they are elected to represent. In order for elected representatives to govern effectively, they require reliable measures of public opinion. Traditional sources of public opinion research are increasingly complicated by the expanding modalities of communication and accompanying cultural shifts. Diversification of information and communication technologies as well as a steep decline in survey response rates is producing a crisis of confidence in conventional probability sampling. An increasingly rich, yet relatively untapped, source of public opinion takes the form of extraordinarily large, complex datasets commonly referred to as Big Data. These datasets present numerous challenges for statistical inference, not least of which is that they typically take the form of non-probability samples. By combining recent advances in social science, computer science, statistics, and information technology, this thesis, which combines three distinct articles, addresses some of these challenges by developing new and scalable approaches to facilitate the extraction of valuable insights from Big Data. In so doing, it introduces novel approaches to study public opinion and contributes to important theoretical debates within the literature on public opinion research by marshalling the empirical evidence necessary to test theories in political science that were previously unaddressed due to data scarcity. In our first article, Ideological scaling of social media users, we develop a model that predicts the ideology and vote intention of social media users by virtue of the vernacular that they employ in their interactions on social media platforms. In our second article, The symbolic mosaic, we draw from a large online panel survey in Canada to make inferences about the heterogeneous construction of national identities by exploring variance in the attachment to symbols among various publics. Finally, in our third article, Crowdsourcing the vote, we endeavour to draw on the wisdom of the crowd in large, non-random election studies as part of an effort to control for the selection bias inherent to such samples. Each of these studies makes a contribution to our collective understanding of how computational social science can advance theoretical knowledge of the dynamics of public opinion and political behaviour.
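A minimal sketch in the spirit of the first article's vocabulary-based scaling: a bag-of-words classifier over users' posts. The toy corpus, labels, and logistic-regression choice are illustrative stand-ins, not the article's actual model or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy corpus: posts labelled by a (hypothetical) known ideology.
posts = ["lower taxes and free markets",
         "defend traditional family values",
         "invest in public healthcare for all",
         "climate justice and workers rights now"]
ideology = ["right", "right", "left", "left"]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(posts), ideology)

new_posts = ["cut taxes on small business"]
print(clf.predict(vec.transform(new_posts))[0])   # expected: "right"
```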
15

Shi, Xiang. "Advanced Applications of Generalized Hyperbolic Distributions in Portfolio Allocation and Measuring Diversification." Thesis, State University of New York at Stony Brook, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10165670.

Abstract:

This thesis consists of two parts. The first part addresses the parameter estimation and calibration of the Generalized Hyperbolic (GH) distributions. In this part we review the classical expectation maximization (EM) algorithm and factor analysis for the GH distribution. We also propose a simple shrinkage estimator derived from the penalized maximum likelihood. In addition, an on-line EM algorithm is implemented for the GH distribution, and its regret for a general exponential family can be represented as a mixture of Kullback-Leibler divergences. We compute the Hellinger distance of the joint GH distribution to measure the performances of all the estimators numerically. Empirical studies for long-term and short-term predictions are also performed to evaluate the algorithms.

In the second part we apply the GH distribution to portfolio optimization and risk allocation. We show that the mean-risk portfolio optimization problem for a certain type of normal mixture distributions, including the GH distribution, can be reduced to a two-dimensional problem by fixing the location parameter and the skewness parameter. In addition, we show that the efficient frontier of the mean-risk optimization problem can be extended to three-dimensional space. We also propose a simple algorithm to deal with transaction costs. The first and second derivatives of the CVaR are computed analytically when the underlying distribution is GH. With these results we are able to extend the effective number of bets (ENB) to general risk measures with the GH distribution. By diagonalizing the Hessian matrix of a risk measure we are able to extract locally independent marginal contributions to the risk. The minimal torsion approach can still be applied to get the local coordinates of the marginal contributions.
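As a generic illustration of the effective-number-of-bets idea mentioned above, the sketch below computes an entropy-based ENB from normalised variance risk contributions. This is the standard covariance-based version, not the thesis's GH/CVaR extension via Hessian diagonalization and minimal torsion.

```python
import numpy as np

def enb(weights: np.ndarray, cov: np.ndarray) -> float:
    total_var = weights @ cov @ weights
    contrib = weights * (cov @ weights) / total_var   # normalised risk contributions
    p = np.clip(contrib, 1e-12, None)
    p = p / p.sum()
    return float(np.exp(-(p * np.log(p)).sum()))      # exponential entropy

cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])                  # illustrative covariance
w = np.array([0.5, 0.3, 0.2])
print(f"ENB = {enb(w, cov):.2f}  (3 would mean fully even risk across 3 assets)")
```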

16

Wolfaardt, H. Jurgens. "Theory of the microfluidic channel angular accelerometer for inertial measurement applications." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-05152007-120803.

17

Meritt, Ryan James. "A Study of Direct Measuring Skin Friction Gages for High Enthalpy Flow Applications." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/76783.

Abstract:
This study concerns the design, analysis, and initial testing of a novel skin friction gage for applications in three-dimensional, high-speed, high-enthalpy flows. Design conditions required favorable gage performance in the Arc-Heated Facilities at Arnold Engineering Development Center. Flow conditions are expected to be at Mach 3.4, with convective heat properties of h = 1,500 W/(m²·K) (264 Btu/(hr·ft²·°R)) and T_aw = 3,900 K (7,000 °R). The wall shear stress is expected to be as high as τ_w = 2,750 Pa (0.40 psi) with a correlating skin friction coefficient around C_f = 0.0035. Through finite element model and analytical analyses, a generic gage design is predicted to remain fully functional and within reasonable factors of safety for short-duration tests. The deflection of the sensing head does not exceed 0.025 mm (0.0001 in). Surfaces exposed to the flow reach a maximum temperature of 960 K (1,720 °R), and the region near the sensitive electronic components experiences a negligible rise in temperature after a one-second test run. The gage is a direct-measuring, non-nulling design in a cantilever beam arrangement. The sensing head is flush with the surrounding surface of the wall and is separated by a small gap, approximately 0.127 mm (0.005 in). A dual-axis, semiconductor strain gage unit measures the strain in the beam resulting from the shear stress experienced by the head due to the flow. The gage design incorporates a unique bellows system as a shroud to contain the oil filling and protect the strain gages. Oil filling provides dynamic and thermal damping while eliminating uniform pressure loading. An active water-cooling system is routed externally around the housing in order to control the temperature of the gage system and electronic components. Each gage is wired in a full-bridge Wheatstone configuration and is calibrated for temperature compensation to minimize temperature effects. Design verification was conducted in the Virginia Tech Hypersonic Tunnel. The gage was tested in well-documented Mach 3.0 cold and hot flow environments. The tunnel provided stagnation temperatures and pressures of up to T₀ = 655 K (1,180 °R) and P₀ = 1,020 kPa (148 psi), respectively. The local wall temperatures ranged from T_w = 292 to 320 K (525 to 576 °R). The skin friction coefficient measurements were between 0.00118 and 0.00134 with an uncertainty of less than 5%. Results were shown to be repeatable and in good concurrence with analytical predictions. The design concept of the gage proved to be very sound in heated, supersonic flow. When it worked, it did so very effectively. Unfortunately, the implementation of the concept is still not robust enough for routine use. The strain gage units in general were often unstable and proved to be insufficiently reliable. The detailed gage design as built was subject to many potential sources of assembly misalignment and machining tolerances, and was susceptible to pre-loading. Further recommendations are provided for a better implementation of this design concept to make a fully functional gage test-ready for Arnold Engineering Development Center.
Master of Science
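A hedged sketch of the data-reduction chain such a full-bridge strain-gage channel implies: bridge voltage to strain, strain to shear force via a static calibration, and force to wall shear stress over the head area. The gage factor, excitation, head diameter, and calibration constant below are invented placeholders, not the gage's actual values.

```python
import math

GAGE_FACTOR = 2.0     # illustrative; semiconductor gages are typically higher
V_EXC = 5.0           # bridge excitation, V (assumed)
HEAD_DIAM = 0.005     # sensing-head diameter, m (assumed)
K_CAL = 1.2e-3        # static calibration: strain per unit shear force, 1/N (assumed)

def shear_stress(v_out: float) -> float:
    strain = v_out / (GAGE_FACTOR * V_EXC)   # full-bridge, four active arms
    force = strain / K_CAL                   # from static calibration
    area = math.pi * HEAD_DIAM**2 / 4
    return force / area                      # wall shear stress, Pa

print(f"tau_w = {shear_stress(v_out=2.4e-4):.1f} Pa")
```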
18

Lindberg, Emil. "Measuring the effect of memory bandwidth contention in applications on multi-core processors." Thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-114136.

Abstract:
In this thesis we design and implement a benchmarking tool for applications' sensitivity to main memory bandwidth contention, in a multi-core environment, on an ARM Cortex-A15 CPU. The tool is supposed to minimize usage of shared resources, except for the main memory bandwidth, allowing it to isolate the effects of the bandwidth contention only. The difficulty in doing this lies in using a correct memory access pattern for this purpose, i.e. which memory addresses to access, in which order and at what rate in order to minimize cache usage while generating a high and controllable main memory bandwidth usage. We manage to implement a tool with low cache memory usage while still being able to saturate the main memory bandwidth. The tool uses a proportional-integral controller to control the amount of bandwidth it uses. We then use the tool to investigate the memory behaviour of the platform and of some applications when the tool is using a variable amount of bandwidth. However, we have some difficulties in analyzing the results due to the lack of support for hardware performance counters in the operating system we are using and are forced to rely on hardware timers for our data gathering. Another difficulty is the platform's limited L2 cache bandwidth, which leads to a heavy impact on L2 cache read latency by the tool. Despite this, we are able to draw some conclusions on the bandwidth usage of other applications in optimal cases with the help of the tool.
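To make the proportional-integral idea concrete, the sketch below regulates a generated access rate so that a measured bandwidth tracks a setpoint. The gains, the control period, and the stand-in "plant" are illustrative, not the thesis's implementation.

```python
def pi_control(setpoint, measure, kp=0.1, ki=5.0, dt=0.01, steps=300):
    """Drive the generator rate so measure(rate) approaches setpoint (MB/s)."""
    rate, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - measure(rate)   # tracking error, MB/s
        integral += error * dt
        rate = max(0.0, kp * error + ki * integral)
    return rate

# Stand-in plant: achieved bandwidth is 90% of the requested rate, capped.
measure = lambda rate: min(0.9 * rate, 6000.0)
print(f"steady-state rate = {pi_control(4000.0, measure):.0f} MB/s")  # ~4444
```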
19

Guo, Jun Feng. "Use of the ultrasonic technique in measuring inclusions in Al-Si alloy melts." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2007. http://theses.uqac.ca.

Abstract:
Thèse (M.Eng.) -- Université du Québec à Chicoutimi, 2007.
The title page also notes: thesis presented to the Université du Québec à Chicoutimi in partial fulfilment of the requirements for the master's degree in engineering. Includes bibliographical references (leaves 101-106). Also available electronically in PDF format.
20

Fleming, Brian Joseph William. "Measuring invariants and noise level from complex time series with applications to field measurements." Thesis, Heriot-Watt University, 2002. http://hdl.handle.net/10399/485.

21

Vander, Elst Harry-Paul. "Measuring, Modeling, and Forecasting Volatility and Correlations from High-Frequency Data." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/228960.

Abstract:
This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial assets volatility and correlations. The first two chapters provide useful tools for univariate applications while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performances of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk measures forecasts over long-horizon. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite sample properties are studied under four data generating processes, in presence, or not, of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provide a precise, computationally efficient, and easy alternative to measure integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high quality forecasts of pairwise correlations between commodities which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short selling constraints and transaction costs.
Doctorate in Economic and Management Sciences
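As background to the realized measures these models build on, the sketch below computes the basic realized variance, the sum of squared intraday log returns, on a simulated price path; real applications use cleaned high-frequency transaction data and refinements such as realized kernels.

```python
import numpy as np

rng = np.random.default_rng(1)
true_daily_vol = 0.02
n_intraday = 390   # e.g. one return per minute of a trading day

# Simulated intraday log returns consistent with the chosen daily volatility.
returns = rng.normal(0.0, true_daily_vol / np.sqrt(n_intraday), n_intraday)

rv = np.sum(returns**2)   # realized variance
print(f"realized vol = {np.sqrt(rv):.4f} vs true {true_daily_vol}")
```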
22

Ahmad, Riyaz. "Evaluation and applications of a new measuring device in the measurement of various ocular parameters." Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387681.

23

Haj, Ali Mahtab. "Measuring the Modeling Complexity of Microservice Choreography and Orchestration: The Case of E-commerce Applications." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42438.

Abstract:
With the increasing popularity of microservices for software application development, businesses are migrating from monolithic approaches towards more scalable and independently deployable applications using microservice architectures. Each microservice is designed to perform one single task. However, these microservices need to be composed together to communicate and deliver complex system functionalities. There are two major approaches to compose microservices, namely choreography and orchestration. Microservice compositions are mainly built around business functionalities, therefore businesses need to choose the right composition style that best serves their business needs. In this research, we follow a five-step process for conducting a Design Science Research (DSR) methodology to define, develop and evaluate BPMN-based models for microservice compositions. We design a series of BPMN workflows as the artifacts to investigate choreography and orchestration of microservices. The objective of this research is to compare the complexity of the two leading composition techniques on small, mid-sized, and end-to-end e-commerce scenarios, using complexity metrics from the software engineering and business process literature. More specifically, we use the metrics to assess the complexity of BPMN-based models representing the abovementioned e-commerce scenarios. An important aspect of our research is the fact that we model, deploy, and run our scenarios to make sure we are assessing the modeling complexity of realistic applications. For that, we rely on Zeebe Modeler and CAMUNDA workflow engine. Finally, we use the results of our complexity assessment to uncover insights on modeling microservice choreography and orchestration and discuss the impacts of complexity on the modifiability and understandability of the proposed models.
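One complexity metric commonly used for BPMN models in the business-process literature is Cardoso's control-flow complexity (CFC), which scores gateways by the decision states they introduce. The sketch below computes it for a hypothetical gateway list; whether the thesis uses exactly this metric and weighting is an assumption here.

```python
def cfc(gateways):
    """Control-flow complexity: XOR adds fan-out, OR adds 2^fan_out - 1,
    AND adds 1 (Cardoso's definitions)."""
    total = 0
    for kind, fan_out in gateways:
        if kind == "XOR":
            total += fan_out
        elif kind == "OR":
            total += 2**fan_out - 1
        elif kind == "AND":
            total += 1
    return total

# Hypothetical checkout workflow: (gateway type, number of outgoing paths).
checkout = [("XOR", 2), ("AND", 1), ("XOR", 3), ("OR", 2)]
print(f"CFC = {cfc(checkout)}")   # 2 + 1 + 3 + 3 = 9
```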
24

Solis-Fallas, Geovanny. "Economic, financial, and statistical applications in measuring firm performance: the case of Ohio commercial farmers /." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487854314872337.

25

Abdulkariem, Heibetullah. "Measuring magnetically induced eddy current densities in biological structures at low frequencies : circuit design and applications." Thesis, University of Aberdeen, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385153.

Abstract:
Electrical eddy currents can be induced inside biological tissue by time-varying magnetic fields according to Faraday's law of induction. These eddy currents are responsible for biological effects such as visual sensations in the eyes called magnetophosphenes, and they accelerate the healing process of fractured bones in magnetotherapy. Induced eddy currents also cause neuromuscular stimulation of cardiac muscle, shown as a disturbance in the electrocardiogram, respiratory disturbance, shown as a brief period of apnoea (stopped breathing), and muscle contraction in the forearm and fingers. The brain cortex can also be stimulated by pulsed magnetic fields. A transient decrease in blood flow in human skin is seen as a result of exposing the skin to pulsed magnetic fields. To study the effects of time-varying magnetic fields, a method is needed to assess and measure induced current densities. Many attempts have been made to find such a method, both theoretically and practically. A theoretical model with homogeneous and isotropic concentric loops of tissue was suggested, but biological tissues are neither homogeneous nor isotropic. A Hall effect method using a slab of semiconductor was suggested for measurement of current densities inside tissues, but this method ignored disturbances in the current pathways inside the tissue as a result of differences in impedance between the semiconductor and the tissue. A cube substitution method using platinized conductive faces implanted in the tissue does not consider problems of alignment of the probes with the direction of the isopotential lines or electrode-electrolyte impedance. Also, such electrodes measure only dc current. In a method using a three-dimensional electrode to provide three-dimensional information, the author did not give evidence that these electrodes produce zero field distortion, and also did not give information about measurements made using his electrodes. None of the above methods provides a solid approach to the problems of measuring induced current densities. This thesis presents a method of measuring induced current density capable of measuring both the magnitude and the direction of induced current densities. It uses five point electrodes, four of them applied inside the tissue while the fifth is just in electrical contact with the tissue. The method consists of a probe configuration system, an open-loop operational amplifier and a balanced semi-floating current driver. The leakage current, which goes to ground and causes error, can be adjusted to be very low (about 0.01% of the total output current). A pair of Helmholtz coils was employed to provide a system for producing a time-varying magnetic field. The core of the coil pair was shielded and grounded by a cut metal shield to avoid any interference from time-varying electric fields. The shield was also used as a metal incubator to keep biological samples at body temperature. The heat to the shield was supplied by a unit consisting of four power transistors and a circuit of sensing and controlling components. The method used in this study was tested by making measurements of eddy current densities induced in physiological saline solution as a model of a biological conducting fluid. The measurements were represented by arrows, each representing a single measurement, with the length of the arrow representing the magnitude of the current density and the direction representing the direction of the induced current.
Because electrically induced eddy currents depend on the electric charge density available inside tissue, and therefore on tissue electrical conductivity, this thesis also presents a direct and simple method for measuring complex tissue electrical conductivity. The method uses the same five-electrode system and shares the same point electrode configurations and balanced semi-floating current driver as used for the eddy current measurements. The method measures both the real and imaginary components of tissue complex conductivity. Both systems are gathered into one box and their functions are separated by four toggle switches. Measurements of electrically induced current densities and complex electrical conductivities of body fluids and tissues have been carried out on saline solutions with different ionic concentrations, expired human whole blood, expired human plasma, human cerebrospinal fluid (CSF) and human urine. Solid tissues such as bovine cardiac muscle and liver were also examined. Current-to-field ratios were obtained for experiments in both fluids and tissues.
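For a sense of the underlying physics, the sketch below evaluates the textbook result for a homogeneous conductor in a sinusoidal field: Faraday's law gives a peak induced field E = π·f·B0·r at radius r, so the eddy current density is J = σE. The conductivity, field, and geometry values are illustrative, not the thesis's experimental conditions.

```python
import math

def eddy_current_density(sigma_S_m, B0_tesla, freq_hz, radius_m):
    # Peak of E = (r/2) * dB/dt for B = B0*sin(2*pi*f*t) is pi*f*B0*r.
    E = math.pi * freq_hz * B0_tesla * radius_m   # induced E field, V/m
    return sigma_S_m * E                          # current density, A/m^2

# e.g. saline-like conductivity 1 S/m, 10 mT peak field at 50 Hz, r = 5 cm
J = eddy_current_density(1.0, 10e-3, 50.0, 0.05)
print(f"J = {J*1000:.1f} mA/m^2")
```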
26

Presa, Käld Marcus, and Oskar Svensson. "React Native and native application development : A comparative study based on features & functionality when measuring performance in hybrid and native applications." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52327.

Abstract:
Smartphone apps today have a wide array of different usages and features, and several different tools can be used to develop these apps. These tools can be broken down into three different categories, depending on what type of app they create: native, hybrid, or web apps. These types of apps come with their own advantages and disadvantages when it comes to development, performance, and costs, to name a few. This paper seeks to answer performance questions around gradual app development in the native development language Kotlin, in comparison to the hybrid development framework React Native, with a focus on common functionalities. The focus on functionality adds the perspective of not only performance but also how a native and a hybrid app may respond to the implementations, giving a wider glance at how native and hybrid compare. This may give a better understanding of how development will turn out for both hybrid and native in real-life cases. The chosen performance components in this study are CPU, RAM, and battery. The method for carrying out this research involves the implementation of two testing apps for smartphones, one in Kotlin and one in React Native, which function identically on their corresponding platforms. The testing apps are built from various functionalities that are gradually measured in experiments. The experiments for the apps were created to be a mixture of user usage and assurance of representative data from the smartphone's hardware components while the testing app is running. The experiments conducted in this essay show that React Native has an overall worse performance than Kotlin when it comes to CPU processing, and that React Native is more prone to a negative response in performance when features or functionality are implemented. Memory usage did not show the same clear difference. GPS was one functionality that performed somewhat worse for React Native than for Kotlin compared with the others involved, as further investigation of the collected data showed.
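As an analogy to the per-process sampling such experiments require, the sketch below polls CPU and resident memory for a process with psutil on a desktop host. The thesis's on-device measurements use the mobile platforms' own tooling, so this is an illustrative stand-in, not its setup.

```python
import psutil

def sample(pid: int, seconds: int = 5, interval: float = 1.0):
    """Print CPU% and resident memory for a process once per interval."""
    proc = psutil.Process(pid)
    for _ in range(int(seconds / interval)):
        cpu = proc.cpu_percent(interval=interval)   # % over the interval
        rss = proc.memory_info().rss / 1024**2      # resident set size, MiB
        print(f"cpu={cpu:5.1f}%  ram={rss:7.1f} MiB")

sample(psutil.Process().pid)   # demo: sample this Python process itself
```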
27

Reagle, Colin James. "Technique for Measuring the Coefficient of Restitution for Microparticle Sand Impacts at High Temperature for Turbomachinery Applications." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28875.

Abstract:
Erosion and deposition in gas turbine engines are functions of particle/wall interactions, and the Coefficient of Restitution (COR) is a fundamental property of these interactions. COR depends on impact velocity, angle of impact, temperature, particle composition, and wall material. In the first study, a novel Particle Tracking Velocimetry (PTV) / Computational Fluid Dynamics (CFD) hybrid method for measuring COR has been developed which is simple, cost-effective, and robust. A laser-camera system is used in the Virginia Tech Aerothermal Rig to measure microparticle velocity. The method solves for particle impact velocity at the surface by numerical methods. The methodology presented here characterizes a difficult problem by a combination of established techniques, PTV and CFD, which have not been used in this capacity before. The current study characterizes the fundamental behavior of sand at different impact angles. Two sizes of Arizona Road Dust (ARD) and one size of glass beads are impacted onto 304 stainless steel. The particles are entrained into a free jet of 27 m/s at room temperature. Mean results compare favorably with trends established in the literature. This technique to measure the COR of microparticle sand will help develop a computational model and serve as a baseline for further measurements at elevated, engine-representative air and wall temperatures. In the second study, ARD is injected into a hot flow field at temperatures of 533 K, 866 K, and 1073 K to measure the effects of high temperature on particle rebound. The results are compared with baseline measurements at ambient temperature made in the VT Aerothermal Rig, as well as previously published literature. The effects of increasing temperature and velocity led to a 12% average reduction in total COR at 533 K (47 m/s), a 15% average reduction at 866 K (77 m/s), and a 16% average reduction at 1073 K (102 m/s) compared with ambient results. From these results it is shown that a power law relationship may not conclusively fit the COR vs. temperature/velocity trend at oblique angles of impact. The decrease in COR appeared to be almost entirely a result of the increased velocity that resulted from heating the flow.
Ph. D.
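The post-processing step at the heart of a COR measurement reduces to ratios of rebound to impact velocities. The sketch below computes total, normal, and tangential COR from two-dimensional velocity vectors; the sample vectors are illustrative, not data from the rig.

```python
import numpy as np

def cor(v_in: np.ndarray, v_out: np.ndarray):
    """Coefficients of restitution from impact and rebound velocities
    given as [tangential, normal] components relative to the wall."""
    total = np.linalg.norm(v_out) / np.linalg.norm(v_in)
    normal = abs(v_out[1]) / abs(v_in[1])
    tangential = abs(v_out[0]) / abs(v_in[0])
    return total, normal, tangential

v_impact  = np.array([22.0, -12.0])   # m/s, toward the wall (illustrative)
v_rebound = np.array([18.0,   6.0])   # m/s, away from the wall (illustrative)
e, e_n, e_t = cor(v_impact, v_rebound)
print(f"COR total={e:.2f}  normal={e_n:.2f}  tangential={e_t:.2f}")
```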
28

Brush, Ursula Jane. "Design and Validation of an Intensity-Based POF Bend Sensor Applications in Measuring Three-Dimensional Trunk Motion." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1269456459.

29

Wiengarten, Frank. "Measuring the performance impact of electronic business applications in buyer-supplier relationships within the German automotive industry." Thesis, Ulster University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489977.

30

Carlsson, Viktor. "Measuring routines of ice accretion for Wind Turbine applications : The correlation of production losses and detection of ice." Thesis, Umeå universitet, Institutionen för fysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-37896.

Abstract:
Wind power will play a major role in the future energy system in Sweden. Most of the major wind parks are planned to be built at sites where the cold climate and atmospheric icing can cause serious problems, which underlines the importance of addressing these issues. The major cause of these problems is in-cloud icing of the rotor blades due to supercooled liquid cloud droplets. The droplets freeze upon impact with the rotor blade and form hard rime ice. This rime ice disrupts the aerodynamics, which leads to production losses and extra loads on the rotor blades, and when the ice is shed it poses a safety risk to people in the near environment. This master thesis focuses on how to measure the accretion of ice and on the correlation between measured ice and the production losses of two wind parks in northern Sweden. The results show a good correlation between the ice accretion on a stationary sensor and the production loss of a wind turbine. In most icing events, the icing of the sensor and large production losses from the wind turbine correlated clearly. Attempts were made to quantify the production losses at a given ice rate measured with the stationary sensors; however, no clear results were produced. The reason for this is that the wind turbines often stop completely during an icing event and that the time series analyzed were too short to quantify the losses at given wind speeds and ice rates. Recommendations on the type of sensor that should be used were to be produced; however, the conclusion is that no single sensor has performed satisfactorily and can be recommended for measuring ice accretion for wind turbine applications. Because of this, at least two sensors are recommended, to increase the redundancy of the measurement system. Modeling ice accretion from standard measured parameters has been done, and the results show that the time of icing could be determined quite well when the sensors were ice free; however, when the sensors, and especially the humidity sensors, were iced, the time of icing was overestimated. The main conclusion drawn is that there is a clear relationship between the icing of a stationary sensor and that of the rotor blade. There is still no sensor which fulfils all the demands of measuring ice accretion for wind turbine applications; further, it is possible with simple models to roughly determine when icing occurs using standard measurements.
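A minimal sketch of the correlation analysis the thesis describes: align an ice-rate signal from a stationary sensor with the turbine's production deficit (expected minus actual power) and compute their correlation. The data below are synthetic, and the linear loss model is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 200
ice_rate = np.clip(rng.normal(0, 1, hours), 0, None)       # g/h, synthetic
expected_power = 1500 + rng.normal(0, 50, hours)            # kW, from power curve
actual_power = expected_power - 400 * ice_rate + rng.normal(0, 30, hours)

deficit = expected_power - actual_power
r = np.corrcoef(ice_rate, deficit)[0, 1]
print(f"Pearson r(ice rate, production deficit) = {r:.2f}")
```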
APA, Harvard, Vancouver, ISO, and other styles
31

Pauscher, Lukas [Verfasser], and Thomas [Akademischer Betreuer] Foken. "Measuring and understanding site-specific wind and turbulence characteristics for wind energy applications / Lukas Pauscher ; Betreuer: Thomas Foken." Bayreuth : Universität Bayreuth, 2017. http://d-nb.info/1139358189/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chou, Nigel Shijie. "Measuring mass changes in single suspended and adherent cells, with applications to personalized medicine in Glioblastoma Multiforme (GBM)." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112498.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Biological Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 117-119).
The increased precision offered by developments in suspended microchannel resonator (SMR) technology opens the possibility of measuring small mass changes in cells. Mass accumulation rate (MAR) measurements in single suspended cells over short periods of time have the potential to characterize heterogeneous collections of tumorigenic cells and to serve as a functional marker for the effects of anti-cancer drugs. In this thesis we adapt mass accumulation measurements for use in Glioblastoma Multiforme (GBM) patient-derived cell lines, exploring the heterogeneity between and within patient tumors, and validating the measurement as a predictor of drug susceptibility with response times on the order of 24 to 48 hours using an experimental MDM2 inhibitor. While MAR measurements can be performed on suspended single cells with high precision, the technique has not yet been adapted for measuring the growth of adherent cells. We develop a technique to measure mass accumulation in cells adhered to the inner surface of the resonator channel. To overcome challenges inherent in such a measurement, we use infrared imaging and multiple resonant modes to reveal the cell's position in the SMR, and utilize differential measurements from a second cantilever to account for frequency drift.
by Nigel Shijie Chou.
Ph. D.
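As an illustration of the MAR concept described above, a sketch extracting a mass accumulation rate as the slope of a linear fit to repeated single-cell buoyant-mass readings; the numbers are invented, not the thesis's data:
```python
import numpy as np

# Hypothetical buoyant-mass measurements (pg) of one cell over time (h)
t = np.linspace(0.0, 0.5, 30)                          # ~30 min of repeated weighings
rng = np.random.default_rng(1)
mass = 60.0 + 4.0 * t + rng.normal(0.0, 0.05, t.size)  # pg, with measurement noise

# MAR is the slope of the best-fit line through the mass trace
slope, intercept = np.polyfit(t, mass, 1)
print(f"initial mass ~ {intercept:.2f} pg, MAR ~ {slope:.2f} pg/h")
```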
APA, Harvard, Vancouver, ISO, and other styles
33

Greenwell, Felicity Emma. "Measuring implicit attitudes towards pictures: Presenting a modified version of the Extrinsic Affective Simon Task (EAST), and its applications." Thesis, Bangor University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Nguyen, Anh Dung. "Elaboration of an innovative protocol for measuring the mechanical properties of cell membranes for medical diagnostic and therapeutic applications." Thesis, Le Mans, 2020. http://www.theses.fr/2020LEMA1006.

Full text
Abstract:
La mesure des propriétés mécaniques des membranes entourant les cellules vivantes pourraient révéler/refléter leur état physiologique, leur état pathologique ou l’influence d'un agent externe tel qu'un médicament ou un virus, ou encore la réponse à une stimulation ou à un protocole thérapeutique. Les principales techniques de mesure de ces propriétés présentent de nombreuses limitations notamment en termes de qualité, de fiabilité, de la rapidité de la mesure et du nombre d’acquisitions. Cette thèse est centrée sur l’utilisation du Mode Circulaire de Microscopie à Force Atomique ou MC-AFM en milieu liquide pour des applications dans le domaine de la Santé. Ce mode est obtenu en modifiant l’électronique d’un AFM pour générer un contact en glissement en mouvement relatif circulaire. Couplé au mode spectroscopie de force AFM (i.e l’application d’un mouvement vertical à la pointe), le MC-AFM permet d’accéder en une seule procédure de mesure réalisée en régime stationnaire (déplacement continu à vitesse constante), à de nombreuses propriétés mécaniques de la membrane biologique, dont certaines sont inaccessibles par les procédures AFM conventionnelles.L’objectif principal de ce travail de doctorat est d’élaborer une série de protocoles et d’adapter le MC-AFM pour mesurer les propriétés mécaniques d’objets biologiques complexes. Une fois les protocoles validés à partir de globules rouges du sang, leurs intérêts pour des applications dans le domaine du Vivant sont démontrés à partir de (1) l’étude de l’influence de protocoles nutritionnels originaux à base de micro-algue sur les propriétés mécaniques des membranes de globules rouges pour et (2) l’étude de l’effet d’un traitement à base de phytostérols sur les cellules cancéreuses du sein. Ces protocoles s’avèrent également utiles pour mieux comprendre les mécanismes physiologiques impliqués, ou/et le rôle des molécules constituants la membrane sur l’évolution des propriétés mécaniques
Measurement of the mechanical properties of the membranes surrounding living cells could reveal or reflect their physiological state, a pathological condition, the influence of an external agent such as a drug or virus, or the response to stimulation or a therapeutic protocol. The main techniques for measuring these properties have many limitations, particularly in terms of quality, reliability, speed of measurement and number of acquisitions. This thesis focuses on the use of Circular Mode Atomic Force Microscopy (CM-AFM) in liquid media for applications in the field of health. This mode is obtained by modifying the electronics of an AFM to generate a sliding contact in a circular relative motion. Coupled with the AFM force spectroscopy mode (i.e. the application of a vertical movement to the tip), CM-AFM gives access, in a single measurement procedure performed under steady-state conditions (continuous displacement at constant speed), to numerous mechanical properties of the biological membrane, some of which are inaccessible by conventional AFM procedures. The main objective of this PhD project is to develop a series of protocols and adapt CM-AFM to measure the mechanical properties of complex biological objects. Once the protocols have been validated using red blood cells, their interest for applications in the life sciences is demonstrated by (1) studying the influence of original microalgae-based nutritional protocols on the mechanical properties of red blood cell membranes and (2) studying the effect of a phytosterol-based treatment on breast cancer cells. These protocols are also useful for better understanding the physiological mechanisms involved and/or the role of the molecules constituting the membrane in the evolution of its mechanical properties.
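For context on extracting mechanical properties from AFM force curves in general (not CM-AFM specifically), a sketch fitting the standard Hertz contact model for a spherical tip, F = (4/3) E* sqrt(R) d^1.5, to a synthetic force-indentation curve; the tip radius and data are assumptions:
```python
import numpy as np
from scipy.optimize import curve_fit

R = 1e-6  # assumed spherical tip radius, m

# Hertz model for a spherical indenter on a flat soft sample
def hertz(delta, e_eff):
    return (4.0 / 3.0) * e_eff * np.sqrt(R) * delta**1.5

# Synthetic indentation (m) and force (N) data for a soft sample, E* ~ 2 kPa
rng = np.random.default_rng(7)
delta = np.linspace(0, 500e-9, 50)
force = hertz(delta, 2e3) + rng.normal(0, 2e-11, delta.size)

(e_fit,), _ = curve_fit(hertz, delta, force, p0=(1e3,))
print(f"fitted effective modulus: {e_fit / 1e3:.2f} kPa")
```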
APA, Harvard, Vancouver, ISO, and other styles
35

Geleta, Solomon. "Measuring Citizens' Preferences for Protecting Environmental Resources: Applications of Choice Experiment Surveys, Social Network Analysis and Deliberative Citizens' Juries." Thesis, Colorado State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10262222.

Full text
Abstract:

Many reasons have been suggested as explanations for the observed differences in citizens' environmental conservation project policy choices and willingness-to-pay (WTP) values. Some attribute this distinctive decision behavior to contrasts in overall policy outcome expectations (preference heterogeneity) and/or differences in reactions to changes in the environmental attributes (response heterogeneity). Others attribute it to differences in individual choice rationales, personalities, encounters, and past and present experiences. In other words, even if the outcomes are the same, people do not have the same emotions, convictions, dispositions, or motivations.

In three separate essays, I investigate the possible reasons for the observed differences in citizens' environmental conservation policy choices and examine how preference and response heterogeneity arise. In the first essay, I ask if a priori environmental damage perception is a source of heterogeneity affecting conservation option choice decisions. In the second, I investigate if social networks (interactions among decision-making agents) affect choice decisions. In the third, I investigate if preferences change when decision making agents are allowed to deliberate among peers.

For the first essay, I conducted an on-line choice experiment (CE) survey. The survey asked questions that help to measure citizen preference for protecting environmental public goods, ascertain the value local residents are willing to pay (WTP), and determine how preference heterogeneity arises. CE attributes included groundwater use (measured by share of total water use from groundwater), aquatic habitat (measured by count of spawning kokanee salmon return), natural habitat health (measured by the sensitive ecosystem area reclaimed), and rural character (measured by a decrease in urban sprawl and/or a decrease in population density in rural areas). I used a special property levy as the vehicle of payment. Random parameter logit (RP) and latent class (LC) models were estimated to capture response and preference heterogeneity. The results suggest that (1) preference and response heterogeneity were found for the choices and all environmental attributes, respectively, (2) respondents who have a higher value for one environmental good will have a higher value for other environmental goods, and (3) a priori damage perception could be one of the sources of response and preference heterogeneity.

In the same survey, I included people's egocentric networks, interactions, environment-related activities and perceptions to empirically evaluate whether the social network effect (SNE) is a source of systematic differences in preference. I estimate consumer preferences for a hypothetical future environmental conservation management alternative described by its attributes within a nested logit model, nesting broader and distinct conservation options within choices impacted by the individual's network structure. The results show that some network centrality measures capture preference heterogeneity, and consequently differences in WTP values, in a systematic way.

Third, I compare the value estimated with the traditional choice experiment (CE) to the results obtained using the citizen jury (CJ) approach, a group-based approach also called the "Market Stall" in some literature. I estimate the effect of deliberation on conservation choice outcomes by removing any significant differences in socioeconomic status between the people who participated in the CJ (people who volunteered to be contacted again after the deliberation treatment) and those who did the survey twice but did not volunteer for the CJ (control group), so that changes in preferences can be attributed to the deliberation treatment only. The CJ approach involved two 90-minute deliberations held over two days, in which participants discussed and considered their preferences and WTP values with other household members. Results show that deliberation improves individuals' valuation process and that there is an observed difference in choice outcomes between the deliberation treatment and control groups. Both preference and response heterogeneity largely vanish when people are allowed to deliberate among peers.
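As background to the WTP estimates discussed in this abstract: in conditional-logit choice models, marginal willingness to pay for an attribute is conventionally computed as the negative ratio of the attribute coefficient to the cost coefficient. A minimal sketch with made-up coefficients loosely named after the attributes above:
```python
# Textbook marginal WTP from conditional-logit coefficients:
#   WTP_k = -beta_k / beta_cost. All coefficients below are made up.
coefficients = {
    "groundwater_share": 0.42,
    "kokanee_return": 0.0008,
    "ecosystem_area": 0.15,
    "rural_character": 0.30,
}
beta_cost = -0.012  # coefficient on the property levy (per dollar)

for attr, beta in coefficients.items():
    wtp = -beta / beta_cost
    print(f"marginal WTP for {attr}: ${wtp:.2f}")
```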

APA, Harvard, Vancouver, ISO, and other styles
36

Gendreau, Keith Charles. "X-ray CCDs for space applications : calibration, radiation hardness, and use for measuring the spectrum of the cosmic X-ray background." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Stoll, Josef [Verfasser], and Wolfgang [Akademischer Betreuer] Einhäuser-Treyer. "Measuring gaze and pupil in the real world: object-based attention, 3D eye tracking and applications / Josef Stoll. Betreuer: Wolfgang Einhäuser-Treyer." Marburg : Philipps-Universität Marburg, 2015. http://d-nb.info/1071947826/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

BRIONIZIO, JULIO DUTRA. "STUDY OF THE MEASURING METHOD OF THERMAL CONDUCTIVITY AND WATER CONTENT BY MEANS OF SPHERICAL GEOMETRY: APPLICATIONS ON AQUEOUS SOLUTIONS OF ETHANOL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=22034@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
A presente tese tem por objetivo o estudo teórico e experimental, seguindo as boas práticas metrológicas, de um método baseado em uma fonte esférica de calor para medição da condutividade térmica de líquidos, com foco em soluções aquosas de etanol, e posterior determinação do teor de água da substância. O estudo e o desenvolvimento de métodos de medição de condutividade térmica são essenciais em diversas aplicações de engenharia, visto que, em consequência das justificadas demandas atuais de economia e uso racional de energia térmica, a transferência de calor com a máxima eficiência possível é de extrema relevância. A medição do teor de água também é um relevante parâmetro em muitas áreas de pesquisa e nos setores industriais, pois a quantidade de água nas substâncias influencia vários processos físicos, químicos e biológicos. Contudo, a quantidade de equipamentos disponíveis no mercado para a medição de ambas as grandezas não é vasta. O método da esfera quente, em principio, é um método absoluto de medição da condutividade térmica, o que significa que o sensor pode fornecer um resultado sem ser calibrado. Porém, alguns parâmetros do modelo precisam ser analisados isoladamente ou obtidos por meio de calibração. Embora haja alguns estudos sobre este método, poucos têm os meios líquidos como foco principal. Ademais, tais estudos não correlacionam a condutividade térmica do material com o seu teor de água e nem realizam uma análise metrológica mais criteriosa, de modo a determinar minuciosamente as incertezas de medição. A aplicabilidade do método para medição da condutividade térmica e do teor de água das soluções analisadas mostrou-se bastante satisfatória, pois os resultados obtidos neste estudo apresentaram muito boa concordância com os valores propostos por vários pesquisadores e com as medições realizadas no Inmetro por outros métodos.
The aim of this thesis is the theoretical and experimental study, following good metrological practices, of a method based on a spherical heat source for measuring the thermal conductivity of liquids, focusing on aqueous solutions of ethanol, with subsequent determination of the water content of the substance. The study and development of measuring methods for thermal conductivity are essential in several engineering applications, since, as a consequence of the current justified demands for saving and rational use of thermal energy, heat transfer with the maximum possible efficiency is of great relevance. The measurement of water content is also a relevant parameter in several research areas and industrial sectors, since the quantity of water in substances influences several biological, chemical and physical processes. However, the amount of equipment available on the market for the measurement of both quantities is not vast. The heated sphere method is, in principle, an absolute method for the measurement of thermal conductivity, which means that the sensor may furnish a result without being calibrated. Nevertheless, some parameters of the model need to be analyzed separately or obtained by means of calibration. Although there are some studies on this method, few of them have liquids as the main focus. Moreover, these studies do not correlate the thermal conductivity of the material with its water content, and they do not perform a more careful metrological analysis in order to determine the measurement uncertainties. The applicability of the method to measure the thermal conductivity and the water content of the analyzed substances proved to be satisfactory, because the results obtained in this study presented very good agreement with the values proposed by several researchers and with the measurements performed at Inmetro by other methods.
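For orientation on the heated sphere method, the steady-state conduction solution for a sphere of radius a dissipating power Q into an infinite medium is Q = 4*pi*k*a*(Ts - Tinf), so k follows from measured power and temperature rise. A worked sketch with hypothetical numbers (not the thesis's data):
```python
import math

# Steady-state conduction from a sphere of radius a into an infinite medium:
#   Q = 4 * pi * k * a * (Ts - Tinf)  =>  k = Q / (4 * pi * a * (Ts - Tinf))
# All numbers below are hypothetical.
Q = 0.050    # heater power, W
a = 0.005    # sphere radius, m
Ts = 26.3    # sphere surface temperature, degC
Tinf = 25.0  # far-field liquid temperature, degC

k = Q / (4 * math.pi * a * (Ts - Tinf))
print(f"estimated thermal conductivity: {k:.3f} W/(m K)")  # ~0.61, close to water
```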
APA, Harvard, Vancouver, ISO, and other styles
39

Ripper, David. "Měření kvalitativních parametrů datových sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-242161.

Full text
Abstract:
The aim of this thesis was to study and describe known methods for testing transmission parameters in data networks based on the TCP/IP protocol stack. First, the individual transmission parameters were specified and their impact on service quality determined. A further aim was to compare the different methodologies for measuring transmission parameters and assess them on the basis of these findings, and to propose a methodology for measuring service quality from the user's perspective. The result was a web application in which users measured basic transmission parameters and then rated videos according to the MOS (Mean Opinion Score) evaluation scale; these data were used to analyze and classify the connection and to assess the impact of the connection on the MOS rating.
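A minimal sketch of the user-side analysis this abstract describes: deriving basic transmission parameters from RTT samples and pairing them with MOS ratings. The sessions below are placeholders, not the thesis's data:
```python
import statistics

# Placeholder per-user measurements: RTT samples (ms) and the user's MOS rating
sessions = [
    {"rtt_ms": [20, 22, 21, 25, 23], "mos": 4.5},
    {"rtt_ms": [80, 95, 120, 88, 140], "mos": 2.5},
    {"rtt_ms": [45, 50, 48, 52, 47], "mos": 3.8},
]

for s in sessions:
    rtt = s["rtt_ms"]
    mean_rtt = statistics.mean(rtt)
    jitter = statistics.stdev(rtt)  # simple jitter estimate from RTT variation
    print(f"mean RTT {mean_rtt:6.1f} ms, jitter {jitter:5.1f} ms -> MOS {s['mos']}")
```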
APA, Harvard, Vancouver, ISO, and other styles
40

Zagorská, Kateřina. "Návrh a realizace testovací aplikace měření rozměrů optickým senzorem a robotickým ramenem." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442816.

Full text
Abstract:
This diploma thesis deals with the creation of a measuring application using an optical sensor and a robotic arm in the RobotStudio software. The literature review describes the field of optical measurement and calibration. The thesis then covers the process of creating the measuring application and outlines the issue of calibrating its individual parts. The work is supplemented by a demonstration of repeatability measurements and a description of possible errors that may occur if the robot is set up incorrectly.
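As a side note to the repeatability demonstration mentioned in the abstract, a minimal sketch of a positional-repeatability computation in the spirit of ISO 9283 (mean distance to the barycentre plus three standard deviations); the points below are synthetic, not measurements from the thesis:
```python
import numpy as np

# Synthetic attained positions (mm) from repeated moves to one target pose
rng = np.random.default_rng(2)
points = np.array([100.0, 250.0, 300.0]) + rng.normal(0, 0.02, (30, 3))

# ISO 9283-style positional repeatability: RP = mean distance to the
# barycentre plus three standard deviations of that distance
centroid = points.mean(axis=0)
d = np.linalg.norm(points - centroid, axis=1)
rp = d.mean() + 3.0 * d.std(ddof=1)
print(f"positional repeatability RP ~ {rp:.3f} mm")
```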
APA, Harvard, Vancouver, ISO, and other styles
41

Mühlhäuser, Katja Mara Vanessa [Verfasser], Christoph [Akademischer Betreuer] Kaserer, and Gunther [Akademischer Betreuer] Friedl. "Measuring expected stock returns - The implied cost of capital and its applications / Katja Mara Vanessa Mühlhäuser. Gutachter: Gunther Friedl ; Christoph Kaserer. Betreuer: Christoph Kaserer." München : Universitätsbibliothek der TU München, 2013. http://d-nb.info/1046404830/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Greeff, Gabriel Pieter. "A study for the development of a laser tracking system utilizing multilateration for high accuracy dimensional metrology." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/3181.

Full text
Abstract:
MScEng
Thesis (MScEng (Mechanical and Mechatronic Engineering))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Accurate dimensional measurement devices are critical for South Africa's international industrial competitiveness. An overview of all the necessary components of a laser tracking system using a multilateration technique for very accurate dimensional metrology is presented. A prototype laser tracker station was built to further investigate this type of system. The prototype successfully tracks a target within a volume of at least 200 × 200 × 200 mm³, approximately 300 mm away from the tracker. This system includes the mechanical design of a prototype tracker station, electronic implementation of amplification and motor control circuits, a tracking control algorithm, microcontroller programming and interfacing, as well as a user interface. Kinematic modelling along with Monte Carlo analyses finds the main error source of such a tracker to be misalignment of the beam-steering mechanism's gimbal axes. Multilateration is also motivated by the results of this analysis. Furthermore, an initial sequential multilateration algorithm is developed and tested. The results of these tests are promising and motivate the use of multilateration over a single-beam laser tracking system.
AFRIKAANSE OPSOMMING: Dit is van kritieke belang dat Suid-Afrika akkurate dimensionele metingstoestelle ontwikkel vir internasionale industriële mededinging. 'n Oorsig van al die nodige komponente vir 'n Laser-Volgsisteem, wat slegs van multilaterasie gebruik maak om baie akkurate drie dimensionele metings te kan neem, word in hierdie projek voorgestel. 'n Prototipe Laser-Volgsisteem-stasie word gebou om so 'n sisteem verder te ondersoek. Die prototipe slaag wel daarin om 'n teiken, binne 'n volume van 200 × 200 × 200 mm³ op 'n afstand van omtrent 300 mm te volg. Die sisteem sluit die meganiese ontwerp van die sodanige stasie, elektroniese seinversterking, motorbeheer, 'n volgingsbeheer algoritme, mikroverwerker programeering en intergrasie, asook 'n gebruikerskoppelvlak program in. Kinematiese modelering, tesame met Monte Carlo simulasies, toon aan dat die hoof oorsaak van metingsfoute by so 'n stasie by die rotasie-asse van die laserstraal-stuurmeganisme, wat nie haaks is nie, lê. Die multilaterasie metode word ook verder ondersteun deur dié modelering. 'n Algoritme wat sekwensiële multilateratsie toepas word boonop ontwikkel en getoets. Die resultate van die toetse dui daarop dat die algoritme funksioneer en dat daar voordele daarin kan wees om so 'n metode in plaas van 'n Enkelstraal-Volgsisteem te gebruik.
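A minimal sketch of multilateration's core step as motivated in this abstract: recovering a target position from distance-only measurements taken from several stations via nonlinear least squares. The station geometry and ranges are synthetic, not the thesis's algorithm:
```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic tracker station positions (m) and a true target to recover
stations = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [1, 1, 2]], dtype=float)
target_true = np.array([0.7, 0.9, 0.4])

# Range measurements from each station (distance only, no angles)
ranges = np.linalg.norm(stations - target_true, axis=1)

def residuals(p):
    # Difference between modelled and measured distances
    return np.linalg.norm(stations - p, axis=1) - ranges

sol = least_squares(residuals, x0=np.array([1.0, 1.0, 1.0]))
print("recovered target:", np.round(sol.x, 6))
```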
APA, Harvard, Vancouver, ISO, and other styles
43

Louis, Valérie. "Conception d'un procédé automatisé pour la mesure d'angles par capteurs à fibres optiques : Applications a la rééducation des handicapés." Vandoeuvre-les-Nancy, INPL, 1993. http://www.theses.fr/1993INPL020N.

Full text
Abstract:
Certain functional pathologies require specific joint rehabilitation. To this end, it is necessary to quantify the angular kinematics of the movements imposed on the patient; this serves two main purposes: enabling rehabilitation that is as well adapted as possible, and clinical follow-up. Research in this field is motivated by the absence on the market of user-friendly, versatile and inexpensive devices. A study is therefore conducted on the realization of a versatile fiber-optic angle measurement sensor; the originality of this study lies in its combination of several advanced techniques. After an extensive literature review, a geometric approach is adopted to analyze the light attenuation caused by bending in a fiber. An automated test bench is designed to study the behavior of the light wave as a function of radius and angle in a bent optical fiber. Following electronic processing of the detected signal, an indirect measurement of the bend angle is deduced. Finally, the performance of the sensor is evaluated and discussed on the basis of the results obtained.
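A minimal sketch of the indirect angle measurement principle this abstract describes: calibrate detected optical power against known bend angles, then invert the curve by interpolation. The calibration table below is invented for illustration:
```python
import numpy as np

# Invented calibration table: normalized transmitted power vs bend angle (deg).
# Transmission drops monotonically as the bend angle increases.
cal_angle = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
cal_power = np.array([1.00, 0.96, 0.89, 0.79, 0.66, 0.52, 0.40])

def angle_from_power(p):
    # np.interp needs increasing x, so interpolate on the reversed table
    return np.interp(p, cal_power[::-1], cal_angle[::-1])

measured_power = 0.72
print(f"estimated bend angle: {angle_from_power(measured_power):.1f} deg")
```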
APA, Harvard, Vancouver, ISO, and other styles
44

Ren, Kuan Fang. "Diffusion des faisceaux feuille laser par une particule sphérique et applications aux écoulements diphasiques." Rouen, 1995. http://www.theses.fr/1995ROUES012.

Full text
Abstract:
Laser-sheet beams (Gaussian beams of elliptical cross-section) are increasingly used in measurement techniques to widen the measurable range or to simplify signal processing. This thesis concerns the extension of the generalized Lorenz-Mie theory (GLMT) to the scattering of laser-sheet beams by a spherical particle and its applications to two-phase flows. A mathematical description of the laser-sheet beam is derived by two different methods. The usefulness of the GLMT depends strongly on our ability to compute the coefficients gmn describing the shape of the beam. Two integration methods, a localized interpretation, and an integral localized interpretation for computing these coefficients are studied. The theoretical tool developed is then applied to a discussion of the radiation pressure exerted on a particle. The theory of two-beam scattering is presented and applied to the evaluation of a real measurement system. In addition, an imaging model based on the GLMT is developed for an object located arbitrarily in a beam (of arbitrary orientation or polarization). The object may be a homogeneous or concentric particle, a hole, or an annular slit.
APA, Harvard, Vancouver, ISO, and other styles
45

Jung, Changsu. "Measuring movement of golfers with an accelerometer." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105144.

Full text
Abstract:
The purpose of this thesis is to analyze a golfer's movement and provide feedback related to the golfer's skill in simple and novel ways. A golfer can easily measure golf swings and get feedback based on his performance using an Android smartphone, without expensive or complicated devices. In this thesis, we designed and implemented an Android application using the accelerometer sensor to analyze swing data, identify critical points, and give various kinds of feedback based on the data. The feedback helps golfers understand their swing patterns, timing and speed, and thereby improve their skills.
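As a sketch of the kind of processing such an application performs, the following computes the acceleration magnitude of a synthetic trace and detects a swing-impact-like peak; the sampling rate, threshold and injected spike are assumptions, not the thesis's implementation:
```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic 3-axis accelerometer trace at an assumed 100 Hz sampling rate
fs = 100
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(3)
acc = rng.normal(0, 0.2, (t.size, 3))
acc[150] += [18.0, 6.0, 4.0]  # injected spike standing in for club impact

# Magnitude of the acceleration vector, then peak detection
mag = np.linalg.norm(acc, axis=1)
peaks, _ = find_peaks(mag, height=5.0, distance=fs // 2)
for p in peaks:
    print(f"swing event at t = {t[p]:.2f} s, |a| = {mag[p]:.1f} m/s^2")
```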
APA, Harvard, Vancouver, ISO, and other styles
46

Norouzi, Foad. "Measuring Application Availability, Usage and Performance: Implementation of EnView System." Thesis, Högskolan Dalarna, Datateknik, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:du-2357.

Full text
Abstract:
The main objective of this degree project is to implement an Application Availability Monitoring (AAM) system named Softek EnView for Fujitsu Services. The aim of implementing the AAM system is to proactively identify end-user performance problems, such as application and site performance, before the actual end users experience them. No matter how well applications and sites are designed and no matter how well they meet business requirements, they are useless to the end users if the performance is slow and/or unreliable. It is important for the customers to find out whether the end-user problems are caused by the network or by application malfunction. Softek EnView comprises the following components: Robot, Monitor, Reporter, Collector and Repository. The implemented system, however, is designed to use only some of these EnView elements: Robot, Reporter and Repository. Robots can be placed at any key user location and are dedicated to customers, which means that as the number of customers increases, the number of Robots increases with it. To make the AAM system ideal for the company to use, it was integrated with Fujitsu Services' centralised monitoring system, BMC PATROL Enterprise Manager (PEM); this was the reason for deciding to drop the EnView Monitor element. After the system was fully implemented, the AAM system was ready for production. Transactions were (and are) written and deployed on Robots to simulate typical end-user actions. These transactions are configured to run at certain intervals, which are defined together with customers. While they are driven against customers' applications automatically, the transactions collect availability and response-time data all the time. In case of a failure in a transaction, the Robot immediately quits the transaction and writes detailed information to a log file about what went wrong and which element failed while going through an application. An alert is then generated by a BMC PATROL Agent based on this data and is sent to the BMC PEM. Fujitsu Services' monitoring room receives the alert and reacts to it according to the ITIL incident management process, alerting system specialists on critical incidents to resolve problems. As a result of the data gathered by the Robots, weekly reports, which contain detailed statistics and trend analyses of the ongoing quality of IT services, are provided for the customers.
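A minimal sketch of a synthetic-transaction probe of the kind the Robots run: time a request against a monitored application, log availability and response time, and emit an alert line on failure. The URL and thresholds are placeholders, the third-party requests library is assumed, and this is not EnView's actual implementation:
```python
import time
import requests

URL = "https://example.com/app/login"  # placeholder for a monitored application
TIMEOUT_S = 10

def run_probe(url):
    """One synthetic transaction: returns (available, response_time_s)."""
    start = time.monotonic()
    try:
        r = requests.get(url, timeout=TIMEOUT_S)
        return r.status_code == 200, time.monotonic() - start
    except requests.RequestException:
        return False, time.monotonic() - start

ok, rt = run_probe(URL)
if ok:
    print(f"AVAILABLE response_time={rt:.2f}s")
else:
    print(f"ALERT: transaction failed after {rt:.2f}s")  # would be raised to PEM
```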
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Liyun. "Development and validation of a novel iOS application for measuring arm inclination." Thesis, KTH, Skolan för teknik och hälsa (STH), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173361.

Full text
Abstract:
Work in demanding postures is a known risk factor for work-related musculoskeletal disorders (MSDs); specifically, work with elevated arms may cause neck/shoulder disorders. Such a disorder is a tragedy for the individual, and costly for society. Technical measurements are more precise in estimating work exposure than observations and self-reports, and there is a need for uncomplicated methods for risk assessment. The aim of this project was to develop and validate an iOS application for measuring arm elevation angle. Such an application was developed, based on the built-in accelerometer and gyroscope of the iPhone/iPod Touch. The application was designed to be self-explanatory. Directly after a measurement, the 10th, 50th and 90th percentiles of the angular distribution, the median angular velocity, and the percentage of time above 30°, 60° and 90° are presented. The focused user group, ergonomists, was consulted during the user-interface design phase. Complete angular datasets may be exported via email as text files for further analyses. The application was validated by comparison with the output of an optical motion capture system for four subjects. The two methods correlated above 0.99, with an absolute error below 4.8° in arm flexion and abduction positions. During arm swing movements, the average root-mean-square differences (RMSDs) were 3.7°, 4.6° and 6.5° for slow (0.1 Hz), medium (0.4 Hz) and fast (0.8 Hz) arm swings, respectively. For simulated painting, the mean RMSD was 5.5°. Since the accuracy was similar to that of other tested field research methods, this convenient and "low-cost" application should be useful for ergonomists, for risk assessments or educational use. The plan is to publish this iOS application on the App Store (Apple Inc.) for free. New user feedback may further improve the user interface.
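A sketch of the core computation behind such an inclinometer application: estimating arm elevation as the angle between a device axis and gravity, then summarizing the angular distribution as the abstract describes. The axis convention and data are assumptions, not the published app's code:
```python
import numpy as np

# Synthetic accelerometer samples (m/s^2), assuming the device's z axis points
# along the upper arm, so arm elevation = angle between the z axis and gravity
fs = 50
g = 9.81
rng = np.random.default_rng(4)
true_angle = np.deg2rad(35 + 10 * np.sin(np.linspace(0, 6, 60 * fs)))
acc = np.column_stack([
    g * np.sin(true_angle) + rng.normal(0, 0.02, true_angle.size),
    np.zeros_like(true_angle),
    g * np.cos(true_angle) + rng.normal(0, 0.02, true_angle.size),
])

# Elevation angle from the gravity direction (a real app would low-pass first)
angle = np.degrees(np.arccos(np.clip(acc[:, 2] / np.linalg.norm(acc, axis=1), -1, 1)))

p10, p50, p90 = np.percentile(angle, [10, 50, 90])
velocity = np.abs(np.diff(angle)) * fs  # deg/s
print(f"10th/50th/90th percentiles: {p10:.1f} / {p50:.1f} / {p90:.1f} deg")
print(f"median angular velocity: {np.median(velocity):.1f} deg/s")
print(f"time above 30 deg: {100 * np.mean(angle > 30):.1f} %")
```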
APA, Harvard, Vancouver, ISO, and other styles
48

Velasquez, Donna Marie. "Measuring Nursing Care Complexity in Nursing Homes." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1360%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Tariq, Tariq. "GUI Application for measuring instrument: Noise measurement system." Thesis, Uppsala universitet, Informationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-208326.

Full text
Abstract:
The ever-growing demands on the electronics design of modern electron microscopes impose increased requirements on the measurement tasks in the electronics development of these systems. In this thesis, we report the findings of designing a noise measurement setup at Carl Zeiss, Oberkochen. The aim of this thesis was to explore the design of the noise measurement setup and to provide an interface, built with C# and an Agilent multimeter, which helps us analyze these measurements. This was achieved by the construction and evaluation of a prototype for a noise measurement application. For this purpose, Design Science Research (DSR) was conducted, situated in the domain of noise measurement research. The results consist of a set of design principles expressing key aspects that need to be addressed when designing noise measurement functionality. The artifacts derived from the development and evaluation process each constitute an example of how to design for noise measurement functionality of this kind.
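As an illustration of the core statistic such a noise measurement interface reports (the thesis's own application was written in C#), a sketch computing RMS and peak-to-peak noise from a synthetic voltage record:
```python
import numpy as np

# Synthetic voltage record standing in for multimeter samples (V)
rng = np.random.default_rng(5)
samples = 2.500 + rng.normal(0, 50e-6, 2000)  # 2.5 V level with ~50 uV noise

ac = samples - samples.mean()                 # remove the DC component
rms = np.sqrt(np.mean(ac**2))
ptp = ac.max() - ac.min()
print(f"RMS noise: {rms * 1e6:.1f} uV, peak-to-peak: {ptp * 1e6:.1f} uV")
```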
APA, Harvard, Vancouver, ISO, and other styles
50

Zobrist, Tom L. "Application of laser tracker technology for measuring optical surfaces." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/195326.

Full text
Abstract:
The pages of this dissertation detail the development of an advanced metrology instrument for measuring large optical surfaces. The system is designed to accurately guide the fabrication of the Giant Magellan Telescope and future telescopes through loose-abrasive grinding. The instrument couples a commercial laser tracker with an advanced calibration technique and a set of external references to mitigate a number of error sources. The system is also required to work as a verification test for the GMT principal optical interferometric test of the polished mirror segment, corroborating the measurements in several low-order aberrations. A set of system performance goals was developed to ensure that the system achieves these purposes. The design, analysis, calibration results, and measurement performance of the Laser Tracker Plus system are presented in this dissertation.
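As an aside on the low-order-aberration corroboration mentioned here, a sketch of a generic least-squares estimation of piston, tilt and power terms from surface-error samples; the basis and data are stand-ins, not the dissertation's actual reduction:
```python
import numpy as np

# Synthetic surface-error samples z(x, y) over a unit aperture (placeholders)
rng = np.random.default_rng(6)
x, y = rng.uniform(-1, 1, (2, 500))
z = 0.2 + 0.05 * x - 0.03 * y + 0.4 * (x**2 + y**2) + rng.normal(0, 0.01, 500)

# Least-squares fit of piston, tilt-x, tilt-y and power (r^2) terms
A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
piston, tilt_x, tilt_y, power = coeffs
print(f"piston={piston:.3f}, tilt=({tilt_x:.3f}, {tilt_y:.3f}), power={power:.3f}")
residual = z - A @ coeffs
print(f"residual RMS after removing low-order terms: {residual.std():.4f}")
```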
APA, Harvard, Vancouver, ISO, and other styles