Theses on the topic "Internal measure"




Consult the 50 best theses for your research on the topic "Internal measure".


You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Grady, Michael D. "A High Accuracy Microwave Radiometric Thermometer to Measure Internal Body Temperature". Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7404.

Abstract
The Centers for Disease Control and Prevention (CDC) released heat illness data which highlighted that ~29 heat stress hospitalizations and ~3 heat-related deaths occurred every day during the summer months within the US from 2000 to 2014. Heatstroke, the most severe form of heat illness and one which often leads to death, has been cited as entirely preventable if a timely intervention is introduced. This dissertation uses microwave radiometric thermometry to perform wireless non-invasive internal body temperature monitoring, which can enable intervention methods that help prevent deaths associated with heat illness. Overall, this dissertation develops a comprehensive closed-form analytical radiometric model and validates its effectiveness through a controlled life-like human body temperature sensing experiment. Wireless sub-skin temperature data is predicted from a human tissue-mimicking phantom testbed to within 1%. A generic isolated radiometer system equation is derived for all possible calibration source combinations and predicts results comparable to those of an ideal simulation. While improved isolation decreases measurement uncertainty, it does not improve the accuracy of noise temperatures estimated under a perfectly-isolated radiometer system equation assumption. A highly reproducible tissue-mimicking biological phantom (bio-phantom) recipe (comprised of urethane, graphite powder, and a solvent) was developed to accurately emulate the electrical properties of actual dry human skin versus frequency up to 18 GHz. The developed solid-state skin phantom begins in pourable liquid form and then cures at room temperature into a dry solid mold. An in-plane electromagnetic bandgap (EBG) structure was developed and integrated within an on-body inward-facing spiral antenna design. Compared with the conventional spiral antenna, the in-plane EBG structure demonstrated a +2.64 dB gain improvement in the antenna broadside and an -8 dB reduction in the rear gain while in contact with the body. Likewise, the measured main beam efficiency improved from 54.43% for the conventional antenna to 86.36% for the EBG antenna. Two techniques based on signal-flow graph theory were derived to explain both the non-coherent steady-state radiative transfer and the coherent radiative transfer within multi-layered dielectric media with non-uniform temperatures and any number of stratified layers. Both models allow for the accurate characterization and sensing of the thermal emissions originating from subsurface tissue layers.
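For orientation only, a minimal sketch of the first-order textbook relation behind radiometric thermometry; the dissertation's closed-form model is richer than this, so the weighted-average form below is an assumed starting point, not the thesis's equation:

```latex
% First-order radiometric relation (textbook form, assumed here):
% the measured antenna temperature T_A is a weighted average of the
% physical temperatures T_i of the N tissue layers in view,
\[
  T_A = \sum_{i=1}^{N} w_i \, T_i , \qquad \sum_{i=1}^{N} w_i = 1 ,
\]
% where each weight w_i is fixed by the antenna pattern and the
% absorption of the intervening layers; inverting for a sub-skin T_i
% therefore requires a model of the w_i, which is the role of the
% thesis's analytical model.
```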
2

Barnard, Megan Patricia. "Using biosensors to measure and regulate the negative affect of drivers in simulated environments". Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/44561/.

Abstract
Recent statistics have suggested that a proportion of drivers are killed or seriously injured on UK roads due to feeling nervous, uncertain or panicked whilst driving. The literature on negative emotions has primarily focused on the relationship between anger and driving. Setting aside the literature on driving phobias and fears after a motor vehicle accident, the literature on the relationship between anxiety and driving is limited and inconclusive. The aim of the thesis was to investigate the effects of both state and trait anxiety on driving behaviours and autonomic reactions using studies with varying methodologies. Chapter 2 describes a questionnaire study, which found that whilst driving anxiety can have a substantial impact on anxiety-related thoughts, behaviours and active avoidance, trait anxiety had slightly differential effects regarding social concerns, aggressive reactions and anxiety and avoidance of specific driving situations. Chapter 3 established, in a laboratory study, that whilst trait anxiety predicted various self-reported driving reactions, it did not affect levels of behavioural or autonomic reactions to driving video stimuli. Chapter 4 expands on these findings with a study that demonstrated reductions in high-frequency heart rate variability, indicating a potential lack of emotional regulation within this context. The research was then taken into a simulated environment, where state and trait anxiety were investigated. The studies reported in Chapters 6 and 7 found limited impacts of threatening instructional sets on levels of state anxiety, but demonstrated that increases in state anxiety could lead to changes in behaviour and skin conductance levels. Finally, a simulator study reported in Chapter 8 demonstrated that whilst trait anxiety did not affect driving behaviours, it did affect levels of attentional control and processing efficiency. This leads into a discussion of the theoretical and practical implications of these findings. Particular focus is given to the benefits of interventions and exposure therapies, and it is argued that different types of intervention would be more beneficial depending on levels of state or trait anxiety.
3

Brumbaugh, Scott J. "Development of a Methodology to Measure Aerodynamic Forces on Pin Fins in Channel Flow". Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/30871.

Abstract
The desire for smaller, faster, and more efficient products places a strain on thermal management in components ranging from gas turbine blades to computers. Heat exchangers that utilize internal cooling flows have shown promise in both of these industries. Although pin fins are often placed in the cooling channels to augment heat transfer, their addition comes at the expense of increased pressure drop. Consequently, the pin fin geometry must be judiciously chosen to achieve the desired heat transfer rate while minimizing the pressure drop and accompanying pumping requirements. This project culminates in the construction of a new test facility and the development of a unique force measurement methodology. Direct force measurement is achieved with a cantilever beam force sensor that uses sensitive piezoresistive strain gauges to simultaneously measure aerodynamic lift and drag forces on a pin fin. After eliminating the detrimental environmental influences, forces as small as one-tenth the weight of a paper clip are successfully measured. Although the drag of an infinitely long cylinder in uniform cross flow is well documented, the literature does not discuss the aerodynamic forces on a cylinder with an aspect ratio of unity in channel flow. Measured results indicate that the drag coefficient of a cylindrical pin in a single row array is greater than the drag coefficient of an infinite cylinder in cross flow. This phenomenon is believed to be caused by an augmentation of viscous drag on the pin fin induced by the increased viscous effects inherent in channel flow.
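For context, the drag coefficients discussed above are conventionally normalised as below; the projected-area choice for an aspect-ratio-one pin is an assumption here, since the thesis may define its reference area differently:

```latex
% Conventional drag-coefficient normalisation (assumed reference area):
\[
  C_D = \frac{F_D}{\tfrac{1}{2}\,\rho\,U^{2}\,A}, \qquad A = d\,h, \quad h/d = 1,
\]
% where F_D is the measured drag force, rho the fluid density, U the
% mean channel velocity, and A the pin's projected frontal area.
```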
4

Booi, Arthur Mzwandile. "An empirical investigation of the extension of servqual to measure internal service quality in a motor vehicle manufacturing setting". Thesis, Rhodes University, 2004. http://hdl.handle.net/10962/d1006139.

Abstract
This research explores the role which the construct service quality plays in an internal marketing setting. This is achieved by evaluating the perceptions and expectations of the production department with regard to the service quality provided by the maintenance department of a South African motor vehicle manufacturer. This was done using the INTSERVQUAL instrument, which was found to be a reliable instrument for measuring internal service quality within this context. A positivist approach was adopted in conducting this research. There are two main hypotheses for this study: the first is concerned with the relationship between overall internal service quality and the five dimensions of service quality, namely tangibles, empathy, reliability, responsiveness and assurance. The second focuses on the relationship between the front-line staff segments of the production department and the five dimensions of internal service quality. The results of this research suggest that the perceptions and expectations of internal service customer segments play a major role in achieving internal service quality. In addition, the importance of the INTSERVQUAL instrument in measuring internal service quality within the motor vehicle manufacturing environment is confirmed.
5

Nieuwoudt, Anna-Marie. "Confirmatory factor analysis of the organisational climate measure : a South African perspective". Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/24706.

Abstract
The effective management of organisational climate has become an increasingly important ingredient for business success. This has resulted in a need for up-to-date research and information on the subject, leading to the development of various measurement instruments. The main purpose of this study was to validate the Organisational Climate Measure (OCM) for the South African context. The OCM is designed to serve as a global multi-dimensional measure of organisational climate and is based on the competing values model developed by Quinn and Rohrbaugh. In this study a comprehensive literature review was conducted prior to the OCM's administration to a sample of 200 individuals employed in a South African organisation. The reliability and validity of the OCM were evaluated by means of Cronbach's alpha coefficient and confirmatory factor analysis. The results indicated strong correlations between factors and a good model fit. It was concluded that the OCM is a valid and reliable instrument for measuring organisational climate within the South African context.
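As an illustration of the reliability analysis mentioned above, here is a minimal sketch of Cronbach's alpha in Python; the data are synthetic stand-ins, not the OCM responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data standing in for questionnaire responses (the study used n = 200):
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 5))  # 5 correlated items
print(round(cronbach_alpha(scores), 2))
```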
6

Coleman, Mary Angela. "Construct Validity Evidence Based on Internal Structure: Exploring and Comparing the Use of Rasch Measurement Modeling and Factor Analysis with a Measure of Student Motivation". VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1425.

7

Krützmann, Nikolai Christian. "Analysis of Internal Boundaries and Transition Regions in Geophysical Systems with Advanced Processing Techniques". Thesis, University of Canterbury. Physics & Astronomy, 2013. http://hdl.handle.net/10092/8534.

Abstract
This thesis examines the utility of the Rényi entropy (RE), a measure of the complexity of probability density functions, as a tool for finding physically meaningful patterns in geophysical data. Initially, the RE is applied to observational data of long-lived atmospheric tracers in order to analyse the dynamics of stratospheric transition regions associated with barriers to horizontal mixing. Its wider applicability is investigated by testing the RE as a method for highlighting internal boundaries in snow and ice from ground penetrating radar (GPR) recordings. High-resolution 500 MHz GPR soundings of dry snow were acquired at several sites near Scott Base, Antarctica, in 2008 and 2009, with the aim of using the RE to facilitate the identification and tracking of subsurface layers to extrapolate point measurements of accumulation from snow pits and firn cores to larger areas. The atmospheric analysis focuses on applying the RE to observational tracer data from the EOS-MLS satellite instrument. Nitrous oxide (N2O) is shown to exhibit subtropical RE maxima in both hemispheres. These peaks are a measure of the tracer gradients that mark the transition between the tropics and the mid-latitudes in the stratosphere, also referred to as the edges of the tropical pipe. The RE maxima are shown to be located closer to the equator in winter than in summer. This agrees well with the expected behaviour of the tropical pipe edges and is similar to results reported by other studies. Compared to other stratospheric mixing metrics, the RE has the advantage that it is easy to calculate as it does not, for example, require conversion to equivalent latitude and does not rely on dynamical information such as wind fields. The RE analysis also reveals occasional sudden poleward shifts of the southern hemisphere tropical pipe edge during austral winter which are accompanied by increased mid-latitude N2O levels. These events are investigated in more detail by creating daily high-resolution N2O maps using a two-dimensional trajectory model and MERRA reanalysis winds to advect N2O observations forwards and backwards in time on isentropic surfaces. With the aid of this ‘domain filling’ technique it is illustrated that the increase in southern hemisphere mid-latitude N2O during austral winter is probably the result of the cumulative effect of several large-scale, episodic leaks of N2O-rich air from the tropical pipe. A comparison with the global distribution of potential vorticity strongly suggests that irreversible mixing related to planetary wave breaking is the cause of the leak events. Between 2004 and 2011 the large-scale leaks are shown to occur approximately every second year and a connection to the equatorial quasi-biennial oscillation is found to be likely, though this cannot be established conclusively due to the relatively short data set. Identification and tracking of subsurface boundaries, such as ice layers in snow or the bedrock of a glacier, is the focus of the cryospheric part of this project. The utility of the RE for detecting amplitude gradients associated with reflections in GPR recordings is initially tested on a 25 MHz sounding of an Antarctic glacier. The results show distinct regions of increased RE values that allow identification of the glacial bedrock along large parts of the profile. Due to the low computational requirements, the RE is found to be an effective pseudo gain function for initial analysis of GPR data in the field.
While other gain functions often have to be tuned to give a good contrast between reflections and background noise over the whole vertical range of a profile, the RE tends to assign all detectable amplitude gradients a similar (high) value, resulting in a clear contrast between reflections and background scattering. Additionally, theoretical considerations allow the definition of a ‘standard’ data window size with which the RE can be applied to recordings made by most pulsed GPR systems and centre frequencies. This is confirmed by tests with higher frequency recordings (50 and 500 MHz) acquired on the McMurdo Ice Shelf. However, these also reveal that the RE processing is less reliable for identifying more closely spaced reflections from internal layers in dry snow. In order to complete the intended high-resolution analysis of accumulation patterns by tracking internal snow layers in the 500 MHz data from two test sites, a different processing approach is developed. Using an estimate of the emitted waveform from direct measurement, deterministic deconvolution via the Fourier domain is applied to the high-resolution GPR data. This reveals unambiguous reflection horizons which can be observed in repeat measurements made one year apart. Point measurements of average accumulation from snow pits and firn cores are extrapolated to larger areas by identifying and tracking a dateable dust layer horizon in the radargrams. Furthermore, it is shown that annual compaction rates of snow can be estimated by tracking several internal reflection horizons along the deconvolved radar profiles and calculating the average change in separation of horizon pairs from one year to the next. The technique is complementary to point measurements from other studies and the derived compaction rates agree well with published values and theoretical estimates.
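For readers unfamiliar with the measure, a minimal sketch of the Rényi entropy and a sliding-window use of it follows; the window width, bin count and the choice alpha = 2 are illustrative assumptions, not the thesis's calibrated settings:

```python
import numpy as np

def renyi_entropy(p: np.ndarray, alpha: float) -> float:
    """Renyi entropy of a discrete probability distribution p.

    H_alpha(p) = log(sum(p_i ** alpha)) / (1 - alpha),  alpha > 0, alpha != 1.
    Low values flag strongly peaked distributions, e.g. the sharp
    amplitude/tracer gradients used as boundary markers in the thesis.
    """
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def windowed_re(signal: np.ndarray, width: int = 64, bins: int = 16):
    """Slide a window along a series and track the local RE."""
    out = []
    for i in range(len(signal) - width):
        hist, _ = np.histogram(signal[i:i + width], bins=bins)
        out.append(renyi_entropy(hist / hist.sum(), alpha=2.0))
    return np.array(out)
```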
8

Sharp, Helen Mary. "Cross-test and predictive validity of a narrative measure of young children's internal representations of the self and other (The Teddy Bears' Picnic; Muller, 1996) relations with age, gender and expressive language". Thesis, Bangor University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297674.

9

Lippy, Robert D. "Development of the seasonal beliefs questionnaire : a measure of cognitions specific to seasonal affective disorder /". Download the thesis in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Lippy2005.pdf.

10

Mullins, Edmond N. "Derivation bases, interval functions, and fractal measures /". The Ohio State University, 1996. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487942182325914.

11

Magee, Aoife. "Examination of the Social Emotional Assessment Measure (SEAM) Parent-Toddler Interval". Thesis, University of Oregon, 2013. http://hdl.handle.net/1794/12959.

Abstract
Parent-child relationships serve as the foundation for social emotional competence in young children. To support the healthy social emotional development of their children, parents may need to acquire information, resources, and skills through interventions that are based upon assessment of parent competence. This manuscript presents results from a study of parents of toddlers and the practitioners who serve them in a suburban area of the Pacific Northwest. The purpose of the study was to conduct initial psychometric studies on a curriculum-based tool, the Social Emotional Assessment Measure (SEAM), focused on improving parent-child interactions for parents of toddlers. Convergent validity and utility were investigated for the SEAM Parent-Toddler Interval. Findings suggest that the SEAM Parent-Toddler Interval is an appropriate tool that can identify the strengths and needs of parents and assist in designing quality interventions that might alter developmental trajectories, leading to improved family and child outcomes.
12

Ridler, Anne C. "Can internal elastic properties of cartilage be measured using MR elastography?" Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58836.pdf.

13

Trčka, Martin. "Řešení interních hrozeb v managementu bezpečnosti informací". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-224219.

Abstract
This diploma thesis deals with internal threats in an organization and their mitigation with the assistance of a DLP system. The first part of the thesis discusses the information security management system and describes the requirements for introducing the ISO/IEC 27000 series of standards. The next chapters detail internal threats and give a technical description of the DLP system. The second part of the thesis analyzes the organization and describes the process of implementing the DLP solution, which aims to reduce internal threats. The conclusion of the thesis describes the acceptance agreement and a financial evaluation of the implementation.
14

Cedervall, Simon Bertil. "Invariant measures and correlation decay for s-multimodal interval maps". Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434345.

15

Steel, Jacob. "Majorisation ordering of measures invariant under transformations of the interval". Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1292.

Abstract
Majorisation is a partial ordering that can be applied to the set of probability measures on the unit interval I = [0, 1). Its defining property is that one measure μ majorises another measure ν, written μ ⪰ ν, if ∫_I f dμ ≥ ∫_I f dν for every convex real-valued function f : I → ℝ. This means that studying the majorisation of M_T, the set of measures invariant under a transformation T : I → I, can give us insight into finding the maximising and minimising T-invariant measures for convex and concave f. In this thesis I look at the majorisation ordering of M_T for four categories of transformations T: concave unimodal maps, the doubling map T : x ↦ 2x (mod 1), the family of shifted doubling maps T : x ↦ 2x + α (mod 1), and the family of orientation-reversing weakly-expanding maps.
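The defining inequality, restored above from garbled extraction (the second measure symbol ν and the shift parameter α are standard-notation reconstructions), reads as follows in display form:

```latex
% Majorisation ordering on probability measures on I = [0,1):
% mu majorises nu, written mu \succeq nu, iff
\[
  \int_I f \, d\mu \;\ge\; \int_I f \, d\nu
  \quad \text{for every convex } f : I \to \mathbb{R}.
\]
% Maximising (resp. minimising) elements of M_T in this ordering are the
% natural candidates for extremising \int f d\mu over convex (concave) f.
```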
16

Hopkins, Tessa Marie. "Assessment of mood measures for people with multiple sclerosis". Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12257/.

Abstract
In order to understand the complex nature of the relationship between depression, anxiety and multiple sclerosis (MS), valid assessments are needed. The prevalence of anxiety and depression reported varies widely depending on the assessment used, although it is often reported as being high in people with MS. Despite the proposed high prevalence, depression and anxiety are often poorly identified in people with MS, resulting in poor access to treatment. To address these issues the current study assessed the validity of three commonly used measures of depression and anxiety for people with MS. The Beck Anxiety Inventory (BAI), Beck Depression Inventory (BDI-II) and the Hospital Anxiety and Depression Scale (HADS) were compared to a gold-standard clinical interview in 21 people with MS. The results showed that the BDI-II and HADS were valid measures for detecting depression and anxiety in people with MS. An optimum cut-off score of 18 for the BDI-II yields high sensitivity (89%) and high specificity (92%). An optimum cut-off score of 10 for the HADS demonstrates high sensitivity and specificity for both the anxiety subscale (75%, 100%) and the depression subscale (78%, 92%). The BAI was not found to be valid. It is recommended that the BDI-II and HADS be used for screening for anxiety and depression in people with MS. By conducting screening it is hoped that people with MS will have greater access to treatment, and future research can be conducted to better understand the relationship between depression, anxiety and MS.
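A minimal sketch of how a questionnaire cut-off yields sensitivity and specificity figures like those quoted; the scores below are invented for illustration, not the study's data:

```python
import numpy as np

def sens_spec(scores: np.ndarray, diagnosed: np.ndarray, cutoff: int):
    """Sensitivity/specificity of a questionnaire cut-off against a
    gold-standard clinical interview (1 = case, 0 = non-case)."""
    flagged = scores >= cutoff
    sensitivity = (flagged & (diagnosed == 1)).sum() / (diagnosed == 1).sum()
    specificity = (~flagged & (diagnosed == 0)).sum() / (diagnosed == 0).sum()
    return sensitivity, specificity

# Illustrative only -- not the study's data (the study had n = 21):
bdi = np.array([25, 12, 19, 7, 30, 16, 22, 9, 18, 5])
dx  = np.array([ 1,  0,  1, 0,  1,  0,  1, 0,  1, 0])
print(sens_spec(bdi, dx, cutoff=18))  # cut-off reported for the BDI-II
```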
17

Neethling, Willem Francois. "Comparison of methods to calculate measures of inequality based on interval data". Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97780.

Abstract
In recent decades, economists and sociologists have taken an increasing interest in the study of income attainment and income inequality. Many of these studies have used census data, but social surveys have also increasingly been utilised as sources for these analyses. In these surveys, respondents' incomes are most often not measured in true amounts but in categories, of which the last category is open-ended, because income is seen as sensitive data and/or is sometimes difficult to reveal. Continuous data divided into categories is often more difficult to work with than ungrouped data. In this study, we compare different methods to convert grouped data to data where each observation has a specific value or point. For some methods, all the observations in an interval receive the same value; an example is the midpoint method, where all the observations in an interval are assigned the midpoint. Other methods include random methods, where each observation receives a random point between the lower and upper bound of the interval. For some methods, random and non-random, a distribution is fitted to the data and a value is calculated according to the distribution. The non-random methods that we use are the midpoint, Pareto means and lognormal means methods; the random methods are the random midpoint, random Pareto and random lognormal methods. Since our focus falls on income data, which usually follows a heavy-tailed distribution, we use the Pareto and lognormal distributions in our methods. The above-mentioned methods are applied to simulated and real datasets. The raw values of these datasets are known and are categorised into intervals. The methods are then applied to the interval data to reconvert the interval data to point data. To test the effectiveness of these methods, we calculate some measures of inequality: the Gini coefficient, the quintile share ratio (QSR), the Theil measure and the Atkinson measure. The estimated measures of inequality, calculated from each dataset obtained through these methods, are then compared to the true measures of inequality.
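A minimal sketch of the midpoint method feeding a Gini computation; the income brackets and counts below are invented for illustration:

```python
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of a sample of incomes (0 = perfect equality).

    Uses G = (2 * sum(i * v_i)) / (n * sum(v)) - (n + 1) / n
    for the sorted sample v_1 <= ... <= v_n."""
    v = np.sort(values.astype(float))
    n = len(v)
    i = np.arange(1, n + 1)
    return 2 * np.sum(i * v) / (n * v.sum()) - (n + 1) / n

# Midpoint method: every observation in a bracket gets the bracket
# midpoint (bracket edges and counts are hypothetical):
edges = [(0, 10_000), (10_000, 25_000), (25_000, 50_000), (50_000, 100_000)]
counts = [120, 300, 180, 60]
points = np.concatenate(
    [np.full(c, (lo + hi) / 2) for (lo, hi), c in zip(edges, counts)])
print(round(gini(points), 3))
```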
18

Albert, Benoit. "Méthodes d'optimisation avancées pour la classification automatique". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0005.

Abstract
In data partitioning, the goal is to group objects based on their similarity. K-means is one of the most commonly used models, where each cluster is represented by its centroid. Objects are assigned to the nearest cluster based on a distance metric. The choice of this distance is crucial to account for the similarity between the data points. Opting for the Mahalanobis distance instead of the Euclidean distance enables the model to detect classes of ellipsoidal shape rather than just spherical ones. The use of this distance metric presents numerous opportunities but also raises new challenges explored in my thesis. The central objective is the optimization of models, particularly FCM-GK (a fuzzy variant of k-means), which is a non-convex problem. The idea is to achieve a higher-quality partitioning without creating a new model by applying more robust optimization methods. In this regard, we propose two approaches: ADMM (Alternating Direction Method of Multipliers) and Nesterov's accelerated gradient method. Numerical experiments highlight the particular effectiveness of ADMM optimization, especially when the number of attributes in the dataset is significantly higher than the number of clusters. Incorporating the Mahalanobis distance into the model requires the introduction of an evaluation measure dedicated to partitions based on this distance. An extension of the Xie and Beni evaluation measure is proposed. This index serves as a tool to determine the optimal distance to use. Finally, the management of subsets in ECM (an evidential variant) is addressed by determining the optimal imprecision zone. A new formulation of centroids and distances for subsets from clusters is introduced. Theoretical analyses and numerical experiments underscore the relevance of this new formulation.
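A minimal sketch of the Mahalanobis assignment step described above; this shows only the distance and nearest-centroid assignment, not the thesis's ADMM or Nesterov solvers:

```python
import numpy as np

def mahalanobis_sq(x: np.ndarray, centroid: np.ndarray, cov: np.ndarray) -> float:
    """Squared Mahalanobis distance d^2 = (x - c)^T S^{-1} (x - c).

    Unlike the Euclidean distance, this lets a k-means-style model
    recognise ellipsoidal clusters (the motivation behind FCM-GK)."""
    diff = x - centroid
    return float(diff @ np.linalg.solve(cov, diff))

def assign(points, centroids, covs):
    """Assign each point to the nearest centroid under that cluster's
    covariance (hard-assignment sketch of the general idea)."""
    d2 = np.array([[mahalanobis_sq(p, c, s) for c, s in zip(centroids, covs)]
                   for p in points])
    return d2.argmin(axis=1)
```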
19

Varekamp, Charlene Ghislaine. "Internal and external load measures as predictors of overuse injury risk in professional football players". Master's thesis, Faculty of Health Sciences, 2018. http://hdl.handle.net/11427/30044.

Abstract
Background: Football is the most popular sport worldwide. It has grown into a faster, more intensive and more competitive game, with a substantial increase in technical and physical demands. To reach the peak demands of match play, extensive training is necessary to improve performance and to reach the top level in professional football. Inadequate training loads prevent optimal performance adaptations, place the player at higher risk of being underprepared and may increase the risk of overuse injuries. Determining an optimal training load that improves performance and decreases the risk of overuse injuries is therefore important, and monitoring and understanding individual responses to training loads are necessary. To date there is limited research on predicting the risk of overuse injuries with respect to optimal training load in professional football players.
Aim: To describe the pattern of injuries and determine the influence of load metrics on injury risk in South African professional football: the total GPS distance covered, the number of GPS-measured high-intensity sprints and the session rating of perceived exertion load, and their effects on the risk of developing an overuse injury in professional football players.
Objectives: (1) To determine the relationship between total GPS distance (m) covered, the ACWLR and overuse injuries over a full competitive season. (2) To determine the relationship between GPS-measured high-intensity sprints, the ACWLR and overuse injuries over a full competitive season. (3) To determine the relationship between session rating of perceived exertion, the ACWLR and overuse injuries over a full competitive season. (4) To determine the overuse injury risk per playing position (defenders, midfielders and attackers). (5) To determine the patterns of injury during a full competitive season. (6) To determine the effect of internal load (sRPE) and external load (GPS) in a congestion week compared to a normal week on overuse injury risk.
Methods: Data were collected from 32 professional football players in the first and reserve teams over one full competitive Premier Soccer League season (2016/17). Training load metrics were assessed using the acute:chronic workload ratio (ACWLR) to predict overuse injury risk within the team. The relationships between total GPS distance covered (TDC), GPS-measured high-intensity sprints (HIS), session rating of perceived exertion (sRPE), the ACWLR and overuse injuries were determined. Overuse injuries were described by frequency, anatomical site and injury type, as well as by playing position (defenders, midfielders and attackers). The effect of a congestion week on overuse injury risk was also determined.
Results: No significant outcomes were recorded when predicting overuse injuries for the whole team with regard to average TDC, HIS and sRPE ACWLR. Overuse injuries may be predicted when monitoring individual player loads, thereby taking into account the peak demands of match play per playing position. Large differences between TDC and HIS, and large increases or decreases (20%) within weeks, may increase the risk of overuse injuries. Hamstring and groin injuries were the most common injuries sustained, and defenders sustained the most overuse injuries within the team relative to exposure time. Congestion weeks did not predict overuse injury risk.
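A minimal sketch of the acute:chronic workload ratio (ACWLR) calculation described in the methods; the 4-week chronic window and the sRPE numbers are conventional illustrations, not values from the thesis:

```python
import numpy as np

def acwr(weekly_loads: np.ndarray) -> float:
    """Acute:chronic workload ratio (ACWLR in the thesis).

    Acute = most recent week's load; chronic = rolling 4-week mean.
    The 4-week window (and the oft-quoted ~0.8-1.3 'sweet spot') are
    conventions from the wider load-monitoring literature, not the
    thesis's own findings."""
    acute = weekly_loads[-1]
    chronic = weekly_loads[-4:].mean()
    return acute / chronic

# sRPE load for one player: session RPE x session duration (minutes),
# summed per week (numbers are illustrative):
weeks = np.array([2400.0, 2600.0, 2500.0, 3400.0])
print(round(acwr(weeks), 2))  # a sharp week-on-week jump pushes this up
```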
20

Islam, Md Shafiqul. "Absolutely continuous invariant measures for piecewise linear interval maps both expanding and contracting". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ54294.pdf.

21

Tink, Victoria J. "The measured energy efficiency and thermal environment of a UK house retrofitted with internal wall insulation". Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33727.

Abstract
Approximately 30% of the UK's housing stock is comprised of older, solid wall buildings. Solid walls have no cavity and were built without insulation; these buildings therefore have high heat loss, can be uncomfortable for occupants throughout the winter and require an above-average amount of energy to heat. Solid wall buildings can be made more energy efficient by retrofitting internal wall insulation (IWI). However, there is little empirical evidence on how much energy can be saved by insulating solid wall buildings, and there are concerns that internal wall insulation could lead to overheating in the summer. This thesis reports measured results obtained from a unique facility comprised of a matched pair of unoccupied, solid wall, semi-detached houses. In the winter of 2015 one house of the pair was fitted with internal wall insulation, and both houses then had their thermal performance measured to see how differently they behaved. Measuring the thermal performance meant measuring the wall U-values, the whole-house heat transfer coefficient and the whole-house airtightness of the original and insulated houses. Both houses were then monitored in the winter of 2015; monitoring meant measuring the houses' energy demand while using synthetic occupancy to create normal occupancy conditions. In the summer of 2015 indoor temperatures were monitored in the houses to assess overheating. The monitoring was done firstly to see how differently an insulated and an uninsulated house perform under normal operating conditions, with the blinds open through the day and the windows closed. Secondly, a mitigation strategy was applied to reduce high indoor operative temperatures in the houses, which involved closing the blinds in the day to reduce solar gains and opening the windows at night to purge warm air from the houses. The original solid walls were measured to have U-values of 1.72 W/m²K, while with internal wall insulation the walls had U-values of 0.21 W/m²K, a reduction of 88%. The house without IWI had a heat transfer coefficient of 238 W/K; this was reduced by 39% to 144 W/K by installing IWI. The monitored data from winter were extrapolated into yearly energy demand; the internally insulated house used 52% less gas than before the retrofit. The measured U-values, whole-house heat loss and energy demand were all compared to those produced from RdSAP models. The house was found to be more energy efficient than expected in its original state and continued to use less energy than modelled once insulated. This has important implications for potential carbon savings and for calculating pay-back times for retrofit measures. In summer, operative temperatures in the living room and main bedroom were observed to be higher, by 2.2 °C and 1.5 °C respectively, in the internally insulated house in comparison to the uninsulated house. Both of these rooms overheated according to CIBSE TM52 criteria; however, the tests were conducted during an exceptionally warm period of weather. With the simple mitigation strategy applied, the indoor operative temperature in the internally insulated house was reduced to a similar level as observed in the uninsulated house. This demonstrates that any increased overheating risk due to the installation of internal wall insulation can be mitigated through the use of simple, low-cost mitigation measures.
This research contributes field-measured evidence gathered under realistic controlled conditions to show that internal wall insulation can significantly reduce the energy demand of a solid wall house; this in turn can reduce greenhouse gas emissions and could help alleviate fuel poverty. Further to this it has been demonstrated that in this archetype and location IWI would cause overheating only in unusually hot weather and that indoor temperatures can be reduced to those found in an uninsulated house through the use of a simple and low cost mitigation strategy. It is concluded that IWI can provide a comfortable indoor environment, and that overheating should not be considered a barrier to the uptake of IWI in the UK.
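A quick arithmetic check of the headline percentage reductions quoted above, using only figures stated in the abstract:

```python
# Values as reported in the abstract:
u_before, u_after = 1.72, 0.21        # wall U-values, W/m^2K
htc_before, htc_after = 238.0, 144.0  # whole-house heat transfer, W/K

def pct_reduction(before: float, after: float) -> float:
    return 100 * (before - after) / before

print(f"U-value reduction:           {pct_reduction(u_before, u_after):.0f}%")    # ~88%
print(f"Heat transfer coefficient:   {pct_reduction(htc_before, htc_after):.0f}%")  # ~39%
```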
22

Yu, Kin-ying, and 余見英. "Efficient schemes for anonymous credential with reputation support". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48330012.

Abstract
Anonymous credentials are an important tool to protect the identity of users on the Internet for various reasons (e.g. free speech), even when a service provider (SP) requires user authentication. Yet misbehaving users may use anonymity for malicious purposes, and the SP would have no way to restrain these users from causing further damage. Revocable anonymous credentials allow an SP to revoke a particular anonymous user based on the observed behavior of a session the user conducted. However, this kind of all-or-nothing revocation does not work well with “Web 2.0” applications because it does not give a user a second chance to remedy a misconduct, nor rewards for positive behaviors. Reputation support is vital for these platforms. In this thesis, we propose two schemes with different strengths that solve this privacy and reputation dilemma. Our first scheme, PE(AR)2, aims to empower anonymous credential based authentication with revocation and rewarding support. The scheme is efficient and outperforms PEREA, previously the most efficient solution to this problem, with an authentication time complexity of O(1), compared with other related works that depend on either the user-side storage or the blacklist size. PEREA has a few drawbacks that make it vulnerable and not practical enough; our scheme fixes PEREA's vulnerability while improving efficiency. Our benchmark on PE(AR)2 shows that an SP can handle over 160 requests/second when the credentials store 1000 single-use tickets, outperforming PEREA by a factor of 460. Our second scheme, SAC, aims to provide revocation and full reputation support over an anonymous credential based authentication system. With a small efficiency trade-off compared with PE(AR)2, the scheme now supports both positive and negative scores. The scoring mechanism is much more flexible: the SP can modify the rated score of any active session, or declare that no more ratings should be given to it and mark it as finalized. SAC provides a much more elastic user-side credential storage; there is no practical limit on the number of authentication sessions associated with a credential. Unlike other schemes, SAC makes use of a combined membership proof instead of multiple non-membership proofs to distinguish whether a session is active, finalized, or blacklisted. This special consideration has contributed to reducing the efficiency-flexibility trade-off relative to PE(AR)2, keeping the scheme practical in terms of authentication time. Our benchmark on SAC shows that an SP can handle over 2.9 requests/second when the credentials store 10000 active sessions, outperforming BLACR-Express (a related work based on pairing cryptography with full reputation support) by a factor of 131. We then analyze the potential difficulties of adopting the solutions in existing web applications. We present a plugin-based approach such that our solutions can run in a user's web browser directly, and show how a service provider can instruct the plugin to communicate using our protocol in an HTML context. We conclude the thesis stating that the solutions are practical, efficient and easy to integrate in real-world scenarios, and discuss potential future work.
23

Woods, Alexander. "Subjective adjustments to objective performance measures an empirical examination in complex work settings /". Diss., Connect to online resource - MSU authorized users, 2008.

24

Hui, Ada M. L. "The use of coercive measures in a high secure hospital : expressions of institutional and emotional work". Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29557/.

Abstract
This thesis examines the use and implications of coercive measures within a high security hospital. High security hospitals are unique environments where challenges are often faced in balancing care with safety and security. The use of coercive measures, namely restraint, seclusion, rapid tranquillisation and segregation, is considered an unavoidable necessity in preventing and/or limiting harm. Yet coercive measures are deemed ethically, morally and professionally controversial. This study explores patient, staff and environmental factors that influence variations in attitudes and experiences towards the use of coercive measures, using a sequential mixed methods design. Stage one examines the rates, frequencies and demographic characteristics of patients experiencing coercive measures. Stage two uses standardised questionnaires to elicit and analyse staff and patient attitudes towards aggression (ATAS), containment measures (ACMQ) and the hospital environment (EssenCES). Stage three uses a constructivist grounded theory approach to semi-structured interviews with staff, drawing on institutional and emotional work theories. Findings revealed that younger, newly admitted females were those most likely to experience coercion. Aggression was viewed as significantly more destructive on the pre-discharge ward than on the admission, ICU and treatment wards. Discrepancies were found between staff and patient perceptions of the least acceptable containment measures. Patients experienced the hospital environment as more supportive and cohesive than staff did. Finally, findings from the staff interviews uncovered a complex interplay between personal and institutional expectations, values and actions. Further research is required into i) attitudes towards, and implications of, the least restrictive methods, ii) the internal dynamics within high secure hospitals and what it means for staff to work in an environment where patients feel more supported by being contained than staff do when containing them, and iii) what can be done to relieve the tensions of healthcare professionals expected to care, coerce and contain.
25

Naude, Kevin Alexander. "Assessing program code through static structural similarity". Thesis, Nelson Mandela Metropolitan University, 2007. http://hdl.handle.net/10948/578.

Abstract
Learning to write software requires much practice and frequent assessment. Consequently, the use of computers to assist in the assessment of computer programs has been important in supporting large classes at universities. The main approaches to the problem are dynamic analysis (testing student programs for expected output) and static analysis (direct analysis of the program code). The former is very sensitive to all kinds of errors in student programs, while the latter has traditionally only been used to assess quality, and not correctness. This research focusses on the application of static analysis, particularly structural similarity, to marking student programs. Existing traditional measures of similarity are limiting in that they are usually only effective on tree structures; in this regard they do not easily support dependencies in program code. Contemporary measures of structural similarity, such as similarity flooding, usually rely on an internal normalisation of scores. The effect is that the scores only have relative meaning and cannot be interpreted in isolation, i.e. they are not meaningful for assessment. The SimRank measure is shown to have the same problem, but not because of normalisation. The problem with the SimRank measure arises from the fact that its scores depend on all possible mappings between the children of vertices being compared. The main contribution of this research is a novel graph similarity measure, the Weighted Assignment Similarity measure. It is related to SimRank, but derives propagation scores from only the locally optimal mapping between child vertices. The resulting similarity scores may be regarded as the percentage of mutual coverage between graphs. The measure is proven to converge for all directed acyclic graphs, and an efficient implementation is outlined for this case. Attributes on graph vertices and edges are often used to capture domain-specific information which is not structural in nature. It has been suggested that these should influence the similarity propagation, but no clear method for doing this has been reported. The second important contribution of this research is a general method for incorporating these local attribute similarities into the larger similarity propagation method. An example of attributes in program graphs is identifier names. The choice of identifiers in programs is arbitrary as they are purely symbolic. A problem facing any comparison between programs is that they are unlikely to use the same set of identifiers. This problem indicates that a mapping between the identifier sets is required. The third contribution of this research is a method for applying the structural similarity measure in a two-step process to find an optimal identifier mapping. This approach is both novel and valuable as it cleverly reuses the similarity measure as an existing resource. In general, programming assignments allow a large variety of solutions. Assessing student programs through structural similarity is only feasible if the diversity in the solution space can be addressed. This study narrows program diversity through a set of semantics-preserving program transformations that convert programs into a normal form. The application of the Weighted Assignment Similarity measure to marking student programs is investigated, and strong correlations are found with the human marker.
It is shown that the most accurate assessment requires that programs be compared not only with a set of good solutions, but with a mixed set of programs of varying levels of correctness. This research represents the first documented successful application of structural similarity to the marking of student programs.
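A toy sketch of the core idea described above: propagating similarity from a locally optimal one-to-one mapping of child vertices rather than from all possible mappings. This is a loose reading of the general technique, not Naude's actual formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def propagate(children_a, children_b, sim):
    """One propagation step: score a vertex pair from the locally
    optimal one-to-one mapping of their children.

    children_a, children_b: lists of child indices in two graphs;
    sim: current child-pair similarity matrix (rows = graph A vertices,
    columns = graph B vertices). Iterated bottom-up over a DAG, scores
    of this kind can be read as mutual-coverage percentages."""
    if not children_a or not children_b:
        return 1.0  # two leaves trivially cover each other
    costs = -sim[np.ix_(children_a, children_b)]   # negate to maximise
    rows, cols = linear_sum_assignment(costs)      # optimal child mapping
    matched = sim[np.array(children_a)[rows],
                  np.array(children_b)[cols]].sum()
    # Normalise by the larger child set so unmatched children count
    # against the score (coverage interpretation):
    return matched / max(len(children_a), len(children_b))
```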
26

Carneal-Frazer, Nicole Devine. "Analysis of Control Measures Used During Cholera Outbreaks Among Internally Displaced Persons". ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6183.

Abstract
Cholera remains a major public health problem affecting high-risk populations such as camps of internally displaced persons. During a cholera outbreak, it is essential to reduce transmission and minimize new infections. The Miasma theory, the host-agent-environment model and Ecosocial theory were utilized for this study. This study was a retrospective comparison to determine whether historical cholera control measures are effective during current cholera outbreaks within camps of internally displaced persons. A quantitative approach ascertained changes in incidence and mortality rates following implementation of primary and/or secondary control measures. Cholera outbreaks were identified from the World Health Organization's (WHO) Disease Outbreak News reports issued between 1996 and 2017. Each reported cholera outbreak was categorized into one of eight outbreak cohorts, each cohort having the same primary control measure. The WHO Data Repository was used to identify cholera incidence and/or mortalities, and the World Bank dataset was used for population totals to calculate incidence and/or mortality rates for the year prior to and the year of the outbreak, from which the case percentage change and death percentage change were calculated. Analysis of covariance was used to assess statistical significance in rate change within each intervention cohort. No statistical significance was noted within the various cholera control interventions. Limitations of this study provide the basis for continued research on this topic, in line with the Global Task Force on Cholera's goal of reducing infections by 90% by the year 2030.
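A minimal sketch of the case/death percentage-change calculation used in the study design; the incidence rates below are invented, not WHO data:

```python
def pct_change(rate_prior: float, rate_outbreak: float) -> float:
    """Percentage change from the year before the outbreak to the
    outbreak year (applied to both case and death rates)."""
    return 100 * (rate_outbreak - rate_prior) / rate_prior

# Illustrative incidence rates per 100,000 population (hypothetical):
print(pct_change(12.0, 30.0))  # +150.0 -> worsening
print(pct_change(30.0, 12.0))  # -60.0  -> control measure plausibly effective
```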
27

Mellott, Deborah S. "Measuring implicit attitudes and stereotypes : increasing internal consistency reveals the convergent validity of IAT and priming measures /". Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/9169.

28

King-Lacroix, Justin. "Securing the 'Internet of Things' : decentralised security for wireless networks of embedded systems". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:b41c942f-5389-4a5b-8bb7-d5fb6a18a3db.

Abstract
The phrase 'Internet of Things' refers to the pervasive instrumentation of physical objects with sensors and actuators, and the connection of those sensors and actuators to the Internet. These sensors and actuators are generally based on similar hardware as, and have similar capabilities to, wireless sensor network nodes. However, they operate in a completely different network environment: wireless sensor network nodes generally all belong to a single entity, whereas Internet of Things endpoints can belong to different, even competing, ones. This difference has profound implications for the design of security mechanisms in these environments. Wireless sensor network security is generally focused on defence against attack by external parties. On the Internet of Things, such an insider/outsider distinction is impossible; every entity is both an endpoint for legitimate communications and a possible source of attack. We argue that under such conditions, the centralised models that underpin current networking standards and protocols for embedded systems are simply not appropriate, because they require such an insider/outsider distinction. This thesis serves as an exposition in the design of decentralised security mechanisms, applied both to applications, which must perform access control, and networks, which must guarantee communications security. It contains three main contributions. The first is a threat model for Internet of Things networks. The second is BottleCap, a capability-based access control module and an exemplar of decentralised security architecture at the application layer. The third is StarfishNet, a network-layer protocol for Internet of Things wireless networks and a similar exemplar of decentralised security architecture at the network layer. Both are evaluated with microbenchmarks on prototype implementations; StarfishNet's association protocol is additionally validated using formal verification in the protocol verification tool Tamarin.
29

Brown, Richard. "Measures of vascular dysfunction, monocyte subsets and circulating microparticles in patients with diffuse coronary artery disease". Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8547/.

Abstract
Diffuse, multi-vessel coronary artery disease (CAD) affects about one third of patients with CAD and is associated with worse outcomes. Abnormal vascular stiffness and function (e.g., reflected by increased endothelial microparticles and diminished microvascular endothelial-mediated responses), cell-mediated pro-inflammatory status (e.g., reflected by levels of specific monocyte subsets), and platelet function (e.g., increased monocyte-platelet aggregates (MPAs) and platelet microparticles) have established roles in CAD pathogenesis, but their contribution to the unfavourable diffuse CAD form is unclear. The aim of this study was to compare measures of vascular function, monocyte subsets, MPAs, and endothelial and platelet microparticles in patients with diffuse and focal CAD and subjects without CAD. Additionally, prospective changes in these characteristics were analysed over one year. I found increased counts of aggregates of the Mon2 monocyte subset with platelets and of apoptotic endothelial microparticles in patients with diffuse CAD, and I identified a negative correlation between Mon2 MPAs and microvascular endothelial function, as well as increased diastolic elastance. My findings suggest that excessive levels of Mon2 aggregates with platelets and apoptotic endothelial microparticles could be important contributors to the diffuse type of CAD by a mechanism involving microvascular endothelial dysfunction and abnormal cardio-vascular interactions.
30

De, Goede Johan. "An investigation into the internal structure of the learning potential construct as measured by the Apil Test Battery". Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/2152.

Full text
Abstract
Thesis (MComm (Industrial Psychology))--University of Stellenbosch, 2007.
This thesis presents an investigation into the internal structure of the learning potential construct as measured by the APIL Test Battery developed by Taylor (1989, 1992, 1994, 1997). The measurement of learning potential, a core or fundamental ability, as opposed to abilities heavily influenced by exposure to previous opportunities, is important in the South African environment. The importance of assessing learning potential can be explained partly in terms of the necessity of levelling the proverbial 'playing field', ensuring that previously disadvantaged individuals do not become further disadvantaged by being denied development opportunities, and partly in terms of attempts to compensate and correct for a system that clearly suppressed the development of important job-related skills, knowledge and abilities in certain groups. Such attempts at accelerated affirmative development will, however, only be effective to the extent that there exists a comprehensive understanding of the factors underlying training performance success and the manner in which they combine to determine learning performance, in addition to clarity on the fundamental nature of the key performance areas comprising the learning task. In this study the internal structure of the learning potential construct as measured by the APIL Test Battery was investigated through structural equation modelling and regression analysis. Overall, it was found that both the measurement and the structural model fitted the data reasonably well. The study, however, was unable to corroborate a number of the central hypotheses in Taylor's (1989, 1992, 1994, 1997) stance on learning potential. Moreover, the analysis of the standardised residuals for the structural model suggested that the addition of one or more paths to the existing structural model would probably improve the fit of the model. Modification indices calculated as part of the structural equation modelling could not, however, point to any specific additions to the existing model. Regression analysis led to the conclusion that the inclusion of the two learning competency potential measures together with the two learning competency measures in a learning potential selection battery is not really warranted. The use of information processing capacity as a predictor on its own seems to be indicated by the results of this study. Recommendations for future research are made.
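The regression conclusion above (that the two learning competency potential measures add little beyond information processing capacity) can be illustrated with a partial F-test between nested OLS models. The sketch below uses synthetic data and invented variable names; it is not the thesis's analysis.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical data: does adding two "potential" scores improve prediction of
# learning performance beyond information processing capacity (ipc) alone?
rng = np.random.default_rng(0)
n = 200
ipc = rng.normal(size=n)                        # information processing capacity
pot1, pot2 = rng.normal(size=(2, n))            # extra measures (assumed irrelevant)
y = 0.8 * ipc + rng.normal(scale=0.5, size=n)   # learning performance

def rss(X: np.ndarray, y: np.ndarray) -> tuple[float, int]:
    """Residual sum of squares and parameter count of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(X, y, rcond=None)
    return float(((y - X @ beta) ** 2).sum()), X.shape[1]

rss_small, p_small = rss(ipc.reshape(-1, 1), y)
rss_full, p_full = rss(np.column_stack([ipc, pot1, pot2]), y)

# Partial F-test for the two extra predictors
F = ((rss_small - rss_full) / (p_full - p_small)) / (rss_full / (n - p_full))
print(f"F = {F:.2f}")  # a small F means the extra measures add little, as in the study
```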
31

Henning, Roline. "An investigation into the internal structure of the unit performance construct as measured by the performance index (PI)". Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52679.

Full text
Abstract
Assignment (MA)--University of Stellenbosch, 2002
ENGLISH ABSTRACT: The responsibility for the performance of any organisational unit ultimately lies with the leadership of the unit. Given this perceived pivotal role of leadership in work unit performance, the ultimate objective is to capture the nature of the presumed relationship between leadership and unit performance in a comprehensive structural model. Validating such a leadership model, however, requires an explanation of the manner in which the unit performance dimensions affect each other. Spangenberg and Theron (2002b) developed a generic, standardized unit performance measure (PI) that encompasses all the unit performance dimensions for which the unit leader could be held responsible. The objective of this paper is to investigate the internal structure of the PI in order to establish the inter-relationships between the eight unit performance latent variables. The PI consists of 56 questions covering eight dimensions. The validation sample consisted of 304 completed PI questionnaires; after imputation, 273 cases with observations on all 56 items remained in the validation sample. Item analysis and dimensionality analysis were performed on each of the sub-scales using SPSS. Thereafter, confirmatory factor analysis was performed on the reduced data set using LISREL. The results indicated satisfactory factor loadings on the measurement model, and acceptable model fit was achieved for the measurement model. Subsequently, the structural model was tested using LISREL; the results showed good fit statistics. Only four hypotheses failed to be corroborated in this study. Conclusions were drawn from the results obtained and suggestions for further research are made.
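For readers unfamiliar with the item-analysis step mentioned above, a common reliability statistic is Cronbach's alpha. The sketch below is a minimal Python implementation run on random Likert-style data; it stands in for the SPSS item analysis and does not use the PI data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative use on synthetic data shaped like one 7-item sub-scale
# answered by 273 respondents (the sample size reported above).
rng = np.random.default_rng(1)
latent = rng.normal(size=(273, 1))
responses = latent + rng.normal(scale=0.8, size=(273, 7))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```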
32

Zhao, Qing Ph D. Massachusetts Institute of Technology. "Modeling of contact between liner finish and piston ring in internal combustion engines based on 3D measured surface". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92105.

Full text
Abstract
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 87-88).
Declining fossil fuel supplies and air pollution are two major societal problems of the 21st century, and the rapid growth of internal combustion (IC) engines is a main contributor to both. To increase fuel efficiency, mechanical losses in internal combustion engines must be controlled. Interaction between the piston ring pack and the cylinder liner finish accounts for nearly 20 percent of the mechanical losses within an internal combustion engine and is an important factor affecting lubricant oil consumption. Within the total friction between the piston ring pack and the cylinder liner, boundary friction occurs when the piston is at low speed and there is direct contact between rings and liner. This work focuses on predicting contact between the piston ring and the liner finish based on 3D measured surfaces, and different methods are compared. In the previous twin-land oil control ring (TLOCR) deterministic model, the Greenwood-Tripp correlation function was used to determine contact. The practical challenge for this single equation is that real plateau roughness makes it unreliable. As a result, the micro-geometry of the liner surface needs to be obtained through a white light interferometry device or confocal equipment to drive the contact model. Based on the real geometry of the liner finish and the assumption that the ring surface is ideally smooth, contact can be predicted by three different models: the statistical Greenwood-Williamson model, Hertzian contact, and a revised deterministic dry contact model by Professor A. A. Lubrecht. The predicted contact between liner finish and piston ring is then combined with the hydrodynamic pressure generated by the lubricant, examined using the TLOCR deterministic model by Chen et al., to obtain the total friction on the liner finish surface. Finally, the contact model is used to examine the friction of different liners over an actual engine running cycle.
by Qing Zhao.
S.M.
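One of the three contact models named above is Hertzian contact. As a worked illustration, the sketch below computes the contact radius and peak pressure for a single spherical asperity pressed on a flat, the basic ingredient of Greenwood-Williamson-type rough-surface models; the material values and load are assumed for illustration and are not taken from the thesis.

```python
import numpy as np

# Hertzian contact of a sphere (asperity tip) on a flat surface.
# Illustrative values only: a steel ring asperity against a cast-iron liner.
E1, nu1 = 210e9, 0.30   # ring modulus (Pa) and Poisson ratio (assumed)
E2, nu2 = 110e9, 0.25   # liner modulus (Pa) and Poisson ratio (assumed)
R = 50e-6               # asperity tip radius (m)
F = 0.1                 # normal load carried by the asperity (N)

E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
a = (3 * F * R / (4 * E_star)) ** (1 / 3)               # contact radius
p0 = 3 * F / (2 * np.pi * a**2)                          # peak contact pressure

print(f"contact radius = {a * 1e6:.2f} um, peak pressure = {p0 / 1e9:.2f} GPa")
```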
33

Rutherford, Andrew. "Introducing hippocratic log files for personal privacy control". Thesis, Nelson Mandela Metropolitan University, 2005. http://hdl.handle.net/10948/171.

Full text
Abstract
The rapid growth of the Internet has served to intensify existing privacy concerns of the individual, to the point that privacy is the number one concern amongst Internet users today. Tools exist that can provide users with a choice of anonymity or pseudonymity. However, many Web transactions require the release of personally identifying information, thus rendering such tools infeasible in many instances. Since it is then a given that users are often required to release personal information, which could be recorded, it follows that they require a greater degree of control over the information they release. Hippocratic databases, designed by Agrawal, Kiernan, Srikant, and Xu (2002), aim to give users greater control over information stored in a database. Their design was inspired by the medical Hippocratic oath, and makes data privacy protection a fundamental responsibility of the database itself. To achieve the privacy of data, Hippocratic databases are governed by 10 key privacy principles. This dissertation argues that, aside from a few challenges, the 10 principles of Hippocratic databases can be applied to log files. This argument is supported by presenting a high-level functional view of a Hippocratic log file architecture. This architecture focuses on issues that highlight the control users gain over their personal information that is collected in log files. By presenting a layered view of the aforementioned architecture, it was, furthermore, possible to provide greater insight into the major processes that would be at work in a Hippocratic log file implementation. An exploratory prototype served to understand and demonstrate certain of the architectural components of Hippocratic log files. This dissertation thus makes a contribution to the ideal of providing users with greater control over their personal information by proposing the use of Hippocratic log files.
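To give a flavour of what a "Hippocratic" log record might carry, the sketch below attaches a purpose and a retention limit to each entry, echoing two of the ten privacy principles (purpose specification and limited retention). The field names and purge logic are invented for illustration and do not reproduce the dissertation's architecture.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LogEntry:
    user_id: str
    event: str
    purpose: str            # why this entry was recorded (purpose specification)
    collected_at: datetime
    retain_for: timedelta   # how long the entry may be kept (limited retention)

def purge_expired(log: list[LogEntry], now: datetime) -> list[LogEntry]:
    """Drop entries whose retention period has elapsed."""
    return [e for e in log if e.collected_at + e.retain_for > now]

log = [LogEntry("u1", "login", "security-audit",
                datetime(2024, 1, 1), timedelta(days=30))]
print(len(purge_expired(log, datetime(2024, 3, 1))))  # 0 -- the entry has expired
```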
34

Collie, Christin, Samuel C. Peter, Hannah G. Mitchell and Meredith K. Ginley. "What Are You Really Asking? Readability of Internet Gaming Disorder Measures". Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/asrf/2021/presentations/20.

Full text
Abstract
When designing assessment measures to capture psychological symptoms, it is essential to ensure the individual completing the measure understands what is being asked of them. In the most basic sense, readability relates to how easy something is to understand when you read it. Understanding readability can inform clinicians and researchers when selecting appropriate measures for their clients and participants. One commonly used formula to determine a given text's readability is the Flesch-Kincaid Grade Level (FKG). Newer approaches to measuring readability use software tools, such as Coh-Metrix and the Question Understanding Aid (QUAID), that analyze text characteristics to determine their impact on comprehension. The current project investigated the readability of seven measures of Internet Gaming Disorder (IGD). Assessments of IGD have largely been adapted from validated measures of other constructs (e.g., gambling disorder, internet addiction) or created directly from the proposed criteria for IGD. Prior to the current study, researchers had not yet critically examined the readability of measures of IGD. Assessment of readability is of critical importance given that IGD is most likely to affect adolescents, a population with lower levels of literacy because critical reading skills are still developing throughout adolescence. It was hypothesized that measures of IGD may be difficult for adolescents to read. Items within seven measures of IGD were examined using the FKG, Coh-Metrix, and QUAID approaches to calculate readability and flag potentially problematic question characteristics. Results found that the mean FKG ranged from 5.40 to 12.28 and indicated that six of the seven measures contained at least one item written above an 8th-grade reading level. Coh-Metrix analysis found all measures contained at least one and up to eight items written at a below-average level of syntactic simplicity (z =
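The Flesch-Kincaid Grade Level mentioned above is a simple formula: 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. A minimal Python sketch follows; the syllable counter is a crude vowel-group heuristic, and the sample item is invented, not drawn from an actual IGD measure.

```python
import re

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

item = "Do you continue to play games despite knowing they harm you?"
print(f"FKG = {fk_grade(item):.1f}")  # roughly a 7th-grade reading level
```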
35

Clayton, Bradley. "Securing media streams in an Asterisk-based environment and evaluating the resulting performance cost". Thesis, Rhodes University, 2007. http://eprints.ru.ac.za/851/.

Full text
36

Li, Kehan. "Stress, uncertainty and multimodality of risk measures". Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E068.

Full text
Abstract
In this thesis, we focus on the stress, uncertainty and multimodality of risk measures, with special attention to two parts. The results have a direct influence on the computation of bank economic and regulatory capital. First, we provide a novel risk measure - the Spectrum Stress VaR (SSVaR) - to quantify and integrate the uncertainty of the Value-at-Risk. It is an implementation model of the stressed VaR proposed in Basel III. The SSVaR is based on the confidence interval of the VaR. We investigate the asymptotic distribution of the order statistic, which is a nonparametric estimator of the VaR, in order to build the confidence interval. Two confidence intervals are derived, from either the asymptotic Gaussian result or the saddlepoint approach. We compare them with the bootstrap confidence interval by simulation, showing that the confidence interval built from the saddlepoint approach is robust across different sample sizes, underlying distributions and confidence levels. Stress testing applications using SSVaR are performed with historical stock index returns during a financial crisis, to identify potential violations of the VaR during turmoil periods on financial markets. Second, we investigate the impact of multimodality of distributions on VaR and ES calculations. Unimodal probability distributions have been widely used for parametric VaR computation by investors, risk managers and regulators. However, financial data may be characterized by distributions having more than one mode. For these data, we show that multimodal distributions may outperform unimodal distributions in terms of goodness-of-fit. Two classes of multimodal distributions are considered: Cobb's family and the Distortion family. We develop an adapted rejection sampling algorithm, permitting random samples to be generated efficiently from the probability density function of Cobb's family. For the empirical study, two data sets are considered: a daily data set concerning operational risk, and a three-month scenario of market portfolio returns built with five-minute intraday data. With a complete spectrum of confidence levels, the VaR and the ES of both unimodal and multimodal distributions are calculated. We analyze the results to assess the value of using multimodal distributions instead of unimodal distributions in practice.
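The starting point of the SSVaR construction, a nonparametric VaR estimator built from an order statistic together with a confidence interval, can be sketched as follows. The sketch uses synthetic Student-t losses and a bootstrap interval for simplicity; the thesis's asymptotic Gaussian and saddlepoint intervals are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic heavy-tailed losses (illustrative, not real market data)
losses = -(rng.standard_t(df=4, size=1000) * 0.01)

def var_hat(sample: np.ndarray, alpha: float = 0.99) -> float:
    """Nonparametric VaR estimate: an order statistic (empirical quantile)."""
    return float(np.quantile(sample, alpha, method="higher"))

# Bootstrap confidence interval for the VaR estimator
boot = np.array([
    var_hat(rng.choice(losses, size=losses.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"VaR99 = {var_hat(losses):.4f}, 95% CI = [{lo:.4f}, {hi:.4f}]")
```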
37

Hulthén, Hana. "On understanding of external and internal integration in supply chains : challenges and evaluation". Doctoral thesis, Lund University, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64925.

Full text
Abstract
Benefits of implementing Supply Chain Integration (SCI) are acknowledged in the existing integration literature. Integration extending beyond functional silos and firm boundaries is expected to provide value for customers in terms of higher quality, improved service level, and reduced costs. In addition, internal integration allows business functions to align around a single company goal; this type of integration promotes value creation while decreasing redundancies and costs. Yet, regardless of the significant advances in research and practice, many organizations still experience difficulties not only in integrating activities with supply chain partners, but also in integrating activities within the organization, for example through implementation of a sales and operations planning (S&OP) process. To tackle these challenges, organizations may need to reconsider why and how they integrate both internally and externally. However, previous integration research provides only limited guidelines for how to carry out such evaluations, and many organizations struggle to address the complexity related to integrating and evaluating activities internally and with SC partners. The lack of concrete guidelines for evaluation of SCI in theory is seen as one of the reasons for the still sporadic examples of successful SCI in practice. Thus, the overall purpose of this research is to increase understanding of external and internal integration in supply chains. To address this purpose, three studies (1-3) were conducted. Study 1 highlighted the current status and several SCI challenges in the academic literature and in practice. One of the major challenges relates to the absence of a systematic, comprehensive approach for evaluation of internal and external integration. To help close this gap, study 2 developed a context-based framework for evaluation of external integration. Finally, study 3 aimed to develop a framework for evaluation of the S&OP process. Concerning the SCI challenges, this research contributes to the previous integration literature by confirming some existing challenges and by identifying additional ones. Related to the challenges of external integration, a set of contextual factors is identified that was observed to hinder the establishment of an appropriate level of external integration with SC partners; as a result, a misfit occurs between the contextual factors and the applied level of external integration. Reasons for these misfits were also identified and discussed. Regarding the challenges of the S&OP process, this thesis adds to the existing fragmented literature on S&OP process evaluation challenges by synthesizing and extending existing knowledge. A framework was developed founded on two key areas of process performance - S&OP process effectiveness and efficiency - and on the various maturity levels of the process. Although several challenges were found for each maturity level, some were observed to occur across several levels. Moreover, this research proposes a context-based framework for evaluation of external integration that extends previous SCI frameworks. It is founded on contextual factors that the studied cases considered when integrating with their SC partners; these factors were observed to promote the establishment of an appropriate level of external integration. Each level consists of identified external integration activities. The thesis further contributes to S&OP performance research by addressing the lack of process-oriented frameworks for evaluating process performance. The proposed framework for measuring S&OP process performance considers the five major steps of the process and their outputs, as well as the output of the entire process. To reflect the process performance measures, the framework structures and defines effectiveness and efficiency measures and their relation to process performance; it also conforms to the majority of the criteria for designing appropriate performance measures. Finally, the major results of the thesis are synthesized into a suggested framework of external integration and its effect on S&OP process performance, considering the identified contextual factors, appropriate levels of external integration, and the S&OP performance measures the integration can affect. The thesis also discusses, alongside the theoretical contributions, how the developed frameworks can support managers in evaluating their supply chain integration practices, and outlines several opportunities for future research.
38

Janz, Linda and University of Lethbridge Faculty of Arts and Science. "Privacy and the internet : differences in perspectives". Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 1997. http://hdl.handle.net/10133/64.

Full text
Abstract
This study examined results of a World Wide Web survey that used the framework of the domain theory of moral development to examine attitudes of Internet users assuming the perspectives of victims, aggressors and bystanders toward privacy issues. The effect of a monetary incentive was tested on two perspectives; the effects of three moderating variables - employment status, newsgroup/mailing list membership and culture - were also tested. In the process of examining interactions, an evaluation determined whether changes in attitudes indicated movement along a morality continuum. Results show that victims are more concerned than aggressors, and that bystanders take a moralizing stance regardless of domain. Results of the monetary incentive test suggest that privacy is for sale. Employed respondents are more concerned than non-employed respondents; membership has little effect. Effects of culture do not support the hypotheses. The implications are that moral judgements are a function of perspective and domain, allowing flexibility along a morality continuum due to situational deviations.
xii, 112 leaves ; 28 cm.
39

Nagarle, Shivashankarappa A. "Novel framework to support information security audit in virtual environment". Thesis, Coventry University, 2013. http://curve.coventry.ac.uk/open/items/aa65bb37-9504-46d3-930e-44ec71f745f3/1.

Full text
Abstract
Over the years, the focus of information security has evolved from a technical issue to a business issue. Heightened competition from globalization, compounded by emerging technologies such as cloud computing, has given rise to new threats and vulnerabilities which are not only complex but unpredictable. However, there are enormous opportunities which can bring value to business and enhance stakeholders' wealth. Enterprises in Oman are compelled to embark on the e-Oman strategy, which invariably increases complexity due to the integration of heterogeneous systems and outsourcing with external business partners. This implies a need for a comprehensive model that integrates people, processes and technology and provides enterprise information security focused on organizational transparency and enhancing business value. It was evident through interviews with security practitioners that existing security models and frameworks are inadequate to meet the dynamic nature of threats and challenges inherent in virtualization technology, which is a catalyst for cloud computing. Hence the intent of this research is to evaluate enterprise information security in Oman and explore the potential of building a balanced model that aligns governance, risk management and compliance, with emphasis on auditing in virtual environments. An integrated enterprise governance, risk and compliance model was developed in which enterprise risk management acts as a platform, both mitigating risk on one hand and serving as a framework for defining cost controls and quantifying revenue opportunities on the other. Further, security standards and frameworks were evaluated and some limitations were identified. A framework for implementing IT governance focusing on critical success factors was developed after analysing and mapping the four domains of COBIT to various best practices. Server virtualization using bare-metal architecture was practically tested, providing fault tolerance and automated load balancing with enhanced security. A taxonomy of risks inherent in virtual environments was identified, and an audit process flow was devised that gives auditors insight to assess the adequacy of controls in a virtual environment. A novel framework for a successful audit in a virtual environment is the contribution of this research, which has changed some of the security assumptions and audit controls in virtual environments.
40

Allam, Sean. "A model to measure the maturity of smartphone security at software consultancies". Thesis, University of Fort Hare, 2009. http://hdl.handle.net/10353/281.

Full text
Abstract
Smartphones are proliferating into the workplace at an ever-increasing rate, and the threats they pose are increasing similarly. In an era of constant connectivity and availability, information is freed from the constraints of time and place. This research project delves into the risks introduced by smartphones and, through multiple case studies, formulates a maturity measurement model. The model is based on recommendations from two leading information security frameworks, the COBIT 4.1 framework and the ISO 27002 code of practice. Ultimately, a combination of smartphone-specific risks is integrated with key control recommendations to provide a set of key measurable security maturity components. The subjective opinions of case study respondents are considered a key component in achieving a solution. The solution addresses the concerns not only of policy makers, but also of the employees subjected to the security policies. Nurturing security awareness into organisational culture through reinforcement and employee acceptance is highlighted in this research project. Software consultancies can use this model to mitigate risks while harnessing the potential strategic advantages of mobile computing through smartphone devices. In addition, this research project identifies the critical components of a smartphone security solution. As a result, a model is provided for software consultancies, given the intense reliance on information within these types of organisations. The model can be effectively applied to any information-intensive organisation.
41

Gwaka, Leon Tinashe. "Social media risks in large and medium enterprises in the Cape Metropole : the role of internal auditors". Thesis, Cape Peninsula University of Technology, 2015. http://hdl.handle.net/20.500.11838/2086.

Full text
Abstract
Thesis (MTech (Internal Auditing))--Cape Peninsula University of Technology, 2015.
Social media has undoubtedly emerged as one of the greatest developments of this technology-driven generation. Despite existing for years, social media has recently surged drastically in popularity, with billions of users worldwide reported to be on at least one social media platform. This increase in users has been further driven by governmental and private-sector initiatives to boost Internet connectivity and bridge the digital divide globally. Mobile Internet access has also fuelled the use of social media, as it allows easy and economical connectivity anytime, anywhere. The availability of hundreds of social media platforms has presented businesses with several opportunities to conduct business activities using social media. The use of social media has been reported to offer businesses plenty of benefits when this use is strategically aligned to business objectives. On the flip side of the coin, these platforms have also emerged as new hunting grounds for fraudsters and other information-technology-related criminals. As with any invention, engaging social media for business has its own inherent risks; this further complicates existing information-technology risks and also presents businesses with new risks. Despite blossoming into a global phenomenon, social media has no universally accepted frameworks or approaches (and thus no safeguards) for engaging with it, resulting in almost unlimited risk exposures. The uncertainty, i.e. the risks surrounding social media platforms, proves problematic in determining the optimum social media platform to use in business. Furthermore, organisations face challenges in deciding whether to formally adopt social media or to ignore it entirely, with the latter emerging as not a viable option. The complex nature of social media has made it difficult for enterprises to place a monetary value on these platforms and to determine a return on investment. From a governance perspective, it remains a challenge for most enterprises to identify and appoint individuals responsible for social media management within the business, although social media strategist positions have recently been surfacing. By its nature, social media triggers matters relating to governance, risk and compliance, which implies that internal auditors are expected to champion the adoption of social media in enterprises. As a relatively new concept, the role that internal auditors should play towards social media appears not to be well defined. Through examination of existing guidelines, an attempt is made to define the role of internal auditors towards social media.
42

Mangion, Kenneth. "Myocardial strain measured in survivors of acute ST-elevation myocardial infarction : implementation and prognostic significance of novel magnetic resonance imaging methods". Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/38952/.

Full text
Abstract
Background: Cardiac magnetic resonance (CMR) has utility in the risk stratification of patients after ST-elevation myocardial infarction (STEMI). Myocardial strain is theoretically more closely linked to left ventricular pump function than is left ventricular ejection fraction (LVEF). There are a number of CMR strain techniques, including bespoke methods such as displacement encoding with stimulated echoes (DENSE) and cine-derived methods such as feature-tracking. Whilst cine-derived strain is more appealing for imaging in real-world practice, there are concerns about its accuracy, especially at the myocardial segmental level. Deformation-tracking is a new technique based on tissue-tracking from cine imaging which has been developed in our group and is theoretically more accurate at identifying myocardial displacement and shortening than other commercial cine-strain techniques. Hypothesis: Compared with standard methods for imaging heart function, novel strain methods have superior diagnostic and prognostic performance. Objectives: (1) I aimed to compare circumferential strain derived from DENSE, deformation-tracking and feature-tracking in a group of 81 healthy volunteers and in a group of STEMI patients, and I investigated the relationship between strain, age and sex in the healthy volunteers. (2) I also investigated the comparative performance of the three strain techniques against LV surrogate outcomes (LVEF, LV end-diastolic volume indexed to body surface area, infarct size) as well as composite health outcomes (major adverse cardiac events) at 4 years in the STEMI patients. (3) I investigated the incremental predictive utility of segmental circumferential strain over infarct size to predict segmental functional improvement by wall-motion scoring at 6 months in patients with STEMI, and the influence of infarct characteristics (microvascular obstruction, intra-myocardial haemorrhage) on segmental circumferential strain at 6 months. (4) I investigated the utility of feature-tracking-derived global longitudinal strain in this STEMI group. (5) Finally, I performed a de-novo study implementing a new DENSE technique in a group of STEMI patients and compared deformation-tracking and feature-tracking against this new technique. Methods: 1. Healthy volunteers study: 81 participants underwent multi-parametric CMR at 1.5 T. 2. STEMI population 1: 324 patients underwent a similar multi-parametric CMR at 3 days, and 295 at 6 months, post STEMI. Composite health outcomes pathophysiologically linked to STEMI were collected by an independent team. 3. STEMI population 2: 50 patients underwent multi-parametric CMR at 1 day and 6 months post STEMI; this protocol included the new 2D-spiral DENSE sequence. The imaging analyses were performed using standardised methods. Health outcomes were analysed and adjudicated by an independent team blinded to the rest of the study. Statistical analyses were carried out under the supervision of a biostatistician. Results: The main findings of this thesis are: 1. Deformation-tracking performed well when compared with a reference method (DENSE) in a large group of healthy volunteers. The advantage of utilising a cine-derived strain method is that it obviates the need for bespoke strain sequences, limiting the total duration of a CMR scan and making strain more accessible in the clinical setting. 2.
Global circumferential strain with DENSE provides incremental prognostic value over infarct size and pathologies revealed by contrast-enhanced CMR for LV surrogate outcomes. Strain imaging with DENSE has emerging potential as a new reference test for prognostication in patients after an acute STEMI. 3. Global circumferential strain with DENSE provides incremental prognostic value over infarct size and pathologies revealed by contrast-enhanced CMR for MACE. Conclusions: The data presented in this thesis indicate that CMR strain imaging may be clinically useful in the assessment of patients following an acute STEMI. Strain should therefore be more widely used in clinical studies, as both global and segmental strain provide incremental utility over more commonly used markers of prognosis for global and regional LV function, as well as for major adverse cardiac events. 2D-spiral DENSE is a new technique which I have demonstrated to be feasible to acquire in STEMI patients and which has the potential to investigate LV pump function in more detail than conventional methods.
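As a toy illustration of the quantity compared across DENSE, deformation-tracking and feature-tracking: Lagrangian circumferential strain is the fractional change in mid-wall contour length between end-diastole and end-systole. The sketch below computes it for synthetic circular contours; real strain pipelines track tissue through cine or DENSE phase data and are far more involved.

```python
import numpy as np

def perimeter(pts: np.ndarray) -> float:
    """Closed-contour length from ordered (x, y) points."""
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    return float(np.hypot(d[:, 0], d[:, 1]).sum())

# Toy mid-wall contours at end-diastole (ed) and end-systole (es).
# Circles here; tracked myocardial points in practice.
theta = np.linspace(0, 2 * np.pi, 48, endpoint=False)
ed = np.column_stack([30 * np.cos(theta), 30 * np.sin(theta)])  # radius in mm
es = ed * 0.82  # uniform 18% circumferential shortening (assumed)

ecc = (perimeter(es) - perimeter(ed)) / perimeter(ed)
print(f"circumferential strain = {ecc:.2%}")  # about -18%
```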
43

Tylus, Joseph. "The Impact of Enabling School Structures on the Degree of Internal School Change as Measured by the Implementation of Professional Learning Communities". VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1855.

Full text
Abstract
This non-experimental, correlational study examined the relationship between bureaucratic structures in middle and high schools and change in individual teachers' classroom instructional practices brought about through the centralized directive of membership in a professional learning community. Using a continuum of bureaucratic structure, from enabling to hindering, designed by Hoy and Sweetland (2001), each teacher identified the type of bureaucratic structure they believed they operated within. The teacher participants responded to a questionnaire on how involved they were, and to what degree they participated with colleagues, in a professional learning community during the current school year. Further, they were asked how membership in a professional learning community influenced, if at all, their instructional practices. A regression analysis showed a statistically significant relationship between enabling bureaucratic structure and a higher degree of teacher personal professional growth. A regression analysis also demonstrated a statistically significant relationship between enabling bureaucratic structure and change in classroom instructional practices associated with membership in a professional learning community. However, while the analyses found statistical significance, the actual effect size was low, limiting the practical significance of the model. One interaction of interest related to teachers of courses with a state-mandated end-of-course test that affects the school's adequate yearly progress (AYP) rating: teachers in this group reported the highest level of change in their classroom instructional practices through membership in a professional learning community when they perceived a more enabling bureaucratic structure in the school where they worked. These results should help encourage future work on which bureaucratic structures are most effective in producing classroom change through the use of professional learning communities.
44

De, Lange Mariska. "Guidelines to establish an e-safety awareness in South Africa". Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1007863.

Full text
Abstract
Information and Communication Technology (ICT) has become an integral part of almost every individual's life. Although ICT, particularly the Internet, offers numerous opportunities, individuals should also be aware of the associated risks, especially the younger generations, who are the most vulnerable to online dangers as they become increasingly involved in online activities. Children are using new technologies from an early age and should know how to keep themselves and others safe while accessing the Internet. However, most of them do not have the required knowledge and expertise to protect themselves. This is because, under most circumstances, parents do not understand their children's online behaviours and activities and are, therefore, unable to teach their children how to use the Internet safely and responsibly. A school can be seen as the perfect place to teach children safe online behaviour. However, there is currently a definite lack of e-Safety in South African schools: no e-Safety policies are in place, and there is little or nothing in curricula with regard to e-Safety. This can lead to additional concerns. The primary objective of this research study is, therefore, to develop, motivate and verify a framework that might contribute towards the development of an e-Safety culture. This e-Safety culture should allow individuals to adapt their behaviour towards the secure use of ICT. For the purpose of this research study, the focus is primarily on learners from primary and secondary schools.
45

Wikström, Sverre. "Background aEEG/EEG measures in very preterm infants : Relation to physiology and outcome". Doctoral thesis, Uppsala universitet, Institutionen för kvinnors och barns hälsa, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-146737.

Full text
Abstract
The overall aim of this thesis was to characterize single-channel aEEG/EEG, recorded during the first postnatal days in preterm infants, in relation to brain function and two-year outcome. Study I investigated whether aEEG/EEG was associated with neonatal brain injury, inflammation and outcome in 16 very preterm (VPT) infants. The interburst interval (IBI) was prolonged, and aEEG amplitudes were lower, in infants with brain injury and in infants developing handicap. Cord blood TNF-α correlated with IBI. Study II investigated inter-rater agreement of visual burst detection, as compared to automated burst detection based on a non-linear energy operator (NLEO), in an EEG data set from 12 extremely preterm (EPT) and 6 VPT infants. The sensitivity of the NLEO was 64% and 69% (EPT and VPT infants, respectively) and the specificity 96% and 88%. The algorithm was then modified to further improve its accuracy. Study III investigated whether arterial carbon dioxide and plasma glucose are associated with EEG continuity. In 247 sets of samples (PaCO2, plasma glucose, IBI) from 32 EPT infants there was a positive association between PaCO2 and IBI: higher PaCO2 was associated with longer IBI. Corrected for carbon dioxide, plasma glucose had a U-shaped association with IBI in infants with good outcome. Study IV investigated the predictive value of aEEG/EEG in 41 EPT and 8 VPT infants. All VPT infants had good outcome. Predictors of outcome in EPT infants included the presence or absence of burst-suppression, continuous activity and cyclicity, median IBI and interburst%. Seizures were associated with neonatal brain damage but not with outcome. Improved preterm brain monitoring may in the future be used for early identification of infants at high risk of brain damage and adverse outcome, which may have implications for the direction of care and for early intervention.
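The non-linear energy operator used for automated burst detection in Study II is, in its standard form, psi[x](n) = x(n)^2 - x(n-1)*x(n+1). The sketch below applies a smoothed NLEO with an assumed threshold to a synthetic trace; the window length and threshold are illustrative, not the calibrated values from the thesis.

```python
import numpy as np

def nleo(x: np.ndarray) -> np.ndarray:
    """Non-linear energy operator: psi(n) = x(n)^2 - x(n-1) * x(n+1)."""
    e = np.zeros_like(x)
    e[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return e

def detect_bursts(eeg: np.ndarray, fs: float, thresh: float) -> np.ndarray:
    """Boolean burst mask from smoothed NLEO energy (threshold assumed)."""
    win = int(0.1 * fs)  # 100 ms smoothing window (illustrative)
    smooth = np.convolve(nleo(eeg), np.ones(win) / win, mode="same")
    return smooth > thresh

# Synthetic trace: suppressed background with one 1-second, 10 Hz burst
fs = 256
t = np.arange(0, 5, 1 / fs)
eeg = 2.0 * np.random.default_rng(3).normal(size=t.size)
eeg[2 * fs:3 * fs] += 50 * np.sin(2 * np.pi * 10 * t[2 * fs:3 * fs])

mask = detect_bursts(eeg, fs, thresh=50.0)
print(f"burst fraction = {mask.mean():.2f}")  # roughly 0.2 of the record
```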
46

Chan, Yik-Kwan Eric and 陳奕鈞. "Investigation of a router-based approach to defense against Distributed Denial-of-Service (DDoS) attack". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30173309.

Full text
47

Maynard, Robert. "The effects of limb speed and limb preference on selected isokinetic strength and power measures during internal and external rotation of the shoulder". Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/42059.

Full text
Abstract
Forty-five males volunteered to serve as subjects in an investigation of the effects of limb velocity and limb preference on peak torque/body weight (PTBW), torque acceleration energy (TAE), average power (AVP), and endurance ratio (ER) at isokinetic speeds of 60 and 300 degrees/second during internal and external rotation of the shoulder. Standard Cybex warm-up and test protocols were used for both test conditions. Test/retest reliability estimates ranged from r = .60 to .70. Repeated-measures ANOVA revealed significant effects of limb speed and limb preference on PTBW, TAE, and AVP. The data illustrate the need for an internal/external shoulder rotation normative profile specific to limb speed and limb preference.
Master of Science
48

Wang, Gary and 王家賢. "An Integration of AHP and DEA to measure performance of Internal Auditing". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/47367220357761450716.

Full text
Abstract
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Business Management
ROC academic year 96 (2007-2008)
Internal auditors often treat each auditing item as equally important and evaluate the performance of the audited departments accordingly. This approach can produce misleading results and sometimes draws complaints from departments that receive poor ratings. This research aims at establishing an auditing procedure capable of providing a fair evaluation standard for various departments. It proposes an auditing procedure that integrates multi-criteria methodologies, namely the Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA). AHP is used to generate scoring weights by collecting opinions from the departments to be audited and from experts specialized in audit-related industries. DEA is then applied to measure operating efficiencies, using two inputs to the audited departments and the respective weighted audit score as the only output. Internal quality auditing data from a TFT-LCD assembly house based in southern Taiwan is used to demonstrate the implementation of the proposed auditing procedure. Sensitivity analysis and slack variable analysis are also conducted to show how the audited system can be improved.
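The AHP step described above reduces to finding the principal eigenvector of a pairwise-comparison matrix and checking its consistency ratio; the resulting weights then turn raw audit scores into the single weighted output fed to DEA. A minimal sketch with an invented 3x3 comparison matrix over hypothetical audit criteria follows.

```python
import numpy as np

# Pairwise-comparison matrix (illustrative, not from the thesis):
# A[i, j] is how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()  # normalized priority weights for the audit items

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with Saaty's random index RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print(f"weights = {np.round(w, 3)}, CR = {cr:.3f}")  # CR < 0.1 is acceptable
```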
49

UTRO, Filippo. "Algorithms for internal validation clustering measures in the post genomic era". Doctoral thesis, 2011. http://hdl.handle.net/10447/102436.

Full text
50

Wang, Li-Hsuan and 王儷璇. "Commercial bank how to measure PD and LGD of mortgage---internal ratings based approach application". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/32506536330820960210.

Full text
Abstract
Master's thesis
National Central University
Graduate Institute of Finance
ROC academic year 92 (2003-2004)
As the New Basel Capital Accord supports the internal ratings based approach, individual banks need to estimate the probability of default (PD), loss given default (LGD) and exposure at default themselves in order to set capital requirements for residential mortgages. For this reason, the purposes of this paper are to find the significant factors in abnormal payments and loan-state transitions, to help estimate PD, and the factors for estimating LGD, to assist in pricing differentials that compensate for risk and in making strategic credit-granting decisions. According to the empirical findings, the interest differential and the vintage of the loan are more useful in explaining the resolution of sample delinquencies than other variables: they are positively and significantly related to the probability of delinquencies, defaults and transitions. Financial institutions can therefore give these factors more weight when making collection policies. Besides, a comparison of actual loan-state probabilities and model forecasts reveals that the sample bank can control delinquent conditions even under adverse economic conditions. Finally, borrower behaviour in choosing default and prepayment comprises significant factors for estimating LGD, whereas information on the loan contract does not.
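PD estimation of the kind described above is commonly done with a logistic regression on borrower and loan characteristics; the sketch below uses the two drivers the abstract highlights (interest differential and loan vintage) on synthetic data. The model form, coefficients and data are assumed for illustration and are not from the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic mortgage data (illustrative only)
rng = np.random.default_rng(7)
n = 5000
rate_diff = rng.normal(1.0, 0.5, n)   # interest differential (%)
vintage = rng.uniform(0, 10, n)       # loan age in years

# Invented "true" default process used only to generate labels
logit = -4.0 + 1.2 * rate_diff + 0.15 * vintage
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([rate_diff, vintage])
model = LogisticRegression().fit(X, default)

# Estimated PD for a new loan: 1.5% rate differential, 3-year vintage
pd_hat = model.predict_proba([[1.5, 3.0]])[0, 1]
print(f"estimated PD = {pd_hat:.2%}")
```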