
Dissertations / Theses on the topic 'Face – Measurement'


Consult the top 50 dissertations / theses for your research on the topic 'Face – Measurement.'


You can also download the full text of each publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chan, Yin-man, and 陳彥民. "Three-dimensional cephalometry of Chinese faces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43958643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Austin, Erin, and L. Lee Glenn. "Online and Face-To-Face Orthopaedic Surgery Education Methods." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/7497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sandri, Gustavo Luiz. "Automated non-contact heart rate measurement using conventional video cameras." Repositório Institucional da UnB, 2016. http://dx.doi.org/10.26512/2016.02.D.21118.

Full text
Abstract:
Master's thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.
As blood flows through the body of an individual, it changes the way light is irradiated by the skin, because blood absorbs light differently than the remaining tissues. This subtle variation can be captured by a camera and used to monitor the heart activity of a person. The signal captured by the camera is a wave that represents the changes in skin tone over time, and its frequency is the same as the frequency at which the heart beats; it can therefore be used to estimate a person's heart rate. This remote measurement of the cardiac pulse provides more comfort as it avoids the use of electrodes or other devices attached to the body. It also allows a person to be monitored covertly, to be employed in lie detectors, for example. In this work we propose two algorithms for non-contact heart rate estimation using conventional cameras under uncontrolled illumination. The first proposed algorithm is a simple approach that uses a face detector to identify the face of the person being monitored and extracts the signal generated by the changes in skin tone due to blood flow. This algorithm employs an adaptive filter to boost the energy of the signal of interest against noise. We show that this algorithm works very well for videos with little movement. The second algorithm we propose is an improvement of the first one to make it more robust to movement. We modify the approach used to define the region of interest: a skin detector eliminates pixels belonging to the background, the frames are divided into micro-regions that are tracked using an optical-flow algorithm to compensate for movement, and a clustering algorithm automatically selects the best micro-regions to use for heart rate estimation. We also propose a temporal and spatial filtering scheme to reduce the noise introduced by the optical-flow algorithm.
We compared the results of our algorithms to an off-the-shelf fingertip pulse oximeter and showed that they can work well under challenging situations.
APA, Harvard, Vancouver, ISO, and other styles
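The core signal-processing step shared by both of Sandri's algorithms, estimating pulse rate as the dominant spectral peak of a skin-tone trace, can be illustrated with a minimal sketch. This is only an illustration of the general rPPG idea, not the thesis's actual code; the band limits and the synthetic trace are assumptions:

```python
import numpy as np

def estimate_heart_rate(signal, fps, lo_hz=0.7, hi_hz=4.0):
    """Estimate pulse rate (bpm) as the dominant spectral peak of a
    skin-tone trace, restricted to a plausible cardiac band (42-240 bpm)."""
    x = signal - np.mean(signal)                  # remove DC (baseline skin tone)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    peak = freqs[band][np.argmax(spectrum[band])] # strongest in-band frequency
    return peak * 60.0

# Synthetic demo: a 72 bpm pulse buried in noise, sampled at 30 fps.
fps, bpm = 30.0, 72.0
t = np.arange(0, 20, 1.0 / fps)                   # 20 s of "video"
trace = (0.05 * np.sin(2 * np.pi * (bpm / 60.0) * t)
         + np.random.default_rng(0).normal(0, 0.02, t.size))
print(round(estimate_heart_rate(trace, fps)))     # ≈ 72
```

In practice the trace would come from averaging skin pixels of a detected face region per frame; the adaptive filtering and optical-flow tracking described in the abstract address the noise and motion this toy example ignores.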
4

Hayes, Susan. "Seeing and measuring the 2D face." University of Western Australia. School of Anatomy and Human Biology, 2009. http://theses.library.uwa.edu.au/adt-WU2010.0067.

Full text
Abstract:
This is a study of the factors that affect face shapes, and the techniques that can be used to measure variations in two dimensional representations of faces. The materials included thirty photographs of people in natural poses and thirty portraits that were based on the pose photographs. Visual assessors were asked to score the photographs and portraits in terms of pose (cant, turn and pitch) and also to compare the portraits to the photographs and score them in terms of likeness in the depiction of the face and its component features. Anthropometric indices were derived and used to score the images for the pose variables as well as for aspects of individual variation in external face shape and the spatial arrangement of the features. Geometric morphometric analysis was also used to determine the shape variation occurring in the photographs, the variation within the portraits, and to specifically discern where the portraits differ from the photographs in the depiction of head pose and individual differences in facial morphology. For the analysis of pose it was found that visual assessors were best at discerning the extent of head turning and poorest at discerning head pitch. These tendencies occurred in the visual assessments of both the photographs and the portrait drawings. For the analysis of the individual variation in face shapes it was found that external face shape varies according to upper face dimensions and the shape of the chin, and that vertical featural configurations are strongly linked to external face shape. When the portrait and photograph data were placed in the same geometric morphometric analysis the inaccuracies in the portrait drawings became evident. 
When these findings were compared to the visual assessments it transpired that, on average, visual assessment was generally congruent with the geometric morphometric analysis, but was possibly confounded by patterns of dysmorphology in the portraits that were contrary to what this study suggests are normal patterns of face shape variation. Overall this study has demonstrated that while anthropometric and visual assessments of facial differences are quite good, both were comparatively poor at assessing head pitch and tended to be confounded by the dysmorphologies arising in the portrait drawings. Geometric morphometric analysis was found to be very powerful in discerning complex shape variations associated with head pose and individual differences in facial morphology, both within and between the photographs and portraits.
APA, Harvard, Vancouver, ISO, and other styles
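Geometric morphometric analyses of the kind used in this thesis rest on Procrustes superimposition, which removes translation, scale and rotation before comparing landmark configurations. A minimal numpy sketch (illustrative only; the landmark coordinates are invented):

```python
import numpy as np

def procrustes_distance(A, B):
    """Shape difference between two 2D landmark configurations after
    removing translation, scale and rotation (ordinary Procrustes)."""
    A = A - A.mean(axis=0)                        # remove translation
    B = B - B.mean(axis=0)
    A = A / np.linalg.norm(A)                     # remove scale (unit centroid size)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)             # orthogonal Procrustes problem
    R = U @ Vt                                    # optimal rotation of B onto A
    return np.linalg.norm(A - B @ R)

# A face-like landmark set and a rotated, scaled, shifted copy of it.
face = np.array([[0., 1.], [-.5, .5], [.5, .5], [0., 0.], [-.4, -.6], [.4, -.6]])
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
copy = 2.5 * face @ rot.T + np.array([3., -1.])
print(procrustes_distance(face, copy))            # ≈ 0 (up to float error)
```

After superimposition, the residual distances are what analyses such as those in the thesis decompose into pose and individual shape variation.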
5

McIntyre, A. H. "Applying psychology to forensic facial identification : perception and identification of facial composite images and facial image comparison." Thesis, University of Stirling, 2012. http://hdl.handle.net/1893/9077.

Full text
Abstract:
Eyewitness recognition is acknowledged to be prone to error, but there is less understanding of the difficulty of discriminating unfamiliar faces. This thesis examined the effects of face perception on identification of facial composites, and on unfamiliar face image comparison. Facial composites depict face memories by reconstructing features and configurations to form a likeness. They are generally reconstructed from an unfamiliar face memory and will be unavoidably flawed. Identification requires perception of any accurate features by someone who is familiar with the suspect, and performance is typically poor. In typical face perception, face images are processed efficiently as complete units of information. Chapter 2 explored the possibility that holistic processing of inaccurate composite configurations impairs identification of individual features. Composites were split below the eyes and misaligned to impair holistic analysis (cf. Young, Hellawell, & Hay, 1987); identification was significantly enhanced, indicating that perceptual expertise with inaccurate configurations exerts powerful effects that can be reduced by enabling featural analysis. Facial composite recognition is difficult, which means that perception and judgement will be influenced by an affective recognition bias: smiles enhance perceived familiarity, while negative expressions produce the opposite effect. In applied use, facial composites are generally produced from unpleasant memories and will convey negative expression; affective bias will therefore be important for facial composite recognition. Chapter 3 explored the effect of positive expression on composite identification: composite expressions were enhanced, and positive affect significantly increased identification. Affective quality rather than expression strength mediated the effect, with subtle manipulations being very effective. Facial image comparison (FIC) involves discrimination of two or more face images.
Accuracy in unfamiliar face matching is typically in the region of 70%, and as discrimination is difficult, may be influenced by affective bias. Chapter 4 explored the smiling face effect in unfamiliar face matching. When multiple items were compared, positive affect did not enhance performance and false positive identification increased. With a delayed matching procedure, identification was not enhanced but in contrast to face recognition and simultaneous matching, positive affect improved rejection of foil images. Distinctive faces are easier to discriminate. Chapter 5 evaluated a systematic caricature transformation as a means to increase distinctiveness and enhance discrimination of unfamiliar faces. Identification of matching face images did not improve, but successful rejection of non-matching items was significantly enhanced. Chapter 6 used face matching to explore the basis of own race bias in face perception. Other race faces were manipulated to show own race facial variation, and own race faces to show African American facial variation. When multiple face images were matched simultaneously, the transformation impaired performance for all of the images; but when images were individually matched, the transformation improved perception of other race faces and discrimination of own race faces declined. Transformation of Japanese faces to show own race dimensions produced the same pattern of effects but failed to reach significance. The results provide support for both perceptual expertise and featural processing theories of own race bias. Results are interpreted with reference to face perception theories; implications for application and future study are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Zanatta, Juliana 1982. "Procedimento preparatório face a face e respostas de ansiedade e dor em jovens submetidos à exodontia de terceiro molar." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/288645.

Full text
Abstract:
Advisor: Antonio Bento Alves de Moraes
Master's thesis, Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba, 2011.
Abstract: The aim of this work was to identify the effects of a preparatory procedure providing face-to-face information on the anxiety levels, physiological changes and pain of dental patients undergoing extraction of third molars. Participants were 123 patients, aged 14 to 24 years, who required extraction of at least one third molar in a single dental session. The patients were selected and randomly allocated into two groups (Control and Experimental). The experimental design was divided into (1) Identification Questionnaire; (2) Pre-Surgical; (3) Provision of Prior Face-to-Face Information (Experimental Group); (4) Surgical Procedure; (5) Immediate Post-Surgical; (6) Mediate Post-Surgical; and (7) Suture Removal. The Identification Questionnaire presented open and closed questions about habits, dental surgical experience and history of drug use. The Pre-Surgical moment involved physiological measurements (blood pressure and heart rate) and the administration of the State-Trait Anxiety Inventory (STAI), the Dental Anxiety Scale (DAS), and the McGill Pain Questionnaire (Sensory Pain Rating Index, Affective Pain Rating Index, Present Pain Intensity and Global Assessment of the Pain Experience). After the pre-surgical phase, patients in the Experimental Group received prior face-to-face information about the extraction surgery. The same measures were repeated at the Immediate Post-Surgical, Mediate Post-Surgical and Suture Removal moments. Data obtained from the interview, the scores from the STAI, DAS and McGill instruments, and the physiological measurements were analyzed using Chi-Square tests, mixed-model Analysis of Variance and Tukey tests (α=0.05). No statistically significant difference was found between the mean scores of the physiological measures or of the DAS and STAI assessments at any moment, either between the groups (between-groups analysis) or within each group across moments (intra-group analysis).
Analysis of the pain reports suggests a significant reduction in the Sensory Pain Rating Index immediately after surgery for the Experimental Group (Immediate Post-Surgical: CG = 6.83 vs. EG = 4.43, p ≤ 0.0001). These results suggest that prior face-to-face information was effective in significantly decreasing reports of sensory pain immediately after extraction, but was not effective in reducing the physiological measures, the anxiety responses or the other pain reports in third molar extraction.
Mestrado
Saude Coletiva
Mestre em Odontologia
APA, Harvard, Vancouver, ISO, and other styles
7

Sarkar, Abhijit. "Cardiac Signals: Remote Measurement and Applications." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78739.

Full text
Abstract:
The dissertation investigates the promises and challenges of applying cardiac signals in biometrics and affective computing, and of measuring cardiac signals noninvasively. We mainly discuss two major cardiac signals: the electrocardiogram (ECG) and the photoplethysmogram (PPG). ECG and PPG signals hold strong potential for biometric authentication and identification. We have shown that by mapping each cardiac beat from the time domain to an angular domain using a limit cycle, intra-class variability can be significantly minimized, in contrast to conventional time-domain analysis. Our experiments with both ECG and PPG signals show that the proposed method eliminates the effect of instantaneous heart rate on the shape morphology and improves authentication accuracy. For noninvasive measurement of PPG beats, we have developed a systematic algorithm to extract pulse rate from face video in diverse situations using video magnification. We extract signals from skin patches and then use frequency-domain correlation to filter out non-cardiac signals. We have developed a novel entropy-based method to automatically select skin patches from the face. We report the beat-to-beat accuracy of remote PPG (rPPG) in comparison to the conventional average heart rate; beat-to-beat accuracy is required for applications related to heart rate variability (HRV) and affective computing. The algorithm has been tested on two datasets, one with a static illumination condition and the other with an unrestricted ambient illumination condition. Automatic skin detection is an intermediate step for rPPG. Existing methods always depend on color information to detect human skin. We have developed a novel standalone skin detection method to show that color cues are not necessary for skin detection, using LBP-lacunarity-based micro-texture features and a region-growing algorithm to find skin pixels in an image.
Our experiments show that the proposed method is applicable universally to any image, including near-infrared images. This finding helps to extend the domain of many applications, including rPPG. To the best of our knowledge, this is the first such method that is independent of color cues.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
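The micro-texture features mentioned in this abstract build on Local Binary Patterns. A minimal sketch of the basic 8-neighbour LBP (not the thesis's LBP-lacunarity pipeline; the test image is invented):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Patterns: each interior pixel gets a
    byte encoding which of its neighbours are >= the centre pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                             # interior (centre) pixels
    # neighbour offsets, clockwise from top-left, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]])
print(lbp_image(img))   # [[255]]: every neighbour exceeds the centre
```

Because LBP codes depend only on local intensity ordering, they work equally on grayscale and near-infrared images, which is the property the color-free skin detector above exploits.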
8

Singh, Rajendra. "CCD based active triangulation for automatic close range monitoring of rock movement." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Ghadban, Fatima A. "Evaluating the Face Validity of an Arabic-language Translation of a Food Security Questionnaire in Arabic-speaking Populations." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343055581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Marfull, Héctor. "Investigation of packet delay jitter metrics in face of loss and reordering." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4289.

Full text
Abstract:
Nowadays mobility is a field of great importance: travelling or moving should not mean breaking one's connection to the Internet. The current objective is not only to be connected worldwide, but to be connected always through the best available connection (ABC). This requires performing vertical handovers to switch between different networks while maintaining the same Internet connection, all in a way that is transparent to the user. In order to provide the highest Quality of Experience, tools are needed to check the status and performance of the different available networks, measuring and collecting statistics, in order to take advantage of each of them. This thesis presents the theoretical basis for a measurement module by describing and analysing different metrics, with special emphasis on delay jitter, collecting and comparing different methods, and discussing their main characteristics and suitability for this goal.
APA, Harvard, Vancouver, ISO, and other styles
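One widely used delay-jitter metric of the kind such a thesis would compare is the RTP interarrival jitter estimator from RFC 3550 (section 6.4.1), a running average of the change in one-way transit time. A short sketch (the traffic pattern below is invented for illustration):

```python
def rfc3550_jitter(send_times, recv_times):
    """RTP interarrival jitter (RFC 3550, sec. 6.4.1):
    J += (|D(i-1, i)| - J) / 16, where D is the difference between
    consecutive packets' transit times."""
    j = 0.0
    prev_transit = recv_times[0] - send_times[0]
    for s, r in zip(send_times[1:], recv_times[1:]):
        transit = r - s
        d = abs(transit - prev_transit)   # change in one-way transit time
        prev_transit = transit
        j += (d - j) / 16.0               # exponential smoothing, gain 1/16
    return j

# Packets sent every 20 ms; network delay alternates 50 ms / 55 ms.
send = [i * 0.020 for i in range(100)]
recv = [t + (0.050 if i % 2 == 0 else 0.055) for i, t in enumerate(send)]
print(round(rfc3550_jitter(send, recv) * 1000, 2), "ms")  # converges toward 5 ms
```

Note that this estimator only sees differences between consecutive arrivals, which is exactly why loss and reordering (the subject of the thesis) distort it: a lost or reordered packet changes which pairs are "consecutive".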
11

Hunter, Kirsten. "Affective Empathy in Children: Measurement and Correlates." Griffith University. School of Applied Psychology, 2004. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20040610.135822.

Full text
Abstract:
Empathy is a construct that plays a pivotal role in the development of interpersonal relationships, and thus in one's ability to function socially and often professionally. The development of empathy in children is therefore of particular interest, allowing further understanding of normative and atypical developmental trajectories. This thesis investigated the assessment of affective empathy in children aged 5-12 through the development and comparison of a multimethod assessment approach. Furthermore, this thesis evaluated the differential relationships between affective empathy and global behavioural problems in children versus the presence of early psychopathic traits, such as callous-unemotional traits. The first component of this study incorporated: a measure of facial expression of affective empathy and of the self-reported experience of affective empathy, as measured by the newly designed Griffith Empathy Measure - Video Observation (GEM-VO) and the Griffith Empathy Measure - Self Report (GEM-SR); Bryant's Index of Empathy for Children and Adolescents (1982), a traditional child self-report measure; and a newly designed parent report of child affective empathy (Griffith Empathy Measure - Parent Report; GEM-PR). Using a normative community sample of 211 children from grades 1, 3, 5, and 7 (aged 5-6, 7-8, 9-10, and 11-12, respectively), the GEM-PR and the Bryant were found to have moderate to strong internal consistency. As a measure of concurrent validity, strong positive correlations were found between the mother and father reports (GEM-PR) of their child's affective empathy for grades 5 and 7, and for girls of all age groups. Using a convenience sample of 31 parents and children aged 5 to 12, the GEM-PR and the Bryant demonstrated strong test-retest reliability. The reliability of the GEM-VO and the GEM-SR was assessed using a convenience sample of 20 children aged 5 to 12.
These measures involve the assessment of children's facial and verbal responses to emotionally evocative videotape vignettes. Children were unobtrusively videotaped while they watched the vignettes, and their facial expressions were coded. Children were then interviewed to determine the emotions they attributed to the stimulus persons and to themselves while viewing the material. Adequate to strong test-retest reliability was found for both measures. Using 30% of the larger sample of 211 participants (N=60), the GEM-VO also demonstrated robust inter-rater reliability. This multimethod approach to assessing child affective empathy produced differing age and gender trends. Facial affect as reported by the GEM-VO decreased with age. Similarly, the matching of child facial emotion to the vignette protagonist's facial emotion was higher in the younger grades. These findings suggest that measures that assess the matching of facial affect (i.e., the GEM-VO) may be more appropriate for younger age groups who have not yet learnt to conceal their facial expression of emotion. Data from the GEM-SR suggest that older children are more verbally expressive of negative emotions than younger children, with older girls found to be the most verbally expressive of feeling the same emotion as the vignette character, a role consistent with female gender-socialization pressures. These findings are also indicative of the increase in emotional vocabulary and self-awareness in older children, supporting the validity of child self-report measures (based on observational stimuli) with older children. In comparing data from the GEM-VO and GEM-SR, this study found that for negative emotions the consistency between facial emotions coded and emotions verbally reported increased with age.
This consistency across gender and amongst the older age groups provides encouraging concurrent validity, suggesting the results of one measure could be inferred through the exclusive use of the alternate measurement approach. In contrast, affective empathy as measured by the two measures, the accurate matching of the participant's and vignette character's facial expression (GEM-VO) and the accurate matching of the self-reported and vignette character's emotion (GEM-SR), was not found to converge. This finding is consistent with prior research and questions the assumption that facially expressed and self-appraised indexes of affective empathy are different aspects of a complex unified process. When evaluating the convergence of all four measures of affective empathy, negative correlations were found between the Bryant and the GEM-PR; these two measures were also found not to converge with the GEM-VO and GEM-SR in a consistent and predictable way. These findings pose the question of whether different aspects of the complex phenomenon of affective empathy are being assessed. Furthermore, the validity of the exclusive use of a child self-report measure such as the Bryant, which is the standard assessment in the literature, is questioned. The possibility that callous-unemotional (CU) traits, a unique subgroup identified in the child psychopathy literature, may account for the mixed findings throughout research regarding the assumption that deficiencies in empathy underlie conduct problems in children, was examined using regression analysis. Using the previous sample of 211 children aged 5-12, conduct problems (CP) were measured using the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1999), and the CU subscale was taken from the Antisocial Process Screening Device (APSD; Caputo, Frick, & Brodsky, 1999). Affective empathy when measured by the GEM-PR and the Bryant showed differing patterns in the relationship between affective empathy, CU traits and CP.
While the GEM-Father report indicated that neither age, CU traits nor CP accounted for variance in affective empathy, the GEM-Mother report supported that affective empathy was no longer associated with CP once CU traits had been partialled out. In contrast, for girls the Bryant showed no underlying correlational relationship with CU traits. From the GEM-Mother data only, it can be argued that it was the unmeasured variance of CU traits that accounted for the relationship between CP and affective empathy found in the literature. Furthermore, the comparison of an altered CU subscale with all possible empathy items removed suggests that the constructs of CU traits and affective empathy are not synonymous or overlapping in nature, but rather two independent constructs. This multimethod approach highlights the complexity of this research area, exemplifying the significant influence of the source of the reports, and suggesting that affective empathy consists of multiple components that are assessed to differing degrees by the different measurement approaches.
APA, Harvard, Vancouver, ISO, and other styles
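The inter-rater reliability reported for the GEM-VO is typically quantified with a chance-corrected agreement statistic; Cohen's kappa is a common choice (the abstract does not specify which statistic was used, and the ratings below are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(rater_a)
    # observed agreement: fraction of items both raters labelled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    # chance agreement from each rater's marginal label frequencies
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders labelling eight facial-expression clips.
a = ["happy", "sad", "happy", "happy", "neutral", "sad", "happy", "neutral"]
b = ["happy", "sad", "happy", "sad",   "neutral", "sad", "happy", "happy"]
print(round(cohens_kappa(a, b), 3))   # → 0.6
```

Kappa near 1 indicates agreement well beyond chance; values around 0.6-0.8 are conventionally read as substantial agreement.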
12

Rörden, Sarah, and Kristofer Wille. "Measuring and handling risk : How different financial institutions face the same problem." Thesis, Mälardalen University, School of Sustainable Development of Society and Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-9951.

Full text
Abstract:

Title: Measuring and handling risk - How different financial institutions face the same problem

Seminar date: 4 June 2010

Level: Bachelor thesis in Business Administration, basic level 300, 15 ECTS

Authors: Sarah Rörden and Kristofer Wille

Supervisor: Angelina Sundström

Subject terms: Risk variables, Risk measurement, Risk management, Modern Portfolio Theory, Diversification, Beta

Target group: Everyone who has basic knowledge of financial theories and risk principles but lacks an understanding of how they can be used in risk management.

Purpose: To understand how different Swedish financial institutions handle and reduce risk in portfolio investing using financial theories.

Theoretical framework: The theoretical framework is based on relevant literature about financial theories and risk management, including critical articles.

Method: A multi-case study was conducted, built upon empirical data collected through semi-structured interviews at three different financial institutions.

Empiricism: The study is based on interviews with Per Lundqvist, private banker at Carnegie Investment Bank AB; Erik Dagne, head of risk management, and Joachim Spetz, head of asset management, at Erik Penser Bankaktiebolag; and David Lindström, asset manager at Strand Kapitalförvaltning AB.

Conclusion: The theoretical models chosen for this research are implemented in practice, but the numbers the financial models generate do not tell the whole truth about total risk, so the models are used differently at each institution studied. For a model to hold, it has to be transparent and its assumptions have to be taken into account; it all comes down to interpreting the models appropriately.

APA, Harvard, Vancouver, ISO, and other styles
13

O'Mara, David Thomas John. "Automated facial metrology." University of Western Australia. School of Computer Science and Software Engineering, 2002. http://theses.library.uwa.edu.au/adt-WU2003.0015.

Full text
Abstract:
Automated facial metrology is the science of objective and automatic measurement of the human face. There are many reasons for measuring the human face. Psychologists are interested in determining how humans perceive beauty, and how this is related to facial symmetry [158]. Biologists are interested in the relationship between symmetry and biological fitness [124]. Anthropologists, surgeons, forensic experts, and security professionals can also benefit from automated facial metrology [32, 101, 114]. This thesis investigates the concept of automated facial metrology, presenting original techniques for segmenting 3D range and colour images of the human head, measuring the bilateral symmetry of n-dimensional point data (with particular emphasis on measuring the human head), and extracting the 2D profile of the face from 3D data representing the head. Two facial profile analysis techniques are also presented that are incremental improvements over existing techniques. Extensive literature reviews of skin colour modelling, symmetry detection, symmetry measurement, and facial profile analysis are also included in this thesis. It was discovered during this research that bilateral symmetry detection using principal axes is not appropriate for detecting the mid-line of the human face. An original mid-line detection technique that does not use symmetry, and is superior to the symmetry-based technique, was developed as a direct result of this discovery. There is disagreement among researchers about the effect of ethnicity on skin colour. Some researchers claim that people from different ethnic groups have the same skin chromaticity (hue, saturation) [87, 129, 206], while other researchers claim that different ethnic groups have different skin colours [208, 209]. It is shown in this thesis that people from apparently different ethnic groups can have skin chromaticity that is within the same Gaussian distribution. 
The chromaticity-based skin colour model used in this thesis has been chosen from the many models previously used by other researchers, and its applicability to skin colour modelling has been justified. It is proven in this thesis that the Mahalanobis distance to the skin colour distribution is Gaussian in both the chromatic and normalised rg colour spaces. Most facial profile analysis techniques use either tangency or curvature to locate anthropometric features along the profile. Techniques based on both approaches have been implemented and compared. Neither approach is clearly superior to the other, but the results indicate that a hybrid technique, combining both approaches, could provide significant improvements. The areas of research most relevant to facial metrology are reviewed in this thesis and original contributions are made to the body of knowledge in each area. The techniques, results, literature reviews, and suggestions presented in this thesis provide a solid foundation for further research and hopefully bring the goal of automated facial metrology a little closer to being achieved.
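The skin-colour modelling result above rests on two operations that are easy to sketch: projecting RGB into normalised rg chromaticity and measuring the Mahalanobis distance to a Gaussian skin distribution. The mean and covariance below are placeholders, not the thesis's fitted model:

```python
import numpy as np

def rg_chromaticity(rgb):
    """Map an RGB triple to normalised rg chromaticity (intensity-invariant)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb[:2] / rgb.sum()  # (r, g) = (R, G) / (R + G + B)

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of chromaticity x to a Gaussian skin model."""
    d = np.asarray(x) - np.asarray(mean)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Illustrative skin model (mean and covariance are placeholder values).
skin_mean = np.array([0.42, 0.31])
skin_cov = np.array([[0.0009, 0.0002],
                     [0.0002, 0.0004]])

pixel = rg_chromaticity([180, 120, 90])  # a warm, skin-like pixel
print(mahalanobis(pixel, skin_mean, skin_cov))
```

Thresholding this distance gives a simple skin/non-skin classifier in the chromatic space the thesis analyses.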
APA, Harvard, Vancouver, ISO, and other styles
14

Gevaux, Lou. "3D-hyperspectral imaging and optical analysis of skin for the human face." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES035.

Full text
Abstract:
Hyperspectral imaging (HSI), a non-invasive, in vivo imaging method that can be applied to measure skin spectral reflectance, has shown great potential for the analysis of skin optical properties on small, flat areas: by combining a skin model, a model of light-skin interaction and an optimization algorithm, an estimation of skin chromophore concentration in each pixel of the image can be obtained, corresponding to quantities such as melanin and blood. The purpose of this work is to extend this method to large, non-flat areas, in particular the human face. The accurate measurement of complex objects such as the face must account for variances of illumination that result from the 3D geometry of an object, which we call irradiance drifts. Unless they are accounted for, irradiance drifts will lead to errors in the hyperspectral image analysis.In the first part of the work, we propose a measurement setup comprising a wide field HSI camera (with an acquisition range of 400 - 700 nm, in 10 nm width wavebands) and a 3D measurement system using fringe projection. As short acquisition time is crucial for in vivo measurement, a trade-off between resolution and speed has been made so that the acquisition time remains under 5 seconds.To account for irradiance drifts, a correction method using the surface 3D geometry and radiometry principles is proposed. The irradiance received on the face is computed for each pixel of the image, and the resulting data used to suppress the irradiance drifts in the measured hyperspectral image. This acts as a pre-processing step to be applied before image analysis. 
This method, however, failed to yield satisfactory results on those parts of the face almost perpendicular to the optical axis of the camera, such as the sides of the nose, and was therefore discarded in favor of using an optimization algorithm robust to irradiance drifts in the analysis method.Skin analysis from the measured hyperspectral image is performed using optical models and an optimization method. Skin is modeled as a two-layer translucent material whose absorption and scattering properties are determined by its composition in chromophores. Light-skin interactions are modeled using a two-flux method. An inverse problem is solved by optimization to retrieve information about skin composition from the measured reflectance. The chosen optical models represent a trade-off between accuracy and acceptable computation time, which increases exponentially with the number of parameters in the model. The resulting chromophore maps can be added to the 3D mesh measured using the 3D-HSI camera for display purposes.In the spectral reflectance analysis method, skin scattering properties are assumed to be the same for everyone and on every part of the body, which represents a shortcoming. In the second part of this work, the fringe projector originally intended for measuring 3D geometry is used to acquire skin modulation transfer function (MTF), a quantity that yields information about both skin absorption and scattering coefficients. The MTF is measured using spatial frequency domain imaging (SFDI) and analyzed by an optical model relying on the diffusion equation to estimate skin scattering coefficients. On non-flat objects, retrieving such information independently from irradiance drifts is a significant challenge. The novelty of the proposed method is that it combines HSI and SFDI to obtain skin scattering coefficient maps of the face independently from its shape.We emphasize throughout this dissertation the importance of short acquisition time for in vivo measurement. 
The HSI analysis method, however, is extremely time-consuming, preventing real time image analysis. A preliminary attempt to address this shortcoming is presented, using neural networks to replace optimization-based analysis. Initial results of the method have been promising, and could drastically reduce calculation time from around an hour to a second
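As a much-simplified illustration of the kind of inverse problem solved here (the thesis uses a two-layer skin model with a two-flux light-transport model, not this), absorbance at a few wavelengths can be unmixed into chromophore concentrations by linear least squares under a single-layer Beer-Lambert assumption. The extinction values below are placeholders, not real spectra:

```python
import numpy as np

# Illustrative extinction coefficients at three wavelengths (rows)
# for two chromophores, melanin and blood (columns).
E = np.array([[0.90, 0.30],
              [0.55, 0.60],
              [0.20, 0.95]])

true_c = np.array([1.2, 0.7])   # "true" concentrations (arbitrary units)
absorbance = E @ true_c         # Beer-Lambert forward model: A = E c

# Inverse problem: recover concentrations by linear least squares.
c_hat, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
print(np.round(c_hat, 3))
```

In the thesis the forward model is nonlinear, which is why iterative optimization (and, in the outlook, a neural network) replaces this closed-form step.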
APA, Harvard, Vancouver, ISO, and other styles
15

Roman, Matej. "Automatizované měření teploty v boji proti COVID." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442439.

Full text
Abstract:
This thesis focuses on the development of open-source software capable of automatic face detection in an image captured by a thermal camera, followed by temperature measurement. The software is intended to aid during the COVID-19 pandemic. The developed software is independent of the thermal camera used; in this thesis, a TIM400 thermal camera is employed. Face detection was implemented with the OpenCV module. The methods tested were Template Matching, Eigenfaces, and the Cascade Classifier; the last performed best and was therefore used in the final version of the software. The Cascade Classifier looks for the eyes and their surrounding area in the image, allowing the software to measure the temperature on the surface of the forehead, so the subject can safely wear a face mask or respirator. Temperature measurement works in real time, and the software can capture several people at once, keeping a record of each measured individual's temperature and the time of the measurement. The software as a whole is packaged in an installation file compatible with the Windows operating system. Its functionality was tested; the video recordings are included in this thesis.
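Downstream of eye detection, the temperature step reduces to reading the hottest pixel in a forehead band above the detected eyes. A numpy-only sketch (the ROI geometry and the raw-to-Celsius mapping below are assumptions for illustration, not the thesis's calibration):

```python
import numpy as np

def forehead_temperature(thermal, eye_box, scale=0.01, offset=20.0):
    """Return the peak temperature (deg C) on the forehead.

    thermal : 2-D array of raw thermal-camera values
    eye_box : (row, col, height, width) of the detected eye region
    The forehead ROI is taken as the same-width band directly above
    the eye box; scale/offset map raw values to deg C (illustrative).
    """
    r, c, h, w = eye_box
    top = max(0, r - h)                 # band of height h above the eyes
    roi = thermal[top:r, c:c + w]
    return float(roi.max() * scale + offset)

frame = np.full((120, 160), 1500)       # background, maps to 35 deg C
frame[20:30, 60:100] = 1720             # warm forehead pixels
eyes = (40, 60, 20, 40)                 # detected eye box below the forehead
print(forehead_temperature(frame, eyes))  # → 37.2
```

Working on the area above the eyes rather than the whole face is what makes the measurement robust to masks and respirators, as the abstract describes.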
APA, Harvard, Vancouver, ISO, and other styles
16

Provot, Thomas. "Apport de l’accélérométrie pour l’étude quantifiée des dérives mécaniques de la course à pied face à la fatigue." Thesis, Reims, 2016. http://www.theses.fr/2016REIMS032/document.

Full text
Abstract:
Fatigue is a well-known phenomenon in the sports world, causing a decrease in performance and an increase in injury risk. The scientific community is therefore concerned with quantifying this phenomenon using different motion-analysis tools. However, some sports, like running, inflict violent mechanical loads on athletes, strongly impacting their health and performance. These loads frequently result in significant shocks and a high number of cycles, and are accompanied by complex body postures. Motion-analysis tools are then not always suitable for measuring this information or for studying the athlete's movement in real conditions of practice. Acceleration then appears as a feature rich in information. It makes it possible to measure and analyze running in order to quantify the drift of the mechanical response of the human body. By validating accelerometric tools, this thesis work studies the mechanical phenomena involved in running in order to quantify and predict their effects on athlete fatigue.
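One simple accelerometric feature of the kind such a study relies on is the number of shock events in a signal, countable as upward threshold crossings. A sketch with a synthetic signal (the 8 g threshold is illustrative, not a value from the thesis):

```python
import numpy as np

def count_impacts(accel_g, threshold=8.0):
    """Count foot-strike shocks as upward crossings of a g-threshold.

    accel_g   : 1-D acceleration-magnitude signal, in g
    threshold : shock level, in g (illustrative default)
    Note: a strike already above threshold at the first sample
    is not counted, since only rising edges are detected.
    """
    a = np.asarray(accel_g, dtype=float)
    above = a >= threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Synthetic stride signal containing three strikes above 8 g:
signal = [1, 2, 9, 10, 3, 1, 8.5, 2, 1, 12]
print(count_impacts(signal))  # → 3
```

Tracking how such per-cycle shock features drift over a run is one way the mechanical response of the body can be quantified against fatigue.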
APA, Harvard, Vancouver, ISO, and other styles
17

Malla, Amol Man. "Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/2111.

Full text
Abstract:
A device capable of continuously monitoring an individual’s level of alertness in real time is highly desirable for preventing drowsiness and lapse-related accidents. This thesis presents the development of a non-intrusive and light-insensitive video-based system that uses computer-vision methods to localize face, eye, and eyelid positions to measure the level of eye closure within an image, which, in turn, can be used to identify visible facial signs associated with drowsiness and behavioural microsleeps. The system was developed to be non-intrusive and light-insensitive to make it practical and end-user compliant. To non-intrusively monitor the subject without constraining their movement, the video was collected by placing a camera, a near-infrared (NIR) illumination source, and an NIR-pass optical filter at an eye-to-camera distance of 60 cm from the subject. The NIR-illumination source and filter make the system insensitive to lighting conditions, allowing it to operate in both ambient light and complete darkness without visually distracting the subject. To determine the image characteristics and to quantitatively evaluate the developed methods, reference videos of nine subjects were recorded under four different lighting conditions with the subjects exhibiting several levels of eye closure, head orientations, and eye gaze. For each subject, a set of 66 frontal face reference images was selected and manually annotated with multiple face and eye features. The eye-closure measurement system was developed using a top-down passive feature-detection approach, in which the face region of interest (fROI), eye regions of interest (eROIs), eyes, and eyelid positions were sequentially localized. The fROI was localized using an existing Haar-object detection algorithm. In addition, a Kalman filter was used to stabilize and track the fROI in the video. The left and the right eROIs were localized by scaling the fROI with corresponding proportional anthropometric constants. 
The position of an eye within each eROI was detected by applying a template-matching method in which a pre-formed eye-template image was cross-correlated with the sub-images derived from the eROI. Once the eye position was determined, the positions of the upper and lower eyelids were detected using a vertical integral-projection of the eROI. The detected positions of the eyelids were then used to measure eye closure. The detection of fROI and eROI was very reliable for frontal-face images, which was considered sufficient for an alertness monitoring system as subjects are most likely facing straight ahead when they are drowsy or about to have a microsleep. Estimation of the y-coordinates of the eye, upper eyelid, and lower eyelid positions showed average median errors of 1.7, 1.4, and 2.1 pixels and average 90th percentile (worst-case) errors of 3.2, 2.7, and 6.9 pixels, respectively (1 pixel ≈ 1.3 mm in reference images). The average height of a fully open eye in the reference database was 14.2 pixels. The average median and 90th percentile errors of the eye and eyelid detection methods were reasonably low except for the 90th percentile error of the lower eyelid detection method. Poor estimation of the lower eyelid was the primary limitation for accurate eye-closure measurement. The median error of fractional eye-closure (EC) estimation (i.e., the ratio of closed portions of an eye to average height when the eye is fully open) was 0.15, which was sufficient to distinguish between the eyes being fully open, half closed, or fully closed. However, compounding errors in the facial-feature detection methods resulted in a 90th percentile EC estimation error of 0.42, which was too high to reliably determine extent of eye-closure. The eye-closure measurement system was relatively robust to variation in facial-features except for spectacles, for which reflections can saturate much of the eye-image. 
Therefore, in its current state, the eye-closure measurement system requires further development before it could be used with confidence for monitoring drowsiness and detecting microsleeps.
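The fractional eye-closure measure EC described above follows directly from the detected eyelid positions and the eye's fully-open height. A minimal sketch of that final step (coordinates invented, with y growing downward as in image space):

```python
def eye_closure(upper_y, lower_y, open_height):
    """Fractional eye closure EC: 0.0 = fully open, 1.0 = fully closed.

    upper_y, lower_y : detected eyelid y-coordinates (pixels, y grows down)
    open_height      : the eye's height when fully open (pixels)
    """
    aperture = max(0.0, lower_y - upper_y)
    return max(0.0, min(1.0, 1.0 - aperture / open_height))

# Using the reference database's average open height of 14.2 pixels:
print(eye_closure(100.0, 114.2, 14.2))  # fully open
print(eye_closure(100.0, 107.1, 14.2))  # half closed
```

With a median EC error of 0.15, this three-way open / half-closed / closed distinction is exactly what the system could and could not resolve reliably.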
APA, Harvard, Vancouver, ISO, and other styles
18

Santos, Filipe Vinci dos. "Techniques de conception pour le durcissement des circuits intégrés face aux rayonnements." Grenoble 1, 1998. http://www.theses.fr/1998GRE10208.

Full text
Abstract:
Microsystems are the latest development in microelectronics. Their emergence opens revolutionary possibilities in several application domains, including the exploitation of space. Using microsystems in space runs into the problem of radiation exposure, particularly for the electronics. This obstacle was overcome in the past by establishing fabrication processes that are resistant (hardened) to the effects of radiation. Shrinking military budgets have caused most hardened fabrication technologies to disappear, which is pushing manufacturers towards standard commercial off-the-shelf (COTS) technologies. The objective of this thesis was to investigate design techniques for hardening a microsystem fabricated in a COTS technology. The microsystem in question is an infrared radiation sensor based on silicon thermopiles, suspended by a front-side bulk micromachining step. The relevant elements of the various fields of knowledge involved are reviewed, with an analysis of the hardening techniques applicable to building the readout electronics in CMOS technology. An experimental characterisation programme was carried out, establishing the technology's sensitivity to radiation and the effectiveness of the hardening techniques developed. The very good results obtained made it possible to proceed to the realisation of the sensor's readout chain, which was fabricated, characterised, and qualified for space.
APA, Harvard, Vancouver, ISO, and other styles
19

Tang, Shu-sum. "Cephalometric airway measurements in class III skeletal deformity." Click to view the E-thesis via HKUTO, 2000. http://sunzi.lib.hku.hk/HKUTO/record/B38628065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

鄧樹森 and Shu-sum Tang. "Cephalometric airway measurements in class III skeletal deformity." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B38628065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lopes, Patrícia de Medeiros Loureiro. ""Validação das medidas lineares crânio-faciais por meio da tomografia computadorizada multislice em 3D"." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/23/23139/tde-28082006-184029/.

Full text
Abstract:
The objective of this research was to assess the precision and accuracy (validity) of linear craniofacial measurements on three-dimensional (3D) volume-rendered reconstructions from multislice computed tomography (CT). The study population consisted of 10 (ten) dry skulls, previously selected without distinction of ethnic group or sex, which were submitted to 16-slice multislice CT using 0.5 mm slice thickness and a 0.3 mm reconstruction interval. The data were then sent to an independent workstation with Vitrea software. Conventional craniofacial points were localized, and linear measurements were obtained on the 3D images by two previously calibrated examiners, twice each, independently. The physical measurements were obtained by a third examiner using a digital caliper. Data analysis compared inter- and intra-examiner measurements in 3D-CT, and image versus physical measurements from the dry skulls, using ANOVA (analysis of variance). There were no statistically significant differences between imaging and physical measurements, with p>0.6 for all measurements. In conclusion, all the linear craniofacial measurements were considered accurate and precise using a 3D volume-rendering technique by multislice CT.
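The ANOVA comparison used above can be sketched as a one-way F statistic across repeated measurement sets; a small F (relative to the critical value) indicates no significant difference between examiners or methods. The sample values below are invented, not the study's data:

```python
import numpy as np

def anova_f(*groups):
    """One-way ANOVA F statistic across two or more measurement sets."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    data = np.concatenate(groups)
    grand = data.mean()
    k, n = len(groups), data.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return float((ss_between / (k - 1)) / (ss_within / (n - k)))

# Invented repeated measurements of one inter-landmark distance (mm):
examiner1 = [42.1, 42.3, 41.9]
examiner2 = [42.0, 42.4, 42.0]
physical = [42.2, 42.2, 41.8]
print(round(anova_f(examiner1, examiner2, physical), 3))
```

Comparing this F against the F-distribution's critical value gives the p-value; the study's p>0.6 corresponds to F values far below significance.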
APA, Harvard, Vancouver, ISO, and other styles
22

Yeoh, Terence Eng Siong. "The Facet Satisfaction Scale: Enhancing the measurement of job satisfaction." Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc3899/.

Full text
Abstract:
Job satisfaction is an important job-related attitude that has been linked to various outcomes for both the organization and its employees. In spite of this, researchers of the construct disagree about how job satisfaction is defined and measured. This study proposes the use of the Facet Satisfaction Scale, a new scale of measurement for job satisfaction that is based on more recent definitions of the construct. Reliability and preliminary predictive validity studies were conducted in order to determine the utility of this scale. Next steps in scale development are discussed.
APA, Harvard, Vancouver, ISO, and other styles
23

Shokouhi, Shahriar B. "Automatic digitisation and analysis of facial topography by using a biostereometric structured light system." Thesis, University of Bath, 1999. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285316.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Yeoh, Terence Eng Siong Beyerlein Michael Martin. "The facet satisfaction scale enhancing the measurement of job satisfaction /." [Denton, Tex.] : University of North Texas, 2007. http://digital.library.unt.edu/permalink/meta-dc-3899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Montpetit, Alessandro. "Measurement and Separation of Sterol Glycosides in Biodiesel and FAME." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32978.

Full text
Abstract:
The major issue that hinders the widespread use of biodiesel is its poor cold-weather stability and operability. This is attributed to minor components identified as monoglycerides (MG), diglycerides (DG) and sterol glycosides (SG). There is currently no standard method to determine SG levels in biodiesel. A method to isolate and measure the SG concentration in biodiesel and FAME was first developed. This was accomplished by decompatibilizing SG from the biodiesel matrix using n-dodecane and purifying the solids using a Folch liquid-liquid extraction. The extracted SG was analyzed by GC-FID; the tricaprin internal standard was detected at 21.5 min and SG from 26-26 min. Recovery using this method was 100% ± 2.5% when 3 commercial canola biodiesel samples were spiked with 38 ppm SG and extracted. This method was used to measure the SG concentration of filtered FAME produced using 0.3 wt%, 0.5 wt% and 0.7 wt% catalyst at MeOH:Oil (mol/mol) ratios of 4:1, 5:1, 6:1, 7:1 and 9:1. The biodiesel produced was characterized according to ASTM D6584; MG, DG and TG decreased with increasing catalyst concentration and MeOH:Oil ratio. SG solubility in the reactive FAME was found to be lowest at high glycerol and catalyst concentrations. High levels of TG were found to solubilize SG in the reactive FAME. Finally, the solubility of SG in the reactive FAME increased when high ratios of methanol were used.
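The 100% ± 2.5% figure above is a standard spike-recovery calculation: the difference between the measured and baseline concentrations, as a percentage of the amount spiked in. A minimal sketch (the sample numbers are invented):

```python
def spike_recovery(measured_ppm, baseline_ppm, spiked_ppm):
    """Percent recovery of an analyte spike:
    (measured - baseline) / spiked * 100."""
    return (measured_ppm - baseline_ppm) / spiked_ppm * 100.0

# A sample with 2 ppm native SG, spiked with 38 ppm, measuring 39 ppm back:
print(round(spike_recovery(39.0, 2.0, 38.0), 1))  # → 97.4
```

Recoveries near 100% across replicate spiked samples are what validate an extraction method like the n-dodecane/Folch procedure described here.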
APA, Harvard, Vancouver, ISO, and other styles
26

Persson, Angelica, and Amanda Lindewald. "Extraoral 3D-scanning - conformity between extraoral 3D scanning and clinical measurements of the face." Thesis, Malmö universitet, Odontologiska fakulteten (OD), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-42624.

Full text
Abstract:
Aim: To use the extraoral scanner 3D Sense in practice and compare measurements on the scanned material with conventional, direct clinical measurements, in order to evaluate whether extraoral scanning can replace a clinical examination and extraoral 2D photography. Material & method: Fifteen adults at the Faculty of Odontology were recruited for the study. Five predetermined landmarks were marked on the faces of the subjects. Direct clinical measurements were performed between the landmarks of every subject and used as a reference. The subjects' faces were then scanned and the same distances were measured in the scans. Differences between the measurements of the two methods were assessed with a paired t-test. Intra- and inter-operator differences were calculated for all distances. Intraclass correlation coefficients (ICC) were used to describe to what extent measurements in the same group resemble each other. Results: Agreement between direct clinical measurements and measurements on scanned material varied, with mean differences of 0.22-5.13 mm. Intra- and inter-operator ICC was overall excellent. Conclusion: The distance between the landmarks pronasale (prn) and pogonion (pg) was the only one with no statistically significant difference between the two methods. The 3D Sense shows decreasing conformity to clinical measurements with increasing distances. Inter-operator ICC shows excellent values, and measuring on scanned material can be regarded as a reproducible method. The results indicate clinical acceptance for use of the 3D Sense for some purposes in odontology. The 3D Sense has been validated in vitro and analyzed in vivo, and the studies have established its adequacy in odontology.
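The paired t-test used above to compare the two methods can be sketched directly: the statistic is the mean of the per-subject differences divided by its standard error. The distances below are invented, not the study's data:

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for two measurements on the same subjects."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

clinical = [31.2, 45.0, 52.3, 60.1, 38.4]  # direct caliper distances (mm)
scanned = [31.0, 44.7, 52.8, 59.9, 38.2]   # same distances on the 3D scan
print(round(paired_t(clinical, scanned), 3))
```

A t value near zero (compared against the t-distribution with n-1 degrees of freedom) indicates no systematic difference between the scanner and the caliper, as found here for the prn-pg distance.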
APA, Harvard, Vancouver, ISO, and other styles
27

Weyrich, Tim Alexander. "Acquisition of human faces using a measurement-based skin reflectance model /." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Jurán, Antonín. "Efektivní obrábění nových konstrukčních keramických materiálů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228424.

Full text
Abstract:
This diploma thesis deals with new engineering ceramic materials from the point of view of their structure, properties, manufacture, classification, applications and the possibilities of machining them effectively. Ceramics are used in engineering applications more often than before, both as compact parts and as thin coatings on the surfaces of metal parts. The fast development of structural applications requires matching progress in machine-tool innovation and in the development of new methods for machining these materials effectively. The aim is to produce parts of the demanded shapes, dimensions and surface quality at affordable cost. Ceramics guarantee long-term durability and reliability. The last part of the thesis evaluates ceramic grinding tests from the point of view of cutting forces and the surface quality of the machined faces.
APA, Harvard, Vancouver, ISO, and other styles
29

Martinez, Carmen Ivette. "Study of photolytic interference on HO measurements by LIF-FAGE." PDXScholar, 1989. https://pdxscholar.library.pdx.edu/open_access_etds/3931.

Full text
Abstract:
For many years there has been great interest among the scientific community in the study of the hydroxyl radical, HO. This interest stems from the fundamental role played by this molecule in the photochemistry of the atmosphere, mainly as a cleansing agent of environmental pollutants. Knowing the concentration of the radical would enable scientists to corroborate current atmospheric models and to predict future trends in the atmosphere. Even though there is great interest in the determination of atmospheric concentrations of this molecule, the task has been very difficult. This is mainly due to the lack of a method sensitive enough to detect concentrations around 10⁶ molecules per cubic centimeter. The most accurate method presently available is laser-induced fluorescence using the fluorescence assay with gas expansion technique (LIF-FAGE). This method involves low-pressure excitation of HO from its ground state to its lowest electronic excited state and observation of the consequent fluorescence around 309 nm. The procedure is done at a pressure of 5 Torr to maximize the fluorescence lifetime of the radical and to minimize the interference of photolytic species. Background determination is achieved by chemical modulation using isobutane in a second channel of the same cell, which removes the HO signal. In this study, the level of ozone interference in LIF-FAGE has been assessed by calculating the relative population distribution of HO among its rotational levels and, from this, determining its temperature. When the laser passes through the excitation-detection cell it photolyses the ozone present, producing the highly reactive O(¹D). When this molecule reacts with water or with isobutane it produces HO, and this is the source of interference in the actual measurements.
In the determination of the relative population distributions of the different HO species, it was found that the naturally occurring HO has a thermal distribution with a temperature of about 300 K. The HO molecules produced from the reaction of O(¹D) with isobutane also showed a thermal distribution with a temperature of about 230 K. On the other hand, the HO produced from the reaction of O(¹D) with water did not show a thermal distribution. Two distinct temperatures were observed for this case: one around 200 K for values of K = 1 to 4, and the second one around 3000 K for values of K = 5 to 6. These values agree with previous experimental results for LIF methods by other authors except for the fact that the deviation from the first temperature determined by other authors starts at K = 6 or 7.
APA, Harvard, Vancouver, ISO, and other styles
30

Coyle, Monica Norton. "The New Jersey high school proficiency test in writing: a pragmatic face on an autonomous mode /." Access Digital Full Text version, 1992. http://pocketknowledge.tc.columbia.edu/home.php/bybib/11302136.

Full text
Abstract:
Thesis (Ed.D.) -- Teachers College, Columbia University, 1992.
Typescript; issued also on microfilm. Sponsor: Clifford Hill. Dissertation Committee: Lucy Calkins. Includes bibliographical references (leaves 195-201).
APA, Harvard, Vancouver, ISO, and other styles
31

Chang, Wen-Chia Claire. "Measuring the complexity of teachers' enactment of practice for equity: A Rasch model and facet theory-based approach." Thesis, Boston College, 2017. http://hdl.handle.net/2345/bc-ir:107345.

Full text
Abstract:
Thesis advisor: Larry H. Ludlow
Preparing and supporting teachers to enact teaching practice that responds to diversity, challenges educational inequities, and promotes social justice is a pressing yet daunting and complex task. More research is needed to understand how and to what extent teacher education programs prepare and support teacher candidates to enhance the achievement of all learners while challenging systematic inequity (Cochran-Smith, Ell, Ludlow, Grudnoff, & Aitken, 2014). One piece of empirical evidence needed is a measure that captures the extent to which teachers enact teaching practice for equity. This study developed an instrument, the Teaching Equity Enactment Scenario Scale (TEES), to measure the extent of equity-centered teaching practice by applying Rasch measurement theory (Rasch, 1960) and Guttman's facet theory (Borg & Shye, 1995). The research question addressed whether the TEES scale can measure teachers' self-reported enactment of practice for equity in a reliable, valid, and authentic manner. This study employed a three-phase design, comprising an extensive process of item development, a pilot study, and a final full-scale administration. Fifteen scenario-style items were developed to capture the enactment levels of six interconnected principles of teaching practice for equity. Using the Rasch rating scale model, the outcome was a 15-item TEES scale that reliably and validly measures increasing levels of teaching practice for equity, progressing through low, moderate, and high levels of enactment. The distribution of the scenarios confirmed their hypothesized order and the instrument development principles of Rasch measurement: unidimensionality, variation and a hierarchical order of the items, as well as a uniform continuum defining the construct. The scale also provides meaningful interpretations of what a raw score means regarding one's equity-centered teaching practice.
The overall findings suggest that the novel approach of combining Rasch measurement and facet theory can be successful in developing a scenario-style scale that measures a complex construct. Moreover, the scale can provide the evidence needed in research on preparing and supporting teachers to teach with a commitment to equity and social justice.
APA, Harvard, Vancouver, ISO, and other styles
32

Mousselon, Laure. "Radio Wave Propagation Measurements and Modeling for Land Mobile Satellite Systems." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/10155.

Full text
Abstract:
The performance of a mobile satellite communications link is conditioned by the characteristics of the propagation path between a satellite and mobile users. The most important propagation effect in land mobile satellite systems is roadside attenuation of the signals due to vegetation or urban structures. System designers should have the most reliable information about the statistics of the propagation channel in order to build reliable systems that can compensate for bad propagation conditions. In 1998, the Virginia Tech Antenna Group developed a simulator, PROSIM, to simulate a propagation channel in the case of roadside tree attenuation in land mobile satellite systems. This thesis describes some improvements to PROSIM, and the adaptation and validation of PROSIM for Digital Audio Radio Satellite systems operating at S-band frequencies. The performance of the simulator at S-band frequencies was evaluated through a measurement campaign conducted with the XM Radio signals at 2.33 GHz in various propagation environments. Finally, additional results on dual satellite systems and fade correlation are described.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Schindler, Frank Vincent. "Redistribution and fate of applied ¹⁵N-enriched urea under irrigated continuous corn production." Thesis, North Dakota State University, 1996. https://hdl.handle.net/10365/28973.

Full text
Abstract:
Understanding the redistribution and fate of N is essential for the justification of Best Management Practices (BMP). This project was conducted on a Hecla fine sandy loam (sandy, mixed, Aquic Haploboroll) soil at the BMP field site near Oakes, North Dakota. One objective of this investigation was to evaluate the residence times of NO₃⁻-N in 20 undisturbed lysimeters and its infiltration time through the soil profile to tile drains. Corn (Zea mays L.) was fertilized with 135 kg N ha⁻¹ as ¹⁵N-enriched urea plus 13.5 and 48.1 kg N ha⁻¹ preplant for 1993 and 1994, respectively. Urea-N was band applied to 20 and 10 undisturbed lysimeters at 2.0 and 5.93 atom percent (at. %) ¹⁵N in 1993 and 1994, respectively. The average residence time of NO₃⁻-N in the lysimeters was 11.7 months. Lysimeter and tile drainage indicate the presence of preferential pathways. Residence times of NO₃⁻-N depend on the frequency and intensity of precipitation events. Another objective was to determine what portion of the total N in the crop was from applied urea-N and what portion was from native soil N. Nitrogen plots received ¹⁵N enrichments of 4.25 and 5.93 at. % ¹⁵N in 1993 and 1994, respectively. At the end of the 1993 and 1994 growing seasons, 41.5% and 35.7% of the labeled fertilizer N remained in the soil profile, while the total recovery of applied ¹⁵N in the soil-plant system was 86.2% and 75.4%, respectively. Low recoveries of applied N may have been the result of volatilization from soil or aboveground plant biomass, denitrification, or preferential flow processes. Further research needs to be conducted with strict accounting of gaseous losses and the mechanism(s) responsible.
U.S. Bureau of Reclamation
APA, Harvard, Vancouver, ISO, and other styles
34

Walker, Hannah Marie. "Field measurements and analysis of reactive tropospheric species using the FAGE technique." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/5554/.

Full text
Abstract:
Measurements of OH and HO2 using the Fluorescence Assay by Gas Expansion (FAGE) technique were made during a series of nighttime and daytime flights over the UK in summer 2010 and winter 2011. OH was not detected above the instrument's limit of detection during any of the nighttime flights or during the winter daytime flights, placing upper limits on [OH] of 1.8 × 10⁶ molecule cm⁻³ and 6.4 × 10⁵ molecule cm⁻³ for the summer and winter flights, respectively. HO2 reached a maximum concentration of 3.17 × 10⁸ molecule cm⁻³ (13.6 pptv) during a nighttime flight on 20th July 2010. Analysis of the rates of reaction of O3 and NO3 with the alkenes measured indicates that the summer nighttime troposphere can be as important for the processing of VOCs as the winter daytime troposphere. Analysis of the instantaneous rate of production of HO2 from the reactions of O3 and NO3 with alkenes has shown that, on average, reactions of NO3 dominated nighttime production of HO2 during summer, and reactions of O3 dominated nighttime HO2 production during winter. Measurements of IO were made by laser-induced fluorescence (LIF) during a cruise between Singapore and Manila in November 2011. The mean IO mixing ratio was 0.8 pptv. No correlation was found between IO and sea surface temperature, salinity, air temperature, wind speed or concentrations of chlorophyll-a. Measurements of I2 and the sum of HOI + ICl during the cruise contributed to a steady-state analysis of the IO measurements. Production of IO was dominated by photolysis of I2, with a smaller but significant contribution from photolysis of HOI. Reasonable agreement was found between measurements of IO made by LIF, MAX-DOAS, and satellite-based DOAS. A laser-induced phosphorescence instrument for detection of glyoxal is in development. The results of initial testing are reported here.
The instrument, which will be deployed for field measurements in Cape Verde in 2014, has high sensitivity and a low 1-minute limit of detection (2.5 × 10⁷ molecule cm⁻³ or 6.8 pptv), enabling detection of low ambient mixing ratios of glyoxal.
APA, Harvard, Vancouver, ISO, and other styles
35

Amédro, Damien. "Atmospheric measurements of OH and HO2 radicals using FAGE : Development and deployment on the field." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10083/document.

Full text
Abstract:
HOx (= OH + HO2) radicals play a central role in the degradation of hydrocarbons in the troposphere. Reaction of OH with hydrocarbons leads, in the presence of NOx, to the formation of secondary pollutants such as O3. Due to its high reactivity, the concentration of OH radicals (<1 ppt) and its lifetime (<1 s) are very low. In order to validate atmospheric chemistry models, the development of highly sensitive instruments for the measurement of OH and HO2 is needed. An instrument based on the FAGE technique (Fluorescence Assay by Gas Expansion) was developed at the University of Lille for the simultaneous measurement of HOx radicals. The limits of detection for OH and HO2 are 4 × 10⁵ cm⁻³ and 5 × 10⁶ cm⁻³, respectively, for 1 min integration time, appropriate for ambient measurements. The instrument was deployed in 4 field campaigns in different environments: simulation chamber, rural, suburban and indoor. The Lille FAGE was validated during 2 intercomparison measurements, in an atmospheric chamber and in ambient air. In parallel, the FAGE set-up was adapted for the measurement of OH reactivity, which is a measure of the total loss of OH radicals through the reaction of all chemical species with OH. Ambient air is sampled through a photolysis cell where OH is artificially produced, and its decay from reaction with the reactants present in ambient air is recorded by LIF in the FAGE. The OH reactivity system was deployed during an intercomparison measurement and used for the study of the reaction between NO2* and H2O as a source of OH.
APA, Harvard, Vancouver, ISO, and other styles
36

Diekema, Emily D. "Acoustic Measurements of Clear Speech Cue Fade in Adults with Idiopathic Parkinson Disease." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1460063159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Meredith, Laura Kelsey 1982. "Field measurement of the fate of atmospheric H₂ in a forest environment : from canopy to soil." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79283.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 245-254).
Atmospheric hydrogen (H₂), an indirect greenhouse gas, plays a notable role in the chemistry of the atmosphere and ozone layer. Current anthropogenic emissions of H₂ are substantial and may increase with its widespread use as a fuel. The H₂ budget is dominated by the microbe-mediated soil sink, and although its significance has long been recognized, our understanding is limited by the low temporal and spatial resolution of traditional field measurements. This thesis was designed to improve the process-based understanding of the H₂ soil sink with targeted field and lab measurements. In the field, ecosystem-scale flux measurements of atmospheric H₂ were made both above and below the forest canopy for over a year using a custom, automated instrument at the Harvard Forest. H₂ fluxes were derived using a flux-gradient technique from the H₂ concentration gradient and the turbulent eddy coefficient. A ten-fold improvement in precision was attained over traditional systems, which was critical for quantifying the whole-ecosystem flux from small H₂ concentration gradients above the turbulent forest canopy. Soil uptake of atmospheric H₂ was the dominant process in this forest ecosystem. Rates peaked in the summer and persisted at reduced levels in the winter season, even across a 70 cm snowpack. We present correlations of the H₂ flux with environmental variables (e.g., soil temperature and moisture). This work is the most comprehensive attempt to elucidate the processes controlling biosphere-atmosphere exchange of H₂. Our results will help reduce uncertainty in the present-day H₂ budget and improve projections of the response of the H₂ soil sink to global change. In the lab, we isolated microbial strains of the genus Streptomyces from Harvard Forest and found that the genetic potential for atmospheric H₂ uptake predicted H₂ consumption activity. Furthermore, two soil Actinobacteria were found to utilize H₂ only during specific lifecycle stages.
The lifecycle of soil microorganisms can be quite complex as an adaptation to variable environmental conditions. Our results indicate that H₂ may be an important energetic supplement to soil microorganisms under stress. These results add to the understanding of the connections between the environment, organismal life cycle, and soil H₂ uptake.
by Laura Kelsey Meredith.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Shona Cowan. "Atmospheric measurements of OH and HO₂ using the FAGE technique : instrument development and data analysis." Thesis, University of Leeds, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Nelson, Bernard A. "Fade slope measurements and modeling in the Ku- and Ka-bands using the Olympus satellite." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/45081.

Full text
Abstract:
The Satellite Communications Group (SatComm Group) at Virginia Tech conducted a propagation research experiment with the intent of analyzing the effects of signal propagation in the Ku- and Ka-bands. 12, 20, and 30 GHz beacons were transmitted from the Olympus satellite and received at earth stations located in Blacksburg, Virginia. Data were collected from August 1990 through August 1992. One year of useable data were extracted from the measurements for analysis. The useable data set included January through May 1991, September through December 1991, and June through August 1992. This thesis presents fade slope statistics that were generated from the one year data that were obtained during the Olympus experiment. A background on fade slope is presented and includes a theoretically derived expression for fade slope, a comparison of fade slope calculation techniques, and a discussion of previous propagation experiments that yielded fade slope results. The Olympus experiment measurement techniques are discussed and the fade slope calculation method used with the Olympus data is presented. Fade slope statistical results are discussed and the development of an empirical model derived from the Olympus fade slope statistics is detailed. The empirical model predicts fade slope occurrence as a function of fade slope for any frequency between 12 and 30 GHz.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
40

Bonomi, Mattia. "Facial-based Analysis Tools: Engagement Measurements and Forensics Applications." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/271342.

Full text
Abstract:
Recent advancements in technology lead to easy acquisition and spreading of multi-dimensional multimedia content, e.g. videos, which in many cases depict human faces. From such videos, valuable information describing the intrinsic characteristics of the recorded user can be retrieved: the features extracted from the facial patch are relevant descriptors that allow for the measurement of the subject's emotional status or the identification of synthetic characters. One of the emerging challenges is the development of contactless approaches based on face analysis aimed at measuring the emotional status of the subject without placing sensors that limit or bias his experience. This raises even more interest in the context of Quality of Experience (QoE) measurement, i.e. the measurement of the user's emotional status when subjected to a multimedia content, since it allows for retrieving the overall acceptability of the content as perceived by the end user. Measuring the impact of a given content on the user can have many implications from both the content-producer and end-user perspectives. For this reason, we pursue the QoE assessment of a user watching multimedia stimuli, i.e. 3D movies, through the analysis of his facial features acquired by means of contactless approaches. More specifically, the user's Heart Rate (HR) was retrieved by applying computer vision techniques to the facial recording of the subject and then analysed in order to compute the level of engagement. We show that the proposed framework is effective for long video sequences, being robust to facial movements and illumination changes. We validate it on a dataset of 64 sequences where users observe 3D movies selected to induce variations in users' emotional status.
On the one hand, understanding the interaction between the user's perception of the content and his cognitive-emotional state offers many opportunities to content producers, who may influence people's emotional statuses according to needs driven by political, social, or business interests. On the other hand, the end user must be aware of the authenticity of the content being watched: advancements in computer rendering have allowed for the spreading of fake subjects in videos. Because of this, as a second challenge we target the identification of CG characters in videos by applying two different approaches. We first exploit the idea that fake characters do not present any pulse-rate signal, while humans' pulse rate is expressed by a sinusoidal signal. The application of computer vision techniques on a facial video allows for the contactless estimation of the subject's HR, thus leading to the identification of signals that lack strong sinusoidality, which represent virtual humans. The proposed pipeline allows for a fully automated discrimination, validated on a dataset consisting of 104 videos. Secondly, we make use of facial spatio-temporal texture dynamics that reveal the artefacts introduced by computer rendering techniques when creating a manipulation, e.g. face swapping, on videos depicting human faces. To do so, we consider multiple temporal video segments on which we estimate multi-dimensional (spatial and temporal) texture features. A binary decision on the joint analysis of such features is applied to strengthen the classification accuracy. This is achieved through the use of Local Derivative Patterns on Three Orthogonal Planes (LDP-TOP). Experimental analyses on state-of-the-art datasets of manipulated videos show the discriminative power of such descriptors in separating real and manipulated sequences and identifying the creation method used.
The main finding of this thesis is the relevance of facial features in describing intrinsic characteristics of humans. These can be used to retrieve significant information such as the physiological response to multimedia stimuli or the authenticity of the human being itself. The application of the proposed approaches to benchmark datasets also returned good results, thus demonstrating real advancements in this research field. In addition, these methods can be extended to different practical applications, from autonomous-driving safety checks to the identification of spoofing attacks, from medical check-ups during sports to the measurement of users' engagement when watching advertising. Because of this, we encourage further investigations in this direction, in order to improve the robustness of the methods, thus allowing for application to increasingly challenging scenarios.
APA, Harvard, Vancouver, ISO, and other styles
41

Lundmark, Annika. "Monitoring transport and fate of de-icing salt in the roadside environment : Modelling and field measurements." Doctoral thesis, KTH, Mark- och vattenteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Filho, Waldemar Pacheco de Oliveira. "Utilização de cromatografia em fase gasosa para a determinação de antioxidantes sintéticos em biodiesel: uma abordagem metrológica." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/75/75135/tde-30072013-091048/.

Full text
Abstract:
Biodiesel is a renewable fuel composed of alkyl esters obtained from vegetable oils and/or animal fats. Because of its composition, it is susceptible to oxidation reactions, which compromise its quality and fitness for use. Currently, biodiesel manufacturers have used synthetic antioxidants in production processes, both to prevent oxidation problems and to bring the product into compliance with quality requirements. Recent studies have indicated TBHQ (tert-butyl hydroquinone) as the synthetic antioxidant with the best performance for biodiesel, but beyond this antioxidant, national manufacturers have also adopted BHT (butylated hydroxytoluene). However, there is currently no analytical method widespread among the economic agents involved in the biodiesel chain for the quantification of synthetic antioxidants in finished products. In principle, this determination can be an important tool in studies which compare the oxidation of biodiesel with the concentration of a particular antioxidant added at the time of its production, over periods of storage, and also in studies assessing the possible environmental impacts of the use of these products, which have proven toxicity and whose levels in the food industry are controlled by the Ministry of Health. Among the analytical techniques used in the development of methods for the quantification of synthetic antioxidants, voltammetry (specifically for TBHQ) and liquid chromatography stand out, but neither has yet shown potential for widespread use by market agents. In this work, gas chromatography was used to develop a new method for the determination of the TBHQ and BHT antioxidants used by biodiesel manufacturers.
Initially, tests were carried out with antioxidants provided by a manufacturer, in order to verify their use as calibration standards, by gas chromatography with mass spectrometric detection. The method itself was then developed, using gas chromatography with a flame ionization detector. The method was optimized, checking for possible limitations with respect to the analysed samples, and then validated according to the parameters specified in international protocols. In this validation, statistical methods appropriate to the developed method were applied, and the validation results were critically analyzed according to previously established acceptance criteria. Recovery values, which ranged between 92% and 106%, stand out. In addition, the measurement uncertainty was estimated over the working range, using concepts presented in international guides on the estimation of measurement uncertainty. The chosen linear regression model was found to have a direct impact on the measurement uncertainty values, which ranged between 1% and 40% depending on the region of the working range. Finally, the applicability of the new method was verified on commercial samples, relating the results obtained to the oxidation stability values of those samples. Here, TBHQ also presented the best antioxidant performance for bringing biodiesel into compliance with quality specifications, confirming the data available in the literature.
APA, Harvard, Vancouver, ISO, and other styles
43

Foster, Garett C. "Faking is a FACT: Examining the Susceptibility of Intermediate Items to Misrepresentation." Bowling Green State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1487256025031404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

RAINA, RAFFAELLA. "Limiti e criticità nella fase di progettazione del Balanced Scorecard." Doctoral thesis, Università Cattolica del Sacro Cuore, 2013. http://hdl.handle.net/10280/1796.

Full text
Abstract:
The aim of this thesis is to analyze the critical issues and conceptual limitations that arise in the logical design phase within the broader process of implementing and maintaining the Balanced Scorecard. The design phase of the Balanced Scorecard is essential for understanding the strategic orientation of the whole enterprise and the priorities it intends to set, and it is precisely there that the most critical aspects of the system emerge, which are then, where possible, improved through dynamic and continuous monitoring. Many authors, including Kaplan and Norton (1993, 1996, 2001, 2006), Alberti (2000), Baraldi (2005), Bourne et al. (2000), Falduto and Ruscica (2005), De Marco, Salvo and Lanzani (2004), Marr and Neely (2001), Niven (2002, 2006), Lohman et al. (2004), and Simons (2000), have focused on how to implement the Balanced Scorecard, but few have analyzed which critical issues are hidden in these choices and how they are managed to cope with the dynamism of an organization that often operates in a highly competitive environment. The subject is considered relevant not only in the literature but also in practice: faced with the failures of performance measurement systems, the questions arise of whether there are critical points that deserve more attention, which factors discriminate between success and failure, and how best to manage them.
APA, Harvard, Vancouver, ISO, and other styles
45

Takiy, Aline Emy. "Análise teórica de uma nova técnica de processamento de sinais interferométricos baseada na modulação triangular da fase óptica /." Ilha Solteira : [s.n.], 2010. http://hdl.handle.net/11449/87038.

Full text
Abstract:
Advisor: Cláudio Kitano
Committee: Ricardo Tokio Higuti
Committee: Luiz Antonio Perezi Marçal
This work studies laser interferometry, a technique well suited to measuring physical quantities with extremely high sensitivity. In an optical interferometer, the information about the device under test is carried in the phase of the light. A photodiode transfers this information from the optical domain to the electrical domain, where it can be demodulated using the various techniques available in the literature for detecting phase-modulated signals. Emphasis is given to a new self-consistent, highly sensitive optical phase demodulation method. The method employs modulation by a triangular waveform and is based on analysis of the spectrum of the photodetected signal; it can extend the dynamic demodulation range to values as high as those of the classical methods. Dynamic computational simulations of optical interferometers using this method are run in Simulink, taking electronic white-noise voltages into account, and demonstrate the efficiency of the method when compared with theoretical data obtained in Matlab. Experimental validation of the method is performed with the aid of an electro-optic amplitude modulator whose phase characteristics can be predicted analytically: a polarimetric sensor based on a lithium niobate crystal, in which the optical phase difference induced by the applied voltage can be determined by spectral analysis, as in the new method described in this work. A low-cost homodyne Michelson interferometer is implemented, and the efficiency of the new optical phase demodulation method is evaluated through tests with flextensional piezoelectric actuators and manipulators whose linearity characteristics are well known. The experimental results agree with the theoretical analysis and show that this method is more efficient than the classical methods.
Master's
APA, Harvard, Vancouver, ISO, and other styles
46

Dang, Mei-Zhen. "Interplay of spin structures, hyperfine magnetic field distributions and chemical order-disorder phenomena in face centered cubic Fe-Ni alloys studied by Mössbauer spectroscopy measurements and Monte Carlo simulations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq20995.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dang, Mei-Zhen. "Interplay of spin structures, hyperfine magnetic field distributions and chemical order-disorder phenomena in face centered cubic Fe-Ni alloys studied by Mossbauer spectroscopy measurements and Monte Carlo simulations." Thesis, University of Ottawa (Canada), 1996. http://hdl.handle.net/10393/10020.

Full text
Abstract:
The magnetic properties of fcc Fe-Ni alloys are studied by Mössbauer spectroscopy and Monte Carlo (MC) simulations. Both macroscopic properties (magnetization, paraprocess susceptibility, Curie points, etc.) and microscopic properties (hyperfine fields) are used to test simple local moment models under various assumptions. A non-linear composition dependence of the average hyperfine field is observed by Fe-57 Mössbauer spectroscopy. A microscopic vector hyperfine field model is proposed and used to model the measured average hyperfine fields and hyperfine field distributions (HFDs) in the collinear ferromagnetic Fe-Ni alloys ($y \le 0.45$ in Fe$_y$Ni$_{1-y}$). Modeling the liquid helium temperature average hyperfine fields and HFDs resolves the coupling parameters in the proposed hyperfine field model:$$\langle\vec H_{k}\rangle_{T}=A\langle\vec\mu_{k}\rangle_{T}+B\sum_{j}\langle\vec\mu_{j}\rangle_{T}.$$To the extent that chemical short range order can be neglected in our rapidly quenched samples, the coupling parameters are $A=A_0+A_1y$ ($A_0=89$ kOe/$\mu_B$, $A_1=-20$ kOe/$\mu_B$) and $B=B_0+B_1y$ ($B_0=4.4$ kOe/$\mu_B$, $B_1=3.2$ kOe/$\mu_B$). MC simulations show the success and the limits of a simple local moment model in characterizing the bulk magnetic properties of Fe-Ni. A new approach for simulating HFDs is developed: it combines MC simulation of the spin structure with the above phenomenological hyperfine field model for the site-specific hyperfine field values. Using this method, we calculated spin structures and HFDs in Fe-Ni alloys at different compositions and temperatures. Finally, the interplay between the magnetic and the atomic ordering phenomena is studied in FeNi$_3$, FeNi and Fe$_3$Ni, by considering the magnetic and chemical interactions simultaneously using MC simulations.
Several new features arise that are not predicted by mean-field theory or by MC simulations with chemical interactions only: (1) chemical order can be induced where chemical interactions alone predict no chemical order, (2) chemical segregation can be induced where chemical interactions alone predict no chemical segregation, (3) FeNi$_3$ and Fe$_3$Ni are found to have significantly different chemical ordering temperatures, whereas chemical interactions alone lead to equal ordering temperatures, (4) chemical ordering temperatures are significantly shifted from their chemical-interactions-only values, even when the chemical ordering temperature is larger than the magnetic ordering temperature, (5) abrupt steps can occur in the spontaneous magnetization at the chemical ordering temperature when the latter is smaller than the magnetic ordering temperature, and (6) nonlinear relations arise between the chemical ordering temperature and the chemical exchange parameter $U \equiv 2U_{FeNi}-U_{FeFe}-U_{NiNi}$, where the $U_{ij}$ are the near-neighbour pair-wise chemical bonds.
APA, Harvard, Vancouver, ISO, and other styles
48

Takiy, Aline Emy [UNESP]. "Análise teórica de uma nova técnica de processamento de sinais interferométricos baseada na modulação triangular da fase óptica." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/87038.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
This work studies laser interferometry, a technique well suited to measuring physical quantities with extremely high sensitivity. In an optical interferometer, the information about the device under test is carried in the phase of the light. A photodiode transfers this information from the optical domain to the electrical domain, where it can be demodulated using the various techniques available in the literature for detecting phase-modulated signals. Emphasis is given to a new self-consistent, highly sensitive optical phase demodulation method. The method employs modulation by a triangular waveform and is based on analysis of the spectrum of the photodetected signal; it can extend the dynamic demodulation range to values as high as those of the classical methods. Dynamic computational simulations of optical interferometers using this method are run in Simulink, taking electronic white-noise voltages into account, and demonstrate the efficiency of the method when compared with theoretical data obtained in Matlab. Experimental validation of the method is performed with the aid of an electro-optic amplitude modulator whose phase characteristics can be predicted analytically: a polarimetric sensor based on a lithium niobate crystal, in which the optical phase difference induced by the applied voltage can be determined by spectral analysis, as in the new method described in this work. A low-cost homodyne Michelson interferometer is implemented, and the efficiency of the new optical phase demodulation method is evaluated through tests with flextensional piezoelectric actuators and manipulators whose linearity characteristics are well known. The experimental results agree with the theoretical analysis and show that this method is more efficient than the classical methods.
APA, Harvard, Vancouver, ISO, and other styles
49

Ingwersen, Joachim [Verfasser], and O. [Akademischer Betreuer] Richter. "The Environmental Fate of Cadmium in the Soils of the Waste Water Irrigation Area of Braunschweig - Measurement, Modelling and Assessment - / Joachim Ingwersen ; Betreuer: O. Richter." Braunschweig : Technische Universität Braunschweig, 2001. http://d-nb.info/1175832022/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Friberg, Annika. "Interaktionskvalitet - hur mäts det?" Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20810.

Full text
Abstract:
Technical developments have led to massive amounts of information being transmitted at high speeds, and we must learn to handle this flow. To maximize the benefits of new technologies and avoid the problems that this enormous information flow brings, interaction quality should be studied. Interfaces must be adapted to the user, because the user cannot adapt to, and sort through, excessively large amounts of information; we must develop systems that make people more efficient when using interfaces. Adapting interfaces to the user's needs and limitations requires knowledge of human cognition. When cognitive workload is studied, it is important to use a technique that is as flexible, easily accessible and non-intrusive as possible in order to obtain objective measurements; at the same time, reliability is of the utmost importance. Designing interfaces with high interaction quality requires a technique for evaluating them. The aim of this thesis is to establish a measurement method well suited to measuring interaction quality. For measuring interaction quality, a combination of subjective and physiological methods is recommended: functional near-infrared spectroscopy, a physiological method that measures brain activity using light sources and detectors attached over the frontal lobe; electrodermal activity, a physiological method that measures brain activity using electrodes placed over the scalp; and the NASA Task Load Index, a subjective, multidimensional method based on card sorting that measures perceived cognitive workload on a continuous scale. Measurement with these methods can lead to increased interaction quality in interactive, physical and digital interfaces. An estimate of interaction quality can help minimize interaction errors, thereby improving the user's experience of interaction.
APA, Harvard, Vancouver, ISO, and other styles