Dissertations / Theses on the topic 'Analytical variability'

Consult the top 27 dissertations / theses for your research on the topic 'Analytical variability.'


1

Gräf, Michael. "Two-Dimensional Analytical Modeling of Tunnel-FETs." Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/450516.

Full text
Abstract:
Based on a band-to-band current transport mechanism, the Tunnel-FET is able to overcome the physical subthreshold-slope limitation of the MOSFET of 60 mV/dec. It has therefore become, in recent years, one of the most promising devices to succeed the classical MOSFET. This thesis describes all the steps necessary to analytically model a double-gate Tunnel-FET. The model includes a two-dimensional electrostatic solution in all device regions, which even enables hetero-junction device simulations. Performance-limiting Gaussian-shaped doping profiles at the channel junctions are taken into account for realistic device behavior. Expressions for the band-to-band and trap-assisted-tunneling (TAT) probabilities are implemented by a quasi-two-dimensional WKB approach. The device current is calculated based on Landauer's transmission theory. The model is valid for short-channel devices and stays in good agreement with TCAD Sentaurus simulation data and with the provided measurements. A general model for random dopant fluctuations is introduced, which predicts characteristic device influences on the output current and threshold voltage. The model is applied to MOSFET as well as TFET devices.
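The band-to-band tunneling probability in such models is commonly approximated by a WKB integral over the barrier; a minimal numerical sketch of the triangular-barrier form is shown below. This is an illustrative textbook approximation, not the thesis's actual quasi-two-dimensional model, and the material parameters are assumed values.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
Q = 1.602176634e-19     # elementary charge, C
M0 = 9.1093837015e-31   # electron rest mass, kg

def wkb_btbt_probability(e_g_ev, m_eff_rel, field_v_per_m):
    """Triangular-barrier WKB approximation of the band-to-band tunneling
    probability: T = exp(-4*sqrt(2*m*)*Eg^(3/2) / (3*q*hbar*F))."""
    e_g = e_g_ev * Q          # band gap in joules
    m_eff = m_eff_rel * M0    # effective tunneling mass in kg
    exponent = -4.0 * math.sqrt(2.0 * m_eff) * e_g**1.5 / (3.0 * Q * HBAR * field_v_per_m)
    return math.exp(exponent)

# A higher junction field thins the barrier, so T rises steeply
# (this steepness is what lets a TFET beat the 60 mV/dec limit).
t_low = wkb_btbt_probability(1.12, 0.2, 1e8)   # illustrative Si-like values
t_high = wkb_btbt_probability(1.12, 0.2, 2e8)
assert t_high > t_low
```

Doubling the field here raises the probability by several orders of magnitude, illustrating why the tunneling current depends so strongly on the electrostatics at the source-channel junction.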
2

Germani, Élodie. "Exploring and mitigating analytical variability in fMRI results using representation learning." Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS031.

Full text
Abstract:
In this thesis, we focus on the variations induced by different analysis methods, also known as analytical variability, in brain imaging studies. This phenomenon is now well known in the community, and our aim is to better understand the factors leading to this variability and to find solutions to account for it. To do so, I analyse data and explore the relationships between the results of different methods. At the same time, I study the constraints related to data reuse and propose solutions based on artificial intelligence to build more robust studies.
3

Chatfield, Marion J. "Uncertainty of variance estimators in analytical and process variability studies." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/422240/.

Full text
Abstract:
This thesis demonstrates that the half-t distribution is the prior of choice for estimating the uncertainty of variance estimators in routine analysis of analytical and process variance-components studies. Industrial studies are often performed to estimate sources of variation, e.g. to improve and quantify measurement or process capability. Understanding the uncertainty of those estimators is important, especially for small studies. A Bayesian analysis is proposed, providing a flexible methodology which easily copes with the complex and varied nature of the studies and the varied quantities of interest. The prior is a fundamental component of a Bayesian analysis. The choice of prior is appraised and the coverage of the credible intervals obtained using six families of priors is assessed. A half-t prior (with several degrees of freedom) on the standard deviation is recommended in preference to a uniform or half-Cauchy prior when some information exists on the magnitude of variability 'core' to the process or analytical method. Whilst a half-t prior has been proposed previously, extensive simulation demonstrates that it is the prior of choice for estimating the uncertainty of variance estimators in routine analysis of analytical and process variation studies. The coverage of 95% credible intervals for variance components and total variance is approximately 93% or above across a range of realistic scenarios. Other priors investigated, including Jeffreys', a FLAT prior and the inverse gamma distributions on stratum variances available in PROC MIXED in the SAS/STAT® software, are less satisfactory. This evaluation is novel: for one-way variance-component designs there is very limited evaluation of the half-t prior when estimating the uncertainty of the variance-component estimators; for two-way or more complex designs, none has been found.
Since the coverage issues arose primarily for the mid-level variance component, evaluation of designs more complex than one-way is important. Highest posterior density intervals are recommended, the metric of the parameter being important. Additionally, a scale based on the intra-class correlation coefficient is proposed for plotting the credible intervals.
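A half-t prior on a standard deviation can be sampled as the absolute value of a scaled Student-t variate; the sketch below illustrates its weakly informative shape. The scale and degrees of freedom are illustrative assumptions, not the specific values recommended in the thesis.

```python
import numpy as np

def sample_half_t(n, df, scale, rng):
    """Draw n samples from a half-t prior on a standard deviation:
    |scale * T| where T is Student-t with `df` degrees of freedom."""
    return np.abs(scale * rng.standard_t(df, size=n))

rng = np.random.default_rng(42)
sigma = sample_half_t(100_000, df=4, scale=1.0, rng=rng)

# The half-t keeps most mass near the chosen scale but has heavier
# tails than a half-normal of the same scale, so it lets the data
# pull a variance component upward when the study supports it.
half_normal = np.abs(rng.normal(0.0, 1.0, size=100_000))
assert np.quantile(sigma, 0.99) > np.quantile(half_normal, 0.99)
assert (sigma >= 0).all()
```

In a full analysis such draws would serve as the prior on each variance component's standard deviation, with the posterior then summarised by highest-posterior-density intervals as the abstract recommends.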
4

Anderson, Neil R. "An investigation of the pre-analytical variability in laboratory testing and its influence on result interpretation and patient management." Thesis, University of Warwick, 2018. http://wrap.warwick.ac.uk/108557/.

Full text
Abstract:
Interpretation of laboratory tests in clinical practice is based on an understanding of the disease process within or between individuals. This is reflected in the variability of pathology results compared with the previous result or against the reference range, which comprises intrinsic pathophysiological changes together with variation associated with in vitro changes to the sample. My work concerns the identification and minimisation of result variation in the pre-analytical phase, which accounts for 60-70% of the errors associated with laboratory testing. The first project of my thesis is based on four studies that consider the in vitro stability of parathyroid hormone (PTH) and C-reactive protein (CRP), in which significant sample degradation is observed depending on sample tube type, the anticoagulant used and time to separation. The second project considers ethnic variation as a source of intra-individual variation, specifically intra-individual ethnic variation in total cholesterol (TC) and high-density lipoprotein cholesterol (HDLC); significant differences in HDLC were observed between Caucasian and Indo-Asian individuals. In addition, I investigated the relationship between low maternal vitamin B12 concentrations in Caucasian women and cord blood cholesterol. The third project considered variation in laboratory results due to pre-existing conditions causing interference in common laboratory tests. I published on the effect of lipaemia on common laboratory tests, showing that lipaemia does have a significant effect. A following study found that the raised prolactin seen in rheumatoid arthritis is not artefactual but due to changes in cross-reactivity of prolactin subtypes. The final paper of this project shows, through a collection of case studies, falsely elevated serum calcium levels in patients with paraproteinaemia.
5

XHELAJ, ANDI. "Downburst Wind Field Reconstruction by means of a 2D Analytical Model and Investigation of the Parameter’s Variability through an Ensemble Approach." Doctoral thesis, Università degli studi di Genova, 2022. https://hdl.handle.net/11567/1097493.

Full text
Abstract:
A “downburst” is defined as a diverging wind system that occurs when a strong downdraft induces an outflow of damaging winds on or near the ground. Severe wind damage in many parts of the world is often due to thunderstorm outflows, and knowledge of them is therefore relevant for structural safety and design wind speed evaluation. Nevertheless, there is not yet a shared model for thunderstorm outflows and their actions on structures. In this work, an analytical model that simulates the horizontal mean wind velocity originating from a travelling downburst is proposed. The horizontal wind velocity is expressed as the vector sum of three independent components: the stationary radial velocity generated by an impinging jet over a flat surface, the downdraft translating velocity, which corresponds to the parent cloud motion, and the boundary-layer background wind field at the surface in which the downburst is immersed. A parametric analysis is also developed and coupled with the analytical model to investigate two observed downburst events and extract their main parameters, e.g. downdraft diameter, touch-down position, translating downdraft speed and direction, intensity and decay period, in order to reconstruct the space-time evolution of these events. Due to the large computational cost of reconstructing a single downburst wind field, a novel strategy is implemented to speed up the process. Two metaheuristic optimization algorithms are used, namely Teaching-Learning-Based Optimization and Differential Evolution. The metric used to evaluate each algorithm's efficiency is based on the convergence behaviour of the objective function towards the best solution as the number of iterations increases. The decision-variable parameters (e.g. downdraft diameter, touch-down position, translating downdraft speed and direction, intensity and decay period) that minimize the objective function are very important in wind engineering, since their knowledge allows statistical analysis of the intense wind fields generated during downburst events, and therefore makes it possible to better define the actions that these extreme events exert on structures. Lastly, the proposed model was validated against a strong downburst event that took place in Sânnicolau Mare (Romania) during the summer of 2021. This event was accompanied by hail of 2-3 cm in size, and the hail near the surface was driven by the downburst wind, meaning that the horizontal velocity of the ice projectiles near the surface was less than or equal to the horizontal downburst wind velocity. After this strong event, a damage survey was carried out in collaboration between the University of Genoa (Italy) and the University of Bucharest (Romania), identifying the locations of buildings in Sânnicolau Mare that suffered hail damage during the event. The analytical model was used to reproduce the recorded wind speed and direction of the severe downburst. Using the simulated wind field, the simulated damage “footprint” (i.e., the maximum wind speed that occurred at a given place at any time during the passage of the downburst) was calculated. The simulated footprint matched the areas that suffered hail damage to a very good extent, and consequently permitted validation of the proposed analytical model.
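The vector summation of the three velocity components can be sketched kinematically as below. The radial outflow profile used here (linear growth to a radius of maximum wind, then 1/r decay) is a generic placeholder, not the thesis's calibrated impinging-jet profile, and all parameter values are illustrative.

```python
import math

def downburst_velocity(x, y, cx, cy, v_r_max, r_max, v_trans, v_bg):
    """Horizontal wind vector at (x, y) as the sum of three components:
    radial outflow from the downburst centre (cx, cy), storm translation
    v_trans, and background wind v_bg (2-tuples, m/s).  The radial
    profile is a simple placeholder: linear up to r_max, then 1/r decay."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy) or 1e-9           # avoid division by zero at the centre
    v_rad = v_r_max * (r / r_max if r <= r_max else r_max / r)
    ux, uy = dx / r, dy / r                  # unit vector pointing radially outward
    return (v_rad * ux + v_trans[0] + v_bg[0],
            v_rad * uy + v_trans[1] + v_bg[1])

# On the advancing side the outflow, translation and background wind add;
# on the trailing side the outflow opposes them and they partly cancel.
ahead = downburst_velocity(1200.0, 0.0, 0.0, 0.0, 30.0, 1000.0, (5.0, 0.0), (3.0, 0.0))
behind = downburst_velocity(-1200.0, 0.0, 0.0, 0.0, 30.0, 1000.0, (5.0, 0.0), (3.0, 0.0))
assert ahead[0] > abs(behind[0])
```

Evaluating the maximum of such a field over the event duration at each ground location is what produces the damage "footprint" described in the abstract.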
6

Kim, Dong Yeub. "An Analytical Study of the Short-run Variability of Korea's Balance of payments, 1961-85: Application of Keynesian and Monetary Approaches to the Problem." DigitalCommons@USU, 1989. https://digitalcommons.usu.edu/etd/4104.

Full text
Abstract:
The relationships among the balance of payments and other macroeconomic variables in the Korean economy for the period 1961-85 are analyzed in this study. Theoretical studies on the effects of government policies on the economy and the balance of payments were conducted under both the Keynesian and monetary approaches. The Keynesian approach concentrates on the commodity and capital market adjustment factors and does not focus on the money market factors, whereas the monetary approach considers balance of payments adjustments as a symptom of money market disequilibrium alone. The basic assumptions of those two approaches, taken separately, are not fully relevant to the Korean economy, which has unemployed resources, a high proportion of non-traded goods to traded goods, and monetary effects of balance of payments changes. Therefore, a model combining monetary and real factors to explain the short-run behavior of Korea's balance of payments in a single framework is developed. The empirical results of the combined model show that its explanatory power is much higher than either of the two models taken separately. For balance of payments adjustment policy in Korea during the period 1961-85, fiscal and foreign exchange rate policy instruments were found to be very effective in the short run, but monetary policy instruments were not.
7

Fritsch, Clément. "Étude de la variabilité inter et intra spécifique des extractibles présents dans les écorces de résineux et de feuillus exploités industriellement dans le nord-est de la France." Electronic Thesis or Diss., Université de Lorraine, 2021. https://docnum.univ-lorraine.fr/ulprive/DDOC_T_2021_0300_FRITSCH.pdf.

Full text
Abstract:
This work focuses on the inter- and intra-specific variability of the extractive substances in the barks of Abies alba, Picea abies, Pseudotsuga menziesii, Quercus robur and Fagus sylvatica. New knowledge about the variability in chemical composition of different bark types can be used as a decision-support tool for future industrial valorization, which would bring added value to the wood industry. The bark constituents are separated into two categories: the polyphenols, which were analyzed mainly from the water/ethanol extracts, and the biopolymers, which were extracted from the bark residues left by the successive extractions. For the water/ethanol (1:1) extractions, 5 species, with 8 trees per species and more than 10 heights per tree, were extracted. An increase in extractive content with height was observed for the softwoods, and from these results it appears that softwoods contain more extractives than hardwoods and that the amount of extractives increases with height. For the successive extractions with the toluene/ethanol (2:1) mixture and then with ethanol (100%), only 4 trees per species were extracted, since the aim was essentially to recover bark free of extractives for biopolymer analysis. Subsequently, the bark extracts were studied using complementary analytical methods, namely LC-UV-MS, GC-MS, IR, 1H NMR, MALDI-TOF and SEC, as well as specific tests to determine the levels of total phenols, holocellulose, α-cellulose, hemicellulose, lignin, suberin and ash. The results demonstrate variability in the chemical composition of softwood bark as a function of height: the levels of polyphenols, suberin, lignin and holocellulose decrease with height, while the levels of terpenes and ash increase with height. Using the same analytical protocol, significant inter-tree variability was demonstrated for hardwood bark. Some of the observed differences were explained by biological parameters such as height, tissue age, bark storage conditions or differing allometry between trees.
8

Skinner, Michael A. "Hapsite® gas chromatograph-mass spectrometer (GC/MS) variability assessment /." Download the thesis in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Skinner2005.pdf.

Full text
9

Cejas, Agustin Javier Diaz. "Aperfeiçoamentos em uma framework para análise de folgas em sistemas sócio-técnicos complexos : aplicação em um laboratório químico." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/180636.

Full text
Abstract:
Measures for health and safety management are of paramount importance in chemical laboratories. People who perform any activity in a laboratory environment are exposed to a variety of hazards, and consequently there is a risk of adverse health and safety events. This work was developed in a chemical laboratory of a federal university, and its main objective is the improvement of a framework that enables a systematic qualitative and quantitative analysis of the slack present in a complex socio-technical system. Tools from Resilience Engineering were used to study the laboratory, which was treated as a complex socio-technical system. One of the characteristics of a resilient system is the ability to deal with variability, which can be obtained through slack resources in the system. Use of the framework yielded data important for the analysis of the system, along with suggestions for improvements. The improvements proposed to the framework proved effective, especially in the quantification of slack and variability, owing to the use of the AHP (Analytic Hierarchy Process) method for data analysis. The AHP method made it possible to replace questionnaires for the entire team with an assessment directed at experts, so data can be acquired more quickly. Another gain from the AHP method was the possibility of removing one stage of the framework, making it more concise.
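AHP derives priority weights from a reciprocal pairwise-comparison matrix, typically via its principal eigenvector or the row geometric mean; a minimal sketch using the geometric-mean approximation is below. The matrix values and the "slack resources" being compared are illustrative assumptions, not data from the thesis.

```python
import math

def ahp_weights(matrix):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    using the row geometric-mean approximation to the principal
    eigenvector (exact when the matrix is perfectly consistent)."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical expert judgements on three slack resources (Saaty 1-9 scale):
# A is 3x as important as B and 5x as important as C; B is 2x C.
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0]]
w = ahp_weights(pairwise)
assert abs(sum(w) - 1.0) < 1e-9
assert w[0] > w[1] > w[2]   # weights preserve the expert ranking
```

This kind of directed expert judgement is what replaces team-wide questionnaires in the improved framework: a few pairwise comparisons yield a full normalized weight vector for the slack and variability factors.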
10

McConnell, Meghan. "Advancements in the Evaluation and Implementation of Heart Rate Variability Analytics." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/404855.

Full text
Abstract:
Clinical applications of heart rate variability (HRV) have become increasingly popular, gaining momentum and value as society's increased understanding of physiology reveals their true potential to reflect health. An additional reason for the rising popularity of HRV analysis, along with many other algorithm-based medical processes, is the relatively recent exponential increase in computing power and capabilities. Despite this, many medical standards lag behind this booming increase in scientific knowledge, as the risks and precautions involved in healthcare necessarily take priority. As a result, the standards pertaining to the acceptable tolerance for accurate R-peak detection have remained unchanged for decades. For similar reasons, medical software is also prone to lag behind state-of-the-art developments. Yet society is currently on the precipice of an age of high computational ability, mass data storage, and the capability to apply deep learning algorithms to reveal patterns that were previously inconceivable. So, when considering the needs of the future in relation to the place of HRV in healthcare, there is a distinct need for its accurate and precise collection, storage, and processing. In the work presented in this dissertation, the overarching aim was to increase the reliability of electrocardiogram (ECG) based HRV for use in predictive health analytics. To ensure both clarity and attainability, this project-level aim was broken down and addressed in a series of works. The first work was to address the problems associated with the precision specified for accurate peak detection, and thereby increase the reliability of predictive health analytics generated using HRV metrics.
The study conducted around this initial aim investigates the specifics of match-window requirements, clarifies the difference between fiducial-marker and QRS-complex detection, and makes recommendations on the precision required for accurate HRV metric extraction. In the second work, the aim was to ensure that there is a reliable foundation for the conduct of HRV-related research. Here, a thorough investigation of the relevant literature revealed the lack of suitable software, particularly for research requiring the analysis of large databases. Consequently, an improved HRV analysis platform was developed. Through both user feedback and quantitative comparison to highly regarded software, the proposed platform is shown to offer a similar standard of estimated HRV metrics while requiring significantly less manual effort (a batch-processing approach) than the traditional single-patient-focused approach. The third work also addressed this aim, providing the base peak-detection algorithm implemented within the HRV analysis platform. Experimentation undertaken here ensured that the developed algorithm performed precise fiducial-marker detection, thereby increasing the reliability of the generated HRV metrics (measured against the framework presented in the first work). In the fourth work, the aim was to address the lack of published literature on the relationship between ECG sampling frequency (fs) and extracted HRV, in order to further ensure the reliability of predictive health analytics generated using HRV metrics. Here, a quantitative experimental approach was taken to evaluate the impact of ECG fs on subsequent estimations of HRV, resulting in a recommendation for the minimum ECG fs required for reliable HRV extraction. The aim of the final work was to further improve the foundation for future predictive health analytics by developing a robust pre-processing algorithm capable of autonomous detection of regions of valid ECG signal.
This type of algorithm should be considered of critical importance to furthering machine learning (ML) based applications in the medical field. ML algorithms are heavily reliant on access to vast amounts of data, and without an automated pre-processing stage would require an unrealistic amount of hand-processing for implementation.
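Time-domain HRV metrics such as SDNN and RMSSD are computed directly from the successive R-R intervals that peak detection produces, which is why peak-timing precision and ECG sampling frequency propagate into the metrics. A minimal sketch with illustrative interval values (not data from the dissertation):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of all R-R (NN) intervals, in ms."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences, in ms.
    Sensitive to beat-to-beat timing, so a low ECG sampling frequency
    (coarse R-peak timing) biases it noticeably."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 798, 825, 840, 803, 818, 831, 809]  # illustrative R-R intervals (ms)
assert sdnn(rr) > 0 and rmssd(rr) > 0
```

Note that at a sampling frequency fs, each R-peak timestamp is quantized to a 1/fs grid, so every interval carries up to 2/fs of timing error; this is the mechanism behind the dissertation's minimum-fs recommendation.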
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
Full Text
11

Silva, Osvaldo Cirilo da. "Processo de fabricação de comprimidos de lamivudina e zidovudina (150+300mg): avaliação retrospectiva da variabilidade e desenvolvimento de metodologia analítica por espectroscopia no infravermelho próximo com transformada de Fourier (FT-NIR) aplicada a avaliação da homogeneidade da mistura de pós." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/9/9139/tde-29012019-155121/.

Full text
Abstract:
The use of statistical tools across the life cycle of a pharmaceutical product allows the process to be verified and controlled with the aim of continuous improvement. In the present study, the stability and statistical capability of the manufacturing process for lamivudine 3TC and zidovudine AZT (150 + 300 mg) coated tablets manufactured by the Chopin Tavares de Lima Foundation (FURP) were evaluated. This drug, distributed free of charge by the Ministry of Health's DST/AIDS program, is manufactured by direct compression, a rapid process that allows the future implementation of Process Analytical Technology (PAT). In Chapter I, a retrospective evaluation of the variability of critical quality attributes of 529 batches of manufactured tablets was carried out, the attributes being: mean weight, uniformity of dosage units and % m/v of dissolved drug substances, before and after coating. The objective was to identify possible special causes of process variability that would allow continuous improvement. In Chapter II, an analytical methodology employing Fourier-transform near-infrared spectroscopy was developed for the evaluation of the homogeneity of the powder mixture. In this study, samples of mixtures of the drugs lamivudine 3TC and zidovudine AZT with the excipient blend were analyzed, using HPLC as the reference method for the quantification of these two drugs. In Chapter I, the evaluation of the process for mean weight revealed the need to investigate special causes of variability, as evidenced by the control charts. The results for 2015 indicated the need for centering and consistency of the process, with a reduction in the probability of failure. The control charts for uniformity of dosage units in 2013 revealed lower process variability. However, in that year, the statistical analysis of dissolution revealed an off-center process with no consistency, most evident for the drug 3TC, which showed the lowest performance, Cpk < 1.0. 
The evaluation of the stability and capability of the lamivudine + zidovudine (150 + 300 mg) tablet manufacturing process over the period 2012 to 2015 allowed a better understanding of its sources of variation. It was possible to detect and determine the degree of this variation and its impact on the process and on the critical quality attributes of the product, with evident opportunities for process improvement, reducing risks for the patient. In Chapter II, during method development, validation revealed that the lowest BIAS values were observed for 3TC: 0.000116 and 0.0021 for cross-validation and validation, respectively. BIAS values close to zero indicated a low percentage of method variability. The present study demonstrated the feasibility of using the developed model for the quantification of 3TC and AZT by FT-NIR after adjustments that raise R, R2 and RPD to acceptable values; RPD values above 5.0 would allow the model to be used in quality control.
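The capability index Cpk that the abstract reports (Cpk < 1.0 for 3TC dissolution, indicating a non-capable process) has a standard definition that can be sketched in a few lines. The specification limits and batch values below are hypothetical, not taken from the thesis:

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearest specification limit, in units of three standard deviations."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical dissolution results (% of label claim) with limits 80-120%.
batch = [92.1, 95.4, 90.8, 97.2, 93.5, 91.0, 96.3, 94.7]
print(round(cpk(batch, lsl=80.0, usl=120.0), 2))  # Cpk >= 1.33 is commonly taken as capable
```

A Cpk below 1.0 means the nearest specification limit lies less than three standard deviations from the process mean, so out-of-specification results become likely.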
APA, Harvard, Vancouver, ISO, and other styles
12

Huang, Tse-Yang. "Fostering creativity: a meta-analytic inquiry into the variability of effects." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2338.

Full text
Abstract:
The present study used the method of meta-analysis to synthesize the empirical research on the effects of intervention techniques for fostering creativity. Overall, the average effect sizes of all types of creativity training were sizable, and their effectiveness could be generalized across age levels and beyond school settings. Among these training programs, CPS (Creative Problem Solving) required the least training time and produced the highest training effects on creativity scores. In addition, "Other Attitudes" programs, which are presumed to motivate or facilitate creative motivation, also presented effect sizes as sizable as those of other types of creativity training programs. As for the issue of creative ability vs. skills, this analysis did not support the notion that the figural components of the TTCT (Torrance Tests of Creative Thinking) might be measuring the relatively stable aspects of creativity proposed by Rose and Lin (1984). Because the figural form of the TTCT did not yield the lowest effect size, the results indicated that the view of creativity as having multiple manifestations is a more plausible explanation. And since neither the Stroop Color and Word Test nor the Raven Progressive Matrices was found in the studies, this issue was difficult to investigate further. From the path-model analysis, it can be inferred that a research design with a control group and a student sample would be more likely to lead to publication, which would influence the effect size index. Unfortunately, the articles included in this study did not provide any quantitative data about the motivation or related measurements of the participants, which is a major problem and prevented this study from building a better path model. This study has many implications that merit investigation. 
One approach follows the concept of aptitude-treatment interactions, which focuses on each individual's unique strengths and talents; the goals of a creativity training program should be to help participants recognize and develop their own creative potential and, finally, learn to express it in their own way. Another involves developing assessment techniques and criteria for individuals, as well as collecting related information regarding attitudes and motivation during the training process.
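The synthesis step behind such a meta-analysis can be sketched as a fixed-effect, inverse-variance weighted mean of study effect sizes. The study values below are invented for illustration; the thesis's actual coding scheme and moderator analyses are far richer:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic mean: inverse-variance weighted average
    of study effect sizes, plus the variance of the pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Hypothetical standardized mean differences from five training studies.
d = [0.65, 0.80, 0.45, 0.70, 0.55]
v = [0.04, 0.05, 0.03, 0.06, 0.04]
mean_d, var_d = pooled_effect(d, v)
print(round(mean_d, 3), round(var_d, 4))
```

Weighting by inverse variance gives precise (large-sample) studies more influence, which is why publication bias toward certain designs, as noted in the abstract, can shift the pooled estimate.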
APA, Harvard, Vancouver, ISO, and other styles
13

ALVES, Suzana Ferreira. "Estudo da composição química, de atividades biológicas e microencapsulação do óleo essencial dos frutos de Pterodon emarginatus Vogel - Fabaceae ("sucupira")." Universidade Federal de Goiás, 2012. http://repositorio.bc.ufg.br/tede/handle/tde/2112.

Full text
Abstract:
Made available in DSpace on 2014-07-29T16:11:50Z (GMT). No. of bitstreams: 1 Dissertacao Suzana Ferreira Alves.pdf: 1091463 bytes, checksum: f169757b64e48eb6022ea0919135c602 (MD5) Previous issue date: 2012-02-28
The present work aimed to study the chemical variability, evaluate the biological activities and microencapsulate the essential oil of the fruits of Pterodon emarginatus Vogel (Fabaceae). Chapter 1 presents the study of the chemical variability of the essential oil from sucupira fruits of 11 individuals of the Brazilian savanna, collected from five different populations, whose chemical composition was determined by gas chromatography coupled to mass spectrometry (GC/MS); multivariate statistical analysis of the data indicated the compounds β-caryophyllene and α-copaene as the main discriminants of the samples studied, both of high significance. Chapter 2 describes the antimicrobial activity and chemical composition of the total oil of sucupira fruits, with four samples analyzed: three from the city of Jussara-GO and one from the region of Jaciara-MT. The oils tested showed good antimicrobial activity against Gram-positive (Gram (+)) bacteria and moderate activity against Enterobacter aerogenes ATCC 13048. Chapter 3 describes the process of obtaining microcapsules containing the essential oil (OE) of sucupira by the spray-drying technique, and the development and validation of a methodology for quantification of the compound β-caryophyllene. Gum arabic and maltodextrin were used as wall materials, and five different dispersions were prepared (Emulsion 1-E1, Emulsion 2-E2, Emulsion 3-E3, Emulsion 4-E4 and Emulsion 5-E5), which were then atomized in a spray dryer. The results show that the most suitable drying conditions were a spray nozzle of 1.2 mm diameter, an emulsion feed flow of 4 mL/min, an inlet temperature of 160 ºC, an air flow of 40 L/min and a pressure of 60 psi. Among the emulsions, E2 was standardized with an adequate proportion of essential oil and wall materials, as it yielded thick-walled microcapsules (MOE) with pronounced retention of the essential oil, spherical morphology and low hygroscopicity. 
The method developed and validated proved to be linear, precise, accurate and robust. Chapter 4 presents the evaluation of the antimicrobial activity of the essential oil of PES-01 by the broth microdilution technique, and of the antinociceptive and anti-inflammatory activities of the microcapsules containing the essential oil. The formalin-induced pain and hot-plate models were used to assess antinociceptive activity, and the carrageenan-induced pleurisy and Evans blue models were used to evaluate anti-inflammatory activity. The OE showed weak antimicrobial activity (500 μg mL-1) against Gram (+) bacteria and fungi of the genus Cryptococcus, and was inactive against S. aureus ATCC 25923, against Gram (-) bacteria and against fungi of the genus Candida. The OE reduced reactivity to pain by 61.54% (300 mg kg-1). The MOE produced reductions of 40.87% (1.0 g kg-1) and 41.57% (2.0 g kg-1) in the first phase of the formalin-induced pain test, suggesting antinociceptive activity. In the second phase of the test, the essential oil inhibited 52.35% of the pain related to inflammatory mediators, and the microcapsules reduced pain by 25.86% to 55.60%. MOE at a dose of 1.0 g kg-1 showed antinociceptive activity in the hot-plate model, suggesting central analgesic activity. In carrageenan-induced pleurisy, the MOE reduced total leukocyte migration by 25.44% and significantly decreased the Evans blue concentration by 24.18%, demonstrating important anti-inflammatory activity.
APA, Harvard, Vancouver, ISO, and other styles
14

Nagarajan, Balaji. "Analytic Evaluation of the Expectation and Variance of Different Performance Measures of a Schedule under Processing Time Variability." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31264.

Full text
Abstract:
The realm of manufacturing is replete with instances of uncertainty in job processing times, machine statuses (up or down), demand fluctuations, due dates of jobs and job priorities. These uncertainties stem from the inability to gather accurate information about the various parameters (e.g., processing times, product demand) or to gain complete control over the different manufacturing processes involved. Hence, it becomes imperative for a production manager to take into account the impact of uncertainty on the performance of the system at hand. This uncertainty, or variability, is of considerable importance in the scheduling of production tasks. A scheduling problem is primarily to allocate the jobs and determine their start times for processing on a single machine or multiple machines (resources) with the objective of optimizing a performance measure of interest. If the problem parameters of interest (e.g., processing times, due dates, release dates) are deterministic, the scheduling problem is relatively easy to solve; it is harder when the information about these parameters is uncertain. From a practical point of view, knowledge of these parameters is, more often than not, uncertain, and it becomes necessary to develop a stochastic model of the scheduling system in order to analyze its performance. Investigation of the stochastic scheduling literature reveals that the preponderance of the work reported has dealt with optimizing the expected value of the performance measure. By focusing only on the expected value and ignoring the variance of the measure used, the scheduling problem becomes purely deterministic, and the significant ramifications of schedule variability are essentially neglected. In many practical cases, a scheduler would prefer a stable schedule with minimum variance to a schedule that has a lower expected value and unknown (and possibly high) variance. 
Hence, it becomes appropriate to define schedule efficiency in terms of both the expectation and the variance of the performance measure used. It can easily be perceived that the primary reasons for neglecting variance are the complications arising from variance considerations and the difficulty of solving the underlying optimization problem. Moreover, research work to develop closed-form expressions or methodologies to determine the variance of performance measures is very limited in the literature. Conceivably, however, such an evaluation or analysis can only help a scheduler make appropriate decisions in the face of an uncertain environment. Additionally, these expressions and methodologies can be incorporated in various scheduling algorithms to determine efficient schedules in terms of both expectation and variance. In our research work, we develop such analytic expressions and methodologies to determine the expectation and variance of different performance measures of a schedule. The performance measures considered are both completion-time and tardiness based measures. The scheduling environments considered in our analysis involve a single machine, parallel machines, flow shops and job shops. The processing times of the jobs are modeled as independent random variables with known probability density functions. With the schedule given a priori, we develop closed-form expressions or devise methodologies to determine the expectation and variance of the performance measures of interest. We also describe in detail the approaches that we used for the various scheduling environments mentioned earlier. The developed expressions and methodologies were programmed in MATLAB R12 and illustrated with a few sample problems. It is our understanding that knowing the variance of the performance measure in addition to its expected value would aid in determining the appropriate schedule to use in practice. 
A scheduler would be in a better position to base his or her decisions knowing the variability of the schedules and, consequently, could strike a balance between the expected value and the variance.
Master of Science
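For the simplest setting the abstract covers — a fixed job sequence on a single machine with independent processing times — the expectation and variance of each completion time follow from linearity of expectation and additivity of variance under independence. A minimal sketch of that special case (the thesis derives far more general expressions, including tardiness measures and shop environments; the job data here are invented):

```python
def completion_time_moments(job_means, job_vars):
    """E[C_j] and Var[C_j] for each job j in a fixed single-machine
    sequence, assuming independent processing times: C_j = p_1 + ... + p_j."""
    exp, var, results = 0.0, 0.0, []
    for m, v in zip(job_means, job_vars):
        exp += m          # E[C_j] = sum of mean processing times so far
        var += v          # Var[C_j] = sum of variances (independence)
        results.append((exp, var))
    return results

# Hypothetical jobs: (mean, variance) of processing times in minutes.
moments = completion_time_moments([4.0, 2.5, 6.0], [1.0, 0.25, 2.25])
print(moments[-1])  # expectation and variance of the last completion time: (12.5, 3.5)
```

Note that the expected value alone would rank two sequences of the same job set identically, while their completion-time variances can differ, which is exactly the gap the thesis addresses.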
APA, Harvard, Vancouver, ISO, and other styles
15

Kleisarchaki, Sofia. "Analyse des différences dans le Big Data : Exploration, Explication, Évolution." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM055/document.

Full text
Abstract:
Variability in Big Data refers to data whose meaning changes continuously. For instance, data derived from social platforms and from monitoring applications exhibits great variability. This variability is essentially the result of changes in the underlying data distributions of attributes of interest, such as user opinions/ratings, computer network measurements, etc. Difference Analysis aims to study variability in Big Data. To achieve that goal, data scientists need: (a) measures to compare data in various dimensions, such as age for users or topic for network traffic, and (b) efficient algorithms to detect changes in massive data. In this thesis, we identify and study three novel analytical tasks to capture data variability: Difference Exploration, Difference Explanation and Difference Evolution. Difference Exploration is concerned with extracting the opinion of different user segments (e.g., on a movie rating website). We propose appropriate measures for comparing user opinions in the form of rating distributions, and efficient algorithms that, given an opinion of interest in the form of a rating histogram, discover agreeing and disagreeing populations. Difference Explanation tackles the question of providing a succinct explanation of the differences between two datasets of interest (e.g., the buying habits of two sets of customers). We propose scoring functions designed to rank explanations, and algorithms that guarantee explanation conciseness and informativeness. Finally, Difference Evolution tracks change in an input dataset over time and summarizes change at multiple time granularities. We propose a query-based approach that uses similarity measures to compare consecutive clusters over time. Our indexes and algorithms for Difference Evolution are designed to capture different data arrival rates (e.g., low, high) and different types of change (e.g., sudden, incremental). 
The utility and scalability of all our algorithms rely on hierarchies inherent in the data (e.g., time, demographics). We run extensive experiments on real and synthetic datasets to validate the usefulness of the three analytical tasks and the scalability of our algorithms. We show that Difference Exploration guides end-users and data scientists in uncovering the opinions of different user segments in a scalable way. Difference Explanation reveals the need to parsimoniously summarize the differences between two datasets, and shows that parsimony can be achieved by exploiting hierarchy in the data. Finally, our study of Difference Evolution provides strong evidence that a query-based approach is well-suited to tracking change in datasets with varying arrival rates and at multiple time granularities. Similarly, we show that different clustering approaches can be used to capture different types of change.
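One minimal way to compare two user segments' rating distributions, in the spirit of Difference Exploration, is a histogram distance. The measures and algorithms are the thesis's own contribution; the sketch below simply uses the common one-dimensional Earth Mover's Distance on ordered rating bins, with invented data:

```python
def emd_1d(hist_a, hist_b):
    """Earth Mover's Distance between two histograms over the same ordered
    bins (e.g., 1-5 star ratings): sum of absolute cumulative differences
    after normalizing each histogram to total mass 1."""
    norm_a = [x / sum(hist_a) for x in hist_a]
    norm_b = [x / sum(hist_b) for x in hist_b]
    carry, total = 0.0, 0.0
    for a, b in zip(norm_a, norm_b):
        carry += a - b      # mass that must move past this bin boundary
        total += abs(carry)
    return total

# Hypothetical 1-5 star rating counts for two user segments on one movie.
teens  = [5, 10, 20, 40, 25]
adults = [25, 30, 25, 15, 5]
print(round(emd_1d(teens, adults), 3))  # 1.25: the segments clearly disagree
```

Unlike a bin-by-bin distance, EMD respects the ordering of rating values, so "mostly 4 stars" vs "mostly 5 stars" counts as a smaller disagreement than "mostly 1 star" vs "mostly 5 stars".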
APA, Harvard, Vancouver, ISO, and other styles
16

Sánchez, Martos Vanessa. "Avaluació de la qualitat analítica de diferents biomarcadors de l'estrès oxidatiu i la inflamació en una població sana." Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/454721.

Full text
Abstract:
Biomarkers are measurable, quantifiable and methodologically valid biological parameters, indicators of health status that are also related to pathophysiological processes. Biomarker measurement requires the availability of accurate and reproducible analytical methods; these requirements are usually expressed through the standard deviation and the coefficient of variation. However, other parameters exist for determining the analytical quality of a laboratory analysis, such as the relative coefficient of variation, systematic error, random error and total analytical error. Inflammation and oxidative damage are among the main contributors to degenerative diseases, cardiovascular disease and cancer. The aim of this study is to establish the degree of analytical quality of different biomarkers related to oxidative stress, inflammation and the iron profile in a healthy population, by calculating the relative coefficient of variation, the imprecision, the systematic error, the random error and the total analytical error. We concluded that the relative coefficient of variation, the systematic error and the total analytical error were better quality indicators than the standard deviation and the imprecision. Although the relative coefficient of variation was a good indicator of analytical quality, establishing the level of quality according to the imprecision, systematic error and total analytical error is an excessive quality-control requirement for a research laboratory.
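The quality indicators the abstract names combine in the classical way: total analytical error is commonly estimated as the absolute bias plus a coverage multiple of the imprecision (CV). A sketch under that common convention, with hypothetical replicate data (the thesis's exact formulation and multipliers may differ):

```python
import statistics

def total_analytical_error(measured, target, z=1.65):
    """Total analytical error as |bias%| + z * CV%, the classical
    formulation (z = 1.65 gives ~95% one-sided coverage)."""
    mean = statistics.mean(measured)
    bias_pct = abs(mean - target) / target * 100   # systematic error component
    cv_pct = statistics.stdev(measured) / mean * 100  # random error component
    return bias_pct + z * cv_pct

# Hypothetical replicate measurements of a control material, target 50 units.
replicates = [49.2, 50.5, 48.9, 51.1, 50.2, 49.6]
print(round(total_analytical_error(replicates, target=50.0), 2))
```

The appeal of such a combined index is that a method with small bias but large scatter (or vice versa) is judged on the total deviation a single result may show from the true value.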
APA, Harvard, Vancouver, ISO, and other styles
17

Darriet, Florent. "Caractérisation de nouvelles molécules et variabilité chimique de trois plantes du continuum Corse-Sardaigne: Chamaemelum mixtum, Anthemis maritima et Eryngium maritimum." Phd thesis, Université Pascal Paoli, 2011. http://tel.archives-ouvertes.fr/tel-00804279.

Full text
Abstract:
Corsica is rich in perfume, aromatic and medicinal plants (PPAM) whose essential oils can be used in various fields. The composition of these oils makes it possible to characterize them, to assess their quality, to highlight any local specificity and, in doing so, to add value to them. Two major objectives were set for this thesis work on the essential oils and volatile fractions of three plants of the Corsican and Sardinian coasts, Chamaemelum mixtum, Anthemis maritima and Eryngium maritimum: (i) a fundamental objective: to increase our knowledge of the chemical composition of the essential oils and volatile fractions of the three species, adapting as needed the analytical methodology adopted in the laboratory, a methodology based on the complementarity of various techniques (CC, GC/FID, GC/MS-EI, GC/MS-CI, and 1D and 2D NMR); (ii) a more applied objective: to evaluate the potential for valorization through the study of the variability of the essential oils, and also through the investigation of antibacterial and antifungal activities. In the essential oil of Chamaemelum mixtum, whose chemical composition differs from that of the Moroccan essential oils described in the literature, we isolated and identified a new linear ketone, (Z)-heptadeca-9,16-dien-7-one. The chemical composition of the essential oil of Anthemis maritima, rich in chrysanthenyl esters, is reported here for the first time. After developing an original analytical strategy, we described a series of 14 esters of cis- and trans-chrysanthenol, many of which had never before been characterized by NMR. The essential oil of Eryngium maritimum is likewise described exhaustively for the first time. 
We identified 4-muurol-9-en-15-ol, 4-cadin-9-en-15-ol and 4-cadin-9-en-15-al, natural compounds never described before, as well as 4-muurol-9-en-15-al, whose NMR data are reported here for the first time. The three essential oils, and more particularly that of Chamaemelum mixtum, proved active against a panel of micro-organisms involved in nosocomial and food-borne infections. The study of geography-related variability shows the presence of several statistical groups for each of the three plants, and more specifically for Anthemis maritima, which exhibits two sharply distinct types of chemical composition. The chemical compositions of the essential oils and of the volatile fractions captured by solid-phase microextraction (SPME) from the various organs of each plant were also described. Our results confirm SPME as a fast and reliable analytical tool for acquiring the chemical profile of a volatile fraction.
APA, Harvard, Vancouver, ISO, and other styles
18

Imanzadeh, Saber. "Effets des incertitudes et de la variabilité spatiale des propriétés des sols et des structures sur le dimensionnement des semelles filantes et des conduites enterrées." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00803563.

Full text
Abstract:
Soil exhibits spatial variability of its physical and mechanical properties, whose effects on light structures with strip footings and on buried pipes are not well accounted for in their design. This natural variability can be very significant for these structures because it induces differential settlements whose consequences can be damaging: cracks in walls and beams, or leaks in sewer networks. The natural variability of the soil and the uncertainty due to imperfect knowledge of the properties of the soil and/or of the concrete or steel of the structure are the main sources of uncertainty in the choice of design parameters for these structures. In this thesis, an analytical approach combining probabilistic methods (FOSM and SOSM) with the Winkler model, followed by a numerical approach coupling the finite element method with geostatistical techniques, were successively used to model the behaviour of strip footings and buried pipes when the uncertainties in the mechanical properties of the soil and the structure are taken into account in their design. This work highlights the importance of the longitudinal behaviour of these structures and the weight of uncertainties in their design.
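The FOSM method mentioned in the abstract above propagates parameter uncertainty through a first-order Taylor expansion. A minimal sketch of the idea, applied to the settlement s = q/k of a Winkler foundation (the load, modulus and coefficient of variation below are hypothetical illustration values, not the thesis's actual model):

```python
def fosm_settlement_stats(q, k_mean, k_cov):
    """First-Order Second-Moment (FOSM) estimate of the mean and standard
    deviation of the settlement s = q / k of a Winkler foundation, where
    the subgrade reaction modulus k is the only uncertain parameter.

    q      -- applied pressure (kPa), treated as deterministic
    k_mean -- mean subgrade reaction modulus (kPa/m)
    k_cov  -- coefficient of variation of k (dimensionless)
    """
    s_mean = q / k_mean                      # settlement at the mean point
    ds_dk = -q / k_mean ** 2                 # sensitivity ds/dk at the mean
    s_std = abs(ds_dk) * (k_cov * k_mean)    # first-order variance propagation
    return s_mean, s_std

mean_s, std_s = fosm_settlement_stats(q=100.0, k_mean=20000.0, k_cov=0.3)
# at first order, the coefficient of variation of s equals that of k
```

For this simple reciprocal relation the first-order result simply transfers the coefficient of variation of k onto s; SOSM adds the second-derivative term and corrects the mean.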
APA, Harvard, Vancouver, ISO, and other styles
19

Thorpe, David Stuart. "A process for the management of physical infrastructure." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36067/7/36067_Digitsed_Thesis.pdf.

Full text
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met.
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
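The paired comparison weighting step described above can be sketched with a generic row-geometric-mean approximation (a common textbook technique for reciprocal comparison matrices; the 3-objective matrix below is hypothetical and this is not the thesis's actual procedure):

```python
import math

def pairwise_weights(matrix):
    """Approximate priority weights from a reciprocal pairwise-comparison
    matrix using the row geometric mean, normalised to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparisons among 3 objectives: A vs B = 2, A vs C = 4, B vs C = 2
m = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
weights = pairwise_weights(m)
# for this perfectly consistent matrix the weights are 4/7, 2/7 and 1/7
```

For a consistent matrix the geometric-mean weights coincide with the principal-eigenvector weights; for mildly inconsistent judgments they remain a close, easily computed approximation.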
APA, Harvard, Vancouver, ISO, and other styles
20

Sáez, Silvestre Carlos. "Probabilistic methods for multi-source and temporal biomedical data quality assessment." Doctoral thesis, Editorial Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62188.

Full text
Abstract:
[EN] Nowadays, biomedical research and decision making depend to a great extent on the data stored in information systems. As a consequence, a lack of data quality (DQ) may lead to suboptimal decisions, or hinder the derived research processes and outcomes. This thesis aims at the research and development of methods for assessing two DQ problems of special importance in Big Data and large-scale repositories based on multi-institutional, cross-border infrastructures and acquired during long periods of time: the variability of data probability distributions (PDFs) among different data sources (multi-source variability) and the variability of data PDFs over time (temporal variability). Variability in PDFs may be caused by differences in data acquisition methods, protocols or health care policies; systematic or random errors during data input and management; demographic differences in populations; or even falsified data. To date, these issues have received little attention as DQ problems and lack adequate assessment methods. The developed methods aim to measure, detect and characterize variability, dealing with multi-type, multivariate, multi-modal data, without being affected by large sample sizes. To this end, we defined an Information Theory and Geometry probabilistic framework based on the inference of non-parametric statistical manifolds from the normalized distances of PDFs among data sources and over time. Based on this, a number of contributions have been generated. For the multi-source variability assessment we have designed two metrics: the Global Probabilistic Deviation, which measures the degree of global variability among the PDFs of multiple sources (equivalent to the standard deviation among PDFs); and the Source Probabilistic Outlyingness, which measures the dissimilarity of the PDF of a single data source to a global latent average.
They are based on the construction of a simplex geometrical figure (the maximum-dimensional statistical manifold) using the distances among sources, and complemented by the Multi-Source Variability plot, an exploratory visualization of that simplex which permits detecting grouping patterns among sources. The temporal variability method provides two main tools: the Information Geometric Temporal plot, an exploratory visualization of the temporal evolution of PDFs based on the projection of the statistical manifold from temporal batches; and the PDF Statistical Process Control, a monitoring and automatic change detection algorithm for PDFs. The methods have been applied to repositories in real case studies, including the Public Health Mortality and Cancer Registries of the Region of Valencia, Spain; the UCI Heart Disease; the United States NHDS; and Spanish Breast Cancer and an In-Vitro Fertilization datasets. The methods permitted discovering several findings such as partitions of the repositories in probabilistically separated temporal subgroups, punctual temporal anomalies due to anomalous data, and outlying and clustered data sources due to differences in populations or in practices. A software toolbox including the methods and the automated generation of DQ reports was developed. Finally, we defined the theoretical basis of a biomedical DQ evaluation framework, which have been used in the construction of quality assured infant feeding repositories, in the contextualization of data for their reuse in Clinical Decision Support Systems using an HL7-CDA wrapper; and in an on-line service for the DQ evaluation and rating of biomedical data repositories. The results of this thesis have been published in eight scientific contributions, including top-ranked journals and conferences. One of the journal publications was selected by the IMIA as one of the best of Health Information Systems in 2013. 
Additionally, the results have contributed to several research projects, and have leaded the way to the industrialization of the developed methods and approaches for the audit and control of biomedical DQ.
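As a rough, simplified illustration of quantifying variability among the probability distributions of several data sources: the toy score below averages pairwise Jensen-Shannon distances among discrete source distributions. It is a stand-in for intuition only, not the simplex-based Global Probabilistic Deviation defined in the thesis:

```python
import math

def js_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, base 2)
    between two discrete probability distributions. Bounded in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def global_deviation(sources):
    """Toy multi-source variability score: mean pairwise JS distance
    among the distributions of all sources (0 means identical sources)."""
    n = len(sources)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(js_distance(sources[i], sources[j]) for i, j in pairs) / len(pairs)

# Three hypothetical sites; the third deviates from the other two
sites = [[0.5, 0.3, 0.2], [0.5, 0.3, 0.2], [0.1, 0.2, 0.7]]
score = global_deviation(sites)
```

The thesis's metrics go further by embedding the full matrix of inter-source distances in a statistical manifold, so that a per-source outlyingness can also be read off.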
Sáez Silvestre, C. (2016). Probabilistic methods for multi-source and temporal biomedical data quality assessment [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62188
TESIS
Premiado
APA, Harvard, Vancouver, ISO, and other styles
21

"An Analytical Approach to Efficient Circuit Variability Analysis in Scaled CMOS Design." Master's thesis, 2011. http://hdl.handle.net/2286/R.I.9288.

Full text
Abstract:
Process variations have become increasingly important for scaled technologies starting at 45nm. The increased variations are primarily due to random dopant fluctuations, line-edge roughness and oxide thickness fluctuation. These variations greatly impact all aspects of circuit performance and pose a grand challenge to future robust IC design. To improve robustness, an efficient methodology is required that considers the effect of variations in the design flow. Analyzing the timing variability of complex circuits with HSPICE simulations is very time consuming. This thesis proposes an analytical model to predict variability in CMOS circuits that is quick and accurate. There are several analytical models to estimate nominal delay performance, but very little work has been done to accurately model delay variability. The proposed model is comprehensive and estimates nominal delay and variability as a function of transistor width, load capacitance and transition time. First, models are developed for library gates and the accuracy of the models is verified with HSPICE simulations for 45nm and 32nm technology nodes. The difference between predicted and simulated σ/μ for the library gates is less than 1%. Next, the accuracy of the model for nominal delay is verified for larger circuits including ISCAS'85 benchmark circuits. The model predicted results are within 4% error of HSPICE simulated results and take a small fraction of the time, for 45nm technology. Delay variability is analyzed for various paths and it is observed that non-critical paths can become critical because of Vth variation. Variability analysis on the shortest paths shows that the rate of hold violations increases enormously with increasing Vth variation.
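A first-order sensitivity based on the classic alpha-power law gives the flavour of such an analytical σ/μ estimate. This is a generic textbook sketch with hypothetical voltage values, not the thesis's calibrated model (which additionally accounts for transistor width, load capacitance and transition time):

```python
def delay_sigma_over_mu(vdd, vth, sigma_vth, alpha=1.3):
    """First-order estimate of gate-delay variability sigma/mu when only
    the threshold voltage varies (e.g. random dopant fluctuation), using
    the alpha-power law  t_d ~ 1 / (Vdd - Vth)**alpha.

    Differentiating: (dt_d/dVth) / t_d = alpha / (Vdd - Vth), so at first
    order  sigma_delay / mu_delay = alpha * sigma_vth / (Vdd - Vth).
    """
    return alpha * sigma_vth / (vdd - vth)

# Hypothetical 45nm-class operating point: Vdd = 1.0 V, Vth = 0.4 V,
# sigma_Vth = 30 mV
ratio = delay_sigma_over_mu(vdd=1.0, vth=0.4, sigma_vth=0.03)
```

The formula also shows why variability worsens with scaling: as Vdd - Vth shrinks while sigma_Vth grows with smaller devices, σ/μ rises on both counts.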
Dissertation/Thesis
M.S. Electrical Engineering 2011
APA, Harvard, Vancouver, ISO, and other styles
22

Mohammed, Riazuddin. "Analytical considerations and biology of milk conjugated linoleic acid synthesis in the bovine." Phd thesis, 2009. http://hdl.handle.net/10048/852.

Full text
Abstract:
Thesis (Ph. D.)--University of Alberta, 2010.
Title from pdf file main screen (viewed on Feb. 8, 2010). A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Animal Science, Department of Agricultural, Food and Nutritional Science, University of Alberta. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
23

Spyridaki, Athina. "Response Variability of Statically Determinate Beam Structures Following Non-Linear Constitutive Laws and Analytical identication of progressive collapse modes of steel frames." Thesis, 2017. https://doi.org/10.7916/D8GT5SSC.

Full text
Abstract:
This thesis is divided into two distinct and independent parts. Part I focuses on the extension of the concept of the Variability Response Function (VRF). The focus of the research community has recently shifted from the improvement of structural models and the enhancement of computational tools in a deterministic framework towards the development of tools capable of quantifying the uncertainty of parameters of the structural system and their effect on the system response in a probabilistic framework. One limitation in this direction is the inadequacy of information to fully describe the probabilistic characteristics of a structural system. In an effort to bypass this barrier, the VRF was introduced by Shinozuka as a tool to calculate the variability of the response of a system. The VRF is a deterministic function and, for the case of deterministic structural beams where the uncertain system parameters are modeled as homogeneous stochastic fields, it offers an efficient way to circumvent time-consuming computational analyses. In this dissertation, a flexibility-based VRF for the case of statically determinate beams following an arbitrary non-linear constitutive law is proposed. A closed-form analytical expression of the VRF is derived and the constraints of the embedded mechanics approximation are discussed. No series expansion is used, thus the probabilistic part is exact and not limited by any constraint on the relative magnitude of the variations of the parameters. Part II of this dissertation explores the topic of progressive collapse. The appearance of damage in structural systems (explosions, design or construction errors, aging infrastructure) has followed an upward trend during the last decades, urging for measures to be taken in order to control the advancement of damage within the system.
There has been an organized effort to update design codes and regulations in order to include provisions for reinforcing buildings to eliminate their susceptibility to local damage. These efforts tend to focus on improving redundancy and alternate load paths, to ensure that loss of any single component will not lead to a general structural collapse. The analysis of a damaged system is a very complicated phenomenon due to its non-linear nature. So far, the engineering community has addressed the problem of progressive collapse by employing sophisticated computational finite element methods to accurately simulate an unexpected damaging event. In this framework, damage has been introduced in the model by removing key load-bearing elements of the building and conducting elaborate analyses which almost always require inelastic and loss-of-stability theories to be considered. The computational complexity renders this kind of analysis almost prohibitive for practicing engineers. To eliminate the need for sophisticated and computationally expensive analyses, simple, trustworthy tools should be generated for practitioners to easily predict the mechanism of damage propagation and determine the governing collapse mode of a structure. In this context, this thesis introduces a simple and less labor-demanding analytical method which can be used to determine the governing progressive collapse mechanism of steel moment frames under the scenario of a column removal. After performing plain elastic analyses, the method develops critical Euler-type ductility curves for each removal scenario through straightforward analytical calculations. The response of structural systems under column removals is examined in a 2D and 3D context.
The main objective of Part II is to investigate the response of different structural systems to the introduction of damage (in this thesis, in the form of column removals at several locations of the system) and to develop a simple analytical framework for the identification of the governing progressive collapse failure modes. Although failure may occur for a number of reasons (shear beam-to-column connection failure, beam yielding-type mechanism, loss of stability of adjacent columns, global loss of stability of the structural system, etc.), in this study focus is placed on only two of them. The proposed method establishes critical limit state functions which are used to identify whether a specific structure will experience progressive collapse through a yielding-type beam-induced collapse mechanism or through a loss-of-stability-induced column failure collapse mechanism.
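One of the two limit states named above, loss of stability of an adjacent column, can be sketched with a plain Euler check on the redistributed load. This is an illustrative simplification with hypothetical member properties; the thesis's Euler-type ductility curves are more elaborate:

```python
import math

def euler_buckling_utilization(P, E, I, L, K=1.0):
    """Demand/capacity ratio of a column carrying the redistributed load P
    after a column-removal scenario, checked against the Euler critical
    load  P_cr = pi^2 * E * I / (K * L)^2.  A ratio >= 1 suggests the
    loss-of-stability-induced collapse mechanism governs for this member.

    P -- axial demand after removal (N); E -- Young's modulus (Pa);
    I -- second moment of area (m^4); L -- length (m); K -- effective
    length factor.
    """
    p_cr = math.pi ** 2 * E * I / (K * L) ** 2
    return P / p_cr

# Hypothetical steel column: E = 200 GPa, I = 1e-4 m^4, L = 4 m, P = 6 MN
util = euler_buckling_utilization(P=6e6, E=200e9, I=1e-4, L=4.0)
```

In the method's spirit, such a stability check would be compared against the yielding-type beam-mechanism limit state, and whichever is reached first identifies the governing collapse mode.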
APA, Harvard, Vancouver, ISO, and other styles
24

Tedela, Tenaw Hailu. "Analytical study on the appraisal of communal land use management practices and policies towards climate resilience and sustainability in Bir-Temicha Watershed of the Upper Blue Nile Basin, Ethiopia." Thesis, 2017. http://hdl.handle.net/10500/23351.

Full text
Abstract:
This study was aimed at analysing communal land use management practices and policies towards sustainability and climate resilience. The objectives were to assess rainfall variability, climate change impact, adaptation practices and impediments to adaptation on the one hand and, on the other, to analyse the pressure on communal lands, scrutinise the sustainability of institutional practices, and assess the policy setting and its application status in managing communal lands. The study used a household survey, key informant interviews and group discussions, employing both quantitative and qualitative methods. For the analysis, rainfall variability trend analysis, different empirical formulas, Principal Component Analysis and analysis of variance were used, together with Qualitative Content Analysis and descriptive statistical tools. The study found spatiotemporal rainfall variability: about 18 extreme wet and 8 extreme dry events were identified out of 194 events. The most outstanding manifestations of climate change/variability impacts identified were water scarcity, migration, severe erosion and feed scarcity. Applying biophysical measures on communal lands, practising area enclosure and constructing feeder roads were moderately exercised adaptation and mitigation practices, while low community awareness was the most significant barrier to community adaptation. In addition, feed sources and fuel biomass energy did not satisfy community demand. Government recognition and support of community user groups, the existence of community labour contribution and the congruence between government legislation and community by-laws were found to be moderately strong. Communal land administration and the legislative instruments to govern land administration were adequately in place to implement communal land use and management.
However, the workability of the by-laws on the ground was a major weakness. In conclusion, the study revealed generally weak communal land use management practices and policy implementation with respect to enhancing sustainability and climate resilience. Hence, the following recommendations were put forward: enhancing community awareness; encouraging communities to establish their own private woodlots and grazing areas to reduce the pressure on communal land; applying proper communal land resource use and management plans; and certifying communal lands with demarcation and maps. Moreover, evaluation and revision of policy and legislation to improve their application on the ground is fundamental. Further research remains paramount to scrutinise the integral effects of the biophysical, social, cultural and legislative dimensions for more sustainable and climate-resilient communal land use management practices and policy implementation.
College of Agriculture and Environmental Sciences
Ph. D. (Environmental Management)
APA, Harvard, Vancouver, ISO, and other styles
25

Lucchese, Andrea. "Information-based human motor performance models." Doctoral thesis, 2022. https://hdl.handle.net/11589/245760.

Full text
Abstract:
Despite the evolution of technologies that led to the advent of Industry 4.0, in current work environments the operator still plays a crucial role in motor activities that require repetitive movements. Repetitive movements characterize a great range of motor tasks that can be performed in multiple work environments such as factories (e.g., manual assembly tasks, manual sorting), laboratories (e.g., pick and place tasks) or even outdoors (e.g., manual construction tasks). The evaluation of motor performance focusing on the movements executed by operators or prescribed by the task has not yet been considered in the scientific literature. The present dissertation addresses the topic of motor performance by introducing information-based models relying on the Fitts' law Index of Difficulty (ID). The proposed models consider the entire motor behaviour (required or observed) for the correct execution of repetitive motor tasks. The topic is investigated and discussed from a new point of view, where features of the environment and the individual's abilities influence the quality of the motor behaviour (demanded or executed) and affect motor performance. Results show the effectiveness of the proposed models, underlining the importance of motor behaviour for the correct execution of motor tasks. Performance is linked not only to the efficiency in achieving a task goal, but also to how the task goal is physically reached. The proposed models have a general validity, not limited to specific applications or work environments, paving the way to a novel, domain-independent motor performance perspective.
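The Fitts' law Index of Difficulty mentioned above has a standard closed form (the Shannon formulation). The sketch below shows only this classic building block with hypothetical reach and target sizes, not the dissertation's extended models:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' law Index of Difficulty (Shannon formulation), in bits:
    ID = log2(distance / width + 1), where distance is the movement
    amplitude and width is the target width along the movement axis."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Information-theoretic performance index: bits transmitted per
    second of movement (ID / MT)."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical pick-and-place reach: 30 cm movement to a 2 cm target
id_bits = index_of_difficulty(distance=0.30, width=0.02)
```

Harder reaches (longer or to smaller targets) carry more bits, so throughput lets tasks of different difficulty be compared on a single information-based scale.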
26

Kalický, Andrej. "Vysoce výkonné analýzy." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-321028.

Full text
Abstract:
This thesis explains the Big Data phenomenon, which is characterised by rapid growth in the volume, variety and velocity of data assets and drives a paradigm shift in analytical data processing. The thesis aims to provide a complete and consistent overview of the area of High Performance Analytics (HPA), including problems and challenges at the state of the art of advanced analytics. The overview of HPA introduces a classification, characteristics and advantages of specific HPA methods utilising various combinations of system resources. In the practical part of the thesis, an experiment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of In-Memory Analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
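The In-Memory Analytics idea the experiment evaluates can be illustrated generically (this sketch is mine and is not specific to the SAS platform): a streaming job pays disk I/O on every analytical pass, while an in-memory job loads the data once and answers subsequent queries at memory speed.

```python
import os
import statistics
import tempfile

# Build a toy dataset on disk, one numeric value per line.
values = [float(i % 97) for i in range(100_000)]
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.writelines(f"{v}\n" for v in values)

def mean_streaming(path):
    """Disk-bound pass: every query re-reads the whole file."""
    total = count = 0
    with open(path) as f:
        for line in f:
            total += float(line)
            count += 1
    return total / count

# In-memory pass: load once, then run any number of analytical
# operations without touching storage again.
data = [float(line) for line in open(path)]
in_memory_mean = statistics.fmean(data)

assert abs(mean_streaming(path) - in_memory_mean) < 1e-9
```

Both approaches return the same answer; the cost profile differs, and the in-memory advantage compounds as the number of analytical passes over the same dataset grows.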
27

Délusca, Kénel. "Évaluation de la vulnérabilité des fermes productrices de maïs-grain du Québec aux variabilités et changements climatiques : les cas de Montérégie-Ouest et du Lac-Saint-Jean-Est." Thèse, 2010. http://hdl.handle.net/1866/4230.

Full text
Abstract:
Vulnerability studies relating to climate change and variability undertaken at the international and national levels are of limited relevance to decision-making at the smaller spatial scales where specific response strategies are implemented. Vulnerability studies to climate change and variability at relatively small geographic scales within the agricultural sector are rare in general, and even nonexistent in Canada, including Quebec. In order to fill this gap and to contribute to a better decision-making process at the farm level, this study aimed at describing and analyzing the evolution of grain corn growers' vulnerability to climate change and variability and other stressors within the Montérégie-Ouest and Lac-St-Jean-Est regions. A general methodology consisting of an assessment of farmers' overall vulnerability by combining vulnerability profiles to climate and socio-economic conditions was adopted. For the reference period (1985-2005), vulnerability profiles were constructed by analyzing the coefficients of variation of grain corn yield and crop-area data. By means of ethnographic methods associated with a multicriteria analysis technique, the Analytic Hierarchy Process (AHP), adaptive capacity indices of the agricultural sector were elaborated for the reference period. These indices then served as a starting point in the construction of scenario indices of farmers' adaptive capacity for the future period 2010-2039. For that period, vulnerability profiles for both regions were created using a simplified version of the Intergovernmental Panel on Climate Change (IPCC) conceptual framework on the components of vulnerability. 
For the "sensitivity" component of grain corn growers to climate conditions within the selected agricultural regions, a set of grain corn yields was simulated using five climate scenarios coupled with CERES-Maize, one of the crop models embedded in the Decision Support System for Agrotechnology Transfer (DSSAT, version 4.0.2.0). With regard to the evaluation of "adaptive capacity" for the future period, indices for this component were elaborated by considering the potential influence of the main economic and environmental drivers used in developing the storylines of two greenhouse gas (GHG) emission scenario families, namely A2 and A1B. The application of this methodological approach produced the following key results. For the reference period, the Lac-St-Jean-Est region appeared to be more vulnerable to climate conditions than the Montérégie-Ouest region: the coefficient of variation for grain corn yields within Lac-St-Jean-Est was evaluated at 0.35, while the value for Montérégie-Ouest was only 0.23. However, with respect to socio-economic conditions, the Montérégie-Ouest region showed greater vulnerability than Lac-St-Jean-Est; the coefficients of variation for the areas under grain corn during the reference period were 0.66 and 0.48, respectively. For the future period (2010-2039), the Lac-St-Jean-Est region would again be more vulnerable to climate conditions than Montérégie-Ouest: the average coefficients of variation for the simulated grain corn yields fluctuate between 0.21 and 0.25 for Montérégie-Ouest and between 0.31 and 0.50 for Lac-St-Jean-Est. 
However, from a socio-economic perspective, the relative vulnerability of the two regions depends on the adaptive-capacity scenario considered. With the economic and environmental drivers of the A2 GHG emissions scenario family, the adaptive capacity indices for the sector under study would be 0.13 and 0.08 for Montérégie-Ouest and Lac-St-Jean-Est, respectively. On the other hand, under the drivers of the A1B family, the Lac-St-Jean-Est region would have a slightly higher adaptive capacity (0.07) than Montérégie-Ouest (0.06). In general, for the future period, Lac-St-Jean-Est would have greater overall vulnerability than Montérégie-Ouest, explained mainly by its greater vulnerability to climate conditions. The results of this study must be interpreted within the context of the assumptions made, the methodology used and the characteristics of the two regions under study. Using a simple methodological approach, this study revealed the "dynamic and relative" character of the vulnerability concept, the importance of spatial scale, the need to account for multiple stressors, and the value of an approach different from the commonly used "dumb-farmer" assumption when evaluating vulnerability within the agricultural sector. Finally, the study also identified several research pathways likely to contribute to a better evaluation of farmers' vulnerability to climate change in a context of multiple stressors.
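The vulnerability profiles in this study rest on the coefficient of variation of yield and crop-area series. A minimal sketch of that statistic (the series below are illustrative placeholders, not the thesis data):

```python
import statistics

def coefficient_of_variation(series):
    """CV = sample standard deviation / mean: a unitless measure of
    relative variability, comparable across series with different units."""
    return statistics.stdev(series) / statistics.fmean(series)

# Hypothetical yield series (t/ha) for two regions; the numbers are
# illustrative only.
monteregie = [8.1, 7.4, 9.0, 8.6, 7.9]
lac_st_jean = [5.2, 3.1, 6.0, 4.4, 6.8]

cv_monteregie = coefficient_of_variation(monteregie)    # lower relative variability
cv_lac_st_jean = coefficient_of_variation(lac_st_jean)  # higher relative variability
```

Because the CV is scale-free, a higher value flags greater relative fluctuation regardless of units: in the study, yield CVs proxy climatic vulnerability and crop-area CVs proxy socio-economic vulnerability.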