
Dissertations on the topic "Pearson Correlation Coefficient 13"

Consult the top 25 dissertations for your research on the topic "Pearson Correlation Coefficient 13".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations across a wide range of disciplines and compile your bibliography correctly.

1

Al Samrout, Marwa. "Approches mono et bi-objective pour l'optimisation intégrée des postes d'amarrage et des grues de quai dans les opérations de transbordement." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMLH21.

Full text of the source
Abstract:
International maritime transport is vital for global trade, representing over 85% of exchanges, with 10.5 billion tons transported each year. This mode of transport is the most economical and sustainable, contributing only 2.6% of CO2 emissions. In France, the maritime sector accounts for 1.5% of GDP and nearly 525,000 jobs. Maritime ports, crucial for the logistics chain, facilitate the transshipment of goods and increasingly adopt digital solutions based on artificial intelligence to improve their efficiency. France has eleven major seaports, seven of which are located in mainland France. The thesis focuses on optimizing container terminals to enhance the efficiency and performance of ports. It addresses the issues of berth allocation planning and crane activation in container terminals in response to recent changes in maritime logistics, such as the arrival of mega-ships and automation. It highlights gaps in the existing literature and offers an in-depth analysis of current challenges. The document is divided into three chapters. The first chapter explores the history of containerization, types of containers, and challenges in operational planning. It focuses on the berth allocation problem (BAP), its resolution methods, and the integration of artificial intelligence (AI) to optimize logistical processes. The second chapter introduces the dynamic allocation problem with ship-to-ship transshipment. It proposes a mixed-integer linear program (MILP) to optimize the berthing schedule and transshipment between vessels. The objective is to reduce vessel stay times in the terminal, as well as penalties due to vessel delays, and to determine the necessary transshipment method. The method combines a packing-type heuristic and an improved genetic algorithm, demonstrating effectiveness in reducing vessel stay times.
We conducted a statistical analysis to identify effective control parameters for the GA, then applied this algorithm with the determined control parameters to perform numerical experiments on randomly generated instances. Additionally, we conducted a comparative study to evaluate different crossover operators using analysis of variance (ANOVA). We then presented a series of examples based on random data, solved using the CPLEX solver, to confirm the validity of the proposed model. The proposed method is capable of solving the problem in an acceptable computation time for medium and large instances. The final chapter presents an integrated berth and crane allocation problem, focusing on ship-to-ship transshipment. Three approaches are proposed. The first approach uses the NSGA-III genetic algorithm, supplemented by a statistical analysis to optimize parameters and evaluate different crossover operators. By analyzing AIS database data, numerical tests demonstrate the effectiveness of this method at the port of Le Havre, yielding satisfactory results within a reasonable computation time. The second approach involves two regression models, Gradient Boosting Regression (GBR) and Random Forest Regression (RFR), trained on selected features. The methodology includes preprocessing steps and hyperparameter optimization. While NSGA-III achieves the highest accuracy, it requires a longer execution time. In contrast, although GBR and RFR are slightly less precise, they significantly improve efficiency, highlighting the trade-off between accuracy and execution time in practical applications.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Kalaitzis, Angelos. "Bitcoin - Monero analysis: Pearson and Spearman correlation coefficients of cryptocurrencies." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41402.

Full text of the source
Abstract:
In this thesis, an analysis of Bitcoin and Monero price and volatility is conducted with respect to the S&P 500 and the VIX index. Using Python, we computed correlation coefficients of nine cryptocurrencies with two different approaches, Pearson and Spearman, over July 2016 to July 2018. The Pearson correlation coefficient was also computed for each year, July 2016 to July 2017 and July 2017 to July 2018. It was concluded that in 2016 the correlation between the selected cryptocurrencies was very weak, almost none, but in 2017 the correlation increased and became moderately positive. In 2018, almost all of the cryptocurrencies were highly correlated. For example, from January until July of 2018, the Bitcoin-Monero correlation was 0.86 and the Bitcoin-Ethereum correlation was 0.82.
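The two correlation approaches compared in this thesis can be computed side by side with SciPy; a minimal sketch on synthetic price-like series (the data, seed, and series names are illustrative stand-ins, not the thesis data):

```python
# Sketch of the thesis's two correlation approaches, computed with SciPy.
# The series are synthetic stand-ins, not actual Bitcoin/Monero prices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
btc = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 500))  # hypothetical Bitcoin-like walk
xmr = 0.8 * btc + rng.normal(0.0, 5.0, 500)         # correlated Monero-like series

pearson_r, _ = stats.pearsonr(btc, xmr)    # linear co-movement
spearman_r, _ = stats.spearmanr(btc, xmr)  # rank (monotonic) co-movement
```

Pearson captures linear co-movement of the raw prices, while Spearman only asks whether the series move monotonically together, which makes it less sensitive to outliers and nonlinear trends.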
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Bergamaschi, Denise Pimentel. "Correlação intraclasse de Pearson para pares repetidos: comparação entre dois estimadores." Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/6/6132/tde-01102014-105050/.

Full text of the source
Abstract:
Objective. This thesis presents and compares, theoretically and empirically, two estimators of the intraclass correlation coefficient pI, defined as Pearson's pairwise intraclass correlation coefficient. The first is the "natural" estimator, obtained by Pearson's product-moment correlation for members of one class (rI), while the second is obtained as a function of variance components (icc). Methods. Theoretical and empirical comparisons of the parameters and estimators are performed. The theoretical comparison involves two definitions of the intraclass correlation coefficient pI as a measure of reliability for two repeated measurements in the same class, a presentation of the analysis-of-variance technique, and the definition and interpretation of the estimators rI and icc. The empirical comparison was carried out by means of a Monte Carlo simulation study generating pairs of values correlated according to Pearson's pairwise intraclass coefficient. The pairs of values follow a bivariate normal distribution, with sample size and intraclass correlation fixed in advance at n = 15, 30 and 45 and pI = {0, 0.15, 0.30, 0.45, 0.60, 0.75, 0.9}. Results. Bias and mean square error of the estimators were compared, as well as the widths of the confidence intervals. The comparison shows that the bias of icc is always smaller than that of rI; the same applies to the mean square error. Conclusions. The icc is a better estimator, especially for small n (for example 15). For larger sample sizes (n = 30 or more), the estimators produce results that are equal to the second decimal place.
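A minimal sketch of the two estimators on one simulated cell of the design described in the abstract (n = 15, pI = 0.60); the implementation details below are my own reading of the abstract, not the author's code:

```python
# Sketch of the two estimators compared in the thesis, on simulated
# bivariate normal pairs: rI enters each pair in both orders into the
# ordinary Pearson formula; icc comes from one-way ANOVA variance components.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 15, 0.60                               # one cell of the simulation design
pairs = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# rI: Pearson product-moment correlation over the 2n "doubled" pairs
x = np.concatenate([pairs[:, 0], pairs[:, 1]])
y = np.concatenate([pairs[:, 1], pairs[:, 0]])
r_I = float(np.corrcoef(x, y)[0, 1])

# icc: one-way random-effects ANOVA with k = 2 replicates per class
means = pairs.mean(axis=1)
msb = 2.0 * ((means - pairs.mean()) ** 2).sum() / (n - 1)  # between-class mean square
msw = ((pairs - means[:, None]) ** 2).sum() / n            # within-class mean square
icc = (msb - msw) / (msb + msw)
```

Repeating this draw many times for each (n, pI) combination and averaging the deviation from pI is the Monte Carlo comparison of bias that the thesis reports.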
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Truong, Thi Kim Tien. "Grandes déviations précises pour des statistiques de test." Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2057/document.

Full text of the source
Abstract:
This thesis focuses on the study of sharp large deviations (SLD) for two test statistics: Pearson's empirical correlation coefficient and the Moran statistic. The first two chapters recall general results on SLD principles and Laplace's method used in the sequel. Then we study the SLD of empirical Pearson coefficients, namely $r_n=\sum_{i=1}^n(X_i-\bar X_n)(Y_i-\bar Y_n)/\sqrt{\sum_{i=1}^n(X_i-\bar X_n)^2 \sum_{i=1}^n(Y_i-\bar Y_n)^2}$ and, when the means are known, $\tilde r_n=\sum_{i=1}^n(X_i-\mathbb E(X))(Y_i-\mathbb E(Y))/\sqrt{\sum_{i=1}^n(X_i-\mathbb E(X))^2 \sum_{i=1}^n(Y_i-\mathbb E(Y))^2}$. Our framework covers two cases of random samples $(X_i, Y_i)$: spherical distributions and Gaussian distributions. In each case, we follow the scheme of Bercu et al. Next, we state SLD for the Moran statistic $T_n=\frac{1}{n}\sum_{i=1}^n\log\frac{X_i}{\bar X_n}+\gamma$, where $\gamma$ is the Euler constant. Finally, the appendix is devoted to some technical results.
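The two coefficients differ only in the centring point (sample means versus known expectations); a small NumPy sketch, with invented sample data and assumed known means:

```python
# Sketch of the two empirical Pearson coefficients studied in the thesis:
# r_n centres the data on the sample means, while the tilde variant centres
# on the known expectations E(X) and E(Y). Data and assumed means are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
mx = 3.0                                  # assumed known E(X)
x = rng.normal(mx, 1.0, 200)
y = 0.5 * x + rng.normal(-1.0, 1.0, 200)  # so E(Y) = 0.5 * mx - 1.0
my = 0.5 * mx - 1.0                       # assumed known E(Y)

def pearson(x, y, cx, cy):
    """Pearson coefficient with an explicit centring point (cx, cy)."""
    dx, dy = x - cx, y - cy
    return float((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))

r_n = pearson(x, y, x.mean(), y.mean())  # sample-mean version r_n
r_tilde = pearson(x, y, mx, my)          # known-mean version
```

Both estimate the same population correlation; the large-deviation rates studied in the thesis quantify how fast the tail probabilities of the two versions decay.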
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Johansson, Emilia. "Factors controlling the sorption of Cs, Ni and U in soil : A statistical analysis with experimental sorption data of caesium, nickel and uranium in soils from the Laxemar area." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281938.

Full text of the source
Abstract:
In the fall of 2006, soils from three small valleys in the Laxemar/Oskarshamn area were sampled. A total of eight composite samples were characterized for a number of soil parameters that are important for geochemical sorption and were later also used in batch sorption experiments. Solid/liquid partition coefficients (Kd values) were then determined for seven radionuclides (Cs, Eu, I, Ni, Np, Sr and U) in each of the eight samples. To contribute to the interpretation of the sorption results together with the soil characterizations, this study aims to describe the sorption behavior of the radionuclides caesium, nickel and uranium and to discern which parameters could provide a basis for estimating the strength of sorption of radionuclides in general. The methods included quantitative approaches such as the compilation of chemical equilibrium diagrams with the software Hydra/Medusa and correlation analyses using the statistical software SPSS Statistics. Based on the speciation diagrams of each radionuclide and the identified important linear and non-linear relationships of the Kd values with a number of soil parameters, the following soil and soil-solution properties were found to have controlled the sorption of Cs, Ni and U, respectively, in the Laxemar soils. Cs: the specific surface area of the soil, coupled to the clay content. Ni: the cation exchange capacity, alkaline solution pH, soil organic matter and dissolved organic matter. U: the cation exchange capacity, soil organic matter, dissolved organic matter, dissolved carbonate and alkaline solution pH. The soil that showed the strongest sorption varied between the nuclides, which can be related to the individual sorption behavior of caesium, nickel and uranium, as well as the different physicochemical properties of the soils.
The parameters that should be prioritized in characterizations of soil samples are identified to be: solution pH, the cation exchange capacity, the specific surface area of the soil, soil organic matter and soil texture (clay content).
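The correlation screening described above can be sketched as follows; all values are invented placeholders, not the Laxemar measurements, and the parameter names are illustrative:

```python
# Sketch of the correlation screening: Pearson r between log10(Kd) for Cs
# and candidate soil parameters across eight samples. All numbers are
# invented placeholders, not the Laxemar data.
import numpy as np
from scipy import stats

clay = np.array([2.0, 10.5, 6.1, 18.0, 1.1, 13.2, 7.4, 9.3])   # clay content, %
cec = np.array([5.1, 12.3, 8.7, 20.4, 3.2, 15.8, 9.9, 11.0])   # cation exchange capacity
kd_cs = 10.0 + 4.0 * clay + np.random.default_rng(3).normal(0.0, 3.0, 8)

for name, param in [("CEC", cec), ("clay content", clay)]:
    r, p = stats.pearsonr(np.log10(kd_cs), param)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```

With only eight samples the p-values are weak, which is why the study combines such screening with speciation diagrams rather than relying on correlations alone.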
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Lima, Leonardo da Silva e. "Centralidades em redes espaciais urbanas e localização de atividades econômicas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/122524.

Full text of the source
Abstract:
In recent years, the study of urban spatial networks has often been used to describe urban phenomena associated with the shape of the city. Research has suggested that centralities are able to describe the urban spatial structure (KRAFTA, 1994; ANAS et al., 1998), making it possible to recognize the spaces that concentrate the most flows, that have the highest land-rent values, or that are the safest, among other aspects related to the urban phenomenon. The hypothesis of this research holds that centralities in urban spatial networks play a key role in the urban spatial structure and in the way land uses are organized. Thus, some measures of centrality in urban spatial networks would be more strongly associated with the economic activities occurring in the city. The research evaluates five measures of centrality applied to three types of urban spatial networks (axial map, node map and segment map). We use five models of centrality in urban spatial networks, known as reach (SEVTSUK, MEKONNEN, 2012), straightness (PORTA et al., 2006b), betweenness (FREEMAN, 1977), planar betweenness (KRAFTA, 1994) and closeness (INGRAM, 1971), in order to determine which is most highly correlated with the occurrence of economic activities. The relationships between these measures of centrality and the locations of economic activities are examined in three Brazilian cities, using the Pearson correlation coefficient (r) as the methodology. The highest correlation between the centrality results and the location of economic activities suggests which centrality measure, way of describing urban space as a network, and distance-processing method (Euclidean or topological) is most closely associated with the occurrence of these activities in the city. The results indicate that Reach, Straightness and Planar Betweenness are the most outstanding centrality models.
In addition, the most relevant Pearson correlation coefficients (r) were obtained when the centrality models were processed considering Euclidean paths in the street-segment network, suggesting that this type of spatial network and distance-processing method generates centralities with the most significant correlation values for the urban phenomenon studied.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Fernandes, Catarina Marques. "Liderança de empoderamento e trabalho digno." Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24511.

Full text of the source
Abstract:
The concept of decent work was legitimized by the International Labour Organization in 1999, seeking to address issues in international labour-related policies. Due to recent changes in the organizational context, empowering leadership has gained prominence in research and practice. The aim of the present study is to analyze the relationship between decent work and empowering leadership and how the dimensions of the two concepts are associated. Data were collected through two questionnaires, the Decent Work Questionnaire and the Empowering Leadership Questionnaire, applied to 901 Portuguese workers. The data were analyzed using Pearson's correlation coefficient. The results indicated that the correlations are generally high and that the decent work dimension Fundamental principles and values at work and the empowering leadership dimensions Participation in decision making, Coaching, and Demonstration of concern/interaction with the team show the highest correlations. These results demonstrate the association between decent work and empowering leadership, suggesting that the two concepts are strongly related, although distinct from each other.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.

Full text of the source
Abstract:
One of the important aims of the continuous software development process is to localize and remove all existing program bugs as fast as possible. This goal is closely related to software engineering and defectiveness estimation. Many big companies started to store source code in software repositories as the latter grew in popularity. These repositories usually include static source code as well as detailed data on defects in software units. This allows analyzing all the data without interrupting the programming process. The main problem of large, complex software is the impossibility of controlling everything manually, while the price of an error can be very high. This might result in developers missing defects at the testing stage and in increased maintenance costs. The general research goal is to find a way of predicting future software defectiveness with high precision. Reducing maintenance and development costs will help reduce the time-to-market and increase software quality. To address the problem of estimating residual defects, an approach was developed to predict the residual defectiveness of software by means of machine learning. As the primary machine learning algorithm, a regression decision tree was chosen as a simple and reliable solution. Data for this tree are extracted from a static source code repository and divided into two parts: software metrics and defect data. Software metrics are formed from the static code, and defect data are extracted from reported issues in the repository. In addition to already reported bugs, they are augmented with unreported bugs found in the "discussions" section of the repository and parsed by a natural language processor. Metrics were filtered by a correlation algorithm to remove those that were not related to the defect data. The remaining metrics were weighted so that the most correlated combination could be used as a training set for the decision tree.
As a result, the built decision tree model allows forecasting defectiveness with an 89% chance for the particular product. The experiment was conducted on a Java project in a GitHub repository and predicted the number of possible bugs in a single file (Java class). It resulted in a method for predicting possible defectiveness from the static code of a single large (more than 1,000 files) software version.
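A hedged sketch of the pipeline described above (correlation-based metric filtering followed by a regression decision tree); the metric names, the 0.3 threshold, and all data are invented, and the repository mining and natural-language steps are omitted:

```python
# Hedged sketch: filter software metrics by their correlation with defect
# counts, then fit a regression decision tree predicting defects per file.
# Metric names, the 0.3 threshold, and all data are invented for illustration.
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n_files = 200
metrics = {
    "loc":        rng.integers(50, 2000, n_files).astype(float),  # lines of code
    "complexity": rng.integers(1, 60, n_files).astype(float),     # cyclomatic complexity
    "authors":    rng.integers(1, 10, n_files).astype(float),     # distinct committers
}
# Synthetic defect counts depending on size and complexity only
defects = 0.01 * metrics["loc"] + 0.3 * metrics["complexity"] + rng.poisson(2, n_files)

# Keep only metrics noticeably correlated with the defect data
kept = [m for m, v in metrics.items() if abs(stats.pearsonr(v, defects)[0]) > 0.3]
X = np.column_stack([metrics[m] for m in kept])

tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, defects)
```

The filtering step mirrors the thesis's use of correlation to drop metrics unrelated to defect data; on real repositories the metrics would come from static analysis and the defect counts from mined issues.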
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Le, Trang Thi, Doan Dang Phan, Bao Dang Khoa Huynh, Van Tho Le, and Van Tu Nguyen. "Phytoplankton diversity and its relation to the physicochemical parameters in main water bodies of Vinh Long province, Vietnam." Technische Universität Dresden, 2019. https://tud.qucosa.de/id/qucosa%3A70829.

Full text of the source
Abstract:
Phytoplankton samples were collected in 2016 during the dry and rainy seasons at nine sampling sites in Vinh Long province, Vietnam. Basic environmental parameters such as temperature, pH, dissolved oxygen, nitrate and phosphate were measured, and a total of 209 phytoplankton species (six phyla, 96 genera) were identified. The phylum with the greatest number of species was Bacillariophyta (82 species), followed by Chlorophyta (61 species), Cyanophyta (39 species), Euglenophyta (21 species), Chrysophyta (three species) and Dinophyta (three species). The phytoplankton density ranged from 4,128 to 123,029 cells/liter. The dominant algae recorded in the study area include Microcystis aeruginosa, Merismopedia glauca, Oscillatoria perornata, Jaaginema sp., Planktothrix agardhii, Coscinodiscus subtilis and Melosira granulata. In particular, Microcystis aeruginosa was the dominant species at the greatest number of sampling sites during the dry-season survey, and this species is classified in the group producing toxins harmful to the environment. Surface water quality according to QCVN 08:2015/BTNMT was classified into Column A1 for pH and nitrate, Column B1 for dissolved oxygen, and Column B2 for phosphate. Phytoplankton community structure and environmental factors changed substantially between the dry and rainy seasons. The Pearson correlation coefficient (r) was used for the correlation analysis. The results indicated that the number of phytoplankton species had a significantly positive correlation with pH, dissolved oxygen and nitrate in the rainy season, while phytoplankton abundance was uncorrelated with environmental factors in both seasons.
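A significance test of this kind of species-environment correlation can be sketched with a plain Pearson r and its t statistic. The nine site values below are invented stand-ins, not the thesis data:

```python
import math

# Hypothetical measurements at nine sampling sites: pH and species counts.
ph      = [6.8, 7.0, 7.2, 7.1, 7.4, 7.6, 7.3, 7.5, 7.8]
species = [38, 40, 45, 43, 50, 55, 47, 52, 60]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt((n-2)/(1-r^2)), compared against t with n-2 d.f."""
    return r * math.sqrt((n - 2) / (1 - r * r))

r = pearson_r(ph, species)
t = t_statistic(r, len(ph))  # significant if |t| exceeds the critical value
```

With nine sites the critical two-tailed t value at the 5% level (7 degrees of freedom) is about 2.365.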
10

Siqueira, Lucas Alfredo. "Titulador automático baseado em filmes digitais para determinação de dureza e alcalinidade total em águas minerais." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9013.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Total hardness and total alkalinity are important physico-chemical parameters for evaluating water quality and are determined by volumetric analytical methods. With these methods it is hard to detect the end point of the titration, because the colour transition inherent to each of them is difficult to see. To circumvent this problem, a new automatic method is proposed here for detecting the titration end point in the determination of total hardness and total alkalinity in mineral water samples. The proposed flow-batch titrator consists of a peristaltic pump, five three-way solenoid valves, a magnetic stirrer, an electronic actuator, an Arduino MEGA 2560TM board, a mixing chamber and a webcam. The webcam records a digital movie (DM) during the addition of the titrant into the mixing chamber, capturing the colour variations produced by the chemical reactions between titrant and sample within the chamber. While the DM is recorded, it is decomposed into sequentially ordered frames at a constant rate of 30 frames per second (FPS). The first frame is used as a reference to define a region of interest (RI) of 48 × 50 pixels, whose R-channel values are used to calculate Pearson correlation coefficient (r) values; r is calculated between the R values of the initial frame and those of each subsequent frame. The titration curves are plotted in real time using the values of r (ordinate axis) and the total opening time of the titrant valve (abscissa axis), and the end point is estimated by the second-derivative method. Software written in ActionScript 3.0 manages all analytical steps and data treatment in real time. The feasibility of the method was attested by applying it to the analysis of natural water samples. The results were compared with classical titration and showed no statistically significant differences under the paired t-test at the 95% confidence level.
The proposed method can process about 71 samples per hour, and its precision was confirmed by overall relative standard deviation (RSD) values, always lower than 2.4% for total hardness and 1.4% for total alkalinity.
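The frame-correlation endpoint detection described above can be sketched numerically. The sigmoid below merely simulates the r-versus-time curve; in the real titrator r comes from comparing webcam frames:

```python
import numpy as np

# Simulated titration curve: Pearson r between the reference frame's R channel
# and each later frame stays near 1, then drops sharply at the colour change.
t = np.linspace(0.0, 10.0, 101)              # total valve-opening time, s
r = 1.0 / (1.0 + np.exp(4.0 * (t - 6.0)))    # hypothetical colour transition at t = 6 s

# The end point sits at the steepest part of the drop, i.e. where the second
# derivative of r(t) crosses zero; numerically we take the maximum |dr/dt|.
slope = np.gradient(r, t)
endpoint = float(t[np.argmax(np.abs(slope))])
```

With the simulated transition centred at 6 s, the detected end point lands on that grid point.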
11

Ozbal, Gozde. "A Content Boosted Collaborative Filtering Approach For Movie Recommendation Based On Local &." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610984/index.pdf.

Abstract:
Recently, it has become more and more difficult for existing web-based systems to locate or retrieve relevant information, due to the rapid growth of the World Wide Web (WWW) in terms of both the information space and the number of users in that space. In today's world, however, many systems and approaches make it possible for users to be guided by recommendations about new items such as articles, news, books, music, and movies. Many traditional recommender systems fail when the data used throughout the recommendation process are sparse: when there is an inadequate number of items or users in the system, unsuccessful recommendations are produced. This thesis presents ReMovender, a web-based movie recommendation system that uses a content-boosted collaborative filtering approach. ReMovender combines local/global similarity and missing-data prediction techniques to handle the previously mentioned sparseness problem effectively. In addition, by taking the content information of the movies into account during item-similarity calculations, it achieves more successful and realistic predictions.
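A Pearson-based similarity step, a common core of such collaborative filtering, can be sketched as follows. The users, movie ids and ratings are made up, and ReMovender's actual similarity additionally blends local/global and content information:

```python
import math

# Toy user-item ratings (hypothetical movie ids).
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 5},
    "carol": {"m1": 1, "m2": 5, "m3": 2},
}

def pearson_sim(u, v):
    """Pearson similarity computed over the items both users rated."""
    common = sorted(set(ratings[u]) & set(ratings[v]))
    xs = [ratings[u][i] for i in common]
    ys = [ratings[v][i] for i in common]
    n = len(common)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0
```

Neighbours with high positive similarity would then drive the rating prediction; the sparsity problem appears exactly when `common` is small or empty.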
12

Kovařík, Tomáš. "Řízení poslechových testů pro subjektivní hodnocení kvality audio signálu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219467.

Abstract:
The aim of this thesis was to carry out listening tests. Appropriate methods were selected for these tests, the tests were performed, and the data were analyzed statistically. From the results of the first test the resulting interval scale was compiled, and in the second listening test average SNR values were determined for the background noises.
13

Mihulka, Tomáš. "Evoluční optimalizace analogových obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363843.

Abstract:
The aim of this work was to create a system for optimising specific analog circuits by evolution using multiple fitness functions. A set of experiments was run and the results were analyzed to evaluate the feasibility of evolutionary optimisation of analog circuits. This goal required studying and choosing particular types of analog circuits and evolutionary algorithms: for the scope of this work, amplifiers and oscillators were chosen as target circuits, and genetic algorithms and evolution strategies as the evolutionary algorithms. The motivation for this work is the ongoing effort to automate the design and optimisation of analog circuits, where evolutionary optimisation is one of the options.
14

Watanabe, Jorge. "Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/44/44137/tde-14082008-165227/.

Abstract:
This master's dissertation presents the results of a survey of co-estimation methods commonly used in geostatistics: ordinary cokriging, collocated cokriging and kriging with an external drift. Ordinary kriging was also considered, simply to illustrate how it behaves when the primary variable is poorly sampled. As is well known, co-estimation methods depend on a secondary variable sampled over the estimation domain; moreover, this secondary variable should present linear correlation with the main, or primary, variable. Usually the primary variable is poorly sampled whereas the secondary variable is known over the whole estimation domain. For instance, in oil exploration the primary variable is porosity, measured on rock samples gathered from drill holes, and the secondary variable is seismic amplitude derived from processing seismic reflection data. Primary and secondary variables must present some degree of correlation, but how the methods perform as a function of the correlation coefficient is an open question. Thus, we tested the co-estimation methods on several data sets presenting different degrees of correlation, generated in the computer by data-transform algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588 and 0.461. Collocated simple cokriging was the best method among all tested. This method has an internal filter applied when computing the weight of the secondary variable, which in turn depends on the correlation coefficient: the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation coefficient between the primary and secondary variables is low, which is the most impressive result to come out of this research.
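Synthetic variable pairs with a prescribed correlation, like the five test data sets above, can be generated with the textbook two-variable construction. The dissertation's exact transform algorithm is not specified here, so this is only the standard sketch:

```python
import numpy as np

def correlated_pair(n, rho, seed=0):
    """Two standard-normal samples whose population correlation is rho:
    y = rho*x + sqrt(1 - rho^2)*z with independent x, z."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    z = rng.standard_normal(n)
    y = rho * x + np.sqrt(1.0 - rho**2) * z
    return x, y

# Reproduce one of the study's correlation levels and check the sample r.
x, y = correlated_pair(100_000, 0.752)
r_hat = float(np.corrcoef(x, y)[0, 1])
```

For large n the sample correlation converges to the prescribed rho, so each of the five levels (0.993 down to 0.461) can be realised to within sampling error.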
15

Gaspar, Willians Cesar Rocha. "A correlação entre jornada de trabalho e produtividade: uma perspectiva macroeconômica entre países." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19961.

Abstract:
This research has as its general objective to identify the variables or contributing factors that can inform the discussion about reducing the working day. As a specific objective, it proposes to verify how these same variables affect productivity. For both objectives the macroeconomic aspects of the countries analyzed are considered. The criterion for selecting these countries is based on the ranking of the OECD and World Bank databases for the base year 2013, considering the set of major world economies, which together represent 65.22% of global GDP. The data extracted refer to Gross Domestic Product at Purchasing Power Parity (GDP at PPP), i.e. GDP in international dollars, which allows these economies to be compared on a purchasing-power basis. Other sources of information were considered as objects of analysis and observation, including statistical series of secondary data from the International Labour Office (ILO), the International Monetary Fund (IMF), the United Nations (UNDP), the Brazilian Institute of Geography and Statistics (IBGE), the Inter-Union Department of Statistics and Socioeconomic Studies (DIEESE) and the Institute of Applied Economic Research (IPEA). The research was conducted at the macroeconomic level of the countries, with a longitudinal temporal cut between 2007 and 2013, in order to observe the behavior of these economies, including during the 2008 global crisis. In this sense, the evolution of the historical GDP series was assessed as revealing the size of each economy, along with GDP per capita, which captures wealth relative to the population. Finally, the labor productivity factor itself is considered, which deals with the relationship between GDP, the number of people and the number of hours worked in the period.
Design/Methodology/Approach – The method is qualitative research of the exploratory type, supported by quantitative correlational analysis; the statistical design is directed at verifying the degree of association between the variables working day and labor productivity, that is, the calculation and interpretation of the degree of correlation between these two variables. Findings – In the final conclusion of the study, it is inferred, based on the theoretical framework and the analysis of the statistical data, whether the reduction in the working day contributes to changes in productivity indexes, and which other variables should be considered in this discussion. Research limitations – Aspects of national culture, climatic conditions and the segregation of nations by the percentage shares of agriculture, industry and services in their economies were not considered, which would have allowed comparative analysis by subgroups. In addition, the sample is restricted both in the number of countries and in the relatively short period between 2007 and 2013, which was moreover marked by an atypical event, the global economic crisis of 2008. Practical contributions – The study invites governments, organizations and workers to rethink the possible economic and social benefits of public policies that allow greater flexibility in working hours, focusing on competitive advantages and the balance between labor and capital, while observing legal aspects, productivity, quality of life, unit costs and job creation.
16

Lee, Wan Yi, and 李宛懌. "Is Pearson Sample Correlation Coefficient Always Feasible To Test For Correlations ?" Thesis, 2016. http://ndltd.ncl.edu.tw/handle/43295031226491363248.

17

Lei, Cheng. "Student performance prediction based on course grade correlation." Thesis, 2019. http://hdl.handle.net/1828/10654.

Abstract:
This research explored the relationship between an earlier-year technical course and a later-year technical course for students who graduated between 2010 and 2015 with the degree of Bachelor of Engineering, focusing only on courses in the Electrical Engineering program at the University of Victoria. Three approaches based on two major factors, coefficient and enrolment, were established to select the course-grade predictor: Max(Pearson Coefficient), Max(Enrolment), and Max(Pi), a combination of the two factors. The prediction algorithm used was linear regression, and the results were evaluated by Mean Absolute Error (MAE) and prediction precision. The results show that most course pairs could not be used to reliably predict a student's performance in one course from another. However, since fourth-year courses are specialization-related and generally have relatively small enrolments, some course pairs with fourth-year CourseYs and acceptable MAE and prediction precision could serve as early references and advice for students selecting a specialization direction in their first or second academic year.
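The regression-and-MAE evaluation for one course pair can be sketched as follows. The grades are invented; the thesis additionally selects the predictor course by Pearson coefficient, enrolment, or their combination:

```python
import numpy as np

# Hypothetical grades for one course pair: predictor CourseX, target CourseY.
x = np.array([62, 75, 80, 55, 90, 70], dtype=float)  # earlier-year course
y = np.array([60, 72, 85, 50, 88, 74], dtype=float)  # later-year course

# Ordinary least squares fit y ~ a*x + b, as in the thesis' linear regression.
a, b = np.polyfit(x, y, 1)
pred = a * x + b

# Mean Absolute Error of the fitted predictions.
mae = float(np.abs(pred - y).mean())
```

A course pair would be judged usable only if its MAE and prediction precision cleared the chosen thresholds.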
Graduate
18

Chen, Chin-Han, and 陳勁含. "Constructing Molecular Phylogeny by Pearson's Correlation Coefficient and Molecular Phylogenetic Analysis of PRMT Super Family." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70958272384702013640.

Abstract:
Master's thesis
Chung Shan Medical University
Master's Program, Department of Biomedical Sciences
Academic year 101 (ROC calendar, 2012)
The evolutionary relationships of all living organisms can be viewed in a phylogenetic tree, and many methods have been developed to evaluate such relationships. However, multiple sequence alignment (MSA) must be performed before these methods can be applied, and several studies have shown that the order in which sequences are added to an MSA can significantly affect the end result. We therefore asked whether another method could give more reliable results; our goal is to construct a unique and reasonable tree-building method better than the existing ones. Here we propose a novel approach to replace the MSA process: we combine pair-wise sequence alignment (BLAST) and Pearson's correlation coefficient (PCC) to model the interactive relationships of the compared sequences, and these relationships are then clustered by hierarchical clustering (HC). The results show that our method indeed alleviates the problems that MSA may introduce; it also has better clustering ability than the conventional methods and produces a more reasonable tree. We subsequently used the method to perform a phylogenetic analysis of the protein arginine methyltransferase (PRMT) families. In addition, we investigated whether the pattern of each PRMT family can be identified, which would allow fast classification of an unknown sequence.
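The BLAST-plus-PCC idea can be sketched on toy data: take each sequence's vector of pairwise scores against the whole set, compute Pearson correlations between these profiles, and merge the most-correlated pair first. The bit scores are invented, and the thesis performs full hierarchical clustering rather than this single greedy step:

```python
import numpy as np

# Hypothetical BLAST bit-score profiles: row i holds sequence i's scores
# against every sequence in the set (stand-in for the pairwise alignments).
scores = np.array([
    [100, 80, 75, 20],
    [ 80, 98, 70, 25],
    [ 75, 70, 95, 30],
    [ 20, 25, 30, 90],
], dtype=float)

# Pearson correlation between score profiles: sequences with similar
# relationships to the rest of the set get high PCC.
pcc = np.corrcoef(scores)

# First agglomeration step: join the most-correlated off-diagonal pair
# (the diagonal is masked out by subtracting 2 from it).
flat = np.argmax(pcc - 2.0 * np.eye(len(pcc)))
i, j = divmod(flat, len(pcc))
first_merge = tuple(sorted((int(i), int(j))))
```

Repeating the merge on the reduced matrix yields the full dendrogram used as the phylogenetic tree.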
19

Lin, Jian-Fa, and 林建發. "Transitive Pearson Product-Moment Correlation Coefficient Based Particle Swarm Optimization on Applying Hyperspectral Image Dimension Reduction." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5ruft5.

Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Electrical Engineering
Academic year 105 (ROC calendar, 2016)
In recent years, progress in satellite remote-sensing technology and its wide range of applications has increased the number of hyperspectral imaging bands and the size of the datasets. To prevent erroneous data and noisy bands from lowering the classification accuracy, hyperspectral image processing is used to select the representative spectral bands, so reducing data complexity is an essential procedure. This paper proposes the Transitive Pearson Product-Moment Correlation Coefficient (TPMCC), which improves the correlation coefficient between two bands through similar neighboring bands satisfying specified conditions, enhancing the ability to select bands and effectively achieving dimensionality reduction. A previous study proposed Particle Swarm Optimization (PSO), in which the correlation-coefficient matrix generated from the original hyperspectral image is clustered into modules in feature space and representative bands are then chosen for dimension reduction. However, when dealing with images containing many categories, the correlation-coefficient matrix groups the bands of each category inefficiently; in addition, PSO is easily disturbed and cannot find a suitable universal correlation-coefficient matrix. In this dissertation, the Salinas AVIRIS and Washington DC Mall HYDICE remote-sensing images are used in the experiments. The experimental results show that the TPMCC algorithm is more effective than the plain Pearson product-moment correlation coefficient, improving the dimension-reduction rate, reducing the number of selected bands and achieving good classification results.
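The starting point, a band-to-band Pearson matrix, can be sketched as below. The exact TPMCC update rule is not given in this abstract, so the max-min relaxation through a common neighbour band is only an illustrative stand-in for the "transitive" enhancement, and the cube is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hyperspectral cube flattened to (pixels, bands): six bands
# sharing one underlying signal plus band-specific noise.
pixels, bands = 200, 6
base = rng.standard_normal((pixels, 1))
cube = base @ np.ones((1, bands)) + 0.3 * rng.standard_normal((pixels, bands))

# Band-to-band Pearson correlation matrix (columns are bands).
r = np.corrcoef(cube, rowvar=False)

# Illustrative "transitive" pass: two bands are treated as at least as
# correlated as their best common neighbour, r'_ij >= min(r_ik, r_kj).
r_t = r.copy()
for k in range(bands):
    r_t = np.maximum(r_t, np.minimum.outer(r[:, k], r[k, :]))
```

Band selection would then pick one representative band per highly correlated group of the (enhanced) matrix.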
20

Říha, Samuel. "Parciální a podmíněné korelační koeficienty." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-350851.

21

Fonseca, João Miguel Lucas da. "Covid-19 versus H1N1: a comparative study of the impact of viruses on small and large economies on the Stock Market: The special study of Portugal." Master's thesis, 2022. http://hdl.handle.net/10362/134920.

Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management.
This dissertation describes in detail the impact that pandemic crises have on stock-market volatility in small and large economies. As case studies we chose the United States of America (a large economy), Greece (a small economy) and Portugal, the latter as a comparison to the two previous countries but only with respect to the consequences of the Covid-19 pandemic. In terms of methodology, daily financial volatility is traditionally modeled as a GARCH(1,1) process; this model was implemented in SAS to test whether volatility could be correlated. Additionally, Pearson linear correlation coefficients were computed and analyzed for several variables, such as the closing value and confirmed cases and deaths against each country's daily volatility, as well as the daily historical volatilities. Finally, the study shows how the pandemics of the 21st century affected both the stock market (financially) and gross domestic product (economically). This dissertation confirms that there is, in fact, enormous stock-market volatility at the beginning of an atypical phenomenon; after a certain period of time, however, the market corrects itself. Notably, during the Covid-19 pandemic, although the United States suffered a repercussion in its gross domestic product, Portugal, like Greece (small economies), experienced even stronger economic effects. On the financial side, Portugal is comparable to Greece in the correlations between daily historical volatilities, yet closer to the United States in the correlations of the other variables described above, such as the closing value and confirmed cases and deaths against each country's daily volatility. It is concluded that the pandemics affected the stock markets of the countries studied.
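The historical-volatility correlation step can be sketched with numpy on simulated closes. The price series are random stand-ins, not the actual index data, and the dissertation's conditional volatility comes from a GARCH(1,1) fit in SAS rather than this rolling estimate:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily closing prices for two markets (geometric random walks).
n = 300
us = 100.0 * np.exp(np.cumsum(0.010 * rng.standard_normal(n)))
pt = 50.0 * np.exp(np.cumsum(0.012 * rng.standard_normal(n)))

def rolling_vol(close, window=21):
    """Annualised historical volatility from a rolling std of log returns."""
    logret = np.diff(np.log(close))
    vols = np.array([logret[i - window:i].std()
                     for i in range(window, len(logret) + 1)])
    return vols * np.sqrt(252)          # 252 trading days per year

v_us, v_pt = rolling_vol(us), rolling_vol(pt)

# Pearson correlation between the two daily historical-volatility series.
corr = float(np.corrcoef(v_us, v_pt)[0, 1])
```

The same `np.corrcoef` call would relate volatility to closing values or to confirmed case and death counts.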
APA, Harvard, Vancouver, ISO and other styles
22

Vondra, Jan. "Vliv vybraných kondičních faktorů na výkonnost ve vodním slalomu." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-342055.

Full text of the source
Abstract:
Title: Influence of selected conditional factors on performance in whitewater slalom. Aims: The aim of the study was to investigate the relationship between selected specific movement abilities, examined with a modified test battery, and the performance of athletes in whitewater slalom. Methods: Field measurements were used, applying the modified test battery. A GPS module was used to determine the distances of the partial tests of the battery; timing was measured manually. To determine the statistical relationship between the modified battery and the competitors' performance, two different correlation coefficients and regression analysis were used. For the rank order in the test and the race, a nonparametric measure, the Spearman correlation coefficient, was used. The statistical significance of the relationship between the times measured in the tests and the final time in the nomination races was determined with the Pearson correlation coefficient. Results: A relationship was considered statistically significant when r ≥ 0.8. Spearman's correlation coefficient: in the 40 m test, the following coefficients were obtained: nomination races rs = 0.380952, Czech cup rs = 0.595238. In the 80 m test, the following coefficients were obtained: nomination races rs = 0.857143,...
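The two coefficients the study relies on can be computed without any statistics library. A minimal sketch, assuming equal-length samples and no tied ranks in the Spearman case:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman coefficient: the Pearson correlation of the ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

On a perfectly monotone but nonlinear pair of series, `spearman` returns 1 while `pearson` stays below 1, which is why the study applies Spearman to rank orders and Pearson to the measured times.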
APA, Harvard, Vancouver, ISO and other styles
23

KŘÍŽOVÁ, Tereza. "Měření návštěvnosti." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-394863.

Full text of the source
Abstract:
The objective of this thesis is to demonstrate the possibilities of using data from the electronic revenue records (EET) to measure visit rates, and to formulate recommendations for their use in tourism. The thesis focuses on the tourism sector; concepts and related terminology are explained. The thesis describes sources of information about visitors, visitor profiles, the decision-making process behind visits, and selected technologies used to measure visit rates. Reasons for, problems with, and a classification of visit-rate measurements are included as well. The practical part examines the use of information from electronic revenue records for the purpose of measuring the number of visitors, based on the calculation of Pearson correlation coefficients. The principle of how EET functions is explained in the thesis. A significant part of the work is the analysis of daily and monthly revenues from electronic records in the lodging sector across the regions of the Czech Republic. Based on this analysis, 6 groups are identified in which daily seasonality develops in a specific way. An important part is also the calculation of the average cost of accommodation per region, which identifies certain economic impacts of tourism. The thesis closes with summarized recommendations for the use of data from EET.
APA, Harvard, Vancouver, ISO and other styles
24

Pereira, Carlos Miguel Dias. "Teste e validação de um programa de simulação do comportamento termofisiológico do corpo humano." Master's thesis, 2014. http://hdl.handle.net/10316/38875.

Full text of the source
Abstract:
Integrated Master's dissertation in Mechanical Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra.
The importance of programs simulating the behaviour of the human thermoregulatory system has increased greatly. They have been used to improve thermal comfort inside buildings, to enhance performance in sporting activities, to develop protocols for specific activities and to study situations in which people are subjected to high levels of stress resulting from very cold or very hot thermal conditions. This was the motivation that led to the development, at DEM-FCTUC, of a software tool that simulates the behaviour of the thermoregulatory system of the human body: the Huthereg program. Given the physical characteristics of the person, the amount of clothing, the level of activity and the conditions of the thermal environment where he or she is, the Huthereg program can predict a broad set of parameters (the internal temperature of the human body, skin and clothing temperatures, sensible and latent heat flows, the global thermal state and that of each zone of the human body, etc.). This study is intended to validate and test the Huthereg program, seeking to discover its strengths, weaknesses and limitations and to test its reliability. To this end, numerous cases reported in the scientific literature were simulated, and the experimental results taken from them were compared with the corresponding values predicted by the software. In order to test the program over a broad range of situations, the selected cases involve different thermal environments varying from extremely hot to extremely cold, different levels of clothing (from naked subjects to subjects wearing thermal protective clothing) and different levels of activity (from rest to intense physical exercise). Dressed subjects were studied with different levels of thermal insulation and with changes in the distribution of the clothing over the various parts of the human body.
As auxiliary tools to evaluate the level of agreement between the experimental results taken from the literature and those predicted by the software, the following statistical functions were used: the arithmetic mean of the relative differences (and the respective standard deviation), the root mean square deviation and, given extra importance, the Pearson correlation coefficient. These functions assess the proximity between the measured and predicted values and the type of relationship that exists between them. Additionally, graphics were used in the evaluation. The proposed objectives were achieved: the Huthereg program was tested over a very wide range of situations, and good to very good results were obtained, excellent in many cases. Even when differences between experimental and predicted values were detected, the program may not be at fault: in the bibliography, the protocol used in the tests is almost never fully detailed, the description of the thermal environments is often incomplete, the level of physical activity is not quantified in numerical terms, and the distribution of clothing over the different zones of the human body is never presented. Within the range of situations analysed, no limitations of the software were found. Taking this into consideration, the usefulness, applicability and reliability of the Huthereg program in simulating the thermophysiological response of the human body are apparent. It is expected that the program will continue to be used in research studies and technological developments, to evaluate thermal situations that may entail risks, or simply to improve performance in work, leisure or sport related activities.
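The three agreement measures described in the abstract can be sketched as follows; the measured and predicted series below are illustrative, not data from the dissertation.

```python
def agreement_metrics(measured, predicted):
    """Mean of relative differences (with its standard deviation), root mean
    square deviation, and Pearson correlation between two equal-length series."""
    n = len(measured)
    rel = [(p - m) / m for m, p in zip(measured, predicted)]
    mean_rel = sum(rel) / n
    sd_rel = (sum((r - mean_rel) ** 2 for r in rel) / n) ** 0.5
    rmsd = (sum((p - m) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5
    mm, mp = sum(measured) / n, sum(predicted) / n
    cov = sum((m - mm) * (p - mp) for m, p in zip(measured, predicted))
    sm = sum((m - mm) ** 2 for m in measured) ** 0.5
    sp = sum((p - mp) ** 2 for p in predicted) ** 0.5
    return mean_rel, sd_rel, rmsd, cov / (sm * sp)

# Illustrative skin temperatures (°C): predictions offset by a constant bias
mean_rel, sd_rel, rmsd, r = agreement_metrics([30.0, 32.0, 34.0], [31.0, 33.0, 35.0])
```

A constant bias shows up in the mean relative difference and the RMSD but leaves the Pearson coefficient at 1, which is why the dissertation uses the three measures together rather than the correlation alone.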
APA, Harvard, Vancouver, ISO and other styles
25

Bárta, Vít. "Komparace konsolidace demokracie na území bývalého východního bloku." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-353713.

Full text of the source
Abstract:
The aim of this thesis is to evaluate numerically the democratic consolidation of the Eastern European countries of the former Eastern Bloc, to compare these countries with each other, and to decide which of them can be considered consolidated democracies. A secondary aim is to find which factors supported this consolidation, or at least correlate with it. The theoretical basis of this work is Wolfgang Merkel's theory of democratic consolidation, which divides democratic consolidation into four levels: constitutional consolidation, representative consolidation, behavioral consolidation and the democratic consolidation of political culture. Each level of democratic consolidation is expressed numerically, using Bertelsmann transformation index data, separately for all states at two-year intervals from 2005 to 2015. Based on that, overall democratic consolidation is calculated, so the countries can be compared with each other and over time. The correlation between factors supporting consolidation and overall democratic consolidation is expressed by the Pearson correlation coefficient. This work is beneficial in creating and describing a method that can be used for the numerical expression of democratic consolidation in any state from 2005 to 2015 without the author's subjective influence. Another benefit is...
APA, Harvard, Vancouver, ISO and other styles