Dissertations on the topic "Pearson Correlation Coefficient 13"
Consult the top 25 dissertations for research on the topic "Pearson Correlation Coefficient 13".
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Al, Samrout Marwa. "Approches mono et bi-objective pour l'optimisation intégrée des postes d'amarrage et des grues de quai dans les opérations de transbordement." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMLH21.
International maritime transport is vital for global trade, representing over 85% of exchanges, with 10.5 billion tons transported each year. This mode of transport is the most economical and sustainable, contributing only 2.6% of CO2 emissions. In France, the maritime sector accounts for 1.5% of GDP and nearly 525,000 jobs. Maritime ports, crucial for the logistics chain, facilitate the transshipment of goods and increasingly adopt digital solutions based on artificial intelligence to improve their efficiency. France has eleven major seaports, seven of which are located in mainland France. The thesis focuses on optimizing container terminals to enhance the efficiency and performance of ports. It addresses the issues of berth allocation planning and crane activation in container terminals in response to recent changes in maritime logistics, such as the arrival of mega-ships and automation; it highlights gaps in the existing literature and offers an in-depth analysis of current challenges. The document is divided into three chapters. The first chapter explores the history of containerization, types of containers, and challenges in operational planning. It focuses on the berth allocation problem (BAP), its resolution methods, and the integration of artificial intelligence (AI) to optimize logistical processes. The second chapter introduces the dynamic allocation problem with ship-to-ship transshipment. It proposes a mixed-integer linear program (MILP) to optimize the berthing schedule and transshipment between vessels. The objective is to reduce vessel stay times in the terminal, as well as penalties due to vessel delays, and to determine the necessary transshipment method. The method combines a packing-type heuristic and an improved genetic algorithm, demonstrating effectiveness in reducing vessel stay times. We conducted a statistical analysis to identify effective control parameters for the GA, then applied the algorithm with the determined control parameters in numerical experiments on randomly generated instances. Additionally, we conducted a comparative study to evaluate different crossover operators using ANOVA. We then presented a series of examples based on random data, solved with the CPLEX solver, to confirm the validity of the proposed model. The proposed method is capable of solving the problem in an acceptable computation time for medium and large instances. The final chapter presents an integrated berth and crane allocation problem, focusing on ship-to-ship transshipment. Three approaches are proposed. The first uses the NSGA-III genetic algorithm, supplemented by a statistical analysis to optimize parameters and evaluate different crossover operators. Analysing AIS database data, numerical tests demonstrate the effectiveness of this method at the port of Le Havre, yielding satisfactory results within a reasonable computation time. The second approach involves two regression models, Gradient Boosting Regression (GBR) and Random Forest Regression (RFR), trained on selected features; the methodology includes preprocessing steps and hyperparameter optimization. While NSGA-III achieves the highest accuracy, it requires a longer execution time. In contrast, although GBR and RFR are slightly less precise, they significantly improve efficiency, highlighting the trade-off between accuracy and execution time in practical applications.
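The abstract names a MILP plus a packing heuristic and an improved genetic algorithm but gives no code. As a loose illustration of the permutation-GA idea behind berth sequencing, here is a minimal Python sketch; every number (handling times, arrivals, deadlines, penalties) is invented, and the operators are generic, not the thesis's tuned ones.

```python
import random

# Toy single-berth sequencing: vessels queue at one berth; minimize
# total stay time plus lateness penalties. All data below is invented.
handling = [5, 3, 8, 2, 6]          # handling time per vessel
arrival  = [0, 1, 2, 3, 4]          # arrival times
due      = [10, 8, 20, 9, 18]       # contractual departure deadlines
penalty  = [2, 1, 3, 1, 2]          # per-unit lateness penalty

def cost(order):
    t, total = 0, 0
    for v in order:
        start = max(t, arrival[v])
        finish = start + handling[v]
        total += finish - arrival[v]                   # stay time
        total += penalty[v] * max(0, finish - due[v])  # delay penalty
        t = finish
    return total

def ga(pop_size=30, gens=200, pm=0.2):
    n = len(handling)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)               # order-crossover-like mix
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if random.random() < pm:                   # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print(best, cost(best))
```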
Kalaitzis, Angelos. "Bitcoin - Monero analysis: Pearson and Spearman correlation coefficients of cryptocurrencies." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41402.
Bergamaschi, Denise Pimentel. "Correlação intraclasse de Pearson para pares repetidos: comparação entre dois estimadores." Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/6/6132/tde-01102014-105050/.
Objective. This thesis presents and compares, theoretically and empirically, two estimators of the intraclass correlation coefficient ρI, defined as Pearson's pairwise intraclass correlation coefficient. The first is the "natural" estimator, obtained as Pearson's product-moment correlation over the members of one class (rI), while the second is obtained as a function of components of variance (icc). Methods. Theoretical and empirical comparisons of the parameters and estimators are performed. The theoretical comparison involves two definitions of the intraclass correlation coefficient ρI as a measure of reliability for two repeated measurements in the same class, together with the analysis-of-variance technique and the definition and interpretation of the estimators rI and icc. The empirical comparison was carried out by means of a Monte Carlo simulation of pairs of values correlated according to Pearson's pairwise correlation. The pairs follow a bivariate normal distribution, with sample sizes and correlation values fixed in advance: n = 15, 30 and 45, with fixed values of ρI. Results. The bias and mean square error of the estimators were compared, as well as the width of the confidence intervals. The comparison shows that the bias of icc is always smaller than that of rI; the same applies to the mean square error. Conclusions. The icc is the better estimator, especially for n less than or equal to 15. For larger sample sizes (n = 30 or more), the estimators produce results that agree to the second decimal place.
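For readers who want to reproduce the comparison, here is a small Python sketch of the two estimators under the usual one-way ANOVA formulas for classes of size two; the function names and the Monte Carlo settings are mine, not the thesis's.

```python
import numpy as np

def r_pairwise(pairs):
    """Pearson's pairwise ("natural") intraclass estimator r_I:
    each class contributes both orderings (x, y) and (y, x)."""
    x = np.concatenate([pairs[:, 0], pairs[:, 1]])
    y = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return np.corrcoef(x, y)[0, 1]

def icc_anova(pairs):
    """One-way ANOVA estimator: (MSB - MSW) / (MSB + MSW) for k = 2."""
    n = pairs.shape[0]
    grand = pairs.mean()
    msb = 2 * ((pairs.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((pairs - pairs.mean(axis=1, keepdims=True)) ** 2).sum() / n
    return (msb - msw) / (msb + msw)

# Monte Carlo comparison on bivariate normal pairs, in the spirit of the thesis
rng = np.random.default_rng(0)
rho, n = 0.6, 15
cov = [[1, rho], [rho, 1]]
est = [(r_pairwise(p), icc_anova(p))
       for p in (rng.multivariate_normal([0, 0], cov, size=n) for _ in range(2000))]
print(np.mean(est, axis=0))   # average of each estimator; compare with rho = 0.6
```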
Truong, Thi Kim Tien. "Grandes déviations précises pour des statistiques de test." Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2057/document.
This thesis focuses on the study of sharp large deviations (SLD) for two test statistics: Pearson's empirical correlation coefficient and the Moran statistic. The first two chapters recall general results on SLD principles and on the Laplace methods used in the sequel. We then study the SLD of the empirical Pearson coefficient, namely $r_n=\frac{\sum_{i=1}^n(X_i-\bar X_n)(Y_i-\bar Y_n)}{\sqrt{\sum_{i=1}^n(X_i-\bar X_n)^2\,\sum_{i=1}^n(Y_i-\bar Y_n)^2}}$ and, when the means are known, $\tilde r_n=\frac{\sum_{i=1}^n(X_i-\mathbb E(X))(Y_i-\mathbb E(Y))}{\sqrt{\sum_{i=1}^n(X_i-\mathbb E(X))^2\,\sum_{i=1}^n(Y_i-\mathbb E(Y))^2}}$. Our framework covers two cases for the random sample $(X_i, Y_i)$: spherical distributions and Gaussian distributions. In each case we follow the scheme of Bercu et al. Next, we state SLD for the Moran statistic $T_n=\frac{1}{n}\sum_{k=1}^n\log\frac{X_k}{\bar X_n}+\gamma$, where $\gamma$ is the Euler constant. Finally, the appendix is devoted to some technical results.
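The statistics themselves are straightforward to compute; a short numpy sketch of $r_n$, $\tilde r_n$ and $T_n$ as defined above (the exponential test case is my illustration, not the thesis's):

```python
import numpy as np

def pearson_rn(x, y):
    # r_n with empirical means (the centred statistic studied in the thesis)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def pearson_rn_known_means(x, y, mx, my):
    # \tilde r_n when E(X) and E(Y) are known
    xc, yc = x - mx, y - my
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

def moran_Tn(x):
    # T_n = (1/n) sum log(X_k / mean) + Euler's constant
    return np.log(x / x.mean()).mean() + np.euler_gamma

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000)
y = 0.5 * x + rng.exponential(scale=2.0, size=1_000)
print(pearson_rn(x, y), pearson_rn_known_means(x, y, 2.0, 3.0))
print(moran_Tn(x))   # under exponentiality T_n is close to 0
```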
Johansson, Emilia. "Factors controlling the sorption of Cs, Ni and U in soil : A statistical analysis with experimental sorption data of caesium, nickel and uranium in soils from the Laxemar area." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281938.
In order to make decisions related to hypothetical future contamination from a final repository for radioactive waste, it is crucial to understand the mobility of radioactive elements in the environment. Sorption is one of the most important chemical mechanisms that can reduce the spread of radionuclides in water/soil/rock systems, where the nuclides partition between the liquid phase and the surfaces of solid particles. Distribution coefficients (Kd values) are generally used as a quantitative measure of sorption, a high Kd value meaning that a larger share of the element in question is bound to the solid phase. In the autumn of 2006, soil samples were taken from three valleys in Laxemar/Oskarshamn. In total, eight soil samples were characterized for a number of soil parameters important for geochemical sorption and were later used in batch experiments together with a natural groundwater. Distribution coefficients (Kd values) were determined for seven radionuclides (Cs, Eu, I, Ni, Np, Sr and U) for each of the eight soil samples. To support the interpretation of the sorption results together with the properties of the soil samples, this study aims to describe the sorption behaviour of the radionuclides caesium, nickel and uranium and to identify which parameters can serve as a basis for estimating the sorption strength of radionuclides in general. To achieve this aim, the study has the following objectives: to identify the soil and soil-solution properties that control the sorption of Cs, Ni and U in the eight Laxemar samples; to determine which Laxemar soil sample sorbs the three radionuclides most strongly; and to identify the soil parameters that should be prioritized in soil characterizations, based on their influence on sorption, so that Kd values can be estimated even with limited information about a soil system. The method comprised quantitative approaches such as the construction of chemical equilibrium diagrams with the Hydra/Medusa software and correlation analyses using the statistical software SPSS Statistics. The chemical equilibrium diagrams helped describe the speciation of each nuclide as a function of pH, and the correlation analyses helped identify linear relationships between pairs of variables, e.g. between Kd and soil parameters. Based on the speciation diagrams for each radionuclide and the important linear and non-linear relationships identified between the Kd values and a number of soil parameters, the following soil and soil-solution properties were found mainly to control the sorption of Cs, Ni and U in the eight Laxemar soils: for caesium, the specific surface area of the soil linked to the clay content; for nickel, the cation exchange capacity, organic matter, alkaline pH values and dissolved organic matter. The sorption of uranium was found to be controlled by the cation exchange capacity, organic matter, dissolved organic matter, alkaline pH values and dissolved carbonates. The soil showing the strongest sorption varied among the three nuclides, which can be related to the nuclides' individual sorption behaviour in soil and to the soils' different physical and chemical properties. The parameters that should be prioritized when characterizing soil samples were identified as pH, cation exchange capacity, the specific surface area of the soil, the amount of organic matter and soil texture (clay content).
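The correlation step described above (done in SPSS in the thesis) reduces to pairing Kd values with soil parameters; a minimal Python equivalent, with invented numbers rather than the Laxemar measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical table: Kd of Cs (L/kg) and soil properties for 8 samples
kd_cs = np.array([1200., 950., 400., 2100., 800., 1500., 300., 650.])
surface_area = np.array([18., 14., 6., 30., 11., 22., 4., 9.])       # m2/g
organic_matter = np.array([2.1, 3.5, 0.8, 1.2, 4.0, 2.8, 0.5, 1.9])  # %

for name, prop in [("surface area", surface_area),
                   ("organic matter", organic_matter)]:
    r, p = stats.pearsonr(kd_cs, prop)
    print(f"Kd(Cs) vs {name}: r = {r:.2f}, p = {p:.3f}")
```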
Lima, Leonardo da Silva e. "Centralidades em redes espaciais urbanas e localização de atividades econômicas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/122524.
In recent years, the study of urban spatial networks has often been used to describe urban phenomena associated with the shape of the city. Research suggests that centralities are able to describe the urban spatial structure (KRAFTA, 1994; ANAS et al., 1998), making it possible to recognize the spaces that carry the most flows, command the highest land values or are the safest, among other aspects of the urban phenomenon. The hypothesis of this research is that centrality in urban spatial networks plays a key role in the urban spatial structure and in the way land uses are organized; some measures of centrality should therefore be more strongly associated with the economic activities occurring in the city. The research evaluates five measures of centrality applied to three types of urban spatial networks (axial map, node map and segment map). We use five centrality models, known as reach (SEVTSUK; MEKONNEN, 2012), straightness (PORTA et al., 2006b), betweenness (FREEMAN, 1977), planar betweenness (KRAFTA, 1994) and closeness (INGRAM, 1971), in order to determine which is most highly correlated with the occurrence of economic activities. The relationships between these centrality measures and the locations of economic activities are examined in three Brazilian cities, using the Pearson correlation coefficient (r) as the methodology. The highest correlation between the centrality results and the location of economic activities indicates which centrality measure, which way of describing urban space as a network, and which distance-processing method (Euclidean or topological) is most associated with the occurrence of these activities in the city. The results indicate that reach, straightness and planar betweenness are the most outstanding centrality models. In addition, the most relevant Pearson correlation coefficients (r) were obtained when the centrality models were processed using Euclidean paths on the street-segment network, suggesting that this type of spatial network and distance-processing method generates centralities that correlate most significantly with the urban phenomenon studied.
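As a rough illustration of the workflow, the sketch below computes two standard centralities on a toy street grid and correlates them with invented activity counts. Note the assumptions: closeness and betweenness are the only two of the five models available in networkx (reach, straightness and planar betweenness would need custom code), and the grid graph stands in for a real segment map.

```python
import networkx as nx
import numpy as np

# Toy street grid standing in for an urban spatial network; activity counts
# per node are invented, purely to show the correlation workflow.
G = nx.grid_2d_graph(6, 6)
rng = np.random.default_rng(2)
activity = {v: rng.poisson(5) for v in G}

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

nodes = list(G)
act = np.array([activity[v] for v in nodes], dtype=float)
for name, cent in [("closeness", closeness), ("betweenness", betweenness)]:
    c = np.array([cent[v] for v in nodes])
    print(f"{name}: Pearson r = {np.corrcoef(c, act)[0, 1]:.2f}")
```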
Fernandes, Catarina Marques. "Liderança de empoderamento e trabalho digno." Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24511.
Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.
Le, Trang Thi, Doan Dang Phan, Bao Dang Khoa Huynh, Van Tho Le, and Van Tu Nguyen. "Phytoplankton diversity and its relation to the physicochemical parameters in main water bodies of Vinh Long province, Vietnam." Technische Universität Dresden, 2019. https://tud.qucosa.de/id/qucosa%3A70829.
Phytoplankton samples were collected in 2016 (dry and rainy seasons) at nine sites in Vinh Long province, Vietnam. Several environmental parameters, such as temperature, pH, dissolved oxygen, nitrate and phosphate, were measured in situ. In total, 209 phytoplankton species were recorded (6 phyla, 96 genera). Diatoms had the highest number of species (82), followed by green algae (61), cyanobacteria (39), euglenoids (21), golden algae (3) and dinoflagellates (3). Phytoplankton density ranged from 4,128 to 123,029 cells per litre. The dominant species recorded in the study area were Microcystis aeruginosa, Merismopedia glauca, Oscillatoria perornata, Jaaginema sp., Planktothrix agardhii, Coscinodiscus subtilis and Melosira granulata. Of these, Microcystis aeruginosa was dominant at the largest share of sampling sites in the dry-season survey; this species is classified as a producer of toxins harmful to the environment. According to QCVN 08:2015/BTNMT, surface water quality was classified as A1 for pH and nitrate, B1 for dissolved oxygen, and B2 for phosphate. The phytoplankton community structure and the environmental factors changed considerably between the rainy and dry seasons. The Pearson correlation coefficient (r) was used for the analysis. The results show that the number of phytoplankton species was positively and statistically significantly correlated with pH, dissolved oxygen and nitrate in the rainy season, while phytoplankton density was not correlated with the environmental factors in either season.
Siqueira, Lucas Alfredo. "Titulador automático baseado em filmes digitais para determinação de dureza e alcalinidade total em águas minerais." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9013.
Повний текст джерелаMade available in DSpace on 2017-06-21T14:26:33Z (GMT). No. of bitstreams: 1 arquivototal.pdf: 3690977 bytes, checksum: 752560aa5c7d78968c32cb55f0778788 (MD5) Previous issue date: 2016-02-29
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Total hardness and total alkalinity are important physico-chemical parameters for the evaluation of water quality and are determined by volumetric analytical methods. In these methods the titration end point is hard to detect because the inherent colour transitions are difficult to see. To circumvent this problem, a new automatic method for detecting the titration end point in determinations of total hardness and total alkalinity in mineral water samples is proposed here. The proposed flow-batch titrator consists of a peristaltic pump, five three-way solenoid valves, a magnetic stirrer, an electronic actuator, an Arduino MEGA 2560TM board, a mixing chamber and a webcam. The webcam records a digital movie (DM) during the addition of the titrant to the mixing chamber, registering the colour variations resulting from the chemical reactions between titrant and sample within the chamber. While the DM is recorded, it is decomposed into frames ordered sequentially at a constant rate of 30 frames per second (FPS). The first frame is used as a reference to define a region of interest (RI) of 48 × 50 pixels whose R-channel values are used to calculate Pearson correlation coefficient (r) values; r is calculated between the R values of the initial frame and those of each subsequent frame. The titration curves are plotted in real time using the values of r (ordinate axis) and the total opening time of the titrant valve (abscissa axis). The end point is estimated by the second-derivative method. Software written in ActionScript 3.0 manages all analytical steps and data treatment in real time. The feasibility of the method was attested by its application to the analysis of natural water samples. The results were compared with classical titration and did not present statistically significant differences when the paired t-test was applied at the 95% confidence level. The proposed method is able to process about 71 samples per hour, and its precision was confirmed by overall relative standard deviation (RSD) values, always below 2.4% for total hardness and 1.4% for total alkalinity.
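The frame-correlation end-point detection lends itself to a compact sketch. The synthetic "video" below and the ROI handling are illustrative assumptions, not the authors' ActionScript implementation: the R-channel pattern simply changes at a known frame to mimic the colour transition.

```python
import numpy as np

def endpoint_frame(frames, roi=(slice(0, 48), slice(0, 50))):
    """frames: (n, H, W, 3) array. Correlate the R channel of the ROI in
    frame 0 with every frame, then place the end point at the extremum of
    the second derivative of the r curve, as in the paper."""
    ref = frames[0][roi + (0,)].astype(float).ravel()
    r = np.array([np.corrcoef(ref, f[roi + (0,)].astype(float).ravel())[0, 1]
                  for f in frames])
    d2 = np.gradient(np.gradient(r))
    return int(np.abs(d2).argmax()), r

# Synthetic stand-in for the webcam movie: the R-channel pattern changes
# at frame 60, mimicking the colour transition at the end point.
rng = np.random.default_rng(3)
base = np.tile(np.linspace(80, 180, 50), (48, 1))
frames = np.empty((120, 48, 50, 3))
frames[:60, ..., 0] = base
frames[60:, ..., 0] = base[:, ::-1]      # transition alters the spatial pattern
frames[..., 1:] = 120.0
frames += rng.normal(0.0, 2.0, frames.shape)
print("end point near frame", endpoint_frame(frames)[0])
```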
Ozbal, Gozde. "A Content Boosted Collaborative Filtering Approach For Movie Recommendation Based On Local &." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610984/index.pdf.
Повний текст джерелаs world, many systems and approaches make it possible for the users to be guided by the recommendations that they provide about new items such as articles, news, books, music, and movies. However, a lot of traditional recommender systems result in failure when the data to be used throughout the recommendation process is sparse. In another sense, when there exists an inadequate number of items or users in the system, unsuccessful recommendations are produced. Within this thesis work, ReMovender, a web based movie recommendation system, which uses a content boosted collaborative filtering approach, will be presented. ReMovender combines the local/global similarity and missing data prediction v techniques in order to handle the previously mentioned sparseness problem effectively. Besides, by putting the content information of the movies into consideration during the item similarity calculations, the goal of making more successful and realistic predictions is achieved.
Kovařík, Tomáš. "Řízení poslechových testů pro subjektivní hodnocení kvality audio signálu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219467.
Mihulka, Tomáš. "Evoluční optimalizace analogových obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363843.
Повний текст джерелаWatanabe, Jorge. "Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/44/44137/tde-14082008-165227/.
This master's dissertation presents the results of a survey of co-estimation methods commonly used in geostatistics: ordinary cokriging, collocated cokriging and kriging with an external drift. Ordinary kriging was also considered, simply to illustrate how it behaves when the primary variable is poorly sampled. Co-estimation methods depend on a secondary variable sampled over the estimation domain, and this secondary variable should present a linear correlation with the main, or primary, variable. Usually the primary variable is poorly sampled whereas the secondary variable is known over the whole estimation domain. For instance, in oil exploration the primary variable is porosity, measured on rock samples gathered from drill holes, and the secondary variable is seismic amplitude derived from processing seismic reflection data. Primary and secondary variables must present some degree of correlation, but how these methods perform as a function of the correlation coefficient is precisely the open question. We therefore tested the co-estimation methods on several data sets presenting different degrees of correlation, generated on a computer by data-transformation algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588 and 0.461. Collocated simple cokriging was the best method among all those tested. This method has an internal filter applied to compute the weight of the secondary variable, which in turn depends on the correlation coefficient: the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation coefficient between primary and secondary variables is low, which is the most impressive result to come out of this research.
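The study's data sets were produced by transform algorithms so as to hit fixed correlation levels. One standard way to do this (not necessarily the author's algorithm) is a Cholesky-style mix of independent normals:

```python
import numpy as np

def correlated_pair(n, rho, rng):
    """Draw (primary, secondary) samples with population correlation rho
    by mixing two independent standard normals."""
    z1, z2 = rng.standard_normal((2, n))
    return z1, rho * z1 + np.sqrt(1 - rho ** 2) * z2

rng = np.random.default_rng(4)
for rho in (0.993, 0.870, 0.752, 0.588, 0.461):   # the five levels studied
    x, y = correlated_pair(10_000, rho, rng)
    print(f"target {rho:.3f} -> sample r = {np.corrcoef(x, y)[0, 1]:.3f}")
```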
Gaspar, Willians Cesar Rocha. "A correlação entre jornada de trabalho e produtividade: uma perspectiva macroeconômica entre países." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19961.
This research has as its general objective to identify the variables or contributing factors that can inform the discussion about reducing the working day. As a specific objective, it proposes to verify how these same variables affect productivity. For both objectives, the macroeconomic aspects of the countries analysed are considered. The criterion for selecting these countries is based on the ranking of the OECD and World Bank databases for the year 2013, considering the set of the largest world economies, which together represent 65.22% of global GDP. The data extracted refer to Gross Domestic Product at Purchasing Power Parity (GDP at PPP), in international dollars, which allows these economies to be compared on a purchasing-power basis. Other sources of information were considered as objects of analysis and observation, including the statistical series of secondary data from the International Labour Office (ILO), the International Monetary Fund (IMF), the United Nations (UNDP), the Brazilian Institute of Geography and Statistics (IBGE), the Inter-Union Department of Statistics and Socioeconomic Studies (DIEESE) and the Institute of Applied Economic Research (IPEA). The research was conducted at the macroeconomic level of the countries, with a longitudinal cut between the years 2007 and 2013, in order to observe the behaviour of these economies, including during the 2008 global crisis. In this sense, the evolution of the historical GDP series was evaluated, revealing the size of each economy, along with GDP per capita, which captures wealth relative to the population. Finally, we consider the labour productivity factor itself, which deals with the relationship between GDP, the number of people and the number of hours worked in the period. Design/Methodology/Approach – The method is qualitative research of the exploratory type, supported by quantitative correlational analysis; the statistical design is directed at verifying the degree of association between the variables working day and labour productivity, that is, calculating and interpreting the degree of correlation between these two variables. Findings – In the final conclusion of the study, it is inferred, based on the theoretical framework and the analysis of the statistical data, whether the reduction in the working day contributes to changes in productivity indexes, and which other variables belong in this discussion. Research limitations – Aspects of national culture, climatic conditions and the segregation of nations by the shares of agriculture, industry and services in their economies were not considered, although they would have allowed comparative analysis by subgroups. In addition, the sample is restricted both in the number of countries and in the relatively short period between 2007 and 2013, which was also marked by an atypical event, the global economic crisis of 2008. Practical contributions – It invites governments, organizations and workers to rethink the possible economic and social benefits, through public policies that allow greater flexibility in working hours, focusing on competitive advantages and the balance between labour and capital, observing legal aspects, productivity, quality of life, unit costs and job creation.
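The core statistical design, the degree of association between working hours and productivity, boils down to one Pearson coefficient. A sketch with invented country-level figures (not the OECD/World Bank series used in the study), set up to show the negative cross-country association often reported:

```python
import numpy as np
from scipy import stats

# Illustrative country-level figures (invented for this sketch):
hours = np.array([1700, 1780, 1420, 1660, 2100, 1500, 1900])  # annual hours worked
productivity = np.array([65, 58, 88, 70, 40, 80, 48])         # GDP (PPP) per hour

r, p = stats.pearsonr(hours, productivity)
print(f"r = {r:.2f} (p = {p:.3f})")
```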
Lee, Wan Yi, and 李宛懌. "Is Pearson Sample Correlation Coefficient Always Feasible To Test For Correlations?" Thesis, 2016. http://ndltd.ncl.edu.tw/handle/43295031226491363248.
Lei, Cheng. "Student performance prediction based on course grade correlation." Thesis, 2019. http://hdl.handle.net/1828/10654.
Chen, Chin-Han, and 陳勁含. "Constructing Molecular Phylogeny by Pearson's Correlation Coefficient and Molecular Phylogenetic Analysis of PRMT Super Family." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70958272384702013640.
Повний текст джерела中山醫學大學
生物醫學科學學系碩士班
101
The evolutionary relationships of all living organisms can be viewed in a phylogenetic tree, and many methods have been developed to evaluate such relationships. However, most of them require a multiple sequence alignment (MSA) first, and several studies have shown that the order in which sequences are added to an MSA can significantly affect the end result. We therefore asked whether another method could produce more reliable results; our goal is a unique and reasonable tree-building method that improves on the existing ones. Here we propose a novel approach to replace the MSA step: we combine pair-wise sequence alignment (BLAST) with Pearson's correlation coefficient (PCC) to model the interactive relationships of the compared sequences, and these relationships are then grouped by hierarchical clustering (HC). The results show that our method indeed mitigates the problems MSA can introduce; it also has better clustering ability than the conventional methods and can produce a more reasonable tree. We subsequently used our method to perform a phylogenetic analysis of the protein arginine methyltransferase (PRMT) families. In addition, we explored whether the pattern of each PRMT family can be identified, which would allow fast classification of an unknown sequence.
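A minimal sketch of the idea as described: correlate each sequence's pairwise-similarity profile with every other's, convert the correlations to distances and cluster hierarchically. The similarity matrix below is invented (standing in for BLAST scores), and scipy's average linkage stands in for whatever HC variant the thesis actually uses.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical pairwise similarity scores (e.g., BLAST bit scores) for 5
# sequences; row i is sequence i's similarity profile against everyone.
S = np.array([[100., 80., 30., 28., 10.],
              [ 80., 95., 32., 30., 12.],
              [ 30., 32., 90., 75., 15.],
              [ 28., 30., 75., 88., 14.],
              [ 10., 12., 15., 14., 70.]])

pcc = np.corrcoef(S)            # PCC between similarity profiles
dist = 1.0 - pcc                # turn correlation into a dissimilarity
iu = np.triu_indices_from(dist, k=1)
Z = linkage(dist[iu], method="average")   # condensed distances -> tree
dendrogram(Z, no_plot=True)     # tree structure; plot with matplotlib if desired
print(Z)
```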
Lin, Jian-Fa, and 林建發. "Transitive Pearson Product-Moment Correlation Coefficient Based Particle Swarm Optimization on Applying Hyperspectral Image Dimension Reduction." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5ruft5.
Повний текст джерела國立臺北科技大學
電機工程研究所
105
In recent years, satellite remote-sensing technology has advanced across a wide range of applications, increasing the number of hyperspectral imaging bands and the size of the datasets. Because erroneous data and noisy bands lower the classification rate, hyperspectral image processing is used to select representative spectral bands, so reducing the data complexity is an essential procedure. This thesis proposes the Transitive Pearson Product-Moment Correlation Coefficient (TPMCC), which strengthens the correlation coefficient between two bands via similar neighbouring bands that meet specified conditions, enhancing band selection and effectively achieving dimensionality reduction. A previous study proposed Particle Swarm Optimization (PSO), which clusters the correlation-coefficient matrix generated from the original hyperspectral image into cluster modules in feature space and then chooses representative bands to reduce the dimensionality. However, when dealing with images containing more categories, the correlation-coefficient matrix groups the bands of each category inefficiently; moreover, PSO is easily disturbed and cannot find a suitable universal correlation-coefficient matrix. In this dissertation, the Salinas AVIRIS and Washington DC Mall HYDICE remote-sensing images are used in the experiments. The experimental results show that the TPMCC algorithm is more effective than the plain Pearson product-moment correlation coefficient, improving the dimension-reduction rate and reducing the number of selected bands while achieving a good classification result.
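The abstract's transitive update can be read as raising a band pair's correlation to the strength of its best two-hop path through a strongly correlated neighbour. The sketch below implements that reading (my interpretation, including both thresholds), together with a naive greedy band selection; the synthetic cube is grouped so the selection has something to prune.

```python
import numpy as np

def tpmcc(C, thresh=0.9):
    """One transitive pass: |C[i, j]| is raised to the strength of the best
    two-hop path i-k-j whose legs both exceed `thresh`. This is only my
    reading of the abstract, not the author's exact update rule."""
    A = np.abs(C)
    n = A.shape[0]
    T = A.copy()
    for i in range(n):
        for j in range(i + 1, n):
            legs = np.minimum(A[i], A[j])    # weakest leg of each i-k-j path
            legs[(A[i] < thresh) | (A[j] < thresh)] = 0.0
            legs[[i, j]] = 0.0               # exclude the trivial paths
            T[i, j] = T[j, i] = max(A[i, j], legs.max())
    return T

def select_bands(T, max_corr=0.95):
    """Greedily keep bands weakly correlated with those already kept."""
    kept = [0]
    for b in range(1, T.shape[0]):
        if all(T[b, k] < max_corr for k in kept):
            kept.append(b)
    return kept

# Synthetic cube: 50 bands x 200 pixels in 5 highly correlated groups
rng = np.random.default_rng(5)
base = rng.standard_normal((5, 200))
cube = np.repeat(base, 10, axis=0) + 0.2 * rng.standard_normal((50, 200))
C = np.corrcoef(cube)
print(select_bands(tpmcc(C)))   # roughly one representative per group
```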
Říha, Samuel. "Parciální a podmíněné korelační koeficienty." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-350851.
Fonseca, João Miguel Lucas da. "Covid-19 versus H1N1: a comparative study of the impact of viruses on small and large economies on the Stock Market: The special study of Portugal." Master's thesis, 2022. http://hdl.handle.net/10362/134920.
This dissertation describes in detail the impact that pandemic crises have on stock-market volatility in small and large economies. As case studies we chose the United States of America (a large economy), Greece (a small economy) and Portugal, the latter as a comparison with the two previous countries but only with respect to the consequences of the Covid-19 pandemic. In terms of methodology, daily financial volatility is traditionally modelled as a GARCH(1,1) process; this model was implemented in SAS to test whether the volatilities could be correlated. Additionally, Pearson linear correlation coefficients were computed and analysed for several variables, such as closing prices and confirmed cases and deaths against each country's daily volatility, and between the countries' daily historical volatilities. Finally, the study shows how the pandemics of the 21st century affected both the stock market (financially) and gross domestic product (economically). This dissertation confirms that there is indeed enormous stock-market volatility at the onset of an atypical phenomenon; after a certain period of time, however, the market corrects itself. Notably, in the Covid-19 pandemic, although the United States suffered a repercussion in its gross domestic product, Portugal, like Greece (small economies), experienced even stronger economic implications. On the financial side, Portugal is comparable to Greece in the correlations between daily historical volatilities, yet it is closer to the United States in the correlations of the other variables described above, such as closing prices and confirmed cases and deaths against daily volatility. It is concluded that the pandemics had an impact on the stock markets of the countries studied.
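A sketch of the volatility-correlation step in Python rather than SAS, assuming the third-party `arch` package for the GARCH(1,1) fit; the return series are synthetic stand-ins for the countries' index data used in the thesis.

```python
import numpy as np
from arch import arch_model   # assumed available: pip install arch

rng = np.random.default_rng(6)
# Synthetic daily returns (percent) for two markets; real data would be
# the index series for the countries studied in the thesis.
ret_a = rng.standard_normal(1_000) * 1.2
ret_b = 0.6 * ret_a + rng.standard_normal(1_000)

vols = []
for r in (ret_a, ret_b):
    res = arch_model(r, vol="GARCH", p=1, q=1).fit(disp="off")
    vols.append(res.conditional_volatility)   # fitted daily volatility path

print("Pearson r between conditional volatilities:",
      np.corrcoef(vols[0], vols[1])[0, 1].round(3))
```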
Vondra, Jan. "Vliv vybraných kondičních faktorů na výkonnost ve vodním slalomu." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-342055.
KŘÍŽOVÁ, Tereza. "Měření návštěvnosti." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-394863.
Повний текст джерелаPereira, Carlos Miguel Dias. "Teste e validação de um programa de simulação do comportamento termofisiológico do corpo humano." Master's thesis, 2014. http://hdl.handle.net/10316/38875.
The importance of programs simulating the behaviour of the human thermoregulatory system has increased greatly. They have been used to improve thermal comfort inside buildings, to enhance performance in sporting activities, to develop protocols for specific activities and to study situations in which people are subjected to high levels of stress resulting from very cold or very hot thermal conditions. This was the motivation that led to the development, at DEM-FCTUC, of a software tool that simulates the thermoregulatory behaviour of the human body: the Huthereg program. Depending on the physical characteristics of the person, the amount of clothing, the level of activity and the conditions of the surrounding thermal environment, Huthereg can predict a broad set of parameters (the internal temperature of the human body, skin and clothing temperatures, sensible and latent heat flows, the global thermal state and that of each zone of the body, etc.). This study is intended to validate and test the Huthereg program, seeking to discover its strengths, weaknesses and limitations and to test its reliability. To this end, numerous cases from the scientific literature were simulated and the experimental results taken from them were compared with the corresponding predictions of the software. In order to test the program over a broad range of situations, the selected cases involve different thermal environments varying from extremely hot to extremely cold, different levels of clothing (from nude subjects to subjects wearing thermal protective clothing) and different levels of activity (from rest to intense physical exercise). Dressed subjects were studied with different levels of thermal insulation and with changes in the distribution of the clothing over the various parts of the body. As auxiliary tools to evaluate the level of agreement between the experimental results taken from the literature and those predicted by the software, we used the arithmetic mean of the relative differences (and its standard deviation), the root mean square deviation and, given extra importance, the Pearson correlation coefficient. These statistics assess the proximity between the measured and predicted values and the type of relationship that exists between them; a graphical evaluation was also carried out. The proposed objective was achieved: the Huthereg program was tested over a very wide range of situations, and good or very good results were almost always obtained, excellent in many cases. Even when differences between experimental and predicted values were detected, the program is not necessarily at fault: in the literature, the test protocol is almost never described in sufficient detail, the description of the thermal environments is often incomplete, the level of physical activity is not quantified in numerical terms and the distribution of clothing over the different zones of the body is never given. Within the range of situations analysed, no limitations to the application of the software were found. In view of this, the usefulness, applicability and reliability of the Huthereg program in simulating the thermophysiological response of the human body are apparent.
It is expected that this program will continue to be used in the future in many different research studies, in technological developments, to evaluate thermal situations that may entail risks or simply to improve performance in work or sport related activities.
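The agreement statistics named in this abstract (mean relative difference and its standard deviation, RMSD, and Pearson r) fit in a few lines; the temperature pairs below are hypothetical, purely to show the computation.

```python
import numpy as np

def agreement_metrics(measured, predicted):
    """Mean relative difference (and its SD), RMSD, and Pearson r,
    i.e. the comparison statistics named in the abstract."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    rel = (predicted - measured) / measured
    rmsd = np.sqrt(np.mean((predicted - measured) ** 2))
    r = np.corrcoef(measured, predicted)[0, 1]
    return rel.mean(), rel.std(ddof=1), rmsd, r

# Hypothetical skin-temperature comparison (degrees Celsius)
measured  = [33.1, 33.8, 34.0, 34.6, 35.1]
predicted = [33.4, 33.7, 34.3, 34.8, 35.0]
print(agreement_metrics(measured, predicted))
```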
Bárta, Vít. "Komparace konsolidace demokracie na území bývalého východního bloku." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-353713.