Academic literature on the topic 'National averages'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'National averages.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "National averages"

1

Becker, Lee B. "Enrollment Growth Exceeds National University Averages." Journalism Educator 44, no. 3 (September 1989): 3–15. http://dx.doi.org/10.1177/107769588904400301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Connelly, Brian, and Deniz Ones. "NATIONAL CORRUPTION, NATIONAL PERSONALITY AND NATIONAL CULTURE." Psihologia Resurselor Umane 5, no. 1 (January 29, 2020): 16–31. http://dx.doi.org/10.24837/pru.v5i1.308.

Full text
Abstract:
Even though corruption continues to mar economic progression, worker enthusiasm, and societies' moral constitution, most studies of corruption have been confined to the fields of economics and political science. However, psychological variables, such as personality and cultural values, are likely to be relevant to studying corruption. In the present study of 62 countries, we examined how national averages on the Big Three personality traits, a measure of social desirability, and Hofstede's cultural dimensions relate to perceptions of a nation's level of corruption. The Big Three personality traits showed modest relationships with corruption. However, national averages on a social desirability measure were strongly and positively correlated with corruption, suggesting that national dishonesty in responding to personality items is related to national dishonesty in corruption. In addition, the discrete, combined, and unique effects of personality and culture on corruption were compared. The findings suggest that both cultural values and personality have relevance for understanding corruption. As globalization continues to promote the exchange of cultural values and the assimilation of both individuals and organizations into new cultures, these findings highlight the need for I/O psychologists to be attentive to both culture and personality in designing human resource systems.
APA, Harvard, Vancouver, ISO, and other styles
3

Kisielińska, Joanna. "Dochody z gospodarstwa rolnego a wynagrodzenia z pracy najemnej w krajach UE." Zeszyty Naukowe SGGW w Warszawie - Problemy Rolnictwa Światowego 18(33), no. 2 (July 2, 2018): 130–39. http://dx.doi.org/10.22630/prs.2018.18.2.40.

Full text
Abstract:
The aim of the research presented in the article was to compare the income from commercial farms with national averages for wages in EU countries. Average farm income and national wage averages were compared by economic class (KS6 classification). In 12 EU countries, the average income from farms was higher than the national wage average, and in 15 countries it was lower. An increase in average agricultural income usually reflects a general increase in economic class. The economically strongest farms, with incomes above national average wages, are found first in Western and then in Southern EU countries, and least often in post-Communist countries. Among new EU members, however, preserving income parity in agriculture is easier than for old members.
APA, Harvard, Vancouver, ISO, and other styles
4

Huf, Ben. "Averages, indexes and national income: accounting for progress in colonial Australia." Accounting History Review 30, no. 1 (October 8, 2019): 7–43. http://dx.doi.org/10.1080/21552851.2019.1670220.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hanks, G. E., T. Peters, and J. Owen. "15 Year USA national averages for patients with seminoma treated with radiation." International Journal of Radiation Oncology*Biology*Physics 21 (January 1991): 178–79. http://dx.doi.org/10.1016/0360-3016(91)90549-j.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gros, Daniel, and Carsten Hefeker. "One Size Must Fit All: National Divergences in a Monetary Union." German Economic Review 3, no. 3 (August 1, 2002): 247–62. http://dx.doi.org/10.1111/1468-0475.00059.

Full text
Abstract:
Should a common central bank in a heterogeneous monetary union base its decisions on EU-wide averages of economic variables or on national welfare losses? A central bank that minimizes the sum of national welfare losses reacts less to common shocks. Under certain parameter constellations this leads to higher average union-wide expected welfare and it might thus be preferable that decision-making is dominated by national representatives. Countries with a transmission mechanism far from the average benefit from an orientation on national welfare losses. For countries with a transmission mechanism close to the average, welfare can be lower in this case.
APA, Harvard, Vancouver, ISO, and other styles
7

Massavon, William, Levi Mugenyi, Martin Nsubuga, Rebecca Lundin, Martina Penazzato, Maria Nannyonga, Charles Namisi, et al. "Nsambya Community Home-Based Care Complements National HIV and TB Management Efforts and Contributes to Health Systems Strengthening in Uganda: An Observational Study." ISRN Public Health 2014 (March 6, 2014): 1–10. http://dx.doi.org/10.1155/2014/623690.

Full text
Abstract:
Community Home-Based Care (CHBC) has evolved in resource-limited settings to fill the unmet needs of people living with HIV/AIDS (PLHA). We compare HIV and tuberculosis (TB) outcomes from the Nsambya CHBC with national averages in Kampala, Uganda. This retrospective observational study compared HIV and TB outcomes from adults and children in the Nsambya CHBC to national averages from 2007 to 2011. Outcomes included numbers of HIV and TB patients enrolled into care, retention, loss to follow-up (LTFU), and mortality among patients on antiretroviral therapy (ART) at 12 months from initiation; new smear-positive TB cure and defaulter rates; and proportion of TB patients tested for HIV. Chi-square test and trends analyses were used to compare outcomes from Nsambya CHBC with national averages. By 2011, approximately 14,000 PLHA had been enrolled in the Nsambya CHBC, and about 4,000 new cases of TB were detected and managed over the study period. Overall, retention and LTFU of ART patients 12 months after initiation, proportion of TB patients tested for HIV, and cure rates for new smear-positive TB scored higher in the Nsambya CHBC compared to national averages. The findings show that Nsambya CHBC complements national HIV and TB management and results in more positive outcomes.
APA, Harvard, Vancouver, ISO, and other styles
8

Blundell, M. S. J., L. P. Hunt, E. J. Mayer, A. D. Dick, and J. M. Sparrow. "Reduced mortality compared with national averages following phacoemulsification cataract surgery: a retrospective observational study." British Journal of Ophthalmology 93, no. 3 (October 6, 2008): 290–95. http://dx.doi.org/10.1136/bjo.2008.141473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hodge, William. "REDUCED MORTALITY COMPARED WITH NATIONAL AVERAGES FOLLOWING PHACOEMULSIFICATION CATARACT SURGERY: A RETROSPECTIVE OBSERVATIONAL STUDY." Evidence-Based Ophthalmology 10, no. 3 (July 2009): 148–49. http://dx.doi.org/10.1097/ieb.0b013e3181ab817f.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Edens, David, and Erik Froyen. "Undergraduate Student Thriving: An Assessment and Comparison of Nutrition Science and Dietetics Students." Journal of Education and Training 7, no. 2 (August 21, 2020): 1. http://dx.doi.org/10.5296/jet.v7i2.16320.

Full text
Abstract:
This study utilizes the Thriving Quotient to determine the factors of student thriving for students in a nutrition and dietetics program at a large university in Southern California. Additionally, the study compared these students to the national averages for the factors of thriving. The Thriving Quotient assesses student levels of engagement, academic determination, positive perspective, social connectedness, and diverse citizenship. The largest influences on thriving for the sample population were engagement and academic determination. Comparisons to the national average and implications for practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "National averages"

1

Treloar, Graham J. "Energy analysis of the construction of office buildings." Deakin University. School of Architecture and Building, 1994. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20040617.170806.

Full text
Abstract:
Buildings have a significant impact on the environment due to the energy required for the manufacture of construction materials. The method of assessing the energy embodied in a product is known as energy analysis. Detailed office building embodied energy case studies are very rare. However, there is evidence to suggest that the energy requirements for the construction phase of commercial buildings, including the energy embodied in materials, are a significant component of the life cycle energy requirements. This thesis sets out to examine the current state of energy analysis, determine the national average energy intensities (i.e. embodied energy rates) for building materials, and assess the significance of using national average energy intensities for the energy analysis of a case study office building. Likely ranges of variation in the building material embodied energy rates from the national averages are estimated and the resulting distribution for total embodied energy in the case study building simulated. Strategies for improving the energy analysis methods and data are suggested. Detailed energy analysis is shown to be a useful indicative method of quantifying the energy required for the construction of buildings.
APA, Harvard, Vancouver, ISO, and other styles
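The method this thesis describes, multiplying building material quantities by national average energy intensities (embodied energy rates) and then simulating how variation in those intensities propagates to the total, can be illustrated with a minimal sketch. All material quantities, intensities, and variation ranges below are hypothetical placeholders, not figures from the thesis.

```python
import random

# Hypothetical bill of materials for an office building (not data from the thesis):
# quantity in tonnes, national average energy intensity in GJ/tonne,
# and an assumed fractional variation around that average.
materials = {
    "concrete": {"tonnes": 12000, "avg_gj_per_t": 1.9,  "variation": 0.30},
    "steel":    {"tonnes": 900,   "avg_gj_per_t": 35.0, "variation": 0.25},
    "glass":    {"tonnes": 150,   "avg_gj_per_t": 26.0, "variation": 0.40},
}

def embodied_energy(intensity_for):
    """Total embodied energy (GJ) given a function returning an intensity per material."""
    return sum(m["tonnes"] * intensity_for(m) for m in materials.values())

# Point estimate using the national average intensities.
average_total = embodied_energy(lambda m: m["avg_gj_per_t"])

# Simple Monte Carlo simulation of the total, drawing each material's intensity
# uniformly within its assumed range around the national average.
random.seed(1)
simulated = [
    embodied_energy(lambda m: m["avg_gj_per_t"] * random.uniform(1 - m["variation"], 1 + m["variation"]))
    for _ in range(10000)
]
simulated.sort()
print(f"Average-intensity estimate: {average_total:,.0f} GJ")
print(f"Simulated 5th-95th percentile: {simulated[500]:,.0f} - {simulated[9500]:,.0f} GJ")
```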
2

Peterson, Amy S. "An analysis of national average car rental rates and economic indicators /." Online version of thesis, 1993. http://hdl.handle.net/1850/11571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

ROCHA, RENATO DE ALMEIDA. "CALCULUS OF THE WEIGHTED AVERAGE COST OF CAPITAL OF THE BRAZILIAN ELECTRICITY DISTRIBUTION SECTOR WITH NATIONAL ECONOMY DATA AND APT MODEL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14713@1.

Full text
Abstract:
ANEEL calculates the weighted average cost of capital of the Brazilian electricity distribution sector (the regulatory WACC) and incorporates it into tariffs using data from the US economy, on the understanding that Brazilian economic data do not provide consistent series; the cost of equity is estimated with the CAPM model. Because results obtained from US data require further adjustment to reflect the Brazilian reality, and because the CAPM correlates the performance of the sector with the market alone, this work calculates the sector's weighted average cost of capital from Brazilian economic data and estimates the cost of equity with the APT model, correlating the sector's performance with the macroeconomic variables that affect it most. The results indicate that it is already feasible to work with Brazilian economic data and that the weighted average cost of capital estimated by ANEEL for the sector may be underestimated, since estimates based on US data may fail to capture fully some of the risks that the APT model captures when Brazilian data are used.
APA, Harvard, Vancouver, ISO, and other styles
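The calculation this dissertation revisits can be sketched as a weighted average cost of capital combining the cost of equity and the after-tax cost of debt, with the cost of equity estimated from a multi-factor (APT-style) model rather than the single-factor CAPM. A minimal sketch follows; every number is an illustrative placeholder, not one of ANEEL's regulatory parameters, and the factor set is only an assumption.

```python
def wacc(cost_of_equity, cost_of_debt, equity, debt, tax_rate):
    """Weighted average cost of capital: E/(D+E)*Ke + D/(D+E)*Kd*(1-t)."""
    total = equity + debt
    return (equity / total) * cost_of_equity + (debt / total) * cost_of_debt * (1 - tax_rate)

def apt_cost_of_equity(risk_free, betas, factor_premia):
    """APT-style cost of equity: risk-free rate plus the sum of factor sensitivities times factor risk premia."""
    return risk_free + sum(b * p for b, p in zip(betas, factor_premia))

# Hypothetical inputs (annual rates in decimals); the factors might be, for example,
# inflation, activity, and exchange-rate surprises in the Brazilian economy.
ke = apt_cost_of_equity(risk_free=0.06, betas=[0.8, 0.5, 0.3], factor_premia=[0.03, 0.02, 0.01])
print(f"Cost of equity (APT): {ke:.2%}")
print(f"Illustrative regulatory-style WACC: {wacc(ke, cost_of_debt=0.09, equity=55, debt=45, tax_rate=0.34):.2%}")
```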
4

Shariff, Samina. "The Role of Gender Equality and Economic Development in Explaining Female Smoking Rates." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/iph_theses/4.

Full text
Abstract:
Globally, female smoking rates are considerably lower than male smoking rates. However, there is great concern regarding female smoking due to the potential for future increases and the associated harm to health. To gain a better understanding of female smoking, this study examines the role of gender equality and economic development in explaining the variability in female smoking rates and female-to-male smoking differentials by examining data from 193 World Health Organization member states. Data on the dependent variables, female smoking prevalence rates and the female-to-male smoking prevalence ratio, were obtained from the Tobacco Atlas. Data on the independent variables, i.e., measures of gender equality and gross national income per capita (a proxy measure for economic development), were obtained from the 2005 Human Development Report, the Central Intelligence Agency, and the World Bank. A composite gender equality index was constructed from the individual measures of gender equality. Multiple regression analysis showed the composite gender equality index and gross national income per capita to be significant positive predictors of relative and absolute female smoking rates, with income being the stronger predictor. Individual measures of gender equality failed to show significance with either dependent variable. The results attest to the need for disentangling smoking from the notion of advancement in gender equality and economic development.
APA, Harvard, Vancouver, ISO, and other styles
5

Tanner, Janet Jeffery. "Financial Analysis and Fiscal Viability of Secondary Schools in Mukono District, Uganda." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1289.

Full text
Abstract:
Within the worldwide business community, many analysis tools and techniques have evolved to assist in the evaluation and encouragement of financial health and fiscal viability. However, in the educational community, such analysis is uncommon. It has long been argued that educational institutions bear little resemblance to, and should not be treated like, businesses. This research identifies an educational environment where educational institutions are, indeed, businesses, and may greatly benefit from the use of business analyses. The worldwide effort of Education for All (EFA) has focused on primary education, particularly in less developed countries (LDCs). In Sub-Saharan Africa, Uganda increased its primary school enrollments from 2.7 million in 1996 to 7.6 million in 2003. This rapid primary school expansion substantially increased the demand for secondary education. Limited government funding for secondary schools created an educational bottleneck. In response to this demand, laws were passed to allow the establishment of private secondary schools, operated and taxed as businesses. Revenue reports, filed by individual private schools with the Uganda Revenue Authority, formed the database for the financial analysis portion of this research. These reports, required of all profitable businesses in Uganda, are similar to audited corporate financial statements. Survey data and national examination (UNEB) scores were also utilized. This research explored standard business financial analysis tools, including financial statement ratio analysis, and evaluated the applicability of each to this LDC educational environment. A model for financial assessment was developed and industry averages were calculated for private secondary schools in the Mukono District of Uganda. Industry averages can be used by individual schools as benchmarks in assessing their own financial health. Substantial deviations from the norms signal areas of potential concern. Schools may take appropriate corrective action, leading to sustainable fiscal viability. An example of such analysis is provided. Finally, school financial health, defined by eight financial measures, was compared with quality of education, defined by UNEB scores. Worldwide, much attention is given to education and its role in development. This research, with its model for financial assessment of private LDC schools, offers a new and pragmatic perspective.
APA, Harvard, Vancouver, ISO, and other styles
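Tanner's thesis applies standard financial statement ratio analysis, benchmarking each school's ratios against industry averages computed for the district. A minimal sketch of that kind of benchmarking is given below; the school figures, the industry averages, and the 25% tolerance are all hypothetical choices for illustration.

```python
# Hypothetical school financials and industry averages (not figures from the thesis).
school = {"current_assets": 120, "current_liabilities": 80,
          "total_revenue": 500, "net_surplus": 35,
          "total_debt": 200, "total_assets": 450}
industry_avg = {"current_ratio": 1.8, "surplus_margin": 0.05, "debt_to_assets": 0.40}

# Standard financial-statement ratios for the school.
ratios = {
    "current_ratio": school["current_assets"] / school["current_liabilities"],
    "surplus_margin": school["net_surplus"] / school["total_revenue"],
    "debt_to_assets": school["total_debt"] / school["total_assets"],
}

# Flag substantial deviations from the industry average (tolerance chosen arbitrarily here).
for name, value in ratios.items():
    benchmark = industry_avg[name]
    flag = "OK" if abs(value - benchmark) / benchmark <= 0.25 else "review"
    print(f"{name}: {value:.2f} (industry average {benchmark:.2f}) -> {flag}")
```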
6

Miranda, Collins M. "The Relationship of Selected Admission and Program Variables and the Success of Marietta College Physician Assistant Student Performance on the Physician Assistant National Certification Examination." Marietta College / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=marietta1144851787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lobarinhas, Roberto Beier. "Modelos black-litterman e GARCH ortogonal para uma carteira de títulos do tesouro nacional." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-17092012-145914/.

Full text
Abstract:
A major challenge in financial management is to combine traditional management with quantitative methods in a single framework. Traditional managers tend to doubt that quantitative methods can capture their full insight and experience, while quantitative analysts tend to underestimate the importance of the traditional approach, creating clear disharmony and inefficiency in risk analysis. A model that seeks to narrow the distance between these views is the Black-Litterman model (BLM). More specifically, it addresses the difficulties faced when applying modern portfolio theory in practice, particularly those arising from the Markowitz model. The Markowitz model has been the basis of portfolio theory for more than half a century, since the publication of "Portfolio Selection" [Mar52], yet despite the central role of the mean-variance approach in academia, several difficulties appear when it is used in practice, and its impact on the investment world has been rather limited. Because the idea of maximizing return for a given level of risk remains so attractive to investors, the search for better-behaved models continued, and it is in this context that the Black-Litterman model appeared: in 1992, Fischer Black and Robert Litterman published "Portfolio Optimization" [Bla92], commenting on the minor role of quantitative asset allocation and introducing the model that bears their names. A key difference between the BLM and a traditional mean-variance model is that, while the latter produces portfolio weights from an optimization routine, the BLM starts from a long-run equilibrium market portfolio (CAPM). The model also gives investors a clear way to express short-term views and, more importantly, a structure for consistently combining the long-run equilibrium information (the prior) with those views, producing a set of expected returns from which the weights on each asset are derived. In choosing the estimation method, two considerations mattered: portfolio risk is fundamentally determined by the covariance matrix of the assets, so large matrices play an important role in investment analysis, and the model should remain tractable as the number of assets grows. The orthogonal GARCH model meets both requirements, since it generates the covariance matrix of the original system from a few univariate volatilities and is therefore computationally simple: variances and correlations are transformations of the variances of two or three orthogonal factors obtained by GARCH estimation, with the orthogonal factors extracted by principal component analysis. Decomposing the system's variance into risk factors quantifies the variability contributed by each factor, which is highly relevant because the risk manager can direct attention to the most important ones. The central idea of the orthogonalization is to work in a reduced space of components: enough risk factors are retained, and the movements not captured by them are treated as noise that is insignificant for the system. The accuracy of the approach, when some components are discarded, depends on the retained principal components explaining most of the system's variation, so the method works best where principal component analysis works best, that is, in term structures and other highly correlated systems. The orthogonal GARCH remains useful and feasible when the covariance matrix spans distinct, weakly correlated risk factors (for example currencies, stocks, fixed income and commodities): principal component analysis is carried out within each correlated group, the covariance matrices are obtained by GARCH estimation, and the matrices are then combined into the covariance matrix of the original system. GARCH estimation was chosen because it captures the main stylized facts that characterize financial time series, that is, statistical patterns observed empirically and believed to be common to a large number of series, especially at sufficiently high frequencies (intraday and daily observations). Expected returns were estimated with an ARMA model, and together with the covariance estimates this provides all the inputs the BLM needs to generate an optimal portfolio at an initial point in time; forecasts then yield portfolios for the following weeks. The study shows that combining the BLM with orthogonal GARCH estimation can produce satisfactory results that are coherent with intuition while keeping the model simple. The application is to fixed-income returns, specifically bonds issued by the Brazilian National Treasury in the domestic market. Both the choice of the BLM and the choice of a portfolio of National Treasury bonds were motivated by the goal of bringing statistical tools closer to applications in finance, in particular federal government bonds, which have become increasingly familiar to retail investors through the Tesouro Direto program. The study is thus intended to be useful both to investors, who buy bonds and seek to maximize return for a given level of risk, and to debt managers, who issue bonds and seek to reduce issuance costs at prudent levels of risk.
APA, Harvard, Vancouver, ISO, and other styles
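The covariance construction summarized in this abstract, principal components extracted from the return series, a univariate volatility estimate for each retained component, and reconstruction of the full covariance matrix from those pieces, can be sketched with NumPy. In the sketch below the returns are simulated placeholders, and the sample variance of each component stands in for the univariate GARCH(1,1) forecast the dissertation actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(500, 6))    # placeholder weekly returns for 6 bonds

# Orthogonal-GARCH-style covariance construction:
# 1. Standardize returns and extract principal components.
mean, std = returns.mean(axis=0), returns.std(axis=0)
z = (returns - mean) / std
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigvals)[::-1]
k = 3                                              # reduced set of components
W = eigvecs[:, order[:k]]                          # factor loadings
components = z @ W                                 # orthogonal factor series

# 2. Univariate variance of each component. In the dissertation a GARCH(1,1)
#    forecast is used at this step; the sample variance stands in for it here.
component_var = components.var(axis=0)

# 3. Reconstruct an (approximate) covariance matrix of the original system
#    from the retained factors, then undo the standardization.
cov_z = W @ np.diag(component_var) @ W.T
covariance = cov_z * np.outer(std, std)
print(np.round(covariance * 1e4, 3))               # scaled for readability
```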
8

Miller, La Tarsha M. "The Degree Attainment of Black Students: A Qualitative Study." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1434203856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Aires, Neto Abilio Wolney. "Princípio da Razoável Duração do Processo: contribuição ao desenvolvimento de legislação e medidas que o levem a efeito." Pontifícia Universidade Católica de Goiás, 2012. http://localhost:8080/tede/handle/tede/2641.

Full text
Abstract:
This study analyzes Constitutional Amendment No. 45/2004, which placed the principle of reasonable duration of proceedings among the fundamental guarantees assured to every individual, enshrined in item LXXVIII of Article 5 of the 1988 Federal Constitution, from the perspective that judicial protection must be effective, timely and adequate. The question is of great importance, since the introduction of a reasonable time limit for adjudication as a constitutional principle commits the State to the citizen, giving greater effectiveness to proceedings and guaranteeing the fundamental right of access to justice. To reach this conclusion, bibliographic, legislative, administrative and case-law research was used, with theoretical frameworks drawn from several authors, beginning with Barroso and converging on arguments that support the applicability of the Amendment, based on a historical (ontological) and evaluative (axiological) analysis. Case law on the subject was then surveyed in the main Brazilian courts, especially the Superior Courts, and compared with the bibliographic material. The importance of the principle stands out as a precondition for the full exercise of citizenship in democratic states governed by the rule of law, guaranteeing citizens the realization of their constitutionally assured rights. The principles of speed and duration of proceedings must be applied with due regard for reasonableness and proportionality, ensuring that proceedings neither extend beyond a reasonable time nor compromise other principles such as full defense and the adversarial process. It is certain, however, and to the benefit of a population that needs effective justice, that Constitutional Amendment 45/04 (which, among other innovations, expressly inserted the principle of reasonable duration of proceedings) seeks to reform the Judiciary by guaranteeing the means for it to become more agile and stronger, which is essential in a society such as ours, so lacking in the effective enforcement of citizens' rights. The current concern with procedure and with the right to a speedy and effective process calls for an analysis of the role of the National Council of Justice (CNJ) and of programs such as "Atualizar" in Goiás, FONAJE and the Electronic Judicial Process (PJE), tools that provide necessary responses to today's social and economic problems. Alternative means of conflict resolution, complementary to the formal judicial process, precisely because of their informality and adaptability, suggest the solution of many cases in mediation and conciliation chambers (consensus building). This would be a paradigm shift, erecting an alternative to the judicialization model as a counter-archetype that helps to mitigate the culture of litigation; hence the idea of adopting multi-door courthouses as a way of promoting integrative means for the settlement of disputes, leaving the traditional process for more complex cases and adapting the American experience to our reality, given the similarity.
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, An-Chiao (陳安嬌). "The Relationship Between Education Inequality, Average Education, and Happiness – A Cross-Nation Analysis." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/sn2v24.

Full text
Abstract:
Master's thesis, Department of Economics, Soochow University, ROC academic year 101 (2012-13).
This research combines and utilizes data from the third and fifth waves of the World Values Survey (WVS) and the data from Barro and Lee (2010) used by Wail et al. (2011) to calculate Gini coefficients for education in 146 nations at 5-year intervals between 1950 and 2010; a total of 70 nations are analyzed. The literature on education and happiness shows that, once social status, income, and health are all taken into account, the relationship between education and happiness declines or even becomes insignificant. This research therefore explores how the impact of education on happiness, operating through variables such as social status, income, and health, differs across societies with different levels of education inequality and average education, and examines whether the residual impact of education on happiness can be explained by adding living-condition control variables. Before the analysis, descriptive statistics are used for a preliminary survey of the average values of the variables for people with different education levels under different conditions, so as to understand the relationships between education level and these variables. Ordinary least squares (OLS) regression is then used to add, one at a time and in groups, variables such as social status, income, health, and living-condition controls, and to discuss whether the impact of education on happiness changes with education inequality, average education, or both. Empirical results indicate that both education inequality and average education affect the impact of education on happiness via different channels, and that in societies with unequal education or low average education levels, education has a greater impact on happiness via social status and income.
APA, Harvard, Vancouver, ISO, and other styles
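The education Gini coefficient referred to in this thesis can be computed from population shares and average years of schooling by attainment level, as the mean absolute difference in schooling divided by twice the mean. A minimal sketch with hypothetical attainment shares (not the Barro and Lee data used in the thesis):

```python
def education_gini(shares, years):
    """Gini coefficient of schooling: mean absolute difference in years of
    schooling across the population, divided by twice the mean years."""
    mean_years = sum(p * y for p, y in zip(shares, years))
    mad = sum(
        p_i * p_j * abs(y_i - y_j)
        for p_i, y_i in zip(shares, years)
        for p_j, y_j in zip(shares, years)
    )
    return mad / (2 * mean_years)

# Hypothetical attainment distribution: no schooling, primary, secondary, tertiary.
shares = [0.10, 0.30, 0.45, 0.15]        # population shares (sum to 1)
years = [0.0, 6.0, 12.0, 16.0]           # average years of schooling per level
print(f"Education Gini: {education_gini(shares, years):.3f}")
```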

Books on the topic "National averages"

1

White, Eric M. Estimation of national forest visitor spending averages from national visitor use monitoring: Round 2. Portland, OR: U.S. Dept. of Agriculture, Forest Service, Pacific Northwest Research Station, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Balisacan, A. M. Probing beneath cross-national averages: Poverty, inequality, and growth in the Philippines. Manila: Asian Development Bank, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Balisacan, Arsenio M. Probing beneath cross-national averages: Poverty, inequality and growth in the Philippines. Manila: Asian Development Bank, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Valerie Elizabeth. National assessment of educational progress writing proficiency, 1983-84: Catholic school results and national averages : final report, 1987. Washington, D.C: National Catholic Educational Association, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Valerie E. National Assessment of Educational Progress, reading proficiency, 1983-84: Catholic school results and national averages : final report, 1985. Washington, D.C: National Catholic Educational Association, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Topolski, James M. Comparison of alcohol and other drug abuse treatment expenditures and admissions in Missouri with national averages obtained from the 1987 SADAP report. [Jefferson City]: Missouri Division of Alcohol and Drug Abuse, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Above the national average: From the other side. New York: Rivercross Pub., 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

The average American: The extraordinary search for the nation's most ordinary citizen. New York: Public Affairs, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Perfectly average: The pursuit of normality in postwar America. Amherst: University of Massachusetts Press, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Massachusetts. General Court. Senate. Post Audit and Oversight Bureau. Workers' compensation windfall: Massachusetts workers' compensation insurers continue to make windfall profits well above the national average. [Boston, Mass.]: Senate Post Audit and Oversight Bureau, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "National averages"

1

Ramady, Mohamed A. "Not Your Average National Oil Company." In Saudi Aramco 2030, 25–59. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67750-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hoda, Anwarul, Ashok Gulati, Shyma Jose, and Pallavi Rajkhowa. "Sources and Drivers of Agricultural Growth in Bihar." In India Studies in Business and Economics, 211–46. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9335-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Prümm, Kathrin, and Stefan Alscher. "From Model to Average Student: the Europeanization of Migration Policy and Politics in Germany." In The Europeanization of National Policies and Politics of Immigration, 73–92. London: Palgrave Macmillan UK, 2007. http://dx.doi.org/10.1057/9780230800717_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kojima, Satoshi, and Kenji Asakawa. "Expectations for Carbon Pricing in Japan in the Global Climate Policy Context." In Economics, Law, and Institutions in Asia Pacific, 1–21. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6964-7_1.

Full text
Abstract:
To realize a decarbonized society consistent with the Paris Agreement, a fundamental transformation of the entire economic and social system is needed; not only carbon-intensive sectors but all sectors and all stakeholders, including households, must decarbonize. This chapter demonstrates the increasing expectations for carbon pricing in Japan in this global policy context. After a review of the global trend in carbon pricing, the historical progress of carbon pricing in Japan and the existing nationwide carbon tax, the Global Warming Countermeasure Tax, are explained. There are also two sub-national carbon pricing schemes in Japan, the Tokyo ETS and the Saitama ETS, which are covered in Chaps. 10.1007/978-981-15-6964-7_6 and 10.1007/978-981-15-6964-7_7 respectively and are not the focus of this chapter. We examine the claim that Japan has already implemented a high level of carbon pricing through its various energy taxes. Based on the effective carbon rate, defined by the OECD as the sum of explicit carbon prices and fossil fuel taxes per unit of carbon emission, Japan's nationwide average effective carbon rate is lower than the average effective carbon rates of OECD countries and the OECD's key partner countries. The current carbon pricing schemes in Japan are too modest to realize the decarbonization transition, and there is room to upgrade them to exploit the full potential of carbon pricing. The chapter discusses carbon price levels compatible with the decarbonization transition.
APA, Harvard, Vancouver, ISO, and other styles
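The effective carbon rate mentioned in this abstract, defined by the OECD as explicit carbon prices plus specific fossil fuel taxes expressed per tonne of CO2 emitted, reduces to a small conversion. The sketch below uses hypothetical tax rates and a rough gasoline emission factor for illustration; none of the numbers are Japan's actual figures.

```python
def effective_carbon_rate(explicit_price_per_tco2, fuel_tax_per_unit, emission_factor_tco2_per_unit):
    """OECD-style effective carbon rate (currency per tCO2): the explicit carbon
    price plus the fuel excise converted into an implicit price per tCO2."""
    return explicit_price_per_tco2 + fuel_tax_per_unit / emission_factor_tco2_per_unit

# Hypothetical example for gasoline: an explicit carbon tax of 3 USD/tCO2,
# a fuel excise of 0.45 USD per litre, and roughly 0.0023 tCO2 emitted per litre.
rate = effective_carbon_rate(3.0, 0.45, 0.0023)
print(f"Effective carbon rate: {rate:.0f} USD per tonne of CO2")
```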
5

Vogel, Lars. "Illiberal and Anti-EU Politics in the Name of the People? Euroscepticism in East Central Europe 2004–2019 in Comparative Perspective." In Palgrave Studies in European Union Politics, 29–55. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-54674-8_2.

Full text
Abstract:
This chapter describes patterns, trends and determinants of public Euroscepticism in East Central Europe (ECE). It investigates whether public opinion on European integration in this region is connected to the contestation of both the immigration policies and the constitutional principles of the EU by the respective national governments. By applying longitudinal and comparative analyses based on European Election Studies from 2004 to 2019, it shows public support for European integration in ECE as more closely linked to instrumental performance assessments than in the EU on average and as structured by country-specific rather than region-specific patterns. Cultural issues, like immigration and conceptions of democracy, which dominate ECE governmental politics, are only related to public Euroscepticism in some of those countries. Based on these results, the chapter suggests that the connection between the illiberal and anti-EU politics of ECE national governments and public Euroscepticism is loose and conditional upon the national context.
APA, Harvard, Vancouver, ISO, and other styles
6

Liefbroer, Aart C., and Mioara Zoutewelle-Terovan. "Meta-Analysis and Meta-Regression: An Alternative to Multilevel Analysis When the Number of Countries Is Small." In Social Background and the Demographic Life Course: Cross-National Comparisons, 101–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67345-1_6.

Full text
Abstract:
Hierarchically nested data structures are often analyzed by means of multilevel techniques. A common situation in cross-national comparative research is data on two levels, with information on individuals at level 1 and on countries at level 2. However, when dealing with few level-2 units (e.g. countries), results from multilevel models may be unreliable due to estimation bias (e.g. underestimated standard errors, unreliable country-level variance estimates). This chapter provides a discussion on multilevel modeling inaccuracies when using a small level-2 sample size, as well as a list of available alternative analytic tools for analyzing such data. However, as in practice many of these alternatives remain unfeasible in testing hypotheses central to cross-national comparative research, the aim of this chapter is to propose and illustrate a new technique – the 2-step meta-analytic approach – reliable in the analysis of nested data with few level-2 units. In addition, this method is highly infographic and accessible to the average social scientist (not skilled in advanced simulation techniques).
APA, Harvard, Vancouver, ISO, and other styles
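The 2-step meta-analytic approach this chapter proposes, first estimating the effect of interest separately within each country and then pooling the country-level estimates meta-analytically, can be sketched as follows. The sketch uses simulated data and simple fixed-effect (inverse-variance) pooling; the chapter itself also covers random-effects models and meta-regression on country characteristics.

```python
import numpy as np

rng = np.random.default_rng(42)

def country_estimate(n, true_effect):
    """Step 1: within one country, regress an outcome on a predictor (OLS)
    and return the slope estimate and its standard error."""
    x = rng.normal(size=n)
    y = true_effect * x + rng.normal(size=n)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    se = np.sqrt(residuals.var(ddof=2) / (x.var(ddof=0) * n))
    return slope, se

# Step 1 applied to a handful of countries (few level-2 units).
estimates = [country_estimate(n=300, true_effect=0.5) for _ in range(8)]

# Step 2: fixed-effect meta-analysis, weighting each country by 1/SE^2.
weights = np.array([1 / se**2 for _, se in estimates])
betas = np.array([b for b, _ in estimates])
pooled = np.sum(weights * betas) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```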
7

Stenroos, Marko, and Jenni Helakorpi. "The Multiple Stories in Finnish Roma Schooling." In Social and Economic Vulnerability of Roma People, 99–116. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-52588-0_7.

Full text
Abstract:
Regardless of the good reputation of the Finnish basic education system, Finnish Roma children fall behind the overall average in their performance of academic skills: Roma children face more challenges completing basic education and have more repeated school years. Furthermore, compared to the average, Roma youth apply less for upper secondary education and thus their general level of education remains low. However, looking at Roma education solely through problematic representations only provides a partial picture. In this article, based on two separate sets of fieldwork among Finnish Kaale Roma, we examine how teachers, Roma activists and mediators perceive the educational trajectories of Finnish Roma children and youth. The article seeks to scrutinize Finnish Roma schooling within the framework of the Finnish National Policy on Roma (NRIS). The analysis highlights the multiplicity of voices in the field, discusses the possibilities, and thus problematizes the single-aspect discourse on Roma education. Many countries in Central and Eastern Europe struggle with school and residential segregation, but Finnish Roma face different challenges.
APA, Harvard, Vancouver, ISO, and other styles
8

Dai, Juanjuan, Yaojian Wu, and Yurong Ouyang. "Calculations of the National Average Yield, Equivalence Factor and Yield Factor in Ten Years Based on National Hectares’ Ecological Footprint Model – A Case Study of Xiamen City." In Geo-Informatics in Resource Management and Sustainable Ecosystem, 338–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-45737-5_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gomendio, Montse. "Spain: The Evidence Provided by International Large-Scale Assessments About the Spanish Education System: Why Nobody Listens Despite All the Noise." In Improving a Country’s Education, 175–201. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59031-4_9.

Full text
Abstract:
ILSAs show that student performance in Spain is lower than the OECD average and has shown no progress from 2000 until 2011/2012. One of the main features is the low proportion of top performers. During this long period of stagnation, the education system was characterized by having no national (or standardized regional) evaluations and no flexibility to adapt to the different needs of the student population. The fact that the system was blind and rigid, plus the lack of common standards at the national level, gave rise to three major deficiencies: a high rate of grade repetition, which led to high rates of early school leaving, and large differences between regions. These features of the Spanish education system represent major inequities. However, PISA findings were used to reinforce the misguided view that the Spanish education system prioritized equity over excellence. After the implementation of an education reform, some improvements in student performance took place in 2015 and 2016. Unfortunately, the results for PISA 2018 in reading were withdrawn for Spain, apparently due to changes in methodology which led to unreliable results. To date, no explanation has been provided, raising concerns about the reliability and accountability of PISA.
APA, Harvard, Vancouver, ISO, and other styles
10

Misso, Francesco Edoardo, Irina Di Ruocco, and Cino Repetto. "The ELVITEN Project as Promoter of LEVs in Urban Mobility: Focus on the Italian Case of Genoa." In Small Electric Vehicles, 55–67. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-65843-4_5.

Full text
Abstract:
One of the growing innovations in the electric vehicle market concerns light electric vehicles (LEVs), promoted at local and national level by many initiatives, such as the European project ELVITEN, involving six cities, which is analysed in the present paper in relation to the Genoa pilot case study. In Italy, LEVs have been increasingly successful, as the number of their registrations shows (+76% in 2019 compared to 2018). In this context, the city of Genoa, where a considerable fleet of mopeds and motorcycles (214,499 in its metropolitan area in 2018) circulates, lends itself well to the experimentation of two-wheeled LEVs. The monitoring of the use of LEVs within the framework of the ELVITEN project has shown that the average daily round trips recorded in the metropolitan area of Genoa are equal to 15–20 km, thus reinforcing the idea that LEVs represent a valid alternative to Internal Combustion Engine (ICE) private vehicles. Moreover, the characteristics of the travel monitored and the users' feedback highlight that the question of range anxiety is less present than expected. Finally, and contrary to our expectations, the data analysis indicates that the use of LEVs in Genoa during two months of Covid-19 pandemic lockdown—March and April 2020—shows a decrease of 21%, while the average decrease recorded by the six cities globally considered is 51%.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "National averages"

1

Kelly, Jarod C., Deepak Sivaraman, and Gregory A. Keoleian. "Analysis of Avoided Carbon-Dioxide Due to Photovoltaic and Wind Turbine Technologies Displacing Electrical Peaking Facilities." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-86511.

Full text
Abstract:
Many studies that examine the impact of renewable energy installations on avoided carbon-dioxide utilize national, regional or state averages to determine the predicted carbon-dioxide offset. The approach of this computational study was to implement a dispatching strategy in order to determine precisely which electrical facilities would be avoided due to the installation of renewable energy technologies. This study focused on a single geographic location for renewable technology installation, San Antonio, Texas. The results indicate an important difference between calculating avoided carbon-dioxide when using simple average rates of carbon-dioxide emissions and a dispatching strategy that accounts for the specific electrical plants used to meet electrical demands. The avoided carbon-dioxide due to renewable energy technologies is overestimated when using national, regional and state averages. This occurs because these averages include the carbon-dioxide emission factors of electrical generating assets that are not likely to be displaced by the renewable technology installation. The study also provides a comparison of two specific renewable energy technologies: photovoltaics (PV) and wind turbines. The results suggest that investment in PV is more cost effective for the San Antonio location. While the results are only applicable to this location, the methodology is useful for evaluating renewable technologies at any location.
APA, Harvard, Vancouver, ISO, and other styles
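The contrast this paper quantifies, avoided CO2 computed from a grid-average emission factor versus from the specific plants actually displaced in the dispatch order, can be reproduced with a toy merit-order model. The fleet, demand, and renewable output below are hypothetical; in this coal-heavy example the average factor overstates the avoided emissions, in line with the paper's argument that averages include plants that are never displaced.

```python
# Hypothetical generating fleet: energy available for the hour (MWh) and CO2
# emission rate (tCO2/MWh), dispatched in the listed merit order (baseload first).
fleet = [
    {"name": "nuclear",      "mwh": 200, "tco2_per_mwh": 0.00},
    {"name": "coal",         "mwh": 500, "tco2_per_mwh": 0.95},
    {"name": "gas_combined", "mwh": 200, "tco2_per_mwh": 0.40},
    {"name": "gas_peaker",   "mwh": 100, "tco2_per_mwh": 0.55},
]

demand = 950      # MWh demanded this hour
renewable = 80    # MWh supplied by the new PV or wind installation

def dispatch_emissions(load):
    """Emissions (tCO2) when the fleet is dispatched in merit order to meet `load`."""
    total, remaining = 0.0, load
    for plant in fleet:
        used = min(plant["mwh"], remaining)
        total += used * plant["tco2_per_mwh"]
        remaining -= used
    return total

# Dispatch-based avoided CO2: emissions with and without the renewable output.
avoided_dispatch = dispatch_emissions(demand) - dispatch_emissions(demand - renewable)

# Average-based avoided CO2: renewable output times the fleet-average emission factor,
# which includes baseload plants that the renewable output never displaces.
avg_factor = dispatch_emissions(demand) / demand
avoided_average = renewable * avg_factor

print(f"Avoided CO2, dispatch model:  {avoided_dispatch:.1f} t")
print(f"Avoided CO2, average factor:  {avoided_average:.1f} t")
```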
2

Vliet, Gary C. "Texas Solar Radiation Database (TSRDB)." In ASME Solar 2002: International Solar Energy Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/sed2002-1075.

Full text
Abstract:
Solar radiation data have been acquired over an approximately five-year period (1996 to present) at 15 sites in Texas (Texas Solar Radiation DataBase – TSRDB). These data are compared with data for comparable sites in the National Solar Radiation DataBase (NSRDB). Comparison of the TSRDB and NSRDB data for eleven (11) coincident or nearby locations shows reasonably good agreement between the global horizontal values. Relative to the NSRDB, individual monthly average differences between the two sets range from −20 to +13%, and the annual averages varied from −9 to +8.5%, with positive values meaning the TSRDB values are higher. Overall, the TSRDB global horizontal data are about 2% lower than the NSRDB. However, there are considerable differences in the direct normal values, with the TSRDB values generally being higher. The monthly average differences ranged from −18 to +36%, and the average annual difference for the compared locations is about +5%. The greatest deviations for direct normal data are for coastal locations in the winter, with the three compared coastal locations exhibiting an average difference of about +30% for the combined months of December and January. Also, the TSRDB data for the Trans-Pecos region in west Texas exhibit significantly higher direct normal solar radiation throughout the year than does the NSRDB.
APA, Harvard, Vancouver, ISO, and other styles
3

Twomey, Kelly M., and Michael E. Webber. "Evaluating the Energy Intensity of the US Public Water System." In ASME 2011 5th International Conference on Energy Sustainability. ASMEDC, 2011. http://dx.doi.org/10.1115/es2011-54165.

Full text
Abstract:
Previous analyses have concluded that the United States' water sector uses over 3% of national electricity consumption for the production, conveyance, and treatment of water and wastewater, and as much as 10% when considering the energy required for on-site heating, cooling, pumping, and softening of water for end-use. The energy intensity of water is influenced by factors such as source water quality, its proximity to a water treatment facility and end-use, its intended end-use and sanitation level, as well as its conveyance to and treatment at a wastewater treatment facility. Since these requirements differ by geographic location, climate, season, and local water quality standards, the energy consumption of regional water systems varies significantly. While national studies have aggregated averages for the energy use and energy intensity of various stages of the US water system, these estimates do not capture the wide disparity between regional water systems. For instance, 19 percent of California's total electricity generation is used to withdraw, collect, convey, treat, distribute, and prepare water for end-use, nearly double the national average. Much of this electricity is used to move water over high elevations and across long distances from water-rich to water-stressed regions of the state. Potable water received by users in Southern California has typically been pumped as far as 450 miles and lifted nearly 2,000 ft over the system's highest point in the Tehachapi Mountains. Consequently, the energy intensity of San Diego County's water is approximately 11,000 kWh per million gallons for pumping, treatment, and distribution, compared to the US average, which is estimated to be in the vicinity of 1,500–2,000 kWh per million gallons. With added pressures on the state's long-haul transfer systems from population growth and growing interest in energy-intensive desalination, this margin will likely increase. This manuscript consists of a first-order analysis to quantify the energy embedded in the US public water supply, which is the primary water source for residential, commercial, and municipal users. Our analysis finds that energy use associated with the public water supply amounts to 4.7% of the nation's annual primary energy and 6.1% of national electricity consumption. Public water and wastewater pumping, treatment, and distribution, as well as commercial and residential water heating, were considered in this preliminary work. End-use energy requirements associated with water for municipal, industrial, and self-supplied sectors (i.e., agriculture, thermoelectric, mining, etc.) were not included in this analysis.
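A rough back-of-the-envelope version of the intensity comparison quoted above is sketched below, using the 11,000 kWh per million gallons figure for San Diego County and the 1,500–2,000 kWh per million gallons national range from the abstract; the annual household water use is our own illustrative assumption.

```python
# Back-of-the-envelope version of the intensity comparison above. The kWh per
# million gallons figures come from the abstract; the household water use of
# 100,000 gallons per year is our own illustrative assumption.

SAN_DIEGO_KWH_PER_MG = 11_000    # pumping, treatment, and distribution
US_AVERAGE_KWH_PER_MG = 1_750    # midpoint of the 1,500-2,000 range

household_gallons_per_year = 100_000
household_mg = household_gallons_per_year / 1_000_000   # million gallons

print(f"San Diego County: {SAN_DIEGO_KWH_PER_MG * household_mg:,.0f} kWh per household-year")
print(f"US average:       {US_AVERAGE_KWH_PER_MG * household_mg:,.0f} kWh per household-year")
# -> roughly 1,100 kWh versus ~175 kWh, before any end-use water heating
```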
APA, Harvard, Vancouver, ISO, and other styles
4

Wilson, James H., and Asfaw Beyene. "California Wave Energy Resource Evaluation." In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2007. http://dx.doi.org/10.1115/omae2007-29619.

Full text
Abstract:
In this paper, a collection of deep water (>100 m) wave records was assessed to create a long-term, statistically reliable data set. These wave data were derived from the Coastal Data Information Program (CDIP) buoy data from the UCSD Scripps Institution of Oceanography, National Data Buoy Center (NDBC) buoy data from NOAA, and other sources. From this data set, long-term annual averages and monthly wave probability distributions were analyzed for ten one-degree latitude bins bounded by the 100 m and 1000 m depth contours seaward of the California coast. The probability distributions were used to quantify the potential for useful electricity extraction from the coastal waves of California. Optimal locations for developing wave energy installations are specified. The California coast north of Point Conception has an ideal wave resource for the generation of electricity from wave energy. South of Point Conception, the wave energy arriving from North Pacific storms is efficiently blocked by the significant change in coastline orientation south of Point Conception and by the Channel Islands. The near-coastal Southern California (SOCAL) region has a significantly reduced wave resource compared to the California coast north of Point Conception. Factors impacting the status of ocean wave energy technologies and their development are also discussed. Applicability of the wave statistical results is critical to determine the average "wave to wire" efficiency for the many different types of wave energy converter (WEC) technologies that exist in many different states of commercial development.
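The abstract describes turning sea-state probability distributions into an estimate of extractable wave power. The sketch below shows one common way such an estimate can be formed, combining the standard deep-water wave power flux approximation (about 0.49 Hs² Te kW per metre of crest) with an assumed joint distribution of significant wave height and energy period; the probability table is invented and is not the CDIP/NDBC statistics used in the paper.

```python
# Expected deep-water wave power per metre of wave crest, combining the standard
# flux approximation P ~ 0.49 * Hs^2 * Te (kW/m) with an assumed joint
# probability table of significant wave height Hs (m) and energy period Te (s).
# The probabilities below are illustrative, not the buoy statistics in the paper.

sea_states = [
    # (Hs in metres, Te in seconds, probability of occurrence)
    (1.0,  8.0, 0.35),
    (2.0, 10.0, 0.40),
    (3.0, 12.0, 0.20),
    (5.0, 14.0, 0.05),
]

def wave_power_kw_per_m(hs, te):
    # Deep-water approximation: rho * g^2 / (64 * pi) is roughly 0.49 kW s / m^3
    return 0.49 * hs**2 * te

expected_power = sum(p * wave_power_kw_per_m(hs, te) for hs, te, p in sea_states)
print(f"expected wave power: {expected_power:.1f} kW per metre of crest")
```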
APA, Harvard, Vancouver, ISO, and other styles
5

Sepúlveda-Páez, Geraldy, and Carmen Araneda-Guirriman. "WOMEN FACULTY AND SCIENTIFIC PRODUCTIVITY IN LATIN AMERICAN CONTEXT: EVIDENCE FROM CHILE." In International Conference on Education and New Developments. inScience Press, 2021. http://dx.doi.org/10.36315/2021end026.

Full text
Abstract:
Since the 19th century, the position of women in the context of higher education has undergone multiple changes, although their incorporation has not been a simple or homogeneous task. Currently, women face new challenges arising from a globalized world and from the notion of market education that characterizes institutions nowadays. One of the great challenges is the under-representation of women in senior research positions (Aiston and Fo, 2020). In this context, new standards have been established to measure the productivity, quality, and effectiveness of teachers; specifically, scientific productivity has been internalized as an indicator of professional progress, and the type of publication, its impact, and citation rates have special relevance today. Achieving high scientific productivity is often very complex for academics who do not join the teaching staff early (Webber and Rogers, 2018). Furthermore, it is very difficult for academic women to constantly maintain high levels of productivity both at work and at home (Lipton, 2020). In this sense, the principles that encourage academic productivity increase competition among teachers and reinforce gender inequalities, together with a valuation of male professional life (Martínez, 2017). Indeed, the participation of women in submitting articles is much lower than that of their male counterparts (Lerback and Hanson, 2017). Therefore, the present study aims to visualize the participation of Chilean female academics in current productivity indices, based on the description of secondary data obtained from the DataCiencia and Scival platforms. The sample consists of 427 people, of whom 17.3% were women, with an average of 10 publications for the year 2019. To achieve the objectives, the following strategy was developed: 1) describe and interpret the secondary data obtained during the year 2019 on each of the platforms; 2) compare the data obtained with national averages by type of institution and gender. Based on these analyses, the implications of female participation are discussed in light of the number of women observed at the national level, their position in international indicators, and new lines of research.
APA, Harvard, Vancouver, ISO, and other styles
6

English, Jeffrey D. "Thin Glass CSP Mirrors: “From Reflection to Concentration”." In ASME 2007 Energy Sustainability Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/es2007-36173.

Full text
Abstract:
The goal for a Concentrated Solar Power (CSP) mirror is not just reflection, but the complete capture and utilization of the entire solar spectrum. Solar radiation is emitted over a range of wavelengths, analytically measured between 250–2500 nm, to ensure maximum capture. An effective solar mirror used for concentrating this energy must be capable of maintaining a high level of reflectance under adverse environmental conditions for a prolonged duration in a CSP system. Thin (1-mm) flat low-iron, silvered glass mirrors have been utilized for CSP applications for many years, but obstacles with respect to quality and durability have had to be overcome. Developments have improved the reflectance from averages in the low 90% range to averages between ∼96% and 97%. The reflectance durability standard for the use of mirrors in solar applications requires a reflectance loss of less than 5% over a 15-year period in the field. The ultimate goal is to extend the solar mirror's field life to 20–30 years, the life of a CSP system. Overcoming harsh accelerated testing parameters continues to be the focus, as these tests attempt to correlate lifetime to actual field applications. Test chambers with elevated temperatures and humidity conditions continue to be the most severe, and results continually show dramatic improvement. Attention was focused on the loss of spectral reflectance, as degradation was occurring at a rapid rate specifically in the lower-wavelength part of the spectrum. Drawing on the expertise and direction of the National Renewable Energy Laboratory (NREL), CSP thin-glass mirrors are emerging as a viable choice for solar concentration. Thin-glass mirrors offer a low-weight, highly reflective option that resists harsh weather conditions, including water and humidity variances along with surface contamination. Mirror coating advancements have exceeded the physical and chemical resistance properties of standard "off the shelf" mirror coating products, moving to precise, industry-specific components. This study reviews the obstacles and highlights the progress that has led to the success of the thin-glass mirror CSP market, drawing on a compilation of test results from NREL and other analytical laboratories, along with mirror manufacturing expertise from a vast knowledge base in the chemically plated mirror industry. It is the primary focus of the industry to continue to strive for a superior-quality concentrating mirror while making it economically viable for the solar industry.
APA, Harvard, Vancouver, ISO, and other styles
7

Thorneloe, Susan A., Keith A. Weitz, and Jesse Miller. "Analysis of the “Zero Waste” Management Option Using the Municipal Solid Waste Decision Support Tool." In 17th Annual North American Waste-to-Energy Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/nawtec17-2347.

Full text
Abstract:
The U.S. Environmental Protection Agency’s Office of Research and Development (US EPA ORD) has developed a “Municipal Solid Waste Decision Support Tool”, or MSW-DST, for local government solid waste managers to use for the life cycle evaluation of integrated solid waste management options. The MSW-DST was developed over a five-year period (1994–1999) with the assistance of numerous outside contractors and organizations, including the Research Triangle Institute, North Carolina State University, the University of Wisconsin-Madison, the Environmental Research and Education Foundation, Franklin Associates, and Roy F. Weston. The MSW-DST can be used to quantify and evaluate the following impacts for each integrated solid waste management alternative: • energy consumption, • air emissions, • water pollutant discharges, • solid waste disposal impacts. Recently, the MSW-DST was used by the U.S. EPA to identify solid waste management strategies that would help meet the goal of the EPA’s “Resource Conservation Challenge.” In this effort, ten solid waste management strategies were evaluated for a hypothetical, medium-sized U.S. community with a population of 750,000 and a waste generation rate of approximately 3.5 pounds per person per day (Table 1). The assumed waste composition was based on national averages. A peer-reviewed paper on this research was published in 2008 by the American Society of Mechanical Engineers (ASME).
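For a sense of scale, the arithmetic implied by the hypothetical community above (750,000 people at roughly 3.5 pounds per person per day) is worked out below; only the inputs stated in the abstract are used, and the conversion to short tons is standard.

```python
# Quick check of the scale implied by the hypothetical community described
# above: 750,000 people generating about 3.5 pounds of waste per person per day.

population = 750_000
lbs_per_person_per_day = 3.5
days_per_year = 365
lbs_per_short_ton = 2_000

annual_tons = population * lbs_per_person_per_day * days_per_year / lbs_per_short_ton
print(f"annual generation: {annual_tons:,.0f} short tons of MSW")
# -> roughly 479,000 short tons per year to be routed among the ten strategies
```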
APA, Harvard, Vancouver, ISO, and other styles
8

Bala Sukumaran, Vineeth, and Utpal Mukherji. "Tradeoff of average power and average delay for a point-to-point link with fading." In 2013 National Conference on Communications (NCC). IEEE, 2013. http://dx.doi.org/10.1109/ncc.2013.6488019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sukumaran, Vineeth Bala, and Utpal Mukherji. "Tradeoff of average service cost and average delay for the state dependent M/M/1 queue." In 2013 National Conference on Communications (NCC). IEEE, 2013. http://dx.doi.org/10.1109/ncc.2013.6488024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wong, Kau-Fui Vincent, and Nicolas Perilla. "Comparison of Green House Gases Emitted by Electrical and Gasoline Cars, Taking Into Consideration Performance." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-12226.

Full text
Abstract:
The goal of this study is to add to the understanding of the overall emissions caused by cars using both gasoline and existing alternative fuels. We include the emissions from the vehicle itself and also from upstream sources, primarily the source of the energy used to actually move the vehicle. The fact that electric motors have better efficiencies than internal combustion engines, and the fact that power plants usually have higher thermal efficiencies than an engine, suggest that the electric vehicle will be more efficient in terms of emissions per vehicle kilometer. The complexities of vehicle propulsion become evident when one compares all the details of the available options; for example, electric vehicles have to transport extra weight in batteries to increase performance. In this work we evaluate the emissions from electric and gasoline vehicles that are on the road. The data show that under most conditions the current electric vehicles have lower emissions than gasoline cars in terms of kilograms of carbon dioxide per kilometer. The different propulsion systems are then evaluated on how they would perform in moving a standardized vehicle, including the system itself, through a standardized cycle, to assess whether differences in emissions are the result of the propulsion system itself or of other design differences. This study found that while in general the electric vehicle is better, the source of the electricity is a crucial factor in the determination. It is found that the electric cars currently being produced emit fewer greenhouse gases than gasoline cars on average. In fact, two of the four cars performed better even at the highest possible emission levels. While this casts a positive light on the electric car, it is a simplistic way of looking at the data. The calculations also show that the performance levels of the gasoline cars are much higher than those of the electric cars; this could be the main reason for the lower emissions of electric cars. The second part of this study is focused on quantifying the differences in emissions by studying those of a standardized car in all 50 states and D.C. These differences arise from the different levels of emissions owing to the variety of combinations of methods used, and the methods themselves, in the generation of electricity within the 51 regions. An analysis is done of the most efficient car that could be made with commercially available products. The results show the dependence of actual emissions on the energy source. Although the national, California, Florida, and lowest averages all beat the performance of the gasoline vehicle, the gasoline car won if the electric car was operated in D.C. using electricity generated in D.C. Results for the electric car in all 51 regions and for the gasoline car have been obtained. There is an implication that lower specific power would result in more states where electric vehicles will emit more greenhouse gases. Assuming that new cars do use the higher specific power batteries, electric vehicles will produce fewer greenhouse gases than gasoline vehicles at a national level.
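A minimal per-kilometre comparison of the kind described above can be sketched as follows; the fuel economy, electricity consumption, and grid emission factors are all assumed for illustration and are not the vehicles or regions analyzed in the paper.

```python
# Minimal comparison of gasoline and electric vehicle emissions per kilometre.
# All vehicle efficiencies and grid emission factors below are assumptions.

GASOLINE_KG_CO2_PER_LITRE = 2.3          # tailpipe only, typical value
gasoline_litres_per_km = 8.0 / 100.0     # assumed 8 L/100 km car

ev_kwh_per_km = 0.18                     # assumed electric car consumption
grid_kg_co2_per_kwh = {                  # assumed generation mixes
    "coal-heavy grid": 0.90,
    "average-like grid": 0.45,
    "hydro/nuclear-heavy grid": 0.05,
}

gasoline_kg_per_km = gasoline_litres_per_km * GASOLINE_KG_CO2_PER_LITRE
print(f"gasoline car: {gasoline_kg_per_km:.3f} kg CO2/km")

for grid, factor in grid_kg_co2_per_kwh.items():
    ev_kg_per_km = ev_kwh_per_km * factor
    print(f"EV on {grid}: {ev_kg_per_km:.3f} kg CO2/km")
# The EV result swings from near-parity with gasoline on a coal-heavy grid to
# an order of magnitude lower on a low-carbon grid, mirroring the abstract's
# point that the electricity source is the crucial factor.
```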
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "National averages"

1

White, Eric M., Darren B. Goodding, and Daniel J. Stynes. Estimation of national forest visitor spending averages from National Visitor Use Monitoring: round 2. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, 2013. http://dx.doi.org/10.2737/pnw-gtr-883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gillis, Jessica, Jeffrey Whicker, Michael Mcnaughton, and William Eisele. Comparison of Background Radiation Effective Dose Rates for Residents in the Vicinity of a Research and Nuclear Weapons Laboratory (Los Alamos County, USA) with National Averages. Office of Scientific and Technical Information (OSTI), November 2014. http://dx.doi.org/10.2172/1163627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Skone, Timothy J. Petroleum, 2005 National Average, Production and Transport. Office of Scientific and Technical Information (OSTI), May 2012. http://dx.doi.org/10.2172/1509179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Skone, Timothy J. U.S. National Average Electricity Grid Mix 2007. Office of Scientific and Technical Information (OSTI), June 2012. http://dx.doi.org/10.2172/1509462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Farley, Chelsea, and Sandra Hackman. Ready4Work In Brief Interim Outcomes Are In: Recidivism at Half the National Average. Philadelphia, PA, United States: Public/Private Ventures, September 2006. http://dx.doi.org/10.15868/socialsector.13956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ruderman, Florence K., and Richard W. Haynes. Volume and average stumpage price of selected species on the national forests of the Pacific Northwest region, 1973 to 1984. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, 1986. http://dx.doi.org/10.2737/pnw-rn-446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cram, Jana, Mary Levandowski, Kaci Fitzgibbon, and Andrew Ray. Water resources summary for the Snake River and Jackson Lake Reservoir in Grand Teton National Park and John D. Rockefeller, Jr. Memorial Parkway: Preliminary analysis of 2016 data. National Park Service, June 2021. http://dx.doi.org/10.36967/nrr-2285179.

Full text
Abstract:
This report summarizes discharge and water quality monitoring data for the Snake River and Jackson Lake reservoir levels in Grand Teton National Park and John D. Rockefeller, Jr. Memorial Parkway for calendar year 2016. Annual and long-term discharge summaries and an evaluation of chemical conditions relative to state and federal water quality standards are presented. These results are considered provisional and may be subject to change. River Discharge: Hydrographs for the Snake River at Flagg Ranch, WY, and Moose, WY, exhibit a general pattern of high early summer flows and lower baseflows occurring in late summer and fall. During much of 2016, flows at the Flagg Ranch monitoring location were similar to the 25th percentile of daily flows at that site. Peak flows at Flagg Ranch were similar to the average peak flow from 1983 to 2015 but occurred eleven days earlier in the year compared to the long-term average. Peak flows and daily flows at the Moose monitoring station were below the long-term average, and peak flows occurred four days later than the long-term average. During summer months, the hydrograph at the Moose monitoring location exhibited signs of flow regulation associated with the management of Jackson Lake. Water Quality Monitoring in the Snake River: Water quality in the Snake River exhibited seasonal variability over the sampling period. Specifically, total iron peaked during high flows. In contrast, chloride, sulfate, sodium, magnesium, and calcium levels were at their annual minimum during high flows. Jackson Lake Reservoir: Reservoir storage dynamics in Jackson Lake exhibit a pattern of spring filling associated with early snowmelt runoff, reaching maximum storage in mid-summer (on or near July 1). During 2016, water levels and reservoir storage in Jackson Lake began to increase nearly two weeks earlier than the long-term average, coincident with increases in runoff-driven flows in the Snake River. Although peak storage in Jackson Lake was larger and occurred earlier than the long-term average, minimum storage levels were similar to the long-term average.
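The flow comparisons described above (daily flows against a long-term 25th percentile, and peak timing against a long-term average) can be illustrated with a short sketch; the daily discharge series below is synthetic and stands in for the gage records summarized in the report.

```python
# Illustrative comparison of one year of daily flows against long-term
# statistics, mirroring the kind of summary described above. The discharge
# values (cubic feet per second) are synthetic, not the Snake River records.

import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)

# Long-term record: 30 synthetic years with a snowmelt peak near day 160
long_term = np.array([
    800 + 2600 * np.exp(-((days - 160 - rng.normal(0, 8)) / 30.0) ** 2)
    + rng.normal(0, 60, size=days.size)
    for _ in range(30)
])

# One synthetic "current" year that runs low and peaks early
current = 750 + 2000 * np.exp(-((days - 149) / 30.0) ** 2)

p25 = np.percentile(long_term, 25, axis=0)          # 25th percentile of daily flows
mean_peak_day = np.mean(long_term.argmax(axis=1))   # long-term average peak timing

print(f"days at or below the long-term 25th percentile: {(current <= p25).sum()}")
print(f"peak shift vs long-term average: {mean_peak_day - current.argmax():.0f} days earlier")
```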
APA, Harvard, Vancouver, ISO, and other styles
8

Neves, Mateus C. R., Felipe De Figueiredo Silva, and Carlos Otávio Freitas. The Effect of Extension Services and Credit on Agricultural Production in Bolivia, Peru, and Colombia. Inter-American Development Bank, July 2021. http://dx.doi.org/10.18235/0003404.

Full text
Abstract:
In this paper we estimate the average treatment effect of access to extension services and credit on agricultural production in selected Andean countries (Bolivia, Peru, and Colombia). More specifically, we want to identify the effect of accessibility, here represented as travel time to the nearest area with 1,500 or more inhabitants per square kilometer or at least 50,000 inhabitants, on the likelihood of accessing extension and credit. To estimate the treatment effect and identify the effect of accessibility on these variables, we use data from the Colombian and Bolivian Agricultural Censuses of 2013 and 2014, respectively; a national agricultural survey from 2017 for Peru; and geographic information on travel time. We find that the average treatment effect for extension is higher compared to that of credit for farms in Bolivia and Peru, and lower for Colombia. The average treatment effects of extension and credit for Peruvian farms are $2,387.45 and $3,583.42, respectively. For Bolivian farms, the average treatment effects of extension and credit are $941.92 and $668.69, respectively, while for Colombian farms they are $1,365.98 and $1,192.51, respectively. We also find that accessibility and the likelihood of accessing these services are nonlinearly related. Results indicate that a higher likelihood is associated with lower travel time, especially in the analysis of credit.
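The headline numbers above are average treatment effects; as a purely illustrative companion, the sketch below computes a naive difference-in-means "ATE" on made-up farm records. The real estimates additionally adjust for selection into treatment (for example via the travel-time accessibility measure), which this toy calculation does not attempt.

```python
# Toy illustration of an average-treatment-effect calculation of the general
# kind reported above (difference in mean output between farms with and
# without access), using made-up farm records rather than the census data.

farms = [
    # (received_extension, agricultural_output_usd)
    (1, 9_500), (1, 11_200), (1, 8_900), (1, 10_400),
    (0, 7_800), (0, 8_100), (0, 7_200), (0, 8_600),
]

treated = [y for t, y in farms if t == 1]
control = [y for t, y in farms if t == 0]

ate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"naive ATE estimate: ${ate:,.2f} per farm")
# A simple difference in means ignores selection; the report's estimators
# control for accessibility and other observables before comparing groups.
```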
APA, Harvard, Vancouver, ISO, and other styles
9

Job, Jacob. Mesa Verde National Park: Acoustic monitoring report. National Park Service, July 2021. http://dx.doi.org/10.36967/nrr-2286703.

Full text
Abstract:
In 2015, the Natural Sounds and Night Skies Division (NSNSD) received a request to collect baseline acoustical data at Mesa Verde National Park (MEVE). Between July and August 2015, as well as February and March 2016, three acoustical monitoring systems were deployed throughout the park; however, one site (MEVE002) stopped recording after a couple of days during the summer due to wildlife interference. The goal of the study was to establish a baseline soundscape inventory of backcountry and frontcountry sites within the park. This inventory will be used to establish indicators and thresholds of soundscape quality that will support the park and NSNSD in developing a comprehensive approach to protecting the acoustic environment through soundscape management planning. Additionally, results of this study will help the park identify major sources of noise within the park, as well as provide a baseline understanding of the acoustical environment as a whole for use in potential future comparative studies. In this deployment, sound pressure level (SPL) was measured continuously every second by a calibrated sound level meter. Other equipment included an anemometer to collect wind speed and a digital audio recorder collecting continuous recordings to document sound sources. In this document, “sound pressure level” refers to the broadband (12.5 Hz–20 kHz), A-weighted, 1-second time-averaged sound level (LAeq, 1s), hereafter referred to as “sound level.” Sound levels are measured on a logarithmic scale relative to the reference sound pressure for atmospheric sources, 20 μPa. The logarithmic scale is a useful way to express the wide range of sound pressures perceived by the human ear. Sound levels are reported in decibels (dB). A-weighting is applied to sound levels in order to account for the response of the human ear (Harris, 1998). To approximate human hearing sensitivity, A-weighting discounts sounds below 1 kHz and above 6 kHz. Trained technicians calculated time audible metrics after monitoring was complete. See the Methods section for protocol details, equipment specifications, and metric calculations. Median existing (LA50) and natural ambient (LAnat) metrics are also reported for daytime (7:00–19:00) and nighttime (19:00–7:00). Prominent noise sources at the two backcountry sites (MEVE001 and MEVE002) included vehicles and aircraft, while building and vehicle noise predominated at the frontcountry site (MEVE003). Table 1 displays time audible values for each of these noise sources during the monitoring period, as well as ambient sound levels. In determining the current conditions of an acoustical environment, it is informative to examine how often sound levels exceed certain values. Table 2 reports the percent of time that measured levels at the three monitoring locations were above four key values.
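Since the abstract leans on decibel notation (levels relative to 20 μPa, and energy-based averaging for LAeq), a short sketch of those two conventions is given below; the pressure and level values are illustrative and are not measurements from the MEVE sites.

```python
# Sketch of the decibel conventions described above: sound pressure level
# relative to the 20 micropascal reference, and an energy-based average (Leq)
# over several 1-second samples. The pressure and level values are illustrative.

import math

P_REF = 20e-6  # reference sound pressure in pascals

def spl_db(pressure_pa):
    return 20.0 * math.log10(pressure_pa / P_REF)

# Three hypothetical 1-second A-weighted levels (dB): quiet, birdsong, aircraft
levels_db = [28.0, 42.0, 61.0]

# Leq averages sound energy, not decibel values, so levels are converted back
# to relative energy before averaging and re-expressing in dB.
leq = 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db) / len(levels_db))

print(f"0.02 Pa corresponds to {spl_db(0.02):.1f} dB")
print(f"energy-averaged level (Leq): {leq:.1f} dB")
```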
APA, Harvard, Vancouver, ISO, and other styles
10

Alviarez, Vanessa, Keith Head, and Thierry Mayer. Global Giants and Local Stars: How Changes in Brand Ownership Affect Competition. Inter-American Development Bank, June 2021. http://dx.doi.org/10.18235/0003333.

Full text
Abstract:
We assess the consequences for consumers in 76 countries of multinational acquisitions in beer and spirits. Outcomes depend on how changes in ownership affect markups versus efficiency. We find that owner fixed effects contribute very little to the performance of brands. On average, foreign ownership tends to raise costs and lower appeal. Using the estimated model, we simulate the consequences of counterfactual national merger regulation. The US beer price index would have been 4–7% higher without divestitures. Up to 30% savings could have been obtained in Latin America by emulating the pro-competition policies of the US and EU.
APA, Harvard, Vancouver, ISO, and other styles