Theses / dissertations on the topic "Low quantity"

Listed below are the top 50 theses / dissertations for research on the topic "Low quantity".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online whenever one is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Lowas, Albert Frank III. "Improved Spare Part Forecasting for Low Quantity Parts with Low and Increasing Failure Rates". Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1432380369.

2

Bokhour, Edward Bijan. "Energy absorption methods for fluid quantity gauging in low gravity". Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/35942.

3

Uhlman, Kristine, and Janick Artiola. "Arizona Wells: Low Yielding Domestic Water Wells". College of Agriculture and Life Sciences, University of Arizona (Tucson, AZ), 2011. http://hdl.handle.net/10150/146926.

Abstract:
3 pp.
Arizona Well Owner's Guide to Water Supply
To develop a ground water resource, it is necessary to design and construct a well capable of yielding a pumping rate compatible with the needs of the water well owner. Sufficient and sustained well yields are highly dependent on the characteristics of the aquifer, the construction of the well, and the maintenance of the well. Causes of low-yielding wells are explained and practices to restore well performance are recommended.
4

Thompson, Lindsay Paige. "Degenerate Oligonucleotide Primed - Polymerase Chain Reaction Evaluation And Optimization To Improve Downstream Forensic STR Analysis Of Low Quality/Low Quantity DNA". VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/1299.

Abstract:
When forensic biological samples yield low quality/low quantity DNA, the current STR analysis methods do not generate acceptable profiles. Whole genome amplification (WGA) can be used to pre-amplify the entire genome for downstream analyses. A commercially available kit for DOP-PCR, a form of WGA, is currently used in clinical settings for downstream single-locus targets, whereas forensic analyses utilize a multiplex amplification. This study determined that the "home brew" created by our lab performs the same as the commercially available kit, so future optimization studies of DOP-PCR can utilize this "home brew". Additionally, this research determined that a 10-second increase in electrokinetic injection time on the capillary electrophoresis (CE) instrument, in combination with a post-STR-amplification purification and elution into formamide, produces a slightly higher percent STR allele success than the standard protocol. After further optimization studies, this may be a useful method to obtain more accurate and complete STR profiles from low quality/low quantity biological samples.
5

Balfour, Robert Andrew. "Differences in the growth of the wolf spider Hogna helluo (Araneae : Lycosidae) reared under high and low food quantity diets". Oxford, Ohio: Miami University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=miami1078419602.

6

Larnane, Amel. "Identification par empreintes génétiques : développement et évaluation de nouvelles méthodologies pour l'analyse de traces d'ADN en faible quantité et/ou dégradé". Electronic Thesis or Diss., université Paris-Saclay, 2024. https://www.biblio.univ-evry.fr/theses/2024/interne/2024UPASL102.pdf.

Abstract:
Genetic fingerprinting has become a cornerstone method in criminal investigations over the past three decades. However, the analysis of biological traces from crime scenes remains a major challenge, particularly when DNA is degraded and/or present in low quantities. Currently, only about 33% of traces collected for genetic analysis are usable with conventional techniques, mainly in the simplest cases. More complex traces, whether they contain insufficient amounts of DNA, degraded DNA, or are composed of mixtures, still pose difficulties, thereby limiting the identification of suspects or victims. Overcoming these obstacles is a significant challenge for forensic and judicial communities, as well as for society as a whole. This thesis aims to push these boundaries by developing new methodologies to analyze degraded and/or low-quantity DNA samples. In the first part, we sought to understand the composition of these complex traces using casework. To achieve this, we employed ultra-sensitive pulsed-field electrophoresis technology to visualize the DNA, coupled with the quantification of human DNA via Alu sequences and sequencing of the 16S ribosomal RNA gene to identify the presence of microorganisms. This approach revealed that human DNA was present in over 84% of cases, although often in insufficient quantities and with varying levels of degradation, while bacterial DNA predominated. In the second part, we focused on the issue of low DNA quantities by examining traces from casework. We chose to adapt a DNA amplification protocol, integrating it with an innovative robotic miniaturization technology. This strategy notably made it possible to exploit traces that were not usable with conventional methods. Finally, in the third part, we addressed the issue of degradation by analyzing Single Nucleotide Polymorphisms (SNPs) using targeted Next Generation Sequencing (NGS). The results indicate the possibility of establishing hybrid genetic profiles composed of short tandem repeats (STR) and SNPs from highly degraded DNA samples. These new methodologies offer a fresh perspective on the use of DNA traces in criminal investigations and emphasize the importance of redefining regulatory frameworks surrounding the multiple genetic data available from biological traces, an issue that should be central to discussions in the coming decade. These findings could transform the approach to genetic identification, with a direct impact on judicial procedures.
7

Ras, Anna. "Analysis of the quantity and cost of modelled nitrate deposition to the Vaal River from power station emissions with insights for cost-benefit analysis and policy recommendations". Master's thesis, Faculty of Engineering and the Built Environment, 2019. http://hdl.handle.net/11427/30869.

Abstract:
Anthropogenic processes have led to high levels of reactive nitrogen entering freshwater ecosystems. This increase in reactive nitrogen levels has caused several adverse environmental and health effects and has resulted in higher deposition rates of nitrates to freshwater ecosystems. The costs and benefits associated with nitrate deposition have been analysed by the European Nitrogen Assessment (ENA) for European countries. However, no similar studies have been done for the South African context. The aim of the study was to present a cost analysis of nitrate deposition originating from power station NOx emissions. The objectives were: to examine the changes in nitrate deposition for the years 1980, 2005, 2006 and 2014; to determine the costs associated with nitrate deposition to freshwater ecosystems for the South African context; to calculate the costs of power station emissions to the Vaal River; to consider how European costs differ from South African costs; to consider the impact of the National Environmental Management: Air Quality Act (NEMAQA) of 2004; and finally, to evaluate the likelihood of these costs being incurred. The years selected for this study were chosen due to the availability of data, which were supplied by EScience Associates. Three scenarios were considered for each of these years: Scenario 1 was a case in which Eskom operated as usual without any retrofits of power stations, Scenario 2 considered the implementation of the Eskom air quality management strategy, and Scenario 3 considered full compliance with the minimum emissions standards set out in the NEMAQA of 2004. The costing method followed the ENA approach, whilst considering the South African context by consulting the relevant literature. The monetized annual costs for the South African context were: mitigation options for improving water quality; increased coal consumption due to power station interventions; agricultural costs; water purification and waste treatment; health impacts and loss of biodiversity as a result of acidification and eutrophication. Power station interventions were found to be the only capital expenditure. The nitrate deposition per unit of electricity generated was expected to decrease, due to changes within the electricity mix of Eskom during this period. Furthermore, the least costly option was expected to be a scenario in which no intervention was made by Eskom to reduce emissions, due to the high capital cost associated with retrofitting low-NOx burners in the older power stations. The final expected outcome was that the NEMAQA of 2004 would have led to a significant decrease in the emissions and, therefore, nitrate deposition to the Vaal River. The costs that were calculated for the South African context differed greatly from the costs in the ENA, indicating that the European costs could not be used directly for the South African context. Furthermore, the results showed that the costs of nitrate deposition increased between 1980 and 2005, decreased between 2005 and 2006 and increased again between 2006 and 2014. Between 1980, 2005 and 2006, a clear link is seen between electricity generated and nitrate deposition. Even though electricity generation increased from 2006 to 2014, the 2014 emissions data show that emissions decreased over the same period. The cost of a fine for non-compliance with emission limits is R10 million.
The lowest cost calculated across the years and scenarios was that of Scenario 1 for 1980: approximately R70 million in costs arising from nitrate deposition from power station emissions. This R70 million, therefore, does not include mitigation options for water quality, increased coal consumption, or power station interventions. The fines associated with non-compliance, which occur in Scenario 1 and Scenario 2, should therefore be increased to force compliance. The total cost associated with Eskom's air quality strategy, calculated as part of Scenario 2, was the lowest-cost option for 1980, 2006 and 2014. In 2005, the lowest-cost option was Scenario 1, in which no retrofits were done by Eskom. This indicated a trade-off between the capital expenditure for low-NOx burners and the annual costs listed previously. This study concluded that when air quality policies such as the NEMAQA of 2004 are implemented without stringent enforcement, the desired result is not achieved. The findings show that no significant decrease in nitrate deposition occurred between 2005, when the NEMAQA of 2004 was released, and 2014, almost 10 years after the policy was implemented. This study makes a valuable contribution to informing policy makers on the impact of reactive nitrogen addition to the environment. Future research should address the cost of agricultural nitrate deposition to the Vaal River, considering that these inputs are several times larger than those from power station emissions and could, therefore, carry costs of a correspondingly larger scale.
8

Orie, Kenneth Kanu. "Legal aspects of groundwater quantity allocation and quality protection in Canada". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41192.

Abstract:
Groundwater quantity allocation and quality protection in Canada largely proceed in a fragmented fashion. Each jurisdiction pursues the management of its water resources and the aquatic environment separately as well as independently of other jurisdictions. This approach is at odds with the unity of the natural environment and the inter-connectedness of groundwater resources.
The challenge facing Canada is to make the law recognize and be more responsive to the unity of the aquatic environment and water resources. An active federal role in uniting and coordinating the efforts of the provinces in this regard is crucial if this challenge is to be met. However, since the constitutional division of powers in Canada encourages a fragmented approach to managing environment and water resources, the federal government is incapacitated, purely on a legal score, with respect to pulling together the efforts of the provinces. A cooperative approach, based on political rather than legal coordination, is therefore the most realistic option for the federal government to meet the challenge.
In this work, the writer examines the various areas for federal-provincial cooperation regarding groundwater allocation and protection. Such institutional integration or cooperation cannot be effective unless groundwater is addressed together with the other components of the hydrologic cycle, namely surface water and the ecosystems they support. At the same time, in adopting an integrated hydrologic-cycle approach, the specific groundwater management strategies canvassed in this work must be taken into account if groundwater is to be allocated and protected more efficiently. Pursuant to these considerations, the writer is of the opinion that groundwater resources in Canada should be managed in a way that meets both present and future needs of Canadians, that is, in a sustainable fashion. This can best be achieved if resource management relies upon a combination of contaminant-focused and resource-focused approaches, adopted under unified federal-provincial efforts as well as under integrated hydrologic-cycle management.
9

Decman, John M. "Effects of state deregulation on the quantity and adequacy of school facilities". Virtual Press, 2000. http://liblink.bsu.edu/uhtbin/catkey/1191105.

Abstract:
The general purpose of this study was to determine whether deregulation in Indiana via Public Law 25-1995 has had an adverse effect on either quantity or adequacy of new school construction. Data for projects approved during the three years preceding deregulation (1992-1994) were compared with data for projects approved during the three years following deregulation (1996-1998). Data for the projects were obtained from state agencies. They included the number of projects approved, the cost and size of each project, school district enrollment, and the assessed valuation of each school district in each of the years studied. Major findings included: (a) The annual average number of approved projects prior to deregulation was 14 and the annual average following deregulation was 13. (b) The size of approved elementary level projects did not change following deregulation (it remained at 138 square feet per student). The size of approved middle level projects decreased from 196 square feet per student to 170 square feet per student after deregulation (a 14% decrease), and the size of middle schools became less uniform. The size of approved high school projects decreased from 230 square feet per student to 209 square feet per student after deregulation (a 9% decrease). (c) The average cost per square foot of approved elementary school projects declined from $113 to $109, and the average cost per square foot of approved high school projects declined from $119 to $107 after deregulation. The average cost per square foot of approved middle level projects increased from $105 to $110. (d) School district wealth did not have a significant effect on either the quantity of projects or the size of projects. (e) School district size did not have a significant effect on either the quantity of projects or the size of projects. Recommendations include additional long-term studies to address not only the effects of deregulation on school facilities, but also the effects of deregulation on educational programming.
Department of Educational Leadership
10

Meroni, Elena Claudia. "Average and quantile effects of more instruction time in low achieving schools: evidence from Southern Italy". Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423018.

Abstract:
The thesis is composed of two main chapters. Both study the effectiveness of a programme funded by the European Union, implemented during the academic year 2010/11 in low-achieving lower secondary schools located in four Southern Italian regions. The intervention's aim was to increase student performance in mathematics and Italian language through the provision of extra instruction time, held in the afternoon, thus outside regular school time. The first chapter focuses on average treatment effects. I control for sorting across classes using the fact that students are divided into groups distinguished by letters, remain in the same group across grades, and the composition of teachers assigned to each group is substantially stable over time. I implement a difference-in-differences strategy and compare two contiguous cohorts of sixth-grade students enrolled in the same group. I contrast groups with and without additional instruction time in participating schools against groups in non-participating schools selected to be similar with respect to a long list of pre-programme indicators. I find that the programme raised test scores in mathematics in schools characterised by students from less advantaged backgrounds, while no effect is found on Italian language test scores. In particular, the gain is higher for the mathematical-reasoning dimensions, while knowledge of mathematics concepts is not affected. In the second chapter, I go beyond average effects, using two non-linear methods (threshold difference-in-differences and changes-in-changes) which allow one to recover the counterfactual distribution of the treated group had it not been treated, and hence the quantile treatment effects of the intervention. Both methods suggest that the positive effect documented for mathematics is driven by larger effects for the best students in the group, while low-achieving students seem not to benefit from the intervention.
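
As a minimal illustration of the difference-in-differences contrast described above (a sketch only; all numbers are invented, not the thesis data):

    # Difference-in-differences on two contiguous cohorts (illustrative values).
    # "Treated" are class groups with extra instruction time in participating
    # schools; "control" are matched groups in non-participating schools.
    pre_treated, post_treated = 48.0, 55.0   # mean math scores, cohort 1 vs. cohort 2
    pre_control, post_control = 49.0, 51.0

    did = (post_treated - pre_treated) - (post_control - pre_control)
    print(f"DiD estimate of the programme effect: {did:+.1f} points")  # +5.0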
11

Cilurzo, Luiz Fernando. "A desjudicialização na execução por quantia". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/2/2137/tde-29082016-122503/.

Abstract:
This study analyzes the possibility and functionality of the de-judicialization of debt enforcement as a technique to accelerate the procedure, with the purpose of reducing the overload of court cases faced by the judiciary system. The study relied on an extensive review of the literature, statistical data analysis, and field research carried out in a judicial clerk's office. The text is divided into three chapters. The first develops the concept of de-judicialization of enforcement and then analyzes the Brazilian codifications of civil procedure from their history to the present, identifying, in each case, movements of de-judicialization. It also presents the main aspects of present-day due process of law that are relevant to the study of de-judicialization, and analyzes statistical data on the court system's overload, emphasizing the impact of enforcement proceedings on judges' chambers and clerks' offices. The second chapter presents scattered Brazilian enforcement proceedings that also make use of de-judicialization, as well as the use of the technique abroad. The third chapter analyzes, first, from a general and abstract point of view, the viability of using the different forms of de-judicialization in light of the relevant aspects of due process referenced in the first chapter. Finally, based on the diagnosis of court-system overload presented in the first chapter, it identifies the characteristics and main elements for applying de-judicialization more incisively in the general proceeding of debt enforcement, as a way to reduce the slowness of the Brazilian courts. In conclusion, de-judicialization is a technique that meets due process requirements, and a de-judicialized initiative that provides an alternative to the judicial clerk's offices may be a first step towards, among other improvements, a progressive relief in the flow of cases brought to the judiciary.
12

Fencl, Jane S. "How big of an effect do small dams have?: using ecology and geomorphology to quantify impacts of low-head dams on fish biodiversity". Thesis, Kansas State University, 2015. http://hdl.handle.net/2097/18960.

Abstract:
Master of Science
Division of Biology
Martha E. Mather
In contrast to the well-documented adverse impacts of large dams, little is known about how smaller low-head dams affect fish biodiversity. Over 2,000,000 low-head dams fragment United States streams and rivers and can alter biodiversity. The spatial impacts of low-head dams on geomorphology and ecology are largely untested despite their ubiquity. A select review of how intact low-head dams affect fish species identified four methodological inconsistencies that impede our ability to generalize about the ecological impacts of low-head dams on fish biodiversity. We tested the effect of low-head dams on fish biodiversity (1) upstream vs. downstream at dams and (2) downstream of dammed vs. undammed sites. Fish assemblages for both approaches were evaluated using three summary metrics and habitat guilds based on species occurrence in pools, riffles, and runs. For the dammed vs. undammed comparison, we tested whether (a) the spatial extent of dam disturbance, (b) reference site choice, and (c) site variability altered fish biodiversity at dams. Based on information from the geomorphic literature, we quantified the spatial extent of low-head dam impacts using width, depth, and substrate. Sites up- and downstream of dams had different fish assemblages regardless of the measure of fish biodiversity. Richness, abundance and Shannon's index were significantly lower upstream compared to downstream of dams. In addition, only three of seven habitat guilds were present upstream of dams. Methodological decisions about spatial extent and reference choice affected observed fish assemblage responses between dammed and undammed sites. For example, species richness was significantly different when comparing transects within the spatial extent of dam impact but not when transects outside the dam footprint were included. Site variability did not significantly influence fish response. These small but ubiquitous disturbances may have large ecological impacts because of their potential cumulative effects. Therefore, low-head dams need to be examined using a contextual riverscape approach. How low-head dam studies are designed has important ecological insights for scientific generalizations and methodological consequences for interpretations about low-head dam effects. My research provides a template on which to build this approach, to the benefit of both ecology and conservation.
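
The three summary metrics named in this abstract are standard and easy to compute; a minimal sketch with invented counts (species and numbers are illustrative only, not the thesis data):

    import math

    # Hypothetical fish counts per species at one sampling site.
    counts = {"species A": 30, "species B": 12, "species C": 5, "species D": 2}

    abundance = sum(counts.values())          # total individuals
    richness = len(counts)                    # number of species
    shannon = -sum((n / abundance) * math.log(n / abundance)
                   for n in counts.values())  # Shannon's index H' = -sum p_i ln p_i

    print(richness, abundance, round(shannon, 3))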
13

Megahed, Aly. "Supply chain planning models with general backorder penalties, supply and demand uncertainty, and quantity discounts". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54011.

Abstract:
In this thesis, we study three supply chain planning problems. The first two problems fall at the tactical planning level, while the third falls at the strategic/tactical level. We present a direct application of the first two planning problems in the wind turbine industry. For the third problem, we show how it can be applied to supply chains in the food industry. Many countries and localities have the explicitly stated goal of increasing the fraction of their electrical power that is generated by wind turbines. This has led to rapid growth in the manufacturing and installation of wind turbines. The globally installed capacity for manufacturing the different components of wind turbines is nearly fully utilized. Because of the large penalties for missing delivery deadlines for wind turbines, effective planning of the supply chain has a significant impact on the profitability of turbine manufacturers. Motivated by the planning challenges faced by one of the world's largest manufacturers of wind turbines, in the first part of this thesis we present a comprehensive tactical supply chain planning model for the manufacturing of wind turbines. The model is multi-period, multi-echelon, and multi-commodity. Furthermore, the model explicitly incorporates backorder penalties with a general cost structure, i.e., the cost structure does not have to be linear as a function of the backorder delay. To the best of our knowledge, modeling-based supply chain planning has not been applied to wind turbines, nor has a model with all the above-mentioned features been described in the literature. Based on real-world data, we present numerical results that show the significant impact of the capability to model backorder penalties with general cost structures on the overall cost of supply chains for wind turbines. With today's rapidly changing global marketplace, it is essential to model uncertainty in supply chain planning. In the second part of this thesis, we develop a two-stage stochastic programming model for the comprehensive tactical planning of supply chains under supply uncertainty. In the first stage, procurement decisions are made, while in the second stage, production, inventory, and delivery decisions are made. The considered supply uncertainty combines supplier random yields and stochastic lead times, and is thus the most general form of such uncertainty to date. We apply our model to the same wind turbine supply chain. We illustrate theoretical and numerical results that show the impact of supplier uncertainty/unreliability on the optimal procurement decisions. We also quantify the value of modeling uncertainty versus deterministic planning. Supplier selection with quantity discounts has been an active research problem in the operations research community. In the last part of this thesis, we focus on a new quantity discount scheme offered by suppliers in some industries. Suppliers are selected for a strategic planning period (e.g., 5 years). Fixed costs associated with supplier selection are paid. Orders are placed monthly with any of the chosen suppliers, but the quantity discounts are based on the aggregated annual order quantities. We incorporate all this in a multi-period, multi-product, multi-echelon supply chain planning problem and develop a mixed integer programming (MIP) model for it. Leading commercial MIP solvers take 40 minutes on average to find any feasible solution for realistic instances of our model.
With the aim of getting high-quality feasible solutions quickly, we develop an algorithm that constructs a good initial solution, and three further iterative algorithms that improve this initial solution and are capable of producing high-quality primal solutions very fast. Two of the latter three algorithms are based on MIP-based local search, and the third incorporates a variable neighborhood descent (VND) combining the first two. We present numerical results for a set of instances based on a real-world supply chain in the food industry and show the efficiency of our customized algorithms. The leading commercial solver CPLEX finds only a very few feasible solutions with lower total costs than our initial solution within a three-hour run-time limit. All our iterative algorithms clearly outperform CPLEX. The VND algorithm has the best average performance: its average relative gap to the best known feasible solution is within 1% in less than 40 minutes of computing time.
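
Variable neighborhood descent is a standard metaheuristic; the following generic sketch abstracts the thesis's two MIP-based local searches as interchangeable improvement operators (an assumption made for illustration, not the actual algorithm):

    # Generic VND: cycle through improvement operators, restarting from the
    # first whenever one improves the incumbent; stop when none does.
    def vnd(solution, cost, neighborhoods):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](solution)   # a better solution or None
            if candidate is not None and cost(candidate) < cost(solution):
                solution, k = candidate, 0           # improvement: restart cycle
            else:
                k += 1                               # try the next neighborhood
        return solution

    # Toy usage: minimize |x| with two hypothetical move operators.
    moves = [lambda x: x - 1 if x > 0 else None, lambda x: x + 1 if x < 0 else None]
    print(vnd(7, abs, moves))  # 0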
14

Horrigue, Walid. "Prévision non paramétrique dans les modèles de censure via l'estimation du quantile conditionnel en dimension infinie". Thesis, Littoral, 2012. http://www.theses.fr/2012DUNK0511.

Abstract:
In this thesis, we study some asymptotic properties of conditional functional parameters in a nonparametric setting, when the explanatory variable takes its values in an infinite-dimensional space. In this nonparametric setting, we consider the estimators of the usual functional parameters, such as the conditional law, the conditional probability density, and the conditional quantile. We are essentially interested in the problem of forecasting in nonparametric conditional models, when the data are functional random variables. First, we propose an estimator of the conditional quantile and establish its uniform strong convergence, with rates, over a compact subset. Following the convention in biomedical studies, we consider an identically distributed sequence {Ti, i ≥ 1}, with density f, right-censored by a sequence {Ci, i ≥ 1} that is also assumed independent and identically distributed, and independent of {Ti, i ≥ 1}. Our study focuses on dependent data, and the covariate X takes values in an infinite-dimensional space. Second, we establish the asymptotic normality of the suitably normalized kernel estimator of the conditional quantile, under an α-mixing assumption and concentration properties of the probability measure of the functional regressors on small balls. Several applications to particular cases are also given, and our results are applied to simulated data, showing the quality of our estimator.
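
For reference, the usual kernel estimator behind such results takes the following form (standard notation, assuming a semi-metric $d$, a kernel $K$ and a bandwidth $h_n$; under censoring the indicators are additionally reweighted, which is omitted here): $\hat{F}(y\mid x)=\sum_{i=1}^{n}K\big(d(x,X_i)/h_n\big)\mathbf{1}_{\{Y_i\le y\}}\,\big/\,\sum_{i=1}^{n}K\big(d(x,X_i)/h_n\big)$, with conditional quantile $\hat{q}_{\alpha}(x)=\inf\{y:\hat{F}(y\mid x)\ge\alpha\}$.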
15

Richard, Michael. "Évaluation et validation de prévisions en loi". Thesis, Orléans, 2019. http://www.theses.fr/2019ORLE0501.

Abstract:
In this thesis, we study the evaluation and validation of predictive densities. In the first part, we are interested in the contribution of machine learning to quantile and density forecasting. We use several machine learning algorithms in a quantile-forecasting framework with real data, in order to highlight the efficiency of particular methods depending on the nature of the data. In the second part, we present some validation tests for predictive densities found in the literature. As illustration, we apply two of the mentioned tests to real data on stock index log-returns. In the third part, we address the calibration constraint of probability forecasting. We propose a generic method for recalibration which allows us to enforce this constraint and thus simplifies the choice between density forecasts; it yields valid forecasts from a misspecified model. It remains to assess the impact on forecast quality, measured by the sharpness of the predictive distributions or by specific scores. We show that the impact on the Continuous Ranked Probability Score (CRPS) is weak under some hypotheses and positive under more restrictive ones. We apply our method to weather and electricity-price ensemble forecasts. Keywords: density forecasting, quantile forecasting, machine learning, validity tests, calibration, bias correction, PIT series, pinball loss, CRPS.
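
The pinball loss named in the keywords is simple to state; a minimal sketch (toy data, illustrative only):

    import numpy as np

    def pinball_loss(y_true, y_pred, alpha):
        """Average pinball (quantile) loss of a level-alpha quantile forecast."""
        diff = np.asarray(y_true) - np.asarray(y_pred)
        return float(np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff)))

    # Evaluating a 0.9-quantile forecast on toy observations:
    print(pinball_loss([1.0, 2.0, 3.0], [0.8, 2.5, 2.9], alpha=0.9))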
16

Moran, Terrence J. "A Simulation and Evaluation Study of the Economic Production Quantity Lot Size and Kanban for a Single Line, Multi-Product Production System Under Various Setup Times". Kent State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=kent1213302997.

17

Andersson, Jonathan, and Henrik Månsson. "KOSTNADSKALKYLERING VID VOLYMVARIATION". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Industriell organisation och produktion, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-29326.

Abstract:
Purpose - The purpose is to contribute knowledge regarding cost calculation under varying lot quantities. To achieve this, the purpose was broken down into the two questions presented below. [1] Which costs can be identified in manufacturing units? [2] How should product costing be constructed to handle variation in lot quantity? Method and implementation - A case study was conducted to gather empirical data, and a literature study to gather theory. In the case study we used observations, interviews and document studies to gather empirical data. The collected empirical data were analyzed against theory to answer the study's questions and thereby achieve its purpose. Result - The result consists of the costs identified in the studied setting and a costing model that can handle variation in lot quantity. The costs are explained with the help of the production activities where they arose, which answers the first question. To answer the second question, different cost accounting models were analyzed to identify their strengths and weaknesses with regard to varying lot quantities. The analysis identified the characteristics a cost calculation should have in order to handle variation in lot quantity. These characteristics shaped the developed model and gave it those properties. The model was then compared with full costing to make the result clearer. Implications - The implications of the result are both theoretical and practical, since the result affects practice and extends the theory of costing under varying lot quantities. The practical implications concern the way cost calculations are performed when lot quantities vary. Unlike the practical implications, the theoretical ones run both ways, as theory shaped the direction of the study while the result in turn affects theory. The result shows that today's cost calculation models are deficient when handling variation in lot quantity, which calls the established theories into question; further research is needed to complement or develop them. Further research - The result is founded on a single-case study, which lowers its generalizability. Further studies to strengthen generalizability could be conducted as a multiple-case study. The developed model has not been tested in practice, so further research could be carried out by practically implementing the result. Limitations - The case study was of single-case character, meaning that only one company was studied. To increase generalizability, the case study could have been conducted at several companies with different conditions.
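
The mechanism at the heart of the study can be seen in a two-line cost model: a fixed setup cost spread over the lot makes unit cost fall as lot quantity grows. A sketch only, with invented figures:

    # Unit cost as a function of lot quantity (illustrative figures).
    def unit_cost(lot_size, setup_cost=1200.0, variable_cost_per_unit=35.0):
        return setup_cost / lot_size + variable_cost_per_unit

    for q in (10, 100, 1000):
        print(q, round(unit_cost(q), 2))  # 155.0, 47.0, 36.2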
18

McKnight, Michael. "Exploring the Private Music Studio: Problems Faced by Teachers in Attempting to Quantify the Success of Teaching Theory in Private Lessons through One Method as Opposed to Another". Thesis, connect to online resource, 2006. http://www.unt.edu/etd/all/Aug2006/mcknight%5Fmichael%5Fwilliam/index.htm.

19

Bessa, Vagner Henrique Loiola. "Osciladores log-periódicos e tipo Caldirola-Kanai". Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8210.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this work we present the classical and quantum solutions of two classes of time-dependent harmonic oscillators, namely: (a) the log-periodic and (b) the Caldirola-Kanai-type oscillators. For class (a) we study the following oscillators: (I) $m(t)=m_0\frac{t}{t_0}$, (II) $m(t)=m_0$ and (III) $m(t)=m_0\left(\frac{t}{t_0}\right)^2$. In all three cases $\omega(t)=\omega_0\frac{t_0}{t}$. For class (b) we study the Caldirola-Kanai oscillator (IV), where $\omega(t)=\omega_0$ and $m(t)=m_0\exp(\gamma t)$, and the oscillators with $\omega(t)=\omega_0$ and $m(t)=m_0\left(1+\frac{t}{t_0}\right)^\alpha$, for $\alpha=2$ (V) and $\alpha=4$ (VI). To obtain the classical solution for each oscillator we solve the respective equation of motion and analyze the behavior of $q(t)$ and $p(t)$ as well as the phase diagram $q(t)$ vs $p(t)$. To obtain the quantum solutions we use a unitary transformation and the Lewis and Riesenfeld quantum invariant method. The wave functions obtained are written in terms of a function $\rho$ which is a solution of the Milne-Pinney equation. Furthermore, for each system we solve the respective Milne-Pinney equation and discuss how the uncertainty product evolves with time.
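
For reference, the Milne-Pinney (Ermakov) equation mentioned above reads, in its standard unit-mass form, $\ddot{\rho}(t)+\omega^{2}(t)\rho(t)=\rho^{-3}(t)$; the thesis versions presumably carry the time-dependent mass through the unitary transformation.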
20

Segura, Lorena. "Consideraciones epistemológicas sobre algunos ítems de los fundamentos de las matemáticas". Doctoral thesis, Universidad de Alicante, 2018. http://hdl.handle.net/10045/80507.

Abstract:
Taking as its starting point the revision of the foundations of mathematics carried out during the nineteenth century, this study focuses on one of the most important mathematical concepts: infinity. The importance of this concept in the advancement of mathematics is undeniable, and it is easy to find mathematical examples in which it intervenes (the definitions of limit, derivative and Riemann integral, among others). Because some of the paradoxes and contradictions caused by the lack of rigour in mathematics are related to this concept, the study begins with an epistemological examination of the mathematical concept of infinity, reviewing the bipolarity displayed by certain semantic concepts that are defined inseparably and jointly, constituting a single concept, as if they represented the poles of a magnet. This examination concludes that bipolarity reveals that a conceptual logic capable of accommodating the understanding of negation must be a dialectical logic, that is, one that admits some contradictions as true. In the case of the mathematical concept of the finite-infinite, we again encounter a logical bipolarity. For all these reasons, a non-Cantorian theory of potential and actual infinity is presented, based on the linguistic imprecision of the concept of infinity and using the concept of the homogonous set, formed by a convergent sequence and its limit, previously introduced by Leibniz, which makes it possible to unite the two poles of the concept of infinity in a single set. This new set theory makes it possible to present, in homogonic language, some of the fundamental concepts of analysis, such as the differential and the integral, as well as some applications to optics and quantum mechanics. Subsequently, the logical category of qualitative opposition is presented through examples from various areas of science, and the passage from Aristotelian (analytic) logic to synthetic logic, which includes the neutral as part of the qualitative opposition, is defined through three basic rules. Applying these rules to the qualitative opposition and, in particular, to its neutral shows that synthetic logic allows some contradictions to be true. This synthetic logic is dialectical and many-valued, and assigns each proposition a truth value in the interval [0,1] that coincides with the squared modulus of a complex number. This marks a notable departure from Aristotelian (analytic) logic, which assigns real truth values, and even from fuzzy logic which, despite being many-valued, assigns real truth values in the interval [0,1]. In this dialectical logic, the contradictions of the neutral of an opposition can be true. Finally, the application of dialectical logic to quantum mechanics is considered, whose character is non-deterministic and in which contradictory situations can be found owing to wave-particle duality. To this end, an isomorphism is established between dialectical logic and probability theory, to which the concept of fortuity is added, precisely to reflect the non-deterministic character.
21

Brito, Margarida. "Encadrement presque sûr des statistiques d'ordre". Paris 6, 1987. http://www.theses.fr/1987PA066284.

Abstract:
Let k(n) be a non-decreasing sequence of positive integers. Under certain hypotheses on the sequence k(n), we determine almost surely optimal upper and lower bounds for the k(n)-th order statistic of a sample of size n. We first address the case where k(n) is at most log(log(n)). Using approximations of the tails of the binomial distribution, obtained from the usual techniques of large deviations theory, we first determine sequences that optimally bound the k(n)-th order statistic of a uniform sample from above and below. We then apply the results obtained to the probability distributions in current use.
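
The link between order statistics and binomial tails exploited here is the classical identity: if $U_{(k)}$ denotes the $k$-th order statistic of $n$ independent Uniform(0,1) variables, then $P(U_{(k)}\le x)=P(\mathrm{Bin}(n,x)\ge k)=\sum_{j=k}^{n}\binom{n}{j}x^{j}(1-x)^{n-j}$ for $0\le x\le 1$, so almost-sure bounds for $U_{(k(n))}$ follow from sharp estimates of binomial tail probabilities.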
22

Guimarães, Milena de Oliveira. "As medidas coercitivas aplicadas à execução de entregar coisa e de pagar quantia". Pontifícia Universidade Católica de São Paulo, 2010. https://tede2.pucsp.br/handle/handle/8986.

Abstract:
The present study aims to suggest solutions, compatible with the civil procedural system, for the effectiveness of enforcement, particularly regarding the problem of disobedience of judicial orders. Enforcement requires coercive measures, such as civil imprisonment or fines, to compel the contemnor to comply with the order contained in the decision. In this line, the intention was to bring the obligations to deliver a thing and to pay a sum of money closer together, placing both under the imperative protection of the court order. The contempt of court institute was discussed, a typical institute of the common law system whose aim is to assure the dignity of justice by imposing coercive and punitive measures. After comparing the two systems, civil law and common law, civil contempt was emphasized: a coercive procedure aimed at forcing the recalcitrant debtor to comply with the judicial order. Effective enforcement depends on respect for the administration of justice as a corollary of due process of law.
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Marques, Camila Salgueiro da Purificação. "A execução provisória por quantia certa contra devedor solvente no Código de Processo Civil Brasileiro". Pontifícia Universidade Católica de São Paulo, 2014. https://tede2.pucsp.br/handle/handle/6515.

Texto completo da fonte
Resumo:
The present study focuses on Brazilian civil procedural law, specifically the institute of provisional execution of court rulings, authorized by the Code of Civil Procedure when appeals are received only with non-staying (devolutive) effect, under the procedure regulated by article 475-O of the Code of Civil Procedure. It thus covers provisional execution for a fixed sum against a solvent debtor under the Brazilian Code of Civil Procedure. The study is justified by the need to give effect to court rulings, mainly those delivered by first-instance judges, so that the prevailing plaintiff need not wait for the judgment of the appeal brought by the opposing party before carrying out execution. The technique of indirect documentary research was used, and the method of approach is logical-deductive. The research addresses the following items: the effectiveness of court rulings, mainly in the context of execution, as well as judgments, their respective chapters and efficacy; the moment at which decisions take effect, covering the provisional executive title, provisional execution and advance judicial protection, and the execution of astreintes; the concept of provisional execution, the situations that give rise to it, and the criteria distinguishing it from definitive execution, specifically the bond and the liability of the execution creditor; the procedure of provisional execution; and other questions considered relevant even though outside the focus of the study, such as provisional execution against the Public Treasury, provisional execution of fees and procedural costs, and specific provisional execution. The investigation shows the necessity and urgency of giving effect to court rulings, and the theme should be studied continuously.
The present study focuses on Brazilian civil procedural law, specifically the institute of provisional execution of judicial decisions, authorized by the Code of Civil Procedure in the cases of appeals received only with devolutive effect, under the procedure regulated by article 475-O of the Code of Civil Procedure. It thus encompasses provisional execution for a fixed sum against a solvent debtor in the Brazilian Code of Civil Procedure. The research is justified by the need to give effect to judicial decisions, especially those delivered by first-instance judges, so as to prevent the plaintiff who prevailed in the claim from having to await the judgment of the appeal filed by the opposing party before carrying out execution. The technique of indirect documentary research was used, and the method of approach is logical-deductive. The research addressed the following items: the effectiveness of judicial decisions, especially in the context of execution, as well as judgments, their respective chapters and efficacy; the moment at which decisions take effect, covering the provisional executive title, provisional execution and advance judicial protection, and the execution of astreintes; the concept of provisional execution, the situations that give rise to it and the criteria distinguishing it from definitive execution, specifically the bond and the liability of the execution creditor; the procedure of provisional execution; and other questions considered relevant even though they do not form the focus of the work, such as provisional execution against the Public Treasury, provisional execution of fees and procedural costs, and specific provisional execution. The investigation shows the necessity and urgency of giving effect to judicial decisions, and the theme should be continuously studied.
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Tricard, Julien. "Les quantités dans la nature : les conditions ontologiques de l’applicabilité des mathématiques". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUL132.

Texto completo da fonte
Resumo:
If our physical theories can describe the most general features of reality, we also know that, to do so, they use the language of mathematics. One may then legitimately ask whether our ability to describe, if not the intimate nature of physical objects and phenomena, at least the relations and structures they instantiate, does not come from this application of mathematics. In this thesis, we argue that mathematics is so effectively applicable in physics simply because the reality described by physicists is quantitative in nature. To this end, we first propose an ontology of quantities, and then of the laws of nature, situated within contemporary debates on the nature of properties (theory of universals, trope theory, or nominalism) and of laws (regularities, or relations between universals). We then examine two kinds of application of mathematics: the mathematization of phenomena through measurement, and the mathematical formulation of equations relating physical quantities. We show that properties and laws must be as our ontology describes them for mathematics to be legitimately, and so effectively, applicable. The interest of this work lies in articulating purely ontological discussions (some very old, such as the quarrel over universals) with rigorous epistemological requirements stemming from current physics. This articulation is conceived in a transcendental manner, since the quantitative nature of reality (of properties and laws) is defended as a condition of the applicability of mathematics to physics.
Assuming that our best physical theories succeed in describing the most general features of reality, one can only be struck by the effectiveness of mathematics in physics, and wonder whether our ability to describe, if not the very nature of physical entities, at least their relations and the fundamental structures they enter into, does not result from applying mathematics. In this dissertation, we claim that mathematical theories are so effectively applicable in physics simply because physical reality is quantitative in nature. We begin by setting out and supporting an ontology of quantities and laws of nature, in the context of current philosophical debates on the nature of properties (universals, classes of tropes, or even nominalistic resemblance classes) and of laws (as mere regularities or as relations among universals). We then consider two main ways mathematics is applied: first, the way measurement mathematizes physical phenomena; second, the way mathematical concepts are used to formulate equations linking physical quantities. Our reasoning ultimately has a transcendental flavor: properties and laws of nature must be as described by the ontology we first support with purely a priori arguments, if mathematical theories are to be legitimately and so effectively applied in measurements and equations. What makes this work valuable is its attempt to link purely ontological (and often very ancient) discussions with the rigorous epistemological requirements of modern and contemporary physics. The quantitative nature of being (properties and laws) is thus supported on a transcendental basis: as a necessary condition for mathematics to be legitimately applicable in physics.
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Thomaz, Osvir Guimarães. "A tutela jurisdicional efetiva nas ações de execução por quantia certa em face da Fazenda Pública". Universidade Católica de Pernambuco, 2010. http://www.unicap.br/tede//tde_busca/arquivo.php?codArquivo=508.

Texto completo da fonte
Resumo:
The dissertation proposes to analyze the realization of procedural effectiveness in actions for execution of a fixed sum against the Public Treasury. The topic is relevant since effectiveness has become the watchword of the renewal wave in civil procedure. It is of fundamental importance given that the Public Treasury is the largest defendant in court at all levels, whether the Union, the States, the Federal District, the Municipalities or their agencies. Accordingly, the prerogatives guaranteed to the Public Treasury must be confronted as instruments of, or obstacles to, the effectiveness of the process protected by the Federal Constitution. Among these prerogatives, the Constitution established that payments owed by the Federal, State or Municipal Treasury by virtue of a judicial decision shall be made exclusively in the order of presentation of the precatórios (judicial payment orders). What is this typically Brazilian figure called the precatório? What are the impacts of this institute on the effectiveness of the process? Is it constitutional when the Constitution is analyzed systemically? Under the principles of the unity of the Constitution and of maximum effectiveness, are Constitutional Amendments nos. 30/2000 and 62/2009 admissible, given that they established a true moratorium on debts already recognized and already waiting in line? Since the Constitution has been facing a true affront through reforms proposed via constitutional amendment by the derived constituent power, it is necessary to reflect on the constitutionality or otherwise of these amendments, on pain of violating a fundamental right guaranteed by the Constitution. Without a profound paradigm shift, the effectiveness of the process against the Public Treasury may become a true utopia, leaving all litigants against the Public Treasury with the uncomfortable feeling of injustice, and the Judiciary unable to serve as an instrument of social pacification.
The thesis proposes to analyze the effectiveness of civil procedure in attachment and garnishment actions for debts of the Public Administration. The issue is relevant because procedural effectiveness has become the new trend in civil procedure, and because the Public Administration is the major defendant at all levels of the Judiciary, be it the Federal Government, the States, the Federal District, the Municipalities or their Agencies. It is therefore necessary to analyze the privileges granted to the Public Administration as instruments for, or impediments against, the effectiveness of process, as provided by the Federal Constitution. One of the Public Administration's privileges established by the Constitution is that payments owed by the Federal, State and Municipal Governments under any judicial decision are to be made exclusively in the order in which Judicial Awards for Payment (precatórios) are filed. What is this typically Brazilian institute called the Judicial Award for Payment by the Public Administration? What is its impact on procedural effectiveness? Would such an institute be considered constitutional under a systematic interpretation of the Constitution? Considering the principles of the unity of the Constitution and of maximum effectiveness, are Amendments nos. 30/2000 and 62/2009, which established an actual moratorium on public debts already awarded and waiting in line, legally adequate? While the Constitution faces a serious challenge from the constitutional amendments proposed by Congress, an examination of their constitutionality is called for; otherwise, provisions of fundamental rights granted by the Constitution may be disregarded. And without a profound paradigm shift, procedural effectiveness against the Public Administration may become a utopia, leaving all litigants against the Public Administration with the uncomfortable feeling of injustice. In such a case, the Judiciary will not be able to work as an instrument of social pacification.
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Isolan, Ilaria. "Environmental economics models for efficient and sustainable logistics systems". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3427294.

Texto completo da fonte
Resumo:
The Intergovernmental Panel on Climate Change (IPCC) reports that global warming poses a grave threat to the world's ecological system and the human race. This phenomenon is very likely caused by increasing concentrations of carbon emissions, which mainly result from such human activities as fossil fuel burning and deforestation (IPCC, 2007). Powerful action involving many countries with a common objective is required to stabilize rising temperatures; as asserted by Stavins (2008), without an effective global climate agreement no result will be accomplished. In order to mitigate global warming, the United Nations (UN), the European Union (EU) and many countries have introduced policies and mechanisms to contain the total amount of greenhouse gas emissions. Among these, one of the primary pieces of legislation is the European Union Emission Trading System (EU-ETS). Other nations, on the contrary, still consider efforts to mitigate global warming as obstacles to economic growth. Without a comprehensive engagement, some actors are therefore advantaged and more competitive in the global economy, while others, engaged in emission-saving policies, face heavier investments and restrictions, at the risk of suffering economic disadvantages. Since the emissions released into the air by companies' operational activities are one of the main causes of global climate change (He et al., 2015), businesses are becoming increasingly conscious of their carbon footprint and have begun to incorporate environmental thinking into their business strategy and supply chain management. In order to help managers steer companies towards sustainable and efficient purchasing decisions, in this research work the Sustainable Economic Order Quantity (S-EOQ) Model introduced by Battini et al. (2014) is improved by developing a bi-objective lot-sizing model with two different objective functions to minimize (costs and emissions) and by integrating the Cap and Trade regulatory policy (characteristic of the EU-ETS). This S-EOQ model is useful in practice to support managers in understanding the shape of the Pareto frontier linked to a specific purchasing problem, defining the cost-optimal and emission-optimal solutions, and identifying a sustainable quantity to purchase when a Cap and Trade mitigation policy is present. The behavior of the model is analyzed with respect to variations in the market carbon price, and it is analytically demonstrated that today's carbon prices are still far too low to motivate managers towards sustainable purchasing choices. Moreover, two innovative bi-objective Sustainable Joint Economic Lot Size (S-JELS) Models under a Cap and Trade policy are introduced (applying the Cap and Trade regulation either to the buyer only or to both buyer and supplier), in order to consider the costs and emissions of a two-echelon supply chain, not only those of the buyer. By considering two different objective functions to minimize (costs and emissions), economic and sustainability issues are equally weighted and integrated in the context of a supply chain. In this way, the models lead decision makers to more sustainable and efficient logistics and purchasing solutions from a supply chain point of view.

With the purpose of helping companies analyze the trade-offs among different supply options, the S-JELS models can be run iteratively for many sourcing options, in order to build the Pareto frontiers for each supplier and then compare the frontier shapes, the cost-optimal solutions and the emission-optimal ones. One of the two S-JELS models presented (the one with the Cap and Trade regulation applied only to the buyer) is then integrated into a procedure for carrying out a Sustainable Supplier Selection. The objective is to provide managers with numerical KPIs and user-friendly graphs, helping them analyze the trade-offs among different supply options and evaluate the selection criteria for each potential supplier in an easier, faster, more analytical and correct way. In the end, a case study from the manufacturing industry is presented. The objective is to help managers carry out a Sustainable Supplier Selection between a domestic and a Far East sourcing option, by applying the S-JELS model integrated into an AHP supplier-selection procedure. The model is exploited to provide the Decision Makers (DMs) with the tools for selecting the best sourcing option for their company. By iterating the solution process, the DMs can obtain and compare different Pareto frontiers, weighing trade-offs before taking a purchasing-strategy decision.
According to the Intergovernmental Panel on Climate Change (IPCC), global warming poses a grave threat to the world's ecological system and therefore to humanity. This phenomenon is largely caused by the increase in CO2 emissions, deriving mainly from human activities such as fossil fuel combustion and deforestation (IPCC, 2007). Decisive action is therefore needed to stabilize rising temperatures, involving many countries in the pursuit of a common objective; as argued by Stavins (2008), without an effective global agreement no result can be achieved. In order to mitigate global warming, the United Nations (UN), the European Union (EU) and many other countries have introduced policies and mechanisms to contain the total amount of greenhouse gas emissions. Among these, one of the most relevant regulations is the European Union Emission Trading System (EU-ETS). Other nations, on the contrary, regard efforts to mitigate global warming as an obstacle to their economic growth, and are thus advantaged and more competitive compared with the countries engaged in emission-reduction policies. Since the emissions released into the air by companies' operational activities are one of the main causes of global climate change (He et al., 2015), firms are becoming aware of their environmental impact and are beginning to follow a more sustainable philosophy both at the level of business strategy and of supply chain management. In this research work, the Sustainable Economic Order Quantity (S-EOQ) Model introduced by Battini et al. (2014) is refined in order to help managers steer companies towards sustainable and efficient purchasing decisions. A lot-sizing model is developed with two different objective functions to be minimized (costs and emissions), and the Cap and Trade regulatory policy characteristic of the EU-ETS is integrated. This S-EOQ model is useful for several reasons: understanding the shape of the Pareto frontier associated with a specific purchasing problem; defining the optimal solutions in terms of costs and of emissions; and identifying a sustainable purchase quantity when a Cap and Trade emissions policy is in force. The behavior of the model is analyzed in relation to variations in the carbon price, demonstrating analytically that current prices are still too low to motivate managers towards sustainable purchasing choices. Furthermore, two Sustainable Joint Economic Lot Size (S-JELS) Models under a Cap and Trade policy are introduced (applying the regulation either to the buyer only or to both buyer and supplier), so as to consider the costs and emissions of a supply chain, not only of the buyer. By considering two different objective functions to be minimized (costs and emissions), economic and sustainability concerns are given equal weight and integrated in the context of a supply chain. In this way, the models support managers in making more sustainable and efficient logistics and purchasing decisions from a supply chain point of view.

With the aim of helping companies analyze the trade-offs between different supply options, the S-JELS models can be run iteratively for various sourcing options, in order to build the Pareto frontiers for each supplier and then compare the frontier shapes and the cost-optimal and emission-optimal solutions. One of the two S-JELS models presented (the one in which the Cap and Trade policy applies only to the buyer) is further integrated into a procedure for carrying out a Sustainable Supplier Selection. The objective is to provide decision makers with numerical KPIs and user-friendly graphs, helping them analyze the trade-offs between supply options and evaluate the selection criteria for each potential supplier in a simpler, faster, more analytical and correct way. Finally, a case study from the manufacturing sector is presented. The objective is to help managers conduct a Sustainable Supplier Selection between a domestic supplier and one located in the Far East, applying the S-JELS model integrated into an AHP supplier-selection procedure. The model is thus used to provide the Decision Makers (DMs) with the tools for selecting the best sourcing option for their company. By iterating the model, the DMs can obtain and compare different Pareto frontiers, weighing the trade-offs before taking a purchasing-strategy decision.
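To make the trade-off concrete, the following Python sketch (not the thesis's calibrated model) sweeps the order quantity across EOQ-style cost and emission curves and applies a Cap and Trade penalty; every parameter value, including the allowance and the carbon prices, is hypothetical.

```python
import numpy as np

# Illustrative bi-objective EOQ with a Cap and Trade penalty (hypothetical
# parameters, not the thesis's calibrated model).
D = 12_000                 # annual demand (units)
A_c, A_e = 150.0, 40.0     # fixed cost ($) and fixed emissions (kg CO2) per order
h_c, h_e = 2.0, 0.5        # holding cost ($) and emissions (kg CO2) per unit-year
cap = 500.0                # emission allowance (kg CO2/year), hypothetical

def cost(Q):       # classic EOQ annual cost curve
    return A_c * D / Q + h_c * Q / 2

def emissions(Q):  # emission analogue of the cost curve
    return A_e * D / Q + h_e * Q / 2

Q = np.linspace(200, 6000, 2000)
Q_cost = np.sqrt(2 * A_c * D / h_c)   # cost-optimal lot size
Q_emis = np.sqrt(2 * A_e * D / h_e)   # emission-optimal lot size
# Pareto-efficient lot sizes lie between Q_cost and Q_emis; sweeping Q traces
# the cost/emission trade-off. Under Cap and Trade, emissions in excess of the
# allowance are bought at the market carbon price:
for carbon_price in (25.0, 2000.0):   # $/ton CO2: roughly today's level vs a high price
    total = cost(Q) + (carbon_price / 1000.0) * np.maximum(emissions(Q) - cap, 0.0)
    print(carbon_price, round(Q_cost), round(Q_emis), round(Q[np.argmin(total)]))
# At a low carbon price the penalized optimum barely moves away from Q_cost,
# echoing the thesis's point that current prices are too weak to change choices.
```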
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Braga, Altemir da Silva. "Extensions of the normal distribution using the odd log-logistic family: theory and applications". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-02102017-092313/.

Texto completo da fonte
Resumo:
In this study we propose three new distributions and a study with longitudinal data. The first is the odd log-logistic normal distribution: theory and applications in the analysis of experiments; the second is the odd log-logistic Student-t distribution: theory and applications; the third is the odd log-logistic skew-normal: a new skew-bimodal distribution with applications in the analysis of experiments; and the fourth is a regression model with random effects for the odd log-logistic skew-normal distribution, applied to longitudinal data. Several properties are derived, such as symmetry, the quantile function, some expansions, ordinary incomplete moments, mean deviations and the moment generating function. Estimation of the model parameters is approached by the method of maximum likelihood. In the applications, regression models are fitted to data from completely randomized designs (CRD) or randomized complete block designs (RCBD). The models can thus be used in practical situations arising from completely randomized or randomized block designs, mainly when there is evidence of asymmetry, kurtosis and bimodality.
The normal distribution is one of the most important in statistics. However, it is not adequate for fitting data that exhibit asymmetry or bimodality, since this distribution has only its first two moments different from zero, namely the mean and the standard deviation. For this reason, many studies aim to create new families of distributions capable of modeling the asymmetry, the kurtosis or the bimodality of the data. In this sense, it is important that these new distributions have good mathematical properties and also include the normal distribution as a submodel. Yet few classes of distributions include the normal distribution as a nested model; among the existing proposals, the skew-normal, the beta-normal, the Kumaraswamy-normal and the gamma-normal stand out. In 2013, the new odd log-logistic-G family of distributions was proposed with the aim of generating new probability distributions. Thus, using the normal and skew-normal distributions as baseline functions, three new distributions were proposed, together with a fourth study on longitudinal data. The first was the odd log-logistic normal distribution: theory and applications to data from experimental trials; the second was the odd log-logistic Student-t distribution: theory and applications; the third was the odd log-logistic skew-bimodal distribution with applications to data from experimental trials; and the fourth study was the random-effects regression model for the odd log-logistic skew-bimodal distribution: an application to longitudinal data. These distributions exhibit good properties such as asymmetry, kurtosis and bimodality. Several properties were derived, such as symmetry, the quantile function, some expansions, the ordinary incomplete moments, mean deviations and the moment generating function. The flexibility of the new distributions was compared with the skew-normal, beta-normal, Kumaraswamy-normal and gamma-normal models. Parameter estimates were obtained by the method of maximum likelihood. In the applications, regression models were used for data from completely randomized designs (CRD) or randomized block designs (RBD). In addition, simulation studies were carried out for the new models to verify the asymptotic properties of the parameter estimates. Quantile residuals and sensitivity analysis were proposed to check for extreme values and goodness of fit. The new models are therefore grounded in mathematical properties, computational simulation studies and applications to data from experimental designs. They can be used in completely randomized or randomized block trials, mainly with data showing evidence of asymmetry, kurtosis and bimodality.
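The odd log-logistic-G construction underlying these models is commonly written F(x) = G(x)^a / (G(x)^a + (1 − G(x))^a), where G is the baseline distribution function. Under that assumption, a minimal Python sketch of the odd log-logistic normal CDF and density:

```python
import numpy as np
from scipy.stats import norm

def oll_normal_cdf(x, a, mu=0.0, sigma=1.0):
    """CDF of the odd log-logistic normal family: the baseline normal CDF G
    composed with the odd log-logistic generator G^a / (G^a + (1-G)^a)."""
    G = norm.cdf(x, loc=mu, scale=sigma)
    return G**a / (G**a + (1.0 - G)**a)

def oll_normal_pdf(x, a, mu=0.0, sigma=1.0):
    """Density obtained by differentiating the generator: a != 1 distorts the
    normal baseline, and small a can produce bimodal shapes."""
    G = norm.cdf(x, loc=mu, scale=sigma)
    g = norm.pdf(x, loc=mu, scale=sigma)
    return a * g * (G * (1.0 - G))**(a - 1.0) / (G**a + (1.0 - G)**a)**2

x = np.linspace(-4, 4, 9)
print(oll_normal_cdf(x, a=0.3))
print(oll_normal_pdf(x, a=0.3))
```

Setting a = 1 recovers the plain normal model, consistent with the requirement above that the normal distribution be a submodel of the family.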
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Huarca, Guevara Kevin Paolo, e Ricalde Harless Hanset Ninahuanca. "Análisis correlacional entre los costos evitables en operaciones de importación marítima y nivel de servicio: el caso de una empresa". Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2018. http://hdl.handle.net/10757/626038.

Texto completo da fonte
Resumo:
This thesis aims to identify whether there is any correlation between avoidable costs in maritime import operations and the service level provided, showing results of the implementation carried out at one of the main companies in the import and distribution of professional gastronomic equipment in the city of Lima. After carrying out the implementation approved for 2017, avoidable costs in maritime import operations were reduced by 13.85% with respect to the previous period. It was shown that avoidable costs (r = 0.75) have a high and positive relationship with the service level; that is, as they decrease and the company stops absorbing them, its service level can improve. The presentation is structured in five chapters. The first chapter, the theoretical framework, details concepts related to avoidable costs in maritime import operations, service level, optimal order quantity, safety stock, reorder point and total inventory management costs; it also presents information on the company where the improvements were implemented. The second chapter explains the research plan, setting out the problem, the formulation of the hypothesis, and the general and specific objectives. The third chapter covers the research methodology, determining the approach and design, the population, the definition of variables and the data collection. In the fourth chapter, the research is developed by calculating avoidable costs at three service levels; two scenarios (with and without avoidable costs) are established for the sub-variables service level, optimal order quantity, safety stock, reorder point and total inventory management costs. Finally, the fifth chapter presents the analysis of results, answering whether there is a correlation between avoidable costs in maritime import operations and the service level provided, and offers conclusions and recommendations on the research carried out.
The objective of this thesis is to identify whether there is any correlation between avoidable costs in maritime import operations and the level of service provided, showing results of the implementation carried out in one of the main companies in the import and distribution of professional gastronomic equipment in the city of Lima. After carrying out the implementation approved for 2017, avoidable costs in maritime import operations were reduced by 13.85% compared with the previous period. It was shown that avoidable costs (r = 0.75) have a high and positive relationship with the service level; that is, as they decrease and the company stops absorbing them, its level of service can improve. The presentation has a five-chapter structure. The first chapter, the theoretical framework, details concepts related to avoidable costs in maritime import operations, service level, optimal order quantity, safety stock, reorder point and total inventory management costs; in addition, it provides information about the company where the improvements were implemented. The second chapter explains the research plan, presenting the problem, the formulation of the hypothesis, and the general and specific objectives. The third chapter discusses the research methodology, determining the approach and design, the population, the definition of variables and the data collection. In the fourth chapter, the research is developed by calculating avoidable costs at three service levels; two scenarios (with and without avoidable costs) are established for the sub-variables service level, optimal order quantity, safety stock, reorder point and total inventory management costs. Finally, the fifth chapter presents the analysis of results, answering whether there is a correlation between avoidable costs in maritime import operations and the service level provided, together with conclusions and recommendations on the research developed.
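The inventory quantities named in the abstract follow standard textbook formulas. A minimal Python sketch, with purely hypothetical demand and cost figures rather than the company's data:

```python
import math
from scipy.stats import norm

# Hypothetical inputs, not the company's data.
D = 4_800                      # annual demand (units)
S = 90.0                       # ordering cost per order ($)
H = 6.5                        # holding cost per unit per year ($)
mu_d, sigma_d = 400.0, 60.0    # monthly demand: mean and standard deviation
L = 1.5                        # lead time in months

def eoq(D, S, H):
    """Classic economic order quantity."""
    return math.sqrt(2 * D * S / H)

def safety_stock(service_level, sigma_d, L):
    """Safety stock = z * sigma_demand * sqrt(lead time)."""
    z = norm.ppf(service_level)
    return z * sigma_d * math.sqrt(L)

def reorder_point(mu_d, L, ss):
    """Reorder point = expected lead-time demand + safety stock."""
    return mu_d * L + ss

for sl in (0.90, 0.95, 0.99):  # three service levels, as analysed in the thesis
    ss = safety_stock(sl, sigma_d, L)
    print(sl, round(eoq(D, S, H)), round(ss), round(reorder_point(mu_d, L, ss)))
```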
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Adolfsson, Rasmus, e André Hannercrantz. "Optimering av orderstorlek ur ett kostnads- och produktivitetsperspektiv : en kvantitativ fallstudie på Zoégas i Helsingborg". Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74569.

Texto completo da fonte
Resumo:
The purpose of the study is to investigate optimal order quantities at the coffee producer Zoégas' packaging lines in Helsingborg. The background is that the company is unaware of how current order quantities affect production with respect to productivity and to setup and inventory holding costs. The case company also has a long-term ambition to introduce pull planning, which presupposes knowledge of the packaging lines' conditions. The study is mainly divided into two distinct parts: one comparing historical productivity in relation to order quantity, and one computing optimal order quantities using the scheduling tool known as the Economic Lot Scheduling Problem (ELSP). One packaging line reduces to a classic case of the basic period approach, which has been treated frequently in the ELSP literature. The other lines have sequence-dependent setup times, which complicates the approach. The bulk of the data collection was made possible through access to data from the company's production and business systems. The ELSP results generate order quantities and production schedules for all packaging lines, optimized for cost minimization. The study also found statistically significant relationships between order quantity and productivity at Zoégas, relationships that yielded limits for the most productive order quantities; the majority of the ELSP quantities fell within these limits. Finally, the study found that by implementing the study's improvement proposals the company can increase its product turnover and thus be better prepared for the introduction of pull planning.
The purpose of this thesis is to investigate optimal order quantities at the coffee producer Zoéga's packaging lines in Helsingborg. The company is currently unaware of how order quantities affect its production with regard to productivity as well as setup and holding costs. With a long-term vision of incorporating pull planning, the case company also needs to evaluate the capabilities of its current production system. The case study mainly addresses two areas: one comparing historical productivity in relation to order quantity, and one determining optimal order quantities with the scheduling tool known as the Economic Lot Scheduling Problem (ELSP). One of the packaging lines results in a classic case of the basic period approach, a problem frequently reviewed in the ELSP literature. The other lines have sequence-dependent setup times, which required a more complex model. The data were collected primarily from internal production and management systems. The ELSP results generated optimal order quantities and production schedules for all packaging lines. The study also found statistically significant correlations between order quantity and productivity for Zoéga's. These correlations yielded upper and lower limits for the most productive order quantities, and most of the ELSP quantities fell inside these limits. Finally, the study shows that by implementing these suggestions Zoéga's could speed up its product turnover and be better prepared for a future pull planning implementation.
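The thesis's models are more elaborate (a basic period approach on one line, sequence-dependent setups on the others), but the flavor of the ELSP can be shown with the simpler common-cycle solution. A Python sketch with made-up data:

```python
import math

# Hypothetical products on one packaging line (not Zoéga's data):
# d = demand rate, p = production rate (units/year), A = setup cost ($),
# h = holding cost per unit-year ($), s = setup time (years).
items = [
    dict(d=1200, p=8000, A=250.0, h=1.8, s=0.0005),
    dict(d=900,  p=8000, A=180.0, h=2.1, s=0.0004),
    dict(d=600,  p=6000, A=300.0, h=1.5, s=0.0006),
]

# Common-cycle ELSP: every item is produced exactly once per cycle of length T.
num = 2 * sum(it["A"] for it in items)
den = sum(it["h"] * it["d"] * (1 - it["d"] / it["p"]) for it in items)
T = math.sqrt(num / den)

# Feasibility: total setup plus production time must fit into each cycle,
# i.e. T >= sum(s_i) / (1 - sum(d_i / p_i)).
rho = sum(it["d"] / it["p"] for it in items)
T = max(T, sum(it["s"] for it in items) / (1 - rho))

for it in items:
    print(f"lot size Q = {it['d'] * T:.0f} units, cycle T = {T:.3f} years")
```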
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Nishimura, Shin Pinto. "A precarização do trabalho docente como necessidade do capital : um estudo sobre o REUNI na UFRGS". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/61741.

Texto completo da fonte
Resumo:
This dissertation presents a study on the precarization of teaching work in public higher education in light of the implementation of the Programme of Support for Restructuring and Expansion Plans of Federal Universities (REUNI, Decree 6,096/07). The choice to research the precarization of teaching work is due to the key position that education workers hold in the system, since they are directly linked to training, including the training of the workforce. The relation between the expansion of places at federal universities and its implications for teaching work is analyzed at the Universidade Federal do Rio Grande do Sul (UFRGS). Specifically, we analyze the Escola de Educação Física (ESEF) and the Faculdade de Educação (FACED), which were, respectively, favorable and opposed to the expansion proposal, the former having implemented new courses. As the method that helps us analyze reality, we take our reference from historical-dialectical materialism, which seems to us the most advanced for considering the movement of phenomena in a historically determined context. The articulating methodology of the research is the case study, aimed at deepening the constitutive elements of this particular case, once the possibility was identified that it may be representative of a set of cases. The central instrument used to obtain the data was documentary analysis, supported by Olinda Evangelista's formulations: locating, selecting, reading, rereading, systematizing and analyzing the evidence contained in the documents. In the analysis, I offer indications of the precarization and intensification of teaching work in the units mentioned, based on data obtained from evaluation reports and enrolments by department. Finally, I problematize the relation between quantity and quality in the working conditions of the teachers of these units.
The present dissertation presents a study on the precarization of teaching work in public higher education in the light of the implementation of the Programme of Support for Restructuring Plans and Expansion of Federal Universities (REUNI, Decreto 6,096/07). The option to research the precarization of the teaching job is due to the key position that education workers have in the system, since they are directly related to training, including the training of the workforce. The connection between the expansion of vacancies in federal universities and the implications for teaching work is examined at the Universidade Federal do Rio Grande do Sul (UFRGS). In particular, the Escola de Educação Física (ESEF) and the Faculdade de Educação (FACED) are analyzed, because they were, respectively, favorable and opposed to the expansion proposal, with the former having implemented new courses. As a method that helps one understand reality, we seek a reference in historical-dialectical materialism, as this seems the most advanced by considering the motion of phenomena in a historically determined context. The articulating methodology of the research is the case study, having as a goal to deepen the constituent elements of this particular case, once the possibility was identified of it being representative of a set of cases. The central instrument used to obtain the data was documentary analysis, supported by the formulations of Olinda Evangelista: locating, selecting, reading, rereading, systematizing and analyzing the evidence present in the documents. In the analysis I offer indications about the precarization and intensification of teaching work in the units mentioned, from data obtained from the evaluation reports and enrolments by department. Finally, I problematize the relation between quantity and quality in the working conditions of the teachers of these units.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Tronko, Natalia. "Hamiltonian Perturbation Methods for Magnetically Confined Fusion Plasmas". Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX22088/document.

Texto completo da fonte
Resumo:
Self-consistent effects are omnipresent in fusion plasmas. They arise because the Maxwell equations that describe the evolution of the electromagnetic fields contain the charge and current densities of the particles, while in turn the particle trajectories are influenced by the fields through the equations of motion (or the Vlasov equation). The result of this self-consistent interaction is a cumulative effect that can cause plasma deconfinement inside a fusion machine. This thesis deals with problems related to improving the confinement of fusion plasmas, within Hamiltonian and Lagrangian approaches, through the control of turbulent transport and the creation of transport barriers. Self-consistent fluctuations of the electromagnetic fields and of the particle densities are at the origin of plasma instabilities, which are in turn linked to transport phenomena. With a view to understanding the mechanisms of the underlying turbulence, we consider here the application of Hamiltonian methods to collisionless plasmas.
This thesis deals with the dynamical investigation of magnetically confined fusion plasmas using Lagrangian and Hamiltonian formalisms. It consists of three parts. The first part is devoted to the investigation of barrier formation for the E×B drift model by means of the Hamiltonian control method. The strong magnetic field approach is relevant for magnetically confined fusion plasmas; this is why, in a first approximation, one can consider the dynamics of particles driven by a constant and uniform magnetic field. In this case only electrostatic turbulence is taken into account. In this study an expression for the control term (quadratic in the perturbation amplitude), additive to the electrostatic potential, is obtained, and the efficiency of such control for stopping turbulent diffusion is shown analytically and numerically. The second and third parts of the thesis are devoted to the study of self-consistent phenomena in magnetized plasmas through the Maxwell-Vlasov model. In particular, the second part treats the problem of momentum transport by deriving its conservation law. The Euler-Poincaré variational principle (with constrained variations) as well as Noether's theorem is applied here. This derivation is carried out in two cases: first, in the electromagnetic turbulence case for the full Maxwell-Vlasov system, and then in the electrostatic turbulence case for the gyrokinetic Maxwell-Vlasov system. The intrinsic mechanisms responsible for intrinsic plasma rotation, which can play an important role in plasma stabilization, are then identified. The last part of the thesis deals with dynamical reduction for the Maxwell-Vlasov model. More particularly, an intrinsic formulation of the guiding-center model is derived. Here the term 'intrinsic' means that no fixed frame is used in its construction, so that no problem related to the gyrogauge dependence of the dynamics appears. The study of trapped-particle orbits is considered as one possible illustration of the first step of such a dynamical reduction.
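The E×B drift model mentioned above is Hamiltonian: the electrostatic potential plays the role of the Hamiltonian and the two guiding-center coordinates are conjugate variables. A small Python sketch integrating those drift equations for a toy two-wave potential (the uncontrolled case; the thesis's control term is an additional potential built to suppress the resulting diffusion):

```python
import numpy as np

# Toy electrostatic potential phi(x, y, t): a few drift waves with hypothetical
# amplitudes, not the turbulence spectrum used in the thesis.
def phi(x, y, t, eps=0.8):
    return eps * (np.cos(x - t) * np.sin(y) + 0.5 * np.cos(2 * x + t) * np.sin(2 * y))

def exb_rhs(x, y, t, h=1e-6):
    """E x B drift equations in normalized units (B = 1):
    dx/dt = -dphi/dy, dy/dt = +dphi/dx, so phi acts as the Hamiltonian."""
    dphidy = (phi(x, y + h, t) - phi(x, y - h, t)) / (2 * h)
    dphidx = (phi(x + h, y, t) - phi(x - h, y, t)) / (2 * h)
    return -dphidy, dphidx

# Plain RK4 integration of one guiding center.
x, y, t, dt = 0.1, 0.2, 0.0, 0.01
for _ in range(10_000):
    k1 = exb_rhs(x, y, t)
    k2 = exb_rhs(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
    k3 = exb_rhs(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
    k4 = exb_rhs(x + dt * k3[0], y + dt * k3[1], t + dt)
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    t += dt
print(x, y)   # chaotic wandering of the guiding center signals turbulent transport
```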
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Silva, Everton Nunes da. "Ensaios em economia da sáude : transplantes de rim". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15641.

Texto completo da fonte
Resumo:
The thesis addresses questions related to health economics, particularly the economic view of kidney transplants. A cost-utility analysis was conducted to verify which treatment, kidney transplant or hemodialysis, has the lower cost per quality-adjusted life year. The result obtained corroborates the international evidence, which indicates kidney transplantation as the more cost-effective strategy. In this study, the cost-utility ratio for kidney transplantation and hemodialysis was R$ 18,161/QALY and R$ 40,872/QALY, respectively. Although kidney transplantation is the dominant strategy, the scarcity of organs prevents it from being widely used, thus reducing the efficiency gains in the allocation of scarce resources. In this context, the thesis also addresses the issue of organ scarcity. According to the survey carried out, the imbalance between the demand for and the supply of organs tends to increase, since the former grows rapidly while the latter shows only a slight growth trend. Alternatives for circumventing this problem were therefore analyzed, especially those related to institutional changes in the organ donation law. Among them, it was argued that a presumed consent law would be the most feasible option, as it does not violate the presupposition of altruism. In order to estimate the possible increase in deceased-donor organ donation attributable to a presumed consent law, tools of health econometrics were used, applying the quantile regression method for panel data to a sample of 34 countries over five years. The results obtained indicate that there is a benefit in adopting a presumed consent law, which has a positive effect on the organ donation rate, of around 21-26%, compared with an informed consent law.
The thesis broaches questions related to health economics, particularly the economic view of renal transplants. A cost-utility analysis was conducted to assess which treatment, renal transplant or hemodialysis, has the lower cost per quality-adjusted life year. The result obtained corroborates the international evidence, which indicates renal transplantation as the most cost-effective strategy. In this study, the cost-utility ratio for renal transplant and hemodialysis was US$ 11,157/QALY and US$ 25,110/QALY, respectively. In spite of renal transplantation being the dominant strategy, the scarcity of organs hinders this strategy from being widely used, reducing the efficiency gains in the allocation of scarce resources. Within this context, organ shortage was also a target issue of this thesis. The survey performed shows a tendency towards an increasing imbalance between the demand for and the supply of organs, with the former growing rapidly while the latter shows only a small growth trend. Possible alternatives to bypass this problem were examined, especially those related to institutional changes in the organ donation law. Among them, it was argued that a presumed consent law would be the most feasible option, since it does not harm the presupposition of altruism. With the objective of estimating the eventual increase in organ donation, per cadaveric donor, due to a presumed consent law, the health econometric tool of quantile regression for panel data was applied to a sample of 34 countries over a five-year period. The results obtained indicate that there is a benefit in adopting a presumed consent law, which has a positive effect on the organ donation rate, of around 21-26%, compared to an informed consent law.
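As a worked illustration of the cost-utility comparison described above: the ratios below are those quoted in the abstract, while the absolute costs and QALYs used to form an incremental cost-effectiveness ratio (ICER) are invented for the example.

```python
# Cost-utility comparison using the ratios quoted in the abstract (US$/QALY).
strategies = {
    "renal transplant": {"cost_per_qaly": 11_157},
    "hemodialysis":     {"cost_per_qaly": 25_110},
}
best = min(strategies, key=lambda s: strategies[s]["cost_per_qaly"])
print(f"lower cost per QALY: {best}")

# Illustrative ICER = (C1 - C0) / (E1 - E0), with made-up absolute figures.
c_tx, e_tx = 120_000.0, 10.0   # hypothetical lifetime cost and QALYs, transplant
c_hd, e_hd = 180_000.0, 7.0    # hypothetical lifetime cost and QALYs, dialysis
icer = (c_tx - c_hd) / (e_tx - e_hd)
# A negative ICER with higher effectiveness means transplant dominates:
# it is both cheaper and yields more QALYs than dialysis in this toy example.
print(f"ICER (transplant vs dialysis): {icer:,.0f} $/QALY")
```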
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Giacomini, Daniel Orfale. "A devolução das quantias pagas pelos consumidores desistentes e excluídos dos contratos de consórcio à luz da Lei 11.795/08 e do Código de Defesa do Consumidor". Pontifícia Universidade Católica de São Paulo, 2010. https://tede2.pucsp.br/handle/handle/9060.

Texto completo da fonte
Resumo:
This study investigates the refund of amounts paid by consumers who withdraw from or are excluded from purchasing pool (consórcio) agreements, scrutinizing it in light of the recent legislation that regulates purchasing pool agreements in Brazil, Law No. 11,795/08, as well as the Consumer Protection Code. The study first outlines the mechanism of purchasing pool agreements in Brazil, covering its historical evolution and legal development. Once that mechanism is examined, the foundations of consumer protection are analyzed, since, as a consumer relation, purchasing pool agreements are also subject to the rules set forth in the Consumer Protection Code. Another aspect of the study relates to consumer contracts, given that purchasing pool agreements are emblematic adhesion contracts. It is also necessary to analyze the purchasing pool agreement itself, identifying its parties, basic concepts and main characteristics, with emphasis on the Brazilian Central Bank's power to regulate and supervise such agreements and to determine their minimum conditions, as provided for in its circulars. Against this background, the study presents the debate concerning the refund of amounts paid by consumers who withdrew or were excluded from purchasing pool agreements. Before the recent legislation, this topic divided court decisions and legal writers, with sound economic and legal arguments both for immediate refund and for refund only upon termination of the purchasing pool, duly adjusted. The current rules are then scrutinized: under Law No. 11,795/08 there is no longer any need to wait for termination of the purchasing pool, as excluded and withdrawing consumers take part in the draw in order to be reimbursed. The study concludes that Law No. 11,795/08 adopted the position that the general interest of all participants in the purchasing pool should prevail over the interest of a single consumer. To that extent, the law is in line with reasonableness and harmony, as it conditioned the refund of amounts on the draw mechanism, which is proper to the purchasing pool system as it was conceived.
The present study investigates the refund of amounts paid by consumers who withdraw from or are excluded from purchasing pool (consórcio) agreements, approaching and analyzing it in light of the new legislation regulating the purchasing pool system in Brazil, Law 11,795/08, and of the Consumer Protection Code. To introduce the theme, the work first outlines the purchasing pool system in Brazil, with the evolution of its history and legal treatment. After the analysis of the purchasing pool system in Brazil, the foundations of consumer protection are discussed, since, being a consumer relation, the purchasing pool agreement is subject to the provisions of the Consumer Protection Code. Another approach concerns consumer contracts, the purchasing pool agreement being a typical adhesion contract. It is also necessary to analyze the purchasing pool agreement, identifying its parties, fundamental concepts and main characteristics, with emphasis on the regulatory and supervisory power of the Central Bank of Brazil and the setting of the minimum conditions of the agreement in its circulars. Against this backdrop, the study finds support to discuss the refund of amounts paid to withdrawing and excluded pool members, a question which, until the new legislation, divided legal writers and case law, with relevant economic and legal arguments both from those who understood that the refund should occur immediately and from those who understood that it should occur only after the termination of the group, duly adjusted. The study then analyzes how the refund of amounts paid to withdrawing and excluded members is treated by the current legislation, under which there is no longer any need to wait for the closing of the group, these consumers now taking part in the draw to receive back the amounts paid. The research concludes that Law No. 11,795/08, by basing the functioning of the pool on the prevalence of the group's interest over the individual member's interest, acted in a spirit of reasonableness and harmony in defining the new system of refunding amounts paid to withdrawing and excluded consumers, conditioning it on selection in the draw, as is proper to the purchasing pool system since its conception.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Nichil, Geoffrey. "Provisionnement en assurance non-vie pour des contrats à maturité longue et à prime unique : application à la réforme Solvabilité 2". Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0200/document.

Texto completo da fonte
Resumo:
We consider an insurer that must indemnify a bank for losses arising from borrowers defaulting on their repayments. The models commonly used are collective and do not allow the individual behavior of borrowers to be taken into account. In a first part we define a model to study the amount of losses related to these payment defaults (the provision) over a given period. The key quantity of our model is the amount of an individual default. For a borrower j with loan maturity date T_j, this amount is max(S^j_{T_j} − R^j_{T_j}, 0), where S^j_{T_j} is the amount owed by the borrower, which depends on the term and the amount of the loan, and R^j_{T_j} is the resale amount of the property financed by the loan. R^j_{T_j} is proportional to the borrowed amount; the proportionality coefficient is modeled by a geometric Brownian motion and represents fluctuations in property prices. The law of the pairs (loan maturity date, loan term) is modeled by a Poisson point process. The provision P_h, where h is the maximum term of the contracts considered, is then defined as the sum of a random number of individual default amounts. We can thus compute the mean and the variance of the provision and also give a simulation algorithm. It is likewise possible to estimate the parameters of the model and to provide numerical values for the quantiles of the provision. In a second part we examine the solvency requirement associated with provisioning risk (an issue imposed by the European Solvency 2 reform). The question reduces to studying the asymptotic behavior of P_h as h → +∞. We show that P_h, suitably normalized, converges in law to a random variable that is the sum of two random variables, one of which is Gaussian.
We consider an insurance company which has to indemnify a bank against losses related to borrowers defaulting on payments. The models normally used by insurers are collective and do not allow the personal characteristics of borrowers to be taken into account. In a first part, we define a model to evaluate potential future default amounts (the provision) over a fixed period. The amount of a default is the key quantity of our model. For a borrower j with associated maturity T_j, this amount is max(S^j_{T_j} − R^j_{T_j}, 0), where S^j_{T_j} is the outstanding amount owed by the borrower, which depends on the borrowed amount and the term of the loan, and R^j_{T_j} is the property sale amount. R^j_{T_j} is proportional to the borrowed amount; the proportionality coefficient is modeled by a geometric Brownian motion and represents the fluctuation of real estate prices. The pairs (maturity of the loan, term of the loan) are modeled by a Poisson point process. The provision P_h, where h is the maximum duration of the loans, is defined as the sum of a random number of individual default amounts. We can compute the mean and the variance of the provision and also give an algorithm to simulate it. It is also possible to estimate the parameters of our model and then give numerical values for the quantiles of the provision. In the second part we focus on the solvency requirement due to provisioning risk (a topic imposed by the European Solvency 2 reform). The question is to study the asymptotic behavior of P_h as h → +∞. We show that P_h, suitably renormalized, converges in law to a random variable which is the sum of two random variables, one of which is Gaussian.
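A Monte Carlo sketch of the provision described above: only the structure, max(S − R, 0) summed over a Poisson number of loans with a GBM-driven resale coefficient, follows the abstract; every parameter value (loan sizes, outstanding fraction, drift, volatility) is stylized and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 0.01, 0.15   # GBM drift/volatility of the property-price coefficient
lam = 200                # expected number of loans maturing in the period
h = 10.0                 # maximum loan term (years)

def provision(reps=5_000):
    out = np.empty(reps)
    for i in range(reps):
        n = rng.poisson(lam)                          # random number of defaults to test
        term = rng.uniform(1.0, h, size=n)            # loan terms T_j
        borrowed = rng.lognormal(12.0, 0.4, size=n)   # borrowed amounts
        owed = 0.6 * borrowed                         # stylized outstanding amount S at T_j
        # GBM value of the price coefficient at each maturity:
        z = rng.standard_normal(n)
        coeff = np.exp((mu - 0.5 * sigma**2) * term + sigma * np.sqrt(term) * z)
        resale = borrowed * coeff                     # resale amount R at T_j
        out[i] = np.maximum(owed - resale, 0.0).sum() # sum of individual defaults
    return out

P = provision()
# Mean, dispersion and a Solvency-2-style high quantile of the provision:
print(P.mean(), P.std(), np.quantile(P, 0.995))
```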
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Lenormand, Maxime. "Initialiser et calibrer un modèle de microsimulation dynamique stochastique : application au modèle SimVillages". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00822114.

Texto completo da fonte
Resumo:
The aim of this thesis is to develop statistical tools for initializing and calibrating dynamic stochastic microsimulation models, starting from the example of the SimVillages model (developed within the European project PRIMA). This model couples demographic and economic dynamics applied to a population of rural municipalities. Each individual in the population, explicitly represented within a household in a municipality, possibly works in another one and has his or her own life trajectory. The model thus includes dynamics of life choices, education, career, union, birth, divorce, migration and death. We developed, implemented and tested the following models and methods: (1) a model for generating a synthetic population from aggregate data, in which each individual is a member of a household, lives in a municipality and has an employment status; this synthetic population is the initial state of the model; (2) a model for simulating an origin-destination table of home-to-work commuting from aggregate data; (3) a model for estimating the number of jobs in local services in a given municipality as a function of its number of inhabitants and of its neighborhood in terms of services; (4) a method for calibrating the unknown parameters of the SimVillages model so as to satisfy a set of error criteria defined on heterogeneous data sources. This method is based on a new sequential sampling algorithm of the Approximate Bayesian Computation type.
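The calibration method is a sequential Approximate Bayesian Computation algorithm; the basic rejection step that such algorithms refine can be sketched on a toy model as follows (nothing here is the thesis's actual simulator or data).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of ABC rejection: calibrate an unknown parameter against an
# observed summary statistic. Sequential ABC algorithms, like the one developed
# in the thesis, refine this step with adaptively decreasing tolerance levels.
observed = 3.2                       # observed summary statistic

def simulator(theta, n=100):
    """Stand-in for a stochastic microsimulation: run the model at parameter
    theta and return a summary statistic of its output."""
    return rng.normal(theta, 1.0, size=n).mean()

def abc_rejection(n_draws=20_000, eps=0.05):
    theta = rng.uniform(0.0, 10.0, size=n_draws)       # draws from the prior
    summaries = np.array([simulator(t) for t in theta])
    keep = np.abs(summaries - observed) < eps          # accept close simulations
    return theta[keep]                                 # approximate posterior sample

posterior = abc_rejection()
print(posterior.mean(), posterior.std(), posterior.size)
```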
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Lundberg, Robin. "En undersökning av kvantiloptioners egenskaper". Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-138949.

Texto completo da fonte
Resumo:
Options are actively bought and sold today for many different reasons. One of these is speculation about future movements in stock prices, where options have advantages over shares in the form of a leverage effect. Another reason for option trading is to hedge risks, which requires that holding the option compensates for the negative effect the risks contribute. In other words, if there is a risk of a negative future scenario that one does not want to be exposed to, options can be the right tool to use. Risks exist everywhere today, in different forms, which has contributed to an enormous increase in the demand for options over the last decades. However, risks can be both complex and varied, which has led to the development of more complex options to satisfy the demand that has evolved in the market. These more complex options are called exotic options, and they differ from the usual European and American call and put options. Among them we find lookback options, including call options on the maximum, and quantile options, which are two of the main options discussed in this thesis. It has long been known how to price European call and put options via the Black-Scholes-Merton model, but the more complex the options that appear on the market, the more complicated the pricing models that are developed. Unlike European call and put options, whose payoff depends on the stock price at maturity, lookback options depend on the movement of the stock price over the whole life of the contract. Their pricing therefore depends on more parameters than in the Black-Scholes-Merton model, among them the occupation time of the stochastic process describing the stock price, which leads to other pricing models. The purpose of the thesis is to present the model used for pricing quantile options and to show how their properties relate to the properties of other types of lookback options. The report shows that quantile options resemble certain types of lookback options, more precisely call options on the maximum, and that the properties of quantile options in fact converge to those of call options on the maximum as the quantile approaches 1. Based on this reasoning, there may be advantages in using quantile options rather than call options on the maximum, which investors should take into account when, and if, quantile options are introduced on the market.
Options are today used by investors for multiple reasons. One of these is speculation about future market movements, where ownership of options is advantageous over ordinary ownership of shares in the underlying stock in terms of a leverage effect. Furthermore, investors use options to hedge different kinds of risks they are exposed to, which demands that the option compensate for the possible negative effect the risk brings to the table. In other words, if there is a risk of a future negative scenario to which the investor is risk averse, then owning specific options which neutralize this risk can be the perfect tool. Risks appear all over the market in different shapes, which has created a great demand for options over the last decades. However, since risks can be both complex and range over multiple business areas, investors have demanded more complex options which can neutralize their risk exposures. These more complex options are called exotic options, and they differ from regular American and European options in how they behave with respect to the underlying stock. Among these exotic options we find different kinds of lookback options as well as quantile options, two of the main options discussed in this thesis. It has been known for a while how to price European call and put options with the Black-Scholes-Merton model. However, more complex options come with more complex pricing models, and unlike the European option's payoff, which depends on the underlying stock price at maturity, the lookback option's and the quantile option's payoffs depend on the stock price movement over the total life span of the option contract. Hence, the pricing of these options depends on more variables than the classic Black-Scholes-Merton model includes. One of these variables is the occupation time of the stochastic process describing the stock price movement, which leads to a more complex and extensive pricing model than the general Black-Scholes-Merton model. The objective of this thesis is to derive the pricing model used for quantile options and to show that the properties of quantile options are advantageous compared with some specific lookback options, viz. call options on the maximum. The thesis concludes that quantile options in fact converge to the call option on the maximum as the quantile approaches 1. Quantile options, however, come with somewhat different properties which potentially make them a good substitute for the call option on the maximum. This is a relevant factor for investors to consider when, and if, quantile options are introduced to the market.
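For intuition about the convergence claim, note that a quantile option with quantile α pays max(q_α - K, 0), where q_α is the α-quantile of the price path over the life of the contract, while a call on the maximum pays max(max_t S_t - K, 0). A rough Monte Carlo comparison under Black-Scholes-Merton dynamics (all parameter values hypothetical) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical market and contract parameters.
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.25, 1.0
n_steps, n_paths, alpha = 252, 20_000, 0.95

dt = T / n_steps
# GBM paths: S_{t+dt} = S_t * exp((r - sigma^2/2) dt + sigma sqrt(dt) Z)
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
paths = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

q_alpha = np.quantile(paths, alpha, axis=1)   # alpha-quantile of each path
q_max = paths.max(axis=1)                     # running maximum, for comparison

price_quantile = np.exp(-r * T) * np.maximum(q_alpha - K, 0).mean()
price_lookback = np.exp(-r * T) * np.maximum(q_max - K, 0).mean()
print(f"quantile option ~ {price_quantile:.2f}, call on maximum ~ {price_lookback:.2f}")
```

Raising alpha toward 1 drives the two estimated prices together, matching the convergence result described in the abstract.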
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Nilsson, Mathias, e Corswant Sophie von. "How Certain Are You of Getting a Parking Space? : A deep learning approach to parking availability prediction". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166989.

Texto completo da fonte
Resumo:
Traffic congestion is a severe problem in urban areas and leads to greenhouse gas emissions and air pollution. Drivers generally lack knowledge of the location and availability of free parking spaces, and about one-third of traffic congestion in cities is due to drivers searching for an available parking spot. In recent years, various solutions for providing parking information ahead of time have been proposed, the vast majority applied in large cities such as Beijing and San Francisco. This thesis was conducted in collaboration with Knowit and Dukaten to predict parking occupancy in car parks one hour ahead in the relatively small city of Linköping. To make the predictions, this study investigated the use of long short-term memory networks and gradient boosting regression trees, trained on historical parking data. To enhance decision making, the predictive uncertainty was estimated using Monte Carlo dropout for the former and quantile regression for the latter. The study reveals that both models can predict parking occupancy ahead of time and that they excel in different contexts. The inclusion of exogenous features can improve prediction quality: incorporating the hour of the day improved the models' performance, while weather features did not contribute much. As for uncertainty, Monte Carlo dropout was shown to be sensitive to parameter tuning in order to obtain good uncertainty estimates.
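Of the two uncertainty methods mentioned, quantile regression with gradient-boosted trees is the easier to sketch: scikit-learn's GradientBoostingRegressor accepts a quantile loss, and fitting one model per quantile yields a predictive interval. The data below are synthetic stand-ins for the (non-public) parking counts:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in: occupancy as a noisy function of hour of day.
hour = rng.uniform(0, 24, size=(2000, 1))
occupancy = 50 + 30 * np.sin(hour[:, 0] * np.pi / 12) + rng.normal(0, 8, 2000)

# One model per quantile gives an interval, not just a point estimate.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(hour, occupancy)
    for q in (0.1, 0.5, 0.9)
}

test = np.array([[8.0], [17.0]])  # predictions for 08:00 and 17:00
for q, m in models.items():
    print(f"q={q}: {m.predict(test).round(1)}")
```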
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Nichil, Geoffrey. "Provisionnement en assurance non-vie pour des contrats à maturité longue et à prime unique : application à la réforme Solvabilité 2". Electronic Thesis or Diss., Université de Lorraine, 2014. http://www.theses.fr/2014LORR0200.

Texto completo da fonte
Resumo:
We consider an insurance company which has to indemnify a bank against losses related to borrowers defaulting on their payments. The models normally used by insurers are collective and do not take the individual characteristics of borrowers into account. In the first part, we define a model to evaluate potential future default amounts (the provision) over a fixed period. The amount of a default is the key quantity of our model. For a borrower j with an associated maturity T_j, this amount is max(S^j_{T_j} - R^j_{T_j}, 0), where S^j_{T_j} is the outstanding amount owed by the borrower, which depends on the amount and the term of the loan, and R^j_{T_j} is the sale amount of the financed property. R^j_{T_j} is proportional to the borrowed amount; the proportionality coefficient is modeled by a geometric Brownian motion and represents the fluctuation of real-estate prices. The pairs (maturity of the loan, term of the loan) are modeled by a Poisson point process. The provision P_h, where h is the maximum duration of the loans, is defined as the sum of a random number of individual default amounts. We can compute the mean and the variance of the provision and also give an algorithm to simulate it. It is also possible to estimate the parameters of the model and then give a numerical value for the quantiles of the provision. In the second part we focus on the solvency need due to provisioning risk (a topic imposed by the European Solvency 2 reform). The question is to study the asymptotic behaviour of P_h as h → +∞. We show that P_h, suitably renormalized, converges in law to a random variable which is the sum of two random variables, one of which is Gaussian.
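Reassembling the notation that the plain-text rendering of the abstract scatters, the model can be written as follows, where D_j and N_h are labels introduced here for the individual default amount and the random number of defaults over the horizon h:

```latex
\[
  D_j = \max\!\bigl(S^{j}_{T_j} - R^{j}_{T_j},\, 0\bigr),
  \qquad
  P_h = \sum_{j=1}^{N_h} D_j ,
\]
% S^j_{T_j}: outstanding amount owed by borrower j at maturity T_j
% R^j_{T_j}: resale value of the financed property
% (maturity, term) pairs driven by a Poisson point process
```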
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Al, Labadi Luai. "On New Constructive Tools in Bayesian Nonparametric Inference". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22917.

Texto completo da fonte
Resumo:
Bayesian nonparametric inference requires the construction of priors on infinite-dimensional spaces such as the space of cumulative distribution functions and the space of cumulative hazard functions. Well-known priors on the space of cumulative distribution functions are the Dirichlet process, the two-parameter Poisson-Dirichlet process and the beta-Stacy process. On the other hand, the beta process is a popular prior on the space of cumulative hazard functions. This thesis is divided into three parts. In the first part, we tackle the problem of sampling from the above-mentioned processes. Sampling from these processes plays a crucial role in many applications in Bayesian nonparametric inference; however, exact samples from these processes are impossible to obtain, and the existing algorithms are either slow or very complex and may be difficult for many users to apply. We derive new approximation techniques for simulating the above processes. These new approximations provide simple yet efficient procedures for simulating these important processes. We compare the efficiency of the new approximations to several other well-known approximations and demonstrate a significant improvement. In the second part, we develop explicit expressions for calculating the Kolmogorov, Lévy and Cramér-von Mises distances between the Dirichlet process and its base measure. The derived expressions of each distance are used to select the concentration parameter of a Dirichlet process. We also propose a Bayesian goodness-of-fit test for simple and composite hypotheses for non-censored and censored observations. Illustrative examples and simulation results are included. Finally, we describe the relationship between frequentist and Bayesian nonparametric statistics. We show that, when the concentration parameter is large, the two-parameter Poisson-Dirichlet process and its corresponding quantile process share many asymptotic properties with the frequentist empirical process and the frequentist quantile process. Some of these properties are the functional central limit theorem, the strong law of large numbers and the Glivenko-Cantelli theorem.
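The baseline that such approximation schemes are measured against is Sethuraman's stick-breaking construction, a standard truncated (hence itself approximate) way to draw from a Dirichlet process. A minimal sketch of that baseline, not the thesis's new algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)

def dp_sample(concentration, base_sampler, n_atoms=500):
    """Truncated stick-breaking draw from DP(concentration, base measure)."""
    betas = rng.beta(1.0, concentration, size=n_atoms)
    stick = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    weights = betas * stick                 # w_k = beta_k * prod_{i<k}(1 - beta_i)
    atoms = base_sampler(n_atoms)           # atom locations from the base measure
    return atoms, weights / weights.sum()   # renormalize away the truncation error

# Example: base measure N(0, 1), concentration parameter a = 5.
atoms, weights = dp_sample(5.0, lambda n: rng.standard_normal(n))
draws = rng.choice(atoms, size=10, p=weights)   # draws from one DP realization
print(draws.round(2))
```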
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

WU, HSING-CHEN, e 吳倖禎. "Recovering Copper from Great Quantity of Low Concentration Wastewater by Ion Exchange Resin". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/42307786766794587460.

Texto completo da fonte
Resumo:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Program, Department of Chemical and Materials Engineering
Academic year 104 (2015/16)
This study proposes the concept of an "ion extraction apparatus" to recover valuable Cu from a large quantity of low-concentration wastewater. The ion exchange resin is wrapped in a net material and placed in the wastewater. After the resin reaches adsorption equilibrium, the package of resin is moved into a regeneration tank, where the Cu is desorbed with an acid solution. For wastewater at pH 3 containing 200 mg/L Cu, operation is most economical when the residual Cu concentration is 8 mg/L. The ion extraction apparatus needs only 2 kg of resin per m³ of wastewater, and 192 g of Cu can be extracted per cycle. At the optimum desorption operation, a condensed liquid of three times the resin volume with a Cu concentration of 28,000 mg/L can be obtained.
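The headline figure checks out with one line of mass-balance arithmetic per m³ of treated wastewater:

```latex
\[
  (200 - 8)\ \tfrac{\mathrm{mg}}{\mathrm{L}} \times 1000\ \tfrac{\mathrm{L}}{\mathrm{m}^3}
  = 192{,}000\ \mathrm{mg}
  = 192\ \mathrm{g\ of\ Cu\ per\ cycle.}
\]
```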
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Thompson, Lindsay P. "Degenerate oligonucleotide primed - polymerase chain reaction evaluation and optimization to improve downstream forensic STR analysis of low quality/low quantity DNA". 2006. http://hdl.handle.net/10156/1959.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Cassivi, Alexandra. "Access to drinking water in low-and middle-income countries: monitoring and assessment". Thesis, 2020. http://hdl.handle.net/1828/12102.

Texto completo da fonte
Resumo:
Lack of access to drinking water remains widespread, as 2.1 billion people live without safely managed service, that is, an improved water source located on premises, available when needed, and free from contamination. Monitoring global access to drinking water is complex yet essential, particularly in settings where households need to fetch water to meet their basic needs, as multiple factors relating to accessibility, quantity and quality must be considered. The overall objective of this observational study is to increase knowledge surrounding the monitoring and assessment of access to drinking water supply in low- and middle-income countries. The dissertation comprises five manuscripts which address this objective using various approaches, including a systematic review (manuscript 1), secondary data analysis (manuscript 2), and primary data analysis (manuscripts 3-5), to gather evidence towards improving access to drinking water. Primary data were collected through a seasonal cohort study conducted in Southern Malawi that included 375 households randomly selected in three different urban and rural sites. Methods included structured questionnaires, observations, GPS-based measurements, and water quality testing. Findings from this study highlight the importance of appropriately assessing household behaviours in accessing drinking water so as to improve the reliability of the indicators and methods used to monitor access to water. Seasonal variations that may affect the reliability of water sources and household needs should be considered to secure the benefits of improved access to water and sustainable health outcomes. Beyond targeting reliable and continuous availability from an improved water source close to the household, interventions should aim to ensure safe water quality at the point of use, to mitigate post-collection contamination, and sufficient quantities of water for personal and domestic hygiene. Focusing on the benefits of improving access to water at the point of consumption is essential to generate more realistic estimates, suitable interventions and appropriate responses to need.
Graduate
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Liao, Hui-Zhu, e 廖慧珠. "A Study of the Impact of the Low Birth Rate on the Educational Quality and Quantity of Elementary School and its Coping Strategies". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/57u455.

Texto completo da fonte
Resumo:
Master's thesis
National Taichung Institute of Technology
Graduate Institute of Business Administration
Academic year 98 (2009/10)
The purpose of this study is to understand how elementary school educators view the impact of the low birth rate on educational quality and quantity, based on data from educational staff in Taichung City elementary schools, and to provide a reference for the educational authorities and schools. A questionnaire was used. Of the 600 questionnaires distributed, 88% were returned; the 512 valid responses were statistically analyzed. The results are as follows: 1. A declining number of classes and an over-supply of teachers already exist in elementary schools because of the low birth rate. 2. Elementary school teachers regard improving classroom management as the top issue among the impacts of the low birth rate on educational quality and quantity. 3. Elementary school teachers consider professional learning seminars to improve teaching quality the most important strategy for coping with the low birth rate. 4. Educational staff with different years of service, positions and school districts hold somewhat different views on the impact of the low birth rate on educational quality and quantity.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Eddon, Maya. "Quantity and quality: naturalness in metaphysics". 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051814.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Shen, Meng-Hui, e 沈孟輝. "Optical Low Coherence Interferometry to Quantify Silicon Wire Process Variation". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/v6p963.

Texto completo da fonte
Resumo:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 107 (2018/19)
In silicon waveguide components, the phase change of the light affects the wavelength response of the component, and this phase change mainly comes from variations in the effective refractive index between waveguides. Microring resonators (MRR) and arrayed waveguide gratings (AWG) are key components that require not only an accurate filter band but also low crosstalk, and both depend on phase, which makes phase monitoring the most important issue; this thesis therefore discusses the phase variations of silicon waveguides and how to improve them. The wavelength spectra of filter elements such as MRRs and AWGs are designed and measured, with a low-coherence light source and a Mach-Zehnder interferometer as the main experimental framework. The interference wave packets of the components are analyzed to obtain the coherence length between wave packets. By feeding the results into simulation software, the phase change is inferred, along with how it can be improved from design to process, so that abnormal filter bands and crosstalk due to phase are avoided. The low-coherence optical interference technique can also be extended to components for modal division multiplexing (MDM), where the phase is monitored through the effective refractive index change of the TE0 and TE1 modes in the waveguide to learn about the modal changes between waveguides.
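The quantity being monitored can be summarized by a standard waveguide relation (textbook optics, not specific to this thesis): over a waveguide of length L at wavelength λ,

```latex
\[
  \varphi = \frac{2\pi}{\lambda}\, n_{\mathrm{eff}}\, L,
  \qquad
  \Delta\varphi = \frac{2\pi}{\lambda}\, \Delta n_{\mathrm{eff}}\, L,
\]
```

so small process-induced changes in the effective index translate directly into the phase errors that shift MRR and AWG filter bands.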
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Chiu, Sin-ga W., e 王聖嘉. "Optimal Lot-Size Decision for Economic Production Quantity Model with Defective Items Rework". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/xvd6nt.

Texto completo da fonte
Resumo:
Doctoral dissertation
Chung Yuan Christian University
Graduate Institute of Industrial Engineering
Academic year 92 (2003/04)
This research considers the optimal lot-size decision for the Economic Production Quantity (EPQ) model with rework of defective items. The perfect-quality assumption of the classical EPQ model is unrealistic: owing to process deterioration or other factors, the generation of defective items is inevitable. This study assumes that the defective rate is a random variable and that all items produced are inspected. Imperfect-quality items fall into two groups, the repairable and the scrap. Reworking of the repairable defective items starts when the regular production process finishes in each cycle. The rework process is assumed to be imperfect: a random portion of reworked items fail and become scrap. Five specific situations of the EPQ model are examined: (1) all defective items are scrap; (2) all defective items are reworked and can be repaired at an additional holding and reworking cost; (3) defective items fall into two groups, one scrap and the other repairable; (4) all defective items are reworked, but a portion of them fail the reworking and become scrap; (5) an imperfect EPQ model combining situations (3) and (4). Mathematical models are developed for each of these situations. The disposal cost per scrap item and the repair and holding cost of each reworked item are included in the cost analysis. The renewal reward theorem is utilized in the proposed mathematical modeling to cope with the variable cycle length. The optimal lot size that minimizes the expected overall costs is derived for each model, where shortages are not permitted. Numerical examples are provided to demonstrate the practical use of the resulting models in real-life manufacturing firms.
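For reference, the perfect-quality EPQ that these five models generalize has the classical closed form, with demand rate D, production rate P > D, setup cost K and unit holding cost h:

```latex
\[
  Q^{*} = \sqrt{\frac{2KD}{h\,(1 - D/P)}} ,
\]
```

the rework and scrap variants replace the deterministic terms with expectations over the random defective and scrap rates before minimizing.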
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Chang, Nai-Fang, e 張乃方. "Re-examining Okun's law: Evidence from Quantile Regression Analysis". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/59q5t4.

Texto completo da fonte
Resumo:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Business Administration
Academic year 103 (2014/15)
Many past studies of Okun's law have focused mainly on strong industrial countries, with little attention to others. This study classifies countries as low-income, medium-income and high-income according to the World Bank's gross national income categories. We estimate Okun coefficients with the ordinary least-squares method and with quantile regression, and then interpret the relationship between the different estimates. Re-examining the first-difference model and the gap model of Okun (1962), we find that Okun's law still holds in all the countries; moreover, the absolute value of the coefficient is largest for medium-income countries, followed by high-income countries. Using quantile regression on 44 countries, we find that the absolute value of the coefficient varies with the quantile. We divide countries into symmetric and asymmetric types, and asymmetric countries further into asymmetrical-decrease and asymmetrical-increase countries: in an asymmetrical-decrease country the absolute value of the coefficient shrinks as the quantile rises, while in an asymmetrical-increase country it grows. An asymmetrical-decrease country is one where a rising unemployment rate is an important cause of economic deterioration during negative shocks (e.g. depressions), while the link between falling unemployment and economic growth weakens during positive shocks (e.g. economic prosperity). An asymmetrical-increase country shows the opposite pattern: the link weakens during negative shocks, while changes in the unemployment rate have a stronger effect on growth during positive shocks. Overall, most countries in our data can be classified as asymmetrical-decrease countries under both the gap model and the first-difference model. Our empirical results therefore support policies in most countries that suppress rises in the unemployment rate, such as increased public construction or government spending, especially during downturns.
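The two specifications being re-examined are standard; writing u for the unemployment rate, y for log output and starred symbols for potential levels (the thesis's exact variants may differ in detail), they are:

```latex
\[
  \text{first-difference model:}\quad \Delta u_t = \alpha + \beta\, \Delta y_t + \varepsilon_t,
  \qquad
  \text{gap model:}\quad u_t - u_t^{*} = \beta\,\bigl(y_t - y_t^{*}\bigr) + \varepsilon_t ,
\]
```

with Okun's law corresponding to β < 0; quantile regression lets β vary across the conditional distribution.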
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Hu, Feng-Lin, e 胡豐麟. "Determining the optimal lot size for imperfect EPQ model with scrap and fix-quantify delivery". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/26727629483440982345.

Texto completo da fonte
Resumo:
Master's thesis
Chaoyang University of Technology
Graduate Program, Department of Industrial Engineering and Management
Academic year 97 (2008/09)
This paper studies the optimal lot size for an imperfect EPQ model with scrap and fixed-quantity delivery. It assumes an imperfect production process in which all defective goods are scrapped, and each batch is dispatched to the buyer in a number of shipments. The traditional Economic Production Quantity model assumes a perfect production process and continuous delivery, which is unrealistic: defective goods arise from human or equipment factors during the production process and increase its cost. This research examines two models: Model 1, in which defective goods are entirely scrapped and the buyer's inventory cost is not included; and Model 2, in which defective goods are entirely scrapped and the buyer's inventory cost is included. The results of these models are verified, numerical examples are given, and a sensitivity analysis of the optimal batch size is provided to demonstrate practical usage.
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Lin, Chih-An, e 林芝安. "Optimal lot size and optimal frequency of delivery for an imperfect finite production model with fixed quantity multiple shipments". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/81140963269725589587.

Texto completo da fonte
Resumo:
Master's thesis
Chaoyang University of Technology
Graduate Program, Department of Industrial Engineering and Management
Academic year 97 (2008/09)
This paper determines the optimal lot size for a finite production model with an imperfect rework process for defective items and fixed-quantity multiple shipments, assuming the rework procedure is imperfect and each batch is dispatched to the customer in a number of shipments. The purpose of this study is to find the economic production quantity (EPQ) and the optimal number of deliveries that minimize the total inventory costs of the manufacturer and the buyer. The traditional EPQ model assumes that demand is met continuously, which is unrealistic because manufacturers delivering products to a buyer often adopt fixed-quantity shipments. Although this approach adds the purchaser's inventory costs and transportation costs to the total cost, it yields a realistic integrated production-inventory system. Quality factors such as random defective items and an imperfect rework process are also considered in the proposed system. Three different models are examined: (1) the optimal lot size and optimal frequency of delivery when deliveries begin after the rework; (2) the optimal lot size and optimal frequency of delivery when deliveries begin before the rework, under one condition on the model parameters; and (3) the optimal lot size when deliveries begin before the rework, under the complementary condition. Numerical examples are provided to demonstrate practical usage.
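The joint search over lot size and number of deliveries that these theses perform can be illustrated with a deliberately simplified cost function; the form below (setup, per-shipment and holding terms, all parameter values hypothetical) is a stand-in for the actual models, not a reproduction of them:

```python
# Hypothetical parameters: demand D, production rate P, setup cost K,
# per-shipment cost F, vendor/buyer unit holding costs h1, h2.
D, P, K, F, h1, h2 = 4000, 10000, 450, 90, 0.6, 1.2

def cost(q, n):
    """Illustrative annual cost of lot size q delivered in n fixed-quantity shipments."""
    setups = K * D / q                                # setups per year
    shipments = F * n * D / q                         # n shipments per production cycle
    holding = 0.5 * q * (h1 * (1 - D / P) + h2 / n)   # more shipments, less buyer stock
    return setups + shipments + holding

# Brute-force grid search over the two decision variables.
best = min((cost(q, n), q, n) for n in range(1, 11) for q in range(100, 5001, 50))
print(f"min cost {best[0]:.0f} at Q = {best[1]}, n = {best[2]} deliveries")
```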
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Gebrewold, Fetene. "Descriptive study of current practices of hazardous waste management among identified small quantity generators in Benton County". Thesis, 1993. http://hdl.handle.net/1957/36811.

Texto completo da fonte
Resumo:
Current evidence suggests that development and industrialization have engendered the manufacture and use of chemical products which may harm human health and degrade the environment. One of the most pressing environmental issues since World War II is how society either manages or mismanages hazardous wastes. The purpose of this study was to assess current management and disposal practices among Small Quantity Generators (SQG) and Conditionally Exempt Generators (CEG) in Benton County, Oregon. Study objectives included identification of the number of registered and nonregistered SQGs and CEGs, identification of the types of businesses, estimation of the quantities of hazardous wastes produced and used, and assessment of current levels of awareness among generators of hazardous wastes of pertinent regulations and safe environmental practices. A survey instrument was used to collect data during in-person interviews with representatives from a total of 48 businesses in Benton County. Findings indicated that the majority of both the registered (70%) and nonregistered (72.2%) businesses performed cleaning and degreasing activities at their business locations. Other activities, in order of importance, included fabrication, retail sales, manufacturing, and painting. With respect to the types of wastes produced or used, the majority of the respondents indicated the production or use of waste oils and aqueous liquids. Similarly, the majority of registered businesses (96.7%) indicated that they provided employee training in hazardous waste management. Asked to identify their method of disposal, both SQG and CEG respondents listed, in order of importance, return to supplier, on-site recycling, treatment, storage and disposal facilities, garbage/landfills, evaporation, and sale of wastes as their preferred methods. Most of the respondents indicated that their principal recycled wastes were solvents and oils, followed by refrigerated gases and other products. The study also considered the influence of state and federal laws and regulations as applied to hazardous wastes, and whether or not these administrative rules created a problem for Benton County businesses. In contrast to prior studies, which have indicated that most businesses regarded federal and state laws and regulations as too complex and inflexible, or which complained that lack of access to information or lack of time to remain informed served as significant constraints upon their ability to comply, the majority of Benton County businesses indicated "no problem" with the administrative rules. The conclusion of the study was that an overall comparison of Benton County SQGs and CEGs does not provide clear and convincing evidence that nonregistered businesses, by virtue of the regulatory exemption, practice illegal hazardous waste disposal and management procedures to a greater degree than the more fully regulated registered businesses.
Graduation date: 1993
Estilos ABNT, Harvard, Vancouver, APA, etc.