Theses / dissertations on the topic "Qh 244"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 17 best theses / dissertations for your research on the topic "Qh 244".
You can also download the full text of the scientific publication as a .pdf and read its abstract online when it is available in the metadata.
Browse theses / dissertations from a wide variety of scientific disciplines and compile an accurate bibliography.
Rusch, Thomas, Patrick Mair and Reinhold Hatzinger. "Psychometrics With R: A Review Of CRAN Packages For Item Response Theory". WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/4010/1/resrepIRThandbook.pdf.
Full text of the source
Series: Discussion Paper Series / Center for Empirical Research Methods
Zeileis, Achim, and Christian Kleiber. "Approximate replication of high-breakdown robust regression techniques". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/422/1/document.pdf.
Full text of the source
Series: Research Report Series / Department of Statistics and Mathematics
Zeileis, Achim, Ajay Shah and Ila Patnaik. "Exchange Rate Regime Analysis Using Structural Change Methods". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/386/1/document.pdf.
Full text of the source
Series: Research Report Series / Department of Statistics and Mathematics
Zeileis, Achim, Christian Kleiber and Simon Jackman. "Regression Models for Count Data in R". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/1168/1/document.pdf.
Full text of the source
Series: Research Report Series / Department of Statistics and Mathematics
Yang, Congcong, Alfred Taudes and Guozhi Dong. "Efficiency Analysis of European Freight Villages - Three Peers for Benchmarking". Department für Informationsverarbeitung und Prozessmanagement, WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4517/1/Efficiency_analysis_of_European_FVs%2Dthree_peers_for_benchmarking.pdf.
Full text of the source
Series: Working Papers on Information Systems, Information Business and Operations
Yildirim, Evrim. "Development Of In Vitro Micropropagation Techniques For Saffron (Crocus sativus L.)". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608730/index.pdf.
Full text of the source
Benko, Michal. "Functional data analysis with applications in finance". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2007. http://dx.doi.org/10.18452/15585.
Full text of the source
In many different fields of applied statistics an object of interest depends on some continuous parameter. Typical examples in finance are implied volatility functions, yield curves or risk-neutral densities. Due to different market conventions and further technical reasons, these objects are observable only on a discrete grid, e.g. a grid of strikes and maturities for which trades have been settled at a given time point. By collecting these functions for several time points (e.g. days) or for different underlyings, a sample of functions is obtained: a functional data set. The first topic considered in this thesis concerns strategies for recovering the functional objects (e.g. the implied volatility function) from the observed data using nonparametric smoothing methods. Besides the standard smoothing methods, a procedure based on a combination of nonparametric smoothing and no-arbitrage-theory results is proposed for implied volatility smoothing. The second part of the thesis is devoted to functional data analysis (FDA) and its connection to problems arising in the empirical analysis of financial markets. The theoretical part of the thesis focuses on functional principal component analysis, the functional counterpart of the well-known multivariate dimension-reduction technique. A comprehensive overview of the existing methods is given, and an estimation method based on the dual problem as well as two-sample inference based on functional principal component analysis are discussed. The FDA techniques are applied to the analysis of implied volatility and yield curve dynamics. In addition, the implementation of the FDA techniques together with an FDA library for the statistical environment XploRe is presented.
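The core step of the functional principal component analysis described in this abstract can be sketched for discretized curves: center the curves observed on a common grid, form the sample covariance matrix, and extract its leading eigenvector. This is a minimal illustrative sketch in Python, not the thesis code (which targets the XploRe environment); the toy curves and grid are made up.

```python
# Sketch of functional PCA on curves discretized on a common grid:
# the first functional principal component is the dominant eigenvector
# of the sample covariance matrix, found here by power iteration.

def first_fpc(curves, iters=200):
    """Return (eigenvalue, component) for the leading principal component
    of a list of curves sampled on a common grid."""
    n, p = len(curves), len(curves[0])
    mean = [sum(c[j] for c in curves) / n for j in range(p)]
    centered = [[c[j] - mean[j] for j in range(p)] for c in curves]
    # Sample covariance matrix (p x p) of the grid values
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(p)]
           for i in range(p)]
    # Power iteration for the dominant eigenpair
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(p)) for i in range(p))
    return lam, v

# Toy "curves": scalings of one shape, so a single component dominates.
curves = [[a * t for t in (0.0, 0.5, 1.0, 1.5)] for a in (0.9, 1.0, 1.1, 1.2)]
lam, v = first_fpc(curves)
```

Because the toy data are exact multiples of one shape, the recovered component is proportional to that shape; real implied-volatility or yield-curve samples would first be smoothed onto the grid.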
Maier, Marco J. "DirichletReg: Dirichlet Regression for Compositional Data in R". WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4077/1/Report125.pdf.
Full text of the source
Series: Research Report Series / Department of Statistics and Mathematics
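The compositional responses that DirichletReg models are vectors of positive shares summing to one. As a minimal sketch of this data type (DirichletReg itself is an R package; this Python fragment only illustrates the standard Gamma-normalization construction of a Dirichlet draw, with made-up alpha parameters):

```python
# Simulate one Dirichlet-distributed composition: draw independent Gamma
# variates and normalize them so the parts sum to one.
import random

def rdirichlet(alphas, rng=random):
    """One draw from a Dirichlet(alphas) distribution via Gamma variates."""
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]

random.seed(42)
sample = rdirichlet([2.0, 3.0, 5.0])   # a 3-part composition
```

By construction every part lies strictly between 0 and 1 and the parts sum exactly to one, which is the constraint Dirichlet regression respects that ordinary least squares would not.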
Chao, Shih-Kang. "Quantile regression in risk calibration". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17223.
Full text of the source
Quantile regression studies the conditional quantile function Q_{Y|X}(τ) of Y given X at level τ, which satisfies F_{Y|X}(Q_{Y|X}(τ)) = τ for all τ ∈ (0,1), where F_{Y|X} is the conditional CDF of Y given X. Quantile regression allows for a closer inspection of the conditional distribution beyond the conditional moments. This technique is particularly useful for, for example, the Value-at-Risk (VaR), which the Basel accords (2011) require all banks to report, or the "quantile treatment effect" and "conditional stochastic dominance (CSD)", economic concepts for measuring the effectiveness of a government policy or a medical treatment. Despite its wide applicability, quantile regression is more challenging to develop than mean regression: one must be adept with general regression problems and M-estimators, and additionally deal with non-smooth loss functions. In this dissertation, chapter 2 is devoted to empirical risk management during financial crises using quantile regression. Chapters 3 and 4 address the issue of high dimensionality and the nonparametric technique of quantile regression.
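The defining property above can be made concrete: the τ-quantile is the minimizer of the expected "check" (pinball) loss ρ_τ(u) = u·(τ − 1{u < 0}), which is the non-smooth loss function the abstract refers to. A minimal sketch, with made-up data (not the thesis code):

```python
# The tau-quantile minimizes the empirical check (pinball) loss.

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(data, tau):
    """Data point minimizing the empirical check loss at level tau."""
    return min(data, key=lambda q: sum(check_loss(y - q, tau) for y in data))

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
median = sample_quantile(data, 0.5)   # a central value of the sample
q90 = sample_quantile(data, 0.9)      # a value near the upper tail
```

Replacing the constant candidate q with a function of covariates X turns this minimization into quantile regression; the kink of ρ_τ at zero is what makes the optimization harder than least squares.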
Burda, Maike M. "Testing for causality with Wald tests under nonregular conditions". Doctoral thesis, [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=968852432.
Full text of the source
Borak, Szymon. "Dynamic semiparametric factor models". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2008. http://dx.doi.org/10.18452/15802.
Full text of the source
High-dimensional regression problems which reveal dynamic behavior occur frequently in many different fields of science. The dynamics of the whole complex system is typically analyzed via the time propagation of a small number of factors, which are loaded with time-invariant functions of explanatory variables. In this thesis we consider the dynamic semiparametric factor model, which assumes nonparametric loading functions. We start with a short discussion of related statistical techniques and present the properties of the model. Additionally, real-data applications are discussed, with particular focus on implied volatility dynamics and the resulting factor hedging of barrier options.
Xu, Yafei. "High Dimensional Financial Engineering: Dependence Modeling and Sequential Surveillance". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/18790.
Full text of the source
This dissertation focuses on high-dimensional financial engineering, especially dependence modeling and sequential surveillance. On dependence modeling, an introduction to high-dimensional copulas is presented, concentrating on the state of the art in copula research. A more complex financial-engineering application of high-dimensional copulas is the pricing of a portfolio-like credit derivative, the credit default swap index (CDX) tranches. Here, a convex combination of copulas is proposed for CDX tranche pricing, with components stemming from the elliptical copula family (Gaussian and Student-t), the Archimedean copula family (Frank, Gumbel, Clayton and Joe) and the hierarchical Archimedean copula family used in some publications. The financial surveillance part focuses on monitoring high-dimensional portfolios (in 5, 29 and 90 dimensions) by developing a nonparametric multivariate statistical process control chart, the energy-test-based control chart (ETCC). To support further research and practice with this chart, an R package, "EnergyOnlineCPM", was developed. The package has been accepted and published on the Comprehensive R Archive Network (CRAN) and is the first package that can monitor shifts in mean and covariance jointly online.
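The convex-combination idea used for CDX tranche pricing rests on a simple fact: a weighted average of copulas is itself a copula. A minimal bivariate sketch (not the thesis code, which mixes elliptical and Archimedean families) using the independence copula Π(u, v) = u·v and the comonotonicity copula M(u, v) = min(u, v), with a made-up weight:

```python
# Convex combination of two copulas; valid for any weight w in [0, 1].

def mixture_copula(u, v, w=0.3):
    """w * Pi(u, v) + (1 - w) * M(u, v)."""
    independence = u * v
    comonotone = min(u, v)
    return w * independence + (1.0 - w) * comonotone

c = mixture_copula(0.4, 0.7, w=0.3)
```

The mixture inherits uniform margins from its components (e.g. C(u, 1) = u), which is exactly why fitting the weights rather than committing to one family gives the pricing flexibility described above.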
Nesterov, Alexander. "Three essays in matching mechanism design". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17647.
Full text of the source
I consider the problem of allocating indivisible objects among agents according to their preferences when transfers are absent. In Chapter 1, I study the tradeoff between fairness and efficiency in the class of strategy-proof allocation mechanisms. The main finding is that for strategy-proof mechanisms the following efficiency and fairness criteria are mutually incompatible: (1) ex-post efficiency and envy-freeness, (2) ordinal efficiency and weak envy-freeness, and (3) ordinal efficiency and the equal-division lower bound. In Chapter 2, the focus is on two representations of an allocation when randomization is used: as a probabilistic assignment and as a lottery over deterministic assignments. To help facilitate the design of practical lottery mechanisms, we provide new tools for obtaining stochastic improvements in lotteries. As applications, we propose lottery mechanisms that improve upon the widely used random serial dictatorship mechanism, and a lottery representation of its competitor, the probabilistic serial mechanism. In Chapter 3, I propose a new mechanism to assign students to primary schools: the Adaptive Acceptance rule (AA). AA collects von Neumann-Morgenstern utilities of students over schools and implements the assignment using an iterative procedure similar to the prevalent Immediate Acceptance rule (IA). AA enjoys a strong combination of incentive and efficiency properties compared to IA and its rival, the Deferred Acceptance rule (DA). In the case of strict priorities, AA implements the student-optimal stable matching in dominant strategies, which dominates each equilibrium outcome of IA. In the case of no priorities, AA is ex-post efficient while some equilibrium outcomes of IA are not; also, AA causes a loss of ex-ante efficiency less often than DA. If, in addition, students have common ordinal preferences, AA is approximately strategy-proof and ex-ante dominates DA.
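The random serial dictatorship mechanism mentioned above is easy to state: agents take turns, in a uniformly random order, picking their favorite object still available. A minimal sketch with made-up agents and preferences (drawing the order uniformly at random yields the random version; here the order is passed explicitly):

```python
# Serial dictatorship: each agent in `order` takes the highest-ranked
# object still available according to their preference list.

def serial_dictatorship(preferences, order):
    """preferences: agent -> ranked list of objects; order: agent sequence."""
    available = set().union(*preferences.values())
    assignment = {}
    for agent in order:
        pick = next(obj for obj in preferences[agent] if obj in available)
        assignment[agent] = pick
        available.remove(pick)
    return assignment

prefs = {"ann": ["a", "b", "c"],
         "bob": ["a", "c", "b"],
         "cid": ["b", "a", "c"]}
result = serial_dictatorship(prefs, order=["bob", "ann", "cid"])
```

Each ordering gives a Pareto-efficient deterministic assignment, and truth-telling is a dominant strategy; the thesis's contribution concerns improving the lottery over such assignments, which this sketch does not attempt.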
Örsal, Deniz Dilan Karaman. "Essays on panel cointegration testing". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2009. http://dx.doi.org/10.18452/15894.
Full text of the source
This thesis is composed of four essays which contribute to the literature on panel cointegration methodology. The first essay compares the finite-sample properties of the four residual-based panel cointegration tests of Pedroni (1995, 1999) and the likelihood-based panel cointegration test of Larsson et al. (2001). The simulation results indicate that the panel-t test statistic of Pedroni has the best finite-sample properties among the five panel cointegration test statistics evaluated. The second essay presents a corrected version of the proof of Larsson et al. (2001) concerning the finiteness of the moments of the asymptotic trace statistic. The proof is corrected for the case in which the difference between the number of variables and the number of existing cointegrating relations is one. The third essay proposes a new likelihood-based panel cointegration test in the presence of a linear time trend in the data-generating process. This new test extends the likelihood ratio test of Saikkonen and Lütkepohl (2000) for trend-adjusted data to the panel data framework, and is called the panel SL test. Under the null hypothesis, the panel SL test statistic is standard normally distributed as the number of time periods (T) and the number of cross-sections (N) tend to infinity sequentially. The finite-sample properties of the test are investigated by means of a Monte Carlo study: the new test shows reasonable size as T and N increase, and has high power in small samples. The last essay analyzes the long-run money demand relation among OECD countries using panel unit root and cointegration testing techniques. The panel SL cointegration test and the tests of Pedroni (1999) are used to detect the existence of a stationary long-run money demand relation. Moreover, the money demand function is estimated with the panel dynamic ordinary least squares method of Mark and Sul (2003).
Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.
Full text of the source
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g. for checking various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on bootstrap resampling; it is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model, and a labor market analysis illustrates the method. High-dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g. macroeconomics, meteorology, medicine and financial engineering. A common approach is to separate the modeling of a high-dimensional time series into the time propagation of a low-dimensional time series and high-dimensional time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group Lasso-type technique and choose the space basis based on smoothed functional principal component analysis; we show properties of this estimator under dependence. In the second step, we obtain the detrended, stationary low-dimensional stochastic process.
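The bootstrap resampling idea behind the second band-construction method can be illustrated in its simplest pointwise form: resample the data with replacement, recompute the statistic, and read off percentile limits. This is a minimal sketch with made-up data, reduced to a percentile interval for a sample median; the thesis constructs uniform bands for nonparametric quantile estimates, which is substantially more involved.

```python
# Percentile bootstrap interval for a sample median.
import random

def bootstrap_median_interval(data, level=0.90, reps=2000, seed=1):
    """Resample with replacement, collect medians, return percentile limits."""
    rng = random.Random(seed)
    n = len(data)
    medians = []
    for _ in range(reps):
        resample = sorted(rng.choice(data) for _ in range(n))
        medians.append(resample[n // 2])  # middle order statistic
    medians.sort()
    lo = medians[int((1 - level) / 2 * reps)]
    hi = medians[int((1 + level) / 2 * reps) - 1]
    return lo, hi

data = [2.1, 2.4, 2.5, 2.7, 3.0, 3.2, 3.3, 3.8, 4.0]
lo, hi = bootstrap_median_interval(data)
```

A uniform band additionally requires controlling the maximal deviation over the whole covariate range simultaneously, which is where the extreme value theory and strong approximations of the first method come in.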
Hofmarcher, Paul. "Advanced Regression Methods in Finance and Economics: Three Essays". Thesis, 2012. http://epub.wu.ac.at/3489/1/DissFINAL.pdf.
Full text of the source
March, Nicolas. "Building a Data Mining Framework for Target Marketing". Thesis, 2011. http://epub.wu.ac.at/3242/1/diss_epub_nicolas_march_20111002.pdf.
Full text of the source