Theses on the topic "Qh 244"

Below are the top 17 theses (graduate and doctoral) for research on the topic "Qh 244".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf and read the abstract of the work online, if it is present in the metadata.

Browse theses from many scientific fields and compile a correct bibliography.

1

Rusch, Thomas, Patrick Mair, and Reinhold Hatzinger. "Psychometrics With R: A Review Of CRAN Packages For Item Response Theory". WU Vienna University of Economics and Business, 2013. http://epub.wu.ac.at/4010/1/resrepIRThandbook.pdf.

Full text
Abstract:
In this paper we review the current state of R packages for Item Response Theory (IRT). We group the available packages based on their purpose and provide an overview of each package's main functionality. Each of the packages we describe has a peer-reviewed publication associated with it. We also provide a tutorial analysis of data from the 1990 Workplace Industrial Relations Survey to show how the breadth and flexibility of IRT packages in R can be leveraged to conduct even challenging item analyses with versatility and ease. These items relate to the type of consultations that are carried out in a firm when major changes are implemented. We first use unidimensional IRT models, only to discover that they do not fit well. We then use nonparametric IRT to explore the possible causes of the scaling problem. Based on the results of this exploration, we finally use a two-dimensional model on a subset of the original items to achieve a good fit with a sensible interpretation, namely that there are two types of consultations a firm may engage in: consultations with workers/representatives from the firm and with official union representatives. The different items relate mostly to one of these dimensions, and firms can be scaled well along these two dimensions.
Series: Discussion Paper Series / Center for Empirical Research Methods
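
As a hedged illustration of the workflow this abstract describes (fit a unidimensional model, diagnose misfit, move to two dimensions), here is a minimal sketch using the CRAN package mirt; the LSAT7 data bundled with mirt is a stand-in, not the survey data analyzed in the paper.

```r
# Sketch only: mirt is one of the reviewed package families; LSAT7 stands in
# for the 1990 Workplace Industrial Relations Survey items.
library(mirt)

resp <- expand.table(LSAT7)                      # dichotomous item responses

fit1 <- mirt(resp, model = 1, itemtype = "2PL")  # unidimensional 2PL
M2(fit1)                                         # limited-information fit check

fit2 <- mirt(resp, model = 2, itemtype = "2PL")  # exploratory two-dimensional
anova(fit1, fit2)                                # does the second dimension help?
```
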
2

Zeileis, Achim, and Christian Kleiber. "Approximate replication of high-breakdown robust regression techniques". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/422/1/document.pdf.

Full text
Abstract:
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed.
Series: Research Report Series / Department of Statistics and Mathematics
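
The replication problem in miniature, as a hedged sketch: high-breakdown estimators such as least trimmed squares are computed by randomized subsampling, so two runs generally differ unless the RNG seed is fixed. MASS::lqs and the stackloss data are illustrative choices, not the paper's setup.

```r
library(MASS)

set.seed(1)
fit_a <- lqs(stack.loss ~ ., data = stackloss, method = "lts")
set.seed(2)
fit_b <- lqs(stack.loss ~ ., data = stackloss, method = "lts")
coef(fit_a) - coef(fit_b)             # typically nonzero: only approximate replication

set.seed(1)
fit_c <- lqs(stack.loss ~ ., data = stackloss, method = "lts")
all.equal(coef(fit_a), coef(fit_c))   # TRUE: fixing the seed restores exact replication
```
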
3

Zeileis, Achim, Ajay Shah, and Ila Patnaik. "Exchange Rate Regime Analysis Using Structural Change Methods". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/386/1/document.pdf.

Full text
Abstract:
Regression models for de facto currency regime classification are complemented by inferential techniques for tracking the stability of exchange rate regimes. Several structural change methods are adapted to these regressions: tools for assessing the stability of exchange rate regressions in historical data (testing), in incoming data (monitoring) and for determining the breakpoints of shifts in the exchange rate regime (dating). The tools are illustrated by investigating the Chinese exchange rate regime after China gave up its fixed exchange rate to the US dollar in 2005 and by tracking the evolution of the Indian exchange rate regime since 1993.
Series: Research Report Series / Department of Statistics and Mathematics
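
The testing/monitoring/dating triple maps onto the strucchange package (by the same first author); a hedged sketch on a generic regression, where dat, y and x are hypothetical placeholders for the exchange rate regression.

```r
library(strucchange)

# testing: parameter-stability test on historical data (dat, y, x hypothetical)
cus <- efp(y ~ x, data = dat, type = "OLS-CUSUM")
sctest(cus)

# monitoring: start from a stable history, then watch incoming observations
mon <- mefp(y ~ x, data = dat[1:100, ], type = "OLS-CUSUM")
mon <- monitor(mon, data = dat)

# dating: locate the breakpoints of regime shifts
bp <- breakpoints(y ~ x, data = dat)
summary(bp)
```
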
4

Zeileis, Achim, Christian Kleiber, and Simon Jackman. "Regression Models for Count Data in R". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/1168/1/document.pdf.

Full text
Abstract:
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of zero-inflated and hurdle regression models in the functions zeroinfl() and hurdle() from the package pscl is introduced. It re-uses design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both model classes are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social and political sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
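
The two functions named in the abstract can be tried directly on the bioChemists data that ships with pscl (the paper itself uses medical care data); a minimal sketch:

```r
library(pscl)
data("bioChemists", package = "pscl")

# zero-inflated negative binomial for article counts
fm_zinb <- zeroinfl(art ~ fem + mar + kid5 + phd + ment,
                    data = bioChemists, dist = "negbin")

# hurdle model with the same regressors
fm_hurdle <- hurdle(art ~ fem + mar + kid5 + phd + ment,
                    data = bioChemists, dist = "negbin")

AIC(fm_zinb, fm_hurdle)   # compare the two zero-augmented specifications
```
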
5

Yang, Congcong, Alfred Taudes, and Guozhi Dong. "Efficiency Analysis of European Freight Villages - Three Peers for Benchmarking". Department für Informationsverarbeitung und Prozessmanagement, WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4517/1/Efficiency_analysis_of_European_FVs%2Dthree_peers_for_benchmarking.pdf.

Full text
Abstract:
Measuring the performance of Freight Villages (FVs) has important implications for logistics companies and other related companies as well as governments. In this paper we apply Data Envelopment Analysis (DEA) to measure the performance of European FVs in a purely data-driven way, incorporating the nature of FVs as complex operations that use multiple inputs and produce several outputs. We employ several DEA models and perform a complete sensitivity analysis of the appropriateness of the chosen input and output variables, as well as an assessment of the robustness of the efficiency scores. It turns out that about half of the 20 FVs analyzed are inefficient, with utilization of the intermodal area, warehouse capacity and the level of goods handled being the most important areas for improvement. While we find no significant differences in efficiency between FVs of different sizes and in different countries, the FVs Eurocentre Toulouse, Interporto Quadrante Europa and GVZ Nürnberg constitute more than 90% of the benchmark share.
Series: Working Papers on Information Systems, Information Business and Operations
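
A hedged sketch of an input-oriented DEA run with the Benchmarking package; the toy inputs and outputs below are placeholders for the freight-village variables, but eff() and peers() return exactly the kind of efficiency scores and benchmark peers the abstract refers to.

```r
library(Benchmarking)

set.seed(42)
X <- matrix(runif(16, 1, 10), nrow = 8, ncol = 2)      # toy inputs, e.g. area, staff
Y <- matrix(rowSums(X) * runif(8, 0.5, 1), ncol = 1)   # toy output, e.g. goods handled

e <- dea(X, Y, RTS = "vrs", ORIENTATION = "in")  # input-oriented, variable returns to scale
eff(e)     # efficiency scores; units scoring 1 span the frontier
peers(e)   # frontier peers against which each inefficient unit is benchmarked
```
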
6

Yildirim, Evrim. "Development Of In Vitro Micropropagation Techniques For Saffron (Crocus sativus L.)". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608730/index.pdf.

Full text
Abstract:
In vitro micropropagation of saffron (Crocus sativus L.) by direct and indirect organogenesis was the aim of this study. The effect of plant growth regulators on growth parameters, such as corm production, sprouting time and germination ratio, was also investigated under ex vitro conditions. For in vitro regeneration of saffron, the effects of 2,4-D (2,4-dichlorophenoxyacetic acid) and BAP (6-benzylaminopurine) were tested initially. It was observed that the combination of 0.25 mg/L 2,4-D and 1 mg/L BAP was superior for indirect organogenesis, while 1 mg/L 2,4-D and 1 mg/L BAP was favorable for direct organogenesis. In subsequent direct organogenesis experiments, BAP (1 mg/L) without 2,4-D stimulated further shoot development. For adventitious corm and root induction, NAA (naphthaleneacetic acid) and BAP combinations were tested; although a few corm formations were achieved, root development was not observed with these treatments. Further experiments showed that culture medium supplemented with 1 mg/L IBA (indole-3-butyric acid) and 5% sucrose was effective in obtaining contractile root formation and increasing corm number. Overall efficiency was calculated as 59.26% for contractile root formation, 35.19% for corm formation and 100% for shoot development. In the ex vitro studies, 50 mg/L IAA (indole-3-acetic acid), 50 mg/L kinetin and 200 mg/L GA3 (gibberellic acid) were used; these applications were not as effective as expected on the assessed growth parameters.
7

Benko, Michal. "Functional data analysis with applications in finance". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2007. http://dx.doi.org/10.18452/15585.

Full text
Abstract:
In many different fields of applied statistics an object of interest depends on a continuous parameter. Typical examples in finance are implied volatility functions, yield curves or risk-neutral densities. Due to market conventions and further technical reasons, these objects are observable only on a discrete grid, e.g. the grid of strikes and maturities for which trades were settled at a given time point. By collecting these functions for several time points (e.g. days) or for different underlyings, a sample of functions is obtained - a functional data set. The first topic considered in this thesis concerns strategies for recovering the functional objects (e.g. the implied volatility function) from the observed data using nonparametric smoothing methods. Besides the standard smoothing methods, a procedure based on a combination of nonparametric smoothing and no-arbitrage-theory results is proposed for implied volatility smoothing. The second part of the thesis is devoted to functional data analysis (FDA) and its connection to problems arising in the empirical analysis of financial markets. The theoretical part focuses on functional principal component analysis, the functional counterpart of the well-known multivariate dimension-reduction technique. A comprehensive overview of the existing methods is given, and an estimation method motivated by the solution of the dual problem as well as two-sample inference based on functional principal components are discussed. The FDA techniques are applied to the analysis of implied volatility and yield curve dynamics. In addition, the implementation of the FDA techniques together with an FDA library for the statistical environment XploRe is presented.
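
The thesis implements its methods in XploRe; as a present-day analogue, the functional PCA step can be sketched with R's fda package, assuming hypothetical discretized curves in a matrix curves (rows = grid points, columns = days) observed on a grid grid.

```r
library(fda)

# curves and grid are hypothetical placeholders for, e.g., daily implied
# volatility curves observed on a discrete strike/maturity grid
basis <- create.bspline.basis(rangeval = range(grid), nbasis = 15)
fdobj <- Data2fd(argvals = grid, y = curves, basisobj = basis)  # smooth into functions

fpca <- pca.fd(fdobj, nharm = 3)  # first three functional principal components
fpca$varprop                      # variance share per component
plot(fpca$harmonics)              # estimated eigenfunctions
```
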
8

Maier, Marco J. "DirichletReg: Dirichlet Regression for Compositional Data in R". WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4077/1/Report125.pdf.

Full text
Abstract:
Dirichlet regression models can be used to analyze a set of variables lying in a bounded interval that sum to a constant (e.g., proportions, rates, compositions, etc.) and exhibit skewness and heteroscedasticity, without having to transform the data. The presented model has two parametrizations: one using the Dirichlet distribution's common alpha parameters, and a reparametrization of the alphas that sets up a mean-and-dispersion-like model. By applying appropriate link functions, a GLM-like framework is set up that allows for the analysis of such data in a straightforward and familiar way, because interpretation is similar to multinomial logistic regression. This paper gives a brief theoretical foundation and describes the implementation as well as the application (including worked examples) of the Dirichlet regression methods implemented in the package DirichletReg (Maier, 2013) in the R language (R Core Team, 2013). (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
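
The two parametrizations can be seen side by side in the package's Arctic lake example (composition of sand/silt/clay regressed on water depth), which Maier's report also uses; a minimal sketch:

```r
library(DirichletReg)

ArcticLake$Y <- DR_data(ArcticLake[, c("sand", "silt", "clay")])  # compositional response

fit_common <- DirichReg(Y ~ depth, data = ArcticLake)      # common (alpha) parametrization
fit_alt    <- DirichReg(Y ~ depth | 1, data = ArcticLake,  # mean-and-dispersion model
                        model = "alternative")

summary(fit_common)
```
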
9

Chao, Shih-Kang. "Quantile regression in risk calibration". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17223.

Full text
Abstract:
Quantile regression studies the conditional quantile function $Q_{Y|X}(\tau)$, which satisfies $F_{Y|X}[Q_{Y|X}(\tau)] = \tau$ for all $\tau \in (0,1)$, where $F_{Y|X}$ is the conditional CDF of Y given X. Quantile regression allows for a closer inspection of the conditional distribution beyond the conditional moments. This technique is particularly useful for, for example, the Value-at-Risk (VaR), which the Basel accords (2011) require all banks to report, or the "quantile treatment effect" and "conditional stochastic dominance (CSD)", which are economic concepts for measuring the effectiveness of a government policy or a medical treatment. For all its applicability, developing the technique of quantile regression is more challenging than mean regression: one must be adept with general regression problems and M-estimators, and must deal with non-smooth loss functions. In this dissertation, chapter 2 is devoted to empirical risk management during financial crises using quantile regression. Chapters 3 and 4 address high dimensionality and the nonparametric technique of quantile regression.
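
The VaR connection is easy to make concrete: the τ-quantile regression of losses on conditioning information is exactly what quantreg::rq estimates. A hedged sketch, with dat, loss and vol as hypothetical placeholders:

```r
library(quantreg)

# 99% conditional quantile of losses = a VaR regression
# (dat, loss, vol are hypothetical names, not from the thesis)
fit <- rq(loss ~ vol, tau = 0.99, data = dat)
summary(fit, se = "boot")                       # bootstrap standard errors

predict(fit, newdata = data.frame(vol = 0.2))   # VaR forecast at a given volatility
```
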
10

Burda, Maike M. "Testing for causality with Wald tests under nonregular conditions". Doctoral thesis, [S.l.] : [s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=968852432.

Full text
11

Borak, Szymon. "Dynamic semiparametric factor models". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2008. http://dx.doi.org/10.18452/15802.

Full text
Abstract:
High-dimensional regression problems which reveal dynamic behavior occur frequently in many different fields of science. The dynamics of the whole complex system is typically analyzed through the time propagation of a small number of factors, which are loaded with time-invariant functions of explanatory variables. In this thesis we consider a dynamic semiparametric factor model which assumes nonparametric loading functions. We start with a short discussion of related statistical techniques and present the properties of the model. Real-data applications are then discussed, with particular focus on implied volatility dynamics and the resulting factor hedging of barrier options.
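
There is no single canonical CRAN implementation of this model; as a much-simplified analogue of its idea (a few factor series driving a high-dimensional panel), here is a plain PCA factor extraction on a hypothetical T x J data matrix Y. Where PCA produces static loading vectors, the dynamic semiparametric factor model estimates nonparametric loading functions of covariates instead.

```r
# Y is a hypothetical matrix: rows = time points, columns = grid/design points
pca <- prcomp(Y)
Z   <- pca$x[, 1:3]          # low-dimensional factor series (the time propagation)
m   <- pca$rotation[, 1:3]   # static loadings; the DSFM replaces these vectors
                             # with nonparametric functions of covariates
matplot(Z, type = "l")       # inspect the factor dynamics
```
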
12

Xu, Yafei. "High Dimensional Financial Engineering: Dependence Modeling and Sequential Surveillance". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/18790.

Full text
Abstract:
This dissertation focuses on high-dimensional financial engineering, especially dependence modeling and sequential surveillance. On the dependence-modeling side, it gives an introduction to high-dimensional copulas, concentrating on the state of the art in copula research. A more complex financial-engineering application of high-dimensional copulas is the pricing of portfolio-like credit derivatives, i.e., credit default swap index (CDX) tranches. Here, a convex combination of copulas is proposed for CDX tranche pricing, with components drawn from the elliptical copula family (Gaussian and Student-t), the Archimedean copula family (Frank, Gumbel, Clayton and Joe) and the hierarchical Archimedean copula family. The financial surveillance part focuses on monitoring high-dimensional portfolios (in 5, 29 and 90 dimensions) through the development of a nonparametric multivariate statistical process control chart, the energy-test-based control chart (ETCC). To support further research and practice of the chart devised in this dissertation, an R package, "EnergyOnlineCPM", has been developed; it has been accepted and published on the Comprehensive R Archive Network (CRAN) and is the first package that can monitor shifts in mean and covariance jointly online.
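
The convex combination of copulas can be sketched by hand with the copula package; the weight and parameters below are illustrative, not the thesis's CDX calibration:

```r
library(copula)

cop_g <- normalCopula(0.5, dim = 2)   # elliptical component
cop_c <- claytonCopula(2,  dim = 2)   # Archimedean component
w <- 0.6                              # illustrative mixture weight

# density of the convex combination of the two copulas
dmix <- function(u) w * dCopula(u, cop_g) + (1 - w) * dCopula(u, cop_c)

u <- cbind(runif(5), runif(5))
dmix(u)   # mixture density; in the thesis the weights are fitted to CDX tranche data
```
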
13

Nesterov, Alexander. "Three essays in matching mechanism design". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17647.

Full text
Abstract:
I consider the problem of allocating indivisible objects among agents according to their preferences when transfers are absent. In Chapter 1, I study the tradeoff between fairness and efficiency in the class of strategy-proof allocation mechanisms. The main finding is that for strategy-proof mechanisms the following efficiency and fairness criteria are mutually incompatible: (1) ex-post efficiency and envy-freeness, (2) ordinal efficiency and weak envy-freeness, and (3) ordinal efficiency and the equal-division lower bound. In Chapter 2, the focus is on two representations of an allocation when randomization is used: as a probabilistic assignment and as a lottery over deterministic assignments. To help facilitate the design of practical lottery mechanisms, we provide new tools for obtaining stochastic improvements in lotteries. As applications, we propose lottery mechanisms that improve upon the widely used random serial dictatorship mechanism, and a lottery representation of its competitor, the probabilistic serial mechanism. In Chapter 3, I propose a new mechanism to assign students to primary schools: the Adaptive Acceptance rule (AA). AA collects von Neumann-Morgenstern utilities of students over schools and implements the assignment using an iterative procedure similar to the prevalent Immediate Acceptance rule (IA). AA enjoys a strong combination of incentive and efficiency properties compared to IA and its rival, the Deferred Acceptance rule (DA). In the case of strict priorities, AA implements the student-optimal stable matching in dominant strategies, which dominates each equilibrium outcome of IA. In the case of no priorities, AA is ex-post efficient while some equilibrium outcomes of IA are not; also, AA causes loss of ex-ante efficiency less often than DA. If, in addition, students have common ordinal preferences, AA is approximately strategy-proof and ex-ante dominates DA.
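
For readers new to the area, the benchmark mechanism the thesis improves upon - random serial dictatorship - fits in a few lines; prefs is a hypothetical matrix whose row i lists agent i's objects from best to worst:

```r
rsd <- function(prefs) {
  n <- nrow(prefs)
  serial_order <- sample(n)         # uniformly random serial order
  remaining <- seq_len(n)
  assignment <- integer(n)
  for (i in serial_order) {
    pick <- prefs[i, prefs[i, ] %in% remaining][1]  # favorite remaining object
    assignment[i] <- pick
    remaining <- setdiff(remaining, pick)
  }
  assignment                        # assignment[i] = object allocated to agent i
}

prefs <- rbind(c(1, 2, 3), c(1, 3, 2), c(2, 1, 3))  # toy preference profile
rsd(prefs)
```
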
14

Örsal, Deniz Dilan Karaman. "Essays on panel cointegration testing". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2009. http://dx.doi.org/10.18452/15894.

Full text
Abstract:
This thesis is composed of four essays which contribute to the literature on panel cointegration methodology. The first essay compares the finite-sample properties of the four residual-based panel cointegration tests of Pedroni (1995, 1999) and the likelihood-based panel cointegration test of Larsson et al. (2001). The simulation results indicate that the panel-t test statistic of Pedroni has the best finite-sample properties among the five panel cointegration test statistics evaluated. The second essay presents a corrected version of the proof of Larsson et al. (2001) concerning the finiteness of the moments of the asymptotic trace statistic, for the case in which the difference between the number of variables and the number of existing cointegrating relations is one. The third essay proposes a new likelihood-based panel cointegration test in the presence of a linear time trend in the data generating process. This new test is an extension of the likelihood ratio test of Saikkonen and Lütkepohl (2000) for trend-adjusted data to the panel data framework, and is called the panel SL test. Under the null hypothesis, the panel SL test statistic is standard normally distributed as the number of time periods (T) and the number of cross-sections (N) tend to infinity sequentially. The finite-sample properties of the test are investigated by means of a Monte Carlo study: the new test has reasonable empirical size as T and N increase, and high power in small samples. The last essay of the thesis analyzes the long-run money demand relation among OECD countries by panel unit root and panel cointegration testing techniques. The panel SL cointegration test and the tests of Pedroni (1999) are used to detect the existence of a stationary long-run money demand relation. Moreover, the money demand function is estimated with the panel dynamic ordinary least squares method of Mark and Sul (2003).
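
The standardization behind panel statistics of this type is simple to sketch: per-unit likelihood-ratio trace statistics are averaged, centered and scaled by the asymptotic moments so that the panel statistic is approximately standard normal under the null. The moment values below are placeholders, not the paper's tabulated numbers.

```r
# sqrt(N) * (mean of individual statistics - E[Z]) / sqrt(Var[Z]) ~ N(0,1)
# under H0 as T, N -> infinity sequentially
panel_stat <- function(lr_stats, E_Z, V_Z) {
  N <- length(lr_stats)
  sqrt(N) * (mean(lr_stats) - E_Z) / sqrt(V_Z)
}

lr <- c(1.8, 2.4, 1.1, 3.0, 2.2)            # hypothetical per-country trace statistics
z  <- panel_stat(lr, E_Z = 2.0, V_Z = 3.0)  # E_Z, V_Z: placeholder asymptotic moments
1 - pnorm(z)                                # one-sided p-value
```
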
15

Song, Song. "Confidence bands in quantile regression and generalized dynamic semiparametric factor models". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2010. http://dx.doi.org/10.18452/16341.

Full text
Abstract:
In many applications it is necessary to know the stochastic fluctuation of the maximal deviations of nonparametric quantile estimates, e.g., to check various parametric models. Uniform confidence bands are therefore constructed for nonparametric quantile estimates of regression functions. The first method is based on strong approximations of the empirical process and extreme value theory; the strong uniform consistency rate is also established under general conditions. The second method is based on the bootstrap resampling method, and it is proved that the bootstrap approximation provides a substantial improvement. The case of multidimensional and discrete regressor variables is dealt with using a partial linear model, and a labor market analysis is provided to illustrate the method. High-dimensional time series which reveal nonstationary and possibly periodic behavior occur frequently in many fields of science, e.g., macroeconomics, meteorology, medicine and financial engineering. A common approach is to separate the modeling of a high-dimensional time series into the time propagation of a low-dimensional time series and high-dimensional, time-invariant functions via dynamic factor analysis. We propose a two-step estimation procedure. In the first step, we detrend the time series by incorporating a time basis selected by a group-Lasso-type technique and choose the space basis based on smoothed functional principal component analysis; we show the properties of this estimator under the dependent scenario. In the second step, we obtain the detrended, low-dimensional (stationary) stochastic process.
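
A toy version of the second (bootstrap) construction for a linear quantile fit - pointwise rather than uniform, so only a sketch of the idea; the uniform bands in the thesis additionally control the maximal deviation over x:

```r
library(quantreg)

set.seed(1)
n <- 200
x <- runif(n)
y <- 1 + 2 * x + rnorm(n) * (0.5 + x)      # heteroscedastic toy data
grid <- seq(0, 1, length.out = 50)

boot_fits <- replicate(500, {
  idx <- sample(n, replace = TRUE)
  fit <- rq(y[idx] ~ x[idx], tau = 0.9)    # 90% quantile regression on a resample
  as.vector(cbind(1, grid) %*% coef(fit))  # fitted quantile curve on the grid
})
band <- apply(boot_fits, 1, quantile, probs = c(0.025, 0.975))  # pointwise 95% band
```
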
16

Hofmarcher, Paul. "Advanced Regression Methods in Finance and Economics: Three Essays". Thesis, 2012. http://epub.wu.ac.at/3489/1/DissFINAL.pdf.

Full text
Abstract:
In this thesis advanced regression methods are applied to investigate highly relevant research questions in finance and economics. In the field of credit risk, the thesis investigates a hierarchical model which allows one to obtain a consensus score when several ratings are available for each firm. Autoregressive processes and random effects are used to model the correlation structure both between and within the obligors in the sample; the model also allows validation of the raters themselves. The problem of model uncertainty and multicollinearity between the explanatory variables is addressed in the other two applications: penalized regressions, like bridge regressions, are used to handle multicollinearity, while model averaging techniques account for model uncertainty. The second part of the thesis makes use of Bayesian elastic nets and Bayesian Model Averaging (BMA) techniques to discuss long-term economic growth. It identifies variables which are significantly related to long-term growth and illustrates the superiority of this approach in terms of predictive accuracy. Finally, the third part combines ridge regressions with BMA to identify macroeconomic variables which are significantly related to aggregated firm failure rates. The estimated results deliver important insights for, e.g., stress-test scenarios. (author's abstract)
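
Of the methods listed, Bayesian model averaging is the easiest to sketch: with the BMA package, bicreg() averages over all-subsets linear models. The data frame growth and its response gdp_growth are hypothetical placeholders for the growth data used in the essay.

```r
library(BMA)

# growth / gdp_growth are hypothetical, standing in for the essay's growth data
X <- growth[, setdiff(names(growth), "gdp_growth")]  # candidate regressors
fit <- bicreg(X, growth$gdp_growth)                  # BMA over linear models

summary(fit)   # posterior inclusion probabilities flag variables robustly
               # related to long-term growth
```
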
17

March, Nicolas. "Building a Data Mining Framework for Target Marketing". Thesis, 2011. http://epub.wu.ac.at/3242/1/diss_epub_nicolas_march_20111002.pdf.

Full text
Abstract:
Most retailers and scientists agree that supporting the buying decisions of individual customers or groups of customers with specific product recommendations holds great promise. Target-oriented promotional campaigns are more profitable than uniform methods of sales promotion such as discount pricing campaigns. This seems to be particularly true if the promoted products are well matched to the preferences of the customers or customer groups. But how can retailers identify customer groups and determine which products to offer them? To answer this question, this dissertation describes an algorithmic procedure which identifies customer groups with similar preferences for specific product combinations in recorded transaction data. In addition, for each customer group it recommends products which promise higher sales through cross-selling if appropriate promotion techniques are applied. To illustrate the application of this algorithmic approach, an analysis is performed on the transaction database of a supermarket. The identified customer groups are used for a simulation. The results show that appropriate promotional campaigns which implement this algorithmic approach can achieve an increase in profit of 15% to as much as 191%, in contrast to uniform discounts on the purchase price of bestsellers. (author's abstract)
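
The dissertation's own framework is not on CRAN; a standard association-rule baseline for the same task (mining cross-selling candidates from transaction data) can be sketched with arules and its bundled Groceries data:

```r
library(arules)

data("Groceries")   # 9,835 real supermarket transactions bundled with arules

rules <- apriori(Groceries,
                 parameter = list(supp = 0.001, conf = 0.5, minlen = 2))
inspect(head(sort(rules, by = "lift"), 5))   # top cross-selling candidates
```
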