Dissertations / Theses on the topic 'Identification structurelle'
Consult the top 21 dissertations / theses for your research on the topic 'Identification structurelle.'
Balaniuk, Remis. "Identification structurelle." PhD thesis, Grenoble INPG, 1996. http://tel.archives-ouvertes.fr/tel-00004974.
Balaniuk, Remis. "Identification structurelle." PhD thesis, Grenoble INPG, 1996. https://theses.hal.science/tel-00004974.
Lesellier, Max. "Articles en microéconométrie structurelle." Electronic Thesis or Diss., Toulouse 1, 2023. http://www.theses.fr/2023TOU10016.
In this thesis, I develop new econometric methods to test and relax statistical or equilibrium restrictions that are commonly assumed in popular industrial organization models, including the random coefficient logit model, entry games, and optimal contracts. I then apply these methods to investigate how the usual assumptions affect the results obtained in several pertinent empirical examples. The thesis is organized into three chapters. The first chapter is entitled "Testing and Relaxing Distributional Assumptions on Random Coefficients in Demand Models". It is co-authored with two fellow graduate students, Hippolyte Boucher and Gökçe Gökkoca. We provide a method to test and relax the distributional assumptions on random coefficients in the differentiated-products demand model initiated by Berry (1994) and Berry, Levinsohn and Pakes (1995). This model is the workhorse model for demand estimation with market-level data, and it uses random coefficients to account for unobserved preference heterogeneity. In this chapter, we provide a formal moment-based specification test on the distribution of random coefficients, which allows researchers to test the chosen specification (for instance, normality) without re-estimating the model under a more flexible parametrization. The moment conditions (or, equivalently, the instruments) chosen for the test are designed to maximize its power when the RC distribution is misspecified. By exploiting the duality between estimation and testing, we show that these instruments can also improve the estimation of the BLP model under a flexible parametrization (here, we consider the case of the Gaussian mixture). Finally, we validate our approach with Monte Carlo simulations and an empirical application using data on car purchases in Germany. The second chapter is entitled "Moment Inequalities for Entry Games with Heterogeneous Types".
This chapter is co-authored with my advisor Christian Bontemps and with Rohit Kumar. We develop new methods to simplify the estimation of entry games when the equilibrium selection mechanism is unrestricted. In particular, we develop an algorithm that recursively selects a relevant subset of inequalities that sharply characterizes the set of admissible parameters. Then, we propose a way to circumvent the problem of deriving an easy-to-compute and competitive critical value by smoothing the minimum function. In our case, this yields a pivotal test statistic that eliminates "numerically" the non-binding moments. We show that we recover a consistent confidence region by letting the smoothing parameter increase with the sample size. Interestingly, our procedure can easily be adapted to the case with covariates, including continuous ones. Finally, we conduct full-scale Monte Carlo simulations to assess the performance of our new estimation procedure. The third chapter is entitled "Identification and Estimation of Incentive Contracts under Asymmetric Information: an Application to the French Water Sector". This chapter has its roots in a project Christian Bontemps and David Martimort started many years ago. We develop a Principal-Agent model to represent management contracting for public-service delivery. A firm (the Agent) has private knowledge of its marginal cost of production. The local public authority (the Principal) cares about the consumers' net surplus from consuming the services and the (weighted) firm's profit. Contractual negotiation is modeled as the choice by the privately informed firm within a menu of options determining both the unit price charged to consumers and the fixed fee. Our theoretical model characterizes optimal contracting in this environment. We then explicitly study the nonparametric identification of the model and perform a semi-parametric estimation on a dataset coming from the 2004 wave of a survey by the French environment institute.
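The demand inverse at the heart of BLP-style estimation, used throughout the abstract above, can be sketched numerically. The toy below is illustrative only: the single product characteristic, the Gaussian random coefficient, and all values are assumptions, not the chapter's actual specification. It recovers mean utilities from observed market shares with the classic contraction mapping, delta <- delta + log(s_obs) - log(s_pred(delta)).

```python
import numpy as np

def predicted_shares(delta, x, sigma, nu):
    # Mixed logit shares: utility u_ij = delta_j + sigma * nu_i * x_j,
    # averaged over Monte Carlo draws nu_i; outside good has utility 0.
    u = delta[None, :] + sigma * nu[:, None] * x[None, :]   # (draws, products)
    e = np.exp(u)
    return (e / (1.0 + e.sum(axis=1, keepdims=True))).mean(axis=0)

def invert_demand(s_obs, x, sigma, nu, tol=1e-12, max_iter=10_000):
    """BLP contraction: delta <- delta + log(s_obs) - log(s_pred(delta))."""
    delta = np.log(s_obs) - np.log(1.0 - s_obs.sum())  # plain-logit start
    for _ in range(max_iter):
        step = np.log(s_obs) - np.log(predicted_shares(delta, x, sigma, nu))
        delta = delta + step
        if np.max(np.abs(step)) < tol:
            break
    return delta

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])            # one hypothetical characteristic
nu = rng.standard_normal(5_000)          # simulation draws for the RC
delta_true = np.array([-1.0, -0.5, 0.2])
s_obs = predicted_shares(delta_true, x, 0.8, nu)  # "observed" shares
delta_hat = invert_demand(s_obs, x, 0.8, nu)
print(np.max(np.abs(delta_hat - delta_true)))     # recovery error, tiny
```

Once mean utilities are concentrated out this way, the remaining structural parameters (here only sigma) can be searched over a much smaller space, which is the dimensionality reduction the abstract refers to.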
Decotte, Benjamin. "Identifiabilité structurelle de modèles bond graphs." Lille 1, 2002. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2002/50376-2002-291.pdf.
Duong, Hoài Nghia. "Identification structurelle et paramétrique des systèmes linéaires monovariables et multivariables." Grenoble INPG, 1993. http://www.theses.fr/1993INPG0080.
Bariani, Jean-Paul. "Conception et réalisation d'un logiciel de CAO en automatique : identification structurelle et commande PID." Nice, 1988. http://www.theses.fr/1988NICE4175.
Bariani, Jean-Paul. "Conception et réalisation d'un logiciel de C.A.O. en automatique : identification structurelle et commande PID." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376115157.
Jedidi, Safa. "Identification décentralisée des systèmes de grande taille : approches appliquées à la thermique des bâtiments." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S072/document.
With the increasing complexity of the dynamical systems that appear in engineering and other fields of science, the study of large systems consisting of a set of interconnected subsystems has become an important subject of attention in various areas, such as robotics, transport networks, large spatial structures (solar panels, antennas, telescopes, etc.) and buildings, and has led to interesting problems of parametric identification, distributed control and optimization. The lack of a universal definition for the systems called "large systems", "complex systems" or "interconnected systems" demonstrates the confusion between these concepts and the difficulty of defining clear boundaries for such systems. The analysis of the identifiability and identification of these systems requires processing large-scale numerical models, managing diverse dynamics within the same system, and taking structural constraints (such as interconnections) into account. This is very difficult to handle, so these analyses are rarely carried out globally. Simplifying the problem by decomposing the large system into sub-problems is often the only possible solution. This thesis presents a decentralized approach for the identification of large-scale systems composed of a set of interconnected subsystems. The approach is based on the structural properties (controllability, observability and identifiability) of the global system. This methodological approach is applied to the thermal behaviour of buildings. Its advantage is demonstrated through comparisons with a global approach.
Denis-Vidal, Lilianne. "Identification d'un système biochimique, modélisation et contrôle d'un système de réacteurs." Compiègne, 1993. http://www.theses.fr/1993COMPD640.
Wang, Ao. "Three essays on microeconometric models of demand and their applications in empirical industrial organisation." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAG003.
The thesis consists of three chapters that study microeconometric models of demand and their applications in empirical industrial organisation. The first two papers focus on models of demand for bundles and study their identification and estimation under different data availability. The first paper is joint work with Alessandro Iaria (University of Bristol) and focuses on situations where purchase data at the bundle level are available. We present novel identification and estimation results for a mixed logit model of demand for bundles. In particular, we propose a new demand inverse in the presence of complementarity that makes it possible to concentrate the (potentially numerous) market-product specific fixed effects out of the likelihood function, substantially alleviating the challenge of dimensionality inherent in estimation. To illustrate the use of our methods, we estimate demand and supply in the US ready-to-eat cereal industry, where the proposed MLE reduces the numerical search from approximately 12,000 to 130 parameters. Our estimates suggest that ignoring Hicksian complementarity among different products often purchased in bundles may result in misleading demand estimates and counterfactuals. The second paper focuses on situations where only aggregate purchase data at the product level are available. It proposes a Berry, Levinsohn and Pakes (BLP, 1995) model of demand for bundles. Compared to BLP models of demand for single products, this model does not restrict products to be substitutes and, notably, allows for Hicksian complementarities among products that can be jointly chosen in a bundle. Leveraging the demand inverse of the first paper, it proposes constructive identification arguments for the model and a practically useful Generalized Method of Moments (GMM) estimator. In particular, this estimator can handle potentially large choice sets, and its implementation is straightforward, essentially as for a standard BLP estimator.
Finally, I illustrate the practical implementation of the methods and estimate the demand for Ready-To-Eat (RTE) cereals and milk in the US. The demand estimates suggest that RTE cereals and milk are overall Hicksian complements and that these complementarities are heterogeneous across bundles. Ignoring such complementarities results in misleading counterfactuals. The third paper is joint work with Xavier d'Haultfoeuille, Philippe Fevrier and Lionel Wilner and focuses on revenue management. Although revenue management has greatly increased the flexibility with which firms set prices, firms usually still impose constraints on their pricing strategy. There is as yet scarce evidence on the gains or losses of such strategies compared to uniform pricing or fully flexible strategies. In this paper, we quantify these gains and losses and identify their underlying sources in the context of French railway transportation. This is complicated by the censoring of demand and the absence of exogenous price variation. We develop an original identification strategy for demand that combines temporal variations in relative prices with moment inequalities stemming from basic rationality on the consumers' side and weak optimality conditions on the firm's pricing strategy. Our results suggest significant gains of the actual revenue management compared to uniform pricing, but also substantial losses compared to the optimal pricing strategy. Finally, we highlight the key role of revenue management in acquiring information when demand is uncertain.
Stefan, Diana. "Structural and parametric identification of bacterial regulatory networks." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM019/document.
High-throughput technologies yield large amounts of data about the steady-state levels and the dynamical changes of gene expression in bacteria. An important challenge for the biological interpretation of these data consists in deducing the topology of the underlying regulatory network, as well as quantitative gene regulation functions, from such data. A large number of inference methods have been proposed in the literature and have been successful in a variety of applications, although several problems remain. We focus here on improving two aspects of the inference methods. First, transcriptome data reflect the abundance of mRNA, whereas the components that carry out regulation are most often the proteins coded by the mRNAs. Although the concentrations of mRNA and protein correlate reasonably well during steady-state growth, this correlation becomes much more tenuous in time-series data acquired during growth transitions in bacteria, because of the very different half-lives of proteins and mRNA. Second, the dynamics of gene expression is controlled not only by transcription factors and other specific regulators, but also by global physiological effects that modify the activity of all genes. For example, the concentrations of (free) RNA polymerase and of ribosomes vary strongly with growth rate. We therefore have to take such effects into account when trying to reconstruct a regulatory network from gene expression data. We propose here a combined experimental and computational approach to address these two fundamental problems in the inference of quantitative models of the activity of bacterial promoters from time-series gene expression data. We focus on the case where the dynamics of gene expression is measured in vivo and in real time by means of fluorescent reporter genes. Our network reconstruction approach accounts for the differences between mRNA and protein half-lives and takes global physiological effects into account.
When the half-lives of the proteins are available, the measurement models used for deriving the activities of genes from fluorescence data are integrated to yield estimates of protein concentrations. The global physiological state of the cell is estimated from the activity of a phage promoter, whose expression is not controlled by any transcription factor and depends only on the activity of the transcriptional and translational machinery. We apply the approach to a central module in the regulatory network controlling motility and the chemotaxis system in Escherichia coli. This module comprises the FliA, FlgM and tar genes. FliA is a sigma factor that directs RNA polymerase to operons coding for components of the flagellar assembly. The effect of FliA is counteracted by the antisigma factor FlgM, itself transcribed by FliA. The third component of the network, tar, codes for the aspartate chemoreceptor protein Tar and is directly transcribed by the FliA-containing RNA polymerase holoenzyme. The FliA-FlgM module is particularly well-suited for studying the inference problems considered here, since the network has been well studied and protein half-lives play an important role in its functioning. We stimulated the FliA-FlgM module in a variety of wild-type and mutant strains and in different growth media. The measured transcriptional response of the genes was used to systematically test the information required for the reliable inference of the regulatory interactions and of quantitative predictive models of gene regulation. Our results show that, for the reliable reconstruction of transcriptional regulatory networks in bacteria, it is necessary to include global effects in the network model and to explicitly deduce protein concentrations from the observed expression profiles. Our approach should be generally applicable to a large variety of network inference problems, and we discuss limitations and possible extensions of the method.
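The role of protein half-lives stressed in this abstract can be illustrated with the standard kinetic model linking a promoter activity f(t) to the concentration p(t) of its product: dp/dt = f(t) - gamma * p, with gamma = ln(2) / half-life. The sketch below uses hypothetical values, not the thesis's actual measurement model; it shows why a short-lived species (mRNA-like) tracks promoter activity while a stable protein smooths it out.

```python
import numpy as np

def protein_from_activity(f, t, half_life):
    """Integrate dp/dt = f(t) - gamma * p with forward Euler,
    gamma = ln(2)/half_life, starting at the steady state for f[0]... f mean."""
    gamma = np.log(2.0) / half_life
    p = np.empty_like(t)
    p[0] = f[0] / gamma                     # steady-state initial condition
    for k in range(1, len(t)):
        dt = t[k] - t[k - 1]
        p[k] = p[k - 1] + dt * (f[k - 1] - gamma * p[k - 1])
    return p

t = np.linspace(0.0, 300.0, 3001)           # minutes (illustrative)
f = 1.0 + 0.5 * np.sign(np.sin(t / 30.0))   # stepwise promoter activity
fast = protein_from_activity(f, t, half_life=5.0)     # short-lived, mRNA-like
slow = protein_from_activity(f, t, half_life=120.0)   # stable protein
# Relative fluctuations: the short-lived species follows the steps of f,
# the stable protein integrates them into a much smoother profile.
cv_fast = fast.std() / fast.mean()
cv_slow = slow.std() / slow.mean()
print(cv_fast > cv_slow)
```

This is the basic reason why, as the abstract notes, mRNA profiles cannot be used directly as proxies for regulator concentrations during growth transitions.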
Boubacar, Mainassara Yacouba. "Estimation, validation et identification des modèles ARMA faibles multivariés." Phd thesis, Université Charles de Gaulle - Lille III, 2009. http://tel.archives-ouvertes.fr/tel-00452032.
Corlay, Trujillo Monica Maria. "Identification / égalisation aveugle spatio-temporelles : combinaison des approches structurelles et des approches d'ordre supérieur." Paris, ENST, 2001. http://www.theses.fr/2001ENST0004.
Corlay, Trujillo Monica Maria. "Identification-égalisation aveugle spatio-temporelles : combinaison des approches structurelles et des approches d'ordre supérieur." Paris : École nationale supérieure des télécommunications, 2002. http://catalogue.bnf.fr/ark:/12148/cb38838485q.
Full textVincent, Rémy. "Identification passive en acoustique : estimateurs et applications au SHM." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT020/document.
The Ward identity is a relationship that enables the identification of damped linear systems, i.e. the estimation of their characteristic properties. This identity is used to provide new observation models that are available in an estimation context where sources are uncontrolled by the user. An estimation and detection theory is derived from these models, and various performance studies are conducted for several estimators. The reach of the proposed methods is extended to Structural Health Monitoring (SHM), which aims at measuring and tracking the health of buildings, such as a bridge or a skyscraper for instance. The acoustic modality is chosen as it provides parameter estimation complementary to the state of the art in SHM, such as the recovery of structural and geometrical parameters. Some scenarios are experimentally illustrated using the developed algorithms, adapted to fit the constraints set by embedded computation on an autonomous sensor network.
Laporte, Pierre. "Conception assistée par ordinateur en automatique : un logiciel d'identification." Grenoble INPG, 1985. http://www.theses.fr/1985INPG0135.
Feytout, Benjamin. "Commande crone appliquée à l'optimisation de la production d'une éolienne." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2014BOR14946/document.
This research, carried out in collaboration with VALEOL and the IMS laboratory, proposes several solutions to optimize the production and the efficiency of a wind turbine. The general theme of the work is the design of control laws for the system or its subsystems using the CRONE robust control methodology. Each part highlights aspects of modeling, system identification and design before simulations or tests on scale and full-size models. Chapter 1 provides an overview of the issues discussed in this manuscript, with a state of the art and details on the industrial and economic context of 2013. Chapter 2 introduces the CRONE approach to robust design. It is used to control the rotation speed of a variable-speed wind turbine with an innovative architecture: a mechanical variable-speed solution and a synchronous generator. Chapter 3 compares three new optimization criteria for CRONE design. The aim is to reduce the complexity of the methodology and to facilitate its handling by any user. The results are obtained through simulations on an academic example, then with a DFIG wind turbine model. Chapter 4 focuses on the reduction of the structural loads transmitted by the wind to the turbine. The goal is better control of the pitch angle through individual pitch control, depending on the rotor position or wind disturbances. Chapter 5 deals with the design of an anti-icing/de-icing system for the blades. After the modeling and identification steps, the CRONE design is used to control the temperature of a heating coating placed on the blades. An observer is finally designed to detect the presence of ice.
Alkhoury, Ziad. "Minimality, input-output equivalence and identifiability of LPV systems in state-space and linear fractional representations." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2319/document.
In this thesis, important concepts related to the identification of Linear Parameter-Varying (LPV) systems are studied. First, we tackle the problem of identifiability of Affine-LPV (ALPV) state-space parametrizations. A new necessary and sufficient condition is introduced in order to guarantee structural identifiability for ALPV parametrizations. The identifiability of this class of parametrizations is related to the lack of state-space isomorphisms between any two models corresponding to different scheduling-parameter values. In addition, we present a necessary and sufficient condition for local structural identifiability, and a sufficient condition for (global) structural identifiability, both based on the rank of a model-based matrix. These latter conditions allow systematic verification of the structural identifiability of ALPV models. Moreover, since local identification techniques are inevitable in certain applications, it is a priority to study the discrepancy between different LPV models obtained using different local techniques. We provide an analytic error bound on the difference between the input-output behaviors of any two LPV models which are frozen equivalent. This error bound turns out to be a function of both (i) the speed of change of the scheduling signal and (ii) the discrepancy between the coherent bases of the two LPV models. In particular, the difference between the outputs of the two models can be made arbitrarily small by choosing a scheduling signal which changes slowly enough. Finally, we introduce and study important properties of the transformation of ALPV state-space representations into Linear Fractional Representations (LFRs).
More precisely, we show that (i) state-minimal ALPV representations yield minimal LFRs, and vice versa, (ii) the input-output behavior of the ALPV representation uniquely determines the input-output behavior of the resulting LFR, and (iii) structurally identifiable ALPV models yield structurally identifiable LFRs, and vice versa. We then characterize LFRs which correspond to equivalent ALPV models based on their input-output maps. As illustrated throughout the manuscript, these results have important consequences for the identification and control of LPV systems.
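Rank conditions of the kind mentioned in this abstract can be checked numerically. The sketch below is a schematic analogue for a plain LTI parametrization, not the thesis's ALPV condition: the hypothetical `markov_parameters` map and its built-in redundancy are purely illustrative. A finite-difference Jacobian of the input-output map with deficient column rank signals local structural unidentifiability.

```python
import numpy as np

def markov_parameters(theta, n_terms=6):
    """Hypothetical 2-state parametrization: the parameters a and b enter the
    input-output map only through their product a*b, so theta = (a, b, c)
    is not locally structurally identifiable."""
    a, b, c = theta
    A = np.array([[0.0, 1.0], [-c, -0.5]])
    B = np.array([[0.0], [a]])
    C = np.array([[b, 0.0]])
    return np.array([(C @ np.linalg.matrix_power(A, k) @ B).item()
                     for k in range(n_terms)])

def io_jacobian(f, theta, eps=1e-6):
    """Forward finite-difference Jacobian of the Markov-parameter map."""
    base = f(np.asarray(theta, dtype=float))
    cols = []
    for i in range(len(theta)):
        t = np.asarray(theta, dtype=float).copy()
        t[i] += eps
        cols.append((f(t) - base) / eps)
    return np.column_stack(cols)

theta0 = [2.0, 0.5, 1.5]
J = io_jacobian(markov_parameters, theta0)
rank = np.linalg.matrix_rank(J, tol=1e-4)
# rank (= 2) < number of parameters (= 3): the columns for a and b are
# proportional, so the parametrization is locally unidentifiable.
print(rank, len(theta0))
```

A full-column-rank Jacobian at a point is the usual numerical evidence of local identifiability there; the thesis's contribution is a model-based matrix playing this role for ALPV systems.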
Méango, Natoua Romuald. "Analyse en identification partielle de la décision d'émigrer des étudiants africains." Thèse, 2013. http://hdl.handle.net/1866/10561.
International migration of students is a costly investment for family units in many developing countries. However, it can yield substantial financial and social returns for the investors, as well as externalities for other family members. Furthermore, when these family decisions aggregate at the country level, they affect the stock of human capital available to the origin country. This thesis addresses primarily two aspects of international student migration: (i) Who goes? What are the determinants of the probability of migration? (ii) Who pays? How does the family organize itself to bear the cost of the migration? Engaging in this study, one faces the challenge of data limitation, a direct consequence of the geographical dispersion of the population of interest. The first important contribution of this work is to provide a new snowball-sampling methodology for hard-to-reach populations, along with estimators to correct selection biases. I collected data which include both migrant and non-migrant students from Cameroon, using an online platform. A second challenge is the well-documented problem of the endogeneity of educational attainment. I take advantage of recent advances in the treatment of identification problems in discrete-choice models to solve this issue while keeping assumptions at a low level. In particular, the validity of the partial-identification methodology does not rest on the existence of an instrument. To the best of my knowledge, this is the first empirical application of this methodology to development-related issues. The first chapter studies the decision made by a family to invest in a student's migration. I propose an empirical structural decision model which reflects the importance of both the return on the investment and the budgetary constraint in agents' choices.
Our results show that the choice of level of education, the help of the family and academic results in secondary school are significant determinants of the probability to migrate, unlike gender, which does not seem to play any role in the family decision. The objective of the second chapter is to understand how agents decide to take part in the migration project and how the family organizes itself to share profits and discourage free-riding behavior. Further results on partial identification for games of incomplete information allow us to consider the strategic behavior of the family. My estimation suggests that models with a representative individual suit only families which consist of parent and child, but are rejected when a significant extended-family member is introduced. Helpers incur a non-zero cost of participation that discourages involvement in the migration process. Kinship obligations, and not altruism, appear as the main reason for participation. Finally, the third chapter presents the more general theoretical framework in which my models are embedded. The method presented is specialized to infinite games of complete information, but is of interest for application to the empirical analysis of instrumental-variable models of discrete choice (Chapter 1), cooperative and non-cooperative games (Chapter 2), as well as revealed-preference analysis. With my co-authors, we propose an efficient combinatorial bootstrap procedure for inference in games of complete information that runs in linear computing time, and an application to the determinants of long-term elderly-care choices.
Doko, Tchatoka Sabro Firmin. "Exogeneity, weak identification and instrument selection in econometrics." Thèse, 2010. http://hdl.handle.net/1866/3886.
The last decade has shown growing interest in the so-called weak instruments problem in the econometric literature, i.e. situations where instruments are poorly correlated with endogenous explanatory variables. More generally, these can be viewed as situations where model parameters are not identified, or nearly so (see Dufour and Hsiao, 2008). It is well known that when instruments are weak, the limiting distributions of standard test statistics (like the Student, Wald, likelihood ratio and Lagrange multiplier criteria in structural models) are non-standard and often depend heavily on nuisance parameters. Several empirical studies, including the estimation of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and asset pricing models (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], have shown that the above procedures are unreliable in the presence of weak identification. As a result, identification-robust tests [Anderson and Rubin (1949), Moreira (2003), Kleibergen (2002), Dufour and Taamouti (2007)] are often used to make reliable inference. However, little is known about the quality of these procedures when the instruments are invalid, or both weak and invalid. This raises the following questions: what happens to inference procedures when some instruments are endogenous, or both weak and endogenous? In particular, what happens if an invalid instrument is added to a set of valid instruments? How robust are these inference procedures to instrument endogeneity? Do alternative inference procedures behave differently? If instrument endogeneity makes statistical inference unreliable, can we propose procedures for selecting "good instruments" (i.e. strong and valid instruments)? Can we propose an instrument selection procedure which remains valid even in the presence of weak identification?
This thesis focuses on structural models and answers these questions through four chapters. The first chapter is published in the Journal of Statistical Planning and Inference 138 (2008), 2649-2661. In this chapter, we analyze the effects of instrument endogeneity on two identification-robust procedures, the Anderson and Rubin (1949, AR) and Kleibergen (2002, K) test statistics, with or without weak instruments. First, when the level of instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (hence asymptotically invalid for the hypothesis of interest), whether the instruments are "strong" or "weak". We also describe situations where this consistency may not hold, but where the asymptotic distribution is modified in a way that would lead to size distortions in large samples. These include, in particular, cases where the 2SLS estimator remains consistent, but the tests are asymptotically invalid. Second, when the instruments are locally exogenous (the level of instrument endogeneity approaches zero as the sample size increases), we find asymptotic noncentral chi-square distributions with or without weak instruments, and describe situations where the non-centrality parameter is zero and the asymptotic distribution remains the same as in the case of valid instruments (despite the presence of invalid instruments). The second chapter analyzes the effects of weak identification on Durbin-Wu-Hausman (DWH) specification tests and the Revankar-Hartley exogeneity test. We propose a finite- and large-sample analysis of the distribution of DWH tests under the null hypothesis (level) and the alternative hypothesis (power), including when identification is deficient or weak (weak instruments). Our finite-sample analysis provides several new insights and extensions of earlier procedures.
The characterization of the finite-sample distribution of the test statistics allows the construction of exact identification-robust exogeneity tests even with non-Gaussian errors (Monte Carlo tests) and shows that such tests are typically robust to weak instruments (level is controlled). Furthermore, we provide a characterization of the power of the tests, which clearly exhibits the factors that determine power. We show that DWH tests have no power when all instruments are weak [similar to Guggenberger (2008)]. However, power does exist as soon as we have one strong instrument. The conclusions of Guggenberger (2008) focus on the case where all instruments are weak (a case of little practical interest). Our asymptotic distributional theory under weaker assumptions confirms the finite-sample theory. Moreover, we present simulation evidence indicating that: (1) over a wide range of cases, including weak IVs and moderate endogeneity, OLS performs better than 2SLS [a finding similar to Kiviet and Niemczyk (2007)]; (2) pretest-estimators based on exogeneity tests have an excellent overall performance compared with the usual IV estimator. We illustrate our theoretical results through simulation experiments and two empirical applications: the relation between trade and economic growth, and the widely studied problem of returns to education. In the third chapter, we extend the generalized Wald partial exogeneity test [Dufour (1987)] to non-Gaussian errors. Testing whether a subset of explanatory variables is exogenous is an important challenge in econometrics. This problem occurs in many applied works. For example, in the well-known wage model, one would like to assess whether mother's education is exogenous without imposing additional assumptions on ability and schooling.
In the growth model, the exogeneity of the instrument constructed on the basis of geographical characteristics for the trade share is often questioned and needs to be tested without constraining the trade share and the other variables. Standard exogeneity tests of the type proposed by Durbin-Wu-Hausman and Revankar-Hartley cannot solve such problems. A potential cure for dealing with partial exogeneity is the generalized linear Wald (GW) method (Dufour, 1987). The GW procedure, however, assumes the normality of model errors, and it is not clear how robust this test is to non-Gaussian errors. We develop in this chapter a modified version of the earlier procedure which is valid even when model errors are not normally distributed. We present simulation evidence indicating that when identification is strong, the standard GW test is size-distorted in the presence of non-Gaussian errors. Furthermore, our analysis of the performance of different pretest-estimators based on GW tests allows us to propose two new pretest-estimators of the structural parameter. The Monte Carlo simulations indicate that these pretest-estimators perform better than 2SLS over a wide range of cases. Therefore, this can be viewed as a variable selection procedure, where a GW test is used in the first stage to decide which variables should be instrumented and which ones are valid instruments. We illustrate our theoretical results through two empirical applications: the well-known wage equation, and returns to scale in electricity supply. The results show that the GW tests cannot reject the exogeneity of mother's education, i.e. mother's education may constitute a valid IV for schooling. However, output in the cost equation is endogenous, and the price of fuel is a valid IV for estimating returns to scale. The fourth chapter develops identification-robust inference for the covariances between the errors and the regressors of an IV regression.
The results are then applied to develop partial exogeneity tests and partial IV pretest estimators that are more efficient than the usual IV estimator. When more than one stochastic explanatory variable is involved in the model, it is often necessary to determine which ones are independent of the disturbances. This problem arises in many empirical applications. For example, in the New Keynesian Phillips Curve, one would like to assess whether the interest rate is exogenous without imposing additional assumptions on the inflation rate and the other variables. The standard Durbin-Wu-Hausman (DWH) tests commonly used in applied work are inappropriate for such a problem. The generalized Wald (GW) procedure (Dufour, 1987), which allows the construction of confidence sets as well as tests of linear restrictions on covariances, assumes that the available instruments are strong. When the instruments are weak, the GW test is in general size-distorted. As a result, its application in models where instruments are possibly weak (returns to education, trade and economic growth, life-cycle labor supply, the New Keynesian Phillips Curve, pregnancy and the demand for cigarettes) may be misleading. To address this problem, we develop a procedure for building confidence sets for covariances that is valid in both finite and large samples and allows for the presence of weak instruments. We provide analytic forms of the confidence sets and characterize necessary and sufficient conditions under which they are bounded. Moreover, we propose two new pretest estimators of the structural parameters based on this procedure. Both estimators combine 2SLS and partial IV estimators. The Monte Carlo experiments show that: (1) partial IV estimators outperform 2SLS when the instruments are weak; (2) pretest estimators have an excellent overall performance, in terms of bias and MSE, compared with 2SLS.
This can therefore be viewed as a variable-selection method in which projection-based techniques are used to decide which variables should be instrumented and which ones are valid instruments. We illustrate our results through two empirical applications: the relation between trade and economic growth and the widely studied problem of returns to education. The results show unbounded confidence sets, suggesting that the instruments are relatively poor in these models, as questioned in the literature [Bound (1995)].
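The weak-instrument findings summarized in this abstract (OLS can beat 2SLS in MSE under weak instruments and moderate endogeneity, and pretest estimators do well overall) can be reproduced in spirit with a small Monte Carlo sketch. Everything below is illustrative: the design, the parameter values, and the simple Hausman-type pretest rule are not the thesis's actual procedures.

```python
import numpy as np

# Design: y = beta*x + u, x = pi*z + v, corr(u, v) = rho.
# pi small => weak instrument; rho moderate => moderate endogeneity.
rng = np.random.default_rng(0)
beta, n, reps = 1.0, 200, 1000
pi, rho = 0.05, 0.3

err = {"ols": [], "2sls": [], "pretest": []}
for _ in range(reps):
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    v = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)
    x = pi * z + v
    y = beta * x + u
    b_ols = (x @ y) / (x @ x)
    b_iv = (z @ y) / (z @ x)
    # Hausman-type pretest: keep OLS unless exogeneity is rejected at 5%.
    res = y - b_iv * x
    s2 = (res @ res) / n
    dv = s2 * (z @ z) / (z @ x) ** 2 - s2 / (x @ x)
    h = (b_iv - b_ols) ** 2 / dv if dv > 0 else 0.0
    b_pre = b_iv if h > 3.84 else b_ols
    for k, b in (("ols", b_ols), ("2sls", b_iv), ("pretest", b_pre)):
        err[k].append((b - beta) ** 2)

mse = {k: float(np.mean(v)) for k, v in err.items()}
```

In this design the 2SLS MSE is inflated by near-zero first-stage draws, while OLS carries only a moderate endogeneity bias, so `mse["ols"]` and `mse["pretest"]` come out well below `mse["2sls"]`, in line with the pattern the abstract describes.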
Kemoe, Laurent. "Three essays in macro-finance, international economics and macro-econometrics." Thèse, 2017. http://hdl.handle.net/1866/19308.
Full textThis thesis presents new results in several branches of the literature in macro-finance, international economics, and macro-econometrics. The first two chapters combine theoretical models and empirical techniques to deepen the study of important economic phenomena such as the effects of economic policy uncertainty on financial markets and the convergence between emerging and advanced countries on these markets. The third chapter, co-authored with Hafedh Bouakez, contributes to the literature on the identification of anticipated shocks to future productivity. In the first chapter, I study the effect of monetary and fiscal policy uncertainty on the yields and risk premia associated with U.S. government nominal assets. I use a New Keynesian dynamic stochastic general equilibrium model featuring recursive preferences and real and nominal rigidities. Economic policy uncertainty is defined as a mean-preserving spread of the distribution of policy shocks. My results show that: (i) when the economy is subject to unpredictable shocks to the volatility of the policy instruments, the median level of the yield curve falls by 8.56 basis points, its slope increases by 13.5 basis points, and risk premia fall on average by 0.21 basis point; this negative effect on the level of yields and on risk premia is due to the asymmetric impact of shocks of opposite signs but equal magnitude; (ii) a positive shock to the volatility of economic policy raises yields at all maturities.
This effect is explained by the behavior of households which, following the shock, increase their demand for bonds in order to insure themselves against the large expected fluctuations in consumption, putting downward pressure on yields. At the same time, these households demand higher interest rates because they expect higher future inflation. The analysis shows that the latter effect dominates, which accounts for the observed rise in yields. Finally, I use several empirical measures of economic policy uncertainty and a structural VAR model to show that the above results are consistent with the empirical evidence. Chapter 2 explores the government bond markets of 12 advanced and 8 emerging countries over the period 1999-2012 and asks whether there has been any convergence of the risk associated with these assets between the two categories of countries. I distinguish between default risk and other types of risk, such as those related to inflation, liquidity, or exchange rates. I first show theoretically that the forward risk premium differential between two countries can be used to separate default risk from the other types of risk. I then use estimated forward risk premia to show that it is difficult to conclude that these other types of risk in emerging countries have converged toward advanced-country levels, and that differing levels of political risk play an important role in explaining the differences in risk premia, other than those associated with default risk, between emerging and advanced countries. Chapter 3 proposes a new strategy for identifying anticipated and unanticipated technology shocks, one that leads to results consistent with the predictions of conventional New Keynesian models.
It shows that the failure of several empirical methods to deliver results consistent with the theory is due to impurities in existing measures of total factor productivity (TFP), which lead to misidentification of unanticipated technology shocks, whose estimated effects are inconsistent with the interpretation of such shocks as supply shocks. This problem, in turn, contaminates the identification of anticipated technology shocks. My co-author, Hafedh Bouakez, and I propose an agnostic identification strategy that allows TFP to be affected contemporaneously by two surprise shocks (one technological, one non-technological), the former being identified through sign restrictions on the response of inflation. The results show that the effects of anticipated and unanticipated technology shocks are consistent with the predictions of standard New Keynesian models. In particular, the puzzle encountered in previous work regarding the effects of an unanticipated shock on inflation disappears when our new strategy is employed.
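The sign-restriction step mentioned above can be illustrated with a standard toy sketch: among the observationally equivalent impact matrices B @ Q (Q orthogonal), keep only the rotations in which the candidate technology shock raises TFP and lowers inflation on impact. This is a generic two-variable static illustration of the technique, not the authors' actual estimation; the reduced-form covariance matrix below is made up for the example.

```python
import numpy as np

# Illustrative reduced-form innovation covariance for (TFP, inflation).
sigma = np.array([[1.0, -0.3],
                  [-0.3, 0.5]])
B = np.linalg.cholesky(sigma)            # one admissible impact matrix

rng = np.random.default_rng(42)
accepted = []
for _ in range(5000):
    q, r = np.linalg.qr(rng.normal(size=(2, 2)))
    q = q @ np.diag(np.sign(np.diag(r)))  # uniform draw on the orthogonal group
    cand = B @ q
    # Column 0 = technology shock: TFP up, inflation down on impact.
    if cand[0, 0] > 0 and cand[1, 0] < 0:
        accepted.append(cand)
```

Each matrix in `accepted` reproduces the same reduced-form covariance (`cand @ cand.T == sigma` up to rounding) while satisfying the sign restriction, which is what makes the set-identified "supply shock" interpretation possible.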