Academic literature on the topic 'Social Sciences Mathematical Methods'

Dissertations / Theses on the topic "Social Sciences Mathematical Methods"

1. Hwang, Heungsun, 1969-. "Structural equation modeling by extended redundancy analysis." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36954.

Abstract:
A new approach to structural equation modeling based on so-called extended redundancy analysis (ERA) is proposed. In ERA, latent variables are obtained as exact linear combinations of observed variables, and model parameters are estimated by consistently minimizing a single criterion. As a result, the method can avoid limitations of covariance structure analysis (e.g., stringent distributional assumptions, improper solutions, and factor score indeterminacy) in addition to those of partial least squares (e.g., the lack of a global optimization procedure). The method is simple yet versatile enough to fit more complex models; e.g., those with higher-order latent variables and direct effects of observed variables. It can also fit a model to more than one sample simultaneously. Other relevant topics are also discussed, including data transformations, missing data, metric matrices, robust estimation, and efficient estimation. Examples are given to illustrate the proposed method.
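The key construction here, a latent variable built as an exact linear combination of observed variables with all parameters estimated by minimising a single least-squares criterion, can be illustrated with a toy alternating-least-squares loop. This is a sketch of the general idea only, not the estimation algorithm of the thesis; the one-latent-variable setup, dimensions, and update rules are assumptions made for illustration:

```python
import numpy as np

# Toy ERA-style model: Y ~ (X w) a', where the latent score f = X w is an
# exact linear combination of the observed predictors in X.
rng = np.random.default_rng(0)
n, p, q = 200, 4, 3
X = rng.standard_normal((n, p))
w_true = np.array([1.0, -0.5, 0.25, 0.0])
a_true = np.array([1.0, 2.0, -1.0])
Y = np.outer(X @ w_true, a_true) + 0.1 * rng.standard_normal((n, q))

# Alternating least squares on the single criterion ||Y - (X w) a'||^2
w = rng.standard_normal(p)
for _ in range(100):
    f = X @ w
    f /= np.linalg.norm(f)                 # identification constraint on the score
    a = Y.T @ f                            # loadings, given the latent score
    # weights, given the loadings: Y a / (a.a) ~ X w in least squares
    w, *_ = np.linalg.lstsq(X, Y @ a / (a @ a), rcond=None)

loss = np.linalg.norm(Y - np.outer(X @ w, a)) ** 2
```

Each update solves a least-squares subproblem, so the single criterion decreases monotonically, which is the sense in which all parameters are estimated under one global optimisation rather than block by block.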
2. Cisneros-Molina, Myriam. "Mathematical methods for valuation and risk assessment of investment projects and real options." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491350.

Abstract:
In this thesis, we study the problems of risk measurement, valuation and hedging of financial positions in incomplete markets when an insufficient number of assets are available for investment (real options). We work closely with three measures of risk: Worst-Case Scenario (WCS) (the supremum of expected values over a set of given probability measures), Value-at-Risk (VaR) and Average Value-at-Risk (AVaR), and analyse the problem of hedging derivative securities depending on a non-traded asset, defined in terms of the risk measures via their acceptance sets. The hedging problem associated to VaR is the problem of minimising the expected shortfall. For WCS, the hedging problem turns out to be a robust version of minimising the expected shortfall; and as AVaR can be seen as a particular case of WCS, its hedging problem is also related to the minimisation of expected shortfall. Under some sufficient conditions, we solve explicitly the minimal expected shortfall problem in a discrete-time setting of two assets driven by correlated binomial models. In the continuous-time case, we analyse the problem of measuring risk by WCS, VaR and AVaR on positions modelled as Markov diffusion processes and develop some results on transformations of Markov processes to apply to the risk measurement of derivative securities. In all cases, we characterise the risk of a position as the solution of a partial differential equation of second order with boundary conditions. In relation to the valuation and hedging of derivative securities, and in the search for explicit solutions, we analyse a variant of the robust version of the expected shortfall hedging problem. Instead of taking the loss function $l(x) = [x]^+$ we work with the strictly increasing, strictly convex function $L_{\epsilon}(x) = \epsilon \log \left( \frac{1+e^{-x/\epsilon}}{e^{-x/\epsilon}} \right)$. Clearly $\lim_{\epsilon \rightarrow 0} L_{\epsilon}(x) = l(x)$.
The reformulation of the problem for $L_{\epsilon}(x)$ also allows us to use directly the dual theory under robust preferences recently developed in [82]. Because the function $L_{\epsilon}(x)$ is not separable in its variables, we are not able to solve the problem explicitly; instead, we use a power series approximation in the dual variables. It turns out that the approximated solution corresponds to the robust version of a utility maximisation problem with exponential preferences $(U(x) = -\frac{1}{\gamma}e^{-\gamma x})$ for a preference parameter $\gamma = 1/\epsilon$. For the approximated problem, we analyse the cases with and without random endowment, and obtain an expression for the utility indifference bid price of a derivative security which depends only on the non-traded asset.
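The role of the smoothed loss is easy to check numerically: since $\log(1+e^{z}) = z + \log(1+e^{-z})$, the function above equals $\epsilon\log(1+e^{x/\epsilon})$, a rescaled softplus whose largest deviation from $[x]^+$ is $\epsilon\log 2$ at $x=0$. A short sketch (the grid and the choice of $\epsilon$ values are arbitrary, not from the thesis):

```python
import numpy as np

def L(x, eps):
    # eps * log((1 + exp(-x/eps)) / exp(-x/eps)) = eps * log(1 + exp(x/eps)),
    # evaluated stably via softplus(z) = max(z, 0) + log1p(exp(-|z|))
    z = x / eps
    return eps * (np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z))))

x = np.linspace(-2.0, 2.0, 401)
pos_part = np.maximum(x, 0.0)                        # the loss l(x) = [x]^+
gaps = {eps: np.max(np.abs(L(x, eps) - pos_part)) for eps in (1.0, 0.1, 0.01)}
```

The gap shrinks linearly in $\epsilon$, which is the convergence $L_{\epsilon} \to l$ stated in the abstract; unlike $[x]^+$, each $L_{\epsilon}$ is smooth and strictly convex, which is what makes the dual theory applicable.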
3. Fang, Zhou. "Reweighting methods in high dimensional regression." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:26f8541a-9e2d-466a-84aa-e6850c4baba9.

Abstract:
In this thesis, we focus on the application of covariate reweighting with Lasso-style methods for regression in high dimensions, particularly where p ≥ n. We pay particular attention to sparse regression under a priori grouping structures. In such problems, even in the linear case, accurate estimation is difficult. Various authors have suggested ideas such as the Group Lasso and the Sparse Group Lasso, based on convex penalties, or alternatively methods like the Group Bridge, which rely on convergence under repetition to some local minimum of a concave penalised likelihood. We propose in this thesis a methodology that uses concave penalties to inspire a procedure in which we compute weights from an initial estimate and then fit a single, second reweighted Lasso. This procedure, the Co-adaptive Lasso, obtains excellent results in empirical experiments, and we present some theoretical prediction and estimation error bounds. Further, several extensions and variants of the procedure are discussed and studied. In particular, we propose a Lasso-style method of doing additive isotonic regression in high dimensions, the Liso algorithm, and enhance it using the Co-adaptive methodology. We also propose a method of producing rules-based regression estimates for high dimensional non-parametric regression that often outperforms the current leading method, the RuleFit algorithm. We also discuss extensions involving robust statistics applied to weight computation, repeating the algorithm, and online computation.
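The two-stage shape of such procedures, an initial estimate, weights computed from it, then one reweighted Lasso, can be sketched as below. This is an adaptive-Lasso-style illustration under assumed choices; the weight formula, penalty level, and plain coordinate-descent solver are not the Co-adaptive Lasso's actual specification:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                                   # residual for b = 0
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                    # remove feature j's contribution
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]                    # add it back
    return b

rng = np.random.default_rng(1)
n, p = 100, 200                                    # p > n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                        # sparse truth
y = X @ beta + 0.5 * rng.standard_normal(n)

b_init = lasso_cd(X, y, lam=0.2)                   # stage 1: initial estimate
wts = 1.0 / (np.abs(b_init) + 1e-3)                # weights from the initial fit
b_rw = lasso_cd(X / wts, y, lam=0.2) / wts         # stage 2: one reweighted Lasso
```

Features with large initial coefficients receive small weights and are penalised lightly in the second fit, while features zeroed out initially are penalised heavily; this differential shrinkage is the general mechanism a concave penalty inspires.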
4. Dunu, Emeka Samuel. "Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278198/.

Abstract:
In comparing the means of two normally distributed populations with unknown variance, two tests are very often used: the two-independent-sample t test and the paired-sample t test. There is a possible gain in the power of the significance test by using the paired-sample design instead of the two-independent-sample design.
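That power gain can be seen in a small Monte Carlo sketch (the sample size, effect size, and correlation below are arbitrary illustrative values): when the two samples are positively correlated, differencing removes the shared variation, and the paired t test rejects far more often at the same significance level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, delta, rho, alpha, reps = 30, 0.5, 0.8, 0.05, 2000

hits_ind = hits_pair = 0
for _ in range(reps):
    # correlated pairs (e.g. before/after measurements) with true mean shift delta
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n) + delta
    hits_ind += stats.ttest_ind(x, y).pvalue < alpha
    hits_pair += stats.ttest_rel(x, y).pvalue < alpha

power_ind, power_pair = hits_ind / reps, hits_pair / reps
```

With correlation 0.8 the paired differences have variance 2(1 - 0.8) = 0.4 instead of 2, so the same mean shift is detected with much higher probability.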
5. Stewart, Joanna L. "Glasgow's spatial arrangement of deprivation over time : methods to measure it and meanings for health." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7936/.

Abstract:
Background: Socio-economic deprivation is a key driver of population health. High levels of socio-economic deprivation have long been offered as the explanation for exceptionally high levels of mortality in Glasgow, Scotland. A number of recent studies have, however, suggested that this explanation is partial. Comparisons with Liverpool and Manchester suggest that mortality rates have been higher in Glasgow since the 1970s despite very similar levels of deprivation in these three cities. It has, therefore, been argued that there is an “excess” of mortality in Glasgow; that is, mortality rates are higher than would be expected given the city’s age, gender, and deprivation profile. A profusion of possible explanations for this excess has been proffered. One hypothesis is that the spatial arrangement of deprivation might be a contributing factor. Particular spatial configurations of deprivation have been associated with negative health impacts. It has been suggested that Glasgow experienced a distinct, and more harmful, development of spatial patterning of deprivation. Measuring the development of spatial arrangements of deprivation over time is, however, technically challenging. Therefore, this study brought together a number of techniques to compare the development of the spatial arrangement of deprivation in Glasgow, Liverpool and Manchester between 1971 and 2011. It then considered the plausibility of the spatial arrangement of deprivation as a contributing factor to Glasgow’s high levels of mortality. Methods: A literature review was undertaken to inform understandings of relationships between the spatial arrangement of deprivation and health outcomes. A substantial element of this study involved developing a methodology to facilitate temporal and inter-city comparisons of the spatial arrangement of deprivation.
Key contributions of this study were the application of techniques to render and quantify whole-landscape perspectives on the development of spatial patterns of household deprivation over time. This was achieved by using surface mapping techniques to map information relating to deprivation from the UK census, and then analysing these maps with spatial metrics. Results: There is agreement in the literature that the spatial arrangement of deprivation can influence health outcomes, but mechanisms and expected impacts are not clear. The temporal development of Glasgow’s spatial arrangement of deprivation exhibited both similarities and differences with Liverpool and Manchester. Glasgow often had a larger proportion of its landscape occupied with areas of deprivation, particularly in 1971 and 1981. Patch density and mean patch size (spatial metrics which provide an indication of fragmentation), however, were not found to have developed differently in Glasgow. Conclusion: The spatial extent of deprivation developed differently in Glasgow relative to Liverpool and Manchester: the results indicated that deprivation was substantially more spatially prevalent in Glasgow, particularly in 1971 and 1981. This implies that exposure of more affluent and deprived people to each other has been greater in Glasgow. Given that proximal inequality has been related to poor health outcomes, it would appear plausible that this may have adversely affected Glasgow’s mortality rates. If this is the case, however, it is unlikely to account for a substantial proportion of Glasgow’s excess mortality. Further research into Glasgow’s excess mortality is, therefore, required.
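Patch-based spatial metrics of the kind described can be computed from a binary deprivation raster by labelling connected components. A minimal sketch on a toy grid follows; the thesis's surface-mapping pipeline over census data is considerably more involved, and both `scipy.ndimage.label` and the toy raster are illustrative choices:

```python
import numpy as np
from scipy import ndimage

# Toy binary raster: 1 = deprived cell, 0 = not deprived (illustrative only)
grid = np.array([
    [1, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
])

labels, n_patches = ndimage.label(grid)     # 4-connectivity by default
area = grid.size
patch_density = n_patches / area            # patches per unit area (fragmentation)
mean_patch_size = grid.sum() / n_patches    # deprived cells per patch
prevalence = grid.sum() / area              # spatial extent of deprivation
```

On this grid there are three patches covering a third of the landscape; comparing such numbers across cities and census years is the whole-landscape comparison the study performs.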
6. Ciampa, Julia Grant. "Multilocus approaches to the detection of disease susceptibility regions : methods and applications." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:8f82a624-7d80-438c-af3e-68ce983ff45f.

Abstract:
This thesis focuses on multilocus methods designed to detect single nucleotide polymorphisms (SNPs) that are associated with disease using case-control data. I study multilocus methods that allow for interaction in the regression model because epistasis is thought to be pervasive in the etiology of common human diseases. In contrast, the single-SNP models widely used in genome-wide association studies (GWAS) are thought to oversimplify the underlying biology. I consider both pairwise interactions between individual SNPs and modular interactions between sets of biologically similar SNPs. Modular epistasis may be more representative of disease processes, and its incorporation into regression analyses yields more parsimonious models. My methodological work focuses on strategies to increase power to detect susceptibility SNPs in the presence of genetic interaction. I emphasize the effect of gene-gene independence constraints and explore methods to relax them. I review several existing methods for interaction analyses and present their first empirical evaluation in a GWAS setting. I introduce the innovative retrospective Tukey score test (RTS) that investigates modular epistasis. Simulation studies suggest it offers a more powerful alternative to existing methods. I present diverse applications of these methods, using data from a multi-stage GWAS on prostate cancer (PRCA). My applied work is designed to generate hypotheses about the functionality of established susceptibility regions for PRCA by identifying SNPs that affect disease risk through interactions with them. Comparison of results across methods illustrates the impact of incorporating different forms of epistasis on inference about disease association. The top findings from these analyses are well supported by molecular studies. The results unite several susceptibility regions through overlapping biological pathways known to be disrupted in PRCA, motivating a replication study.
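The basic regression-with-interaction setup, as opposed to the single-SNP model, can be sketched by simulating case-control data with a pairwise epistatic effect and fitting a logistic model that includes the SNP-by-SNP product term. This illustrates only the generic interaction model, not the retrospective Tukey score test; the genotype coding, effect sizes, and the IRLS fitter are assumptions for illustration:

```python
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via iteratively reweighted LS."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)                       # observation weights
        b += np.linalg.solve(X.T * W @ X, X.T @ (y - p))  # Newton step
    return b

rng = np.random.default_rng(5)
n = 2000
g1 = rng.binomial(2, 0.3, n)                    # SNP genotypes coded 0/1/2
g2 = rng.binomial(2, 0.4, n)
logit = -1.0 + 0.2 * g1 + 0.2 * g2 + 0.5 * g1 * g2   # pairwise epistatic effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))    # case/control status

X = np.column_stack([np.ones(n), g1, g2, g1 * g2])   # model with interaction term
b = logistic_irls(X, y)
```

A single-SNP scan fits only the marginal columns and can miss or misattribute the risk carried by the product term, which is the motivation for interaction-aware methods.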
7. Churchhouse, Claire. "Bayesian methods for estimating human ancestry using whole genome SNP data." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:0cae8a4a-6989-485b-a7cb-0a03fb86096d.

Abstract:
The past five years have seen the discovery of a wealth of genetic variants associated with an incredible range of diseases and traits in genome-wide association studies (GWAS). These GWAS have typically been performed in individuals of European descent, prompting a call for such studies to be conducted over a more diverse range of populations. These include groups such as African Americans and Latinos, as they are recognised as bearing a disproportionately large burden of disease in the U.S. population. The variation in ancestry among such groups must be correctly accounted for in association studies to avoid spurious hits arising due to differences in ancestry between cases and controls. Such ancestral variation is not all problematic, as it may also be exploited to uncover loci associated with disease in an approach known as admixture mapping, or to estimate recombination rates in admixed individuals. Many models have been proposed to infer genetic ancestry, and they differ in their accuracy, the type of data they employ, their computational efficiency, and whether or not they can handle multi-way admixture. Despite the number of existing models, there is an unfulfilled requirement for a model that performs well even when the ancestral populations are closely related, is extendible to multi-way admixture scenarios, and can handle whole-genome data while remaining computationally efficient. In this thesis we present a novel method of ancestry estimation named MULTIMIX that satisfies these criteria. The underlying model we propose uses a multivariate normal to approximate the distribution of a haplotype at a window of contiguous SNPs given the ancestral origin of that part of the genome. The observed allele types and the ancestry states that we aim to infer are incorporated into a hidden Markov model to capture the correlations in ancestry that we expect to exist between neighbouring sites.
We show via simulation studies that its performance on two-way and three-way admixture is competitive with state-of-the-art methods, and apply it to several real admixed samples of the International HapMap Project and the 1000 Genomes Project.
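The hidden Markov machinery described, hidden ancestry states along the genome, normal emission approximations per window, and correlation between neighbouring sites, can be sketched in a scalar two-ancestry toy version. All parameter values here are invented for illustration and are not those of MULTIMIX:

```python
import numpy as np

def ancestry_posteriors(obs, means, sds, trans, init):
    """Scaled forward-backward over hidden ancestry states, Gaussian emissions."""
    K, T = len(init), len(obs)
    # emission likelihoods e[t, k] = Normal(obs[t]; means[k], sds[k]^2)
    e = np.exp(-0.5 * ((obs[:, None] - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = init * e[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                        # forward pass (filtering)
        alpha[t] = (alpha[t - 1] @ trans) * e[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):               # backward pass
        beta[t] = trans @ (e[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta                          # posterior over ancestry per window
    return post / post.sum(axis=1, keepdims=True)

# Two ancestral populations with different window-level summaries (toy values)
rng = np.random.default_rng(3)
true_state = np.array([0] * 10 + [1] * 10)       # one ancestry switch along the genome
obs = rng.normal(np.where(true_state == 0, -1.0, 1.0), 0.5)
post = ancestry_posteriors(obs, means=np.array([-1.0, 1.0]),
                           sds=np.array([0.5, 0.5]),
                           trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                           init=np.array([0.5, 0.5]))
decoded = post.argmax(axis=1)
```

The sticky transition matrix encodes the expectation that neighbouring windows share ancestry, so the posterior smooths over individually ambiguous windows.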
8. Iotchkova, Valentina Valentinova. "Bayesian methods for multivariate phenotype analysis in genome-wide association studies." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:66fd61e1-a6e3-4e91-959b-31a3ec88967c.

Abstract:
Most genome-wide association studies search for genetic variants associated with a single trait of interest, despite the main interest usually being the understanding of a complex genotype-phenotype network. Furthermore, many studies collect data on multiple phenotypes, each measuring a different aspect of the biological system under consideration; it can therefore often make sense to analyze the phenotypes jointly. However, this is rarely done, and there is a lack of well-developed methods for multiple phenotype analysis. Here we propose novel approaches for genome-wide association analysis, which scan the genome one SNP at a time for association with multivariate traits. The first half of this thesis focuses on an analytic model averaging approach which bi-partitions traits into associated and unassociated, fits all such models and measures evidence of association using a Bayes factor. The discrete nature of the model allows very fine control of prior beliefs about which sets of traits are more likely to be jointly associated. Using simulated data we show that this method can have much greater power than simpler approaches that do not explicitly model residual correlation between traits. On real data of six hematological parameters in 3 population cohorts (KORA, UKNBS and TwinsUK) from the HaemGen consortium, this model allows us to uncover an association at the RCL locus that was not identified in the original analysis but has been validated in a much larger study. In the second half of the thesis we propose and explore the properties of models that use priors encouraging sparse solutions, in the sense that genetic effects of phenotypes are shrunk towards zero when there is little evidence of association. To do this we explore and use spike and slab (SAS) priors. All methods combine both hypothesis testing, via calculation of a Bayes factor, and model selection, which occurs implicitly via the sparsity priors.
We have successfully implemented a Variational Bayesian approach to fit this model, which provides a tractable approximation to the posterior distribution, and allows us to approximate the very high-dimensional integral required for the Bayes factor calculation. This approach has a number of desirable properties. It can handle missing phenotype data, which is a real feature of most studies. It allows for both correlation due to relatedness between subjects or population structure and residual phenotype correlation. It can be viewed as a sparse Bayesian multivariate generalization of the mixed model approaches that have become popular recently in the GWAS literature. In addition, the method is computationally fast and can be applied to millions of SNPs for a large number of phenotypes. Furthermore we apply our method to 15 glycans from 3 isolated population cohorts (ORCADES, KORCULA and VIS), where we uncover association at a known locus, not identified in the original study but discovered later in a larger one. We conclude by discussing future directions.
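The spike-and-slab prior itself is simple to write down: each genetic effect is exactly zero with probability $1-\pi$ (the spike) and is drawn from a normal slab otherwise, which is how sparsity enters the model. A generative sketch follows; the values of $\pi$ and the slab standard deviation are arbitrary choices, not those used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(6)
p, pi_slab, slab_sd = 10000, 0.05, 1.0           # SNPs, slab probability, slab scale

gamma = rng.binomial(1, pi_slab, p)              # 1 = slab: the SNP has an effect
beta = gamma * rng.normal(0.0, slab_sd, p)       # 0 = spike: effect exactly zero
sparsity = np.mean(beta == 0.0)
```

Inference then amounts to computing the posterior over the indicators `gamma` and the effects `beta`, which is the high-dimensional integral the variational approximation makes tractable.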
9. Martins, Maria do Rosario Fraga Oliveira. "The use of nonparametric and semiparametric methods based on kernels in applied economics with an application to Portuguese female labour market." Doctoral thesis, Université Libre de Bruxelles, 1998. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211989.

10. Eisman, Elyktra. "GIS-integrated mathematical modeling of social phenomena at macro- and micro- levels—a multivariate geographically-weighted regression model for identifying locations vulnerable to hosting terrorist safe-houses: France as case study." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2261.

Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat; it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis produced the most accurate result to accommodate non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
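Geographically-weighted regression of the kind the dissertation selects fits a separate weighted least-squares model at each location, with observations down-weighted by distance, so coefficients are free to vary over space (the non-stationary behavior mentioned above). A minimal sketch on synthetic data; the Gaussian kernel and fixed bandwidth are common choices assumed here, and production GWR software also calibrates the bandwidth:

```python
import numpy as np

def gwr_coefs(coords, x, y, bandwidth):
    """Local weighted least squares at each location with a Gaussian kernel."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), x])         # intercept + covariate
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))    # Gaussian distance decay
        XtW = Xd.T * w
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas

# Synthetic surface where the slope drifts with longitude (non-stationary truth)
rng = np.random.default_rng(4)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.standard_normal(n)
slope = 1.0 + 0.3 * coords[:, 0]                  # coefficient varies west to east
y = 2.0 + slope * x + 0.2 * rng.standard_normal(n)

betas = gwr_coefs(coords, x, y, bandwidth=1.5)    # one (intercept, slope) per site
```

A single global regression would report one averaged slope; the local fits recover the spatial drift, which is why GWR suits problems where a variable's influence depends on where it is measured.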