
Dissertations / Theses on the topic 'Social Sciences Mathematical Methods'

Consult the top 50 dissertations / theses for your research on the topic 'Social Sciences Mathematical Methods.'


1

Hwang, Heungsun, 1969-. "Structural equation modeling by extended redundancy analysis." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36954.

Abstract:
A new approach to structural equation modeling based on so-called extended redundancy analysis (ERA) is proposed. In ERA, latent variables are obtained as exact linear combinations of observed variables, and model parameters are estimated by consistently minimizing a single criterion. As a result, the method can avoid limitations of covariance structure analysis (e.g., stringent distributional assumptions, improper solutions, and factor score indeterminacy) in addition to those of partial least squares (e.g., the lack of a global optimization procedure). The method is simple yet versatile enough to fit more complex models; e.g., those with higher-order latent variables and direct effects of observed variables. It can also fit a model to more than one sample simultaneously. Other relevant topics are also discussed, including data transformations, missing data, metric matrices, robust estimation, and efficient estimation. Examples are given to illustrate the proposed method.
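The single-criterion idea — latent variables as exact linear combinations of observed indicators, with weights and loadings fitted against one global least-squares loss — can be sketched with a toy alternating least squares loop (the data, dimensions, and the plain unweighted criterion are illustrative assumptions, not details from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, r = 200, 6, 4, 1                       # r latent variables
X = rng.standard_normal((n, p))                 # exogenous indicators (toy data)
B = rng.standard_normal((p, 1)) @ rng.standard_normal((1, q))
Y = X @ B + 0.1 * rng.standard_normal((n, q))   # endogenous indicators

# Alternating least squares for  min_{W,A} ||Y - X W A||_F^2,
# where F = X W are the latent variables (exact linear combinations of X).
W = rng.standard_normal((p, r))
for _ in range(100):
    F = X @ W                                                # latent scores
    A = np.linalg.lstsq(F, Y, rcond=None)[0]                 # update loadings
    # update weights using vec(X W A) = (A^T kron X) vec(W), column-major vec
    M = np.kron(A.T, X)
    W = np.linalg.lstsq(M, Y.reshape(-1, order="F"), rcond=None)[0].reshape(p, r, order="F")

F = X @ W
A = np.linalg.lstsq(F, Y, rcond=None)[0]
rel_loss = np.linalg.norm(Y - F @ A) ** 2 / np.linalg.norm(Y) ** 2
```

Because every update minimizes the same global criterion, the loss is monotonically non-increasing, which is the contrast the abstract draws with partial least squares and its lack of a global optimization procedure.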
2

Cisneros-Molina, Myriam. "Mathematical methods for valuation and risk assessment of investment projects and real options." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491350.

Abstract:
In this thesis, we study the problems of risk measurement, valuation and hedging of financial positions in incomplete markets when an insufficient number of assets are available for investment (real options). We work closely with three measures of risk: Worst-Case Scenario (WCS) (the supremum of expected values over a set of given probability measures), Value-at-Risk (VaR) and Average Value-at-Risk (AVaR), and analyse the problem of hedging derivative securities depending on a non-traded asset, defined in terms of the risk measures via their acceptance sets. The hedging problem associated to VaR is the problem of minimising the expected shortfall. For WCS, the hedging problem turns out to be a robust version of minimising the expected shortfall; and as AVaR can be seen as a particular case of WCS, its hedging problem is also related to the minimisation of expected shortfall. Under some sufficient conditions, we solve explicitly the minimal expected shortfall problem in a discrete-time setting of two assets driven by correlated binomial models. In the continuous-time case, we analyse the problem of measuring risk by WCS, VaR and AVaR on positions modelled as Markov diffusion processes and develop some results on transformations of Markov processes to apply to the risk measurement of derivative securities. In all cases, we characterise the risk of a position as the solution of a partial differential equation of second order with boundary conditions. In relation to the valuation and hedging of derivative securities, and in the search for explicit solutions, we analyse a variant of the robust version of the expected shortfall hedging problem. Instead of taking the loss function $l(x) = [x]^+$ we work with the strictly increasing, strictly convex function $L_{\epsilon}(x) = \epsilon \log \left( \frac{1+\exp\{-x/\epsilon\}}{\exp\{-x/\epsilon\}} \right)$. Clearly $\lim_{\epsilon \rightarrow 0} L_{\epsilon}(x) = l(x)$.
The reformulation of the problem for $L_{\epsilon}(x)$ also allows us to use directly the dual theory under robust preferences recently developed in [82]. Due to the fact that the function $L_{\epsilon}(x)$ is not separable in its variables, we are not able to solve explicitly, but instead we use a power series approximation in the dual variables. It turns out that the approximated solution corresponds to the robust version of a utility maximisation problem with exponential preferences $U(x) = -\frac{1}{\gamma}e^{-\gamma x}$ for a preference parameter $\gamma = 1/\epsilon$. For the approximated problem, we analyse the cases with and without random endowment, and obtain an expression for the utility indifference bid price of a derivative security which depends only on the non-traded asset.
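After simplification, the smoothed loss above is the familiar softplus: $L_{\epsilon}(x) = \epsilon \log(1 + e^{x/\epsilon})$, a smooth upper bound on $l(x) = [x]^+$. A quick numerical check of the limit, written in an overflow-safe form (the stable rewriting is my own, not from the thesis):

```python
import math

def hinge(x):
    # l(x) = [x]^+
    return max(x, 0.0)

def L_eps(x, eps):
    # L_eps(x) = eps*log((1 + exp(-x/eps)) / exp(-x/eps)) = eps*log(1 + exp(x/eps)),
    # evaluated as max(x, 0) + eps*log1p(exp(-|x|/eps)) to avoid overflow.
    return max(x, 0.0) + eps * math.log1p(math.exp(-abs(x) / eps))

for x in (-2.0, 0.0, 2.0):
    print(x, hinge(x), L_eps(x, 1.0), L_eps(x, 1e-3))
```

As `eps` shrinks, the smooth loss collapses onto the hinge, which is the limit statement in the abstract.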
3

Fang, Zhou. "Reweighting methods in high dimensional regression." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:26f8541a-9e2d-466a-84aa-e6850c4baba9.

Abstract:
In this thesis, we focus on the application of covariate reweighting with Lasso-style methods for regression in high dimensions, particularly where p ≥ n. We apply a particular focus to the case of sparse regression under a-priori grouping structures. In such problems, even in the linear case, accurate estimation is difficult. Various authors have suggested ideas such as the Group Lasso and the Sparse Group Lasso, based on convex penalties, or alternatively methods like the Group Bridge, which rely on convergence under repetition to some local minimum of a concave penalised likelihood. We propose in this thesis a methodology that uses concave penalties to inspire a procedure whereupon we compute weights from an initial estimate, and then do a single second reweighted Lasso. This procedure -- the Co-adaptive Lasso -- obtains excellent results in empirical experiments, and we present some theoretical prediction and estimation error bounds. Further, several extensions and variants of the procedure are discussed and studied. In particular, we propose a Lasso style method of doing additive isotonic regression in high dimensions, the Liso algorithm, and enhance it using the Co-adaptive methodology. We also propose a method of producing rules based regression estimates for high dimensional non-parametric regression, that often outperforms the current leading method, the RuleFit algorithm. We also discuss extensions involving robust statistics applied to weight computation, repeating the algorithm, and online computation.
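The two-stage idea — an initial estimate, weights computed from it, then a single reweighted Lasso — can be sketched with a generic adaptive-Lasso-style reweighting (an illustration of the general procedure, not the thesis's exact Co-adaptive scheme; scikit-learn, the weight formula, and all tuning values are assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 50                       # in the intended use p may exceed n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                       # sparse truth: 5 active variables
y = X @ beta + rng.standard_normal(n)

# Stage 1: initial Lasso estimate
b0 = Lasso(alpha=0.1).fit(X, y).coef_

# Stage 2: weights w_j = 1/(|b0_j| + delta), then one reweighted Lasso
# implemented by rescaling columns (column j is multiplied by 1/w_j).
delta = 1e-3
w = 1.0 / (np.abs(b0) + delta)
Xw = X / w
bw = Lasso(alpha=0.1).fit(Xw, y).coef_
b_final = bw / w                     # map back to the original scale

support = np.flatnonzero(np.abs(b_final) > 1e-8)
```

Variables the first stage zeroes out receive very large weights and are heavily penalized in the second pass, while strong variables are penalized lightly — the concave-penalty intuition described above.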
4

Dunu, Emeka Samuel. "Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278198/.

Abstract:
In comparing the means of two normally distributed populations with unknown variance, two tests very often used are the two independent sample and the paired sample t tests. There is a possible gain in the power of the significance test by using the paired sample design instead of the two independent samples design.
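The power gain comes from within-pair correlation shrinking the variance of the differences; a small simulation makes this concrete (effect size, correlation, and sample size are invented for illustration; scipy is assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, effect, rho, sims, level = 30, 0.5, 0.9, 500, 0.05
hits_paired = hits_indep = 0

for _ in range(sims):
    z1 = rng.standard_normal(n)
    z2 = rng.standard_normal(n)
    x = z1
    y = effect + rho * z1 + np.sqrt(1 - rho**2) * z2   # correlated with x
    if stats.ttest_rel(x, y).pvalue < level:           # paired design
        hits_paired += 1
    if stats.ttest_ind(x, y).pvalue < level:           # independent design
        hits_indep += 1

power_paired = hits_paired / sims
power_indep = hits_indep / sims
```

With correlation 0.9, the variance of the pairwise differences is 2(1 - 0.9) = 0.2 versus 2.0 for the difference of independent means, so the paired test detects the same shift far more often.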
5

Stewart, Joanna L. "Glasgow's spatial arrangement of deprivation over time : methods to measure it and meanings for health." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7936/.

Abstract:
Background: Socio-economic deprivation is a key driver of population health. High levels of socio-economic deprivation have long been offered as the explanation for exceptionally high levels of mortality in Glasgow, Scotland. A number of recent studies have, however, suggested that this explanation is partial. Comparisons with Liverpool and Manchester suggest that mortality rates have been higher in Glasgow since the 1970s despite very similar levels of deprivation in these three cities. It has, therefore, been argued that there is an “excess” of mortality in Glasgow; that is, mortality rates are higher than would be expected given the city’s age, gender, and deprivation profile. A profusion of possible explanations for this excess has been proffered. One hypothesis is that the spatial arrangement of deprivation might be a contributing factor. Particular spatial configurations of deprivation have been associated with negative health impacts. It has been suggested that Glasgow experienced a distinct, and more harmful, development of spatial patterning of deprivation. Measuring the development of spatial arrangements of deprivation over time is technically challenging however. Therefore, this study brought together a number of techniques to compare the development of the spatial arrangement of deprivation in Glasgow, Liverpool and Manchester between 1971 and 2011. It then considered the plausibility of the spatial arrangement of deprivation as a contributing factor to Glasgow’s high levels of mortality. Methods: A literature review was undertaken to inform understandings of relationships between the spatial arrangement of deprivation and health outcomes. A substantial element of this study involved developing a methodology to facilitate temporal and inter-city comparisons of the spatial arrangement of deprivation. 
Key contributions of this study were the application of techniques to render and quantify whole-landscape perspectives on the development of spatial patterns of household deprivation over time. This was achieved by using surface mapping techniques to map information relating to deprivation from the UK census, and then analysing these maps with spatial metrics. Results: There is agreement in the literature that the spatial arrangement of deprivation can influence health outcomes, but mechanisms and expected impacts are not clear. The temporal development of Glasgow’s spatial arrangement of deprivation exhibited both similarities and differences with Liverpool and Manchester. Glasgow often had a larger proportion of its landscape occupied with areas of deprivation, particularly in 1971 and 1981. Patch density and mean patch size (spatial metrics which provide an indication of fragmentation), however, were not found to have developed differently in Glasgow. Conclusion: The spatial extent of deprivation developed differently in Glasgow relative to Liverpool and Manchester: the results indicated that deprivation was substantially more spatially prevalent in Glasgow, particularly in 1971 and 1981. This implies that exposure of more affluent and deprived people to each other has been greater in Glasgow. Given that proximal inequality has been related to poor health outcomes, it would appear plausible that this may have adversely affected Glasgow’s mortality rates. If this is the case, however, it is unlikely that this will account for a substantial proportion of Glasgow’s excess mortality. Further research into Glasgow’s excess mortality is, therefore, required.
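Spatial metrics of the kind named here — patch density and mean patch size — can be computed from a binary deprivation grid by connected-component labelling (a toy grid standing in for the surface-mapped census data; scipy is assumed):

```python
import numpy as np
from scipy import ndimage

# 1 = deprived cell, 0 = not; a toy stand-in for a surface-mapped census grid
grid = np.array([
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
], dtype=int)

labels, n_patches = ndimage.label(grid)          # 4-connectivity by default
sizes = ndimage.sum(grid, labels, range(1, n_patches + 1))

landscape_area = grid.size
patch_density = n_patches / landscape_area       # patches per unit area (fragmentation)
mean_patch_size = sizes.mean()
prop_deprived = grid.sum() / landscape_area      # spatial extent of deprivation
```

`prop_deprived` corresponds to the "proportion of the landscape occupied with areas of deprivation" that differed between the cities, while `patch_density` and `mean_patch_size` are the fragmentation metrics that did not.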
6

Ciampa, Julia Grant. "Multilocus approaches to the detection of disease susceptibility regions : methods and applications." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:8f82a624-7d80-438c-af3e-68ce983ff45f.

Abstract:
This thesis focuses on multilocus methods designed to detect single nucleotide polymorphisms (SNPs) that are associated with disease using case-control data. I study multilocus methods that allow for interaction in the regression model because epistasis is thought to be pervasive in the etiology of common human diseases. In contrast, the single-SNP models widely used in genome wide association studies (GWAS) are thought to oversimplify the underlying biology. I consider both pairwise interactions between individual SNPs and modular interactions between sets of biologically similar SNPs. Modular epistasis may be more representative of disease processes and its incorporation into regression analyses yields more parsimonious models. My methodological work focuses on strategies to increase power to detect susceptibility SNPs in the presence of genetic interaction. I emphasize the effect of gene-gene independence constraints and explore methods to relax them. I review several existing methods for interaction analyses and present their first empirical evaluation in a GWAS setting. I introduce the innovative retrospective Tukey score test (RTS) that investigates modular epistasis. Simulation studies suggest it offers a more powerful alternative to existing methods. I present diverse applications of these methods, using data from a multi-stage GWAS on prostate cancer (PRCA). My applied work is designed to generate hypotheses about the functionality of established susceptibility regions for PRCA by identifying SNPs that affect disease risk through interactions with them. Comparison of results across methods illustrates the impact of incorporating different forms of epistasis on inference about disease association. The top findings from these analyses are well supported by molecular studies. The results unite several susceptibility regions through overlapping biological pathways known to be disrupted in PRCA, motivating replication study.
7

Churchhouse, Claire. "Bayesian methods for estimating human ancestry using whole genome SNP data." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:0cae8a4a-6989-485b-a7cb-0a03fb86096d.

Abstract:
The past five years have seen the discovery of a wealth of genetic variants associated with an incredible range of diseases and traits that have been identified in genome-wide association studies (GWAS). These GWAS have typically been performed in individuals of European descent, prompting a call for such studies to be conducted over a more diverse range of populations. These include groups such as African Americans and Latinos, as they are recognised as bearing a disproportionately large burden of disease in the U.S. population. The variation in ancestry among such groups must be correctly accounted for in association studies to avoid spurious hits arising due to differences in ancestry between cases and controls. Such ancestral variation is not all problematic, as it may also be exploited to uncover loci associated with disease in an approach known as admixture mapping, or to estimate recombination rates in admixed individuals. Many models have been proposed to infer genetic ancestry and they differ in their accuracy, the type of data they employ, their computational efficiency, and whether or not they can handle multi-way admixture. Despite the number of existing models, there is an unfulfilled requirement for a model that performs well even when the ancestral populations are closely related, is extendible to multi-way admixture scenarios, and can handle whole-genome data while remaining computationally efficient. In this thesis we present a novel method of ancestry estimation named MULTIMIX that satisfies these criteria. The underlying model we propose uses a multivariate normal to approximate the distribution of a haplotype at a window of contiguous SNPs given the ancestral origin of that part of the genome. The observed allele types and the ancestry states that we aim to infer are incorporated into a hidden Markov model to capture the correlations in ancestry that we expect to exist between neighbouring sites.
We show via simulation studies that its performance on two-way and three-way admixture is competitive with state-of-the-art methods, and apply it to several real admixed samples of the International HapMap Project and the 1000 Genomes Project.
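The model class — normal emissions per window, with a hidden Markov chain over ancestry states — can be caricatured with a two-state forward recursion (the real model uses multivariate normals over SNP windows; every number below is made up for illustration):

```python
import numpy as np
from scipy.stats import norm

# Two ancestry states; each genome window emits a 1-D summary score, a toy
# stand-in for the multivariate-normal haplotype model described above.
means, sds = np.array([0.0, 2.0]), np.array([1.0, 1.0])
pi0 = np.array([0.5, 0.5])                    # initial ancestry proportions
T = np.array([[0.95, 0.05],                   # ancestry switches are rare,
              [0.05, 0.95]])                  # giving long same-ancestry tracts

obs = np.array([0.1, -0.3, 1.9, 2.2, 2.0])    # window scores along a chromosome

# Normalised forward recursion: filtered ancestry probabilities per window
alpha = pi0 * norm.pdf(obs[0], means, sds)
alpha /= alpha.sum()
filtered = [alpha.copy()]
for o in obs[1:]:
    alpha = (alpha @ T) * norm.pdf(o, means, sds)
    alpha /= alpha.sum()
    filtered.append(alpha.copy())
filtered = np.array(filtered)
```

The sticky transition matrix is what captures "correlations in ancestry between neighbouring sites": isolated windows are smoothed toward their neighbours' ancestry.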
8

Iotchkova, Valentina Valentinova. "Bayesian methods for multivariate phenotype analysis in genome-wide association studies." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:66fd61e1-a6e3-4e91-959b-31a3ec88967c.

Abstract:
Most genome-wide association studies search for genetic variants associated to a single trait of interest, despite the main interest usually being the understanding of a complex genotype-phenotype network. Furthermore, many studies collect data on multiple phenotypes, each measuring a different aspect of the biological system under consideration, so it can often make sense to jointly analyze the phenotypes. However this is rarely the case and there is a lack of well-developed methods for multiple phenotype analysis. Here we propose novel approaches for genome-wide association analysis, which scan the genome one SNP at a time for association with multivariate traits. The first half of this thesis focuses on an analytic model averaging approach which bi-partitions traits into associated and unassociated, fits all such models and measures evidence of association using a Bayes factor. The discrete nature of the model allows very fine control of prior beliefs about which sets of traits are more likely to be jointly associated. Using simulated data we show that this method can have much greater power than simpler approaches that do not explicitly model residual correlation between traits. On real data of six hematological parameters in 3 population cohorts (KORA, UKNBS and TwinsUK) from the HaemGen consortium, this model allows us to uncover an association at the RCL locus that was not identified in the original analysis but has been validated in a much larger study. In the second half of the thesis we propose and explore the properties of models that use priors encouraging sparse solutions, in the sense that genetic effects of phenotypes are shrunk towards zero when there is little evidence of association. To do this we explore and use spike and slab (SAS) priors. All methods combine both hypothesis testing, via calculation of a Bayes factor, and model selection, which occurs implicitly via the sparsity priors.
We have successfully implemented a Variational Bayesian approach to fit this model, which provides a tractable approximation to the posterior distribution, and allows us to approximate the very high-dimensional integral required for the Bayes factor calculation. This approach has a number of desirable properties. It can handle missing phenotype data, which is a real feature of most studies. It allows for both correlation due to relatedness between subjects or population structure and residual phenotype correlation. It can be viewed as a sparse Bayesian multivariate generalization of the mixed model approaches that have become popular recently in the GWAS literature. In addition, the method is computationally fast and can be applied to millions of SNPs for a large number of phenotypes. Furthermore we apply our method to 15 glycans from 3 isolated population cohorts (ORCADES, KORCULA and VIS), where we uncover association at a known locus, not identified in the original study but discovered later in a larger one. We conclude by discussing future directions.
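The effect of a spike-and-slab prior is easiest to see in a univariate cartoon: for an estimate with known standard error, the Bayes factor compares the marginal likelihood under the slab (effect drawn from a normal) against the spike (effect exactly zero), and shrinkage enters through the prior inclusion probability (all numbers invented; the thesis's model is multivariate and fitted variationally):

```python
import math

def norm_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_inclusion(beta_hat, s, tau, pi):
    # marginal likelihood of beta_hat under slab (effect ~ N(0, tau^2))
    # versus spike (effect = 0), both observed with sampling variance s^2
    m_slab = norm_pdf(beta_hat, s**2 + tau**2)
    m_spike = norm_pdf(beta_hat, s**2)
    bf = m_slab / m_spike                  # Bayes factor for inclusion
    pip = pi * bf / (pi * bf + (1 - pi))   # posterior inclusion probability
    return pip, bf

pip_strong, bf_strong = posterior_inclusion(beta_hat=0.5, s=0.1, tau=0.3, pi=0.01)
pip_null, bf_null = posterior_inclusion(beta_hat=0.05, s=0.1, tau=0.3, pi=0.01)
```

A strong estimate overwhelms the sceptical prior (PIP near 1), while a weak one is shrunk toward exclusion — the "shrunk towards zero when there is little evidence" behaviour described above.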
9

Martins, Maria do Rosario Fraga Oliveira. "The use of nonparametric and semiparametric methods based on kernels in applied economics with an application to Portuguese female labour market." Doctoral thesis, Université Libre de Bruxelles, 1998. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211989.

10

Eisman, Elyktra. "GIS-integrated mathematical modeling of social phenomena at macro- and micro- levels—a multivariate geographically-weighted regression model for identifying locations vulnerable to hosting terrorist safe-houses: France as case study." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2261.

Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis produced the most accurate result to accommodate non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
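The "non-stationary coefficient behavior" that geographically weighted regression accommodates amounts to refitting a weighted least-squares model at each location, with weights from a spatial kernel. A minimal sketch on synthetic coordinates (bandwidth, kernel, and data are invented; this is the generic GWR mechanic, not the dissertation's fitted model):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
coords = rng.uniform(0, 10, size=(n, 2))            # site locations
x = rng.standard_normal(n)                          # a single covariate
beta_true = 1.0 + 0.3 * coords[:, 0]                # coefficient drifts west to east
y = 2.0 + beta_true * x + 0.1 * rng.standard_normal(n)

def gwr_coef(site, bandwidth=1.5):
    """Local weighted least-squares fit at one site, Gaussian spatial kernel."""
    d2 = ((coords - site) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth**2))            # nearby sites count more
    X = np.column_stack([np.ones(n), x])
    WX = X * w[:, None]
    return np.linalg.solve(X.T @ WX, WX.T @ y)      # [local intercept, local slope]

west = gwr_coef(np.array([1.0, 5.0]))
east = gwr_coef(np.array([9.0, 5.0]))
```

A global regression would average the drifting slope away; the local fits recover its east-west gradient, which is the sense in which GWR "accommodates" spatially varying effects.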
11

Niewiadomska, Ewa Maria. "Exploring the experiences of Australian science researchers; Library, Google and beyond." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2451.

Abstract:
Universities and research institutions in Australia are under pressure to produce high-quality research outputs. To generate the desired level of research, continuous provision of information is required. As a result of developments of digital technologies, the information behaviour of academics, both as consumers and creators of new information and knowledge, has evolved and changed over the decades. In this study, the primary research question focused on how science academics based at Australian universities experience digital information sources as part of their scholarly activities. To support these research goals, the thesis explores where science academics seek information to support their research activities, the factors that influence those information choices and how they utilise the information once it has been found. A mixed methods approach including a Web survey and interviews was utilised to explore these issues. The Web survey employed a range of questions, including Likert-scale, multiple-choice and open-ended questions, enabling qualitative and quantitative data analysis. 210 science academics from 34 Australian universities were surveyed, with 24 taking part in follow-up interviews. The resulting data was analysed using a combination of statistical and thematic analysis to draw out findings aligned to the primary and supporting research questions. The study concluded that Australian science researchers experience digital information sources in a variety of ways, and the modern academic environment shapes these experiences—with performance metrics, time drivers and personal circumstances being the leading factors that impact researchers’ actions when seeking, retrieving and disseminating information to support their academic work and resulting outcomes.
The study findings envisioned science academics working at Australian universities as self-sufficient, independent individuals, adapting their information behaviour to their current circumstances and needs. Their self-sufficiency is expressed in their performance of a variety of information behaviours by themselves, without recourse to or the need for the input of others. Engagement with other scholars and the university library are of low priority for these academics. They are not concerned with where their information comes from as long as it is deemed to be of high quality, credible and available to access and retrieve when they need it. While aware of the existence of their university library, science academics are not particularly interested in using them, except as a supplier of full-text publications. Their attitude to university libraries can be described as “positive but indifferent”; that is, libraries are there but mostly invisible to users. This study investigated the information behaviours of Australian science academics throughout their entire research journey and analysed the results in the context of a series of existing information science behavioural models. The research contributed a new Science Academics Information-Seeking and Transformation Model, which encompasses an academic’s actions from the moment the need for information arises to when the scholarly outcomes are published. The results also provide insight to those responsible for supporting scholars to understand the challenges they face when seeking, retrieving and disseminating new information and new knowledge in the context of modern academia.
12

Kaya, Ahmet. "Modern mathematical methods for actuarial sciences." Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/39613.

Abstract:
In ruin theory, premium income and outgoing claims play an important role. We introduce several ruin-type mathematical models and apply various mathematical methods to find the optimal premium price for insurance companies. Quantum theory is one of the significant novel approaches used to compute the finite-time non-ruin probability. More precisely, we apply the discrete-space and continuous-space quantum mechanics formalisms (see the main thesis for the formalism) with appropriately chosen Hamiltonians. Several particular examples are treated via the traditional basis and the quantum mechanics formalism with different eigenvector bases. The numerical results are also obtained using the path calculation method and compared with the stochastic modelling results. In addition, we also construct various models with an interest rate. For these models, optimal premium prices are stochastically calculated for independent and dependent claims with different dependence levels by using the Frank copula method.
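The quantum formalism is not reproducible from the abstract, but the stochastic-modelling baseline it is compared against is standard: finite-time ruin probability in the classical compound-Poisson (Cramér-Lundberg) risk model, estimated by Monte Carlo (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def ruin_probability(u0, premium, lam=1.0, claim_mean=1.0, horizon=10.0, sims=5000):
    """Monte Carlo finite-time ruin probability for the Cramer-Lundberg surplus
    U(t) = u0 + premium*t - S(t): claims with mean `claim_mean` arrive at
    Poisson(lam) times; ruin can only occur at claim instants."""
    ruined = 0
    for _ in range(sims):
        t, paid = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)     # next claim arrival
            if t > horizon:
                break
            paid += rng.exponential(claim_mean)  # claim size
            if u0 + premium * t - paid < 0.0:
                ruined += 1
                break
    return ruined / sims

p_low = ruin_probability(u0=2.0, premium=0.8)    # premium below expected claim rate
p_high = ruin_probability(u0=2.0, premium=1.5)   # 50% safety loading
```

The monotone dependence of ruin probability on the premium is exactly the trade-off behind "optimal premium price": high enough to control ruin risk, low enough to remain competitive.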
13

Gray, Katharine Lynn. "Comparison of Trend Detection Methods." The University of Montana, 2007. http://etd.lib.umt.edu/theses/available/etd-09262007-104625/.

Abstract:
Trend estimation is important in many fields, though arguably the most important applications appear in ecology. Trend is difficult to quantify; in fact, the term itself is not well-defined. Often, trend is quantified by estimating the slope coefficient in a regression model where the response variable is an index of population size, and time is the explanatory variable. Linear trend is often unrealistic for biological populations; in fact, many critical environmental changes occur abruptly as a result of very rapid changes in human activities. My PhD research has involved formulating methods with greater flexibility than those currently in use. Penalized spline regression provides a flexible technique for fitting a smooth curve. This method has proven useful in many areas including environmental monitoring; however, inference is more difficult than with ordinary linear regression because so many parameters are estimated. My research has focused on developing methods of trend detection and comparing these methods to other methods currently in use. Attention is given to comparing estimated Type I error rates and power across several trend detection methods. This was accomplished through an extensive simulation study. Monte Carlo simulations and randomization tests were employed to construct an empirical sampling distribution for the test statistic under the null hypothesis of no trend. These methods are superior to other smoothing methods of trend detection with respect to achieving the designated Type I error rate. The likelihood ratio test using a mixed effects model had the most power for detecting linear trend, while a test involving the first derivative was the most powerful for detecting nonlinear trend for small sample sizes.
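The randomization-test construction — an empirical sampling distribution for a trend statistic under the null of no trend — can be sketched as follows (a plain least-squares slope stands in for the thesis's spline-based statistics; sample size, noise, and permutation count are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

def trend_pvalue(y, n_perm=2000):
    """Randomization test for trend: the observed slope is compared with its
    distribution over permutations of y, which are exchangeable under H0."""
    t = np.arange(len(y), dtype=float)

    def slope(v):
        return np.polyfit(t, v, 1)[0]

    obs = slope(y)
    perm = np.array([slope(rng.permutation(y)) for _ in range(n_perm)])
    # two-sided empirical p-value with the usual +1 correction
    return (1 + np.sum(np.abs(perm) >= abs(obs))) / (n_perm + 1)

n = 40
p_trend = trend_pvalue(0.1 * np.arange(n) + 0.5 * rng.standard_normal(n))
p_null = trend_pvalue(0.5 * rng.standard_normal(n))
```

Because the null distribution is built from the data itself, the test attains its nominal Type I error rate without distributional assumptions — the property highlighted in the abstract.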
14

Elias, Joran. "Randomness In Tree Ensemble Methods." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-10092009-110301/.

Abstract:
Tree ensembles have proven to be a popular and powerful tool for predictive modeling tasks. The theory behind several of these methods (e.g. boosting) has received considerable attention. However, other tree ensemble techniques (e.g. bagging, random forests) have attracted limited theoretical treatment. Specifically, it has remained somewhat unclear as to why the simple act of randomizing the tree growing algorithm should lead to such dramatic improvements in performance. It has been suggested that a specific type of tree ensemble acts by forming a locally adaptive distance metric [Lin and Jeon, 2006]. We generalize this claim to include all tree ensemble methods and argue that this insight can help to explain the exceptional performance of tree ensemble methods. Finally, we illustrate the use of tree ensemble methods for an ecological niche modeling example involving the presence of malaria vectors in Africa.
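The locally adaptive metric of [Lin and Jeon, 2006] can be made tangible via forest proximities: the similarity of two points is the fraction of trees in which they share a terminal leaf, so directions the trees never split on barely affect the metric (scikit-learn is assumed; the data and depth limit are illustrative choices):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(300)   # feature 1 is irrelevant

# shallow trees so splits concentrate on the informative feature
rf = RandomForestRegressor(n_estimators=300, max_depth=3, random_state=0).fit(X, y)

def proximity(a, b):
    """Fraction of trees in which a and b fall in the same terminal leaf:
    the ensemble's implicit, locally adaptive similarity."""
    leaves = rf.apply(np.vstack([a, b]))       # shape (2, n_trees)
    return float(np.mean(leaves[0] == leaves[1]))

# close along the relevant axis but far along the irrelevant one ...
p_relevant_close = proximity(np.array([0.0, -1.5]), np.array([0.05, 1.5]))
# ... versus far apart along the relevant axis
p_relevant_far = proximity(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
```

The metric stretches along irrelevant directions and contracts along relevant ones, which is one way to read why randomized tree growing helps rather than hurts.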
15

Laobeul, N'Djekornom Dara. "REGULARIZATION METHODS FOR ILL-POSED POISSON IMAGING." The University of Montana, 2009. http://etd.lib.umt.edu/theses/available/etd-12092008-133704/.

Abstract:

The noise contained in images collected by a charge coupled device (CCD) camera is predominantly of Poisson type. This motivates the use of the negative logarithm of the Poisson likelihood in place of the ubiquitous least squares fit-to-data. However, if the underlying mathematical model is assumed to have the form z = Au, where A is a linear, compact operator, the problem of minimizing the negative log-Poisson likelihood function is ill-posed, and hence some form of regularization is required. In this work, it involves solving a variational problem of the form $u \stackrel{\mathrm{def}}{=} \arg\min_{u \geq 0}\, \ell(Au; z) + J(u)$, where $\ell$ is the negative-log of a Poisson likelihood functional, and $J$ is a regularization functional. The main result of this thesis is a theoretical analysis of this variational problem for four different regularization functionals. In addition, this work presents an efficient computational method for its solution, and the demonstration of the effectiveness of this approach in practice by applying the algorithm to simulated astronomical imaging data corrupted by the CCD camera noise model mentioned above.
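A small numerical instance of this variational problem, taking the Tikhonov choice $J(u) = \alpha\|u\|^2$ as one possible regularization functional (the 1-D blur operator, background level, and $\alpha$ are toy choices, not the thesis's setup; scipy is assumed):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
m = 40
xg = np.arange(m)
A = np.exp(-0.5 * ((xg[:, None] - xg[None, :]) / 2.0) ** 2)   # 1-D Gaussian blur
A /= A.sum(axis=1, keepdims=True)                             # (compact-operator stand-in)
u_true = np.zeros(m)
u_true[10], u_true[28] = 80.0, 120.0                          # two point sources
bg = 1.0                                                      # constant background
z = rng.poisson(A @ u_true + bg).astype(float)                # CCD-style Poisson counts

alpha = 0.01
def objective(u):
    Au = A @ u + bg
    # negative log Poisson likelihood (up to a constant) + Tikhonov penalty
    return float(np.sum(Au - z * np.log(Au)) + alpha * np.sum(u * u))

res = minimize(objective, x0=np.ones(m), bounds=[(0.0, None)] * m, method="L-BFGS-B")
u_hat = res.x
```

The bound constraints enforce the nonnegativity in $u \geq 0$, and the objective is convex, so a generic bound-constrained quasi-Newton solver suffices at this toy scale.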

16

Ahmed, Samah. "Perturbation field theory methods for calculating expectation values." Thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/26214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Goldes, John. "REGULARIZATION PARAMETER SELECTION METHODS FOR ILL POSED POISSON IMAGING PROBLEMS." The University of Montana, 2010. http://etd.lib.umt.edu/theses/available/etd-07072010-124233/.

Full text
Abstract:
A common problem in imaging science is to estimate some underlying true image given noisy measurements of image intensity. When image intensity is measured by the counting of incident photons emitted by the object of interest, the data-noise is accurately modeled by a Poisson distribution, which motivates the use of Poisson maximum likelihood estimation. When the underlying model equation is ill-posed, regularization must be employed. I will present a computational framework for solving such problems, including statistically motivated methods for choosing the regularization parameter. Numerical examples will be included.
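One classical, statistically motivated rule for choosing the regularization parameter is a Morozov-style discrepancy principle: pick the α whose residual is closest to its statistical expectation. The thesis's own selection methods may differ; this is a generic hedged sketch using a ridge solver and a Gaussian noise-level proxy.

```python
import numpy as np

def ridge(A, z, alpha):
    """Tikhonov-regularized least-squares solution for the linear model z = Au."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ z)

def discrepancy_choice(A, z, sigma2, alphas):
    """Pick the alpha whose squared residual norm is closest to its expected
    value m * sigma2 (a Morozov-style discrepancy principle)."""
    m = A.shape[0]
    best, best_gap = alphas[0], float("inf")
    for a in alphas:
        r = A @ ridge(A, z, a) - z
        gap = abs(float(r @ r) - m * sigma2)
        if gap < best_gap:
            best, best_gap = a, gap
    return best
```

In practice the candidate grid would be replaced by a root-finding sweep over α, but the selection criterion is the same.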
APA, Harvard, Vancouver, ISO, and other styles
18

Hursit, Adem E. "Applications of conformal methods to relativistic trace-free matter models." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36674.

Full text
Abstract:
Conformal methods have proven to be very useful in the analysis global properties and stability of vacuum spacetimes in general relativity. These methods transform the physical spacetime into a different Lorentzian manifold known as the unphysical spacetime where the ideal points at infinity are located at a finite position. This thesis makes use of conformal methods and applies them to various problems involving trace-free matter models. In particular, it makes progress towards the understanding of the evolution of unphysical spacetimes perturbed by trace-free matter as well as the behaviour of the the matter itself. To this end, evolution equations (wave equations) are derived and analyzed for both the unphysical spacetime and the matter. To investigate the relation between solutions of these wave equations to the Einstein field equations, a suitable system of subsidiary evolution equations is also derived. Furthermore, this thesis looks in detail at the behaviour of an unphysical spacetime coupled to the simplest matter trace free model: the confomally invariant scalar field. Finally, the system of conformal wave equations is used to show that the deSitter spacetime is non-linearly stable under perturbations by trace-free matter.
APA, Harvard, Vancouver, ISO, and other styles
19

MacGahan, Christopher, and Christopher MacGahan. "Mathematical Methods for Enhanced Information Security in Treaty Verification." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621280.

Full text
Abstract:
Mathematical methods have been developed to perform arms-control-treaty verification tasks for enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, useful for discriminating projection data without aggregating sensitive information. These models make decisions based on varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and the Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that quality performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms into the channelizing-matrix optimization offers a great benefit for treaty-verification tasks. Penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data was simulated with the GEANT4 toolkit. Tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
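The Hotelling observer named in the abstract has a standard closed form, w = S⁻¹(m₁ − m₀), and its channelized variant works in a reduced space v = Tg. A minimal sketch (the channel matrix and data shapes here are illustrative, not from the dissertation):

```python
import numpy as np

def hotelling_template(mean0, mean1, cov):
    """Hotelling observer template w = S^{-1}(m1 - m0); the scalar statistic
    t(g) = w . g is thresholded to classify detector data g."""
    return np.linalg.solve(cov, np.asarray(mean1, float) - np.asarray(mean0, float))

def channelized_template(mean0, mean1, cov, T):
    """Channelized variant: operate on reduced data v = T g, which shrinks
    the projection data set as the abstract describes."""
    dm = T @ (np.asarray(mean1, float) - np.asarray(mean0, float))
    return np.linalg.solve(T @ cov @ T.T, dm)
```

Since wᵀ(m₁ − m₀) = (m₁ − m₀)ᵀS⁻¹(m₁ − m₀) > 0 for a positive-definite S, the statistic always separates the two class means; the penalty terms mentioned in the abstract would enter as extra terms in the optimization that produces T.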
APA, Harvard, Vancouver, ISO, and other styles
20

Youssefpour, Hamed. "Mathematical Modeling of Cancer Stem Cells and Therapeutic Intervention Methods." Thesis, University of California, Irvine, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3557204.

Full text
Abstract:

We develop a multispecies continuum model to simulate the spatiotemporal dynamics of cell lineages in solid tumors. The model accounts for protein signaling factors produced by cells in lineages, and nutrients supplied by the microenvironment. We find that the combination therapy involving differentiation promoters and radiotherapy is very effective in eradicating such a tumor. We investigate the effect of production of various feedback factors by healthy tissue on tumor morphologies. Our simulation results show that a larger production rate of the negative feedback factor by healthy tissue surrounding the tumor in general leads to smaller, more compact and more circular tumor shapes. However, the increase in the concentration of these feedback factors may have a non-monotone effect on tumor morphologies. We investigate the effect of initial shape on therapy effectiveness. The results from the simulations show that the initial tumor geometry might play an important role in tumor prognosis and the effectiveness of a specific treatment. We observe that the therapy is more effective on tumors that still respond to the signals received from the healthy tissue, in comparison with the ones that do not respond to signaling factors (in this case differentiation signals) from the stromal or healthy tissue surrounding the tumor. It is shown that tumors with larger shape factors and smaller areas (more elongated and thinner) respond better to treatment, and the combination therapy is more successful on tumors with such characteristics. We applied mathematical modeling of radiotherapy using experimental data provided through our collaborative work with the radiation oncology department of the University of California, Los Angeles. Our investigations show that in order to match the experimental results with the simulations, the dedifferentiation rate of non-stem cells should be increased as a function of radiation dose.
It is also observed that the population of induced stem cells follows an exponential relationship with respect to therapy dose. The results from simulations and the analysis of the equations suggest that, in order for the simulation results to match the experimental data, the original stem cells and the induced stem cells may undergo direct differentiation.

APA, Harvard, Vancouver, ISO, and other styles
21

Giddens, Spencer. "Applications of Mathematical Optimization Methods to Digital Communications and Signal Processing." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8601.

Full text
Abstract:
Mathematical optimization is applicable to nearly every scientific discipline. This thesis specifically focuses on optimization applications to digital communications and signal processing. Within the digital communications framework, the channel encoder attempts to encode a message from a source (the sender) in such a way that the channel decoder can utilize the encoding to correct errors in the message caused by the transmission over the channel. Low-density parity-check (LDPC) codes are an especially popular code for this purpose. Following the channel encoder in the digital communications framework, the modulator converts the encoded message bits to a physical waveform, which is sent over the channel and converted back to bits at the demodulator. The modulator and demodulator present special challenges for what is known as the two-antenna problem. The main results of this work are two algorithms related to the development of optimization methods for LDPC codes and the two-antenna problem. Current methods for optimization of LDPC codes analyze the degree distribution pair asymptotically as block length approaches infinity. This effectively ignores the discrete nature of the space of valid degree distribution pairs for LDPC codes of finite block length. While large codes are likely to conform reasonably well to the infinite block length analysis, shorter codes have no such guarantee. Chapter 2 more thoroughly introduces LDPC codes, and Chapter 3 presents and analyzes an algorithm for completely enumerating the space of all valid degree distribution pairs for a given block length, code rate, maximum variable node degree, and maximum check node degree. This algorithm is then demonstrated on an example LDPC code of finite block length. Finally, we discuss how the result of this algorithm can be utilized by discrete optimization routines to form novel methods for the optimization of small block length LDPC codes. 
In order to solve the two-antenna problem, which is introduced in greater detail in Chapter 2, it is necessary to obtain reliable estimates of the timing offset and channel gains caused by the transmission of the signal through the channel. The timing offset estimator can be formulated as an optimization problem, and an optimization method used to solve it was previously developed. However, this optimization method does not utilize gradient information, and as a result is inefficient. Chapter 4 presents and analyzes an improved gradient-based optimization method that solves the two-antenna problem much more efficiently.
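The enumeration described for Chapter 3 can be sketched in simplified form: list every assignment of node counts per degree that is consistent with the node, edge, rate, and maximum-degree constraints, then pair variable-side and check-side distributions sharing the same edge total. This toy version ignores any further validity conditions the thesis imposes:

```python
def degree_distributions(n_nodes, degrees, total_edges):
    """All count vectors c_d (nodes of degree d) with sum(c_d) = n_nodes
    and sum(d * c_d) = total_edges."""
    out = []
    def rec(i, nodes_left, edges_left, partial):
        if i == len(degrees):
            if nodes_left == 0 and edges_left == 0:
                out.append(dict(zip(degrees, partial)))
            return
        d = degrees[i]
        for c in range(nodes_left + 1):
            if c * d > edges_left:
                break
            rec(i + 1, nodes_left - c, edges_left - c * d, partial + [c])
    rec(0, n_nodes, total_edges, [])
    return out

def valid_pairs(n, rate, dv_max, dc_max, edges):
    """Pair every variable-node distribution with every check-node
    distribution sharing the same total number of edges."""
    m = round(n * (1 - rate))  # number of check nodes
    v = degree_distributions(n, list(range(1, dv_max + 1)), edges)
    c = degree_distributions(m, list(range(1, dc_max + 1)), edges)
    return [(a, b) for a in v for b in c]
```

The point of complete enumeration, as in the abstract, is that the resulting finite list can be handed directly to a discrete optimization routine rather than relying on the asymptotic degree-distribution analysis.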
APA, Harvard, Vancouver, ISO, and other styles
22

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Full text
Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects - it can be continuous, stochastic and adaptive - and retains TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of "trainable information processor" (TIP) - parameterised stochastic mapping with a rule to change the parameters - is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network.
Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
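The simulated-annealing learning rule described above can be sketched generically: propose a random weight perturbation, accept any downhill move, accept uphill moves with a temperature-controlled probability, and keep the best weights seen. This is a textbook SA loop, not the thesis's exact rule:

```python
import math
import random

def anneal(loss, w0, steps=2000, t0=1.0, cooling=0.995, step=0.5, seed=1):
    """Minimize loss(w) by simulated annealing.  Accepting occasional uphill
    moves is what lets the learner escape the local optima that defeat pure
    gradient following; returning the best weights seen preserves the gain."""
    rng = random.Random(seed)
    w, cur = list(w0), loss(w0)
    best_w, best = list(w), cur
    t = t0
    for _ in range(steps):
        cand = [wi + rng.gauss(0.0, step) for wi in w]
        c = loss(cand)
        # Metropolis criterion: always accept improvements, sometimes accept worse
        if c < cur or rng.random() < math.exp((cur - c) / max(t, 1e-12)):
            w, cur = cand, c
            if c < best:
                best_w, best = list(cand), c
        t *= cooling  # geometric cooling schedule
    return best_w, best
```

For a neural network, `loss` would be the training error as a function of the weight vector; the cooling schedule and step size shown here are arbitrary illustrative choices.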
APA, Harvard, Vancouver, ISO, and other styles
23

Webber, Thomas. "Methods for the improvement of power resource prediction and residual range estimation for offroad unmanned ground vehicles." Thesis, University of Brighton, 2017. https://research.brighton.ac.uk/en/studentTheses/0fa4a3b9-bb71-413a-9b0e-ed0e1574225a.

Full text
Abstract:
Unmanned Ground Vehicles (UGVs) are becoming more widespread in their deployment. Advances in technology have improved not only their reliability but also their ability to perform complex tasks. UGVs are particularly attractive for operations that are considered unsuitable for human operatives. These include dangerous operations such as explosive ordnance disarmament, as well as situations where human access is limited, including planetary exploration or search and rescue missions involving physically small spaces. As technology advances, UGVs are gaining increased capabilities and commensurately increased complexity, allowing them to participate in an increasingly wide range of scenarios. UGVs have limited power reserves that can restrict a UGV's mission duration and also the range of capabilities that it can deploy. As UGVs tend towards increased capabilities and complexity, extra burden is placed on the already stretched power resources. Electric drives and an increasing array of processors, sensors and effectors all need sufficient power to operate. Accurate prediction of mission power requirements is therefore of utmost importance, especially in safety critical scenarios where the UGV must complete an atomic task or risk the creation of an unsafe environment due to failure caused by depleted power. Live energy prediction for vehicles that traverse typical road surfaces is a well-researched topic. However, this is not sufficient for modern UGVs, as they are required to traverse a wide variety of terrains that may change considerably with prevailing environmental conditions. This thesis addresses the gap by presenting a novel approach to both off- and on-line energy prediction that considers the effects of weather conditions on a wide variety of terrains. The prediction is based upon nonlinear polynomial regression using live sensor data to improve upon the accuracy provided by current methods.
The new approach is evaluated and compared to existing algorithms using a custom 'UGV mission power' simulation tool. The tool allows the user to test the accuracy of various mission energy prediction algorithms over specified mission routes that include a variety of terrains and prevailing weather conditions. A series of experiments that test and record the 'real world' power use of a typical small electric drive UGV are also performed. The tests are conducted for a variety of terrains and weather conditions, and the empirical results are used to validate the results of the simulation tool. The new algorithm showed a significant improvement compared with current methods, which will allow UGVs deployed in real-world scenarios, where they must contend with a variety of terrains and changeable weather conditions, to make accurate energy use predictions. This enables more capabilities to be deployed with a known impact on remaining mission power requirement, more efficient mission durations through avoiding the need to maintain excessive estimated power reserves, and increased safety through reduced risk of aborting atomic operations in safety critical scenarios. As a supplementary contribution, this work created a power resource usage and prediction test bed UGV and resulting data-sets, as well as a novel simulation tool for UGV mission energy prediction. The tool implements a UGV model with accurate power use characteristics, confirmed by an empirical test series. The tool can be used to test a wide variety of scenarios and power prediction algorithms and could be used for the development of further mission energy prediction technology or as a mission energy planning tool.
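The on-line prediction idea, nonlinear polynomial regression fitted to live sensor data, can be sketched in one variable. The thesis's model also folds in terrain and weather regressors, which are omitted here; the degree, data, and function names are illustrative assumptions.

```python
import numpy as np

def poly_features(x, degree):
    """Design matrix [1, x, x^2, ..., x^degree] for a single regressor."""
    return np.vander(np.asarray(x, float), degree + 1, increasing=True)

def fit_power_model(speed, power, degree=3):
    """Least-squares polynomial fit of drive power vs. speed."""
    X = poly_features(speed, degree)
    coef, *_ = np.linalg.lstsq(X, np.asarray(power, float), rcond=None)
    return coef

def predict(coef, speed):
    """Predicted power at the given speeds."""
    return poly_features(speed, len(coef) - 1) @ coef
```

Residual range estimation then follows by integrating predicted power over the planned route segments and comparing the total against the remaining battery energy.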
APA, Harvard, Vancouver, ISO, and other styles
24

van, Romondt Vis Pauline. "Changing social scientific research practices : negotiating creative methods." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/22639.

Full text
Abstract:
In recent decades social scientists have started to use qualitative creative methods more and more, because of epistemological and methodological developments on the one hand and demands for innovation by governmental funding agencies on the other. In my thesis I look at the research practices of social scientists who use these qualitative creative methods and answer the following main research question: How are practices and approaches from the arts (specifically visual lens-based arts, poetry, performance and narrative) negotiated in social scientific research practice? This question has been divided into the following three sub-questions: 1) How do social scientists negotiate the use of creative methods with other members of their research community? 2) How do social scientists negotiate the use of creative methods into their own research practices? 3) And how do creative methods emerge in the process? Using Lave and Wenger's approach to communities of practice (1991; Wenger, 1998) and Ingold and Hallam's (2007) conceptualisation of improvisation for my theoretical framework, I look at these practices as constantly emerging and changing, but at the same time determined by those same practices. Based on ongoing conversations with postgraduate research students, interviews with experienced researchers, participant observation at conferences and videos of my participants' presentations, I conclude that the use of creative methods is always embedded within existing research practices. When this is not the case, either participants themselves or other academics experience the creative methods as problematic or even as non-academic. In those cases boundary-work (the in- and exclusion of what is deemed academic) is performed more fiercely, making it difficult, if not impossible, for creative methods to be truly innovative in the sense of breaking with previous practices.
Instead, we see small shifts in participants' academic practices and in how creative methods are taken up in these practices. This means improvisation is a more apt term to describe how creative methods are making their way into social scientific research practices. As such, this conclusion has consequences for the way we think about learning methods, the production of knowledge, innovative methods and (inter)disciplinarity.
APA, Harvard, Vancouver, ISO, and other styles
25

Acuña-Agost, Rodrigo. "Mathematical modeling and methods for rescheduling trains under disrupted operations." Phd thesis, Université d'Avignon, 2009. http://tel.archives-ouvertes.fr/tel-00453640.

Full text
Abstract:
Due to operational problems and other unexpected events, a large number of incidents occur daily in railway transport systems. Some of them have only a local impact, but sometimes, essentially in the more saturated railway networks, small incidents can propagate through the whole network and significantly disrupt train timetables. In this doctoral thesis, we present the problem of rescheduling trains under disrupted operations as that of creating a provisional timetable that minimizes the effects of incident propagation. This work stems from the MAGES project (Module d'Aide à la Gestion des Sillons), which develops regulation systems for railway traffic. We present two different models for this problem: Integer Linear Programming (ILP) and Constraint Programming (CP). Given the highly combinatorial nature of the problem and the need to respond to incidents quickly, exact resolution is not realistic. The proposed corrective methods therefore explore a restricted neighbourhood of solutions: right-shift rescheduling; a method based on proximity cuts; a method based on statistical analysis of incident propagation (SAPI); and a CP-based method. Additionally, some of these methods have been adapted as iterative algorithms, with the objective of progressively improving the solution when execution time allows. SAPI is one of the main contributions of this thesis; it integrates the concepts of right-shift rescheduling with proximity cuts.
Because of the size of the networks involved and the number of trains, the complex propagation of an incident makes it very difficult to know precisely which events will be affected. It is nevertheless possible to estimate the probability that an event is affected. To compute this probability, a logistic regression model is used with explanatory variables derived from the network and the traffic. Several variants of these methods are evaluated and compared using two railway networks located in France and Chile. From the results obtained, it is possible to conclude that SAPI converges to the optimum faster than the other methods on small and medium instances, while a cooperative ILP/CP method is able to find solutions for the largest instances. The difficulty of comparing SAPI with other methods presented in the literature encouraged us to apply it to another problem: rescheduling passengers, flights and aircraft under disruption, a problem originally proposed in the context of the ROADEF Challenge 2009. The results show that SAPI is effective on this problem, producing solutions above the average of the finalist teams and taking third place in the challenge.
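SAPI's probability estimate, a logistic regression on explanatory variables derived from the network and the traffic, can be sketched with a plain batch-gradient fit. The single feature and the data below are hypothetical stand-ins for the thesis's network-derived variables.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain batch-gradient logistic regression; in SAPI the positive class
    would be 'this train event will be affected by the incident'."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log-loss w.r.t. z
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / len(X) for wj, gwj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def prob_affected(w, b, x):
    """Estimated probability that event x is affected by the incident."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The estimated probabilities would then feed the neighbourhood restriction: events with a high probability of being affected stay free in the rescheduling model, while the rest can be fixed to their original times.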
APA, Harvard, Vancouver, ISO, and other styles
26

Yu, Jingyuan. "Discovering Twitter through Computational Social Science Methods." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671609.

Full text
Abstract:
As Twitter has become woven into people's daily lives, it has turned into one of the most important information exchange platforms and quickly attracted scientists' attention. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analytic tools and algorithms have been designed in the last decade. The present doctoral thesis consists of three studies. First, given the 14 years (until 2020) of history since the foundation of Twitter, an explosion of related scientific publications has been witnessed, but the current research landscape on this social media platform remained unknown. To fill this research gap, we conducted a bibliometric analysis of Twitter-related studies to analyze how Twitter studies evolved over time, and to provide a general description of the Twitter research academic environment from a macro level. Second, since many analytic software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To solve this problem, we conducted a software review of some of the integrated frameworks that are considered most relevant for social science research; given that junior social science researchers may face financial constraints, we narrowed our scope to focus solely on free and low-cost software. Third, given the current public health crisis, we have noticed that social media are among the most accessed information and news sources for the public. During a pandemic, how health issues and diseases are framed in news releases shapes the public's understanding of the current epidemic outbreak and their attitudes and behaviors. Hence, we decided to use Twitter as an easy-access news source to analyze the evolution of Spanish news frames during the COVID-19 pandemic.
Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, network analysis and data visualization. This doctoral project has shown how people study and use Twitter from three different levels: the academic level, the practical level and the empirical level.
APA, Harvard, Vancouver, ISO, and other styles
27

Akay, Guler. "The Effect Of Peer Instruction Method On The 8th Grade Students'." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612931/index.pdf.

Full text
Abstract:
The purpose of the research study is to investigate the effect of the peer instruction method on 8th grade students' mathematics achievement and mathematics attitudes in transformation geometry (fractals, rotation, reflection, translation) in crowded classrooms (more than 50 students). Besides, this study aimed to investigate gender differences regarding mathematics achievement and mathematics attitude. The study was conducted during the academic year 2009-2010. The sample consisted of 112 eighth grade students from a public elementary school in the Küçükçekmece district in Istanbul. Two classes, instructed by the researcher, were randomly assigned as experimental and control groups. The experimental group students were taught the subject of transformation geometry through the peer instruction method, while the control group students were taught the subject conventionally. The Mathematics Achievement Test (MAT) and the Attitude towards Mathematics Scale (ATMS) were administered to students as measuring instruments. Two-way ANCOVA and two-way ANOVA statistical techniques were performed in order to answer the research questions. Results indicated that the peer instruction method has significant positive effects on students' mathematics achievement and attitudes towards mathematics. Also, it was shown that there is no significant difference between female and male students' mathematics achievement and mathematics attitudes.
APA, Harvard, Vancouver, ISO, and other styles
28

Squires, Timothy Richard. "Efficient numerical methods for ultrasound elastography." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:332c7b2b-10c3-4dff-b875-ac1ee2c5d4fb.

Full text
Abstract:
In this thesis, two algorithms are introduced for use in ultrasound elastography. Ultrasound elastography is a technique developed in the last 20 years by which anomalous regions in soft tissue are located and diagnosed without the need for biopsy. Due to this, the relatively cheap cost of ultrasound imaging, and the high level of accuracy of the methods, ultrasound elastography has shown great potential for the diagnosis of cancer in soft tissues. The algorithms introduced in this thesis represent an advance in this field. The first algorithm is a two-step iteration procedure consisting of two minimization problems - displacement estimation and elastic parameter calculation - that allows for diagnosis of any anomalous regions within soft tissue. The algorithm improves on existing methods in several ways. A weighting factor is introduced for each point in the tissue, dependent on the confidence in the accuracy of the data at that point; an exponential substitution is made for the elasticity modulus; an adjoint method is used for efficient calculation of the gradient vector; and a total variation regularization technique is used. Most importantly, an adaptive mesh refinement strategy is introduced that allows highly efficient calculation of the elasticity distribution of the tissue through using a number of degrees of freedom several orders lower than methods that use a uniform mesh refinement strategy. Results are presented that show the algorithm is robust even in the presence of significant noise and that it can locate a tumour 4mm in diameter within a 5cm square region of tissue. The algorithm is also extended into three dimensions, and results are presented that show that it can calculate a 3-dimensional elasticity distribution efficiently. This extension into 3-d is a significant advance in the field. 
The second algorithm is a one-step algorithm that seeks to combine the two problems of elasticity distribution and displacement calculation into one. As in the two-step algorithm, a weighting factor, exponential substitution for the elasticity parameter, adjoint method for calculation of the gradient vector, total variation regularization and adaptive mesh refinement strategy are incorporated. Results are presented that show that this original approach can locate tumours of varying sizes and shapes in the presence of varying levels of added artificial noise and that it can determine the presence of a tumour in images taken from breast tissue in vivo.
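Both algorithms incorporate total variation regularization, which penalizes the sum of absolute jumps in the reconstructed parameter field. A one-dimensional sketch of the penalty (with invented illustrative values, not data from the thesis):

```python
# 1-D total variation penalty: the sum of absolute differences between
# neighbouring values. It favours piecewise-constant elasticity maps,
# which suits locating a stiff inclusion (tumour) inside soft tissue.

def total_variation(values):
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

smooth = [1.0, 1.1, 1.2, 1.3]
stepped = [1.0, 1.0, 3.0, 3.0]    # one sharp inclusion boundary
print(round(total_variation(smooth), 2), total_variation(stepped))
```

A gradual ramp and a single sharp step can have comparable penalties, which is why total variation allows sharp tumour boundaries while still suppressing noisy oscillations.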
APA, Harvard, Vancouver, ISO, and other styles
29

Winter, Marcus. "A design space for social object labels in museums." Thesis, University of Brighton, 2016. https://research.brighton.ac.uk/en/studentTheses/73a2c271-e987-4226-a7b0-3031a824f6d7.

Full text
Abstract:
Taking a problematic user experience with ubiquitous annotation as its point of departure, this thesis defines and explores the design space for Social Object Labels (SOLs), small interactive displays aiming to support users' in-situ engagement with digital annotations of physical objects and places by providing up-to-date information before, during and after interaction. While the concept of ubiquitous annotation has potential applications in a wide range of domains, the research focuses in particular on SOLs in a museum context, where they can support the institution's educational goals by engaging visitors in the interpretation of exhibits and providing a platform for public discourse to complement official interpretations provided on traditional object labels. The thesis defines and structures the design space for SOLs, investigates how they can support social interpretation in museums and develops empirically validated design recommendations. Reflecting the developmental character of the research, it employs Design Research as a methodological framework, which involves the iterative development and evaluation of design artefacts together with users and other stakeholders. The research identifies the particular characteristics of SOLs and structures their design space into ten high-level aspects, synthesised from taxonomies and heuristics for similar display concepts and complemented with aspects emerging from the iterative design and evaluation of prototypes. It presents findings from a survey exploring visitors' mental models, preferences and expectations of commenting in museums and translates them into requirements for SOLs. It reports on scenario-based design activities, expert interviews with museum professionals, formative user studies and co-design sessions, and two empirical evaluations of SOL prototypes in a gallery environment. 
Pulling together findings from these research activities, it then formulates design recommendations for SOLs and supports them with related evidence and implementation examples. The main contributions are (i) to delineate and structure the design space for SOLs, which helps to ground SOLs in the literature and understand them as a distinct display concept with its own characteristics; (ii) to explore, for the first time, a visitor perspective on commenting in museums, which can inform research, development and policies on user-generated content in museums and the wider cultural heritage sector; and (iii) to develop empirically validated design recommendations, which can inform future research and development into SOLs and related display concepts. The thesis concludes by summarising findings in relation to its stated research questions, restating its contributions from ubiquitous computing, domain and methodology perspectives, and discussing open issues and future work.
APA, Harvard, Vancouver, ISO, and other styles
30

Bak, Jun Hyeong. "Sustainable urban development in South Korea : compact urban form, land use, housing type, and development methods." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4781/.

Full text
Abstract:
Over the past few decades, South Korea has experienced economic development and urbanisation, the effects of which have included environmental degradation and social problems. The principles of sustainable development have gained support as an approach to dealing with these issues, and the compact city has been proposed as a means of delivering sustainable development without the sprawl of Western cities. This thesis examines the applicability of the compact city to South Korea, particularly to large-scale developments, through the perspective of sustainable development. The research questions, ‘How and why have urban developments in South Korea been accompanied by compactness?’ and ‘What implications does this have for sustainable development?’, are examined through two case studies: Yong-in, a city developed by diverse methods, and Se-jong, a city developed as a single new project. The case studies demonstrate that new high-rise apartment settlements in South Korea have achieved a high degree of compactness, and residents have appreciated their liveability and made them a popular housing choice. The thesis concludes that the compact city in South Korean urban development is not only feasible but also acceptable to residents, and it suggests a compact city model and strategies applicable in the South Korean context.
APA, Harvard, Vancouver, ISO, and other styles
31

Brizic, Biloglav Marija. "Preschool teachers´ views on preschool mathematical environment." Thesis, Malmö högskola, Fakulteten för lärande och samhälle (LS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-31962.

Full text
Abstract:
This study concerns the preschool's mission to create good conditions for children's mathematical understanding. The aim of the study is to describe how six preschool teachers work to create a mathematical learning environment for children aged 3-5. To meet this aim, two research questions are posed: How do six teachers arrange the preschool's environment and materials to stimulate children's mathematical understanding? And which mathematical activities do the teachers describe as part of stimulating children's mathematical understanding? The study is grounded in a sociocultural perspective with a focus on environment, language and collaborative learning. The sample comprises six teachers with varying education and work experience, all working in the same metropolitan municipality. The empirical material was collected through qualitative interviews. The study's most important results show that the interviewed teachers work purposefully to create and re-create the environment and materials according to the children's interests and needs. The tools the teachers use in their practice to develop children's mathematical understanding are materials with which the children can playfully sort, match, build and construct, count, and work on spatial awareness. All the teachers work with mathematics that can be linked to the six mathematical activities: counting, locating, measuring, designing, playing and explaining. The study also shows that the teachers create mathematical activities in consultation with the children and in contexts that are meaningful to them. According to the six teachers, a calm environment with a clear structure is needed to challenge the children in their mathematical learning. The conclusion that can be drawn from the results is that the teachers in the preschools concerned are well prepared to fulfil the preschool's mission of creating good conditions for children's mathematical understanding.
APA, Harvard, Vancouver, ISO, and other styles
32

Hampton, Jennifer. "The nature of quantitative methods in A level sociology." Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/118019/.

Full text
Abstract:
British sociology has been characterised as suffering from a 'quantitative deficit' originating from a shift towards qualitative methods in the discipline in the 1960s. Over the years, this has inspired a number of initiatives aimed at improving number work within the discipline, of which the Q-step programme is the most recent. These initiatives, and the work that supports them, primarily concern themselves with the curricula, attitudes, and output of students and academics within Higher Education. As such, the role that the substantive A level plays in post-16 quantitative education has been largely ignored. This thesis addresses this apparent gap in the literature, providing a study of the curriculum, with a particular focus on the quantitative method element therein. The thesis takes a mixed-method approach to curriculum research, encompassing the historical as well as the current, and the written as well as the practiced. The analysis is presented in a synoptic manner, interweaving data from across the methods used, in an attempt to provide an integrated and holistic account of A level Sociology. An overarching theme of marginalisation becomes apparent; not least with the subject itself, but also with quantitative methods positioned as problematic within the research methods element of the curriculum, which is itself bound and limited. The high-stakes exam culture is shown to dominate the behaviour of both teachers and students, regardless of their attitudes and understanding of the relevancy and/or importance of quantitative methods in the subject. Taken together, these findings imply a potential problem for recruitment into quantitative sociology, whilst offering an avenue by which this might be addressed. Linked to the high-stakes performativity culture, a novel conceptualisation of teachers' understandings of the relationship between their role, the curriculum, the discipline, and notions of powerful knowledge is offered.
APA, Harvard, Vancouver, ISO, and other styles
33

Subedi, Harendra. "Mathematical Modelling of Delegation in Role Based Access Control." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222381.

Full text
Abstract:
One of the most widespread access control models for assigning permissions to users is Role Based Access Control (RBAC). The basic idea is to limit access to resources through the indirection of roles, which are associated both with users and with permissions. Research has been conducted on clarifying RBAC and its components, as well as on creating mathematical models describing different aspects of its administrative issues. To date, however, no work has been done on the formalization (mathematical modelling) of delegation and revocation of roles in RBAC, which provide important extensions of the policy and give flexibility in user-to-user delegation of roles, especially in environments where roles are organized in a hierarchy. Delegation allows a user with a role that is higher in the hierarchy to assign part of that role to someone lower in the hierarchy or at the same level, either for a limited time or permanently. The reverse process is called revocation and consists of ending different types of delegations. This thesis answers the following research question: how can different mathematical models for delegation and revocation of roles in RBAC be constructed? It presents different types of delegation and techniques for revocation, with comprehensive mathematical modelling of both processes. Since the objective of this thesis is to derive mathematical models for delegation and revocation of roles under an RBAC policy, a formal method is applied. The mathematical models developed include grant and transfer delegation with and without role hierarchy, time-based revocation, user-based revocation and cascading revocation. A case scenario of an organization using RBAC is used to illustrate and clarify the mathematical models. 
The mathematical models presented here can serve as a starting point for developing implementations of delegation and revocation on top of existing authorization modules based on the RBAC model.
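The grant delegation and cascading revocation described above can be sketched in code. This is an illustrative toy with invented class and rule details, not the thesis's formal model:

```python
# Toy sketch of grant delegation and cascading revocation under a role
# hierarchy. Names, data structures, and the delegation rule are invented
# for illustration; they are not the thesis's formal definitions.

class RBAC:
    def __init__(self, role_level):
        self.role_level = role_level   # role -> seniority rank
        self.user_roles = {}           # user -> set of roles held
        self.delegations = []          # (delegator, delegatee, role)

    def add_user(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, delegator, delegatee, role):
        # Grant delegation: the delegator keeps the role and the delegatee
        # gains it, provided the delegatee sits at the same level or lower.
        if role not in self.user_roles.get(delegator, set()):
            raise ValueError("delegator does not hold the role")
        delegatee_rank = max(self.role_level[r]
                             for r in self.user_roles[delegatee])
        if delegatee_rank > self.role_level[role]:
            raise ValueError("can only delegate downward or sideways")
        self.user_roles[delegatee].add(role)
        self.delegations.append((delegator, delegatee, role))

    def revoke_cascading(self, delegator, delegatee, role):
        # Cascading revocation: undo the delegation and, recursively, any
        # further delegations of the same role made by the delegatee.
        self.delegations.remove((delegator, delegatee, role))
        self.user_roles[delegatee].discard(role)
        for d, e, r in [t for t in self.delegations
                        if t[0] == delegatee and t[2] == role]:
            self.revoke_cascading(d, e, r)

rbac = RBAC({"manager": 2, "clerk": 1})
for user in ("alice", "bob", "carol"):
    rbac.add_user(user, "clerk")
rbac.add_user("alice", "manager")
rbac.grant("alice", "bob", "manager")    # alice delegates downward
rbac.grant("bob", "carol", "manager")    # bob re-delegates
rbac.revoke_cascading("alice", "bob", "manager")
print(sorted(rbac.user_roles["carol"]))  # carol's grant is swept away too
```

Revoking alice's delegation to bob also removes the role bob passed on to carol, which is the defining behaviour of cascading revocation.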
APA, Harvard, Vancouver, ISO, and other styles
34

Cody, Emily. "Mathematical Modeling of Public Opinion using Traditional and Social Media." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/620.

Full text
Abstract:
With the growth of the internet, data from text sources has become increasingly available to researchers in the form of online newspapers, journals, and blogs. This data presents a unique opportunity to analyze human opinions and behaviors without soliciting the public explicitly. In this research, I utilize newspaper articles and the social media service Twitter to infer self-reported public opinions and awareness of climate change. Climate change is one of the most important and heavily debated issues of our time, and analyzing large-scale text surrounding this issue reveals insights surrounding self-reported public opinion. First, I inquire about public discourse on both climate change and energy system vulnerability following two large hurricanes. I apply topic modeling techniques to a corpus of articles about each hurricane in order to determine how these topics were reported on in the post event news media. Next, I perform sentiment analysis on a large collection of data from Twitter using a previously developed tool called the "hedonometer". I use this sentiment scoring technique to investigate how the Twitter community reports feeling about climate change. Finally, I generalize the sentiment analysis technique to many other topics of global importance, and compare to more traditional public opinion polling methods. I determine that since traditional public opinion polls have limited reach and high associated costs, text data from Twitter may be the future of public opinion polling.
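The word-score averaging behind the hedonometer can be sketched in a few lines. The scores below are invented for illustration; the actual instrument uses the crowd-sourced labMT list of roughly 10,000 word happiness ratings on a 1-9 scale:

```python
# Minimal word-score sentiment averaging in the spirit of the hedonometer.
# The dictionary below is invented for illustration only; the real tool
# averages crowd-sourced labMT happiness ratings (scale 1-9) over a text.

happiness = {"happy": 8.3, "love": 8.4, "climate": 5.0,
             "disaster": 1.9, "storm": 3.0, "hope": 7.0}

def average_happiness(text):
    scores = [happiness[w] for w in text.lower().split() if w in happiness]
    return sum(scores) / len(scores) if scores else None

texts = ["Love and hope after the storm", "climate disaster"]
print([round(average_happiness(t), 2) for t in texts])  # → [6.13, 3.45]
```

Words absent from the score list (here "and", "after", "the") are simply skipped, so the average reflects only the scored vocabulary of the text.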
APA, Harvard, Vancouver, ISO, and other styles
35

NOVELLO, NOEMI. "The Quest for Integration in Mixed Methods Inquiry: A Research Synthesis on Mixed Methods Studies in Social Sciences." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241259.

Full text
Abstract:
Mixed methods studies in social inquiry may follow two main perspectives on integration: on the one hand, complementarity seeks information enrichment, a fuller and more comprehensive picture of a social phenomenon; on the other, convergence focuses on the chance of overcoming single-method bias through mixing. While the first approach is rather unproblematic, both theoretically and empirically, convergence seems to pose additional challenges, especially in the elicitation of "meta-inferences". This dissertation presents a methodological research synthesis of mixed methods studies in social inquiry. The research questions concern the understanding, implementation and epistemological legitimization of integration within the academic community of scholars applying mixed methods in the social sciences. Diverse research strategies were implemented in order to meet the research objectives: automated content analysis was performed on articles published in academic journals; citation network analysis was applied to the reference lists of the same papers; and semi-structured interviews with experts, together with the related thematic analysis, were used to address scholars' points of view on integration and to explore paradigms and epistemological issues.
APA, Harvard, Vancouver, ISO, and other styles
36

Hanappi, Hardy, and Manuel Scholz-Wäckerle. "Evolutionary Political Economy: Content and Methods." Taylor & Francis Group, 2017. http://dx.doi.org/10.1080/07360932.2017.1287748.

Full text
Abstract:
In this paper we present the major theoretical and methodological pillars of evolutionary political economy. We proceed in four steps. Aesthetics: In chapter 1 the immediate appeal of evolutionary political economy as a specific scientific activity is described. Content: Chapter 2 explores the object of investigation of evolutionary political economy. Power: The third chapter develops the interplay between politics and economics. Methods: Chapter 4 focuses on the evolution of methods necessary for evolutionary political economy.
APA, Harvard, Vancouver, ISO, and other styles
37

Johansson, Erik. "Testing the Explanation Hypothesis using Experimental Methods." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57308.

Full text
Abstract:

The Explanation Hypothesis is a psychological hypothesis about how people attribute moral responsibility. The hypothesis makes general claims about everyday thinking of moral responsibility and is also said to have important consequences for related philosophical issues. Since arguments in favor of the hypothesis are largely based on a number of intuitive cases, there is a need to investigate whether it is supported by empirical evidence. In this study, the hypothesis was tested by means of quantitative experimental methods. The data were collected by conducting online surveys in which participants were introduced to a number of different scenarios. For each scenario, questions about moral responsibility were asked. Results provide general support for the Explanation Hypothesis, and there are therefore more reasons to take its proposed consequences seriously.

APA, Harvard, Vancouver, ISO, and other styles
38

Amartey, Philomina. "A COMPARISON OF SOME ESTIMATION METHODS FOR HANDLING OMITTED VARIABLES : A SIMULATION STUDY." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412896.

Full text
Abstract:
The omitted variable problem is a primary statistical challenge in various observational studies. Failure to control for omitted variable bias in a regression analysis can alter the efficiency of the results obtained. The purpose of this study is to compare the performance of four estimation methods (proxy variable, instrumental variable, fixed effects, first difference) in controlling the omitted variable problem when the omitted variable is varying with time, constant over time, or slightly varying with time. Results from the Monte Carlo study showed that the perfect proxy variable estimator performed better than the other models in all three cases. The instrumental variable estimator performed better than the fixed effects and first difference estimators except in the case when the omitted variable is constant over time. Also, the fixed effects estimator performed better than the first difference estimator when the omitted variable is time-invariant, and vice versa when the omitted variable is slightly varying with time.
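The bias mechanism being simulated can be illustrated with a minimal Monte Carlo draw. This is the textbook omitted-variable setup, not the study's actual design:

```python
import random

# Textbook omitted-variable bias (illustrative, not the study's design):
# the true model is y = 1 + 2x + 3z + e with z correlated with x.
# Regressing y on x alone pushes the slope estimate away from the true 2.
random.seed(1)
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.5 * zi + random.gauss(0, 1) for zi in z]   # x and z are correlated
y = [1 + 2 * xi + 3 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    # Simple-regression OLS slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

print(round(ols_slope(x, y), 2))  # well above the true slope of 2
```

With these parameters the probability limit of the naive slope is 2 + 3·cov(x, z)/var(x) = 3.2, so the estimate sits far from the true coefficient, which is exactly the bias the four estimators in the study try to remove.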
APA, Harvard, Vancouver, ISO, and other styles
39

Westergren, Cecilia. "”Det är ju lättare sagt än gjort ju…” : En intervjustudie om lärares strategier för att främja matematiska samtal." Thesis, Jönköping University, Högskolan för lärande och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50887.

Full text
Abstract:
Research has shown that establishing a dialogic classroom culture provides richer learning opportunities for pupils, since they get richer opportunities to exchange thoughts and conceptions. The study is a qualitative interview study of how teachers working in preschool class and grades 1-3 work to create dialogue in the classroom. The aim of the interview analysis is to highlight different strategies teachers use to promote a conversational classroom culture. The interview data were coded and analysed in relation to previous research. The research presented in the study is grounded in the constructivist tradition and places great weight on norm-building, specifically the sociomathematical norms tied to mathematical learning. The research, which largely consists of substantial research reviews, emphasises that abilities such as reasoning and arguing develop only in a social context, and that teachers' actions and structuring of teaching are therefore of great importance. The study's results give a picture of the expectations the interviewed teachers have of their pupils, that is, the prevailing social norms: to participate actively, to verbalise, and to interact. The results also show the specific sociomathematical norms: to use correct mathematical terms, to use different mathematical representations, and to give acceptable mathematical explanations. The analysis shows that the teachers use a wide range of strategies to establish norms for a conversational classroom climate. Examples of such strategies are establishing fixed conversational structures and response situations that the pupils understand, positive reinforcement and feedback, letting pupils succeed in their active participation, modelling, and revoicing. The aim is to create patterns and norms for mathematical communication. 
Through the teachers' strategic choice of tasks, participation, interaction and communication patterns are promoted; that is, the pupils gain a normative understanding of "everyone takes part", "how we talk about mathematics" and "what we should talk about" in genuinely mathematical conversations. The teachers appear to aim for conversations that become exploratory in character. When teachers can view their teaching from a norm perspective, it can influence how they structure mathematics teaching towards a conversational classroom climate and a deepened participation in pupils' interaction. Interaction between pupils and teachers gives pupils access to others' mathematical conceptions, which can enrich the teaching significantly. According to research, such structures enable the negotiation of sociomathematical norms, which can help develop higher cognitive abilities in pupils. The specific sociomathematical norms negotiated in the social context have the potential, with the teacher's support, to give pupils richer opportunities to reflect on, deepen and refine their explanations.
APA, Harvard, Vancouver, ISO, and other styles
40

Loesch, III George F. "A Replication and Validation Study of Methods for Calculating Self-Conception Disparity." DigitalCommons@USU, 1993. https://digitalcommons.usu.edu/etd/2410.

Full text
Abstract:
Self-esteem has been defined by James as a ratio of one's successes to one's pretensions. Two formulas have been utilized to calculate self-conception disparity: the subtraction-absolute-value formula and the ratio formula, which was derived from James. Stuart compared and contrasted these two formulas. The purpose of this study was to replicate the work of Stuart, utilizing the same construct scales and statistical methodology, adding the Moos Family Environment Scale, and taking into account the age and gender of the respondent. The results of this study indicate that, as the two formulas are compared, there is a difference in the amount of variance accounted for when the age and gender of the subject are taken into consideration. Two concerns identified as a result of this study are: 1) in relation to the construct-related scales utilized in this study, are the two disparity formulas measuring the same thing for males and females and in each age group? and 2) why did age and gender have such an impact on the amount of variance accounted for between the two formulas for calculating self-conception disparity?
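The two scoring rules can be compared directly. The rating values below are hypothetical, not study data; they show why the formulas can order the same respondents differently:

```python
# Two ways to score self-conception disparity from paired ratings of the
# "actual" and "ideal" self. The 1-7 ratings below are hypothetical.

def ratio_disparity(actual, ideal):
    # After James: self-esteem as successes divided by pretensions.
    return actual / ideal

def subtraction_disparity(actual, ideal):
    return abs(actual - ideal)

pairs = [(4, 6), (2, 4)]   # same absolute gap of 2, different ratios
print([subtraction_disparity(a, i) for a, i in pairs])          # → [2, 2]
print([round(ratio_disparity(a, i), 2) for a, i in pairs])      # → [0.67, 0.5]
```

Because equal absolute gaps can correspond to unequal ratios, the two formulas need not account for the same variance in outcome measures, which is the comparison at the heart of the study.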
APA, Harvard, Vancouver, ISO, and other styles
41

Geraci, Andrea. "Three essays in microeconometric methods and applications." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/19599/.

Full text
Abstract:
This thesis comprises three essays. The first two make use of individual-level data on British workers from the British Household Panel Survey to study different aspects of non-standard employment. The first essay, co-authored with Mark Bryan, presents estimates of the implicit monetary value that workers attach to non-standard work. We employ and compare two alternative methods to measure workers' willingness to pay for four non-standard working arrangements: flexitime, part-time, night work, and rotating shifts. The first method is based on job-to-job transitions within a job search framework, while the second is based on estimating the determinants of subjective well-being. We find that the results of the two methods differ, and relate this to conceptual differences between utility and subjective well-being proposed recently in the happiness literature. The second essay builds on economic theories of consumption and saving choices to investigate whether workers expect temporary work to be a stepping stone towards better jobs, or a source of uncertainty and insecurity. The evidence provided shows that temporary work entails both expected improvements in future earnings and uncertainty. Households' consumption and saving choices are used to assess which of these two effects prevails, providing an alternative empirical approach to measuring the consequences of temporary work for workers' welfare. The results suggest that a stepping stone effect towards better jobs is present and, more importantly, is perceived by individuals and internalized in their behaviour. Finally, the last essay has a specific focus on econometric methods. A Monte Carlo experiment is used to investigate the extent to which the Poisson RE estimator is likely to produce results similar to those obtained using the Poisson FE estimator when the random effects assumption is violated. 
The first order conditions of the two estimators differ by a term that tends to zero when the number of time periods (T), or the variance of the time-constant unobserved heterogeneity (V), tend to infinity. Different data generating processes are employed to understand if this result is likely to apply in common panel data where both characteristics are finite. As expected, the bias of RE estimates decreases with T and V. However, the same does not hold for the estimated coefficient on the time invariant dummy variable embedded in the conditional mean, which remains substantially biased. This raises a note of caution for practitioners.
APA, Harvard, Vancouver, ISO, and other styles
42

Meskarian, Rudabeh. "Stochastic programming models and methods for portfolio optimization and risk management." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/347154/.

Full text
Abstract:
This project is focused on stochastic models and methods and their application in portfolio optimization and risk management. In particular it involves development and analysis of novel numerical methods for solving these types of problem. First, we study new numerical methods for a general second order stochastic dominance model where the underlying functions are not necessarily linear. Specifically, we penalize the second order stochastic dominance constraints to the objective under Slater’s constraint qualification and then apply the well known stochastic approximation method and the level function methods to solve the penalized problem and present the corresponding convergence analysis. All methods are applied to some portfolio optimization problems, where the underlying functions are not necessarily linear all results suggests that the portfolio strategy generated by the second order stochastic dominance model outperform the strategy generated by the Markowitz model in a sense of having higher return and lower risk. Furthermore a nonlinear supply chain problem is considered, where the performance of the level function method is compared to the cutting plane method. The results suggests that the level function method is more efficient in a sense of having lower CPU time as well as being less sensitive to the problem size. This is followed by study of multivariate stochastic dominance constraints. We propose a penalization scheme for the multivariate stochastic dominance constraint and present the analysis regarding the Slater constraint qualification. The penalized problem is solved by the level function methods and a modified cutting plane method and compared to the cutting surface method proposed in [70] and the linearized method proposed in [4]. The convergence analysis regarding the proposed algorithms are presented. 
The proposed numerical schemes are applied to a generic budget allocation problem, where it is shown that the proposed methods outperform the linearized method when the problem size is large. Moreover, a portfolio optimization problem is considered, where it is shown that the portfolio strategy generated by the multivariate second order stochastic dominance model outperforms the portfolio strategy generated by the Markowitz model in the sense of having higher return and lower risk. The performance of the algorithms is also investigated with respect to computation time and problem size. It is shown that the level function method and the cutting plane method outperform the cutting surface method in the sense of both having lower CPU time and being less sensitive to the problem size. Finally, reward-risk analysis is studied as an alternative to stochastic dominance. Specifically, we study robust reward-risk ratio optimization. We propose two robust formulations, one based on a mixture distribution and the other based on a first order moment approach. We propose a sample average approximation formulation as well as a penalty scheme for the two robust formulations respectively, and solve the latter with the level function method. The convergence analysis is presented, the proposed models are applied to the Sortino ratio, and some numerical test results are presented. The numerical results suggest that the robust formulation based on the first order moment yields the most conservative portfolio strategy compared to the mixture distribution model and the nominal model.
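The second order stochastic dominance (SSD) criterion underlying the abstract above has a simple empirical form. As a hedged illustration only (the textbook sample criterion, not the thesis's penalized algorithm; the return samples are placeholders), sample X dominates sample Y in the SSD sense when the expected shortfall E[(t − X)+] is no larger than E[(t − Y)+] at every threshold t:

```python
def expected_shortfall_below(returns, t):
    """Empirical mean shortfall E[(t - R)+] of a return sample below threshold t."""
    return sum(max(t - r, 0.0) for r in returns) / len(returns)

def ssd_dominates(x, y, thresholds=None):
    """True if sample x second-order stochastically dominates sample y.

    Empirical criterion: E[(t - X)+] <= E[(t - Y)+] at every threshold t,
    checked here on the pooled sample values (sufficient for piecewise-linear
    empirical shortfall functions).
    """
    if thresholds is None:
        thresholds = sorted(set(x) | set(y))
    return all(
        expected_shortfall_below(x, t) <= expected_shortfall_below(y, t) + 1e-12
        for t in thresholds
    )

# Two samples with equal means: the one with less downside risk dominates.
safe = [0.01, 0.02, 0.03, 0.02]
risky = [-0.05, 0.09, 0.03, 0.01]
print(ssd_dominates(safe, risky))  # True
print(ssd_dominates(risky, safe))  # False
```

This is the "higher return and lower risk" comparison in miniature: a risk-averse investor who prefers more to less would choose the dominating sample.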
APA, Harvard, Vancouver, ISO, and other styles
43

Behzadi, Mahsa. "A mathematical model of Phospholipid Biosynthesis." Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00674401.

Full text
Abstract:
At a time when high-throughput data on cellular metabolism and its evolution are being acquired, it is essential to have models that can integrate these data into a coherent whole, interpret the revealing metabolic variations and the key steps at which regulation may act, and even expose apparent contradictions that call into question the foundations on which the model itself is built. This is the kind of work I undertook on experimental data obtained in the biology laboratory on the metabolism of tumour cells in response to an anti-cancer treatment. I focused on modelling one particular aspect of this metabolism: the metabolism of glycerophospholipids, which are good markers of cell proliferation. Phospholipids make up the bulk of a cell's membranes, and the study of their synthesis (particularly in mammalian cells) is therefore an important subject. Here, we chose to build a mathematical model based on ordinary differential equations, essentially with hyperbolic (Michaelis-Menten) kinetics, but also with mass-action and diffusion kinetics. The model, composed of 8 differential equations and hence 8 substrates of interest, naturally contains parameters that are unknown in vivo, some of which depend on cellular conditions (cell differentiation, pathologies, ...). The model separates the structure of the metabolic network, the writing of the stoichiometry matrix, the rate equations, and finally the differential equations. The chosen organism is the murine model (mouse/rat), because it is itself a model of the human. 
Several conditions are successively considered for parameter identification, in order to study the links between phospholipid synthesis and cancer: the healthy rat liver; the B16 melanoma and the 3LL-line carcinoma in the mouse, respectively without treatment, during treatment with chloroethyl-nitrosourea, and after treatment; and finally the B16 melanoma in the mouse under methionine-deprivation stress. In summary, this work provides a new interpretation of the experimental data by showing the essential role of PEMT and the superstable nature of the steady state of the phospholipid metabolic network during carcinogenesis and cancer treatment. It clearly demonstrates the advantage of using a mathematical model in the interpretation of complex metabolic data.
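The hyperbolic (Michaelis-Menten) kinetics central to the model above can be sketched in miniature. This is an illustrative single-substrate toy, not the thesis's 8-equation network, and the parameter values are arbitrary placeholders: a substrate S is consumed at rate v = Vmax·S/(Km + S), integrated by forward Euler.

```python
def michaelis_menten_rate(s, vmax, km):
    """Michaelis-Menten consumption rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def integrate_substrate(s0, vmax, km, dt=0.01, steps=1000):
    """Forward-Euler integration of dS/dt = -v(S); returns the trajectory."""
    traj = [s0]
    s = s0
    for _ in range(steps):
        s = max(s - dt * michaelis_menten_rate(s, vmax, km), 0.0)
        traj.append(s)
    return traj

traj = integrate_substrate(s0=5.0, vmax=1.0, km=0.5)
print(traj[0], traj[-1])  # substrate decays monotonically toward zero
```

The saturating shape is the point: when S ≫ Km the rate is near Vmax (zero-order decay), and when S ≪ Km it falls off linearly in S, which is the qualitative behaviour each enzymatic step of the phospholipid network contributes.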
APA, Harvard, Vancouver, ISO, and other styles
44

Botha, Stephen Gordon. "The effect of evolutionary rate estimation methods on correlations observed between substitution rates in models of evolution." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/19938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Enkhbat, Javzmaa, and Patrik Wikström. "Swedish Social Workers’ Perceptions of Harm Reduction Methods in Substance Abuse Treatment." Thesis, Högskolan i Gävle, Avdelningen för socialt arbete och kriminologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-30220.

Full text
Abstract:
The aim of this study was to explore how harm reduction methods are perceived by Swedish social workers working with the treatment of substance abuse. To that end, a qualitative research method with semi-structured interviews was used with five social workers practicing within the social services adult unit in three different municipalities in mid-Sweden. The gathered data were analyzed through the perspective of two related theoretical frameworks, social constructionism and discourse theory. The results revealed diverse perceptions and perspectives regarding harm reduction methods, conflicted both between and within participants. Methods practiced in Sweden were to a large degree perceived as positive. Perceptions regarding methods outside of Sweden were largely split between an overall negative perception and a conflicted one, torn between negative views on the legitimization of drugs and positive views of preventive outcomes. Viewed through the chosen theoretical framework, the participating social workers’ perceptions appeared to be influenced by experience, social context, and a prohibitionist discourse on drug abuse that has long been predominant in Sweden.
APA, Harvard, Vancouver, ISO, and other styles
46

Sandlin, Carter. "An Analysis of Online Training: Effectiveness, Efficiency, and Implementation Methods in a Corporate Environment." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/honors/57.

Full text
Abstract:
The current research assesses online training specifically as it relates to learning effectiveness in a corporate environment. Research concerning the effectiveness of online learning is abundant; however, none of it is compiled in one place, nor does it specifically interpret the information to determine the applicability of online training in a corporate environment. The thesis analyzes numerous secondary sources to compile relevant statistics on the effectiveness of online training resources. Building on this research, the thesis culminates in recommendations for implementing an online training process, useful for managers, that focuses on effective learning, the need for personal interaction, and cost savings.
APA, Harvard, Vancouver, ISO, and other styles
47

Ellison, Michael Steven. "Ninth Grade Student Responses to Authentic Science Instruction." Thesis, Portland State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3722299.

Full text
Abstract:

This mixed methods case study documents an effort to implement authentic science and engineering instruction in one teacher’s ninth grade science classrooms in a science-focused public school. The research framework and methodology are derived from work developed and reported by Newmann and others (Newmann & Associates, 1996). Based on a working definition of authenticity, data were collected for eight months on the authenticity of the experienced teacher’s pedagogy and of student performance. Authenticity was defined as the degree to which a classroom lesson, an assessment task, or an example of student performance demonstrates construction of knowledge through use of the meaning-making processes of science and engineering, and has some value to students beyond demonstrating success in school (Wehlage et al., 1996). Instruments adapted for this study produced a rich description of the authenticity of the teacher’s instruction and of student performance.

The pedagogical practices of the classroom teacher were measured as moderately authentic on average. However, the authenticity model revealed the teacher’s strategy of interspersing relatively low-authenticity instructional units focused on building science knowledge with much higher-authenticity tasks requiring students to apply these concepts and skills. The authenticity of the construction-of-knowledge and science meaning-making components of authentic pedagogy was found to be greater than the authenticity of affordances for students to find value in classroom activities beyond demonstrating success in school. Instruction frequently included one aspect of value beyond school, connections to the world outside the classroom, but students were infrequently afforded the opportunity to present their classwork to audiences beyond the teacher.

When the science instruction in the case was measured to afford a greater level of authentic intellectual work, a higher level of authentic student performance on science classwork was also measured. In addition, direct observation showed that student behavioral engagement was generally high but not associated with the authenticity of the pedagogy. Direct observation of student self-regulation found evidence that when instruction focused on core science and engineering concepts and made stronger connections to the student’s world beyond the classroom, student self-regulated learning was greater and included evidence of student ownership.

In light of the alignment between the model of authenticity used in this study and the Next Generation Science Standards (NGSS), the results suggest that further research on the value-beyond-school component of the model could improve understanding of student engagement and performance in response to the implementation of the NGSS. In particular, the study suggests a unique role that environmental education can play in affording student success in K-12 science, and a tool to measure that role.

APA, Harvard, Vancouver, ISO, and other styles
48

Christian, Richard Dennis Rhodes Dent. "A design for teaching preservice secondary social studies teachers methods for teaching critical thinking skills." Normal, Ill. Illinois State University, 1995. http://wwwlib.umi.com/cr/ilstu/fullcit?p9633389.

Full text
Abstract:
Thesis (Ed. D.)--Illinois State University, 1995.
Title from title page screen, viewed May 10, 2006. Dissertation Committee: Dent M. Rhodes (chair), Larry Kennedy, Kenneth Jerrich, Frederick Drake. Includes bibliographical references (leaves 195-204) and abstract. Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
49

Colino, Juan. "Audience engagement for presentations via interactive methods." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22447.

Full text
Abstract:
Keeping the audience engaged when presenting a topic in a conventional setting (a class presentation or a keynote at a conference) can be challenging. Often, presentations tend to be linear and non-engaging. It was my intention to research how the experience can be improved by using different methods to engage the audience. In this thesis the reader will find the results of my exploration and research on how to make presentations more engaging for the audience via interactive methods. After some background information, I go through the process of developing concepts that could improve the presenting experience. I describe different contexts where people deliver presentations and research these environments to discuss the context of the thesis. I also discuss the concept of audience engagement. After selecting one of these concepts, I describe the development of a prototype that illustrates the concept and discuss it after a series of user testing procedures. Finally, some conclusions and comments are discussed in the final part of the document.
APA, Harvard, Vancouver, ISO, and other styles
50

Du, Hailiang. "Combining statistical methods with dynamical insight to improve nonlinear estimation." Thesis, London School of Economics and Political Science (University of London), 2009. http://etheses.lse.ac.uk/66/.

Full text
Abstract:
Physical processes such as the weather are usually modelled using nonlinear dynamical systems. Statistical methods alone struggle to draw dynamical information from observations of nonlinear dynamics. This thesis focuses on combining statistical methods with dynamical insight to improve nonlinear estimation of initial states, parameters, and future states. In the perfect model scenario (PMS), a method based on the Indistinguishable States theory is introduced to produce initial conditions that are consistent with both observations and model dynamics. Our methods are demonstrated to outperform the variational method, Four-dimensional Variational Assimilation, and the sequential method, the Ensemble Kalman Filter. The problem of parameter estimation of deterministic nonlinear models is considered within the perfect model scenario, where the mathematical structure of the model equations is correct but the true parameter values are unknown. Traditional methods like least squares are known to be suboptimal, as they rest on the incorrect assumption that the distribution of forecast error is Gaussian IID. We introduce two approaches to address the shortcomings of traditional methods. The first approach forms the cost function based on probabilistic forecasting; the second focuses on the geometric properties of trajectories in the short term while noting the global behaviour of the model in the long term. Both methods are tested on a variety of nonlinear models, and the true parameter values are well identified. Outside the perfect model scenario, to estimate the current state of the model one needs to account for the uncertainty from both observational noise and model inadequacy. Methods assuming the model is perfect are either inapplicable or unable to produce optimal results. It is almost certain that no trajectory of the model is consistent with an infinite series of observations. 
There are pseudo-orbits, however, that are consistent with observations, and these can be used to estimate the model states. Applying the Indistinguishable States Gradient Descent algorithm with certain stopping criteria is introduced to find relevant pseudo-orbits. The difference between the Weakly Constrained Four-dimensional Variational Assimilation (WC4DVAR) method and the Indistinguishable States Gradient Descent method is discussed. By testing on two system-model pairs, our method is shown to produce more consistent results than the WC4DVAR method. An ensemble formed from the pseudo-orbit generated by the Indistinguishable States Gradient Descent method is shown to outperform the Inverse Noise ensemble in estimating the current states. Outside the perfect model scenario, we demonstrate that forecasts with relevant adjustment can be better than forecasts that ignore the existence of model error and use the model directly. A measure based on probabilistic forecast skill is suggested to assess predictability outside PMS.
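Probabilistic forecast-skill scoring of the kind the abstract above mentions has a standard sample form; as a hedged sketch (the logarithmic ignorance score on a discrete forecast, with placeholder probabilities and outcomes, not the thesis's actual skill measure), a forecaster's skill can be scored by the average negative log probability assigned to the outcomes that occurred:

```python
import math

def ignorance_score(forecast_probs, outcomes):
    """Mean ignorance (negative log2 likelihood) of probabilistic forecasts.

    forecast_probs: list of dicts mapping each possible outcome to its
    forecast probability; outcomes: the outcome observed at each step.
    Lower is better; a perfect deterministic forecast scores 0 bits.
    """
    total = 0.0
    for probs, outcome in zip(forecast_probs, outcomes):
        p = probs.get(outcome, 0.0)
        if p <= 0.0:
            return float("inf")  # outcome judged impossible: unbounded penalty
        total -= math.log2(p)
    return total / len(outcomes)

# A sharper forecaster beats a vaguer one on the same observed outcomes.
sharp = [{"rain": 0.9, "dry": 0.1}, {"rain": 0.2, "dry": 0.8}]
vague = [{"rain": 0.5, "dry": 0.5}, {"rain": 0.5, "dry": 0.5}]
observed = ["rain", "dry"]
print(ignorance_score(sharp, observed) < ignorance_score(vague, observed))  # True
```

The vague climatological forecast scores exactly 1 bit per binary outcome; any forecaster that concentrates probability on what actually happens scores lower, which is the sense in which probabilistic skill rewards both calibration and sharpness.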
APA, Harvard, Vancouver, ISO, and other styles