
Dissertations / Theses on the topic 'Research models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Research models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Benedetti, Andrea. "Generalized models in epidemiology research." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84472.

Abstract:
Traditionally, epidemiologists have used methods that categorize or assume a linear or log-linear form to model dose-response associations between continuous independent variables and binary or continuous outcomes. Recent advances in both statistical methodology and computing resources have made it possible to model relationships of greater complexity. Generalized additive models (GAMs) are a flexible nonparametric modelling tool that allows the user to model a variety of non-linear dose-response curves without imposing a priori assumptions about the functional form of the relationship. In GAMs, the extent of smoothing is controlled by the user-defined degrees of freedom (df). GAMs are generally used to: (i) suggest the parametric functional form for the association of interest; (ii) model the main effect nonparametrically; and (iii) control confounding by continuous covariates. By way of a series of simulation studies, this thesis addresses several unresolved methodological issues involving all three of these uses. Although GAMs have been used to detect and estimate thresholds in the association of interest, the methods have been mostly subjective or ad hoc, and their statistical properties have, for the most part, not been evaluated. In the first simulation study, a formal approach to the use of GAMs for this purpose is suggested and compared with simpler approaches. When GAMs are used to estimate the effect of the primary exposure of interest, different approaches to determining the amount of smoothing are employed. In the second simulation study, the impact on statistical inference of various a priori and automatic df-selection strategies is investigated, and a method to correct the type I error is introduced and evaluated.
In the final simulation study, parametric multiple logistic regression was compared with its nonparametric GAM extension in their ability to control for a continuous confounding variable and several issues related to the implementation of GAMs in this context are investigated.
The results of these simulations will help researchers make optimal use of the potential advantages of flexible assumption-free modelling.
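A minimal sketch of the smoothing idea described above: a binary outcome is regressed on a spline of a continuous exposure, with the degrees of freedom (df) controlling flexibility. It uses statsmodels with patsy's bs() as a stand-in for a full GAM; the data, names, and threshold shape are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 2000)                    # continuous exposure
logit = -1.0 + 0.8 * np.clip(x - 4.0, 0, None)  # threshold-shaped truth
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "x": x})

# Linear logistic model vs. a spline fit with more df (less smoothing).
linear = smf.glm("y ~ x", data=df, family=sm.families.Binomial()).fit()
smooth = smf.glm("y ~ bs(x, df=5)", data=df, family=sm.families.Binomial()).fit()
print(linear.aic, smooth.aic)  # the flexible fit should win under a threshold
```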
2

Lambert, Paul Christopher. "Hierarchical models in medical research." Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/29361.

Abstract:
This thesis describes and develops the use of hierarchical models in medical research from both a classical and a Bayesian perspective. Hierarchical models are appropriate when observations are clustered into larger units within a data set, which is a common occurrence in medical research. The use and versatility of hierarchical models is shown through a number of examples, with the aim of developing improved and more appropriate methods of analysis. The examples are real data sets and present real problems in terms of statistical analysis. The data sets presented include two data sets involving longitudinal data, where repeated measurements are clustered within individuals. One data set has repeated blood pressure measurements taken on pregnant women and the other consists of repeated peak expiratory flow measurements taken on asthmatic children. Bayesian and classical analyses are compared. A number of issues are explored, including the modelling of complex mean profiles, interpretation and quantification of variance components, and the modelling of heterogeneous within-subject variances. Other data sets are concerned with meta-analysis, where individuals are clustered within studies. The classical and Bayesian frameworks are compared, and one data set investigates the potential to combine estimates from different study types in order to estimate the attributable risk. One of the meta-analysis data sets included individual patient data with a substantial amount of missing covariate data. For this data set, models are developed that incorporate individuals with incomplete data when modelling survival times for children with neuroblastoma. This thesis thus demonstrates that hierarchical models are of great importance in analysing data in medical research. In many situations a Bayesian analysis provides a number of advantages over classical models, especially when introducing realistic complexity that would be hard to incorporate using classical methodology.
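A minimal sketch of a two-level hierarchical model of the kind the abstract describes, with repeated measurements clustered within individuals. It uses statsmodels' MixedLM (a classical fit, not the thesis's Bayesian analyses); all names and numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_obs = 50, 8
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
u = rng.normal(0, 2.0, n_subj)                   # subject-level intercepts
y = 100 + 0.5 * time + u[subj] + rng.normal(0, 1.0, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subject": subj})

# Random intercept per subject; fixed linear trend over time.
model = smf.mixedlm("y ~ time", data=df, groups=df["subject"]).fit()
print(model.summary())   # variance components: between- vs within-subject
```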
3

Spencer, Neil Hardy. "Longitudinal multilevel models in educational research." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306918.

4

NEVES, ANTONIO BERNARDO FERREIRA. "STATISTICAL MODELS IN ADVERTISING MARKET RESEARCH." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1991. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9046@1.

Abstract:
Advertising is without doubt one of the most important weapons of marketing. However, measuring its short-term efficiency as the result of a sales increase can be an arduous task, especially when it is compared with the results of promotions. For this reason, statistical models have been developed that use measures other than sales volume. At the same time, advertising has come to be seen as something more scientific. Moreover, it has taken a prominent place within market research, generating different schools of thought about the best way to guarantee the return on its investment. This work therefore attempts to bring together the most important and well-established theory and results of advertising research, so as to present a methodology that assures that return with some degree of certainty.
5

Messmacher, Eduardo B. (Eduardo Bernhart) 1972. "Models for project management." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9217.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000.
Also available online at the DSpace at MIT website.
Includes bibliographical references (p. 119-122).
Organizations perform work essentially through operations and projects. The characteristics of projects make them extremely difficult to manage: their non-repetitive nature rules out trial-and-error learning, while their short life span is particularly unforgiving to misjudgments. Some authors have found that effective scheduling is an important contributor to the success of research and development (R&D), as well as construction projects. The widely used critical path method for scheduling projects and identifying important activities fails to capture two important dimensions of the problem: the availability of different technologies (or options) to perform the activities, and the inherent problem of limited availability of resources that most managers face. Nevertheless, when one tries to account for such additional constraints, the problems become very hard to solve. In this thesis we propose an approach to the scheduling problem using a genetic algorithm, and compare its performance to more traditional approaches, such as an extension of a very innovative, recently proposed Lagrangian relaxation approach. The purpose of using genetic algorithms is twofold: first, to obtain good approximations to very hard problems, and second, to understand the limitations and virtues of this search technique. The purpose of this thesis is not only to develop the algorithms, but also to obtain insight about the implications of the additional constraints from the perspective of a project manager.
by Eduardo B. Messmacher.
S.M.
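A toy sketch of the genetic-algorithm idea: random-key chromosomes encode activity priorities, and a serial schedule-generation scheme decodes each chromosome into a resource-feasible schedule. The five-activity instance and GA settings are invented; this illustrates the technique, not the thesis's actual algorithm.

```python
import random

DUR = {1: 3, 2: 2, 3: 4, 4: 2, 5: 3}          # activity durations
REQ = {1: 2, 2: 1, 3: 2, 4: 1, 5: 2}          # resource usage per activity
PRED = {1: [], 2: [1], 3: [1], 4: [2], 5: [3, 4]}
CAP = 3                                        # renewable resource capacity

def makespan(keys):
    """Decode priority keys into start times (serial SGS); return makespan."""
    done, usage = {}, {}
    while len(done) < len(DUR):
        ready = [a for a in DUR if a not in done
                 and all(p in done for p in PRED[a])]
        a = min(ready, key=lambda j: keys[j - 1])      # highest priority first
        t = max([done[p] for p in PRED[a]] or [0])
        while any(usage.get(u, 0) + REQ[a] > CAP for u in range(t, t + DUR[a])):
            t += 1                                     # delay until feasible
        for u in range(t, t + DUR[a]):
            usage[u] = usage.get(u, 0) + REQ[a]
        done[a] = t + DUR[a]
    return max(done.values())

def ga(pop_size=30, gens=50):
    pop = [[random.random() for _ in DUR] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
            if random.random() < 0.2:                  # mutation
                child[random.randrange(len(child))] = random.random()
            children.append(child)
        pop = elite + children
    return min(makespan(c) for c in pop)

print("best makespan:", ga())
```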
6

Brus, Linda. "Recursive black-box identification of nonlinear state-space ODE models." Licentiate thesis, Uppsala : Department of Information Technology, Uppsala University, 2006. http://www.it.uu.se/research/publications/lic/2006-001/.

7

Wiedemann, Michael. "Robust parameter design for agent-based simulation models with application in a cultural geography model." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FWiedemann.pdf.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2010.
Thesis Advisor: Johnson, Rachel T.; Second Reader: Baez, Francisco R. Description based on title screen as viewed on July 15, 2010. Author's subject terms: Cultural Geography, Agent-Based Model (ABM), Irregular Warfare (IW), Theory of Planned Behavior (TpB), Bayesian Belief Nets (BBN), Counterinsurgency Operations (COIN), Stability Operations, Discrete Event Simulation (DES), Design of Experiments (DOX), Robust Parameter Design (RPD). Includes bibliographical references (p. 69-70). Also available in print.
8

Chandler, James D. "Estimating reliability with discrete growth models." Thesis, Monterey, Calif. : Naval Postgraduate School, 1988. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Dec/08Dec%5FNAME.pdf.

9

Monsch, Matthieu (Matthieu Frederic). "Large scale prediction models and algorithms." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84398.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Operations Research Center, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 129-132).
Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms which are able to scale to very high dimensions. In this thesis we are interested in showing how we can use structural properties of a given problem to come up with models applicable in practice, while keeping most of the value of a large data set. Our first application provides a provably near-optimal pricing strategy under large-scale competition, and our second focuses on capturing the interactions between extreme weather and damage to the power grid from large historical logs. The first part of this thesis is focused on modeling competition in Revenue Management (RM) problems. RM is used extensively across a swathe of industries, ranging from airlines to the hospitality industry to retail, and the internet has, by reducing search costs for customers, potentially added a new challenge to the design and practice of RM strategies: accounting for competition. This work considers a novel approach to dynamic pricing in the face of competition that is intuitive, tractable and leads to asymptotically optimal equilibria. We also provide empirical support for the notion of equilibrium we posit. The second part of this thesis was done in collaboration with a utility company in the North East of the United States. In recent years, there have been a number of powerful storms that led to extensive power outages. We provide a unified framework to help power companies reduce the duration of such outages. We first train a data-driven model to predict the extent and location of damage from weather forecasts. This information is then used in a robust optimization model to optimally dispatch repair crews ahead of time. Finally, we build an algorithm that uses incoming customer calls to compute the likelihood of damage at any point in the electrical network.
by Matthieu Monsch.
Ph.D.
10

McLure, Stewart William Douglas. "Improving models for translational research in osteoarthritis." Thesis, University of Leeds, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.590471.

Abstract:
Recent advances in medical technology have revealed osteoarthritis to be truly multifactorial, affecting all the major tissues in synovial joints. Despite these advances and the fact that osteoarthritis is the most prevalent joint disease worldwide, our grasp of its etiology and underlying pathological process is still remarkably poor. Subchondral bone pathology in osteoarthritis is one area in particular that has been neglected. Thus, investigators must focus on defining the processes that control the causation and repair mechanisms in osteoarthritis before a viable therapeutic target is identified. In vitro investigations have relied on animal models in osteoarthritis research; however, the degree to which they reflect human properties differs and their validity remains in question. The overall aim of this thesis was to develop a greater understanding of the osteoarthritis disease process and characterise the tibiofemoral osteochondral properties of three quadrupeds, to improve in vitro osteoarthritis research. A novel, fully quantitative methodology was utilised to characterise the natural history of bone marrow lesions, a form of trabecular bone disruption, in subjects with knee osteoarthritis. Using a combination of manual image segmentation and automated statistical bone shape modelling, the spatial distribution and volumetric change over a 24-month period was investigated. Furthermore, cartilage segmentations were incorporated to determine whether bone marrow lesions correlated with osteochondral progression in osteoarthritis. Results revealed the lesions to be inherently unstable and prevalent in subjects with knee osteoarthritis. The spatial distribution and significant association with a deleterious joint loading environment suggested a mechanical role in bone marrow lesion genesis. Worsening cartilage pathology was significantly associated with increased bone marrow lesion volume, and a striking co-location between trabecular disruption and cartilage denudation was identified. These findings identified a clear need for further investigations focussed on the role of trabecular bone changes in osteoarthritis. In vitro analysis was targeted as a potential forum for these studies. Animals slaughtered for human consumption are routinely used in vitro for musculoskeletal studies. Unfortunately, little data has been published validating model selection. A series of imaging and mechanical testing techniques were used to characterise variation in the osteochondral properties of porcine, bovine and ovine stifle joints. Significant interspecies variation in animal maturity and in osteochondral morphological and mechanical properties was identified. Results indicated that none of the quadrupeds provided an ideal whole-joint model for the human knee, but careful selection based on empirical evidence and study goals could be justified. In conclusion, more must be done to investigate how trabecular disruption affects the osteoarthritis pathological pathway, particularly in articular cartilage. In vitro analysis offers a controlled environment to perform these investigations; however, access to human cadaveric tissue is notoriously challenging. In vitro quadruped animal models offer an alternative tissue source; however, species selection must be validated based on tissue properties, and the inherent limitations of the model must be recognised.
11

Selén, Yngve. "Model selection." Licentiate thesis, Uppsala : Department of Information Technology, Uppsala University, 2004. http://www.it.uu.se/research/reports/lic/2004-003/.

12

Weimar, Jörg Richard. "Cellular automata models for excitable media." Thesis, Virginia Tech, 1991. http://hdl.handle.net/10919/41365.

Abstract:
A cellular automaton is developed for simulating excitable media. First, general "masks" as discrete approximations to the diffusion equation are examined, showing how to calculate the diffusion coefficient from the elements of the mask. The mask is then combined with a thresholding operation to simulate the propagation of waves (shock fronts) in excitable media, showing that (for well-chosen masks) the waves obey a linear "speed-curvature" relation with slope given by the predicted diffusion coefficient. The utility of different masks in terms of computational efficiency and adherence to a linear speed-curvature relation is assessed. Then, a cellular automaton model for wave propagation in reaction-diffusion systems is constructed based on these "masks" for the diffusion component and on singular perturbation analysis for the reaction component. The cellular automaton is used to model spiral waves in the Belousov-Zhabotinskii reaction. The behavior of the spiral waves and the movement of the spiral tip are analyzed. By comparing these results to solutions of the Oregonator PDE model, the automaton is shown to be a useful and efficient replacement for the standard numerical solution of the PDEs.
Master of Science
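A minimal sketch of the mask-plus-threshold mechanism the abstract describes: a weighted neighbourhood mask approximates diffusion, a resting cell fires when the masked sum of excited neighbours crosses a threshold, and excited cells pass through a refractory period. The mask weights, threshold and grid size are illustrative, not the thesis's calibrated values.

```python
import numpy as np
from scipy.signal import convolve2d

MASK = np.array([[0.05, 0.2, 0.05],
                 [0.2,  0.0, 0.2 ],
                 [0.05, 0.2, 0.05]])   # discrete approximation to diffusion
THRESH, REFRACTORY = 0.25, 3

# state 0 = rest, 1 = excited, 2..REFRACTORY+1 = recovering
state = np.zeros((100, 100), dtype=int)
state[50, 10] = 1                       # seed a wave on the left edge

for step in range(200):
    excited = (state == 1).astype(float)
    drive = convolve2d(excited, MASK, mode="same", boundary="fill")
    new = state.copy()
    new[(state == 0) & (drive > THRESH)] = 1          # threshold firing
    new[state == 1] = 2                                # enter refractory
    recovering = (state >= 2) & (state <= REFRACTORY)
    new[recovering] = state[recovering] + 1
    new[state == REFRACTORY + 1] = 0                   # back to rest
    state = new

print("excited cells after 200 steps:", int((state == 1).sum()))
```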
13

Cerrón-Palomino, Rodolfo, and Peter Kaulicke. "Research in Andean Linguistics." Pontificia Universidad Católica del Perú, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/113289.

14

Imam, Md Kaisar. "Improvements to the complex question answering models." Thesis, Lethbridge, Alta. : University of Lethbridge, c2011, 2011. http://hdl.handle.net/10133/3214.

Abstract:
In recent years the amount of information on the web has increased dramatically. As a result, it has become a challenge for researchers to find effective ways that can help us query and extract meaning from these large repositories. Standard document search engines try to address the problem by presenting the users a ranked list of relevant documents. In most cases, this is not enough, as the end-user has to go through the entire document to find out the answer he is looking for. Question answering, which is the retrieval of answers to natural language questions from a document collection, tries to remove the onus on the end-user by providing direct access to relevant information. This thesis is concerned with open-domain complex question answering. Unlike simple questions, complex questions cannot be answered easily, as they often require inferencing and synthesizing information from multiple documents. Hence, we considered the task of complex question answering as query-focused multi-document summarization. In this thesis, to improve complex question answering we experimented with both empirical and machine learning approaches. We extracted several features of different types (i.e. lexical, lexical semantic, syntactic and semantic) for each of the sentences in the document collection in order to measure its relevancy to the user query. We have formulated the task of complex question answering using a reinforcement framework, which, to the best of our knowledge, has not been applied to this task before and has the potential to improve itself by fine-tuning the feature weights from user feedback. We have also used unsupervised machine learning techniques (random walk, manifold ranking) and augmented semantic and syntactic information to improve them. Finally we experimented with question decomposition, where instead of trying to find the answer to the complex question directly, we decomposed the complex question into a set of simple questions and synthesized the answers to get our final result.
x, 128 leaves : ill. ; 29 cm
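A minimal sketch of the core step in treating complex question answering as query-focused summarization: candidate sentences are scored against the question, here by TF-IDF cosine similarity (one of the simpler lexical features mentioned above). The toy sentences and question are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The reinforcement framework fine-tunes feature weights from user feedback.",
    "Standard search engines return a ranked list of documents.",
    "Random walk and manifold ranking are unsupervised ranking techniques.",
]
question = "Which unsupervised techniques can rank sentences for summarization?"

# Score each candidate sentence by its similarity to the user query.
vec = TfidfVectorizer().fit(sentences + [question])
scores = cosine_similarity(vec.transform([question]),
                           vec.transform(sentences))[0]
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")
```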
15

Cha, Jin Seob. "Obtaining information from stochastic Lanchester-type combat models /." The Ohio State University, 1989. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487673114113213.

16

Wong, Chun-mei May. "Multilevel models for survival analysis in dental research." The University of Hong Kong, 2005. http://sunzi.lib.hku.hk/hkuto/record/B3637216X.

17

Duncan, Warwick John. "Sheep mandibular animal models for dental implantology research." University of Otago. School of Dentistry, 2005. http://adt.otago.ac.nz./public/adt-NZDU20060707.144214.

Abstract:
This inquiry investigated the suitability of the jaw of domestic sheep as an animal model for dental implantology research. Initially, parameters for osseous healing of critical size defects (CSD) in the sheep mandible were established. Pilot studies were conducted using machined-surface implants and a surgical protocol established for dental implant placement in ovine mandibular sites. Subsequent experiments considered the utility of this animal model for examination of techniques designed to enhance osseointegration. Hydroxyapatite-coated implants were compared with titanium plasma-sprayed (TPS) implants, either alone or combined with autogenous bone grafts or a bone graft/collagen vehicle loaded with transforming growth factor-beta (TGF-β). Immunofluorescent bone labelling gave information on the mineral apposition rate (MAR). Implant survival and "acceptability" (likelihood of clinical success) were major output variables, along with histomorphometric analysis of percent bone-implant contact (%BIC) and percent peri-implant bone density (%density). Naturally-occurring "broken-mouth" periodontitis in sheep was identified as a potential confounder. Subsequent experiments considered implants with different surfaces. The model was also extended from a two-stage surgical protocol to include single-stage implants. The effect of pre-existing ovine periodontitis was also examined. A systematic review and meta-analysis of published animal implant experiments was conducted in order to validate the candidate sheep model. Major findings were as follows. The size of non-healing sheep mandibular unicortical CSD is >12 mm. Attempts to establish a chronic non-healing CSD were unsuccessful. The sheep diastema proved unsuitable for implant placement. The model was modified to a post-extraction protocol. Implant "acceptability" rates after 3 months of integration in the sheep mandible (defined as implant survival with %BIC >10%) ranged from 50% to 100% for different implant surface treatments and placement protocols. Histomorphometric analyses revealed that %BIC ranged from 11 ± 17% to 81 ± 29% for different titanium surfaces and up to 85 ± 11% for hydroxyapatite surfaces. Implants with TGF-β plus autogenous bone grafts had %BIC of 36 ± 30% compared with 43 ± 30% for implants with grafts alone. Bone per unit area (%density) adjacent to, but outside of, the implant threads ranged from 63 ± 16% to 86 ± 3% and was markedly lower for titanium plasma-sprayed surfaces and for one-stage implants. Within the implant threads, %density varied from 31 ± 33% to 73.4 ± 8.3%, and was markedly lower for machined titanium surfaces. Sheep periodontitis had little effect on the protocols investigated. The sheep mandibular model was found to be comparable to similar models in other species and merits further development.
18

Wong, Chun-mei May, and 王春美. "Multilevel models for survival analysis in dental research." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B3637216X.

19

Lu, Jun. "Bayesian hierarchical models and applications in psychology research /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144437.

20

Pan, Huiqi. "Multilevel models in human growth and development research." Thesis, University College London (University of London), 1995. http://discovery.ucl.ac.uk/10020243/.

Abstract:
The analysis of change is an important issue in human growth and development. In longitudinal studies, growth patterns are often summarized by growth 'models' so that a small number of parameters, or the functions of them can be used to make group comparisons or to be related to other measurements. To analyse complete and balanced data, growth curves can be modelled using multivariate analysis of variance with an unstructured variance-covariance matrix; for incomplete and unbalanced data, models such as the two-stage model of Laird and Ware (1982) or the multilevel models of Goldstein (1987) are necessary. The use of multilevel models for describing growth is recognized as an important technique. It is an efficient procedure for incorporating growth models, either linear or nonlinear, into a population study. Up to now there is little literature concerning growth models over wide age ranges using multilevel models. The purpose of this study is to explore suitable multilevel models of growth over a wide age range. Extended splines are proposed, which extend conventional splines using the '+' function and by including logarithmic or negative power terms. The work has been focused on modelling human growth in length, particularly, height and head circumference as they are interesting and important measures of growth. The investigation of polynomials, conventional splines and extended splines on data from the Edinburgh Longitudinal Study shows that the extended splines are better than polynomials and conventional splines for this purpose. It also shows that extended splines are, in fact, piecewise fractional polynomials and describe data better than a single segment of a fractional polynomial. The extended splines are useful, flexible, and easily incorporated in multilevel models for studying populations and for the estimation and comparison of parameters.
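A minimal sketch of an "extended spline" design matrix as the abstract describes it: conventional truncated-power ('+' function) terms augmented with logarithmic and negative-power terms. Knots, powers and coefficients are invented; in practice these columns would enter a multilevel growth model as fixed and random effects.

```python
import numpy as np

def plus(x, knot, power=1):
    """Truncated power ('+' function) basis term (x - knot)_+^power."""
    return np.where(x > knot, (x - knot) ** power, 0.0)

def extended_spline_basis(age, knots=(2.0, 10.0)):
    cols = [np.ones_like(age), age, np.log(age), age ** -1]   # extended terms
    cols += [plus(age, k) for k in knots]                     # '+' terms
    return np.column_stack(cols)

age = np.linspace(0.25, 18, 200)            # years; avoid log(0)
X = extended_spline_basis(age)
beta = np.array([60.0, 2.0, 12.0, -3.0, 1.5, -0.8])  # made-up coefficients
height = X @ beta                           # one piecewise fractional fit
print(height[:3])
```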
21

Kight, William D. "An analysis of reasonableness models for research assessments." ScholarWorks, 2010. https://scholarworks.waldenu.edu/dissertations/719.

Abstract:
Individuals who screen research grant applications often select candidates on the basis of a few key parameters; success or failure can be reduced to a series of peer-reviewed Likert scores on as few as four criteria: risk, relevance, return, and reasonableness. Despite the vital impact these assessments have upon the sponsors, researchers, and society in general as a benefactor of the research, there is little empirical research into the peer-review process. The purpose of this study was to investigate how reviewers evaluate reasonableness and how the process can be modeled in a decision support system. The research questions address both the relationship between an individual's estimates of reasonableness and the indicators of scope, resources, cost, and schedule, and the performance of several cognitive models as predictors of reasonableness. Building upon Brunswik's theory of probabilistic functionalism, a survey methodology was used to implement a policy-capturing exercise that yielded a quantitative baseline of reasonableness estimates. The subsequent data analysis addressed the predictive performance of six cognitive models, as measured by the mean-square-deviation between the models and the data. A novel mapping approach developed by von Helversen and Rieskamp, a fuzzy logic model, and an exemplar model were found to outperform classic linear regression. A neural network model and the QuickEst heuristic model did not perform as well as linear regression. This information can be used in a decision support system to improve the reliability and validity of future research assessments. The positive social impact of this work would be more efficient allocation and prioritization of increasingly scarce research funds in areas of science such as social, psychological, medical, pharmaceutical, and engineering.
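A minimal sketch of one of the compared cognitive models, an exemplar model: a new proposal's reasonableness is predicted as a similarity-weighted average of previously judged exemplars over the four cues (scope, resources, cost, schedule). The stored exemplars and ratings are invented.

```python
import numpy as np

exemplars = np.array([[3, 4, 2, 5],     # rows: past proposals, cols: cues
                      [1, 2, 1, 2],
                      [5, 5, 4, 4]], dtype=float)
judgments = np.array([4.0, 2.0, 5.0])   # Likert reasonableness ratings

def exemplar_predict(probe, h=1.0):
    """Kernel-weighted average of stored judgments (Gaussian similarity)."""
    dists = np.linalg.norm(exemplars - probe, axis=1)
    weights = np.exp(-(dists / h) ** 2)
    return float(weights @ judgments / weights.sum())

print(exemplar_predict(np.array([4, 4, 3, 4])))   # lands between 4 and 5
```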
22

Sidumo, Bonelwa. "Generalized linear models, with applications in fisheries research." Thesis, Rhodes University, 2018. http://hdl.handle.net/10962/61102.

Abstract:
Gambusia affinis (G. affinis) is an invasive fish species found in the Sundays River Valley of the Eastern Cape, South Africa. The relative abundance and population dynamics of G. affinis were quantified in five interconnected impoundments within the Sundays River Valley. This study utilised a G. affinis data set to demonstrate various classical ANOVA models. Generalized linear models were used to standardize catch per unit effort (CPUE) estimates and to determine environmental variables which influenced the CPUE. Based on the generalized linear model results, dam age, mean temperature, Oreochromis mossambicus abundance and Glossogobius callidus abundance had a significant effect on the G. affinis CPUE. The Albany Angling Association collected data during fishing tag-and-release events. These data were utilized to demonstrate repeated measures designs. Mixed-effects models provide a powerful and flexible tool for analyzing clustered data, such as repeated measures data and nested data; hence they have become tremendously popular as a framework for the analysis of bio-behavioral experiments. The results show that the mixed-effects methods proposed in this study are more efficient than those based on generalized linear models. These data were better modeled with mixed-effects models due to their flexibility in handling missing data.
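A minimal sketch of GLM-based CPUE standardization: catch is modelled with a log link and effort enters as an offset, so covariates such as dam age and temperature can be tested for their effect on CPUE. statsmodels is assumed and all data are simulated; variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "effort": rng.uniform(1, 5, n),              # e.g., trap-nights
    "dam_age": rng.integers(1, 40, n),
    "temp": rng.normal(20, 3, n),
})
mu = np.exp(0.2 + 0.03 * df.dam_age + 0.05 * (df.temp - 20)) * df.effort
df["catch"] = rng.poisson(mu)

# Log-link Poisson GLM with log(effort) offset: coefficients act on CPUE.
model = smf.glm("catch ~ dam_age + temp", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["effort"])).fit()
print(model.summary())
```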
23

Gupta, Vishal Ph D. Massachusetts Institute of Technology. "Data-driven models for uncertainty and behavior." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91301.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 173-180).
The last decade has seen an explosion in the availability of data. In this thesis, we propose new techniques to leverage these data to tractably model uncertainty and behavior. Specifically, this thesis consists of three parts: In the first part, we propose a novel schema for utilizing data to design uncertainty sets for robust optimization using hypothesis testing. The approach is flexible and widely applicable, and robust optimization problems built from our new data-driven sets are computationally tractable, both theoretically and practically. Optimal solutions to these problems enjoy a strong, finite-sample probabilistic guarantee. Computational evidence from classical applications of robust optimization (queuing and portfolio management) confirms that our new data-driven sets significantly outperform traditional robust optimization techniques whenever data is available. In the second part, we examine in detail an application of the above technique to the unit commitment problem. Unit commitment is a large-scale, multistage optimization problem under uncertainty that is critical to power system operations. Using real data from the New England market, we illustrate how our proposed data-driven uncertainty sets can be used to build high-fidelity models of the demand for electricity, and that the resulting large-scale, mixed-integer adaptive optimization problems can be solved efficiently. With respect to this second contribution, we propose new data-driven solution techniques for this class of problems inspired by ideas from machine learning. Extensive historical back-testing confirms that our proposed approach generates high-quality solutions that compare with state-of-the-art methods. In the third part, we focus on behavioral modeling. Utility maximization (single-agent case) and equilibrium modeling (multi-agent case) are by far the most common behavioral models in operations research. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the primitives of these models. Our approach supports both parametric and nonparametric estimation through kernel learning. We prove that our estimators enjoy a strong generalization guarantee even when the model is misspecified. Finally, we present computational evidence from applications in economics and transportation science illustrating the effectiveness of our approach and its scalability to large-scale instances.
by Vishal Gupta.
Ph. D.
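A generic sketch of the data-driven robust optimization idea: an uncertainty set for uncertain returns is built from historical data (here a simple box from empirical quantiles, not the thesis's hypothesis-test constructions) and the robust problem is solved with cvxpy. Everything here is illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
data = rng.normal([0.05, 0.08, 0.02], [0.1, 0.2, 0.05], size=(500, 3))

lo = np.quantile(data, 0.05, axis=0)     # box uncertainty set [lo, hi]
hi = np.quantile(data, 0.95, axis=0)

w = cp.Variable(3, nonneg=True)
# For a long-only portfolio, the worst case over a box set is attained
# at the lower corner `lo`, so the robust objective is simply lo @ w.
problem = cp.Problem(cp.Maximize(lo @ w), [cp.sum(w) == 1])
problem.solve()
print("robust weights:", np.round(w.value, 3))
```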
24

Ng, Yee Sian. "Advances in data-driven models for transportation." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122100.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 163-176).
With the rising popularity of ride-sharing and alternative modes of transportation, there has been a renewed interest in transit planning to improve service quality and stem declining ridership. However, it often takes months of manual planning for operators to redesign and reschedule services in response to changing needs. To this end, we provide four models of transportation planning that are based on data and driven by optimization. A key aspect is the ability to provide certificates of optimality, while being practical in generating high-quality solutions in a short amount of time. We provide approaches to combinatorial problems in transit planning that scale up to city-sized networks. In transit network design, current tractable approaches only consider edges that exist, resulting in proposals that are closely tethered to the original network. We allow new transit links to be proposed and account for commuters transferring between different services. In integrated transit scheduling, we provide a way for transit providers to synchronize the timing of services in multimodal networks while ensuring regularity in the timetables of the individual services. This is made possible by taking the characteristics of transit demand patterns into account when designing tractable formulations. We also advance the state of the art in demand models for transportation optimization. In emergency medical services, we provide data-driven formulations that outperform their probabilistic counterparts in ensuring coverage. This is achieved by replacing independence assumptions in probabilistic models and capturing the interactions of services in overlapping regions. In transit planning, we provide a unified framework that allows us to optimize frequencies and prices jointly in transit networks for minimizing total waiting time.
by Yee Sian Ng.
Ph. D.
25

Li, Kevin Bozhe. "Multiperiod Optimization Models in Operations Management." Thesis, University of California, Berkeley, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423656.

Abstract:

In the past two decades, retailers have witnessed rapid changes in markets due to an increase in competition, the rise of e-commerce, and ever-changing consumer behavior. As a result, retailers have become increasingly aware of the need to better coordinate inventory control with pricing in order to maximize their profitability. This dissertation was motivated by two such problems facing retailers at the interface between pricing and inventory control. One considers inventory control decisions for settings in which planned prices fluctuate over time, and the other considers pricing of multiple substitutable products for settings in which customers hold inventory as a consequence of stockpiling when promotional prices are offered.

In Chapter 1, we provide a brief motivation for each problem. In Chapter 2, we consider optimization of procurement and inventory allocation decisions by a retailer that sells a product with a long production lead time and a short selling season. The retailer orders most products months before the selling season, and places only one order for each product due to short product life cycles and long delivery lead times. Goods are initially stored at the warehouse and then sent to stores over the course of the season. The stores are in high-rent locations, necessitating efficient use of space, so there is no backroom space and it is uneconomical to send goods back to the warehouse; thus, all inventory at each store is available for sale. Due to marketing and logistics considerations, the planned trajectory of prices is determined in advance and may be non-monotonic. Demand is stochastic and price-dependent, and independent across time periods. We begin our analysis with the case of a single store. We first formulate the inventory allocation problem given a fixed initial order quantity with the objective of maximizing expected profit as a dynamic program and explain both technical and computational challenges in identifying the optimal policy. We then present two variants of a heuristic based on the notion of equalizing the marginal value of inventory across the time periods. Results from a numerical study indicate that the more sophisticated variant of the heuristic performs well when compared with both an upper bound and an industry benchmark, and even the simpler variant performs fairly well for realistic settings. We then generalize our approaches to the case of multiple stores, where we allow the stores to have different price trajectories. Our numerical results suggest that the performance of both heuristics is still robust in the multiple store setting, and does not suffer from the same performance deterioration observed for the industry benchmark as the number of stores increases or as price differences increase across stores and time periods. For the pre-season procurement problem, we develop a heuristic based on a generalization of the newsvendor problem that accounts for the two-tiered salvage values in our setting, specifically, a low price during end-of-season markdown periods and a very low or zero salvage value after the season has concluded. Results for numerical examples indicate that our modified newsvendor heuristic provides solutions that are as good as those obtained via grid search.

In Chapter 3, we address a retailer's problem of setting prices, including promotional prices, over a multi-period horizon for multiple substitutable products in the same product category. We consider the problem in a setting in which customers anticipate the retailer's pricing strategy and the retailer anticipates the customers' purchasing decisions. We formulate the problem as a two-stage game in which the profit maximizing retailer chooses prices and the utility maximizing customers respond by making explicit decisions regarding purchasing and consumption, and thus also implicit decisions regarding stockpiling. We incorporate a fairly general reference price formation process that allows for cross-product effects of prices on reference prices. We initially focus on a single customer segment. The representative customer's utility function accounts for the value of consumption of the products, psychological benefit (for deal-seekers) from purchasing at a price below his/her reference price but with diminishing marginal returns, costs of purchases, penalties for both shortages and holding inventory, and disutility for deviating from a consumption target in each period (where applicable). We are the first to develop a model that simultaneously accounts for this combination of realistic factors for the customer, and we also separate the customer's purchasing and consumption decisions. We develop a methodology for solving the customer's problem for arbitrary price trajectories based on a linear quadratic control formulation of an approximation of the customer's utility maximization problem. We derive analytical representations for the customer's optimal decisions as simple linear functions of prices, reference prices, inventory levels (as state variables), and the cumulative aggregate consumption level (as a state variable). (Abstract shortened by ProQuest.)
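A minimal sketch of the chapter-2 newsvendor heuristic with two-tiered salvage values: leftovers first sell at a markdown price during end-of-season periods (up to a random markdown demand), and anything still unsold earns a low post-season salvage value. All prices and demand distributions are invented; the grid search stands in for the closed-form critical-fractile analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
c, p, s1, s2 = 4.0, 10.0, 6.0, 1.0      # cost, price, markdown, post-season
D = rng.lognormal(4.0, 0.4, 100_000)    # season demand scenarios
M = rng.lognormal(2.5, 0.5, 100_000)    # markdown-period demand scenarios

def expected_profit(q):
    sold = np.minimum(D, q)
    left = q - sold
    marked_down = np.minimum(left, M)   # first salvage tier: markdown sales
    scrapped = left - marked_down       # second tier: post-season salvage
    return np.mean(p * sold + s1 * marked_down + s2 * scrapped) - c * q

qs = np.arange(20, 150)
best = qs[np.argmax([expected_profit(q) for q in qs])]
print("order quantity via grid search:", best)
```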

26

Closson, Taunia Lydia Lynn. "Biological models with a square wave driving force." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2002. http://hdl.handle.net/10133/146.

Abstract:
Systems that require a driving force of some kind are very common in physical and biological settings. Driving forces in a biological context are usually referred to as rhythms, pulses or clocks. Here we are interested in the effect of adding a square wave periodic driving force to a biological model. This is intended to model inputs from biological circuits with all-or-none or switch-like responses. We study a model of cell division proposed by Novak and Tyson. Our switched input is intended to model the interaction of the mitotic oscillator with an ultradian clock. We thoroughly characterize the behaviour as a function of the durations of the active and inactive phases. We also study a model of vein formation in plant leaves proposed by Mitchison. Pulsed hormonal release greatly accelerates vein formation in this model.
x, 105 leaves : ill. (some col.) ; 29 cm.
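A minimal sketch of adding a square-wave periodic driving force to an ODE: the input switches between active and inactive phases of set durations, modelling all-or-none biological inputs. The driven system here is a toy first-order model, not the Novak-Tyson or Mitchison equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

ON, OFF, AMP = 2.0, 3.0, 1.5            # active/inactive durations, amplitude

def square_wave(t):
    return AMP if (t % (ON + OFF)) < ON else 0.0

def rhs(t, y):
    return -0.8 * y + square_wave(t)    # relaxation toward the driven input

# Small max_step so the integrator resolves the forcing discontinuities.
sol = solve_ivp(rhs, (0.0, 50.0), [0.0], max_step=0.05)
print("final value:", sol.y[0, -1])
# Sweeping ON and OFF maps out behaviour as a function of the durations of
# the active and inactive phases, as the abstract describes.
```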
27

Koutroumpis, Panagiotis. "Research on futures-commodities, macroeconomic volatility and financial development." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13989.

Abstract:
This thesis consists of eight studies that cover topics in the increasingly influential field of futures-commodities, macroeconomic volatility and financial development. Chapter 2 considers the case of Argentina and provides a first thorough examination of the timing of the Argentine debacle. By applying a group of econometric tests for structural breaks on a range of GDP growth series over a period from 1886 to 2003, we conclude that there are two key dates in Argentina's economic history (1918 and 1948) that need to be inspected closely in order to further our understanding of the Argentine debacle. Chapters 3 and 4 investigated the time-varying link between financial development and economic growth. By employing the logistic smooth transition framework to annual data for Brazil covering the period 1890-2003, we found that financial development has a mixed (either positive or negative) time-varying effect on growth, which depends on trade openness thresholds. We also find a positive impact of trade openness on growth, while a mainly negative one for the various political instability measures. Chapter 5 studied the convergence properties of inflation rates among the countries of the European Monetary Union over the period 1980-2013. By applying recently developed panel unit root/stationarity tests, overall we are able to accept the stationarity hypothesis. Similarly, results from the univariate testing procedure indicated mixed evidence in favour of convergence. Hence, next we employ a clustering algorithm in the context of multivariate stationarity tests and we statistically detect three absolute convergence clubs in the pre-euro period, which consist of early accession countries. We also detect two separate clusters of early accession countries in the post-1997 period. For the rest of the countries/cases we find evidence of divergent behaviour. As a robustness check we additionally employ a pairwise convergence Bayesian framework, which broadly confirms our findings. Finally, we show that in the presence of volatility spillovers and structural breaks, time-varying persistence will be transmitted from the conditional variance to the conditional mean. Chapter 6 focuses on the negative consequences that the five years of austerity (2010-2014) imposed on the Greek economy and the society in general. To achieve that goal we summarize the views of three renowned economists, namely Paul De Grauwe, Paul Krugman and Joseph Stiglitz, on the eurozone crisis as well as the Greek case. In support of their claims we provide solid evidence of the dramatic effects that the restrictive policies had on Greece. Chapter 7 analyzes the properties of inflation rates and their volatilities among five European countries over the period 1960-2003. Unlike previous studies, we investigate whether or not the inflation rate and its volatility of each individual country displayed time-varying characteristics. By applying various power ARCH processes with structural breaks, with or without in-mean effects, the results indicated that the conditional means, variances, as well as the in-mean effect displayed time-varying behaviour. We also show that for France, Italy and the Netherlands the in-mean effect is positive, whereas that of Austria and Denmark is negative. Chapter 8 examines the stochastic properties of different commodity time series during the recent financial and EU sovereign debt crises (1997-2013). By employing the Bai-Perron method we detect five breaks for each of the commodity returns (both in the mean and in the variance). The majority of the breaks are closely associated with the two aforementioned crises. Having obtained the breaks, we estimated power ARCH models for each commodity, allowing the conditional means and variances to switch across the breakpoints. The results indicate overall that there is time-varying behaviour of the conditional mean and variance parameters in the case of grains, energies and softs. In contrast, metals and the soya complex show time-varying characteristics only in the conditional variance. Finally, conducting a forecasting analysis using spectral techniques (in both mapped and unmapped data), we find that the prices of corn remained almost stable, while for wheat, heating oil, WTI and orange juice the prices decreased further, though slightly. In the case of natural gas, coffee and sugar, overall the prices experienced significant deflationary pressures. As far as the prices of oats, platinum, RBOB, cocoa, soybean, soymeal and soyoil are concerned, they showed an upward trend. Chapter 9 examines the effect of health and military expenditures, trade openness and political instability on output growth. By employing a pooled generalised least squares method for 19 NATO countries from 1993 to 2010, we find that there is a negative impact of health and military expenditures, and political instability, on economic growth, whereas that of trade openness is positive.
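A minimal sketch of structural-break estimation in the Bai-Perron spirit: the breakpoint is chosen to minimize the total squared error of piecewise means (a single break in simulated returns here; chapter 8 allows five breaks in both mean and variance). Purely illustrative numpy, not the chapter's estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
returns = np.concatenate([rng.normal(0.00, 1.0, 300),
                          rng.normal(0.05, 2.0, 300)])   # break at t = 300

def sse(x):
    """Sum of squared deviations from the segment mean."""
    return float(((x - x.mean()) ** 2).sum())

candidates = range(30, len(returns) - 30)                 # trim the edges
best_t = min(candidates,
             key=lambda t: sse(returns[:t]) + sse(returns[t:]))
print("estimated break date:", best_t)
```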
28

Müller, Werner, and Michaela Nettekoven. "A Panel Data Analysis: Research & Development Spillover." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/620/1/document.pdf.

Abstract:
Panel data analysis has become an important tool in applied econometrics and the respective statistical techniques are well described in several recent textbooks. However, for an analyst using these methods there remains the task of choosing a reasonable model for the behavior of the panel data. Of special importance is the choice between so-called fixed and random coefficient models. This choice can have a crucial effect on the interpretation of the analyzed phenomenon, which is demonstrated by an application on research and development spillover. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
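A minimal sketch of the fixed- versus random-coefficient choice for panel data: a fixed-effects fit absorbs unit intercepts as dummies, while a random-effects style fit treats them as draws from a distribution. Implemented with statsmodels for illustration; variable names and data are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
firms, years = 30, 10
firm = np.repeat(np.arange(firms), years)
rd_spill = rng.normal(0, 1, firms * years)          # R&D spillover measure
alpha = rng.normal(0, 0.5, firms)                   # firm-level intercepts
output = 1.0 + 0.3 * rd_spill + alpha[firm] + rng.normal(0, 0.3, firm.size)
df = pd.DataFrame({"output": output, "rd_spill": rd_spill, "firm": firm})

fixed = smf.ols("output ~ rd_spill + C(firm)", data=df).fit()
random = smf.mixedlm("output ~ rd_spill", data=df, groups=df["firm"]).fit()
print(fixed.params["rd_spill"], random.params["rd_spill"])
# A Hausman-style comparison of the two estimates guides the model choice.
```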
29

Zarybnisky, Eric J. (Eric Jack) 1979. "Maintenance scheduling for modular systems-models and algorithms." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68972.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 185-188).
Maintenance scheduling is an integral part of many complex systems. For instance, without effective maintenance scheduling, the combined effects of preventative and corrective maintenance can have severe impacts on the availability of those systems. Based on current Air Force trends including maintenance manpower, dispersed aircraft basing, and increased complexity, there has been a renewed focus on preventative maintenance. To address these concerns, this thesis develops two models for preventative maintenance scheduling for complex systems, the first of interest in the system concept development and design phase, and the second of interest during operations. Both models are highly complex and intractable to solve in their original forms. For the first model, we develop approximation algorithms that yield high-quality and easily implementable solutions. To address the second model, we propose a decomposition strategy that produces submodels that can be solved via existing algorithms or via specialized algorithms we develop. While much of the literature has examined stochastically failing systems, preventative maintenance of usage-limited systems has received less attention. Of particular interest is the design of modular systems whose components must be repaired/replaced to prevent a failure. By making cost tradeoffs early in development, program managers, designers, engineers, and test conductors can better balance the up-front costs associated with system design and testing with the long-term cost of maintenance. To facilitate such a tradeoff, the Modular Maintenance Scheduling Problem provides a framework for design teams to evaluate different design and operations concepts and then evaluate the long-term costs. While the general Modular Maintenance Scheduling Problem does not require maintenance schedules with specific structure, operational considerations push us to consider cyclic schedules in which components are maintained at a fixed frequency. In order to efficiently find cyclic schedules, we propose the Cycle Rounding algorithm, which has an approximation guarantee of 2, and a family of Shifted Power-of-Two algorithms, which have an approximation guarantee of 1/ln(2) ≈ 1.4427. Computational results indicate that both algorithms perform much better than their associated performance guarantees, providing solutions within 15%-25% of a lower bound. Once a modular system has moved into operations, manpower and transportation scheduling become important considerations when developing maintenance schedules. To address the operations phase, we develop the Modular Maintenance and System Assembly Model to balance the tradeoffs between inventory, maintenance capacity, and transportation resources. This model explicitly captures the risk-pooling effects of a central repair facility while also modeling the interaction between repair actions at such a facility. The full model is intractable for all but the smallest instances. Accordingly, we decompose the problem into two parts, the system assembly portion and module repair portion. Finally, we tie together the Modular Maintenance and System Assembly Model with key concepts from the Modular Maintenance Scheduling Problem to propose an integrated methodology for design and operation.
by Eric Jack Zarybnisky.
Ph.D.
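A minimal sketch of the power-of-two idea behind the Shifted Power-of-Two algorithms: each module's ideal maintenance interval is rounded to a power of two times a base period so the resulting cyclic schedule nests neatly. The rounding rule and intervals are illustrative, not the thesis's algorithm with its 1/ln(2) guarantee.

```python
import math

BASE = 1.0                                   # base period (e.g., weeks)
ideal = {"engine": 5.0, "pump": 3.0, "filter": 1.5, "seal": 11.0}

def power_of_two_round(t, base=BASE):
    """Round t to the multiplicatively nearest base * 2^k."""
    k = round(math.log2(t / base))
    return base * 2 ** k

schedule = {m: power_of_two_round(t) for m, t in ideal.items()}
print(schedule)   # {'engine': 4.0, 'pump': 4.0, 'filter': 2.0, 'seal': 8.0}
```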
30

Chhaochhria, Pallav. "Forecast-driven tactical planning models for manufacturing systems." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68700.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 243-247).
Our work is motivated by real-world planning challenges faced by a manufacturer of industrial products. In the first part of the thesis, we study a multi-product serial-flow production line that operates in a low-volume, long lead-time environment. The objective is to minimize variable operating costs, in the face of forecast uncertainty, raw material arrival uncertainty and in-process failure. We develop a dynamic-programming-based tactical model to capture the key uncertainties and trade-offs, and to determine the minimum-cost operating tactics. The tactics include smoothing production to reduce production-related costs, and segmenting the serial-flow line with decoupling buffers to protect against variance propagation. For each segment, we specify a work release policy and a production control policy to manage the work-in-process inventory within the segment and to maintain the inventory targets in the downstream buffer. We also optimize the raw material ordering policy with fixed ordering times, long lead-times and staggered deliveries. In the second part of the thesis, we examine a multi-product assembly system that operates in a high-volume, short lead-time environment. The operating tactics used here include determining a fixed-length cyclic schedule to control production, in addition to smoothing production and segmenting the system with decoupling buffers. We develop another dynamic-programming-based tactical model that determines optimal policies for production planning and scheduling, inventory, and raw material ordering; these policies minimize the operating cost for the system in the face of forecast and raw material arrival uncertainty. We tested these models on both hypothetical and actual factory scenarios. The results confirmed our intuition and also helped develop new managerial insights on the application of these operating tactics. Moreover, the tactical model's factory performance predictions were found to be within 10% of simulation results for the testbed systems, thus validating the models.
by Pallav Chhaochhria.
Ph.D.
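A minimal sketch of two of the operating tactics described: a decoupling buffer run by order-up-to logic, with production smoothing imposed as a cap on how much the release quantity may change between periods. Demands, targets and limits are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
demand = rng.poisson(10, 52)                # weekly forecast-driven demand
BASE_STOCK, MAX_DELTA = 25, 3               # buffer target, smoothing limit

inventory, release = BASE_STOCK, 10
history = []
for d in demand:
    target = max(0, BASE_STOCK - inventory + d)      # order-up-to logic
    # Smoothing: the release may move at most MAX_DELTA per period.
    release = int(np.clip(target, release - MAX_DELTA, release + MAX_DELTA))
    inventory += release - d                          # buffer evolves
    history.append((release, inventory))

releases = [r for r, _ in history]
print("release std dev:", np.std(releases))          # smoothing at work
print("min inventory:", min(i for _, i in history))  # service risk to check
```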
31

Zhou, Xinfeng. "Application of robust statistics to asset allocation models." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36231.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006.
Includes bibliographical references (p. 105-107).
Many strategies for asset allocation involve the computation of expected returns and the covariance or correlation matrix of financial instruments returns. How much of each instrument to own is determined by an attempt to minimize risk (the variance of linear combinations of investments in these financial assets) subject to various constraints such as a given level of return, concentration limits, etc. The expected returns and the covariance matrix contain many parameters to estimate and two main problems arise. First, the data will very likely have outliers that will seriously affect the covariance matrix. Second, with so many parameters to estimate, a large number of observations are required and the nature of markets may change substantially over such a long period. In this thesis we use robust covariance procedures, such as FAST-MCD, quadrant-correlation-based covariance and 2D-Huber-based covariance, to address the first problem and regularization (Bayesian) methods that fully utilize the market weights of all assets for the second. High breakdown affine equivariant robust methods are effective, but tend to be costly when cross-validation is required to determine regularization parameters.
(cont.) We, therefore, also consider non-affine invariant robust covariance estimation. When back-tested on market data, these methods appear to be effective in improving portfolio performance. In conclusion, robust asset allocation methods have great potential to improve risk-adjusted portfolio returns and therefore deserve further exploration in investment management research.
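For readers who want to experiment with the robust-covariance idea, here is a minimal sketch using scikit-learn's MinCovDet, a fast-MCD implementation in the same family as the FAST-MCD procedure cited above. The returns are simulated, with deliberately planted outliers.

```python
# A minimal sketch, assuming simulated two-asset returns contaminated by
# outliers; compares the MCD-based robust covariance to the sample covariance.
import numpy as np
from sklearn.covariance import MinCovDet, EmpiricalCovariance

rng = np.random.default_rng(0)
clean = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)
outliers = rng.multivariate_normal([8, -8], np.eye(2), size=25)
returns = np.vstack([clean, outliers])

mcd = MinCovDet(random_state=0).fit(returns)    # robust estimate
emp = EmpiricalCovariance().fit(returns)        # classical estimate

print("Robust covariance:\n", mcd.covariance_)
print("Sample covariance:\n", emp.covariance_)  # distorted by the outliers
```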
by Xinfeng Zhou.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
32

Keller, Philipp W. (Philipp Wilhelm) 1982. "Tractable multi-product pricing under discrete choice models." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82871.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 199-204).
We consider a retailer offering an assortment of differentiated substitutable products to price-sensitive customers. Prices are chosen to maximize profit, subject to inventory/capacity constraints, as well as more general constraints. The profit is not even a quasi-concave function of the prices under the basic multinomial logit (MNL) demand model. Linear constraints can induce a non-convex feasible region. Nevertheless, we show how to efficiently solve the pricing problem under three important, more general families of demand models. Generalized attraction (GA) models broaden the range of nonlinear responses to changes in price. We propose a reformulation of the pricing problem over demands (instead of prices) which is convex. We show that the constrained problem under MNL models can be solved in a polynomial number of Newton iterations. In experiments, our reformulation is solved in seconds rather than days by commercial software. For nested-logit (NL) demand models, we show that the profit is concave in the demands (market shares) when all the price-sensitivity parameters are sufficiently close. The closed-form expressions for the Hessian of the profit that we derive can be used with general-purpose nonlinear solvers. For the special (unconstrained) case already considered in the literature, we devise an algorithm that requires no assumptions on the problem parameters. The class of generalized extreme value (GEV) models includes the NL as well as the cross-nested logit (CNL) model. There is generally no closed-form expression for the profit in terms of the demands. We nevertheless show how the gradient and Hessian can be computed for use with general-purpose solvers. We show that the objective of a transformed problem is nearly concave when all the price sensitivities are close. For the unconstrained case, we develop a simple and surprisingly efficient first-order method. Our experiments suggest that it always finds a global optimum, for any model parameters. We apply the method to mixed logit (MMNL) models, by showing that they can be approximated with CNL models. With an appropriate sequence of parameter scalings, we conjecture that the solution found is also globally optimal.
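The share-space reformulation can be illustrated in a few lines: under MNL, prices are recovered in closed form from market shares, so profit can be maximized over shares directly. This sketch uses hypothetical parameters a (attractiveness), b (price sensitivity), and c (unit cost), and a general-purpose solver rather than the specialized Newton methods of the thesis.

```python
# A sketch, assuming a 3-product MNL model with invented parameters.
# Shares s_i = exp(a_i - b_i p_i) / (1 + sum_j exp(a_j - b_j p_j)) invert to
# p_i = (a_i - log(s_i / s_0)) / b_i, so we optimize over shares, not prices.
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 1.5, 1.0])   # hypothetical product attractiveness
b = np.array([1.0, 0.8, 1.2])   # hypothetical price sensitivities
c = np.array([0.5, 0.4, 0.3])   # hypothetical unit costs

def prices_from_shares(s):
    s0 = max(1.0 - s.sum(), 1e-9)       # no-purchase share (guarded for solver)
    return (a - np.log(s / s0)) / b

def neg_profit(s):
    return -np.dot(s, prices_from_shares(s) - c)

cons = [{"type": "ineq", "fun": lambda s: 0.999 - s.sum()}]
bounds = [(1e-6, 0.999)] * 3
res = minimize(neg_profit, x0=np.full(3, 0.1), bounds=bounds, constraints=cons)
print("optimal shares:", res.x)
print("optimal prices:", prices_from_shares(res.x))
```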
by Philipp Wilhelm Keller.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
33

Weber, Theophane. "Correlation decay and decentralized optimization in graphical models." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/58079.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 213-229) and index.
Many models of optimization, statistics, social organizations and machine learning capture local dependencies by means of a network that describes the interconnections and interactions of different components. However, in most cases, optimization or inference on these models is hard due to the dimensionality of the networks. This is so even when using algorithms that take advantage of the underlying graphical structure. Approximate methods are therefore needed. The aim of this thesis is to study such large-scale systems, focusing on the question of how randomness affects the complexity of optimizing in a graph; of particular interest is the study of a phenomenon known as correlation decay, namely, the phenomenon where the influence of a node on another node of the network decreases quickly as the distance between them grows. In the first part of this thesis, we develop a new message-passing algorithm for optimization in graphical models. We formally prove a connection between the correlation decay property and (i) the near-optimality of this algorithm, as well as (ii) the decentralized nature of optimal solutions. In the context of discrete optimization with random costs, we develop a technique for establishing that a system exhibits correlation decay. We illustrate the applicability of the method by giving concrete results for the cases of uniform and Gaussian distributed cost coefficients in networks with bounded connectivity. In the second part, we pursue similar questions in a combinatorial optimization setting: we consider the problem of finding a maximum weight independent set in a bounded degree graph, when the node weights are i.i.d. random variables.
(cont.) Surprisingly, we discover that the problem becomes tractable for certain distributions. Specifically, we construct a PTAS for the case of exponentially distributed weights and arbitrary graphs with degree at most 3, and obtain generalizations for higher degrees and different distributions. At the same time we prove that no PTAS exists for the case of exponentially distributed weights for graphs with sufficiently large but bounded degree, unless P=NP. Next, we shift our focus to graphical games, which are a game-theoretic analog of graphical models. We establish a connection between the problem of finding an approximate Nash equilibrium in a graphical game and the problem of optimization in graphical models. We use this connection to re-derive NashProp, a message-passing algorithm which computes Nash equilibria for graphical games on trees; we also suggest several new search algorithms for graphical games in general networks. Finally, we propose a definition of correlation decay in graphical games, and establish that the property holds in a restricted family of graphical games. The last part of the thesis is devoted to a particular application of graphical models and message-passing algorithms to the problem of early prediction of Alzheimer's disease. To this end, we develop a new measure of synchronicity between different parts of the brain, and apply it to electroencephalogram data. We show that the resulting prediction method outperforms a vast number of other EEG-based measures in the task of predicting the onset of Alzheimer's disease.
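The flavor of such local computations can be seen in the textbook dynamic program for maximum-weight independent set on a tree, where message passing is exact; the thesis studies when similar local reasoning remains accurate on general bounded-degree graphs. The instance and exponential weights below are illustrative.

```python
# A small sketch, assuming a hypothetical 6-node tree with i.i.d. exponential
# node weights (as in the tractability result above). Each node passes up two
# values: best weight with the node included vs. excluded.
import random

adj = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}  # tree, root = 0
random.seed(1)
w = {v: random.expovariate(1.0) for v in adj}

def mwis(v, parent):
    incl, excl = w[v], 0.0
    for u in adj[v]:
        if u == parent:
            continue
        i_u, e_u = mwis(u, v)
        incl += e_u                 # if v is in the set, children must be out
        excl += max(i_u, e_u)       # if v is out, children choose freely
    return incl, excl

print("Maximum-weight independent set weight:", max(mwis(0, None)))
```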
by Théophane Weber.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
34

Dyachenko, Tatiana L. "Bayesian Models for Studying Consumer Behavior." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1403017394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Delgado, San Martin Juan A. "Mathematical models for preclinical heterogeneous cancers." Thesis, University of Aberdeen, 2016. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=230139.

Full text
Abstract:
Cancer is a deadly, complex disease with 14 million new cases diagnosed every year, and the endeavour to develop a cure is a global multidisciplinary effort. The complexity of cancer and the resulting vast volume of data derived from its research necessitate a robust and cutting-edge system of mathematical and statistical modelling. This thesis proposes novel mathematical methods of quantification and modelling applied to heterogeneous preclinical cancers, focusing on the translation of animal studies into patients, with particular emphasis on tumour stroma. The first section of this thesis (quantification) will present different techniques of extracting and quantifying data from bioanalytical assays. The overall aim will be to present and discuss potential methods of obtaining data regarding tumour volume, stromal morphology, stromal heterogeneity, and oxygen distribution. Firstly, a 3D scanning technique will be discussed. This technique aims to assess tumour volume in mice more precisely than the current favoured method (callipers) and record any cutaneous symptoms as well, with the potential to revolutionise tumour growth analysis. Secondly, a series of image processing methods will be presented which, when applied to tumour histopathology, demonstrate that tumour stromal morphology and its microenvironment play a key role in tumour physiology. Lastly, it will be demonstrated through the integration of in-vitro data from various sources that oxygen and nutrient distribution in tumours is very irregular, creating metabolic niches with distinct physiologies within a single tumour. Tumour volume, oxygen, and stroma are the three aspects central to the successful modelling of tumour drug responses over time. The second section of this thesis (modelling) will feature a mathematical oxygen-driven model - utilising 38 cell lines and 5 patient-derived animal models - that aims to demonstrate the relationship between homogeneous oxygen distribution and preclinical tumour growth. Finally, all concepts discussed will be merged into a computational tumour-stroma model. This cellular automaton (stochastic) model will demonstrate that tumour stroma plays a key role in tumour growth and has both positive (at a molecular level) and negative (at both a molecular and tissue level) effects on cancers. This thesis contains a useful set of algorithms to help visualise, quantify, and understand tissue phenomena in cancer physiology, as well as providing a series of platforms to predict tumour outcome in the preclinical setting with clinical relevance.
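To give a flavor of a stochastic cellular automaton of this kind, here is a toy sketch with invented rules (not those of the thesis): tumour cells divide into empty neighbouring sites, while stroma-occupied sites block growth.

```python
# A toy sketch, assuming invented rules and rates: 0 = empty, 1 = tumour,
# 2 = stroma. Tumour cells attempt division into a random neighbouring site;
# division succeeds only if that site is empty.
import numpy as np

rng = np.random.default_rng(42)
N, steps, p_div = 50, 100, 0.25
grid = np.zeros((N, N), dtype=int)
grid[rng.random((N, N)) < 0.05] = 2          # scattered stroma (5% of sites)
grid[N // 2, N // 2] = 1                     # seed tumour cell at the centre
dirs = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])

for _ in range(steps):
    for x, y in np.argwhere(grid == 1):
        if rng.random() < p_div:
            dx, dy = dirs[rng.integers(4)]
            i, j = (x + dx) % N, (y + dy) % N
            if grid[i, j] == 0:              # divide only into empty space
                grid[i, j] = 1

print("tumour cells after", steps, "steps:", int((grid == 1).sum()))
```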
APA, Harvard, Vancouver, ISO, and other styles
36

McEwan, J. A. "Methodology and new applications in food acceptance research." Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Han, Sung Ho. "Integrated empirical models based on a sequential research strategy." Diss., This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-07282008-134026/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zaretsky, M. (Marina). "Essays on variational inequalities and competitive supply chain models." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28859.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004.
Includes bibliographical references (p. 103-107).
In the first part of the thesis we combine ideas from cutting plane and interior point methods to solve variational inequality problems efficiently. In particular, we introduce "smarter" cuts into two general methods for solving these problems. These cuts utilize second order information on the problem through the use of a gap function. We establish convergence results for both methods, as well as complexity results for one of the methods. Finally, we compare the performance of an approach that combines affine scaling and cutting plane methods with other methods for solving variational inequalities. The second part of the thesis considers a supply chain setting where several capacitated suppliers compete for orders from a single retailer in a multi-period environment. At each period the retailer places orders to the suppliers in response to the prices and capacities they announce. Our model allows the retailer to carry inventory. Furthermore, suppliers can expand their capacity at an additional cost; the retailer faces exogenous, price-dependent, stochastic demand. We analyze discrete as well as continuous time versions of the model: (i) we illustrate the existence of equilibrium policies; (ii) we characterize the structure of these policies; (iii) we consider coordination mechanisms; and (iv) we present some computational results. We also consider a modified model that uses option contracts and finally present some extensions.
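To illustrate what solving a variational inequality involves, here is a minimal sketch of the extragradient method, a simpler relative of the cutting-plane and interior-point schemes studied in the thesis, applied to a hypothetical affine monotone operator over a box.

```python
# A minimal sketch, assuming VI(F, K): find x* in K with F(x*)^T (x - x*) >= 0
# for all x in K. Here F(x) = M x + q is monotone (positive-definite symmetric
# part) and K = [0, 5]^2; the instance is invented.
import numpy as np

M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
project = lambda x: np.clip(x, 0.0, 5.0)   # Euclidean projection onto the box

x, tau = np.zeros(2), 0.2                  # step size below 1 / ||M||
for _ in range(200):
    y = project(x - tau * F(x))            # predictor step
    x = project(x - tau * F(y))            # corrector step
print("approximate VI solution:", x)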
by Marina Zaretsky.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
39

Anderson, Ross Michael. "Stochastic models and data driven simulations for healthcare operations." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92055.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 251-257).
This thesis considers problems in two areas of healthcare operations: Kidney Paired Donation (KPD) and scheduling medical residents in hospitals. In both areas, we explore the implications of policy change through high fidelity simulations. We then build stochastic models to provide strategic insight into how policy decisions affect the operations of these healthcare systems. KPD programs enable patients with living but incompatible donors (referred to as patient-donor pairs) to exchange kidneys with other such pairs in a centrally organized clearing house. Exchanges involving two or more pairs are performed by arranging the pairs in a cycle, where the donor from each pair gives to the patient from the next pair. Alternatively, a so-called altruistic donor can be used to initiate a chain of transplants through many pairs, ending on a patient without a willing donor. In recent years, the use of chains has become pervasive in KPD, with chains now accounting for the majority of KPD transplants performed in the United States. A major focus of our work is to understand why long chains have become the dominant method of exchange in KPD, and how to best integrate their use into exchange programs. In particular, we are interested in policies that KPD programs use to determine which exchanges to perform, which we refer to as matching policies. First, we devise a new algorithm using integer programming to maximize the number of transplants performed on a fixed pool of patients, demonstrating that matching policies which must solve this problem are implementable. Second, we evaluate the long run implications of various matching policies, both through high fidelity simulations and analytic models. Most importantly, we find that: (1) using long chains results in more transplants and reduced waiting time, and (2) the policy of maximizing the number of transplants performed each day is as good as any batching policy. Our theoretical results are based on introducing a novel model of a dynamically evolving random graph. The analysis of this model uses classical techniques from Erdős-Rényi random graph theory as well as tools from queueing theory including Lyapunov functions and Little's Law. In the second half of this thesis, we consider the problem of how hospitals should design schedules for their medical residents. These schedules must have capacity to treat all incoming patients, provide quality care, and comply with regulations restricting shift lengths. In 2011, the Accreditation Council for Graduate Medical Education (ACGME) instituted a new set of regulations on duty hours that restrict shift lengths for medical residents. We consider two operational questions for hospitals in light of these new regulations: will there be sufficient staff to admit all incoming patients, and how will the continuity of patient care be affected, particularly in the first day of a patient's hospital stay, when such continuity is critical? To address these questions, we built a discrete event simulation tool using historical data from a major academic hospital, and compared several policies relying on both long and short shifts. The simulation tool was used to inform staffing level decisions at the hospital, which was transitioning away from long shifts. Use of the tool led to the following strategic insights. We found that schedules based on shorter, more frequent shifts actually led to a larger admitting capacity.
At the same time, such schedules generally reduce the continuity of care by most metrics when the departments operate at normal loads. However, in departments which operate at the critical capacity regime, we found that even the continuity of care improved in some metrics for schedules based on shorter shifts, due to a reduction in the use of overtime doctors. We develop an analytically tractable queueing model to capture these insights. The analysis of this model requires analyzing the steady-state behavior of the fluid limit of a queueing system, and proving a so-called "interchange of limits" result.
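A toy version of the dynamic matching question can be simulated in a few lines: pairs arrive over time, pairwise compatibility is random, and a greedy policy performs two-way exchanges every day. The arrival and compatibility rates below are invented, and the exchange structure (2-cycles only, symmetric compatibility) is far simpler than real KPD.

```python
# A toy simulation, assuming invented arrival and compatibility rates and
# a greedy "match every day" policy restricted to two-way exchanges.
import random

random.seed(0)
DAYS, ARRIVALS_PER_DAY, P_COMPAT = 365, 2, 0.1
pool, transplants, next_id, compat = [], 0, 0, {}

def compatible(a, b):
    # cache a random symmetric compatibility draw for each unordered pair
    key = (min(a, b), max(a, b))
    if key not in compat:
        compat[key] = random.random() < P_COMPAT
    return compat[key]

for day in range(DAYS):
    for _ in range(ARRIVALS_PER_DAY):           # new pairs join the pool
        pool.append(next_id); next_id += 1
    matched = set()
    for i, a in enumerate(pool):                # greedy daily 2-cycle matching
        if a in matched:
            continue
        for b in pool[i + 1:]:
            if b not in matched and compatible(a, b):
                matched.update((a, b)); transplants += 2
                break
    pool = [v for v in pool if v not in matched]

print(f"transplants: {transplants}, still waiting: {len(pool)}")
```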
by Ross Michael Anderson.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
40

Harsha, Pavithra. "Mitigating airport congestion : market mechanisms and airline response models." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46387.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 157-165).
Efficient allocation of scarce resources in networks is an important problem worldwide. In this thesis, we focus on resource allocation problems in a network of congested airports. The increasing demand for access to the world's major commercial airports, combined with the limited operational capacity at many of these airports, has led to growing air traffic congestion resulting in several billion dollars of delay cost every year. In this thesis, we study two demand-management techniques -- strategic and operational approaches -- to mitigate airport congestion. As a strategic initiative, auctions have been proposed to allocate runway slot capacity. We focus on two elements in the design of such slot auctions: airline valuations and activity rules. An aspect of airport slot market environments, which we argue must be considered in auction design, is the fact that the participating airlines are budget-constrained. The problem of finding the best bundle of slots on which to bid in an iterative combinatorial auction, also called the preference elicitation problem, is a particularly hard problem, even more so in the case of airlines in a slot auction. We propose a valuation model, called the Aggregated Integrated Airline Scheduling and Fleet Assignment Model, to help airlines understand the true value of the different bundles of slots in the auction. This model is efficient and was found to be robust to data uncertainty in our experimental simulations.
(cont.) Activity rules are checks made by the auctioneer at the end of every round to suppress strategic behavior by bidders and to promote consistent, continual preference elicitation. These rules find applications in several real-world scenarios, including slot auctions. We show that the commonly used activity rules are not applicable to slot auctions, as they prevent straightforward behavior by budget-constrained bidders. We propose the notion of a strong activity rule, which characterizes straightforward bidding strategies. We then show how a strong activity rule in the context of budget-constrained bidders (and quasilinear bidders) can be expressed as a linear feasibility problem. This work on activity rules also applies to more general iterative combinatorial auctions. We also study operational (real-time) demand-management initiatives that are used when there are sudden drops in capacity at airports due to various uncertainties, such as bad weather. We propose a system design that integrates the capacity allocation, airline recovery and inter-airline slot exchange procedures, and suggest metrics to evaluate the different approaches to fair allocations.
by Pavithra Harsha.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
41

Menjoge, Rajiv (Rajiv Shailendra). "New procedures for visualizing data and diagnosing regression models." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61190.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 97-103).
This thesis presents new methods for exploring data using visualization techniques. The first part of the thesis develops a procedure for visualizing the sampling variability of a plot. The motivation behind this development is that reporting a single plot of a sample of data without a description of its sampling variability can be uninformative and misleading in the same way that reporting a sample mean without a confidence interval can be. Next, the thesis develops a method for simplifying large scatter plot matrices, using similar techniques as the above procedure. The second part of the thesis introduces a new diagnostic method for regression called backward selection search. Backward selection search identifies a relevant feature set and a set of influential observations with good accuracy, given the difficulty of the problem, and additionally provides a description, in the form of a set of plots, of how the regression inferences would be affected by other model choices that are close to optimal. This description is useful because an observation that one analyst identifies as an outlier could be identified by another analyst as the most important observation in the data set. The key idea behind backward selection search has implications for methodology improvements beyond the realm of visualization. This is described following the presentation of backward selection search. Real and simulated examples, provided throughout the thesis, demonstrate that the methods developed in the first part of the thesis will improve the effectiveness and validity of data visualization, while the methods developed in the second half of the thesis will improve analysts' abilities to select robust models.
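The first idea, displaying a plot together with its sampling variability, can be sketched with a bootstrap overlay: refit the model on resampled data many times and draw all of the resulting fits. The data and straight-line model here are simulated stand-ins, not the thesis's procedure.

```python
# A small sketch, assuming simulated data and a simple linear fit: the cloud
# of bootstrap regression lines conveys the sampling variability of the plot.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 80)
y = 1.5 * x + rng.normal(0, 3, 80)

plt.scatter(x, y, s=10, color="black")
grid = np.linspace(0, 10, 50)
for _ in range(200):                          # bootstrap replicates
    idx = rng.integers(0, len(x), len(x))     # resample with replacement
    coef = np.polyfit(x[idx], y[idx], deg=1)
    plt.plot(grid, np.polyval(coef, grid), color="steelblue", alpha=0.05)
plt.title("Regression line with bootstrap sampling variability")
plt.show()
```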
by Rajiv Menjoge.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
42

Arkhipov, Dmitri I. "Computational Models for Scheduling in Online Advertising." Thesis, University of California, Irvine, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10168557.

Full text
Abstract:

Programmatic advertising is an actively developing industry and research area. Some of the research in this area concerns the development of optimal or approximately optimal contracts and policies between publishers, advertisers and intermediaries such as ad networks and ad exchanges. Both the development of contracts and the construction of policies governing their implementation are difficult challenges, and different models take different features of the problem into account. In programmatic advertising decisions are made in real time, and time is a scarce resource, particularly for publishers who are concerned with content load times. Policies for advertisement placement must execute very quickly once content is requested; this requires policies either to be pre-computed and accessed as needed, or to be very efficient to execute. We formulate a stochastic optimization problem for per-publisher ad sequencing with binding latency constraints. Within our context an ad request lifecycle is modeled as a sequence of one-by-one solicitation (OBOS) subprocesses/lifecycle stages. From the viewpoint of a supply side platform (SSP) (an entity acting as a proxy for a collection of publishers), the duration/span of a given lifecycle stage/subprocess is a stochastic variable. This stochasticity is due both to the randomness inherent in Internet delay times and to the lack of information regarding the decision processes of independent entities. In our work we model the problem facing the SSP, namely the problem of optimally or near-optimally choosing the next lifecycle stage of a given ad request lifecycle at any given time. We solve this problem to optimality (subject to the granularity of time) using a classic application of Richard Bellman's dynamic programming approach to the 0/1 Knapsack Problem. The DP approach does not scale to a large number of lifecycle stages/subprocesses, so a sub-optimal approach is needed. We use our DP formulation to derive a focused real-time dynamic programming (FRTDP) implementation, a heuristic method with optimality guarantees for solving our problem. We empirically evaluate (through simulation) the performance of our FRTDP implementation relative both to the DP implementation (for tractable instances) and to several alternative heuristics (for intractable instances). Finally, we make the case that our work is usefully applicable to problems outside the domain of online advertising.
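For reference, the Bellman dynamic program for the 0/1 knapsack problem that the abstract invokes looks as follows; the item values, weights, and capacity are hypothetical, with stage latencies playing the role of weights.

```python
# The textbook 0/1 knapsack DP; the instance below is an invented stand-in
# (expected stage revenues vs. expected stage latencies against a latency budget).
def knapsack(values, weights, capacity):
    # dp[w] = best value achievable with total weight <= w
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):   # descending: each item used once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

print(knapsack(values=[10, 7, 12, 4], weights=[120, 80, 150, 40], capacity=250))
```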

APA, Harvard, Vancouver, ISO, and other styles
43

Bhuiyan, Farina. "Dynamic models of concurrent engineering processes and performance." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38153.

Full text
Abstract:
Mathematical and stochastic computer models were built to simulate concurrent engineering (CE) processes in order to study how different process mechanisms contribute to new product development (NPD) performance. Micro-models of various phenomena that occur in concurrent engineering processes, such as functional participation, overlapping, decision-making, rework, and learning, were included, and their effects on the overall NPD process were related to process span time and effort. The study focused on determining the conditions under which CE processes are more favorable than sequential processes, treating expected payoff, span time, and effort as dependent variables of functional participation and overlapping, and on the corresponding trade-off between more upfront effort and span time reduction.
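The sequential-versus-overlapped trade-off can be sketched with a toy Monte Carlo: overlapping starts the downstream task before the upstream task is finished, saving calendar time but risking rework. All durations and probabilities below are invented, not those of the thesis.

```python
# A toy Monte Carlo, assuming invented task durations, a 50% overlap point,
# and a 40% chance that overlapping triggers rework of half the downstream task.
import random

random.seed(2)

def trial(overlap):
    up = random.uniform(4, 8)                  # upstream task duration
    down = random.uniform(3, 6)                # downstream task duration
    if not overlap:
        return up + down                       # sequential process
    start_down = 0.5 * up                      # start downstream at 50% upstream
    rework = down * 0.5 if random.random() < 0.4 else 0.0
    return max(up, start_down + down + rework)

n = 100_000
for mode, name in [(False, "sequential"), (True, "overlapped")]:
    spans = [trial(mode) for _ in range(n)]
    print(f"{name:10s} mean span time: {sum(spans) / n:.2f}")
```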
APA, Harvard, Vancouver, ISO, and other styles
44

Monaghan, Paul Francis. "Model misspecification in survival analysis : applications to cancer research." Thesis, University of Liverpool, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mišić, Velibor V. "Data, models and decisions for large-scale stochastic optimization problems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105003.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 204-209).
Modern business decisions exceed human decision-making ability: often, they are of a large scale, their outcomes are uncertain, and they are made in multiple stages. At the same time, firms have increasing access to data and models. Faced with such complex decisions and increasing access to data and models, how do we transform data and models into effective decisions? In this thesis, we address this question in the context of four important problems: the dynamic control of large-scale stochastic systems, the design of product lines under uncertainty, the selection of an assortment from historical transaction data and the design of a personalized assortment policy from data. In the first chapter, we propose a new solution method for a general class of Markov decision processes (MDPs) called decomposable MDPs. We propose a novel linear optimization formulation that exploits the decomposable nature of the problem data to obtain a heuristic for the true problem. We show that the formulation is theoretically stronger than alternative proposals and provide numerical evidence for its strength in multi-armed bandit problems. In the second chapter, we consider how to make strategic product line decisions under uncertainty in the underlying choice model. We propose a method based on robust optimization for addressing both parameter uncertainty and structural uncertainty. We show, using a real conjoint data set, the benefits of our approach over the traditional approach that assumes both the model structure and the model parameters are known precisely. In the third chapter, we propose a new two-step method for transforming limited customer transaction data into effective assortment decisions. The approach involves estimating a ranking-based choice model by solving a large-scale linear optimization problem, and solving a mixed-integer optimization problem to obtain a decision. Using synthetic data, we show that the approach is scalable, leads to accurate predictions and effective decisions that outperform alternative parametric and non-parametric approaches. In the last chapter, we consider how to leverage auxiliary customer data to make personalized assortment decisions. We develop a simple method based on recursive partitioning that segments customers using their attributes and show that it improves on a "uniform" approach that ignores auxiliary customer information.
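The recursive-partitioning idea of the last chapter can be sketched with a decision tree that segments customers on their attributes; the customer data and the "best assortment per segment" labels below are simulated, not from the thesis.

```python
# A sketch, assuming simulated customers described by two attributes
# (age, income) and an invented rule for which of two assortments suits each.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([rng.uniform(18, 70, n),       # age
                     rng.uniform(20, 200, n)])     # income (k$)
# Hypothetical label: older, higher-income customers prefer assortment 1.
y = ((X[:, 0] > 40) & (X[:, 1] > 80)).astype(int)

# Recursive partitioning: the tree's leaves are the customer segments.
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50).fit(X, y)
new_customer = np.array([[55, 120]])
print("recommended assortment:", tree.predict(new_customer)[0])
```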
by Velibor V. Mišić.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
46

Jernigan, Nicholas R. (Nicholas Richard). "Multi-modal, multi-period, multi-commodity transportation : models and algorithms." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91399.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014.
"June 2014." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-54).
In this paper we present a mixed integer optimization framework for modeling the shipment of goods between origin-destination (O-D) pairs by vehicles of different types over a time-space network. The output of the model is an optimal schedule and routing of vehicle movements and an assignment of goods to vehicles. Specifically, this framework allows for: multiple vehicles of differing characteristics (including speed, cost of travel, and capacity); transshipment locations where goods can be transferred between vehicles; and availability times for goods at their origins and delivery time windows for goods at their destinations. The model is composed of three stages. In the first, vehicle quantities, by type, and goods are allocated to routes in order to minimize late deliveries and vehicle movement costs. In the second stage, individual vehicles, specified by vehicle identification numbers, are assigned routes, and goods are assigned to those vehicles based on the results of the first stage and a minimization of costs involved with the transfer of goods between vehicles. In the third stage we reallocate the idle time of vehicles in order to satisfy crew rest constraints. Computational results show that provably optimal or near-optimal solutions are possible for realistic instance sizes.
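The underlying time-space network can be sketched directly: nodes are (location, period) pairs, movement arcs connect locations across periods, and holding arcs let goods wait in place. This toy single-commodity instance uses networkx's min-cost-flow solver rather than the three-stage mixed-integer model of the thesis.

```python
# A minimal sketch, assuming a hypothetical instance with two locations,
# three periods, and one commodity: 5 units available at A in period 0,
# due at B by period 2.
import networkx as nx

G = nx.DiGraph()
locations, periods = ["A", "B"], range(3)
for t in periods:
    for loc in locations:
        G.add_node((loc, t))
for t in range(2):
    G.add_edge(("A", t), ("B", t + 1), weight=4, capacity=10)       # move A -> B
    for loc in locations:
        G.add_edge((loc, t), (loc, t + 1), weight=1, capacity=10)   # hold in place

G.nodes[("A", 0)]["demand"] = -5   # supply: 5 units at A in period 0
G.nodes[("B", 2)]["demand"] = 5    # demand: 5 units at B by period 2

flow = nx.min_cost_flow(G)         # optimal routing over the time-space network
print(flow)
```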
by Nicholas R. Jernigan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
47

Doan, Xuan Vinh. "Optimization under moment, robust, and data-driven models of uncertainty." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/57538.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 151-156).
We study the problem of moments and present two diverse applications that apply both the hierarchy of moment relaxation and the moment duality theory. We then propose a moment-based uncertainty model for stochastic optimization problems, which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule. We establish the model tractability and are able to construct explicitly the extremal distributions. The quality of minimax solutions is compared with that of solutions obtained from other approaches such as data-driven and robust optimization approach. Our approach shows that minimax solutions hedge against worst-case distributions and usually provide low cost variability. We also extend the moment-based framework for multi-stage stochastic optimization problems, which yields a tractable model for exogenous random parameters and affine decision rules. Finally, we investigate the application of data-driven approach with risk aversion and robust optimization approach to solve staffing and routing problem for large-scale call centers. Computational results with real data of a call center show that a simple robust optimization approach can be more efficient than the data-driven approach with risk aversion.
by Xuan Vinh Doan.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Peng 1974. "Constructing learning models from data : the dynamic catalog mailing problem." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/16927.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003.
Includes bibliographical references (p. 105-107).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
The catalog industry is a large and important industry in the US economy. One of the most important and challenging business decisions in the industry is to decide who should receive catalogs, due to the significant mailing cost and the low response rate. The problem is a dynamic one: when a customer is ready to purchase, s/he may order from a previous catalog if s/he does not have the most recent one. In this sense, customers' purchasing behavior depends not only on the firm's most recent mailing decision, but also on prior mailing decisions. From the firm's perspective, in order to maximize its long-term profit it should make a series of optimal mailing decisions to each customer over time. Contrary to the traditional myopic catalog mailing decision process that is generally implemented in the catalog industry, we propose a model that allows firms to design optimal dynamic mailing policies using their own business data. We constructed the model from a large data set provided by a catalog mailing company. The computational results from the historical data show great potential profit improvement. This application differs from many other applications of (approximate) dynamic programming in that an underlying Markov model is not a priori available, nor can it be derived in a principled manner. Instead, it has to be estimated or "learned" from available data. The thesis furthers the discussion on issues related to constructing learning models from data. More specifically, we discuss the so-called "endogeneity problem" and the effects of inaccuracy in model parameter estimation. The fact that the model parameter estimation depends on data collected according to a specific policy introduces an endogeneity problem. As a result, the derived optimal policy depends on the original policy used to collect the data.
(cont.) In the thesis we discuss a specific endogeneity problem, "attribution error." We also investigate whether online learning can solve this problem. More specifically, we discuss the existence of fixed-point policies for potential online learning algorithms. Imprecision in model parameter estimation also creates the potential for bias. We illustrate this problem and offer a method for detecting it. Finally, we report preliminary results from a large-scale field test that tests the effectiveness of the proposed approach in a real business decision setting.
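The "learn a model from data, then optimize" loop can be sketched as follows: estimate transition frequencies from logged (state, action, next state) records, then run value iteration on the estimated model. The states, rewards, and logs below are simulated stand-ins for real customer data.

```python
# A sketch, assuming a small invented state space (e.g. RFM-style customer
# segments), two actions (mail / don't mail), and simulated logged records.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 4, 2
logs = [(rng.integers(n_states), rng.integers(n_actions), rng.integers(n_states))
        for _ in range(5000)]

counts = np.ones((n_states, n_actions, n_states))   # +1 Laplace smoothing
for s, a, s2 in logs:
    counts[s, a, s2] += 1
P = counts / counts.sum(axis=2, keepdims=True)      # estimated transition model

R = rng.normal(size=(n_states, n_actions))          # stand-in expected rewards
gamma, V = 0.95, np.zeros(n_states)
for _ in range(500):                                # value iteration
    V = (R + gamma * (P @ V)).max(axis=1)
policy = (R + gamma * (P @ V)).argmax(axis=1)
print("estimated optimal mailing action per state:", policy)
```

Note the caveat the abstract raises: because P is estimated from data collected under a particular historical policy, the "optimal" policy derived this way inherits that policy's biases (the endogeneity problem).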
by Peng Sun.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
49

蘇美子 and Mee-chi Meko So. "An operations research model and algorithm for a production planning application." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31226681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

McGill, Trevor, and University of Lethbridge Faculty of Arts and Science. "Functionally non-adaptive retinal plasticity in rat models of human retinal degenerative disease." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2008, 2008. http://hdl.handle.net/10133/726.

Full text
Abstract:
The established model used for evaluating potential therapies for retinal disease has significant limitations. A new model is proposed to account for these limitations: the visual adaptation model. The visual adaptation model was developed to provide a novel approach for testing potential treatments for retinal disease, and the work in this thesis provides empirical support for this model. Specifically, we evaluated two potential therapies for retinal degenerative disease and examined their effects on vision and retinal anatomy. In addition, the profile of retinal reorganization and its functional correlates were examined in RCS rats and transgenic rats which express a rhodopsin mutation; however, immunohistological work targeted one specific line (S334ter-4). Collectively, these studies provide evidence that supports the retinal adaptation model. These studies also provide a novel view of retinal and visual function in retinal disease which should be considered when evaluating treatments involving retinal degeneration.
xvii, 205 leaves : ill. ; 29 cm. --
APA, Harvard, Vancouver, ISO, and other styles