Dissertations / Theses on the topic 'Model selection curves'




Consult the top 16 dissertations / theses for your research on the topic 'Model selection curves.'


Where available, you can also download the full text of each publication as a PDF and read its abstract online.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

MONTEIRO, ANDRE MONTEIRO DALMEIDA. "NON-PARAMETRIC ESTIMATION OF INTEREST RATE CURVES: MODEL SELECTION CRITERION, PERFORMANCE DETERMINANT FACTORS AND BID-ASK SPREADS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2684@1.

Full text
Abstract:
This thesis investigates interest rate curve estimation from a non-parametric standpoint. The text is divided into two parts. The first part addresses the criterion used to select the best-performing method for interpolating the Brazilian interest rate curve in a given sample. A selection criterion is proposed that measures out-of-sample performance through leave-k-out cross-validation resampling strategies applied to the whole sample of curves, where 1 ≤ k ≤ K and K is a function of the number of contracts observed on each curve. Particular features of the problem substantially reduce the required computational effort, making the criterion feasible. The sample has daily frequency, from January 1997 to February 2001. The proposed criterion selected the natural cubic spline, used as a perfect-fitting estimation method, as the best performer. At the precision with which rates are traded, the spline proved unbiased. Quantitative analysis of its performance determinants, however, revealed heteroskedasticity in the out-of-sample errors. From a specification of the conditional variance of these errors and a few assumptions, a confidence-interval scheme is proposed for interest rates estimated by the perfect-fitting natural cubic spline. A backtest suggests that the proposed scheme is consistent, accommodating the assumptions and approximations involved. The second part estimates the US interest rate curve built from dollar-Libor interest rate swap contracts using the Support Vector Machine (SVM), a method from Statistical Learning Theory. SVM research has achieved important theoretical advances, yet implementations on real regression problems remain scarce. The SVM has attractive features for yield curve modelling: it can incorporate into the estimation itself a priori information about the shape of the curve and about the liquidity and price formation of each contract from which the curve is built, the latter quantified by each contract's bid-ask spread (BAS). The basic SVM formulation is modified to accommodate different BAS values without losing its properties. Special attention is given to extracting a priori information from the curve's typical shape for selecting the SVM parameters. The sample has daily frequency, from March 1997 to April 2001. The out-of-sample performance of several SVM specifications was compared with that of other estimation methods. The SVM achieved the best control of the trade-off between bias and variance of the out-of-sample errors.
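A minimal sketch of the leave-k-out idea for a single day's curve, under assumed inputs: `maturities` and `rates` are hypothetical arrays for one observed curve, and the function name is illustrative. The thesis's actual criterion aggregates over every curve in the sample and exploits problem-specific shortcuts to cut the computational cost.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def leave_k_out_error(maturities, rates, k, n_draws=200, seed=0):
    """Mean squared out-of-sample error of a natural cubic spline that
    perfectly fits the retained points of one day's curve."""
    rng = np.random.default_rng(seed)
    n = len(maturities)
    errors = []
    for _ in range(n_draws):
        # hold out interior points only, so the spline never extrapolates
        held_out = rng.choice(np.arange(1, n - 1), size=k, replace=False)
        keep = np.setdiff1d(np.arange(n), held_out)
        # 'natural' boundary conditions: zero second derivative at both ends
        spline = CubicSpline(maturities[keep], rates[keep], bc_type='natural')
        errors.extend((spline(maturities[held_out]) - rates[held_out]) ** 2)
    return float(np.mean(errors))
```

Averaging this error over all sample curves and several values of k would give one number per candidate interpolation method, and the method with the smallest value wins.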
APA, Harvard, Vancouver, ISO, and other styles
2

Chia, Yan Wah. "Radiation from curved (conical) frequency selective surfaces." Thesis, Loughborough University, 1993. https://dspace.lboro.ac.uk/2134/7200.

Full text
Abstract:
The thesis deals with the analysis of a microwave Frequency Selective Surface (FSS) on a conical dielectric radome illuminated by a feed horn located at the base. Two approaches have been adopted to solve this problem. The first approach is to calculate the element currents under the assumption that the surface is locally flat, so that the element current at each locality can be determined by Floquet modal analysis. The local incidence has been modelled from the radiation pattern of the source or the aperture fields of the feed; three types of feed model were used to account for the field illumination on the radome. The transmitted fields from the curved surface are obtained from the sum of the radiated fields due to the equivalent magnetic and electric current sources distributed in each local unit cell of the conical surface. This method treats the interaction of neighbouring FSS elements only. In the second approach the curvature is taken into account by dividing each element into segments which conform to the curved surface. An integral formulation is used to account for the interaction of all the elements, and the current source in each FSS element is solved using the method of moments (MoM) technique. The resulting linear system of simultaneous equations has been solved both by an elimination method and by an iterative method employing conjugate gradients, and the two solvers have been compared with regard to computation speed and memory requirements. New formulations using quasi-static approximations have been derived to account for a thin dielectric backing in the curved-aperture FSS analysis. Computer models have been developed to predict the radiation performance of the curved (conical) FSS. Experiments were performed in an anechoic chamber where the FSS cone was mounted on a jig resting on a turntable. The measuring setup contained a sweep oscillator that supplied power to a transmitting feed placed at the base of the cone, and amplitude and phase values of the far-field radiation pattern of the cone were measured with the aid of a vector network analyser. Cones with different dimensions and FSS element geometries were constructed, and the measured transmission losses and radiation patterns compared with predictions.
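The MoM step above reduces to a linear system Z·I = V with a complex, non-Hermitian matrix. A sketch of conjugate gradients applied to the normal equations (CGNR), one standard way to use conjugate gradients on such systems, is given below; it illustrates the iterative approach named in the abstract, not the thesis's implementation.

```python
import numpy as np

def cgnr(Z, V, tol=1e-8, max_iter=500):
    """Solve Z @ I = V by conjugate gradients on Z^H Z I = Z^H V."""
    I = np.zeros_like(V)
    r = V - Z @ I                  # residual in the original system
    p = s = Z.conj().T @ r         # residual in the normal equations
    norm_s0 = np.vdot(s, s).real
    for _ in range(max_iter):
        q = Z @ p
        alpha = np.vdot(s, s).real / np.vdot(q, q).real
        I = I + alpha * p
        r = r - alpha * q
        s_new = Z.conj().T @ r
        beta = np.vdot(s_new, s_new).real / np.vdot(s, s).real
        p = s_new + beta * p
        s = s_new
        if np.vdot(s, s).real < tol**2 * norm_s0:
            break
    return I
```

Unlike Gaussian elimination, the iteration needs only matrix-vector products, which is where the memory advantage reported in the thesis comes from.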
APA, Harvard, Vancouver, ISO, and other styles
3

Paterson, Chay Giles Blair. "Minimal models of invasion and clonal selection in cancer." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28986.

Full text
Abstract:
One of the defining features of cancer is cell migration: the tendency of malignant cells to become motile and move significant distances through intervening tissue. This is a necessary precondition for metastasis, the ability of cancers to spread, which once underway permits more rapid growth and complicates effective treatment. In addition, the emergence and development of cancer is currently believed to be an evolutionary process, in which the emergence of cancerous cell lines and the subsequent appearance of resistant clones is driven by selection. In this thesis we develop minimal models of the relationship between motility, growth, and evolution of cancer cells. These should be simple enough to be easily understood and analysed, but remain realistic in their biologically relevant assumptions. We utilise simple simulations of a population of individual cells in space to examine how changes in mechanical properties of invasive cells and their surroundings can affect the speed of cell migration. We similarly examine how differences in the speed of migration can affect the growth of tumours. From this we conclude that cells with a higher elastic stiffness experience stronger resistance to their movement through tissue, but this resistance is limited by the elasticity of the surrounding tissue. We also find that the growth rate of large lesions depends weakly on the migration speed of escaping cells, and has stronger and more complex dependencies on the rates of other stochastic processes in the model, namely the rate at which cells transition to being motile and the reverse rate at which cells cease to be motile. To examine how the rates of growth and evolution of an ensemble of cancerous lesions depends on their geometry and underlying fitness landscape, we develop an analytical framework in which the spatial structure is coarse grained and the cancer treated as a continuously growing system with stochastic migration events. Both the fully stochastic realisations of the system and deterministic population transport approaches are studied. Both approaches conclude that the whole ensemble can undergo migration-driven exponential growth regardless of the dependence of size on time of individual lesions, and that the relationship between growth rate and rate of migration is determined by the geometrical constraints of individual lesions. We also find that linear fitness landscapes result in faster-than-exponential growth of the ensemble, and we can determine the expected number of driver mutations present in several important cases of the model. Finally, we study data from a clinical study of the effectiveness of a new low-dose combined chemotherapy. This enables us to test some important hypotheses about the growth rate of pancreatic cancers and the speed with which evolution occurs in reality. We test a moderately successful simple model of the observed growth curves, and use it to infer how frequently drug resistant mutants appear in this clinical trial. We conclude that the main shortcomings of the model are the difficulty of avoiding over-interpretation in the face of noise and small datasets. Despite this, we find that the frequency of resistant mutants is far too high to be explained without resorting to novel mechanisms of cross-resistance to multiple drugs. We outline some speculative explanations and attempt to provide possible experimental tests.
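As an illustration of the kind of minimal model described, here is a toy Gillespie-style simulation in which cells divide while static and switch between static and motile states at rates k_on and k_off. The two-state structure, the rates, and all names are assumptions for the sketch, not the thesis's actual model.

```python
import numpy as np

def gillespie_switching(n_static=10, n_motile=0, t_end=50.0,
                        birth=0.1, k_on=0.02, k_off=0.05, seed=1):
    """Simulate counts of static and motile cells over time."""
    rng = np.random.default_rng(seed)
    t, history = 0.0, [(0.0, n_static, n_motile)]
    while t < t_end and (n_static + n_motile) > 0:
        rates = np.array([birth * n_static,   # static cell divides
                          k_on * n_static,    # static -> motile
                          k_off * n_motile])  # motile -> static
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)     # time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:
            n_static += 1
        elif event == 1:
            n_static -= 1; n_motile += 1
        else:
            n_motile -= 1; n_static += 1
        history.append((t, n_static, n_motile))
    return history
```

Varying k_on and k_off in such a skeleton is the cheapest way to see the abstract's claim that growth depends more strongly on the switching rates than on migration speed itself.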
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Wen-Chyi. "Regularized variable selection in proportional hazards model using area under receiver operating characteristic curve criterion." College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9972.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
5

Flake, Darl D. II. "Separation of Points and Interval Estimation in Mixed Dose-Response Curves with Selective Component Labeling." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4697.

Full text
Abstract:
This dissertation develops, applies, and investigates new methods to improve the analysis of logistic regression mixture models. An interesting dose-response experiment was previously carried out on a mixed population, in which the class membership of only a subset of subjects (survivors) were subsequently labeled. In early analyses of the dataset, challenges with separation of points and asymmetric confidence intervals were encountered. This dissertation extends the previous analyses by characterizing the model in terms of a mixture of penalized (Firth) logistic regressions and developing methods for constructing profile likelihood-based confidence and inverse intervals, and confidence bands in the context of such a model. The proposed methods are applied to the motivating dataset and another related dataset, resulting in improved inference on model parameters. Additionally, a simulation experiment is carried out to further illustrate the benefits of the proposed methods and to begin to explore better designs for future studies. The penalized model is shown to be less biased than the traditional model and profile likelihood-based intervals are shown to have better coverage probability than Wald-type intervals. Some limitations, extensions, and alternatives to the proposed methods are discussed.
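A minimal sketch of the Firth-penalized logistic fit that the mixture model builds on: plain Newton iterations with the Jeffreys-prior score correction, which keeps estimates finite under the separation of points mentioned above. It assumes X already carries an intercept column; the mixture components and interval constructions of the dissertation are omitted.

```python
import numpy as np

def firth_logistic(X, y, max_iter=50, tol=1e-8):
    """Bias-reduced (Firth) logistic regression via Newton iterations."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)                        # Fisher weights
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # hat diagonals of W^(1/2) X (X'WX)^-1 X' W^(1/2)
        h = np.einsum('ij,jk,ik->i', X, XtWX_inv, X) * W
        # Firth-adjusted score: separation can no longer push |beta| -> inf
        score = X.T @ (y - mu + h * (0.5 - mu))
        step = XtWX_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

The correction term h*(0.5 - mu) is what distinguishes this from ordinary maximum likelihood and is the source of the reduced bias reported in the simulation experiment.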
APA, Harvard, Vancouver, ISO, and other styles
6

Boruvka, Audrey. "Data-driven estimation for Aalen's additive risk model." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Kyeong Eun. "Bayesian models for DNA microarray data analysis." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/2465.

Full text
Abstract:
Selection of significant genes via expression patterns is important in a microarray problem. Owing to the small sample size and large number of variables (genes), the selection process can be unstable. This research proposes a hierarchical Bayesian model for gene (variable) selection. We employ latent variables in a regression setting and use a Bayesian mixture prior to perform the variable selection. Due to the binary nature of the data, the posterior distributions of the parameters are not in explicit form, and we need to use a combination of truncated sampling and Markov chain Monte Carlo (MCMC) based computation techniques to simulate the posterior distributions. The Bayesian model is flexible enough to identify the significant genes as well as to perform future predictions. The method is applied to cancer classification via cDNA microarrays. In particular, the genes BRCA1 and BRCA2 are associated with a hereditary disposition to breast cancer, and the method is used to identify the set of significant genes that classify BRCA1 against the others. Microarray data can also be applied to survival models. We address the issue of how to reduce the dimension in building the model by selecting significant genes as well as assessing the estimated survival curves. Additionally, we consider the well-known Weibull regression and semiparametric proportional hazards (PH) models for survival analysis. With microarray data, we need to consider the case where the number of covariates p exceeds the number of samples n. Specifically, for a given vector of response values, which are times to event (death or censoring times), and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the responsible genes, which control the survival time. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we assign a prior distribution to this number. The approach creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use an MCMC method. We demonstrate the use of the methodology with (a) diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data and (b) breast carcinoma data. Lastly, we propose a mixture of Dirichlet process models using the discrete wavelet transform for curve clustering. In order to characterize time-course gene expressions, we consider them as trajectory functions of time and gene-specific parameters and obtain their wavelet coefficients by a discrete wavelet transform. We then build cluster curves using a mixture of Dirichlet process priors.
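A stripped-down sketch of the latent-variable machinery for binary data: an Albert-Chib Gibbs sampler for probit regression with a plain Gaussian (ridge) prior on the coefficients. The thesis's Bayesian mixture prior would add an indicator-sampling step for gene selection on top of this; that step is omitted here for brevity, and all names are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, tau2=10.0, seed=0):
    """Albert-Chib sampler: latent z ~ N(X beta, 1), sign fixed by y."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)  # posterior covariance
    L = np.linalg.cholesky(V)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        m = X @ beta
        # truncated-normal latent z: positive where y = 1, negative where y = 0
        lo = np.where(y == 1, -m, -np.inf)
        hi = np.where(y == 1, np.inf, -m)
        z = m + truncnorm.rvs(lo, hi, random_state=rng)
        mean = V @ (X.T @ z)
        beta = mean + L @ rng.standard_normal(p)   # draw from N(mean, V)
        draws[it] = beta
    return draws
```

Because both conditional distributions are available in closed form, the sampler alternates exact draws, which is exactly the "truncated sampling plus MCMC" combination the abstract refers to.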
APA, Harvard, Vancouver, ISO, and other styles
8

Plašil, Miroslav. "Empirické ověření nové Keynesiánské Philipsovy křivky v ČR." Doctoral thesis, Vysoká škola ekonomická v Praze, 2003. http://www.nusl.cz/ntk/nusl-77088.

Full text
Abstract:
The New Keynesian Phillips curve (NKPC) has become a central model for studying the relation between inflation and real economic activity, notably in the framework of optimal monetary policy design. However, some recent evidence suggests that empirical data are usually at odds with the underlying theory, and the model's inherent structure makes it a statistical challenge in its own right. Since Galí and Gertler (1999) published their seminal paper introducing estimation via GMM techniques, a heated debate on the model's empirical relevance has been under way. Their approach has been heavily criticised by later authors, mainly on the grounds of the questionable behaviour of the GMM estimator in the NKPC context and its small-sample properties; the common criticisms include sensitivity to the choice of instrument set, weak identification, and small-sample bias. In this thesis I propose a new estimation strategy that remedies the above shortcomings and allows reliable estimates to be obtained. The procedure exploits recent advances in GMM theory as well as in other fields of statistics, in particular time-series factor analysis and the bootstrap. The proposed strategy consists of several consecutive steps: first, to reduce the small-sample bias resulting from excessive use of instruments, I summarize all available information by employing factor analysis and include the estimated factors in the information set; second, I use statistical information criteria to select optimal instruments; finally, I obtain confidence intervals on the parameters using the bootstrap. In the NKPC context all these methods were used for the first time, and each can also be used independently; their combination, however, provides a synergistic effect that improves the properties of the estimates and allows the efficiency of the individual steps to be checked. The results suggest that the NKPC model can explain Czech inflation dynamics fairly well and provide some support for the underlying theory. Among other things, they imply that a policy of disinflation may not be as costly, in terms of lost aggregate product, as earlier versions of the Phillips curve would indicate. However, finding a good proxy for real economic activity proved to be a difficult task: the results are conditional on how the measure is calculated, and some measures even showed countercyclical behaviour. This issue, discussed in the thesis only in passing, is a subject of future research. In addition to the proposed strategy and the parameter estimates, the thesis offers some partial simulation-based findings. The simulations elaborate on the earlier literature on the naive bootstrap in the GMM context and study the performance of bootstrap modifications of unit root and KPSS tests.
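A compact sketch of the first steps of the strategy, under illustrative names: summarise a standardised macro panel by principal-component factors, then fit a hybrid NKPC (inflation on its lead, its lag, and marginal cost) by two-stage least squares with lagged factors among the instruments. The GMM weighting, instrument-selection criteria, and bootstrap confidence intervals of the thesis are omitted.

```python
import numpy as np

def pca_factors(panel, n_factors):
    """Static PCA factors from a (T x N) macro panel, standardised by column."""
    Z = (panel - panel.mean(0)) / panel.std(0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_factors].T           # (T x n_factors) factor estimates

def tsls(y, X_endog, X_exog, instruments):
    """Plain 2SLS: project endogenous regressors on the instrument set.
    Instruments must be predetermined, e.g. lagged factors and lags of y."""
    Z = np.column_stack([X_exog, instruments])
    X = np.column_stack([X_endog, X_exog])
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection onto instrument space
    Xh = Pz @ X                             # first-stage fitted regressors
    return np.linalg.solve(Xh.T @ X, Xh.T @ y)
```

Replacing a long list of raw instruments by a few estimated factors is what curbs the "excessive instruments" bias the abstract describes, while still using the information in the whole panel.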
APA, Harvard, Vancouver, ISO, and other styles
9

Rückert, Nadja. "Studies on two specific inverse problems from imaging and finance." Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-91587.

Full text
Abstract:
This thesis deals with regularization parameter selection methods in the context of Tikhonov-type regularization with Poisson distributed data, in particular the reconstruction of images, as well as with the identification of the volatility surface from observed option prices. In Part I we examine the choice of the regularization parameter when reconstructing an image, which is disturbed by Poisson noise, with Tikhonov-type regularization. This type of regularization is a generalization of the classical Tikhonov regularization in the Banach space setting and often called variational regularization. After a general consideration of Tikhonov-type regularization for data corrupted by Poisson noise, we examine the methods for choosing the regularization parameter numerically on the basis of two test images and real PET data. In Part II we consider the estimation of the volatility function from observed call option prices with the explicit formula which has been derived by Dupire using the Black-Scholes partial differential equation. The option prices are only available as discrete noisy observations so that the main difficulty is the ill-posedness of the numerical differentiation. Finite difference schemes, as regularization by discretization of the inverse and ill-posed problem, do not overcome these difficulties when they are used to evaluate the partial derivatives. Therefore we construct an alternative algorithm based on the weak formulation of the dual Black-Scholes partial differential equation and evaluate the performance of the finite difference schemes and the new algorithm for synthetic and real option prices.
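For concreteness, Dupire's formula in its simplest zero-rate form reads sigma^2(K,T) = 2 (dC/dT) / (K^2 d2C/dK2). The naive finite-difference evaluation below is exactly the ill-posed step the thesis replaces: with noisy option prices the second strike derivative is unstable. Names and the zero-rate simplification are assumptions of this sketch.

```python
import numpy as np

def dupire_local_vol(C, strikes, maturities):
    """Local volatility from a (len(maturities) x len(strikes)) grid of
    call prices, via central differences; assumes zero rates/dividends."""
    dC_dT = np.gradient(C, maturities, axis=0)    # maturity derivative
    dC_dK = np.gradient(C, strikes, axis=1)
    d2C_dK2 = np.gradient(dC_dK, strikes, axis=1)  # noise amplifier
    var = 2.0 * dC_dT / (strikes[None, :] ** 2 * d2C_dK2)
    return np.sqrt(np.clip(var, 0.0, None))        # noise can make var < 0
```

Running this on market data shows the spikes and negative variances that motivate the thesis's alternative algorithm based on the weak formulation of the dual Black-Scholes equation.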
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Limei. "Probabilistic model designs and selection curves of trawl gears." 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

WENG, SHIH-CHIEH (翁士傑). "Exploring Influential Factors for Model Selection in Latent Growth Curve Models." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/50528846949302923171.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Statistics
Academic year 95 (2006-2007)
Latent growth curve modeling (LGCM) is widely used for longitudinal data analysis in psychology, education, and medical research. A typical longitudinal study observes whether subjects' behaviour or attitudes change as time progresses, and LGCM is a technique for analyzing repeated measurements of a variable across time points. This thesis examines the use of LGCM with small sample sizes, assessing the accuracy of model selection and the power of test statistics. Past research has rarely investigated LGCM under small samples, and model selection there has mainly concerned continuous variables; this study follows that setting. The simulation study varies six factors: sample size, intercept variance, slope variance, slope mean, intercept-slope covariance, and the number of observed variables. Using Monte Carlo simulation, data are first generated and then fitted to five nested models to estimate how power changes as the influencing factors vary. The results indicate that the main factors are sample size, the intercept-slope covariance, and the number of observed variables. Among the model selection indicators, BIC performed best and adjusted BIC worst. The research concludes with a power table for the use of applied researchers.
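A minimal sketch of one Monte Carlo cell of such a study, with illustrative parameter values: simulate linear latent-growth data for a small sample, then check whether BIC prefers the true linear-growth model over an intercept-only one. Power for that cell is the rate at which it does across replications (seeds).

```python
import numpy as np
import statsmodels.api as sm

def bic_prefers_slope(n_subj=30, n_waves=4, slope_mean=0.3, seed=0):
    rng = np.random.default_rng(seed)
    t = np.tile(np.arange(n_waves), n_subj).astype(float)
    ids = np.repeat(np.arange(n_subj), n_waves)
    intercepts = rng.normal(0.0, 1.0, n_subj)      # random intercepts
    slopes = rng.normal(slope_mean, 0.3, n_subj)   # random slopes
    y = intercepts[ids] + slopes[ids] * t + rng.normal(0.0, 0.5, len(t))

    def bic(exog):
        # ML rather than REML, since the fixed effects differ across models
        res = sm.MixedLM(y, exog, groups=ids).fit(reml=False)
        k = len(res.params)                        # crude parameter count
        return -2.0 * res.llf + k * np.log(len(y))

    no_growth = np.ones((len(t), 1))               # intercept-only model
    linear = np.column_stack([np.ones_like(t), t])  # linear growth model
    return bic(linear) < bic(no_growth)
```

Averaging the return value over many seeds, and repeating across the six design factors, yields exactly the kind of power table the thesis reports.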
APA, Harvard, Vancouver, ISO, and other styles
12

Gadoury, David. "Distributions d'auto-amorçage exactes ponctuelles des courbes ROC et des courbes de coûts." Thèse, 2009. http://hdl.handle.net/1866/7896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Chun-Shu (陳春樹). "Model Selection for Curve and Surface Fitting Using Generalized Degrees of Freedom." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/49906881842793562861.

Full text
Abstract:
Doctoral dissertation
National Central University
Graduate Institute of Statistics
Academic year 95 (2006-2007)
In the process of data analysis there are usually a number of candidate statistical methods (models), and different methods generally perform differently in different situations. In this thesis, we focus on model selection in curve and surface fitting. We develop a general rule for fairly comparing candidate curve- or surface-fitting methods, regardless of whether the fitting procedures are complex and whether the corresponding estimates are linear, nonlinear, or even discontinuous. Based on the concept of generalized degrees of freedom (GDF) (Ye 1998), we propose an improved Cp method to select among a class of selection criteria in spline smoothing. In addition, a general methodology for geostatistical model selection is proposed by further generalizing GDF to spatial prediction. The proposed method not only can be used to select among various spatial prediction methods, but can also be applied to the variable selection problem in spatial regression. The validity of the proposed model selection methods for curve and surface fitting is justified both numerically and theoretically.
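A sketch of Ye's (1998) Monte Carlo estimate of GDF, the quantity behind the proposed improved Cp: perturb the data, refit, and accumulate the sensitivity of the fitted values to the perturbations. Here `fit` stands for any fitting procedure mapping a response vector to fitted values (a spline smoother, a kriging predictor, even a discontinuous select-then-fit rule), which is what makes the comparison fair across complex methods; `tau` is a small perturbation scale.

```python
import numpy as np

def gdf(fit, y, tau=0.5, n_draws=100, seed=0):
    """Monte Carlo generalized degrees of freedom of a fitting procedure:
    GDF = sum_i d E[fit_i(y)] / d y_i, estimated by perturbation."""
    rng = np.random.default_rng(seed)
    base = fit(y)
    total = 0.0
    for _ in range(n_draws):
        delta = rng.normal(0.0, tau, size=len(y))
        total += delta @ (fit(y + delta) - base)
    return total / (n_draws * tau**2)
```

For a linear smoother fit(y) = H y the estimate recovers the trace of the hat matrix, so GDF reduces to the classical degrees of freedom; the point of the construction is that it remains well defined for nonlinear and discontinuous procedures.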
APA, Harvard, Vancouver, ISO, and other styles
14

"Addressing the Variable Selection Bias and Local Optimum Limitations of Longitudinal Recursive Partitioning with Time-Efficient Approximations." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.54792.

Full text
Abstract:
Longitudinal recursive partitioning (LRP) is a tree-based method for longitudinal data. It takes a sample of individuals that were each measured repeatedly across time, and it splits them based on a set of covariates such that individuals with similar trajectories become grouped together into nodes. LRP does this by fitting a mixed-effects model to each node every time that it becomes partitioned and extracting the deviance, which is the measure of node purity. LRP is implemented using the classification and regression tree algorithm, which suffers from a variable selection bias and does not guarantee reaching a global optimum. Additionally, fitting mixed-effects models to each potential split only to extract the deviance and discard the rest of the information is a computationally intensive procedure. Therefore, in this dissertation, I address the high computational demand, the variable selection bias, and the local optimum solution. I propose three approximation methods that reduce the computational demand of LRP and, at the same time, allow for a straightforward extension to recursive partitioning algorithms that do not have a variable selection bias and can reach the global optimum solution. In the three proposed approximations, a mixed-effects model is fit to the full data, and the growth curve coefficients for each individual are extracted. Then, (1) a principal component analysis is fit to the set of coefficients and the principal component score is extracted for each individual, (2) a one-factor model is fit to the coefficients and the factor score is extracted, or (3) the coefficients are summed. The three methods result in each individual having a single score that represents the growth curve trajectory. Now that the outcome is a single score for each individual, any tree-based method may be used to partition the data and group the individuals together. Once the individuals are assigned to their final nodes, a mixed-effects model is fit to each terminal node with the individuals belonging to it. I conduct a simulation study in which I show that the approximation methods achieve the proposed goals while maintaining a level of out-of-sample prediction accuracy similar to LRP. I then illustrate and compare the methods using an applied data set.
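A sketch of approximation (1), with per-person OLS standing in for the mixed-effects coefficient extraction and illustrative argument names: collapse each trajectory to an intercept and slope, reduce those to a single principal-component score, and let an ordinary regression tree partition individuals on the covariates.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

def lrp_approximation(times, Y, covariates, max_depth=3):
    """times: (n_waves,), Y: (n_people, n_waves), covariates: (n_people, q)."""
    T = np.column_stack([np.ones_like(times), times])
    # per-person intercept and slope (OLS proxy for mixed-model coefficients)
    coefs = np.linalg.lstsq(T, Y.T, rcond=None)[0].T       # (n_people, 2)
    # one score per person summarising the growth trajectory
    score = PCA(n_components=1).fit_transform(coefs)[:, 0]
    # any tree method can now split on covariates; leaves group trajectories
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(covariates, score)
    return tree
```

Because the mixed-effects model is fit only once up front and once per terminal node at the end, the expensive fit-per-candidate-split loop of LRP disappears, which is the computational saving the abstract claims.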
Doctoral Dissertation, Psychology, 2019
APA, Harvard, Vancouver, ISO, and other styles
15

Ma, Liangzhuang. "Optimization of trawlnet codend mesh size to allow for maximal undersized fish release and a model consideration of towing time to the effects of the selection curve." 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Kruger, Ester. "Assessing the accuracy of the growth in theoretical capability as predicted by the career path appreciation (CPA) 1 VS CPA 2." Diss., 2013. http://hdl.handle.net/10500/11875.

Full text
Abstract:
The need for the identification and appropriate development of talent in organisations has led to a renewed interest in the accuracy of tools used in this context. The objectives of the study were to: (1) determine whether there is a significant difference in the growth in theoretical capability as predicted by Career Path Appreciation (CPA) 1 and CPA 2 among the sample population, (2) determine whether there is a significant difference in Mode as predicted by CPA 1 and CPA 2 among the sample population, and (3) formulate recommendations for Talent Management and Industrial and Organisational Psychology practices and future research. The CPA is a tool used for the selection and development of talent nationally and internationally. Limited recent test-retest research has been done regarding the utilisation of the CPA in this context. Scholars in the field of industrial psychology could therefore benefit from follow-up research regarding the validity and reliability of the CPA. The research design is an ex post facto correlational design using longitudinal data of a sample of convenience (N=527). Overall, the results indicated a significant correlation between CLC for CPA 1 and CPA 2 as well as between Mode for CPA 1 and CPA 2. The CPA as a measure of theoretical capability is consistently accurate between measures and can be used with confidence for the identification and development of talent within organisations.
Industrial & Organisational Psychology
M. Admin. (Industrial and Organisational Psychology)
APA, Harvard, Vancouver, ISO, and other styles