Academic literature on the topic 'Regression analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Regression analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Regression analysis":

1. Asakura, Koko, and Toshimitsu Hamasaki. "Regression analysis." Drug Delivery System 31, no. 1 (2016): 72–81. http://dx.doi.org/10.2745/dds.31.72.

2. Anthony, Denis. "Regression analysis." Nurse Researcher 4, no. 4 (August 1997): 54–62. http://dx.doi.org/10.7748/nr.4.4.54.s6.

3. Anthony, Denis. "Regression analysis." Nurse Researcher 4, no. 1 (October 1996): 318–26. http://dx.doi.org/10.7748/nr1996.10.4.1.318.c6066.

4. Munro, Barbara Hazard. "Regression Analysis." Clinical Nurse Specialist 6, no. 2 (1992): 77. http://dx.doi.org/10.1097/00002800-199200620-00006.

5. Rawles, John, and J. C. Bignall. "Regression Analysis." Lancet 327, no. 8481 (March 1986): 614–15. http://dx.doi.org/10.1016/s0140-6736(86)92832-1.

6. Bland, J. M., and D. G. Altman. "Regression Analysis." Lancet 327, no. 8486 (April 1986): 908–9. http://dx.doi.org/10.1016/s0140-6736(86)91008-1.

7. Glaser, Elton. "Regression Analysis." Missouri Review 27, no. 1 (2004): 140–41. http://dx.doi.org/10.1353/mis.2004.0010.

8. Lewis, S. "Regression analysis." Practical Neurology 7, no. 4 (August 1, 2007): 259–64. http://dx.doi.org/10.1136/jnnp.2007.120055.

9. Bolshakova, Lyudmila Valentinovna. "Correlation and Regression Analysis of Economic Problems." Revista Gestão Inovação e Tecnologias 11, no. 3 (June 30, 2021): 2077–88. http://dx.doi.org/10.47059/revistageintec.v11i3.2074.

10. Kopsidas, Gerassimos C. "A regression analysis on the green olives debittering." Grasas y Aceites 42, no. 6 (December 30, 1991): 401–3. http://dx.doi.org/10.3989/gya.1991.v42.i6.1200.

Dissertations / Theses on the topic "Regression analysis":

1. Sullwald, Wichard. "Grain regression analysis." Thesis, Stellenbosch: Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86526.
Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
Grain regression analysis forms an essential part of solid rocket motor simulation. In this thesis a numerical grain regression analysis module is developed as an alternative to cumbersome and time-consuming analytical methods. The surface regression is performed by the level-set method, a numerical interface advancement scheme. A novel approach is proposed for integrating the surface area and volume of a numerical interface, defined implicitly in a level-set framework, by means of Monte Carlo integration. The grain regression module is directly coupled to a quasi-1D internal ballistics solver in an on-line fashion, in order to take into account the effects of spatially varying burn-rate distributions. A multi-timescale approach is proposed for the direct coupling of the two solvers.
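The Monte Carlo idea described in this abstract can be illustrated with a small, self-contained Python sketch (not the thesis's grain-regression module): the measure of the region where a level-set function is negative is estimated by uniform sampling over a bounding box. The function name and the disc example are illustrative assumptions.

```python
import numpy as np

def mc_measure_of_levelset(phi, box_min, box_max, n_samples=200_000, seed=0):
    """Estimate the area (2D) or volume (3D) of the region {x : phi(x) < 0}
    by uniform Monte Carlo sampling over an axis-aligned bounding box."""
    rng = np.random.default_rng(seed)
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    samples = rng.uniform(box_min, box_max, size=(n_samples, box_min.size))
    inside = phi(samples) < 0.0                # points inside the implicit region
    box_measure = np.prod(box_max - box_min)   # measure of the bounding box
    return box_measure * inside.mean()

# Example: a disc of radius 1 described by a signed-distance level-set function.
phi_disc = lambda x: np.linalg.norm(x, axis=1) - 1.0
estimate = mc_measure_of_levelset(phi_disc, box_min=[-1.5, -1.5], box_max=[1.5, 1.5])
print(estimate, np.pi)   # the estimate should be close to pi
```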
2. Dai, Elin, and Lara Güleryüz. "Factors that influence condominium pricing in Stockholm: A regression analysis." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254235.
Abstract:
This thesis aims to examine which factors are significant when forecasting the selling price of condominiums in Stockholm city. Through multiple linear regression, transformation of the response variable, and a range of methods for refining the model fit, a conclusive, out-of-sample-validated model at a 95% confidence level was obtained. The statistical methods were carried out in the software R. The study is limited to the districts of inner-city Stockholm with the postal codes 112-118, and the final model can only be applied to this area, as the postal codes are included as regressors in the model. Selling prices were analyzed for the period January 2014 to April 2019; the volatility of the time value of money over this period was not taken into account. The final model included the following variables as those influencing the selling price: floor, living area, monthly fee, construction year, and city district.
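As a rough illustration of the modelling recipe described above (multiple linear regression with a log-transformed response, a categorical district regressor, and out-of-sample validation), here is a minimal Python sketch using statsmodels instead of the R workflow used in the thesis. The data frame and its column names are simulated stand-ins, not the thesis data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the sales data; the columns mirror the regressors
# named in the abstract but the values are simulated for illustration.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "floor": rng.integers(0, 8, n),
    "living_area": rng.uniform(20, 120, n),
    "monthly_fee": rng.uniform(1000, 6000, n),
    "construction_year": rng.integers(1900, 2019, n),
    "district": rng.choice(["112", "113", "114", "115", "116", "117", "118"], n),
})
df["selling_price"] = np.exp(
    13 + 0.01 * df["floor"] + 0.012 * df["living_area"]
    - 0.00005 * df["monthly_fee"] + rng.normal(scale=0.1, size=n)
)

train, test = train_test_split(df, test_size=0.2, random_state=0)

# Multiple linear regression with a log-transformed response and the
# postal-code district entering as a categorical regressor.
model = smf.ols(
    "np.log(selling_price) ~ floor + living_area + monthly_fee"
    " + construction_year + C(district)",
    data=train,
).fit()
print(model.summary())   # coefficients with 95% confidence intervals

# Out-of-sample validation on the held-out 20 percent of sales.
pred = model.predict(test)
rmse = np.sqrt(np.mean((np.log(test["selling_price"]) - pred) ** 2))
print("out-of-sample RMSE on the log scale:", rmse)
```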
3. Zuo, Yanling. "Monotone regression functions." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29457.
Abstract:
In some applications, we require a monotone estimate of a regression function. In others, we want to test whether the regression function is monotone. To solve the first problem, Ramsay's, Kelly and Rice's, as well as point-wise monotone regression functions in a spline space are discussed and their properties developed. Three monotone estimates are defined: least-squares regression splines, smoothing splines and binomial regression splines. The three estimates depend upon a "smoothing parameter": the number and location of knots in regression splines and the usual [formula omitted] in smoothing splines. Two standard techniques for choosing the smoothing parameter, GCV and AIC, are modified for monotone estimation in the normal errors case. To answer the second question, a test statistic is proposed and its null distribution conjectured. Simulations are carried out to check the conjecture. These techniques are applied to two data sets.
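For a quick, hands-on feel for monotone estimation, the sketch below fits an isotonic regression with scikit-learn on simulated data. This is only a stand-in for the spline-based monotone estimators and the GCV/AIC-tuned smoothing discussed in the thesis, not a reproduction of them.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 100))
y = np.log1p(x) + rng.normal(scale=0.2, size=x.size)   # monotone signal plus noise

# Isotonic regression: the simplest monotone least-squares estimate.
iso = IsotonicRegression(increasing=True)
y_mono = iso.fit_transform(x, y)

# The fitted values are non-decreasing in x by construction.
assert np.all(np.diff(y_mono) >= 0)
```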
4. Ryu, Duchwan. "Regression analysis with longitudinal measurements." Texas A&M University, 2005. http://hdl.handle.net/1969.1/2398.
Abstract:
Bayesian approaches to regression analysis for longitudinal measurements are considered. The history of measurements from a subject may convey characteristics of that subject; hence, in a regression analysis with longitudinal measurements, the characteristics of each subject can serve as covariates, in addition to other possible covariates. The longitudinal measurements may also lead to complicated covariance structures within each subject, which should be modeled properly. When the covariates are unobservable characteristics of each subject, Bayesian parametric and nonparametric regressions have been considered. Although the covariates are not observable directly, they can be estimated by virtue of the longitudinal measurements; in this case, the measurement error problem is inevitable, so a classical measurement error model is established. In the Bayesian framework, the regression function as well as all the unobservable covariates and nuisance parameters are estimated. As multiple covariates are involved, a generalized additive model is adopted, and the Bayesian backfitting algorithm is utilized for each component of the additive model. For a binary response, logistic regression is proposed, where the link function is estimated by Bayesian parametric and nonparametric regressions; for the link function, the introduction of latent variables makes the computation fast. In the next part, each subject is assumed not to be observed at prespecified time points; furthermore, the time of the next measurement from a subject is supposed to depend on the subject's previous measurement history. For these outcome-dependent follow-up times, various modeling options and the associated analyses have been examined to investigate how outcome-dependent follow-up times affect the estimation, within the frameworks of Bayesian parametric and nonparametric regressions. Correlation structures of outcomes are based on different correlation coefficients for different subjects. First, regression models have been constructed by assuming a Poisson process for the follow-up times. To interpret the subject-specific random effects, more flexible models are considered by introducing a latent variable for the subject-specific random effect and a survival distribution for the follow-up times. The performance of each model has been evaluated using Bayesian model assessments.
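The backfitting idea mentioned in the abstract can be sketched in a few lines of Python. The example below implements classical (non-Bayesian) backfitting for an additive model with a simple kernel smoother; it illustrates the algorithmic skeleton only and is not the dissertation's Bayesian sampler. All names and the simulated data are illustrative.

```python
import numpy as np

def kernel_smooth(x, r, bandwidth=0.5):
    """Nadaraya-Watson smoother of partial residuals r against covariate x."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w * r[None, :]).sum(axis=1) / w.sum(axis=1)

def backfit(X, y, n_iter=20, bandwidth=0.5):
    """Classical backfitting for an additive model y = alpha + sum_j f_j(x_j) + noise."""
    n, p = X.shape
    alpha = y.mean()
    f = np.zeros((p, n))
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f.sum(axis=0) + f[j]   # residuals excluding f_j
            f[j] = kernel_smooth(X[:, j], partial, bandwidth)
            f[j] -= f[j].mean()                          # centre for identifiability
    return alpha, f

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
alpha, f = backfit(X, y)
```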
5. Campbell, Ian. "The geometry of regression analysis." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5755.
6. Wiencierz, Andrea. "Regression analysis with imprecise data." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-166786.
Abstract:
Statistical methods usually require that the analyzed data are correct and precise observations of the variables of interest. In practice, however, often only incomplete or uncertain information about the quantities of interest is available. The question studied in the present thesis is how a regression analysis can reasonably be performed when the variables are only imprecisely observed. First, different approaches to analyzing imprecisely observed variables that have been proposed in the statistics literature are discussed. Then, a new likelihood-based methodology for regression analysis with imprecise data, called Likelihood-based Imprecise Regression, is introduced. The corresponding methodological framework is very broad and permits accounting for coarsening errors, in contrast to most alternative approaches to analyzing imprecise data. The methodology suggests considering as the result of a regression analysis the entire set of regression functions that cannot be excluded in the light of the data, which can be interpreted as a confidence set. In the subsequent chapter, a very general regression method is derived from the likelihood-based methodology. This regression method does not impose restrictive assumptions about the form of the imprecise observations, about the underlying probability distribution, or about the shape of the relationship between the variables. Moreover, an exact algorithm is developed for the special case of simple linear regression with interval data, and selected statistical properties of this regression method are studied. The proposed regression method turns out to be robust in terms of a high breakdown point and to provide very reliable insights in the sense of a set-valued result with a high coverage probability. In addition, an alternative approach proposed in the literature, based on Support Vector Regression, is studied in detail and generalized by embedding it into the framework of the previously introduced likelihood-based methodology. In the end, the discussed regression methods are applied to two practical questions.
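A toy sketch can convey what a set-valued regression result for interval data looks like: collect every candidate line that is compatible with all observed intervals. This is a deliberately simplified illustration of the idea of not excluding plausible regression functions, not the Likelihood-based Imprecise Regression method or its exact interval-data algorithm; the data and grid are invented.

```python
import numpy as np

# Interval observations of the response at precisely observed x values.
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 30)
y_true = 1.0 + 0.5 * x
y_lo = y_true - rng.uniform(0.2, 1.0, x.size)
y_hi = y_true + rng.uniform(0.2, 1.0, x.size)

# Keep every candidate line y = a + b*x that passes through all intervals.
intercepts = np.linspace(-2, 4, 121)
slopes = np.linspace(-1, 2, 121)
not_excluded = []
for a in intercepts:
    for b in slopes:
        fitted = a + b * x
        if np.all((fitted >= y_lo) & (fitted <= y_hi)):
            not_excluded.append((a, b))

print(f"{len(not_excluded)} candidate lines cannot be excluded by the data")
```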
7. Jeffrey, Stephen Glenn. "Quantile regression and frontier analysis." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/47747/.
Abstract:
In chapter 3, quantile regression is used to estimate probabilistic frontiers, i.e. frontiers based on the probability of being dominated. The results from the empirical application using an Italian hotel dataset show rejections of a parametric functional form and a location-shift effect, large uncertainty in the estimates of the frontier, and wide confidence intervals for the estimates of efficiency. Quantile regression is further developed to estimate thick probabilistic frontiers, i.e. frontiers based on a group of efficient firms. The empirical results show that the differences between the inefficient and efficient firms at lower quantiles of the conditional distribution function come from the coefficient effect (85 percent of the total effect) and the residual effect (25 percent), and at higher quantiles from the coefficient effect (68 percent) and the regressor effect (22 percent). The results from the Monte Carlo simulations in chapter 4 show that under correctly assumed stochastic frontier models, the probabilistic frontiers can have the lowest bias and mean squared error of the efficiency estimates. When outliers or location-scale shift effects are included, the preference shifts further towards the probabilistic frontiers. The nonparametric probabilistic frontiers are nearly always preferable to Data Envelopment Analysis and Free Disposal Hull. In chapter 5, a fixed effects quantile regression estimator is used to estimate a cost frontier and efficiency levels for a panel dataset of English NHS Trusts. Waiting-time elasticities are estimated from -0.14 to 0.17 in the cross-sectional models and -0.008 to 0.03 in the panel models. The cost-minimising waiting time ranged from 33 to 60 days in the cross-sectional model and from 37 to 54 days in the panel model. The results show that the effects of the inputs and control variables vary depending on the efficiency of the Trusts. The efficiency estimates lead to very different conclusions depending on the model choice.
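The core device of a probabilistic frontier, a high conditional quantile estimated by quantile regression, can be sketched with statsmodels as below. The simulated data, the single-input specification, and the crude efficiency gap are illustrative assumptions, not the thesis's hotel or NHS models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data: one input x and an output y, where an
# exponential inefficiency term pulls firms below the frontier.
rng = np.random.default_rng(3)
df = pd.DataFrame({"x": rng.uniform(1, 10, 300)})
df["y"] = 2 + 1.5 * np.log(df["x"]) - rng.exponential(0.5, len(df))

# A "probabilistic frontier": the 95th conditional quantile of output
# given input, estimated by quantile regression.
frontier = smf.quantreg("y ~ np.log(x)", df).fit(q=0.95)
print(frontier.params)

# A crude efficiency measure: each firm's distance below the estimated frontier.
df["efficiency_gap"] = frontier.predict(df) - df["y"]
```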
8. Ranganai, Edmore. "Aspects of model development using regression quantiles and elemental regressions." Thesis, Stellenbosch: Stellenbosch University, 2007. http://hdl.handle.net/10019.1/18668.
Abstract:
Dissertation (PhD)--University of Stellenbosch, 2007.
It is well known that ordinary least squares (OLS) procedures are sensitive to deviations from the classical Gaussian assumptions (outliers) as well as data aberrations in the design space. The two major data aberrations in the design space are collinearity and high leverage. Leverage points can also induce or hide collinearity in the design space; such leverage points are referred to as collinearity-influential points. As a consequence, over the years many diagnostic tools to detect these anomalies, as well as alternative procedures to counter them, were developed. To counter deviations from the classical Gaussian assumptions, many robust procedures have been proposed. One such class of procedures is the Koenker and Bassett (1978) Regression Quantiles (RQs), which are natural extensions of order statistics to the linear model. RQs can be found as solutions to linear programming problems (LPs). The basic optimal solutions to these LPs (which are RQs) correspond to elemental subset (ES) regressions, which consist of subsets of minimum size to estimate the necessary parameters of the model. On the one hand, some ESs correspond to RQs. On the other hand, the literature shows that many OLS statistics (estimators) are related to ES regression statistics (estimators). There is therefore an inherent relationship among the three sets of procedures. The relationship between the ES procedure and the RQ one has been noted almost "casually" in the literature, while the latter has been fairly widely explored. Using these existing relationships between the ES procedure and the OLS one, as well as new ones, collinearity, leverage and outlier problems in the RQ scenario were investigated. Also, a lasso procedure was proposed as a variable selection technique in the RQ scenario, and some tentative results were given for it. These results are promising. Single-case diagnostics were considered as well as their relationships to multiple-case ones. In particular, multiple cases of the minimum size to estimate the necessary parameters of the model were considered, corresponding to an RQ (ES). In this way regression diagnostics were developed for both ESs and RQs. The main problems that affect RQs adversely are collinearity and leverage, due to the nature of the computational procedures and the fact that RQs' influence functions are unbounded in the design space but bounded in the response variable. As a consequence, RQs have a high affinity for leverage points and a high exclusion rate of outliers. The influential picture exhibited in the presence of both leverage points and outliers is the net result of these two antagonistic forces. Although RQs are bounded in the response variable (and therefore fairly robust to outliers), outlier diagnostics were also considered in order to obtain a more holistic picture. The investigations comprised analytic means as well as simulation. Furthermore, applications were made to artificial, computer-generated data sets as well as standard data sets from the literature. These revealed that the ES-based statistics can be used, with some degree of success, to address problems arising in the RQ scenario. However, due to the interdependence between the different aspects, viz. the one between leverage and collinearity and the one between leverage and outliers, "solutions" are often dependent on the particular situation. In spite of this complexity, the research did produce some fairly general guidelines that can be fruitfully used in practice.
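The link stated above between regression quantiles and elemental subset regressions can be demonstrated directly for simple linear regression: every pair of observations defines an elemental fit, and a regression quantile coincides with the elemental fit that minimises the Koenker-Bassett check loss. The brute-force sketch below (synthetic data, illustrative only) makes that concrete; production RQ software instead solves a linear programme.

```python
import numpy as np
from itertools import combinations

def check_loss(residuals, tau):
    """Koenker-Bassett check function rho_tau summed over the residuals."""
    return np.sum(residuals * (tau - (residuals < 0)))

# Simple linear model: each elemental subset (ES) is a pair of observations,
# and the exact line through that pair is an ES regression.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 25)
y = 1.0 + 0.8 * x + rng.standard_t(df=3, size=x.size)

tau = 0.5   # the median regression quantile
best = None
for i, j in combinations(range(x.size), 2):
    if x[j] == x[i]:
        continue
    slope = (y[j] - y[i]) / (x[j] - x[i])
    intercept = y[i] - slope * x[i]
    loss = check_loss(y - (intercept + slope * x), tau)
    if best is None or loss < best[0]:
        best = (loss, intercept, slope)

print("median-regression fit found among elemental subsets:", best[1:])
```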
9. Lo, Sau Yee. "Measurement error in logistic regression model." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?MATH%202004%20LO.
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 82-83). Also available in electronic version. Access restricted to campus users.
10. Meless, Dejen. "Test Cycle Optimization using Regression Analysis." Thesis, Linköping University, Automatic Control, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54809.
Abstract:

Industrial robots make up an important part of today's industry and are assigned to a range of different tasks. Needless to say, businesses need to rely on their machine fleet to function as planned, avoiding stops in production due to machine failures. This is where fault detection methods play a very important part. In this thesis a specific fault detection method based on signal analysis is considered. When testing a robot for faults, a specific test cycle (trajectory) is executed in order to be able to compare test data from different test occasions. Furthermore, different test cycles yield different measurements to analyse, which may affect the performance of the analysis. The question posed is: can we find an optimal test cycle so that the fault is best revealed in the test data? The goal of this thesis is to use regression analysis to investigate how the currently executed test cycle in a specific diagnosis method relates to the faults that are monitored (in this case a so-called friction fault) and to decide whether a different one should be recommended. The data also include representations of two disturbances.

The results from the regression show that the variation in the test quantities used in the diagnosis method is not explained by either the friction fault or the test cycle; the disturbances had too large an effect on the test quantities. This made it impossible to recommend a different (optimal) test cycle based on the analysis.
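A minimal sketch of the kind of regression check described in the abstract is given below, under invented assumptions: the column names (test_quantity, friction_fault, test_cycle, disturbance_1, disturbance_2) and the simulated data are illustrative, not the thesis measurements. Small, insignificant fault and cycle terms next to dominant disturbance terms would mirror the reported conclusion.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the test runs, with disturbances built to dominate
# the test quantity, as reported in the abstract.
rng = np.random.default_rng(7)
n = 200
runs = pd.DataFrame({
    "friction_fault": rng.choice([0.0, 0.5, 1.0], n),
    "test_cycle": rng.choice(["A", "B", "C"], n),
    "disturbance_1": rng.normal(size=n),
    "disturbance_2": rng.normal(size=n),
})
runs["test_quantity"] = (
    0.05 * runs["friction_fault"]
    + 1.5 * runs["disturbance_1"]
    - 2.0 * runs["disturbance_2"]
    + rng.normal(scale=0.5, size=n)
)

# Regress the test quantity on the fault level, the executed cycle and the disturbances.
model = smf.ols(
    "test_quantity ~ friction_fault + C(test_cycle)"
    " + disturbance_1 + disturbance_2",
    data=runs,
).fit()
print(model.rsquared)
print(model.pvalues)
```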

Books on the topic "Regression analysis":

1. Sen, Ashish, and Muni Srivastava. Regression Analysis. New York, NY: Springer New York, 1990. http://dx.doi.org/10.1007/978-1-4612-4470-7.

2. Sen, Ashish, and Muni Srivastava. Regression Analysis. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-662-25092-1.

3. Lewis-Beck, Michael S. Regression Analysis. Thousand Oaks, CA: Sage, 1994.

4. Draper, N. R. Applied Regression Analysis. 3rd ed. New York: Wiley, 1998.

5. Kahane, Leo H. Regression Basics. 2nd ed. Thousand Oaks, CA: Sage Publications, 2008.

6. Schroeder, Larry, David Sjoquist, and Paula Stephan. Understanding Regression Analysis. Newbury Park, CA: SAGE Publications, Inc., 1986. http://dx.doi.org/10.4135/9781412986410.

7. Westfall, Peter H., and Andrea L. Arias. Understanding Regression Analysis. Boca Raton, FL: Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9781003025764.

8. von Rosen, Dietrich. Bilinear Regression Analysis. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78784-8.

9. Rawlings, John O., Sastry G. Pantula, and David A. Dickey, eds. Applied Regression Analysis. New York: Springer-Verlag, 1998. http://dx.doi.org/10.1007/b98890.

10. Thrane, Christer. Applied Regression Analysis. Abingdon, Oxon; New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429443756.

Book chapters on the topic "Regression analysis":

1. Arkes, Jeremy. "Summarizing thoughts." In Regression Analysis, 352–63. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-14.

2. Arkes, Jeremy. "Time-series models." In Regression Analysis, 287–314. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-10.

3. Arkes, Jeremy. "Regression analysis basics." In Regression Analysis, 12–50. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-2.

4. Arkes, Jeremy. "Methods to address biases." In Regression Analysis, 225–65. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-8.

5. Arkes, Jeremy. "Introduction." In Regression Analysis, 1–11. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-1.

6. Arkes, Jeremy. "What could go wrong when estimating causal effects?" In Regression Analysis, 132–207. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-6.

7. Arkes, Jeremy. "Strategies for other regression objectives." In Regression Analysis, 208–24. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-7.

8. Arkes, Jeremy. "What does “holding other factors constant” mean?" In Regression Analysis, 67–89. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-4.

9. Arkes, Jeremy. "How to conduct a research project." In Regression Analysis, 331–42. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-12.

10. Arkes, Jeremy. "The ethics of regression analysis." In Regression Analysis, 343–51. 2nd ed. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003285007-13.

Conference papers on the topic "Regression analysis":

1. Saragih, Jason. "Principal regression analysis." In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011. http://dx.doi.org/10.1109/cvpr.2011.5995618.

2. Chivukula, V. N. Aditya Datta, and Sri Keshava Reddy Adupala. "Music Signal Analysis: Regression Analysis." In 2nd International Conference on Machine Learning, IOT and Blockchain (MLIOB 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111205.
Abstract:
Machine learning techniques have become a vital part of ongoing research in technical areas, and recent years have seen many impressive practical applications of machine learning. This paper asks whether we should always rely on deep learning techniques, or whether simple statistical machine learning algorithms can outperform simple deep learning algorithms when the application is well understood and the data are processed so as to raise the algorithm's performance by a notable amount. The paper emphasises the importance of data pre-processing over the selection of the algorithm. It discusses functions involving trigonometric, logarithmic, and exponential terms, as well as functions that are purely trigonometric. Finally, we discuss regression analysis on music signals.
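The kind of regression on trigonometric, logarithmic, and exponential terms mentioned in the abstract can be sketched with ordinary least squares on a hand-built design matrix; the signal and basis below are illustrative, not the authors' dataset or pipeline.

```python
import numpy as np

# Least-squares fit of a signal using trigonometric, logarithmic and
# exponential basis terms.
rng = np.random.default_rng(5)
t = np.linspace(0.1, 4 * np.pi, 400)
signal = (2.0 * np.sin(t) - 0.7 * np.cos(2 * t) + 0.3 * np.log(t)
          + rng.normal(scale=0.1, size=t.size))

X = np.column_stack([
    np.ones_like(t),    # intercept
    np.sin(t),
    np.cos(2 * t),
    np.log(t),
    np.exp(-t),
])
coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(np.round(coef, 3))   # roughly recovers [0, 2, -0.7, 0.3, 0]
```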
3. Khan, Mohiuddeen, and Kanishk Srivastava. "Regression Model for Better Generalization and Regression Analysis." In ICMLSC 2020: The 4th International Conference on Machine Learning and Soft Computing. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3380688.3380691.

4. Podgurski, Andy. "Session details: Regression testing." In ISSTA '08: International Symposium on Software Testing and Analysis. New York, NY, USA: ACM, 2008. http://dx.doi.org/10.1145/3260628.

5. van Erp, N., and P. van Gelder. "Bayesian logistic regression analysis." In Bayesian Inference and Maximum Entropy Methods in Science and Engineering: 32nd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. AIP, 2013. http://dx.doi.org/10.1063/1.4819994.

6. Krasotkina, O., and V. Mottl. "Adaptive nonstationary regression analysis." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761666.

7. Duller, Christine. "Model selection for logistic regression models." In Numerical Analysis and Applied Mathematics ICNAAM 2012: International Conference of Numerical Analysis and Applied Mathematics. AIP, 2012. http://dx.doi.org/10.1063/1.4756152.

8. Karim, Rezaul, Md Khorshed Alam, and Md Rezaul Hossain. "Stock Market Analysis Using Linear Regression and Decision Tree Regression." In 2021 1st International Conference on Emerging Smart Technologies and Applications (eSmarTA). IEEE, 2021. http://dx.doi.org/10.1109/esmarta52612.2021.9515762.

9. Kavitha S, Varuna S, and Ramya R. "A comparative analysis on linear regression and support vector regression." In 2016 Online International Conference on Green Engineering and Technologies (IC-GET). IEEE, 2016. http://dx.doi.org/10.1109/get.2016.7916627.

10. Araveeporn, Autcha, and Choojai Kuharatanachai. "Comparing Penalized Regression Analysis of Logistic Regression Model with Multicollinearity." In the 2019 2nd International Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3343485.3343487.

Reports on the topic "Regression analysis":

1. Burke, Kevin. Regression in Analysis. Fort Belvoir, VA: Defense Technical Information Center, November 2008. http://dx.doi.org/10.21236/ada494895.

2. Steed, Chad A., J. Edward Swan II, Patrick J. Fitzpatrick, and T. J. Jankun-Kelly. A Visual Analytics Approach for Correlation, Classification, and Regression Analysis. Office of Scientific and Technical Information (OSTI), February 2012. http://dx.doi.org/10.2172/1035521.

3. Hutny, W. P., and J. T. Price. Analysis and Regression Model of Blast Furnace Coal Injection. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1987. http://dx.doi.org/10.4095/304361.

4. Jacob, Brian, and Lars Lefgren. Remedial Education and Student Achievement: A Regression-Discontinuity Analysis. Cambridge, MA: National Bureau of Economic Research, May 2002. http://dx.doi.org/10.3386/w8918.

5. Kerr, William, Josh Lerner, and Antoinette Schoar. The Consequences of Entrepreneurial Finance: A Regression Discontinuity Analysis. Cambridge, MA: National Bureau of Economic Research, March 2010. http://dx.doi.org/10.3386/w15831.

6. Frome, E. L., J. P. Watkins, and E. D. Ellis. Poisson Regression Analysis of Illness and Injury Surveillance Data. Office of Scientific and Technical Information (OSTI), December 2012. http://dx.doi.org/10.2172/1060522.

7. Krishnaiah, P. R., and S. Sarkar. Principal Component Analysis Under Correlated Multivariate Regression Equations Model. Fort Belvoir, VA: Defense Technical Information Center, April 1985. http://dx.doi.org/10.21236/ada160266.

8. Garrett, Thomas A. Aggregated vs. Disaggregated Data in Regression Analysis: Implications for Inference. Federal Reserve Bank of St. Louis, 2002. http://dx.doi.org/10.20955/wp.2002.024.

9. Sun, T. C. Using Regression Analysis Method to Develop a Material Outgassing Model. Office of Scientific and Technical Information (OSTI), February 2019. http://dx.doi.org/10.2172/1499976.

10. Harris, J. M., S. D. Frans, P. E. Poston, and A. L. Wong. Advances in Regression: Use of Models in Spectroscopic Data Analysis. Fort Belvoir, VA: Defense Technical Information Center, July 1989. http://dx.doi.org/10.21236/ada210541.
