Academic literature on the topic 'Regularized quantiles'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Regularized quantiles.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Regularized quantiles"

1

Santos, Patricia Mendes dos, Ana Carolina Campana Nascimento, Moysés Nascimento, Fabyano Fonseca e Silva, Camila Ferreira Azevedo, Rodrigo Reis Mota, Simone Eliza Facioni Guimarães, and Paulo Sávio Lopes. "Use of regularized quantile regression to predict the genetic merit of pigs for asymmetric carcass traits." Pesquisa Agropecuária Brasileira 53, no. 9 (September 2018): 1011–17. http://dx.doi.org/10.1590/s0100-204x2018000900004.

Abstract:
The objective of this work was to evaluate the use of regularized quantile regression (RQR) to predict the genetic merit of pigs for asymmetric carcass traits, compared with the Bayesian lasso (Blasso) method. The genetic data of the traits carcass yield, bacon thickness, and backfat thickness from an F2 population composed of 345 individuals, generated by crossing animals from the Piau breed with those of a commercial breed, were used. RQR was evaluated considering different quantiles (τ = 0.05 to 0.95). The RQR model used to estimate the genetic merit showed accuracies higher than or equal to those obtained by Blasso, for all studied traits. There was an increase of 6.7 and 20.0% in accuracy when the quantiles 0.15 and 0.45 were considered in the evaluation of carcass yield and bacon thickness, respectively. The results indicate that regularized quantile regression presents higher accuracy than the Bayesian lasso method for the prediction of the genetic merit of pigs for asymmetric carcass variables.
2

Bang, Sungwan, and Myoungshic Jhun. "Adaptive sup-norm regularized simultaneous multiple quantiles regression." Statistics 48, no. 1 (August 30, 2012): 17–33. http://dx.doi.org/10.1080/02331888.2012.719512.

3

Zou, Hui, and Ming Yuan. "Regularized simultaneous model selection in multiple quantiles regression." Computational Statistics & Data Analysis 52, no. 12 (August 2008): 5296–304. http://dx.doi.org/10.1016/j.csda.2008.05.013.

4

Nascimento, Ana Carolina Campana, Camila Ferreira Azevedo, Cynthia Aparecida Valiati Barreto, Gabriela França Oliveira, and Moysés Nascimento. "Quantile regression for genomic selection of growth curves." Acta Scientiarum. Agronomy 46, no. 1 (December 12, 2023): e65081. http://dx.doi.org/10.4025/actasciagron.v46i1.65081.

Abstract:
This study evaluated the efficiency of genome-wide selection (GWS) based on regularized quantile regression (RQR) to obtain genomic growth curves based on genomic estimated breeding values (GEBV) of individuals with different probability distributions. The data were simulated and composed of 2,025 individuals from two generations and 435 markers randomly distributed across five chromosomes. The simulated phenotypes presented symmetrical, skewed, positive, and negative distributions. Data were analyzed using RQR considering nine quantiles (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9) and traditional methods of genomic selection (specifically, RR-BLUP, BLASSO, BayesA, and BayesB). In general, RQR-based estimation of the GEBV was efficient: for at least one quantile model, the results obtained were more accurate than those obtained by the other evaluated methodologies. Specifically, in the symmetrical-distribution scenario, the highest accuracy values were obtained for the parameters with the models RQR0.4, RQR0.3, and RQR0.4. For positive skewness, the models RQR0.2, RQR0.3, and RQR0.1 presented higher accuracy values, whereas for negative skewness, the best model was RQR0.9. Finally, the GEBV vectors obtained by RQR facilitated the construction of genomic growth curves at different levels of interest (quantiles), illustrating the weight–age relationship.
5

Li, Jia, Viktor Todorov, and George Tauchen. "Estimating the Volatility Occupation Time via Regularized Laplace Inversion." Econometric Theory 32, no. 5 (May 25, 2015): 1253–88. http://dx.doi.org/10.1017/s0266466615000171.

Abstract:
We propose a consistent functional estimator for the occupation time of the spot variance of an asset price observed at discrete times on a finite interval with the mesh of the observation grid shrinking to zero. The asset price is modeled nonparametrically as a continuous-time Itô semimartingale with nonvanishing diffusion coefficient. The estimation procedure contains two steps. In the first step we estimate the Laplace transform of the volatility occupation time and, in the second step, we conduct a regularized Laplace inversion. Monte Carlo evidence suggests that the proposed estimator has good small-sample performance and in particular it is far better at estimating lower volatility quantiles and the volatility median than a direct estimator formed from the empirical cumulative distribution function of local spot volatility estimates. An empirical application shows the use of the developed techniques for nonparametric analysis of variation of volatility.
6

Oliveira, Gabriela França, Ana Carolina Campana Nascimento, Moysés Nascimento, Isabela de Castro Sant'Anna, Juan Vicente Romero, Camila Ferreira Azevedo, Leonardo Lopes Bhering, and Eveline Teixeira Caixeta Moura. "Quantile regression in genomic selection for oligogenic traits in autogamous plants: A simulation study." PLOS ONE 16, no. 1 (January 5, 2021): e0243666. http://dx.doi.org/10.1371/journal.pone.0243666.

Abstract:
This study assessed the efficiency of genomic selection (GS), or genome-wide selection (GWS), based on regularized quantile regression (RQR) in the selection of genotypes to breed autogamous plant populations with oligogenic traits. To this end, simulated data of an F2 population were used, with traits with different heritability levels (0.10, 0.20, and 0.40), controlled by four genes. The generations were advanced (up to F6) at two selection intensities (10% and 20%). The genomic genetic value was computed by RQR for different quantiles (0.10, 0.50, and 0.90) and by the traditional GWS methods, specifically RR-BLUP and BLASSO. A second objective was to find the statistical methodology that allows the fastest fixation of favorable alleles. In general, the results of the RQR model were better than or equal to those of traditional GWS methodologies, achieving the fixation of favorable alleles in most of the evaluated scenarios. At a heritability level of 0.40 and a selection intensity of 10%, RQR (0.50) was the only methodology that fixed the alleles quickly, i.e., in the fourth generation. Thus, it was concluded that the application of RQR in plant breeding, for simulated autogamous plant populations with oligogenic traits, could reduce time and consequently costs, due to the reduction of selfing generations needed to fix alleles in the evaluated scenarios.
7

Sun, Pengju, Meng Li, and Hongwei Sun. "Quantile Regression Learning with Coefficient Dependent lq-Regularizer." MATEC Web of Conferences 173 (2018): 03033. http://dx.doi.org/10.1051/matecconf/201817303033.

Abstract:
In this paper, we focus on conditional quantile regression learning algorithms based on the pinball loss and the lq-regularizer with 1 ≤ q ≤ 2. Our main goal is to study the consistency of this kind of regularized quantile regression learning. Using concentration inequalities and operator decomposition techniques, we obtain satisfactory error bounds and convergence rates.
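The pinball loss and lq penalty mentioned in this abstract can be sketched directly. The helper names below are illustrative, not from the paper; the final check uses the standard fact that the τ-quantile minimizes the expected pinball loss of a constant predictor:

```python
# Illustrative helpers (names not from the paper): the pinball (check) loss
# rho_tau(u) = u * (tau - 1{u < 0}) and an lq-regularized empirical objective.
import numpy as np

def pinball_loss(residual, tau):
    """Asymmetric check loss: slope tau above zero, tau - 1 below."""
    return residual * (tau - (residual < 0).astype(float))

def regularized_objective(w, X, y, tau, lam, q):
    """Empirical pinball risk plus an lq penalty with 1 <= q <= 2."""
    r = y - X @ w
    return pinball_loss(r, tau).mean() + lam * np.sum(np.abs(w) ** q)

# Sanity check: the tau-quantile minimizes the pinball risk of a constant fit.
u = np.random.default_rng(1).normal(size=10_000)
grid = np.linspace(-3.0, 3.0, 601)
risks = [pinball_loss(u - c, 0.9).mean() for c in grid]
best = grid[int(np.argmin(risks))]
print(best, np.quantile(u, 0.9))  # the two values nearly coincide
```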
8

Papp, Gábor, Imre Kondor, and Fabio Caccioli. "Optimizing Expected Shortfall under an ℓ1 Constraint—An Analytic Approach." Entropy 23, no. 5 (April 24, 2021): 523. http://dx.doi.org/10.3390/e23050523.

Abstract:
Expected Shortfall (ES), the average loss above a high quantile, is the current financial regulatory market risk measure. Its estimation and optimization are highly unstable against sample fluctuations and become impossible above a critical ratio r=N/T, where N is the number of different assets in the portfolio, and T is the length of the available time series. The critical ratio depends on the confidence level α, which means we have a line of critical points on the α−r plane. The large fluctuations in the estimation of ES can be attenuated by the application of regularizers. In this paper, we calculate ES analytically under an ℓ1 regularizer by the method of replicas borrowed from the statistical physics of random systems. The ban on short selling, i.e., a constraint rendering all the portfolio weights non-negative, is a special case of an asymmetric ℓ1 regularizer. Results are presented for the out-of-sample and the in-sample estimator of the regularized ES, the estimation error, the distribution of the optimal portfolio weights, and the density of the assets eliminated from the portfolio by the regularizer. It is shown that the no-short constraint acts as a high volatility cutoff, in the sense that it sets the weights of the high volatility elements to zero with higher probability than those of the low volatility items. This cutoff renormalizes the aspect ratio r=N/T, thereby extending the range of the feasibility of optimization. We find that there is a nontrivial mapping between the regularized and unregularized problems, corresponding to a renormalization of the order parameters.
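As a point of reference for the quantity being optimized, a plain empirical Expected Shortfall estimator (the average loss at or above the α-quantile) can be sketched as follows; this illustrates the definition only, not the paper's replica-method analysis:

```python
# Illustrative sketch of the quantity itself: empirical Expected Shortfall,
# the average loss at or above the alpha-quantile (Value-at-Risk).
import numpy as np

def expected_shortfall(losses, alpha=0.975):
    """Mean loss in the tail at or beyond the alpha-quantile."""
    var = np.quantile(losses, alpha)   # Value-at-Risk at level alpha
    return losses[losses >= var].mean()

# Heavy-tailed toy losses (Student-t), as a stand-in for portfolio losses.
losses = np.random.default_rng(2).standard_t(df=4, size=100_000)
es = expected_shortfall(losses, alpha=0.975)
print(es)  # at least as large as the 97.5% VaR, by construction
```

The instability the paper analyzes arises when such tail averages are estimated from short samples relative to the number of assets (r = N/T); the regularizer is what tames it.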
9

Wu, Hanwei, and Markus Flierl. "Vector Quantization-Based Regularization for Autoencoders." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6380–87. http://dx.doi.org/10.1609/aaai.v34i04.6108.

Abstract:
Autoencoders and their variations provide unsupervised models for learning low-dimensional representations for downstream tasks. Without proper regularization, autoencoder models are susceptible to the overfitting problem and the so-called posterior collapse phenomenon. In this paper, we introduce a quantization-based regularizer in the bottleneck stage of autoencoder models to learn meaningful latent representations. We combine both perspectives of Vector Quantized-Variational AutoEncoders (VQ-VAE) and classical denoising regularization methods of neural networks. We interpret quantizers as regularizers that constrain latent representations while fostering a similarity-preserving mapping at the encoder. Before quantization, we impose noise on the latent codes and use a Bayesian estimator to optimize the quantizer-based representation. The introduced bottleneck Bayesian estimator outputs the posterior mean of the centroids to the decoder, and thus, is performing soft quantization of the noisy latent codes. We show that our proposed regularization method results in improved latent representations for both supervised learning and clustering downstream tasks when compared to autoencoders using other bottleneck structures.
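The soft quantization described here, a posterior mean over codebook centroids given a noisy latent code, can be sketched in a few lines; the function name and the Gaussian noise model below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of soft quantization: replace a noisy latent code with a
# softmax-weighted (posterior-mean) combination of codebook centroids.
import numpy as np

def soft_quantize(z, centroids, sigma=0.5):
    """Posterior mean over centroids under a Gaussian noise model around z."""
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    # Subtract the row-wise minimum before exponentiating, for stability.
    w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2.0 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ centroids

rng = np.random.default_rng(4)
centroids = rng.normal(size=(8, 2))   # codebook of 8 two-dimensional centroids
z = rng.normal(size=(16, 2))          # batch of noisy latent codes
zq = soft_quantize(z, centroids)
print(zq.shape)                       # (16, 2)
```

As `sigma` shrinks, the posterior mean collapses onto the nearest centroid, recovering hard VQ-VAE-style quantization.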
10

Li, Meng, and Hong-Wei Sun. "Asymptotic analysis of quantile regression learning based on coefficient dependent regularization." International Journal of Wavelets, Multiresolution and Information Processing 13, no. 04 (July 2015): 1550018. http://dx.doi.org/10.1142/s0219691315500186.

Abstract:
In this paper, we consider conditional quantile regression learning algorithms based on the pinball loss with a data-dependent hypothesis space and an ℓ2-regularizer. Functions in this hypothesis space are linear combinations of basis functions generated by a kernel function and sample data. The only conditions imposed on the kernel function are continuity and boundedness, which are quite weak. Our main goal is to study the consistency of this regularized quantile regression learning. By a concentration inequality with ℓ2-empirical covering numbers and operator decomposition techniques, satisfactory error bounds and convergence rates are explicitly derived.

Dissertations / Theses on the topic "Regularized quantiles"

1

Thurin, Gauthier. "Quantiles multivariés et transport optimal régularisé." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0262.

Abstract:
This thesis is concerned with the study of the Monge-Kantorovich quantile function. We first address the crucial question of its estimation, which amounts to solving an optimal transport problem. In particular, we try to take advantage of prior knowledge of the reference distribution, which represents additional information compared with the usual algorithms and allows us to parameterize the transport potentials by their Fourier series. In doing so, entropic regularization of optimal transport provides two advantages: it yields an efficient and convergent algorithm for solving the semi-dual version of our problem, and it produces a smooth and monotonic empirical quantile function. These considerations are then extended to the study of spherical data, by replacing the Fourier series with spherical harmonics and by generalizing the entropic map to this non-Euclidean setting. The second main purpose of this thesis is to define new notions of multivariate superquantiles and expected shortfalls, to complement the information provided by the quantiles. These functions characterize the law of a random vector, as well as convergence in distribution under certain assumptions, and have direct applications in multivariate risk analysis, extending the traditional risk measures of Value-at-Risk and Conditional-Value-at-Risk.
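The entropic regularization referred to in this abstract is commonly computed with Sinkhorn's alternating scaling iterations. A minimal discrete sketch (illustrative only; the thesis's semi-dual Fourier parameterization is not reproduced here):

```python
# Minimal Sinkhorn iteration for entropy-regularized optimal transport between
# two discrete histograms; illustrative of the regularization only.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Return the entropic-OT coupling P between histograms a and b, cost C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternating scaling updates
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # P = diag(u) K diag(v)

n = 5
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance ground cost
P = sinkhorn(a, b, C)
print(P.sum(axis=0))                     # columns sum (approximately) to b
```

Smaller `eps` approaches unregularized transport at the cost of slower, less stable iterations; this trade-off is exactly what the smooth entropic quantile map exploits.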
2

Hashem, Hussein Abdulahman. "Regularized and robust regression methods for high dimensional data." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9197.

Abstract:
Recently, variable selection in high-dimensional data has attracted much research interest. Classical stepwise subset selection methods are widely used in practice, but when the number of predictors is large, these methods are difficult to implement. In these cases, modern regularization methods have become a popular choice, as they perform variable selection and parameter estimation simultaneously. However, the estimation procedure becomes more difficult and challenging when the data suffer from outliers or when the assumption of normality is violated, such as in the case of heavy-tailed errors. In these cases, quantile regression is the most appropriate method to use. In this thesis we combine these two classical approaches to produce regularized quantile regression methods. Chapter 2 presents a comparative simulation study of regularized and robust regression methods when the response variable is continuous. In chapter 3, we develop a quantile regression model with a group lasso penalty for binary response data when the predictors have a grouped structure and the data suffer from outliers. In chapter 4, we extend this method to the case of censored response variables. Numerical examples on simulated and real data are used to evaluate the performance of the proposed methods in comparison with other existing methods.
3

Schulze, Bert-Wolfgang, Vladimir Nazaikinskii, and Boris Sternin. "The index of quantized contact transformations on manifolds with conical singularities." Universität Potsdam, 1998. http://opus.kobv.de/ubp/volltexte/2008/2527/.

Abstract:
The quantization of contact transformations of the cosphere bundle over a manifold with conical singularities is described. The index of Fredholm operators given by this quantization is calculated. The answer is given in terms of the Epstein-Melrose contact degree and the conormal symbol of the corresponding operator.

Books on the topic "Regularized quantiles"

1

Godsey, William D. The Sinews of Habsburg Power. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198809395.001.0001.

Abstract:
This book explores the domestic foundations of the immense growth of central European Habsburg power from the rise of a permanent standing army after the Thirty Years War to the end of the Napoleonic wars. With a force that grew in size from around 25,000 soldiers to half a million in the War of the Sixth Coalition, the Habsburg monarchy participated in shifting international constellations of rivalry and in some two dozen armed conflicts. Raising forces of such magnitude constituted a central task of Habsburg government, one that required the cooperation of society and its elites. The monarchy’s composite-territorial structures in the guise of the Lower Austrian Estates—a leading representative body and privileged corps—formed a vital, if changing, element underlying Habsburg international success and resilience. With its capital at Vienna, the archduchy below the river Enns (the historic designation of Lower Austria) was geographically, politically, and financially a key Habsburg possession. Fiscal-military exigency induced the Estates to take part in new and evolving arrangements of power that served the purposes of government; in turn the Estates were able in previously little-understood ways to preserve vital interests in a changing world. The Estates survived because they were necessary, not only thanks to their increasing financial potency but because they offered a politically viable way of exacting ever-larger quantities of money and other resources from local society. These circumstances persisted as ruling became more regularized and formalized, and as the very understanding of the Estates as a social and political phenomenon evolved.

Conference papers on the topic "Regularized quantiles"

1

Ahmad, Tawsif, and Ning Zhou. "Enhancing Solar Power Forecasting with Regularized Constrained Quantile Regression Averaging and Bootstrapping Techniques." In 2024 IEEE Power & Energy Society General Meeting (PESGM), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/pesgm51994.2024.10688655.

2

Lane, R. G., R. A. Johnston, R. Irwan, and T. J. Connolly. "Regularized blind deconvolution." In Signal Recovery and Synthesis. Washington, D.C.: Optica Publishing Group, 1998. http://dx.doi.org/10.1364/srs.1998.stua.2.

Abstract:
Blind deconvolution is an important problem that arises in many fields of research. It is of particular relevance to imaging through turbulence, where the point spread function can only be modelled statistically and direct measurement may be difficult. We describe this problem by a noisy convolution, where f(x, y) represents the true image, h(x, y) the instantaneous atmospheric blurring, g(x, y) the noise-free data, and n(x, y) is the noise present on the detected image. We use hatted symbols to denote estimates of these quantities, and our objective is to recover both f(x, y) and h(x, y) from the observed data d(x, y).
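The forward model in this abstract, d(x, y) = (f ∗ h)(x, y) + n(x, y), can be illustrated with a small simulation; the array sizes, kernel, and noise level below are arbitrary illustrative choices, not from the paper:

```python
# Toy simulation of the abstract's forward model d = f * h + n (sizes and
# noise level are arbitrary illustrative choices).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
f = rng.random((32, 32))                 # true image f(x, y)
h = np.full((5, 5), 1.0 / 25.0)          # blurring kernel h(x, y) (uniform PSF)
g = convolve2d(f, h, mode="same")        # noise-free data g(x, y)
d = g + 0.01 * rng.normal(size=g.shape)  # detected image with noise n(x, y)
print(d.shape)
```

Blind deconvolution then amounts to estimating both f and h from d alone, which is why statistical regularization of the problem is essential.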
3

Jongebloed, Rolf, Erik Bochinski, Lieven Lange, and Thomas Sikora. "Quantized and Regularized Optimization for Coding Images Using Steered Mixtures-of-Experts." In 2019 Data Compression Conference (DCC). IEEE, 2019. http://dx.doi.org/10.1109/dcc.2019.00044.

4

Sun, Hanbo, Zhenhua Zhu, Yi Cai, Xiaoming Chen, Yu Wang, and Huazhong Yang. "An Energy-Efficient Quantized and Regularized Training Framework For Processing-In-Memory Accelerators." In 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2020. http://dx.doi.org/10.1109/asp-dac47756.2020.9045192.

5

Yu, Shujian, Luis Sanchez Giraldo, and Jose Principe. "Information-Theoretic Methods in Deep Neural Networks: Recent Advances and Emerging Opportunities." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/633.

Abstract:
We present a review on the recent advances and emerging opportunities around the theme of analyzing deep neural networks (DNNs) with information-theoretic methods. We first discuss popular information-theoretic quantities and their estimators. We then introduce recent developments on information-theoretic learning principles (e.g., loss functions, regularizers and objectives) and their parameterization with DNNs. We finally briefly review current usages of information-theoretic concepts in a few modern machine learning problems and list a few emerging opportunities.
6

Prokopchina, Svetlana, and Veronika Zaslavskaia. "Methodology of Measurement Intellectualization based on Regularized Bayesian Approach in Uncertain Conditions." In 9th International Conference on Artificial Intelligence and Applications. Academy & Industry Research Collaboration Center, 2023. http://dx.doi.org/10.5121/csit.2023.131805.

Abstract:
Modern measurement tasks are confronted with inherent uncertainty. This significant uncertainty arises due to incomplete and imprecise knowledge about the models of measurement objects, influencing factors, measurement conditions, and the diverse nature of experimental data. This article provides a concise overview of the historical development of methodologies aimed at intellectualizing measurement processes in the context of uncertainty. It also discusses the classification of measurements and measurement systems. Furthermore, the fundamental requirements for intelligent measurement systems and technologies are outlined. The article delves into the conceptual aspects of intelligent measurements, which are rooted in the integration of metrologically certified data and knowledge. It defines intelligent measurements and establishes their key properties. Additionally, the article explores the main characteristics of soft measurements and highlights their distinctions from traditional deterministic measurements of physical quantities. The emergence of cognitive, systemic, and global measurements as new measurement types is discussed. In this paper, we offer a comprehensive examination of the methodology and technologies underpinning Bayesian intelligent measurements, with a foundation in the regularizing Bayesian approach. This approach introduces a novel concept of measurement, where the measurement problem is framed as an inverse problem of pattern recognition, aligning with Bayesian principles. Within this framework, innovative models and coupled scales with dynamic constraints are proposed. These dynamic scales facilitate the development of measurement technologies for enhancing the cognition and interpretation of measurement results by measurement systems. This novel type of scale enables the integration of numerical data (for quantifiable information) and linguistic information (for knowledge-based information) to enhance the quality of measurement solutions. 
A new set of metrological characteristics for intelligent measurements is introduced, encompassing accuracy, reliability (including error levels of the 1st and 2nd kind), dependability, risk assessment, and entropy characteristics. The paper provides explicit formulas for implementing the measurement process, complete with a metrological justification of the solutions. The article concludes by outlining the advantages and prospects of employing intelligent measurements. These benefits extend to solving practical problems, as well as advancing and integrating artificial intelligence and measurement theory technologies.
7

Chen, Yuzhao, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, and Junzhou Huang. "On Self-Distilling Graph Neural Network." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/314.

Abstract:
Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation. Furthermore, the inefficient training process of teacher-student knowledge distillation also impedes its applications in GNN models. In this paper, we propose the first teacher-free knowledge distillation method for GNNs, termed GNN Self-Distillation (GNN-SD), that serves as a drop-in replacement of the standard training process. The method is built upon the proposed neighborhood discrepancy rate (NDR), which quantifies the non-smoothness of the embedded graph in an efficient way. Based on this metric, we propose the adaptive discrepancy retaining (ADR) regularizer to empower the transferability of knowledge that maintains high neighborhood discrepancy across GNN layers. We also summarize a generic GNN-SD framework that could be exploited to induce other distillation strategies. Experiments further prove the effectiveness and generalization of our approach, as it brings: 1) state-of-the-art GNN distillation performance with less training cost, 2) consistent and considerable performance enhancement for various popular backbones.
8

Singh, Yuvraj, Adithya Jayakumar, and Giorgio Rizzoni. "Data-Driven Estimation of Coastdown Road Load." In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2024. http://dx.doi.org/10.4271/2024-01-2276.

Abstract:
Emissions and fuel economy certification testing for vehicles is carried out on a chassis dynamometer using standard test procedures. The vehicle coastdown method (SAE J2263) used to experimentally measure the road load of a vehicle for certification testing is a time-consuming procedure, considering the high number of distinct variants of a vehicle family produced by an automaker today. Moreover, test-to-test repeatability is compromised by environmental conditions: wind, pressure, temperature, track surface condition, etc., while vehicle shape, driveline type, transmission type, etc. are some factors that lead to vehicle-to-vehicle variation. Controlled lab tests are employed to determine individual road load components: tire rolling resistance (SAE J2452), aerodynamic drag (wind tunnels), and driveline parasitic loss (dynamometer in a driveline friction measurement lab). These individual components are added to obtain a road load model to be applied on a chassis dynamometer. However, lab-tested quantities may not account for environmental noise factors and qualitative vehicle characteristics, leading to a significant residual road load between the track-tested and lab-tested road loads. Regression modeling techniques are explored for estimating this residual road load, and the challenges are discussed. Additionally, a technique is developed to choose feature selection metrics using simulation of multivariate non-Gaussian continuous and discrete data having similar statistical properties as the data obtained from automotive road tests. Using the selected features, two regularized regression techniques are experimented with. The first technique models the residual road load power, while the second models a polynomial relationship between vehicle speed and residual road load.