Journal articles on the topic 'Generalization bound'


Consult the top 50 journal articles for your research on the topic 'Generalization bound.'


1

Cohn, David, and Gerald Tesauro. "How Tight Are the Vapnik-Chervonenkis Bounds?" Neural Computation 4, no. 2 (March 1992): 249–69. http://dx.doi.org/10.1162/neco.1992.4.2.249.

Abstract:
We describe a series of numerical experiments that measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal learning theory using the Vapnik-Chervonenkis (VC) dimension (Blumer et al. 1989; Haussler et al. 1990). Recent statistical learning theories (Tishby et al. 1989; Schwartz et al. 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a “gap” near perfect performance. We indeed find that, in some cases, the average generalization is significantly better than the VC bound: the approach to perfect performance is exponential in the number of examples m, rather than the 1/m result of the bound. However, in these cases, we have not found evidence of the gap predicted by the above statistical theories. In other cases, we do find the 1/m behavior of the VC bound, and in these cases, the numerical prefactor is closely related to the prefactor contained in the bound.
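For orientation, a standard form of the worst-case VC bound that these experiments compare against (stated up to constants, not the exact expression used in the paper) is: with probability at least $1-\delta$ over a sample of $m$ examples, every hypothesis $h$ from a class of VC dimension $d$ satisfies
$\mathrm{err}(h) \le \widehat{\mathrm{err}}(h) + O\!\left(\sqrt{\tfrac{d\ln(m/d)+\ln(1/\delta)}{m}}\right)$,
and in the realizable (zero training error) case the deviation term improves to $O\!\left(\tfrac{d\ln(m/d)+\ln(1/\delta)}{m}\right)$, which is the $1/m$ behavior discussed in the abstract.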
2

Pereira, Rajesh, and Mohammad Ali Vali. "Generalizations of the Cauchy and Fujiwara Bounds for Products of Zeros of a Polynomial." Electronic Journal of Linear Algebra 31 (February 5, 2016): 565–71. http://dx.doi.org/10.13001/1081-3810.3333.

Abstract:
The Cauchy bound is one of the best known upper bounds for the modulus of the zeros of a polynomial. The Fujiwara bound is another useful upper bound for the modulus of the zeros of a polynomial. In this paper, compound matrices are used to derive a generalization of both the Cauchy bound and the Fujiwara bound. This generalization yields upper bounds for the modulus of the product of $m$ zeros of the polynomial.
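For reference, the two classical bounds being generalized can be stated for a monic polynomial $p(z)=z^n+a_{n-1}z^{n-1}+\dots+a_1 z+a_0$ as follows (standard statements, not quoted from the paper):
Cauchy bound: every zero $z$ satisfies $|z| \le 1 + \max\{|a_0|, |a_1|, \dots, |a_{n-1}|\}$.
Fujiwara bound: every zero $z$ satisfies $|z| \le 2\max\{|a_{n-1}|, |a_{n-2}|^{1/2}, \dots, |a_1|^{1/(n-1)}, |a_0/2|^{1/n}\}$.
The compound-matrix generalization in the paper bounds products of $m$ zeros rather than a single zero.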
3

Nedovic, M. "Norm bounds for the inverse for generalized Nekrasov matrices in point-wise and block case." Filomat 35, no. 8 (2021): 2705–14. http://dx.doi.org/10.2298/fil2108705n.

Abstract:
Lower-semi-Nekrasov matrices represent a generalization of Nekrasov matrices. For the inverse of lower-semi-Nekrasov matrices, a max-norm bound is proposed. Numerical examples are given to illustrate that the new norm bound can give tighter results than already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of lower-semi-Nekrasov matrices in the block case, considering two types of block generalizations, and illustrate the results with numerical examples.
4

Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes." Neural Computation 28, no. 10 (October 2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.

Abstract:
The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors and include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of the k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. However, many applications use finite-dimensional data features. Can we obtain dimensionality-dependent generalization bounds for k-dimensional coding schemes that are tighter than dimensionality-independent bounds when data are in a finite-dimensional feature space? Yes. In this letter, we address this problem and derive a dimensionality-dependent generalization bound for k-dimensional coding schemes by bounding the covering number of the loss function class induced by the reconstruction error. The bound is of order [Formula: see text], where m is the dimension of features, k is the number of columns in the linear implementation of coding schemes, and n is the sample size, [Formula: see text] when n is finite and [Formula: see text] when n is infinite. We show that our bound can be tighter than previous results because it avoids inducing the worst-case upper bound on k of the loss function. The proposed generalization bound is also applied to some specific coding schemes to demonstrate that the dimensionality-dependent bound is an indispensable complement to the dimensionality-independent generalization bounds.
5

Rubab, Faiza, Hira Nabi, and Asif R. Khan. "GENERALIZATION AND REFINEMENTS OF JENSEN INEQUALITY." Journal of Mathematical Analysis 12, no. 5 (October 31, 2021): 1–27. http://dx.doi.org/10.54379/jma-2021-5-1.

Abstract:
We give generalizations and refinements of the Jensen and Jensen–Mercer inequalities by using weights which satisfy the conditions of the Jensen and Jensen–Steffensen inequalities. We also give some refinements of the discrete and integral versions of the generalized Jensen–Mercer inequality, which are shown to be an improvement of the upper bound for the Jensen difference given in [32]. Applications of our work include new bounds for some important inequalities used in information theory, and generalizations of the relations among means.
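For reference, the discrete Jensen inequality that these results refine states that, for a convex function $\varphi$, points $x_1,\dots,x_n$ in its domain, and weights $w_i \ge 0$ with $\sum_{i=1}^{n} w_i = 1$,
$\varphi\!\left(\sum_{i=1}^{n} w_i x_i\right) \le \sum_{i=1}^{n} w_i\,\varphi(x_i)$,
and the Jensen difference referred to above is the nonnegative gap $\sum_i w_i\varphi(x_i) - \varphi\!\left(\sum_i w_i x_i\right)$.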
6

Nedovic, M., and Lj Cvetkovic. "Norm bounds for the inverse and error bounds for linear complementarity problems for {P1,P2}-Nekrasov matrices." Filomat 35, no. 1 (2021): 239–50. http://dx.doi.org/10.2298/fil2101239n.

Abstract:
{P1,P2}-Nekrasov matrices represent a generalization of Nekrasov matrices via permutations. In this paper, we obtain an error bound for linear complementarity problems for {P1,P2}-Nekrasov matrices. Numerical examples are given to illustrate that the new error bound can give tighter results than already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of {P1,P2}-Nekrasov matrices in the block case, considering two different types of block generalizations. Numerical examples show that the new norm bounds for the block case can give tighter results than the already known bounds for the point-wise case.
7

Han, Xinyu, Yi Zhao, and Michael Small. "A tighter generalization bound for reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 4 (April 2022): 043115. http://dx.doi.org/10.1063/5.0082258.

Abstract:
While reservoir computing (RC) has demonstrated astonishing performance in many practical scenarios, the understanding of its capability for generalization on previously unseen data is limited. To address this issue, we propose a novel generalization bound for RC based on the empirical Rademacher complexity under the probably approximately correct learning framework. Note that the generalization bound for the RC is derived in terms of the model hyperparameters. For this reason, it can explore the dependencies of the generalization bound for RC on its hyperparameters. Compared with the existing generalization bound, our generalization bound for RC is tighter, which is verified by numerical experiments. Furthermore, we study the generalization bound for the RC corresponding to different reservoir graphs, including directed acyclic graph (DAG) and Erdős–Rényi undirected random graph (ER graph). Specifically, the generalization bound for the RC whose reservoir graph is designated as a DAG can be refined by leveraging the structural property (i.e., the longest path length) of the DAG. Finally, both theoretical and experimental findings confirm that the generalization bound for the RC of a DAG is lower and less sensitive to the model hyperparameters than that for the RC of an ER graph.
8

Gassner, Niklas, Marcus Greferath, Joachim Rosenthal, and Violetta Weger. "Bounds for Coding Theory over Rings." Entropy 24, no. 10 (October 16, 2022): 1473. http://dx.doi.org/10.3390/e24101473.

Abstract:
Coding theory where the alphabet is identified with the elements of a ring or a module has become an important research topic over the last 30 years. It has been well established that, with the generalization of the algebraic structure to rings, there is a need to also generalize the underlying metric beyond the usual Hamming weight used in traditional coding theory over finite fields. This paper introduces a generalization of the weight introduced by Shi, Wu and Krotov, called overweight. Additionally, this weight can be seen as a generalization of the Lee weight on the integers modulo 4 and as a generalization of Krotov's weight over the integers modulo 2^s for any positive integer s. For this weight, we provide a number of well-known bounds, including a Singleton bound, a Plotkin bound, a sphere-packing bound and a Gilbert–Varshamov bound. In addition to the overweight, we also study a well-known metric on finite rings, namely the homogeneous metric, which also extends the Lee metric over the integers modulo 4 and is thus heavily connected to the overweight. We provide a new bound that has been missing in the literature for the homogeneous metric, namely the Johnson bound. To prove this bound, we use an upper estimate on the sum of the distances of all distinct codewords that depends only on the length, the average weight and the maximum weight of a codeword. An effective such bound is not known for the overweight.
9

Abou–Moustafa, Karim, and Csaba Szepesvári. "An Exponential Tail Bound for the Deleted Estimate." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3143–50. http://dx.doi.org/10.1609/aaai.v33i01.33013143.

Abstract:
There is accumulating evidence in the literature that stability of learning algorithms is a key characteristic that permits a learning algorithm to generalize. Despite various insightful results in this direction, there seems to be an overlooked dichotomy in the type of stability-based generalization bounds we have in the literature. On one hand, the literature seems to suggest that exponential generalization bounds for the estimated risk, which are optimal, can only be obtained through stringent, distribution-independent and computationally intractable notions of stability such as uniform stability. On the other hand, it seems that weaker notions of stability such as hypothesis stability, although distribution-dependent and more amenable to computation, can only yield polynomial generalization bounds for the estimated risk, which are suboptimal. In this paper, we address the gap between these two regimes of results. In particular, the main question we address here is whether it is possible to derive exponential generalization bounds for the estimated risk using a notion of stability that is computationally tractable and distribution-dependent, but weaker than uniform stability. Using recent advances in concentration inequalities, and using a notion of stability that is weaker than uniform stability but distribution-dependent and amenable to computation, we derive an exponential tail bound for the concentration of the estimated risk of a hypothesis returned by a general learning rule, where the estimated risk is expressed in terms of the deleted estimate. Interestingly, we note that our final bound has similarities to previous exponential generalization bounds for the deleted estimate, in particular, the result of Bousquet and Elisseeff (2002) for the regression case.
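As background for the notions contrasted above (standard definitions, not taken from the paper): the deleted (leave-one-out) estimate of the risk of a rule $A$ trained on $S=(z_1,\dots,z_n)$ is $\widehat{R}_{\mathrm{del}}(A,S)=\frac{1}{n}\sum_{i=1}^{n}\ell\big(A(S^{\setminus i}),z_i\big)$, where $S^{\setminus i}$ removes the $i$-th point, and $A$ is uniformly $\beta$-stable if $\sup_{S,i,z}\big|\ell(A(S),z)-\ell(A(S^{\setminus i}),z)\big|\le\beta$. Uniform stability with $\beta=O(1/n)$ is what yields the exponential (Bousquet-Elisseeff type) bounds, while the weaker, distribution-dependent hypothesis stability only controls such differences in expectation.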
10

Harada, Masayasu, Francesco Sannino, Joseph Schechter, and Herbert Weigel. "Generalization of the bound state model." Physical Review D 56, no. 7 (October 1, 1997): 4098–114. http://dx.doi.org/10.1103/physrevd.56.4098.

11

Khare, Niraj, Nishali Mehta, and Naushad Puliyambalath. "Generalization of Erdős–Gallai edge bound." European Journal of Combinatorics 43 (January 2015): 124–30. http://dx.doi.org/10.1016/j.ejc.2014.07.004.

12

Baddai, Saad abood. "A Generalization of t-Practical Numbers." Baghdad Science Journal 17, no. 4 (December 1, 2020): 1250. http://dx.doi.org/10.21123/bsj.2020.17.4.1250.

Abstract:
This paper generalizes and improves the results of Margenstern by proving a lower bound on the number of t-practical numbers; this bound is sharper than Margenstern's bound under a suitable condition. Further general results are given for the existence of t-practical numbers, by proving that a suitable interval always contains a t-practical number.
13

Wang, Shusen. "A Sharper Generalization Bound for Divide-and-Conquer Ridge Regression." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5305–12. http://dx.doi.org/10.1609/aaai.v33i01.33015305.

Abstract:
We study the distributed machine learning problem where the n feature-response pairs are partitioned among m machines uniformly at random. The goal is to approximately solve an empirical risk minimization (ERM) problem with the minimum amount of communication. The divide-and-conquer (DC) method, which was proposed several years ago, lets every worker machine independently solve the same ERM problem using its local feature-response pairs and the driver machine combine the solutions. This approach is one-shot and thereby extremely communication-efficient. Although the DC method has been studied by many prior works, a reasonable generalization bound had not been established before this work. For the ridge regression problem, we show that the prediction error of the DC method on unseen test samples is at most ε times larger than the optimal. While there have been constant-factor bounds in the prior works, their sample complexities have a quadratic dependence on d, which does not match the setting of most real-world problems. In contrast, our bounds are much stronger. First, our 1 + ε error bound is much better than their constant-factor bounds. Second, our sample complexity is merely linear in d.
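A minimal sketch of the one-shot divide-and-conquer ridge regression estimator described above, assuming the data are already partitioned across the m machines; the function and variable names are illustrative, not taken from the paper:

import numpy as np

def dc_ridge(X_parts, y_parts, lam):
    # One-shot divide-and-conquer ridge regression: every worker solves its
    # local ridge problem, and the driver averages the local solutions.
    local_solutions = []
    for X_k, y_k in zip(X_parts, y_parts):
        n_k, d = X_k.shape
        # Local ridge estimate: (X_k' X_k + n_k * lam * I)^{-1} X_k' y_k
        w_k = np.linalg.solve(X_k.T @ X_k + n_k * lam * np.eye(d), X_k.T @ y_k)
        local_solutions.append(w_k)
    return np.mean(local_solutions, axis=0)  # single round of communication

The single averaging step at the driver is the only communication, which is what makes the method one-shot.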
14

Hagiwara, Katsuyuki. "On the Problem in Model Selection of Neural Network Regression in Overrealizable Scenario." Neural Computation 14, no. 8 (August 1, 2002): 1979–2002. http://dx.doi.org/10.1162/089976602760128090.

Abstract:
In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected training error and the expected generalization error of neural networks and radial basis functions in overrealizable cases and clarifies the difference from regular models, for which identifiability holds. As a special case of an overrealizable scenario, we assumed a gaussian noise sequence as training data. In the least-squares estimation under this assumption, we first formulated the problem, in which the calculation of the expected errors of unidentifiable networks is reduced to the calculation of the expectation of the supremum of the χ2 process. Under this formulation, we gave an upper bound of the expected training error and a lower bound of the expected generalization error, where the generalization is measured at a set of training inputs. Furthermore, we gave stochastic bounds on the training error and the generalization error. The obtained upper bound of the expected training error is smaller than in regular models, and the lower bound of the expected generalization error is larger than in regular models. The result tells us that the degree of overfitting in neural networks and radial basis functions is higher than in regular models. Correspondingly, it also tells us that the generalization capability is worse than in the case of regular models. The article may be enough to show a difference between neural networks and regular models in the context of the least-squares estimation in a simple situation. This is a first step in constructing a model selection criterion in an overrealizable case. Further important problems in this direction are also included in this article.
15

Martin, W. J., and D. R. Stinson. "A Generalized Rao Bound for Ordered Orthogonal Arrays and (t, m, s)-Nets." Canadian Mathematical Bulletin 42, no. 3 (September 1, 1999): 359–70. http://dx.doi.org/10.4153/cmb-1999-042-x.

Abstract:
In this paper, we provide a generalization of the classical Rao bound for orthogonal arrays, which can be applied to ordered orthogonal arrays and (t, m, s)-nets. Application of our new bound leads to improvements in many parameter situations to the strongest bounds (i.e., necessary conditions) for existence of these objects.
16

Jose, Sharu Theresa, and Osvaldo Simeone. "Information-Theoretic Generalization Bounds for Meta-Learning and Applications." Entropy 23, no. 1 (January 19, 2021): 126. http://dx.doi.org/10.3390/e23010126.

Abstract:
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separate within-task training and test sets, like model agnostic meta-learning (MAML), or joint within-task training and test sets, like reptile. Extending the existing work for conventional learning, an upper bound on the meta-generalization gap is derived for the former class that depends on the mutual information (MI) between the output of the meta-learning algorithm and its input meta-training data. For the latter, the derived bound includes an additional MI between the output of the per-task learning procedure and corresponding data set to capture within-task uncertainty. Tighter bounds are then developed for the two classes via novel individual task MI (ITMI) bounds. Applications of the derived bounds are finally discussed, including a broad class of noisy iterative algorithms for meta-learning.
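For context, the conventional-learning result that this work extends is the mutual-information generalization bound: if the loss is $\sigma$-sub-Gaussian, an algorithm with output $W$ trained on $n$ i.i.d. samples $S$ satisfies
$\big|\mathbb{E}[L_\mu(W)-L_S(W)]\big| \le \sqrt{\tfrac{2\sigma^2 I(W;S)}{n}}$
(Xu and Raginsky, 2017). The meta-learning bounds above replace $I(W;S)$ with mutual-information terms between the meta-learner's output and the meta-training data, plus per-task terms when within-task training and test sets are joint.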
17

Chen, Jun, Hong Chen, Xue Jiang, Bin Gu, Weifu Li, Tieliang Gong, and Feng Zheng. "On the Stability and Generalization of Triplet Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7033–41. http://dx.doi.org/10.1609/aaai.v37i6.25859.

Abstract:
Triplet learning, i.e. learning from triplet data, has attracted much attention in computer vision tasks with an extremely large number of categories, e.g., face recognition and person re-identification. Despite rapid progress in designing and applying triplet learning algorithms, the theoretical understanding of their generalization performance is still lacking. To fill this gap, this paper investigates the generalization guarantees of triplet learning by leveraging stability analysis. Specifically, we establish the first general high-probability generalization bound for triplet learning algorithms satisfying uniform stability, and then obtain excess risk bounds of order O(log(n)/√n) for both stochastic gradient descent (SGD) and regularized risk minimization (RRM), where 2n is approximately equal to the number of training samples. Moreover, an optimistic generalization bound in expectation as fast as O(1/n) is derived for RRM in a low-noise case via on-average stability analysis. Finally, our results are applied to triplet metric learning to characterize its theoretical underpinning.
18

Kiernan, Barbara J., and David P. Snow. "Bound-Morpheme Generalization by Children With SLI." Journal of Speech, Language, and Hearing Research 42, no. 3 (June 1999): 649–62. http://dx.doi.org/10.1044/jslhr.4203.649.

Abstract:
We investigated whether limited bound-morpheme generalization (BMG) by preschool children with SLI is functionally related to limited learning of training targets (words, affixed forms). Thirty children with SLI and 30 age-/gender-matched controls participated in the study. Production probes revealed a dissociation between learning and generalization performance. In addition, the number of children who achieved criterion-level BMG increased abruptly during an additional instructional experience with new training targets. These findings suggest that positive evidence of a bound morpheme's generalizability to different vocabulary stems benefits BMG. Furthermore, they suggest that limited BMG reflects problems not with the storage or access of specific trained facts but with the extraction and extension of the linguistic pattern (e.g., regularity, "rule") instantiated in the learning targets.
19

Guo, Zheng-Chu, and Yiming Ying. "Guaranteed Classification via Regularized Similarity Learning." Neural Computation 26, no. 3 (March 2014): 497–522. http://dx.doi.org/10.1162/neco_a_00556.

Abstract:
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study on the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose regularized similarity learning formulations associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Due to the techniques dependent on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques using the Rademacher complexity (Bartlett & Mendelson, 2002) and its related Khinchin-type inequality enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including the sparse L1-norm and mixed (2,1)-norm.
20

Cao, Yuan, and Quanquan Gu. "Generalization Error Bounds of Gradient Descent for Learning Over-Parameterized Deep ReLU Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3349–56. http://dx.doi.org/10.1609/aaai.v34i04.5736.

Abstract:
Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work explains in theory that with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing generalization bounds is that they are based on uniform convergence and are independent of the training algorithm. In this work, we derive an algorithm-dependent generalization error bound for deep ReLU networks, and show that under certain assumptions on the data distribution, gradient descent (GD) with proper random initialization is able to train a sufficiently over-parameterized DNN to achieve arbitrarily small generalization error. Our work sheds light on explaining the good generalization performance of over-parameterized deep neural networks.
21

NG, WING W. Y., DANIEL S. YEUNG, and ERIC C. C. TSANG. "THE LOCALIZED GENERALIZATION ERROR MODEL FOR SINGLE LAYER PERCEPTRON NEURAL NETWORK AND SIGMOID SUPPORT VECTOR MACHINE." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 01 (February 2008): 121–35. http://dx.doi.org/10.1142/s0218001408006168.

Abstract:
We previously developed the localized generalization error model for supervised learning with minimization of the mean square error. In this work, we extend the error model to the Single Layer Perceptron Neural Network (SLPNN) and the Support Vector Machine (SVM) with sigmoid kernel function. For a trained SLPNN or SVM and a given training dataset, the proposed error model bounds from above the error for unseen samples which are similar to the training samples. As the major component of the localized generalization error model, the stochastic sensitivity measure formula for the perceptron neural network derived in this work relaxes the assumptions, made in previous works, of the same distribution for all inputs and of each sample being perturbed only once. These make the sensitivity measure applicable to pattern classification problems. The stochastic sensitivity measure of the SVM with sigmoid kernel is also derived in this work as a component of the localized generalization error model. At the end of this paper, we discuss the advantages of the proposed error bound over existing error bounds.
22

Chadan, K., R. Kobayashi, A. Martin, and J. Stubbe. "Generalization of the Calogero–Cohn bound on the number of bound states." Journal of Mathematical Physics 37, no. 3 (March 1996): 1106–14. http://dx.doi.org/10.1063/1.531450.

23

Yangjit, Wijit. "On the Montgomery–Vaughan weighted generalization of Hilbert’s inequality." Proceedings of the American Mathematical Society, Series B 10, no. 38 (December 19, 2023): 439–54. http://dx.doi.org/10.1090/bproc/199.

Abstract:
This paper concerns the problem of determining the optimal constant in the Montgomery–Vaughan weighted generalization of Hilbert’s inequality. We consider an approach pursued by previous authors via a parametric family of inequalities. We obtain upper and lower bounds for the constants in inequalities in this family. A lower bound indicates that the method in its current form cannot achieve any value below 3.19497, and so cannot achieve the conjectured constant π. The problem of determining the optimal constant remains open.
24

Poliquin, Guillaume. "Principal frequency of the p-Laplacian and the inradius of Euclidean domains." Journal of Topology and Analysis 07, no. 03 (May 15, 2015): 505–11. http://dx.doi.org/10.1142/s1793525315500211.

Abstract:
We study the lower bounds for the principal frequency of the p-Laplacian on N-dimensional Euclidean domains. For p > N, we obtain a lower bound for the first eigenvalue of the p-Laplacian in terms of its inradius, without any assumptions on the topology of the domain. Moreover, we show that a similar lower bound can be obtained if p > N - 1 assuming the boundary is connected. This result can be viewed as a generalization of the classical bounds for the first eigenvalue of the Laplace operator on simply connected planar domains.
25

YERNAUX, GONZAGUE, and WIM VANHOOF. "Anti-unification in Constraint Logic Programming." Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 773–89. http://dx.doi.org/10.1017/s1471068419000188.

Abstract:
Anti-unification refers to the process of generalizing two (or more) goals into a single, more general, goal that captures some of the structure that is common to all initial goals. In general one is typically interested in computing what is often called a most specific generalization, that is, a generalization that captures a maximal amount of shared structure. In this work we address the problem of anti-unification in CLP, where goals can be seen as unordered sets of atoms and/or constraints. We show that while the concept of a most specific generalization can easily be defined in this context, computing it becomes an NP-complete problem. We subsequently introduce a generalization algorithm that computes a well-defined abstraction whose computation can be bounded by a polynomial execution time. Initial experiments show that even a naive implementation of our algorithm produces acceptable generalizations in an efficient way.
26

Bian, Wei, and Dacheng Tao. "Asymptotic Generalization Bound of Fisher’s Linear Discriminant Analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 12 (December 1, 2014): 2325–37. http://dx.doi.org/10.1109/tpami.2014.2327983.

27

Ying, Yiming, and Colin Campbell. "Rademacher Chaos Complexities for Learning the Kernel Problem." Neural Computation 22, no. 11 (November 2010): 2858–86. http://dx.doi.org/10.1162/neco_a_00028.

Abstract:
We develop a novel generalization bound for learning the kernel problem. First, we show that the generalization analysis of the kernel learning problem reduces to investigation of the suprema of the Rademacher chaos process of order 2 over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity by well-established metric entropy integrals and pseudo-dimension of the set of candidate kernels. Our new methodology mainly depends on the principal theory of U-processes and entropy integrals. Finally, we establish satisfactory excess generalization bounds and misclassification error rates for learning gaussian kernels and general radial basis kernels.
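For context, the standard Rademacher-complexity bound that such analyses build on states that, for a class $F$ of $[0,1]$-valued functions and an i.i.d. sample of size $n$, with probability at least $1-\delta$ every $f \in F$ satisfies
$\mathbb{E}[f(Z)] \le \frac{1}{n}\sum_{i=1}^{n} f(z_i) + 2\,\mathfrak{R}_n(F) + \sqrt{\tfrac{\ln(1/\delta)}{2n}}$,
where $\mathfrak{R}_n(F)$ is the Rademacher complexity of $F$; the Rademacher chaos complexity introduced above plays the analogous role for the order-two U-process that appears when the kernel itself is learned.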
28

ELIAHOU, SHALOM, and MICHEL KERVAIRE. "BOUNDS ON THE MINIMAL SUMSET SIZE FUNCTION IN GROUPS." International Journal of Number Theory 03, no. 04 (December 2007): 503–11. http://dx.doi.org/10.1142/s1793042107001085.

Abstract:
In this paper, we give lower and upper bounds for the minimal size μG(r,s) of the sumset (or product set) of two finite subsets of given cardinalities r,s in a group G. Our upper bound holds for solvable groups, our lower bound for arbitrary groups. The results are expressed in terms of variants of the numerical function κG(r,s), a generalization of the Hopf–Stiefel function that, as shown in [6], exactly models μG(r,s) for G abelian.
29

Wu, Liang, Antoine Ledent, Yunwen Lei, and Marius Kloft. "Fine-grained Generalization Analysis of Vector-Valued Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10338–46. http://dx.doi.org/10.1609/aaai.v35i12.17238.

Abstract:
Many fundamental machine learning tasks can be formulated as a problem of learning with vector-valued functions, where we learn multiple scalar-valued functions together. Although there is some generalization analysis on different specific algorithms under the empirical risk minimization principle, a unifying analysis of vector-valued learning under a regularization framework is still lacking. In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size. Our discussions relax the existing assumptions on the restrictive constraint of hypothesis spaces, smoothness of loss functions and low-noise condition. To understand the interaction between optimization and learning, we further use our results to derive the first generalization bounds for stochastic gradient descent with vector-valued functions. We apply our general results to multi-class classification and multi-label classification, which yield the first bounds with a logarithmic dependency on the output dimension for extreme multi-label classification with the Frobenius regularization. As a byproduct, we derive a Rademacher complexity bound for loss function classes defined in terms of a general strongly convex function.
30

BRACKEN, PAUL. "ON A GENERALIZATION OF A THEOREM OF LICHNEROWICZ TO MANIFOLDS WITH BOUNDARY." International Journal of Geometric Methods in Modern Physics 08, no. 03 (May 2011): 639–46. http://dx.doi.org/10.1142/s0219887811005300.

Abstract:
A theorem due to Lichnerowicz which establishes a lower bound on the lowest nonzero eigenvalue of the Laplacian acting on functions on a compact, closed manifold is reviewed. It is shown how this theorem can be extended to the case of a manifold with nonempty boundary. Lower bounds for different boundary conditions, analogous to the empty boundary case, are formulated and some novel proofs are presented.
31

Swisher, Linda, Maria Adelaida Restrepo, Elena Plante, and Soren Lowell. "Effect of Implicit and Explicit "Rule" Presentation on Bound-Morpheme Generalization in Specific Language Impairment." Journal of Speech, Language, and Hearing Research 38, no. 1 (February 1995): 168–73. http://dx.doi.org/10.1044/jshr.3801.168.

Abstract:
This study addressed whether generalization of a trained bound morpheme to untrained vocabulary stems differs between children with specific language impairment (SLI) and children with normal language (NL) under two controlled instructional conditions. Twenty-five children with NL and 25 children with SLI matched for age served as subjects. Contrasts between affixed and unaffixed words highlighted the affixation "rule" in the "implicit-rule" condition. The "rule" was verbalized by the trainer in the "explicit-rule" condition. Bimodal generalization results occurred in both subject groups, indicating that generalization was not incremental. Chi-square analyses suggested that the SLI group generalized the bound morpheme less often than the NL group under the explicit-rule training condition. The findings add to those that indicate children with SLI have a unique language-learning style, and suggest that the explicit presentation of metalinguistic information during training may be detrimental to bound-morpheme generalization by preschool-age children with SLI.
32

Ma, Zhi-Hao, Zhi-Hua Chen, Shuai Han, Shao-Ming Fei, and Simone Severini. "Improved bounds on negativity of superpositions." Quantum Information and Computation 12, no. 11&12 (November 2012): 983–88. http://dx.doi.org/10.26421/qic12.11-12-6.

Abstract:
We consider an alternative formula for the negativity based on a simple generalization of the concurrence. We use the formula to bound the amount of entanglement in a superposition of two bipartite pure states of arbitrary dimension. Various examples indicate that our bounds are tighter than the previously known results.
33

Usta, Fuat, and Mehmet Sarikaya. "On generalization conformable fractional integral inequalities." Filomat 32, no. 16 (2018): 5519–26. http://dx.doi.org/10.2298/fil1816519u.

Abstract:
The main issue addressed in this paper is the generalization of Gronwall, Volterra and Pachpatte type inequalities to conformable differential equations. Using the Katugampola definition of conformable calculus, we obtain upper and lower bounds for the integral inequalities. The established results are extensions of some existing Gronwall, Volterra and Pachpatte type inequalities in previously published studies.
34

Blinovsky, V. M. "Plotkin bound generalization to the case of multiple packings." Problems of Information Transmission 45, no. 1 (March 2009): 1–4. http://dx.doi.org/10.1134/s0032946009010013.

35

Deng, Xiaoge, Tao Sun, Shengwei Li, and Dongsheng Li. "Stability-Based Generalization Analysis of the Asynchronous Decentralized SGD." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 7340–48. http://dx.doi.org/10.1609/aaai.v37i6.25894.

Abstract:
The generalization ability often determines the success of machine learning algorithms in practice. Therefore, it is of great theoretical and practical importance to understand and bound the generalization error of machine learning algorithms. In this paper, we provide the first generalization results of the popular stochastic gradient descent (SGD) algorithm in the distributed asynchronous decentralized setting. Our analysis is based on the uniform stability tool, where stable means that the learned model does not change much in small variations of the training set. Under some mild assumptions, we perform a comprehensive generalizability analysis of the asynchronous decentralized SGD, including generalization error and excess generalization error bounds for the strongly convex, convex, and non-convex cases. Our theoretical results reveal the effects of the learning rate, training data size, training iterations, decentralized communication topology, and asynchronous delay on the generalization performance of the asynchronous decentralized SGD. We also study the optimization error regarding the objective function values and investigate how the initial point affects the excess generalization error. Finally, we conduct extensive experiments on MNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets to validate the theoretical findings.
36

Fanzi, Zeng, and Ma Xiaolong. "The Generalization Error Bound for the Multiclass Analytical Center Classifier." Scientific World Journal 2013 (2013): 1–5. http://dx.doi.org/10.1155/2013/574748.

Abstract:
This paper presents a multiclass classifier based on the analytical center of the feasible space (MACM). This multiclass classifier is formulated as a quadratically constrained linear optimization problem and does not need to repeatedly construct classifiers to separate a single class from all the others. An upper bound on its generalization error is proved theoretically. Experiments on benchmark datasets validate the generalization performance of MACM.
37

Lv, Shao-Gao. "Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces." Neural Computation 27, no. 6 (June 2015): 1294–320. http://dx.doi.org/10.1162/neco_a_00739.

Abstract:
Gradient learning (GL), initially proposed by Mukherjee and Zhou (2006), has been proved to be a powerful tool for conducting variable selection and dimension reduction simultaneously. This approach presents a nonparametric version of a gradient estimator with positive definite kernels without estimating the true function itself, so that the proposed version has wide applicability and allows for complex effects between predictors. In terms of theory, however, existing generalization bounds for GL depend on capacity-independent techniques, and the capacity of kernel classes cannot be characterized completely. Thus, this letter considers GL estimators that minimize the empirical convex risk. We prove generalization bounds for such estimators with rates that are faster than previous results. Moreover, we provide a novel upper bound for Rademacher chaos complexity of order two, which also plays an important role in general pairwise-type estimations, including ranking and score problems.
38

Yang, Yanxia, Pu Wang, and Xuejin Gao. "A Novel Radial Basis Function Neural Network with High Generalization Performance for Nonlinear Process Modelling." Processes 10, no. 1 (January 10, 2022): 140. http://dx.doi.org/10.3390/pr10010140.

Abstract:
A radial basis function neural network (RBFNN), with a strong function approximation ability, was proven to be an effective tool for nonlinear process modeling. However, in many instances, the sample set is limited and the model evaluation error is fixed, which makes it very difficult to construct an optimal network structure to ensure the generalization ability of the established nonlinear process model. To solve this problem, a novel RBFNN with high generalization performance (RBFNN-GP) is proposed in this paper. The proposed RBFNN-GP makes three contributions. First, a local generalization error bound, introducing the sample mean and variance, is developed to acquire a small error bound and reduce the range of error. Second, a self-organizing structure method, based on the generalization error bound and network sensitivity, is established to obtain a suitable number of neurons and improve the generalization ability. Third, the convergence of the proposed RBFNN-GP is proved theoretically for both fixed and adjusted structures. Finally, the performance of the proposed RBFNN-GP is compared with some popular algorithms, using two numerical simulations and a practical application. The comparison results verify the effectiveness of RBFNN-GP.
39

Zhang, Tong. "Leave-One-Out Bounds for Kernel Methods." Neural Computation 15, no. 6 (June 1, 2003): 1397–437. http://dx.doi.org/10.1162/089976603321780326.

Abstract:
In this article, we study leave-one-out style cross-validation bounds for kernel methods. The essential element in our analysis is a bound on the parameter estimation stability for regularized kernel formulations. Using this result, we derive bounds on expected leave-one-out cross-validation errors, which lead to expected generalization bounds for various kernel algorithms. In addition, we also obtain variance bounds for leave-one-out errors. We apply our analysis to some classification and regression problems and compare them with previous results.
40

Cahn, Patricia. "A generalization of Turaev’s virtual string cobracket and self-intersections of virtual strings." Communications in Contemporary Mathematics 19, no. 04 (June 14, 2016): 1650053. http://dx.doi.org/10.1142/s021919971650053x.

Abstract:
Previously we defined an operation [Formula: see text] that generalizes Turaev’s cobracket for loops on a surface. We showed that, in contrast to the cobracket, this operation gives a formula for the minimum number of self-intersections of a loop in a given free homotopy class. In this paper, we consider the corresponding question for virtual strings, and conjecture that [Formula: see text] gives a formula for the minimum number of self-intersection points of a virtual string in a given virtual homotopy class. To support the conjecture, we show that [Formula: see text] gives a bound on the minimal self-intersection number of a virtual string which is stronger than a bound given by Turaev’s virtual string cobracket. We also use Turaev’s based matrices to describe a large set of strings [Formula: see text] such that [Formula: see text] gives a formula for the minimal self-intersection number [Formula: see text]. Finally, we compare the bound given by [Formula: see text] to a bound given by Turaev’s based matrix invariant [Formula: see text], and construct an example that shows the bound on the minimal self-intersection number given by [Formula: see text] is sometimes stronger than the bound [Formula: see text].
41

Martinazzo, Rocco, and Eli Pollak. "Lower bounds to eigenvalues of the Schrödinger equation by solution of a 90-y challenge." Proceedings of the National Academy of Sciences 117, no. 28 (June 29, 2020): 16181–86. http://dx.doi.org/10.1073/pnas.2007093117.

Abstract:
The Ritz upper bound to eigenvalues of Hermitian operators is essential for many applications in science. It is a staple of quantum chemistry and physics computations. The lower bound devised by Temple in 1928 [G. Temple, Proc. R. Soc. A Math. Phys. Eng. Sci. 119, 276–293 (1928)] is not, since it converges too slowly. The need for a good lower-bound theorem and algorithm cannot be overstated, since an upper bound alone is not sufficient for determining differences between eigenvalues such as tunneling splittings and spectral features. In this paper, after 90 y, we derive a generalization and improvement of Temple’s lower bound. Numerical examples based on implementation of the Lanczos tridiagonalization are provided for nontrivial lattice model Hamiltonians, exemplifying convergence over a range of 13 orders of magnitude. This lower bound is typically at least one order of magnitude better than Temple’s result. Its rate of convergence is comparable to that of the Ritz upper bound. It is not limited to ground states. These results complement Ritz’s upper bound and may turn the computation of lower bounds into a staple of eigenvalue and spectral problems in physics and chemistry.
42

Meng, Juan, Guyu Hu, Dong Li, Yanyan Zhang, and Zhisong Pan. "Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation." Computational Intelligence and Neuroscience 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/7046563.

Abstract:
Domain adaptation has received much attention as a major form of transfer learning. One issue that should be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation combining source and target data, with a new regularizer which takes generalization bounds into account. This regularization term uses the integral probability metric (IPM) as the distance between the source domain and the target domain and thus can bound the testing error of an existing predictor. Since the computation of the IPM only involves two distributions, this generalization term is independent of specific classifiers. With popular learning models, the empirical risk minimization is expressed as a general convex optimization problem and thus can be solved effectively by existing tools. Empirical studies on synthetic data for regression and real-world data for classification show the effectiveness of this method.
43

Redkin, Nikolay P. "A generalization of Shannon function." Discrete Mathematics and Applications 28, no. 5 (October 25, 2018): 309–18. http://dx.doi.org/10.1515/dma-2018-0027.

Abstract:
When investigating the complexity of implementing Boolean functions, it is usually assumed that the basis in which the schemes are constructed and the measure of the complexity of the schemes are known. For them, the Shannon function is introduced, which associates with each Boolean function the least complexity of implementing this function in the considered basis. In this paper we propose a generalization of such a Shannon function in the form of an upper bound that is taken over all functionally complete bases. This generalization gives an idea of the complexity of implementing Boolean functions in the “worst” bases for them. The conceptual content of the proposed generalization is demonstrated by the example of a conjunction.
44

Sun, Haichao, and Jie Yang. "The Generalization of Non-Negative Matrix Factorization Based on Algorithmic Stability." Electronics 12, no. 5 (February 27, 2023): 1147. http://dx.doi.org/10.3390/electronics12051147.

Abstract:
Non-negative Matrix Factorization (NMF) is a popular technique for intelligent systems, widely used to decompose a nonnegative matrix into two factor matrices: a basis matrix and a coefficient matrix. The main objective of NMF is to ensure that the product of the two matrices is as close to the original matrix as possible. Meanwhile, the stability and generalization ability of the algorithm should be ensured. Therefore, the generalization performance of NMF algorithms is analyzed from the perspective of algorithm stability and generalization error bounds are given; the resulting approach is named AS-NMF. Firstly, a general NMF prediction algorithm is proposed, which can predict the labels for new samples, and then the corresponding loss function is defined. Secondly, the stability of the NMF algorithm is defined according to the loss function, and two generalization error bounds are obtained by employing uniform stability under the multiplicative update rule, for the cases where U is fixed and where it is not. The bounds show numerically that the stability parameter depends on the upper bound on the module length of the input data, the dimension of the hidden matrix and the Frobenius norm of the basis matrix. Finally, a general and stable framework is established, which can analyze and measure generalization error bounds for the NMF algorithm. The experimental results demonstrate the advantages of the new methods on three widely used benchmark datasets, indicating that our AS-NMF can not only achieve efficient performance, but also outperform the state of the art on recommendation tasks in terms of model stability.
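A minimal sketch of the multiplicative update rule referred to above, for the Frobenius-norm NMF objective (the classical Lee-Seung updates; the matrix names here are illustrative, and the paper's stability analysis is layered on top of updates of this form):

import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9):
    # Factor a nonnegative matrix V (n x m) as W (n x r, basis) times
    # H (r x m, coefficients) by minimizing ||V - W H||_F^2.
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update coefficient matrix
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis matrix
    return W, H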
45

Shen, Yukai. "$ k $th powers in a generalization of Piatetski-Shapiro sequences." AIMS Mathematics 8, no. 9 (2023): 22411–18. http://dx.doi.org/10.3934/math.20231143.

Abstract:
The article considers a generalization of Piatetski-Shapiro sequences in the sense of Beatty sequences. The sequence is defined by $\left(\left\lfloor\alpha n^c+\beta\right\rfloor\right)_{n = 1}^{\infty}$, where $\alpha \geq 1$, $c > 1$, and $\beta$ are real numbers. The focus of the study is on solving equations of the form $\left\lfloor \alpha n^c +\beta\right\rfloor = s m^k$, where $m$ and $n$ are positive integers, $1 \leq n \leq N$, and $s$ is an integer. Bounds for the solutions are obtained for different values of the exponent $k$, and an average bound is derived over $k$-free numbers $s$ in a given interval.
46

Liu, Jiawen, Weihao Qu, Marco Gaboardi, Deepak Garg, and Jonathan Ullman. "Program Analysis for Adaptive Data Analysis." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 914–38. http://dx.doi.org/10.1145/3656414.

Abstract:
Data analyses are usually designed to identify some property of the population from which the data are drawn, generalizing beyond the specific data sample. For this reason, data analyses are often designed in a way that guarantees that they produce a low generalization error. That is, they are designed so that the result of a data analysis run on sample data does not differ too much from the result one would achieve by running the analysis over the entire population. An adaptive data analysis can be seen as a process composed of multiple queries interrogating some data, where the choice of which query to run next may rely on the results of previous queries. The generalization error of each individual query/analysis can be controlled by using an array of well-established statistical techniques. However, when queries are arbitrarily composed, the different errors can propagate through the chain of different queries and lead to a high generalization error. To address this issue, data analysts are designing several techniques that not only guarantee bounds on the generalization errors of single queries, but that also guarantee bounds on the generalization error of the composed analyses. The choice of which of these techniques to use often depends on the chain of queries that an adaptive data analysis can generate. In this work, we consider adaptive data analyses implemented as while-like programs and we design a program analysis which can help with identifying which technique to use to control their generalization errors. More specifically, we formalize the intuitive notion of adaptivity as a quantitative property of programs. We do this because the adaptivity level of a data analysis is a key measure to choose the right technique. Based on this definition, we design a program analysis for soundly approximating this quantity. The program analysis generates a representation of the data analysis as a weighted dependency graph, where the weight is an upper bound on the number of times each variable can be reached, and uses a path search strategy to guarantee an upper bound on the adaptivity. We implement our program analysis and show that it can help to analyze the adaptivity of several concrete data analyses with different adaptivity structures.
47

Hosseinian, Seyedmohammadhossein, Dalila B. M. M. Fontes, and Sergiy Butenko. "A Lagrangian Bound on the Clique Number and an Exact Algorithm for the Maximum Edge Weight Clique Problem." INFORMS Journal on Computing 32, no. 3 (July 2020): 747–62. http://dx.doi.org/10.1287/ijoc.2019.0898.

Abstract:
This paper explores the connections between the classical maximum clique problem and its edge-weighted generalization, the maximum edge weight clique (MEWC) problem. As a result, a new analytic upper bound on the clique number of a graph is obtained and an exact algorithm for solving the MEWC problem is developed. The bound on the clique number is derived using a Lagrangian relaxation of an integer (linear) programming formulation of the MEWC problem. Furthermore, coloring-based bounds on the clique number are used in a novel upper-bounding scheme for the MEWC problem. This scheme is employed within a combinatorial branch-and-bound framework, yielding an exact algorithm for the MEWC problem. Results of computational experiments demonstrate a superior performance of the proposed algorithm compared with existing approaches.
48

Wieczorek, Rafał, and Hanna Podsędkowska. "Entropic upper bound for Bayes risk in the quantum case." Probability and Mathematical Statistics 38, no. 2 (December 28, 2018): 429–40. http://dx.doi.org/10.19195/0208-4147.38.2.9.

Abstract:
The entropic upper bound for the Bayes risk in a general quantum case is presented. We obtain a generalization of the entropic lower bound for the probability of detection. Our result gives an upper bound for the Bayes risk for a particular loss function, namely the probability of detection, in the rather general setting of an arbitrary finite von Neumann algebra. It is also shown under which condition the indicated upper bound is achieved.
49

Swisher, Linda, and David Snow. "Learning and Generalization Components of Morphological Acquisition by Children With Specific Language Impairment." Journal of Speech, Language, and Hearing Research 37, no. 6 (December 1994): 1406–13. http://dx.doi.org/10.1044/jshr.3706.1406.

Abstract:
Children with specific language impairment (SLI) have particular difficulty acquiring bound morphemes. To determine whether these morphological deficits spring from impairments of rule-induction or memory (storage/access) skills, 25 preschool-age children with normal language (NL) and 25 age-matched children with SLI were presented with a novel vocabulary and novel bound-morpheme learning task. A chi square analysis revealed that the children with SLI had significantly lower vocabulary learning levels than NL children. In addition, there was tentative evidence that a dependency relationship existed in some children between success in vocabulary learning and proficiency in generalizing a trained bound morpheme to untrained vocabulary stems. These findings are predicted by the storage/access but not the rule-induction theory of specific language impairment. They suggest that intervention targeting bound-morpheme skills acquisition in children with SLI might include attention to vocabulary development.
50

Tian, Yingjie, Saiji Fu, and Jingjing Tang. "Incomplete-view oriented kernel learning method with generalization error bound." Information Sciences 581 (December 2021): 951–77. http://dx.doi.org/10.1016/j.ins.2021.10.011.
