Journal articles on the topic 'Approximation of convex function'

To see the other types of publications on this topic, follow the link: Approximation of convex function.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Approximation of convex function.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Petrova, T. "One counterexample for convex approximation of function with fractional derivatives, r>4." Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 3 (2018): 53–56. http://dx.doi.org/10.17721/1812-5409.2018/3.7.

Abstract:
We discuss whether it is possible to have interpolatory estimates in the approximation of a function f \in W^r[0,1] by polynomials. The problem of positive approximation is to estimate the pointwise degree of approximation of a function f \in C^r[0,1] \wedge \Delta^0, where \Delta^0 is the set of positive functions on [0,1]. Estimates of the form (1) for positive approximation are known ([1], [2]). The problem of monotone approximation is that of estimating the degree of approximation of a monotone nondecreasing function by monotone nondecreasing polynomials. Estimates of the form (1) for monotone approximation were proved in [3], [4], [8]: the case r \in N, r > 2 is considered in [3], [4], and the case r \in R, r > 2 in [8], where it was proved that for monotone approximation estimates of the form (1) fail for r \in R, r > 2. The problem of convex approximation is that of estimating the degree of approximation of a convex function by convex polynomials; it is considered in [5], [6], [11]. In [5] the case r \in N, r > 2 is considered, and it was proved that for convex approximation estimates of the form (1) fail. In [6] the case r \in R, r \in (2;3) is considered, and in [11] the case r \in R, r \in (3;4); in both cases estimates of the form (1) were shown to fail. In [9] the case r \in R, r > 4 is considered, and it was proved that estimate (1) is not true for f \in W^r[0,1] \wedge \Delta^2, r > 4. In this paper we consider the approximation of a function f \in W^r[0,1] \wedge \Delta^2, r > 4, by algebraic polynomials p_n \in \Pi_n \wedge \Delta^2. It is proved that for f \in W^r[0,1] \wedge \Delta^2, r > 4, estimate (1) can, generally speaking, be improved.
2

Zala, Vidhi, Mike Kirby, and Akil Narayan. "Structure-Preserving Function Approximation via Convex Optimization." SIAM Journal on Scientific Computing 42, no. 5 (January 2020): A3006–A3029. http://dx.doi.org/10.1137/19m130128x.

3

Koliha, J. J. "Approximation of Convex Functions." Real Analysis Exchange 29, no. 1 (2004): 465. http://dx.doi.org/10.14321/realanalexch.29.1.0465.

4

Tang, Wee-Kee. "Sets of differentials and smoothness of convex functions." Bulletin of the Australian Mathematical Society 52, no. 1 (August 1995): 91–96. http://dx.doi.org/10.1017/s0004972700014477.

Abstract:
Approximation by smooth convex functions and questions on the Smooth Variational Principle for a given convex function f on a Banach space are studied in connection with majorising f by C1-smooth functions.
5

Bosch, Paul. "A Numerical Method for Two-Stage Stochastic Programs under Uncertainty." Mathematical Problems in Engineering 2011 (2011): 1–13. http://dx.doi.org/10.1155/2011/840137.

Abstract:
Motivated by problems coming from planning and operational management in power generation companies, this work extends the traditional two-stage linear stochastic program by adding probabilistic constraints in the second stage. We describe, under special assumptions, how two-stage stochastic programs with mixed probabilities can be treated computationally. We obtain a convex conservative approximation of the chance constraints defined in the second stage of our model and use Monte Carlo simulation techniques to approximate the expectation function in the first stage by the sample average. This approach raises a further question: how to solve the linear program with the convex conservative approximation (nonlinear constraints) for each scenario?
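The Monte Carlo averaging step described in this abstract can be illustrated with a minimal sample-average sketch; the function names and the piecewise-linear recourse cost below are hypothetical stand-ins, not the paper's model:

```python
import random

def sample_average(cost, scenarios):
    """Monte Carlo (sample-average) estimate of the expectation E[cost(xi)]."""
    return sum(cost(xi) for xi in scenarios) / len(scenarios)

# Hypothetical second-stage cost: a unit penalty for unmet demand, which is
# piecewise linear and hence convex in the random demand xi.
def recourse_cost(x):
    return lambda xi: 5.0 * max(0.0, xi - x)

random.seed(0)
scenarios = [random.gauss(100.0, 10.0) for _ in range(10_000)]

x = 100.0                                    # first-stage capacity decision
estimate = sample_average(recourse_cost(x), scenarios)
```

For xi ~ N(100, 10^2) the true expectation is 5 * 10 / sqrt(2*pi), roughly 19.9, so with 10,000 samples the estimate should land nearby; in the paper such an average replaces the first-stage expectation function before the approximated chance constraints are handled.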
6

Chen, Xin, Houduo Qi, Liqun Qi, and Kok-Lay Teo. "Smooth Convex Approximation to the Maximum Eigenvalue Function." Journal of Global Optimization 30, no. 2-3 (November 2004): 253–70. http://dx.doi.org/10.1007/s10898-004-8271-2.

7

Ubhaya, Vasant A. "Uniform approximation by quasi-convex and convex functions." Journal of Approximation Theory 55, no. 3 (December 1988): 326–36. http://dx.doi.org/10.1016/0021-9045(88)90099-8.

8

Ubhaya, Vasant A. "Lp approximation by quasi-convex and convex functions." Journal of Mathematical Analysis and Applications 139, no. 2 (May 1989): 574–85. http://dx.doi.org/10.1016/0022-247x(89)90130-3.

9

Zwick, D. "Best Approximation by Convex Functions." American Mathematical Monthly 94, no. 6 (June 1987): 528. http://dx.doi.org/10.2307/2322845.

10

Zwick, D. "Best Approximation by Convex Functions." American Mathematical Monthly 94, no. 6 (June 1987): 528–34. http://dx.doi.org/10.1080/00029890.1987.12000679.

11

Gao, Bo, Donald J. Newman, and V. A. Popov. "Convex Approximation by Rational Functions." SIAM Journal on Mathematical Analysis 26, no. 2 (March 1995): 488–99. http://dx.doi.org/10.1137/s0036141092232853.

12

Chen, Lijian, and Dustin J. Banet. "Polynomial Approximation for Two Stage Stochastic Programming with Separable Objective." International Journal of Operations Research and Information Systems 1, no. 3 (July 2010): 75–88. http://dx.doi.org/10.4018/joris.2010070105.

Abstract:
In this paper, the authors solve the two-stage stochastic program with separable objective by obtaining convex polynomial approximations to the convex objective function with arbitrary accuracy. The proposed method is valid for realistic applications; for example, the convex objective can be either nondifferentiable or accessible only through Monte Carlo simulations. The resulting polynomial is constructed via Bernstein polynomials and norm approximation models. For a given accuracy, the necessary degree of the polynomial and the number of replications are determined. The authors then apply gradient-type algorithms to the new stochastic programming model with the polynomial objective, attaining the optimal solution.
13

Alabdali, Osama, and Allal Guessab. "Sharp multidimensional numerical integration for strongly convex functions on convex polytopes." Filomat 34, no. 2 (2020): 601–7. http://dx.doi.org/10.2298/fil2002601a.

Abstract:
This paper introduces and studies a new class of multidimensional numerical integration, which we call "strongly positive definite cubature formulas". We establish, among others, a characterization theorem providing necessary and sufficient conditions for the approximation error (based on such cubature formulas) to be bounded by the approximation error of the quadratic function. This result is derived as a consequence of two characterization results, of independent interest, for linear functionals obtained in a more general setting. The paper thus extends some results previously reported in [2, 3], where only convexity in the classical sense is assumed. We also show that centroidal Voronoi tessellations provide an efficient way of constructing a class of optimal cubature formulas. Numerical results for two-dimensional test functions are given to illustrate the efficiency of the resulting cubature formulas.
14

Simons, S. "Regularisations of convex functions and slicewise suprema." Bulletin of the Australian Mathematical Society 50, no. 3 (December 1994): 481–99. http://dx.doi.org/10.1017/s0004972700013599.

Abstract:
For a number of years, there has been interest in the regularisation of a given proper convex lower semicontinuous function on a Banach space, defined to be the episum (= inf-convolution) of the function with a scalar multiple of the norm. There is an obvious geometric way of characterising this regularisation as the lower envelope of cones lying above the graph of the original function. In this paper, we consider the more interesting problem of characterising the regularisation in terms of approximations from below, expressing the regularisation as the upper envelope of certain subtangents to the graph of the original function. We shall show that such an approximation is sometimes (but not always) valid. Further, we shall give an extension of the whole procedure in which the scalar multiple of the norm is replaced by a more general sublinear functional. As a by-product of our analysis, we are led to the consideration of two senses, stronger than the pointwise sense, in which a function on a Banach space can be expressed as the upper envelope of a family of functions. These new senses of suprema lead to some questions in Banach space theory.
15

Darwesh, Halgwrd Mohammed. "Log-Convex Polynomial Approximation in Log. of Twice Differentiable Functions." Journal of Zankoy Sulaimani - Part A 11, no. 1 (August 12, 2007): 21–28. http://dx.doi.org/10.17656/jzs.10177.

16

Shang, Shaoqiang. "Uniform approximation of convex function in smooth Banach spaces." Journal of Mathematical Analysis and Applications 478, no. 2 (October 2019): 526–38. http://dx.doi.org/10.1016/j.jmaa.2019.05.041.

17

Güder, F., and J. G. Morris. "Optimal objective function approximation for separable convex quadratic programming." Mathematical Programming 67, no. 1-3 (October 1994): 133–42. http://dx.doi.org/10.1007/bf01582218.

18

Hanin, Boris. "Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations." Mathematics 7, no. 10 (October 18, 2019): 992. http://dx.doi.org/10.3390/math7100992.

Abstract:
This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) such that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0,1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width d + 1 can approximate any continuous convex function of d variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0,1]^d by ReLU nets with width d + 3.
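The convexity observation in this abstract can be made concrete: a convex function is the supremum of its tangent lines, and a running maximum of affine pieces needs only the identity max(a, b) = b + relu(a - b), which a narrow ReLU net realizes layer by layer. A toy one-dimensional sketch (the target x^2 and the tangent points are illustrative choices, not the paper's construction):

```python
def relu(t):
    return max(t, 0.0)

def max_affine(x, pieces):
    """Running maximum of affine pieces via max(a, b) = b + relu(a - b)."""
    val = pieces[0][0] * x + pieces[0][1]
    for slope, intercept in pieces[1:]:
        val = val + relu(slope * x + intercept - val)
    return val

f = lambda x: x * x                      # target convex function on [0, 1]
# The tangent line of x^2 at p has slope 2p and intercept -p^2.
pieces = [(2 * p, -p * p) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]

grid = [i / 100 for i in range(101)]
gaps = [f(x) - max_affine(x, pieces) for x in grid]
```

The tangent envelope underestimates f everywhere, and with tangency points spaced h apart the worst gap for x^2 is h^2/4 (here 0.015625); refining the tangent set drives the error to zero, mirroring the depth-versus-accuracy trade-off discussed in the paper.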
19

Kopotun, K. A., D. Leviatan, and I. A. Shevchuk. "Convex Polynomial Approximation in the Uniform Norm: Conclusion." Canadian Journal of Mathematics 57, no. 6 (December 1, 2005): 1224–48. http://dx.doi.org/10.4153/cjm-2005-049-6.

Abstract:
Estimating the degree of approximation in the uniform norm, of a convex function on a finite interval, by convex algebraic polynomials, has received wide attention over the last twenty years. However, while much progress has been made, especially in recent years by, among others, the authors of this article, separately and jointly, some interesting open questions have remained. In this paper we give final answers to all those open problems. We are able to say, for each r-th differentiable convex function, whether or not its degree of convex polynomial approximation in the uniform norm may be estimated by a Jackson-type estimate involving the weighted Ditzian–Totik kth modulus of smoothness, and how the constants in this estimate behave. It turns out that for some pairs (k, r) we have such an estimate with constants depending only on these parameters. For other pairs the estimate is valid, but only with constants that depend on the function being approximated, while there are pairs for which the Jackson-type estimate is, in general, invalid.
20

DeVore, Ronald, Boris Hanin, and Guergana Petrova. "Neural network approximation." Acta Numerica 30 (May 2021): 327–444. http://dx.doi.org/10.1017/s0962492921000052.

Abstract:
Neural networks (NNs) are the method of choice for building learning algorithms. They are now being investigated for other numerical tasks such as solving high-dimensional partial differential equations. Their popularity stems from their empirical success on several challenging learning problems (computer chess/Go, autonomous navigation, face recognition). However, most scholars agree that a convincing theoretical explanation for this success is still lacking. Since these applications revolve around approximating an unknown function from data observations, part of the answer must involve the ability of NNs to produce accurate approximations. This article surveys the known approximation properties of the outputs of NNs with the aim of uncovering the properties that are not present in the more traditional methods of approximation used in numerical analysis, such as approximations using polynomials, wavelets, rational functions and splines. Comparisons are made with traditional approximation methods from the viewpoint of rate distortion, i.e. error versus the number of parameters used to create the approximant. Another major component in the analysis of numerical approximation is the computational time needed to construct the approximation, and this in turn is intimately connected with the stability of the approximation algorithm. So the stability of numerical approximation using NNs is a large part of the analysis put forward. The survey, for the most part, is concerned with NNs using the popular ReLU activation function. In this case the outputs of the NNs are piecewise linear functions on rather complicated partitions of the domain of f into cells that are convex polytopes. When the architecture of the NN is fixed and the parameters are allowed to vary, the set of output functions of the NN is a parametrized nonlinear manifold. It is shown that this manifold has certain space-filling properties leading to an increased ability to approximate (better rate distortion) but at the expense of numerical stability. The space filling creates the challenge to the numerical method of finding best or good parameter choices when trying to approximate.
21

Sano, Takashi. "A noniterative solution to the inverse Ising problem using a convex upper bound on the partition function." Journal of Statistical Mechanics: Theory and Experiment 2022, no. 2 (February 1, 2022): 023406. http://dx.doi.org/10.1088/1742-5468/ac50b1.

Abstract:
The inverse Ising problem, or the learning of Ising models, is notoriously difficult, as evaluating the partition function has a large computational cost. To quickly solve this problem, inverse formulas using approximation methods such as the Bethe approximation have been developed. In this paper, we employ the tree-reweighted (TRW) approximation to construct a new inverse formula. An advantage of using the TRW approximation is that it provides a rigorous upper bound on the partition function, allowing us to optimize a lower bound for the learning objective function. We show that the moment-matching and self-consistency conditions can be solved analytically, and we obtain an analytic form of the approximate interaction matrix as a function of the given data statistics. Using this solution, we can compute the interaction matrix that is optimal to the approximate objective function without iterative computation. To evaluate the accuracy of the derived learning formula, we compared our formula to those obtained by other approximations. From our experiments on reconstructing interaction matrices, we found that the proposed formula gives the best estimates in models with strongly attractive interactions on various graphs.
22

Yang, Wanping, Jinkai Zhao, and Fengmin Xu. "An Efficient Method for Convex Constrained Rank Minimization Problems Based on DC Programming." Mathematical Problems in Engineering 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/7473041.

Abstract:
The constrained rank minimization problem has various applications in many fields, including machine learning, control, and signal processing. In this paper, we consider the convex constrained rank minimization problem. By introducing a new variable and penalizing an equality constraint into the objective function, we reformulate the convex objective function with a rank constraint as a difference of convex functions based on closed-form solutions, which can be cast as a DC program. A stepwise linear approximation algorithm is provided for solving the reformulated model. The performance of our method is tested by applying it to affine rank minimization problems and max-cut problems. Numerical results demonstrate that the method is effective and of high recoverability; the results on max-cut show that the method is feasible, providing better lower bounds and lower-rank solutions than an improved approximation algorithm using semidefinite programming, and coming close to the results of the latest research.
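The "difference of convex functions" idea behind this abstract can be sketched in one dimension: to minimize g(x) - h(x) with g and h convex, DC programming repeatedly linearizes h at the current iterate and minimizes the resulting convex surrogate. A toy instance with g(x) = x^4 and h(x) = x^2 (purely illustrative, not the paper's matrix model):

```python
def dca_step(x):
    # Linearize h(y) = y^2 at x, then minimize g(y) - h'(x) * y with
    # g(y) = y^4: setting 4*y**3 = 2*x gives y = (x / 2) ** (1/3).
    return (x / 2.0) ** (1.0 / 3.0)

x = 1.0
for _ in range(50):
    x = dca_step(x)
# The iterates converge to 1/sqrt(2), a stationary point of x^4 - x^2.
```

Here each convex subproblem has a closed form; in the paper the linearized subproblems are themselves convex programs solved numerically at every step.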
23

Swetits, J. J., S. E. Weinstein, and Yuesheng Xu. "Approximation inLp[0,1] byn-convex functions." Numerical Functional Analysis and Optimization 11, no. 1-2 (January 1990): 167–79. http://dx.doi.org/10.1080/01630569008816368.

24

Gifford, David, and Robert Huotari. "BestL 1 approximation by convex functions." Numerical Algorithms 9, no. 1 (March 1995): 107–11. http://dx.doi.org/10.1007/bf02143929.

25

Nikoltjevahedberg, M. "Approximation of a Convex Function by Convex Algebraic Polynomials in Lp, 1 ≤ p < ∞." Journal of Approximation Theory 73, no. 3 (June 1993): 288–302. http://dx.doi.org/10.1006/jath.1993.1043.

26

Gal, Sorin Gheorghe. "Properties of the modulus of continuity for monotonous convex functions and applications." International Journal of Mathematics and Mathematical Sciences 18, no. 3 (1995): 443–46. http://dx.doi.org/10.1155/s016117129500055x.

27

Altomare, Francesco, and Sabrina Diomede. "Positive operators and approximation in function spaces on completely regular spaces." International Journal of Mathematics and Mathematical Sciences 2003, no. 61 (2003): 3841–71. http://dx.doi.org/10.1155/s0161171203301206.

Abstract:
We discuss the approximation properties of nets of positive linear operators acting on function spaces defined on Hausdorff completely regular spaces. Particular attention is devoted to positive operators which are defined in terms of integrals with respect to a given family of Borel measures. We present several applications which, in particular, show the advantages of such a general approach. Among other things, some new Korovkin-type theorems on function spaces on arbitrary topological spaces are obtained. Finally, a natural extension of the so-called Bernstein–Schnabl operators for convex (not necessarily compact) subsets of a locally convex space is presented as well.
28

Demetriou, I. C. "Discrete piecewise monotonic approximation by a strictly convex distance function." Mathematics of Computation 64, no. 209 (January 1, 1995): 157. http://dx.doi.org/10.1090/s0025-5718-1995-1270617-x.

29

Kadhi, F., and A. Trad. "Characterization and Approximation of the Convex Envelope of a Function." Journal of Optimization Theory and Applications 110, no. 2 (August 2001): 457–66. http://dx.doi.org/10.1023/a:1017591716397.

30

Shang, Shaoqiang, and Yunan Cui. "Approximation and Gâteaux differentiability of convex function in Banach spaces." Mathematische Nachrichten 294, no. 12 (December 2021): 2413–24. http://dx.doi.org/10.1002/mana.201900462.

31

Murota, Kazuo. "ON POLYHEDRAL APPROXIMATION OF L-CONVEX AND M-CONVEX FUNCTIONS." Journal of the Operations Research Society of Japan 58, no. 3 (2015): 291–305. http://dx.doi.org/10.15807/jorsj.58.291.

32

Ditzian, Z., and A. Prymak. "Approximation by Dilated Averages and K-Functionals." Canadian Journal of Mathematics 62, no. 4 (August 1, 2010): 737–57. http://dx.doi.org/10.4153/cjm-2010-040-1.

Abstract:
For a positive finite measure dμ(u) on ℝ^d, normalized to have total mass one, the dilated average of f(x) is obtained by averaging f against the dilation of μ. It will be shown that under some mild assumptions on dμ(u), the deviation of the dilated average from f in the norm of B is equivalent to an appropriate K-functional, where B is a Banach space of functions for which translations are continuous isometries and P(D) is an elliptic differential operator induced by μ. Many applications are given, notable among which is the averaging operator with respect to the normalized measure 𝒳_S(u) du/m(S), where S is a bounded convex set in ℝ^d with an interior point, m(S) is the Lebesgue measure of S, and 𝒳_S(u) is the characteristic function of S. The rate of approximation by averages on the boundary of a convex set under more restrictive conditions is also shown to be equivalent to an appropriate K-functional.
33

Prolla, João B. "A generalized Bernstein approximation theorem." Mathematical Proceedings of the Cambridge Philosophical Society 104, no. 2 (September 1988): 317–30. http://dx.doi.org/10.1017/s030500410006549x.

Abstract:
A celebrated theorem of Weierstrass states that any continuous real-valued function f defined on the closed interval [0, 1] ⊂ ℝ is the limit of a uniformly convergent sequence of polynomials. One of the most elegant and elementary proofs of this classic result is that which uses the Bernstein polynomials of f, one for each integer n ≥ 1. Bernstein's Theorem states that B_n(f) → f uniformly on [0, 1] and, since each B_n(f) is a polynomial of degree at most n, we have as a consequence Weierstrass' theorem. See for example Lorentz [9]. The operator B_n, defined on the space C([0, 1]; ℝ) with values in the vector subspace of all polynomials of degree at most n, has the property that B_n(f) ≥ 0 whenever f ≥ 0. Thus Bernstein's Theorem also establishes the fact that each positive continuous real-valued function on [0, 1] is the limit of a uniformly convergent sequence of positive polynomials. This raises the following natural question: consider a compact Hausdorff space X and the convex cone C⁺(X) := {f ∈ C(X; ℝ); f ≥ 0}. Now the analogue of Bernstein's Theorem would be a theorem stating when a convex cone contained in C⁺(X) is dense in it. More generally, one raises the question of describing the closure of a convex cone contained in C(X; ℝ), and, in particular, the closure of A⁺ := {f ∈ A; f ≥ 0}, where A is a subalgebra of C(X; ℝ).
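The Bernstein operator described in this abstract, B_n(f)(x) = Σ f(k/n) C(n,k) x^k (1-x)^(n-k), is easy to sketch directly; the positive test function below is chosen purely for illustration:

```python
from math import comb, pi, sin

def bernstein(f, n, x):
    """Value at x of the Bernstein polynomial B_n(f) on [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda x: 0.1 + sin(pi * x)           # a positive continuous function

grid = [i / 100 for i in range(101)]
vals = [bernstein(f, 30, x) for x in grid]
positive = all(v > 0 for v in vals)        # B_n(f) >= 0 whenever f >= 0
uniform_err = max(abs(v - f(x)) for v, x in zip(vals, grid))
```

Since every coefficient f(k/n) is positive and the basis polynomials are nonnegative on [0, 1], B_n(f) is itself a positive polynomial converging uniformly to f, which is exactly the positivity-preservation fact that the density question about convex cones generalizes.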
34

Rolshchikov, V. E. "Conditional Approximation Minimum and Approximation Saddle Points of Convex Functions." IFAC Proceedings Volumes 31, no. 13 (June 1998): 191–92. http://dx.doi.org/10.1016/s1474-6670(17)36021-4.

35

Yuan, Jing, Juan Shi, and Xue-Cheng Tai. "A Convex and Exact Approach to Discrete Constrained TV-L1 Image Approximation." East Asian Journal on Applied Mathematics 1, no. 2 (May 2011): 172–86. http://dx.doi.org/10.4208/eajam.220310.181110a.

Abstract:
We study the TV-L1 image approximation model from primal and dual perspectives, based on proposed equivalent convex formulations. More specifically, we apply a convex TV-L1 based approach to globally solve the discrete constrained optimization problem of image approximation, where the unknown image function u(x) ∈ {f1,…,fn}, ∀x ∈ Ω. We show that the TV-L1 formulation does provide an exact convex relaxation model to the non-convex optimization problem considered. This result greatly extends recent studies of Chan et al., from the simplest binary constrained case to the general gray-value constrained case, through the proposed rounding scheme. In addition, we construct a fast multiplier-based algorithm based on the proposed primal-dual model, which properly avoids the variability of the TV-L1 energy function concerned. Numerical experiments validate the theoretical results and show that the proposed algorithm is reliable and effective.
36

Aguilera, Francisco, Daniel Cárdenas-Morales, and Pedro Garrancho. "Optimal Simultaneous Approximation via𝒜-Summability." Abstract and Applied Analysis 2013 (2013): 1–5. http://dx.doi.org/10.1155/2013/824058.

Abstract:
We present optimal convergence results for the m-th derivative of a function by sequences of linear operators. The usual convergence is replaced by 𝒜-summability, with 𝒜 being a sequence of infinite matrices with nonnegative real entries, and the operators are assumed to be m-convex. Saturation results for nonconvergent but almost convergent sequences of operators are stated as corollaries.
37

SHIOURA, AKIYOSHI. "ON THE PIPAGE ROUNDING ALGORITHM FOR SUBMODULAR FUNCTION MAXIMIZATION — A VIEW FROM DISCRETE CONVEX ANALYSIS." Discrete Mathematics, Algorithms and Applications 01, no. 01 (March 2009): 1–23. http://dx.doi.org/10.1142/s1793830909000063.

Abstract:
We consider the problem of maximizing a nondecreasing submodular set function under a matroid constraint. Recently, Calinescu et al. (2007) proposed an elegant framework for the approximation of this problem, which is based on the pipage rounding technique by Ageev and Sviridenko (2004), and showed that this framework indeed yields a (1 - 1/e)-approximation algorithm for the class of submodular functions which are represented as the sum of weighted rank functions of matroids. This paper sheds new light on this result from the viewpoint of discrete convex analysis by extending it to the class of submodular functions which are the sum of M♮-concave functions. M♮-concave functions are a class of discrete concave functions introduced by Murota and Shioura (1999), and contain the class of sums of weighted rank functions as a proper subclass. Our result provides a better understanding of why the pipage rounding algorithm works for sums of weighted rank functions. Based on this new observation, we further extend the approximation algorithm to the maximization of a nondecreasing submodular function over an integral polymatroid. This extension has an application in multi-unit combinatorial auctions.
38

HA, LY KIM. "Lp-APPROXIMATION OF HOLOMORPHIC FUNCTIONS ON A CLASS OF CONVEX DOMAINS." Bulletin of the Australian Mathematical Society 97, no. 3 (April 23, 2018): 446–52. http://dx.doi.org/10.1017/s0004972718000114.

Abstract:
Let Ω be a member of a certain class of convex ellipsoids of finite/infinite type in ℂ². In this paper, we prove that every holomorphic function in L^p(Ω) can be approximated by holomorphic functions on Ω̄ in the L^p(Ω)-norm, for 1 ≤ p < ∞. For the case p = ∞, continuity up to the boundary is additionally required. The proof is based on L^p bounds in the additive Cousin problem.
39

Campos, José N. B., Iran E. Lima Neto, Ticiana M. C. Studart, and Luiz S. V. Nascimento. "Trade-off between reservoir yield and evaporation losses as a function of lake morphology in semi-arid Brazil." Anais da Academia Brasileira de Ciências 88, no. 2 (May 31, 2016): 1113–25. http://dx.doi.org/10.1590/0001-3765201620150124.

Abstract:
This study investigates the relationships between yield and evaporation as a function of lake morphology in semi-arid Brazil. First, a new methodology was proposed to classify the morphology of 40 reservoirs in the Ceará State, with storage capacities ranging from approximately 5 to 4500 hm3. Then, Monte Carlo simulations were conducted to study the effect of reservoir morphology (including real and simplified conical forms) on the water storage process at different reliability levels. The reservoirs were categorized as convex (60.0%), slightly convex (27.5%) or linear (12.5%). When the conical approximation was used instead of the real lake form, a trade-off occurred between reservoir yield and evaporation losses, with different trends for the convex, slightly convex and linear reservoirs. Using the conical approximation, the water yield prediction errors reached approximately 5% of the mean annual inflow, which is negligible for large reservoirs. However, for smaller reservoirs, this error became important. Therefore, this paper presents a new procedure for correcting the yield-evaporation relationships that were obtained by assuming a conical approximation rather than the real reservoir morphology. The combination of this correction with the Regulation Triangle Diagram is useful for rapidly and objectively predicting reservoir yield and evaporation losses in semi-arid environments.
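The conical approximation used in this study ties surface area to stored volume in closed form: for a cone-shaped lake, V = A·h/3 with A growing like h², so the surface area scales as V^(2/3). A minimal sketch of the resulting evaporation-loss estimate (the shape constant and the numbers are illustrative, not the study's data):

```python
def cone_area(volume, k=1.0):
    """Surface area of a conical reservoir, A = k * V ** (2/3)."""
    return k * volume ** (2.0 / 3.0)

def evaporation_loss(volume, evap_depth, k=1.0):
    """Volume lost to evaporation ~ surface area times evaporation depth."""
    return cone_area(volume, k) * evap_depth

loss = evaporation_loss(8.0, 0.1)   # A = 8 ** (2/3) = 4.0, so loss = 0.4
```

Because real lake morphologies deviate from this V^(2/3) scaling (the convex, slightly convex and linear classes above), the conical shortcut over- or under-states the exposed area, which is the source of the yield-evaporation trade-off the paper corrects.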
40

Alabdali, Osama, and Allal Guessab. "Optimal estimates of approximation errors for strongly positive linear operators on convex polytopes." Filomat 36, no. 2 (2022): 695–701. http://dx.doi.org/10.2298/fil2202695a.

Abstract:
In the present investigation, we introduce and study linear operators which underestimate every strongly convex function. We call them, for brevity, sp-linear (approximation) operators. We provide their sharp approximation errors and show that the latter are bounded by the approximation error of the quadratic function. We use centroidal Voronoi tessellations as a domain partition to construct best sp-linear operators. Finally, numerical examples are presented to illustrate the proposed method.
41

Huotari, R., and D. Zwick. "Approximation in the mean by convex functions." Numerical Functional Analysis and Optimization 10, no. 5-6 (January 1989): 489–98. http://dx.doi.org/10.1080/01630568908816314.

42

Azagra, Daniel. "Global and fine approximation of convex functions." Proceedings of the London Mathematical Society 107, no. 4 (February 18, 2013): 799–824. http://dx.doi.org/10.1112/plms/pds099.

43

Marano, Miguel. "Best φ-approximation by n-convex functions." Numerical Functional Analysis and Optimization 20, no. 7-8 (January 1999): 753–77. http://dx.doi.org/10.1080/01630569908816922.

44

Brown, A. L. "Best approximation by continuous n-convex functions." Journal of Approximation Theory 57, no. 1 (April 1989): 69–76. http://dx.doi.org/10.1016/0021-9045(89)90084-1.

45

Zwick, D. "Best L1-approximation by generalized convex functions." Journal of Approximation Theory 59, no. 1 (October 1989): 116–23. http://dx.doi.org/10.1016/0021-9045(89)90164-0.

46

Roth, Walter. "A Korovkin type theorem for weighted spaces of continuous functions." Bulletin of the Australian Mathematical Society 55, no. 2 (April 1997): 239–48. http://dx.doi.org/10.1017/s0004972700033906.

Abstract:
We prove a Korovkin type approximation theorem for positive linear operators on weighted spaces of continuous real-valued functions on a compact Hausdorff space X. These spaces comprise a variety of subspaces of C(X) with suitable locally convex topologies and were introduced by Nachbin (1967) and Prolla (1977). Some early Korovkin type results on the weighted approximation of real-valued functions in one and several variables with a single weight function are due to Gadzhiev (1976, 1980).
47

Yu, Guohua. "Approximation of convex type function by partial sums of Fourier series." Applied Mathematics-A Journal of Chinese Universities 19, no. 1 (March 2004): 67–76. http://dx.doi.org/10.1007/s11766-004-0023-z.

48

Chen, Peng Jie, and Dong Dong Wang. "A Kernel-Enriched Quadratic Convex Meshfree Approximation." Applied Mechanics and Materials 444-445 (October 2013): 85–89. http://dx.doi.org/10.4028/www.scientific.net/amm.444-445.85.

Abstract:
Convex meshfree approximation with non-negative shape functions yields a strictly positive mass matrix and is particularly favorable for dynamic analysis. In this work, a kernel-enriched quadratic convex meshfree formulation with an adjustable local approximation feature is presented. This formulation is built upon the generalized meshfree approximation with a relaxed quadratic reproducing condition. The resulting shape functions of the kernel-enriched quadratic convex meshfree formulation are presented in detail. The convergence behaviors for both static and vibration problems are discussed. Numerical results show that better accuracy can be achieved with the present formulation.
49

Manne, Per E. "Carleman Approximation by Entire Functions on the Union of Two Totally Real Subspaces of C^n." Canadian Mathematical Bulletin 37, no. 4 (December 1, 1994): 522–26. http://dx.doi.org/10.4153/cmb-1994-075-3.

Abstract:
Let L1, L2 ⊂ C^n be two totally real subspaces of real dimension n such that L1 ∩ L2 = {0}. We show that continuous functions on L1 ∪ L2 allow Carleman approximation by entire functions if and only if L1 ∪ L2 is polynomially convex. If the latter condition is satisfied, then a function f : L1 ∪ L2 → C with f|L_i ∈ C^k(L_i), i = 1, 2, allows Carleman approximation of order k by entire functions if and only if f satisfies the Cauchy-Riemann equations up to order k at the origin.
50

Jamjoom, F. B., A. A. Siddiqui, H. M. Tahlawi, and A. M. Peralta. "APPROXIMATION AND CONVEX DECOMPOSITION BY EXTREMALS AND THE λ-FUNCTION IN JBW*-TRIPLES." Quarterly Journal of Mathematics 66, no. 2 (January 19, 2015): 583–603. http://dx.doi.org/10.1093/qmath/hau036.
