Academic literature on the topic 'Function approximation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Function approximation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Function approximation"

1

Ludanov, Konstantin. "METHOD OF OBTAINING APPROXIMATE FORMULAS." EUREKA: Physics and Engineering 2 (March 30, 2018): 72–78. http://dx.doi.org/10.21303/2461-4262.2018.00589.

Full text
Abstract:
A two-parameter method is proposed for approximating the sum of a power series by its first three expansion terms, which makes it possible to obtain analytic approximations of various functions that expand into a Maclaurin series. As the approximating function, it is proposed to use an elementary function raised to the Nth power, but with the variable x "compressed" or "stretched" by introducing a numerical factor M (x ≡ ε ∙ m, M ≠ 0) into it. The use of this method makes it possible to significantly extend the range of very accurate approximation of the resulting approximate function relative to the corresponding range of the original three-term fragment of the series. Expressions for both approximation parameters (M and N) are obtained in general form and are determined by the coefficients of the second and third terms of the Maclaurin series. Expressions for both approximation parameters are also found for the case where the basis function and the approximant expand into Maclaurin series in even powers of the argument. A number of examples of approximating functions on the basis of analyzing the power series into which they expand are given.
APA, Harvard, Vancouver, ISO, and other styles
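The idea in this abstract can be sketched in a few lines: determine M and N by matching the second and third Maclaurin coefficients of the target to those of an elementary base raised to the Nth power with a scaled argument. The closed-form expressions below are a reconstruction from the stated conditions, not necessarily the paper's exact formulas; the base (1 + Mx)^N and the test function ln(1 + x) are illustrative choices.

```python
import math

def two_param_power_approx(a1, a2):
    """Given the 2nd and 3rd Maclaurin coefficients a1, a2 of
    f(x) = 1 + a1*x + a2*x**2 + ..., solve N*M = a1 and
    N*(N-1)*M**2/2 = a2 so that (1 + M*x)**N matches those terms."""
    # Dividing the equations: (N - 1)/N = 2*a2/a1**2  =>  N = 1/(1 - 2*a2/a1**2)
    ratio = 2.0 * a2 / a1 ** 2
    N = 1.0 / (1.0 - ratio)
    M = a1 / N
    return M, N

# Example: 1 + ln(1+x) = 1 + x - x**2/2 + ...  (a1 = 1, a2 = -1/2)
M, N = two_param_power_approx(1.0, -0.5)          # gives M = 2, N = 0.5
approx = lambda x: (1.0 + M * x) ** N - 1.0       # approximation of ln(1+x)
series3 = lambda x: x - x ** 2 / 2.0              # bare three-term fragment

x = 1.0
print(abs(approx(x) - math.log(1 + x)))   # ~0.039
print(abs(series3(x) - math.log(1 + x)))  # ~0.19
```

Even at x = 1, well outside the region where the truncated series is accurate, the two-parameter form stays close, which is the abstract's claim about extending the range of accurate approximation.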
2

Howard, Roy M. "Dual Taylor Series, Spline Based Function and Integral Approximation and Applications." Mathematical and Computational Applications 24, no. 2 (April 1, 2019): 35. http://dx.doi.org/10.3390/mca24020035.

Full text
Abstract:
In this paper, function approximation is utilized to establish functional series approximations to integrals. The starting point is the definition of a dual Taylor series, which is a natural extension of a Taylor series, and spline based series approximation. It is shown that a spline based series approximation to an integral yields, in general, a higher accuracy for a set order of approximation than a dual Taylor series, a Taylor series and an antiderivative series. A spline based series for an integral has many applications and indicative examples are detailed. These include a series for the exponential function, which coincides with a Padé series, new series for the logarithm function, as well as new series for integral-defined functions such as the Fresnel Sine integral function. It is shown that these series are more accurate and have larger regions of convergence than corresponding Taylor series. The spline based series for an integral can be used to define algorithms for highly accurate approximations for the logarithm function, the exponential function, rational numbers to a fractional power and the inverse sine, inverse cosine and inverse tangent functions. These algorithms are used to establish highly accurate approximations for π and Catalan’s constant. The use of sub-intervals allows the region of convergence for an integral approximation to be extended.
APA, Harvard, Vancouver, ISO, and other styles
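The advantage claimed for spline-based integral approximation (data at both endpoints of the interval beating one-sided Taylor data of the same order) can be illustrated with the two-point cubic Hermite quadrature rule, sometimes called the corrected trapezoid rule. This shows only the flavor of the construction, not the paper's dual Taylor or spline series themselves:

```python
import math

def corrected_trapezoid(f, df, a, b):
    """Two-point cubic-Hermite (spline-flavoured) approximation to the
    integral of f over [a, b]: it uses f and f' at BOTH endpoints, like a
    spline segment, rather than derivatives at one point only."""
    h = b - a
    return h / 2.0 * (f(a) + f(b)) + h * h / 12.0 * (df(a) - df(b))

# Compare with integrating a one-sided (Taylor) cubic model of exp about x = 0.
f, df = math.exp, math.exp
a, b = 0.0, 1.0
exact = math.e - 1.0

spline_like = corrected_trapezoid(f, df, a, b)
taylor_int = 1.0 + 1.0 / 2.0 + 1.0 / 6.0 + 1.0 / 24.0  # integral of 1 + x + x**2/2 + x**3/6

print(abs(spline_like - exact))  # ≈ 2.3e-3
print(abs(taylor_int - exact))   # ≈ 1.0e-2
```

Both approximations use a cubic model of the integrand, but the Hermite rule spreads its data across the interval and lands noticeably closer to the exact value e − 1.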
3

Howard, Roy M. "Arbitrarily Accurate Analytical Approximations for the Error Function." Mathematical and Computational Applications 27, no. 1 (February 9, 2022): 14. http://dx.doi.org/10.3390/mca27010014.

Full text
Abstract:
A spline-based integral approximation is utilized to define a sequence of approximations to the error function that converge significantly faster than the default Taylor series. The real case is considered, and the approximations can be improved by utilizing the approximation erf(x) ≈ 1 for |x| > x0, with x0 optimally chosen. Two generalizations are possible; the first is based on demarcating the integration interval into m equally spaced subintervals. The second is based on utilizing a larger fixed subinterval, with a known integral, and a smaller subinterval whose integral is to be approximated. Both generalizations lead to significantly improved accuracy. Furthermore, the initial approximations, and those arising from the first generalization, can be utilized as inputs to a custom dynamic system to establish approximations with better convergence properties. Indicative results include those of a fourth-order approximation, based on four subintervals, which leads to a relative error bound of 1.43 × 10⁻⁷ over the interval [0, ∞). The corresponding sixteenth-order approximation achieves a relative error bound of 2.01 × 10⁻¹⁹. Various approximations that achieve the set relative error bounds of 10⁻⁴, 10⁻⁶, 10⁻¹⁰, and 10⁻¹⁶ over [0, ∞) are specified. Applications include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp(−x²) that converge significantly faster than a Taylor series approximation. Fourth, the definition of a complementary demarcation function e_C(x) that satisfies the constraint e_C²(x) + erf²(x) = 1. Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal that is modeled by the error function.
APA, Harvard, Vancouver, ISO, and other styles
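The two baseline ingredients mentioned in the abstract (the slowly converging Maclaurin series for erf, and the cutoff erf(x) ≈ 1 for |x| > x0) can be sketched directly; x0 = 3 and the 30-term truncation are illustrative choices, and the paper's spline-based approximations are not reproduced here:

```python
import math

def erf_series(x, terms=30):
    """Maclaurin series for erf: (2/sqrt(pi)) * sum (-1)^n x^(2n+1) / (n! (2n+1)).
    This is the slowly converging baseline the paper improves on."""
    s, term_pow = 0.0, x          # term_pow tracks (-1)^n x^(2n+1) / n!
    for n in range(terms):
        s += term_pow / (2 * n + 1)
        term_pow *= -x * x / (n + 1)
    return 2.0 / math.sqrt(math.pi) * s

def erf_with_cutoff(x, x0=3.0, terms=30):
    """Use erf(x) ≈ sign(x) for |x| > x0, where the series needs many terms."""
    if abs(x) > x0:
        return math.copysign(1.0, x)
    return erf_series(x, terms)

for x in (0.5, 1.0, 2.0, 4.0):
    print(x, abs(erf_with_cutoff(x) - math.erf(x)))
```

The cutoff already caps the error at |1 − erf(x0)| for large arguments; the paper's contribution is a sequence of approximations whose error over the whole half-line can be driven down to arbitrary set bounds.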
4

Patseika, Pavel G., Yauheni A. Rouba, and Kanstantin A. Smatrytski. "On one rational integral operator of Fourier – Chebyshev type and approximation of Markov functions." Journal of the Belarusian State University. Mathematics and Informatics, no. 2 (July 30, 2020): 6–27. http://dx.doi.org/10.33581/2520-6508-2020-2-6-27.

Full text
Abstract:
The purpose of this paper is to construct an integral rational Fourier operator based on the system of Chebyshev – Markov rational functions and to study its approximation properties on classes of Markov functions. In the introduction, the main results of well-known works on approximations of Markov functions are presented. Rational approximation of such functions is a well-known classical problem. It was studied by A. A. Gonchar, T. Ganelius, J.-E. Andersson, A. A. Pekarskii, G. Stahl and other authors. In the main part, an integral operator of Fourier – Chebyshev type with respect to the rational Chebyshev – Markov functions, which is a rational function of order no higher than n, is introduced, and the approximation of Markov functions is studied. If the measure satisfies the conditions supp μ = [1, a], a > 1, dμ(t) = ϕ(t)dt and ϕ(t) ≍ (t − 1)^α on [1, a], estimates of pointwise and uniform approximation and an asymptotic expression for the majorant of uniform approximation are established. In the case of a fixed number of geometrically distinct poles in the extended complex plane, values of optimal parameters that provide the highest rate of decrease of this majorant are found, as well as asymptotically accurate estimates of the best uniform approximation by this method in the case of an even number of geometrically distinct poles of the approximating function. In the final part, we present asymptotic estimates of the approximation of some elementary functions that can be represented by Markov functions.
APA, Harvard, Vancouver, ISO, and other styles
5

Malachivskyy, Petro. "Chebyshev approximation of the multivariable functions by some nonlinear expressions." Physico-mathematical modelling and informational technologies, no. 33 (September 2, 2021): 18–22. http://dx.doi.org/10.15407/fmmit2021.33.018.

Full text
Abstract:
A method for constructing a Chebyshev approximation of multivariable functions by exponential, logarithmic and power expressions is proposed. It consists in reducing the problem of Chebyshev approximation by a nonlinear expression to the construction of an intermediate Chebyshev approximation by a generalized polynomial. The intermediate Chebyshev approximation by a generalized polynomial is calculated for the values of a certain functional transformation of the function being approximated. The Chebyshev approximation of the multivariable functions by a polynomial is constructed by an iterative scheme based on the method of least squares with a variable weight function.
APA, Harvard, Vancouver, ISO, and other styles
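The core device in this abstract, computing a Chebyshev (minimax) approximation through iterated least squares with a variable weight function, is closely related to Lawson's classical iteration. Below is a minimal sketch for a linear fit to e^x on a grid, assuming Lawson-style weight updates (w ← w·|residual|, renormalized) rather than the authors' exact scheme:

```python
import math

def weighted_line_fit(xs, ys, ws):
    """Solve the 2x2 weighted normal equations for f(x) ≈ a + b*x."""
    S0 = sum(ws)
    S1 = sum(w * x for w, x in zip(ws, xs))
    S2 = sum(w * x * x for w, x in zip(ws, xs))
    T0 = sum(w * y for w, y in zip(ws, ys))
    T1 = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = S0 * S2 - S1 * S1
    return (S2 * T0 - S1 * T1) / det, (S0 * T1 - S1 * T0) / det

xs = [i / 100.0 for i in range(101)]
ys = [math.exp(x) for x in xs]
ws = [1.0] * len(xs)

a, b = weighted_line_fit(xs, ys, ws)
ls_max = max(abs(y - a - b * x) for x, y in zip(xs, ys))  # plain least-squares fit

for _ in range(200):  # Lawson: reweight by |residual|, refit
    res = [abs(y - a - b * x) for x, y in zip(xs, ys)]
    total = sum(w * r for w, r in zip(ws, res))
    ws = [w * r / total for w, r in zip(ws, res)]
    a, b = weighted_line_fit(xs, ys, ws)

minimax_max = max(abs(y - a - b * x) for x, y in zip(xs, ys))
print(ls_max, minimax_max)  # worst-case error shrinks toward the minimax value (~0.106)
```

The weights concentrate on the extremal points of the residual, so the least-squares machinery ends up delivering the equioscillating Chebyshev fit.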
6

Wang, Zi, Aws Albarghouthi, Gautam Prakriya, and Somesh Jha. "Interval universal approximation for neural networks." Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–29. http://dx.doi.org/10.1145/3498675.

Full text
Abstract:
To verify safety and robustness of neural networks, researchers have successfully applied abstract interpretation, primarily using the interval abstract domain. In this paper, we study the theoretical power and limits of the interval domain for neural-network verification. First, we introduce the interval universal approximation (IUA) theorem. IUA shows that neural networks not only can approximate any continuous function f (universal approximation) as we have known for decades, but we can find a neural network, using any well-behaved activation function, whose interval bounds are an arbitrarily close approximation of the set semantics of f (the result of applying f to a set of inputs). We call this notion of approximation interval approximation. Our theorem generalizes the recent result of Baader et al. from ReLUs to a rich class of activation functions that we call squashable functions. Additionally, the IUA theorem implies that we can always construct provably robust neural networks under ℓ∞-norm using almost any practical activation function. Second, we study the computational complexity of constructing neural networks that are amenable to precise interval analysis. This is a crucial question, as our constructive proof of IUA is exponential in the size of the approximation domain. We boil this question down to the problem of approximating the range of a neural network with squashable activation functions. We show that the range approximation problem (RA) is a Δ₂-intermediate problem, which is strictly harder than NP-complete problems, assuming coNP ⊄ NP. As a result, IUA is an inherently hard problem: No matter what abstract domain or computational tools we consider to achieve interval approximation, there is no efficient construction of such a universal approximator. This implies that it is hard to construct a provably robust network, even if we have a robust network to start with.
APA, Harvard, Vancouver, ISO, and other styles
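The interval abstract domain the paper studies can be illustrated with plain interval bound propagation through a tiny ReLU network. The weights below are hypothetical; the point is that the output interval soundly over-approximates the network's true range on the input box, while the IUA theorem asks when such bounds can also be made arbitrarily tight:

```python
def affine_interval(lo, hi, weights, biases):
    """Propagate an input box through y = Wx + b with interval arithmetic:
    each weight picks the endpoint of its input interval by its sign."""
    out = []
    for w_row, b in zip(weights, biases):
        l = b + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(w_row))
        h = b + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(w_row))
        out.append((l, h))
    return [l for l, _ in out], [h for _, h in out]

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# A hypothetical 2-2-1 ReLU network.
W1, b1 = [[1.0, -1.0], [0.5, 2.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.5]

lo, hi = [0.0, 0.0], [1.0, 1.0]  # input box [0,1] x [0,1]
lo, hi = affine_interval(lo, hi, W1, b1)
lo, hi = relu_interval(lo, hi)
lo, hi = affine_interval(lo, hi, W2, b2)
print(lo, hi)  # → [0.5] [3.0]
```

The bound is sound but generally loose, since interval arithmetic ignores correlations between neurons; IUA characterizes when a network can be built whose interval bounds are nonetheless arbitrarily precise.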
7

Patseika, Pavel G., and Yauheni A. Rouba. "Fejer means of rational Fourier – Chebyshev series and approximation of function |x|^s." Journal of the Belarusian State University. Mathematics and Informatics, no. 3 (November 29, 2019): 18–34. http://dx.doi.org/10.33581/2520-6508-2019-3-18-34.

Full text
Abstract:
Approximation properties of the Fejér means of Fourier series in the Chebyshev – Markov system of algebraic fractions, and the approximation of the function |x|^s, 0 < s < 2, on the interval [−1,1] by these means, are studied. One orthogonal system of Chebyshev – Markov algebraic fractions is considered, and the Fejér means of the corresponding rational Fourier – Chebyshev series are introduced. The order of approximation of continuous functions on a segment by the sequence of Fejér means is estimated in terms of the modulus of continuity, and sufficient conditions on the parameter providing uniform convergence are established. Estimates of the pointwise and uniform approximation of the function |x|^s, 0 < s < 2, on the interval [−1,1], asymptotic expressions as n→∞ for the majorant of uniform approximation, and the optimal value of the parameter providing the highest rate of approximation of the studied function by the rational Fourier – Chebyshev sums are found.
APA, Harvard, Vancouver, ISO, and other styles
8

Zakharchenko, S. M., N. A. Shydlovska, and I. L. Mazurenko. "DISCREPANCY PARAMETERS OF APPROXIMATIONS OF DISCRETELY SPECIFIED DEPENDENCIES BY ANALYTICAL FUNCTIONS AND SEARCH CRITERIA FOR OPTIMAL VALUES OF THEIR COEFFICIENTS." Praci Institutu elektrodinamiki Nacionalanoi akademii nauk Ukraini 2021, no. 59 (September 20, 2021): 11–19. http://dx.doi.org/10.15407/publishing2021.59.011.

Full text
Abstract:
Universal discrepancy parameters for approximations of discretely specified dependencies by analytical functions, search criteria for the optimal values of their coefficients, and an analysis of the features of their application are described. Discrepancy parameters of approximations that do not depend on the range of variation of the function values or on the number of points of a discretely specified dependence are proposed. They can be effective for objectively comparing the quality of approximations of any dependencies by any functions. Approximations of a discretely specified dependence of the mathematical expectation of the equivalent electrical resistance of a layer of aluminum granules during spark-erosion dispersion in water on the instantaneous values of the discharge current are carried out. As approximating functions, a power function with exponent −1 and a function based on an exponential were chosen. Using the criteria of the least approximation error, the optimal values of the coefficients of both approximating functions are found. It is shown in which cases it is advisable to use combined search criteria for the optimal values of the coefficients of the approximating functions, and in which cases simple one-component criteria are sufficient. Ref. 27, fig. 2, tables 2.
APA, Harvard, Vancouver, ISO, and other styles
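A discrepancy parameter that is insensitive to the scale of the function values and to the number of sample points, the property this abstract emphasizes, can be sketched as follows. The specific normalization (RMS residual divided by the data spread) is a hypothetical choice for illustration, not the paper's definition:

```python
import math

def normalized_discrepancy(ys, fits):
    """RMS residual normalized by the spread of the data: dividing by n makes
    it independent of the number of points, and dividing by the spread makes
    it independent of the scale of the function values."""
    n = len(ys)
    rms = math.sqrt(sum((y - f) ** 2 for y, f in zip(ys, fits)) / n)
    spread = max(ys) - min(ys)
    return rms / spread

ys = [1.0, 2.0, 4.0, 8.0]
fits = [1.1, 1.9, 4.2, 7.9]
d1 = normalized_discrepancy(ys, fits)
# Rescaling data and fit together leaves the discrepancy (essentially) unchanged:
d2 = normalized_discrepancy([10 * y for y in ys], [10 * f for f in fits])
print(d1, d2)
```

Such a dimensionless parameter lets approximations of dependencies with very different magnitudes and sampling densities be compared objectively.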
9

GREPL, MARTIN A. "CERTIFIED REDUCED BASIS METHODS FOR NONAFFINE LINEAR TIME-VARYING AND NONLINEAR PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS." Mathematical Models and Methods in Applied Sciences 22, no. 03 (March 2012): 1150015. http://dx.doi.org/10.1142/s0218202511500151.

Full text
Abstract:
We present reduced basis approximations and associated a posteriori error bounds for parabolic partial differential equations involving (i) a nonaffine dependence on the parameter and (ii) a nonlinear dependence on the field variable. The method employs the Empirical Interpolation Method in order to construct "affine" coefficient-function approximations of the "nonaffine" (or nonlinear) parametrized functions. We consider linear time-invariant as well as linear time-varying nonaffine functions and introduce a new sampling approach to generate the function approximation space for the latter case. Our a posteriori error bounds take both error contributions explicitly into account — the error introduced by the reduced basis approximation and the error induced by the coefficient function interpolation. We show that these bounds are rigorous upper bounds for the approximation error under certain conditions on the function interpolation, thus addressing the demand for certainty of the approximation. As regards efficiency, we develop an offline–online computational procedure for the calculation of the reduced basis approximation and associated error bound. The method is thus ideally suited for the many-query or real-time contexts. Numerical results are presented to confirm and test our approach.
APA, Harvard, Vancouver, ISO, and other styles
10

Nasir, Haniffa M., and Kamel Nafa. "A New Second Order Approximation for Fractional Derivatives with Applications." Sultan Qaboos University Journal for Science [SQUJS] 23, no. 1 (May 6, 2018): 43. http://dx.doi.org/10.24200/squjs.vol23iss1pp43-55.

Full text
Abstract:
We propose a generalized theory to construct higher order Grünwald type approximations for fractional derivatives. We use this generalization to simplify the proofs of orders for existing approximation forms for the fractional derivative. We also construct a set of higher order Grünwald type approximations for fractional derivatives in terms of a general real sequence and its generating function. From this, a second order approximation with shift is shown to be useful in approximating steady state problems and time dependent fractional diffusion problems. Stability and convergence for a Crank-Nicolson type scheme for this second order approximation are analyzed and are supported by numerical results.
APA, Harvard, Vancouver, ISO, and other styles
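The classical first-order Grünwald approximation that this paper generalizes can be written down directly from the standard coefficient recurrence; the sanity check against the known half-derivative of f(x) = x is illustrative, and the paper's higher-order shifted variants are not reproduced here:

```python
import math

def gl_weights(alpha, n):
    """Grünwald–Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    generated by the recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f, alpha, x, h, n):
    """First-order Grünwald approximation of the alpha-th derivative at x."""
    w = gl_weights(alpha, n)
    return sum(wk * f(x - k * h) for k, wk in enumerate(w)) / h ** alpha

# Known case: the 0.5-th derivative of f(x) = x (with f = 0 for x < 0)
# at x = 1 is 2*sqrt(x/pi).
x, alpha = 1.0, 0.5
exact = 2.0 * math.sqrt(x / math.pi)
approx = gl_derivative(lambda t: t if t > 0 else 0.0, alpha, x, 1e-4, 10000)
print(approx, exact)
```

For alpha = 1 the weights collapse to [1, −1, 0, …], recovering the ordinary backward difference, which is a handy check that the recurrence is right.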

Dissertations / Theses on the topic "Function approximation"

1

楊文聰 and Man-chung Yeung. "Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31209531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

伍卓仁 and Cheuk-yan Ng. "Pointwise Korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31210934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

吳家樂 and Ka-lok Ng. "Relative korovkin approximation in function spaces." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31213479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Miranda, Brando M. Eng Massachusetts Institute of Technology. "Training hierarchical networks for function approximation." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113159.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-60).
In this work we investigate function approximation using hierarchical networks. We start off by investigating the theory proposed by Poggio et al. [2] that deep convolutional neural networks (DCNs) can be equivalent to hierarchical kernel machines with radial basis functions (RBFs). We investigate the difficulty of training RBF networks with stochastic gradient descent (SGD), as well as hierarchical RBF networks. We discovered that training single-layered RBF networks can be quite simple with a good initialization and a good choice of standard deviation for the Gaussian. Training hierarchical RBFs remains an open question; however, we clearly identified the issues surrounding training hierarchical RBFs and potential methods to resolve them. We also compare standard DCN networks to hierarchical radial basis functions in a task that has not been explored yet: the role of depth in learning compositional functions.
by Brando Miranda.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
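The single-layer RBF setting that the thesis finds easy to train can also be solved in closed form once the centres are fixed. A minimal pure-Python sketch with Gaussian units centred on the data points and a hand-picked sigma (the hyperparameter the thesis flags as important); the data and sigma are illustrative:

```python
import math

def rbf_fit(xs, ys, sigma):
    """Interpolate 1-D data with one Gaussian RBF centred on each training
    point, solving the small dense system by Gaussian elimination."""
    n = len(xs)
    A = [[math.exp(-(xs[i] - xs[j]) ** 2 / (2 * sigma ** 2)) for j in range(n)]
         for i in range(n)]
    b = list(ys)
    # Forward elimination with partial pivoting.
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))
        coeffs[i] = s / A[i][i]
    return lambda x: sum(c * math.exp(-(x - xc) ** 2 / (2 * sigma ** 2))
                         for c, xc in zip(coeffs, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
model = rbf_fit(xs, ys, sigma=0.5)
print(max(abs(model(x) - y) for x, y in zip(xs, ys)))  # tiny: exact interpolation
```

With fixed centres the problem is linear, which is why single-layer RBF training is benign; stacking such layers destroys this linearity, which is the hierarchical difficulty the thesis leaves open.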
5

Ng, Ka-lok. "Relative korovkin approximation in function spaces /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B17506074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ng, Cheuk-yan. "Pointwise Korovkin approximation in function spaces /." [Hong Kong : University of Hong Kong], 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13474522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cheung, Ho Yin. "Function approximation with higher-order fuzzy systems /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202006%20CHEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ong, Wen Eng. "Some Basis Function Methods for Surface Approximation." Thesis, University of Canterbury. Mathematics and Statistics, 2012. http://hdl.handle.net/10092/7776.

Full text
Abstract:
This thesis considers issues in surface reconstruction such as identifying approximation methods that work well for certain applications and developing efficient methods to compute and manipulate these approximations. The first part of the thesis illustrates a new fast evaluation scheme to efficiently calculate thin-plate splines in two dimensions. In the fast multipole method scheme, exponential expansions/approximations are used as an intermediate step in converting far field series to local polynomial approximations. The contributions here are extending the scheme to the thin-plate spline and a new error analysis. The error analysis covers the practically important case where truncated series are used throughout, and through off-line computation of error constants gives sharp error bounds. In the second part of this thesis, we investigate fitting a surface to an object using blobby models as a coarse level approximation. The aim is to achieve a given quality of approximation with relatively few parameters. This process involves an optimization procedure where a number of blobs (ellipses or ellipsoids) are separately fitted to a cloud of points. Then the optimized blobs are combined to yield an implicit surface approximating the cloud of points. The results for our test cases in 2 and 3 dimensions are very encouraging. For many applications, the coarse level blobby model itself will be sufficient. For example, adding texture on top of the blobby surface can give a surprisingly realistic image. The last part of the thesis describes a method to reconstruct surfaces with known discontinuities. We fit a surface to the data points by performing a scattered data interpolation using compactly supported RBFs with respect to a geodesic distance. Techniques from computational geometry such as the visibility graph are used to compute the shortest Euclidean distance between two points, avoiding any obstacles.
Results have shown that discontinuities on the surface were clearly reconstructed, and the
APA, Harvard, Vancouver, ISO, and other styles
9

Strand, Filip. "Latent Task Embeddings forFew-Shot Function Approximation." Thesis, KTH, Optimeringslära och systemteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243832.

Full text
Abstract:
Approximating a function from a few data points is of great importance in fields where data is scarce, like, for example, in robotics applications. Recently, scalable and expressive parametric models like deep neural networks have demonstrated superior performance on a wide variety of function approximation tasks when plenty of data is available; however, these methods tend to perform considerably worse in low-data regimes, which calls for alternative approaches. One way to address such limitations is by leveraging prior information about the function class to be estimated when such information is available. Sometimes this prior may be known in closed mathematical form, but in general it is not. This thesis is concerned with the more general case where the prior can only be sampled from, such as a black-box forward simulator. To this end, we propose a simple and scalable approach to learning a prior over functions by training a neural network on data from a distribution of related functions. This step amounts to building a so-called latent task embedding in which all related functions (tasks) reside and which can later be efficiently searched at task-inference time - a process called fine-tuning. The proposed method can be seen as a special type of auto-encoder and employs the same idea of encoding individual data points during training as the recently proposed Conditional Neural Processes. We extend this work by also incorporating an auxiliary task and by providing additional latent-space search methods for increased performance after the initial training step. The task-embedding framework makes finding the right function from a family of related functions quick and generally requires only a few informative data points from that function. We evaluate the method by regressing onto the harmonic family of curves and also by applying it to two robotic systems with the aim of quickly identifying and controlling those systems.
Being able to quickly approximate a function from a few data points is an important problem, especially in fields where the available data sets are relatively small, for example in parts of robotics. In recent years, flexible and scalable learning methods such as neural networks have shown outstanding performance in scenarios where a large amount of data is available. However, these methods tend to perform considerably worse in low-data regimes, which motivates the search for alternative methods. One way to address this limitation is to exploit prior experience and assumptions (prior information) about the function class to be approximated when such information is available. Sometimes this type of information can be expressed in closed mathematical form, but more generally this is not the case. This thesis focuses on the more general case where we only assume that we can sample data points from a database of prior experience - for example from a simulator whose internal details are unknown. To this end, we propose a method for learning from this prior experience by pre-training on a larger data set that constitutes a family of related functions. In this step we build up a so-called latent task embedding that encloses all variations of functions in the training data and that can then be searched efficiently in order to find a specific function - a process we call fine-tuning. The proposed method can be regarded as a special case of an auto-encoder and uses the same idea as the recently published Conditional Neural Processes method, in which individual data points are separately encoded and grouped. We extend this method by incorporating an auxiliary function and by proposing additional methods for searching the latent function space after the initial training. 
The proposed method makes it possible to search for a specific function with typically only a few data points. We evaluate the method by studying its curve-fitting ability on sine curves and by applying it to two robotics problems with the aim of quickly identifying and controlling these dynamical systems.
APA, Harvard, Vancouver, ISO, and other styles
10

Hou, Jun. "Function Approximation and Classification with Perturbed Data." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618266875924225.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Function approximation"

1

Nikolʹskiĭ, S. M. Izbrannye trudy: V trekh tomakh. Moskva: Nauka, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hedberg, Lars Inge. An axiomatic approach to function spaces, spectral synthesis, and Luzin approximation. Providence, RI: American Mathematical Society, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

International Conference "Function spaces, approximation theory, and nonlinear analysis" (2005). Funkt︠s︡ionalʹnye prostranstva teorii︠a︡ priblizheniĭ nelineĭnyĭ analiz: Sbornik stateĭ. Moskva: Nauka, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Domich, P. D. A near-optimal starting solution for polynomial approximation of a continuous function in the L1 norm. [Washington, D.C.]: U.S. Dept. of Commerce, National Bureau of Standards, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ghatak, A. K. Modified Airy function and WKB solutions to the wave equation. [Gaithersburg, Md.]: National Institute of Standards and Technology, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Oswald, Peter. Multilevel finite element approximation: Theory and applications. Stuttgart: Teubner, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

International Conference and Workshop Function Spaces, Approximation Theory, Nonlinear Analysis (2005 Moscow, Russia). Mezhdunarodnai︠a︡ konferent︠s︡ii︠a︡ Funkt︠s︡ionalʹnye prostranstva, teorii︠a︡ priblizheniĭ, nelineĭnyĭ analiz, Moskva, 23-29 mai︠a︡ 2005 g., posvi︠a︡shchennai︠a︡ stoletii︠u︡ Sergei︠a︡ Mikhaĭlovicha Nikolʹskogo (rodilsi︠a︡ 30. IV.1905), tezisy dokladov: International Conference and Workshop Function Spaces, Approximation Theory, Nonlinear Analysis, Moscow, Russia, May 23-29, 2005, dedicated to the centennial of Sergei Mikhailovich Nikolskii (born 30. IV.1905), abstracts. Moskva: Matematicheskiĭ in-t im. V.A. Steklova RAN (MIAN), 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Function approximation"

1

Abe, Shigeo. "Function Approximation." In Support Vector Machines for Pattern Classification, 395–442. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-098-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Peterson, James K. "Function Approximation." In Calculus for Cognitive Scientists, 279–99. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-287-874-8_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Plaat, Aske. "Function Approximation." In Learning to Play, 135–94. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59238-7_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sanghi, Nimish. "Function Approximation." In Deep Reinforcement Learning with Python, 123–54. Berkeley, CA: Apress, 2021. http://dx.doi.org/10.1007/978-1-4842-6809-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Deb, Anish, Srimanti Roychoudhury, and Gautam Sarkar. "Function Approximation via Hybrid Functions." In Analysis and Identification of Time-Invariant Systems, Time-Varying Systems, and Multi-Delay Systems using Orthogonal Hybrid Functions, 49–86. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-26684-8_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Langtangen, Hans Petter, and Kent-Andre Mardal. "Function Approximation by Global Functions." In Texts in Computational Science and Engineering, 7–68. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23788-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barnsley, Michael F., John H. Elton, and Douglas P. Hardin. "Recurrent Iterated Function Systems." In Constructive Approximation, 3–31. Boston, MA: Springer US, 1989. http://dx.doi.org/10.1007/978-1-4899-6886-9_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Utgoff, Paul E., and Doina Precup. "Constructive Function Approximation." In Feature Extraction, Construction and Selection, 219–35. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5725-8_14.

9

Abe, Shigeo. "Robust Function Approximation." In Pattern Classification, 287–97. London: Springer London, 2001. http://dx.doi.org/10.1007/978-1-4471-0285-4_16.

10

Whiteson, Shimon. "Evolutionary Function Approximation." In Adaptive Representations for Reinforcement Learning, 31–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13932-1_4.


Conference papers on the topic "Function approximation"

1

Simonov, Boris V., and Sergey Yu Tikhonov. "On embeddings of function classes defined by constructive characteristics." In Approximation and Probability. Warsaw: Institute of Mathematics Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-19.

2

Jaffard, Stéphane. "Pointwise regularity associated with function spaces and multifractal analysis." In Approximation and Probability. Warsaw: Institute of Mathematics Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-7.

3

Sickel, Winfried. "Approximation from sparse grids and function spaces of dominating mixed smoothness." In Approximation and Probability. Warsaw: Institute of Mathematics Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-18.

4

Pycke, J. R. "Explicit Karhunen-Loève expansions related to the Green function of the Laplacian." In Approximation and Probability. Warsaw: Institute of Mathematics Polish Academy of Sciences, 2006. http://dx.doi.org/10.4064/bc72-0-17.

5

Zhou, Yongquan, Bai Liu, Zhucheng Xie, and Dexiang Luo. "Rational functional network for function approximation." In 2008 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE 2008). IEEE, 2008. http://dx.doi.org/10.1109/iske.2008.4731044.

6

Zhou, Yongquan, Xueyan Lu, Zhucheng Xie, and Bai Liu. "An Orthogonal Functional Network for Function Approximation." In 2008 International Conference on Intelligent Computation Technology and Automation (ICICTA). IEEE, 2008. http://dx.doi.org/10.1109/icicta.2008.164.

7

Zhou, Yongquan, Bai Liu, Huajuan Huang, and Xingqong Wei. "Using separable functional network for function approximation." In 2008 IEEE International Conference on Granular Computing (GrC-2008). IEEE, 2008. http://dx.doi.org/10.1109/grc.2008.4664636.

8

Panda, Biswanath, Mirek Riedewald, Johannes Gehrke, and Stephen B. Pope. "High-Speed Function Approximation." In Seventh IEEE International Conference on Data Mining (ICDM 2007). IEEE, 2007. http://dx.doi.org/10.1109/icdm.2007.107.

9

Davarynejad, Mohsen, Jelmer van Ast, Jos Vrancken, and Jan van den Berg. "Evolutionary value function approximation." In 2011 Ieee Symposium On Adaptive Dynamic Programming And Reinforcement Learning. IEEE, 2011. http://dx.doi.org/10.1109/adprl.2011.5967349.

10

Rodriguez, N., P. Julian, and E. Paolini. "Function Approximation Using Symmetric Simplicial Piecewise-Linear Functions." In 2019 XVIII Workshop on Information Processing and Control (RPIC). IEEE, 2019. http://dx.doi.org/10.1109/rpic.2019.8882186.


Reports on the topic "Function approximation"

1

Ward, Rachel A. Reliable Function Approximation and Estimation. Fort Belvoir, VA: Defense Technical Information Center, August 2016. http://dx.doi.org/10.21236/ad1013972.

2

Lin, Daw-Tung, and Judith E. Dayhoff. Network Unfolding Algorithm and Universal Spatiotemporal Function Approximation. Fort Belvoir, VA: Defense Technical Information Center, January 1994. http://dx.doi.org/10.21236/ada453011.

3

Tong, C. An Adaptive Derivative-based Method for Function Approximation. Office of Scientific and Technical Information (OSTI), October 2008. http://dx.doi.org/10.2172/945874.

4

Nagayama, Shinobu, Tsutomu Sasao, and Jon T. Butler. Programmable Numerical Function Generators Based on Quadratic Approximation: Architecture and Synthesis Method. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada599939.

5

Potamianos, Gerasimos, and John Goutsias. Stochastic Simulation Techniques for Partition Function Approximation of Gibbs Random Field Images. Fort Belvoir, VA: Defense Technical Information Center, June 1991. http://dx.doi.org/10.21236/ada238611.

6

Longcope, Donald B., Jr., Thomas Lynn Warren, and Henry Duong. Aft-body loading function for penetrators based on the spherical cavity-expansion approximation. Office of Scientific and Technical Information (OSTI), December 2009. http://dx.doi.org/10.2172/986592.

7

Tang, Ping Tak Peter. Strong uniqueness of best complex Chebyshev approximation to analytic perturbations of analytic function. Office of Scientific and Technical Information (OSTI), March 1988. http://dx.doi.org/10.2172/6357493.

8

Kitago, Masaki, Shunsuke Ehara, and Ichiro Hagiwara. Efficient Construction of Finite Element Model by Implicit Function Approximation of CAD Model. Warrendale, PA: SAE International, May 2005. http://dx.doi.org/10.4271/2005-08-0127.

9

Schmitt-Grohe, Stephanie, and Martin Uribe. Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function. Cambridge, MA: National Bureau of Economic Research, October 2002. http://dx.doi.org/10.3386/t0282.

10

Blum, L., and Yaakov Rosenfeld. The Direct Correlation function of a Mixture of Hard Ions in the Mean Spherical Approximation. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada232452.
