Journal articles on the topic 'Kernel decomposition formula'


Consult the top 34 journal articles for your research on the topic 'Kernel decomposition formula.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Sawyer, P. "Spherical Functions on SO0(p, q)/ SO(p) × SO(q)." Canadian Mathematical Bulletin 42, no. 4 (December 1, 1999): 486–98. http://dx.doi.org/10.4153/cmb-1999-056-5.

Abstract:
An integral formula is derived for the spherical functions on the symmetric space G/K = SO0(p, q)/ SO(p) × SO(q). This formula allows us to state some results about the analytic continuation of the spherical functions to a tubular neighbourhood of the subalgebra a of the abelian part in the decomposition G = KAK. The corresponding result is then obtained for the heat kernel of the symmetric space SO0(p, q)/ SO(p) × SO(q) using the Plancherel formula. In the Conclusion, we discuss how this analytic continuation can be a helpful tool to study the growth of the heat kernel.
2

LI, YOUFA, and TAO QIAN. "ANALYTIC SAMPLING APPROXIMATION BY PROJECTION OPERATOR WITH APPLICATION IN DECOMPOSITION OF INSTANTANEOUS FREQUENCY." International Journal of Wavelets, Multiresolution and Information Processing 11, no. 05 (September 2013): 1350040. http://dx.doi.org/10.1142/s0219691313500409.

Abstract:
A sequence of special functions in the Hardy space [Formula: see text] is constructed from the Cauchy kernel on the unit disk 𝔻. Applying the projection operator associated with this sequence of functions leads to an analytic sampling approximation to f, any given function in [Formula: see text]. That is, f can be approximated by its analytic samples in 𝔻s. Under a mild condition, f is approximated exponentially fast by its analytic samples. By the analytic sampling approximation, a signal in [Formula: see text] can be approximately decomposed into components of positive instantaneous frequency. Using the circular Hilbert transform, we apply the approximation scheme in [Formula: see text] to Ls(𝕋2) so that a signal in Ls(𝕋2) can be approximated by its analytic samples on ℂs. A numerical experiment is carried out to illustrate our results.
3

MEYER, Y., and Q. X. YANG. "CONTINUITY OF CALDERÓN–ZYGMUND OPERATORS ON BESOV OR TRIEBEL–LIZORKIN SPACES." Analysis and Applications 06, no. 01 (January 2008): 51–81. http://dx.doi.org/10.1142/s0219530508001055.

Abstract:
Calderón–Zygmund operators play an important role in real analysis. The continuity of Calderón–Zygmund operators T on L2 was studied in [2–4] and, here, we investigate the continuity of T on the Besov spaces [Formula: see text] where 1 ≤ p, q ≤ ∞ and on the Triebel–Lizorkin spaces [Formula: see text] where 1 ≤ p < ∞, 1 ≤ q ≤ ∞. The exponents measuring the regularity of the distributional kernel K(x, y) of T away from the diagonal play a key role in our results. They are smaller than the ones assumed by other authors. Moreover, our results are sharp in the case of the Besov spaces [Formula: see text] and of the Triebel–Lizorkin spaces [Formula: see text] when 1 ≤ q ≤ ∞. The proof uses a pseudo-annular decomposition of Calderón–Zygmund operators.
4

Gergün, Seçil, H. Turgay Kaptanoğlu, and A. Ersin Üreyen. "Harmonic Besov spaces on the ball." International Journal of Mathematics 27, no. 09 (August 2016): 1650070. http://dx.doi.org/10.1142/s0129167x16500701.

Abstract:
We initiate a detailed study of two-parameter Besov spaces on the unit ball of [Formula: see text] consisting of harmonic functions whose sufficiently high-order radial derivatives lie in harmonic Bergman spaces. We compute the reproducing kernels of those Besov spaces that are Hilbert spaces. The kernels are weighted infinite sums of zonal harmonics and natural radial fractional derivatives of the Poisson kernel. Estimates of the growth of kernels lead to characterization of integral transformations on Lebesgue classes. The transformations allow us to conclude that the order of the radial derivative is not a characteristic of a Besov space as long as it is above a certain threshold. Using kernels, we define generalized Bergman projections and characterize those that are bounded from Lebesgue classes onto Besov spaces. The projections provide integral representations for the functions in these spaces and also lead to characterizations of the functions in the spaces using partial derivatives. Several other applications follow from the integral representations such as atomic decomposition, growth at the boundary and of Fourier coefficients, inclusions among them, duality and interpolation relations, and a solution to the Gleason problem.
5

Jorgenson, Jay, and Serge Lang. "Hilbert-Asai Eisenstein series, regularized products, and heat kernels." Nagoya Mathematical Journal 153 (1999): 155–88. http://dx.doi.org/10.1017/s0027763000006942.

Abstract:
In a famous paper, Asai indicated how to develop a theory of Eisenstein series for arbitrary number fields, using hyperbolic 3-space to take care of the complex places. Unfortunately he limited himself to class number 1. The present paper gives a detailed exposition of the general case, to be used for many applications. First, it is shown that the Eisenstein series satisfy the authors’ definition of regularized products satisfying the generalized Lerch formula, and the basic axioms which allow the systematic development of the authors’ theory, including the Cramér theorem. It is indicated how previous results of Efrat and Zograf for the strict Hilbert modular case extend to arbitrary number fields, for instance a spectral decomposition of the heat kernel periodized with respect to SL2 of the integers of the number field. This gives rise to a theta inversion formula, to which the authors’ Gauss transform can be applied. In addition, the Eisenstein series can be twisted with the heat kernel, thus encoding an infinite amount of spectral information in one item coming from heat Eisenstein series. The main expected spectral formula is stated, but a complete exposition would require a substantial amount of space, and is currently under consideration.
6

MAIRE, CHRISTIAN. "PLONGEMENTS LOCAUX ET EXTENSIONS DE CORPS DE NOMBRES." International Journal of Number Theory 07, no. 03 (May 2011): 721–38. http://dx.doi.org/10.1142/s1793042111004332.

Abstract:
In this article, we are interested in the embedding [Formula: see text] of the T-units of a number field K in some part of its p-adic completions at S. We show that the injectivity of [Formula: see text] allows us to obtain some information on the structure of the Galois group of certain extensions of K in which the ramification is tied to S and the decomposition to T. Moreover, we study the asymptotic behavior of the kernel of [Formula: see text] along a p-adic analytic extension without p-torsion.
7

Vatankhah, Saeed, Shuang Liu, Rosemary Anne Renaut, Xiangyun Hu, and Jamaledin Baniamerian. "Improving the use of the randomized singular value decomposition for the inversion of gravity and magnetic data." GEOPHYSICS 85, no. 5 (August 17, 2020): G93—G107. http://dx.doi.org/10.1190/geo2019-0603.1.

Abstract:
The focusing inversion of gravity and magnetic potential-field data using the randomized singular value decomposition (RSVD) method is considered. This approach facilitates tackling the computational challenge that arises in the solution of the inversion problem that uses the standard and accurate approximation of the integral equation kernel. We have developed a comprehensive comparison of the developed methodology for the inversion of magnetic and gravity data. The results verify that there is an important difference between the application of the methodology for gravity and magnetic inversion problems. Specifically, RSVD is dependent on the generation of a rank [Formula: see text] approximation to the underlying model matrix, and the results demonstrate that [Formula: see text] needs to be larger, for equivalent problem sizes, for the magnetic problem compared to the gravity problem. Without a relatively large [Formula: see text], the dominant singular values of the magnetic model matrix are not well approximated. We determine that this is due to the spectral properties of the matrix. The comparison also shows us how the use of the power iteration embedded within the randomized algorithm improves the quality of the resulting dominant subspace approximation, especially in magnetic inversion, yielding acceptable approximations for smaller choices of [Formula: see text]. Further, we evaluate how the differences in spectral properties of the magnetic and gravity input matrices also affect the values that are automatically estimated for the regularization parameter. The algorithm is applied and verified for the inversion of magnetic data obtained over a portion of the Wuskwatim Lake region in Manitoba, Canada.
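For readers who want to experiment with the core idea of this entry, the following is a minimal NumPy sketch (not the authors' code) of a randomized SVD with the power-iteration option the abstract refers to; the matrix, target rank, oversampling and iteration count are illustrative placeholders.

```python
import numpy as np

def rsvd(A, k, n_power_iter=2, oversample=10, seed=None):
    """Rank-k randomized SVD with optional power iterations."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # Power iterations sharpen the captured spectrum when singular values
    # decay slowly (the situation reported for the magnetic kernel).
    for _ in range(n_power_iter):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    # Project A onto the low-dimensional subspace and take its exact SVD.
    B = Q.T @ A
    Uh, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uh
    return U[:, :k], s[:k], Vt[:k, :]

# Toy stand-in for a sensitivity matrix.
A = np.random.default_rng(0).standard_normal((500, 800))
U, s, Vt = rsvd(A, k=50)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```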
8

Wang, Yinkun, Jianshu Luo, Xiangling Chen, and Lei Sun. "A Chebyshev collocation method for Hallén’s equation of thin wire antennas." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 34, no. 4 (July 6, 2015): 1319–34. http://dx.doi.org/10.1108/compel-06-2014-0142.

Abstract:
Purpose – The purpose of this paper is to propose a Chebyshev collocation method (CCM) for Hallén’s equation of thin wire antennas. Design/methodology/approach – Since the current induced on a thin wire antenna behaves like the square root of the distance from the end, a smoothed current is used to annihilate this end effect. The CCM then adopts Chebyshev polynomials to approximate the smoothed current, from which the actual current can be quickly recovered. To handle the difficulty of the kernel singularity and to realize fast computation, a decomposition is adopted that separates the singularity from the exact kernel. The integrals involving the singularity in the linear system can be given in an explicit formula, while the others can be evaluated efficiently by the fast cosine transform or the fast Fourier transform. Findings – The CCM converges quickly and is more efficient than other existing methods. Specifically, it attains less than 1 percent relative error using 32 basis functions when a/h is greater than 2×10−5, where h is the half length of the wire antenna and a is the radius of the antenna. In addition, a new, efficient scheme to evaluate the exact kernel is proposed and compared with most methods in the literature. Originality/value – Since the kernel evaluation is vital to the solution of Hallén’s and Pocklington’s equations, the proposed scheme to evaluate the exact kernel may help improve the efficiency of existing methods in the study of wire antennas. Owing to its good convergence and efficiency, the CCM may be a competitive method for analyzing the radiation properties of thin wire antennas. Several numerical experiments are presented to validate the proposed method.
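As a side illustration of why a Chebyshev basis is attractive here, the short sketch below (unrelated to the authors' implementation, using a made-up smooth current profile) shows the rapid convergence of Chebyshev interpolation; the actual method additionally handles the kernel singularity and the end-point square-root behaviour.

```python
import numpy as np

def cheb_points(n):
    """Chebyshev nodes of the first kind on [-1, 1]."""
    return np.cos(np.pi * (np.arange(n) + 0.5) / n)

f = lambda x: np.exp(-3 * x**2) * np.cos(4 * x)   # stand-in smooth profile
xt = np.linspace(-1, 1, 1001)
for n in (8, 16, 32):
    x = cheb_points(n)
    coeffs = np.polynomial.chebyshev.chebfit(x, f(x), n - 1)  # interpolate at n nodes
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(xt, coeffs) - f(xt)))
    print(n, err)   # error drops rapidly as n grows
```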
9

Barahona, Sonia, Pablo Centella, Ximo Gual-Arnau, M. Victoria Ibáñez, and Amelia Simó. "Generalized linear models for geometrical current predictors: An application to predict garment fit." Statistical Modelling 20, no. 6 (December 2, 2019): 562–91. http://dx.doi.org/10.1177/1471082x19885465.

Abstract:
The aim of this article is to model an ordinal response variable in terms of vector-valued functional data included in a vector-valued reproducing kernel Hilbert space (RKHS). In particular, we focus on the vector-valued RKHS obtained when a geometrical object (body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by decomposition of the covariance operator. But our data present very important differences with respect to the usual functional data setting. On the one hand, they are vector-valued functions, and on the other, they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from decomposition of the integral operator defined using the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal is compared with alternative methods that use other classifiers (Support Vector Machine and [Formula: see text]-NN), and with the proposed classification method applied to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents); in all these cases the alternatives give worse results.
10

Avila, Anderson, Renata Hax Sander Reiser, Maurício Lima Pilla, and Adenauer Correa Yamin. "Improving in situ GPU simulation of quantum computing in the D-GM environment." International Journal of High Performance Computing Applications 33, no. 3 (January 16, 2019): 462–72. http://dx.doi.org/10.1177/1094342018823251.

Abstract:
Exponential increase and global access to read/write memory states in quantum computing (QC) simulation limit both the number of qubits and quantum transformations which can be currently simulated. Although QC simulation is parallel by nature, spatial and temporal complexity are major performance hazards, making this a nontrivial application for high performance computing. A new methodology employing reduction and decomposition optimizations has shown relevant results, but its GPU implementation could be further improved. In this work, we develop a new kernel for in situ GPU simulation that better explores its resources without requiring further hardware. Shor’s and Grover’s algorithms are simulated up to 25 and 21 qubits respectively and compared to our previous version, to [Formula: see text] simulator and to ProjectQ framework, showing better results with relative speedups up to 4.38×, 3357.76× and 333× respectively.
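To make the memory-access pattern behind such simulators concrete, here is a small NumPy sketch (CPU only, not the D-GM kernel described above) of applying a single-qubit gate to a dense state vector; the gate, target qubit and qubit count are arbitrary choices for illustration.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of a dense n-qubit state vector."""
    # Expose the target qubit as its own tensor axis, contract with the gate,
    # then restore the original axis order and flatten back.
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 21                                   # 2**21 amplitudes, like the Grover test above
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
state = apply_single_qubit_gate(state, H, target=0, n_qubits=n)
```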
11

Cui, Xuejiao, Bo Ke, Songtao Yu, Ping Li, and Mingsheng Zhao. "Energy Characteristics of Seismic Waves on Cardox Blasting Tube." Geofluids 2021 (October 13, 2021): 1–13. http://dx.doi.org/10.1155/2021/9919764.

Abstract:
In order to study the energy characteristics of seismic waves on the liquid CO2 blasting system, the blasting seismic wave signal of liquid CO2 blasting was obtained by on-site microseismic monitoring tests. The adaptive optimal kernel time-frequency analysis method was used to study the basic time-frequency properties of the seismic wave signal. Combining wavelet packet transform decomposition and reconstruction and adaptive optimal kernel time-frequency analysis method, the liquid CO2 energy distribution of the seismic wave signal was further analyzed. And the energy regression model of seismic wave source of liquid CO2 blasting system was discussed. The results show that the vibration velocity is at a low level, and the main frequency range is between 30 and 70 Hz, and the duration is about 20-30 ms. The energy is mainly distributed in 0-125 Hz, which is composed of two main regions. The power function model can be used to describe the attenuation law of the seismic wave energy. The energy conversion coefficient and characteristic coefficient of the source of liquid CO2 blasting system were defined and analyzed. Combined with the empirical formula of the Sadovsky vibration velocity, the energy regression model of the seismic wave source of liquid CO2 blasting system was obtained.
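The power-function attenuation model mentioned in the abstract can be fitted by ordinary least squares in log-log coordinates; the sketch below uses synthetic numbers, not the paper's measurements.

```python
import numpy as np

# Fit E = k * r**(-a) to (distance, energy) pairs via a straight line in log-log space.
rng = np.random.default_rng(4)
r = np.linspace(10, 200, 25)                                   # source distances (m), synthetic
E = 5e3 * r**-1.8 * np.exp(0.05 * rng.standard_normal(r.size)) # noisy synthetic energies
slope, intercept = np.polyfit(np.log(r), np.log(E), 1)
print("attenuation exponent ~", -slope, "coefficient ~", np.exp(intercept))
```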
12

Tu, Zhijian, Lifu Gao, Xiaoyan Wu, Yongming Liu, and Zhuanzhe Zhao. "Rotate Vector Reducer Fault Diagnosis Model Based on EEMD-MPA-KELM." Applied Sciences 13, no. 7 (March 31, 2023): 4476. http://dx.doi.org/10.3390/app13074476.

Abstract:
With increasing service time, the rotation period of rotating machinery may become irregular, and Ensemble Empirical Mode Decomposition (EEMD) can effectively reflect its periodic state. In order to accurately evaluate the working state of the Rotate Vector (RV) reducer, the torque transfer formula of the RV reducer is first derived to prove theoretically the periodicity of torque transfer in normal operation; EEMD is then able to reflect this periodicity in the data. A fault diagnosis model based on EEMD-MPA-KELM was proposed, and a bearing experimental dataset from Xi'an Jiaotong University was used to verify the performance of the model. Because the fault characteristics of the industrial robot RV reducer are not obvious and the sample data are scarce, a spectrum diagram was used to diagnose the fault from the measured RV reducer data. EEMD decomposition was performed on the data measured on the RV reducer test platform to obtain several Intrinsic Mode Functions (IMFs). After overall averaging and optimization of each IMF, several groups of eigenvalues were obtained. The eigenvalues were input into a Kernel Extreme Learning Machine (KELM) optimized by the Marine Predators Algorithm (MPA), and the fault diagnosis model was established. Finally, compared with other models, the prediction results showed that the proposed model judges the working state of the RV reducer more effectively.
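As background for the KELM component, a minimal NumPy implementation of kernel extreme learning machine training and prediction is sketched below (RBF kernel, ridge-style solve). The MPA hyperparameter search and the EEMD feature extraction described in the abstract are not shown, and the data are random placeholders.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: ridge regression in kernel form."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, Y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        # beta = (K + I/C)^{-1} Y, the standard KELM training step.
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, Y)
        return self

    def predict(self, Xt):
        return rbf_kernel(Xt, self.X, self.gamma) @ self.beta

# Toy use: feature vectors (as if extracted from IMFs) -> one-hot fault labels.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((120, 8)), np.eye(3)[rng.integers(0, 3, 120)]
model = KELM(C=50.0, gamma=0.5).fit(X, Y)
pred = model.predict(X).argmax(axis=1)
```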
13

Wang, Jing, Qiaozhen Ma, and Wenxue Zhou. "Attractor of the nonclassical diffusion equation with memory on time- dependent space." AIMS Mathematics 8, no. 6 (2023): 14820–41. http://dx.doi.org/10.3934/math.2023757.

Abstract:
We consider the dynamic behavior of solutions for a nonclassical diffusion equation with memory
$$ u_{t}-\varepsilon(t)\triangle u_{t}-\triangle u-\int_{0}^{\infty}\kappa(s)\triangle u(t-s)\,ds+f(u)=g(x) $$
on a time-dependent space for which the norm of the space depends on the time $t$ explicitly, and the nonlinear term satisfies the critical growth condition. First, based on the classical Faedo–Galerkin method, we obtain the well-posedness of the solution of the equation. Then, by using the contractive function method and establishing some delicate estimates along the trajectory of the solutions on the time-dependent space, we prove the existence of the time-dependent global attractor for the problem. Due to the very general assumptions on the memory kernel $\kappa$ and the effect of the time-dependent coefficient $\varepsilon(t)$, our result includes and generalizes the existing results for such equations with constant coefficients. It is worth noting that the nonlinear term cannot be treated by the common decomposition techniques, and this paper overcomes the difficulty by dealing with it as a whole.
14

Hegland, Markus, and Frank De Hoog. "Low rank approximation of positive semi-definite symmetric matrices using Gaussian elimination and volume sampling." ANZIAM Journal 62 (November 14, 2021): C58—C71. http://dx.doi.org/10.21914/anziamj.v62.16036.

Abstract:
Positive semi-definite matrices commonly occur as normal matrices of least squares problems in statistics or as kernel matrices in machine learning and approximation theory. They are typically large and dense. Thus algorithms to solve systems with such a matrix can be very costly. A core idea to reduce computational complexity is to approximate the matrix by one with a low rank. The optimal and well understood choice is based on the eigenvalue decomposition of the matrix. Unfortunately, this is computationally very expensive. Cheaper methods are based on Gaussian elimination but they require pivoting. We show how invariant matrix theory provides explicit error formulas for an averaged error based on volume sampling. The formula leads to ratios of elementary symmetric polynomials on the eigenvalues. We discuss several bounds for the expected norm of the approximation error and include examples where this expected error norm can be computed exactly.
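A concrete instance of the "Gaussian elimination with pivoting" route to a low-rank approximation of a positive semi-definite kernel matrix is the diagonally pivoted partial Cholesky factorization sketched below; this is an illustrative sketch with a synthetic RBF kernel matrix, not the paper's volume-sampling analysis.

```python
import numpy as np

def pivoted_cholesky(K, rank):
    """Greedy diagonal-pivoted partial Cholesky: K is approximated by L @ L.T."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()    # residual diagonal
    L = np.zeros((n, rank))
    for j in range(rank):
        p = int(np.argmax(d))               # pivot on the largest residual diagonal
        # Assumes the pivot stays strictly positive (true up to numerical rank).
        L[:, j] = (K[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        d -= L[:, j] ** 2
        d[d < 0] = 0.0                      # guard against round-off
    return L

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # synthetic RBF kernel matrix
L = pivoted_cholesky(K, rank=25)
print(np.linalg.norm(K - L @ L.T) / np.linalg.norm(K))
```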
15

Zenkov, I. V., A. V. Lapko, V. A. Lapko, S. T. Im, V. P. Tuboltsev, and V. L. Аvdeenok. "A nonparametric algorithm for automatic classification of large multivariate statistical data sets and its application." Computer Optics 45, no. 2 (April 2021): 253–60. http://dx.doi.org/10.18287/2412-6179-co-801.

Abstract:
A nonparametric algorithm for automatic classification of large statistical data sets is proposed. The algorithm is based on a procedure for optimal discretization of the range of values of a random variable. A class is a compact group of observations of a random variable corresponding to a unimodal fragment of the probability density. The considered algorithm of automatic classification is based on the «compression» of the initial information based on the decomposition of a multidimensional space of attributes. As a result, a large statistical sample is transformed into a data array composed of the centers of multidimensional sampling intervals and the corresponding frequencies of random variables. To substantiate the optimal discretization procedure, we use the results of a study of the asymptotic properties of a kernel-type regression estimate of the probability density. An optimal number of sampling intervals for the range of values of one- and two-dimensional random variables is determined from the condition of the minimum root-mean square deviation of the regression probability density estimate. The results obtained are generalized to the discretization of the range of values of a multidimensional random variable. The optimal discretization formula contains a component that is characterized by a nonlinear functional of the probability density. An analytical dependence of the detected component on the antikurtosis coefficient of a one-dimensional random variable is established. For independent components of a multidimensional random variable, a methodology is developed for calculating estimates of the optimal number of sampling intervals for random variables and their lengths. On this basis, a nonparametric algorithm for the automatic classification is developed. It is based on a sequential procedure for checking the proximity of the centers of multidimensional sampling intervals and relationships between frequencies of the membership of the random variables from the original sample of these intervals. To further increase the computational efficiency of the proposed automatic classification algorithm, a multithreaded method of its software implementation is used. The practical significance of the developed algorithms is confirmed by the results of their application in processing remote sensing data.
16

Yang, Qixiang, and Zengjian Lou. "Commutators and rough kernels without zero homogeneous condition." International Journal of Wavelets, Multiresolution and Information Processing 16, no. 05 (September 2018): 1850044. http://dx.doi.org/10.1142/s0219691318500443.

Abstract:
In this paper, we consider the commutator [Formula: see text], where [Formula: see text] and [Formula: see text] is defined by convolution-type Calderón–Zygmund operators satisfying the weak boundedness condition and the Hörmander condition. We prove its continuity by using wavelets, the wavelet decomposition of the compensated quantity, and commutators on the orthogonal projection operator.
17

Karp, Dmitrii B., and José L. López. "An extension of the multiple Erdélyi-Kober operator and representations of the generalized hypergeometric functions." Fractional Calculus and Applied Analysis 21, no. 5 (October 25, 2018): 1360–76. http://dx.doi.org/10.1515/fca-2018-0071.

Abstract:
In this paper we investigate the extension of the multiple Erdélyi-Kober fractional integral operator of Kiryakova to arbitrary complex values of the parameters by way of regularization. The regularization involves derivatives of the function in question and integration with respect to a kernel expressed in terms of a special case of Meijer’s G-function. The action of the regularized multiple Erdélyi-Kober operator on some simple kernels leads to decomposition formulas for the generalized hypergeometric functions. In the final section, we define an alternative regularization better suited to representing the Bessel-type generalized hypergeometric function p−1Fp. A particular case of this regularization is then used to identify some new facts about the positivity and reality of zeros of this function.
18

Klimanov, V. A., M. A. Kolyvanova, and A. N. Moiseev. "Spatial Distributions of the Dose Created Phantom Pencil Beam of Mono-Energy and Bremsstrahlung Photons in a Water with Energies from 0.25 to 20 MeV." MEDICAL RADIOLOGY AND RADIATION SAFETY 67, no. 3 (July 2022): 83–88. http://dx.doi.org/10.33266/1024-6177-2022-67-3-83-88.

Abstract:
Purpose: Critical analysis of existing data and derivation of more accurate data on the spatial dose distributions created in a water phantom by pencil beams (PB) of monoenergetic and bremsstrahlung photons with energies from 0.25 to 20.0 MeV, and approximation of these distributions for the purpose of calculating doses in radiation therapy. Material and Methods: Using the Monte Carlo method, the EGSnrc program and the MATLAB mathematical package, these distributions were calculated for monoenergetic photons in the energy range from 0.25 to 19.75 MeV in increments of 0.5 MeV, for bremsstrahlung photons with maximum energies of 4.0, 6.0, 10.0, 15.0 and 18.0 MeV, and for the gamma-radiation spectrum of the therapeutic apparatus ROCUS. The calculation results are converted into the so-called dose kernel of the photon pencil beam. The obtained dose kernel values are compared with previously published data and the observed discrepancies are discussed. Depths in water from 1.0 to 40 cm were studied in increments of 0.5 cm, and radial distances from 0.02 to 46.0 cm on a non-uniform grid. For bremsstrahlung and photons with the spectrum of the ROCUS apparatus, the possibility of approximating the dose kernel values using approximation formulas convenient for calculating doses in radiation therapy was investigated. Results: On the basis of the results obtained, a new version of the library of dose kernels of a pencil photon beam in water was created, which differs from previous versions by a better description and modeling of the physical processes of the interaction of photons and charged particles with matter, more adequate data on the interaction cross sections, and significantly lower statistical uncertainties of the results. For bremsstrahlung and photons with the spectrum of the ROCUS apparatus, a mathematical model of the pencil-beam dose kernel is proposed, which includes decomposition of the dose kernel into primary and scattered dose components, approximation formulas and empirical coefficients convenient for integration. The values of the empirical coefficients are determined by fitting to the calculated dose kernels using a combination of the random search method and nonlinear regression. Conclusion: The results obtained in this work will improve the algorithms and increase the accuracy of dose calculation when planning external-beam therapy with photon beams.
19

Temme, F. P. "Role of democratic SU2 x n dual-group carrier space and group structure in the superboson quantum-Liouville physics of identical spin ensembles: Maximal n tensor product reduction, via symbolic algebraic combinatorics." Canadian Journal of Physics 79, no. 9 (September 1, 2001): 1175–97. http://dx.doi.org/10.1139/p01-026.

Abstract:
Structural aspects of superboson mappings and their dual group-based carrier spaces inherent in quantum-Liouville NMR formalisms are presented for their conceptual role in understanding the transformational properties and the spin dynamics of identical [A]n([Formula: see text]n) spin ensembles or [AX]n systems, given in terms of {Tkq (υ)} º {|kqυ > >} tensorial bases. Interest in the explicit democratic labelling of the Liouvillian carrier subspaces of [A]n spin ensembles, prompts an examination of the inner products (ITPs) that define the projective carrier space associated with superbosons. A weak-branching (WB) limit (for (bipartite) partitions λ [Formula: see text] n, for n-indexed (SU2 ×)[Formula: see text]n) ITPs gives rise to: (i) maximal coefficient sets for [µ] [Formula: see text] [µ']([Formula: see text]n) irrep products (in Butler's notation), under a sufficiently high (n [Formula: see text] 4µ, 4µ') indexed [Formula: see text]n group, or (ii) to identical numerical {c[Formula: see text], λ' } sequence-ordered sets of reduction coefficients over similar, displaced-sequence fields — on augmenting ITP to [µ] [Formula: see text] [µ''], where µ'' > µ' > µ, — or else (iii) to some (sequence-displaced) subset of the latter. The origins of the decompositional WB limit as part of group structure may be traced to an algorithmic similarity between the Littlewood–Richardson and Young(III) combinatorial rules. Alternative approaches to the bipartite product decompositional mappings are possible, using both (subspatial-restricted) Schur-function techniques of Wybourne and [Formula: see text]16[Formula: see text]n[Formula: see text]28 symbolic algorithmic enumerations based on the SYMMETRICA discrete-maths. package of Kerber et al. (J. Symbolic Comput. 14, 195 (1993)). The nature of maximal sets and scaling in ITP decompositions is established and recognition given to the role of combinatorics, [Formula: see text]n algorithms and the superboson algebras of the SU(2) × [Formula: see text]n group in (multiple) quantized spin physics. PACS Nos.: 02.10, 03.65Ca, 33.20Vq, 33.25+k
20

GOODEARL, K. R., and T. H. LENAGAN. "PRIME IDEALS INVARIANT UNDER WINDING AUTOMORPHISMS IN QUANTUM MATRICES." International Journal of Mathematics 13, no. 05 (July 2002): 497–532. http://dx.doi.org/10.1142/s0129167x02001393.

Abstract:
The main goal of the paper is to establish the existence of tensor product decompositions for those prime ideals P of the algebra [Formula: see text] of quantum n × n matrices which are invariant under winding automorphisms of A, in the generic case (q not a root of unity). More specifically, every such P is the kernel of a map of the form [Formula: see text] where A → A ⊗ A is the comultiplication, A+ and A- are suitable localized factor algebras of A, and P± is a prime ideal of A± invariant under winding automorphisms. Further, the algebras A±, which vary with P, can be chosen so that the correspondence (P+, P-) ↦ P is a bijection. The main theorem is applied, in a sequel to this paper, to completely determine the winding-invariant prime ideals in the generic quantum 3 × 3 matrix algebra.
21

Accardi, Luigi, Abdessatar Barhoumi, and Ameur Dhahri. "Identification of the theory of orthogonal polynomials in d-indeterminates with the theory of 3-diagonal symmetric interacting Fock spaces on ℂd." Infinite Dimensional Analysis, Quantum Probability and Related Topics 20, no. 01 (March 2017): 1750004. http://dx.doi.org/10.1142/s0219025717500047.

Abstract:
The identification mentioned in the title allows a formulation of the multidimensional Favard lemma different from the ones currently used in the literature and which parallels the original 1-dimensional formulation in the sense that the positive Jacobi sequence is replaced by a sequence of positive Hermitean (square) matrices and the real Jacobi sequence by a sequence of positive definite kernels. The above result opens the way to the program of a purely algebraic classification of probability measures on [Formula: see text] with moments of any order and more generally of states on the polynomial algebra on [Formula: see text]. The quantum decomposition of classical real-valued random variables with all moments is one of the main ingredients in the proof.
22

Narita, Keiko, Noboru Endou, and Yasunari Shidama. "The Orthogonal Projection and the Riesz Representation Theorem." Formalized Mathematics 23, no. 3 (September 1, 2015): 243–52. http://dx.doi.org/10.1515/forma-2015-0020.

Abstract:
In this article, the orthogonal projection and the Riesz representation theorem are mainly formalized. In the first section, we defined the norm of elements of real Hilbert spaces and defined the Mizar functor RUSp2RNSp, which regards real Hilbert spaces as real normed spaces. By this definition, we regarded sequences of real Hilbert spaces as sequences of real normed spaces, and proved some properties of real Hilbert spaces. Furthermore, we defined the continuity and the Lipschitz continuity of functionals on real Hilbert spaces. Referring to the article [15], we also gave some definitions on real Hilbert spaces and proved some theorems needed for defining dual spaces of real Hilbert spaces. For all of these definitions, we proved that they are equivalent to the corresponding properties of functionals on real normed spaces. In Sec. 2, using the definitions of [11], we showed properties of the orthogonal complement. Then we proved theorems on the orthogonal decomposition of elements of real Hilbert spaces; these are the last two theorems, on existence and uniqueness. In the third and final section, we defined the kernel of linear functionals on real Hilbert spaces. By the last three theorems, we showed the Riesz representation theorem: the existence, uniqueness, and norm property of bounded linear functionals on real Hilbert spaces. We referred to [36], [9], [24] and [3] in the formalization.
23

Najoui, Mohamed, Mounir Bahtat, Anas Hatim, Said Belkouch, and Noureddine Chabini. "VLIW DSP-Based Low-Level Instruction Scheme of Givens QR Decomposition for Real-Time Processing." Journal of Circuits, Systems and Computers 26, no. 09 (April 24, 2017): 1750129. http://dx.doi.org/10.1142/s0218126617501298.

Abstract:
QR decomposition (QRD) is one of the most widely used numerical linear algebra (NLA) kernels in several signal processing applications. Its implementation has a considerable and an important impact on the system performance. As processor architectures continue to gain ground in the high-performance computing world, QRD algorithms have to be redesigned in order to take advantage of the architectural features on these new processors. However, in some processor architectures like very large instruction word (VLIW), compiler efficiency is not enough to make an effective use of available computational resources. This paper presents an efficient and optimized approach to implement Givens QRD in a low-power platform based on VLIW architecture. To overcome the compiler efficiency limits to parallelize the most of Givens arithmetic operations, we propose a low-level instruction scheme that could maximize the parallelism rate and minimize clock cycles. The key contributions of this work are as follows: (i) New parallel and fast version design of Givens algorithm based on the VLIW features (i.e., instruction-level parallelism (ILP) and data-level parallelism (DLP)) including the cache memory properties. (ii) Efficient data management approach to avoid cache misses and memory bank conflicts. Two DSP platforms C6678 and AK2H12 were used as targets for implementation. The introduced parallel QR implementation method achieves, in average, more than 12[Formula: see text] and 6[Formula: see text] speedups over the standard algorithm version and the optimized QR routine implementations, respectively. Compared to the state of the art, the proposed scheme implementation is at least 3.65 and 2.5 times faster than the recent CPU and DSP implementations, respectively.
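For reference, the numerical core being scheduled in this work is the classical Givens-rotation QR factorization; a plain NumPy version (no VLIW-specific optimizations) is sketched below.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def givens_qr(A):
    """QR factorization by zeroing subdiagonal entries with Givens rotations."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):          # annihilate R[i, j] against R[i-1, j]
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # accumulate Q = G1^T G2^T ...
    return Q, R

A = np.random.default_rng(2).standard_normal((6, 4))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))
```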
24

Fabiani, Gianluca, Evangelos Galaris, Lucia Russo, and Constantinos Siettos. "Parsimonious physics-informed random projection neural networks for initial value problems of ODEs and index-1 DAEs." Chaos: An Interdisciplinary Journal of Nonlinear Science 33, no. 4 (April 2023): 043128. http://dx.doi.org/10.1063/5.0135903.

Abstract:
We present a numerical method based on random projections with Gaussian kernels and physics-informed neural networks for the numerical solution of initial value problems (IVPs) of nonlinear stiff ordinary differential equations (ODEs) and index-1 differential algebraic equations (DAEs), which may also arise from spatial discretization of partial differential equations (PDEs). The internal weights are fixed to ones while the unknown weights between the hidden and output layer are computed with Newton’s iterations using the Moore–Penrose pseudo-inverse for low to medium scale and sparse QR decomposition with [Formula: see text] regularization for medium- to large-scale systems. Building on previous works on random projections, we also prove its approximation accuracy. To deal with stiffness and sharp gradients, we propose an adaptive step-size scheme and address a continuation method for providing good initial guesses for Newton iterations. The “optimal” bounds of the uniform distribution from which the values of the shape parameters of the Gaussian kernels are sampled and the number of basis functions are “parsimoniously” chosen based on bias-variance trade-off decomposition. To assess the performance of the scheme in terms of both numerical approximation accuracy and computational cost, we used eight benchmark problems (three index-1 DAEs problems, and five stiff ODEs problems including the Hindmarsh–Rose neuronal model of chaotic dynamics and the Allen–Cahn phase-field PDE). The efficiency of the scheme was compared against two stiff ODEs/DAEs solvers, namely, ode15s and ode23t solvers of the MATLAB ODE suite as well as against deep learning as implemented in the DeepXDE library for scientific machine learning and physics-informed learning for the solution of the Lotka–Volterra ODEs included in the demos of the library. A software/toolbox in Matlab (that we call RanDiffNet) with demos is also provided.
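The random-projection idea can be illustrated on a toy linear IVP: fix random Gaussian features and solve for the output weights with the Moore–Penrose pseudo-inverse, as in the sketch below. This is a simplified stand-in; the paper's scheme additionally uses Newton iterations, adaptive step sizes and parsimonious parameter selection for stiff nonlinear problems.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 60                                   # number of random Gaussian features (arbitrary)
centers = rng.uniform(0.0, 1.0, M)       # fixed random centers
eps = rng.uniform(1.0, 10.0, M)          # fixed random shape parameters

def phi(t):
    t = np.atleast_1d(t)[:, None]
    return np.exp(-(eps * (t - centers)) ** 2)

def dphi(t):
    t = np.atleast_1d(t)[:, None]
    return -2.0 * eps**2 * (t - centers) * phi(t)

# Solve u' = -u, u(0) = 1 by least-squares collocation on the random basis.
t_col = np.linspace(0.0, 1.0, 80)
A = np.vstack([dphi(t_col) + phi(t_col),   # ODE residual rows
               phi(np.array([0.0]))])      # initial-condition row
b = np.concatenate([np.zeros_like(t_col), [1.0]])
w = np.linalg.pinv(A) @ b                   # Moore-Penrose solve for the output weights

t_test = np.linspace(0.0, 1.0, 200)
print(np.max(np.abs(phi(t_test) @ w - np.exp(-t_test))))   # compare with exact exp(-t)
```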
25

Weiss, Robin M., and Jeffrey Shragge. "Solving 3D anisotropic elastic wave equations on parallel GPU devices." GEOPHYSICS 78, no. 2 (March 1, 2013): F7—F15. http://dx.doi.org/10.1190/geo2012-0063.1.

Abstract:
Efficiently modeling seismic data sets in complex 3D anisotropic media by solving the 3D elastic wave equation is an important challenge in computational geophysics. Using a stress-stiffness formulation on a regular grid, we tested a 3D finite-difference time-domain solver using a second-order temporal and eighth-order spatial accuracy stencil that leverages the massively parallel architecture of graphics processing units (GPUs) to accelerate the computation of key kernels. The relatively small memory of an individual GPU limits the model domain sizes that can be computed on a single device. To circumvent this constraint and move toward modeling industry-sized 3D anisotropic elastic data sets, we parallelized computation across multiple GPU devices by using domain decomposition and, for each time step, employing an interdevice communication protocol to exchange data values falling within interior boundaries of each subdomain. For two or more GPU devices within a single compute node, we use direct peer-to-peer (i.e., GPU-to-GPU) communication, whereas for networked nodes we employed message-passing interface directives to route data over the network. Our 2D GPU-based anisotropic elastic modeling tests achieved a [Formula: see text] speedup relative to an OpenMP CPU implementation run on an eight-core machine, whereas our 3D tests using dual-GPU devices produced up to a [Formula: see text] speedup. The performance boost afforded by the GPU architecture allowed us to model seismic data for 3D anisotropic elastic models at lower hardware cost and in less time than has been previously possible.
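The domain-decomposition bookkeeping described above can be illustrated, independently of GPUs or MPI, with a one-dimensional ghost-cell exchange between two subdomains; the sketch below uses a simple explicit diffusion stencil as a stand-in for the elastic update.

```python
import numpy as np

def exchange_halos(left, right):
    """Copy interior boundary values into the neighbour's ghost cells (stand-in for P2P/MPI traffic)."""
    left[-1] = right[1]      # right block's first interior point
    right[0] = left[-2]      # left block's last interior point

def step(u, dt=0.1, dx=1.0):
    """One explicit diffusion update on the interior points of a padded block."""
    un = u.copy()
    un[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

# Global 1-D field split into two blocks, each padded with one ghost cell per side.
N = 100
u = np.exp(-0.01 * (np.arange(N) - N / 2) ** 2)
left = np.concatenate([[0.0], u[: N // 2], [0.0]])
right = np.concatenate([[0.0], u[N // 2:], [0.0]])

for _ in range(50):
    exchange_halos(left, right)
    left, right = step(left), step(right)

# Check against a monolithic update with the same fixed outer boundaries.
ref = np.concatenate([[0.0], u, [0.0]])
for _ in range(50):
    ref = step(ref)
print(np.allclose(np.concatenate([left[1:-1], right[1:-1]]), ref[1:-1]))
```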
26

Vasco, D. W. "Resolution and variance operators of gravity and gravity gradiometry." GEOPHYSICS 54, no. 7 (July 1989): 889–99. http://dx.doi.org/10.1190/1.1442717.

Abstract:
Gravity gradiometry represents a new potential field data set which may better constrain the density structure of the earth. Using singular value (spectral) decomposition of the gravity and gravity gradient kernels, the model parameter resolution and model parameter variance of the two data types are compared using data from the Defense Mapping Agency and a recently acquired collection of airborne gradient measurements from Bell Aerospace Textron’s Gravity Gradient Survey System (GGSS). The GGSS was flown over a portion of southwestern Oklahoma, where the gravitational anomaly from the buried Wichita basement rocks is over 60 mGal. The corresponding maximum vertical gravity gradient was found to be 46.2 Eötvös. The determination of the subsurface density structure is cast as a linear inverse problem and, for comparison, a nonlinear inverse problem. For both the linear and nonlinear inversions, the gravity gradients improve the resolution and result in smaller variances than the vertical component of gravity. The density resolution and variance were computed for a subset of tracks from an airborne gravity gradient survey made in the summer of 1987. For the linear inversion, the resolution of the density is not adequate below the second layer (20 km). Furthermore, the estimated error of the actual gradient observations for a resolution of 0.9 km is 10E, for which the maximum error of the density values is [Formula: see text]. The linearized resolution of the boundary perturbations is better, with most parameters being well resolved. The standard errors for the layer perturbations are less than 1 km for the shallower layer (5.0 km) when using the gradiometer data. For the deeper layer (25.0 km), the maximum error is larger, 4.3 km.
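The resolution and variance operators used in this comparison follow directly from a truncated SVD of the kernel matrix; a small NumPy sketch with a random stand-in matrix and an assumed data error level is given below.

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((80, 40))          # stand-in for a gravity/gradient kernel matrix
U, s, Vt = np.linalg.svd(G, full_matrices=False)

p = int(np.sum(s > 1e-3 * s[0]))           # keep singular values above a relative cutoff
Vp, sp = Vt[:p].T, s[:p]

R = Vp @ Vp.T                              # model resolution matrix (identity is ideal)
sigma_d = 0.1                              # assumed data standard deviation
C_m = sigma_d**2 * (Vp / sp**2) @ Vp.T     # model covariance of the truncated-SVD solution
print(np.trace(R), np.sqrt(np.diag(C_m)).max())
```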
27

Akishin, Pavel G., and Andrey A. Sapozhnikov. "The volume integral equation method in magnetostatic problem." Discrete and Continuous Models and Applied Computational Science 27, no. 1 (December 15, 2019): 60–69. http://dx.doi.org/10.22363/2658-4670-2019-27-1-60-69.

Abstract:
This article addresses the application of the volume integral equation method to magnetic system calculations. The main advantage of this approach is that the solution of the equations is sought only in the region filled with ferromagnetic material. The difficulty in applying the method is connected with the kernel singularity of the integral equations. For this reason, the collocation method in the well-known package GFUN3D uses only a piecewise-constant approximation of the unknown variables within the fragmentation elements. As an alternative, the observation points can be replaced by integration over a fragmentation element, which allows a higher-order approximation of the unknown variables. In the present work, the main aspects of applying this approach to magnetic system modelling are discussed for the example of a linear approximation of the unknown variables: discretisation of the initial equations, decomposition of the calculation area into elements, calculation of the discretised system matrix elements, and solution of the resulting nonlinear equation system. In the framework of the finite element method, the calculation area is divided into a set of tetrahedra. First, the initial area is approximated by a combination of macro-blocks with a previously constructed two-dimensional mesh on their borders. After that, a tetrahedral mesh is constructed separately for each macro-block. When calculating the matrix elements, sixfold integrals over two tetrahedra are reduced to a combination of fourfold integrals over triangles, which are evaluated using cubature formulas. The reduction of singular integrals to combinations of regular integrals is carried out with methods based on the concept of homogeneous functions. Simple iteration methods are used to solve the nonlinear discretised systems, which avoids inverting large matrices. The results of the modelling are compared with calculations obtained using other methods.
28

Du, Xinyu, Zichen Gu, Longquan Dai, and Jinhui Tang. "High Dimensional Convolution Acceleration via Tensor Decomposition." Journal of Circuits, Systems and Computers, March 28, 2022. http://dx.doi.org/10.1142/s0218126622501870.

Abstract:
High-dimensional convolution, in either linear or nonlinear form, has been employed in a wide range of computer vision solutions due to its beneficial smoothing property. However, its full-kernel implementation is extremely slow, so a fast algorithm for this important operation is needed. To solve this problem, we propose an acceleration pipeline assembled from three steps: [Formula: see text]-D nonlinear convolution [Formula: see text] [Formula: see text]-D linear convolution [Formula: see text] 1-D convolution [Formula: see text] 1-D recursive box filter. Thanks to the low computational complexity of box filtering, we speed up the computation significantly. Roughly speaking, our contribution is two-fold: (1) establishing the connection between high-dimensional convolution acceleration and tensor decomposition; and (2) proposing four acceleration techniques (a demultiplexing–blurring–multiplexing framework, convolution decomposition, periodic tensorization and recursive box filtering) to compose our acceleration pipeline along the line of the above connection. The effectiveness of these techniques is demonstrated in various comparisons and experiments. The running times of various applications are shortened from several minutes to a few seconds or less.
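The final stage of such a pipeline, a recursive (running-sum) box filter, is what makes the cost independent of the kernel radius; the sketch below (not the authors' code) shows an O(n) box filter and how iterating it approximates a Gaussian-like smoothing kernel.

```python
import numpy as np

def box_filter_1d(x, radius):
    """O(n) running-sum box filter with edge-aware normalization."""
    n = len(x)
    c = np.cumsum(np.concatenate([[0.0], x]))          # prefix sums
    lo = np.clip(np.arange(n) - radius, 0, n)
    hi = np.clip(np.arange(n) + radius + 1, 0, n)
    return (c[hi] - c[lo]) / (hi - lo)

def approx_gaussian_blur(x, radius, passes=3):
    """Iterated box filters converge to a Gaussian-like smoothing kernel."""
    for _ in range(passes):
        x = box_filter_1d(x, radius)
    return x

impulse = np.zeros(101)
impulse[50] = 1.0
effective_kernel = approx_gaussian_blur(impulse, radius=5, passes=3)  # bell-shaped response
```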
29

Le, Thanh T. "Evaluation of the production process of biodegradable drinking straws from corn kernel." Journal of Agriculture and Development 21, no. 3 (June 30, 2022). http://dx.doi.org/10.52997/jad.8.03.2022.

Abstract:
New product development is one of the most effective ways to expand the economic value of corn, which is currently a low-cost agricultural material. The present study was conducted to determine a laboratory-scale production process for drinking straws from corn, in which the influence of the mixing formula, steaming time, screw speed, and drying temperature on the hardness and expansion time of the straws made from corn kernels was examined. In addition, the biodegradability of the resulting straws was tested in the natural environment. The study determined the most suitable recipe for producing the drinking straw product: corn flour (75%), tapioca starch (20%) and rice flour (5%), with the addition of 0.5% xanthan gum and 0.2% potassium sorbate (w/w). Operating conditions of a 10-min steaming time, a 40-rpm screw speed, and a 40°C air-drying temperature were found to be the most appropriate. The decomposition study indicated that the drinking straw product was capable of decomposing after 40 days at a temperature of 30 ± 2°C. As such, it can be concluded that there is high potential to establish a production process for drinking straws from corn kernels that is environmentally friendly and easy to apply in practice.
30

ALRABAIAH, HUSSAM. "APPROXIMATE SOLUTION OF FORNBERG–WHITHAM EQUATION BY MODIFIED HOMOTOPY PERTURBATION METHOD UNDER NON-SINGULAR FRACTIONAL DERIVATIVE." Fractals 30, no. 01 (October 27, 2021). http://dx.doi.org/10.1142/s0218348x22400291.

Abstract:
The basic idea of this paper is to investigate the approximate solution of the well-known Fornberg–Whitham equation of arbitrary order. We consider the stated problem under the [Formula: see text] fractional-order derivative. The proposed derivative is non-local and contains a non-singular kernel of Mittag-Leffler type. With the help of the Modified Homotopy Perturbation Method (MHPM), we find an approximate solution to the aforesaid equation. The required solution is computed in the form of an infinite series. The method needs no discretization or collocation and is easy to implement for computing the desired approximate solution. We compare the four-term approximate solution with the exact solution, as well as with the solution computed by the Laplace decomposition method. We also plot the approximate solution of the considered model through surface plots. For numerical illustration, we use Matlab throughout this work.
31

Guzmán Adán, Alí, Irene Sabadini, and Frank Sommen. "On the Radon transform and the Dirac delta distribution in superspace." Analysis and Applications, February 7, 2022, 1–52. http://dx.doi.org/10.1142/s0219530521500305.

Abstract:
In this paper, we obtain a plane wave decomposition for the delta distribution in superspace, provided that the superdimension is not odd and negative. This decomposition allows for explicit inversion formulas for the super Radon transform in these cases. Moreover, we prove a more general Radon inversion formula valid for all possible integer values of the superdimension. The proof of this result comes along with the study of fractional powers of the super Laplacian, their fundamental solutions, and the plane wave decompositions of super Riesz kernels.
32

Zhang, Rongfeng, Shizeng Lu, Xiaohong Wang, Hongliang Yu, and Zhao Liu. "A multi-model fusion soft measurement method for cement clinker f-CaO content based on K-means ++ and EMD-MKRVM." Transactions of the Institute of Measurement and Control, August 2, 2022, 014233122211110. http://dx.doi.org/10.1177/01423312221111001.

Abstract:
The content of free calcium oxide (f-CaO) in cement clinker is a key indicator for testing the quality of cement clinker. To address the problem that the content of f-CaO cannot be detected online, a multi-model fusion soft measurement method based on K-means++ clustering, empirical modal decomposition combined with multi-kernel relevance vector machines (EMD-MKRVM) is proposed to predict f-CaO content under different operating conditions. First, time-series analysis and matching of input variables with f-CaO content were performed, based on which a combination of empirical modal decomposition (EMD) and sample entropy (SE) denoising method was used to filter out high- frequency noise from the original data and extract effective signal information for reconstruction. Second, the K-means++ algorithm was used to cluster the processed training sample data, and multi-kernel relevance vector machine (MKRVM) sub-models were established by training the sample data of each sub-class and then the affiliation between the test samples and each sub-class was calculated as the weights of the sub-model output values, and the final model prediction output was obtained by multi-model fusion. Finally, the real data from cement plants were used for validation. The results show that compared with the single MKRVM model, multi–relevance vector machine (RVM) model, multi–support vector machine (SVM) model, and multi-MKRVM model using only EMD denoising method, the mean absolute error (MAE) of the multi-MKRVM model proposed in this paper was reduced by 42%, 7%, 14%, and 35%; root mean square error (RMSE) is reduced by 28%, 10%, 12%, and 21%; squares due to error (SSE) is reduced by 51%, 24%, 27%, and 41%; Theils inequality coefficient (TIC) is reduced by 27%, 17%, 21%, and 19%; [Formula: see text] is improved by 64%, 18%, 39%, and 91%; and Index of agreement (IA) is improved by 28%, 9%, 22%, and 13%. The multi-MKRVM model proposed in this paper has higher accuracy, better generalization ability and stability, and provides an effective method for f-CaO content prediction under complex multiple working conditions.
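The overall "cluster, fit sub-models, fuse by membership" structure can be sketched with scikit-learn building blocks as below; ridge regression stands in for the MKRVM sub-models, the membership weighting is a simple distance-based softmax, and the data are synthetic, so this only illustrates the fusion pattern, not the paper's method (the EMD and sample-entropy denoising steps are omitted).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))                        # stand-in process variables
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)     # stand-in f-CaO content

# 1. Partition operating conditions with k-means++ initialisation.
km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(X)

# 2. Train one sub-model per cluster (Ridge stands in for the MKRVM sub-models).
models = [Ridge(alpha=1.0).fit(X[km.labels_ == k], y[km.labels_ == k]) for k in range(3)]

# 3. Fuse sub-model outputs with membership weights from distances to the centres.
def predict(Xt):
    d = np.linalg.norm(Xt[:, None, :] - km.cluster_centers_[None], axis=2)
    w = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)   # soft memberships
    preds = np.column_stack([m.predict(Xt) for m in models])
    return (w * preds).sum(axis=1)

print(np.abs(predict(X) - y).mean())
```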
33

Han, Yongsheng, Ming-Yi Lee, Ji Li, and Brett Wick. "Maximal Functions, Littlewood–Paley Theory, Riesz Transforms and Atomic Decomposition in the Multi-parameter Flag Setting." Memoirs of the American Mathematical Society 279, no. 1373 (September 2022). http://dx.doi.org/10.1090/memo/1373.

Abstract:
In this paper, we develop via real variable methods various characterisations of the Hardy spaces in the multi-parameter flag setting. These characterisations include those via, the non-tangential and radial maximal function, the Littlewood–Paley square function and area integral, Riesz transforms and the atomic decomposition in the multi-parameter flag setting. The novel ingredients in this paper include (1) establishing appropriate discrete Calderón reproducing formulae in the flag setting and a version of the Plancherel–Pólya inequalities for flag quadratic forms; (2) introducing the maximal function and area function via flag Poisson kernels and flag version of harmonic functions; (3) developing an atomic decomposition via the finite speed propagation and area function in terms of flag heat semigroups. As a consequence of these real variable methods, we obtain the full characterisations of the multi-parameter Hardy space with the flag structure.
34

Galasso, Andrea. "Equivariant fixed point formulae and Toeplitz operators under Hamiltonian torus actions and remarks on equivariant asymptotic expansions." International Journal of Mathematics 33, no. 02 (January 15, 2022). http://dx.doi.org/10.1142/s0129167x22500112.

Abstract:
Suppose we are given a holomorphic and Hamiltonian action of a compact torus on a polarized Hodge manifold. Assume that the action lifts to the quantizing line bundle, so that there is an induced unitary representation of the torus on the associated Hardy space which decomposes into isotypes. The main result of this paper is a description of the asymptotics, along rays in weight space, of traces of equivariant Toeplitz operators composed with quantomorphisms for the torus action. The main ingredient in the proof is the micro-local analysis of the equivariant Szegő kernels. As a particular case, we obtain a simple approach to the asymptotics of the Lefschetz fixed point formula and to traces of Toeplitz operators in the setting of ladder representations. We also consider equivariant asymptotics when the decomposition given by the standard circle action is taken into account; in this case one can recall previous results of X. Ma and W. Zhang or of R. Paoletti. We also present some explicit computations for the action of the special unitary group of dimension two.