
Journal articles on the topic "Guaranteed computations"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 journal articles on the topic "Guaranteed computations".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, where these are available in the metadata.

Browse journal articles from many disciplines and compile an accurate bibliography.

1

YANG, ZHONGHUA, CHENGZHENG SUN, YUAN MIAO, ABDUL SATTAR and YANYAN YANG. "GUARANTEED MUTUALLY CONSISTENT CHECKPOINTING IN DISTRIBUTED COMPUTATIONS". International Journal of Foundations of Computer Science 11, no. 01 (March 2000): 153–66. http://dx.doi.org/10.1142/s0129054100000089.

Abstract:
In this paper, we explore the isomorphism between vector time and causality to characterize the consistency of a set of checkpoints in a distributed computation. A necessary and sufficient condition for determining whether a set of local checkpoints can form a consistent global checkpoint is presented and proved using the isomorphic power of vector time and causality. To the best of our knowledge, this is the first attempt to use the isomorphism for this purpose. This condition leads to a simple and straightforward algorithm for guaranteed mutually consistent global checkpointing. In our approach, a process can take a checkpoint whenever and wherever it wants, while other related processes may be asked to take additional checkpoints to ensure mutual consistency. We also show how this condition and the resulting algorithm can be used to obtain maximum and minimum global checkpoints, another important paradigm for distributed applications.
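One ingredient of the vector-time condition described in this abstract can be sketched in a few lines (an illustrative sketch, not the authors' algorithm: checkpoints that are pairwise causally ordered can never form a consistent global checkpoint, so we test pairwise concurrency of vector timestamps):

```python
from itertools import combinations

def happened_before(v, w):
    """True if vector time v causally precedes w (v <= w componentwise, v != w)."""
    return all(a <= b for a, b in zip(v, w)) and v != w

def mutually_consistent(checkpoints):
    """Checkpoints (given by their vector timestamps) are candidates for a
    consistent global checkpoint only if they are pairwise concurrent."""
    return not any(
        happened_before(v, w) or happened_before(w, v)
        for v, w in combinations(checkpoints, 2)
    )

# Three processes; each tuple is one local checkpoint's vector timestamp.
concurrent = [(2, 1, 0), (1, 2, 0), (0, 1, 2)]
ordered = [(1, 0, 0), (2, 1, 0)]  # the first checkpoint precedes the second
print(mutually_consistent(concurrent))  # True
print(mutually_consistent(ordered))     # False
```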
2

YONG, XIE, and HSU WEN-JING. "ALIGNED MULTITHREADED COMPUTATIONS AND THEIR SCHEDULING WITH PERFORMANCE GUARANTEES". Parallel Processing Letters 13, no. 03 (September 2003): 353–64. http://dx.doi.org/10.1142/s0129626403001331.

Abstract:
This paper considers the problem of scheduling dynamic parallel computations to achieve linear speedup without using significantly more space per processor than that required for a single-processor execution. Earlier research in the Cilk project proposed the "strict" computational model, in which every dependency goes from a thread x only to one of x's ancestor threads, and guaranteed both linear speedup and linear expansion of space. However, Cilk threads are stateless, and the task graphs that the Cilk language expresses are series-parallel graphs, a proper subset of arbitrary task graphs. Moreover, Cilk does not support applications with pipelining. We propose the "aligned" multithreaded computational model, which extends the "strict" computational model of Cilk. In the aligned multithreaded computational model, dependencies can go from an arbitrary thread x not only to x's ancestor threads, but also to x's younger brother threads, which are spawned by x's parent thread after x. We use the same measures of time and space as those used in Cilk: T1 is the time required to execute the computation on 1 processor, T∞ is the time required by an infinite number of processors, and S1 is the space required to execute the computation on 1 processor. We show that for any aligned computation, there exists an execution schedule that achieves both efficient time and efficient space. Specifically, we show that for an execution of any aligned multithreaded computation on P processors, the time required is bounded by O(T1/P + T∞), and the space required can be loosely bounded by O(λ·S1P), where λ is the maximum number of younger brother threads that have the same parent thread and can be blocked during execution. If we assume that λ is a constant, and that the space requirements for elder and younger brother threads are the same, then the space required is bounded by O(S1P).
We also show that the aligned multithreaded computational model supports pipelined applications. Furthermore, we propose a multithreaded programming language and show that it can express arbitrary task graphs.
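The O(T1/P + T∞) time bound quoted in this abstract is the classic greedy-scheduling bound; a minimal sketch for unit-time tasks on a DAG (hypothetical code, not the paper's scheduler, which additionally controls space and handles aligned dependencies):

```python
from collections import defaultdict

def greedy_schedule(deps, P):
    """List-schedule unit-time tasks of a DAG on P processors; at each step,
    run up to P ready tasks.  deps maps task -> set of predecessor tasks."""
    indeg = {t: len(d) for t, d in deps.items()}
    succs = defaultdict(list)
    for t, d in deps.items():
        for p in d:
            succs[p].append(t)
    ready = [t for t, k in indeg.items() if k == 0]
    steps = 0
    while ready:
        running, ready = ready[:P], ready[P:]  # greedily fill P processors
        steps += 1
        for t in running:
            for s in succs[t]:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return steps

# Diamond DAG: a -> {b, c} -> d.  T1 = 4 (total work), T_inf = 3 (critical path).
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
T1, Tinf, P = 4, 3, 2
steps = greedy_schedule(deps, P)
print(steps)  # 3, within the greedy bound T1/P + Tinf = 5
```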
3

Evstigneev, Nikolay M., and Oleg I. Ryabkov. "Reduction in Degrees of Freedom for Large-Scale Nonlinear Systems in Computer-Assisted Proofs". Mathematics 11, no. 20 (October 18, 2023): 4336. http://dx.doi.org/10.3390/math11204336.

Abstract:
In many physical systems, it is important to know the exact trajectory of a solution. Relevant applications include celestial mechanics, fluid mechanics, robotics, etc. For cases where analytical methods cannot be applied, one can use computer-assisted proofs or rigorous computations, which yield a guaranteed bound for the solution trajectory in phase space. The application of rigorous computations poses few problems for low-dimensional systems of ordinary differential equations (ODEs) but is challenging for large-scale systems, for example, systems of ODEs obtained from the discretization of PDEs. For rigorous computations, a "large-scale" system can be as small as about a hundred ODEs, because the computational complexity of rigorous algorithms is much higher than that of ordinary computations. We are interested in applying rigorous computations to prove the existence of a periodic orbit in the Kolmogorov problem for the Navier–Stokes equations. One of the key issues is the computational complexity, which increases rapidly with the problem dimension. In previous papers, we showed that 79 degrees of freedom are needed to achieve convergence of the rigorous algorithm for the system of ordinary differential equations alone. Here, we demonstrate the application of proper orthogonal decomposition (POD) to approximate the attracting set of the system and reduce the number of active degrees of freedom.
4

Ainsworth, Mark, and Richard Rankin. "Guaranteed computable bounds on quantities of interest in finite element computations". International Journal for Numerical Methods in Engineering 89, no. 13 (February 28, 2012): 1605–34. http://dx.doi.org/10.1002/nme.3276.
5

Xie, Dawei, Haining Yang, Jing Qin and Jixin Ma. "Privacy-Preserving and Publicly Verifiable Protocol for Outsourcing Polynomials Evaluation to a Malicious Cloud". International Journal of Digital Crime and Forensics 11, no. 4 (October 2019): 14–27. http://dx.doi.org/10.4018/ijdcf.2019100102.

Abstract:
As cloud computing provides affordable and scalable computational resources, delegating heavy computing tasks to cloud service providers is appealing to individuals and companies. Among specific types of computations, polynomial evaluation is an important one due to its wide use in engineering and scientific fields. Cloud service providers may not be trusted; thus, both the validity and the privacy of such computations should be guaranteed. In this article, the authors present a protocol for publicly verifiable delegation of high-degree polynomials. Compared with existing solutions, it ensures the privacy of the outsourced functions and the actual results. The protocol satisfies blind verifiability, so that results can be publicly verified without learning their value, and it also improves efficiency.
6

LÊ, DINH, and D. STOTT PARKER. "Using randomization to make recursive matrix algorithms practical". Journal of Functional Programming 9, no. 6 (November 1999): 605–24. http://dx.doi.org/10.1017/s0956796899003470.

Abstract:
Recursive block decomposition algorithms (also known as quadtree algorithms when the blocks are all square) have been proposed to solve well-known problems such as matrix addition, multiplication, inversion, determinant computation, block LDU decomposition, and Cholesky and QR factorization. Until now, such algorithms have been seen as impractical, since they require leading submatrices of the input matrix to be invertible (which is rarely guaranteed). We show how to randomize an input matrix to guarantee that submatrices meet these requirements, and thereby make recursive block decomposition methods practical on well-conditioned input matrices. The resulting algorithms are elegant, and we show that the recursive programs can perform well for both dense and sparse matrices, although with randomization dense computations seem most practical. By 'homogenizing' the input, randomization provides a way to avoid degeneracy in numerical problems that permits simple recursive quadtree algorithms to solve them.
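The preconditioning idea, multiplying the input by a random matrix so that every leading principal submatrix becomes invertible with high probability, can be sketched as follows (illustrative only; the paper's actual randomizers and the recursive decomposition itself are not reproduced here):

```python
import random

def det(m):
    """Cofactor-expansion determinant (fine for the tiny matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum(
        (-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
        for j in range(n)
    )

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def leading_minors_nonzero(m):
    """True iff every leading principal submatrix is invertible, which is
    what recursive block decomposition needs from its input."""
    return all(det([row[:k] for row in m[:k]]) != 0
               for k in range(1, len(m) + 1))

def randomize(m, rng):
    """Left-multiply by a random integer matrix; with high probability the
    leading minors of the product are all nonzero."""
    n = len(m)
    r = [[rng.randint(1, 1000) for _ in range(n)] for _ in range(n)]
    return matmul(r, m)

rng = random.Random(0)
# Invertible matrix whose first leading minor is zero (m[0][0] == 0), so a
# plain recursive block decomposition would fail on it.
m = [[0, 1], [1, 0]]
print(leading_minors_nonzero(m))                  # False
print(leading_minors_nonzero(randomize(m, rng)))  # almost surely True
```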
7

Ghoniem, Nasr M. "Curved Parametric Segments for the Stress Field of 3-D Dislocation Loops". Journal of Engineering Materials and Technology 121, no. 2 (April 1, 1999): 136–42. http://dx.doi.org/10.1115/1.2812358.

Abstract:
Under applied mechanical forces, strong mutual interaction or other thermodynamic forces, dislocation shapes become highly curved. We present here a new method for accurate computations of self and mutual interactions between dislocation loops. In this method, dislocation loops of arbitrary shapes are segmented with appropriate parametric equations representing the dislocation line vector. Field equations of infinitesimal linear elasticity are developed on the basis of isotropic elastic Green’s tensor functions. The accuracy and computational speed of the method are illustrated by computing the stress field around a typical (110)-[111] slip loop in a BCC crystal. The method is shown to be highly accurate for close-range dislocation interactions without any loss of computational speed when compared to analytic evaluations of the stress field for short linear segments. Moreover, computations of self-forces and energies of curved segments are guaranteed to be accurate, because of the continuity of line curvature on the loop.
8

Bertrand, Fleurianne, Marcel Moldenhauer and Gerhard Starke. "A Posteriori Error Estimation for Planar Linear Elasticity by Stress Reconstruction". Computational Methods in Applied Mathematics 19, no. 3 (July 1, 2019): 663–79. http://dx.doi.org/10.1515/cmam-2018-0004.

Abstract:
The nonconforming triangular piecewise quadratic finite element space by Fortin and Soulie can be used for the displacement approximation, and its combination with discontinuous piecewise linear pressure elements is known to constitute a stable combination for incompressible linear elasticity computations. In this contribution, we extend the stress reconstruction procedure and the resulting guaranteed a posteriori error estimator developed by Ainsworth, Allendes, Barrenechea and Rankin [2] and by Kim [18] to linear elasticity. In order to obtain a guaranteed reliability bound with respect to the energy norm involving only known constants, two modifications are carried out: (i) the stress reconstruction in next-to-lowest-order Raviart–Thomas spaces is modified so that its anti-symmetric part vanishes on average on each element; (ii) the auxiliary conforming approximation is constructed under the constraint that its divergence coincides with that of the nonconforming approximation. An important aspect of our construction is that all results hold uniformly in the incompressible limit. Global efficiency is also shown, and the effectiveness is illustrated by adaptive computations involving different Lamé parameters, including the incompressible limit case.
9

Lindeberg, Tony. "Provably Scale-Covariant Continuous Hierarchical Networks Based on Scale-Normalized Differential Expressions Coupled in Cascade". Journal of Mathematical Imaging and Vision 62, no. 1 (October 25, 2019): 120–48. http://dx.doi.org/10.1007/s10851-019-00915-x.

Abstract:
This article presents a theory for constructing hierarchical networks in such a way that the networks are guaranteed to be provably scale covariant. We first present a general sufficiency argument for obtaining scale covariance, which holds for a wide class of networks defined from linear and nonlinear differential expressions expressed in terms of scale-normalized scale-space derivatives. Then, we present a more detailed development of one example of such a network, constructed from a combination of mathematically derived models of receptive fields and biologically inspired computations. Based on a functional model of complex cells in terms of an oriented quasi-quadrature combination of first- and second-order directional Gaussian derivatives, we couple such primitive computations in cascade over combinatorial expansions over image orientations. Scale-space properties of the computational primitives are analysed, and we give explicit proofs of how the resulting representation allows for scale and rotation covariance. A prototype application to texture analysis is developed, and it is demonstrated that a simplified mean-reduced representation of the resulting QuasiQuadNet leads to promising experimental results on three texture datasets.
10

Likhoded, N. A., and M. A. Paliashchuk. "Tiled parallel 2D computational processes". Proceedings of the National Academy of Sciences of Belarus. Physics and Mathematics Series 54, no. 4 (January 11, 2019): 417–26. http://dx.doi.org/10.29235/1561-2430-2018-54-4-417-426.

Abstract:
An algorithm implemented on a parallel computer with distributed memory has, as a rule, a tiled structure: the set of operations is divided into subsets called tiles. One modern approach to obtaining tiled versions of algorithms is a tiling transformation based on sections of the iteration space, resulting in macro-operations (tiles). The operations of one tile are performed atomically, as a single unit of computation, and data exchange is done by arrays. We state a method for constructing tiled computational processes, logically organized as a two-dimensional structure, for algorithms given by multidimensional loops. Compared to one-dimensional structures, two-dimensional structures are applicable in fewer cases, but they can have advantages when implementing algorithms on parallel computers with distributed memory. Possible advantages include a reduced volume of communication operations, reduced ramp-up and ramp-down of the computations, a potentially greater number of computation processes, and the organization of data exchange operations only within rows or columns of processes. The results generalize some aspects of the method for constructing parallel computational processes organized in a one-dimensional structure to the case of a two-dimensional structure. It is shown that, under certain restrictions on the structure and length of the loops, it is sufficient to perform tiling on three coordinates of a multidimensional iteration space. In earlier theoretical studies, the parallelism of tiled computations was guaranteed in the presence of sections along all coordinates of the iteration space, or, for the simpler case of a one-dimensional structure, along two coordinates.
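The notion of a tile as an atomic macro-operation can be illustrated with a toy 2-D example (a hypothetical sketch; the paper concerns distributed-memory processes and dependence-preserving sections of the iteration space, not this serial loop):

```python
def tiled_sum(a, tile):
    """Visit a 2-D array tile by tile; each tile is processed as one atomic
    macro-operation and could be assigned to a different process."""
    n, m = len(a), len(a[0])
    total = 0
    for ti in range(0, n, tile):        # loop over tiles
        for tj in range(0, m, tile):
            # one tile = one atomic unit of computation
            for i in range(ti, min(ti + tile, n)):
                for j in range(tj, min(tj + tile, m)):
                    total += a[i][j]
    return total

a = [[i * 4 + j for j in range(4)] for i in range(3)]
print(tiled_sum(a, 2) == sum(map(sum, a)))  # True: tiling reorders, not changes, the work
```

The tiled loop performs exactly the same operations as the plain double loop; only the order (and hence the possible assignment to processes) changes.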
11

Lei, F., XP Xie, XW Wang and YG Wang. "Research on the efficiency of reduced-basis approach in computations of structural problems and its improvements". Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 227, no. 10 (December 13, 2012): 2143–56. http://dx.doi.org/10.1177/0954406212470895.

Abstract:
In this article, the procedure and efficiency of the reduced-basis approach in structural design computation are studied. As a model order reduction approach, it provides fast evaluation of a structural system in an explicitly parameterized formulation. Theoretically, the original structural system is reduced by projecting it onto a lower-dimensional subspace. In practice, however, this is a time-consuming process due to the iterations of the adaptive procedure used in subspace construction. To improve the efficiency of the method, some of its characteristics are analyzed. First, the accuracy of the subspace is evaluated, and the computational costs of procedures with different approaches are studied. Results show that subspaces constructed by greedy adaptive procedures with different starting points have the same accuracy: the accuracy of the subspace is guaranteed by the adaptive procedure, while the computational costs depend on the number of its iterations. Thus, a modified adaptive procedure is proposed to reduce the computational costs while guaranteeing accuracy. The modified procedure begins with experimental design methods to obtain a set of samples rather than a single sample, and ends with the adaptive procedure. The starting set of samples is selected by the following experimental design methods: 2^k factorial design, standard Latin design, and Latin hypercube design. By being integrated with experimental design, the modified adaptive procedure saves computational costs and retains the same accuracy as the traditional procedure. As an example, the outputs of a vehicle body front compartment subjected to a bending load are illustrated. The proposed procedure is shown to be efficient and applicable to many other structural design contexts.
12

Fischer, A., A. Smolin and G. Elber. "Mid-Surfaces of Profile-based Freeforms for Mold Design". Journal of Manufacturing Science and Engineering 121, no. 2 (May 1, 1999): 202–7. http://dx.doi.org/10.1115/1.2831206.

Abstract:
Mid-surfaces of complex thin objects are commonly used in CAD applications for the analysis of casting and injection molding. However, geometrical representation in CAD typically takes the form of a solid representation rather than a mid-surface; therefore, a process for extracting the mid-surface is essential. Contemporary methods for extracting mid-surfaces are based on numerical computations using offsetting techniques or Voronoi diagram processes where the data is discrete and piecewise linear. These algorithms usually have high computational complexity, and their accuracy is not guaranteed. Furthermore, the geometry and topology of the object are not always preserved. To overcome these problems, this paper proposes a new approach for extracting a mid-surface from a freeform thin object. The proposed method reduces the mid-surface problem to a parametrization problem that is based on a matching technique in which a nonlinear optimization function is defined and solved according to mid-surface criteria. Then, the resulting mid-surface is dictated by a reparametrization process. The algorithm is implemented for freeform ruled, swept, and rotational surfaces, which are commonly used in engineering products. Reducing the problem to the profile curves of these surfaces alleviates the computational complexity of the 3D case and restricts it to a 2D case. Error is controlled globally through an iterative refinement process that utilizes continuous symbolic computations on the parametric representation. The feasibility of the proposed method is demonstrated through several examples.
13

Kennedy, Jane B. "Activities: An Interest in Radioactivity". Mathematics Teacher 89, no. 3 (March 1996): 209–30. http://dx.doi.org/10.5951/mt.89.3.0209.

Abstract:
Introduction: This activity explores exponential growth and decay, emphasizing the paired concepts of doubling and half-life. Exponential growth is derived from actual computations to obtain compound interest, whereas exponential decay is modeled by the use of “radioactive” dice. The activity is based on the concept of the differentiated core curriculum, which asserts that all students should be guaranteed equal access to the same curricular topics but recognizes that all students may not explore the content to the same depth or at the same level of formalism.
14

Morgante, Pierpaolo, Coty Deluca, Tegla E. Jones, Gregory J. Aldrich, Norito Takenaka and Roberto Peverati. "Steps toward Rationalization of the Enantiomeric Excess of the Sakurai–Hosomi–Denmark Allylation Catalyzed by Biisoquinoline N,N'-Dioxides Using Computations". Catalysts 11, no. 12 (December 4, 2021): 1487. http://dx.doi.org/10.3390/catal11121487.

Abstract:
Allylation reactions of aldehydes are chemical transformations of fundamental interest, as they give direct access to chiral homoallylic alcohols. In this work, we focus on the full computational characterization of the catalytic activity of substituted biisoquinoline-N,N’-dioxides for the allylation of 2-naphthaldehyde. We characterized the structure of all transition states as well as identified the π stacking interactions that are responsible for their relative energies. Motivated by disagreement with the experimental results, we also performed an assessment of 34 different density functional methods, with the goal of assessing DFT as a general tool for understanding this chemistry. We found that the DFT results are generally consistent as long as functionals that correctly account for dispersion interactions are used. However, agreement with the experimental results is not always guaranteed. We suggest the need for a careful synergy between computations and experiments to correctly interpret the data and use them as a design tool for new and improved asymmetric catalysts.
15

Kurz, Stefan, Dirk Pauly, Dirk Praetorius, Sergey Repin and Daniel Sebastian. "Functional a posteriori error estimates for boundary element methods". Numerische Mathematik 147, no. 4 (March 18, 2021): 937–66. http://dx.doi.org/10.1007/s00211-021-01188-6.

Abstract:
Functional error estimates are well-established tools for a posteriori error estimation and the related adaptive mesh refinement for the finite element method (FEM). The present work proposes a first functional error estimate for the boundary element method (BEM). One key feature is that the derived error estimates are independent of the BEM discretization and provide guaranteed lower and upper bounds for the unknown error. In particular, our analysis covers Galerkin BEM and the collocation method, which makes the approach of particular interest for scientific computations and engineering applications. Numerical experiments for the Laplace problem confirm the theoretical results.
16

DESTERCKE, SEBASTIEN, DIDIER DUBOIS and ERIC CHOJNACKI. "A CONSONANT APPROXIMATION OF THE PRODUCT OF INDEPENDENT CONSONANT RANDOM SETS". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 17, no. 06 (December 2009): 773–92. http://dx.doi.org/10.1142/s0218488509006261.

Abstract:
The belief structure resulting from the combination of consonant and independent marginal random sets is not, in general, consonant. Also, the complexity of such a structure grows exponentially with the number of combined random sets, making it quickly intractable for computations. In this paper, we propose a simple guaranteed consonant outer approximation of this structure. The complexity of this outer approximation does not increase with the number of marginal random sets (i.e., of dimensions), making it easier to handle in uncertainty propagation. Features and advantages of this outer approximation are then discussed, with the help of some illustrative examples.
17

Wan, Min, Jianping Gou, Desong Wang and Xiaoming Wang. "Dynamical Properties of Discrete-Time Background Neural Networks with Uniform Firing Rate". Mathematical Problems in Engineering 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/892794.

Abstract:
The dynamics of a discrete-time background network with uniform firing rate and background input is investigated. The conditions for stability are first derived. An invariant set is then obtained, so that the nondivergence of the network can be guaranteed. Within the invariant set, it is proved that all trajectories of the network starting from any nonnegative value converge to a fixed point under some conditions. In addition, bifurcation and chaos are discussed. It is shown that the network can exhibit bifurcation and chaos as the background input increases. Computations of Lyapunov exponents confirm the chaotic behavior.
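Estimating a Lyapunov exponent, as mentioned at the end of this abstract, can be sketched for a one-dimensional map; the logistic map below is only a stand-in, since the abstract does not reproduce the network's update rule (a positive exponent indicates chaos, a negative one a stable orbit):

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1-2x)| along a trajectory."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        # max(..., tiny) guards against log(0) if the orbit grazes x = 0.5
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(3.2))  # negative: stable periodic orbit
print(lyapunov_logistic(4.0))  # positive (about log 2): chaos
```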
18

Romig, Swantje, Luc Jaulin and Andreas Rauh. "Using Interval Analysis to Compute the Invariant Set of a Nonlinear Closed-Loop Control System". Algorithms 12, no. 12 (December 6, 2019): 262. http://dx.doi.org/10.3390/a12120262.

Abstract:
In recent years, many applications, as well as theoretical properties, of interval analysis have been investigated. Without any claim to completeness, such applications and methodologies range from enclosing the effect of round-off errors in highly accurate numerical computations, through simulating guaranteed enclosures of all reachable states of a dynamic system model with bounded uncertainty in parameters and initial conditions, to the solution of global optimization tasks. By exploiting the fundamental enclosure properties of interval analysis, this paper aims at computing invariant sets of nonlinear closed-loop control systems. For that purpose, Lyapunov-like functions and interval analysis are combined in a novel manner. To demonstrate the proposed techniques for enclosing invariant sets, the systems examined in this paper are controlled via sliding mode techniques, with the invariant sets subsequently enclosed by an interval-based set inversion technique. The applied methods for the control synthesis make use of a suitably chosen Gröbner basis, which is employed to solve Bézout's identity. Illustrative simulation results conclude the paper and visualize the novel combination of sliding mode control with an interval-based computation of invariant sets.
19

ROSENBERG, ARNOLD L. "GUIDELINES FOR DATA-PARALLEL CYCLE-STEALING IN NETWORKS OF WORKSTATIONS II: ON MAXIMIZING GUARANTEED OUTPUT". International Journal of Foundations of Computer Science 11, no. 01 (March 2000): 183–204. http://dx.doi.org/10.1142/s0129054100000107.

Abstract:
We derive efficient guidelines for scheduling data-parallel computations within a draconian mode of cycle-stealing in networks of workstations. In this computing regimen, (the owner of) workstation A contracts with (the owner of) workstation B to take control of B's processor for a guaranteed total of U time units, punctuated by up to some prespecified number p of interrupts which kill any work A has in progress on B. On the one hand, the high overhead — of c time units — for setting up the communications that supply workstation B with work and receive its results recommends that A communicate with B infrequently, supplying B with large amounts of work each time. On the other hand, the risk of losing work in progress when workstation B is interrupted recommends that A supply B with a long sequence of small bundles of work. In this paper, we derive two sets of scheduling guidelines that balance these conflicting pressures in a way that optimizes, up to low-order additive terms, the amount of work that A is guaranteed to accomplish during the cycle-stealing opportunity. Our non-adaptive guidelines, which employ a single fixed strategy until all p interrupts have occurred, produce schedules that achieve at least [Formula: see text] units of work. Our adaptive guidelines, which change strategy after each interrupt, produce schedules that achieve at least [Formula: see text] (low-order terms) units of work. By deriving the theoretical underpinnings of our guidelines, we show that our non-adaptive schedules are optimal in guaranteed work-output and that our adaptive schedules are within low-order additive terms of being optimal.
20

Baghdad Science Journal. "ON NAIVE TAYLOR MODEL INTEGRATION METHOD". Baghdad Science Journal 6, no. 1 (March 1, 2009): 222–30. http://dx.doi.org/10.21123/bsj.6.1.222-230.

Abstract:
Interval methods for verified integration of initial value problems (IVPs) for ODEs have been used for more than 40 years. For many classes of IVPs, these methods can compute guaranteed error bounds for the flow of an ODE, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or the integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimation in interval computations. Berz (see [1]) and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. The latter is an effective tool for reducing both the dependency problem and the wrapping effect. By construction, Taylor model methods appear particularly suitable for integrating nonlinear ODEs. In this paper, we analyze Taylor model based integration of ODEs and compare Taylor models with traditional enclosure methods for IVPs for ODEs. More advanced Taylor model integration methods are also discussed. For clarity, we summarize the major steps of the naive Taylor model method as Algorithm 1.
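The overestimation sources named in this abstract (the dependency problem and the wrapping effect) are easy to reproduce with plain interval arithmetic. The toy Euler-type enclosure below is far cruder than a Taylor model method, but it shows both the guaranteed bound and the blow-up that Taylor models are designed to reduce (an illustrative sketch; directed rounding is ignored for brevity):

```python
class Interval:
    """Minimal interval arithmetic (outward rounding omitted for brevity)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def scale(self, c):
        return Interval(min(c * self.lo, c * self.hi), max(c * self.lo, c * self.hi))
    def __repr__(self):
        return f"[{self.lo:.6f}, {self.hi:.6f}]"

def euler_enclosure(f, x0, h, steps):
    """Naive interval Euler iteration x <- x + h*f(x).  The result encloses
    every Euler polygon starting in x0, but because x appears twice in the
    update, the dependency problem inflates the enclosure at each step."""
    x = x0
    for _ in range(steps):
        x = x + f(x).scale(h)
    return x

# dx/dt = -x with all initial values in [0.9, 1.1].  The exact Euler
# polygons after 100 steps of h = 0.01 lie in about [0.329, 0.403], but the
# naive enclosure is much wider -- exactly the overestimation that Taylor
# model methods reduce.
x = euler_enclosure(lambda x: x.scale(-1.0), Interval(0.9, 1.1), 0.01, 100)
print(x)
```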
21

Saw, Vee-Liem, and Freeman Chee Siong Thun. "Peeling property and asymptotic symmetries with a cosmological constant". International Journal of Modern Physics D 29, no. 03 (February 2020): 2050020. http://dx.doi.org/10.1142/s0218271820500200.

Abstract:
This paper establishes two things in an asymptotically (anti-)de Sitter spacetime, by direct computations in the physical spacetime (i.e. with no involvement of spacetime compactification): (1) The peeling property of the Weyl spinor is guaranteed. In the case where there are Maxwell fields present, the peeling properties of both Weyl and Maxwell spinors similarly hold, if the leading order term of the spin coefficient [Formula: see text] when expanded as inverse powers of [Formula: see text] (where [Formula: see text] is the usual spherical radial coordinate, and [Formula: see text] is null infinity, [Formula: see text]) has coefficient [Formula: see text]. (2) In the absence of gravitational radiation (a conformally flat [Formula: see text]), the group of asymptotic symmetries is trivial, with no room for supertranslations.
22

Xu, Yihao, Zhuo Zhang, Longyong Chen, Zhenhua Li and Ling Yang. "The Adaptive Streaming SAR Back-Projection Algorithm Based on Half-Precision in GPU". Electronics 11, no. 18 (September 6, 2022): 2807. http://dx.doi.org/10.3390/electronics11182807.

Abstract:
The back-projection (BP) algorithm is exact in its imaging principle, but its computational complexity is extremely high. The single-precision arithmetic used in traditional graphics processing unit (GPU) acceleration schemes has low throughput and large video memory usage. An adaptive asynchronous streaming scheme for the BP algorithm based on half-precision is proposed in this study and then extended to the fast back-projection (FBP) algorithm. In this scheme, an adaptive loss-factor selection strategy preserves the dynamic range of the data, the asynchronous streaming structure ensures the efficiency of large-scene imaging, and mixed-precision data processing ensures imaging quality. The proposed schemes are compared with single-precision BP, FBP, and fast factorized back-projection (FFBP) algorithms in GPU. The experimental results show that the two half-precision acceleration schemes reduce video memory usage to 74% and 59% of that of the single-precision schemes with guaranteed image quality. The efficiency improvements of the proposed schemes are almost one and 0.5 times greater than those of the corresponding single-precision schemes, and the advantage becomes more obvious when dealing with large computations.
APA, Harvard, Vancouver, ISO, etc. styles
23

Abo Khamis, Mahmoud, Hung Q. Ngo, Reinhard Pichler, Dan Suciu and Yisu Remy Wang. "Convergence of Datalog over (Pre-) Semirings". ACM SIGMOD Record 52, no. 1 (7.06.2023): 75–82. http://dx.doi.org/10.1145/3604437.3604454.

Full source text
Abstract:
Recursive queries have been traditionally studied in the framework of datalog, a language that restricts recursion to monotone queries over sets, which is guaranteed to converge in polynomial time in the size of the input. But modern big data systems require recursive computations beyond the Boolean space. In this paper we study the convergence of datalog when it is interpreted over an arbitrary semiring. We consider an ordered semiring, define the semantics of a datalog program as a least fixpoint in this semiring, and study the number of steps required to reach that fixpoint, if ever. We identify algebraic properties of the semiring that correspond to certain convergence properties of datalog programs. Finally, we describe a class of ordered semirings on which one can generalize the semi-naïve evaluation algorithm to compute their minimal fixpoints.
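The least-fixpoint semantics sketched in this abstract can be illustrated with a naive (not semi-naïve) evaluation over the tropical semiring (min, +), where the fixpoint of the recursive path rule yields all-pairs shortest paths. This is a toy sketch, not the paper's algorithm; all names are illustrative.

```python
# Naive least-fixpoint evaluation of a recursive "path" rule over the
# tropical semiring: "sum" is min, "product" is +, "zero" is infinity.
INF = float("inf")

def fixpoint_shortest_paths(nodes, edges):
    """Iterate the immediate-consequence step until nothing changes."""
    dist = {(u, v): INF for u in nodes for v in nodes}
    for u in nodes:
        dist[(u, u)] = 0.0                       # base case: empty path
    for (u, v, w) in edges:
        dist[(u, v)] = min(dist[(u, v)], w)      # base case: single edge
    changed, steps = True, 0
    while changed:                               # converges: values only decrease
        changed = False
        steps += 1
        for u in nodes:
            for k in nodes:
                for v in nodes:
                    cand = dist[(u, k)] + dist[(k, v)]   # semiring "product"
                    if cand < dist[(u, v)]:              # semiring "sum" = min
                        dist[(u, v)] = cand
                        changed = True
    return dist, steps
```

Semi-naïve evaluation would instead propagate only the facts that changed in the previous iteration.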
APA, Harvard, Vancouver, ISO, etc. styles
24

Chiang, David, Colin McDonald and Chung-chieh Shan. "Exact Recursive Probabilistic Programming". Proceedings of the ACM on Programming Languages 7, OOPSLA1 (6.04.2023): 665–95. http://dx.doi.org/10.1145/3586050.

Full source text
Abstract:
Recursive calls over recursive data are useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm.
APA, Harvard, Vancouver, ISO, etc. styles
25

Tropin, D. V., A. M. Ershov, D. P. Nikolaev and V. V. Arlazarov. "Advanced Hough-based method for on-device document localization". Computer Optics 5, no. 45 (September 2021): 702–12. http://dx.doi.org/10.18287/2412-6179-co-895.

Full source text
Abstract:
The demand for on-device document recognition systems increases in conjunction with the emergence of stricter privacy and security requirements. In such systems, there is no data transfer from the end device to third-party information-processing servers. The response time is vital to the user experience of on-device document recognition. Combined with the unavailability of discrete GPUs, powerful CPUs, or a large RAM capacity on consumer-grade end devices such as smartphones, the time limitations put significant constraints on the computational complexity of the applied algorithms for on-device execution. In this work, we consider document location in an image without prior knowledge of the document content or its internal structure. According to the published works, at least 5 systems offer solutions for on-device document location. All these systems use a location method which can be considered Hough-based. The precision of such systems seems to be lower than that of state-of-the-art solutions which were not designed to account for limited computational resources. We propose an advanced Hough-based method. In contrast with other approaches, it accounts for the geometric invariants of the central projection model and combines both edge and color features for document boundary detection. The proposed method achieved the second-best precision on the SmartDoc dataset, surpassed only by a U-net-like neural network. When evaluated on the more challenging MIDV-500 dataset, the proposed algorithm achieved the best precision among published methods. Our method retains its applicability to on-device computation.
APA, Harvard, Vancouver, ISO, etc. styles
27

Al-Adwan, Ibrahim M. "Intelligent Path Planning Approach for Autonomous Mobile Robot". Journal of Robotics and Mechatronics 33, no. 6 (20.12.2021): 1423–28. http://dx.doi.org/10.20965/jrm.2021.p1423.

Full source text
Abstract:
This paper presents a new path planning algorithm for an autonomous mobile robot. It is desired that the robot reaches its goal in a known or partially known environment (e.g., a warehouse or an urban environment) and avoids collisions with walls and other obstacles. To this end, a new, efficient, simple, and flexible path finder strategy for the robot is proposed in this paper. With the proposed strategy, the optimal path from the robot’s current position to the goal position is guaranteed. The environment is represented as a grid-based map, which is then divided into a predefined number of subfields to reduce the number of required computations. This leads to a reduction in the load on the controller and allows a real-time response. To evaluate the flexibility and efficiency of the proposed strategy, several tests were simulated with environments of different sizes and obstacle distributions. The experimental results demonstrate the reliability and efficiency of the proposed algorithm.
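The grid-based formulation described above can be illustrated with a minimal breadth-first search on an occupancy grid, which is guaranteed to return a shortest 4-connected path when one exists. This is a generic sketch, not the paper's subfield-based planner; all names are hypothetical.

```python
from collections import deque

def shortest_grid_path(grid, start, goal):
    """BFS on a 4-connected occupancy grid; grid[r][c] == 1 marks an obstacle.
    Returns an optimal path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                   # goal unreachable
```

Partitioning the map into subfields, as in the paper, would bound the search region and reduce the number of expanded cells.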
APA, Harvard, Vancouver, ISO, etc. styles
28

Zhang, Aihua, Yongchao Wang, Zhiqiang Zhang and Hamid Reza Karimi. "Robust Control Allocation for Spacecraft Attitude Stabilization under Actuator Faults and Uncertainty". Mathematical Problems in Engineering 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/789327.

Full source text
Abstract:
A robust control allocation scheme is developed for rigid spacecraft attitude stabilization in the presence of actuator partial loss, actuator failure, and actuator misalignment. First, a neural network fault detection scheme is proposed. Second, an adaptive attitude tracking strategy is employed that realizes fault-tolerant control under actuator partial loss and actuator failure within λmin = 0.5. Attitude tracking and fault detection run throughout the procedure. Once a fault occurs that prevents attitude stabilization for 30 s, the robust control allocation strategy is activated automatically. The robust control allocation compensates for the control effectiveness uncertainty caused by actuator misalignment. Unknown disturbances, an uncertain inertia matrix, and even actuator errors with limited actuators are all considered in the controller design process. All this is achieved with inexpensive online computations. Numerical results are also presented that not only highlight the closed-loop performance benefits of the derived control law but also illustrate its great robustness.
APA, Harvard, Vancouver, ISO, etc. styles
29

WEIN, RON, OLEG ILUSHIN, GERSHON ELBER and DAN HALPERIN. "CONTINUOUS PATH VERIFICATION IN MULTI-AXIS NC-MACHINING". International Journal of Computational Geometry & Applications 15, no. 04 (August 2005): 351–77. http://dx.doi.org/10.1142/s0218195905001749.

Full source text
Abstract:
We introduce a new approach to the problem of collision detection between a rotating milling-cutter of an NC-machine and a model of a solid workpiece, as the rotating cutter continuously moves near the workpiece. Having five degrees of motion freedom, this problem is hard to solve exactly and we approximate the motion of the tool by a sequence of sub-paths of pure translations interleaved with pure rotations. The collision-detection problem along each sub-path is then solved by using radial projection of the obstacles (the workpiece and the static parts of the NC-machine) around the tool axis to obtain a collection of critical surface patches in ℝ³, and by examining planar silhouettes of these surface patches. We thus reduce the problem to successive computations of the lower envelope of a set of planar curves, which we intersect with the profile of the tool. Our reduction is exact, and incurs no loss of accuracy. We have implemented our algorithm in the IRIT environment for solid modeling, using an extension package of the CGAL library for computing envelopes. The algorithm, combined with the proper data structures, solves the collision detection problem in a robust manner, yet it yields efficient computation times as our experiments show. Our approach produces exact results in case of purely translational motion, and provides guaranteed (and good) approximation bounds in case the motion includes rotation.
APA, Harvard, Vancouver, ISO, etc. styles
30

Ida, Yasutoshi, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai and Yasuhiro Fujiwara. "Fast Regularized Discrete Optimal Transport with Group-Sparse Regularizers". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (26.06.2023): 7980–87. http://dx.doi.org/10.1609/aaai.v37i7.25965.

Full source text
Abstract:
Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions constructed from data samples on two different domains. While it has a wide range of applications in machine learning, in some settings, such as unsupervised domain adaptation, only the data sampled from one of the domains has class labels. In this kind of problem setting, a group-sparse regularizer is frequently leveraged as a regularization term to handle class labels. In particular, it can preserve the label structure of the data samples by assigning data samples with the same class label to one group-sparse regularization term. As a result, we can measure the distance while utilizing label information by solving the regularized optimization problem with gradient-based algorithms. However, the gradient computation is expensive when the number of classes or data samples is large, because the number of regularization terms and their respective sizes also become large. This paper proposes fast discrete OT with group-sparse regularizers. Our method is based on two ideas. The first is to safely skip the computations of gradients that must be zero. The second is to efficiently extract the gradients that are expected to be nonzero. Our method is guaranteed to return the same value of the objective function as the original approach. Experiments demonstrate that our method is up to 8.6 times faster than the original method without degrading accuracy.
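The "safe skip" idea can be illustrated on a plain group-lasso term: for a zero block, 0 is a valid subgradient of the group norm, so that group's computation can be skipped without changing the result. A simplified sketch, not the authors' screening rule; all names are hypothetical.

```python
import numpy as np

def group_lasso_grad(x, groups, eps=0.0):
    """(Sub)gradient of sum_g ||x_g||_2, skipping groups whose block is zero.
    For a zero block the subdifferential contains 0, so skipping is exact."""
    g = np.zeros_like(x)
    skipped = 0
    for idx in groups:
        block = x[idx]
        norm = np.linalg.norm(block)
        if norm <= eps:          # contribution is (a valid) zero: skip the work
            skipped += 1
            continue
        g[idx] = block / norm    # gradient of the Euclidean norm of the block
    return g, skipped
```

With many classes, most blocks stay at zero during the iterations, so the skip saves most of the per-step gradient cost.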
APA, Harvard, Vancouver, ISO, etc. styles
31

Ifrim, Ioana, Vassil Vassilev and David J. Lange. "GPU Accelerated Automatic Differentiation With Clad". Journal of Physics: Conference Series 2438, no. 1 (1.02.2023): 012043. http://dx.doi.org/10.1088/1742-6596/2438/1/012043.

Full source text
Abstract:
Automatic Differentiation (AD) is instrumental for science and industry. It is a tool to evaluate the derivative of a function specified through a computer program. The range of AD application domains spans from Machine Learning to Robotics to High Energy Physics. Computing gradients with the help of AD is guaranteed to be more precise than the numerical alternative, and requires only a low, constant factor more arithmetic operations than the original function. Moreover, AD applications to domain problems are typically compute-bound. They are often limited by the computational requirements of high-dimensional parameters and thus can benefit from parallel implementations on graphics processing units (GPUs). Clad aims to enable differential analysis for C/C++ and CUDA and is a compiler-assisted AD tool available both as a compiler extension and in ROOT. Moreover, Clad works as a plugin extending the Clang compiler; as a plugin extending the interactive interpreter Cling; and as a Jupyter kernel extension based on xeus-cling. We demonstrate the advantages of parallel gradient computations on GPUs with Clad. We explain how to bring forth a new layer of optimization and a proportional speed up by extending Clad to support CUDA. The gradients of well-behaved C++ functions can be automatically executed on a GPU. The library can be easily integrated into existing frameworks or used interactively. Furthermore, we demonstrate the achieved application performance improvements, including (≈10x) in ROOT histogram fitting and corresponding performance gains from offloading to GPUs.
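Forward-mode AD, one of the techniques underlying tools like Clad, can be illustrated with dual numbers: each value carries its derivative alongside it, so the result is exact up to floating-point rounding, with no truncation error. This is a minimal Python sketch for illustration only; Clad itself is a Clang plugin for C/C++/CUDA, not a Python library.

```python
class Dual:
    """Forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x, exact up to float rounding."""
    return f(Dual(x, 1.0)).dot
```

Unlike finite differences, no step size has to be chosen, which is why AD is guaranteed to be more precise than the numerical alternative.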
APA, Harvard, Vancouver, ISO, etc. styles
32

Miao, Ji, Chunlin Gong and Chunna Li. "Two-stage aerodynamic optimization method based on early termination of CFD convergence and variable-fidelity model". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 1 (February 2021): 148–58. http://dx.doi.org/10.1051/jnwpu/20213910148.

Full source text
Abstract:
An efficient aerodynamic design optimization method is of great value for improving the aerodynamic performance of a small UAV's airfoil. Using engineering or semi-engineering estimation methods to analyze aerodynamic forces in aerodynamic optimization problems costs little computational time, but accuracy cannot be guaranteed. CFD methods ensure high accuracy but need much more computational cost, which is unaffordable for optimization. Surrogate-based optimization can reduce the number of high-fidelity analyses to increase optimization efficiency. However, the cost of CFD analyses is still huge for aerodynamic optimization due to multiple design variables, multiple local optima, and strong nonlinearities. To solve this problem, a two-stage aerodynamic optimization method based on early termination of CFD convergence and a variable-fidelity model is proposed. In the first optimization stage, solutions obtained by early termination of CFD convergence and converged CFD solutions are treated as low- and high-fidelity data, respectively, for building the variable-fidelity model. Then, the multi-island genetic algorithm is used for global optimization based on the built variable-fidelity model. The modeling efficiency can be greatly improved because many cheap low-fidelity data are available. In the second optimization stage, the global optimum from the first stage is used as the starting point of the Hooke-Jeeves algorithm, which searches locally based on converged CFD computations in order to acquire a better optimum. The proposed method is applied to optimizing the aerodynamic performance of a small UAV's airfoil and is compared with the EGO method based on a single-fidelity Kriging surrogate model. The results show that the present two-stage aerodynamic optimization method consumes less time.
APA, Harvard, Vancouver, ISO, etc. styles
33

Cook, Sebastien, and Paulo Garcia. "Arbitrarily Parallelizable Code: A Model of Computation Evaluated on a Message-Passing Many-Core System". Computers 11, no. 11 (18.11.2022): 164. http://dx.doi.org/10.3390/computers11110164.

Full source text
Abstract:
The number of processing elements per solution is growing. From embedded devices now employing (often heterogeneous) multi-core processors, across many-core scientific computing platforms, to distributed systems comprising thousands of interconnected processors, parallel programming of one form or another is now the norm. Understanding how to efficiently parallelize code, however, is still an open problem, and the difficulties are exacerbated across heterogeneous processing, and especially at run time, when it is sometimes desirable to change the parallelization strategy to meet non-functional requirements (e.g., load balancing and power consumption). In this article, we investigate the use of a programming model based on series-parallel partial orders: computations are expressed as directed graphs that expose parallelization opportunities and necessary sequencing by construction. This programming model is suitable as an intermediate representation for higher-level languages. We then describe a model of computation for such a programming model that maps such graphs into a stack-based structure more amenable to hardware processing. We describe the formal small-step semantics for this model of computation and use this formal description to show that the model can be arbitrarily parallelized, at compile and runtime, with correct execution guaranteed by design. We empirically support this claim and evaluate parallelization benefits using a prototype open-source compiler, targeting a message-passing many-core simulation. We empirically verify the correctness of arbitrary parallelization, supporting the validity of our formal semantics, analyze the distribution of operations within cores to understand the implementation impact of the paradigm, and assess execution time improvements when five micro-benchmarks are automatically and randomly parallelized across 2 × 2 and 4 × 4 multi-core configurations, resulting in execution time decrease by up to 95% in the best case.
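The dependency-graph execution model described above can be sketched with a level-by-level scheduler: any set of tasks whose prerequisites have completed may run in parallel, and the necessary sequencing is enforced by construction. This is a toy sketch under assumed task/dependency structures, not the paper's stack-based model of computation.

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps, workers=4):
    """Run a task DAG level by level: in each round, every task whose
    prerequisites are done runs; ready tasks within a round run in parallel.
    tasks: name -> zero-arg callable; deps: name -> prerequisite names."""
    done, results = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            ready = [n for n in tasks
                     if n not in done and all(d in done for d in deps.get(n, []))]
            if not ready:
                raise ValueError("dependency cycle detected")
            for name, out in zip(ready, pool.map(lambda n: tasks[n](), ready)):
                results[name] = out
                done.add(name)
    return results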
APA, Harvard, Vancouver, ISO, etc. styles
34

Elkordy, Ahmed Roushdy, Jiang Zhang, Yahya H. Ezzeldin, Konstantinos Psounis and Salman Avestimehr. "How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?" Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 510–26. http://dx.doi.org/10.56553/popets-2023-0030.

Full source text
Abstract:
Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users while avoiding moving the data off-device. However, while data never leaves users’ devices, privacy still cannot be guaranteed, since significant computations on users’ training data are shared in the form of trained local models. These local models have recently been shown to pose a substantial privacy threat through different privacy attacks such as model inversion attacks. As a remedy, Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL, by guaranteeing that the server can only learn the global aggregated model update but not the individual model updates. While SA ensures no additional information is leaked about the individual model update beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer, as information about the individual dataset can still potentially leak through the aggregated model computed at the server. In this work, we perform a first analysis of the formal privacy guarantees for FL with SA. Specifically, we use Mutual Information (MI) as a quantification metric and derive upper bounds on how much information about each user's dataset can leak through the aggregated model update. When using the FedSGD aggregation algorithm, our theoretical bounds show that the amount of privacy leakage reduces linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI Neural Estimator to empirically evaluate the privacy leakage under different FL setups on both the MNIST and CIFAR10 datasets. Our experiments verify our theoretical bounds for FedSGD, which show a reduction in privacy leakage as the number of users and local batch size grow, and an increase in privacy leakage as the number of training rounds increases. We also observe similar dependencies for the FedAvg and FedProx protocols.
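The secure aggregation idea referenced above can be illustrated with toy pairwise masking: each pair of users agrees on a random mask that one adds and the other subtracts, so all masks cancel in the server's sum while each individual masked update looks random. This is a minimal sketch of the principle, not an actual SA protocol (which also handles dropouts and key agreement); names and the modulus are hypothetical.

```python
import random

def make_masked_updates(updates, modulus=2**16, seed=0):
    """Mask integer updates pairwise: user i adds +m_ij, user j adds -m_ij.
    The masks cancel in the aggregate, so only the sum is revealed."""
    rng = random.Random(seed)
    masked = [u % modulus for u in updates]
    n = len(masked)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)           # shared pairwise mask
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

def server_aggregate(masked, modulus=2**16):
    """The server sums the masked updates; masks cancel mod the modulus."""
    return sum(masked) % modulus
```

The paper's point is that even though the server sees only this sum, the sum itself can still leak information about individual datasets, which their MI bounds quantify.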
APA, Harvard, Vancouver, ISO, etc. styles
35

NAGY, MARIUS, and SELIM G. AKL. "COPING WITH DECOHERENCE: PARALLELIZING THE QUANTUM FOURIER TRANSFORM". Parallel Processing Letters 20, no. 03 (September 2010): 213–26. http://dx.doi.org/10.1142/s012962641000017x.

Full source text
Abstract:
Rank-varying computational complexity describes those computations in which the complexity of executing each step is not a constant, but evolves throughout the computation as a function of the order of execution of each step [2]. This paper identifies practical instances of this computational paradigm in the procedures for computing the quantum Fourier transform and its inverse. It is shown herein that under the constraints imposed by quantum decoherence, only a parallel approach can guarantee a reliable solution or, alternatively, improve scalability.
APA, Harvard, Vancouver, ISO, etc. styles
36

Zelelew, M. B., and K. Alfredsen. "Sensitivity-guided evaluation of the HBV hydrological model parameterization". Journal of Hydroinformatics 15, no. 3 (4.12.2012): 967–90. http://dx.doi.org/10.2166/hydro.2012.011.

Full source text
Abstract:
Applying hydrological models to river basin management depends on the availability of relevant data to constrain the model residuals. The estimation of reliable parameter values for parameterized models is not guaranteed. Identification of the influential model parameters controlling model response variations, whether through main or interaction effects, is therefore critical for minimizing the model's parametric dimension and limiting prediction uncertainty. In this study, the Sobol variance-based sensitivity analysis method was applied to quantify the importance of the HBV conceptual hydrological model parameterization. The analysis was also supplemented by the generalized sensitivity analysis method to assess relative model parameter sensitivities in cases of negative Sobol sensitivity index computations. The study simulated runoff responses at twelve catchments varying in size. The results showed that varying a minimum of four to six influential model parameters for high-flow conditions, and a minimum of six influential model parameters for low-flow conditions, can sufficiently capture the catchments' response characteristics. Conversely, varying more than nine out of 15 model parameters will not make substantial model performance changes in any of the case studies.
APA, Harvard, Vancouver, ISO, etc. styles
37

Yazdani, E., Y. Cang, R. Sadighi-Bonabi, H. Hora and F. Osman. "Layers from initial Rayleigh density profiles by directed nonlinear force driven plasma blocks for alternative fast ignition". Laser and Particle Beams 27, no. 1 (23.01.2009): 149–56. http://dx.doi.org/10.1017/s0263034609000214.

Full source text
Abstract:
Measurements of extremely new phenomena during the interaction of picosecond laser pulses of terawatt and higher power with plasmas revealed drastically different anomalies, in contrast to the usual observations, when the laser pulses were very clean, with a contrast ratio higher than 10⁸. This was guaranteed by the suppression of prepulses during less than dozens of ps before the arrival of the main pulse, resulting in the suppression of relativistic self-focusing. This anomaly was confirmed in many experimental details, and explained and numerically reproduced as a nonlinear force acceleration of skin layers generating quasi-neutral plasma blocks with ion current densities above 10¹¹ A/cm². This may support the requirement to produce fast-ignition deuterium-tritium fusion at densities not much higher than the solid state by a single-shot PW-ps laser pulse. With the aim of achieving separately studied ignition conditions, we study numerically how the necessary nonlinear-force-accelerated plasma blocks may reach the highest possible thickness by using optimized dielectric properties of the irradiated plasma. The use of double Rayleigh initial density profiles results in low-reflectivity directed plasma blocks of modest temperatures that are many wavelengths thick. Results of computations with the genuine two-fluid model are presented.
APA, Harvard, Vancouver, ISO, etc. styles
38

Ern, Alexandre, Iain Smears and Martin Vohralík. "Equilibrated flux a posteriori error estimates in $L^2(H^1)$-norms for high-order discretizations of parabolic problems". IMA Journal of Numerical Analysis 39, no. 3 (25.06.2018): 1158–79. http://dx.doi.org/10.1093/imanum/dry035.

Full source text
Abstract:
We consider the a posteriori error analysis of fully discrete approximations of parabolic problems based on conforming $hp$-finite element methods in space and an arbitrary order discontinuous Galerkin method in time. Using an equilibrated flux reconstruction we present a posteriori error estimates yielding guaranteed upper bounds on the $L^2(H^1)$-norm of the error, without unknown constants and without restrictions on the spatial and temporal meshes. It is known from the literature that the analysis of the efficiency of the estimators represents a significant challenge for $L^2(H^1)$-norm estimates. Here we show that the estimator is bounded by the $L^2(H^1)$-norm of the error plus the temporal jumps under the one-sided parabolic condition $h^2 \lesssim \tau $. This result improves on earlier works that required stronger two-sided hypotheses such as $h \simeq \tau $ or $h^2\simeq \tau $; instead, our result now encompasses practically relevant cases for computations and allows for locally refined spatial meshes. The constants in our bounds are robust with respect to the mesh and time-step sizes, the spatial polynomial degrees and the refinement and coarsening between time steps, thereby removing any transition condition.
APA, Harvard, Vancouver, ISO, etc. styles
39

Pandi, Suganya, and Pradeep Reddy Ch. "PMCAR: proactive mobility and congestion aware route prediction mechanism in IoMT for delay sensitive medical applications to ensure reliability in COVID-19 pandemic situation". International Journal of Pervasive Computing and Communications 16, no. 5 (5.08.2020): 429–46. http://dx.doi.org/10.1108/ijpcc-06-2020-0061.

Full source text
Abstract:
Purpose: The inclusion of mobile nodes (MNs) in the Internet of Things (IoT) further increases challenges such as frequent network disconnection and intermittent connectivity because of the high mobility rate of nodes. This paper aims to propose a proactive mobility and congestion aware route prediction mechanism (PMCAR) to find a congestion-free route from a leaf to the destination oriented directed acyclic graph root (DODAG-ROOT) which considers the number of MNs connected to a static node. The paper compares the proposed technique (PMCAR) with RPL (OF0), which considers the HOP-COUNT to determine the path from leaf to DODAG-ROOT. The authors performed a simulation of the proposed technique in MATLAB to present the benefits in terms of packet loss and energy consumption.
Design/methodology/approach: In this pandemic situation, mobile devices and the IoT play a major role in predicting and preventing the CoronaVirus Disease of 2019 (COVID-19). A huge amount of computation is happening with the data generated in this pandemic with the help of mobile devices. To route the data to remote locations through the network, a proper routing mechanism without congestion is necessary; the PMCAR mechanism is introduced to achieve this. The Internet of mobile Things (IoMT) is an extension of the IoT that consists of static embedded devices and sensors. The IoMT includes MNs which sense data and transfer it to the DODAG-ROOT. The nodes in the IoMT are characterised by low power, low memory, low computing power and low bandwidth support. Several challenges are encountered by routing protocols defined for IPv6 over low-power wireless personal area networks to ensure reduced packet loss, less delay, less energy consumption and guaranteed quality of service.
Findings: The results obtained show a significant improvement compared to existing approaches such as RPL (OF0). The proposed route prediction mechanism can be applied largely to medical applications which are delay-sensitive, particularly in pandemic situations where the number of patients involved and the data gathered from them flow towards a central root for analysis. Supporting data transmission from patients to doctors without much delay and packet loss makes responses and decisions available more quickly, which is a vital part of medical applications.
Originality/value: The computational technologies in this COVID-19 pandemic situation need timely data for computation without delay. The IoMT is enabled with various devices such as mobiles, sensors and wearable devices. These devices are dedicated to collecting data from patients or other objects at different geographical locations at predetermined time intervals. Timely delivery of data is essential for accurate computation, so a routing mechanism without delay and congestion is necessary to handle this pandemic situation. The proposed PMCAR mechanism ensures the reliable delivery of data for immediate computation, which can be used to make decisions in prevention and prediction.
APA, Harvard, Vancouver, ISO, etc. styles
40

Vinti, Agrawal. "Corporate Guarantee: Computation of Guarantee Fees at Arm’s Length Price". Christ University Law Journal 5, no. 1 (30.01.2016): 19–34. http://dx.doi.org/10.12728/culj.8.2.

Full source text
Abstract:
The most recent controversy before Indian tax courts pertains to international transactions involving intra-group financing. These include short- as well as long-term borrowing and lending, guarantees, etc. The debate centres on transfer pricing (hereinafter referred to as 'TP') provisions and how the computation of the arm’s length price is to be done. The article focuses on one aspect of intra-group financing, that is, the provision of corporate guarantees. The paper first describes the meaning of a guarantee and then highlights various provisions relating to corporate guarantees under the Companies Act, 2013 and the Foreign Exchange Management Act, 1999. The article then describes various legislative provisions relating to the transfer pricing issue and how guarantees fit into such provisions. The position is ambiguous, as the courts have not settled on a principle in this regard. The author thus highlights the various approaches the Indian tax courts have adopted when the issue of corporate guarantees came before them. The paper answers the question of whether the guarantee fee can be computed at arm’s length price using the comparable uncontrolled price method. The author concludes by stating that while the courts have attempted to resolve these ambiguities, there is still scope for further reform.
APA, Harvard, Vancouver, ISO and other styles
41

Fan, Wenfei. "Big graphs". Proceedings of the VLDB Endowment 15, no. 12 (August 2022): 3782–97. http://dx.doi.org/10.14778/3554821.3554899.

Full text source
Abstract:
Big data is typically characterized by the 4V's: Volume, Velocity, Variety and Veracity. When it comes to big graphs, these challenges become even more staggering. Each of the 4V's raises new questions, from theory to systems and practice. Is it possible to parallelize sequential graph algorithms and guarantee the correctness of the parallelized computations? Given a computational problem, does there exist a parallel algorithm for it that is guaranteed to reduce parallel runtime when more machines are used? Is there a systematic method for developing incremental algorithms with effectiveness guarantees in response to frequent updates? Is it possible to write queries across relational databases and semistructured graphs in SQL? Can we unify logic rules and machine learning to improve the quality of graph-structured data and deduce associations between entities? This paper aims to incite interest and curiosity in these topics. It raises as many questions as it answers.
APA, Harvard, Vancouver, ISO and other styles
42

ALVIANO, MARIO, and RAFAEL PEÑALOZA. "Fuzzy answer sets approximations". Theory and Practice of Logic Programming 13, no. 4-5 (July 2013): 753–67. http://dx.doi.org/10.1017/s1471068413000471.

Full text source
Abstract:
Fuzzy answer set programming (FASP) is a recent formalism for knowledge representation that enriches the declarativity of answer set programming by allowing propositions to be graded. To date, no implementations of FASP solvers are available, and all current proposals are based on compiling logic programs into different paradigms, such as mixed integer programs or bilevel programs. These approaches introduce many auxiliary variables, which may affect solver performance negatively. To limit this downside, operators for approximating fuzzy answer sets can be introduced: given a FASP program, these operators compute lower and upper bounds for all atoms in the program such that all answer sets lie between these bounds. This paper analyzes several operators of this kind based on linear programming, fuzzy unfounded sets and source pointers. Furthermore, the paper reports on a prototypical implementation, also describing strategies for avoiding computations of these operators when they are guaranteed not to improve the current bounds. The operators and their implementation can be used to obtain more constrained mixed integer or bilevel programs, or even to provide a basis for implementing a native FASP solver. Interestingly, the semantics of relevant classes of programs with unique answer sets, such as positive programs and programs with stratified negation, can already be computed by the prototype without the need for an external tool.
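The bound-computing idea can be illustrated with a minimal sketch (assumptions: a toy rule format `(head, body, weight)` with Gödel (minimum) conjunction, and an operator that only tightens lower bounds; the paper's operators based on linear programming, unfounded sets and source pointers are considerably richer):

```python
# Hedged sketch (not the paper's operators): interval iteration that
# tightens per-atom [lower, upper] bounds for a tiny fuzzy program.
# Each rule (head, body, weight) asserts head >= min(weight, min of
# body truth values); any answer set lies within the final bounds.

def tighten(rules, atoms, iters=100):
    lo = {a: 0.0 for a in atoms}   # lower bounds, tightened upward
    up = {a: 1.0 for a in atoms}   # upper bounds, untouched in this sketch
    for _ in range(iters):
        changed = False
        for head, body, weight in rules:
            new_lo = min([weight] + [lo[b] for b in body])
            if new_lo > lo[head] + 1e-12:
                lo[head] = new_lo
                changed = True
        if not changed:            # fixpoint reached
            break
    return lo, up
```

The iteration converges because the lower bounds are monotonically non-decreasing and bounded above by 1.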
APA, Harvard, Vancouver, ISO and other styles
43

Rohou, Simon, Luc Jaulin, Lyudmila Mihaylova, Fabrice Le Bars and Sandor M. Veres. "Guaranteed computation of robot trajectories". Robotics and Autonomous Systems 93 (July 2017): 76–84. http://dx.doi.org/10.1016/j.robot.2017.03.020.

Full text source
APA, Harvard, Vancouver, ISO and other styles
44

Yi, Pu (Luke), and Sara Achour. "Hardware-Aware Static Optimization of Hyperdimensional Computations". Proceedings of the ACM on Programming Languages 7, OOPSLA2 (16.10.2023): 1–30. http://dx.doi.org/10.1145/3622797.

Full text source
Abstract:
Binary spatter code (BSC)-based hyperdimensional computing (HDC) is a highly error-resilient approximate computational paradigm suited for error-prone, emerging hardware platforms. In BSC HDC, the basic datatype is a hypervector, a typically large binary vector, where the size of the hypervector has a significant impact on the fidelity and resource usage of the computation. Typically, the hypervector size is dynamically tuned to deliver the desired accuracy; this process is time-consuming and often produces hypervector sizes that lack accuracy guarantees and produce poor results when reused for very similar workloads. We present Heim, a hardware-aware static analysis and optimization framework for BSC HD computations. Heim analytically derives the minimum hypervector size that minimizes resource usage and meets the target accuracy requirement. Heim guarantees the optimized computation converges to the user-provided accuracy target on expectation, even in the presence of hardware error. Heim deploys a novel static analysis procedure that unifies theoretical results from the neuroscience community to systematically optimize HD computations. We evaluate Heim against dynamic tuning-based optimization on 25 benchmark data structures. Given a 99% accuracy requirement, Heim-optimized computations achieve a 99.2%-100.0% median accuracy, up to 49.5% higher than dynamic tuning-based optimization, while achieving 1.15x-7.14x reductions in hypervector size compared to HD computations that achieve comparable query accuracy and finding parametrizations 30.0x-100167.4x faster than dynamic tuning-based approaches. We also use Heim to systematically evaluate the performance benefits of using analog CAMs and multiple-bit-per-cell ReRAM over conventional hardware, while maintaining iso-accuracy – for both emerging technologies, we find usages where the emerging hardware imparts significant benefits.
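The dependence of fidelity on hypervector size can be illustrated with a back-of-the-envelope bound (a hedged sketch of the underlying statistics, not Heim's actual derivation): for random binary hypervectors, unrelated vectors agree on about half their bits, and Hoeffding's inequality bounds the probability that the normalized Hamming similarity deviates from its expectation by more than a margin.

```python
import math

# Illustrative only: for d-dimensional random binary hypervectors, the
# probability that an unrelated vector's normalized bit-agreement
# deviates from 1/2 by more than 'margin' is <= exp(-2 * d * margin**2)
# (Hoeffding), so a dimension meeting a target error rate can be
# bounded analytically rather than tuned dynamically.

def min_dimension(margin, target_error):
    """Smallest d with exp(-2 * d * margin**2) <= target_error."""
    return math.ceil(math.log(1.0 / target_error) / (2.0 * margin ** 2))
```

Halving the margin quadruples the required dimension, which is why hypervector size matters so much for resource usage.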
APA, Harvard, Vancouver, ISO and other styles
45

Yu, Weiren, Julie McCann, Chengyuan Zhang and Hakan Ferhatosmanoglu. "Scaling High-Quality Pairwise Link-Based Similarity Retrieval on Billion-Edge Graphs". ACM Transactions on Information Systems 40, no. 4 (31.10.2022): 1–45. http://dx.doi.org/10.1145/3495209.

Full text source
Abstract:
SimRank is an attractive link-based similarity measure used in fertile fields of Web search and sociometry. However, the existing deterministic method by Kusumoto et al. [24] for retrieving SimRank does not always produce high-quality similarity results, as it fails to accurately obtain the diagonal correction matrix D. Moreover, SimRank has a "connectivity trait" problem: increasing the number of paths between a pair of nodes would decrease its similarity score. The best-known remedy, SimRank++ [1], cannot completely fix this problem, since its score would still be zero if there are no common in-neighbors between two nodes. In this article, we study fast high-quality link-based similarity search on billion-scale graphs. (1) We first devise a "varied-D" method to accurately compute SimRank in linear memory. We also aggregate duplicate computations, which reduces the time of [24] from quadratic to linear in the number of iterations. (2) We propose a novel "cosine-based" SimRank model to circumvent the "connectivity trait" problem. (3) To substantially speed up the partial-pairs "cosine-based" SimRank search on large graphs, we devise an efficient dimensionality reduction algorithm, PSR#, with guaranteed accuracy. (4) We give mathematical insights into the semantic difference between SimRank and its variant, and correct an argument in [24] that "if D is replaced by a scaled identity matrix (1-γ)I, their top-K rankings will not be affected much". (5) We propose a novel method that can accurately convert from Li et al.'s SimRank S̃ to Jeh and Widom's SimRank S. (6) We propose GSR#, a generalisation of our "cosine-based" SimRank model, to quantify pairwise similarities across two distinct graphs, unlike SimRank, which would assess nodes across two graphs as completely dissimilar.
Extensive experiments on various datasets demonstrate the superiority of our proposed approaches in terms of high search quality, computational efficiency, accuracy, and scalability on billion-edge graphs.
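For reference, the classical Jeh and Widom SimRank iteration that such methods accelerate can be written in a few lines (a naive all-pairs sketch for a small graph given as in-neighbor lists; the decay factor `C` is conventionally around 0.8):

```python
# Minimal reference implementation of the SimRank fixed-point iteration:
# s(a, a) = 1 and, for a != b with non-empty in-neighbor sets I(a), I(b),
# s(a, b) = C / (|I(a)| * |I(b)|) * sum over in-neighbor pairs (x, y).

def simrank(in_nbrs, C=0.8, iters=10):
    nodes = list(in_nbrs)
    s = {(a, b): 1.0 if a == b else 0.0 for a in nodes for b in nodes}
    for _ in range(iters):
        ns = {}
        for a in nodes:
            for b in nodes:
                if a == b:
                    ns[(a, b)] = 1.0
                elif in_nbrs[a] and in_nbrs[b]:
                    total = sum(s[(x, y)]
                                for x in in_nbrs[a] for y in in_nbrs[b])
                    ns[(a, b)] = C * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
                else:
                    ns[(a, b)] = 0.0  # the "connectivity trait" zero case
        s = ns
    return s
```

The final `else` branch is exactly the zero-score case the "cosine-based" model above is designed to circumvent.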
APA, Harvard, Vancouver, ISO and other styles
46

Foster, M. P. "Disambiguating the SI notation would guarantee its correct parsing". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 465, no. 2104 (13.01.2009): 1227–29. http://dx.doi.org/10.1098/rspa.2008.0343.

Full text source
Abstract:
The clarity and utility of the International System of Units (SI) would be improved if its notation were made a formally unambiguous symbolic language. This could be achieved with minor changes, and would enable the development of software that could correctly (i) check SI style in text and (ii) manipulate/verify quantities, SI units and prefix algebra in scientific and engineering computations. These tools could lead to better SI usage and fewer computational errors.
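The kind of ambiguity at stake can be illustrated with a toy parser (an illustrative sketch with an abbreviated prefix and unit table, not a proposal from the article): the symbol "m" must mean the metre rather than a bare milli- prefix, while "mN" parses as milli-newton.

```python
# Toy SI symbol resolver: split a unit symbol into an SI prefix and a
# base unit. Tables are deliberately abbreviated for illustration.

PREFIXES = {"k": 1e3, "M": 1e6, "m": 1e-3, "µ": 1e-6, "n": 1e-9}
BASE_UNITS = {"m", "s", "g", "N", "Pa", "J", "W", "A", "K", "mol", "cd"}

def parse_unit(symbol):
    """Return (multiplier, base unit); prefer a whole-symbol base unit."""
    if symbol in BASE_UNITS:        # "m" is the metre, "mol" is not milli-"ol"
        return 1.0, symbol
    if symbol and symbol[0] in PREFIXES and symbol[1:] in BASE_UNITS:
        return PREFIXES[symbol[0]], symbol[1:]
    raise ValueError(f"cannot parse SI symbol: {symbol!r}")
```

The disambiguation rule here (whole-symbol match wins) is one possible convention; the article argues for making such rules formally unambiguous in the SI itself.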
APA, Harvard, Vancouver, ISO and other styles
47

Heshmati-alamdari, Shahab, Alina Eqtami, George C. Karras, Dimos V. Dimarogonas and Kostas J. Kyriakopoulos. "A Self-triggered Position Based Visual Servoing Model Predictive Control Scheme for Underwater Robotic Vehicles". Machines 8, no. 2 (11.06.2020): 33. http://dx.doi.org/10.3390/machines8020033.

Full text source
Abstract:
An efficient position-based visual servoing control approach for Autonomous Underwater Vehicles (AUVs) employing Non-linear Model Predictive Control (N-MPC) is designed and presented in this work. In the proposed scheme, a mechanism incorporated within the vision-based controller determines when the Visual Tracking Algorithm (VTA) should be activated and new control inputs should be calculated. More specifically, the control loop does not close periodically; i.e., between two consecutive activations (triggering instants), the control inputs calculated by the N-MPC at the previous triggering instant are applied to the underwater robot in an open-loop mode. This results in a significantly smaller number of requested measurements from the visual tracking algorithm, as well as less frequent computations of the non-linear predictive control law. The resulting reduction in processing time and energy consumption increases the accuracy and autonomy of the Autonomous Underwater Vehicle, which is of paramount importance for persistent underwater inspection tasks. Moreover, the Field of View (FoV) constraints, control input saturation, the kinematic limitations due to the underactuated degree of freedom in the sway direction, and the effect of model uncertainties as well as external disturbances have been considered during the control design. In addition, the stability and convergence of the closed-loop system have been guaranteed analytically. Finally, the efficiency and performance of the proposed vision-based control framework are demonstrated through a comparative real-time experimental study using a small underwater vehicle.
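The self-triggering principle can be sketched generically (a hedged toy loop, not the paper's N-MPC formulation: `plan` stands in for the expensive optimizer-plus-vision step, and replanning is triggered when the open-loop input sequence is exhausted or the predicted state drifts from the measured one):

```python
# Toy self-triggered control loop on a scalar system. Between triggers
# the stored input sequence is applied open-loop; the expensive planner
# is only re-invoked at triggering instants.

def self_triggered_loop(x0, plan, step, threshold, ticks):
    """plan(x) -> open-loop input sequence; step(x, u) -> next state."""
    x_true = x0                 # measured state (here: simulated plant)
    x_pred = x0                 # state predicted by the nominal model
    inputs, k, triggers = [], 0, 0
    for _ in range(ticks):
        # trigger: plan exhausted, or prediction drifted too far
        if k >= len(inputs) or abs(x_pred - x_true) > threshold:
            inputs = plan(x_true)
            x_pred, k = x_true, 0
            triggers += 1
        u = inputs[k]
        x_pred = step(x_pred, u)    # nominal prediction
        x_true = step(x_true, u)    # plant (disturbance-free here)
        k += 1
    return x_true, triggers
```

In the disturbance-free case the planner is called only when the input sequence runs out, which is precisely the reduction in measurement requests and control computations the abstract describes.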
APA, Harvard, Vancouver, ISO and other styles
48

Service, Travis, and Julie Adams. "Approximate Coalition Structure Generation". Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (4.07.2010): 854–59. http://dx.doi.org/10.1609/aaai.v24i1.7636.

Full text source
Abstract:
Coalition formation is a fundamental problem in multi-agent systems. In characteristic function games (CFGs), each coalition C of agents is assigned a value indicating the joint utility those agents will receive if C is formed. CFGs are an important class of cooperative games; however, determining the optimal coalition structure, a partitioning of the agents into a set of coalitions that maximizes the social welfare, currently requires O(3^n) time for n agents. In light of the high computational complexity of the coalition structure generation problem, a natural approach is to relax the optimality requirement and attempt to find an approximate solution that is guaranteed to be close to optimal. Unfortunately, it has been shown that guaranteeing a solution within any factor of the optimal requires Ω(2^n) time. Thus, the best that can be hoped for is an algorithm that returns solutions guaranteed to be as close to the optimal as possible, in as close to O(2^n) time as possible. This paper contributes to the state-of-the-art by presenting an algorithm that achieves better quality guarantees with lower worst-case running times than all currently existing algorithms. Our approach is also the first algorithm to guarantee a constant factor approximation ratio, 1/8, in the optimal time of O(2^n). The previous best ratio obtainable in O(2^n) time was 2/n.
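The exact O(3^n) baseline referred to above is the standard subset dynamic program (a minimal sketch of that well-known baseline, not the paper's approximation algorithm): the best welfare for an agent subset is found by fixing the coalition that contains its lowest-indexed agent and recursing on the rest.

```python
# Exact coalition structure generation by dynamic programming over
# bitmask subsets: best[S] = max over coalitions C containing the
# lowest agent of S of value(C) + best[S \ C]. Enumerating all
# submasks of all masks gives the O(3^n) running time.

def optimal_structure(n, value):
    """value: dict mapping frozenset(agents) -> coalition value (missing -> 0)."""
    best = [0.0] * (1 << n)
    for mask in range(1, 1 << n):
        low = mask & -mask              # lowest-indexed agent in mask
        sub = mask
        while sub:
            if sub & low:               # fix that agent's coalition once
                agents = frozenset(i for i in range(n) if (sub >> i) & 1)
                cand = value.get(agents, 0.0) + best[mask ^ sub]
                if cand > best[mask]:
                    best[mask] = cand
            sub = (sub - 1) & mask      # next submask of mask
    return best[(1 << n) - 1]
```

Pinning the lowest-indexed agent avoids counting the same partition once per ordering of its coalitions.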
APA, Harvard, Vancouver, ISO and other styles
49

Plum, M., and Ch. Wieners. "Numerical Enclosures for Variational Inequalities". Computational Methods in Applied Mathematics 7, no. 4 (2007): 376–88. http://dx.doi.org/10.2478/cmam-2007-0023.

Full text source
Abstract:
We present a new method for proving the existence of a unique solution of a variational inequality within guaranteed, close error bounds of a numerical approximation. The method is derived for a specific model problem featuring most of the difficulties of perfect plasticity. We introduce a finite element method for the computation of admissible primal and dual solutions which a posteriori guarantees the existence of a unique solution (by verification of the safe load condition) and which allows determination of a guaranteed error bound. Finally, we present explicit existence results and error bounds in some significant specific configurations.
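The mechanism behind computable guaranteed bounds of this kind can be outlined with the classical duality-gap estimate (a generic sketch, not the paper's specific majorant): for a strongly convex energy $J$ with convexity constant $\alpha$, weak duality $I^*(\lambda) \le J(u^*)$ gives, for any admissible primal approximation $u$ and dual approximation $\lambda$,

```latex
\frac{\alpha}{2}\,\|u - u^{*}\|^{2} \;\le\; J(u) - J(u^{*}) \;\le\; J(u) - I^{*}(\lambda),
```

so the fully computable gap $J(u) - I^{*}(\lambda)$ is a guaranteed bound on the distance to the exact solution $u^{*}$.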
APA, Harvard, Vancouver, ISO and other styles
50

Huifen, Zou, Ye Sheng, Wang Dexi, Li Huixing, Cao Xiaozhen and Yan Lijun. "Model of Mass and Heat Transfer during Vacuum Freeze-Drying for Cornea". Mathematical Problems in Engineering 2012 (2012): 1–16. http://dx.doi.org/10.1155/2012/941609.

Full text source
Abstract:
The cornea is an important organ with a complex cell structure. Heat and mass transfer and thermal parameters during vacuum freeze-drying, aimed at preserving corneal activity, are studied. The corneal freeze-drying experiments were performed in a homemade vacuum freeze dryer. The pressure of the freeze-drying chamber was controlled at about 50 Pa and the temperature at about −10°C; operating under these conditions kept the survival ratio of the corneal endothelium above the grafting norm. Based on a theoretical analysis of corneal freeze-drying, a mathematical model describing heat and mass transfer during vacuum freeze-drying of the cornea was established. The simulation of corneal freeze-drying was performed using finite-element software. When the pressure of the freeze-drying chamber was about 50 Pa and the temperature about −10°C, the double-sided drying time was 170 min. In this paper, a moving-grid finite-element method was used. The sublimation interface was tracked continuously: the finite-element mesh is moved so that the interface position always coincides with an element node, which guarantees computational precision. The computational results agreed with the experimental results, which shows that the mathematical model is reasonable and that the finite-element software is suitable for calculating the heat and mass transfer of corneal freeze-drying.
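The moving sublimation interface tracked by such a model obeys an energy balance of Stefan type (a generic one-dimensional form with assumed symbols, not the paper's exact equations): with dried- and frozen-region conductivities $k_d$, $k_f$, density $\rho$, sublimation enthalpy $\Delta H_s$ and front position $s(t)$,

```latex
k_{d}\,\frac{\partial T_{d}}{\partial x}\bigg|_{x=s(t)} - k_{f}\,\frac{\partial T_{f}}{\partial x}\bigg|_{x=s(t)} = \rho\,\Delta H_{s}\,\frac{\mathrm{d}s}{\mathrm{d}t},
```

which is the interface condition a moving-grid finite-element method can enforce exactly by keeping a mesh node on the front.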
APA, Harvard, Vancouver, ISO and other styles