Journal articles on the topic 'Computational Cost Reduction'

Consult the top 50 journal articles for your research on the topic 'Computational Cost Reduction.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Sooksomsatarn, Kiattikul, et al. "Computational Cost Reduction of Transaction Signing in Blockchain." Psychology and Education Journal 58, no. 1 (January 29, 2021): 1496–500. http://dx.doi.org/10.17762/pae.v58i1.935.

Full text
Abstract:
Nowadays, Blockchain is a disruptive technology, particularly in the financial context, and it is behind the success of cryptocurrencies such as Bitcoin and Ethereum. Unlike traditional currencies, cryptocurrencies are entirely virtual: there is no physical money, yet payments can be made directly in digital currency from one person to another without intermediaries. Moreover, cryptographic hashing makes the Blockchain resistant to tampering by any transacting participant, because a submitted block cannot be altered or re-engineered. However, another major problem is how users of cryptocurrencies stop somebody from adding or editing a transaction that spends someone else's money. To do this, Blockchain needs another cryptosystem, public/private keys in a primitive asymmetric cryptosystem such as RSA, to sign transactions and prove authenticity of ownership without revealing the signer's secret information. The generated public key is regarded as a ledger account number or digital wallet of the sender and the recipient, while the paired private keys are used to verify that the owners of the digital wallets are authentic. As network entities grow and Blockchain transactions propagate, computing millions of replicated tokens in the blocks to sign and verify digital-wallet ownership becomes computationally expensive. However, certain chosen arithmetic transformations that simplify the underlying mathematics can significantly reduce this computational complexity. The main contribution of this research is a protocol that reduces the complexity and mathematical cost of generating a digital wallet and verifying the authenticity of its ownership. Finally, the performance of the RSA algorithm within the protocol has been measured and visualized using Python.
APA, Harvard, Vancouver, ISO, and other styles
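The abstract above describes signing and verifying transactions with RSA and measuring its cost in Python. Below is a minimal, self-contained sketch of that workflow, with Chinese-Remainder-Theorem signing shown as one example of an arithmetic transformation that lowers the signing cost; the toy key, the transaction string, and the CRT choice are illustrative assumptions, not the paper's actual protocol.

```python
# Textbook RSA signing/verification of a transaction digest (toy parameters,
# far too small for real use), plus a CRT variant of the signing step.
import hashlib

p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def digest(tx: str) -> int:
    """Hash the transaction and reduce it into the RSA modulus."""
    return int.from_bytes(hashlib.sha256(tx.encode()).digest(), "big") % n

def sign(tx: str) -> int:
    return pow(digest(tx), d, n)     # one full-size modular exponentiation

def sign_crt(tx: str) -> int:
    """Same signature, computed from two half-size exponentiations (CRT)."""
    m = digest(tx)
    sp = pow(m % p, d % (p - 1), p)
    sq = pow(m % q, d % (q - 1), q)
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + h * q

def verify(tx: str, sig: int) -> bool:
    return pow(sig, e, n) == digest(tx)

tx = "Alice pays Bob 1 token"
assert verify(tx, sign(tx)) and sign(tx) == sign_crt(tx)
```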
2

Wunschmann, Jurgen, Sebastian Zanker, Christian Gunter, and Albrecht Rothermel. "Reduction of computational cost for high quality video scaling." IEEE Transactions on Consumer Electronics 56, no. 4 (November 2010): 2584–91. http://dx.doi.org/10.1109/tce.2010.5681144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Urriza, Jose M., Francisco E. Paez, Javier D. Orozco, and Ricardo Cayssials. "Computational Cost Reduction for Real-Time Schedulability Tests Algorithms." IEEE Latin America Transactions 13, no. 12 (December 2015): 3714–23. http://dx.doi.org/10.1109/tla.2015.7404899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ono, Kiminori, Yoshiya Matsukawa, Aki Watanabe, Kazuki Dewa, Yasuhiro Saito, Yohsuke Matsushita, Hideyuki Aoki, Koki Era, Takayuki Aoki, and Togo Yamaguchi. "Computational Cost Reduction and Validation of Cluster-Cluster Aggregation Model." Journal of the Society of Powder Technology, Japan 52, no. 8 (2015): 426–34. http://dx.doi.org/10.4164/sptj.52.426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Drozdik, Martin, Youhei Akimoto, Hernan Aguirre, and Kiyoshi Tanaka. "Computational Cost Reduction of Nondominated Sorting Using the M-Front." IEEE Transactions on Evolutionary Computation 19, no. 5 (October 2015): 659–78. http://dx.doi.org/10.1109/tevc.2014.2366498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Singh, Satyavir, M. Abid Bazaz, and Shahkar Ahmad Nahvi. "A scheme for comprehensive computational cost reduction in proper orthogonal decomposition." Journal of Electrical Engineering 69, no. 4 (August 1, 2018): 279–85. http://dx.doi.org/10.2478/jee-2018-0039.

Full text
Abstract:
This paper addresses the issue of offline and online computational cost reduction of the proper orthogonal decomposition (POD), which is a popular nonlinear model order reduction (MOR) technique. Online computational cost is reduced by using the discrete empirical interpolation method (DEIM), which reduces the complexity of evaluating the nonlinear term of the reduced model to a cost proportional to the number of reduced variables obtained by POD: this is the POD-DEIM approach. Offline computational cost is reduced by generating an approximate snapshot-ensemble of the nonlinear dynamical system, consequently completely avoiding the need to simulate the full-order system. Two snapshot ensembles, one of the states and the other of the nonlinear function, are obtained by simulating the successive linearization of the original nonlinear system. The proposed technique is applied to two benchmark large-scale nonlinear dynamical systems and clearly demonstrates comprehensive savings in computational cost and time with insignificant or no deterioration in performance.
APA, Harvard, Vancouver, ISO, and other styles
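As a concrete illustration of the offline ingredient discussed above, here is a minimal POD sketch: snapshots are collected, an SVD is taken, and the leading modes form the reduced basis. The snapshot matrix, energy threshold, and sizes below are placeholders, and the DEIM treatment of the nonlinear term is not reproduced.

```python
# Build a POD basis from snapshots via the SVD and keep the leading modes.
import numpy as np

def pod_basis(snapshots: np.ndarray, energy: float = 0.999) -> np.ndarray:
    """snapshots: (n_states, n_snapshots). Returns a basis V with r columns."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest r capturing `energy`
    return U[:, :r]

# Hypothetical usage: X holds full-order states sampled during a cheap
# (e.g. successively linearized) simulation, as the abstract suggests.
X = np.random.rand(1000, 60)          # placeholder snapshot ensemble
V = pod_basis(X)
x_reduced = V.T @ X[:, 0]             # project a full state onto the basis
x_approx = V @ x_reduced              # lift it back to full dimension
```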
7

Ruiz, P. A., C. R. Philbrick, and P. W. Sauer. "Modeling Approaches for Computational Cost Reduction in Stochastic Unit Commitment Formulations." IEEE Transactions on Power Systems 25, no. 1 (February 2010): 588–89. http://dx.doi.org/10.1109/tpwrs.2009.2036462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Martinelli, G., G. Orlandi, and P. Burrascano. "Reduction of computational cost in DFT implementation of FIR digital filters." Electronics Letters 21, no. 9 (1985): 364. http://dx.doi.org/10.1049/el:19850260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Singh, Satyavir, Mohammad Abid Bazaz, and Shahkar Ahmad Nahvi. "Simulating swing dynamics of a power system model using nonlinear model order reduction." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 38, no. 6 (October 24, 2019): 1918–30. http://dx.doi.org/10.1108/compel-08-2018-0331.

Full text
Abstract:
Purpose: The purpose of this paper is to demonstrate the applicability of the Discrete Empirical Interpolation Method (DEIM) for simulating the swing dynamics of benchmark power system problems. The authors demonstrate that considerable savings in computational time and resources are obtained using this methodology. Another purpose is to apply a recently developed modified DEIM strategy with a reduced on-line computational burden to this problem. Design/methodology/approach: The on-line computational cost of the power system dynamics problem is reduced by using DEIM, which reduces the complexity of evaluating the nonlinear function in the reduced model to a cost proportional to the number of reduced modes. The off-line computational cost is reduced by using an approximate snapshot ensemble to construct the reduced basis. Findings: Considerable savings in computational resources and time are obtained when DEIM is used for simulating swing dynamics. The off-line cost implications of DEIM are also reduced considerably by using approximate snapshots to construct the reduced basis. Originality/value: The applicability of DEIM (with and without an approximate ensemble) to a large-scale power system dynamics problem is demonstrated for the first time.
APA, Harvard, Vancouver, ISO, and other styles
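For readers unfamiliar with the DEIM step referenced above, the sketch below shows the standard greedy selection of interpolation indices from a basis of nonlinear-term snapshots. The random basis and sizes are placeholders; this is textbook DEIM, not code from the paper.

```python
# Greedy DEIM point selection: pick one interpolation row per basis column.
import numpy as np

def deim_indices(U: np.ndarray) -> np.ndarray:
    """U: (n, m) basis for the nonlinear-term snapshots. Returns m interpolation rows."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Fit the new column at the already-selected rows, then pick the row
        # where the interpolation residual is largest.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Placeholder basis: orthonormalized random snapshots of a nonlinear term.
U = np.linalg.qr(np.random.rand(500, 8))[0]
print(deim_indices(U))
```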
10

Manis, George, Md Aktaruzzaman, and Roberto Sassi. "Low Computational Cost for Sample Entropy." Entropy 20, no. 1 (January 13, 2018): 61. http://dx.doi.org/10.3390/e20010061.

Full text
Abstract:
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. On the other hand, it is a computationally expensive method which may require a large amount of time when used in long series or with a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first one is an extension of the kd-trees algorithm, customized for Sample Entropy. The second one is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to present even faster results. The last one is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, directly resulting from the definition of Sample Entropy, in order to give a clear image of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the two last suggested algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary to refer to those comparisons for which we know a priori that they will fail at the similarity check. The number of avoided comparisons is proved to be very large, resulting in an analogous large reduction of execution time, making them the fastest algorithms available today for the computation of Sample Entropy.
APA, Harvard, Vancouver, ISO, and other styles
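To make the baseline mentioned above concrete, here is a direct, definition-based Sample Entropy implementation using the maximum norm; the parameter names m and r follow the usual convention, the RR series is a random placeholder, and none of the paper's accelerated algorithms are reproduced.

```python
# Straightforward Sample Entropy: count template matches of length m and m+1
# under the Chebyshev (maximum-norm) distance and take the negative log ratio.
import numpy as np

def sampen(x: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    x = np.asarray(x, dtype=float)
    r *= np.std(x)                           # tolerance as a fraction of the SD
    def count_matches(dim: int) -> int:
        templates = np.array([x[i:i + dim] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            # Maximum-norm distance from template i to all later templates.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rr = np.random.rand(1000)                    # placeholder RR-interval series
print(sampen(rr))
```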
11

Robertsson, Johan O. A., and Chris H. Chapman. "An efficient method for calculating finite‐difference seismograms after model alterations." GEOPHYSICS 65, no. 3 (May 2000): 907–18. http://dx.doi.org/10.1190/1.1444787.

Full text
Abstract:
Seismic modeling, processing, and inversion often require the calculation of the seismic response resulting from a suite of closely related seismic models. Even though changes to the model may be restricted to a small subvolume, we need to perform simulations for the full model. We present a new finite‐difference method that circumvents the need to resimulate the complete model for local changes. By requiring only calculations in the subvolume and its neighborhood, our method makes possible significant reductions in computational cost and memory requirements. In general, each source/receiver location requires one full simulation on the complete model. Following these pre‐computations, recalculation of the altered wavefield can be limited to the region around the subvolume and its neighborhood. We apply our method to a 2-D time‐lapse seismic problem, thereby achieving a factor of 15 reduction in computational cost. Potential savings for 3-D are far greater.
APA, Harvard, Vancouver, ISO, and other styles
12

Park, Mingyu, Dongwhan Kim, Yonghwan Oh, and Yisoo Lee. "Computational Cost Reduction Method for HQP-based Hierarchical Controller for Articulated Robot." Journal of Korea Robotics Society 17, no. 1 (March 1, 2022): 16–24. http://dx.doi.org/10.7746/jkros.2022.17.1.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Sanchis, J. M., and J. J. Rieta. "Computational cost reduction using coincident boundary microphones for convolutive blind signal separation." Electronics Letters 41, no. 6 (2005): 374. http://dx.doi.org/10.1049/el:20057242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Moyroud, F., T. Fransson, and G. Jacquet-Richardet. "A Comparison of Two Finite Element Reduction Techniques for Mistuned Bladed Disks." Journal of Engineering for Gas Turbines and Power 124, no. 4 (September 24, 2002): 942–52. http://dx.doi.org/10.1115/1.1415741.

Full text
Abstract:
The high performance bladed disks used in today’s turbomachines must meet strict standards in terms of aeroelastic stability and resonant response level. One structural characteristic that can significantly impact on both these areas is that of bladed disk mistuning. To predict the effects of mistuning, computationally efficient methods are much needed to make free-vibration and forced-response analyses of full assembly finite element (FE) models feasible in both research and industrial environments. Due to the size and complexity of typical industrial bladed disk models, one must resort to robust and systematic reduction techniques to produce reduced-order models of sufficient accuracy. The objective of this paper is to compare two prevalent reduction methods on representative test rotors, including a modern design industrial shrouded bladed disk, in terms of accuracy (for frequencies and mode shapes), reduction order, computational efficiency, sensitivity to intersector elastic coupling, and ability to capture the phenomenon of mode localization. The first reduction technique employs a modal reduction approach with a modal basis consisting of mode shapes of the tuned bladed disk which can be obtained from a classical cyclic symmetric modal analysis. The second reduction technique uses Craig and Bampton substructure modes. The results show a perfect agreement between the two reduced-order models and the nonreduced finite element model. It is found that the phenomenon of mode localization is equally well predicted by the two reduction models. In terms of computational cost, reductions from one to two orders of magnitude are obtained for the industrial bladed disk, with the modal reduction method being the most computationally efficient approach.
APA, Harvard, Vancouver, ISO, and other styles
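As a toy illustration of the first reduction technique compared above, the sketch below projects a full stiffness matrix onto a truncated set of system modes and re-solves a small mistuned eigenproblem in the reduced space. The random matrices, unit mass, and 2% mistuning are assumptions; none of the cyclic-symmetry or Craig-Bampton machinery is reproduced.

```python
# Generic modal reduction: keep the r lowest modes and project the operator.
import numpy as np
from scipy.linalg import eigh

n, r = 200, 12                                   # full and reduced model sizes
K0 = np.random.rand(n, n)
K = K0 @ K0.T + n * np.eye(n)                    # SPD "stiffness" (unit mass assumed)

w2, Phi = eigh(K, subset_by_index=[0, r - 1])    # r lowest "tuned" modes
Kr = Phi.T @ K @ Phi                             # reduced stiffness (mass reduces to I)

# Small diagonal mistuning perturbation, projected onto the same basis.
dK = 0.02 * np.diag(np.random.randn(n) * np.diag(K))
w2_mistuned = eigh(Kr + Phi.T @ dK @ Phi, eigvals_only=True)
print(np.sqrt(w2[:3]), np.sqrt(np.sort(w2_mistuned)[:3]))   # tuned vs mistuned
```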
15

Islam, Md Rashedul, Boshir Ahmed, Md Ali Hossain, and Md Palash Uddin. "Mutual Information-Driven Feature Reduction for Hyperspectral Image Classification." Sensors 23, no. 2 (January 6, 2023): 657. http://dx.doi.org/10.3390/s23020657.

Full text
Abstract:
A hyperspectral image (HSI), which contains a number of contiguous and narrow spectral wavelength bands, is a valuable source of data for ground cover examinations. Classification using the entire original HSI suffers from the “curse of dimensionality” problem because (i) the image bands are highly correlated both spectrally and spatially, (ii) not every band can carry equal information, (iii) there is a lack of enough training samples for some classes, and (iv) the overall computational cost is high. Therefore, effective feature (band) reduction is necessary through feature extraction (FE) and/or feature selection (FS) for improving the classification in a cost-effective manner. Principal component analysis (PCA) is a frequently adopted unsupervised FE method in HSI classification. Nevertheless, its performance worsens when the dataset is noisy, and the computational cost becomes high. Consequently, this study first proposed an efficient FE approach using a normalized mutual information (NMI)-based band grouping strategy, where the classical PCA was applied to each band subgroup for intrinsic FE. Finally, the subspace of the most effective features was generated by the NMI-based minimum redundancy and maximum relevance (mRMR) FS criteria. The subspace of features was then classified using the kernel support vector machine. Two real HSIs collected by the AVIRIS and HYDICE sensors were used in an experiment. The experimental results demonstrated that the proposed feature reduction approach significantly improved the classification performance. It achieved the highest overall classification accuracy of 94.93% for the AVIRIS dataset and 99.026% for the HYDICE dataset. Moreover, the proposed approach reduced the computational cost compared with the studied methods.
APA, Harvard, Vancouver, ISO, and other styles
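A rough sketch of the band-grouping idea described above: estimate normalized mutual information (NMI) between discretized neighbouring bands, keep similar bands in one group, and apply PCA within each group. The greedy adjacent-band rule, the discretization, the threshold, and the synthetic data cube are all assumptions standing in for the paper's procedure (which additionally applies mRMR selection and a kernel SVM).

```python
# NMI-driven band grouping followed by per-group PCA feature extraction.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score
from sklearn.decomposition import PCA

def discretize(band: np.ndarray, bins: int = 32) -> np.ndarray:
    return np.digitize(band, np.histogram_bin_edges(band, bins))

def greedy_band_groups(cube: np.ndarray, threshold: float = 0.5):
    """cube: (n_pixels, n_bands). Adjacent bands stay in one group while NMI is high."""
    groups, current = [], [0]
    for b in range(1, cube.shape[1]):
        nmi = normalized_mutual_info_score(discretize(cube[:, b - 1]),
                                           discretize(cube[:, b]))
        if nmi >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

# Placeholder "hyperspectral" pixels: 8 blocks of 8 strongly correlated bands.
base = np.random.rand(5000, 8)
cube = np.repeat(base, 8, axis=1) + 0.01 * np.random.rand(5000, 64)

features = [PCA(n_components=2).fit_transform(cube[:, g])
            for g in greedy_band_groups(cube) if len(g) >= 2]
X_reduced = np.hstack(features)        # reduced feature subspace for a classifier
print(X_reduced.shape)
```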
16

Caicedo, M., J. Oliver, A. E. Huespe, and O. Lloberas-Valls. "Model Order Reduction in Computational Multiscale Fracture Mechanics." Key Engineering Materials 713 (September 2016): 248–53. http://dx.doi.org/10.4028/www.scientific.net/kem.713.248.

Full text
Abstract:
Nowadays, model order reduction techniques have become an intensive research field because of the increasing interest in the computational modeling of complex phenomena in multi-physics problems and the consequent increase in highly demanding computing processes; it is well known that the availability of high-performance computing capacity is in most cases limited, therefore model order reduction becomes a novel tool to overcome this paradigm, which represents an immediate challenge in our research community. In computational multiscale modeling, for instance, in order to study the interaction between components, a different numerical model has to be solved at each scale; this feature radically increases the computational cost. We present a reduced model based on a multi-scale framework for numerical modeling of the structural failure of heterogeneous quasi-brittle materials using the Strong Discontinuity Approach (CSD). The model is assessed by application to cementitious materials. The Proper Orthogonal Decomposition (POD) and the Reduced Order Integration Cubature are the proposed techniques to develop the reduced model; these two techniques work together to reduce both the complexity and the computational time of the high-fidelity model, in our case the standard FE2 model.
APA, Harvard, Vancouver, ISO, and other styles
17

Cano, Begoña, and Nuria Reguera. "Why Improving the Accuracy of Exponential Integrators Can Decrease Their Computational Cost?" Mathematics 9, no. 9 (April 29, 2021): 1008. http://dx.doi.org/10.3390/math9091008.

Full text
Abstract:
In previous papers, a technique has been suggested to avoid order reduction when integrating initial boundary value problems with several kinds of exponential methods. In principle, the technique implies calculating additional terms at each step beyond those already necessary when order reduction is not avoided. The aim of the present paper is to explain the surprising result that, many times, in spite of having to calculate more terms at each step, the computational cost of doing it through Krylov methods decreases instead of increasing. This is very interesting since, in that way, the methods improve not only in terms of accuracy, but also in terms of computational cost.
APA, Harvard, Vancouver, ISO, and other styles
18

Gao, Jianjun, Mauricio D. Sacchi, and Xiaohong Chen. "A fast reduced-rank interpolation method for prestack seismic volumes that depend on four spatial dimensions." GEOPHYSICS 78, no. 1 (January 1, 2013): V21—V30. http://dx.doi.org/10.1190/geo2012-0038.1.

Full text
Abstract:
Rank reduction strategies can be employed to attenuate noise and to regularize prestack seismic data. We present a fast version of the Cadzow reduced-rank reconstruction method. Cadzow reconstruction is implemented by embedding 4D spatial data into a level-four block Toeplitz matrix. Rank reduction of this matrix via the Lanczos bidiagonalization algorithm is used to recover missing observations and to attenuate random noise. The computational cost of the Lanczos bidiagonalization is dominated by the cost of multiplying a level-four block Toeplitz matrix by a vector. This is efficiently implemented via the 4D fast Fourier transform. The proposed algorithm significantly decreases the computational cost of rank-reduction methods for multidimensional seismic data denoising and reconstruction. Synthetic and field prestack data examples are used to examine the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
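The cost saving described above hinges on multiplying a (block) Toeplitz matrix by a vector with FFTs. Below is a one-dimensional sketch of that standard trick via circulant embedding; the paper's level-four block Toeplitz case nests the same idea across four spatial dimensions, which is not reproduced here.

```python
# Fast Toeplitz matrix-vector product: embed T in a circulant matrix and
# diagonalize the circulant with the FFT.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c: np.ndarray, r: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Product T @ x where T has first column c and first row r (r[0] == c[0])."""
    n = len(c)
    col = np.concatenate([c, r[1:][::-1]])          # first column of the circulant
    y = np.fft.ifft(np.fft.fft(col) *
                    np.fft.fft(np.concatenate([x, np.zeros(n - 1)])))
    return y[:n].real

# Check against the dense product on a small example.
c, r = np.random.rand(6), np.random.rand(6)
r[0] = c[0]
x = np.random.rand(6)
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```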
19

Ghayebi, B., and S. M. Hosseini. "A Simplified Milstein Scheme for SPDEs with Multiplicative Noise." Abstract and Applied Analysis 2014 (2014): 1–15. http://dx.doi.org/10.1155/2014/140849.

Full text
Abstract:
This paper deals with a research question raised by Jentzen and Röckner (A Milstein scheme for SPDEs, arXiv:1001.2751v4 (2012)), whether the exponential term in their introduced scheme can be replaced by a simpler mollifier. This replacement can lead to more simplification and computational reduction in simulation. So, in this paper, we essentially replace the exponential term with a Padé approximation of order 1 and denote the resulting scheme by simplified Milstein scheme. The convergence analysis for this scheme is carried out and it is shown that even with this replacement the order of convergence is maintained, while the resulting scheme is easier to implement and slightly more efficient computationally. Some numerical tests are given that confirm the order of accuracy and also computational cost reduction.
APA, Harvard, Vancouver, ISO, and other styles
20

Ravi Kumar, P., P. V. Naganjaneyulu, and K. Satya Prasad. "Partial transmit sequence to improve OFDM using BFO & PSO algorithm." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 01 (June 7, 2019): 1941018. http://dx.doi.org/10.1142/s0219691319410182.

Full text
Abstract:
The Peak to Average Power Ratio (PAPR) in Orthogonal Frequency Division Multiplexing (OFDM) systems can be decreased effectively through the Partial Transmit Sequence (PTS) scheme. The search for optimum phase factors in the PTS scheme involves a very high computational cost, which restricts its applicability in practical systems, especially for high-speed data transmission. A combination of Particle Swarm Optimization (PSO), Bacterial Foraging Optimization (BFO) and the Genetic Algorithm (GA) with the PTS scheme in the OFDM system has been proposed in this work as a PAPR reduction technique with lower computational complexity. By performing a set of simulations with different PTS schemes, the performance of the PSO–PTS, BFO–PTS and GA–PTS schemes is comparatively investigated in terms of PAPR and computational cost reductions.
APA, Harvard, Vancouver, ISO, and other styles
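To ground the PTS setting described above, the sketch below splits the OFDM subcarriers into sub-blocks, tries candidate ±1 phase vectors, and keeps the combination with the lowest PAPR. The random candidate loop is only an assumed stand-in for the PSO/BFO/GA searches studied in the paper, and the block partitioning and sizes are illustrative.

```python
# Partial Transmit Sequence: phase-rotate sub-block signals and pick the
# combination with the smallest peak-to-average power ratio.
import numpy as np

def papr_db(x: np.ndarray) -> float:
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts(symbols: np.ndarray, n_blocks: int = 4, n_candidates: int = 64):
    blocks = np.array_split(symbols, n_blocks)
    parts = []                                   # time-domain signal per sub-block
    for i, blk in enumerate(blocks):
        padded = np.zeros(len(symbols), dtype=complex)
        start = sum(len(b) for b in blocks[:i])
        padded[start:start + len(blk)] = blk
        parts.append(np.fft.ifft(padded))
    best, best_phases = None, None
    for _ in range(n_candidates):
        phases = np.exp(1j * np.pi * np.random.randint(0, 2, n_blocks))   # +-1 factors
        x = sum(p * w for p, w in zip(parts, phases))
        if best is None or papr_db(x) < papr_db(best):
            best, best_phases = x, phases
    return best, best_phases

sym = (np.random.randint(0, 2, 256) * 2 - 1) + 1j * (np.random.randint(0, 2, 256) * 2 - 1)
x_opt, w_opt = pts(sym)
print(round(papr_db(np.fft.ifft(sym)), 2), "->", round(papr_db(x_opt), 2))
```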
21

ROWLEY, C. W. "MODEL REDUCTION FOR FLUIDS, USING BALANCED PROPER ORTHOGONAL DECOMPOSITION." International Journal of Bifurcation and Chaos 15, no. 03 (March 2005): 997–1013. http://dx.doi.org/10.1142/s0218127405012429.

Full text
Abstract:
Many of the tools of dynamical systems and control theory have gone largely unused for fluids, because the governing equations are so dynamically complex, both high-dimensional and nonlinear. Model reduction involves finding low-dimensional models that approximate the full high-dimensional dynamics. This paper compares three different methods of model reduction: proper orthogonal decomposition (POD), balanced truncation, and a method called balanced POD. Balanced truncation produces better reduced-order models than POD, but is not computationally tractable for very large systems. Balanced POD is a tractable method for computing approximate balanced truncations, that has computational cost similar to that of POD. The method presented here is a variation of existing methods using empirical Gramians, and the main contributions of the present paper are a version of the method of snapshots that allows one to compute balancing transformations directly, without separate reduction of the Gramians; and an output projection method, which allows tractable computation even when the number of outputs is large. The output projection method requires minimal additional computation, and has a priori error bounds that can guide the choice of rank of the projection. Connections between POD and balanced truncation are also illuminated: in particular, balanced truncation may be viewed as POD of a particular dataset, using the observability Gramian as an inner product. The three methods are illustrated on a numerical example, the linearized flow in a plane channel.
APA, Harvard, Vancouver, ISO, and other styles
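The "method of snapshots that allows one to compute balancing transformations directly", highlighted above, can be sketched compactly: given primal and adjoint snapshot matrices, an SVD of their product yields bi-orthogonal balancing modes without assembling the Gramians. The random snapshot matrices below are placeholders, not the paper's channel-flow data, and the output-projection step is omitted.

```python
# Balanced POD via the method of snapshots.
import numpy as np

def balanced_pod(X: np.ndarray, Y: np.ndarray, r: int):
    """X: (n, kx) primal snapshots, Y: (n, ky) adjoint snapshots."""
    U, s, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)
    s_r = np.sqrt(s[:r])
    Phi = X @ Vt[:r].T / s_r          # balancing modes (primal)
    Psi = Y @ U[:, :r] / s_r          # adjoint modes, with Psi.T @ Phi = I_r
    return Phi, Psi

n = 2000
X, Y = np.random.randn(n, 40), np.random.randn(n, 40)
Phi, Psi = balanced_pod(X, Y, r=5)
print(np.allclose(Psi.T @ Phi, np.eye(5)))    # bi-orthogonality check

# A linear operator A (n x n) would then reduce to Psi.T @ A @ Phi.
```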
22

Lee, Jaesung, and Dae-Won Kim. "Scalable Multilabel Learning Based on Feature and Label Dimensionality Reduction." Complexity 2018 (September 24, 2018): 1–15. http://dx.doi.org/10.1155/2018/6292143.

Full text
Abstract:
The data-driven management of real-life systems based on a trained model, which in turn is based on the data gathered from its daily usage, has attracted a lot of attention because it realizes scalable control for large-scale and complex systems. To obtain a model within an acceptable computational cost that is restricted by practical constraints, the learning algorithm may need to identify essential data that carries important knowledge on the relation between the observed features representing the measurement value and labels encoding the multiple target concepts. This results in an increased computational burden owing to the concurrent learning of multiple labels. A straightforward approach to address this issue is feature selection; however, it may be insufficient to satisfy the practical constraints because the computational cost for feature selection can be impractical when the number of labels is large. In this study, we propose an efficient multilabel feature selection method to achieve scalable multilabel learning when the number of labels is large. The empirical experiments on several multilabel datasets show that the multilabel learning process can be boosted without deteriorating the discriminating power of the multilabel classifier.
APA, Harvard, Vancouver, ISO, and other styles
23

Gottwald, Sebastian, and Daniel Braun. "Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty." Entropy 21, no. 4 (April 6, 2019): 375. http://dx.doi.org/10.3390/e21040375.

Full text
Abstract:
In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as the inverse of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.
APA, Harvard, Vancouver, ISO, and other styles
24

Sarkar, Biswajit, and Arunava Majumder. "Integrated vendor–buyer supply chain model with vendor’s setup cost reduction." Applied Mathematics and Computation 224 (November 2013): 362–71. http://dx.doi.org/10.1016/j.amc.2013.08.072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Stephens, H., Q. J. J. Wu, and Q. Wu. "The Reduction of Computational Cost by Introducing Kernel Sparsity and Truncation Into IMRT Optimization." International Journal of Radiation Oncology*Biology*Physics 111, no. 3 (November 2021): e145. http://dx.doi.org/10.1016/j.ijrobp.2021.07.596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Luo, Zhijian, and Yuntao Qian. "Stochastic sub-sampled Newton method with variance reduction." International Journal of Wavelets, Multiresolution and Information Processing 17, no. 06 (November 2019): 1950041. http://dx.doi.org/10.1142/s0219691319500413.

Full text
Abstract:
Stochastic optimization for large-scale machine learning problems has developed dramatically since stochastic gradient methods with the variance reduction technique were introduced. Several stochastic second-order methods, which approximate curvature information by the Hessian in the stochastic setting, have been proposed for improvement. In this paper, we introduce a Stochastic Sub-Sampled Newton method with Variance Reduction (S2NMVR), which incorporates the sub-sampled Newton method and the stochastic variance-reduced gradient. For many machine learning problems, the linear-time Hessian-vector product attests to the computational efficiency of S2NMVR. We then develop two variations of S2NMVR that preserve the estimation of the Hessian inverse and decrease the computational cost of the Hessian-vector product for nonlinear problems.
APA, Harvard, Vancouver, ISO, and other styles
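The linear-time Hessian-vector product invoked above is a standard identity for generalized linear models; the sketch below shows it for regularized logistic regression, with random data and a made-up regularization constant, and is not the authors' implementation.

```python
# Hessian-vector product for logistic loss in one pass over the data,
# without ever forming the (d x d) Hessian.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hessian_vector_product(X, w, v, lam=1e-3):
    p = sigmoid(X @ w)
    d = p * (1.0 - p)                  # per-sample curvature weights
    return X.T @ (d * (X @ v)) / len(X) + lam * v

n, dim = 10_000, 50
X, w, v = np.random.randn(n, dim), np.zeros(dim), np.random.randn(dim)
print(hessian_vector_product(X, w, v)[:3])
```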
27

Matsui, Kentaro, and Yo Sasaki. "Computational reduction of the spectral division method for synthesizing moving sources by source trajectory approximation." Journal of the Acoustical Society of America 153, no. 1 (January 2023): 159–67. http://dx.doi.org/10.1121/10.0016817.

Full text
Abstract:
This paper proposes a method to reduce the computational cost of the spectral division method that synthesizes moving sources. The proposed method consists of two approximations: that of the secondary source driving function and that of the trajectory of the moving sources. Combining these two approximations simplifies the integral calculations that traditionally appear in the driving functions, replacing them with a correction of the frequency magnitude and phase of the source signals. Numerical simulations and subjective experiments show that the computational cost can be reduced by a factor of 50–100 compared to the conventional method without significantly affecting the synthesized sound field and the sense of localization.
APA, Harvard, Vancouver, ISO, and other styles
28

Xie, Zheng, and Doug Edwards. "Computational Performance Optimisation for Statistical Analysis of the Effect of Nano-CMOS Variability on Integrated Circuits." VLSI Design 2013 (July 28, 2013): 1–22. http://dx.doi.org/10.1155/2013/984376.

Full text
Abstract:
The intrinsic variability of nanoscale VLSI technology must be taken into account when analyzing circuit designs to predict likely yield. Monte-Carlo- (MC-) and quasi-MC- (QMC-) based statistical techniques do this by analysing many randomised or quasirandomised copies of circuits. The randomisation must model forms of variability that occur in nano-CMOS technology, including “atomistic” effects without intradie correlation and effects with intradie correlation between neighbouring devices. A major problem is the computational cost of carrying out sufficient analyses to produce statistically reliable results. The use of principal components analysis, behavioural modeling, and an implementation of “Statistical Blockade” (SB) is shown to be capable of achieving significant reduction in the computational costs. A computation time reduction of 98.7% was achieved for a commonly used asynchronous circuit element. Replacing MC by QMC analysis can achieve further computation reduction, and this is illustrated for more complex circuits, with the results being compared with those of transistor-level simulations. The “yield prediction” analysis of SRAM arrays is taken as a case study, where the arrays contain up to 1536 transistors modelled using parameters appropriate to 35 nm technology. It is reported that savings of up to 99.85% in computation time were obtained.
APA, Harvard, Vancouver, ISO, and other styles
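As a toy illustration of the MC-versus-QMC comparison above, the sketch below estimates a yield-like probability under parameter variability with pseudorandom and scrambled Sobol samples. The response function, spec limit, and variability model are invented stand-ins for a circuit simulation, not anything from the paper.

```python
# Yield estimation with plain Monte Carlo versus quasi-Monte Carlo (Sobol).
import numpy as np
from scipy.stats import qmc, norm

def response(params):                       # hypothetical circuit metric
    return params @ np.array([1.0, -0.5, 0.3, 0.2]) + 0.1 * params[:, 0] ** 2

def yield_estimate(u):                      # u: uniform samples in (0, 1)^4
    x = norm.ppf(u) * 0.05                  # map to ~N(0, 0.05^2) parameter spread
    return np.mean(np.abs(response(x)) < 0.1)

n = 2 ** 12
mc = yield_estimate(np.random.rand(n, 4))
qmc_u = qmc.Sobol(d=4, scramble=True).random(n)
print(f"MC yield ~ {mc:.4f}, QMC yield ~ {yield_estimate(qmc_u):.4f}")
```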
29

Holloway, Ian, Aihua Wood, and Alexander Alekseenko. "Acceleration of Boltzmann Collision Integral Calculation Using Machine Learning." Mathematics 9, no. 12 (June 15, 2021): 1384. http://dx.doi.org/10.3390/math9121384.

Full text
Abstract:
The Boltzmann equation is essential to the accurate modeling of rarefied gases. Unfortunately, traditional numerical solvers for this equation are too computationally expensive for many practical applications. With modern interest in hypersonic flight and plasma flows, to which the Boltzmann equation is relevant, there would be immediate value in an efficient simulation method. The collision integral component of the equation is the main contributor of the large complexity. A plethora of new mathematical and numerical approaches have been proposed in an effort to reduce the computational cost of solving the Boltzmann collision integral, yet it still remains prohibitively expensive for large problems. This paper aims to accelerate the computation of this integral via machine learning methods. In particular, we build a deep convolutional neural network to encode/decode the solution vector, and enforce conservation laws during post-processing of the collision integral before each time-step. Our preliminary results for the spatially homogeneous Boltzmann equation show a drastic reduction of computational cost. Specifically, our algorithm requires O(n^3) operations, while asymptotically converging direct discretization algorithms require O(n^6), where n is the number of discrete velocity points in one velocity dimension. Our method demonstrated a speed up of 270 times compared to these methods while still maintaining reasonable accuracy.
APA, Harvard, Vancouver, ISO, and other styles
30

ZANIN, DIEGO, and RICCARDO ZECCHINA. "LEARNING INTERFERENCE REDUCTION IN NEURAL NETWORKS." Modern Physics Letters B 09, no. 18 (August 10, 1995): 1165–74. http://dx.doi.org/10.1142/s0217984995001169.

Full text
Abstract:
The learning and generalization properties of a modified learning cost function for Neural Networks models are discussed. We show that the introduction of a “cross-talk” term allows for an improvement of performance based on the control of the convergence subspaces of the network outputs. In the case of an unbiased distribution of binary patterns, we derive analytically the learning performance of the single layer architecture whereas we investigate numerically the generalization capabilities. An enhancement of computational performance is observed for multi-classification purposes, and also for imperfectly classified training sets.
APA, Harvard, Vancouver, ISO, and other styles
31

Ougiaroglou, Stefanos, Theodoros Mastromanolis, Georgios Evangelidis, and Dionisis Margaris. "Fast Training Set Size Reduction Using Simple Space Partitioning Algorithms." Information 13, no. 12 (December 10, 2022): 572. http://dx.doi.org/10.3390/info13120572.

Full text
Abstract:
The Reduction by Space Partitioning (RSP3) algorithm is a well-known data reduction technique. It summarizes the training data and generates representative prototypes. Its goal is to reduce the computational cost of an instance-based classifier without penalty in accuracy. The algorithm keeps on dividing the initial training data into subsets until all of them become homogeneous, i.e., they contain instances of the same class. To divide a non-homogeneous subset, the algorithm computes its two furthest instances and assigns all instances to their closest furthest instance. This is a very expensive computational task, since all distances among the instances of a non-homogeneous subset must be calculated. Moreover, noise in the training data leads to a large number of small homogeneous subsets, many of which have only one instance. These instances are probably noise, but the algorithm mistakenly generates prototypes for these subsets. This paper proposes simple and fast variations of RSP3 that avoid the computationally costly partitioning tasks and remove the noisy training instances. The experimental study conducted on sixteen datasets and the corresponding statistical tests show that the proposed variations of the algorithm are much faster and achieve higher reduction rates than the conventional RSP3 without negatively affecting the accuracy.
APA, Harvard, Vancouver, ISO, and other styles
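The splitting loop summarized above is easy to state in code; the sketch below implements the conventional RSP3-style recursion (split around the two furthest instances, recurse until homogeneous, keep centroids as prototypes) on synthetic 2-D data. It deliberately keeps the O(n^2) distance computation whose cost the paper's variants avoid, and it is not the authors' code.

```python
# Conventional RSP3-style prototype generation by recursive furthest-pair splits.
import numpy as np

def rsp3_prototypes(X: np.ndarray, y: np.ndarray):
    prototypes = []
    stack = [np.arange(len(X))]
    while stack:
        idx = stack.pop()
        if len(set(y[idx])) == 1:                        # homogeneous: emit prototype
            prototypes.append((X[idx].mean(axis=0), y[idx][0]))
            continue
        D = np.linalg.norm(X[idx, None] - X[idx], axis=2)    # all pairwise distances
        if D.max() == 0:                                 # identical points, mixed labels
            prototypes.append((X[idx].mean(axis=0), np.bincount(y[idx]).argmax()))
            continue
        a, b = np.unravel_index(np.argmax(D), D.shape)   # two furthest instances
        closer_to_a = D[:, a] <= D[:, b]
        stack.append(idx[closer_to_a])
        stack.append(idx[~closer_to_a])
    return prototypes

X = np.random.rand(300, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
print(len(rsp3_prototypes(X, y)), "prototypes from", len(X), "instances")
```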
32

Hou, Tianfeng, Karl Meerbergen, Staf Roels, and Hans Janssen. "The use of POD–DEIM model order reduction for the simulation of nonlinear hygrothermal problems." E3S Web of Conferences 172 (2020): 04002. http://dx.doi.org/10.1051/e3sconf/202017204002.

Full text
Abstract:
In this paper, the discrete empirical interpolation method (DEIM) and the proper orthogonal decomposition (POD) method are combined to construct a reduced order model to lessen the computational expense of hygrothermal simulation. To investigate the performance of the POD-DEIM model, HAMSTAD benchmark 2 is selected as the illustrative case study. To evaluate the accuracy of the POD-DEIM model as a function of the number of construction modes and interpolation points, the results of the POD-DEIM model are compared with a POD and a Finite Volume Method (FVM). Also, as the number of construction modes/interpolation points cannot entirely represent the computational cost of different models, the accuracies of the different models are compared as a function of the calculation time, to provide a fair comparison of their computational performances. Further, the use of POD-DEIM to simulate a problem different from the training snapshot simulation is investigated. The outcomes show that with a sufficient number of construction modes and interpolation points the POD-DEIM model can provide an accurate result, and is capable of reducing the computational cost relative to the POD and FVM.
APA, Harvard, Vancouver, ISO, and other styles
33

Rizk, Mostafa, Amer Baghdadi, and Michel Jézéquel. "Computational Complexity Reduction of MMSE-IC MIMO Turbo Detection." Journal of Circuits, Systems and Computers 28, no. 13 (March 1, 2019): 1950228. http://dx.doi.org/10.1142/s0218126619502281.

Full text
Abstract:
High data rates and error-rate performance approaching close to theoretical limits are key trends for evolving digital wireless communication applications. To address the first requirement, multiple-input multiple-output (MIMO) techniques are adopted in emergent wireless communication standards and applications. On the other hand, the turbo concept is used to alleviate the destructive effects of the channel and ensure error-rate performance close to theoretical limits. At the receiver side, the incorporation of MIMO techniques and turbo processing leads to increased complexity that has a severe impact on computation speed, power consumption and implementation area. Because of its increased complexity, the detector is considered critical among all receiver components. Low-complexity algorithms are developed at the cost of decreased performance. The minimum mean-squared error (MMSE) solution with iterative detection and decoding shows an acceptable tradeoff. In this paper, the complexity of the MMSE algorithm in the turbo detection context is investigated thoroughly. Algorithmic computations are surveyed to extract the characteristics of all involved parameters. Consequently, several decompositions are applied, leading to enhanced performance and to a significant reduction of utilized computations. The complexity of the algorithm is evaluated in terms of real-valued operations. The proposed decompositions save an average of [Formula: see text] and [Formula: see text] of required operations for 2 × 2 and 4 × 4 MIMO systems, respectively. In addition, the hardware implementation designed applying the devised simplifications and decompositions outperforms available state-of-the-art implementations in terms of maximum operating frequency, execution time, and performance.
APA, Harvard, Vancouver, ISO, and other styles
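For context on the detector whose complexity is being decomposed above, here is a bare MMSE MIMO detection step; the 4 × 4 channel, QPSK-like symbols, and noise level are placeholders, and the turbo iterations and proposed decompositions are not shown.

```python
# Linear MMSE MIMO detection: x_hat = (H^H H + sigma^2 I)^(-1) H^H y.
import numpy as np

def mmse_detect(H: np.ndarray, y: np.ndarray, noise_var: float) -> np.ndarray:
    n_t = H.shape[1]
    G = np.conj(H.T) @ H + noise_var * np.eye(n_t)
    return np.linalg.solve(G, np.conj(H.T) @ y)

n_r, n_t = 4, 4
H = (np.random.randn(n_r, n_t) + 1j * np.random.randn(n_r, n_t)) / np.sqrt(2)
x = np.sign(np.random.randn(n_t)) + 1j * np.sign(np.random.randn(n_t))   # QPSK-like
y = H @ x + 0.05 * (np.random.randn(n_r) + 1j * np.random.randn(n_r))
print(np.round(mmse_detect(H, y, noise_var=2 * 0.05**2), 2))
```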
34

De Ceuster, Frederik, Jan Bolte, Ward Homan, Silke Maes, Jolien Malfait, Leen Decin, Jeremy Yates, Peter Boyle, and James Hetherington. "magritte, a modern software library for 3D radiative transfer – II. Adaptive ray-tracing, mesh construction, and reduction." Monthly Notices of the Royal Astronomical Society 499, no. 4 (October 16, 2020): 5194–204. http://dx.doi.org/10.1093/mnras/staa3199.

Full text
Abstract:
ABSTRACT Radiative transfer is a notoriously difficult and computationally demanding problem. Yet, it is an indispensable ingredient in nearly all astrophysical and cosmological simulations. Choosing an appropriate discretization scheme is a crucial part of the simulation, since it not only determines the direct memory cost of the model but also largely determines the computational cost and the achievable accuracy. In this paper, we show how an appropriate choice of directional discretization scheme as well as spatial model mesh can help alleviate the computational cost, while largely retaining the accuracy. First, we discuss the adaptive ray-tracing scheme implemented in our 3D radiative transfer library magritte, that adapts the rays to the spatial mesh and uses a hierarchical directional discretization based on healpix. Second, we demonstrate how the free and open-source software library gmsh can be used to generate high-quality meshes that can be easily tailored for magritte. In particular, we show how the local element size distribution of the mesh can be used to optimize the sampling of both analytically and numerically defined models. Furthermore, we show that when using the output of hydrodynamics simulations as input for a radiative transfer simulation, the number of elements in the input model can often be reduced by an order of magnitude, without significant loss of accuracy in the radiation field. We demonstrate this for two models based on a hierarchical octree mesh resulting from adaptive mesh refinement, as well as two models based on smoothed particle hydrodynamics data.
APA, Harvard, Vancouver, ISO, and other styles
35

BEN AYED, MOHAMED ALI, AMINE SAMET, and NOURI MASMOUDI. "TOWARD AN OPTIMAL BLOCK MOTION ESTIMATION ALGORITHM FOR H.264/AVC." International Journal of Image and Graphics 07, no. 02 (April 2007): 303–20. http://dx.doi.org/10.1142/s0219467807002660.

Full text
Abstract:
A merging procedure joining search patterns and variable block size motion estimation for H.264/AVC is proposed in this paper. The principal purpose of the proposed methods is the reduction of the computational complexity of the block matching module. In fact, there are numerous contributions in the literature aimed at reducing the computational cost of motion estimation. The best solution from a qualitative point of view is the full search, which considers every possible detail; the computational effort required is enormous, and this makes motion estimation by far the most important computational bottleneck in video coding systems. Our approach exploits the center-biased characteristics of real-world video sequences, aiming to achieve acceptable image quality while independently targeting the reduction of the computational complexity. The simulation results demonstrate that the proposal performs well.
APA, Harvard, Vancouver, ISO, and other styles
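The exhaustive baseline referred to above can be sketched in a few lines: for each block, scan every candidate displacement in a search window and keep the one with the minimum sum of absolute differences (SAD). Block size, search range, and the synthetic shifted frame below are illustrative choices.

```python
# Full-search block matching by minimum SAD over a +-search window.
import numpy as np

def full_search(ref: np.ndarray, cur: np.ndarray, bx: int, by: int,
                block: int = 16, search: int = 7):
    """Best motion vector for the block of `cur` at (by, bx)."""
    target = cur[by:by + block, bx:bx + block].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + block > ref.shape[0] or x0 + block > ref.shape[1]:
                continue
            sad = np.abs(ref[y0:y0 + block, x0:x0 + block].astype(int) - target).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))            # known shift for checking
print(full_search(ref, cur, bx=16, by=16))          # expect MV (-2, 3), SAD 0
```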
36

Tai, Yen-Ling, Shin-Jhe Huang, Chien-Chang Chen, and Henry Horng-Shing Lu. "Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi–Dirac Correction Functions." Entropy 23, no. 2 (February 11, 2021): 223. http://dx.doi.org/10.3390/e23020223.

Full text
Abstract:
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks having large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hold back the technical development of deep learning methods. In this article, we thus establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically into a non-interacting physical system and then treat image voxels as particle-like clusters. Then, we reconstruct the Fermi–Dirac distribution to be a correction function for the normalization of the voxel intensity and a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for the algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other employed preprocessing methods. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm can save at least 38% of the computational time cost under a low-cost hardware architecture. Even though the global histogram equalization correction function has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
APA, Harvard, Vancouver, ISO, and other styles
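One plausible reading of the Fermi–Dirac correction described above is a soft, saturating mapping of scaled voxel intensities; the sketch below implements that reading with made-up values for the "chemical potential" mu and "temperature" T, and may differ from the paper's exact construction.

```python
# Fermi-Dirac-shaped intensity correction: ~1 below mu, ~0 above mu.
import numpy as np

def fermi_dirac_correction(volume: np.ndarray, mu: float = 0.5, T: float = 0.05) -> np.ndarray:
    """Remap min-max-scaled intensities through 1 / (exp((x - mu) / T) + 1)."""
    x = (volume - volume.min()) / (volume.max() - volume.min() + 1e-12)
    return 1.0 / (np.exp((x - mu) / T) + 1.0)

vol = np.random.rand(32, 32, 32)                # placeholder image volume
corrected = fermi_dirac_correction(vol)
print(float(corrected.min()), float(corrected.max()))
```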
37

Codecasa, Lorenzo, Federico Moro, and Piergiorgio Alotto. "Nonlinear model order reduction for the fast solution of induction heating problems in time-domain." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 36, no. 2 (March 6, 2017): 469–75. http://dx.doi.org/10.1108/compel-05-2016-0215.

Full text
Abstract:
Purpose: This paper aims to propose a fast and accurate simulation of large-scale induction heating problems by using nonlinear reduced-order models. Design/methodology/approach: A projection space for model order reduction (MOR) is quickly generated from the first kernels of Volterra’s series to the problem solution. The nonlinear reduced model can be solved with a time-harmonic phasor approximation, as the nonlinear quadratic structure of the full problem is preserved by the projection. Findings: The solution of induction heating problems is still computationally expensive, even with a time-harmonic eddy current approximation. Numerical results show that the construction of the nonlinear reduced model has a computational cost which is orders of magnitude smaller than that required for the solution of the full problem. Research limitations/implications: Only linear magnetic materials are considered in the present formulation. Practical implications: The proposed MOR approach is suitable for the solution of industrial problems with a computing time which is orders of magnitude smaller than that required for the full unreduced problem, solved by traditional discretization methods such as the finite element method. Originality/value: The most common technique for MOR is the proper orthogonal decomposition, which requires solving the full nonlinear problem several times. The present MOR approach can instead be built directly at a negligible computational cost. From the reduced model, magnetic and temperature fields can be accurately reconstructed in the whole time and space domains.
APA, Harvard, Vancouver, ISO, and other styles
38

Bharathi, M. "Optimum Test Suite Using Fault-Type Coverage-Based Ant Colony Optimization Algorithm." International Journal of Applied Metaheuristic Computing 13, no. 1 (January 2022): 1–23. http://dx.doi.org/10.4018/ijamc.2022010106.

Full text
Abstract:
Software Product Lines (SPLs) cover a mixture of features for testing a Software Application Program (SPA). Testing cost reduction is a major metric of software testing. In combinatorial testing (CT), maximization of fault-type coverage and test suite reduction play a key role in reducing the testing cost of an SPA. The metaheuristic Genetic Algorithm (GA) does not offer the best outcome for the test suite optimization problem, due to its mutation operation, and requires more computational time. Therefore, a Fault-Type Coverage Based Ant Colony Optimization (FTCBACO) algorithm is offered for test suite reduction in CT. The FTCBACO algorithm starts with the test cases in the test suite and assigns a separate ant to each test case. Ants elect the best test cases by updating pheromone trails and selecting higher-probability trails. The best test-case path of the ant with the least time is taken as the optimal solution for performing CT. Hence, the FTCBACO technique enriches the reduction rate of the test suite and minimizes the computational time of reducing test cases efficiently for CT.
APA, Harvard, Vancouver, ISO, and other styles
39

ZHANG, Xiaoyong, Masahide ABE, and Masayuki KAWAMATA. "Reduction of Computational Cost of POC-Based Methods for Displacement Estimation in Old Film Sequences." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A, no. 7 (2011): 1497–504. http://dx.doi.org/10.1587/transfun.e94.a.1497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Dattani, Nikesh S. "Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost." AIP Advances 2, no. 1 (March 2012): 012121. http://dx.doi.org/10.1063/1.3680607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Khisamutdinov, A. I., and N. N. Velker. "On reduction of computational cost of imitation Monte Carlo algorithms for modeling rarefied gas flows." Mathematical Models and Computer Simulations 4, no. 2 (April 2012): 187–202. http://dx.doi.org/10.1134/s2070048212020068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Muller, Andrei A., Esther Sanabria-Codesal, and Stepan Lucyszyn. "Computational Cost Reduction for N+2 Order Coupling Matrix Synthesis Based on Desnanot-Jacobi Identity." IEEE Access 4 (2016): 10042–50. http://dx.doi.org/10.1109/access.2016.2631262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Feichi, Thorsten Zirwes, Peter Habisreuther, and Henning Bockhorn. "Towards reduction of computational cost for large-scale combustion modelling with a multi-regional concept." Progress in Computational Fluid Dynamics, An International Journal 18, no. 6 (2018): 333. http://dx.doi.org/10.1504/pcfd.2018.096616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bockhorn, Henning, Thorsten Zirwes, Peter Habisreuther, and Feichi Zhang. "Towards reduction of computational cost for large-scale combustion modelling with a multi-regional concept." Progress in Computational Fluid Dynamics, An International Journal 18, no. 6 (2018): 333. http://dx.doi.org/10.1504/pcfd.2018.10017951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Shangze, Di Xiao, Xuesong Li, and Zhen Ma. "Markov Chain Investigation of Discretization Schemes and Computational Cost Reduction in Modeling Photon Multiple Scattering." Applied Sciences 8, no. 11 (November 19, 2018): 2288. http://dx.doi.org/10.3390/app8112288.

Full text
Abstract:
Establishing fast and reversible photon multiple scattering algorithms remains a modeling challenge for optical diagnostics and noise reduction purposes, especially when the scattering happens within the intermediate scattering regime. Previous work has proposed and verified a Markov chain approach for modeling photon multiple scattering phenomena through turbid slabs. The fidelity of the Markov chain method has been verified through detailed comparison with Monte Carlo models. However, further improvement to the Markov chain method is still required to improve its performance in studying multiple scattering. The present research discussed the efficacy of non-uniform discretization schemes and analyzed errors induced by different schemes. The current work also proposed an iterative approach as an alternative to directly carrying out matrix inversion manipulations, which would significantly reduce the computational costs. The benefits of utilizing non-uniform discretization schemes and the iterative approach were confirmed and verified by comparing the results to a Monte Carlo simulation.
APA, Harvard, Vancouver, ISO, and other styles
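The "iterative approach as an alternative to directly carrying out matrix inversion" described above can be illustrated with a simple Neumann fixed-point iteration for a system (I − Q)x = b with a substochastic transition matrix Q; the random Q and b below are stand-ins for the scattering operator, not the paper's construction.

```python
# Solve (I - Q) x = b by iterating x <- b + Q x (Neumann series), and compare
# with the direct solve that an explicit inversion would replace.
import numpy as np

def neumann_solve(Q: np.ndarray, b: np.ndarray, tol: float = 1e-10, max_iter: int = 10_000):
    x = b.copy()
    for _ in range(max_iter):
        x_new = b + Q @ x
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

n = 200
Q = np.random.rand(n, n)
Q *= 0.9 / Q.sum(axis=1, keepdims=True)            # substochastic rows, so it converges
b = np.random.rand(n)
x_iter = neumann_solve(Q, b)
x_direct = np.linalg.solve(np.eye(n) - Q, b)
print(np.max(np.abs(x_iter - x_direct)))           # agreement between the two
```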
46

Zhou, Yujian, Liang Bao, and Yiqin Lin. "An Alternating Direction Implicit Method for Solving Projected Generalized Continuous-Time Sylvester Equations." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/387854.

Full text
Abstract:
We present the generalized low-rank alternating direction implicit method and the low-rank cyclic Smith method to solve projected generalized continuous-time Sylvester equations with low-rank right-hand sides. Such equations arise in control theory, including the computation of inner products and ℍ2 norms and the model reduction based on balanced truncation for descriptor systems. The requirements of these methods are moderate with respect to both computational cost and memory. Numerical experiments presented in this paper show the effectiveness of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
47

Lim, Yaik-Wah, Taki Eddine Seghier, Muhamad Farhin Harun, Mohd Hamdan Ahmad, Azurah A. Samah, and Hairudin Abdul Majid. "Computational BIM for Building Envelope Sustainability Optimization." MATEC Web of Conferences 278 (2019): 04001. http://dx.doi.org/10.1051/matecconf/201927804001.

Full text
Abstract:
The building envelope plays an important role in protecting a building from external climatic factors while providing a comfortable indoor environment. However, the choices of construction materials, opening sizes, and glazing types for optimized sustainability performance require discrete analyses and decision-making processes. This study therefore explores the use of computational building information modelling (BIM) to automate the design decision-making process for building envelope sustainability optimization. A BIM tool (Revit), a visual programming tool (Dynamo) and a multi-objective optimization algorithm were integrated to create a computational BIM-based optimization model for building envelope overall thermal transfer value (OTTV) and construction cost. The proposed model was validated through a test case; the results showed that the optimized design achieved a 44.78% reduction in OTTV but a 19.64% increase in construction cost compared to the original design. The newly developed computational BIM optimization model can improve the level of automation in the design process for sustainability.
APA, Harvard, Vancouver, ISO, and other styles
48

Ghodsi, Seyed Roholah, and Mohammad Taeibi-Rahni. "A Novel Parallel Algorithm Based on the Gram-Schmidt Method for Tridiagonal Linear Systems of Equations." Mathematical Problems in Engineering 2010 (2010): 1–17. http://dx.doi.org/10.1155/2010/268093.

Full text
Abstract:
This paper introduces a new parallel algorithm based on the Gram-Schmidt orthogonalization method. This parallel algorithm can find almost exact solutions of tridiagonal linear systems of equations in an efficient way. The system of equations is partitioned in proportion to the number of processors, and each partition is solved by a processor with a minimum of requests for the other partitions' data. The considerable reduction in data communication between processors leads to an interesting speedup. The relationships between partitions approximately disappear if some columns are switched; hence, the speed of computation increases and the computational cost decreases. Consequently, the obtained results show that the suggested algorithm is considerably scalable. In addition, this method of partitioning can significantly decrease the computational cost on a single processor and make it possible to solve larger systems of equations. To evaluate the performance of the parallel algorithm, speedup and efficiency are presented. The results reveal that the proposed algorithm is practical and efficient.
APA, Harvard, Vancouver, ISO, and other styles
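To ground the building block named above, the sketch below runs a serial classical Gram-Schmidt orthogonalization and uses the resulting QR factors to solve a small tridiagonal system. The processor partitioning and column-switching strategy that give the paper its speedup are not reproduced; the test matrix is a generic stand-in.

```python
# Classical Gram-Schmidt QR and a QR-based solve of a tridiagonal system.
import numpy as np

def gram_schmidt_qr(A: np.ndarray):
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):                       # remove components along earlier q_i
            R[i, j] = Q[:, i] @ A[:, j]
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def solve_via_qr(A, b):
    Q, R = gram_schmidt_qr(A)
    return np.linalg.solve(R, Q.T @ b)           # R is upper triangular

n = 50
A = np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.random.rand(n)
print(np.allclose(A @ solve_via_qr(A, b), b))
```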
49

Mikeš, Karel, and Milan Jirásek. "Quasicontinuum Simulation of Nanotextile Based on the Microplane Model." Key Engineering Materials 714 (September 2016): 143–47. http://dx.doi.org/10.4028/www.scientific.net/kem.714.143.

Full text
Abstract:
The quasicontinuum (QC) method is a relatively new computational technique which combines fast continuum and exact atomistic approaches. The key idea of QC is to reduce the computational cost by reducing the degrees of freedom of the fully atomistic approach. In this work, a material model based on the idea of microplanes is used to realize the QC simplification. A formulation convenient for the numerical simulation of materials with a structure similar to nanotextile is proposed. The relations between microscopic and macroscopic parameters are derived. Numerical tests show that the proposed model can reach a significant reduction of computational cost at the price of an acceptable error.
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Hong-rae, Eun-bin Ahn, A.-young Kim, and Kwang-deok Seo. "Complexity reduction method for High Efficiency Video Coding encoding based on scene-change detection and image texture information." International Journal of Distributed Sensor Networks 15, no. 12 (December 2019): 155014771989256. http://dx.doi.org/10.1177/1550147719892562.

Full text
Abstract:
Recently, as demand for high-quality video and realistic media has increased, High Efficiency Video Coding has been standardized. However, High Efficiency Video Coding incurs a heavy computational-complexity cost to achieve high coding efficiency, which causes problems for fast and real-time processing. In particular, High Efficiency Video Coding inter-coding has heavy computational complexity, and High Efficiency Video Coding inter prediction uses reference pictures to improve coding efficiency. The reference pictures are typically signaled in two independent lists according to the display order, to be used for forward and backward prediction. If an event such as a scene change occurs in the input video, the inter prediction performs unnecessary computations. Therefore, the reference picture list should be reconfigured to improve the inter prediction performance and reduce computational complexity. To address this problem, this article proposes a method to reduce the computational complexity of fast High Efficiency Video Coding encoding using information such as scene changes obtained from the input video through preprocessing. Furthermore, reference picture lists are reconstructed by sorting the reference pictures by similarity to the currently coded picture using Angular Second Moment, Contrast, Entropy, and Correlation, which are image texture parameters from the input video. Simulations show that both the encoding time and the coding efficiency can be improved simultaneously by applying the proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
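The four texture parameters named above are standard grey-level co-occurrence matrix (GLCM) features; the sketch below computes them with scikit-image, which is an assumed tool choice rather than the authors', on a random frame standing in for a decoded picture.

```python
# GLCM texture features: ASM, Contrast, Correlation from graycoprops,
# plus entropy computed directly from the normalized co-occurrence matrix.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(frame: np.ndarray) -> dict:
    glcm = graycomatrix(frame, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "ASM": float(graycoprops(glcm, "ASM")[0, 0]),
        "Contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "Correlation": float(graycoprops(glcm, "correlation")[0, 0]),
        "Entropy": float(entropy),
    }

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(frame))
# Reference pictures could then be ranked by feature similarity to the current picture.
```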