Journal articles on the topic "Algorithm efficiency"

Listed below are the 50 best journal articles for studies on the topic "Algorithm efficiency".

1

Sergienko, Ivan, Vladimir Shylo, Valentyna Roshchyn, and Petro Shylo. "The Efficiency of Discrete Optimization Algorithm Portfolios." Cybernetics and Computer Technologies, no. 2 (June 30, 2021): 5–12. http://dx.doi.org/10.34229/2707-451x.21.2.1.

Abstract:
Introduction. Solving large-scale discrete optimization problems requires the processing of large-scale data in a reasonable time. Efficient solving is only possible by using multiprocessor computer systems. However, it is a daunting challenge to adapt existing optimization algorithms to get all the benefits of these parallel computing systems. The available computational resources are ineffective without efficient and scalable parallel methods. In this connection, the algorithm unions (portfolios and teams) play a crucial role in the parallel processing of discrete optimization problems. The purpose. The purpose of this paper is to research the efficiency of the algorithm portfolios by solving the weighted max-cut problem. The research is carried out in two stages using stochastic local search algorithms. Results. In this paper, we investigate homogeneous and non-homogeneous algorithm portfolios. We developed the homogeneous portfolios of two stochastic local optimization algorithms for the weighted max-cut problem, which has numerous applications. The results confirm the advantages of the proposed methods. Conclusions. Algorithm portfolios could be used to solve well-known discrete optimization problems of unprecedented scale and significantly improve their solving time. Further, we propose using communication between algorithms, namely teams and portfolios of algorithm teams. The algorithms in a team communicate with each other to boost overall performance. It is supposed that algorithm communication allows enhancing the best features of the developed algorithms and would improve the computational times and solution quality. The underlying algorithms should be able to utilize relevant data that is being communicated effectively to achieve any computational benefit from communication. Keywords: Discrete optimization, algorithm portfolios, computational experiment.
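A portfolio as described here can be illustrated with a small, self-contained sketch: several independent stochastic local searches attack the same weighted max-cut instance and the best cut found is kept. This is only a toy illustration of the homogeneous-portfolio idea (the flip-based local search, the restart scheme and the toy graph are assumptions for the example), not the authors' algorithms, which would run in parallel on a multiprocessor system.

```python
import random

def cut_weight(adj, side):
    """Total weight of edges crossing the partition (side[v] in {0, 1})."""
    return sum(w for u, nbrs in adj.items() for v, w in nbrs if u < v and side[u] != side[v])

def local_search(adj, rng):
    """One stochastic local search: random start, then greedy vertex flips."""
    side = {v: rng.randint(0, 1) for v in adj}
    improved = True
    while improved:
        improved = False
        for v in adj:
            # Gain of moving v to the other side: edges gained minus edges lost.
            gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
            if gain > 0:
                side[v] ^= 1
                improved = True
    return side

def portfolio(adj, runs=8, seed=0):
    """Homogeneous portfolio: independent runs of the same search, keep the best cut."""
    best_side, best_w = None, float("-inf")
    for r in range(runs):
        side = local_search(adj, random.Random(seed + r))
        w = cut_weight(adj, side)
        if w > best_w:
            best_side, best_w = side, w
    return best_side, best_w

# Toy weighted graph as an adjacency list: vertex -> [(neighbour, weight), ...]
adj = {0: [(1, 3), (2, 1)], 1: [(0, 3), (2, 2)], 2: [(0, 1), (1, 2), (3, 4)], 3: [(2, 4)]}
print(portfolio(adj))
```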
2

Zarembo, Imants, and Sergejs Kodors. "Pathfinding Algorithm Efficiency Analysis in 2D Grid." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 2 (August 8, 2015): 46. http://dx.doi.org/10.17770/etr2013vol2.868.

Abstract:
The main goal of this paper is to collect information about the pathfinding algorithms A*, BFS, Dijkstra's algorithm, HPA* and LPA*, and to compare them on different criteria, including execution time and memory requirements. The work has two parts, the first theoretical and the second practical. The theoretical part details the comparison of the pathfinding algorithms. The practical part covers the implementation of the specific algorithms and a series of experiments using the implemented algorithms. Factors such as two-dimensional grids of various sizes and the choice of heuristics were taken into account while conducting the experiments.
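For readers who want a concrete reference point for the algorithms being compared, the sketch below is a plain A* search on a 4-connected 2D grid with a Manhattan-distance heuristic, one of the algorithms examined in the paper; the grid encoding and unit step costs are assumptions made for the example.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]        # entries are (f, g, cell)
    g_cost = {start: 0}
    parent = {start: None}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:                       # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        if g > g_cost.get(cell, float("inf")):
            continue                           # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    parent[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```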
3

Zhi, Xiangyu, Xiao Yan, Bo Tang, Ziyao Yin, Yanchao Zhu, and Minqi Zhou. "CoroGraph: Bridging Cache Efficiency and Work Efficiency for Graph Algorithm Execution." Proceedings of the VLDB Endowment 17, no. 4 (December 2023): 891–903. http://dx.doi.org/10.14778/3636218.3636240.

Abstract:
Many systems are designed to run graph algorithms efficiently in memory but they achieve only cache efficiency or work efficiency. We tackle this fundamental trade-off in existing systems by designing CoroGraph, a system that attains both cache efficiency and work efficiency for in-memory graph processing. CoroGraph adopts a novel hybrid execution model, which generates update messages at vertex granularity to prioritize promising vertices for work efficiency, and commits updates at partition granularity to share data access for cache efficiency. To overlap the random memory access of graph algorithms with computation, CoroGraph extensively uses coroutine, i.e., a lightweight function in C++ that can yield and resume with low overhead, to prefetch the required data. A suite of designs are incorporated to reap the full benefits of coroutine, which include prefetch pipeline, cache-friendly graph format, and stop-free synchronization. We compare CoroGraph with five state-of-the-art graph algorithm systems via extensive experiments. The results show that CoroGraph yields shorter algorithm execution time than all baselines in 18 out of 20 cases, and its speedup over the best-performing baseline can be over 2x. Detailed profiling suggests that CoroGraph achieves both cache efficiency and work efficiency with a low memory stall and a small number of processed edges.
4

Brandejsky, Tomas. "Function Set Structure Influence onto GPA Efficiency." MENDEL 23, no. 1 (June 1, 2017): 29–32. http://dx.doi.org/10.13164/mendel.2017.1.029.

Abstract:
The paper discusses the influence of function set structure on the efficiency of GPAs (Genetic Programming Algorithms) and of hierarchical algorithms such as GPA-ES (GPA with an Evolutionary Strategy for separate parameter optimization). First, the discussed GPA algorithm is described. The function set and the common requirements on its structure are then presented. Finally, the test examples and environment are described, and the measured influence of the presence of superfluous functions in the used function set is discussed.
5

Klyatchenko, Yaroslav, and Volodymyr Holub. "Efficiency of Lossless Data Compression Algorithm Modification." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (10) (December 19, 2023): 67–72. http://dx.doi.org/10.20998/2079-0023.2023.02.10.

Abstract:
The current level of development of information technologies causes a rapid increase in the amount of information stored, transmitted and processed in computer systems. Ensuring the full and effective use of this information requires the latest, improved algorithms for compressing data and optimizing its storage. The further growth of the technical level of hardware and software is closely related to the problem of insufficient storage capacity, which also makes effective data compression a pressing task. Improved compression algorithms allow more efficient use of storage resources and reduce data transfer time over the network. Every year, programmers, scientists, and researchers look for ways to improve existing algorithms, as well as to invent new ones, because every algorithm, even a simple one, has potential for improvement. A wide range of technologies related to the collection, processing, storage and transmission of information is largely oriented towards systems in which the graphical presentation of information has an advantage over other forms of presentation. The development of modern computer systems and networks has driven the wide distribution of tools operating on digital images. Clearly, storing and transferring a large number of images in their original, unprocessed form is a rather resource-intensive task. In turn, modern multimedia systems have gained considerable popularity thanks, first of all, to effective means of compressing graphic information. Image compression is a key factor in improving the efficiency of data transfer and the use of computing resources. This work studies modifications of the Quite OK Image Format (QOI) data compression algorithm, which is optimized for fast compression of graphic information. Testing of the implementations proposed by the format's author shows results encouraging enough to make it competitive with the well-known PNG algorithm, providing a higher compression speed and targeting work with archives. The article compares the results of the two proposed modifications of the algorithm with the original implementation and shows their advantages. The effectiveness of the modifications and the features of their application in various cases were evaluated. The compression ratios of files compressed by the original QOI algorithm were also compared with the ratios obtained by applying the modifications of its initial version.
6

Xiao, Shi Song, Ao Lin Wang, and Hui Feng. "An Improved Algorithm Based on AC-BM Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1576–79. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1576.

Abstract:
Pattern matching is the mainstream technology in intrusion detection systems, so the string matching algorithm at the core of a pattern-matching method directly affects the performance and efficiency of an intrusion detection system. Based on a discussion of the most popular pattern matching algorithms currently in use, an improved version of the AC-BM algorithm is presented. Experiments in Snort show that the improved algorithm achieves higher performance and efficiency than the original AC-BM algorithm.
7

Gracios, Anita, and Arun Jhapate. "An Effective Routing Algorithm to Enhance Efficiency with WSN." International Journal of Trend in Scientific Research and Development 2, no. 3 (April 30, 2018): 301–4. http://dx.doi.org/10.31142/ijtsrd10879.

8

Zaharov, D. O., and A. P. Karpenko. "Study of League Championship Algorithm Efficiency for Global Optimization Problem." Mathematics and Mathematical Modeling, no. 2 (June 9, 2020): 25–45. http://dx.doi.org/10.24108/mathm.0220.0000217.

Abstract:
The objective of the article is to study the efficiency of the new League Championship Algorithm (LCA) by comparing it with the efficiency of the Particle Swarm Optimization (PSO) algorithm. The article presents a brief description of the terms used in the League Championship Algorithm and describes the basic rules of the algorithm, on which the iterative process for solving the global optimization problem is built. It gives a detailed description of the League Championship Algorithm, including a flowchart of the algorithm and a formalization of all its main steps, and describes the software developed to implement the League Championship Algorithm for solving global optimization problems. The modified particle swarm algorithm is briefly described, along with the values of all free parameters of the algorithm and the modifications that distinguish it from the classical version. The main part of the article presents the results of a large number of computational experiments using the two abovementioned algorithms, together with all the performance criteria used to assess the efficiency of the algorithms. Computational experiments were performed using the spherical function, as well as the Rosenbrock, Rastrigin, and Ackley functions, for vectors of variable parameters of dimension 2, 4, 8, 16, 32, and 64. The results of the experiments are summarized in tables and illustrated in figures. The analysis of the results provides a full assessment of the efficiency of the League Championship Algorithm and answers the question of whether further development of the algorithm is expedient. It is shown that the League Championship Algorithm presented in the article has high development potential and merits further study.
9

Wang, Chun Ye, and Xiao Feng Zhou. "The MapReduce Parallel Study of KNN Algorithm." Advanced Materials Research 989-994 (July 2014): 2123–27. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2123.

Abstract:
Although parallelizing the KNN algorithm improves its classification efficiency, the computation of the parallel algorithm grows with the scale of the training sample data, which limits the classification efficiency. To address this shortcoming, this paper improves the original parallel KNN algorithm in the MapReduce model by adding a text preprocessing step to further improve the classification efficiency of the algorithm.
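The MapReduce formulation of KNN can be illustrated without a Hadoop cluster: mappers emit the k best (distance, label) candidates from disjoint chunks of the training data, and a reducer merges them, keeps the global k nearest and votes. The sketch below is a minimal single-process imitation of that flow (the chunking, the Euclidean distance and the toy data are assumptions); the text preprocessing step proposed in the paper is not shown.

```python
from collections import Counter
import math

def mapper(chunk, query, k):
    """Map phase: each chunk of training data emits its k best candidates."""
    dists = [(math.dist(x, query), label) for x, label in chunk]
    return sorted(dists)[:k]

def reducer(candidate_lists, k):
    """Reduce phase: merge per-chunk candidates, keep the global k nearest, vote."""
    merged = sorted(d for lst in candidate_lists for d in lst)[:k]
    return Counter(label for _, label in merged).most_common(1)[0][0]

def knn_mapreduce(train, query, k=3, n_chunks=2):
    chunks = [train[i::n_chunks] for i in range(n_chunks)]   # simulate data partitions
    return reducer([mapper(c, query, k) for c in chunks], k)

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.1), "b"), ((0.9, 1.0), "b")]
print(knn_mapreduce(train, (0.2, 0.1), k=3))
```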
10

Yang, Wen Jun. "Efficient Pattern Matching Algorithm for Intrusion Detection Systems." Applied Mechanics and Materials 511-512 (February 2014): 1178–84. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.1178.

Abstract:
To overcome the low efficiency of pattern matching in intrusion detection systems (IDS), an efficient pattern matching algorithm is proposed. The proposed algorithm first preprocesses the pattern to record pattern information, then recursively compares nodes to find the largest common part of the pattern and thereby improve efficiency. The algorithm also appends an auxiliary structure of m nodes to the pattern to reduce time and space complexity. Theoretical analysis derives the time and space complexity of the proposed algorithm for a subject with n nodes. Detailed experimental results and comparisons with existing algorithms show that the proposed algorithm outperforms current state-of-the-art algorithms in terms of time efficiency, space efficiency and matching ratio.
11

Bhavani, Ch, and P. Madhavi. "Improving Efficiency of Apriori Algorithm." International Journal of Computer Trends and Technology 27, no. 2 (September 25, 2015): 93–99. http://dx.doi.org/10.14445/22312803/ijctt-v27p116.

12

Liao, Xiong, Junxiong Guo, Zhenghua Luo, Yanghui Xu, and Yingjun Chu. "Research and Implementation of High-Efficiency and Low-Complexity LDPC Coding Algorithm." Electronics 12, no. 17 (September 1, 2023): 3696. http://dx.doi.org/10.3390/electronics12173696.

Abstract:
In this work, we proposed a high-efficiency and low-complexity encoding algorithm and its corresponding implementation structure during the design and implementation of an LDPC encoder and decoder. The proposal was derived from extensive research on and analysis of standard encoding algorithms and recursive iterative encoding algorithms, specifically targeting the problem of high computational complexity in encoding. Subsequently, we combined binary phase-shift keying modulation and additive white Gaussian noise channel transmission with the min-sum decoding algorithm to realize the (1536, 1024) LDPC codec. This codec was uniformly quantized with a (6, 2) configuration, executed eight iterations, and achieved a 2/3 code rate in the IEEE 802.16e standard. At a bit error rate (BER) of 10^−5, the proposed coding algorithm performed about 0.25 dB better than the recursive-iterative coding algorithm and about 1.25 dB better than the standard coding algorithm, which confirms the correctness, effectiveness and feasibility of the proposed algorithm.
13

Markić, Ivan, Maja Štula, Marija Zorić, and Darko Stipaničev. "Entropy-Based Approach in Selection Exact String-Matching Algorithms." Entropy 23, no. 1 (December 28, 2020): 31. http://dx.doi.org/10.3390/e23010031.

Abstract:
The string-matching paradigm is applied in every branch of computer science and of science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured based on the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm execution identified with the computational resources the algorithm consumes; resource usage in algorithm execution can be determined, and for maximum efficiency the goal is to minimize it. Guided by the fact that standard measures of algorithm efficiency, such as execution time, depend directly on the number of executed actions, and without addressing computer power consumption or memory, which also depend on the algorithm type and the techniques used in algorithm development, we have developed a methodology that enables researchers to choose an efficient algorithm for a specific domain. The efficiency of string searching algorithms is usually observed independently of the domain texts being searched. This research paper presents the idea that algorithm efficiency depends on the properties of the searched string and the properties of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through a character comparison count metric, a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain using appropriate domain data (patterns and texts) and provides, for that domain, a ranking of algorithms according to the patterns' entropy. The proposed approach is limited to on-line exact string-matching problems and is based on the information entropy of the search pattern. Meticulous empirical testing illustrates the implementation of the methodology and supports its soundness.
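The two quantities the methodology rests on can be sketched directly: the Shannon entropy of the search pattern and a character-comparison count recorded while matching. The naive matcher below is only a stand-in used to show the counting; the paper applies the comparison-count metric to a range of exact string-matching algorithms.

```python
import math
from collections import Counter

def pattern_entropy(pattern):
    """Shannon entropy (bits per symbol) of the pattern's character distribution."""
    counts = Counter(pattern)
    n = len(pattern)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def naive_search_with_count(text, pattern):
    """Naive exact matching, returning (match positions, character comparisons made)."""
    comparisons, matches = 0, []
    for i in range(len(text) - len(pattern) + 1):
        for j, ch in enumerate(pattern):
            comparisons += 1
            if text[i + j] != ch:
                break
        else:
            matches.append(i)
    return matches, comparisons

text = "abracadabra"
for pat in ("abra", "aaaa"):
    print(pat, pattern_entropy(pat), naive_search_with_count(text, pat))
```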
14

Seliverstov, E. Yu. "Structural Mapping of Global Optimization Algorithms to Graphics Processing Unit Architecture." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 2 (139) (June 2022): 42–59. http://dx.doi.org/10.18698/0236-3933-2022-2-42-59.

Abstract:
Graphics processing units (GPUs) deliver high execution efficiency for modern metaheuristic algorithms with high computational complexity. It is crucial to have an optimal mapping of the optimization algorithm to the parallel system architecture, which strongly affects the efficiency of the optimization process. The paper proposes a novel task mapping algorithm for mapping a parallel metaheuristic algorithm to the GPU architecture, describes the problem statement for mapping the algorithm graph model to the GPU model, and gives a formal definition of graph mapping and mapping restrictions. The algorithm graph model is a hierarchical graph model consisting of an island parallel model and a metaheuristic optimization algorithm model. A set of feasible mappings with mapping restrictions makes it possible to formalize the GPU architecture and the features of the parallel model. The structural mapping algorithm is based on cooperatively solving the optimization problem and the discrete optimization problem of the structural model mapping. The study outlines parallel efficiency criteria that can be evaluated both experimentally and analytically to predict a model's efficiency. The experimental section introduces a parallel optimization algorithm based on the proposed structural mapping algorithm. Experimental results comparing the parallel efficiency of the parallel and sequential algorithms are presented and discussed.
15

Xiao, Qian Cai, Ming Qi Li, and Wen Qiang Guo. "Experimental Study of Dynamic Single-Source Shortest Path Algorithm." Applied Mechanics and Materials 58-60 (June 2011): 1493–98. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1493.

Abstract:
In this paper, the software Inet 3.0 is used to generate topologies whose nodes change dynamically at random. Based on the dynamic shortest path algorithms proposed by P. Narvaez, Xiaobin, et al., we analyzed the time efficiency of dynamic versus static shortest path algorithms, the relative time efficiency among the dynamic shortest path algorithms, and the relationship between topology and the time efficiency of dynamic shortest path algorithms. The results show that the Xiaobin algorithm is statistically better than the Narvaez algorithm by about 20-30 percent. Dynamic algorithms are not always better than static algorithms when the amount of topology change is taken into account: dynamic and static algorithms perform roughly the same when about 10 percent of the topology changes, dynamic algorithms perform better below 10 percent, and static algorithms are better above that level. The time efficiency of dynamic algorithms is also related to the specific topology.
16

Moldovan, Dorin. "Plum Tree Algorithm and Weighted Aggregated Ensembles for Energy Efficiency Estimation." Algorithms 16, no. 3 (March 2, 2023): 134. http://dx.doi.org/10.3390/a16030134.

Abstract:
This article introduces a novel nature-inspired algorithm called the Plum Tree Algorithm (PTA), which has the biology of plum trees as its main source of inspiration. The PTA was tested and validated using 24 benchmark objective functions, and it was further applied and compared to the following selection of representative state-of-the-art nature-inspired algorithms: the Chicken Swarm Optimization (CSO) algorithm, the Particle Swarm Optimization (PSO) algorithm, the Grey Wolf Optimizer (GWO), the Cuckoo Search (CS) algorithm, the Crow Search Algorithm (CSA), and the Horse Optimization Algorithm (HOA). The results obtained with the PTA are comparable to those obtained with the other nature-inspired optimization algorithms, and the PTA returned the best overall results for the 24 objective functions tested. The article then applies the PTA to weight optimization for an ensemble of four machine learning regressors, namely, the Random Forest Regressor (RFR), the Gradient Boosting Regressor (GBR), the AdaBoost Regressor (AdaBoost), and the Extra Trees Regressor (ETR), which are used for the prediction of the heating load and cooling load requirements of buildings, using the Energy Efficiency Dataset from UCI Machine Learning as experimental support. The PTA-optimized ensemble returned results comparable to those returned by the ensembles optimized with the GWO, the CS, and the CSA.
17

Moayedi, A., R. A. Abbaspour, and A. Chehreghan. "A Comparison of Efficiency of the Optimization Approach for Clustering of Trajectories." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 737–40. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-737-2019.

Abstract:
Clustering is an unsupervised learning method used to discover hidden patterns in large sets of data. The huge data volume and the multidimensionality of trajectories make their clustering an even more challenging task. K-means is a widely used clustering algorithm in the trajectory computation field; however, the critical issue with this algorithm is its dependency on the initial values and its tendency to get stuck in local minima. Metaheuristic algorithms that minimize the cost function of the K-means algorithm can be used to address this problem. In this paper, after proposing a cost function, we compare the clustering performance of seven known population-based metaheuristic algorithms, including the Grey Wolf Optimizer (GWO), Particle Swarm Optimization (PSO), the Sine Cosine Algorithm (SCA), and the Whale Optimization Algorithm (WOA). The results obtained from clustering several data sets with class labels were assessed by internal and external clustering validation indices along with the computation time. According to the results, the PSO and SCA algorithms show the best results in terms of the Purity and computation time metrics, respectively.
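The objective the metaheuristics minimize in this setting can be written down compactly: a candidate solution is a set of k centroids, and its cost is the sum of squared distances from every (trajectory) feature vector to its nearest centroid. The sketch below shows that cost function with a plain random search standing in for GWO/PSO/SCA/WOA; the feature vectors and the proposal scheme are assumptions for the example.

```python
import random
import math

def kmeans_cost(points, centroids):
    """Sum of squared distances from each point to its nearest centroid."""
    return sum(min(math.dist(p, c) ** 2 for c in centroids) for p in points)

def random_search_clustering(points, k=2, iters=500, seed=0):
    """Stand-in for a metaheuristic: propose random centroid sets, keep the best."""
    rng = random.Random(seed)
    dims = range(len(points[0]))
    lo = [min(p[d] for p in points) for d in dims]
    hi = [max(p[d] for p in points) for d in dims]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = [tuple(rng.uniform(lo[d], hi[d]) for d in dims) for _ in range(k)]
        cost = kmeans_cost(points, cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

points = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(random_search_clustering(points))
```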
18

Chen, Jie. "AIRRT*: An improved fast path planning algorithm for manipulators." Journal of Physics: Conference Series 2390, no. 1 (December 1, 2022): 012088. http://dx.doi.org/10.1088/1742-6596/2390/1/012088.

Abstract:
Sampling-based path planning algorithms have been widely studied and applied in recent years, among which the Informed-RRT* algorithm is a typical progressive optimization algorithm. To overcome the low optimization efficiency of the Informed-RRT* algorithm, an improved algorithm, AIRRT* (Adaptive Informed-RRT*), is proposed, which can greatly improve optimization efficiency and the quality of the optimized paths. According to the node distribution of the initial path, appropriate node construction areas are first selected dynamically in the process of direct local sampling, and then collision detection and reconstruction are performed. Unlike the traditional algorithm, the AIRRT* algorithm first determines the area to be reconstructed, then performs direct local sampling in the reconstructed area, and finally reconstructs the path to improve sampling efficiency. In addition, to reduce the difficulty of searching global path nodes, invalid nodes are removed during reconstruction, which limits the growth of nodes during optimization. The AIRRT* algorithm is compared with common algorithms through simulation experiments, which show that the AIRRT* algorithm has clear advantages in algorithm efficiency and path quality and good applicability in varied scenarios.
19

Siva Sankara Phani, T., et al. "CGRA Modulo Scheduling for Achieving Better Performance and Increased Efficiency." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 4 (April 10, 2021): 1400–1413. http://dx.doi.org/10.17762/turcomat.v12i4.1225.

Abstract:
Coarse-Grained Reconfigurable Architectures (CGRAs) are an effective solution for speeding up compute-intensive tasks thanks to their high energy efficiency at a modest cost in flexibility. Mapping loops onto CGRAs efficiently is one of the hardest problems in this area. Modulo scheduling (MS) has proved productive for implementing loops on CGRAs, but existing MS algorithms still struggle to map large and irregular loop bodies to CGRAs within a reasonable compilation time, given limited computational and high-performance routing resources. This is mainly due to a lack of awareness of the major mapping constraints and to time-consuming approaches to solving the temporal and spatial mapping using CGRA buffer resources. This work aims to improve the performance and the robustness of compilation of CGRA modulo scheduling. The CGRA MS problem is divided into a temporal part and a spatial part, and the mechanisms connecting the two subproblems are reorganized. We present a detailed, systematic mapping flow that addresses the temporal mapping problem with a powerful buffering algorithm and efficient connection and computation constraints, and we create a fast and stable spatial mapping algorithm with a retransmission and rearrangement mechanism. With higher performance and quicker compilation time, our MS algorithm can map loops to CGRAs. The results show that, given the same compilation budget, our mapping algorithm achieves a better compilation success rate and improves performance by 5% to 14% over the standard CGRA mapping algorithms available.
20

Shen, Linbang, Zhigang Chu, Yongxiang Zhang, and Yang Yang. "A novel Fourier-based deconvolution algorithm with improved efficiency and convergence." Journal of Low Frequency Noise, Vibration and Active Control 39, no. 4 (September 12, 2019): 866–78. http://dx.doi.org/10.1177/1461348419873471.

Abstract:
Various deconvolution algorithms for acoustic source identification have been developed to improve the spatial resolution and suppress the sidelobes of conventional beamforming. To improve the computational efficiency and solution convergence of deconvolution, this paper proposes a Fourier-based improved fast iterative shrinkage thresholding algorithm. Simulations and experiments show that the Fourier-based improved fast iterative shrinkage thresholding algorithm can achieve excellent acoustic source identification performance, with high computational efficiency and good convergence. For this algorithm, the larger the weight coefficient, the narrower the mainlobe width and the better the convergence, but spurious sources also increase. The recommended weight coefficient for the array described herein is 3. In addition, like other Fourier-based deconvolution algorithms, the Fourier-based improved fast iterative shrinkage thresholding algorithm can obtain better acoustic source identification performance using an irregular focus grid than using the conventional regular focus grid.
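The iteration underlying the fast iterative shrinkage-thresholding family this paper builds on can be sketched generically: a gradient step on the data-fit term, a soft-threshold (shrinkage) step, and Nesterov-style momentum. This is a textbook FISTA for an l1-regularized least-squares problem, not the paper's Fourier-based variant; the matrix A, the data b and the regularization weight are placeholders.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (the shrinkage step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, iters=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true
print(np.round(fista(A, b, lam=0.05), 2)[[3, 17, 42]])  # roughly recovers the sparse spikes
```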
21

Wang, Monan, Shaoyong Chen, and Qiyou Yang. "Design and Application of Bounding Volume Hierarchy Collision Detection Algorithm Based on Virtual Sphere." Journal of Mechanics in Medicine and Biology 19, no. 07 (November 2019): 1940044. http://dx.doi.org/10.1142/s021951941940044x.

Abstract:
The result of collision detection is closely related to the subsequent deformation or cutting of soft tissue. To further improve the efficiency and stability of collision detection, this paper proposes a bounding volume hierarchy collision detection algorithm based on a virtual sphere. The proposed algorithm was validated, and the results show that the detection efficiency of the virtual-sphere bounding volume hierarchy algorithm is higher than that of the serial hybrid bounding volume hierarchy algorithm and the parallel hybrid bounding volume hierarchy algorithm. Different collision detection algorithms were tested, and the results show that the collision detection algorithm based on the virtual sphere has high detection efficiency and good stability; as the number of triangular patches increases, the advantage becomes increasingly obvious. Finally, the proposed algorithm was applied to two large and medium-sized virtual scenes to implement collision detection between the vastus lateralis muscle, the thigh and a surgical instrument. Based on the virtual sphere, the bounding volume hierarchy collision detection algorithm can provide efficient and stable collision detection in a virtual surgery system. Meanwhile, the algorithm can be combined with other acceleration techniques (such as multithreading) to further improve detection efficiency.
22

Guo, Xiaomin, Yongxing Cao, Jian Zhou, Yuanxian Huang, and Bijun Li. "HDM-RRT: A Fast HD-Map-Guided Motion Planning Algorithm for Autonomous Driving in the Campus Environment." Remote Sensing 15, no. 2 (January 13, 2023): 487. http://dx.doi.org/10.3390/rs15020487.

Abstract:
On campus, the complexity of the environment and the lack of regulatory constraints make it difficult to model the environment, resulting in less efficient motion planning algorithms. To solve this problem, HD-map-guided sampling-based motion planning is a feasible research direction. We propose a motion planning algorithm for autonomous vehicles on campus, called HD-Map-guided rapidly-exploring random tree (HDM-RRT). In our algorithm, a collision risk map (CR-Map) that quantifies the collision risk coefficient on the road is combined with a Gaussian distribution for sampling to improve the efficiency of the algorithm. The node optimization strategy of the algorithm is then deeply optimized through the prior information of the CR-Map to improve the convergence rate and solve the problem of poor stability in campus environments. Three experiments were designed to verify the efficiency and stability of our approach. The results show that the sampling efficiency of our algorithm is four times higher than that of the Gaussian distribution method. The average convergence rate of the proposed algorithm outperforms the RRT* algorithm and the DT-RRT* algorithm. In terms of algorithm efficiency, the average computation time of the proposed algorithm is only 15.98 ms, which is much better than that of the three compared algorithms.
23

Harvey, Charles, and Edmund Green. "Record Linkage Algorithms: Efficiency, Selection and Relative Confidence." History and Computing 6, no. 3 (October 1994): 143–52. http://dx.doi.org/10.3366/hac.1994.6.3.143.

Abstract:
Using the case material of the Westminster Historical Database, this article describes how 30 record linkage algorithms were evaluated to select the optimum algorithm for linking poll book data. It stresses the importance of relative confidence in linkage algorithms, and shows how more discriminating algorithms may increase both confidence in linked records and rates of record linkage.
24

Yang, Ruohan, and Zijun Zhong. "Algorithm efficiency and hybrid applications of quantum computing." Theoretical and Natural Science 11, no. 1 (November 17, 2023): 279–89. http://dx.doi.org/10.54254/2753-8818/11/20230419.

Abstract:
With the development of science and technology, it is difficult for traditional computers to solve cutting-edge problems due to the lack of computing power, and the importance of quantum computers is increasing day by day. This article starts with the basic principles of quantum computing, introduces the most advanced quantum computing instruments and quantum computing algorithms, and points out application prospects in medicine, chemistry and other fields. The paper explains the basic principles of quantum computing algorithms and their efficiency advantage over traditional algorithms, and focuses on the Shor algorithm and its variations. In terms of applications, the quantum computer Zu Chongzhi and its contribution to the sampling problem of quantum random circuits are introduced. The paper also analyzes the limitations of quantum computing and gives targeted goals for future development. In addition, we summarize popular quantum computing algorithms and applications, contributing to the promotion and development of quantum computing. Overall, these results shed light on how to further improve the computational efficiency of quantum computers.
25

Shen, Lingli. "Implementation of CT Image Segmentation Based on an Image Segmentation Algorithm." Applied Bionics and Biomechanics 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/2047537.

Abstract:
With the increasingly important role of image segmentation in the field of computed tomography (CT), the requirements for image segmentation technology in related industries are constantly rising. When the hardware resources fully meet the needs of a fast and high-precision image segmentation system, the main means of improving the segmentation effect is to improve the underlying algorithms. Therefore, this study combines the genetic algorithm (GA) with the Otsu (OTSU) thresholding algorithm to form an image segmentation algorithm, the immune genetic algorithm (IGA). The algorithm improves the segmentation accuracy and efficiency of the original algorithms, which helps CT image segmentation produce more accurate results. The experimental results in this study show that the operating efficiency of the OTSU segmentation algorithm reaches up to 75%, the operating efficiency of the GA algorithm up to 78%, and the operating efficiency of the IGA algorithm up to 92%. In terms of segmentation accuracy, the highest accuracy rate of the OTSU segmentation algorithm is 45%, the accuracy of the GA algorithm is 80%, and the highest accuracy of the IGA algorithm is 97%. The IGA algorithm is therefore stronger in terms of both operating efficiency and accuracy. Applying the IGA algorithm to CT image segmentation helps doctors better judge lesions and improves the diagnosis rate.
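Since the study builds on Otsu thresholding (maximizing the between-class variance of the gray-level histogram), the sketch below shows that baseline step; the GA and IGA layers of the paper are not reproduced here, and the synthetic bimodal image is an assumption for the example.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the gray-level threshold that maximizes the between-class variance."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist.astype(float) / hist.sum()            # gray-level probabilities
    levels = np.arange(bins)
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0      # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

rng = np.random.default_rng(1)
image = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)]).clip(0, 255)
print(otsu_threshold(image))   # expected to fall between the two intensity modes
```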
26

Ali, Abdulrazzak, Nurul Akmar Emran, Safiza Suhana Kamal Baharin, Zahriah Othman, Awsan Thabet Salem, Maslita Abd Aziz, Nor Mas Aina Md Bohari, and Noraswaliza Abdullah. "Improving the efficiency of clustering algorithm for duplicates detection." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (June 1, 2023): 1586. http://dx.doi.org/10.11591/ijeecs.v30.i3.pp1586-1595.

Abstract:
The clustering method is a technique used to reduce the number of comparisons between candidate records in the duplicate detection process. The process of clustering records is affected by the quality of the data: the more error-free the data, the more efficient the clustering algorithm, as data errors cause records to be placed in incorrect groups. Window algorithms suffer from the choice of window size: the larger the window, the greater the number of unnecessary comparisons, while a smaller window may prevent the detection of duplicates that should fall within the window. In this paper, we propose a data pre-processing method that increases the efficiency of window algorithms in grouping similar records together and also deals with the window size problem. In the proposed method, high-rank attributes are selected and preparators are then applied to the selected attributes. A compensation algorithm is implemented to reduce the problem of missing and distorted sort keys. Two datasets (the compact disc database (CDDB) and MusicBrainz) were used to test the duplicate detection algorithms, and the duplicate detection toolkit (DuDe) was used as a benchmark for the proposed method. Experiments showed that the proposed method achieved a high rate of accuracy in detecting duplicates.
27

Kim, Bong-seok, Youngseok Jin, Jonghun Lee, and Sangdong Kim. "High-Efficiency Super-Resolution FMCW Radar Algorithm Based on FFT Estimation." Sensors 21, no. 12 (June 10, 2021): 4018. http://dx.doi.org/10.3390/s21124018.

Abstract:
This paper proposes a high-efficiency super-resolution frequency-modulated continuous-wave (FMCW) radar algorithm based on FFT estimation. In FMCW radar systems, the maximum number of samples is generally determined by the maximum detectable distance; however, targets are often closer than the maximum detectable distance, in which case the ranges of targets can be estimated without performance degradation even if the number of samples is reduced. Based on this property, the proposed algorithm adaptively selects the number of samples used as input to the super-resolution algorithm depending on the coarse ranges of targets estimated using the FFT. In other words, the proposed algorithm feeds the super-resolution algorithm the reduced number of samples implied by the FFT-estimated distance instead of the maximum number of samples set by the maximum detectable distance. By doing so, the proposed algorithm achieves performance similar to that of the conventional multiple signal classification (MUSIC) algorithm, a representative super-resolution algorithm, without degradation. Simulation results demonstrate the feasibility and performance improvement provided by the proposed algorithm: it achieves an average complexity reduction of 88% compared to the conventional MUSIC algorithm while achieving similar performance. Moreover, the improvement provided by the proposed algorithm was verified under practical conditions, as evidenced by our experimental results.
28

Pan, Guan-hua, and Xing-zhong Zhang. "Study on efficiency of Sunday algorithm." Journal of Computer Applications 32, no. 11 (May 26, 2013): 3082–84. http://dx.doi.org/10.3724/sp.j.1087.2012.03082.

29

Maiza, M., and T. Clarke. "Task clustering algorithm with improved efficiency." Electronics Letters 32, no. 12 (1996): 1070. http://dx.doi.org/10.1049/el:19960740.

30

Ginat, David. "Early Algorithm Efficiency with Design Patterns." Computer Science Education 11, no. 2 (June 2001): 89–109. http://dx.doi.org/10.1076/csed.11.2.89.3838.

31

Shi, Wei, Jianhua Chen, Mao Luo, and Min Chen. "High efficiency referential genome compression algorithm." Bioinformatics 35, no. 12 (November 8, 2018): 2058–65. http://dx.doi.org/10.1093/bioinformatics/bty934.

32

Ratschek, H., and J. G. Rokne. "Efficiency of a Global Optimization Algorithm." SIAM Journal on Numerical Analysis 24, no. 5 (October 1987): 1191–201. http://dx.doi.org/10.1137/0724078.

33

Xiao, Ning. "An Efficiency Algorithm for SDCP Program." Advanced Materials Research 971-973 (June 2014): 1533–36. http://dx.doi.org/10.4028/www.scientific.net/amr.971-973.1533.

Abstract:
To solve SDCP more effectively, this paper uses BP neural networks to approximate the chance function, produces training samples by random simulation, and proposes a hybrid intelligent algorithm for SDCP that combines stochastic particle swarm optimization with a BP neural network. The experimental results show that the algorithm is preferable.
34

Chen, Y. M., and M. S. Liu. "Efficiency improvement of GPST inversion algorithm." Journal of Computational Physics 72, no. 2 (October 1987): 372–82. http://dx.doi.org/10.1016/0021-9991(87)90088-x.

35

Loo, Chang Herng, M. Zulfahmi Toh, Ahmad Fakhri Ab. Nasir, Nur Shazwani Kamaruddin, and Nur Hafieza Ismail. "Efficiency and Accuracy of Scheduling Algorithms for Final Year Project Evaluation Management System." MEKATRONIKA 5, no. 2 (October 23, 2023): 23–31. http://dx.doi.org/10.15282/mekatronika.v5i2.9973.

Abstract:
Scheduling algorithms play a crucial role in optimizing the efficiency and precision of scheduling tasks, finding applications across various domains to enhance work productivity, reduce costs, and save time. This research paper conducts a comparative analysis of three algorithms: genetic algorithm, hill climbing algorithm, and particle swarm optimization algorithm, with a focus on evaluating their performance in scheduling presentations. The primary goal of this study is to assess the effectiveness of these algorithms and identify the most efficient one for handling presentation scheduling tasks, thereby minimizing the system's response time for generating schedules. The research takes into account various constraints, including evaluator availability, student and evaluator affiliations within research groups, and student-evaluator relationships where a student cannot be supervised by one of the evaluators. Considering these critical parameters and constraints, the algorithm assigns presentation slots, venues, and two evaluators to each student without encountering scheduling conflicts, ultimately producing a schedule based on the allocated slots for both students and evaluators.
36

Mramba, Lazarus, and Salvador Gezan. "Evaluating Algorithm Efficiency for Optimizing Experimental Designs with Correlated Data." Algorithms 11, no. 12 (December 18, 2018): 212. http://dx.doi.org/10.3390/a11120212.

Abstract:
The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlation in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework adjusting for both spatial and genetic correlations based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms including pairwise swap, genetic neighborhood, and simulated annealing, and evaluated with varying levels of heritabilities and spatial and genetic correlations. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both the simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4% among genetically unrelated individuals, implying a reduction in the average variance of the random treatment effects by 7.4% when the algorithm was iterated 5000 times. In contrast, results under the D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained the highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and was highest for both the simulated annealing and simple pairwise algorithms, and lowest for the genetic neighborhood algorithm.
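The pairwise-swap and simulated-annealing moves evaluated in the paper follow a common template: swap two plot assignments, re-score the layout with the chosen optimality criterion, and accept or reject the move with a temperature-dependent probability. The sketch below shows that template with a deliberately toy score (penalizing identical treatments in adjacent plots) instead of the A-/D-optimality computations of the linear mixed model.

```python
import math
import random

def anneal(layout, score, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated annealing over pairwise swaps of treatment labels on a field layout."""
    rng = random.Random(seed)
    current, cur_score = layout[:], score(layout)
    best, best_score = current[:], cur_score
    temp = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(current)), 2)            # propose a pairwise swap
        current[i], current[j] = current[j], current[i]
        new_score = score(current)
        if new_score <= cur_score or rng.random() < math.exp((cur_score - new_score) / temp):
            cur_score = new_score                            # accept (always if not worse)
            if new_score < best_score:
                best, best_score = current[:], new_score
        else:
            current[i], current[j] = current[j], current[i]  # reject: undo the swap
        temp *= cooling
    return best, best_score

# Toy criterion: penalize identical treatments in adjacent plots of a 1-D field strip.
def adjacency_penalty(layout):
    return sum(a == b for a, b in zip(layout, layout[1:]))

layout = list("AABBCCDD")
print(anneal(layout, adjacency_penalty))
```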
37

Dai, Cai, and Xiujuan Lei. "A Multiobjective Brain Storm Optimization Algorithm Based on Decomposition." Complexity 2019 (January 22, 2019): 1–11. http://dx.doi.org/10.1155/2019/5301284.

Abstract:
The brain storm optimization (BSO) algorithm is a simple and effective evolutionary algorithm, but some multiobjective brain storm optimization algorithms have low search efficiency. This paper combines the decomposition technique with the multiobjective brain storm optimization algorithm (MBSO/D) to improve search efficiency. Given weight vectors transform a multiobjective optimization problem into a series of subproblems, and the decomposition technique determines the neighboring clusters of each cluster. Solutions of adjacent clusters generate new solutions to update the population, and an adaptive selection strategy is used to balance exploration and exploitation. MBSO/D is compared with three efficient state-of-the-art algorithms, e.g., NSGAII and MOEA/D, on twenty-two test problems. The experimental results show that MBSO/D is more efficient than the compared algorithms and can improve search efficiency for most test problems.
38

Naeem, Usman, and Muhammad Mateen Afzal Awan. "Maximizing off-grid solar photovoltaic system efficiency through cutting-edge performance optimization technique for incremental conductance algorithm." Mehran University Research Journal of Engineering and Technology 43, no. 3 (July 1, 2024): 113. http://dx.doi.org/10.22581/muet1982.3135.

Abstract:
Maximum power point tracking (MPPT) algorithms are required to deliver optimal energy from a solar photovoltaic cell or array (PV) under varying weather conditions. MPPT circuits are therefore driven by defined rules called algorithms, which range from simple to complex in design and implementation and are selected according to the operating scenario. The incremental conductance (InC) MPPT algorithm is one of the market's simplest, easiest to implement, and most in-demand algorithms; its drawback is its tracking speed. Researchers have made multiple improvements to overcome this weakness, but performance gains have not been satisfactory, and the improvements introduce steady-state oscillations of the operating point around the MPP, so the user must pick and choose based on demand. With this target in focus, we introduce a couple of modifications to the structure of InC that prove fruitful. The proposed structure, named the augmented InC algorithm, shows marked improvement in tracking speed and steady-state oscillations. The results were compared with the conventional InC algorithm, and the proposed augmented InC algorithm outperformed it in both tracking speed and steady-state oscillations. We used MATLAB scripts to code the conventional InC algorithm and the proposed augmented InC algorithm based on their designed flowcharts. Both algorithms were applied to a standalone solar photovoltaic system composed of a solar photovoltaic array, a DC/DC boost converter, irradiance and temperature inputs, the MPPT algorithm, and a DC load; the model was designed in Simulink/MATLAB.
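For readers unfamiliar with the baseline being augmented, the conventional InC rule compares the incremental conductance dI/dV with the negative instantaneous conductance -I/V (at the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V) and nudges the voltage reference accordingly. The sketch below shows only that conventional rule with a fixed step and a toy linear PV curve; the authors' augmented version is not reproduced.

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.01, eps=1e-6):
    """One iteration of the conventional incremental-conductance MPPT rule.

    Returns the updated voltage reference for the DC/DC converter.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v_ref                     # no change: at (or near) the MPP
        return v_ref + step if di > 0 else v_ref - step
    g_inc, g_neg = di / dv, -i / v
    if abs(g_inc - g_neg) < eps:
        return v_ref                         # dP/dV ~ 0: at the MPP
    # g_inc > g_neg means dP/dV > 0 (left of the MPP): raise the voltage.
    return v_ref + step if g_inc > g_neg else v_ref - step

# Toy PV curve I = 5 - 0.5*V (illustrative only); its maximum-power voltage is V = 5.
v_prev, i_prev, v_ref = 3.9, 5.0 - 0.5 * 3.9, 4.0
for _ in range(30):
    v = v_ref
    i = 5.0 - 0.5 * v
    v_ref = inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.05)
    v_prev, i_prev = v, i
print(round(v_ref, 2))   # converges toward the toy curve's maximum-power voltage
```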
39

Zhang, Wei, and Qian Xu. "Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm." Computational Intelligence and Neuroscience 2022 (January 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/1014501.

Abstract:
In order to improve the teaching efficiency of English teachers in classroom teaching, a deep learning target detection algorithm is applied to classroom monitoring information: the Single Shot MultiBox Detector (SSD) is optimized, and an optimized Mobilenet-Single Shot MultiBox Detector (Mobilenet-SSD) is designed. Analysis of the Mobilenet-SSD algorithm shows that it suffers from a large number of base network parameters and poor small-target detection, and these deficiencies are addressed. In experiments on student behaviour analysis, the average detection accuracy of the optimized algorithm reached 82.13%, and the detection speed reached 23.5 fps (frames per second); the algorithm achieved 81.11% accuracy in detecting students' writing behaviour. This shows that the proposed algorithm improves the accuracy of small-target recognition without changing the running speed of the traditional algorithm and has advantages in detection accuracy over previous detection algorithms. The optimized algorithm improves detection efficiency, which helps provide modern technical support for English teachers to understand the learning status of students and has strong practical significance for improving the efficiency of English classroom teaching.
40

Jain, Abhishek, Manjeet Saini, and Manohar Kumar. "Greedy Algorithm." Journal of Advance Research in Computer Science & Engineering (ISSN: 2456-3552) 2, no. 4 (April 30, 2015): 11015. http://dx.doi.org/10.53555/nncse.v2i4.451.

Abstract:
This paper describes the basic technological aspects of algorithms, algorithmic efficiency and the greedy algorithm. Algorithmic efficiency is the property of an algorithm, studied in computer science, that relates to the amount of resources used by the algorithm. An algorithm is considered efficient if its resource consumption (or computational cost) is at or below some acceptable level.
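As a concrete illustration of the greedy strategy and the efficiency notion discussed in the paper, the sketch below solves the classic activity-selection problem by always taking the compatible activity that finishes earliest; the meeting intervals are made up for the example.

```python
def select_activities(intervals):
    """Greedy activity selection: always take the compatible activity that ends first.

    Runs in O(n log n) time for the sort plus one linear pass, and is provably optimal
    for this problem -- a textbook example of an efficient greedy algorithm.
    """
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):   # sort by finish time
        if start >= last_end:        # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(meetings))   # a maximum-size set of non-overlapping meetings
```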
41

Fan, Ai Wan, and Shu Xi Lu. "An Improved Elliptic Curve Digital Signature Algorithm." Applied Mechanics and Materials 34-35 (October 2010): 1024–27. http://dx.doi.org/10.4028/www.scientific.net/amm.34-35.1024.

Abstract:
In elliptic curve cryptography, the modular inverse operation is one of the most important factors affecting the efficiency of digital signatures. By analyzing the elliptic curve digital signature process over a finite field, an elliptic curve digital signature algorithm that avoids the modular inverse operation is proposed and its correctness is proved; the algorithm improves efficiency without reducing security.
42

Huang, Feifan. "The Research Progress of Path Planning Algorithms." Highlights in Science, Engineering and Technology 63 (August 8, 2023): 216–21. http://dx.doi.org/10.54097/hset.v63i.10879.

Abstract:
At present, path planning for mobile robots is a hot problem. Previous research has produced A*, Dijkstra, the rapidly-exploring random tree (RRT) and many other algorithms. To provide a clearer understanding of these algorithms, this article organizes the ideas and development of Q-learning, the A* algorithm, the ant colony algorithm, the RRT algorithm and the Dijkstra algorithm. In chronological order, the article reviews several applications of each of the five algorithms, lists optimizations of the five algorithms from the past few years, and gives examples of combined algorithms, which are more efficient than using one algorithm alone. By analyzing the five algorithms and their optimizations, common problems and solutions were identified. The common problem of these algorithms is that shorter and smoother paths are needed and the convergence speed needs to be accelerated. At present, the main solutions fall into two categories: improving the algorithm itself, and combining different algorithms. In the future, more kinds of combined algorithms will emerge, and better path planning solutions will be obtained to improve efficiency and reduce energy cost.
43

Parfenov, V. I., and V. D. Le. "Distributed Detection Based on Using Soft Decision Decoding in a Fusion Center." Telecommunications, no. 1 (2022): 2–9. http://dx.doi.org/10.31044/1684-2588-2022-0-1-2-9.

Abstract:
In this work, distributed detection based on data from many sensors in a wireless sensor system is considered. A decision-making algorithm based on soft-decision decoding in a fusion center is synthesized, and its gain in efficiency compared with an algorithm based on a hard-decision scheme is shown. It is noted that this algorithm is a generalization of earlier developed algorithms.
44

Wang, Huanwei, Xuyan Qi, Shangjie Lou, Jing Jing, Hongqi He, and Wei Liu. "An Efficient and Robust Improved A* Algorithm for Path Planning." Symmetry 13, no. 11 (November 19, 2021): 2213. http://dx.doi.org/10.3390/sym13112213.

Abstract:
Path planning plays an essential role in mobile robot navigation, and the A* algorithm is one of the best-known path planning algorithms. However, the conventional A* algorithm and its subsequent improvements still have some limitations in terms of robustness and efficiency, including low algorithm efficiency, weak robustness, and collisions when robots traverse the planned path. In this paper, we propose an improved A*-based algorithm called the EBHSA* algorithm. The EBHSA* algorithm introduces an expansion distance, bidirectional search, heuristic function optimization and smoothing into path planning. The expansion distance extends a certain distance from obstacles to improve path robustness by avoiding collisions. Bidirectional search is a strategy that searches for a path from the start node and from the goal node at the same time. Heuristic function optimization designs a new heuristic function to replace the traditional one. Smoothing improves path robustness by reducing the number of right-angle turns. Moreover, we carry out simulation tests with the EBHSA* algorithm, and the test results show that the EBHSA* algorithm has excellent performance in terms of robustness and efficiency. In addition, we transplant the EBHSA* algorithm to a robot to verify its effectiveness in the real world.
45

G B, Suba Santhosi. "A Light Weight Optimal Resource Scheduling Algorithm for Energy Efficient and Real Time Cloud Services". International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 1953–62. http://dx.doi.org/10.22214/ijraset.2023.51985.

Full text source
Abstract:
Cloud computing has become an essential technology for delivering a wide range of services. Energy consumption and response time are critical factors in cloud computing systems, so scheduling algorithms play a vital role in optimizing resource usage and improving energy efficiency. In this paper, we propose a lightweight optimal scheduling algorithm for energy-efficient, real-time cloud services. The algorithm considers the trade-off between energy consumption and response time and produces a schedule that optimizes both factors. We evaluate it on a cloud computing testbed using the CloudSim simulation toolkit and compare it with load-balancing round robin, ant colony, genetic and analytical algorithms. The results show that the proposed algorithm outperforms these alternatives in terms of energy efficiency and response time.
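The energy/response-time trade-off the abstract describes can be pictured as a weighted cost over candidate hosts. The sketch below is only a schematic of that idea, with invented field names (length_mi, mips, power_watts) and a simple weighted sum; it is not the scheduling algorithm proposed in the paper.

```python
# Schematic host selection by a weighted energy/response-time cost (illustrative only).
def pick_host(task, hosts, w_energy=0.5, w_time=0.5):
    """task: {'length_mi': million instructions};
    hosts: iterable of {'mips': processing speed, 'power_watts': active power draw}."""
    def cost(host):
        runtime = task["length_mi"] / host["mips"]   # estimated response time (s)
        energy = host["power_watts"] * runtime       # estimated energy (J)
        return w_energy * energy + w_time * runtime  # in practice, normalize units first
    return min(hosts, key=cost)
```

Adjusting w_energy and w_time moves the resulting schedule along the energy/latency trade-off curve.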
46

Zhang, Bai, Haofang Liu and Feng Gao. "Research on an Improved Sliding Extracting Algorithm of Composite Deviation for Wind Turbine Gear". Journal of Physics: Conference Series 2218, no. 1 (March 1, 2022): 012046. http://dx.doi.org/10.1088/1742-6596/2218/1/012046.

Full text source
Abstract:
One-tooth radial composite deviation and one-tooth tangential composite deviation are important parameters of wind turbine gears and can affect their operating life. An improved sliding extracting algorithm for composite deviation is proposed. It recalculates only the parts of the measurement that actually need to be recalculated, avoiding a large number of redundant search computations and greatly improving efficiency. Experimental results show that the improved sliding extracting algorithm produces the same results as the traditional extracting algorithm while running nearly 100 times faster. An improved sliding extracting algorithm for total composite deviation is also proposed; theoretically, its efficiency is roughly double that of the traditional algorithm. The algorithms proposed in this paper improve the efficiency of extracting radial and tangential composite deviations and can be applied to single-flank and double-flank gear rolling testers, especially in the wind turbine gear field.
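The "recalculate only what changed" idea can be illustrated with a generic sliding-window extremum routine: the peak-to-valley value in each window is maintained with monotonic deques instead of rescanning every window from scratch. This is an analogy for the kind of saving described, not the paper's algorithm; the window width and signal representation are assumptions.

```python
# Sliding peak-to-valley extraction in O(n) using monotonic deques (generic illustration).
from collections import deque

def sliding_peak_to_valley(signal, width):
    """Return max(signal[i:i+width]) - min(signal[i:i+width]) for every full window."""
    maxq, minq, out = deque(), deque(), []
    for i, x in enumerate(signal):
        while maxq and signal[maxq[-1]] <= x:   # keep maxq decreasing
            maxq.pop()
        maxq.append(i)
        while minq and signal[minq[-1]] >= x:   # keep minq increasing
            minq.pop()
        minq.append(i)
        if maxq[0] <= i - width:                # drop indices that left the window
            maxq.popleft()
        if minq[0] <= i - width:
            minq.popleft()
        if i >= width - 1:
            out.append(signal[maxq[0]] - signal[minq[0]])
    return out
```

Each sample enters and leaves each deque at most once, so the whole pass is linear in the signal length instead of proportional to signal length times window width.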
47

Qiu, Liqing, Shuang Zhang, Chunmei Gu and Xiangbo Tian. "Scalable Influence Maximization Meets Efficiency and Effectiveness in Large-Scale Social Networks". International Journal of Software Engineering and Knowledge Engineering 30, no. 08 (August 2020): 1079–96. http://dx.doi.org/10.1142/s0218194020400161.

Full text source
Abstract:
Influence maximization aims to select the top k most influential nodes so as to maximize the spread of influence in a social network. The classical greedy algorithms and their improvements are relatively slow or do not scale, while heuristic algorithms are fast but their accuracy is unacceptable. Some algorithms improve accuracy and efficiency only at the cost of very high memory usage. To overcome these shortcomings, this paper proposes a fast and scalable influence maximization algorithm, called K-paths, which uses an influence tree to estimate influence spread. Extensive experiments demonstrate that the K-paths algorithm outperforms the comparison algorithms in terms of efficiency while keeping competitive accuracy.
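To show where a fast spread estimator such as the influence tree fits, the sketch below implements the generic greedy top-k selection loop with the estimator left as a pluggable callback. The callback name and graph encoding are placeholders; the actual K-paths estimator is not reproduced here.

```python
# Generic greedy seed selection around a pluggable spread estimator (illustrative only).
def select_seeds(graph, k, estimate_spread):
    """graph: {node: [neighbours]}; estimate_spread(graph, seeds) -> estimated spread."""
    seeds = set()
    for _ in range(k):
        base = estimate_spread(graph, seeds)
        best_node, best_gain = None, float("-inf")
        for node in graph:
            if node in seeds:
                continue
            gain = estimate_spread(graph, seeds | {node}) - base   # marginal gain
            if gain > best_gain:
                best_node, best_gain = node, gain
        seeds.add(best_node)
    return seeds
```

The point of estimators like the influence tree is to make each estimate_spread call cheap enough that this loop, or a lazier variant of it, scales to large networks.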
48

Shan, Yuyuan, Xueping Wang, Shi Cheng, Mingming Zhang and Lining Xing. "Knowledge-Guided Parallel Hybrid Local Search Algorithm for Solving Time-Dependent Agile Satellite Scheduling Problems". Symmetry 16, no. 7 (June 28, 2024): 813. http://dx.doi.org/10.3390/sym16070813.

Full text source
Abstract:
As satellite capabilities have evolved and new observation requirements have emerged, satellites have become essential tools in disaster relief, emergency monitoring, and other fields, yet the efficiency of satellite scheduling still needs to be improved. Learning and optimization are symmetrical processes of problem solving: knowledge learned about a problem can supply efficient optimization strategies for solving it. This paper proposes a knowledge-guided parallel hybrid local search algorithm (KG-PHLS) to solve time-dependent agile Earth observation satellite (AEOS) scheduling problems more efficiently. First, heuristic algorithms generate initial solutions. Second, a knowledge-based parallel hybrid local search solves the problem in parallel, while data mining techniques extract knowledge that guides the construction of new solutions. Finally, simulations across multiple scenarios demonstrate the algorithm's superior efficiency and computation time: compared with benchmark algorithms, it improves overall efficiency by approximately 7.4% and 8.9% in large-scale scenarios while requiring only about 60.66% and 31.89% of the computation time of classic algorithms, and it scales to larger problem sizes.
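The knowledge-guided step can be pictured as mining frequent assignments from the best solutions found so far and biasing the construction of new ones toward them. The sketch below is a deliberately generic illustration of that loop with invented data structures (task-to-time-slot dictionaries); it is not KG-PHLS itself.

```python
# Generic "mine knowledge from elites, bias new construction" loop (illustrative only).
from collections import Counter
import random

def mine_frequencies(elite_solutions):
    """elite_solutions: list of dicts mapping task -> chosen time slot."""
    freq = Counter()
    for sol in elite_solutions:
        freq.update(sol.items())          # count (task, slot) pairs across elites
    return freq

def guided_construct(tasks, candidate_slots, freq):
    """For each task, pick a slot weighted by how often it appeared in elite solutions."""
    solution = {}
    for task in tasks:
        slots = candidate_slots[task]
        weights = [1 + freq[(task, s)] for s in slots]   # +1 keeps exploration alive
        solution[task] = random.choices(slots, weights=weights, k=1)[0]
    return solution
```

A local search would then refine each constructed solution, and the elite set would be updated before the next round of mining.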
49

Sakharov, Maxim, Kamila Koledina, Irek Gubaydullin and Anatoly Karpenko. "Studying the Efficiency of Parallelization in Optimal Control of Multistage Chemical Reactions". Mathematics 10, no. 19 (October 1, 2022): 3589. http://dx.doi.org/10.3390/math10193589.

Full text source
Abstract:
In this paper, we investigate the optimal control of complex multistage chemical reactions, posed as a nonlinear constrained global optimization problem. This class of problems is computationally expensive because of the large number of parameters involved, and obtaining a solution within a reasonable time requires parallel computing systems and algorithms. However, the efficiency of parallel algorithms can differ depending on the architecture of the computing system. One way to deal with this is to develop specialized optimization algorithms that take into account not only problem-specific features but also the peculiarities of the computing system on which they run. In this work, we developed a novel parallel population algorithm based on the mind evolutionary computation method. The algorithm is designed for desktop grids and works in both synchronous and asynchronous modes. The algorithm and its software implementation were used to solve the problem of the catalytic reforming of gasoline and to study parallelization efficiency; the results of the numerical experiments are presented in this paper.
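A minimal picture of the synchronous mode is a generation loop with a barrier: every candidate is scored in a worker pool before the next generation is formed. The objective and the update rule below are placeholders (a quadratic bowl and a perturb-around-the-best step), not the mind evolutionary computation operators or the reforming kinetics model used in the paper.

```python
# Synchronous parallel evaluation of a population with a worker pool (illustrative only).
from multiprocessing import Pool
import random

def objective(x):
    """Placeholder for the expensive reaction model evaluated per candidate."""
    return sum(v * v for v in x)

def next_generation(pop, scores, step=0.1):
    """Toy update: perturb copies of the best candidate found so far."""
    best = pop[min(range(len(pop)), key=lambda i: scores[i])]
    return [[b + random.uniform(-step, step) for b in best] for _ in pop]

if __name__ == "__main__":
    population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(32)]
    with Pool() as pool:
        for _ in range(50):
            fitness = pool.map(objective, population)   # synchronous barrier each generation
            population = next_generation(population, fitness)
    print(min(map(objective, population)))
```

In an asynchronous mode the barrier is dropped and candidates are updated as soon as their scores return, which typically keeps heterogeneous desktop-grid nodes busier at the cost of a less orderly search.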
50

Xie, Meng, and Hongchi Shi. "In-Network Data Aggregation via Ant-Colony Optimization in Wireless Sensor Networks". Journal of Interconnection Networks 13, no. 03n04 (September 2012): 1250013. http://dx.doi.org/10.1142/s0219265912500132.

Full text source
Abstract:
Energy efficiency is an important issue in wireless sensor networks. In-network data aggregation is a data collection technique that improves energy efficiency and relieves routing congestion by reducing the amount of data forwarded through the network. Ant-colony aggregation is a distributed algorithm that offers an intrinsic way of exploring the search space for optimal data aggregation settings. This paper refines the heuristic function and the aggregation-node selection method to maximize energy efficiency and extend network lifetime. The two proposed algorithms are shown to yield a longer maximum lifetime than the conventional algorithm under the same hop-count delay, and one of them also shows better scalability than the conventional algorithm.
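For reference, the sketch below shows the standard ant-colony transition rule that this family of aggregation schemes builds on: the forwarding probability combines a pheromone term and a heuristic term. The parameter values and the heuristic dictionary here are generic textbook choices, not the refined heuristic proposed in the paper.

```python
# Standard ant-colony transition rule for choosing the next hop (generic form).
import random

def choose_next_hop(neighbours, pheromone, heuristic, alpha=1.0, beta=2.0):
    """neighbours: list of node ids; pheromone/heuristic: dicts keyed by node id."""
    weights = [(pheromone[j] ** alpha) * (heuristic[j] ** beta) for j in neighbours]
    total = sum(weights)
    probabilities = [w / total for w in weights]          # normalize to a distribution
    return random.choices(neighbours, weights=probabilities, k=1)[0]
```

Refining the heuristic term and the way aggregation nodes are chosen is exactly where the paper's energy-efficiency and lifetime improvements enter.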