
Journal articles on the topic "Algorithm efficiency"



Below are the 50 most relevant journal articles for research on the topic "Algorithm efficiency".




1

Sergienko, Ivan, Vladimir Shylo, Valentyna Roshchyn, and Petro Shylo. "The Efficiency of Discrete Optimization Algorithm Portfolios." Cybernetics and Computer Technologies, no. 2 (June 30, 2021): 5–12. http://dx.doi.org/10.34229/2707-451x.21.2.1.

Abstract:
Introduction. Solving large-scale discrete optimization problems requires the processing of large-scale data in a reasonable time. Efficient solving is only possible by using multiprocessor computer systems. However, it is a daunting challenge to adapt existing optimization algorithms to get all the benefits of these parallel computing systems. The available computational resources are ineffective without efficient and scalable parallel methods. In this connection, the algorithm unions (portfolios and teams) play a crucial role in the parallel processing of discrete optimization problems. The purpose. The purpose of this paper is to research the efficiency of the algorithm portfolios by solving the weighted max-cut problem. The research is carried out in two stages using stochastic local search algorithms. Results. In this paper, we investigate homogeneous and non-homogeneous algorithm portfolios. We developed the homogeneous portfolios of two stochastic local optimization algorithms for the weighted max-cut problem, which has numerous applications. The results confirm the advantages of the proposed methods. Conclusions. Algorithm portfolios could be used to solve well-known discrete optimization problems of unprecedented scale and significantly improve their solving time. Further, we propose using communication between algorithms, namely teams and portfolios of algorithm teams. The algorithms in a team communicate with each other to boost overall performance. It is supposed that algorithm communication allows enhancing the best features of the developed algorithms and would improve the computational times and solution quality. The underlying algorithms should be able to utilize relevant data that is being communicated effectively to achieve any computational benefit from communication. Keywords: Discrete optimization, algorithm portfolios, computational experiment.
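
The portfolio idea above is easy to make concrete. Below is a minimal sketch (not the authors' implementation) of a homogeneous portfolio: several independent stochastic local-search runs on a weighted max-cut instance, keeping the best cut found. The graph encoding and the 1-flip neighbourhood are illustrative choices.

```python
import random

def cut_weight(graph, side):
    """Total weight of edges whose endpoints lie on opposite sides."""
    return sum(w for (u, v), w in graph.items() if side[u] != side[v])

def local_search(graph, nodes, rng):
    """One stochastic run: random start partition, then greedy 1-flips."""
    side = {v: rng.random() < 0.5 for v in nodes}
    improved = True
    while improved:
        improved = False
        for v in nodes:
            before = cut_weight(graph, side)
            side[v] = not side[v]              # tentative flip
            if cut_weight(graph, side) > before:
                improved = True                # keep the improving flip
            else:
                side[v] = not side[v]          # revert
    return side, cut_weight(graph, side)

def portfolio(graph, runs=8, seed=0):
    """Homogeneous portfolio: independent runs, the best cut wins.
    The runs are sequential here; on a parallel machine each run
    would occupy its own worker."""
    nodes = {u for edge in graph for u in edge}
    best_side, best_val = None, float("-inf")
    for i in range(runs):
        side, val = local_search(graph, nodes, random.Random(seed + i))
        if val > best_val:
            best_side, best_val = side, val
    return best_side, best_val

example = {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 3.0}
print(portfolio(example))
```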
2

Zarembo, Imants, and Sergejs Kodors. "Pathfinding Algorithm Efficiency Analysis in 2D Grid." Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 2 (August 8, 2015): 46. http://dx.doi.org/10.17770/etr2013vol2.868.

Abstract:
The main goal of this paper is to collect information about the pathfinding algorithms A*, BFS, Dijkstra's algorithm, HPA* and LPA*, and to compare them on different criteria, including execution time and memory requirements. The work has two parts, the first theoretical and the second practical. The theoretical part details the comparison of pathfinding algorithms. The practical part covers the implementation of the specific algorithms and a series of experiments using the implemented algorithms. Factors such as two-dimensional grids of various sizes and the choice of heuristics were taken into account while conducting the experiments.
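
For reference, a compact A* on a 4-connected 2D grid with the Manhattan heuristic — one of the algorithms compared above — might look like this (a generic textbook sketch, not the authors' code):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Manhattan distance is admissible for unit step costs."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                       # already expanded with a better g
        came_from[node] = parent
        if node == goal:                   # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no path exists

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```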
3

Zhi, Xiangyu, Xiao Yan, Bo Tang, Ziyao Yin, Yanchao Zhu, and Minqi Zhou. "CoroGraph: Bridging Cache Efficiency and Work Efficiency for Graph Algorithm Execution." Proceedings of the VLDB Endowment 17, no. 4 (2023): 891–903. http://dx.doi.org/10.14778/3636218.3636240.

Abstract:
Many systems are designed to run graph algorithms efficiently in memory, but they achieve only cache efficiency or work efficiency. We tackle this fundamental trade-off in existing systems by designing CoroGraph, a system that attains both cache efficiency and work efficiency for in-memory graph processing. CoroGraph adopts a novel hybrid execution model, which generates update messages at vertex granularity to prioritize promising vertices for work efficiency, and commits updates at partition granularity to share data access for cache efficiency. To overlap the random memory access of graph algorithms with computation, CoroGraph extensively uses coroutines, i.e., lightweight functions in C++ that can yield and resume with low overhead, to prefetch the required data. A suite of designs is incorporated to reap the full benefits of coroutines, including a prefetch pipeline, a cache-friendly graph format, and stop-free synchronization. We compare CoroGraph with five state-of-the-art graph algorithm systems via extensive experiments. The results show that CoroGraph yields shorter algorithm execution time than all baselines in 18 out of 20 cases, and its speedup over the best-performing baseline can be over 2x. Detailed profiling suggests that CoroGraph achieves both cache efficiency and work efficiency with a low memory stall and a small number of processed edges.
4

Brandejsky, Tomas. "Function Set Structure Influence onto GPA Efficiency." MENDEL 23, no. 1 (2017): 29–32. http://dx.doi.org/10.13164/mendel.2017.1.029.

Abstract:
The paper discusses the influence of function set structure on the efficiency of GPAs (Genetic Programming Algorithms) and of hierarchical algorithms such as GPA-ES (GPA with an Evolutionary Strategy for separate parameter optimization). First, the discussed GPA algorithm is described. Then the function set and common requirements on its structure are presented. Finally, the test examples and environment, as well as measurements of the influence of superfluous functions in the used function set, are discussed.
5

Amannah, Constance Izuchukwu, and Francis Sunday Bakpo. "Simplified Bluestein Numerical Fast Fourier Transforms Algorithm for DSP and ASP." International Journal of Research – Granthaalayah 3, no. 11 (2017): 153–63. https://doi.org/10.5281/zenodo.849031.

Abstract:
This research was designed to develop a simplified Bluestein numerical FFT algorithm necessary for the processing of digital signals. The simplified numerical algorithm developed in this study is abbreviated SBNADSP. The methodology adopted in this work was an iterative and incremental development design. The major technology used in this work is the Bluestein numerical FFT algorithm. The study set the pace for its goal by re-indexing, decomposing, and simplifying the default Fast Fourier Transform algorithm (the Bluestein FFT algorithm). The improved efficiency of the Bluestein FFT algorithm is accounted for by the obvious reduction in the number of operations and operators in the simplified Bluestein algorithm. The SBNADSP is designed to have four products and three exponentiations, against the default Bluestein FFT algorithm, which has six exponentiations and eight products. Since an increase in the number of operators increases the length of the operation, it is reasonable to infer that the algorithm with fewer operators will run in a shorter execution time than the one with more. In line with this, we conclude that SBNADSP is of greater efficiency than the Bluestein numerical algorithm. The result of this study showed that a numerical algorithm faster than the Bluestein FFT algorithm is possible for the processing of digital signals.
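
For context, Bluestein's algorithm re-indexes the DFT exponent via nk = (n² + k² − (k−n)²)/2 so that the transform becomes a single convolution between chirp-premultiplied sequences, evaluated with power-of-two FFTs. A textbook sketch follows; the paper's simplified SBNADSP variant, with its reduced operation count, is not reproduced here.

```python
import numpy as np

def bluestein_dft(x):
    """Bluestein chirp-z trick: an N-point DFT evaluated as one circular
    convolution between chirp-premultiplied sequences (textbook form,
    not the paper's SBNADSP variant)."""
    n = len(x)
    k = np.arange(n)
    chirp = np.exp(-1j * np.pi * k * k / n)    # e^{-i*pi*k^2/N}
    a = x * chirp
    m = 1 << (2 * n - 1).bit_length()          # FFT length >= 2N - 1
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(chirp)                     # b_j for j = 0..N-1
    b[m - n + 1:] = np.conj(chirp)[1:][::-1]   # wrap so that b[-j] = b[j]
    conv = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))
    return chirp * conv[:n]

x = np.random.rand(12)
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```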
6

Xiao, Shi Song, Ao Lin Wang, and Hui Feng. "An Improved Algorithm Based on AC-BM Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1576–79. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1576.

Abstract:
Pattern matching is the mainstream technology in intrusion detection systems, and as the core of pattern-matching methods, the string matching algorithm directly affects an intrusion detection system's performance and efficiency. Based on a discussion of the most popular pattern matching algorithms at present, an improved AC-BM algorithm is presented. Experiments in Snort show that the improved algorithm is higher in performance and efficiency than the original AC-BM algorithm.
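
AC-BM combines the Aho-Corasick automaton with Boyer-Moore-style shifts. As a minimal illustration of the BM side of that family, here is the Horspool simplification of Boyer-Moore with its bad-character shift table (a generic sketch, not the paper's improved algorithm):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: the bad-character shift that gives BM-family
    matchers their sublinear average-case behaviour."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # Shift distance keyed by character; characters absent from the
    # pattern (except its last position) allow a full shift of m.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_search("intrusion detection", "detect"))  # -> 10
```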
7

Klyatchenko, Yaroslav, and Volodymyr Holub. "Efficiency of Lossless Data Compression Algorithm Modification." Bulletin of National Technical University "KhPI". Series: System Analysis, Control and Information Technologies, no. 2 (10) (December 19, 2023): 67–72. http://dx.doi.org/10.20998/2079-0023.2023.02.10.

Abstract:
The current level of development of information technologies causes a rapid increase in the amount of information stored, transmitted and processed in computer systems. Ensuring the full and effective use of this information requires the latest improved algorithms for compressing it and optimizing its storage. The further growth of the technical level of hardware and software is closely related to the problem of insufficient storage memory, which also makes effective data compression a pressing task. Improved compression algorithms allow more efficient use of storage resources and reduce data transfer time over the network. Every year, programmers, scientists, and researchers look for ways to improve existing algorithms, as well as invent new ones, because every algorithm, even a simple one, has potential for improvement. A wide range of technologies related to the collection, processing, storage and transmission of information are largely oriented towards systems in which the graphical presentation of information has an advantage over other types of presentation. The development of modern computer systems and networks has driven the wide distribution of tools operating with digital images. Clearly, storing and transferring a large number of images in their original, unprocessed form is a rather resource-intensive task. In turn, modern multimedia systems have gained considerable popularity thanks, first of all, to effective means of compressing graphic information. Image compression is a key factor in improving the efficiency of data transfer and the use of computing resources. The work is devoted to the study of modifications of The Quite OK Image Format (QOI) data compression algorithm, which is optimized for speed in compressing graphic information. Testing of the implementations proposed by the algorithm's author shows results encouraging enough to make it competitive with the well-known PNG algorithm, providing a higher compression speed and targeting work with archives. The article compares the results of the two proposed modifications of the algorithm with the original implementation and shows their advantages. The effectiveness of the modifications and the features of their application in various cases were evaluated. The compression ratios of files compressed by the original QOI algorithm were also compared with those obtained by applying the modifications of its initial version.
8

Wang, Chun Ye, and Xiao Feng Zhou. "The MapReduce Parallel Study of KNN Algorithm." Advanced Materials Research 989-994 (July 2014): 2123–27. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.2123.

Abstract:
Although the parallelized KNN algorithm improves classification efficiency, the computational cost of the parallel algorithm grows as the training sample data scale increases, affecting the classification efficiency of the algorithm. To address this shortcoming, this paper improves the original parallelized KNN algorithm in the MapReduce model, adding a text pretreatment step to improve the classification efficiency of the algorithm.
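
A single-process model of that MapReduce flow might look as follows; the `preprocess` step is a stand-in for the paper's text pretreatment, and the map/reduce phases are ordinary functions rather than Hadoop jobs:

```python
from collections import Counter
import math

def preprocess(doc):
    """Stand-in for text pretreatment: lowercase and tokenize into a
    bag-of-words vector (a real system would also remove stop words)."""
    return Counter(doc.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def map_phase(train_split, query):
    """Each mapper scores its split of the training set against the query."""
    return [(cosine(vec, query), label) for vec, label in train_split]

def reduce_phase(mapped, k=3):
    """The reducer merges partial results, keeps the k nearest, and votes."""
    top_k = sorted((p for part in mapped for p in part), reverse=True)[:k]
    return Counter(label for _, label in top_k).most_common(1)[0][0]

train = [(preprocess("cheap flights and hotel deals"), "spam"),
         (preprocess("meeting notes for tuesday"), "ham"),
         (preprocess("win a free cruise today"), "spam")]
splits = [train[:2], train[2:]]                 # two "mappers"
query = preprocess("free hotel deals today")
print(reduce_phase([map_phase(s, query) for s in splits], k=3))
```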
9

Pooja, V. "Enhancing Scheduling Efficiency in Graph Theory: A Novel Approach to Maximum Matching." Journal of Scholastic Engineering Science and Management 3, no. 3 (2024): 1–13. https://doi.org/10.5281/zenodo.10907243.

Abstract:
The maximum matching problem is a fundamental problem in graph theory. It asks for the maximum number of edges that can be selected in a graph so that no two selected edges share an endpoint. This problem has applications in many areas, such as scheduling, transportation, and telecommunications. Existing algorithms for finding the maximum matching in a graph are often inefficient. For example, the brute-force algorithm can take exponential time to find the maximum matching in a graph. In this paper, we present a new algorithm for finding the maximum matching in a graph. Our algorithm is based on a novel approach that uses a combination of dynamic programming and greedy algorithms. We prove that our algorithm is correct and efficient, and we demonstrate its effectiveness on a variety of graphs. The key features of our new algorithm are as follows: it is a dynamic programming algorithm, which means that it works by breaking the problem down into smaller subproblems and solving them recursively; it uses a greedy approach to choose the edges to be included in the matching; and it is efficient, finding the maximum matching in a graph in polynomial time. We evaluated our new algorithm on a variety of graphs. Our results show that it is significantly more efficient than existing algorithms. For example, on a graph with 100 vertices, our algorithm can find the maximum matching in 0.01 seconds, while the brute-force algorithm takes 100 seconds. Our new algorithm is a significant improvement over existing algorithms for finding the maximum matching in a graph. It is more efficient, and it can be used on larger graphs. We believe that our algorithm will be useful in many applications.
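
The paper's combined dynamic-programming/greedy algorithm is not reproduced in the abstract, but the greedy component it mentions is standard: scanning the edges and taking any edge with two free endpoints yields a *maximal* matching, a 1/2-approximation of the maximum matching (exact maximum matching in general graphs needs, e.g., Blossom-type algorithms). A sketch:

```python
def greedy_matching(edges):
    """Greedy maximal matching: take any edge whose endpoints are still
    free. Linear time; guarantees a maximal (not necessarily maximum)
    matching, i.e. a 1/2-approximation of the maximum matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path a-b-c-d: greedy returns [('a', 'b'), ('c', 'd')], which here is
# also the maximum matching.
print(greedy_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```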
10

Zaharov, D. O., and A. P. Karpenko. "Study of League Championship Algorithm Efficiency for Global Optimization Problem." Mathematics and Mathematical Modeling, no. 2 (June 9, 2020): 25–45. http://dx.doi.org/10.24108/mathm.0220.0000217.

Abstract:
The article's objective is to study the efficiency of the new League Championship Algorithm (LCA) by comparing it with the efficiency of the Particle Swarm Optimization (PSO) algorithm. The article presents a brief description of the terms used in the League Championship Algorithm and describes the basic rules of the algorithm, on the basis of which the iterative process for solving the global optimization problem is built. It gives a detailed description of the League Championship Algorithm, comprising a flowchart of the algorithm as well as a formalization of all its main steps, and describes the software developed to implement the League Championship Algorithm for solving global optimization problems. It also briefly describes the modified particle swarm algorithm, presenting the values of all free parameters of the algorithm and the modifications that distinguish it from the classical version. The main part of the article shows the results of a large number of computational experiments using the two abovementioned algorithms, along with all the performance criteria used to assess the algorithms' efficiency. Computational experiments were performed using the spherical function, as well as the Rosenbrock, Rastrigin, and Ackley functions, for vectors of variable parameters of dimension 2, 4, 8, 16, 32, and 64. The results of the experiments are summarized in tables and illustrated in figures. An analysis of the results provides a full assessment of the efficiency of the League Championship Algorithm and answers whether further development of the algorithm is expedient. It is shown that the League Championship Algorithm presented in the article has high development potential and merits further study.
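
The PSO baseline used in the comparison is easy to sketch; the inertia and acceleration coefficients below are common defaults, not the modified parameter values from the paper:

```python
import random

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (the baseline algorithm above)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)   # one of the test functions above
print(pso(sphere, dim=4))
```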
11

Yang, Wen Jun. "Efficient Pattern Matching Algorithm for Intrusion Detection Systems." Applied Mechanics and Materials 511-512 (February 2014): 1178–84. http://dx.doi.org/10.4028/www.scientific.net/amm.511-512.1178.

Abstract:
To overcome the low efficiency of pattern matching in intrusion detection systems (IDS), an efficient pattern matching algorithm is proposed. The proposed algorithm first preprocesses the pattern to record pattern information; it then recursively compares nodes to find the most common part of the pattern to improve efficiency. The proposed algorithm also appends an auxiliary structure of m nodes to the pattern to reduce time and space complexity. Theoretical analysis establishes the time and space complexity of the proposed algorithm for a subject with n nodes. Detailed experimental results and comparisons with existing algorithms show that the proposed algorithm outperforms current state-of-the-art algorithms in terms of time efficiency, space efficiency and matching ratio.
12

Liao, Xiong, Junxiong Guo, Zhenghua Luo, Yanghui Xu, and Yingjun Chu. "Research and Implementation of High-Efficiency and Low-Complexity LDPC Coding Algorithm." Electronics 12, no. 17 (2023): 3696. http://dx.doi.org/10.3390/electronics12173696.

Abstract:
In this work, we propose a high-efficiency, low-complexity encoding algorithm and its corresponding implementation structure, developed during the design and implementation of an LDPC encoder and decoder. The proposal was derived from extensive research on and analysis of standard encoding algorithms and recursive iterative encoding algorithms, specifically targeting the high computational complexity of encoding algorithms. Subsequently, we combined binary phase-shift keying modulation and additive-white-Gaussian-noise channel transmission with the min-sum decoding algorithm to realize a (1536, 1024) LDPC codec. The codec was uniformly quantized with a (6, 2) configuration, executed eight iterations, and achieved a 2/3 code rate under the IEEE 802.16e standard. At a bit error rate (BER) of 10⁻⁵, the codec built on the proposed coding algorithm performed about 0.25 dB better than the recursive-iterative coding algorithm and about 1.25 dB better than the standard coding algorithm, which confirms the correctness, effectiveness, and feasibility of the proposed algorithm.
13

Gracios, Anita, and Arun Jhapate. "An Effective Routing Algorithm to Enhance Efficiency with WSN." International Journal of Trend in Scientific Research and Development 2, no. 3 (2018): 301–4. http://dx.doi.org/10.31142/ijtsrd10879.

14

Nurmana, Ayyub Hamdanu Budi, Mars Caroline Wibowo, and Sarwo Nugroho. "Enhancing Image Denoising Efficiency: Dynamic Transformations in BM3D Algorithm." Journal of Image and Graphics 12, no. 4 (2024): 332–44. http://dx.doi.org/10.18178/joig.12.4.332-344.

Abstract:
This research aims to enhance the performance of image-denoising algorithms, particularly in the context of Block Matching 3D (BM3D) usage, focusing on improving image quality and retaining important information in noisy images. The novelty of this research lies in developing more effective and efficient image-denoising techniques that consider the characteristics of image blocks to improve denoising results. The research develops a new approach enabling the application of adaptive 2D and 3D transformations depending on the characteristics of the image block being processed. The results indicate that the proposed adaptive approach in the BM3D image denoising algorithm can significantly improve denoising performance. Experimental results show that performing 2D transformations on blocks that do not have sufficiently similar blocks can yield better denoising results, especially at high noise levels.
15

Markić, Ivan, Maja Štula, Marija Zorić, and Darko Stipaničev. "Entropy-Based Approach in Selection Exact String-Matching Algorithms." Entropy 23, no. 1 (2020): 31. http://dx.doi.org/10.3390/e23010031.

Abstract:
The string-matching paradigm is applied in every branch of computer science and science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured based on the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm execution identified with the computational resources the algorithm consumes; for maximum efficiency, the goal is to minimize resource usage. Guided by the fact that standard measures of algorithm efficiency, such as execution time, directly depend on the number of executed actions, and without touching on the problems of power consumption or memory (which also depend on the algorithm type and the techniques used in its development), we have developed a methodology that enables researchers to choose an efficient algorithm for a specific domain. The efficiency of string searching algorithms is usually observed independently of the domain texts being searched. This paper presents the idea that algorithm efficiency depends on the properties of the searched string and of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through the character-comparison-count metric, a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain using appropriate domain data (patterns and texts) and provides, for that domain, a ranking of algorithms according to the patterns' entropy. The proposed approach is limited to online exact string-matching problems and is based on the information entropy of the search pattern. Meticulous empirical testing illustrates the implementation of the methodology and supports its soundness.
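
The character-comparison-count metric is straightforward to instrument. Here is a naive exact matcher with the counter attached (an illustrative harness; the paper applies the metric across many algorithms and domain corpora):

```python
def naive_search_with_count(text, pattern):
    """Naive exact matcher instrumented with the character-comparison-count
    metric: a platform-independent measure of work, as proposed above."""
    comparisons, hits = 0, []
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        for j in range(m):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:
            hits.append(i)          # full match at offset i
    return hits, comparisons

hits, cost = naive_search_with_count("abracadabra", "abra")
print(hits, cost)   # occurrences at 0 and 7, plus the comparison count
```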
16

Seliverstov, E. Yu. "Structural Mapping of Global Optimization Algorithms to Graphics Processing Unit Architecture." Herald of the Bauman Moscow State Technical University. Series Instrument Engineering, no. 2 (139) (June 2022): 42–59. http://dx.doi.org/10.18698/0236-3933-2022-2-42-59.

Abstract:
Graphics processing units (GPUs) deliver high execution efficiency for modern metaheuristic algorithms with high computational complexity. It is crucial to have an optimal task mapping of the optimization algorithm to the parallel system architecture, which strongly affects the efficiency of the optimization process. The paper proposes a novel task mapping algorithm for mapping a parallel metaheuristic algorithm to the GPU architecture, describes the problem statement for mapping the algorithm graph model to the GPU model, and gives a formal definition of graph mapping and mapping restrictions. The algorithm graph model is a hierarchical graph model consisting of an island parallel model and a metaheuristic optimization algorithm model. A set of feasible mappings using mapping restrictions makes it possible to formalize GPU architecture and parallel model features. The structural mapping algorithm is based on cooperatively solving the optimization problem and the discrete optimization problem of the structural model mapping. The study outlines parallel efficiency criteria that can be evaluated both experimentally and analytically to predict a model's efficiency. The experimental section introduces the parallel optimization algorithm based on the proposed structural mapping algorithm. Experimental results comparing the parallel efficiency of parallel and sequential algorithms are presented and discussed.
17

Ahmadi, Farrokh, Abbas Toloie Eshlaghi, and Reza Radfar. "Examining and Comparing the Efficiency of MLP and SimpleRNN Algorithms in Cryptocurrency Price Prediction." Management Strategies and Engineering Sciences 6, no. 3 (2024): 121–37. https://doi.org/10.61838/msesj.6.3.12.

Abstract:
Cryptocurrencies have been widely identified and established as a new form of electronic currency exchange, carrying significant implications for emerging economies and the global economy. This research focused on the examination and comparison of the efficiency of MLP and SimpleRNN algorithms in predicting cryptocurrency prices, using the Python programming language. Price predictions for Bitcoin, Ethereum, Binance Coin, Cardano, and Ripple were made using two deep learning algorithms (the MLP algorithm and the SimpleRNN algorithm) over the period from 2017 to 2023. The results of cryptocurrency price prediction using deep learning algorithms were satisfactory, and the comparison of predictions across all cryptocurrencies indicated minimal differences between the algorithms studied, suggesting that they were efficient and had low error rates. Based on the results obtained, the best algorithm for Bitcoin price prediction was SimpleRNN; for Ethereum, MLP; for Binance Coin, SimpleRNN; for Cardano, MLP; and for Ripple, MLP.
18

Bhavani, Ch, and P. Madhavi. "Improving Efficiency of Apriori Algorithm." International Journal of Computer Trends and Technology 27, no. 2 (2015): 93–99. http://dx.doi.org/10.14445/22312803/ijctt-v27p116.

19

Sadullaeva, Sh. "Analysis of the efficiency of SHA algorithms based on message length." Международный Журнал Теоретических и Прикладных Вопросов Цифровых Технологий 8, no. 1 (2025): 164–74. https://doi.org/10.62132/ijdt.v8i1.245.

Abstract:
This study analyzes the efficiency of SHA (Secure Hash Algorithm) family algorithms depending on message length. The execution time of hashing processes for SHA-1 and SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512) algorithms is compared across various message lengths. Python programming language is used for implementation, and the results are presented in graphical and tabular forms. The study examines the computational complexity and performance speed of these algorithms. The main objective is to determine the efficiency of each algorithm relative to message length and recommend an optimal option for practical applications. Special attention is given to balancing execution speed and computational complexity. The results highlight similarities and differences among the algorithms, providing valuable insights for selecting the most efficient algorithm in real-time systems or large-scale data processing.
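
A benchmark of this shape can be reproduced in a few lines with Python's standard hashlib (Python being the study's implementation language); the timings below are wall-clock averages and will differ by platform:

```python
import hashlib
import time

def time_hash(name, message, repeats=100):
    """Average wall-clock time to hash `message` with the named algorithm."""
    h = getattr(hashlib, name)
    start = time.perf_counter()
    for _ in range(repeats):
        h(message).digest()
    return (time.perf_counter() - start) / repeats

for length in (64, 1024, 1024 * 1024):          # message lengths in bytes
    msg = b"x" * length
    row = {a: time_hash(a, msg)
           for a in ("sha1", "sha224", "sha256", "sha384", "sha512")}
    print(length, {a: f"{t * 1e6:.1f} us" for a, t in row.items()})
```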
20

Xiao, Qian Cai, Ming Qi Li, and Wen Qiang Guo. "Experimental Study of Dynamic Single-Source Shortest Path Algorithm." Applied Mechanics and Materials 58-60 (June 2011): 1493–98. http://dx.doi.org/10.4028/www.scientific.net/amm.58-60.1493.

Abstract:
In this paper, the software Inet 3.0 is applied to generate topologies with randomly generated dynamic topology nodes. Based on the dynamic shortest path algorithms put forward by P. Narvaez, Xiaobin, et al., we analyzed the time efficiency of dynamic versus static shortest path algorithms, the differences in time efficiency among dynamic shortest path algorithms, and the relationship between topology and the time efficiency of dynamic shortest path algorithms. The results show that the Xiaobin algorithm is statistically better than the Narvaez algorithm by about 20-30 percent. Dynamic algorithms are not always better than static algorithms, depending on the amount of changed topology: the two are roughly the same when about 10 percent of the topology changes, dynamic algorithms perform better below 10 percent, and static algorithms are better above it. The time efficiency of dynamic algorithms also depends on the specific topology.
21

Moldovan, Dorin. "Plum Tree Algorithm and Weighted Aggregated Ensembles for Energy Efficiency Estimation." Algorithms 16, no. 3 (2023): 134. http://dx.doi.org/10.3390/a16030134.

Abstract:
This article introduces a novel nature-inspired algorithm called the Plum Tree Algorithm (PTA), which has the biology of plum trees as its main source of inspiration. The PTA was tested and validated using 24 benchmark objective functions, and it was further applied and compared to the following selection of representative state-of-the-art nature-inspired algorithms: the Chicken Swarm Optimization (CSO) algorithm, the Particle Swarm Optimization (PSO) algorithm, the Grey Wolf Optimizer (GWO), the Cuckoo Search (CS) algorithm, the Crow Search Algorithm (CSA), and the Horse Optimization Algorithm (HOA). The results obtained with the PTA are comparable to those of the other nature-inspired optimization algorithms, and the PTA returned the best overall results for the 24 objective functions tested. The article then applies the PTA to weight optimization for an ensemble of four machine learning regressors, namely the Random Forest Regressor (RFR), the Gradient Boosting Regressor (GBR), the AdaBoost Regressor (AdaBoost), and the Extra Trees Regressor (ETR), used for predicting the heating load and cooling load requirements of buildings, with the Energy Efficiency Dataset from UCI Machine Learning as experimental support. The PTA-optimized ensemble returned results comparable to those of the ensembles optimized with the GWO, the CS, and the CSA.
22

Moayedi, A., R. A. Abbaspour, and A. Chehreghan. "A COMPARISON OF EFFICIENCY OF THE OPTIMIZATION APPROACH FOR CLUSTERING OF TRAJECTORIES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 737–40. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-737-2019.

Abstract:
Clustering is an unsupervised learning method used to discover hidden patterns in large sets of data. Huge data volumes and the multidimensionality of trajectories have made their clustering a challenging task. K-means is a widely used clustering algorithm in the trajectory computation field. However, the critical issue with this algorithm is its dependency on the initial values and its tendency to get stuck in local minima. Meta-heuristic algorithms that minimize the cost function of the K-means algorithm can be utilized to address this problem. In this paper, after suggesting a cost function, we compare the clustering performance of seven known metaheuristic population-based algorithms, including the Grey Wolf Optimizer (GWO), Particle Swarm Optimization (PSO), the Sine Cosine Algorithm (SCA), and the Whale Optimization Algorithm (WOA). The results obtained from the clustering of several data sets with class labels were assessed by internal and external clustering validation indices along with a computation time factor. According to the results, the PSO and SCA algorithms show the best results regarding the Purity and computation time metrics, respectively.
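
The cost function such metaheuristics minimize is the usual K-means sum of squared errors. A sketch of the objective, paired with a trivial random-search stand-in for the population-based optimizers compared in the paper:

```python
import random

def kmeans_cost(points, centroids):
    """Cost minimized by the metaheuristics: sum of squared distances
    from each point to its nearest centroid (the K-means SSE)."""
    return sum(min(sum((p[d] - c[d]) ** 2 for d in range(len(p)))
                   for c in centroids)
               for p in points)

def random_search(points, k, iters=1000, seed=0):
    """Trivial stand-in metaheuristic: sample candidate centroid sets from
    the data, keep the cheapest (GWO/PSO/SCA/WOA would instead evolve a
    population toward low-cost centroid sets)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        cand = rng.sample(points, k)
        c = kmeans_cost(points, cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(random_search(pts, k=2))
```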
23

Poddubnyi, Vadym, Oleksandr Sievierinov, and Dmytro Nepokrytov. "Research the efficiency of image processing algorithms in zero watermark schemes." INNOVATIVE TECHNOLOGIES AND SCIENTIFIC SOLUTIONS FOR INDUSTRIES, no. 1(31) (March 31, 2025): 102–14. https://doi.org/10.30837/2522-9818.2025.1.102.

Abstract:
Subject matter: image transformation algorithms used in zero-watermarking techniques for the development of an authentication algorithm. Objectives: to identify effective image transformation algorithms for use in zero-watermarking schemes. The study addresses the following tasks: examining the range of existing image transformation algorithms, formulating informal requirements for image transformation algorithms to be used in zero-watermarking-based authentication schemes, and making assumptions about the feasibility of using each analyzed image transformation algorithm. To achieve these objectives, the following methods are employed: modeling (software implementation of each studied algorithm), empirical methods (application of the algorithms and observation of transformation results), and mathematical methods (calculation of normalized correlation metrics and peak signal-to-noise ratio, PSNR). Results: an analysis was conducted of a set of algorithms that could potentially be used in zero-watermarking schemes for authentication purposes. A methodology was developed to evaluate the algorithms while considering image dimensionality reduction due to compression. Additionally, requirements for image processing algorithms in zero-watermarking-based authentication were established. Conclusions: the study identified the most effective image transformation algorithms for use in zero-watermarking authentication schemes: DWT (Discrete Wavelet Transform), SVD (Singular Value Decomposition), DCT (Discrete Cosine Transform), and K-means clustering. For low-resolution images, DCT is a viable option. The most effective algorithm combinations are DWT + DCT and DWT + K-means, as these ensure optimal robustness to noise while maintaining the sensitivity needed to distinguish similar images. Future authentication schemes based on these algorithms may be useful, in combination with IoT devices, for user authentication in enterprises and organizations.
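
The two evaluation metrics named above have standard closed forms; a small numpy sketch (the exact normalization used in the paper may differ):

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio between two images (uint8 arrays)."""
    mse = np.mean((original.astype(np.float64)
                   - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def normalized_correlation(a, b):
    """Normalized cross-correlation of two feature maps: the usual
    robustness measure for extracted zero-watermarks."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```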
24

Chen, Jie. "AIRRT*: An improved fast path planning algorithm for manipulators." Journal of Physics: Conference Series 2390, no. 1 (2022): 012088. http://dx.doi.org/10.1088/1742-6596/2390/1/012088.

Abstract:
Path planning algorithms based on sampling have been widely studied and applied in recent years, among which the Informed-RRT* algorithm is a typical progressive optimization algorithm. To overcome the low optimization efficiency of the Informed-RRT* algorithm, an improved algorithm, AIRRT* (Adaptive Informed-RRT*), is proposed, which greatly improves optimization efficiency and the quality of optimized paths. According to the node distribution of the initial path, appropriate node construction areas are first selected dynamically in the process of direct local sampling; then collision detection and reconstruction are performed. Unlike the traditional algorithm, the AIRRT* algorithm first determines the area to be reconstructed, then performs direct local sampling in that area, and finally reconstructs the path to improve sampling efficiency. In addition, to reduce the difficulty of searching global path nodes, invalid nodes are removed during reconstruction, which limits the growth of nodes during optimization. The AIRRT* algorithm is compared with common algorithms through simulation experiments, which show that it has clear advantages in algorithm efficiency and path quality and good applicability in varied scenarios.
25

Shen, Linbang, Zhigang Chu, Yongxiang Zhang, and Yang Yang. "A novel Fourier-based deconvolution algorithm with improved efficiency and convergence." Journal of Low Frequency Noise, Vibration and Active Control 39, no. 4 (2019): 866–78. http://dx.doi.org/10.1177/1461348419873471.

Abstract:
Various deconvolution algorithms for acoustic source identification have been developed to improve spatial resolution and suppress the sidelobes of conventional beamforming. To improve the computational efficiency and solution convergence of deconvolution, this paper proposes a Fourier-based improved fast iterative shrinkage thresholding algorithm. Simulations and experiments show that the proposed algorithm achieves excellent acoustic source identification performance, with high computational efficiency and good convergence. The larger the weight coefficient, the narrower the mainlobe width and the better the convergence, but spurious sources also increase; the recommended weight coefficient for the array described herein is 3. In addition, like other Fourier-based deconvolution algorithms, the proposed algorithm obtains better acoustic source identification performance with an irregular focus grid than with the conventional regular focus grid.
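
FISTA itself is a generic accelerated proximal-gradient scheme. Here is the core iteration, written densely for the l1-regularized least-squares form; the paper evaluates the operator products with FFTs and adds a weight coefficient, which this sketch omits:

```python
import numpy as np

def fista(A, b, lam, L, iters=200):
    """Generic FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L is a Lipschitz constant of the gradient, e.g. ||A||_2^2."""
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)                  # gradient of the smooth part
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 77]] = (1.0, -2.0, 1.5)            # sparse "source" vector
b = A @ x_true
x_hat = fista(A, b, lam=0.1, L=np.linalg.norm(A, 2) ** 2)
```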
26

Siva Sankara Phani, T., et al. "CGRA Modulo Scheduling for Achieving Better Performance and Increased Efficiency." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 4 (2021): 1400–1413. http://dx.doi.org/10.17762/turcomat.v12i4.1225.

Abstract:
Coarse-Grained Reconfigurable Architectures (CGRAs) are an effective solution for speeding up compute-intensive tasks thanks to their high energy efficiency at modest sacrifices in flexibility. Efficiently implementing loops on CGRAs has been one of the hardest problems in this area, and modulo scheduling (MS) has proven productive for it. The problem with current MS algorithms is mapping large, irregular loops to CGRAs within a reasonable compilation time using restricted computational and routing resources. This stems mainly from a lack of awareness of the major mapping constraints and from time-consuming approaches to solving the temporal and spatial mapping with CGRA buffer resources. This work aims to boost the performance and the robustness of compilation of the CGRA modulo scheduling algorithm. The CGRA MS problem is divided into temporal and spatial subproblems, and the mechanisms between the two are reorganized. We present a detailed, systematic mapping flow that addresses the temporal mapping problem with a powerful buffering algorithm and efficient connection and computation constraints, and we create a fast, stable algorithm for spatial mapping with a retransmission and rearrangement mechanism. With higher performance and faster compilation, our MS algorithm can map loops to CGRAs more effectively: given the same compilation budget, it achieves a better compilation rate and improves performance by 5% to 14% over standard CGRA mapping algorithms.
27

Wang, Monan, Shaoyong Chen, and Qiyou Yang. "Design and Application of Bounding Volume Hierarchy Collision Detection Algorithm Based on Virtual Sphere." Journal of Mechanics in Medicine and Biology 19, no. 07 (2019): 1940044. http://dx.doi.org/10.1142/s021951941940044x.

Abstract:
The result of collision detection is closely related to the subsequent deformation or cutting of soft tissue. To further improve the efficiency and stability of collision detection, this paper proposes a bounding volume hierarchy collision detection algorithm based on virtual spheres. The proposed algorithm was validated, and the results show that its detection efficiency is higher than that of the serial hybrid bounding volume hierarchy algorithm and the parallel hybrid bounding volume hierarchy algorithm. Different collision detection algorithms were tested, and the results show that the virtual-sphere-based collision detection algorithm has high detection efficiency and good stability; as the number of triangular patches increases, its advantage becomes more and more obvious. Finally, the proposed algorithm was applied to two large and medium-sized virtual scenes to implement collision detection between the vastus lateralis muscle, the thigh, and a surgical instrument. Based on virtual spheres, the bounding volume hierarchy collision detection algorithm can implement efficient and stable collision detection in a virtual surgery system. Meanwhile, the algorithm can be combined with other acceleration algorithms (such as multithreaded acceleration) to further improve detection efficiency.
28

Guo, Xiaomin, Yongxing Cao, Jian Zhou, Yuanxian Huang, and Bijun Li. "HDM-RRT: A Fast HD-Map-Guided Motion Planning Algorithm for Autonomous Driving in the Campus Environment." Remote Sensing 15, no. 2 (2023): 487. http://dx.doi.org/10.3390/rs15020487.

Abstract:
On campus, the complexity of the environment and the lack of regulatory constraints make it difficult to model the environment, resulting in less efficient motion planning algorithms. To solve this problem, HD-Map-guided sampling-based motion planning is a feasible research direction. We propose a motion planning algorithm for autonomous vehicles on campus, called HD-Map-guided rapidly-exploring random tree (HDM-RRT). In our algorithm, a collision risk map (CR-Map) that quantifies the collision risk coefficient on the road is combined with a Gaussian distribution for sampling to improve the efficiency of the algorithm. The node optimization strategy of the algorithm is then deeply optimized using the prior information of the CR-Map to improve the convergence rate and solve the problem of poor stability in campus environments. Three experiments were designed to verify the efficiency and stability of our approach. The results show that the sampling efficiency of our algorithm is four times higher than that of the Gaussian distribution method. The average convergence rate of the proposed algorithm outperforms the RRT* and DT-RRT* algorithms. In terms of algorithm efficiency, the average computation time of the proposed algorithm is only 15.98 ms, much better than that of the three compared algorithms.
29

Harvey, Charles, and Edmund Green. "Record Linkage Algorithms: Efficiency, Selection and Relative Confidence." History and Computing 6, no. 3 (1994): 143–52. http://dx.doi.org/10.3366/hac.1994.6.3.143.

Abstract:
Using the case material of the Westminster Historical Database, this article describes how 30 record linkage algorithms were evaluated to select the optimum algorithm for linking poll book data. It stresses the importance of relative confidence in linkage algorithms, and shows how more discriminating algorithms may increase both confidence in linked records and rates of record linkage.
30

Sevin, Abdullah, and Ünal Çavuşoğlu. "Design and Performance Analysis of a SPECK-Based Lightweight Hash Function." Electronics 13, no. 23 (2024): 4767. https://doi.org/10.3390/electronics13234767.

Abstract:
In recent years, hash algorithms have been used frequently in many areas, such as digital signature, blockchain, and IoT applications. Standard cryptographic hash functions, including traditional algorithms such as SHA-1 and MD5, are generally computationally intensive. A principal approach to improving the security and efficiency of hash algorithms is the integration of lightweight algorithms, which are designed to minimize computational overhead, into their architectural framework. This article proposes a new hash algorithm based on lightweight encryption. A new design for the lightweight hash function is proposed to improve its efficiency and meet security requirements. In particular, efficiency reduces computational load, energy consumption, and processing time for resource-constrained environments such as IoT devices. Security requirements focus on ensuring properties such as collision resistance, pre-image resistance, and distribution of modified bit numbers to ensure reliable performance while preserving the robustness of the algorithm. The proposed design incorporates the SPECK lightweight encryption algorithm to improve the structure of the algorithm, ensuring robust mixing and security through confusion and diffusion, while improving processing speed. Performance and efficiency tests were conducted to evaluate the proposed algorithm, and the results were compared with commonly used hash algorithms in the literature. The test results show that the new lightweight hash algorithm has successfully passed security tests, including collision resistance, pre-image resistance, sensitivity, and distribution of hash values, while outperforming other commonly used algorithms regarding execution time.
31

Pujiono, Imam Prayogo, Eko Hari Rachmawanto, and Nurul Anisa Sri Winarsih. "Array Sorting Algorithm vs Traditional Sorting Algorithm: Memory and Time Efficiency Analysis." Jurnal Manajemen Informatika (JAMIKA) 15, no. 1 (2025): 47–59. https://doi.org/10.34010/jamika.v15i1.13230.

Abstract:
The development of information technology has changed various aspects of life, including the way we store and sort data. Data that used to be stored in filing cabinets is now stored in digital form on computers. However, digital data that is not well organised can be difficult to search and verify. Data sorting has therefore become very important, and various sorting algorithms have been developed to fulfil this need, such as the Array Sorting Algorithm (ASA), which is claimed to have efficient time complexity and to be very competitive with traditional algorithms. This research examines the memory efficiency and computation time of ASA against five traditional sorting algorithms (Bubble Sort, Shell Sort, Merge Sort, Quick Sort, and Heap Sort) using the Java programming language. The research used random numerical datasets at three different scales (100, 1,000, and 10,000 items) to test the performance of the six algorithms in various scenarios. ASA, which utilises a two-dimensional array structure to manage element frequencies, showed impressive computation times, especially on the datasets of 1,000 and 10,000 items, compared to traditional algorithms based on comparison and recursion. The test results confirm that on datasets of 1,000 and 10,000 items, ASA excels in computational speed but loses in memory usage. Therefore, if memory usage is not a major consideration, ASA is a very suitable algorithm for sorting 100 to 10,000 items. These findings provide important insights for selecting efficient sorting algorithms based on memory efficiency and computation time at multiple data sizes, which is particularly useful when developing applications in the Java programming language.
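
A frequency-array sort such as the ASA described above trades memory for time; counting sort is the textbook relative of that idea, sketched here in Python (the study's own implementation was in Java):

```python
import random

def counting_sort(data, max_value):
    """Frequency-array sorting for integers in [0, max_value]:
    O(n + k) time but O(k) extra memory, matching the time-vs-memory
    trade-off reported above."""
    freq = [0] * (max_value + 1)
    for x in data:                       # tally element frequencies
        freq[x] += 1
    out = []
    for value, count in enumerate(freq): # replay values in sorted order
        out.extend([value] * count)
    return out

data = [random.randrange(10_000) for _ in range(10_000)]
assert counting_sort(data, 9_999) == sorted(data)
```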
32

Shen, Lingli. "Implementation of CT Image Segmentation Based on an Image Segmentation Algorithm." Applied Bionics and Biomechanics 2022 (October 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/2047537.

Abstract:
With the increasingly important role of image segmentation in the field of computed tomography (CT), the requirements on image segmentation technology in related industries are constantly rising. When hardware resources fully meet the needs of a fast, high-precision image segmentation system, the main means of improving the segmentation effect is to improve the underlying algorithms. Therefore, this study combines a genetic algorithm (GA) with Otsu's method (OTSU) to form an image segmentation algorithm: the immune genetic algorithm (IGA). The algorithm improves the segmentation accuracy and efficiency of the original algorithms, which benefits more accurate CT image segmentation. The experimental results show that the operating efficiency of the OTSU segmentation algorithm reaches 75%, that of the GA algorithm reaches 78%, and that of the IGA algorithm reaches 92%. In terms of segmentation accuracy, the highest accuracy of the OTSU segmentation algorithm is 45%, that of the GA algorithm is 80%, and that of the IGA algorithm is 97%. The IGA algorithm is stronger in both operating efficiency and accuracy. Therefore, applying the IGA algorithm to CT image segmentation helps doctors better identify lesions and improves the diagnosis rate.
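
Otsu's method, the OTSU component named above, picks the threshold that maximizes the between-class variance of the grey-level histogram. A direct implementation:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the grey level that maximizes between-class
    variance of the foreground/background split (the thresholding step
    that the paper's GA/IGA variants accelerate and refine)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
img = np.clip(img, 0, 255).astype(np.uint8)
print(otsu_threshold(img))   # typically lands between the two modes
```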
33

Ma, Xiangbo. "The impact of AI on algorithm design innovations and practical applications." Edelweiss Applied Science and Technology 9, no. 3 (2025): 1121–27. https://doi.org/10.55214/25768484.v9i3.5436.

Abstract:
Artificial intelligence (AI) has greatly influenced the innovation and practical application of algorithm design. With the rapid development of AI technology, the traditional algorithm design paradigm is undergoing profound changes. AI leverages machine and deep learning to automate algorithm optimization and adaptation for tasks and environments. This involves selecting suitable models and preparing specific datasets for training and validation. It also includes implementing complex algorithms for automatic optimization, allowing models to learn and improve over time with minimal manual intervention. AI-driven innovation not only improves the efficiency and accuracy of algorithm design but also expands its range of practical applications in multiple fields. This paper summarizes evaluation criteria and efficiency indicators for AI-driven algorithms, discusses their basic principles, and examines real-world applications to predict future trends and potential uses, thereby facilitating innovative applications of deep learning technology. AI integration in algorithm design has advanced healthcare, finance, and autonomous driving. It automates optimization, enabling rapid algorithm improvement and enhancing decision-making and efficiency. AI-driven algorithms adapt to changes, ensuring long-term relevance and effectiveness.
34

Kim, Bong-seok, Youngseok Jin, Jonghun Lee, and Sangdong Kim. "High-Efficiency Super-Resolution FMCW Radar Algorithm Based on FFT Estimation." Sensors 21, no. 12 (2021): 4018. http://dx.doi.org/10.3390/s21124018.

Abstract:
This paper proposes a high-efficiency super-resolution frequency-modulated continuous-wave (FMCW) radar algorithm based on fast Fourier transform (FFT) estimation. In FMCW radar systems, the maximum number of samples is generally determined by the maximum detectable distance. However, targets are often closer than the maximum detectable distance, in which case their ranges can be estimated from fewer samples without degrading performance. Based on this property, the proposed algorithm adaptively selects the number of samples used as input to the super-resolution algorithm depending on the target ranges coarsely estimated by the FFT. It thus feeds the super-resolution stage with the reduced number of samples implied by the FFT range estimate instead of the maximum number of samples set by the maximum detectable distance. By doing so, the proposed algorithm matches the performance of the conventional multiple signal classification (MUSIC) algorithm, a representative super-resolution algorithm, without degradation. Simulation results demonstrate the feasibility and improvement provided by the proposed algorithm: it achieves an average complexity reduction of 88% compared to the conventional MUSIC algorithm while achieving similar performance. Moreover, the improvement was verified under practical conditions, as evidenced by our experimental results.
35

Yang, Ruohan, and Zijun Zhong. "Algorithm efficiency and hybrid applications of quantum computing." Theoretical and Natural Science 11, no. 1 (2023): 279–89. http://dx.doi.org/10.54254/2753-8818/11/20230419.

Abstract:
With the development of science and technology, it is difficult for traditional computers to solve cutting-edge problems due to a lack of computing power, and the importance of quantum computers is increasing day by day. This article starts with the simple principles of quantum computing, introduces the most advanced quantum computing instruments and quantum computing algorithms, and points out application prospects in medicine, chemistry and other fields. The paper explains the basic principles of quantum computing algorithms and their efficiency advantages over traditional algorithms, focusing on the Shor algorithm and its variations. In terms of applications, the quantum computer Zu Chongzhi and its contribution to the sampling problem of quantum random circuits are introduced. The paper also analyzes the limitations of quantum computing and sets out targeted goals for future development. In addition, we summarize popular quantum computing algorithms and applications, contributing to the promotion and development of quantum computing. Overall, these results shed light on how to further improve the computational efficiency of quantum computers.
36

Loo, Chang Herng, M. Zulfahmi Toh, Ahmad Fakhri Ab. Nasir, Nur Shazwani Kamaruddin, and Nur Hafieza Ismail. "Efficiency and Accuracy of Scheduling Algorithms for Final Year Project Evaluation Management System." MEKATRONIKA 5, no. 2 (2023): 23–31. http://dx.doi.org/10.15282/mekatronika.v5i2.9973.

Abstract:
Scheduling algorithms play a crucial role in optimizing the efficiency and precision of scheduling tasks, finding applications across various domains to enhance work productivity, reduce costs, and save time. This research paper conducts a comparative analysis of three algorithms: genetic algorithm, hill climbing algorithm, and particle swarm optimization algorithm, with a focus on evaluating their performance in scheduling presentations. The primary goal of this study is to assess the effectiveness of these algorithms and identify the most efficient one for handling presentation scheduling tasks, thereby minimizing the system's response time for generating schedules. The research takes into account various constraints, including evaluator availability, student and evaluator affiliations within research groups, and student-evaluator relationships where a student cannot be supervised by one of the evaluators. Considering these critical parameters and constraints, the algorithm assigns presentation slots, venues, and two evaluators to each student without encountering scheduling conflicts, ultimately producing a schedule based on the allocated slots for both students and evaluators.
37

Naeem, Usman, and Muhammad Mateen Afzal Awan. "Maximizing off-grid solar photovoltaic system efficiency through cutting-edge performance optimization technique for incremental conductance algorithm." Mehran University Research Journal of Engineering and Technology 43, no. 3 (2024): 113. http://dx.doi.org/10.22581/muet1982.3135.

Abstract:
Maximum power point tracking (MPPT) algorithms are required to extract optimal energy from a solar photovoltaic cell or array (PV) under varying weather conditions. MPPT circuits are therefore driven by defined rule sets, i.e., algorithms, which range from simple to complex in design and implementation and are selected according to the operating scenario. The incremental conductance (InC) algorithm is among the simplest, easiest to implement, and most widely demanded MPPT algorithms; its main drawback is tracking speed. Researchers have proposed multiple improvements to overcome this weakness, but the gains have been unsatisfactory, and the modifications introduce steady-state oscillations of the operating point around the MPP, forcing users to trade one property against the other. With this target in focus, we introduce two modifications to the InC structure that prove fruitful. The proposed structure, named the augmented InC algorithm, shows substantial improvement in both tracking speed and steady-state oscillation, outperforming the conventional InC algorithm on both counts. Both the conventional and the augmented InC algorithms were coded from their flowcharts in MATLAB script and applied to a standalone photovoltaic system comprising a PV array, a DC/DC boost converter, irradiance and temperature inputs, the MPPT algorithm, and a DC load, modeled in Simulink/MATLAB.
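The conventional InC decision rule that the paper augments can be sketched in a few lines; the sign convention below assumes a boost converter where a larger duty cycle lowers the PV operating voltage, and the step size is arbitrary:

def inc_mppt_step(v, i, v_prev, i_prev, d, step=0.005):
    # Incremental conductance: at the MPP dP/dV = 0, i.e. dI/dV = -I/V.
    # Returns an updated duty cycle for the DC/DC converter.
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            d -= step          # irradiance rose: move toward higher voltage
        elif di < 0:
            d += step
    elif di / dv > -i / v:
        d -= step              # left of the MPP: raise the operating voltage
    elif di / dv < -i / v:
        d += step              # right of the MPP: lower the operating voltage
    return d                   # dI/dV == -I/V: already at the MPP, hold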
38

Zhang, Wei, and Qian Xu. "Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm." Computational Intelligence and Neuroscience 2022 (January 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/1014501.

Abstract:
To improve the classroom teaching efficiency of English teachers, the deep learning object detection algorithm Single Shot MultiBox Detector (SSD) is applied to classroom monitoring video and optimized into a Mobilenet-Single Shot MultiBox Detector (Mobilenet-SSD) design. Analysis of the Mobilenet-SSD algorithm reveals two shortcomings, a large number of base-network parameters and poor small-target detection, which the optimized design addresses. In experiments on student behaviour analysis, the optimized algorithm reached an average detection accuracy of 82.13% at a detection speed of 23.5 fps (frames per second), and achieved 81.11% accuracy in detecting students' writing behaviour. This shows that the proposed algorithm improves small-target recognition accuracy without sacrificing the running speed of the traditional algorithm, and it compares favourably with previous detectors in detection accuracy. The improved detection efficiency provides modern technical support for English teachers to understand students' learning status and has strong practical significance for improving the efficiency of English classroom teaching.
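The parameter saving that motivates a MobileNet base network comes from depthwise separable convolutions; a quick back-of-the-envelope comparison, with layer sizes chosen purely for illustration:

def conv_params(k, c_in, c_out):
    # Standard k x k convolution: every output channel mixes all inputs.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise k x k filter per input channel, then a 1x1 pointwise mix.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)                  # 589,824 parameters
dws = depthwise_separable_params(3, 256, 256)   # 67,840 parameters
print(std, dws, round(std / dws, 1))            # roughly 8.7x fewer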
39

Ali, Abdulrazzak, Nurul Akmar Emran, Safiza Suhana Kamal Baharin, et al. "Improving the efficiency of clustering algorithm for duplicates detection." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (2023): 1586. http://dx.doi.org/10.11591/ijeecs.v30.i3.pp1586-1595.

Abstract:
Clustering is a technique used to reduce the number of comparisons between candidate records in the duplicate detection process. Record clustering is affected by data quality: the more error-free the data, the more efficient the clustering algorithm, since data errors cause records to be placed in incorrect groups. Window algorithms suffer from the choice of window size: a larger window produces more unnecessary comparisons, while a smaller one may miss duplicates that should fall within the window. In this paper, we propose a data pre-processing method that increases the efficiency of window algorithms in grouping similar records together and also deals with the window size problem. In the proposed method, high-rank attributes are selected and data-preparation operators are applied to them, and a compensation algorithm is implemented to reduce the problem of missing and distorted sort keys. Two datasets, the compact disc database (CDDB) and MusicBrainz, were used to test the duplicate detection algorithms, with the duplicate detection toolkit (DuDe) serving as the benchmark. Experiments showed that the proposed method achieves a high rate of accuracy in detecting duplicates.
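For context, the window-based comparison scheme that such pre-processing feeds can be sketched as a sorted-neighborhood pass; the similarity predicate, threshold, and blocking key here are illustrative assumptions:

from difflib import SequenceMatcher

def similar(a, b, threshold=0.85):
    # Any record-similarity predicate works; string ratio is a stand-in.
    return SequenceMatcher(None, str(a), str(b)).ratio() >= threshold

def sorted_neighborhood(records, key_fn, window=5):
    # Sort by a blocking key, then compare each record only with the
    # window-1 records that follow it in sort order.
    rows = sorted(records, key=key_fn)
    pairs = []
    for i, rec in enumerate(rows):
        for other in rows[i + 1 : i + window]:
            if similar(rec, other):
                pairs.append((rec, other))
    return pairs

# pairs = sorted_neighborhood(records, key_fn=lambda r: r.lower(), window=5)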
40

Ali, Abdulrazzak, Nurul Akmar Emran, Safiza Suhana Kamal Baharin, et al. "Improving the efficiency of clustering algorithm for duplicates detection." Indonesian Journal of Electrical Engineering and Computer Science 30, no. 3 (2023): 1586–95. https://doi.org/10.11591/ijeecs.v30.i3.pp1586-1595.

Abstract:
Clustering is a technique used to reduce the number of comparisons between candidate records in the duplicate detection process. Record clustering is affected by data quality: the more error-free the data, the more efficient the clustering algorithm, since data errors cause records to be placed in incorrect groups. Window algorithms suffer from the choice of window size: a larger window produces more unnecessary comparisons, while a smaller one may miss duplicates that should fall within the window. In this paper, we propose a data pre-processing method that increases the efficiency of window algorithms in grouping similar records together and also deals with the window size problem. In the proposed method, high-rank attributes are selected and data-preparation operators are applied to them, and a compensation algorithm is implemented to reduce the problem of missing and distorted sort keys. Two datasets, the compact disc database (CDDB) and MusicBrainz, were used to test the duplicate detection algorithms, with the duplicate detection toolkit (DuDe) serving as the benchmark. Experiments showed that the proposed method achieves a high rate of accuracy in detecting duplicates.
41

Huang, Feifan. "The Research Progress of Path Planning Algorithms." Highlights in Science, Engineering and Technology 63 (August 8, 2023): 216–21. http://dx.doi.org/10.54097/hset.v63i.10879.

Abstract:
Path planning for mobile robots is currently a hot problem. Previous research has produced A*, Dijkstra, rapidly-exploring random tree (RRT), and many other algorithms. To provide a clearer understanding, this article organizes the ideas and development of Q-learning, the A* algorithm, the ant colony algorithm, the RRT algorithm, and the Dijkstra algorithm, reviews several applications of each in chronological order, and lists optimizations of the five algorithms from recent years. Examples of combined algorithms, which achieve better efficiency than any single algorithm alone, are also given. Analysis of the five algorithms and their optimizations reveals a common problem: paths need to be shorter and smoother, and convergence needs to be faster. Current solutions fall into two categories, improving an algorithm itself or combining different algorithms. In the future, more kinds of combined algorithms will emerge, yielding better path-planning solutions that improve efficiency and reduce energy cost.
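Of the five algorithms surveyed, Dijkstra's is the simplest to show compactly; a minimal grid version (4-connected, unit costs) for orientation:

import heapq

def dijkstra_grid(grid, start, goal):
    # Dijkstra on a 2D grid where 0 = free cell and 1 = obstacle.
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev, pq = {}, [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev or node == start:  # walk back from the goal
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (2, 0)))  # path around the wall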
42

Parfenov, V. I., and V. D. Le. "DISTRIBUTED DETECTION BASED ON USING SOFT DECISION DECODING IN A FUSION CENTER." Telecommunications, no. 1 (2022): 2–9. http://dx.doi.org/10.31044/1684-2588-2022-0-1-2-9.

Abstract:
In this work, distributed detection from the data of many sensors in a wireless sensor system is considered. A decision-making algorithm based on soft-decision decoding in a fusion center is synthesized, and its efficiency gain over the algorithm based on the hard-decision scheme is shown. This algorithm is a generalization of algorithms developed earlier.
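The advantage of soft over hard fusion is easy to see in miniature; the toy log-likelihood numbers below are illustrative, not from the paper:

import numpy as np

def fuse_soft(llrs):
    # Soft fusion: sum per-sensor log-likelihood ratios, decide by sign.
    return int(np.sum(llrs) > 0)

def fuse_hard(llrs):
    # Hard fusion: each sensor decides locally first, then a majority
    # vote, discarding the confidence information the soft rule retains.
    votes = (np.asarray(llrs) > 0).astype(int)
    return int(votes.sum() > len(votes) / 2)

llrs = np.array([0.2, 0.3, -2.5])  # two weak "present", one confident "absent"
print(fuse_soft(llrs), fuse_hard(llrs))  # soft decides 0, hard decides 1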
43

Wang, Huanwei, Xuyan Qi, Shangjie Lou, Jing Jing, Hongqi He, and Wei Liu. "An Efficient and Robust Improved A* Algorithm for Path Planning." Symmetry 13, no. 11 (2021): 2213. http://dx.doi.org/10.3390/sym13112213.

Abstract:
Path planning plays an essential role in mobile robot navigation, and the A* algorithm is one of the best-known path planning algorithms. However, the conventional A* algorithm and its subsequent improvements still have limitations in robustness and efficiency: slow execution, weak robustness, and collisions during traversal. In this paper, we propose an improved A*-based algorithm called the EBHSA* algorithm, which introduces an expansion distance, bidirectional search, heuristic function optimization, and smoothing into path planning. The expansion distance keeps the path a set clearance away from obstacles, improving robustness by avoiding collisions. Bidirectional search searches for a path from the start node and from the goal node at the same time. Heuristic function optimization replaces the traditional heuristic function with a newly designed one. Smoothing improves path robustness by reducing the number of right-angle turns. Simulation tests show that the EBHSA* algorithm has excellent performance in terms of robustness and efficiency, and we transplanted the algorithm to a robot to verify its effectiveness in the real world.
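The expansion-distance idea generalizes beyond EBHSA*; here is a minimal sketch of obstacle inflation on an occupancy grid, where the Chebyshev-distance margin is an assumption for illustration:

def inflate_obstacles(grid, margin=1):
    # Mark every cell within `margin` cells (Chebyshev distance) of an
    # obstacle as blocked before planning, so any path found keeps a
    # safety clearance from real obstacles.
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in range(-margin, margin + 1):
                    for dc in range(-margin, margin + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

safe_grid = inflate_obstacles([[0, 0, 0], [0, 1, 0], [0, 0, 0]], margin=1)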
44

Suba Santhosi, G. B. "A Light Weight Optimal Resource Scheduling Algorithm for Energy Efficient and Real Time Cloud Services." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 1953–62. http://dx.doi.org/10.22214/ijraset.2023.51985.

Abstract:
Cloud computing has become an essential technology for providing services to customers, and energy consumption and response time are critical factors in cloud computing systems, so scheduling algorithms play a vital role in optimizing resource usage and improving energy efficiency. In this paper, we propose a lightweight optimal scheduling algorithm for energy-efficient, real-time cloud services. The algorithm considers the trade-off between energy consumption and response time and provides a solution that optimizes both factors. We evaluate it on a cloud computing testbed using the CloudSim simulation toolkit and compare it with load-balancing round robin, ant colony, genetic, and analytical algorithms. The results show that the proposed algorithm outperforms these alternatives in both energy efficiency and response time.
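A toy version of the energy/response trade-off can be written as a greedy weighted-cost assignment; the scoring model below is an assumption for illustration, not the paper's algorithm:

def assign_tasks(tasks, vms, alpha=0.5):
    # Each task goes to the VM minimizing alpha*energy + (1-alpha)*response.
    # `tasks` are work amounts; each VM is a (speed, power) pair.
    loads = [0.0] * len(vms)
    plan = []
    for work in tasks:
        def score(j):
            speed, power = vms[j]
            finish = (loads[j] + work) / speed   # response-time proxy
            energy = power * work / speed        # energy proxy
            return alpha * energy + (1 - alpha) * finish
        j = min(range(len(vms)), key=score)
        loads[j] += work
        plan.append(j)
    return plan

print(assign_tasks([4, 2, 6], vms=[(2.0, 10.0), (1.0, 3.0)], alpha=0.5))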
45

Qiu, Liqing, Shuang Zhang, Chunmei Gu, and Xiangbo Tian. "Scalable Influence Maximization Meets Efficiency and Effectiveness in Large-Scale Social Networks." International Journal of Software Engineering and Knowledge Engineering 30, no. 08 (2020): 1079–96. http://dx.doi.org/10.1142/s0218194020400161.

Abstract:
Influence maximization is the problem of selecting the top k influential nodes to maximize the spread of influence in social networks. The classical greedy algorithms and their refinements are relatively slow or not scalable, while heuristic algorithms are fast but unacceptably inaccurate; some algorithms buy accuracy and efficiency at the cost of large memory usage. To overcome these shortcomings, this paper proposes a fast and scalable influence maximization algorithm, called K-paths, which uses an influence tree to estimate influence spread. Extensive experiments demonstrate that the K-paths algorithm outperforms the comparison algorithms in efficiency while keeping competitive accuracy.
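For contrast with estimator-based methods like K-paths, here is the classical Monte-Carlo greedy baseline under the independent cascade model, whose cost is exactly what such estimators aim to avoid; the propagation probability and simulation count are illustrative:

import random

def cascade(graph, seeds, p=0.1):
    # One independent-cascade simulation; returns the spread size.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, sims=100):
    # Pick, one at a time, the node with the best estimated marginal
    # spread; accuracy is good but the repeated simulations are slow.
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: sum(cascade(graph, seeds + [n])
                                     for _ in range(sims)))
        seeds.append(best)
    return seeds

graph = {1: [2, 3], 2: [3], 3: [1], 4: [1]}
print(greedy_im(graph, k=2))  # e.g. [1, 4]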
46

Mramba, Lazarus, and Salvador Gezan. "Evaluating Algorithm Efficiency for Optimizing Experimental Designs with Correlated Data." Algorithms 11, no. 12 (2018): 212. http://dx.doi.org/10.3390/a11120212.

Abstract:
The search for efficient methods to optimize experimental designs is a vital process in field trials, often challenged by computational bottlenecks; most existing methods ignore correlations in the data to simplify optimization at the design stage. This study explores several algorithms for improving field experimental designs within a linear mixed models statistical framework, adjusting for both spatial and genetic correlations under A- and D-optimality criteria. Relative design efficiencies are estimated for pairwise swap, genetic neighborhood, and simulated annealing algorithms under varying levels of heritability and of spatial and genetic correlation. Initial randomized complete block designs were generated by a stochastic procedure and can also be imported directly from other design software. At a spatial correlation of 0.6 and a heritability of 0.3 under the A-optimality criterion, both the simulated annealing and the simple pairwise algorithms achieved the highest design efficiencies, 7.4% among genetically unrelated individuals, implying a 7.4% reduction in the average variance of the random treatment effects when the algorithm was iterated 5000 times. Under the D-optimality criterion, in contrast, simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained the highest design efficiencies across all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability, and the number of successful swaps also decreased with increasing heritability, being highest for the simulated annealing and simple pairwise algorithms and lowest for the genetic neighborhood algorithm.
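The simple pairwise algorithm's core loop is compact; a generic sketch with the design criterion left abstract, since the paper's A- and D-optimality computations involve the full mixed-model machinery:

import random

def pairwise_swap_optimize(layout, criterion, iters=5000):
    # Repeatedly swap two plot assignments and keep the swap whenever the
    # design criterion improves; `criterion` maps a layout to a score to
    # minimize (e.g., an A-optimality proxy).
    best = criterion(layout)
    for _ in range(iters):
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
        score = criterion(layout)
        if score < best:
            best = score
        else:
            layout[i], layout[j] = layout[j], layout[i]  # undo the swap
    return layout, best

# layout, score = pairwise_swap_optimize(initial_layout, my_criterion)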
47

Dai, Cai, and Xiujuan Lei. "A Multiobjective Brain Storm Optimization Algorithm Based on Decomposition." Complexity 2019 (January 22, 2019): 1–11. http://dx.doi.org/10.1155/2019/5301284.

Abstract:
Brain storm optimization (BSO) is a simple and effective evolutionary algorithm, but some multiobjective BSO algorithms suffer from low search efficiency. This paper combines decomposition with multiobjective brain storm optimization (MBSO/D) to improve search efficiency. A set of weight vectors transforms a multiobjective optimization problem into a series of subproblems, the decomposition determines the neighboring clusters of each cluster, and solutions from adjacent clusters generate new solutions to update the population, with an adaptive selection strategy balancing exploration and exploitation. MBSO/D is compared with three efficient state-of-the-art algorithms, among them NSGA-II and MOEA/D, on twenty-two test problems. The experimental results show that MBSO/D is more efficient than the compared algorithms and improves search efficiency on most test problems.
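Decomposition-based methods of this family typically scalarize each subproblem; below is a Tchebycheff scalarization sketch. The abstract does not spell out which scalarizer MBSO/D uses, so this shows the common MOEA/D-style choice:

import numpy as np

def tchebycheff(f, weights, z_star):
    # Each weight vector w defines one subproblem:
    #   g(x | w) = max_i  w_i * |f_i(x) - z*_i|,
    # where z* is the current ideal point over all objectives.
    return np.max(weights * np.abs(np.asarray(f) - np.asarray(z_star)))

z_star = [0.0, 0.0]  # ideal point for two objectives
for w in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):  # three subproblems
    print(tchebycheff([0.3, 0.8], np.array(w), z_star))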
48

Mahato, Anita, and Kailash Patidar. "A Review Article of Comparative Analysis of DEEC Protocol." International Journal of Engineering Sciences & Research Technology 6, no. 6 (2017): 113–17. https://doi.org/10.5281/zenodo.805385.

Abstract:
Energy management in heterogeneous wireless sensor networks can be improved by proficient clustering algorithms: coordination through cluster head selection provides efficient data aggregation and reduces communication overhead in the network. In this paper, we propose a fuzzy-logic-based DDEEC clustering algorithm that aims to prolong node lifetime in heterogeneous WSNs. We compare it with the PSO-based DDEEC algorithm and the original DDEEC algorithm in terms of the round at which the first node dies and energy-efficiency metrics. The efficiency of the proposed optimized fuzzy algorithm is demonstrated by MATLAB experiments: simulation results show that it achieves higher energy efficiency and improves node lifespan and data delivery to the base station compared with its alternatives.
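DEEC-family protocols weight cluster-head election by residual energy; a minimal sketch of that election logic, following the standard DEEC/LEACH pattern rather than necessarily the paper's exact variant:

def ch_probability(p_opt, e_residual, e_avg):
    # DEEC-style election probability: scales with a node's residual
    # energy relative to the network average, so energy-rich nodes
    # serve as cluster heads more often.
    return p_opt * e_residual / e_avg

def threshold(p_i, r):
    # LEACH/DEEC-style rotation threshold for round r; a node becomes a
    # cluster head when its random draw falls below this value.
    denom = 1 - p_i * (r % round(1 / p_i))
    return p_i / denom if denom > 0 else 1.0

p = ch_probability(0.1, e_residual=0.8, e_avg=0.5)
print(p, threshold(p, r=3))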
49

Zhang, Bai, Haofang Liu, and Feng Gao. "Research on an Improved Sliding Extracting Algorithm of Composite Deviation for Wind Turbine Gear." Journal of Physics: Conference Series 2218, no. 1 (2022): 012046. http://dx.doi.org/10.1088/1742-6596/2218/1/012046.

Abstract:
One-tooth radial composite deviation and one-tooth tangential composite deviation are important parameters of wind turbine gears that influence their operating life cycle. This paper proposes an improved sliding extracting algorithm for composite deviation that recalculates only the parts that need recalculation, avoiding a large number of invalid search computations and greatly improving efficiency. Experimental results show that the improved sliding extracting algorithm yields the same results as the traditional extracting algorithm while running nearly 100 times faster. An improved sliding extracting algorithm for total composite deviation is also proposed, whose efficiency is, in theory, roughly double that of the traditional algorithm. The algorithms proposed in this paper speed up the extraction of radial and tangential composite deviations and can be applied to single-flank and double-flank gear rolling testers, especially in the wind turbine gear field.
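The "recalculate only what changed" principle behind the sliding extracting algorithm is the same one that makes a monotonic-deque sliding maximum run in O(n); the sketch below is a generic illustration of incremental sliding evaluation, not the paper's gear-specific algorithm:

from collections import deque

def sliding_max(values, window):
    # Each element enters and leaves the deque at most once, so the whole
    # pass is O(n) instead of recomputing every window from scratch
    # (O(n * window)).
    dq, out = deque(), []
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()                      # drop elements that can never win
        dq.append(i)
        if dq[0] <= i - window:
            dq.popleft()                  # index slid out of the window
        if i >= window - 1:
            out.append(values[dq[0]])
    return out

print(sliding_max([1, 3, 2, 5, 4], window=2))  # [3, 3, 5, 5]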
50

PAN, Guan-hua, and Xing-zhong ZHANG. "Study on efficiency of Sunday algorithm." Journal of Computer Applications 32, no. 11 (2013): 3082–84. http://dx.doi.org/10.3724/sp.j.1087.2012.03082.
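Since this entry carries no abstract, a brief sketch of the Sunday algorithm itself may help: on a mismatch it inspects the text character just past the current window and shifts by that character's distance from the pattern's end. This is a standard textbook rendering, not the paper's code:

def sunday_search(text, pattern):
    # Shift table: for each pattern character, the distance from its last
    # occurrence to just past the pattern's end.
    m, n = len(pattern), len(text)
    shift = {ch: m - i for i, ch in enumerate(pattern)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        nxt = i + m                 # character just past the window
        if nxt >= n:
            return -1
        i += shift.get(nxt_ch := text[nxt], m + 1)  # unseen char: skip window + 1
    return -1

print(sunday_search("algorithm efficiency analysis", "efficiency"))  # 10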
