Journal articles on the topic 'EDGE BASED WEIGHTING'


Below are the top 50 journal articles on the topic 'EDGE BASED WEIGHTING.'


1

Li, Xian Mao, Gao Ming Huang, and Dong Xia. "Spacial Filter of Weighting Method Based on Spectrum Analyse." Advanced Materials Research 605-607 (December 2012): 1890–96. http://dx.doi.org/10.4028/www.scientific.net/amr.605-607.1890.

Abstract:
Traditional methods select signals within a given angular range by hard-switching between antennas. This paper proposes a weighting-based method that filters signals from specified directions, namely a spatial filter. With an array antenna, a composite signal is acquired by adjusting the phase and amplitude (weighting) of each element's signal and then summing the signals. At different times, signals can be selected from any range of directions by applying different weightings to each channel. The weighting parameters are obtained through analysis of the spatial signal and spatial spectrum, which yields an appropriate weighting window function. Simulations show that Hamming-window weighting performs best among the three representative window functions, achieving a low sidelobe level (-44 dB) with short rising and falling edges. The paper also presents a hardware structure.
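The window-weighting comparison described in this abstract can be sketched numerically: applying a Hamming window across the elements of a uniform linear array lowers the pattern's peak sidelobe level from roughly -13 dB (uniform weighting) into the low -40s, at the cost of a wider mainlobe. The sketch below is illustrative only (NumPy, a hypothetical 32-element array), not the authors' hardware implementation.

```python
import numpy as np

def peak_sidelobe_db(weights: np.ndarray, n_fft: int = 4096) -> float:
    """Peak sidelobe level (dB) of the array pattern formed by the given
    per-element weights, evaluated via a zero-padded FFT."""
    pattern = np.abs(np.fft.fft(weights, n_fft))
    pattern /= pattern.max()
    db = 20 * np.log10(pattern + 1e-12)
    # Walk outward from the mainlobe peak (bin 0) to the first null,
    # then take the maximum of what remains as the peak sidelobe.
    half = db[: n_fft // 2]
    first_null = np.argmax(np.diff(half) > 0)  # pattern starts rising again
    return half[first_null:].max()

n = 32
rect = np.ones(n)          # uniform weighting
hamming = np.hamming(n)    # Hamming-window weighting

print(f"uniform PSL: {peak_sidelobe_db(rect):.1f} dB")     # around -13 dB
print(f"hamming PSL: {peak_sidelobe_db(hamming):.1f} dB")  # around -43 dB
```

The roughly 30 dB sidelobe reduction is the property the abstract exploits for direction-selective filtering; the exact -44 dB figure reported there depends on their array configuration.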
2

Karasuyama, Masayuki, and Hiroshi Mamitsuka. "Adaptive edge weighting for graph-based learning algorithms." Machine Learning 106, no. 2 (November 18, 2016): 307–35. http://dx.doi.org/10.1007/s10994-016-5607-3.

3

KERR, GRAINNE, DIMITRI PERRIN, HEATHER J. RUSKIN, and MARTIN CRANE. "EDGE-WEIGHTING OF GENE EXPRESSION GRAPHS." Advances in Complex Systems 13, no. 02 (April 2010): 217–38. http://dx.doi.org/10.1142/s0219525910002505.

Abstract:
In recent years, considerable research efforts have been directed to micro-array technologies and their role in providing simultaneous information on expression profiles for thousands of genes. These data, when subjected to clustering and classification procedures, can assist in identifying patterns and providing insight on biological processes. To understand the properties of complex gene expression datasets, graphical representations can be used. Intuitively, the data can be represented in terms of a bipartite graph, with weighted edges corresponding to gene-sample node couples in the dataset. Biologically meaningful subgraphs can be sought, but performance can be influenced both by the search algorithm and by the graph-weighting scheme, and both merit rigorous investigation. In this paper, we focus on edge-weighting schemes for bipartite graphical representation of gene expression. Two novel methods are presented: the first is based on empirical evidence; the second on a geometric distribution. The schemes are compared for several real datasets, assessing efficiency of performance based on four essential properties: robustness to noise and missing values, discrimination, parameter influence on scheme efficiency, and reusability. Recommendations and limitations are briefly discussed.
4

Seo, Suyoung. "Subpixel Edge Localization Based on Adaptive Weighting of Gradients." IEEE Transactions on Image Processing 27, no. 11 (November 2018): 5501–13. http://dx.doi.org/10.1109/tip.2018.2860241.

5

Cheng, Deqiang (程德强), Zhuang Huandong (庄焕东), Yu Wenjie (于文洁), Bai Chunmeng (白春梦), and Wen Xiaoshun (文小顺). "Cross-Scale Local Stereo Matching Based on Edge Weighting." Laser & Optoelectronics Progress 56, no. 21 (2019): 211504. http://dx.doi.org/10.3788/lop56.211504.

6

Sun, Peng Gang. "Weighting links based on edge centrality for community detection." Physica A: Statistical Mechanics and its Applications 394 (January 2014): 346–57. http://dx.doi.org/10.1016/j.physa.2013.08.048.

7

Kovács, László, Anita Agárdi, and Tamás Bányai. "Fitness Landscape Analysis and Edge Weighting-Based Optimization of Vehicle Routing Problems." Processes 8, no. 11 (October 28, 2020): 1363. http://dx.doi.org/10.3390/pr8111363.

Abstract:
Vehicle routing problem (VRP) is a highly investigated discrete optimization problem. The first paper was published in 1959, and many vehicle routing problem variants have since appeared to model real logistical systems. Since the vehicle routing problem is NP-hard, it is solved in practice by approximation algorithms. Metaheuristics give a “good” result within an “acceptable” time. When developing a new metaheuristic algorithm, researchers usually use only their intuition and test results to verify the efficiency of the algorithm, comparing it to the efficiency of other algorithms. However, it may also be necessary to analyze the search operators of the algorithms for deeper investigation. The fitness landscape is a tool for that purpose, describing the possible states of the search space, the neighborhood operator, and the fitness function. The goal of fitness landscape analysis is to measure the complexity and efficiency of the applicable operators. The paper aims to investigate the fitness landscape of a complex vehicle routing problem. The efficiency of the following operators is investigated: 2-opt, order crossover, partially matched crossover, and cycle crossover. The results show that the most efficient one is the 2-opt operator. Based on the results of fitness landscape analysis, we propose a novel traveling salesman problem genetic algorithm optimization variant in which the edges are the elementary units having a fitness value. The optimal route is constructed from the edges having good fitness values. The fitness value of an edge depends on the quality of the routes containing it. Based on the performed comparison tests, the proposed method significantly dominates many other optimization approaches.
8

Mahajan, Shveta, Anu Rani, Mamta Sharma, Sudesh Kumar Mittal, and Amitava Das. "A pre-processing based optimized edge weighting method for colour constancy." Imaging Science Journal 66, no. 4 (December 15, 2017): 231–38. http://dx.doi.org/10.1080/13682199.2017.1412889.

9

Tan, Xiao, Changming Sun, and Tuan D. Pham. "Edge-Aware Filtering with Local Polynomial Approximation and Rectangle-Based Weighting." IEEE Transactions on Cybernetics 46, no. 12 (December 2016): 2693–705. http://dx.doi.org/10.1109/tcyb.2015.2485203.

10

Zheng, Yi, Fang Liu, and Yong-Wang Gong. "Robustness in Weighted Networks with Cluster Structure." Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/292465.

Abstract:
The vulnerability of complex systems induced by cascade failures revealed the comprehensive interaction of dynamics with network structure. The effect on cascade failures induced by cluster structure was investigated on three networks, small-world, scale-free, and module networks, of which the clustering coefficient is controllable by the random walk method. After analyzing the shifting process of load, we found that the betweenness centrality and the cluster structure play an important role in cascading model. Focusing on this point, properties of cascading failures were studied on model networks with adjustable clustering coefficient and fixed degree distribution. In the proposed weighting strategy, the path length of an edge is designed as the product of the clustering coefficient of its end nodes, and then the modified betweenness centrality of the edge is calculated and applied in cascade model as its weights. The optimal region of the weighting scheme and the size of the survival components were investigated by simulating the edge removing attack, under the rule of local redistribution based on edge weights. We found that the weighting scheme based on the modified betweenness centrality makes all three networks have better robustness against edge attack than the one based on the original betweenness centrality.
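The weighting strategy in this abstract takes an edge's length as the product of the clustering coefficients of its end nodes; that step can be sketched in plain Python. The betweenness computation and cascade simulation are omitted, and the adjacency structure below is a made-up toy graph.

```python
from itertools import combinations

def clustering_coefficient(adj: dict, v) -> float:
    """Local clustering coefficient of node v: fraction of pairs of
    v's neighbours that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def edge_weights_by_clustering(adj: dict) -> dict:
    """Weight each edge by the product of the clustering coefficients
    of its end nodes, as in the proposed weighting strategy."""
    c = {v: clustering_coefficient(adj, v) for v in adj}
    return {(u, v): c[u] * c[v]
            for u in adj for v in adj[u] if u < v}

# A triangle (nodes 1-2-3) with a pendant node 4: edges inside the
# triangle get higher weight than the bridge edge to the pendant.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
w = edge_weights_by_clustering(adj)
print(w)
```

In the paper these lengths feed a modified edge-betweenness that serves as the cascade-model weight; the sketch stops at the weighting itself.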
11

Xie, Yun Yu, Chang Hua Hu, Biao Shi, and Qian Man. "An Adaptive Super-Resolution Reconstruction for Terahertz Image Based on MRF Model." Applied Mechanics and Materials 373-375 (August 2013): 541–46. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.541.

Abstract:
An adaptive super-resolution reconstruction method for Terahertz (THz) images based on a Markov random field (MRF) model is proposed. An adaptive Gaussian weighting factor based on the Markov prior distribution is applied to smooth the image edges. The gradient-based optimization converges quickly to the optimal solution. The method is verified on real Terahertz images in comparison with the traditional maximum a posteriori (MAP) super-resolution algorithm. The experimental results show that the adaptive Gaussian weighting super-resolution algorithm not only has high super-resolution performance, but also better preserves image edge information, reduces the noise of restored images, and yields an ideal THz image. The adaptive super-resolution reconstruction method can thus be used for Terahertz image reconstruction.
12

Chuang, T. Y., H. W. Ting, and J. J. Jaw. "HYBRID-BASED DENSE STEREO MATCHING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 495–501. http://dx.doi.org/10.5194/isprs-archives-xli-b3-495-2016.

Abstract:
Stereo matching generating accurate and dense disparity maps is an indispensable technique for 3D exploitation of imagery in the fields of computer vision and photogrammetry. Although numerous solutions and advances have been proposed in the literature, occlusions, disparity discontinuities, sparse texture, image distortion, and illumination changes still lead to problematic issues and await better treatment. In this paper, a hybrid method based on semi-global matching (SGM) is presented to tackle the challenges of dense stereo matching. To ease the sensitivity of SGM cost aggregation towards penalty parameters, a formal way to provide proper penalty estimates is proposed. To this end, the study employs a shape-adaptive cross-based matching with an edge constraint to generate an initial disparity map for penalty estimation. Image edges, indicating the potential locations of occlusions as well as disparity discontinuities, are identified by the edge drawing algorithm to ensure that the local support regions do not cover significant disparity changes. Besides, an additional penalty parameter Pe is imposed on the energy function of SGM cost aggregation to specifically handle edge pixels. Furthermore, the final disparities of edge pixels are found by weighting both values derived from the SGM cost aggregation and the U-SURF matching, providing more reliable estimates at disparity discontinuity areas. Evaluations on Middlebury stereo benchmarks demonstrate satisfactory performance and reveal the potency of the hybrid dense stereo matching method.
13

Chuang, T. Y., H. W. Ting, and J. J. Jaw. "HYBRID-BASED DENSE STEREO MATCHING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 495–501. http://dx.doi.org/10.5194/isprsarchives-xli-b3-495-2016.

14

Lai, Yu-Ren, Ping-Chuan Tsai, Chih-Yuan Yao, and Shanq-Jang Ruan. "Improved local histogram equalization with gradient-based weighting process for edge preservation." Multimedia Tools and Applications 76, no. 1 (December 21, 2015): 1585–613. http://dx.doi.org/10.1007/s11042-015-3147-7.

15

Zhang, Yi, Cheng Liang Huang, and Lian Fa Bai. "Stereo Matching Algorithm Based on Edge Feature of Segmented Image." Applied Mechanics and Materials 333-335 (July 2013): 948–53. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.948.

Abstract:
Stereo matching of disparity-discontinuous boundaries and weakly textured regions is still a problem in computer vision. Local stereo matching algorithms, with the advantages of high speed and accuracy, are the most common method. In order to improve the matching accuracy in the mentioned regions, a stereo matching algorithm based on the edge features of a segmented image is proposed. Firstly, the reference image is segmented by the Mean-Shift algorithm. Then, a support window is dynamically allocated based on the edge features of the segmented image. Finally, the disparity distribution of the support window is adjusted by introducing a weighting factor. The experimental results show that this algorithm can reduce noise and effectively improve matching accuracy.
16

SAIF, ABDULGABBAR, UMMI ZAKIAH ZAINODIN, NAZLIA OMAR, and ABDULLAH SAEED GHAREB. "Weighting-based semantic similarity measure based on topological parameters in semantic taxonomy." Natural Language Engineering 24, no. 6 (June 4, 2018): 861–86. http://dx.doi.org/10.1017/s1351324918000190.

Abstract:
Semantic measures are used in handling different issues in several research areas, such as artificial intelligence, natural language processing, knowledge engineering, bioinformatics, and information retrieval. Hierarchical feature-based semantic measures have been proposed to estimate the semantic similarity between two concepts/words depending on the features extracted from a semantic taxonomy (hierarchy) of a given lexical source. The central issue in these measures is the constant weighting assumption that all elements in the semantic representation of the concept possess the same relevance. In this paper, a new weighting-based semantic similarity measure is proposed to address the issues in hierarchical feature-based measures. Four mechanisms are introduced to weigh the degree of relevance of features in the semantic representation of a concept by using topological parameters (edge, depth, descendants, and density) in a semantic taxonomy. With the semantic taxonomy of WordNet, the proposed semantic measure is evaluated for word semantic similarity in four gold-standard datasets. Experimental results show that the proposed measure outperforms hierarchical feature-based semantic measures in all the datasets. Comparison results also imply that the proposed measure is more effective than information-content measures in measuring semantic similarity.
17

Wu, Liang, Shunbo Hu, and Changchun Liu. "Exponential-Distance Weights for Reducing Grid-like Artifacts in Patch-Based Medical Image Registration." Sensors 21, no. 21 (October 26, 2021): 7112. http://dx.doi.org/10.3390/s21217112.

Abstract:
Patch-based medical image registration has been well explored in recent decades. However, the patch fusion process can generate grid-like artifacts along the edges of patches for the following two reasons: firstly, in order to ensure the same size of input and output, zero-padding is used, which causes uncertainty at the edges of the output feature map during the feature extraction process; secondly, extracting patches with a sliding window at different strides results in different degrees of grid-like artifacts. In this paper, we propose an exponential-distance-weighted (EDW) method to remove grid-like artifacts. To account for the uncertainty of predictions near patch edges, we used an exponential function to convert the distance from a point in the overlapping regions to the center point of its patch into a weighting coefficient. This gives lower weights to areas near the patch edges, reducing the influence of uncertain predictions. Finally, the dense displacement field was obtained by this EDW weighting method. We used the OASIS-3 dataset to evaluate the performance of our method. The experimental results show that the proposed EDW patch fusion method removed grid-like artifacts and achieved a Dice similarity coefficient superior to those of several state-of-the-art methods. The proposed fusion method can be used together with any patch-based registration model.
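The exponential-distance weighting itself is simple to sketch: each patch pixel gets a weight decaying exponentially with its distance from the patch centre, and overlapping patches are fused by a normalized weighted average. The NumPy sketch below assumes square patches and a hypothetical decay rate alpha; it is not the authors' code.

```python
import numpy as np

def edw_weight(patch_size: int, alpha: float = 0.1) -> np.ndarray:
    """Exponential-distance weight map for a square patch: weight decays
    with Euclidean distance from the patch centre, so predictions near
    patch edges (made uncertain by zero-padding) count for less."""
    c = (patch_size - 1) / 2.0
    yy, xx = np.mgrid[:patch_size, :patch_size]
    dist = np.hypot(yy - c, xx - c)
    return np.exp(-alpha * dist)

def fuse_patches(patches, corners, out_shape, patch_size, alpha=0.1):
    """Weighted average of overlapping patches placed at given corners."""
    w = edw_weight(patch_size, alpha)
    acc = np.zeros(out_shape)
    norm = np.zeros(out_shape)
    for p, (y, x) in zip(patches, corners):
        acc[y:y + patch_size, x:x + patch_size] += w * p
        norm[y:y + patch_size, x:x + patch_size] += w
    return acc / np.maximum(norm, 1e-12)

# Two constant 4x4 patches overlapping by 2 columns: in the overlap the
# fused value is a weighted blend of 0 and 1, elsewhere the patch value.
fused = fuse_patches([np.zeros((4, 4)), np.ones((4, 4))],
                     [(0, 0), (0, 2)], (4, 6), 4)
print(fused.round(2))
```

The same skeleton applies to the 3-D displacement fields of the paper; only the weight-map dimensionality changes.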
18

Xu, Yan, Shunbo Hu, and Yuyue Du. "Research on Optimization Scheme for Blocking Artifacts after Patch-Based Medical Image Reconstruction." Computational and Mathematical Methods in Medicine 2022 (July 31, 2022): 1–17. http://dx.doi.org/10.1155/2022/2177159.

Abstract:
Due to limitations of computer resources, when utilizing a neural network to process an image with a high resolution, the typical processing approach is to slice the original image. However, because of the influence of zero-padding in the edge component during the convolution process, the central part of the patch often has more accurate feature information than the edge part, resulting in image blocking artifacts after patch stitching. We studied this problem in this paper and proposed a fusion method that assigns a weight to each pixel in a patch using a truncated Gaussian function as the weighting function. In this method, we used the weighting function to transform the Euclidean-distance between a point in the overlapping part and the central point of the patch where the point was located into a weight coefficient. With increasing distance, the value of the weight coefficient decreased. Finally, the reconstructed image was obtained by weighting. We employed the bias correction model to evaluate our method on the simulated database BrainWeb and the real dataset HCP (Human Connectome Project). The results show that the proposed method is capable of effectively removing blocking artifacts and obtaining a smoother bias field. To verify the effectiveness of our algorithm, we employed a denoising model to test it on the IXI-Guys human dataset. Qualitative and quantitative evaluations of both models show that the fusion method proposed in this paper can effectively remove blocking artifacts and demonstrates superior performance compared to five commonly available and state-of-the-art fusion methods.
19

Xu, Yingying, Songsong Dai, Haifeng Song, Lei Du, and Ying Chen. "Multi-modal brain MRI images enhancement based on framelet and local weights super-resolution." Mathematical Biosciences and Engineering 20, no. 2 (2022): 4258–73. http://dx.doi.org/10.3934/mbe.2023199.

Abstract:
Magnetic resonance (MR) image enhancement technology can reconstruct a high-resolution image from a low-resolution image, which is of great significance for clinical application and scientific research. T1 weighting and T2 weighting are the two common magnetic resonance imaging modes, each of which has its own advantages, but the imaging time of T2 is much longer than that of T1. Related studies have shown that they have very similar anatomical structures in brain images, which can be utilized to enhance the resolution of low-resolution T2 images by using the edge information of high-resolution T1 images that can be rapidly imaged, so as to shorten the imaging time needed for T2 images. In order to overcome the inflexibility of traditional methods using fixed weights for interpolation and the inaccuracy of using gradient threshold to determine edge regions, we propose a new model based on previous studies on multi-contrast MR image enhancement. Our model uses framelet decomposition to finely separate the edge structure of the T2 brain image, and uses the local regression weights calculated from the T1 image to construct a global interpolation matrix, so that our model can not only guide the edge reconstruction more accurately where the weights are shared, but also carry out collaborative global optimization for the remaining pixels and their interpolated weights. Experimental results on a set of simulated MR data and two sets of real MR images show that the enhanced images obtained by the proposed method are superior to the compared methods in terms of visual sharpness or qualitative indicators.
20

Ye, Dan, Xiaogang Wang, and Jin Hou. "Multi-objective Offloading Decision Based on Combination Weighting Method for Multi-access Edge Computing." Internet of Things and Cloud Computing 9, no. 3 (2021): 21. http://dx.doi.org/10.11648/j.iotcc.20210903.11.

21

Kazemi-Pour, Ali, Bahram Goliaei, and Hamid Pezeshk. "Protein Complex Discovery by Interaction Filtering from Protein Interaction Networks Using Mutual Rank Coexpression and Sequence Similarity." BioMed Research International 2015 (2015): 1–7. http://dx.doi.org/10.1155/2015/165186.

Abstract:
The evaluation of biological networks is considered the essential key to understanding complex biological systems. Meanwhile, graph clustering algorithms are mostly used in protein-protein interaction (PPI) network analysis. The complexes introduced by the clustering algorithms include noise proteins. The error rate of noise proteins in PPI network studies is about 40–90%. However, only 30–40% of the existing interactions in the PPI databases depend on the specific biological function. It is essential to eliminate the noise proteins and interactions from the complexes created via clustering methods. We introduce new methods of weighting interactions in protein clusters and of removing noise interactions and proteins based on their weights. The coexpression and the sequence similarity of each pair of proteins are taken as the edge weight between the proteins in the network. The results showed that edge filtering based on the amount of coexpression acts similarly to node filtering via graph-based characteristics. Regarding the removal of noise edges, edge filtering has a significant advantage over the graph-based method. Edge filtering based on the amount of sequence similarity has the ability to remove both the noise proteins and the noise interactions.
22

ZHENG, SHENG, CHANGCAI YANG, EMILE A. HENDRIKS, and XIAOJUN WANG. "ADAPTIVE WEIGHTED LEAST SQUARES SVM BASED SNOWING MODEL FOR IMAGE DENOISING." International Journal of Wavelets, Multiresolution and Information Processing 11, no. 06 (November 2013): 1350043. http://dx.doi.org/10.1142/s0219691313500434.

Abstract:
We propose a snowing model to iteratively smooth the various image noises while preserving the important image structures such as edges and lines. Considering the gray image as a digital terrain model, we develop an adaptive weighted least squares support vector machine (LS-SVM) to iteratively estimate the optimal gray surface underlying the noisy image. The LS-SVM works on Gaussian noise while the weighted LS-SVM works on the outliers and non-Gaussian noise. To improve its performance in preserving the directional signal while suppressing the noise, the dominant orientation information of the gradients is integrated into the weighting scheme. The contribution of each attribute to the final LS-SVM model is adaptively determined in a fitness-modulated, elongated, elliptical contour spread along the direction of the local edge structure. The farther away from the hyperplane in kernel space the point is, the less weight it gets, while the point on the direction of the local structure gets more weight. With the adaptive weighting scheme, the robust LS-SVM smooths most strongly along the edges, rather than across them, while it minimizes the effects of outliers, and results in strong preservation of details in the final output. The iteratively adaptive reweighted LS-SVM simulates the snowing process. The investigation on real images contaminated by mixture Gaussian noise has demonstrated that the performance of the present method is stable and reliable under noise distributions varying from Gaussian to impulsive.
23

Eassa, Mohamed, Ibrahim Mohamed Selim, Walid Dabour, and Passent Elkafrawy. "Automated classification technique for edge-on galaxies based on mathematical treatment of brightness data." Research in Astronomy and Astrophysics 21, no. 10 (November 1, 2021): 264. http://dx.doi.org/10.1088/1674-4527/21/10/264.

Abstract:
Classification of edge-on galaxies is important to astronomical studies due to our Milky Way galaxy being an edge-on galaxy. Edge-on galaxies pose a problem to classification due to their less overall brightness levels and smaller numbers of pixels. In the current work, a novel technique for the classification of edge-on galaxies has been developed. This technique is based on the mathematical treatment of galaxy brightness data from their images. A special treatment for galaxies’ brightness data is developed to enhance faint galaxies and eliminate adverse effects of high brightness backgrounds as well as adverse effects of background bright stars. A novel slimness weighting factor is developed to classify edge-on galaxies based on their slimness. The technique has the capacity to be optimized for different catalogs with different brightness levels. In the current work, the developed technique is optimized for the EFIGI catalog and is trained using a set of 1800 galaxies from this catalog. Upon classification of the full set of 4458 galaxies from the EFIGI catalog, an accuracy of 97.5% has been achieved, with an average processing time of about 0.26 seconds per galaxy on an average laptop.
24

Cai, Shaowei, Kaile Su, and Qingliang Chen. "EWLS: A New Local Search for Minimum Vertex Cover." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 45–50. http://dx.doi.org/10.1609/aaai.v24i1.7539.

Abstract:
A number of algorithms have been proposed for the Minimum Vertex Cover problem. However, they are far from satisfactory, especially on hard instances. In this paper, we introduce Edge Weighting Local Search (EWLS), a new local search algorithm for the Minimum Vertex Cover problem. EWLS is based on the idea of extending a partial vertex cover into a vertex cover. A key point of EWLS is to find a vertex set that provides a tight upper bound on the size of the minimum vertex cover. To this purpose, EWLS employs an iterated local search procedure, using an edge weighting scheme which updates edge weights when stuck in local optima. Moreover, some sophisticated search strategies have been taken to improve the quality of local optima. Experimental results on the broadly used DIMACS benchmark show that EWLS is competitive with the current best heuristic algorithms, and outperforms them on hard instances. Furthermore, on a suite of difficult benchmarks, EWLS delivers the best results and sets a new record on the largest instance.
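The edge-weighting idea — edges left uncovered accumulate weight, steering the search back toward them and out of local optima — can be illustrated with a deliberately simplified local search. This toy does not reproduce EWLS's partial-cover framework or its refined search strategies; every name below is illustrative.

```python
import random

def edge_weighted_cover(edges, n, steps=2000, seed=0):
    """Toy local search for vertex cover with an EWLS-style edge-weighting
    scheme: every edge carries a weight, a vertex's gain is the total
    weight of the uncovered edges it would cover, and the weights of
    edges still uncovered are incremented each step, reshaping the
    landscape so the search escapes local optima."""
    random.seed(seed)
    weight = {e: 1 for e in edges}
    cover = set()
    best = set(range(n))  # trivial cover: all vertices
    for _ in range(steps):
        uncovered = [e for e in edges if e[0] not in cover and e[1] not in cover]
        if not uncovered:
            if len(cover) < len(best):
                best = set(cover)
            cover.discard(random.choice(tuple(cover)))  # try a smaller cover
            continue
        # Greedy move guided by the accumulated edge weights.
        gain = {}
        for u, v in uncovered:
            gain[u] = gain.get(u, 0) + weight[(u, v)]
            gain[v] = gain.get(v, 0) + weight[(u, v)]
        cover.add(max(gain, key=gain.get))
        for e in uncovered:
            weight[e] += 1  # edge weighting: stubborn edges grow heavier
    return best

# Path graph 0-1-2-3: a minimum vertex cover has size 2 (e.g. {1, 2}).
edges = [(0, 1), (1, 2), (2, 3)]
print(edge_weighted_cover(edges, 4))
```

EWLS additionally maintains a tight upper bound via a partial cover and only updates weights when stuck; the sketch keeps just the weighting mechanism.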
25

Chen, Jiming, Liping Chen, and Mohammad Shabaz. "Image Fusion Algorithm at Pixel Level Based on Edge Detection." Journal of Healthcare Engineering 2021 (August 9, 2021): 1–10. http://dx.doi.org/10.1155/2021/5760660.

Abstract:
In the present scenario, image fusion is utilized widely for various applications, but existing techniques and algorithms are cumbersome and time-consuming. Aiming at the problems of low efficiency, long running time, missing image detail information, and poor image fusion, an image fusion algorithm at the pixel level based on edge detection is proposed. The improved ROEWA (Ratio of Exponentially Weighted Averages) operator is used to detect the edges of the image. The variable-precision fitting algorithm and edge curvature changes are used to extract the feature lines of the image edges and the corner points of the features, improving the stability of image fusion. According to the information and characteristics of the high-frequency and low-frequency regions, different image fusion rules are set. The high-frequency region is handled by a local-energy weighted fusion approach based on edge information; the low-frequency region is processed by merging the regional energy with a weighting factor. The experimental results demonstrate that the image fusion technique presented in this work increases the resolution by 1.23 and 1.01, respectively, when compared to the two standard approaches, and that the proposed algorithm can effectively reduce the loss of image information. The sharpness and information entropy of the fused image are higher than those of the experimental comparison methods, the running time is shorter, and the algorithm has better robustness.
26

Fang, Feng. "Crack Repair Model of Ancient Ceramics Based on Digital Image." Scientific Programming 2022 (March 11, 2022): 1–10. http://dx.doi.org/10.1155/2022/4932183.

Abstract:
Ancient ceramics are an important carrier of the concretization and artistic transformation of traditional Chinese culture, and an indispensable link for the world to understand it. Nondestructive testing of microdefect cracks in ceramics is of great significance for ensuring the quality of ceramic products and improving their reliability. The microdefect crack area must first be extracted, and the characteristics of the ceramic crack image described with the gradient weighting feature of the model, to complete nondestructive detection of the crack image. The traditional method sets the pixel point and brightness threshold according to the pixel values of the microdefect area but ignores the description of the weighted features of the image. In this study, an improved edge-detection algorithm based on cluster analysis was applied to ancient ceramic crack repair. First, cluster analysis is used to optimize the Sobel operator in edge detection. Then, the gray-value distribution of the edge detection map is changed by the clustering algorithm. Finally, the experimental results show that the contour crack traces and edge directions of the improved edge detection map are enhanced by 20%, which helps improve the accuracy of ancient ceramic crack repair.
27

Zhang, Wenbo, Yuchen Zhao, Fangjing Li, and Hongbo Zhu. "A Hierarchical Federated Learning Algorithm Based on Time Aggregation in Edge Computing Environment." Applied Sciences 13, no. 9 (May 8, 2023): 5821. http://dx.doi.org/10.3390/app13095821.

Full text
Abstract:
Federated learning is currently a popular distributed machine learning solution that often experiences cumbersome communication processes and challenging model convergence in practical edge deployments due to the training nature of its model information interactions. The paper proposes a hierarchical federated learning algorithm called FedDyn to address these challenges. FedDyn uses dynamic weighting to limit the negative effects of local model parameters with high dispersion and to speed up convergence. Additionally, an efficient aggregation-based hierarchical federated learning algorithm is proposed to improve training efficiency. The waiting time is set at the edge layer, enabling edge aggregation within a specified time, while the central server waits for the arrival of all edge aggregation models before integrating them. Dynamic grouping weighted aggregation is implemented during aggregation based on the average obsolescence of local models in various batches. The proposed algorithm is tested on the MNIST and CIFAR-10 datasets and compared with the FedAVG algorithm. The results show that FedDyn can reduce the negative effects of non-independent and identically distributed (non-IID) data on the model and shorten the total training time by 30% under the same accuracy rate compared to FedAVG.
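The idea of down-weighting high-dispersion local models can be sketched as follows; the dispersion measure and inverse-dispersion weights here are illustrative assumptions, not FedDyn's exact scheme:

```python
def dispersion(local, reference):
    """Mean squared deviation of a local model from a reference (e.g., global) model."""
    return sum((l - r) ** 2 for l, r in zip(local, reference)) / len(local)

def aggregate(local_models, global_model, eps=1e-8):
    """Weighted average in which high-dispersion local models get smaller weights."""
    weights = [1.0 / (dispersion(m, global_model) + eps) for m in local_models]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(global_model)
    return [sum(w * m[k] for w, m in zip(weights, local_models)) for k in range(dim)]
```

An outlier client barely moves the aggregate, whereas plain FedAVG-style averaging would be dragged toward it.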
28

Zaimbashi, Amir. "Two Types of Distributed CFAR Detection Based on Weighting Functions in Fusion Center for Weibull Clutter." Journal of Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/648190.

Full text
Abstract:
Two types of distributed constant false alarm rate (CFAR) detection using binary and fuzzy weighting functions in the fusion center are developed. In both types of distributed detectors, it is assumed that the clutter parameters at the local sensors are unknown, and each local detector performs CFAR processing based on ML and OS CFAR processors before transmitting data to the fusion center. At the fusion center, the received data are weighted either by a binary or by a fuzzy weighting function and combined according to deterministic rules, constructing global test statistics. Moreover, for Weibull clutter, the expressions of the weighting functions, based on ML and OS CFAR processors in the local detectors, are obtained. In the binary type, we analyze various distributed detection schemes based on maximum, minimum, and summation rules in the fusion center. In the fuzzy type, we consider various distributed detectors based on the algebraic product, algebraic sum, probabilistic OR, and Lukasiewicz t-conorm fuzzy rules in the fusion center. The performance of the two types of distributed detectors is analyzed and compared in homogeneous and nonhomogeneous situations, with multiple targets or a clutter edge. The simulation results indicate the superior and robust performance of the fuzzy type in both homogeneous and nonhomogeneous situations.
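The fuzzy combination rules named above are standard t-norms and t-conorms; a minimal sketch of how two local membership values could be combined at the fusion center:

```python
def algebraic_product(a, b):
    """T-norm: conjunctive fusion of two membership values in [0, 1]."""
    return a * b

def algebraic_sum(a, b):
    """T-conorm: a + b - a*b; for two inputs this is also the probabilistic OR."""
    return a + b - a * b

def lukasiewicz_t_conorm(a, b):
    """Bounded sum, clipped at 1."""
    return min(1.0, a + b)
```

A t-norm demands agreement between sensors before declaring a target, while a t-conorm lets a single confident sensor drive the global decision.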
29

Wang, Zhan Feng, Hai Tao Su, Hong Shu Chen, Zhi Yi Hu, and Jie Liang Wang. "A Model of Target Detection in Variegated Natural Scene Based on Visual Attention." Applied Mechanics and Materials 333-335 (July 2013): 1213–18. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.1213.

Full text
Abstract:
Traditional visual attention models lose edge and texture information in target detection because they extract only color, brightness, and orientation features, and their direct-sum fusion rule ignores the differences between features. An improved model is proposed that introduces edge and texture features, together with weights in the fusion rule, into the visual computing model. First, on the basis of the brightness features obtained with the multi-scale pyramid of the ITTI visual computing model, a difference-of-Gaussians (DOG) filter is employed to extract edge information. Second, nonlinear classification is performed on six parameters: the mean and standard deviation of the gray contrast, correlation, and entropy derived from the GLCM. Finally, a global-enhancement fusion rule is employed to combine the multi-feature saliency maps. Comparison experiments on variegated natural scenes show that, relative to the ITTI model, the proposed model is more effective: the regions of interest and the order of attention shifts are more in line with human visual perception, and target detection in variegated natural scenes is strengthened. This further shows that introducing edge and texture features among the primary visual features is effective and that the saliency weighting factors introduced in the feature-map integration phase are reasonable.
30

Huang, Chengqiang, Youchang Yang, Bo Wu, and Weize Yu. "RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error." Journal of the Optical Society of America A 35, no. 6 (May 18, 2018): 969. http://dx.doi.org/10.1364/josaa.35.000969.

Full text
31

Rafajłowicz, Ewaryst, Mirosław Pawlak, and Ansgar Steland. "Nonlinear Image Processing and Filtering: A Unified Approach Based on Vertically Weighted Regression." International Journal of Applied Mathematics and Computer Science 18, no. 1 (March 1, 2008): 49–61. http://dx.doi.org/10.2478/v10006-008-0005-z.

Full text
Abstract:
A class of nonparametric smoothing kernel methods for image processing and filtering that possess edge-preserving properties is examined. The proposed approach is a nonlinearly modified version of the classical nonparametric regression estimates utilizing the concept of vertical weighting. The method unifies a number of known nonlinear image filtering and denoising algorithms such as bilateral and steering kernel filters. It is shown that vertically weighted filters can be realized by a structure of three interconnected radial basis function (RBF) networks. We also assess the performance of the algorithm by studying industrial images.
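Vertical weighting makes a kernel weight depend on the difference in pixel values (the "vertical" axis) as well as spatial distance; the bilateral filter is the best-known special case. A minimal 1-D sketch with illustrative window and Gaussian parameters:

```python
import math

def vertically_weighted_mean(signal, i, radius=2, sigma_s=1.0, sigma_v=1.0):
    """Bilateral-style smoothing: neighbours with similar values get more weight."""
    num = den = 0.0
    for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
        w_spatial = math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))
        w_vertical = math.exp(-((signal[j] - signal[i]) ** 2) / (2 * sigma_v ** 2))
        num += w_spatial * w_vertical * signal[j]
        den += w_spatial * w_vertical
    return num / den
```

The value-difference term collapses the weight of neighbours across a step edge, which is what preserves the edge while flat regions are smoothed.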
32

Tan, JingDong, and RuJing Wang. "Smooth Splicing: A Robust SNN-Based Method for Clustering High-Dimensional Data." Mathematical Problems in Engineering 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/295067.

Full text
Abstract:
Shared nearest neighbor (SNN) is a novel similarity measure that can overcome two difficulties: low similarities between samples and the differing densities of classes. At present, there are two popular SNN-similarity-based clustering methods: JP clustering and SNN density-based clustering. Their clustering results rely heavily on the weighting value of single edges, and thus they are very vulnerable. Motivated by the idea of smooth splicing in computational geometry, the authors design a novel SNN-similarity-based clustering algorithm within the framework of graph theory. Since it inherits the complementary intensity-smoothness principle, its generalizing ability surpasses those of the two previously mentioned methods. Experiments on text datasets show its effectiveness.
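SNN similarity, on which both JP clustering and the proposed method rest, counts shared members of two points' k-nearest-neighbour lists; a minimal sketch:

```python
def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (squared Euclidean, excluding i)."""
    d = sorted((sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
               for j in range(len(points)) if j != i)
    return {j for _, j in d[:k]}

def snn_similarity(points, i, j, k=3):
    """Number of shared members in the two k-nearest-neighbour lists."""
    return len(knn(points, i, k) & knn(points, j, k))
```

Two points in the same dense region share many neighbours even if their raw distance is moderate, while points from different clusters share few.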
33

Wu, Jiawei, and Hengyou Wang. "Structural Smoothing Low-Rank Matrix Restoration Based on Sparse Coding and Dual-Weighted Model." Entropy 24, no. 7 (July 7, 2022): 946. http://dx.doi.org/10.3390/e24070946.

Full text
Abstract:
Group sparse coding (GSC) uses the non-local similarity of images as a constraint, which can fully exploit the structure and group sparse features of images. However, it only imposes sparsity on the group coefficients, which limits the effectiveness of reconstructing real images. Low-rank regularized group sparse coding (LR-GSC) reduces this gap by imposing low-rankness on the group sparse coefficients. However, due to the use of non-local similarity, the edges and details of the images are over-smoothed, resulting in blocking artifacts. In this paper, we propose a low-rank matrix restoration model based on sparse coding and dual weighting. In addition, total variation (TV) regularization is integrated into the proposed model to maintain local structural smoothness and edge features. Finally, an optimization method based on the alternating direction method is developed to solve the proposed model. Extensive experimental results show that the proposed SDWLR-GSC algorithm outperforms state-of-the-art algorithms for image restoration when the images have large, sparse noise, such as salt-and-pepper noise.
34

Luo, Juanjuan, and Haohao Yin. "Research On Evolution Of Quality Chain Network Model Based On Multi-Attribute Weighting Of Complex Network." Advances in Engineering Technology Research 6, no. 1 (July 19, 2023): 448. http://dx.doi.org/10.56028/aetr.6.1.448.2023.

Full text
Abstract:
Using the linear optimal connection mechanism of complex network models, a construction method for a multi-attribute weighted quality chain network model is proposed. First, each enterprise node in the quality chain is defined to have its own quality-element attributes and node business relevance. Second, degree and betweenness are combined to consider the local and global characteristics of the network and thereby express a node's network attributes, and the edge-connection strategy algorithm of the multi-attribute weighted quality chain network is designed in combination with the weighting of each node's own attributes. Finally, the simulation results show that the quality chain network model established by this strategy exhibits scale-free characteristics and has higher network efficiency and stronger invulnerability, which proves that the newly established model is reasonable.
35

Lian, Xing, Erwei Zhao, Wei Zheng, Xiaodong Peng, Ang Li, Zheng Zhen, and Yan Wen. "Weighted Sparseness-Based Anomaly Detection for Hyperspectral Imagery." Sensors 23, no. 4 (February 11, 2023): 2055. http://dx.doi.org/10.3390/s23042055.

Full text
Abstract:
Anomaly detection of hyperspectral remote sensing data has recently become more attractive in hyperspectral image processing. The low-rank and sparse matrix decomposition-based anomaly detection algorithm (LRaSMD) exhibits poor detection performance in complex scenes with multiple background edges and noise. Therefore, this study proposes a weighted sparse hyperspectral anomaly detection method. First, using the idea of matrix decomposition in mathematics, the original hyperspectral data matrix is decomposed into three sub-matrices representing the low-rank background, the sparse component, and noise, respectively. Second, to suppress the noise interference in the complex background, we employed the low-rank background image as a reference, built a local spectral and spatial dictionary through a sliding-window strategy, reconstructed the HSI pixels of the original data, and extracted the sparse coefficients. We proposed the sparse coefficient divergence evaluation index (SCDI) as a weighting factor to weight the sparse anomaly map, obtaining a significant anomaly map that suppresses the background edges, noise, and other residues caused by decomposition and enhances the anomalous targets. Finally, anomalous pixels are segmented based on an adaptive threshold. The experimental results demonstrate that, on a real-scene hyperspectral dataset with a complicated background, the proposed method outperforms the existing representative algorithms in terms of detection performance.
36

Zhao, Tao, and Si-Xiang Zhang. "X-ray Image Enhancement Based on Nonsubsampled Shearlet Transform and Gradient Domain Guided Filtering." Sensors 22, no. 11 (May 27, 2022): 4074. http://dx.doi.org/10.3390/s22114074.

Full text
Abstract:
In this paper, we propose an image enhancement algorithm combining the non-subsampled shearlet transform and gradient-domain guided filtering to address the problems of low resolution, noise amplification, missing details, and weak edge-gradient retention in X-ray image enhancement. First, we apply histogram equalization and the non-subsampled shearlet transform to the original image, obtaining a low-frequency sub-band and several high-frequency sub-bands. Adaptive gamma correction with weighting distribution is used on the low-frequency sub-band to highlight image contour information and improve the overall contrast of the image. Gradient-domain guided filtering is applied to the high-frequency sub-bands to suppress image noise and highlight detail and edge information. Finally, we reconstruct all the processed sub-bands with the inverse non-subsampled shearlet transform to obtain the final enhanced image. The experimental results show that the proposed algorithm performs well in X-ray image enhancement, and its objective indices also have evident advantages over some classical algorithms.
37

Pang, Shanchen, Huanhuan Sun, Min Wang, Shuyu Wang, Sibo Qiao, and Neal N. Xiong. "An Efficient Computing Offloading Scheme Based on Privacy-Preserving in Mobile Edge Computing Networks." Wireless Communications and Mobile Computing 2022 (June 14, 2022): 1–15. http://dx.doi.org/10.1155/2022/5152598.

Full text
Abstract:
Computation offloading is an important technology to achieve lower-delay communication and improve the experience of service (EoS) in mobile edge computing (MEC). Due to the openness of wireless links and the limitation of computing resources in the mobile computing process, user privacy is easily leaked, and the completion time of tasks is difficult to guarantee. In this paper, we propose an efficient computing offloading algorithm based on privacy-preserving (ECOAP), which solves the privacy problem of offloading users through encryption technology. To avoid the algorithm falling into a local optimum and to reduce the offloading users' energy consumption and task completion delay in the encrypted case, we use the improved fast nondominated sorting genetic algorithm (INSGA-II) to obtain the optimal offloading strategy set. We then obtain the optimal offloading strategy from this set using min-max normalization and simple additive weighting. Compared with other algorithms, ECOAP can preserve user privacy and effectively reduce task completion time and user energy consumption.
38

Osoba, Osonde A., and Bart Kosko. "Fuzzy cognitive maps of public support for insurgency and terrorism." Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 14, no. 1 (January 2017): 17–32. http://dx.doi.org/10.1177/1548512916680779.

Full text
Abstract:
Feedback fuzzy cognitive maps (FCMs) can model the complex structure of public support for insurgency and terrorism (PSOT). FCMs are fuzzy causal signed digraphs that model degrees of causality in interwoven webs of feedback causality and policy variables. Their nonlinear dynamics permit forward-chaining inference from input causes and policy options to output effects. We show how a concept node causally affects downstream nodes through a weighted product of the intervening causal edge strengths. FCMs allow users to add detailed dynamics and feedback links directly to the causal model. Users can also fuse or combine FCMs from multiple experts by weighting and adding the underlying FCM fuzzy edge matrices. The combined FCM tends to better represent domain knowledge as the expert sample size increases if the expert sample approximates a random sample. Statistical or machine-learning algorithms can use numerical sample data to learn and tune a FCM's causal edges. A differential Hebbian learning law can approximate a PSOT FCM's directed edges of partial causality using time-series training data. The PSOT FCM adapts to the computational factor-tree PSOT model that Davis and O'Mahony based on prior social science research and case studies. Simulation experiments compare the PSOT models with the adapted FCM models.
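A basic FCM inference step multiplies the state vector by the causal edge matrix and squashes the result; a minimal sketch in which the logistic squashing, its steepness, and the tiny three-node edge matrix are all illustrative assumptions:

```python
import math

def fcm_step(state, edges, steepness=5.0):
    """One forward-chaining update: new activation of each node from its weighted causes."""
    n = len(state)
    nxt = []
    for j in range(n):
        s = sum(state[i] * edges[i][j] for i in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-steepness * s)))  # squash to (0, 1)
    return nxt

# Hypothetical 3-node map: node 0 excites node 1, node 1 inhibits node 2
E = [[0.0, 0.8, 0.0],
     [0.0, 0.0, -0.9],
     [0.0, 0.0, 0.0]]
state = fcm_step([1.0, 0.0, 0.0], E)
```

Iterating `fcm_step` propagates the effect of an input cause along the causal chain: node 1 is driven high first, and its negative edge then suppresses node 2 on the next step.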
39

Zhang, Heng, Faming Shao, Xiaohui He, Zihan Zhang, Yonggen Cai, and Shaohua Bi. "Research on Object Detection and Recognition Method for UAV Aerial Images Based on Improved YOLOv5." Drones 7, no. 6 (June 17, 2023): 402. http://dx.doi.org/10.3390/drones7060402.

Full text
Abstract:
In this paper, an object detection and recognition method based on improved YOLOv5 is proposed for application on unmanned aerial vehicle (UAV) aerial images. Firstly, we improved the traditional Gabor function to obtain Gabor convolutional kernels with better edge enhancement properties. We used eight Gabor convolutional kernels to enhance the object edges from eight directions, and the enhanced image has obvious edge features, thus providing the best object area for subsequent deep feature extraction work. Secondly, we added a coordinate attention (CA) mechanism to the backbone of YOLOv5. The plug-and-play lightweight CA mechanism considers information of both the spatial location and channel of features and can accurately capture the long-range dependencies of positions. CA is like the eyes of YOLOv5, making it easier for the network to find the region of interest (ROI). Thirdly, we replaced the Path Aggregation Network (PANet) with a Bidirectional Feature Pyramid Network (BiFPN) at the neck of YOLOv5. BiFPN performs weighting operations on different input feature layers, which helps to balance the contribution of each layer. In addition, BiFPN adds horizontally connected feature branches across nodes on a bidirectional feature fusion structure to fuse more in-depth feature information. Finally, we trained the overall improved YOLOv5 model on our integrated dataset LSDUVD and compared it with other models on multiple datasets. The results show that our method has the best convergence effect and mAP value, which demonstrates that our method has unique advantages in processing detection tasks of UAV aerial images.
40

Zhang, Guojun, Quansheng Li, Zhuhe Xu, and Yong Zhang. "Roof Fractures of Near-Vertical and Extremely Thick Coal Seams in Horizontally Grouped Top-Coal Drawing Method Based on the Theory of a Thin Plate." Sustainability 14, no. 16 (August 18, 2022): 10285. http://dx.doi.org/10.3390/su141610285.

Full text
Abstract:
During the mining of near-vertical seams, the movement and collapse of the "roof side" rock layer and the overlying coal seam, as well as the behavior of the "floor side" rock layer, are more complicated than in inclined and gently inclined coal seams, causing slippage or overturning damage. As the inclination of the coal seam increases, the impact of the failure of the immediate roof on the stope and roadway gradually becomes prominent, while the impact of the failure of the basic roof gradually weakens. Failure of the immediate roof of a near-vertical coal seam can cause a large area of coal and rock mass to suddenly rush into the working face and the two lanes, resulting in rapid roadway deformation, overturning of equipment and personnel, and even severe rock-pressure disasters, all of which pose a serious threat to safe coal mine production. It is therefore necessary to study the mechanical response of the immediate roof of near-vertical coal seams and to analyse the weighting process of sub-level mining in steeply inclined thick coal seams. For the roof stress distribution before the first weighting during upper- and lower-level mining, two plate models are established: one with four clamped edges, and one with the top edge simply supported and the remaining three edges clamped. For the roof stress distribution before periodic weighting, two further plate models are established: one with the bottom edge simply supported and the remaining three edges clamped, and one with two adjacent edges clamped and the other two simply supported. The Galerkin method is used to obtain an approximate solution of the deflection equation under the normal stress on the plate, and a roof failure criterion is then established based on the maximum tensile stress criterion and the generalized Hooke's law.
This paper uses FLAC3D finite element numerical simulation software, considering the characteristics of sub-level mining in steeply inclined thick coal seams. Orthogonal numerical simulation experiments are undertaken at three levels with different depths, coal seam angles, lateral pressure coefficients, and orientations of the maximum horizontal principal stress, and the roof stresses of the corresponding nine simulation experiments are translated into the normal stress on the steeply inclined roof. We derive the distribution law of the normal stress along the dip direction of the roof under different advancing distances and different sub-levels. The caving intervals of the first and periodic weightings are calculated under the uniform normal stress on the roof. The results can better predict the weighting behavior of the working face and ensure the safe, efficient, and sustainable mining of coal mines.
41

Siregar, Rosma, Kartika Sari, and Siti Julianita Siregar. "Penerapan Metode SAW (Simple Additive Weighting) Dalam Pemilihan Saham Terbaik Pada Sektor Teknologi." JURNAL MEDIA INFORMATIKA BUDIDARMA 6, no. 1 (January 25, 2022): 519. http://dx.doi.org/10.30865/mib.v6i1.3425.

Full text
Abstract:
Stocks are one of many investments favored by all groups because they promise high returns. But besides promising high returns, stocks can also carry a high risk of loss, which makes ordinary people afraid to start investing in the stock market. One way to prevent losses when buying stocks is to choose stocks with good fundamentals. To support this, an analysis is needed that can help make decisions in choosing the best stocks in the technology sector. The SAW (Simple Additive Weighting) method is used in this study; it is able to rank alternatives based on predetermined criteria. This study ranks the best stocks based on company fundamentals, namely EPS, PER, PBV, ROE, DER, and dividend yield. The result of this study is that EDGE stock is the best stock in the technology sector, with the highest score of 0.88. The purpose of this research is to help investors choose stocks before investing in technology companies.
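Simple additive weighting normalizes each criterion and sums the weighted scores; a minimal sketch with hypothetical numbers (benefit criteria such as EPS or ROE use x/max, cost criteria such as PER or DER use min/x):

```python
def saw_scores(matrix, weights, is_benefit):
    """matrix[i][j]: value of alternative i on criterion j; weights sum to 1."""
    ncrit = len(weights)
    cols = [[row[j] for row in matrix] for j in range(ncrit)]
    scores = []
    for row in matrix:
        s = 0.0
        for j, x in enumerate(row):
            # benefit criteria reward large values; cost criteria reward small ones
            norm = x / max(cols[j]) if is_benefit[j] else min(cols[j]) / x
            s += weights[j] * norm
        scores.append(s)
    return scores
```

The alternative with the highest score (at most 1.0, reached only by dominating every criterion) is ranked best.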
42

Ren, Yue Qing, and Zhi Qiang Zhang. "A Weighted Network Topology Model of WSN Based on Random Geometric Graph Method." Advanced Materials Research 962-965 (June 2014): 2898–902. http://dx.doi.org/10.4028/www.scientific.net/amr.962-965.2898.

Full text
Abstract:
Focusing on the weakness of modeling WSN topology by means of random graph theory, a new weighted topology model of WSNs based on random geometric theory is proposed in this paper. In the proposed network model, the weighting scheme for network edges takes into account the distance-decay property of communication signals. It also defines a differential weight that can embody the energy consumption of network communication. Based on simulations and calculations of topology measures, as well as comparison with other forms of networks, the results indicate that the proposed topology model not only describes the interrelated connections between different nodes but also shows that the node degree follows a Poisson distribution. It also has prominent clustering effects. These properties are consistent with real-world networks.
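A random geometric graph connects nodes that fall within radio range, and each edge can then be weighted by a distance-decay law to reflect signal attenuation; a minimal sketch in which the power-law exponent is an illustrative path-loss assumption:

```python
import math

def geometric_graph(nodes, radius, alpha=2.0):
    """Edges between nodes closer than `radius`, weighted by d**-alpha decay."""
    edges = {}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if 0 < d <= radius:
                edges[(i, j)] = d ** -alpha  # closer nodes -> stronger link
    return edges
```

Unlike a pure random graph, connectivity here depends on geometry, so nearby nodes form the triangles that give the model its clustering.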
43

Zhao, Liang, Zhengjie Wei, Yanting Li, Junwei Jin, and Xuan Li. "SEDG-Yolov5: A Lightweight Traffic Sign Detection Model Based on Knowledge Distillation." Electronics 12, no. 2 (January 6, 2023): 305. http://dx.doi.org/10.3390/electronics12020305.

Full text
Abstract:
Most existing traffic sign detection models achieve superior performance at the cost of high computational complexity and cannot be deployed on edge devices with limited computational capacity, so they cannot meet the direct needs of autonomous vehicles for both detection performance and efficiency. To address these concerns, this paper proposes an improved SEDG-Yolov5 traffic sign detection method based on knowledge distillation. Firstly, the Slicing Aided Hyper Inference method is used as a local offline data augmentation method for model training. Secondly, to solve the problems of high-dimensional feature information loss and high model complexity, the inverted residual structure ESGBlock with a fused attention mechanism is proposed, and a lightweight feature extraction backbone network is constructed based on it, while GSConv is introduced in the feature fusion layer to further reduce the computational complexity of the model. Eventually, an improved response-based objectness-scaled knowledge distillation method is proposed to retrain the traffic sign detection model and compensate for the degradation of detection accuracy due to light-weighting. Extensive experiments on two challenging traffic sign datasets show that our proposed method achieves a good balance between detection precision and detection speed with 2.77M parameters. Furthermore, the inference speed of our method achieves 370 FPS with TensorRT and 35.6 FPS with ONNX at FP16 precision, which satisfies the requirements for real-time sign detection and edge deployment.
44

Chen, Jenn-Shyong, Ching-Lun Su, Yen-Hsyang Chu, Ruey-Ming Kuong, and Jun-ichi Furumoto. "Measurement of Range-Weighting Function for Range Imaging of VHF Atmospheric Radars Using Range Oversampling." Journal of Atmospheric and Oceanic Technology 31, no. 1 (January 1, 2014): 47–61. http://dx.doi.org/10.1175/jtech-d-12-00236.1.

Full text
Abstract:
Multifrequency range imaging (RIM) used with the atmospheric radars at ultra- and very high-frequency (VHF) bands is capable of retrieving the power distribution of the backscattered radar echoes in the range direction, with some inversion algorithms such as the Capon method. The retrieved power distribution, however, is weighted by the range-weighting function (RWF). Modification of the retrieved power distribution with a theoretical RWF may cause overcorrection around the edge of the sampling gate. In view of this, an effective RWF that is in a Gaussian form and varies with the signal-to-noise ratio (SNR) of radar echoes has been proposed to mitigate the range-weighting effect and thereby enhance the continuity of the power distribution at gate boundaries. Based on the previously proposed concept, an improved approach utilizing the range-oversampled signals is addressed in this article to inspect the range-weighting effects at different range locations. The shape of the Gaussian RWF for describing the range-weighting effect was found to vary with the off-center range location in addition to the SNR of radar echoes—that is, the effective RWF for the RIM was SNR and range dependent. The use of SNR- and range-dependent RWF can be of help to improve the range imaging to some degree at the range location outside the range extent of a sampling gate defined by the pulse length. To verify the proposed approach, several radar experiments were carried out with the Chung-Li (24.9°N, 121.1°E) and middle and upper atmosphere (MU; 34.85°N, 136.11°E) VHF radars.
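The effective RWF is modelled as a Gaussian whose width varies with SNR (and, in the improved approach, with the off-center range); a minimal sketch with illustrative parameter values, not the radar-specific calibration:

```python
import math

def range_weighting(r, r0=0.0, sigma=75.0):
    """Gaussian range-weighting function centred on the gate centre r0 (metres)."""
    return math.exp(-((r - r0) ** 2) / (2 * sigma ** 2))

def corrected_power(retrieved, r, sigma):
    """Divide the retrieved power by the RWF value to undo the range weighting."""
    return retrieved / range_weighting(r, 0.0, sigma)
```

Dividing by the RWF restores the power attenuated towards the gate edge; if the assumed sigma is too small, this is exactly the overcorrection the abstract describes.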
45

Zhou, Guodong, Huailiang Zhang, and Raquel Martínez Lucas. "Compressed sensing image restoration algorithm based on improved SURF operator." Open Physics 16, no. 1 (December 31, 2018): 1033–45. http://dx.doi.org/10.1515/phys-2018-0124.

Full text
Abstract:
Aiming at the excellent descriptive ability of the SURF operator for local image features, and its weak ability to describe global features, a compressed sensing image restoration algorithm based on an improved SURF operator is proposed. The SURF feature vector set of the image is extracted, and the vector set is reduced to a single high-dimensional feature vector using a histogram algorithm; the HSV color histogram of the image is then extracted. The MSA image decomposition algorithm is used to obtain a sparse representation of the image feature vectors. The total variation curvature diffusion method and a Bayesian weighting method perform image restoration for the smooth data features and the locally similar texture features, respectively. A compressed sensing image restoration model is obtained using the Schatten-p norm, and color supplementation is performed on the model. The model is solved iteratively by alternating optimization, and the compressed sensing image is restored. The experimental results show that the proposed algorithm has good restoration performance, and the restored image has finer edge and texture structure and better visual effect.
46

Fionda, Valeria, and Giuseppe Pirrò. "Learning Triple Embeddings from Knowledge Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3874–81. http://dx.doi.org/10.1609/aaai.v34i04.5800.

Full text
Abstract:
Graph embedding techniques allow to learn high-quality feature vectors from graph structures and are useful in a variety of tasks, from node classification to clustering. Existing approaches have only focused on learning feature vectors for the nodes and predicates in a knowledge graph. To the best of our knowledge, none of them has tackled the problem of directly learning triple embeddings. The approaches that are closer to this task have focused on homogeneous graphs involving only one type of edge and obtain edge embeddings by applying some operation (e.g., average) on the embeddings of the endpoint nodes. The goal of this paper is to introduce Triple2Vec, a new technique to directly embed knowledge graph triples. We leverage the idea of line graph of a graph and extend it to the context of knowledge graphs. We introduce an edge weighting mechanism for the line graph based on semantic proximity. Embeddings are finally generated by adopting the SkipGram model, where sentences are replaced with graph walks. We evaluate our approach on different real-world knowledge graphs and compared it with related work. We also show an application of triple embeddings in the context of user-item recommendations.
47

Wei, Wei, and Bin Bin Xie. "Evaluation of the Ecological Security in Shiyang River Basin Based on Grid GIS and PSR Model." Advanced Materials Research 864-867 (December 2013): 1042–46. http://dx.doi.org/10.4028/www.scientific.net/amr.864-867.1042.

Full text
Abstract:
With the support of GIS technology, each index in the evaluation system was expressed spatially on a 100 m × 100 m grid. Subsequently, spatial principal component analysis and an analytic-hierarchy-process combination weighting method were used to express the spatial distribution of regional ecological security in the study area. The results show that: (1) The spatial distribution differences of regional ecological security are very obvious. (2) The maximum SESI is 87.14, in Liangzhou District and its surroundings; comparatively, the minimum value is 43.96, in northern Minqin and at the edge of the Tengger Desert. This weakens the interaction of ecological flows in the basin as well as the capacity for ecological restoration between landscape patches, so the polarization of watershed ecological security will become more serious. (3) The ecological security and sustainable development of the Shiyang River Basin are still in class III (threatened security) and class IV (insecurity).
APA, Harvard, Vancouver, ISO, and other styles
48

Hu, Xinchang, Pengbo Wang, Yanan Guo, Qian Han, and Xinkai Zhou. "Azimuth Ambiguity Suppression in SAR Images Based on VS-KSVD Dictionary Learning and Compressive Sensing." Journal of Physics: Conference Series 2083, no. 3 (November 1, 2021): 032049. http://dx.doi.org/10.1088/1742-6596/2083/3/032049.

Full text
Abstract:
The azimuth ambiguities that appear widely in Synthetic Aperture Radar (SAR) images cause a large number of false targets and seriously affect the quality of image interpretation. Owing to under-sampling in the Doppler domain, ambiguous energy is mixed with energy from the main zone in the time and frequency domains. To effectively suppress the ambiguous energy in SAR images without loss of resolution, this paper presents a novel method combining KSVD dictionary learning based on variance statistics (VS-KSVD) with compressed sensing (CS) reconstruction. According to the statistical characteristics of distributed targets, the dictionary atoms that represent the ambiguities are selected and suppressed by coefficient weighting, in which local window filtering is carried out to remove the block effect and optimize the edge information. Finally, high-resolution images with low ambiguity can be reconstructed by CS. The feasibility and effectiveness of the proposed approach are validated using satellite data and simulations of azimuth-ambiguity suppression.
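The coefficient-weighting step can be sketched as follows. Note the direction of the rule is our assumption for illustration (here, atoms with low coefficient variance across patches are treated as ambiguity-like and down-weighted); the paper's actual VS-KSVD selection rule, the local window filtering, and the CS reconstruction are not reproduced.

```python
# Hedged sketch of variance-statistics coefficient weighting over a learned
# dictionary's sparse codes. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.normal(size=(8, 100))   # sparse codes: 8 atoms x 100 patches
coeffs[2] *= 0.05                    # atom 2 plays the "ambiguity" role here

var = coeffs.var(axis=1)             # per-atom coefficient variance
weights = var / var.max()            # down-weight low-variance atoms (assumed rule)
filtered = coeffs * weights[:, None] # suppressed codes fed to reconstruction
```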
APA, Harvard, Vancouver, ISO, and other styles
49

Men, Qing Yi, and Guang Wei Cheng. "Mechanics Properties Influence on Ball Mill Rotator through Sliding Shoes Position Change." Advanced Materials Research 694-697 (May 2013): 864–67. http://dx.doi.org/10.4028/www.scientific.net/amr.694-697.864.

Full text
Abstract:
In this paper, ball mills supported on double sliding shoes and edge-transmission ball mills are compared. A mechanical model of the ball mill supported on double sliding shoes is established, and the shearing force, bending moment, and torque diagrams are derived as the positions of the double sliding shoes are altered. After the support position is improved, the appropriate rotator thickness is calculated based on the Third Strength Theory. The calculation results show that a rotator manufactured according to experience is too thick: the thickness of the ball mill rotator can be reduced, and hence its weight can be reduced too. This provides a theoretical basis for the lightweight design of large equipment.
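The shear and bending-moment diagrams mentioned above can be illustrated with the textbook case of a simply supported beam under uniform load, standing in for the rotator between its two sliding-shoe supports. The span, load, and discretisation are assumed values; the paper's actual model also includes torque.

```python
# Illustrative shear-force and bending-moment diagrams for a simply
# supported beam under a uniform load q over span L (assumed numbers).
import numpy as np

L = 10.0              # support span, m (assumed)
q = 5000.0            # uniform load, N/m (assumed)
x = np.linspace(0.0, L, 201)

R = q * L / 2                 # reaction at each support
V = R - q * x                 # shear force diagram V(x)
M = R * x - q * x**2 / 2      # bending moment diagram M(x)
M_max = q * L**2 / 8          # peak moment at midspan
```

Shifting a support position changes `R` and therefore reshapes both diagrams, which is what drives the thickness calculation.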
APA, Harvard, Vancouver, ISO, and other styles
50

Choi, Ho-Hyoung, Hyun-Soo Kang, and Byoung-Ju Yun. "Tone Mapping of High Dynamic Range Images Combining Co-Occurrence Histogram and Visual Salience Detection." Applied Sciences 9, no. 21 (November 1, 2019): 4658. http://dx.doi.org/10.3390/app9214658.

Full text
Abstract:
One of the significant qualities of human vision that differentiates it from computer vision is so-called attentional control, the innate ability of our eyes to select which visual stimuli to pay attention to at any moment in time. In this sense, the visual salience detection model, which is designed to simulate how the human visual system (HVS) perceives objects and scenes, is widely used for performing multiple vision tasks. This model is also in high demand in the tone mapping of high dynamic range images (HDRIs). Another distinct quality of the HVS is that our eyes blink and adjust brightness when objects are in sight. Likewise, HDR imaging is a technology in which a camera photographs an object several times by repeatedly opening and closing its iris, referred to as multiple exposures. In this way, computer vision is able to control brightness and depict a range of light intensities; HDRIs are the product of HDR imaging. This article proposes a novel tone mapping method using CCH-based saliency-aware and edge-aware weighting to efficiently detect image salience information in the given HDRIs. The two weighting methods are combined with a guided filter to generate a modified guided image filter (MGIF). The MGIF splits an image into a base layer and a detail layer, corresponding to the two elements of an image: illumination and reflection, respectively. The base layer is used to obtain global tone mapping and compress the dynamic range of the HDRI while preserving the sharp edges of objects, which has the remarkable effect of reducing halos in the resulting images. The proposed approach also has several distinct advantages: discriminative operation, tolerance to image size variation, and minimized parameter tuning.
According to the experimental results, the proposed method improves on its existing counterparts in subjective and quantitative quality as well as color reproduction.
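The base/detail split that the MGIF performs can be sketched with a plain self-guided filter. The saliency-aware and edge-aware weights from the paper are omitted, and the radius and epsilon values are assumptions for illustration, so this is a minimal stand-in rather than the authors' filter.

```python
# Self-guided (guide == input) guided filter splitting an image into an
# illumination-like base layer and a reflection-like detail layer.
import numpy as np

def box(img, r):
    # Mean filter of radius r via an integral image, with edge padding.
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, r=2, eps=1e-3):
    # Edge-preserving smoothing: a ~ 1 at strong edges, ~ 0 in flat regions.
    mean_I = box(I, r)
    var_I = box(I * I, r) - mean_I**2
    a = var_I / (var_I + eps)
    b = mean_I * (1.0 - a)
    return box(a, r) * I + box(b, r)

img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # toy gradient image
base = guided_filter(img)    # compressed by the global tone curve
detail = img - base          # added back to preserve sharp edges
```

Tone mapping would then compress `base` and recombine it with `detail`, which is what keeps edges sharp and suppresses halos.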
APA, Harvard, Vancouver, ISO, and other styles