Table of contents
Selection of scholarly literature on the topic "Gradient Smoothing"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Gradient Smoothing".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your citation of the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are present in the metadata.
Journal articles on the topic "Gradient Smoothing"
Fang, Shuai, Zhenji Yao, and Jing Zhang. "Scale and Gradient Aware Image Smoothing". IEEE Access 7 (2019): 166268–81. http://dx.doi.org/10.1109/access.2019.2953550.
Wang, Dongdong, Jiarui Wang, and Junchao Wu. "Superconvergent gradient smoothing meshfree collocation method". Computer Methods in Applied Mechanics and Engineering 340 (October 2018): 728–66. http://dx.doi.org/10.1016/j.cma.2018.06.021.
Zhou, Zhengyong, and Qi Yang. "An Active Set Smoothing Method for Solving Unconstrained Minimax Problems". Mathematical Problems in Engineering 2020 (June 24, 2020): 1–25. http://dx.doi.org/10.1155/2020/9108150.
Xu, Li, Cewu Lu, Yi Xu, and Jiaya Jia. "Image smoothing via L0 gradient minimization". ACM Transactions on Graphics 30, no. 6 (December 2011): 1–12. http://dx.doi.org/10.1145/2070781.2024208.
Burke, James V., Tim Hoheisel, and Christian Kanzow. "Gradient Consistency for Integral-convolution Smoothing Functions". Set-Valued and Variational Analysis 21, no. 2 (March 29, 2013): 359–76. http://dx.doi.org/10.1007/s11228-013-0235-6.
Pinilla, Samuel, Tamir Bendory, Yonina C. Eldar, and Henry Arguello. "Frequency-Resolved Optical Gating Recovery via Smoothing Gradient". IEEE Transactions on Signal Processing 67, no. 23 (December 1, 2019): 6121–32. http://dx.doi.org/10.1109/tsp.2019.2951192.
Avrashi, Jacob. "High order gradient smoothing towards improved C1 eigenvalues". Engineering Computations 12, no. 6 (June 1995): 513–28. http://dx.doi.org/10.1108/02644409510799749.
Wang, Bao, Difan Zou, Quanquan Gu, and Stanley J. Osher. "Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo". SIAM Journal on Scientific Computing 43, no. 1 (January 2021): A26–A53. http://dx.doi.org/10.1137/19m1294356.
Lin, Qihang, Xi Chen, and Javier Peña. "A smoothing stochastic gradient method for composite optimization". Optimization Methods and Software 29, no. 6 (March 13, 2014): 1281–301. http://dx.doi.org/10.1080/10556788.2014.891592.
He, Liangtian, and Yilun Wang. "Image smoothing via truncated ℓ0 gradient regularisation". IET Image Processing 12, no. 2 (February 1, 2018): 226–34. http://dx.doi.org/10.1049/iet-ipr.2017.0533.
Dissertations on the topic "Gradient Smoothing"
Lee, Chang-Kye. "Gradient smoothing in finite elasticity: near-incompressibility". Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/94491/.
Mao, Zirui. "A Novel Lagrangian Gradient Smoothing Method for Fluids and Flowing Solids". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1553252214052311.
Pierucci, Federico. "Optimisation non-lisse pour l'apprentissage statistique avec régularisation matricielle structurée". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM024/document.
Der volle Inhalt der QuelleTraining machine learning methods boils down to solving optimization problems whose objective functions often decomposes into two parts: a) the empirical risk, built upon the loss function, whose shape is determined by the performance metric and the noise assumptions; b) the regularization penalty, built upon a norm, or a gauge function, whose structure is determined by the prior information available for the problem at hand.Common loss functions, such as the hinge loss for binary classification, or more advanced loss functions, such as the one arising in classification with reject option, are non-smooth. Sparse regularization penalties such as the (vector) l1- penalty, or the (matrix) nuclear-norm penalty, are also non-smooth. However, basic non-smooth optimization algorithms, such as subgradient optimization or bundle-type methods, do not leverage the composite structure of the objective. The goal of this thesis is to study doubly non-smooth learning problems (with non-smooth loss functions and non-smooth regularization penalties) and first- order optimization algorithms that leverage composite structure of non-smooth objectives.In the first chapter, we introduce new regularization penalties, called the group Schatten norms, to generalize the standard Schatten norms to block- structured matrices. We establish the main properties of the group Schatten norms using tools from convex analysis and linear algebra; we retrieve in particular some convex envelope properties. We discuss several potential applications of the group nuclear-norm, in collaborative filtering, database compression, multi-label image tagging.In the second chapter, we present a survey of smoothing techniques that allow us to use first-order optimization algorithms designed for composite objectives decomposing into a smooth part and a non-smooth part. 
We also show how smoothing can be used on the loss function corresponding to the top-k accuracy, used for ranking and multi-class classification problems. We outline some first-order algorithms that can be used in combination with the smoothing technique: i) conditional gradient algorithms; ii) proximal gradient algorithms; iii) incremental gradient algorithms.In the third chapter, we study further conditional gradient algorithms for solving doubly non-smooth optimization problems. We show that an adaptive smoothing combined with the standard conditional gradient algorithm gives birth to new conditional gradient algorithms having the expected theoretical convergence guarantees. We present promising experimental results in collaborative filtering for movie recommendation and image categorization
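The central idea surveyed in this abstract, replacing a non-smooth term by a smooth surrogate so that plain first-order methods apply, can be illustrated in miniature. The sketch below is not taken from the thesis; the problem data, the Huber surrogate, and the smoothing parameter `mu` are illustrative assumptions. It smooths the non-smooth l1 penalty of a least-squares problem with a Huber function and then runs ordinary gradient descent on the smoothed composite objective:

```python
import numpy as np

def huber(x, mu):
    # Smooth surrogate of |x|: quadratic within [-mu, mu], linear outside.
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu) + mu / 2, np.abs(x))

def huber_grad(x, mu):
    # Gradient of the Huber surrogate; clipping recovers sign(x) for |x| > mu.
    return np.clip(x / mu, -1.0, 1.0)

# Smoothed l1-regularized least squares: 0.5*||Ax - b||^2 + lam * sum_i huber(x_i).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
b = A @ x_true
lam, mu = 0.05, 1e-3

# Gradient descent with step 1/L, where L bounds the smoothed gradient's
# Lipschitz constant (the lam/mu term comes from the Huber curvature).
L = np.linalg.norm(A, 2) ** 2 + lam / mu
x = np.zeros(20)
for _ in range(5000):
    x -= (1.0 / L) * (A.T @ (A @ x - b) + lam * huber_grad(x, mu))

obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(huber(x, mu))
```

As mu shrinks, the Huber term approaches the l1 penalty, but the gradient's Lipschitz constant grows like lam/mu, which is why the step size above must shrink with mu; this accuracy-versus-conditioning trade-off is exactly what the smoothing techniques surveyed in the thesis manage.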
Bhowmick, Sauradeep. "Advanced Smoothed Finite Element Modeling for Fracture Mechanics Analyses". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623240613376967.
Mayrink, Victor Teixeira de Melo. "Avaliação do algoritmo Gradient Boosting em aplicações de previsão de carga elétrica a curto prazo". Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3563.
Der volle Inhalt der QuelleApproved for entry into archive by Adriana Oliveira (adriana.oliveira@ufjf.edu.br) on 2017-03-07T15:06:57Z (GMT) No. of bitstreams: 1 victorteixeirademelomayrink.pdf: 2587774 bytes, checksum: 1319cc37a15480796050b618b4d7e5f7 (MD5)
Made available in DSpace on 2017-03-07T15:06:57Z (GMT). No. of bitstreams: 1 victorteixeirademelomayrink.pdf: 2587774 bytes, checksum: 1319cc37a15480796050b618b4d7e5f7 (MD5) Previous issue date: 2016-08-31
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais
The large-scale storage of electrical energy is still not feasible due to technical and economic constraints. Therefore, all energy to be consumed must be produced instantly; it is not possible to store surplus production, or to cover supply shortages with safety stocks, even for a short period of time. Thus, one of the main challenges of energy planning is to compute accurate forecasts of future demand. In this work, we present a model for short-term load forecasting. The methodology consists in building a prediction committee by applying the Gradient Boosting algorithm in combination with decision-tree models and the exponential smoothing technique. This strategy comprises a supervised learning method that fits the forecasting model on historical energy-consumption data, recorded temperatures, and calendar variables. The proposed models were tested on two different datasets and showed very good performance compared with results published in other recent works.
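The committee described in this abstract combines gradient boosting over decision trees with exponential smoothing. The sketch below is an illustrative reconstruction, not the thesis's implementation: the synthetic load series, the hour/temperature features, and all hyperparameters are assumptions. It corrects a simple exponential-smoothing baseline with gradient-boosted regression stumps fitted to the baseline's residuals:

```python
import numpy as np

def exp_smooth_forecast(y, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecasts."""
    f = np.empty_like(y)
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def fit_stump(X, r):
    """Least-squares regression stump (one feature, one threshold) on residuals r."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:   # exclude max so both sides are non-empty
            left = X[:, j] <= thr
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best, best_err = (j, thr, lv, rv), err
    return best

def predict_stump(stump, X):
    j, thr, lv, rv = stump
    return np.where(X[:, j] <= thr, lv, rv)

# Synthetic hourly load: daily cycle + temperature effect + noise (all assumed).
rng = np.random.default_rng(0)
n = 24 * 14
hours = np.arange(n) % 24
temp = 20 + 5 * np.sin(2 * np.pi * np.arange(n) / n) + rng.normal(0, 1, n)
load = 100 + 25 * np.sin(2 * np.pi * hours / 24) + 0.8 * temp + rng.normal(0, 2, n)

X = np.column_stack([hours, temp])
baseline = exp_smooth_forecast(load)

# Gradient boosting for squared loss: each stump is fitted to the current residuals.
pred, lr = baseline.copy(), 0.5
for _ in range(30):
    stump = fit_stump(X, load - pred)
    pred += lr * predict_stump(stump, X)
```

For squared loss, the negative gradient is exactly the residual, so each boosting round fits a stump to what the committee still gets wrong; here the stumps pick up the systematic hourly pattern that the exponential-smoothing baseline lags behind.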
Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.
Huang, Chih-Ping, and 黃志平. "Piecewise Linear Function Solution Space and Modified-Gradient Smoothing Domain Method". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51737947931004919031.
Cheng, Ching Wen, and 鄭景文. "Simplification Of Centroid Gradient Smoothing Domain Method Using Finite Element Basis". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/52320644772318750123.
Jhong, Jhih-Syong, and 鍾智雄. "A Study of Gradient Smoothing Methods for Boundary Value Problems on Triangular Meshes". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/93662964677132066520.
Hsieh, Hsun, and 謝洵. "Automatic tumor segmentation of breast ultra-sound images using a distance-regularized level-set evolution method with initial contour obtained by guided image filter, L0 gradient minimization smoothing pre-processing, and morphological features". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/t6z6cs.
Der volle Inhalt der Quelle國立清華大學
電機工程學系所
105
Due to speckle noise and low contrast in breast ultrasound (BUS) images, it is hard to locate the contour of a tumor with a single method. In this thesis, a new method for finding an initial contour is proposed, which improves the segmentation results of distance-regularized level-set evolution (DRLSE) on BUS images. The new method improves on the algorithm proposed by Tsai-Wen Niu, which searches for an initial contour based on local minima in the images. When a BUS image contains calcification, that algorithm may fail to find the initial contour, leading to a poor segmentation result because the initial contour lies in the wrong place. We therefore obtain a larger initial contour by applying a series of image-smoothing methods and binarization, which eliminates weak edges and adjusts the contrast of the BUS image. In addition, some images without local minima can be handled successfully by the proposed method. However, pixel values in such images are similar, so it can be hard to separate the tumor region from the non-tumor region by pixel-value differences alone. These obstacles are overcome by computing differences of length and pixel value in the suspect regions, and the ranking outcome is further improved using morphological features. After applying DRLSE, our initial contour reaches the tumor region more accurately. To evaluate the segmentation, it is compared with DRLSE results obtained from the initial contours of Tsai-Wen Niu's method, an expansion DRLSE method, and a contraction DRLSE method, using three evaluation metrics: ME, RFAE, and MHD. The experimental results indicate that the proposed method generally outperforms the other methods. The initial contour may still include non-tumor regions when the tumor boundary is too ambiguous; even so, the proposed method drastically reduces the number of DRLSE iterations and the computation time.
According to the experimental results, the proposed method has three advantages over the other methods. First, it sets the initial contour automatically, which is more efficient than setting it manually. Second, the region enclosed by the initial contour is much larger than those obtained by the other methods, which reduces the computation time and the number of DRLSE iterations. Third, if the tumor boundary is distinct, the new initial contour improves the segmentation result of DRLSE.
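As a rough illustration of the pre-processing idea described above (smoothing followed by binarization to obtain a generous initial region), the sketch below applies a box filter and Otsu thresholding to a synthetic noisy image. This is a simplified stand-in under stated assumptions, not the thesis's pipeline: the actual method uses a guided image filter, L0 gradient minimization, and DRLSE, none of which are reproduced here, and the "tumor" is just a bright disk with additive noise.

```python
import numpy as np

def box_smooth(img, k=5):
    """k-by-k box filter via an integral image (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/column for inclusion-exclusion
    H, W = img.shape
    return (c[k:k + H, k:k + W] - c[:H, k:k + W]
            - c[k:k + H, :W] + c[:H, :W]) / (k * k)

def otsu_threshold(img):
    """Otsu's threshold: maximize between-class variance over a 256-bin histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    bins = (np.arange(256) + 0.5) / 256
    w0 = np.cumsum(hist)
    w1 = hist.sum() - w0
    csum = np.cumsum(hist * bins)
    m0 = csum / np.maximum(w0, 1)
    m1 = (csum[-1] - csum) / np.maximum(w1, 1)
    return bins[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

# Synthetic "lesion": bright disk on a dark background with heavy noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 14 ** 2
img = np.clip(np.where(disk, 0.8, 0.1) + rng.normal(0, 0.15, (64, 64)), 0.0, 1.0)

# Smoothing suppresses the speckle-like noise so a global threshold
# yields a coarse binary region usable as an initial contour.
sm = box_smooth(img, 5)
mask = sm > otsu_threshold(sm)
```

In the thesis's setting, the boundary of such a binary region would then be handed to DRLSE as the initial contour; the point of the generous pre-segmentation is that the level set only needs to refine a region that already covers the tumor.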
Books on the topic "Gradient Smoothing"
Geological Survey (U.S.), ed. Combining edge-gradient information to improve adaptive discontinuity-preserving smoothing of multispectral images. Reston, VA (521 National Center, Reston 22092): U.S. Geological Survey, 1994.
Den vollen Inhalt der Quelle findenBuchteile zum Thema "Gradient Smoothing"
Bui, Tinh Quoc. „A Smoothing Gradient-Enhanced Damage Model“. In Computational and Experimental Simulations in Engineering, 91–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27053-7_9.
Der volle Inhalt der QuelleWelk, Martin. „Diffusion, Pre-smoothing and Gradient Descent“. In Lecture Notes in Computer Science, 78–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_7.
Der volle Inhalt der QuelleZhang, He, François Petitjean und Wray Buntine. „Hierarchical Gradient Smoothing for Probability Estimation Trees“. In Advances in Knowledge Discovery and Data Mining, 222–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_18.
Der volle Inhalt der QuelleHowlett, John, und Alan Zundel. „Size Function Smoothing Using an Element Area Gradient“. In Proceedings of the 18th International Meshing Roundtable, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04319-2_1.
Der volle Inhalt der QuelleCox, Ingemar J., Sunita Hingorani, Bruce M. Maggs und Satish B. Rao. „Stereo Without Disparity Gradient Smoothing: a Bayesian Sensor Fusion Solution“. In BMVC92, 337–46. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-3201-1_35.
Der volle Inhalt der QuelleAhmad, Zohaib, Kaizhe Nie, Junfei Qiao und Cuili Yang. „Batch Gradient Training Method with Smoothing $$l_0$$ Regularization for Echo State Networks“. In Machine Learning and Intelligent Communications, 491–500. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32388-2_42.
Der volle Inhalt der QuelleChen, Li, Hongzhi Zhang, Dongwei Ren, David Zhang und Wangmeng Zuo. „Fast Augmented Lagrangian Method for Image Smoothing with Hyper-Laplacian Gradient Prior“. In Communications in Computer and Information Science, 12–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45643-9_2.
Der volle Inhalt der QuelleUl Rahman, Jamshaid, Akhtar Ali, Masood Ur Rehman und Rafaqat Kazmi. „A Unit Softmax with Laplacian Smoothing Stochastic Gradient Descent for Deep Convolutional Neural Networks“. In Communications in Computer and Information Science, 162–74. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5232-8_14.
Der volle Inhalt der QuelleIqbal, Mansoor, Muhammad Awais Rehman, Naveed Iqbal und Zaheer Iqbal. „Effect of Laplacian Smoothing Stochastic Gradient Descent with Angular Margin Softmax Loss on Face Recognition“. In Communications in Computer and Information Science, 549–61. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5232-8_47.
Der volle Inhalt der QuelleNajid, Najib Mohamed, Marouane Alaoui-Selsouli und Abdemoula Mohafid. „Dantzig-Wolfe Decomposition and Lagrangean Relaxation-Based Heuristics for an Integrated Production and Maintenance Planning with Time Windows“. In Handbook of Research on Modern Optimization Algorithms and Applications in Engineering and Economics, 601–29. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9644-0.ch023.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Gradient Smoothing"
Akai, Yuji, Toshihiro Shibata, Ryo Matsuoka und Masahiro Okuda. „L0 Smoothing Based on Gradient Constraints“. In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451436.
Der volle Inhalt der QuellePinilla, Samuel, Jorge Bacca, Jhon Angarita und Henry Arguello. „Phase Retrieval via Smoothing Projected Gradient Method“. In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461445.
Der volle Inhalt der QuelleGudkov, Vladimir, und Ilia Moiseev. „Image Smoothing Algorithm Based on Gradient Analysis“. In 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). IEEE, 2020. http://dx.doi.org/10.1109/usbereit48449.2020.9117646.
Der volle Inhalt der QuelleFeng Huang, Hu Cheng und S. Vijayakumar. „Gradient weighted smoothing for MRI intensity correction“. In 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference. IEEE, 2005. http://dx.doi.org/10.1109/iembs.2005.1617109.
Der volle Inhalt der QuelleJiao, Jian, Hong Lu, Zijian Wang, Wenqiang Zhang und Lizhe Qi. „L0 Gradient Smoothing and Bimodal Histogram Analysis“. In MMAsia '19: ACM Multimedia Asia. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3338533.3366554.
Der volle Inhalt der QuelleKnyazev, Andrew, und Alexander Malyshev. „Conjugate gradient acceleration of non-linear smoothing filters“. In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418194.
Der volle Inhalt der QuelleChalasani, Rakesh, und Jose C. Principe. „Dynamic sparse coding with smoothing proximal gradient method“. In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854995.
Der volle Inhalt der QuelleHeiden, Eric, Luigi Palmieri, Sven Koenig, Kai O. Arras und Gaurav S. Sukhatme. „Gradient-Informed Path Smoothing for Wheeled Mobile Robots“. In 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018. http://dx.doi.org/10.1109/icra.2018.8460818.
Der volle Inhalt der QuelleSubhan, Fazli, Salman Ahmed und Khalid Ashraf. „Extended Gradient Predictor and Filter for smoothing RSSI“. In 2014 16th International Conference on Advanced Communication Technology (ICACT). Global IT Research Institute (GIRI), 2014. http://dx.doi.org/10.1109/icact.2014.6779148.
Der volle Inhalt der QuelleLiu, Jun, Ming Yan, Jinshan Zeng und Tieyong Zeng. „Image Smoothing Via Gradient Sparsity and Surface Area Minimization“. In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8804271.