A selection of scholarly literature on the topic "Gradient Smoothing"

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Gradient Smoothing".

Next to every work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work is generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the publication as a PDF and read its abstract online, where these are available in the metadata.

Journal articles on the topic "Gradient Smoothing"

1

Fang, Shuai, Zhenji Yao, and Jing Zhang. "Scale and Gradient Aware Image Smoothing". IEEE Access 7 (2019): 166268–81. http://dx.doi.org/10.1109/access.2019.2953550.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wang, Dongdong, Jiarui Wang, and Junchao Wu. "Superconvergent gradient smoothing meshfree collocation method". Computer Methods in Applied Mechanics and Engineering 340 (October 2018): 728–66. http://dx.doi.org/10.1016/j.cma.2018.06.021.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Zhou, Zhengyong, and Qi Yang. "An Active Set Smoothing Method for Solving Unconstrained Minimax Problems". Mathematical Problems in Engineering 2020 (June 24, 2020): 1–25. http://dx.doi.org/10.1155/2020/9108150.

Full text of the source
Abstract:
In this paper, an active set smoothing function based on the plus function is constructed for the maximum function. The active set strategy used in the smoothing function reduces the number of gradient and Hessian evaluations of the component functions in the optimization. Combining the active set smoothing function, a simple adjustment rule for the smoothing parameters, and an unconstrained minimization method, an active set smoothing method is proposed for solving unconstrained minimax problems. The active set smoothing function is continuously differentiable, and its gradient is locally Lipschitz continuous and strongly semismooth. Under the boundedness assumption on the level set of the objective function, the convergence of the proposed method is established. Numerical experiments show that the proposed method is feasible and efficient, particularly for minimax problems with very many component functions.
APA, Harvard, Vancouver, ISO, and other citation styles
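
The smoothing idea summarized in this abstract can be tried out quickly: the sketch below smooths the maximum function with the standard log-sum-exp construction (an illustrative stand-in; the paper's own scheme builds on the plus function with an active set strategy):

```python
import numpy as np

def smoothed_max(f_values, mu):
    """Log-sum-exp smoothing of F(x) = max_i f_i(x).

    Satisfies F <= F_mu <= F + mu*log(m), so F_mu -> F as mu -> 0."""
    f = np.asarray(f_values, dtype=float)
    shift = f.max()  # subtract the max for numerical stability
    return shift + mu * np.log(np.exp((f - shift) / mu).sum())

def smoothed_max_weights(f_values, mu):
    """Softmax weights w_i; grad F_mu(x) = sum_i w_i * grad f_i(x),
    a convex combination of the component gradients."""
    f = np.asarray(f_values, dtype=float)
    w = np.exp((f - f.max()) / mu)
    return w / w.sum()
```
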
4

Xu, Li, Cewu Lu, Yi Xu, and Jiaya Jia. "Image smoothing via L0 gradient minimization". ACM Transactions on Graphics 30, no. 6 (December 2011): 1–12. http://dx.doi.org/10.1145/2070781.2024208.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Burke, James V., Tim Hoheisel, and Christian Kanzow. "Gradient Consistency for Integral-convolution Smoothing Functions". Set-Valued and Variational Analysis 21, no. 2 (March 29, 2013): 359–76. http://dx.doi.org/10.1007/s11228-013-0235-6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Pinilla, Samuel, Tamir Bendory, Yonina C. Eldar, and Henry Arguello. "Frequency-Resolved Optical Gating Recovery via Smoothing Gradient". IEEE Transactions on Signal Processing 67, no. 23 (December 1, 2019): 6121–32. http://dx.doi.org/10.1109/tsp.2019.2951192.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Avrashi, Jacob. "High order gradient smoothing towards improved C1 eigenvalues". Engineering Computations 12, no. 6 (June 1995): 513–28. http://dx.doi.org/10.1108/02644409510799749.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Wang, Bao, Difan Zou, Quanquan Gu, and Stanley J. Osher. "Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo". SIAM Journal on Scientific Computing 43, no. 1 (January 2021): A26–A53. http://dx.doi.org/10.1137/19m1294356.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Lin, Qihang, Xi Chen, and Javier Peña. "A smoothing stochastic gradient method for composite optimization". Optimization Methods and Software 29, no. 6 (March 13, 2014): 1281–301. http://dx.doi.org/10.1080/10556788.2014.891592.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

He, Liangtian, and Yilun Wang. "Image smoothing via truncated ℓ0 gradient regularisation". IET Image Processing 12, no. 2 (February 1, 2018): 226–34. http://dx.doi.org/10.1049/iet-ipr.2017.0533.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Dissertations on the topic "Gradient Smoothing"

1

Lee, Chang-Kye. "Gradient smoothing in finite elasticity: near-incompressibility". Thesis, Cardiff University, 2016. http://orca.cf.ac.uk/94491/.

Full text of the source
Abstract:
This thesis presents the extension of the gradient smoothing technique for finite element approximation (the so-called Smoothed Finite Element Method (S-FEM)) and its bubble-enhanced version to non-linear problems involving large deformations in nearly-incompressible and incompressible hyperelastic materials. The Finite Element Method (FEM) presents numerous challenges for soft matter applications, such as incompressibility, complex geometries, and mesh distortion from large deformation. S-FEM was introduced to overcome these challenges of FEM. The smoothed strains and the smoothed deformation gradients are evaluated on smoothing domains selected by either edge information, nodal information, or face information. This thesis aims at extending S-FEM to finite elasticity as a means of alleviating locking and avoiding mesh distortion. S-FEM employs a “cubic” bubble enhancement of the element shape functions with edge-based and face-based S-FEMs, adding a linear displacement field at the centre of the element. Bubble-enhanced S-FEM thereby affords a simple and efficient implementation. This thesis reports the properties and performance of the proposed method for quasi-incompressible hyperelastic materials. Benchmark tests show that the method is well suited to soft matter simulation, overcoming the deleterious locking phenomenon and maintaining accuracy on distorted meshes.
APA, Harvard, Vancouver, ISO, and other citation styles
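
The smoothed gradients described in this abstract are area averages of element gradients over smoothing domains. Below is a minimal node-based sketch for piecewise-linear fields on a triangle mesh, a simplified illustration with assumed area/3 weights, not the thesis's edge- or face-based S-FEM implementation:

```python
import numpy as np

def element_gradients(nodes, tris, u):
    """Constant gradient of a piecewise-linear field on each triangle."""
    grads, areas = [], []
    for tri in tris:
        p = nodes[tri]                                  # 3x2 vertex coordinates
        B = np.array([p[1] - p[0], p[2] - p[0]]).T      # columns are edge vectors
        areas.append(0.5 * abs(np.linalg.det(B)))
        du = np.array([u[tri[1]] - u[tri[0]], u[tri[2]] - u[tri[0]]])
        grads.append(np.linalg.solve(B.T, du))          # solves B^T g = du
    return np.array(grads), np.array(areas)

def node_smoothed_gradients(nodes, tris, u):
    """Node-based gradient smoothing: average element gradients over the
    patch of triangles around each node, weighting each contribution by
    area/3 (a simple stand-in for S-FEM smoothing domains)."""
    grads, areas = element_gradients(nodes, tris, u)
    acc = np.zeros((len(nodes), 2))
    wsum = np.zeros(len(nodes))
    for g, a, tri in zip(grads, areas, tris):
        for v in tri:
            acc[v] += (a / 3.0) * g
            wsum[v] += a / 3.0
    return acc / wsum[:, None]
```
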
2

Mao, Zirui. "A Novel Lagrangian Gradient Smoothing Method for Fluids and Flowing Solids". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1553252214052311.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Pierucci, Federico. "Optimisation non-lisse pour l'apprentissage statistique avec régularisation matricielle structurée". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM024/document.

Full text of the source
Abstract:
Training machine learning methods boils down to solving optimization problems whose objective functions often decompose into two parts: a) the empirical risk, built upon the loss function, whose shape is determined by the performance metric and the noise assumptions; b) the regularization penalty, built upon a norm or a gauge function, whose structure is determined by the prior information available for the problem at hand. Common loss functions, such as the hinge loss for binary classification, or more advanced loss functions, such as the one arising in classification with reject option, are non-smooth. Sparse regularization penalties such as the (vector) l1-penalty, or the (matrix) nuclear-norm penalty, are also non-smooth. However, basic non-smooth optimization algorithms, such as subgradient optimization or bundle-type methods, do not leverage the composite structure of the objective. The goal of this thesis is to study doubly non-smooth learning problems (with non-smooth loss functions and non-smooth regularization penalties) and first-order optimization algorithms that leverage the composite structure of non-smooth objectives. In the first chapter, we introduce new regularization penalties, called the group Schatten norms, to generalize the standard Schatten norms to block-structured matrices. We establish the main properties of the group Schatten norms using tools from convex analysis and linear algebra; we retrieve in particular some convex envelope properties. We discuss several potential applications of the group nuclear norm, in collaborative filtering, database compression, and multi-label image tagging. In the second chapter, we present a survey of smoothing techniques that allow us to use first-order optimization algorithms designed for composite objectives decomposing into a smooth part and a non-smooth part. We also show how smoothing can be used on the loss function corresponding to the top-k accuracy, used for ranking and multi-class classification problems. We outline some first-order algorithms that can be used in combination with the smoothing technique: i) conditional gradient algorithms; ii) proximal gradient algorithms; iii) incremental gradient algorithms. In the third chapter, we study conditional gradient algorithms in more depth for solving doubly non-smooth optimization problems. We show that an adaptive smoothing combined with the standard conditional gradient algorithm yields new conditional gradient algorithms with the expected theoretical convergence guarantees. We present promising experimental results in collaborative filtering for movie recommendation and image categorization.
APA, Harvard, Vancouver, ISO, and other citation styles
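
The hinge loss named in this abstract is a convenient test case for the smoothing techniques surveyed in the second chapter. The sketch below applies the standard Nesterov-type smoothing max_{0<=u<=1} u(1-z) - (mu/2)u^2, an illustration of the general construction rather than code from the thesis:

```python
import numpy as np

def smooth_hinge(z, mu=0.1):
    """Nesterov-smoothed hinge loss: max_{0<=u<=1} u*(1-z) - (mu/2)*u^2.

    Recovers the ordinary hinge loss max(0, 1-z) as mu -> 0 and has a
    (1/mu)-Lipschitz gradient, so plain first-order methods apply."""
    m = 1.0 - np.asarray(z, dtype=float)
    u = np.clip(m / mu, 0.0, 1.0)   # closed-form argmax of the inner problem
    return u * m - 0.5 * mu * u ** 2

def smooth_hinge_grad(z, mu=0.1):
    """d/dz of the smoothed hinge; equals -u*(z)."""
    m = 1.0 - np.asarray(z, dtype=float)
    return -np.clip(m / mu, 0.0, 1.0)
```
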
4

Bhowmick, Sauradeep. "Advanced Smoothed Finite Element Modeling for Fracture Mechanics Analyses". University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623240613376967.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Mayrink, Victor Teixeira de Melo. "Avaliação do algoritmo Gradient Boosting em aplicações de previsão de carga elétrica a curto prazo". Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3563.

Full text of the source
Abstract:
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais
The storage of electrical energy is still not feasible on a large scale due to technical and economic issues. Therefore, all energy to be consumed must be produced instantly; it is not possible to store the production leftover, or either to cover any supply shortages with safety stocks, even for a short period of time. Thus, one of the main challenges of energy planning consists in computing accurate forecasts for the future demand. In this paper, we present a model for short-term load forecasting. The methodology consists in composing a prediction committee by applying the Gradient Boosting algorithm in combination with decision tree models and the exponential smoothing technique. This strategy comprises a supervised learning method that adjusts the forecasting model based on historical energy consumption data, recorded temperatures, and calendar variables. The proposed models were tested on two different datasets and showed a good performance when compared with results published in other recent papers.
APA, Harvard, Vancouver, ISO, and other citation styles
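
The pipeline summarized in this abstract combines exponentially smoothed series with a tree-based Gradient Boosting model. The sketch below shows the general shape of such a setup with synthetic data and a hypothetical feature choice; it is not the thesis's exact model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def exp_smooth(x, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

def make_features(load, temp, hour_of_day):
    """Stack lagged load, its smoothed version, temperature, and a calendar
    variable into a design matrix (hypothetical feature choice)."""
    smoothed = exp_smooth(load)
    X = np.column_stack([load[:-1], smoothed[:-1], temp[1:], hour_of_day[1:]])
    y = load[1:]   # predict the next observation
    return X, y

# illustrative usage on synthetic daily-cycle data
rng = np.random.default_rng(0)
hours = np.arange(500) % 24
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, 500)
temp = 25 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, 500)
X, y = make_features(load, temp, hours)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, y)
```
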
6

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.

Full text of the source
Abstract:
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual one coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal for arbitrarily small accuracy. Finally, the support vector regression task is obtained as a particular case of the general optimization problem and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
APA, Harvard, Vancouver, ISO, and other citation styles
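
The fast gradient algorithm referred to here belongs to the family of Nesterov's accelerated methods for smooth convex objectives, such as the doubly regularized dual constructed in the thesis. A generic sketch follows; the thesis's exact scheme may differ:

```python
import numpy as np

def fast_gradient(grad, x0, L, n_iter=500):
    """Nesterov's fast gradient method for an L-smooth convex function.

    `grad` returns the gradient at a point, `L` is a Lipschitz constant
    of the gradient (1/mu-type constants arise from double smoothing)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_next = y - grad(y) / L                          # gradient step at momentum point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x
```
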
7

Huang, Chih-Ping, and 黃志平. "Piecewise Linear Function Solution Space and Modified-Gradient Smoothing Domain Method". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/51737947931004919031.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Cheng, Ching Wen, and 鄭景文. "Simplification Of Centroid Gradient Smoothing Domain Method Using Finite Element Basis". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/52320644772318750123.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Jhong, Jhih-Syong, and 鍾智雄. "A Study of Gradient Smoothing Methods for Boundary Value Problems on Triangular Meshes". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/93662964677132066520.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Hsieh, Hsun, and 謝洵. "Automatic tumor segmentation of breast ultra-sound images using a distance-regularized level-set evolution method with initial contour obtained by guided image filter, L0 gradient minimization smoothing pre-processing, and morphological features". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/t6z6cs.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
ROC academic year 105
Due to the speckle noise and low contrast in breast ultrasound images, it is hard to locate the contour of the tumor by using a single method. In this thesis, a new method for finding an initial contour is proposed, which can improve the result of DRLSE on the segmentation of BUS images. The new method focuses on improving the algorithm proposed by Tsai-Wen Niu, which is a way to search for an initial contour based on the local minimum in the images. When the BUS images contain calcification, the search for an initial contour with such an algorithm may fail, leading to a poor segmentation result when the initial contour is in the wrong place. Therefore, we acquire a bigger initial contour by using a series of image smoothing methods and binarization, which can eliminate the weak edges and adjust the contrast in BUS images. In addition, some images without a local minimum can be successfully detected by using the proposed method. However, the pixel values in these images are similar, so it might be hard to accurately separate the tumor region from the non-tumor region by the difference of pixel values. These obstacles are conquered by calculating the difference of length and pixel value in the suspect region. The ranking outcome is improved by using the morphological features. After applying DRLSE, our initial contour can reach the tumor region more accurately. To evaluate the result of segmentation, it is compared with the outcome of DRLSE obtained from different initial contours proposed by Tsai-Wen Niu, the expansion DRLSE method, and the contraction DRLSE method using three evaluation metrics: ME, RFAE, and MHD. The experimental results indicate that the proposed method is generally better than the other methods. However, the initial contour might contain non-tumor regions when the edge of the tumor's boundary is too ambiguous; even so, the proposed method drastically reduces the number of DRLSE iterations and the computation time. According to the experimental results, the proposed method has three advantages over the other methods. First, it sets the initial contour automatically, which is more efficient than setting the initial contour manually. Second, the region of the initial contour is much bigger than those obtained by the other methods, which reduces the computation time and the number of DRLSE iterations. Third, if the tumor boundary is distinct, the new initial contour can improve the segmentation result of DRLSE.
APA, Harvard, Vancouver, ISO, and other citation styles
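
The initial-contour step summarized above rests on smoothing followed by binarization. Below is a deliberately rough sketch of that generic idea, using Gaussian smoothing and Otsu thresholding from scikit-image as stand-ins; the thesis itself uses guided filtering, L0 gradient minimization, and morphological features, which are not reproduced here:

```python
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import find_contours

def rough_initial_contour(image, sigma=3.0):
    """Smooth, binarize, and return the longest contour as a crude
    initial curve for level-set evolution (assumed simplification)."""
    smoothed = gaussian(image.astype(float), sigma=sigma)
    mask = smoothed < threshold_otsu(smoothed)    # tumors appear dark in BUS images
    contours = find_contours(mask.astype(float), 0.5)
    return max(contours, key=len) if contours else None
```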

Books on the topic "Gradient Smoothing"

1

Geological Survey (U.S.), ed. Combining edge-gradient information to improve adaptive discontinuity-preserving smoothing of multispectral images. Reston, VA (521 National Center, Reston 22092): U.S. Geological Survey, 1994.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Gradient Smoothing"

1

Bui, Tinh Quoc. "A Smoothing Gradient-Enhanced Damage Model". In Computational and Experimental Simulations in Engineering, 91–96. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27053-7_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Welk, Martin. "Diffusion, Pre-smoothing and Gradient Descent". In Lecture Notes in Computer Science, 78–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-75549-2_7.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Zhang, He, François Petitjean, and Wray Buntine. "Hierarchical Gradient Smoothing for Probability Estimation Trees". In Advances in Knowledge Discovery and Data Mining, 222–34. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-47426-3_18.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Howlett, John, and Alan Zundel. "Size Function Smoothing Using an Element Area Gradient". In Proceedings of the 18th International Meshing Roundtable, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04319-2_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Cox, Ingemar J., Sunita Hingorani, Bruce M. Maggs, and Satish B. Rao. "Stereo Without Disparity Gradient Smoothing: a Bayesian Sensor Fusion Solution". In BMVC92, 337–46. London: Springer London, 1992. http://dx.doi.org/10.1007/978-1-4471-3201-1_35.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Ahmad, Zohaib, Kaizhe Nie, Junfei Qiao, and Cuili Yang. "Batch Gradient Training Method with Smoothing l0 Regularization for Echo State Networks". In Machine Learning and Intelligent Communications, 491–500. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32388-2_42.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Chen, Li, Hongzhi Zhang, Dongwei Ren, David Zhang, and Wangmeng Zuo. "Fast Augmented Lagrangian Method for Image Smoothing with Hyper-Laplacian Gradient Prior". In Communications in Computer and Information Science, 12–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45643-9_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Ul Rahman, Jamshaid, Akhtar Ali, Masood Ur Rehman, and Rafaqat Kazmi. "A Unit Softmax with Laplacian Smoothing Stochastic Gradient Descent for Deep Convolutional Neural Networks". In Communications in Computer and Information Science, 162–74. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5232-8_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Iqbal, Mansoor, Muhammad Awais Rehman, Naveed Iqbal, and Zaheer Iqbal. "Effect of Laplacian Smoothing Stochastic Gradient Descent with Angular Margin Softmax Loss on Face Recognition". In Communications in Computer and Information Science, 549–61. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5232-8_47.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Najid, Najib Mohamed, Marouane Alaoui-Selsouli, and Abdemoula Mohafid. "Dantzig-Wolfe Decomposition and Lagrangean Relaxation-Based Heuristics for an Integrated Production and Maintenance Planning with Time Windows". In Handbook of Research on Modern Optimization Algorithms and Applications in Engineering and Economics, 601–29. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9644-0.ch023.

Full text of the source
Abstract:
In this chapter, two approaches are developed to solve the integrated production planning and maintenance problem. Moreover, some propositions and mathematical properties are suggested and applied in the proposed heuristics to solve the problem. The first heuristic is based on Dantzig-Wolfe decomposition. The Dantzig-Wolfe decomposition principle reformulates the original model, and column generation is then used to deal with the huge number of variables of the reformulated model. A simple rounding heuristic and a smoothing procedure are finally carried out in order to obtain integer solutions. The second heuristic is based on Lagrangean relaxation of the capacity constraints and sub-gradient optimization. At every step of the sub-gradient method, feasibility and improvement procedures are applied to the solution of the Lagrangean problem. Computational experiments are carried out to show the results obtained by our approaches, compared to those of a commercial solver.
APA, Harvard, Vancouver, ISO, and other citation styles
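
The sub-gradient optimization mentioned in this abstract updates the Lagrange multipliers of the relaxed capacity constraints. Below is a schematic sketch of such a projected subgradient loop, with a hypothetical `solve_lagrangian` oracle standing in for the chapter's Lagrangean subproblem:

```python
import numpy as np

def subgradient_multiplier_updates(solve_lagrangian, A, b, n_iter=100, t0=1.0):
    """Projected subgradient ascent on the Lagrangian dual of relaxed
    constraints A x <= b (schematic; `solve_lagrangian(lam)` is assumed
    to return a minimizer x of the relaxation and the dual value L(lam))."""
    lam = np.zeros(len(b))
    best = -np.inf
    for k in range(1, n_iter + 1):
        x, value = solve_lagrangian(lam)           # L(lam) = min_x f(x) + lam.(Ax - b)
        best = max(best, value)                    # dual values lower-bound the optimum
        g = A @ x - b                              # subgradient of the dual at lam
        lam = np.maximum(0.0, lam + (t0 / k) * g)  # diminishing step, project to lam >= 0
        if np.linalg.norm(g) == 0:                 # primal feasible: stop early
            break
    return lam, best
```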

Conference papers on the topic "Gradient Smoothing"

1

Akai, Yuji, Toshihiro Shibata, Ryo Matsuoka, and Masahiro Okuda. "L0 Smoothing Based on Gradient Constraints". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451436.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Pinilla, Samuel, Jorge Bacca, Jhon Angarita, and Henry Arguello. "Phase Retrieval via Smoothing Projected Gradient Method". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461445.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Gudkov, Vladimir, and Ilia Moiseev. "Image Smoothing Algorithm Based on Gradient Analysis". In 2020 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT). IEEE, 2020. http://dx.doi.org/10.1109/usbereit48449.2020.9117646.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Feng Huang, Hu Cheng, and S. Vijayakumar. "Gradient weighted smoothing for MRI intensity correction". In 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference. IEEE, 2005. http://dx.doi.org/10.1109/iembs.2005.1617109.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Jiao, Jian, Hong Lu, Zijian Wang, Wenqiang Zhang, and Lizhe Qi. "L0 Gradient Smoothing and Bimodal Histogram Analysis". In MMAsia '19: ACM Multimedia Asia. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3338533.3366554.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Knyazev, Andrew, and Alexander Malyshev. "Conjugate gradient acceleration of non-linear smoothing filters". In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418194.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Chalasani, Rakesh, and Jose C. Principe. "Dynamic sparse coding with smoothing proximal gradient method". In ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014. http://dx.doi.org/10.1109/icassp.2014.6854995.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Heiden, Eric, Luigi Palmieri, Sven Koenig, Kai O. Arras, and Gaurav S. Sukhatme. "Gradient-Informed Path Smoothing for Wheeled Mobile Robots". In 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018. http://dx.doi.org/10.1109/icra.2018.8460818.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Subhan, Fazli, Salman Ahmed, and Khalid Ashraf. "Extended Gradient Predictor and Filter for smoothing RSSI". In 2014 16th International Conference on Advanced Communication Technology (ICACT). Global IT Research Institute (GIRI), 2014. http://dx.doi.org/10.1109/icact.2014.6779148.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Liu, Jun, Ming Yan, Jinshan Zeng, and Tieyong Zeng. "Image Smoothing Via Gradient Sparsity and Surface Area Minimization". In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8804271.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles