Academic literature on the topic 'Approximate norm descent methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Approximate norm descent methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Approximate norm descent methods"

1

Morini, Benedetta, Margherita Porcelli, and Philippe L. Toint. "Approximate norm descent methods for constrained nonlinear systems." Mathematics of Computation 87, no. 311 (May 11, 2017): 1327–51. http://dx.doi.org/10.1090/mcom/3251.

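No abstract is available in the metadata for this entry, but the title points to derivative-free methods that ask only for an approximate (relaxed, nonmonotone) decrease of the residual norm ||F(x)|| at each step. As a rough orientation for the reader, here is a minimal sketch of an iteration in that spirit; the projection operator, acceptance test, and steplength rule below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def approx_norm_descent(F, x0, project, max_iter=200, gamma=1e-4, tol=1e-8):
    """Illustrative approximate norm descent for F(x) = 0 on a convex set.

    `project` maps a trial point back onto the feasible set; the residual
    itself serves as a derivative-free search direction, and a step is
    accepted under a relaxed, nonmonotone decrease condition on ||F||.
    """
    x = project(np.asarray(x0, dtype=float))
    Fx = F(x)
    eta = 1.0  # relaxation term; driven to zero so descent becomes exact
    for _ in range(max_iter):
        norm_Fx = np.linalg.norm(Fx)
        if norm_Fx <= tol:
            break
        sigma = 1.0
        while True:
            x_new = project(x - sigma * Fx)
            F_new = F(x_new)
            s = x_new - x
            # approximate (nonmonotone) norm descent test
            if np.linalg.norm(F_new) <= (1 + eta) * norm_Fx - gamma * (s @ s):
                break
            sigma *= 0.5
            if sigma < 1e-12:
                break
        x, Fx = x_new, F_new
        eta *= 0.5
    return x

# toy usage: F(x) = x**3 - 1 restricted to the box [0, 2]
x = approx_norm_descent(lambda x: x**3 - 1, np.array([2.0]),
                        lambda x: np.clip(x, 0.0, 2.0))
```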
2

Jin, Yang, Li, and Liu. "Sparse Recovery Algorithm for Compressed Sensing Using Smoothed l0 Norm and Randomized Coordinate Descent." Mathematics 7, no. 9 (September 9, 2019): 834. http://dx.doi.org/10.3390/math7090834.

Abstract:
Compressed sensing theory is widely used in the fields of fault signal diagnosis and image processing. Sparse recovery is one of its core concepts. In this paper, we propose a sparse recovery algorithm using a smoothed l0 norm and randomized coordinate descent (RCD), and apply it to sparse signal recovery and image denoising. We adopt a new strategy to express the (P0) problem approximately and put forward a sparse recovery algorithm using RCD. In computer simulation experiments, we compare the performance of this algorithm to other typical methods. The results show that our algorithm achieves higher precision in sparse signal recovery, as well as a higher signal-to-noise ratio (SNR) and faster convergence in image denoising.
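As a rough illustration of the two ingredients named in the abstract, here is a minimal sketch of minimizing a least-squares term plus a Gaussian-smoothed l0 surrogate by randomized coordinate descent; the surrogate, the fixed smoothing parameter sigma, and the coordinate stepsizes are simplifying assumptions rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_l0_rcd(A, b, lam=0.1, sigma=0.5, n_iter=5000):
    """Sketch: minimize 0.5*||Ax - b||^2 + lam * sum(1 - exp(-x^2/(2 sigma^2)))
    by randomized coordinate descent. The Gaussian surrogate approaches the
    l0 "norm" as sigma -> 0; here sigma is kept fixed for simplicity."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # running residual
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature of the data term
    for _ in range(n_iter):
        j = rng.integers(n)
        # gradient of the objective with respect to coordinate j
        g = A[:, j] @ r + lam * (x[j] / sigma**2) * np.exp(-x[j]**2 / (2 * sigma**2))
        t = 1.0 / (col_sq[j] + lam / sigma**2)   # heuristic stepsize
        x_j_new = x[j] - t * g
        r += A[:, j] * (x_j_new - x[j])          # update residual incrementally
        x[j] = x_j_new
    return x

# toy usage: recover a 3-sparse vector from 30 random measurements
n, k = 100, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
A = rng.normal(size=(30, n)) / np.sqrt(30)
b = A @ x_true
x_hat = smoothed_l0_rcd(A, b)
```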
3

Xu, Kai, and Zhi Xiong. "Nonparametric Tensor Completion Based on Gradient Descent and Nonconvex Penalty." Symmetry 11, no. 12 (December 12, 2019): 1512. http://dx.doi.org/10.3390/sym11121512.

Abstract:
Existing tensor completion methods all require some hyperparameters, which determine each method's performance and are difficult to tune. In this paper, we propose a novel nonparametric tensor completion method, which formulates tensor completion as an unconstrained optimization problem and designs an efficient iterative method to solve it. In each iteration, we not only calculate the missing entries with the aid of data correlation, but also account for the low rank of the tensor and the convergence speed of the iteration. Our iteration is based on the gradient descent method and approximates the gradient descent direction with tensor matricization and singular value decomposition. Since every dimension of a tensor plays a symmetric role, the optimal unfolding direction may differ between iterations, so we select it in each iteration by the scaled latent nuclear norm. Moreover, we design a formula for the iteration step size based on the nonconvex penalty. During the iterative process, we store the tensor in sparse form and adopt the power method to compute the maximum singular value quickly. Experiments on image inpainting and link prediction show that our method is competitive with six state-of-the-art methods.
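The unfolding-selection step described in the abstract can be pictured with a small sketch; the exact form of the scaled latent nuclear norm used in the paper may differ from the scaling assumed here.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def best_mode_by_scaled_nuclear_norm(T):
    """Sketch of the mode-selection step: score each unfolding by its
    nuclear norm scaled by 1/sqrt(smaller dimension) (an assumed scaling)
    and return the most promising mode to use in the next descent step."""
    scores = []
    for k in range(T.ndim):
        M = unfold(T, k)
        s = np.linalg.svd(M, compute_uv=False)
        scores.append(s.sum() / np.sqrt(min(M.shape)))
    return int(np.argmin(scores))

T = np.random.default_rng(1).normal(size=(8, 9, 10))
k = best_mode_by_scaled_nuclear_norm(T)
```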
4

Ko, Dongnam, and Enrique Zuazua. "Model predictive control with random batch methods for a guiding problem." Mathematical Models and Methods in Applied Sciences 31, no. 08 (July 2021): 1569–92. http://dx.doi.org/10.1142/s0218202521500329.

Abstract:
We model, simulate and control the guiding problem for a herd of evaders under the action of repulsive drivers. The problem is formulated in an optimal control framework, where the drivers (controls) aim to guide the evaders (states) to a desired region of the Euclidean space. The numerical simulation of such models quickly becomes unfeasible for a large number of interacting agents, as the number of interactions grows as O(N²) for N agents. To reduce the computational cost to O(N), we use the Random Batch Method (RBM), which provides a computationally feasible approximation of the dynamics. First, the considered time interval is divided into a number of subintervals. In each subinterval, the RBM randomly divides the set of particles into small subsets (batches), considering only the interactions inside each batch. Due to the averaging effect, the RBM approximation converges to the exact dynamics in the L2-expectation norm as the length of the subintervals goes to zero. For this approximated dynamics, the corresponding optimal control can be computed efficiently using a classical gradient descent. The resulting control is not optimal for the original system, but for a reduced RBM model. We therefore adopt a Model Predictive Control (MPC) strategy to handle the error in the dynamics. This leads to a semi-feedback control strategy, where the control is applied to the original system only for a short time interval, after which the optimal control for the next interval is computed from the state of the (controlled) original dynamics. Through numerical experiments we show that the combination of RBM and MPC leads to a significant reduction of the computational cost while preserving the capacity to control the overall dynamics.
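The core of the Random Batch Method is easy to sketch: within each subinterval, shuffle the agents into small batches and integrate only in-batch interactions, reducing the per-step cost from O(N²) to O(N·P) for batch size P. The kernel, integrator, and parameters below are toy assumptions, not the paper's guiding model.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbm_step(x, force, batch_size, dt):
    """One subinterval of the Random Batch Method: permute the agents,
    split them into small batches, and integrate only in-batch pairwise
    interactions (forward Euler here, for simplicity)."""
    n = len(x)
    idx = rng.permutation(n)
    dx = np.zeros_like(x)
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        for i in batch:
            for j in batch:
                if i != j:
                    # O(P^2) work per batch instead of O(N^2) overall
                    dx[i] += force(x[i], x[j]) / (len(batch) - 1)
    return x + dt * dx

# toy usage: 1-D agents with a repulsive interaction kernel
x = rng.normal(size=200)
for _ in range(100):
    x = rbm_step(x, lambda xi, xj: (xi - xj) / (1.0 + (xi - xj) ** 2), 2, 1e-2)
```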
5

Utomo, Rukmono Budi. "METODE NUMERIK STEPEST DESCENT DENGAN DIRECTION DAN NORM RERATA ARITMATIKA" [Steepest descent numerical method with arithmetic-mean direction and norm]. AKSIOMA Journal of Mathematics Education 5, no. 2 (January 3, 2017): 128. http://dx.doi.org/10.24127/ajpm.v5i2.673.

Abstract:
This research investigates a steepest descent numerical method whose direction and norm are based on the arithmetic mean. We begin by reviewing the standard steepest descent method and its algorithm, and then construct the new variant using the arithmetic-mean direction and norm. The paper also contains worked numerical examples for both methods and an analysis of the results.
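For orientation, here is the classical steepest descent baseline that the paper modifies; the arithmetic-mean direction and norm of the proposed variant are not described in the metadata, so only the standard method is sketched.

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Classical steepest descent: move along the negative gradient until
    the gradient norm is small. (The paper's variant replaces the descent
    direction and norm with arithmetic-mean-based ones; those details are
    not available in the metadata, so this is only the baseline.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x = x - step * g
    return x

# toy usage: minimize f(x, y) = x^2 + 2*y^2, gradient (2x, 4y)
x_min = steepest_descent(lambda v: np.array([2 * v[0], 4 * v[1]]), [1.0, 1.0])
```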
6

Goh, B. S. "Approximate Greatest Descent Methods for Optimization with Equality Constraints." Journal of Optimization Theory and Applications 148, no. 3 (November 16, 2010): 505–27. http://dx.doi.org/10.1007/s10957-010-9765-3.

7

Xiao, Yunhai, Chunjie Wu, and Soon-Yi Wu. "Norm descent conjugate gradient methods for solving symmetric nonlinear equations." Journal of Global Optimization 62, no. 4 (July 11, 2014): 751–62. http://dx.doi.org/10.1007/s10898-014-0218-7.

8

Qiu, Yixuan, and Xiao Wang. "Stochastic Approximate Gradient Descent via the Langevin Algorithm." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5428–35. http://dx.doi.org/10.1609/aaai.v34i04.5992.

Abstract:
We introduce a novel and efficient algorithm called stochastic approximate gradient descent (SAGD) as an alternative to stochastic gradient descent for cases where unbiased stochastic gradients cannot be trivially obtained. Traditional methods for such problems rely on general-purpose sampling techniques such as Markov chain Monte Carlo, which typically require manual parameter tuning and do not work efficiently in practice. Instead, SAGD makes use of the Langevin algorithm to construct stochastic gradients that are biased in finite steps but accurate asymptotically, enabling us to establish a theoretical convergence guarantee for SAGD. Inspired by our theoretical analysis, we also provide useful guidelines for its practical implementation. Finally, we show that SAGD performs well experimentally on popular statistical and machine learning problems such as the expectation-maximization algorithm and variational autoencoders.
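A minimal sketch of the SAGD idea, on a stand-in problem rather than the paper's experiments: an unadjusted Langevin chain supplies approximate (biased but asymptotically exact) samples, which feed a stochastic gradient step for the outer parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

def langevin_sample(grad_log_p, z0, eps=1e-2, n_steps=100):
    """Unadjusted Langevin algorithm: samples are biased for finite step
    size and chain length but accurate asymptotically, which is the regime
    in which SAGD's convergence guarantee is established."""
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z += 0.5 * eps * grad_log_p(z) + np.sqrt(eps) * rng.normal(size=z.shape)
    return z

# Stand-in problem (not the paper's experiments): minimize
# F(theta) = E_{z ~ N(theta, 1)}[(z - 2)^2], whose optimum is theta = 2,
# with Langevin draws replacing exact samples in the gradient estimate
# E[2 (z - 2)] = dF/dtheta.
theta = 0.0
for _ in range(300):
    z = langevin_sample(lambda u: -(u - theta), np.array([theta]))
    theta -= 0.05 * 2.0 * (z.mean() - 2.0)  # stochastic gradient step
```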
9

Yang, Yin, and Yunqing Huang. "Spectral-Collocation Methods for Fractional Pantograph Delay-Integrodifferential Equations." Advances in Mathematical Physics 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/821327.

Abstract:
We propose and analyze a spectral Jacobi-collocation approximation for fractional-order integrodifferential equations of Volterra type with pantograph delay. The fractional derivative is described in the Caputo sense. We provide a rigorous error analysis for the collocation method, which shows that the error of the approximate solution decays exponentially in the L∞ norm and the weighted L2 norm. Numerical examples are given to illustrate the theoretical results.
10

Poggio, Tomaso, Andrzej Banburski, and Qianli Liao. "Theoretical issues in deep networks." Proceedings of the National Academy of Sciences 117, no. 48 (June 9, 2020): 30039–45. http://dx.doi.org/10.1073/pnas.1907369117.

Abstract:
While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about the approximation power of deep networks, the dynamics of optimization, and good out-of-sample performance despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory, both shallow and deep networks are known to approximate any continuous function, in general at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss, we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of the normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus, there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum norm infima of the loss. This result is especially relevant because it has recently been shown that, for overparameterized models, selecting a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus, our results imply that gradient descent in deep networks minimizes the expected error.
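The unit-norm-constrained dynamics mentioned in the abstract can be illustrated with a toy projected gradient step on the sphere; this is an illustration of the general idea, not the paper's analysis, and the loss below is an arbitrary quadratic.

```python
import numpy as np

def constrained_gradient_step(w, grad, lr=0.1):
    """Gradient step for minimizing a loss subject to ||w|| = 1: move along
    the component of the gradient tangent to the unit sphere, then
    renormalize. This mirrors the abstract's point that the dynamics of
    normalized weights match gradient descent on the norm-constrained
    problem."""
    g = grad(w)
    g_tan = g - (g @ w) * w          # project out the radial component
    w_new = w - lr * g_tan
    return w_new / np.linalg.norm(w_new)

# toy usage: minimize 0.5*(w1^2 + 4*w2^2) on the unit circle
w = np.array([0.6, 0.8])
for _ in range(100):
    w = constrained_gradient_step(w, lambda w: np.array([w[0], 4 * w[1]]))
```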

Dissertations / Theses on the topic "Approximate norm descent methods"

1

Sgattoni, Cristina. "Solving systems of nonlinear equations via spectral residual methods." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238325.

Abstract:
This thesis addresses the numerical solution of systems of nonlinear equations via spectral residual methods. Spectral residual methods are iterative procedures; they use the residual vector as the search direction and a spectral steplength, i.e., a steplength related to the spectrum of the average matrices associated with the Jacobian of the system. Such procedures are widely studied and employed since they are derivative-free and have a low cost per iteration. The first aim of the work is to analyze the properties of the spectral residual steplengths and study how they affect the performance of the methods, both theoretically and experimentally. The main contributions in this direction are: a theoretical analysis of the steplengths proposed in the literature and of their impact on the methods' behaviour; and an analysis of the performance of spectral methods under various rules for updating the steplengths. We propose and extensively test different steplength strategies. Rules based on adaptive strategies that suitably combine small and large steplengths prove by far more effective than rules based on static choices of the steplength. Numerical experience is conducted on sequences of nonlinear systems arising from rolling contact models, which play a central role in many important applications such as rolling bearings and wheel-rail interaction. Solving these models gives rise to sequences consisting of a large number of medium-size nonlinear systems and represents a relevant benchmark test set for the purpose of the thesis. The second purpose of the thesis is to propose a variant of the derivative-free spectral residual method used in the first part and to obtain a general scheme that is globally convergent under more general conditions. The robustness of the new method is potentially improved with respect to the previous version. Numerical experiments are conducted both on problems arising in rolling contact models and on a set of problems commonly used for testing nonlinear system solvers.
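The spectral residual iteration at the heart of the thesis can be sketched compactly; the steplength shown is one common Barzilai-Borwein-type choice, and the safeguards and nonmonotone line search that make such methods robust in practice are omitted.

```python
import numpy as np

def spectral_residual(F, x0, max_iter=500, tol=1e-10):
    """Sketch of a derivative-free spectral residual iteration for F(x) = 0:
    the residual is the search direction, and the steplength is a spectral
    coefficient sigma = (s^T s)/(s^T y), related to the spectrum of the
    average Jacobians along the last step."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        x_new = x - sigma * Fx
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        denom = s @ y
        sigma = (s @ s) / denom if abs(denom) > 1e-16 else 1.0
        x, Fx = x_new, F_new
    return x

# toy usage: solve x^2 + y = 3, x + y^2 = 5 (solution (1, 2))
x = spectral_residual(lambda v: np.array([v[0]**2 + v[1] - 3.0,
                                          v[0] + v[1]**2 - 5.0]),
                      np.array([1.0, 1.0]))
```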

Book chapters on the topic "Approximate norm descent methods"

1

Hiriart-Urruty, Jean-Baptiste, and Claude Lemaréchal. "Inner Construction of the Approximate Subdifferential: Methods of ε-Descent." In Grundlehren der mathematischen Wissenschaften, 195–222. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/978-3-662-06409-2_5.

2

Akulenko, Leonid D. "Approximate Synthesis of Optimal Control for Perturbed Systems with Invariant Norm." In Problems and Methods of Optimal Control, 223–80. Dordrecht: Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-011-1194-2_7.

3

Albreem, Mahmoud. "Efficient Iterative Massive MIMO Detectors Based on Iterative Matrix Inversion Methods." In Design Methodologies and Tools for 5G Network Development and Application, 175–95. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4610-9.ch009.

Abstract:
Massive multiple-input multiple-output (MIMO) is a key technology in fifth-generation (5G) communication systems. Although maximum likelihood (ML) detection obtains optimal performance, it is prohibitive to realize because of its high computational complexity. Linear detectors are an alternative solution, but they contain a matrix inversion, which is not hardware friendly. Several methods have been proposed to approximate or avoid the computation of an exact matrix inversion. This chapter gathers those methods and studies their applicability in massive MIMO systems, so that a generalist in communication systems can differentiate between algorithms drawn from a wide range of solutions. It presents the performance-complexity profile of detectors based on the Neumann series (NS), Newton iteration (NI), successive over-relaxation (SOR), Gauss-Seidel (GS), Jacobi (JA), Richardson (RI), optimized coordinate descent (OCD), and conjugate gradient (CG) methods in 8×64, 16×64, and 32×64 MIMO configurations with 64-QAM modulation.
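Of the approximate inversion schemes surveyed in the chapter, the Neumann series is the easiest to sketch; the matrix sizes, channel model, and series order below are illustrative assumptions.

```python
import numpy as np

def neumann_inverse(A, order=3):
    """Neumann-series approximation of A^{-1} as used by low-complexity MIMO
    detectors: with D the diagonal part of A (the Gram matrix of the channel),
    A^{-1} ~= sum_{k=0}^{K} (I - D^{-1} A)^k D^{-1}. The series converges when
    the spectral radius of (I - D^{-1} A) is below one, e.g. for the nearly
    diagonal Gram matrices typical of massive MIMO."""
    D_inv = np.diag(1.0 / np.diag(A))
    M = np.eye(A.shape[0]) - D_inv @ A
    approx, term = D_inv.copy(), D_inv.copy()
    for _ in range(order):
        term = M @ term
        approx += term
    return approx

# toy usage: approximate zero-forcing detection x_hat ~ (H^H H)^{-1} H^H y
rng = np.random.default_rng(4)
H = (rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))) / np.sqrt(2)
A = H.conj().T @ H
x_hat = neumann_inverse(A) @ (H.conj().T @ rng.normal(size=64))
```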
4

Umamaheswari, P. "Water-Level Prediction Utilizing Datamining Techniques in Watershed Management." In Advances in IT Standards and Standardization Research, 261–75. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9795-8.ch017.

Abstract:
Massive wastage of water occurs due to irregular heavy rainfall and water released from dams. Many of the techniques previously used to predict water level are statistical methods, which give only approximate results. To overcome this disadvantage, a gradient descent algorithm is used, which gives more accurate results and provides higher performance. The k-means algorithm is used for clustering; it iteratively assigns each data point to one of k groups according to the given attributes. The clustered output is refined for further processing so that the data can be extracted as ordered year-wise and month-wise datasets. Clustering accuracy has been improved to 90.22%. The gradient descent algorithm is applied to reduce the error. It also helps in predicting the amount of water to be stored in the watershed for future usage. Watershed development appears to be helpful in terms of groundwater recharge, which benefits farmers, and the stored water can also be used for domestic purposes.
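The prediction step described above reduces to fitting a regression model by gradient descent; the sketch below uses a plain linear model on synthetic month/level data, which stands in for the chapter's clustered watershed datasets (the features and learning rate are placeholders).

```python
import numpy as np

def gradient_descent_fit(X, y, lr=0.01, n_iter=2000):
    """Fit a linear model by minimizing mean squared error with gradient
    descent, as in the water-level prediction step sketched above."""
    Xb = np.c_[np.ones(len(X)), X]            # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Xb.T @ (Xb @ w - y) / len(y)
        w -= lr * grad
    return w

# toy usage: synthetic month-wise water levels
rng = np.random.default_rng(5)
months = rng.uniform(1, 12, size=120)
level = 3.0 + 0.2 * months + rng.normal(scale=0.1, size=120)
w = gradient_descent_fit(months[:, None], level)
```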
5

Mkhytaryan, Olha, and Inna Rodionova. "FORMATION OF READING COMPETENCE OF FUTURE PHILOLOGISTS IN THE CONTEXT OF TECHNOLOGICAL LEARNING (ON THE EXAMPLE OF ANALYSIS OF POETRY BY M. DRY-KHMARA)." In Trends of philological education development in the context of European integration. Publishing House “Baltija Publishing”, 2021. http://dx.doi.org/10.30525/978-9934-26-069-8-8.

Abstract:
The article reveals the conceptual, semantic and procedural aspects of applying pedagogical technologies to the formation of the reading competence of philology students in classes on the history of Ukrainian literature during the study of the Ukrainian neoclassicists. The notions of pedagogical technologies, the reading competence of future philologists, and neoclassicism are specified, and their interpretation within the methodology of teaching Ukrainian literature is given. The interpretation of reading competence rests on the relationship between the related concepts of competence and competency, whereby reading competence is understood as a social norm for the student's educational preparation necessary for productive reading activity. Reading competence proper is proposed to denote the ability a student acquires in the process of studying literature, consisting of knowledge, skills, experience, values and attitudes that are fully realized in the practice of reading. In addition, treating reading competence both as a subject competence and as a component of the future philologist's professional competence made it possible to outline a number of reading competencies important for future philologists. It is proposed to model the development of students' reading competence by means of a work of art on the basis of problem-based learning technologies, group work, and the local technology of problem-style analysis. The peculiarities of the integrated teaching of literary, psychological, pedagogical and methodological disciplines are revealed; in particular, the importance of purposefully forming students' innovative consciousness while they master literature courses is emphasized, with a detailed description of the step-by-step procedure for achieving the planned didactic goal. A notable feature of the procedural side of this approach is the optimal interaction of traditional and innovative forms of learning. The theoretical provisions are illustrated by the example of designing pedagogical tools for organizing students' independent reading of M. Dry-Khmara's poetry in order to determine the salient constants and thematic-semantic paradigms of the individual poetics of this notable representative of the neoclassical literary style. For this purpose, student creative groups are formed, their number determined by the number of dominants of M. Dry-Khmara's individual style under discussion, which serve as carriers of the integrity of the lyrical works: 1) the antinomy «freedom – slavery»; 2) the dominant «blood»; 3) the dominant motif «dream» – «death» – «life». The specificity of this work lies in developing an algorithm for an approximate system of tasks for the problem-style analysis of works by each of the creative groups, the solution of which brings students closer to understanding the neoclassical poet's authorial style and prepares them for an analytical-synthetic conversation in class. The result of the work carried out is a sample report by one of the student creative groups during a practical lesson. Finally, the leading features of this technological approach to forming the reading competence of philology students in the course of studying the poetic works of the Ukrainian neoclassicists are formulated.

Conference papers on the topic "Approximate norm descent methods"

1

Merrill, William, Vivek Ramanujan, Yoav Goldberg, Roy Schwartz, and Noah A. Smith. "Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent." In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.emnlp-main.133.

2

Ren, Yong, and Jun Zhu. "Distributed Accelerated Proximal Coordinate Gradient Methods." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/370.

Abstract:
We develop a general accelerated proximal coordinate descent algorithm in distributed settings (DisAPCG) for the optimization problem that minimizes the sum of two convex functions: the first part f is smooth with a gradient oracle, and the other, Ψ, is separable with respect to blocks of coordinates and has a simple known structure (e.g., the L1 norm). Our algorithm obtains a new accelerated convergence rate in the case that f is strongly convex by making use of modern parallel structures, and includes the previous non-strongly convex case as a special case. We further present efficient implementations that avoid full-dimensional operations in each step, significantly reducing the computational cost. Experiments on the regularized empirical risk minimization problem demonstrate the effectiveness of our algorithm and match our theoretical findings.
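A single-machine sketch of the building block behind (Dis)APCG, without acceleration or the distributed machinery: proximal coordinate descent on a smooth least-squares term plus a separable l1 regularizer, where each coordinate update is a one-dimensional soft-thresholding step.

```python
import numpy as np

rng = np.random.default_rng(6)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_coordinate_descent(A, b, lam=0.1, n_epochs=50):
    """Proximal coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1:
    the smooth part supplies a partial gradient, and the separable l1 term
    makes each coordinate's proximal step a soft-thresholding operation."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b
    L = (A ** 2).sum(axis=0)            # coordinate-wise Lipschitz constants
    for _ in range(n_epochs):
        for j in rng.permutation(n):
            g = A[:, j] @ r              # partial gradient of the smooth part
            x_new = soft_threshold(x[j] - g / L[j], lam / L[j])
            r += A[:, j] * (x_new - x[j])
            x[j] = x_new
    return x

A = rng.normal(size=(80, 40))
x_hat = prox_coordinate_descent(A, rng.normal(size=80))
```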
3

Nie, Feiping, Zhouyuan Huo, and Heng Huang. "Joint Capped Norms Minimization for Robust Matrix Recovery." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/356.

Abstract:
Low-rank matrix recovery is an important machine learning research topic with various scientific applications. Most existing low-rank matrix recovery methods relax the rank minimization problem via trace norm minimization. However, such a relaxation makes the solution seriously deviate from the original one. Meanwhile, most matrix recovery methods minimize the squared prediction errors on the observed entries, which is sensitive to outliers. In this paper, we propose a new robust matrix recovery model to address these two challenges. The joint capped trace norm and capped l1-norm are used to tightly approximate the rank minimization and to enhance robustness to outliers. Evaluation experiments are performed on both synthetic data and real-world applications in collaborative filtering and social network link prediction. All empirical results show that our new method outperforms the existing matrix recovery methods.
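The capped norms named in the abstract have simple closed forms; one common definition is sketched below, and the paper's exact formulation and thresholds may differ.

```python
import numpy as np

def capped_l1(X, theta):
    """Capped l1 norm: each entry contributes at most theta, so large
    outlier residuals stop inflating the penalty."""
    return np.minimum(np.abs(X), theta).sum()

def capped_trace_norm(X, eps):
    """Capped trace norm: singular values are clipped at eps, so the
    penalty tracks the rank more tightly than the plain trace norm, which
    keeps growing with the magnitude of the leading singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.minimum(s, eps).sum()

# toy usage on a random matrix
X = np.random.default_rng(7).normal(size=(20, 15))
penalty = capped_trace_norm(X, 1.0) + 0.5 * capped_l1(X, 2.0)
```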
4

Rayces, J. L., and Lan Lebich. "Hybrid method of lens optimization." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1988. http://dx.doi.org/10.1364/oam.1988.mf4.

Abstract:
In the typical lens optimization problem there are two groups of equations. The first group, which includes optomechanical constraints, must be solved exactly, while the second group, which includes aberrations, in general admits an approximate solution. In Spencer's method, the sum of the squares of the residuals of the latter group is reduced to a minimum. If all the equations in the second group except one are eliminated, and that equation represents the norm of the vector of parameter changes, the system yields the solution of Glatzel's method. This happens automatically when the damping factor added to the diagonal of Spencer's matrix approaches infinity. There is, therefore, a gradual transition from Spencer's solution to Glatzel's solution, and it is possible to combine both methods into one.
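The transition described in the abstract can be seen directly in a damped least-squares step; the Jacobian and residuals below are random stand-ins for aberration derivatives and targets.

```python
import numpy as np

def damped_step(J, r, damping):
    """Damped least-squares change of lens parameters: solve
    (J^T J + p I) dp = -J^T r. For small damping p this is a Spencer-style
    least-squares step; as p grows, dp tends to -J^T r / p, a norm-limited
    gradient-like step in the spirit of the Glatzel transition described
    above."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + damping * np.eye(n), -J.T @ r)

rng = np.random.default_rng(8)
J, r = rng.normal(size=(12, 5)), rng.normal(size=12)
for p in (1e-6, 1.0, 1e6):
    dp = damped_step(J, r, p)   # the step shrinks and rotates as p grows
```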
5

Stojkovic, Ivan, Vladisav Jelisavcic, Veljko Milutinovic, and Zoran Obradovic. "Fast Sparse Gaussian Markov Random Fields Learning Based on Cholesky Factorization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/384.

Abstract:
Learning a sparse Gaussian Markov Random Field or, equivalently, estimating a sparse inverse covariance matrix is an approach to uncovering the underlying dependency structure in data. Most current methods solve the problem by optimizing the maximum likelihood objective with a Laplace (L1) prior on the entries of the precision matrix. We propose a novel objective with a regularization term that penalizes an approximate product of the Cholesky-decomposed precision matrix. This reparametrization of the penalty term allows efficient coordinate descent optimization, which, in synergy with an active-set approach, results in a very fast and efficient method for learning the sparse inverse covariance matrix. We evaluated the speed and solution quality of the newly proposed SCHL method on problems consisting of up to 24,840 variables. Our approach was several times faster than three state-of-the-art approaches. We also demonstrate that SCHL can be used to discover interpretable networks, by applying it to a high-impact problem from the health informatics domain.
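The Cholesky reparametrization at the core of SCHL can be sketched as an objective; the penalty shown here (a plain l1 penalty on the reconstructed precision matrix) is a simplification of the paper's approximate-product penalty.

```python
import numpy as np

def schl_objective(L, S, lam):
    """Sketch of a Cholesky-parametrized sparse-precision objective: with
    Theta = L L^T (L lower triangular with positive diagonal), the Gaussian
    negative log-likelihood is -2*sum(log diag(L)) + trace(S Theta), since
    log det(Theta) = 2*sum(log diag(L)); a sparsity penalty is placed on
    the product. The paper's exact penalty and optimization differ."""
    theta = L @ L.T
    return (-2.0 * np.log(np.diag(L)).sum()
            + np.trace(S @ theta)
            + lam * np.abs(theta).sum())

rng = np.random.default_rng(9)
X = rng.normal(size=(200, 10))
S = np.cov(X, rowvar=False)           # empirical covariance
val = schl_objective(np.eye(10), S, lam=0.1)
```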
6

Xu, Yuhui, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, and Hongkai Xiong. "TRP: Trained Rank Pruning for Efficient Deep Neural Networks." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/136.

Abstract:
To enable DNNs on edge devices like mobile phones, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained model by low-rank decomposition; however, small approximation errors in the parameters can ripple over into a large prediction loss. As a result, performance usually drops significantly, and a sophisticated fine-tuning effort is required to recover accuracy. Apparently, it is not optimal to separate low-rank approximation from training. Unlike previous works, this paper integrates low-rank approximation and regularization into the training process. We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training. TRP maintains the capacity of the original network while imposing low-rank constraints during training. A nuclear-norm regularizer optimized by stochastic sub-gradient descent is utilized to further promote low rank in TRP. The TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss, thus eliminating the fine-tuning process after low-rank decomposition. The proposed method is comprehensively evaluated on CIFAR-10 and ImageNet, outperforming previous compression methods that use low-rank approximation.
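The low-rank projection that TRP alternates with training steps is essentially a truncated SVD of each weight matrix; the energy-based rank choice below is a placeholder for the paper's own rank-pruning rule.

```python
import numpy as np

def low_rank_project(W, energy=0.95):
    """Factor a weight matrix with the SVD and keep the smallest number of
    singular values capturing the given share of the spectral energy; in a
    TRP-style loop this projection is applied periodically during training."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s**2) / (s**2).sum(), energy)) + 1
    return (U[:, :k] * s[:k]) @ Vt[:k]

W = np.random.default_rng(10).normal(size=(256, 128))
W_low = low_rank_project(W)   # low-rank stand-in for the trained weights
```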
7

Gaymann, A., F. Montomoli, and M. Pietropaoli. "Design for Additive Manufacturing: Valves Without Moving Parts." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-64872.

Abstract:
This work presents an innovative design method to obtain valves without moving parts that can be built using additive manufacturing and applied to gas turbines. Additive manufacturing offers more flexibility than traditional manufacturing methods, which implies fewer constraints on the manufacture of engineering parts and makes it possible to build complex geometries like the Tesla valve. The Tesla valve is a duct that shows diodicity: it allows a fluid to flow in one direction with lower losses than in the other. Unfortunately, the design of the Tesla valve is two-dimensional and relies on the designer's experience to achieve good performance. The method presented here allows the automatic generation of valves similar to the Tesla valve, obtained by a topology optimization algorithm. It is the first time a three-dimensional method has been presented; the algorithms available in the open literature work in two dimensions. A fluid sedimentation process enables the creation of a new geometry optimized to meet a prescribed set of performance targets, such as pressure losses. The steepest descent method is used to approximate the integrals encountered during the calculation process. The optimizer is used to obtain three-dimensional geometries for different multi-objective functions. The geometry is compared to an existing similar solution proposed in the open literature and validated. The results are compared to a Tesla valve to show the performance of the optimized geometries. The advantage of the proposed solution is the possibility of applying the design method with any spatial constraints and for a wide range of mass flows.
