Dissertations on the topic "FEATURE OPTIMIZATION METHODS"

To view other types of publications on this topic, follow the link: FEATURE OPTIMIZATION METHODS.

Cite a source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 15 dissertations for your research on the topic "FEATURE OPTIMIZATION METHODS".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding data are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile a correctly formatted bibliography.

1

Lin, Lei. "Optimization methods for inventive design." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD012/document.

Full text of the source
Abstract:
The thesis deals with inventive problems for which the solutions produced by optimization methods do not meet the objectives of the problem to be solved. Problems defined in this way are resolved using a problem model that extends classical TRIZ into a canonical form called the "generalized system of contradictions". This research instruments a resolution process based on a simulation-optimization-invention loop, allowing optimization and inventive methods to be used together. More precisely, it models the extraction of generalized contradictions from simulation data as combinatorial optimization problems and proposes algorithms that provide all the solutions to these problems.
2

Zanco, Philip. "Analysis of Optimization Methods in Multisteerable Filter Design." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2227.

Full text of the source
Abstract:
The purpose of this thesis is to study and investigate a practical and efficient implementation of corner orientation detection using multisteerable filters. First, practical theory involved in applying multisteerable filters for corner orientation estimation is presented. Methods to improve the efficiency with which multisteerable corner filters are applied to images are investigated and presented. Prior research in this area presented an optimization equation for determining the best match of corner orientations in images; however, little research has been done on optimization techniques to solve this equation. Optimization techniques to find the maximum response of a similarity function to determine how similar a corner feature is to a multioriented corner template are also explored and compared in this research.
3

Monrousseau, Thomas. "Développement du système d'analyse des données recueillies par les capteurs et choix du groupement de capteurs optimal pour le suivi de la cuisson des aliments dans un four." Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0054.

Full text of the source
Abstract:
In a world where all household appliances are becoming smart and connected, French manufacturers identified the need to create innovative ovens able to track the core cooking state of fish and meat without any contact sensor. This thesis takes place in that context and is divided into two main parts. The first is a feature selection phase over a set of measurements from laboratory-grade sensors, so that a supervised classification algorithm can be applied to three cooking states: undercooked, well cooked and overcooked. A selection method based on fuzzy logic is applied to strongly reduce the number of variables to monitor. The second part concerns on-line monitoring of the cooking state by several methods: a classification approach over ten core states, the resolution of a discretized heat equation, and a soft sensor based on artificial neural networks trained on cooking experiments, which reconstructs the core-temperature signal of the food from the measurements available on line. These algorithms were implemented on the microcontroller of a prototype version of a new oven in order to be tested and validated in real use cases.
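One of the monitoring approaches above solves a discretized heat equation to track the core temperature. As a purely illustrative sketch (the thesis does not publish its scheme; the 1D geometry, material constants and boundary handling below are assumptions), an explicit finite-difference update could look like this in Python:

    import numpy as np

    # Assumed material/geometry parameters (illustrative only).
    alpha = 1.4e-7      # thermal diffusivity of the food, m^2/s (assumed)
    L = 0.03            # slab thickness, m (assumed)
    n = 31              # number of grid points across the slab
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha   # time step satisfying the explicit stability limit

    T = np.full(n, 20.0)       # initial temperature profile, degrees C
    T_oven = 180.0             # oven air temperature acting on both surfaces (assumed)

    def step(T):
        """One explicit finite-difference step of the 1D heat equation."""
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        Tn[0] = Tn[-1] = T_oven    # crude Dirichlet boundary: surfaces held at oven temperature
        return Tn

    for _ in range(int(600.0 / dt)):   # simulate 10 minutes of cooking
        T = step(T)
    print("estimated core temperature:", round(T[n // 2], 1), "C")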
4

Xiong, Xuehan. "Supervised Descent Method." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/652.

Full text of the source
Abstract:
In this dissertation, we focus on solving nonlinear least squares problems using a supervised approach. In particular, we developed the Supervised Descent Method (SDM), performed a thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: inverse kinematics, rigid tracking, face alignment (frontal and multi-view), and 3D object pose estimation. In rigid tracking, SDM was able to take advantage of more robust features, such as HOG and SIFT; these non-differentiable image features had been out of reach for previous work, which relied on gradient-based optimization. In inverse kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In face alignment, SDM achieved state-of-the-art results. Moreover, it was extremely computationally efficient, which makes it applicable to many mobile applications. In addition, we provided a unified view of several popular sequential prediction methods, including SDM, and reformulated them as a sequence of function compositions. Finally, we suggested some future research directions for SDM and sequential prediction.
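The core of SDM is to learn, offline, a short sequence of linear maps that play the role of descent directions: at step k the update is x_{k+1} = x_k + R_k h(x_k) + b_k, with (R_k, b_k) fitted by linear regression of the ideal updates x* - x_k on the features h(x_k). The toy sketch below (a hypothetical scalar function g, Gaussian perturbations and five learned steps are assumptions, not the dissertation's experiments) illustrates the training and test loops:

    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        # A simple nonlinear map whose inverse we want to learn to compute (assumed).
        return x ** 3 + x

    # Training set: true solutions x_star and perturbed starting points around them.
    x_star = rng.uniform(-2.0, 2.0, size=(5000, 1))
    y = g(x_star)
    x = x_star + rng.normal(scale=0.8, size=x_star.shape)

    steps = []                       # learned sequence of (R, b) descent maps
    for _ in range(5):               # a small, fixed number of supervised descent steps
        h = g(x) - y                 # feature: current residual of the nonlinear system
        A = np.hstack([h, np.ones_like(h)])          # design matrix [h, 1]
        target = x_star - x                          # ideal update Delta x*
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        R, b = coef[:-1], coef[-1]
        steps.append((R, b))
        x = x + h @ R + b            # apply the learned update and continue training

    # Test: recover x from a new observation y_test using only the learned maps.
    x_true = np.array([[1.3]])
    y_test = g(x_true)
    x_hat = np.zeros_like(x_true)
    for R, b in steps:
        x_hat = x_hat + (g(x_hat) - y_test) @ R + b
    print("true:", x_true.ravel(), "estimated after learned steps:", x_hat.ravel())

Because the update only needs the residual feature h(x), not the gradient of g, the same recipe applies to non-differentiable features such as HOG or SIFT, which is the point made in the abstract.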
5

Lösch, Felix. "Optimization of variability in software product lines: a semi-automatic method for visualization, analysis, and restructuring of variability in software product lines." Berlin: Logos-Verl., 2008. http://d-nb.info/992075904/04.

Full text of the source
6

Bai, Bing. "A Study of Adaptive Random Features Models in Machine Learning based on Metropolis Sampling." Thesis, KTH, Numerisk analys, NA, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293323.

Full text of the source
Abstract:
An artificial neural network (ANN) is a machine learning model whose parameters, namely frequency parameters and amplitude parameters, are learnt during training. The random features model is a special case of an ANN: its structure is the same as an ANN's, but the parameters are learnt differently. In a random features model the amplitude parameters are learnt during training while the frequency parameters are sampled from some distribution. If the frequency distribution is well chosen, both models approximate data well. Adaptive random Fourier features with Metropolis sampling is an enhanced random Fourier features model that selects an appropriate frequency distribution adaptively. This thesis studies rectified linear unit (ReLU) and sigmoid features and combines them with the adaptive idea to obtain two further adaptive random features models. The results show that, with the particular set of hyper-parameters used, the adaptive random ReLU features model can also approximate the data relatively well, although the adaptive random Fourier features model performs slightly better.
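For concreteness, a fixed-frequency random ReLU features fit, where only the amplitudes are trained by regularized least squares, can be sketched as below. The adaptive part of the thesis, Metropolis resampling of the frequencies, is omitted, and the toy target, Gaussian frequency distribution and ridge parameter are assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1D regression target.
    x = np.linspace(-3, 3, 400).reshape(-1, 1)
    y = np.sin(2 * x).ravel() + 0.1 * rng.normal(size=x.shape[0])

    K = 200                                      # number of random features
    omega = rng.normal(scale=2.0, size=(1, K))   # frequency parameters, sampled (assumed Gaussian)
    bias = rng.uniform(-3, 3, size=K)            # feature offsets (assumed)

    phi = np.maximum(x @ omega + bias, 0.0)      # ReLU random features
    # Only the amplitudes are "trained", here by ridge-regularized least squares.
    lam = 1e-3
    beta = np.linalg.solve(phi.T @ phi + lam * np.eye(K), phi.T @ y)

    y_hat = phi @ beta
    print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))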
7

Sasse, Hugh Granville. "Enhancing numerical modelling efficiency for electromagnetic simulation of physical layer components." Thesis, De Montfort University, 2010. http://hdl.handle.net/2086/4406.

Full text of the source
Abstract:
The purpose of this thesis is to present solutions to overcome several key difficulties that limit the application of numerical modelling in communication cable design and analysis. In particular, specific limiting factors are that simulations are time consuming, and the process of comparison requires skill and is poorly defined and understood. When much of the process of design consists of optimisation of performance within a well defined domain, the use of artificial intelligence techniques may reduce or remove the need for human interaction in the design process. The automation of human processes allows round-the-clock operation at a faster throughput. Achieving a speedup would permit greater exploration of the possible designs, improving understanding of the domain. This thesis presents work that relates to three facets of the efficiency of numerical modelling: minimizing simulation execution time, controlling optimization processes and quantifying comparisons of results. These topics are of interest because simulation times for most problems of interest run into tens of hours. The design process for most systems being modelled may be considered an optimisation process in so far as the design is improved based upon a comparison of the test results with a specification. Development of software to automate this process permits the improvements to continue outside working hours, and produces decisions unaffected by the psychological state of a human operator. Improved performance of simulation tools would facilitate exploration of more variations on a design, which would improve understanding of the problem domain, promoting a virtuous circle of design. The minimization of execution time was achieved through the development of a Parallel TLM Solver which did not use specialized hardware or a dedicated network. Its design was novel because it was intended to operate on a network of heterogeneous machines in a manner which was fault tolerant, and included a means to reduce vulnerability of simulated data without encryption. Optimisation processes were controlled by genetic algorithms and particle swarm optimisation, which were novel applications in communication cable design. The work extended the range of cable parameters, reducing conductor diameters for twisted pair cables, and reducing optical coverage of screens for a given shielding effectiveness. Work on the comparison of results introduced "Colour maps" as a way of displaying three scalar variables over a two-dimensional surface, and comparisons were quantified by extending 1D Feature Selective Validation (FSV) to two dimensions, using an ellipse-shaped filter, in such a way that it could be extended to higher dimensions. In so doing, some problems with FSV were detected, and suggestions for overcoming these are presented, such as the special case of zero-valued DC signals. A re-description of Feature Selective Validation using Jacobians and tensors is proposed, in order to facilitate its implementation in higher dimensional spaces.
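A central step in Feature Selective Validation is splitting each dataset into low- and high-frequency parts before comparison, and the thesis extends this to two dimensions with an ellipse-shaped filter. A minimal illustration of that idea (a sharp elliptical mask and the cutoff fractions are assumptions, not the filter actually used in the thesis) is:

    import numpy as np

    def elliptical_split(field, fx_cut=0.1, fy_cut=0.1):
        """Split a 2D array into low- and high-frequency parts using an
        elliptical pass region in the 2D Fourier domain (sharp mask, assumed)."""
        ny, nx = field.shape
        fy = np.fft.fftfreq(ny)[:, None]          # normalized spatial frequencies
        fx = np.fft.fftfreq(nx)[None, :]
        mask = (fx / fx_cut) ** 2 + (fy / fy_cut) ** 2 <= 1.0   # ellipse in frequency space
        spectrum = np.fft.fft2(field)
        low = np.real(np.fft.ifft2(spectrum * mask))
        high = field - low
        return low, high

    # Example: a smooth trend plus a fine-scale ripple.
    y, x = np.mgrid[0:128, 0:128]
    data = np.sin(2 * np.pi * x / 128) + 0.2 * np.sin(2 * np.pi * x / 8) * np.cos(2 * np.pi * y / 8)
    low, high = elliptical_split(data)
    print("variance in low / high bands:", np.var(low).round(3), np.var(high).round(3))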
8

YADAV, JYOTI. "A STUDY OF FEATURE OPTIMIZATION METHODS FOR LUNG CANCER DETECTION." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19156.

Full text of the source
Abstract:
Lung cancer remains one of the most important causes of death in the world, and early diagnosis can prevent a large number of deaths. Classifiers based on machine learning algorithms, together with CAD-based image processing techniques, play an important role in detecting lung cancer. For a classifier to be accurate, a good collection of image features is needed: features carry the information relevant to identifying the disease and are the key parameters behind the results. Features are mostly obtained with feature extraction techniques such as GLCM, or datasets already provide features of lung cancer images computed with such techniques. Because the number of image features is large, dimensionality, storage, speed, time and performance all have an impact on the results of the different classifier models. An optimization method such as feature selection is one solution for finding the relevant features in datasets that contain extracted features. The lung cancer database used here has 32 case records with 57 unique characteristics; it was compiled by Hong and Young and is indexed in the University of California Irvine repository. The experimental material includes, among other things, extracted medical information and X-ray information. The data describe three categories of problematic lung malignancies, each with an integer value ranging from 0 to 3. A new strategy for identifying effective features of lung cancer, implemented in Matlab 2022a, is proposed in this work. It employs a genetic algorithm. Using a simplified 8-feature SVM classifier and a 4-feature KNN classifier, 100% accuracy is achieved. The new method is compared with the existing hyper-heuristic method for feature selection and, achieving the highest level of precision, the proposed technique performs better. As a result, the proposed approach is recommended for determining effective disease symptoms.
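As a rough picture of a genetic-algorithm wrapper for feature selection of the kind described above, the sketch below evolves binary feature masks and scores each mask by the cross-validated accuracy of an SVM. The dataset (a scikit-learn stand-in rather than the UCI lung cancer data), population size, and mutation rate are assumptions:

    import numpy as np
    from sklearn.datasets import load_breast_cancer   # stand-in dataset (assumed)
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    X, y = load_breast_cancer(return_X_y=True)
    n_features = X.shape[1]

    def fitness(mask):
        """Wrapper fitness: cross-validated SVM accuracy on the selected features."""
        if not mask.any():
            return 0.0
        model = make_pipeline(StandardScaler(), SVC())
        return cross_val_score(model, X[:, mask], y, cv=3).mean()

    # Genetic algorithm over binary feature masks (sizes and rates are assumed).
    pop = rng.random((20, n_features)) < 0.5
    for generation in range(25):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection: keep the best half
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(10, size=2)]
            cut = rng.integers(1, n_features)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < 0.02       # mutation: flip bits with small probability
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, np.array(children)])

    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("selected features:", np.flatnonzero(best))
    print("cross-validated accuracy:", round(fitness(best), 3))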
9

Salehipour, Amir. "Combinatorial optimization methods for the (alpha,beta)-k Feature Set Problem." Thesis, 2019. http://hdl.handle.net/1959.13/1400399.

Full text of the source
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
This PhD thesis proposes novel and efficient combinatorial optimization-based solution methods for the (alpha,beta)-k Feature Set Problem. The (alpha,beta)-k Feature Set Problem is a combinatorial optimization-based feature selection approach proposed in 2004 with several applications in computational biology and bioinformatics. It aims to select a minimum-cost set of features such that similarities between entities of the same class and differences between entities of different classes are maximized. The solution methods developed in this research include heuristic and exact methods. While the research focuses on exact methods, we also derived mathematical properties and developed heuristics and problem-driven local searches, applying them at certain stages of the exact methods in order to guide exact solvers and deliver high-quality solutions. The motivation stems from the computational difficulty exact solvers have in providing good-quality solutions for the (alpha,beta)-k Feature Set Problem. Our heuristics deliver very good, and often optimal, solutions in a reasonable amount of time. The major contributions of this research are: 1) investigating mathematical properties and characteristics of the (alpha,beta)-k Feature Set Problem for the first time and using them to design and develop algorithms and methods for solving large instances of the problem; 2) extending the basic modelling, algorithms and solution methods to the weighted variant of the problem (where features have a cost); and 3) developing algorithms and solution methods capable of solving large instances in a reasonable amount of time (prior to this research, many of those instances posed a computational challenge for exact solvers). To this end, we showed the usefulness of the developed algorithms and methods by applying them to three sets of 346 instances, including real-world, weighted, and randomly generated instances, and obtaining high-quality solutions in a short time. To the best of our knowledge, the algorithms developed in this research have obtained the best results reported for the (alpha,beta)-k Feature Set Problem. In particular, they outperform state-of-the-art algorithms and exact solvers and have very competitive performance on large instances, since they always deliver feasible solutions and obtain new best solutions for the majority of large instances in a reasonable amount of time.
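For reference, a common integer-programming statement of the (alpha,beta)-k Feature Set Problem, written here from the standard description of the problem rather than from the thesis's own models, selects a minimum-cost feature set so that differing-class pairs are separated by at least alpha chosen features and same-class pairs agree on at least beta chosen features:

    \[
    \begin{aligned}
    \min \quad & \sum_{f \in F} c_f \, x_f \\
    \text{s.t.} \quad & \sum_{f \in D(p,q)} x_f \ \ge\ \alpha && \text{for every pair } (p,q) \text{ of entities in different classes},\\
    & \sum_{f \in S(p,q)} x_f \ \ge\ \beta && \text{for every pair } (p,q) \text{ of entities in the same class},\\
    & x_f \in \{0,1\} && \text{for every feature } f \in F,
    \end{aligned}
    \]

where D(p,q) is the set of features whose values differ between entities p and q, S(p,q) is the set of features whose values agree, and c_f is the feature cost (c_f = 1 in the unweighted case). The number of covering constraints grows quadratically with the number of entities, which is one reason exact solvers struggle on large instances and heuristics of the kind developed in the thesis become attractive.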
10

Tayal, Aditya. "Effective and Efficient Optimization Methods for Kernel Based Classification Problems." Thesis, 2014. http://hdl.handle.net/10012/8334.

Full text of the source
Abstract:
Kernel methods are a popular choice for solving a number of problems in statistical machine learning. In this thesis, we propose new methods for two important kernel-based classification problems: 1) learning from highly unbalanced large-scale datasets and 2) selecting a relevant subset of input features for a given kernel specification. The first problem is known as the rare class problem, which is characterized by a highly skewed or unbalanced class distribution. Unbalanced datasets can introduce significant bias in standard classification methods. In addition, due to the increase of data in recent years, large datasets with millions of observations have become commonplace. We propose an approach that addresses both the bias and the computational complexity of rare class problems by optimizing the area under the receiver operating characteristic curve and by using a rare class only kernel representation, respectively. We justify the proposed approach theoretically and computationally. Theoretically, we establish an upper bound on the difference between selecting a hypothesis from a reproducing kernel Hilbert space and from a hypothesis space which can be represented using a subset of kernel functions. This bound shows that for a fixed number of kernel functions, it is optimal to first include functions corresponding to rare class samples. We also discuss the connection of a subset kernel representation with the Nystrom method for a general class of regularized loss minimization methods. Computationally, we illustrate that the rare class representation produces statistically equivalent test error results on highly unbalanced datasets compared to using the full kernel representation, but with significantly better time and space complexity. Finally, we extend the method to rare class ordinal ranking and apply it to a recent public competition problem in health informatics. The second problem studied in the thesis is known in the literature as the feature selection problem. Embedding feature selection in kernel classification leads to a non-convex optimization problem. We specify a primal formulation and solve the problem using a second-order trust region algorithm. To improve efficiency, we use the two-block Gauss-Seidel method, breaking the problem into a convex support vector machine subproblem and a non-convex feature selection subproblem. We reduce the possibility of saddle point convergence and improve solution quality by sharing an explicit functional margin variable between block iterates. We illustrate how our algorithm improves upon state-of-the-art methods.
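The rare class only kernel representation can be pictured as restricting the kernel expansion to basis functions centred on minority-class samples. The sketch below uses assumed stand-ins throughout: synthetic data, an RBF kernel, and a class-weighted logistic model in place of the thesis's AUC objective:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(7)

    # Synthetic, highly unbalanced two-class data (roughly 2% positives).
    n = 5000
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 2.5).astype(int)

    # Kernel expansion restricted to rare-class samples only:
    # each column of Phi is k(x, x_j) for a minority-class anchor x_j.
    anchors = X[y == 1]
    Phi = rbf_kernel(X, anchors, gamma=0.1)

    # The thesis optimizes the area under the ROC curve directly; here a
    # class-weighted logistic model stands in for that objective.
    clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(Phi, y)
    scores = clf.decision_function(Phi)
    print("anchors used:", anchors.shape[0], "training AUC:", round(roc_auc_score(y, scores), 3))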
11

Rocha, de Paula Mateus. "Efficient methods of feature selection based on combinatorial optimization motivated by the analysis of large biological datasets." Thesis, 2013. http://hdl.handle.net/1959.13/938563.

Full text of the source
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
Intuitively, the feature selection problem is to choose, from a given set of features, a subset that best represents the whole in a particular aspect, preserving the original semantics of the variables on the given samples and classes. In practice, the objective of finding such a subset is often to reveal a particular characteristic present in the given samples. In 2004, a new feature selection approach was proposed, based on a combinatorial optimization problem called the (α, β)-k Feature Set Problem. The main advantage of this approach over ranking methods is that features are evaluated as groups, instead of only considering their individual performance. Its main drawback is the complexity of the combinatorial problems involved: since some of them are NP-complete, it is unlikely that an efficient method exists to solve them to optimality. To the best of the author's knowledge at the time of this research, the available tools for the (α, β)-k Feature Set Problem approach cannot solve problems of the magnitude required by many practical applications. Given the big advantage brought by the multivariate nature of this method, its wide and successful applicability, and the fact that scalability is its only real known drawback, further research to overcome that difficulty is appropriate. Even though the optimal solution of the problem is always desirable, it is often not strictly necessary in many biological applications. This work therefore proposes fast heuristics for the (α, β)-k Feature Set Problem approach, together with procedures to obtain dual bounds that do not rely on external optimization packages.
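One simple fast heuristic for this covering-type problem, sketched here from the generic greedy set-cover idea and not from the specific heuristics developed in the thesis, repeatedly adds the feature that helps the largest number of still-unsatisfied pair constraints:

    import numpy as np

    def greedy_alpha_beta_k(X, y, alpha=1, beta=1):
        """Greedy heuristic for the (alpha,beta)-k Feature Set Problem on binary
        feature data X. Returns indices of selected features (no optimality guarantee)."""
        n, m = X.shape
        demand, covers = [], []
        for i in range(n):
            for j in range(i + 1, n):
                same = y[i] == y[j]
                # Features that "cover" the pair: agree for same-class pairs, differ otherwise.
                covers.append((X[i] == X[j]) if same else (X[i] != X[j]))
                demand.append(beta if same else alpha)
        covers = np.array(covers)               # (num_pairs, m) boolean coverage matrix
        demand = np.array(demand, dtype=int)    # remaining coverage required per pair
        selected = []
        while (demand > 0).any():
            gain = covers[demand > 0].sum(axis=0)        # unsatisfied pairs each feature helps
            gain[np.array(selected, dtype=int)] = -1     # never re-pick a feature
            f = int(np.argmax(gain))
            if gain[f] <= 0:
                raise ValueError("instance is infeasible for the requested alpha/beta")
            selected.append(f)
            demand = np.maximum(demand - covers[:, f].astype(int), 0)
        return selected

    # Tiny synthetic example with binary features.
    rng = np.random.default_rng(3)
    X = rng.integers(0, 2, size=(20, 15))
    y = rng.integers(0, 2, size=20)
    print("selected feature indices:", greedy_alpha_beta_k(X, y, alpha=2, beta=1))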
12

Elmi, Carlo Alberto. "Design system integration for multi-objective optimization of aero engine combustors." Doctoral thesis, 2022. http://hdl.handle.net/2158/1276939.

Full text of the source
Abstract:
The transformation towards a climate-neutral civil aviation is providing significant business opportunities to the aero engine market players. To meet this target and keep competitiveness, however, groundbreaking solutions must be introduced at the product's level in the shortest possible time. Industry leaders are increasingly embracing lean and digital approaches for this purpose, by applying these concepts at all levels of the company. Considerable room for improvement can be identified in the development of complex components such as, for instance, the combustor. Due to the complexity of the phenomena taking place and interacting within it, there are conflicting functional requirements defined over different physical domains. This leads to a design approach that must be both multidisciplinary and multi-objective, in which the need arises for supporting know-how and product expertise with extensive and structured studies of the design space. Nowadays, simulation-based methodologies represent a standard in evaluating multiple configurations of the system, although this may lead to heterogeneous models interacting with each other and sharing miscellaneous information within the process. In this context, taking advantage of integrated design systems has been proven to be beneficial in standardizing the simulation processes while embedding design best practices. The subject matter of this work is the Combustor Design System Integration (DSI), an integrated methodology aimed at easing and streamlining the preliminary design phase of aero engine combustors. Its concept is described in the first part, where the automation of low value-added tasks is introduced together with four custom integrated tools. It is composed of a CAD generation system, a RANS-based CFD suite for reactive flow calculations, a boundary-conditions processor for 3D thermal FEA and an FE structural environment for stress and displacement estimation. Particular importance is given to the definition of cooling and quenching systems on the combustor's liners, given their prominent impact on aero-thermal and durability performance. Therefore, specific features for a detailed topological management of holes are presented in this work, providing advanced patterning and arrangement capabilities which are not addressed in other design systems. Finally, it is possible to demonstrate the reduction of lead time for analysis, as well as the enhancement of the overall process robustness. The NEWAC combustor, a lean-burn concept developed in the context of the homonymous European research project, is exploited as a case study, allowing, moreover, an assessment of the DSI modelling approach. In the second part a dedicated framework for multi-objective design optimization is presented, comprising the DSI tools for CAD generation and CFD analysis. A fully automated and water-tight process is implemented here in order to address the combustor's problem of dilution mixing, aimed at optimizing the temperature profiles and the emission levels at its outlet. This approach leverages advanced neural network algorithms for improving the overall design workflow, so as to ensure that the optimal combustor configuration is defined as a function of the product's Critical-To-Quality characteristics. The results of the optimization are shown for a rich-quench-lean combustor concept intentionally designed to support this activity, referred to as LEM-RQL.
The general intention of this work, in the end, is to demonstrate how integrated design systems embedded in optimization frameworks can represent both a strategic asset for industry players and a relevant topic for academics. Given the pervasive integration and automation of the process, the generality in processing multiple design layouts and the possibility to accommodate increasingly advanced and sophisticated optimization algorithms, the DSI procedure configures itself as an ideal platform within the technology maturation process, thus enabling not only the improvement of in-service components but also the development of next-generation combustor products.
13

"Structural optimization and engineering feature design with semi-Lagrangian level set method." 2013. http://library.cuhk.edu.hk/record=b5549808.

Full text of the source
Abstract:
In modern product design practice, adopting simulation-based optimization has become a standard procedure to reduce experimental cost, shorten development time, assure product quality and promote innovation. Both industry and academia have put great effort into exploring new approaches to integrate computer-aided design (CAD), simulation and optimization in an effective and truly applicable way.
For general lightweight structural design of continua, the level set method is a promising tool for shape and topology optimization. Compared with traditional approaches such as finite element (FE) mesh based shape optimization and material-based topology optimization, the level set based method excels in its flexibility in handling both shape and topological changes, as well as in its capability to represent a clear structural geometry. The latter advantage allows an intuitive integration of computer-aided design and engineering (CAD/CAE), because the level set model can easily be extended to constructive solid geometry, a fundamental geometry description in CAD. Meanwhile, recent research indicates that coupling the level set method with extended finite element (XFEM) analysis for simulation-based design has substantial value, offering data compatibility, freedom from re-meshing and good accuracy.
Although the basic theory of level set based structural optimization is well established and many applications have been reported in the last decade, a number of practical issues are still under investigation, such as improving computational efficiency, the effectiveness of the optimal search, design capability and industrial applicability. This thesis presents recent research progress and novel techniques towards these common goals.
Firstly, an efficient and numerically stable semi-Lagrangian level set method is proposed for structural optimization with a line search algorithm and a sensitivity modulation scheme. The semi-Lagrangian method has the advantage of allowing a large time step without the limitation of the Courant-Friedrichs-Lewy (CFL) condition. The line search adaptively determines an appropriate time step in each iteration of the optimization. Taking into account some practical characteristics of the topology optimization process, incorporating the line search into the semi-Lagrangian optimization method yields fewer design iterations and thus improves the overall computational efficiency. The sensitivity modulation is inspired by the conjugate gradient method in finite dimensions and provides an alternative to the standard steepest descent search in level set based optimization. Two benchmark examples compare the sensitivity modulation and the steepest descent techniques, with and without the line search.
Secondly, a generic method to design engineering features for level set based structural optimization is presented. Engineering features are regular and simple shape units that carry specific engineering significance for manufacture and assembly. It is practically useful to combine feature design with structural optimization. In this thesis, a constructive solid geometry (CSG) based level set description is proposed to represent a structure from two basic entities: level set models containing either a feature shape or a freeform boundary. By treating both entities implicitly and homogeneously, optimal feature design and freeform boundary design are unified under the level set framework. Feature models undergo a constrained motion of affine transformations, in which the design velocity is obtained through a least-squares approximation of the continuous shape variation; an accurate particle level set updating scheme is employed for the transformation. Meanwhile, freeform models undergo a standard level set updating process using a semi-Lagrangian scheme. With this method, various feature characteristics are identified by carefully constructing a CSG model tree with flexible entities and are preserved by imposing motion constraints at different stages of the tree. Moreover, because free shape and topology optimization is enabled over non-feature regions, a truly optimal structural configuration with engineering features can be designed in a convenient way. Several 2D and 3D generative feature design examples show the applicability of this approach.
Finally, a 3D implementation using an adaptive level set method is discussed. This method uses both explicit and implicit geometric representations for computation. An octree grid accommodates the free structural interface of an implicit level set model and a corresponding 2-manifold triangle mesh model. Within each iteration of the optimization, the interface evolves implicitly using a semi-Lagrangian level set method, during which the signed distance field is evaluated directly and accurately from the current surface model rather than by interpolation. A new mesh model is then extracted from the updated field and serves as the input of the subsequent step. This hybrid and adaptive representation scheme not only achieves narrow-band computation, but also facilitates the structural analysis through a geometry-aware mesh-free approach. Moreover, a feature-preserving and topology-preserving mesh simplification algorithm is proposed to enhance computational efficiency. Remarkably, the adaptive level set scheme opens a gate to incorporating geometric editing into structural optimization in an effective way, creating a new opportunity to further develop level set based structural optimization in this direction. A three-dimensional benchmark example and possible extensions demonstrate the capability and potential of this method.
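The semi-Lagrangian update at the heart of the method traces each grid point backwards along the velocity field and interpolates the previous level set there, which is what allows time steps beyond the CFL limit. A minimal 2D sketch of one such step (uniform grid, bilinear interpolation and a constant velocity field are assumptions; in the actual method the velocity comes from the shape sensitivity) is:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def semi_lagrangian_step(phi, vx, vy, dt):
        """One semi-Lagrangian update of a level set function phi on a unit-spaced grid:
        phi_new(x) = phi(x - v*dt), evaluated by bilinear interpolation (order=1)
        at the foot of the characteristic."""
        ny, nx = phi.shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        foot_y = yy - dt * vy                 # trace the characteristic backwards
        foot_x = xx - dt * vx
        return map_coordinates(phi, [foot_y, foot_x], order=1, mode="nearest")

    # Signed distance to a circle, advected by a uniform velocity field.
    ny = nx = 128
    yy, xx = np.mgrid[0:ny, 0:nx]
    phi = np.hypot(xx - 64, yy - 64) - 20.0   # negative inside the circle
    vx = np.full((ny, nx), 1.5)               # assumed advection/design velocity
    vy = np.zeros((ny, nx))

    phi_new = semi_lagrangian_step(phi, vx, vy, dt=4.0)   # a step larger than the CFL limit
    print("zero level set shifted by about", 4.0 * 1.5, "grid cells along x")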
Zhou, Mingdong.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 123-135).
Abstract also in Chinese.
Contents:
Abstract
Acknowledgement
Chapter 1. Introduction
1.1 Background of Structural Optimization
1.2 Research Issues and Contributions
1.3 Content Outline
Chapter 2. Structural Optimization with Level Set Method
2.1 Dynamic Level Set Method
2.1.1 Implicit Model Description and Hamilton-Jacobi Equation
2.1.2 Model Update and Re-Initialization
2.2 Application in Structural Optimization Problem
2.2.1 Problem Formulation of Linear Elastic Continuum
2.2.2 Design Sensitivity Analysis
2.2.3 Optimization Strategy
2.3 Couple with Extended Finite Element Method
2.3.1 X-FEM for Structural Analysis
2.3.2 Numerical Integration
2.3.3 Imposing Boundary Conditions
2.4 Summary
Chapter 3. A Semi-Lagrangian Level Set Method for Structural Optimization
3.1 Introduction
3.2 Semi-Lagrangian Level Set Method
3.3 A Line Search Algorithm
3.4 A Sensitivity Modulation Scheme
3.5 Numerical Examples
3.5.1 Cantilever Beam
3.5.2 Bridge-Type Structure
3.6 Summary
Chapter 4. Engineering Feature Design in Structural Optimization
4.1 Introduction
4.2 CSG Based Level Sets
4.3 Structural Optimization with CSGLS
4.4 Constrained Motion with Affine Transformation
4.4.1 2D Algorithm
4.4.2 3D Algorithm
4.5 Design Sharp Characteristics
4.6 Numerical Examples
4.6.1 Moment of Inertia (MOI) Maximization
4.6.2 Feature Design in Structural Topology Optimization
4.6.3 Generative Feature Design
4.6.4 A 3D Feature Based Optimal Design
4.7 Summary
Chapter 5. Adaptive Level Set Implementation for 3D Problems
5.1 Introduction and Algorithm Overview
5.2 Hybrid Model Representation and Interface Tracking
5.2.1 Octree Based Implicit Model
5.2.2 Triangle Mesh Based Explicit Model
5.2.3 Interface Tracking
5.3 Engineering Model Simplification
5.3.1 Introduction
5.3.2 Algorithm of Progressive Multi-Pass Simplification
5.3.3 Numerical Results of Mesh Simplification
5.4 Structural Analysis
5.5 Numerical Example of a 3D Optimal Design
5.6 Summary
Chapter 6. Conclusions and Future Work
6.1 Conclusions
6.2 Future Work
Bibliography
Publications
14

Yang, Yu Tai, and 楊御台. "A Hybrid Filter/Wrapper Method Using Simplified Swarm Optimization for Feature Selection in High-Dimensional Imbalanced Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/21084379798551051101.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
104
In recent years, feature selection has become an important field in data mining and has been widely used in numerous domains. The purpose of feature selection is to search for an optimal subset of features in existing data so as to maximize accuracy. However, few studies have investigated the impact of data imbalance, i.e. the existence of under-represented categories of data, on the feature selection problem. The aim of this study is therefore to provide a feature selection method that increases the accuracy of classifying high-dimensional imbalanced data. We propose a hybrid method that can find a better optimal feature subset. In the proposed method, information gain acts as a filter to select the most informative features from the original dataset. The imbalance of the dataset with the selected features is then corrected using the synthetic minority over-sampling technique (SMOTE). Next, simplified swarm optimization is used as the feature search engine to guide the search for an optimal feature subset. Finally, a support vector machine serves as the classifier to evaluate the performance of the proposed method. To evaluate the proposed algorithm, we apply it to ten benchmark datasets and compare the results with an existing algorithm. The results show that our algorithm performs better than its competitor.
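The overall pipeline, information-gain filtering, SMOTE rebalancing, a swarm-style search over feature subsets and an SVM fitness, can be sketched as follows. Mutual information stands in for information gain, a crude random bit-flip search stands in for simplified swarm optimization, the dataset is a scikit-learn stand-in, and the imbalanced-learn package is assumed to be installed:

    import numpy as np
    from sklearn.datasets import load_breast_cancer   # stand-in dataset (assumed)
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from imblearn.over_sampling import SMOTE          # imbalanced-learn package

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)

    # 1) Filter stage: keep the most informative features by mutual information
    #    (used here as a stand-in for information gain).
    mi = mutual_info_classif(X, y, random_state=0)
    X = X[:, np.argsort(mi)[::-1][:15]]

    # 2) Rebalance the classes with SMOTE.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

    # 3) Wrapper stage: random bit-flip search over feature masks, scored by
    #    the cross-validated accuracy of an SVM on the rebalanced data.
    def fitness(mask):
        if not mask.any():
            return 0.0
        model = make_pipeline(StandardScaler(), SVC())
        return cross_val_score(model, X_bal[:, mask], y_bal, cv=3).mean()

    best = rng.random(X.shape[1]) < 0.5
    best_fit = fitness(best)
    for _ in range(60):
        flips = rng.random(best.size) < 0.2            # flip a few feature bits
        cand = np.where(flips, ~best, best)
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f

    print("selected features (indices into the filtered set):", np.flatnonzero(best))
    print("cross-validated accuracy:", round(best_fit, 3))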
15

Vazhbakht, Bahareh. "A finite element mesh optimization method incorporating geologic features for stress analysis of underground excavations." Thesis, 2011. http://spectrum.library.concordia.ca/35795/1/Vazhbakht_MASc_F2011.pdf.

Full text of the source
Abstract:
The application of numerical modelling in civil and mining engineering projects not only increases the effectiveness of an analysis but also improves its results. However, due to the complexity of model generation and analysis, it is still a time-consuming process. The finite element method requires a discretization, or mesh, to solve the partial differential equations representing the problem; the finer and denser the mesh, the more time and computer memory the analysis consumes. One possible solution is therefore to simplify the analysis by reducing the mesh density while maintaining the quality of the solution. Previously, a framework based on a cost function was introduced for mesh optimization considering only the geometry of the excavations. In the current research, the optimization strategy is improved by including the effect of geologic features represented by rock properties. Among the different rock properties, Young's modulus (E) and Poisson's ratio (µ) were considered. The effect of each of these properties on the mesh optimization was investigated, and it was concluded that E has the most significant effect on the results of stress analysis of dissimilar rocks. Subsequently, an expanded cost function incorporating E was formulated. Finally, the application of the expanded cost function was demonstrated on a few representative case studies.
