Dissertations on the topic "Set-Based Methods"

To view other types of publications on this topic, follow the link: Set-Based Methods.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the top 50 dissertations for research on the topic "Set-Based Methods".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read its online abstract, if these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Stoican, Florin. "Fault tolerant control based on set-theoretic methods." PhD thesis, Supélec, 2011. http://tel.archives-ouvertes.fr/tel-00633622.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The scope of the thesis is the analysis and design of fault tolerant control (FTC) schemes through the use of set-theoretic methods. In the framework of multisensor schemes, the appearance of faults and the means to detect them accurately are investigated, as well as the design of control laws which assure closed-loop stability. By using invariant/contractive sets to describe the residual signals, a fault detection and isolation (FDI) mechanism with reduced computational demands is implemented based on set separation. A dual mechanism, implemented by a recovery block, which certifies previously fault-affected sensors, is also studied. From a broader theoretical perspective, we point to the conditions which allow the inclusion of FDI objectives in the control law design. This leads to the synthesis of static feedback gains by means of numerically attractive optimization problems. Depending on the parameters selected for tuning, it is shown that the FTC design can be completed by a reference governor or a predictive control scheme which adapts the state trajectory and the feedback control action in order to assure FDI. Where necessary, the specific issues raised by the use of set-theoretic methods are detailed, and various improvements are proposed concerning invariant set construction, mixed integer programming (MIP), and stability for switched systems (dwell-time notions).
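
The set-separation test at the core of this FDI mechanism reduces to a cheap membership check online. Below is a minimal sketch in Python, assuming a hypothetical two-dimensional residual and a box-shaped healthy set; the thesis itself works with general invariant/contractive sets computed offline.

```python
import numpy as np

# Hypothetical bounds of the healthy-residual set, as an interval box that
# an offline invariant/contractive-set analysis would provide.
R_HEALTHY_LO = np.array([-0.1, -0.2])
R_HEALTHY_HI = np.array([0.1, 0.2])

def fault_detected(residual: np.ndarray) -> bool:
    """Set-separation test: flag a fault as soon as the residual leaves
    the set of residuals reachable under healthy operation."""
    return bool(np.any(residual < R_HEALTHY_LO) or np.any(residual > R_HEALTHY_HI))

print(fault_detected(np.array([0.05, -0.10])))  # False: inside the healthy box
print(fault_detected(np.array([0.30, 0.00])))   # True: separated from it
```
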
2

Xu, Feng. "Diagnosis and fault-tolerant control using set-based methods." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284831.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Fault-tolerant capability is an important performance specification for most technical systems; its importance is illustrated by several catastrophes in civil aviation. According to official investigations, some air incidents are technically avoidable if the pilots take the right measures. But when one relies only on the skill and experience of the pilots, it cannot be guaranteed that reliable flight decisions are always made. If, instead, fault-tolerant strategies can be included in the decision-making procedure, flights become safer. Fault-tolerant control is generally classified into passive and active fault-tolerant control. Passive fault-tolerant control relies on the robustness of the controller, which can only provide limited fault-tolerant ability, while active fault-tolerant control uses a fault detection and isolation module to obtain fault information and then actively takes actions to tolerate the effect of faults. Thus, active fault-tolerant control generally has stronger fault-tolerant ability. This dissertation focuses on active fault-tolerant control, combining model predictive control with set-based fault detection and isolation. Model predictive control is a successful advanced control strategy in the process industry and has been widely used for processes such as chemistry and water treatment because of its ability to deal with multivariable constrained systems. However, the performance of model predictive control depends heavily on system-model accuracy. Realistically, it is impossible to avoid the effect of modelling errors, disturbances, noises and faults, which always result in model mismatch. Comparatively, model mismatch induced by faults can be effectively handled by suitable fault-tolerant strategies. The objective of this dissertation is to endow model predictive control with fault-tolerant ability to improve its effectiveness. In order to reach this objective, set-based fault detection and isolation methods are used in the proposed fault-tolerant schemes. The important advantage of set-based fault detection and isolation is that it can make robust fault detection and isolation decisions, which is key for taking the right fault-tolerant measures. This dissertation includes four parts. The first part introduces the research, presents the state of the art and gives an introduction to the research tools used. The second part proposes set-based fault detection and isolation for actuator and/or sensor faults, involving interval observers, invariant sets and set-membership estimation. First, the relationship between interval observers and invariant sets is investigated. Then, actuator and sensor faults are dealt with separately according to their features. The third part focuses on actuator and/or sensor fault-tolerant model predictive control, where the control strategy is robust model predictive control (tube-based and min-max approaches). The last part draws conclusions, summarizes the research and gives directions for further work.
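
To give a rough feel for the interval-observer ingredient used here, the following Python sketch propagates elementwise state bounds for a toy system. It assumes an elementwise nonnegative closed-loop matrix so that bounds propagate monotonically; the dissertation handles the general case through suitable transformations.

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.6]])                    # illustrative stable matrix, A >= 0
w_lo = np.array([-0.05, -0.05])               # bounded disturbance
w_hi = np.array([0.05, 0.05])

def predict(x_lo, x_hi):
    # With A >= 0 elementwise, A @ x_lo and A @ x_hi bound A @ x.
    return A @ x_lo + w_lo, A @ x_hi + w_hi

x_lo, x_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
for _ in range(30):
    x_lo, x_hi = predict(x_lo, x_hi)
print(x_lo, x_hi)  # bounds shrink toward an invariant interval hull;
                   # a measurement leaving these bounds indicates a fault
```
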
3

Stankovic, Nikola. "Set-based control methods for systems affected by time-varying delay." Thesis, Supélec, 2013. http://www.theses.fr/2013SUPL0025/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
We consider process regulation based on feedback affected by varying delays. The proposed approach relies on set-based control methods. One part of the thesis examines active control design for the compensation of delays in the sensor-to-controller communication channel. This problem is regarded from the general perspective of fault tolerant control, where delays are considered as a particular degradation mode of the sensor. The obtained results are also adapted to systems with redundant sensing elements that are prone to abrupt faults. In this sense, a unified framework is proposed in order to address control design with outdated measurements provided by unreliable sensors. Positive invariance for linear discrete-time systems with delays is outlined in the second part of the thesis. Concerning this class of dynamics, there are two main approaches which define positive invariance. The first one relies on rewriting a delay-difference equation in the augmented state space and applying standard analysis and control design tools for linear systems. The second approach considers invariance in the initial state space. However, the initial state-space characterization is still an open problem even for the linear case, and it represents our main subject of interest. As a contribution, we provide new insights on the existence of positively invariant sets in the initial state space. Moreover, a construction algorithm for the minimal robust D-invariant set is outlined. Additionally, alternative invariance concepts are discussed.
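
The augmented state-space rewriting mentioned in the abstract is straightforward to make concrete. A minimal sketch with illustrative matrices, turning x[k+1] = A0 x[k] + Ad x[k-d] into an ordinary linear system:

```python
import numpy as np

n, d = 2, 3                                   # state dimension, delay
A0 = np.array([[0.6, 0.1], [0.0, 0.5]])       # illustrative dynamics
Ad = np.array([[0.1, 0.0], [0.05, 0.1]])

# Augmented state z[k] = (x[k], x[k-1], ..., x[k-d]).
A_aug = np.zeros((n * (d + 1), n * (d + 1)))
A_aug[:n, :n] = A0                            # dependence on x[k]
A_aug[:n, n * d:] = Ad                        # dependence on x[k-d]
A_aug[n:, :n * d] = np.eye(n * d)             # shift register for past states

# Standard linear-systems tools now apply, e.g. a spectral-radius test:
print(np.max(np.abs(np.linalg.eigvals(A_aug))))  # < 1 means asymptotic stability
```
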
4

Tariq, Muhammad Farzan. "Set-based design rules and implementation methods in concept development phase." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118491.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 52).
There are numerous methodologies that organizations employ during concept development cycles, ranging from agile and waterfall to point-based design. One such emerging methodology is called Set-Based Design (SBD). There has been a flurry of research into the SBD process. Most documentation about SBD highlights its general principles and characteristics. In this thesis, I have taken a more focused approach by targeting the planning and concept development phases in particular. Rules to select or deselect concepts are extensively discussed in this research, followed by an effective structure for implementing SBD in the concept development process. The distinction between form and function during the concept development cycle is clearly examined and documented. The research has been conducted independently of any organization or product type, and is therefore applicable to any product development scenario and can easily be adopted by any organization.
by Muhammad Farzan Tariq.
S.M. in Engineering and Management
5

Léon, Cantón Plinio de. "Dependable control of uncertain linear systems based on set theoretic methods." Aachen : Shaker, 2009. http://d-nb.info/995737347/04.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

譚玉貞 and Yuk-ching Tam. "Some practical issues in estimation based on a ranked set sample." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221683.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Kern, Benjamin [Verfasser], and Rolf [Gutachter] Findeisen. "Set-based methods for interconnected control systems / Benjamin Kern ; Gutachter: Rolf Findeisen." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2019. http://d-nb.info/1220036447/34.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Ullah, Baseer. "Structural topology optimisation based on the boundary element and level set methods." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10659/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The research work presented in this thesis is related to the development of structural optimisation algorithms based on the boundary element and level set methods for two- and three-dimensional linear elastic problems. In the initial implementation, a stress-based evolutionary structural optimisation (ESO) approach has been used to add and remove material simultaneously for the solution of two-dimensional optimisation problems. The level set method (LSM) is used to provide an implicit description of the structural geometry, which is also capable of automatically handling topological changes, i.e. holes merging with each other or with the boundary. Classical level set based optimisation methods depend on initial designs with pre-existing holes. The proposed method, however, automatically introduces internal cavities utilising a stress-based hole insertion criterion, and thereby eliminates the need for initial designs with pre-existing holes. A detailed study has also been carried out to investigate the relationship between stress-based and topological-derivative-based hole insertion criteria within a boundary element method (BEM) and LSM framework. The evolving structural geometry (i.e. the zero level set contours) is represented by non-uniform rational B-splines (NURBS), providing a smooth geometry throughout the optimisation process and completely eliminating jagged edges. The BEM and LSM are further combined with a shape sensitivity approach for the solution of minimum compliance problems in two dimensions. The proposed sensitivity-based method is capable of automatically inserting holes during the optimisation process using a topological derivative approach. In order to investigate the associated advantages and disadvantages of the evolutionary and sensitivity-based optimisation methods, a comparative study has also been carried out. There are two advantages associated with the use of the LSM in three-dimensional topology optimisation. Firstly, the LSM may readily be applied to three-dimensional space, and it is shown how this can be linked to a 3D BEM solver. Secondly, holes appear automatically through the intersection of two surfaces moving towards each other; the use of the LSM therefore eliminates the need for an additional hole insertion mechanism, as shape and topology optimisation can be performed at the same time. A complete algorithm is proposed and tested for BEM- and LSM-based topology optimisation in three dimensions. Optimal geometries compare well against those in the literature for a range of benchmark examples.
9

Kern, Benjamin [Verfasser], and Rolf [Gutachter] Findeisen. "Set-based methods for interconnected control systems / Benjamin Kern ; Gutachter: Rolf Findeisen." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2019. http://d-nb.info/1220036447/34.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Mulagaleti, Sampath Kumar. "Invariant Set-based Methods for the Computation of Input and Disturbance Sets." Thesis, IMT Alti Studi Lucca, 2023. http://e-theses.imtlucca.it/370/1/Mulagaleti_phdthesis.pdf.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This dissertation presents new methods to synthesize disturbance sets and input constraint sets for constrained linear time-invariant systems. Broadly, we formulate and solve optimization problems that (a) compute disturbance sets such that the reachable set of outputs approximates an assigned set, and (b) compute input constraint sets guaranteeing the stabilizability of a given set of initial conditions. The proposed methods find application in the synthesis and analysis of several control schemes, such as decentralized control and reduced-order control, as well as in practical system design problems such as actuator selection. The key tools supporting the development of the aforementioned methods are Robust Positive Invariant (RPI) sets. In particular, the problems that we formulate are such that they co-synthesize disturbance/input constraint sets along with the associated RPI sets. This requires embedding existing techniques for computing RPI sets within an optimization framework, which we facilitate by developing new results related to properties of RPI sets, polytope representations, inclusion encoding techniques, etc. In order to solve the resulting optimization problems, we develop specialized structure-exploiting solvers that we numerically demonstrate to outperform conventional solution methods. We also demonstrate several applications of the proposed methods to control design. Finally, we extend the methods to tackle data-driven control synthesis problems in an identification-for-control framework.
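
A crude, box-valued instance of the RPI computations that underpin this work fits in a few lines. The iteration below, for an illustrative stable matrix, converges to the half-widths of an interval box that is robustly positively invariant; the thesis works with full polytopes and structure-exploiting solvers instead.

```python
import numpy as np

A = np.array([[0.7, 0.2], [-0.1, 0.6]])   # illustrative stable dynamics
r_w = np.array([0.10, 0.05])              # disturbance bound: |w_i| <= r_w[i]

r = np.zeros(2)
for _ in range(200):
    # |A| r + r_w over-bounds the image of the box {|x_i| <= r_i} plus W,
    # so the fixed point r* satisfies |A| r* + r_w <= r*: the box is RPI.
    r = np.abs(A) @ r + r_w
print(r)  # half-widths of an RPI box for x+ = A x + w
```
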
11

Bernstein, Joshua I. (Joshua Ian) 1974. "Design methods in the aerospace industry : looking for evidence of set-based practices." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/82675.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Technology and Policy Program, 1998.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 209-211).
by Joshua I. Bernstein.
M.S.
12

de, Léon Cantón Plinio [Verfasser]. "Dependable control of uncertain linear systems based on set-theoretic methods / Plinio de Léon Cantón." Aachen : Shaker, 2009. http://d-nb.info/1159832757/34.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Falkeborn, Rikard. "Evaluation of Differential Algebraic Elimination Methods for Deriving Consistency Relations from an Engine Model." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7973.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:

New emissions legislation introduced in the European Union and the U.S. has confronted truck manufacturers with stricter requirements for low emissions and on-board diagnostic systems. The on-board diagnostic system typically consists of several tests that are run while the truck is driving. One way to construct such tests is to use so-called consistency relations. A consistency relation is a relation among known variables that always holds in the fault-free case. Calculating a consistency relation typically involves eliminating unknown variables from a set of equations.

To eliminate variables from a differential polynomial system, methods from differential algebra can be used. In this thesis, the purely algebraic Gröbner basis algorithm and the differential Rosenfeld-Gröbner algorithm implemented in the Maple package Diffalg have been compared and evaluated. The conclusion drawn is that there are no significant differences between the methods. However, since the Gröbner basis approach requires differentiations to be made in advance, the recommendation is to use the Rosenfeld-Gröbner algorithm.
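
For a flavour of the purely algebraic route, here is a toy elimination in SymPy rather than Maple's Diffalg, with hypothetical model equations: a lexicographic Gröbner basis eliminates the unknown x and leaves a consistency relation in the known signals.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
eqs = [y - x**2, z - x]                      # model equations; x is unknown
G = sp.groebner(eqs, x, y, z, order='lex')   # lex order eliminates x first
relations = [g for g in G.exprs if x not in g.free_symbols]
print(relations)  # e.g. [y - z**2]: a residual that vanishes in the fault-free case
```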

Further, attempts have been made to calculate consistency relations for a real application, a model of a Scania diesel engine, using the Rosenfeld-Gröbner algorithm. These attempts were largely unsuccessful: only one consistency relation could be calculated, which can be explained by the high complexity of the model.

14

Webb, Grayson. "A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images." Thesis, Linköpings universitet, Beräkningsmatematik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148548.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis proposes a probabilistic level set method to be used in the segmentation of tumors with heterogeneous intensities. It models the intensities of the tumor and surrounding tissue using Gaussian mixture models. Through a contour-based initialization procedure, samples are gathered to be used in expectation maximization of the mixture model parameters. The proposed method is compared against a threshold-based segmentation method using MRI images retrieved from The Cancer Imaging Archive. The cases are manually segmented, and an automated testing procedure is used to find optimal parameters for the proposed method before it is tested against the threshold-based method. Segmentation times, Dice coefficients, and volume errors are compared. The evaluation reveals that the proposed method has a mean segmentation time comparable to the threshold-based method, and performs faster in cases where the volume error does not exceed 40%. The mean Dice coefficient and volume error are also improved, while achieving lower deviation.
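
The statistical core of such a method can be sketched with scikit-learn's Gaussian mixture implementation; the intensities and the two-component tumor model below are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
tumor = np.concatenate([rng.normal(120, 10, 500), rng.normal(160, 8, 500)])
background = rng.normal(60, 15, 1000)    # synthetic surrounding tissue

gmm_in = GaussianMixture(n_components=2, random_state=0).fit(tumor.reshape(-1, 1))
gmm_out = GaussianMixture(n_components=1, random_state=0).fit(background.reshape(-1, 1))

voxels = np.array([[125.0], [70.0]])
# Positive where the tumor model explains a voxel better; in the level set
# update this log-likelihood ratio acts as the region force on the contour.
print(gmm_in.score_samples(voxels) - gmm_out.score_samples(voxels))
```
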
15

Maringanti, Rajaram Seshu. "Inverse-distance interpolation based set-point generation methods for closed-loop combustion control of a CIDI engine." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1253553419.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Heikkinen, Tim, and Jakob Müller. "Multidisciplinary analysis of jet engine components : Development of methods and tools for design automatisation in a multidisciplinary context." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Maskinteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-27784.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis report presents the work of analysing current challenges in Multidisciplinary Analysis systems. As an example, the system of an aerospace supplier, GKN Aerospace Sweden AB, is examined and several suggestions for improvement are implemented. The Multidisciplinary Analysis system, with the company-internal name Engineering Workbench, employs a set-based approach to exploring the design space for jet engine components. A number of design cases with varied geometrical and environmental parameters are generated using Design of Experiments sampling methods. Each design case is then subjected to a set of analyses. Using the analysis results, a surrogate model of the part's behaviour in relation to the input parameters is created. This enables the product developer to get a general view of the model's behaviour and to react to changes in product requirements. Design research methodology is applied to further develop the Engineering Workbench into a versatile design support system and to expand its functionality to include producibility assessment. In its original state, the execution of a study requires explicit domain knowledge and programming skills in several disciplines, and is often halted by minor process errors. Several methods to improve this situation are suggested and tested. Among these are the introduction of an interface to improve usability and broaden the range of possible users, and the integration of a four-level system architecture supporting a modular structure. Producibility assessment is enabled by developing an expert system in which geometrical and simulation results can be captured, analysed and evaluated to produce producibility metrics. Evaluation of the implemented solutions indicates a step in the right direction. Further development towards Multidisciplinary Optimisation, involving experts in information technologies as well as case-based reasoning techniques, is suggested and discussed.
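
The set-based exploration step described above boils down to Design of Experiments sampling over the design space. A minimal sketch with SciPy's Latin hypercube sampler, using made-up parameter ranges:

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=1)     # 3 hypothetical design parameters
unit_samples = sampler.random(n=20)           # 20 design cases in [0, 1)^3
lower = np.array([1.0, 0.2, 300.0])           # e.g. thickness, radius, temperature
upper = np.array([3.0, 0.8, 450.0])
design_cases = qmc.scale(unit_samples, lower, upper)
print(design_cases[:3])  # each row is one case for the analysis chain
```
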
17

Goddard, Aaron Matthew. "Applying vessel inlet/outlet conditions to patient-specific models embedded in Cartesian grids." Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/1970.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Cardiovascular modeling has the capability to provide valuable information allowing clinicians to better classify patients and aid in surgical planning. Modeling is advantageous for being non-invasive, and it also allows for quantification of values not easily obtained from physical measurements. Hemodynamics are heavily dependent on vessel geometry, which varies greatly from patient to patient. For this reason, clinically relevant approaches must perform these simulations on patient-specific geometry. Geometry is acquired from various imaging modalities, including magnetic resonance imaging, computed tomography, and ultrasound. The typical approach for generating a computational model requires the construction of a triangulated surface mesh for use with finite volume or finite element solvers. Surface mesh construction can result in a loss of anatomical features and often requires a skilled user to execute manual steps in 3rd-party software. An alternative to this method is to use a Cartesian grid solver to conduct the fluid simulation. Cartesian grid solvers do not require a surface mesh. They can use the implicit geometry representation created during the image segmentation process, but they are constrained to a cuboidal domain. Since patient-specific geometries usually deviate from the orthogonal directions of a cuboidal domain, flow extensions are often implemented. Flow extensions are created by a skilled user with 3rd-party software, rendering the Cartesian grid solver approach no more clinically useful than the triangulated surface mesh approach. This work presents an alternative to flow extensions by developing a method of applying vessel inlet and outlet boundary conditions to regions inside the Cartesian domain.
18

Trumpp, Alexander, Johannes Lohr, Daniel Wedekind, Martin Schmidt, Matthias Burghardt, Axel R. Heller, Hagen Malberg, and Sebastian Zaunseder. "Camera-based photoplethysmography in an intraoperative setting." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234950.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Background: Camera-based photoplethysmography (cbPPG) is a measurement technique which enables remote vital sign monitoring by using cameras. To obtain valid plethysmograms, proper regions of interest (ROIs) have to be selected in the video data. Most automated selection methods rely on specific spatial or temporal features, limiting a broader application. In this work, we present a new method which overcomes those drawbacks and, therefore, allows cbPPG to be applied in an intraoperative environment. Methods: We recorded 41 patients during surgery using an RGB and a near-infrared (NIR) camera. A Bayesian skin classifier was employed to detect suitable regions, and a level set segmentation approach to define and track ROIs based on spatial homogeneity. Results: The results show stable and homogeneously illuminated ROIs. We further evaluated their quality with regard to the extracted cbPPG signals. The green channel provided the best results, where heart rates could be correctly estimated in 95.6% of cases. The NIR channel yielded the highest contribution in compensating false estimations. Conclusions: The proposed method proved that cbPPG is applicable in intraoperative environments. It can be easily transferred to other settings regardless of which body site is considered.
19

Robinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In this manuscript, contributions to the development of methods for on-line model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) current degradation state estimation and (ii) future degradation state prediction to predict the RUL. The first step, which consists in estimating the current degradation state from the measurements, is performed with filtering techniques. The second step is realized with uncertainty propagation methods. The main challenge in prognosis is to take the different uncertainty sources into account in order to obtain a measure of the RUL uncertainty. The main sources are model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties. The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on Paris' law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Then, two set-membership model-based prognosis methods, based on constraint satisfaction and an unknown-input interval observer for linear discrete-time systems, are presented. Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (Inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
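
A compressed sketch of the particle-filter step on Paris' law follows, with hypothetical material constants and noise levels; the thesis builds variable-amplitude loading, composite-material data and a CUSUM detector on top of this probabilistic core.

```python
import numpy as np

rng = np.random.default_rng(0)
C, m, dsigma = 5e-10, 3.0, 80.0            # assumed Paris constants, stress range
a_true, a_crit, meas_std = 0.002, 0.05, 1e-4

def paris_step(a):
    dK = dsigma * np.sqrt(np.pi * a)       # stress-intensity factor range
    return a + C * dK**m                   # crack growth over one cycle

N = 500
particles = rng.uniform(0.001, 0.003, N)   # prior on the initial crack length
weights = np.full(N, 1.0 / N)
for _ in range(2000):                      # filtering over 2000 measured cycles
    a_true = paris_step(a_true)
    particles = paris_step(particles) + rng.normal(0.0, 1e-6, N)
    y = a_true + rng.normal(0.0, meas_std)
    weights *= np.exp(-0.5 * ((y - particles) / meas_std) ** 2)
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < N / 2:   # resample when weights degenerate
        particles = particles[rng.choice(N, N, p=weights)]
        weights = np.full(N, 1.0 / N)

# Prediction: propagate each particle to the critical crack length.
a, ruls, n = particles.copy(), np.zeros(N), 0
while (alive := a < a_crit).any():
    a[alive] = paris_step(a[alive])
    n += 1
    ruls[alive] = n
print(np.percentile(ruls, [5, 50, 95]))    # RUL quantiles quantify uncertainty
```
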
20

Xia, Xiaolin. "A Comparison Study on a Set of Space Syntax based Methods : Applying metric, topological and angular analysis to natural streets, axial lines and axial segments." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-15524.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Recently, there has been increasing interest in looking at the urban environment as a complex system. More and more researchers are paying attention to the study of the configuration of urban space as well as human social activities within it. It has been found that a correlation exists between the morphological properties of an urban street network and observed human movement patterns. This correlation implies that the influence of urban configuration on human movement is revealed not only in the sense of metric distance, but also from topological and geometrical perspectives. Metric distances, topological relationships and angular changes between streets should all be considered when applying space syntax analysis to an urban street network. This thesis focuses on the comparison among metric, topological and angular analyses based on three kinds of urban street representation models: natural streets, axial lines and axial segments. Four study areas (London, Paris, Manhattan and San Francisco) were selected for the empirical study. Space syntax measures were calculated for different combinations of analytical methods and street models. These theoretical space syntax accessibility measures (connectivity, integration and choice) were correlated with the corresponding observed human movement. The correlation results were then compared in terms of analytical methods and street representation models respectively. In the end, the comparison of results shows that (1) the natural-street-based model is the optimal street model for carrying out space syntax analysis, followed by axial lines and axial segments; (2) angular analysis and topological analysis are more advanced than metric analysis; and (3) connectivity, integration and local integration (two-step) are more suitable for predicting human movement in space syntax. Furthermore, it can be hypothesized that the topological analysis method with the natural-street-based model is the best combination for the prediction of human movement in space syntax, as it integrates topological and geometrical thinking.
21

Robinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis." Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In this manuscript, contributions to the development of methods for on-line model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) current degradation state estimation and (ii) future degradation state prediction to predict the RUL. The first step, which consists in estimating the current degradation state from the measurements, is performed with filtering techniques. The second step is realized with uncertainty propagation methods. The main challenge in prognosis is to take the different uncertainty sources into account in order to obtain a measure of the RUL uncertainty. The main sources are model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties. The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on Paris' law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Then, two set-membership model-based prognosis methods, based on constraint satisfaction and an unknown-input interval observer for linear discrete-time systems, are presented. Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (Inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
22

Bornschlegell, Augusto Salomao. "Optimisation aérothermique d'un alternateur à pôles saillants pour la production d'énergie électrique décentralisée." Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0023/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This work addresses the thermal optimization of an electrical machine. A nodal (lumped-parameter) model is used to simulate the temperature field. This model solves the heat equation in three dimensions, in cylindrical coordinates, and in transient or steady state. We consider the two most important transport mechanisms: conduction and convection. The model is evaluated by means of 13 design variables that correspond to the main flow rates of the equipment. We analyse the machine's cooling performance by varying these 13 flow rates. Before starting the study of such a complicated geometry, we picked a simpler case in order to better understand the variety of available optimization tools. The experience obtained on the simpler case is applied to the thermal optimization problem of the electrical machine. The machine is evaluated from the thermal point of view by combining two criteria: the maximum and the mean temperature. Constraints are used to keep the problem physically consistent. We solve the problem using gradient-based methods (active-set and interior-point) and genetic algorithms.
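
As a rough illustration, the flow-rate tuning can be phrased as a constrained minimization; the sketch below substitutes a synthetic temperature model for the nodal network and uses SciPy's interior-point-style trust-constr solver. All numbers are made up.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)
S = rng.uniform(0.5, 2.0, (40, 13))       # sensitivity of 40 nodes to 13 flows

def temperatures(q):
    return 100.0 / (1.0 + S @ q)          # toy cooling law: more flow, lower T

def objective(q):
    T = temperatures(q)
    # logsumexp(T) is a smooth surrogate for max(T), blended with mean(T).
    return 0.5 * logsumexp(T) + 0.5 * T.mean()

budget = LinearConstraint(np.ones(13), 0.0, 5.0)   # total flow budget
res = minimize(objective, x0=np.full(13, 0.3), method='trust-constr',
               bounds=[(0.0, 1.0)] * 13, constraints=[budget])
print(res.x.round(3), temperatures(res.x).max())
```
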
23

Prodan, Ionela. "Control of Multi-Agent Dynamical Systems in the Presence of Constraints." Thesis, Supélec, 2012. http://www.theses.fr/2012SUPL0019/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The goal of this thesis is to propose solutions for the optimal control of multi-agent dynamical systems under constraints. Elements from control theory and optimization are merged in order to provide useful tools, which are further applied to different problems involving multi-agent formations. The thesis considers the challenging case of agents subject to dynamical constraints. To deal with these issues, well-established concepts like set theory, differential flatness, Model Predictive Control (MPC) and Mixed-Integer Programming (MIP) are adapted and enhanced. Using these theoretical notions, the thesis concentrates on understanding the geometrical properties of the multi-agent group formation and on providing a novel synthesis framework which exploits the group structure. In particular, the formation design and the collision avoidance conditions are cast as geometrical problems, and optimization-based procedures are developed to solve them. Moreover, considerable advances in this direction are obtained by efficiently using MIP techniques (in order to derive an efficient description of the non-convex, non-connected feasible region which results from multi-agent collision and obstacle avoidance constraints) and stability properties (in order to analyze the uniqueness and existence of formation configurations). Lastly, some of the obtained theoretical results are applied to a challenging practical application: a novel combination of MPC and differential flatness (for reference generation) is used for the flight control of Unmanned Aerial Vehicles (UAVs).
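
The MIP description of the non-convex avoidance region mentioned above is classically obtained with big-M constraints. For an axis-aligned rectangular obstacle, the position (x_k, y_k) is kept outside it with four binaries b_{i,k} and a large constant M (standard formulation, shown here as a LaTeX sketch):

```latex
x_k \le x_{\min} + M\, b_{1,k}, \qquad x_k \ge x_{\max} - M\, b_{2,k},
y_k \le y_{\min} + M\, b_{3,k}, \qquad y_k \ge y_{\max} - M\, b_{4,k},
\sum_{i=1}^{4} b_{i,k} \le 3, \qquad b_{i,k} \in \{0,1\}.
```

Since at most three binaries may relax their constraints, at least one separating hyperplane is enforced at every step; stacking such constraints over agents and time steps yields the mixed-integer programs referred to in the abstract.
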
24

Bertin, Étienne. "Robust optimal control for the guidance of autonomous vehicles." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAE012.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The guidance of a reusable launcher is a control problem that requires both precision and robustness: one must compute a trajectory and a control such that the system reaches the landing zone, without crashing into it or exploding mid-flight, all while using as little fuel as possible. Optimal control methods based on Pontryagin's Maximum Principle can compute an optimal trajectory with great precision, but uncertainties (discrepancies between the estimated and actual values of the initial state and parameters) cause the actual trajectory to deviate, which can be dangerous. In parallel, set-based methods, notably validated simulation, can enclose all trajectories of a system with uncertainties. This thesis combines those two approaches to enclose sets of optimal trajectories of a problem with uncertainties, in order to guarantee the robustness of the guidance of autonomous vehicles. We start by defining sets of optimal trajectories for systems with uncertainties, first for mathematically perfect trajectories, then for the trajectory of a vehicle subject to estimation errors that may or may not use sensor information to compute a new trajectory online. Pontryagin's principle characterizes those sets as solutions of a boundary value problem with dynamics subject to uncertainties. We develop algorithms that enclose all solutions of these boundary value problems using validated simulation, interval arithmetic and contractor theory. However, validated simulation with intervals is subject to significant over-approximation, which limits our methods. To remedy this, we replace intervals with constrained symbolic zonotopes. We use those zonotopes to simulate hybrid systems, enclose the solutions of boundary value problems and build an inner-approximation to complement the classical outer-approximation. Finally, we combine all our methods to compute sets of trajectories for aerospace systems and use those sets to assess the robustness of the control.
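
The interval-arithmetic foundation, and the over-approximation it suffers from, can be shown in a few lines of Python. This toy enclosure omits the outward rounding and higher-order enclosures a real validated-simulation tool would use.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def scale(self, c):
        return Interval(c * self.lo, c * self.hi) if c >= 0 else Interval(c * self.hi, c * self.lo)

# Euler enclosure of x' = -x over 10 steps: every trajectory starting in
# [0.9, 1.1] stays bracketed, but ignoring the dependency between x and
# its own update makes the width grow, while the true flow contracts.
x, h = Interval(0.9, 1.1), 0.1
for _ in range(10):
    x = x + x.scale(-h)   # x_{k+1} = x_k + h * (-x_k), evaluated intervalwise
print(x)  # valid but pessimistic: exactly the effect zonotopes mitigate
```
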
25

Lo, Shin-en. "A Fire Simulation Model for Heterogeneous Environments Using the Level Set Method." Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cgu_etd/72.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Wildfire hazard and its destructive consequences have become a growing issue around the world, especially in the context of global warming. An effective and efficient fire simulation model makes it possible to predict fire spread and assist firefighters in controlling the damage and containing the fire area. Simulating wildfire spread remains challenging due to the complexity of fire behavior. The raster-based method and the vector-based method are the two major approaches to computerized fire spread simulation. In this thesis, we present a scheme we have developed that utilizes a level set method to build a fire spread simulation model; the scheme combines the strengths and overcomes some of the shortcomings of the two major types of simulation method. We store fire data and local rules at cells. Instead of calculating the next ignition points cell by cell, we apply Huygens' principle and an elliptical spread assumption to calculate the direction and distance of the expanding fire with the level set method. The advantage of storing data at cells is that it makes our simulation model more suitable for heterogeneous fuels and complex topographic environments. Using a level set method for our simulation model also makes it possible to overcome the crossover problem. Another strength of the level set method is its continuous data processing: applying it in the simulation models, we need fewer vector points than raster cells to produce a more realistic fire shape. We demonstrate this fire simulation model through two implementations using the narrow band level set method and the fast marching method. The simulated results are compared to real fire image data from the Troy and Colina fires. The simulation data are then studied and compared. The ultimate goal is to apply this simulation model in a broader setting to better predict different types of fires such as crown fires, spotting fires, etc.
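
A minimal grid-based version of the level set propagation step looks as follows, with isotropic rather than elliptical spread and made-up spread rates; the narrow band and fast marching implementations in the thesis refine this basic scheme.

```python
import numpy as np

nx, dx, dt = 100, 1.0, 0.2
X, Y = np.meshgrid(np.arange(nx) * dx, np.arange(nx) * dx)
phi = np.sqrt((X - 50) ** 2 + (Y - 50) ** 2) - 3.0   # ignition disk, phi < 0 burned
F = np.where(X > 60, 0.5, 2.0)                       # heterogeneous fuel: slow region

for _ in range(100):
    # Osher-Sethian upwind scheme for phi_t + F |grad phi| = 0 with F > 0.
    dmx = (phi - np.roll(phi, 1, 0)) / dx
    dpx = (np.roll(phi, -1, 0) - phi) / dx
    dmy = (phi - np.roll(phi, 1, 1)) / dx
    dpy = (np.roll(phi, -1, 1) - phi) / dx
    grad = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2
                   + np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    phi -= dt * F * grad
print((phi < 0).sum(), "cells burned")               # burned-area estimate
```
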
26

Mueller, Martin F. "Physics-driven variational methods for computer vision and shape-based imaging." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54034.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In this dissertation, novel variational optical-flow and active-contour methods are investigated to address challenging problems in computer vision and shape-based imaging. Starting from traditional applications of these methods in computer vision, such as object segmentation, tracking, and detection, this research subsequently applies similar active contour techniques to the realm of shape-based imaging, which is an image reconstruction technique estimating object shapes directly from physical wave measurements. In particular, the first and second part of this thesis deal with the following two physically inspired computer vision applications. Optical Flow for Vision-Based Flame Detection: Fire motion is estimated using optimal mass transport optical flow, whose motion model is inspired by the physical law of mass conservation, a governing equation for fire dynamics. The estimated motion fields are used to first detect candidate regions characterized by high motion activity, which are then tracked over time using active contours. To classify candidate regions, a neural net is trained on a set of novel motion features, which are extracted from optical flow fields of candidate regions. Coupled Photo-Geometric Object Features: Active contour models for segmentation in thermal videos are presented, which generalize the well-known Mumford-Shah functional. The diffusive nature of heat processes in thermal imagery motivates the use of Mumford-Shah-type smooth approximations for the image radiance. Mumford-Shah's isotropic smoothness constraint is generalized to anisotropic diffusion in this dissertation, where the image gradient is decomposed into components parallel and perpendicular to level set curves describing the object's boundary contour. In a limiting case, this anisotropic Mumford-Shah segmentation energy yields a one-dimensional "photo-geometric" representation of an object which is invariant to translation, rotation and scale. These properties allow the photo-geometric object representation to be efficiently used as a radiance feature; a recognition-segmentation active contour energy, whose shape and radiance follow a training model obtained by principal component analysis of a training set's shape and radiance features, is finally applied to tracking problems in thermal imagery. The third part of this thesis investigates a physics-driven active contour approach for shape-based imaging. Adjoint Active Contours for Shape-Based Imaging: The goal of this research is to estimate both location and shape of buried objects from surface measurements of waves scattered from the object. These objects' shapes are described by active contours: A misfit energy quantifying the discrepancy between measured and simulated wave amplitudes is minimized with respect to object shape using the adjoint state method. The minimizing active contour evolution requires numerical forward scattering solutions, which are obtained by way of the method of fundamental solutions, a meshfree collocation method. In combination with active contours being implemented as level sets, one obtains a completely meshfree algorithm; a considerable advantage over previous work in this field. With future applications in medical and geophysical imaging in mind, the method is formulated for acoustic and elastodynamic wave processes in the frequency domain.
27

Hartmann, Daniel [Verfasser]. "A Level-Set Based Method for Premixed Combustion in Compressible Flow / Daniel Hartmann." Aachen : Shaker, 2010. http://d-nb.info/1120864143/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Yamada, Takayuki. "A Level Set-Based Topology Optimization Incorporating Concept of the Phase-Field Method." 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/126804.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Jadhav, Trishul. "Knowledge Based Gene Set analysis (KB-GSA) : A novel method for gene expression analysis." Thesis, University of Skövde, School of Life Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-4352.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:

Microarray technology allows measurement of the expression levels of thousands of genes simultaneously. Several gene set analysis (GSA) methods are widely used for extracting useful information from microarrays, for example identifying differentially expressed pathways associated with a particular biological process or disease phenotype. Though GSA methods like Gene Set Enrichment Analysis (GSEA) are widely used for pathway analysis, these methods are solely based on statistics. Such methods can be awkward to use if knowledge of specific pathways involved in particular biological processes is the aim of the study. Here we present a novel method (Knowledge Based Gene Set Analysis: KB-GSA) which integrates knowledge about user-selected pathways that are known to be involved in specific biological processes. The method generates an easy-to-understand graphical visualization of the changes in expression of the genes, complemented with some common statistics about the pathway of particular interest.
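The abstract does not give implementation details; as a loose illustration of the kind of pathway-centric summary KB-GSA produces (the function, gene names, and expression matrices below are hypothetical), a minimal Python sketch:

```python
import numpy as np

def pathway_summary(expr, genes, pathway_genes):
    """Per-gene mean expression change and a simple t-like score for a
    user-selected pathway. expr maps condition -> (n_genes x n_samples) array."""
    present = [g for g in pathway_genes if g in genes]
    idx = [genes.index(g) for g in present]
    case, ctrl = expr["case"][idx], expr["control"][idx]
    diff = case.mean(axis=1) - ctrl.mean(axis=1)           # mean change per gene
    pooled_sd = np.sqrt(case.var(axis=1) + ctrl.var(axis=1) + 1e-9)
    return {g: (d, d / s) for g, d, s in zip(present, diff, pooled_sd)}

# toy usage with random data
rng = np.random.default_rng(0)
genes = [f"g{i}" for i in range(100)]
expr = {"case": rng.normal(1, 1, (100, 5)), "control": rng.normal(0, 1, (100, 5))}
print(pathway_summary(expr, genes, ["g1", "g2", "g3"]))
```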

30

Shopple, John P. "An interface-fitted finite element based level set method algorithm, implementation, analysis and applications /." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3359494.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 14, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 59-60).
31

Rosenthal, Paul, Vladimir Molchanov, and Lars Linsen. "A Narrow Band Level Set Method for Surface Extraction from Unstructured Point-based Volume Data." Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-70373.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Level-set methods have become a valuable and well-established field of visualization over the last decades. Different implementations addressing different design goals and different data types exist. In particular, level sets can be used to extract isosurfaces from scalar volume data that fulfill certain smoothness criteria. Recently, such an approach has been generalized to operate on unstructured point-based volume data, where data points are not arranged on a regular grid nor are they connected in form of a mesh. Utilizing this new development, one can avoid an interpolation to a regular grid which inevitably introduces interpolation errors. However, the global processing of the level-set function can be slow when dealing with unstructured point-based volume data sets containing several million data points. We propose an improved level-set approach that performs the processing of the level-set function locally. Since for isosurface extraction we are only interested in the zero level set, values are only updated in regions close to the zero level set. In each iteration of the level-set process, the zero level set is extracted using direct isosurface extraction from unstructured point-based volume data and a narrow band around the zero level set is constructed. The band consists of two parts: an inner and an outer band. The inner band contains all data points within a small area around the zero level set. These points are updated when executing the level set step. The outer band encloses the inner band, providing all those neighbors of the points of the inner band that are necessary to approximate gradients and mean curvature. Neighborhood information is obtained using an efficient kd-tree scheme; gradients and mean curvature are estimated using a four-dimensional least-squares fitting approach. Compared to the global approach, we demonstrate that this local level-set approach for unstructured point-based volume data achieves a significant speed-up of one order of magnitude for data sets in the range of several million data points, with equivalent quality and robustness.
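A minimal grid-based caricature of the narrow-band idea (the paper works on unstructured point clouds with kd-trees; this sketch only shows the band-restricted update on a regular grid, for brevity):

```python
import numpy as np

def narrow_band_step(phi, speed, band_width=3.0, dt=0.1):
    """One explicit level-set update restricted to a band around phi = 0.

    phi: 2D signed-distance-like array; speed: scalar normal speed F.
    Only points with |phi| < band_width (the inner band) are updated; their
    neighbors remain available for the finite differences (the outer band).
    """
    gx, gy = np.gradient(phi)                      # central differences
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
    band = np.abs(phi) < band_width                # inner band mask
    phi_new = phi.copy()
    phi_new[band] -= dt * speed * grad_norm[band]  # phi_t + F |grad phi| = 0
    return phi_new

# toy usage: shrink a circle of radius 10
y, x = np.mgrid[-20:20, -20:20]
phi = np.sqrt(x**2 + y**2) - 10.0
for _ in range(20):
    phi = narrow_band_step(phi, speed=1.0)
```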
32

Raudberget, Dag. "Industrial Experiences of Set-based Concurrent Engineering- Effects, results and applications." Licentiate thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Datorstödd konstruktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-20149.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
During product development, most of the customer value, as well as the cost and the quality of a product, is defined. This key role of development in industry has led to an intense search for better ways to develop products, software, services and systems. One development methodology that has received positive attention is Set-Based Concurrent Engineering (SBCE). Some authors claim that SBCE and related practices from Lean Development are four times more productive than traditional development models. Unfortunately, SBCE is also described as hard to implement. This thesis presents the results of a three-year research project aimed at implementing and describing the effects of Set-Based Concurrent Engineering in industry. The scope of the research is to use the principles of SBCE as a means to improve the productivity of industrial product development processes and the resulting products. The contribution of this work is a better understanding of Set-Based Concurrent Engineering and support for implementing its principles. The results show that SBCE has positive effects on many aspects of product development performance and on the resulting products. The improvements are especially dominant in product performance, product cost and the level of innovation. Moreover, a comparison between a Set-based decision process and a traditional matrix for design evaluation is presented, showing that these two approaches generate different results: the matrix evaluation promoted the development of new technology, while the Set-based process promoted a thorough understanding of the important design parameters of the current designs. Finally, this work presents a structured design process and computer tool for implementing the principles of SBCE. The process was demonstrated using information from an industrial development project, showing how the proposed process could implement the three principles of SBCE in a traditional Point-based development environment.
33

Dillard, Seth Ian. "Image based modeling of complex boundaries." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/950.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
One outstanding challenge to understanding the behaviors of organisms and other complexities found in nature through the use of computational fluid dynamics simulations lies in the ability to accurately model the highly tortuous geometries and motions they generally exhibit. Descriptions must be created in a manner that is amenable to definition within some operative computational domain, while at the same time remaining faithful to the essence of what is desired to be understood. Typically, models are created using functional approximations, so that complex objects are reduced to mathematically tractable representations. Such reductions can certainly lead to a great deal of insight, revealing trends by assigning parameterized motions and tracking their influence on a virtual surrounding environment. However, simplicity sometimes comes at the expense of fidelity; pared down to such a degree, simplified geometries evolving in prescribed fashions may fail to identify some of the essential physical mechanisms that make studying a system interesting to begin with. In this thesis, an alternative route to modeling complex geometries and behaviors is offered, basing its methodology on the coupling of image analysis and level set treatments. First, a semi-Lagrangian method is explored, whereby images are utilized as a means for creating a set of surface points that describe a moving object. Later, points are dispensed with altogether, giving in the end a fully Eulerian representation of complex moving geometries that requires no surface meshing and that translates imaged objects directly to level sets without unnecessary tedium. The final framework outlined here represents a completely novel approach to modeling that combines image denoising, segmentation, optical flow, and morphing with level set-based embedded sharp interface methods to produce models that would be difficult to generate any other way.
34

Ioan, Daniel. "Safe Navigation Strategies within Cluttered Environment." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG047.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis pertains to optimization-based navigation and control in multi-obstacle environments. The design problem is commonly stated in the literature in terms of a constrained optimization problem over a non-convex domain. Thus, building on the combination of Model Predictive Control and set-theoretic concepts, we develop a couple of constructive methods based on geometrical interpretation. In its first part, the thesis focuses on the representation of the multi-obstacle environment, based on a thorough analysis of recent results in the field. Hence, we opted to exploit a particular class of convex sets, endowed with the symmetry property, to model the environment, reduce complexity, and enhance performance. Furthermore, we solve an open problem in navigation within cluttered environments: the partitioning of the feasible space in accordance with the distribution of obstacles. The core of this methodology is the construction of a convex lifting, which boils down to convex optimization. We cover both the mathematical foundations and the computational details of the implementation. Finally, we illustrate the concepts with geometrical examples, and we complement the study by providing global feasibility guarantees and enhancing the effective control by operating at the strategic level.
35

Li, Min. "Numerical model building based on XFEM/level set method to simulate ledge freezing/melting in Hall-Héroult cell." Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27919.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
During the Hall-Héroult process for smelting aluminium, the ledge formed by freezing of the molten bath plays a significant role in maintaining the internal working condition of the cell in a stable state. The present work aims at building a two-dimensional (vertical-plane) numerical model to predict the ledge profile in the bath-ledge two-phase system by solving three coupled physical problems: the phase change problem (Stefan problem), the variation of bath composition, and the bath motion. For the sake of simplicity, the molten bath is regarded as a binary system in chemical composition. Solving the three problems, characterized by a freely moving internal boundary and the presence of discontinuities at that boundary, is always a challenge for conventional continuum-based methods. Therefore, as an alternative, the extended finite element method (XFEM) is used to handle the local discontinuities in each solution space, while the interface between phases is captured implicitly by the level set method. In the course of model building, the following subproblems are investigated by coupling each two of the problems mentioned above: 1) one-phase density-driven flow; 2) the Stefan problem without a convection mechanism in the binary system; 3) the Stefan problem with ensuing melt flow in a pure material. The accuracy of the corresponding sub-models is verified against analytical solutions or results obtained by conventional methods. Finally, the model coupling all three physics is applied to simulate the freezing/melting of the bath-ledge system under certain scenarios. In this final application, the bath flow is described by the Stokes equations and is induced either by the density jump between phases or by the buoyancy forces produced by temperature and/or compositional gradients. The present model is characterized by the coupling of multiple physics; in particular, the liquid density and the melting point depend on the species concentration. XFEM also exhibits its accuracy and flexibility in dealing with different types of discontinuity on a fixed mesh.
36

Ewald, Jens. "A level set based flamelet model for the prediction of combustion in homogeneous charge and direct injection spark ignition engines /." Göttingen : Cuvillier, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014901502&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Duramaz, Alper. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607721/index.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Recently, solutions to the problem of image segmentation and denoising have been developed based on the Mumford-Shah model. The model provides an energy functional, called the Mumford-Shah functional, which should be minimized. Since the minimization of the functional has some difficulties, approximate approaches have been proposed. Two such methods are the gradient flows method and the Chan-Vese active contour method. Performance evaluation in terms of speed shows that the gradient flows method converges to the boundaries of the smooth parts faster, but for hierarchical four-phase segmentation it is observed that this method sometimes gives unsatisfactory results. In this work, a fast hierarchical four-phase segmentation method is proposed in which the Chan-Vese active contour method is applied after the gradient flows method. After the segmentation process, the segmented regions are denoised using diffusion filters. Additionally, for low signal-to-noise ratio applications, a prefiltering scheme using nonlinear diffusion filters is included in the proposed method. Simulations have shown that the proposed method provides an effective solution to the image segmentation and denoising problem.
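For reference, the two energies discussed here are, in their standard forms (not reproduced from the thesis), the Mumford-Shah functional for an image f and the two-phase Chan-Vese energy derived from it:

```latex
\begin{align}
E_{\mathrm{MS}}(u, C) &= \int_{\Omega} (u - f)^2 \, dx
  + \mu \int_{\Omega \setminus C} |\nabla u|^2 \, dx + \nu \, |C|, \\
E_{\mathrm{CV}}(c_1, c_2, C) &= \mu \,\mathrm{Length}(C)
  + \lambda_1 \int_{\mathrm{in}(C)} |f - c_1|^2 \, dx
  + \lambda_2 \int_{\mathrm{out}(C)} |f - c_2|^2 \, dx .
\end{align}
```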
38

Mortensen, Clifton H. "A Computational Fluid Dynamics Feature Extraction Method Using Subjective Logic." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2208.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Computational fluid dynamics simulations are advancing to correctly simulate highly complex fluid flow problems that can require weeks of computation on expensive high-performance clusters. These simulations can generate terabytes of data and pose a severe challenge to a researcher analyzing the data. Presented in this document is a general method to extract computational fluid dynamics flow features concurrently with a simulation and as a post-processing step, to drastically reduce researcher post-processing time. This general method uses software agents governed by subjective logic to make decisions about extracted features in converging and converged data sets. The software agents are designed to work inside the Concurrent Agent-enabled Feature Extraction concept and operate efficiently on massively parallel high-performance computing clusters. Also presented is a specific application of the general feature extraction method to vortex core lines. Each agent's belief tuple is quantified using a pre-defined set of information. The information and functions necessary to set each component of an agent's belief tuple are given, along with an explanation of the methods for setting the components. A simulation of a blunt fin is run, showing convergence of the horseshoe vortex core to its final spatial location at 60% of the converged solution. Agents correctly select between two vortex core extraction algorithms and correctly identify the expected probabilities of vortex cores as the solution converges. A simulation of a delta wing is run, showing coherently extracted primary vortex cores as early as 16% of the converged solution. Agents select primary vortex cores extracted by the Sujudi-Haimes algorithm as the most probable primary cores. These simulations show that concurrent feature extraction is possible and that intelligent agents following the general feature extraction method are able to make appropriate decisions about converging and converged features based on pre-defined information.
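The thesis defines its own belief-tuple functions; as a generic reminder of how a subjective-logic opinion works (standard definition, with a purely illustrative feature-confidence mapping), a minimal Python sketch:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic binomial opinion: belief b, disbelief d,
    uncertainty u, base rate a. Invariant: b + d + u == 1."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def expected_probability(self) -> float:
        # Standard projected probability of a binomial opinion.
        return self.b + self.a * self.u

def feature_opinion(strength: float, evidence: float) -> Opinion:
    # Hypothetical mapping: more supporting evidence -> less uncertainty.
    u = 1.0 / (1.0 + evidence)
    return Opinion(b=strength * (1 - u), d=(1 - strength) * (1 - u), u=u)

core = feature_opinion(strength=0.8, evidence=9.0)  # e.g., a candidate vortex core
print(core, core.expected_probability())            # u = 0.1, E = 0.77
```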
39

Altinoklu, Metin Burak. "Image Segmentation Based On Variational Techniques." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610415/index.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In this thesis, image segmentation methods based on the Mumford–Shah variational approach have been studied. By obtaining an optimum point of the Mumford–Shah functional, which is a pair consisting of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. This piecewise smooth approximate image is smooth inside regions, but it is allowed to be discontinuous region-wise. Unfortunately, because of the irregularity of the Mumford–Shah functional, it cannot be directly used for image segmentation. On the other hand, there are several approaches to approximate the Mumford–Shah functional. In the first approach, suggested by Ambrosio-Tortorelli, it is regularized in a special way. The regularized functional (the Ambrosio-Tortorelli functional) is Gamma-convergent to the Mumford–Shah functional. In the second approach, the Mumford–Shah functional is minimized in two steps. In the first minimization step, the edge set is held constant and the resulting functional is minimized. The second minimization step updates the edge set by using level set methods. This second approximation to the Mumford–Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (the Euler-Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment. The overall performance of the algorithms has been investigated through computer simulations over a series of images from simple to complicated.
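The regularized functional referred to above is, in its usual form (with edge-indicator v and a small parameter epsilon; standard notation, not quoted from the thesis):

```latex
\begin{equation}
AT_{\varepsilon}(u, v) = \int_{\Omega} (u - f)^2 \, dx
  + \mu \int_{\Omega} v^2 \, |\nabla u|^2 \, dx
  + \nu \int_{\Omega} \Bigl( \varepsilon \, |\nabla v|^2
  + \frac{(1 - v)^2}{4\varepsilon} \Bigr) dx ,
\end{equation}
```

which Gamma-converges to the Mumford–Shah functional as ε → 0, with v ≈ 0 marking edges and v ≈ 1 elsewhere.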
40

Patuelli, Claudia. "Implementation, set up and validation of multiplex qualitative two-step RT-PCR based on TaqMan® method for the diagnosis of viruses in Vitis vinifera L." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Viticulture faces the problem of fighting pathogens, which entails substantial costs, including the replacement of infected plants. The most serious diseases are fanleaf degeneration, leafroll and the rugose wood complex, all caused by viral infections. The Chilean company Concha y Toro decided to study the presence of these viruses in its own vineyards in order to prevent their spread, applying Real Time PCR with the SYBR® Green and TaqMan® strategies. The objective of this research is to develop a protocol that detects the viruses using a multiplex strategy, in order to reduce analysis costs and times. It is also possible to include an internal control that guarantees the reliability of a negative result for the presence of a virus. Standard curves were built for the singleplex analyses of the viruses tested by the company and of the grapevine actin gene, in order to optimize the assay. This was followed by duplex analyses, first introducing the internal control and then testing combinations of viruses. The efficiencies of the standard curves were compared, verifying that the values remained within an established range and that there was no significant difference between each singleplex analysis and the corresponding duplex one. The same was repeated for a triplex protocol. The sensitivity and specificity of each assay were studied in order to validate the method. The results made it possible to optimize the singleplex protocols and confirmed the possibility of introducing the actin gene as an internal control in the analyses. Very good results were also achieved in many of the duplex virus combinations. In the future, it will be necessary to further study the sensitivity and specificity of the assays in order to validate the method and avoid false negatives or false positives.
41

Goddard, Aaron M. "A primarily Eulerian means of applying left ventricle boundary conditions for the purpose of patient-specific heart valve modeling." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6584.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Patient-specific multi-physics simulations have the potential to improve the diagnosis, treatment, and scientific inquiry of heart valve dynamics. It has been shown that the flow characteristics within the left ventricle are important to correctly capture the aortic and mitral valve motion and corresponding fluid dynamics, motivating the use of patient-specific imaging to describe the aortic and mitral valve geometries as well as the motion of the left ventricle (LV). The LV position can be captured at several time points in the cardiac cycle, such that its motion can be prescribed a priori as a Dirichlet boundary condition during a simulation. Valve leaflet motion, however, should be computed from soft-tissue models and incorporated using fully-coupled Fluid Structure Interaction (FSI) algorithms. While FSI simulations have in part or wholly been achieved by multiple groups, to date, no high-throughput models have been developed, which are needed for use in a clinical environment. This project seeks to enable patient-derived moving LV boundary conditions, and has been developed for use with a previously developed immersed boundary, fixed Cartesian grid FSI framework. One challenge in specifying LV motion from medical images stems from the low temporal resolution available. Typical imaging modalities contain only tens of images during the cardiac cycle to describe the change in position of the left ventricle. This temporal resolution is significantly lower than the time resolution needed to capture fluid dynamics of a highly deforming heart valve, and thus an approach to describe intermediate positions of the LV is necessary. Here, we propose a primarily Eulerian means of representing LV displacement. This is a natural extension, since an Eulerian framework is employed in the CFD model to describe the large displacement of the heart valve leaflets. This approach to using Eulerian interface representation is accomplished by applying “morphing” techniques commonly used in the field of computer graphics. For the approach developed in the current work, morphing is adapted to the unique characteristics of a Cartesian grid flow solver which presents challenges of adaptive mesh refinement, narrow band approach, parallel domain decomposition, and the need to supply a local surface velocity to the flow solver that describes both normal and tangential motion. This is accomplished by first generating a skeleton from the Eulerian interface representation, and deforming the skeleton between image frames to determine bulk displacement. After supplying bulk displacement, local displacement is determined using the Eulerian fields. The skeletons are also utilized to automate the simulation setup to track the locations upstream and downstream where the system inflow/outflow boundary conditions are to be applied, which in the current approach, are not limited to Cartesian domain boundaries.
42

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economical parameters that are required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent to commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from an information deficit during the planning phase and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters that are relevant for the optimisation of hydropower projects, and evaluates possible improvements with respect to accuracy and suitability of the calculated results.
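As a generic, minimal illustration of random-set propagation of the kind the thesis investigates (the focal intervals, masses, and the monotone power function below are hypothetical, not the thesis's model):

```python
import itertools

# Focal elements: (interval, mass) pairs for two uncertain inputs.
discharge = [((80.0, 100.0), 0.6), ((70.0, 110.0), 0.4)]   # m^3/s (hypothetical)
head      = [((40.0, 45.0),  0.7), ((35.0, 50.0),  0.3)]   # m (hypothetical)

def power(q, h, eta=0.9, rho=1000.0, g=9.81):
    return eta * rho * g * q * h / 1e6      # plant output in MW

# Propagate: for each combination of focal intervals, evaluate the function
# at the interval corners (valid here because power is monotone in q and h).
output_focals = []
for (qi, mq), (hi, mh) in itertools.product(discharge, head):
    corners = [power(q, h) for q in qi for h in hi]
    output_focals.append(((min(corners), max(corners)), mq * mh))

threshold = 30.0   # MW
belief = sum(m for (lo, hi), m in output_focals if lo >= threshold)        # certainly above
plausibility = sum(m for (lo, hi), m in output_focals if hi >= threshold)  # possibly above
print(belief, plausibility)   # lower/upper probability bounds for exceeding the threshold
```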
43

Li, Yilun. "Numerical methodologies for topology optimization of electromagnetic devices." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS228.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Topology optimization is the conceptual design of a product. Compared with conventional design approaches, it can create a novel topology, which could not be imagined beforehand, especially for the design of a product without prior experience or knowledge. Indeed, the topology optimization technique, with its ability to find efficient topologies starting from scratch, has become a serious asset for designers. Although it originated in structural optimization, topology optimization in the electromagnetic field has flourished in the past two decades. Nowadays, topology optimization has become a predominant engineering technique providing a quantitative design method for modern engineering design. However, due to its inherently complex nature, the development of applicable methods and strategies for topology optimization is still in progress. To address the typical problems and challenges encountered in an engineering optimization process, and considering the existing methods in the literature, this thesis focuses on topology optimization methods based on deterministic and stochastic algorithms. The main work and achievements can be summarized as follows. Firstly, to overcome the premature convergence to a local optimum of the existing ON/OFF method, a Tabu-ON/OFF method, an improved Quantum-inspired Evolutionary Algorithm (QEA) and an improved Genetic Algorithm (GA) are proposed successively. The characteristics of each algorithm are elaborated, and their performance is compared comprehensively. Secondly, to solve the intermediate density problem encountered in density-based methods and the engineering infeasibility of the final optimized topology, two topology optimization methods, namely Solid Isotropic Material with Penalization-Radial Basis Function (SIMP-RBF) and Level Set Method-Radial Basis Function (LSM-RBF), are proposed. Both methods calculate the sensitivity information of the objective function and use deterministic optimizers to guide the optimization process. For problems with a large number of design variables, the computational cost of the proposed methods is greatly reduced compared with that of methods relying on stochastic algorithms. At the same time, due to the introduction of an RBF data interpolation smoothing technique, the optimized topology is more suitable for actual production. Thirdly, to reduce the excessive computing cost when a stochastic search algorithm is used in topology optimization, a design variable redistribution strategy is proposed. In the proposed strategy, the whole search process of a topology optimization is divided into layers. The solution of the previous layer is set as the initial topology for the next optimization layer, and only elements adjacent to the boundary are chosen as design variables. Consequently, the number of design variables is reduced to some extent, and the computation time is thereby shortened. Finally, a multi-objective topology optimization methodology based on a hybrid multi-objective optimization algorithm combining the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Differential Evolution (DE) algorithm is proposed. Comparison results on test functions indicate that the performance of the proposed hybrid algorithm is better than those of the traditional NSGAII and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), which guarantees the global search ability of the proposed methodology and enables a designer to handle constraint conditions in a direct way. To validate the proposed topology optimization methodologies, two study cases are optimized and analyzed.
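For reference, the interpolation that gives SIMP its name, together with the canonical compliance formulation it is usually presented with (the structural setting where the method originated; the thesis applies the analogous interpolation to electromagnetic material properties):

```latex
\begin{equation}
E(\rho_e) = \rho_e^{\,p}\, E_0, \qquad 0 < \rho_{\min} \le \rho_e \le 1,
\end{equation}
\begin{equation}
\min_{\boldsymbol{\rho}} \; c(\boldsymbol{\rho})
  = \mathbf{u}^{T} \mathbf{K}(\boldsymbol{\rho})\, \mathbf{u}
\quad \text{s.t.} \quad \mathbf{K}(\boldsymbol{\rho})\,\mathbf{u} = \mathbf{f},
\qquad \sum_e \rho_e v_e \le V^{*},
\end{equation}
```

where p (typically 3) penalizes intermediate densities toward 0/1 designs.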
44

Li, Honghao. "Interpretable biological network reconstruction from observational data." Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis is focused on constraint-based methods, one of the basic types of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification that is applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating-set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and the limitations of MIIC, and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating-set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400,000 breast cancer records from the SEER database, as a large-scale real-life benchmark.
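To make the skeleton-reconstruction step concrete, a generic PC skeleton loop (not the consistent-separating-set variant the thesis proposes; `ci_test` is a placeholder for any conditional-independence test, e.g. a Fisher-z partial-correlation test):

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    """Generic PC skeleton search.

    ci_test(x, y, S) -> True if x is independent of y given the set S.
    Returns the undirected adjacency dict and the separating sets found.
    """
    adj = {v: set(nodes) - {v} for v in nodes}    # start from the complete graph
    sepset = {}
    level = 0                                     # size of conditioning sets
    while any(len(adj[x] - {y}) >= level for x in nodes for y in adj[x]):
        for x in nodes:
            for y in list(adj[x]):
                if y not in adj[x]:
                    continue                      # already removed this sweep
                # condition on subsets of the current neighbors of x (minus y)
                for S in combinations(adj[x] - {y}, level):
                    if ci_test(x, y, set(S)):
                        adj[x].discard(y); adj[y].discard(x)
                        sepset[(x, y)] = sepset[(y, x)] = set(S)
                        break
        level += 1
    return adj, sepset
```

The thesis's modification restricts the `combinations(...)` search to separating sets consistent with the previous iteration's graph; this sketch only shows the level-wise search that the restriction acts on.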
45

Kaelo, Professor. "Some Population Set-Based Methods for Unconstrained Global Optimization." Thesis, 2006. http://hdl.handle.net/10539/1771.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Student Number: 0214677F - PhD thesis - School of Computational and Applied Mathematics - Faculty of Science
Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation-based, making gradient-based methods impossible to use to solve them. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has widely been recognized to be one of the most attractive areas of unconstrained global optimization. Experiments have shown that through hybridization, new methods that inherit the strength of the original elements but not their weakness can be formed. We suggest a number of new hybridized population set-based methods based on differential evolution (de), controlled random search (crs2) and the real coded genetic algorithm (ga). We propose five new versions of de. In the first version, we introduce a localization, called random localization, in the mutation phase of de. In the second version, we propose a localization in the acceptance phase of de. In the third version, we form a de hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the de algorithm. The fourth and fifth versions are also de hybrids. These versions hybridize the mutation of de with the point generation rule of the electromagnetism-like (em) algorithm. We also propose five new versions of crs2. The first version modifies the point generation scheme of crs2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point generation scheme of crs2 with the linear interpolation scheme of a trust-region based method. The fourth version is a crs hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in crs2. In the fifth version, we form a crs2 hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the crs2 algorithm. Finally, we propose five new versions of the real coded genetic algorithm (ga) with arithmetic crossover. In the first version of ga, we introduce a local technique. We propose, in the second version, an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique in the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules. The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts, both in reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
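As a point of reference for the de variants described, the classic DE/rand/1/bin scheme that such hybrids start from (a minimal sketch; the localization and hybridization details are specific to the thesis and not reproduced here):

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=30, F=0.5, CR=0.9, iters=200, seed=0):
    """Classic differential evolution: rand/1 mutation + binomial crossover."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# toy usage: minimize the sphere function in 4 dimensions
best, val = de_rand_1_bin(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 4))
```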
46

"A finite element based level set method for structural topology optimization." Thesis, 2009. http://library.cuhk.edu.hk/record=b6074757.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A finite element (FE) based level set method is proposed for structural topology optimization problems in this thesis. The level set method has become a popular tool for structural topology optimization in recent years because of its ability to describe smooth structure boundaries and handle topological changes. There are commonly two stages in the optimization process: the stress analysis stage and the boundary evolution stage. The first stage is usually performed with the finite element method (FEM) while the second is often realized by solving the level set equation with the finite difference method (FDM). The first motivation for developing the proposed method is the desire to unify the techniques of both stages within a uniform framework. In addition, there are many problems involving irregular design domains in practice, the FEM is more powerful than the FDM in dealing with these problems. This is the second motivation for this study.
Numerical examples are included in this thesis to illustrate the reliability of the proposed method. Problems on both regular and irregular design domains are considered, and different meshes are tested and compared.
Solving the level set equation with the standard Galerkin FEM might produce unstable results because of the hyperbolic character of this equation. Therefore, the streamline diffusion finite element method (SDFEM), a stabilized method, is employed to solve the level set equation. In addition to the advantage of simplicity, this method generates a system of equations with a constant, symmetric, and positive definite coefficient matrix. Furthermore, this matrix can be diagonalized by virtue of the lumping technique used in structural dynamics. This makes the cost of solving and storage quite low. More importantly, the lumped coefficient matrix may help to improve stability under some circumstances.
The accuracy of the finite element based level set method (FELSM) is compared with that of the finite difference based level set method (FDLSM). The FELSM is a first-order accurate algorithm, but we prove that this accuracy is sufficient for the structural optimization problems considered in this study. Even when higher-order accurate FDLSM schemes are used, the numerical results remain the same as those obtained by the FELSM. It is also shown that if the Courant-Friedrichs-Lewy (CFL) number is large, the FELSM is more robust and accurate than the FDLSM.
The reinitialization equation is also solved with the SDFEM, and an extra diffusion term is added to improve stability near the boundary. We propose a criterion for selecting the factor of the diffusion term. Due to numerical errors and the diffusion term, the boundary will drift during the reinitialization process. To prevent the boundary from moving, a Dirichlet boundary condition is enforced. Within the framework of the FEM, this enforcement can be conveniently performed with the Lagrangian multiplier method or the penalty method.
Velocity extension is discussed in this thesis. A natural extension method and a partial differential equation (PDE)-based extension method are introduced. Some related topics, such as the "ersatz" material approach and the recovery of stresses, are discussed as well.
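The two equations at the center of this framework are, in their generic forms (standard level-set notation, not quoted from the thesis), the evolution and reinitialization equations:

```latex
\begin{align}
\frac{\partial \phi}{\partial t} + V_n \,\lvert \nabla \phi \rvert &= 0, \\
\frac{\partial \phi}{\partial \tau} &= \operatorname{sign}(\phi_0)\,
  \bigl( 1 - \lvert \nabla \phi \rvert \bigr),
\end{align}
```

where V_n is the (extended) normal velocity and phi_0 the level-set function before reinitialization; the second equation restores the signed-distance property |∇φ| = 1 near the boundary.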
Xing, Xianghua.
Adviser: Michael Yu Wang.
Source: Dissertation Abstracts International, Volume: 71-01, Section: B, page: 0628.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 102-113).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
47

HUANG, MING-CHONG, and 黃銘崇. "Location-search-based solution methods for the set covering problem." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/99011874033646558249.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Zhao, Kaiqiong. "Gene-pair based statistical methods for testing gene set enrichment in microarray gene expression studies." 2016. http://hdl.handle.net/1993/31796.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Gene set enrichment analysis aims to discover sets of genes, such as biological pathways or protein complexes, that may show moderate but coordinated differentiation across experimental conditions. Existing gene set enrichment approaches use a single-gene statistic as the measure of differentiation for individual genes. These approaches do not exploit inter-gene correlations, even though genes in a pathway are known to interact with each other. Motivated by the need to take gene dependence into account, we propose a novel gene set enrichment algorithm in which gene-gene correlation is addressed via a gene-pair representation strategy. Relying on an appropriately defined gene-pair statistic, the gene-set statistic is formulated under a competitive null hypothesis. Extensive simulation studies show that the proposed approach correctly controls the type I error (false positive rate) and retains good statistical power for detecting true differential expression. The new method is also applied to analyze several gene expression datasets.
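The abstract does not define the gene-pair statistic itself. The sketch below illustrates the general idea of a gene-pair based competitive permutation test; the pair score (change in pairwise correlation between the two conditions) and all names are illustrative assumptions, not the thesis's method:

```python
# Hypothetical sketch of a gene-pair based competitive enrichment
# test. The pair score and all names below are assumptions for
# illustration, not the statistic proposed in the thesis.
import numpy as np
from itertools import combinations

def pair_score(expr_a, expr_b, i, j):
    """Score for gene pair (i, j): absolute change in Pearson
    correlation between conditions A and B (assumed score)."""
    r_a = np.corrcoef(expr_a[i], expr_a[j])[0, 1]
    r_b = np.corrcoef(expr_b[i], expr_b[j])[0, 1]
    return abs(r_a - r_b)

def set_statistic(expr_a, expr_b, genes):
    """Average pair score over all gene pairs within the set."""
    return np.mean([pair_score(expr_a, expr_b, i, j)
                    for i, j in combinations(genes, 2)])

def competitive_pvalue(expr_a, expr_b, gene_set, n_perm=1000, seed=0):
    """Competitive null: compare the set statistic with statistics of
    randomly drawn gene sets of the same size (gene sampling)."""
    rng = np.random.default_rng(seed)
    n_genes = expr_a.shape[0]          # rows: genes, columns: samples
    observed = set_statistic(expr_a, expr_b, gene_set)
    null = [set_statistic(expr_a, expr_b,
                          rng.choice(n_genes, size=len(gene_set),
                                     replace=False))
            for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)
```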
October 2016
49

Tang, Yu-Chuan, and 湯育全. "Methods based on distance statistics for detection of differentially expressed genes and gene set enrichment analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4bnub4.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Agronomy
107
The first part of this paper studies the effectiveness of differentially expressed gene analysis. Statistical methods such as the t-test or SAM treat each gene as independent and separately identify whether it is differentially expressed. However, the test results may be biased because of correlations between genes. Therefore, a novel statistic called the OR value was recently proposed for identifying differentially expressed genes. Its advantages are that it requires no model assumptions and no parameter estimation, and that it uses the Euclidean distance to account for both the correlation between genes and the dispersion of the data. In this paper, the multivariate normal distribution, the multivariate t distribution, and a mixture distribution are used to simulate gene expression data; the OR value is then used to identify differentially expressed genes and is compared with the commonly used t-test and other non-OR methods. The results show that the weighted quantile difference method based on the OR value performs well in all cases, especially under the multivariate t distribution with a high correlation coefficient and under the mixture distribution with a shift amount greater than 0. The second aim of this paper is gene set analysis (GSA) under the self-contained hypothesis. The GSA methods are adjusted using the statistics from the first part and compared with commonly used gene set analysis methods. The results show that only under the multivariate t distribution do the distance-based methods, such as the sum of quantile differences, the sum of weighted quantile differences, and the energy test, perform better than the other methods; under other conditions no method clearly outperforms the rest. Finally, we applied the OR-based method and competing methods to a large-scale dataset from a group of breast cancer patients to perform differentially expressed gene and gene set analyses. In summary, the OR value is a worthwhile method for differentially expressed gene analysis, but a more robust statistic may be needed to extend the analysis to the gene-set level.
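Among the distance-based methods mentioned, the energy test has a standard published form (the Székely-Rizzo energy statistic). Below is a minimal sketch of the two-sample energy statistic with a permutation p-value, written independently of the thesis; it does not reproduce the OR statistic:

```python
# Minimal sketch of the two-sample energy test (Szekely-Rizzo energy
# statistic); written independently of the thesis.
import numpy as np
from scipy.spatial.distance import cdist

def energy_statistic(x, y):
    """Energy distance between samples x (n x p) and y (m x p):
    2*E|X-Y| - E|X-X'| - E|Y-Y'| with empirical means."""
    dxy = cdist(x, y).mean()   # mean cross-sample distance
    dxx = cdist(x, x).mean()   # includes zero diagonal (1/n^2 factor)
    dyy = cdist(y, y).mean()
    return 2.0 * dxy - dxx - dyy

def energy_test(x, y, n_perm=999, seed=0):
    """Permutation p-value for the two-sample energy test."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    n = len(x)
    observed = energy_statistic(x, y)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if energy_statistic(pooled[idx[:n]], pooled[idx[n:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```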
50

Lin, Yi, and 林以. "Enhancing Decision Prediction Based on Rough Set Method." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63553732729976697390.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis
Southern Taiwan University of Science and Technology
Department of Information Management
97
Data mining is a fast-developing tool for extracting useful information from abundant data and building association rules for decision support or classification prediction. Nevertheless, the number of association rules derived from data mining is usually enormous, which makes it difficult to find the essential ones. Rough set theory is a mathematical instrument in data mining that is often used to extract vital knowledge from abundant data; it can also eliminate redundant information and establish effective association rules from an information table. Rough set theory is regularly applied to discrete data, yet a large amount of information in practical applications consists of numeric data. Thus, the aim of this research is to find important rules and enhance decision prediction for numeric data via rough set theory. This research uses rough set theory to remove unnecessary data and to search for reducts that preserve the complete information of an information table constructed from numeric data. Two key approaches to enhance decision prediction are then proposed for the reduct rules: (1) setting rule weights for the rules derived from reducts and discovering the critical decision rules on the basis of rule weight; and (2) placing a threshold on the decision rules of a reduct and exploring the decision rules that pass the threshold tests under the original decision table and most of its subtables (so-called "stable" decision rules). Experiments are implemented in Matlab and compared with the well-known rough set analysis tool, the Rough Set Exploration System (RSES). The results show that the proposed approaches can find important rules and enhance decision prediction.
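The rule-weight and threshold procedures are specific to the thesis, but the rough set constructions they build on (indiscernibility classes and lower/upper approximations) are standard. A minimal sketch with an illustrative toy decision table:

```python
# Sketch of the standard rough set constructions underlying the
# abstract: indiscernibility classes and lower/upper approximations.
# The thesis's rule-weight and threshold procedures are not shown.
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Group objects that agree on all of the given attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(obj)
    return list(classes.values())

def lower_upper_approximation(classes, target):
    """Lower approx: classes fully inside the target concept.
    Upper approx: classes that intersect the target concept."""
    lower, upper = set(), set()
    for cls in classes:
        if cls <= target:
            lower |= cls
        if cls & target:
            upper |= cls
    return lower, upper

# Toy decision table: object -> {attribute: value} (illustrative data).
table = {
    "x1": {"a": 1, "b": 0, "d": "yes"},
    "x2": {"a": 1, "b": 0, "d": "no"},
    "x3": {"a": 0, "b": 1, "d": "yes"},
    "x4": {"a": 0, "b": 1, "d": "yes"},
}
target = {o for o, r in table.items() if r["d"] == "yes"}
classes = indiscernibility_classes(table, ["a", "b"])
print(lower_upper_approximation(classes, target))
# lower = {x3, x4}; upper = {x1, x2, x3, x4}
```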

To the bibliography