Academic literature on the topic 'Ensemble non dominé' (nondominated set)

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ensemble non dominé.'


Journal articles on the topic "Ensemble non dominé"

1

Moffette, David. "Propositions pour une sociologie pragmatique des frontières : multiples acteurs, pratiques spatio-temporelles et jeux de juridictions." Cahiers de recherche sociologique, no. 59-60 (June 15, 2016): 61–78. http://dx.doi.org/10.7202/1036786ar.

Abstract:
Although sociologists have done much work on related objects, the study of borders remains a research field dominated by geographers and political scientists. It is they who proposed that the border be considered not as a spatially situated physical object, but rather as a set of practices of dispersed actors. We argue that by adopting a pragmatic approach to borders, one that emphasizes the multiplicity of actors involved, their socio-temporal practices, and their jurisdictional games, sociologists can push the limits of this field of research. Moreover, by encouraging sociologists to reflect on the spatial, temporal, and jurisdictional dimensions of social practices, the 'sociology of borders' proposed here can facilitate a renewal of sociological analysis and help us not only to avoid reifying the social, but also to avoid distinguishing it a priori from the spatial, the temporal, and the juridical.
2

Hsu, Kuo-Wei. "A Theoretical Analysis of Why Hybrid Ensembles Work." Computational Intelligence and Neuroscience 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/1930702.

Abstract:
Inspired by the group decision-making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains open. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we thus provide a complement to the theoretical foundation for creating and using hybrid ensembles.
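The decision-level fusion behind such a hybrid ensemble can be sketched in a few lines: two deliberately different model families vote by averaging their predicted class-1 probabilities. A minimal sketch in plain Python; the hand-rolled decision stump and one-dimensional Gaussian naive Bayes, the toy data, and all names below are illustrative, not from the paper.

```python
import math
import statistics

# Toy 1-D data: class 0 clusters near 1, class 1 clusters near 5.
X = [0.5, 1.0, 1.5, 4.0, 5.0, 5.5]
y = [0, 0, 0, 1, 1, 1]

class Stump:
    """Decision stump: thresholds at the midpoint between the class means."""
    def fit(self, X, y):
        m0 = statistics.mean(x for x, c in zip(X, y) if c == 0)
        m1 = statistics.mean(x for x, c in zip(X, y) if c == 1)
        self.t = (m0 + m1) / 2
        return self
    def predict_proba(self, x):
        # Hard split, softened slightly so averaging with another model matters.
        return 0.9 if x > self.t else 0.1  # P(class 1)

class GaussianNB1D:
    """Naive Bayes with one Gaussian per class (equal priors assumed)."""
    def fit(self, X, y):
        self.params = {}
        for c in (0, 1):
            xs = [x for x, cc in zip(X, y) if cc == c]
            self.params[c] = (statistics.mean(xs), statistics.stdev(xs))
        return self
    def predict_proba(self, x):
        def lik(c):
            m, s = self.params[c]
            return math.exp(-((x - m) ** 2) / (2 * s * s)) / s
        l0, l1 = lik(0), lik(1)
        return l1 / (l0 + l1)  # P(class 1)

# Hybrid ensemble: average the class-1 probabilities of two model types.
models = [Stump().fit(X, y), GaussianNB1D().fit(X, y)]

def hybrid_predict(x):
    p1 = sum(m.predict_proba(x) for m in models) / len(models)
    return int(p1 >= 0.5)

print([hybrid_predict(x) for x in [1.2, 4.8]])  # [0, 1]
```

The point of the sketch is only the combination step: each member contributes a probability, and the averaged vote decides.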
3

Bogaert, Matthias, and Lex Delaere. "Ensemble Methods in Customer Churn Prediction: A Comparative Analysis of the State-of-the-Art." Mathematics 11, no. 5 (February 24, 2023): 1137. http://dx.doi.org/10.3390/math11051137.

Abstract:
In the past, several single classifiers as well as homogeneous and heterogeneous ensembles have been proposed to detect the customers who are most likely to churn. Despite the popularity and accuracy of heterogeneous ensembles in various domains, they have not yet been adopted in customer churn prediction models. Moreover, there are other developments at the performance evaluation and model comparison level that have not been introduced in a systematic way. Therefore, the aim of this study is to perform a large-scale benchmark study in customer churn prediction implementing these novel methods. To do so, we benchmark 33 classifiers, including 6 single classifiers, 14 homogeneous ensembles, and 13 heterogeneous ensembles, across 11 datasets. Our findings indicate that heterogeneous ensembles are consistently ranked higher than homogeneous ensembles and single classifiers. A heterogeneous ensemble with simulated annealing classifier selection is ranked the highest in terms of AUC and expected maximum profits. For accuracy, F1 measure and top-decile lift, a heterogeneous ensemble optimized by non-negative binomial likelihood and a stacked heterogeneous ensemble are, respectively, the top-ranked classifiers. Our study contributes to the literature by being the first to include such an extensive set of classifiers, performance metrics, and statistical tests in a benchmark study of customer churn.
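For readers unfamiliar with one of the metrics named above: top-decile lift compares the churn rate among the 10% of customers with the highest model scores to the overall churn rate. A minimal sketch with made-up numbers; the function name and data are illustrative, not from the study.

```python
def top_decile_lift(scores, labels):
    """Top-decile lift: churn rate among the 10% of customers with the
    highest predicted scores, divided by the overall churn rate."""
    n = len(scores)
    k = max(1, n // 10)  # size of the top decile
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    top_rate = sum(lbl for _, lbl in ranked[:k]) / k
    overall = sum(labels) / n
    return top_rate / overall

# 20 customers, 4 churners (20% base rate); the model scores churners highly.
scores = [0.9, 0.8, 0.7, 0.6] + [0.1] * 16
labels = [1, 1, 0, 1] + [0] * 15 + [1]
print(top_decile_lift(scores, labels))  # 5.0
```

A lift of 5 means targeting the model's top decile reaches churners five times more efficiently than contacting customers at random.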
4

Kioutsioukis, Ioannis, Ulas Im, Efisio Solazzo, Roberto Bianconi, Alba Badia, Alessandra Balzarini, Rocío Baró, et al. "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data." Atmospheric Chemistry and Physics 16, no. 24 (December 20, 2016): 15629–52. http://dx.doi.org/10.5194/acp-16-15629-2016.

Abstract:
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each station's best deterministic model at no more than 60 % of the sites, indicating a combination of members with unbalanced skill difference and error dependence for the rest. The promotion of the right amount of accuracy and diversity within the ensemble results in an average additional skill of up to 31 % compared to using the full ensemble in an unconditional way. The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint distribution of accuracy and diversity in the ensembles. The skill enhancement was superior using the weighting scheme, but the training period required to acquire representative weights was longer compared to the sub-selecting schemes. Further development of the method is discussed in the conclusion.
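As a toy illustration of the skill-based member weighting contrasted with the unconditional ensemble mean above (the schemes evaluated in the paper are more elaborate): each member can be weighted by the inverse of its mean squared error over a training period, so accurate members dominate the combination. The data and function names below are invented for illustration.

```python
def weighted_ensemble(forecasts, obs_past, past_forecasts):
    """Combine member forecasts with weights inversely proportional to each
    member's mean squared error (MSE) over a past training period."""
    def mse(preds):
        return sum((p - o) ** 2 for p, o in zip(preds, obs_past)) / len(obs_past)
    inv = [1.0 / mse(member) for member in past_forecasts]  # assumes MSE > 0
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * f for w, f in zip(weights, forecasts))

# Two members over a 3-step training period: one nearly unbiased, one +4 biased.
obs_past = [10.0, 12.0, 11.0]
past_forecasts = [[10.1, 12.1, 11.1],   # accurate member (+0.1 bias)
                  [14.0, 16.0, 15.0]]   # poor member (+4.0 bias)

# Today's member forecasts are 11.0 and 15.0; the plain mean would give 13.0.
print(weighted_ensemble([11.0, 15.0], obs_past, past_forecasts))
# close to 11.0: the accurate member dominates the combination
```

The unconditional ensemble mean weighs all members equally; the weighted version needs a training period long enough for the weights to be representative, which is exactly the trade-off the abstract reports.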
5

Parshin, Alexander M. "Modulation of Light Transmission in Self-Organized Ensembles of Nematic Domains." Liquid Crystals and their Application 23, no. 4 (December 26, 2023): 49–57. http://dx.doi.org/10.18083/lcappl.2023.4.49.

Abstract:
The transmission of light passing through self-organized ensembles of nematic domains is studied under an applied electric field. Modulation characteristics of ensembles containing domains with non-oriented and magnetic-field-oriented disclination lines are compared. A significant decrease in light scattering is shown when the disclination lines are oriented. The calculated dependences of light transmission on the electric voltage applied to the liquid crystal cells are obtained, presented, and shown to agree well with experiment. Oscillations in the light-transmission curves are studied. Superpositions of ordinary and extraordinary waves propagating through the domain ensembles and homogeneous planar liquid crystal layers are considered. The spectral characteristics of the ensembles are presented.
6

Karim, Zainoolabadien, and Terence L. van Zyl. "Deep/Transfer Learning with Feature Space Ensemble Networks (FeatSpaceEnsNets) and Average Ensemble Networks (AvgEnsNets) for Change Detection Using DInSAR Sentinel-1 and Optical Sentinel-2 Satellite Data Fusion." Remote Sensing 13, no. 21 (October 31, 2021): 4394. http://dx.doi.org/10.3390/rs13214394.

Abstract:
Differential interferometric synthetic aperture radar (DInSAR), coherence, phase, and displacement are derived from processing SAR images to monitor geological phenomena and urban change. Previously, Sentinel-1 SAR data combined with Sentinel-2 optical imagery has improved classification accuracy in various domains. However, the fusing of Sentinel-1 DInSAR processed imagery with Sentinel-2 optical imagery has not been thoroughly investigated. Thus, we explored this fusion in urban change detection by creating a verified balanced binary classification dataset comprising 1440 blobs. Machine learning models using feature descriptors and non-deep learning classifiers, including a two-layer convolutional neural network (ConvNet2), were used as baselines. Transfer learning by feature extraction (TLFE) using various pre-trained models, deep learning from random initialization, and transfer learning by fine-tuning (TLFT) were all evaluated. We introduce a feature space ensemble family (FeatSpaceEnsNet), an average ensemble family (AvgEnsNet), and a hybrid ensemble family (HybridEnsNet) of TLFE neural networks. The FeatSpaceEnsNets combine TLFE features directly in the feature space using logistic regression. AvgEnsNets combine TLFEs at the decision level by aggregation. HybridEnsNets are a combination of FeatSpaceEnsNets and AvgEnsNets. Several FeatSpaceEnsNets, AvgEnsNets, and HybridEnsNets, comprising a heterogeneous mixture of different depth and architecture models, are defined and evaluated. We show that, in general, TLFE outperforms both TLFT and classic deep learning for the small dataset used and that larger ensembles of TLFE models do not always improve accuracy. The best performing ensemble is an AvgEnsNet (84.862%) comprised of a ResNet50, ResNeXt50, and EfficientNet B4. This was matched by a similarly composed FeatSpaceEnsNet with an F1 score of 0.001 and variance of 0.266 less. The best performing HybridEnsNet had an accuracy of 84.775%. All of the ensembles evaluated outperform the best performing single model, ResNet50 with TLFE (83.751%), except for AvgEnsNet 3, AvgEnsNet 6, and FeatSpaceEnsNet 5. Five of the seven similarly composed FeatSpaceEnsNets outperform the corresponding AvgEnsNet.
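The decision-level aggregation behind the AvgEnsNets can be sketched generically: average the per-class probabilities emitted by several models and take the argmax. This is only a hedged illustration; the paper's members are deep networks, while the numbers and the function name below are invented toy outputs.

```python
def average_ensemble(prob_outputs):
    """Decision-level fusion: average the per-class probabilities emitted by
    several models, then pick the argmax class."""
    n_models = len(prob_outputs)
    n_classes = len(prob_outputs[0])
    avg = [sum(p[c] for p in prob_outputs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Three hypothetical models scoring a binary "no change / change" blob.
probs = [[0.30, 0.70], [0.55, 0.45], [0.20, 0.80]]
pred, avg = average_ensemble(probs)
print(pred, [round(a, 2) for a in avg])  # 1 [0.35, 0.65]
```

Note that the middle model disagrees on its own, but the averaged vote still selects class 1; this robustness to individual errors is the usual motivation for decision-level ensembling.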
7

Santana-Falcón, Yeray, Pierre Brasseur, Jean Michel Brankart, and Florent Garnier. "Assimilation of chlorophyll data into a stochastic ensemble simulation for the North Atlantic Ocean." Ocean Science 16, no. 5 (October 29, 2020): 1297–315. http://dx.doi.org/10.5194/os-16-1297-2020.

Abstract:
Satellite-derived surface chlorophyll data are assimilated daily into a three-dimensional 24-member ensemble configuration of an online-coupled NEMO (Nucleus for European Modeling of the Ocean)–PISCES (Pelagic Interaction Scheme of Carbon and Ecosystem Studies) model for the North Atlantic Ocean. A 1-year multivariate assimilation experiment is performed to evaluate the impacts on analyses and forecast ensembles. Our results demonstrate that the integration of data improves surface analysis and forecast chlorophyll representation in a major part of the model domain, where the assimilated simulation outperforms the probabilistic skills of a non-assimilated analogous simulation. However, improvements are dependent on the reliability of the prior free ensemble. A regional diagnosis shows that surface chlorophyll is overestimated in the northern limit of the subtropical North Atlantic, where the prior ensemble spread does not cover the observation's variability. There, the system cannot deal with corrections that alter the equilibrium between the observed and unobserved state variables, producing instabilities that propagate into the forecast. To alleviate these inconsistencies, a 1-month sensitivity experiment in which the assimilation process is only applied to model fluctuations is performed. Results suggest the use of this methodology may decrease the effect of corrections on the correlations between state vectors. Overall, the experiments presented here evidence the need of refining the description of the model's uncertainties according to the biogeochemical characteristics of each oceanic region.
8

Kotlarski, S., K. Keuler, O. B. Christensen, A. Colette, M. Déqué, A. Gobiet, K. Goergen, et al. "Regional climate modeling on European scales: a joint standard evaluation of the EURO-CORDEX RCM ensemble." Geoscientific Model Development Discussions 7, no. 1 (January 14, 2014): 217–93. http://dx.doi.org/10.5194/gmdd-7-217-2014.

Abstract:
EURO-CORDEX is an international climate downscaling initiative that aims to provide high-resolution climate scenarios for Europe. Here an evaluation of the ERA-Interim-driven EURO-CORDEX regional climate model (RCM) ensemble is presented. The study documents the performance of the individual models in representing the basic spatio-temporal patterns of the European climate for the period 1989–2008. Model evaluation focuses on near-surface air temperature and precipitation, and uses the E-OBS dataset as observational reference. The ensemble consists of 17 simulations carried out by seven different models at grid resolutions of 12 km (nine experiments) and 50 km (eight experiments). Several performance metrics computed from monthly and seasonal mean values are used to assess model performance over eight sub-domains of the European continent. Results are compared to those for the ERA40-driven ENSEMBLES simulations. The analysis confirms the ability of RCMs to capture the basic features of the European climate, including its variability in space and time. But it also identifies non-negligible deficiencies of the simulations for selected metrics, regions and seasons. Seasonally and regionally averaged temperature biases are mostly smaller than 1.5 °C, while precipitation biases are typically located in the ±40% range. Some bias characteristics, such as a predominant cold and wet bias in most seasons and over most parts of Europe and a warm and dry summer bias over southern and south-eastern Europe, reflect common model biases. For seasonal mean quantities averaged over large European sub-domains, no clear benefit of an increased spatial resolution (12 km vs. 50 km) can be identified. The bias ranges of the EURO-CORDEX ensemble mostly correspond to those of the ENSEMBLES simulations, but some improvements in model performance can be identified (e.g., a less pronounced southern European warm summer bias). The temperature bias spread across different configurations of one individual model can be of a similar magnitude as the spread across different models, demonstrating a strong influence of the specific choices in physical parameterizations and experimental setup on model performance. Based on a number of simply reproducible metrics, the present study quantifies the currently achievable accuracy of RCMs used for regional climate simulations over Europe and provides a quality standard for future model developments.
9

Akemann, Gernot, Markus Ebke, and Iván Parra. "Skew-Orthogonal Polynomials in the Complex Plane and Their Bergman-Like Kernels." Communications in Mathematical Physics 389, no. 1 (October 27, 2021): 621–59. http://dx.doi.org/10.1007/s00220-021-04230-8.

Abstract:
Non-Hermitian random matrices with symplectic symmetry provide examples for Pfaffian point processes in the complex plane. These point processes are characterised by a matrix valued kernel of skew-orthogonal polynomials. We develop their theory in providing an explicit construction of skew-orthogonal polynomials in terms of orthogonal polynomials that satisfy a three-term recurrence relation, for general weight functions in the complex plane. New examples for symplectic ensembles are provided, based on recent developments in orthogonal polynomials on planar domains or curves in the complex plane. Furthermore, Bergman-like kernels of skew-orthogonal Hermite and Laguerre polynomials are derived, from which the conjectured universality of the elliptic symplectic Ginibre ensemble and its chiral partner follow in the limit of strong non-Hermiticity at the origin. A Christoffel perturbation of skew-orthogonal polynomials as it appears in applications to quantum field theory is provided.
10

Adler, Mark, and Pierre van Moerbeke. "Double interlacing in random tiling models." Journal of Mathematical Physics 64, no. 3 (March 1, 2023): 033509. http://dx.doi.org/10.1063/5.0093542.

Abstract:
Random tilings of very large domains will typically lead to a solid, a liquid, and a gas phase. In the two-phase case, the solid–liquid boundary (arctic curve) is smooth, possibly with singularities. At the point of tangency of the arctic curve with the domain boundary, for large-sized domains, the tiles of a certain shape form a singly interlacing set, fluctuating according to the eigenvalues of the principal minors of a Gaussian unitary ensemble-matrix. Introducing non-convexities in large domains may lead to the appearance of several interacting liquid regions: They can merely touch, leading to either a split tacnode (hard tacnode), with two distinct adjacent frozen phases descending into the tacnode, or a soft tacnode. For appropriate scaling of the non-convex domains and probing about such split tacnodes, filaments, evolving in a bricklike sea of dimers of another type, will connect the liquid patches. Nearby, the tiling fluctuations are governed by a discrete tacnode kernel—i.e., a determinantal point process on a doubly interlacing set of dots belonging to a discrete array of parallel lines. This kernel enables us to compute the joint distribution of the dots along those lines. This kernel appears in two very different models: (i) domino tilings of skew-Aztec rectangles and (ii) lozenge tilings of hexagons with cuts along opposite edges. Soft tacnodes appear when two arctic curves gently touch each other amid a bricklike sea of dimers of one type, unlike the split tacnode. We hope that this largely expository paper will provide a view on the subject and be accessible to a wider audience.

Dissertations / Theses on the topic "Ensemble non dominé"

1

Tamby, Satya. "Approches génériques pour la résolution de problèmes d'optimisation discrète multiobjectif." Electronic Thesis or Diss., Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLED048.

Abstract:
When decision problems involve several conflicting criteria, the notion of an optimum no longer really makes sense, and decision makers must consider all the possible trade-offs. Even though the dominated trade-offs, i.e. those worse than another on every criterion, can be eliminated, the remaining set is all the harder to determine because it may contain very many elements. We focus here on multicriteria combinatorial optimization problems and, so that our method adapts to a large number of problems, we use integer mathematical programming to define the set of feasible solutions. Solutions of interest are the efficient solutions, which have the property that an improvement on one objective entails a degradation on another; the images of such solutions are referred to as nondominated points. We consider the standard problem of computing the set of nondominated points and providing a corresponding efficient solution for each point.
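The notion of nondominated points in this abstract can be made concrete in a few lines: for minimization, a point is nondominated if no other point is at least as good on every objective and strictly better on at least one. A minimal sketch (the function names and toy data are illustrative, not from the thesis):

```python
def dominates(a, b):
    """True if point a dominates point b (minimization): a is no worse on
    every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the nondominated points among a finite set of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 2), (4, 1), (3, 3), (2, 6)]
print(nondominated(points))  # [(1, 5), (2, 2), (4, 1)]
```

Here (3, 3) is dominated by (2, 2) and (2, 6) by (1, 5); the three surviving points are mutually incomparable trade-offs. This quadratic-time filter is only for illustration; the thesis concerns generating such sets for implicitly defined feasible regions, which is much harder.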
2

Jamain, Florian. "Représentations discrètes de l'ensemble des points non dominés pour des problèmes d'optimisation multi-objectifs." Phd thesis, Université Paris Dauphine - Paris IX, 2014. http://tel.archives-ouvertes.fr/tel-01070041.

Abstract:
The aim of this thesis is to propose general methods to circumvent the intractability of multiobjective optimization problems. First, we try to gauge the extent of this intractability by determining an easily computable upper bound on the number of nondominated points, given the number of values taken on each criterion. We then focus on producing discrete and tractable representations of the set of nondominated points for any instance of a multiobjective optimization problem. These representations must satisfy conditions of coverage, i.e. provide a good approximation; of cardinality, i.e. not contain too many points; and, if possible, of stability, i.e. not contain redundancies. Building on work aimed at producing small ε-Pareto sets, we first propose a direct extension of that work, then focus our research on ε-Pareto sets satisfying an additional stability condition. Formally, we consider particular ε-Pareto sets, called (ε, ε′)-kernels, which satisfy a stability property related to ε′. We establish general results on (ε, ε′)-kernels, propose polynomial-time algorithms that produce small (ε, ε′)-kernels in the biobjective case, and give negative results for more than two objectives.
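The ε-Pareto representations discussed in this thesis can be sketched with a simple greedy cover: retain a point only if no already-retained point (1 + ε)-covers it on every objective. This illustrates the coverage and cardinality conditions only; the (ε, ε′)-kernel stability condition is not implemented, the greedy scan is not guaranteed to be minimal, and all names and data below are illustrative.

```python
def eps_pareto(points, eps):
    """Greedy ε-Pareto representation for biobjective minimization: keep a
    subset R such that every input point p has some r in R with
    r_i <= (1 + eps) * p_i on both objectives."""
    def covers(r, p):
        return all(ri <= (1 + eps) * pi for ri, pi in zip(r, p))
    rep = []
    for p in sorted(points):  # scan in increasing order of the first objective
        if not any(covers(r, p) for r in rep):
            rep.append(p)
    return rep

# A nondominated front with near-duplicate points that an ε-cover can drop.
front = [(1.0, 10.0), (1.05, 9.5), (2.0, 5.0), (2.1, 4.9), (4.0, 1.0)]
print(eps_pareto(front, 0.1))  # [(1.0, 10.0), (2.0, 5.0), (4.0, 1.0)]
```

With ε = 0.1, the points (1.05, 9.5) and (2.1, 4.9) are within 10% of retained representatives on both objectives, so the representation shrinks from five points to three while still approximating every trade-off.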
3

Riah, Rachid. "Théorie des ensembles pour le contrôle robuste des systèmes non linéaires : Application à la chimiothérapie et les thérapies anti-angiogéniques." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT090/document.

Abstract:
This thesis aims at using mathematical modeling together with advanced control tools to guide therapies that ensure contraction of the tumor. Its goals are to contribute to the development of set-theoretic methods for the robust control of nonlinear systems and to develop numerical tools for the analysis and control of tumor growth in the presence of chemotherapy and/or anti-angiogenic therapy. Generically, in the context of control theory, techniques that are theoretically based on properties of subsets of the system state space can be referred to as set-theoretic methods. In the first part, we review the definitions, concepts, and tools of the set-theoretic methods existing in the literature for addressing control problems for linear and nonlinear systems with hard constraints and uncertainties. In this context, we are interested in two properties of sets: invariance and contractiveness. Problems associated with the stability of systems may be formulated in terms of the computation of their domains of attraction, so we recall methods from the literature for characterizing these domains for linear and nonlinear systems. An important application of these methods is the control of tumor growth in the presence of different treatments, since several constraints can be imposed to avoid patient intoxication during the treatments and set-theoretic methods can take such constraints into account easily. For this application, we propose a methodology to estimate the domains of attraction for the mathematical models chosen to simulate tumor growth.
In the second part, we propose set-theoretic methods for characterizing the domains of attraction of uncertain nonlinear systems. First, we develop sufficient conditions for the invariance and contractiveness of an ellipsoid for saturated systems; these conditions implicitly determine a local quadratic Lyapunov function. We show that the proposed approach is less conservative than those in the literature and give an algorithm for characterizing the invariant and contractive ellipsoid. For uncertain nonlinear systems, we develop a sufficient condition for robust controlled invariance in the case of parametric uncertainties, and a method based on this condition for characterizing the domains of attraction of systems with such uncertainties. We then focus on nonlinear systems with additive uncertainties and give a further method for characterizing their domains of attraction. These methods are easily tractable using convex optimization tools. In the third part, we develop numerical tools for characterizing the domains of attraction for models of tumor growth in the presence of treatments, in particular chemotherapy and anti-angiogenic treatment. These domains contain all the patient states for which effective treatment protocols exist. Here we consider the models to be uncertain, since the exact parameter values defining them are unknown in practice. The tools are based on the methods recalled and developed in this thesis, and several pieces of information useful for effective tumor therapy can be extracted from these domains.
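As standard background for the ellipsoidal sets mentioned in this thesis (textbook material, not the thesis's own more general conditions for saturated and uncertain systems): an ellipsoid defined by a positive definite matrix P is invariant for a system ẋ = f(x) when the quadratic function V(x) = xᵀPx does not increase along trajectories, and contractive when it strictly decreases.

```latex
\mathcal{E} = \{\, x \in \mathbb{R}^n : x^{\top} P x \le 1 \,\},
\qquad P = P^{\top} \succ 0,
\qquad V(x) = x^{\top} P x .
```

Invariance of $\mathcal{E}$ holds if $\dot V(x) = 2\,x^{\top} P f(x) \le 0$ for all $x$ on the boundary of $\mathcal{E}$; contractiveness strengthens this to $\dot V(x) \le -\alpha V(x)$ for some $\alpha > 0$. For a linear system $\dot x = A x$ these conditions reduce to the linear matrix inequality $A^{\top} P + P A \preceq 0$ (respectively $\preceq -\alpha P$), which is solvable with the convex optimization tools the thesis refers to.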
4

Wang, Xiaodong. "Classes de récurrence par chaînes non hyperboliques des difféomorphismes C¹." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS100/document.

Abstract:
La dynamique d'un difféomorphisme d'une variété compacte est essentiellement concentrée sur l'ensemble récurrent par chaînes, qui est partitionné en classes de récurrence par chaînes, disjointes et indécomposables. Le travail de Bonatti et Crovisier [BC] montre que, pour les difféomorphismes C¹-génériques, une classe de récurrence par chaînes ou bien est une classe homocline, ou bien ne contient pas de point périodique. Une classe de récurrence par chaînes sans point périodique est appelée classe apériodique. Il est clair qu'une classe homocline hyperbolique ne contient pas d'orbite périodique faible et ne supporte pas de mesure non hyperbolique. Cette thèse tente de donner une caractérisation des classes homoclines non hyperboliques en montrant qu'elles contiennent des orbites périodiques faibles ou des mesures ergodiques non hyperboliques. Cette thèse décrit également les décompositions dominées sur les classes apériodiques. Le premier résultat de cette thèse montre que, pour les difféomorphismes C¹-génériques, si les orbites périodiques contenues dans une classe homocline H(p) ont tous leurs exposants de Lyapunov bornés loin de zéro, alors H(p) doit être (uniformément) hyperbolique. Ceci est dans l'esprit des travaux sur la conjecture de stabilité, mais il y a une différence importante lorsque la classe homocline H(p) n'est pas isolée. Par conséquent, nous devons garantir que les orbites périodiques « faibles », créées par perturbations au voisinage de la classe homocline, sont contenues dans la classe. En ce sens, le problème est de nature « intrinsèque », et l'argument classique de la conjecture de stabilité est impraticable. Le deuxième résultat de cette thèse prouve une conjecture de Díaz et Gorodetski [DG] : pour les difféomorphismes C¹-génériques, si une classe homocline n'est pas hyperbolique, alors elle porte une mesure ergodique non hyperbolique. C'est un travail en collaboration avec C. Cheng, S. Crovisier, S. Gan et D. Yang.
Dans la démonstration, nous devons appliquer une technique introduite dans [DG], qui améliore la méthode de [GIKN], pour obtenir une mesure ergodique comme limite d'une suite de mesures périodiques. Le troisième résultat de cette thèse énonce que, génériquement, une décomposition dominée non triviale sur une classe apériodique stable au sens de Lyapunov est en fait une décomposition partiellement hyperbolique. Plus précisément, pour les difféomorphismes C¹-génériques, si une classe apériodique stable au sens de Lyapunov admet une décomposition dominée non triviale E ⊕ F, alors l'un des deux fibrés est hyperbolique : soit E est contracté, soit F est dilaté. Dans les démonstrations des résultats principaux, nous construisons des perturbations qui ne sont pas obtenues directement à partir des lemmes de connexion classiques. En fait, il faut appliquer le lemme de connexion un grand nombre (voire une infinité) de fois. Nous expliquons les méthodes de connexions multiples dans le chapitre 3.
The dynamics of a diffeomorphism of a compact manifold concentrates essentially on the chain recurrent set, which splits into disjoint, indecomposable chain recurrence classes. By the work of Bonatti and Crovisier [BC], for C¹-generic diffeomorphisms, a chain recurrence class either is a homoclinic class or contains no periodic point. A chain recurrence class without a periodic point is called an aperiodic class. Obviously, a hyperbolic homoclinic class can neither contain a weak periodic orbit nor support a non-hyperbolic ergodic measure. This thesis attempts to characterize non-hyperbolic homoclinic classes via the weak periodic orbits inside them or the non-hyperbolic ergodic measures supported on them. It also describes the dominated splittings on Lyapunov stable aperiodic classes. The first result of this thesis shows that for C¹-generic diffeomorphisms, if the periodic orbits contained in a homoclinic class H(p) have all their Lyapunov exponents bounded away from 0, then H(p) must be (uniformly) hyperbolic. This is in the spirit of the works on the stability conjecture, but with the significant difference that the homoclinic class H(p) is not known in advance to be isolated. Hence the "weak" periodic orbits created by perturbations near the homoclinic class have to be guaranteed to lie strictly inside it. In this sense the problem is of an "intrinsic" nature, and the classical argument of the stability conjecture does not carry over. The second result of this thesis proves a conjecture by Díaz and Gorodetski [DG]: for C¹-generic diffeomorphisms, if a homoclinic class is not hyperbolic, then there is a non-hyperbolic ergodic measure supported on it. This is joint work with C. Cheng, S. Crovisier, S. Gan and D. Yang.
In the proof, we apply a technique introduced in [DG], which improves the method of [GIKN], to obtain an ergodic measure as the limit of a sequence of periodic measures. The third result of this thesis states that, generically, a non-trivial dominated splitting over a Lyapunov stable aperiodic class is in fact a partially hyperbolic splitting. To be precise, for C¹-generic diffeomorphisms, if a Lyapunov stable aperiodic class admits a non-trivial dominated splitting E ⊕ F, then one of the two bundles is hyperbolic: either E is contracted or F is expanded. In the proofs of the main results, we construct several perturbations that are not simple applications of the connecting lemmas; in fact, one has to apply the connecting lemma several (even infinitely many) times. Detailed explanations of the multi-connecting processes are given in Chapter 3.
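For readers unfamiliar with the terminology, the dominated and partially hyperbolic splittings discussed in this abstract have the following standard textbook formulation (general dynamical-systems notation, not quoted from the thesis itself):

```latex
T_\Lambda M = E \oplus F, \qquad
\|Df^n|_{E(x)}\| \cdot \|Df^{-n}|_{F(f^n x)}\| \le C \lambda^n
\quad (x \in \Lambda,\ n \ge 0)
```

for some constants C > 0 and 0 < λ < 1. The splitting is partially hyperbolic when, in addition, E is uniformly contracted (‖Df^n|_{E(x)}‖ ≤ Cλ^n) or F is uniformly expanded.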
APA, Harvard, Vancouver, ISO, and other styles
5

Mouysset, Vincent. "Une méthode de sous-domaines pour la résolution des équations de Maxwell instationnaires en présence d'un ensemble non-connexe d'objets diffractant." Phd thesis, Université Paul Sabatier - Toulouse III, 2006. http://tel.archives-ouvertes.fr/tel-00136029.

Full text
Abstract:
From the derivation of a 3D time-domain approximation of the retarded potentials for electromagnetic currents on not-necessarily-convex polyhedra, a resolution method is formulated for simulating diffraction by a non-connected set of objects. This set is partitioned according to the inhomogeneities present. The problem is then recast as a system of coupled Maxwell equations, each homogeneous outside a corresponding element of the partition, which induces the construction of a solution of the initial problem. By approximating the coupling terms, a naturally hybrid and parallel method follows, on a stable and well-posed system. The restriction of each subsystem to a neighbourhood of the support of its inhomogeneities is obtained by introducing absorbing boundary conditions of "PML" type, for which a generalized formalism is studied. Numerical examples illustrate all of these developments.
APA, Harvard, Vancouver, ISO, and other styles
6

Marx, Didier. "Contribution à l'étude de la stabilité des systèmes électrotechniques." Thesis, Vandoeuvre-les-Nancy, INPL, 2009. http://www.theses.fr/2009INPL078N/document.

Full text
Abstract:
Dans cette thèse, différents outils issus de l'automatique non linéaire ont été mis en œuvre et ont permis d'apporter une première solution au problème de stabilité large signal des dispositifs électriques. À l'aide de modèles flous de type Takagi-Sugeno, on a montré qu'il était possible de résoudre le problème de stabilité dans le cas de deux applications électrotechniques, à savoir un hacheur contrôlé en tension et l'alimentation, par l'intermédiaire d'un filtre d'entrée, d'un dispositif électrique fonctionnant à puissance constante. Dans le cas du hacheur, la taille estimée des bassins d'attraction reste modeste. Les raisons essentielles de l'échec de la recherche d'un bassin de grande taille peuvent résulter du fait que, d'une part, la mise sous forme TS du système n'est pas unique et que, d'autre part, les matrices des sous-modèles TS du système ne sont de Hurwitz que dans une gamme très restreinte de variations du rapport cyclique. Dans le cas de l'alimentation, par l'intermédiaire d'un filtre d'entrée, d'un dispositif fonctionnant à puissance constante, on a montré que l'utilisation d'un modèle flou de type Takagi-Sugeno permettait d'exhiber un domaine d'attraction de taille significative. On a fourni des outils permettant de borner la plage de variations des pôles du système dans un domaine donné de l'espace d'état, domaine dans lequel la stabilité du modèle TS est prouvée. L'utilisation de la D-stabilité permet de connaître les dynamiques maximales du système. La notion de stabilité exponentielle permet de connaître les dynamiques minimales du système. L'approche utilisée pour prouver la stabilité du système en présence de variations paramétriques, pour les deux systèmes étudiés, n'autorise que des variations extrêmement faibles de la valeur du paramètre autour de sa valeur nominale.
In this thesis, various tools from nonlinear control theory were implemented to provide a first solution to the problem of large-signal stability of electrical systems. Using Takagi-Sugeno fuzzy models, we showed that the stability problem can be solved for two electrotechnical applications: a voltage-controlled boost converter, and an electrical system consisting of an input filter connected to an actuator operating at constant power. In the case of the boost converter, the estimated size of the attraction domain remains modest. The essential reasons for the failure to find a large domain may lie in the fact that, on the one hand, the TS representation of the system is not unique, and on the other hand, the matrices of the local models of the TS system are Hurwitz only over a very restricted range of duty-cycle variations. In the case of the input filter supplying a constant-power device, we showed that a Takagi-Sugeno fuzzy model makes it possible to exhibit an attraction domain of significant size. We provided tools for bounding the variations of the system poles within a given region of the state space in which the stability of the TS model is proven. D-stability gives the maximal dynamics of the system, while the notion of exponential stability gives its minimal dynamics. For both systems studied, the approach used to prove stability under parametric variations allows only extremely small variations of the parameter value around its nominal value.
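As background for the approach sketched in this abstract, a Takagi-Sugeno fuzzy model and the common quadratic Lyapunov condition used for its stability analysis take the standard form (general TS theory, not formulas quoted from the thesis):

```latex
\dot{x}(t) = \sum_{i=1}^{r} h_i\bigl(z(t)\bigr)\, A_i\, x(t),
\qquad h_i(z) \ge 0, \quad \sum_{i=1}^{r} h_i(z) = 1 ;
\qquad
\exists\, P = P^{T} \succ 0 : \quad A_i^{T} P + P A_i \prec 0, \quad i = 1, \dots, r .
```

When such a common P exists, the level sets {x : xᵀPx ≤ c} provide the estimates of the attraction domain referred to above.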
APA, Harvard, Vancouver, ISO, and other styles
7

Berrebi, Johanna. "Contribution à l'intégration d'une liaison avionique sans fil. L'ingénierie système appliquée à une problématique industrielle." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00800141.

Full text
Abstract:
In a modern aeroplane, helicopter, or launcher, thousands of sensors, most of them non-critical, are used to measure various parameters (temperatures, pressures, positions...). The results are then carried by wires to the on-board computers that process them. This requires installing hundreds of kilometres of cables (500 km for an airliner), whose volume is considerable. The result is great design and manufacturing complexity, reliability problems, particularly at the connections, and significant weight. Moreover, some zones cannot be instrumented because cabling them is hardly feasible for lack of space. In addition, while it is often worthwhile to install new sensors to upgrade an older aircraft, installing the necessary cables implies a partial, problematic, and costly dismantling of the machine. To solve these problems, an innovative idea has emerged among aeronautics manufacturers: start replacing the wired networks linking an aircraft's sensors to their decision centre with wireless networks. Wireless communication technologies are now widely used in consumer electronics markets. They are also beginning to be deployed for industrial applications such as the automotive sector or remote reading of domestic meters. However, replacing cables with radio waves represents a considerable technological challenge, involving propagation in confined environments, security, dependability, reliability, and electromagnetic compatibility. This thesis is motivated, on the one hand, by the significant advance for the aerospace field that establishing a wireless network on board aircraft would represent in solving classic problems such as weight reduction and instrumentation.
The expected benefits are: * better knowledge of the environment and health of the aircraft; * weight savings; * greater flexibility; * greater malleability and upgradability; * reduced complexity; * improved reliability. On the other hand, given the complexity of designing this wireless sensor network, it was necessary to apply an evolving, tailored methodology inspired by systems engineering. Given the number of subsystems to consider, it is conceivable that this methodology could be reused for other practical cases. A study as complete as possible was carried out of the existing work on the subject; reading this thesis gives a fairly precise idea of what has been done. All wireless technologies were listed, with their level of maturity, their advantages, and their drawbacks, in order to clarify the possible choices and the reasons for them. Wireless sensor projects have been carried out, and high-performance, customizable wireless technologies have been developed and are reaching maturity in fields as varied as home automation, healthcare, the automotive industry, and even aeronautics. However, no wireless sensor has actually been installed in an aerospace environment, because numerous technological obstacles have not been overcome. Building on past experience and on the maturity some technologies have reached, conclusions were drawn from earlier projects in order to move toward more viable solutions. Once identified, the technological obstacles were isolated. Our solution had to be customized in order to remedy these blocking points as far as possible with the means available.
The methodology applied enabled us to identify as many constraints, needs, and requirements as possible, so as to focus innovation efforts on the most important ones and thus choose the most suitable technologies.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Ensemble non dominé"

1

Villepastour, Amanda. Amelia Pedroso. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252037245.003.0003.

Full text
Abstract:
This chapter studies the life of Amelia Pedroso, a renowned Cuban ritual singer and priestess in the Santeria tradition. She generated remarkable achievements in male-dominated and heterosexual contexts, openly creating a lesbian and gay-friendly ritual house in Havana. In the early 1990s when she was in her forties, Amelia moved into a drumming domain that specifically prohibited Cuban women—although paradoxically, non-Cuban women were taught in Cuba. She formed an all-women ensemble and toured and ran workshops in the United States and Europe. Amelia attracted women to her, acquiring a role as an iconic activist, developing a network of students and religious godchildren, and leaving a remarkable transnational legacy following her death in 2000.
APA, Harvard, Vancouver, ISO, and other styles
2

Schehr, Grégory, Alexander Altland, Yan V. Fyodorov, Neil O'Connell, and Leticia F. Cugliandolo, eds. Stochastic Processes and Random Matrices. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797319.001.0001.

Full text
Abstract:
The field of stochastic processes and random matrix theory (RMT) has been a rapidly evolving subject during the past fifteen years where the continuous development and discovery of new tools, connections, and ideas have led to an avalanche of new results. These breakthroughs have been made possible thanks, to a large extent, to the recent development of various new techniques in RMT. Matrix models have been playing an important role in theoretical physics for a long time and they are currently also a very active domain of research in mathematics. An emblematic example of these recent advances concerns the theory of growth phenomena in the Kardar–Parisi–Zhang (KPZ) universality class where the joint efforts of physicists and mathematicians during the past twenty years have unveiled the beautiful connections between this fundamental problem of statistical mechanics and the theory of random matrices, namely the fluctuations of the largest eigenvalue of certain ensemble of random matrices. These chapters not only cover this topic in detail but also present more recent developments that have emerged from these discoveries, for instance in the context of low-dimensional heat transport (on the physics side) or in the context of integrable probability (on the mathematical side).
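As a minimal numerical illustration of the largest-eigenvalue fluctuations mentioned above, the sketch below (assuming NumPy; the edge-scaling constants are the standard GUE ones from the general literature, not taken from the book) samples the centred and scaled largest eigenvalue of a GUE matrix, whose distribution converges to the Tracy-Widom law F₂:

```python
import numpy as np

def gue_largest_eigenvalue(n, rng):
    """Sample the largest eigenvalue of an n x n GUE matrix,
    centred and scaled so it converges to Tracy-Widom F2."""
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    h = (a + 1j * b)
    h = (h + h.conj().T) / 2            # Hermitian GUE matrix
    lam_max = np.linalg.eigvalsh(h)[-1]  # eigvalsh sorts ascending
    # Standard GUE edge scaling: (lambda_max - 2*sqrt(n)) * n^(1/6)
    return (lam_max - 2.0 * np.sqrt(n)) * n ** (1.0 / 6.0)

rng = np.random.default_rng(0)
samples = [gue_largest_eigenvalue(100, rng) for _ in range(200)]
# Tracy-Widom F2 has mean about -1.77; finite-n samples scatter around it.
```

Larger n and more samples sharpen the histogram toward the Tracy-Widom density.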
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Ensemble non dominé"

1

Tong, Howell. "Statistical aspects." In Non-linear Time Series, 215–344. Oxford University PressOxford, 1990. http://dx.doi.org/10.1093/oso/9780198522249.003.0005.

Full text
Abstract:
Abstract We have been looking at the ensemble properties, that is, properties pertaining to the collection of all realizations/sample paths. Under ergodicity/stationarity, these properties will tell us about the long-run behaviour of each realization. Now, we are going to study the 'inverse problem' of inferring something about the ensemble properties from one, or more precisely part of one, single realization. This falls within the domain of statistical inference. Before performing any formal statistical procedure, it is always good practice to examine the data graphically. A number of graphical methods have been in routine use in time series modelling. For example, time series data plots, sample autocorrelation function plots, sample partial autocorrelation function plots, sample spectral density functions, histograms, plots of differenced data, plots of instantaneously transformed data, etc., have been used as standard practice.
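The sample autocorrelation function mentioned above can be sketched in a few lines (assuming NumPy; this uses the standard biased estimator with divisor n, and a made-up AR(1) series for illustration):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation r_k = c_k / c_0, with
    c_k = (1/n) * sum_{t=1}^{n-k} (x_t - xbar)(x_{t+k} - xbar)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.dot(xc, xc) / n
    return np.array([np.dot(xc[: n - k], xc[k:]) / (n * c0)
                     for k in range(max_lag + 1)])

# An AR(1) series x_t = 0.8 * x_{t-1} + e_t has r_k close to 0.8**k.
rng = np.random.default_rng(1)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + e[t]
r = sample_acf(x, 5)
```

Plotting r against lag gives the sample ACF plot used as a routine diagnostic.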
APA, Harvard, Vancouver, ISO, and other styles
2

Kumar, Pradeep, Abdul Wahid, and Venkatesh Naganathan. "Machine Learning Approaches for Text Mining and Spam E-mail Filtering: Industry 4.0 Perspective." In Artificial Intelligence and Data Science in Recommendation System: Current Trends, Technologies and Applications, 25–52. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815136746123010005.

Full text
Abstract:
The revolution of Industry 4.0 will leave an impact on everyone's life, directly or indirectly. Several new complex applications, difficult to predict in the current scenario, will be developed in the days to come. With the help of machine learning approaches and intelligent IoT devices, people will be relieved of the overhead of redundant work currently being performed. Industry 4.0 has become a significant catalyst for innovation and development in various industrial sectors, such as production processes and quality improvement, with greater flexibility. This chapter applies different machine learning algorithms to spam detection, classifying emails as legitimate or spam. Seven classification models are applied: Decision Trees, Random Forest, Artificial Neural Network, Gradient Boosting Machines, AdaBoost, Naïve Bayes, and Support Vector Machines. Three benchmark spam datasets are extracted from standard repositories to conduct the experiments. The chapter also presents a quantitative performance analysis. The results from rigorous experiments reveal that the ensemble methods, Gradient Boosting and AdaBoost, outperformed the other methods with overall accuracies of 98.70% and 98.18%, respectively. The ensemble models are effective on large datasets with more extensive features. The non-ensemble methods, ANN and Naïve Bayes, also performed well on large datasets as viable alternatives, with overall accuracies of 98.38% and 97.63% on test data.
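As a toy illustration of the Naive Bayes baseline discussed above (a from-scratch multinomial Naive Bayes on made-up mini-data; the chapter's own benchmark datasets and feature engineering are not reproduced here):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class log priors and
    Laplace-smoothed token log-likelihoods."""
    labels = Counter(lbl for _, lbl in docs)
    counts = {lbl: Counter() for lbl in labels}
    for tokens, lbl in docs:
        counts[lbl].update(tokens)
    vocab = {t for c in counts.values() for t in c}
    model = {}
    for lbl in labels:
        total = sum(counts[lbl].values())
        model[lbl] = (
            math.log(labels[lbl] / len(docs)),
            {t: math.log((counts[lbl][t] + 1) / (total + len(vocab)))
             for t in vocab},
            math.log(1 / (total + len(vocab))),  # unseen-token fallback
        )
    return model

def classify(model, tokens):
    def score(lbl):
        prior, loglik, unseen = model[lbl]
        return prior + sum(loglik.get(t, unseen) for t in tokens)
    return max(model, key=score)

train = [
    ("win free prize now".split(), "spam"),
    ("free money claim prize".split(), "spam"),
    ("meeting agenda attached".split(), "ham"),
    ("lunch meeting tomorrow".split(), "ham"),
]
model = train_nb(train)
print(classify(model, "free prize".split()))  # prints: spam
```

The ensemble methods reported in the chapter combine many such base learners; this sketch only shows the single-model building block.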
APA, Harvard, Vancouver, ISO, and other styles
3

Qin, Meng. "A Software Code Infringement Detection Scheme Based on Integration Learning." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde231264.

Full text
Abstract:
A software code plagiarism detection scheme based on ensemble learning is designed to address the low accuracy of traditional abstract-syntax-tree-based software code infringement detection methods. We adopt the AST structure of the code to integrate domain partitioning in IR with the AST, and use a weighted simplified abstract syntax tree to design feature extraction and similarity calculation methods, achieving partial detection of semantic plagiarism and calculating the similarity between text and source code. The feature set of a labelled training set is then fed into a random-forest-based ensemble classifier for training, and an association between the error rate and the classification effect of the decision trees in the random forest is proposed to match feature nodes against features in the code base. The experimental results show that our scheme has higher accuracy than traditional detection methods based on abstract syntax trees. It can not only detect code similarity but also identify the type of plagiarism, giving better overall identification performance.
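A stripped-down version of the AST-similarity idea can be sketched with Python's ast module (the node-type-count profile below is a deliberate simplification of the paper's weighted simplified AST, and the snippets are illustrative):

```python
import ast
from collections import Counter

def ast_profile(source):
    """Multiset of AST node types, ignoring identifier names --
    so renaming variables does not change the profile."""
    return Counter(type(node).__name__
                   for node in ast.walk(ast.parse(source)))

def similarity(a, b):
    """Overlap of two node-type multisets, normalized by the larger one."""
    pa, pb = ast_profile(a), ast_profile(b)
    shared = sum((pa & pb).values())  # multiset intersection
    total = max(sum(pa.values()), sum(pb.values()))
    return shared / total

original = "def f(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
renamed = "def g(items):\n    acc = 0\n    for it in items:\n        acc += it\n    return acc"
print(similarity(original, renamed))  # prints: 1.0 (identical structure)
```

Because only node types are counted, wholesale identifier renaming, a classic plagiarism disguise, leaves the score unchanged, while structurally different code scores low.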
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Peng, and Lei Qi. "Leak Acoustic Emission Signal Classification and Diagnosis Based on the Fractional-Order Fourier Transfer and Ensemble Learning." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220453.

Full text
Abstract:
The fractional-order Fourier transform represents a signal in a fractional Fourier domain, obtained by rotating the signal counterclockwise by an arbitrary angle about the origin of the time-frequency plane. This paper uses the fractional-order Fourier transform to process the signals collected from an acoustic emission device, and then trains the results through an ensemble of SVME, KNN, and Softmax classifiers, so as to build a model that can predict the size and location of leak holes in the acoustic emission device. The model achieves good accuracy in predicting whether there is a leak and the size of the leak hole. When only predicting whether a leak occurs, the accuracy reaches 75.6%; compared to the model trained on the original data, the classification accuracy increased by 25.6% to 66.8%. In particular, on the Softmax classifier, the addition of FFRT increases the accuracy by more than 200%.
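The combination of SVME, KNN, and Softmax outputs described above amounts to aggregating several classifiers' votes. A hedged sketch of plain majority voting follows (the paper's exact combination rule is not stated in the abstract, and the labels below are made up):

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of label sequences, one per base classifier.
    Returns the per-sample majority label (ties broken by first seen)."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*predictions)]

# Hypothetical per-sample predictions from three base classifiers.
svm_like = ["leak", "leak", "ok", "leak"]
knn_like = ["leak", "ok", "ok", "leak"]
soft_like = ["ok", "leak", "ok", "leak"]
print(majority_vote([svm_like, knn_like, soft_like]))
# prints: ['leak', 'leak', 'ok', 'leak']
```

Weighted voting or stacking would replace the plain count with per-classifier weights learned from validation accuracy.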
APA, Harvard, Vancouver, ISO, and other styles
5

Jeannerod, Marc. "Consciousness of Action and Self-Consciousness: A Cognitive Neuroscience Approach." In Agency and Self-Awareness, 128–49. Oxford University PressOxford, 1992. http://dx.doi.org/10.1093/oso/9780199245611.003.0005.

Full text
Abstract:
Abstract The mutual relationships of action to consciousness are complex and far from unequivocal. We execute many of our daily actions unconsciously; conversely, we can consciously simulate or imagine actions we do not execute. This vast domain of research, motor cognition, is central to the study, not only of action itself (how it is planned, prepared, and finally executed), but also of how action contributes to the representations we build from objects and from other selves. In this chapter, I will use the broad term of representation to include the various internal states in relation to action. There could be better terms, with the advantage of greater precision, but with the disadvantage of losing continuity between different levels of functioning. Indeed, the term representation of an action can be used in its strong sense, to designate a mental state in relation to goals and desires, as well as in its weak sense, to indicate the ensemble of mechanisms that precede execution of a movement. Finally, it can also be accepted by biologists to designate the state of the neural network during a mental state related to action.
APA, Harvard, Vancouver, ISO, and other styles
6

Sayilgan, Ebru, Yilmaz Kemal Yuce, and Yalcin Isler. "Evaluating Steady-State Visually Evoked Potentials-Based Brain-Computer Interface System Using Wavelet Features and Various Machine Learning Methods." In Artificial Intelligence. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.98335.

Full text
Abstract:
Steady-state visual evoked potentials (SSVEPs) have been shown to be appropriate for, and are in use in, many areas such as clinical neuroscience, cognitive science, and engineering. SSVEPs have become popular recently due to their advantages, including high bit rate, simple system structure, and short training time. To design an SSVEP-based BCI system, signal processing methods suited to the signal structure should be applied. One of the most appropriate signal processing methods for these non-stationary signals is the wavelet transform. In this study, we investigated both the effect of choosing a mother wavelet function and the most successful combination of classifier algorithm, wavelet features, and frequency pairs assigned to BCI commands. SSVEP signals recorded at seven different stimulus frequencies (6 – 6.5 – 7 – 7.5 – 8.2 – 9.3 – 10 Hz) were used in this study. A total of 115 features were extracted from the time, frequency, and time-frequency domains. These features were classified by a total of seven different classification processes. Classification performance was evaluated with the 5-fold cross-validation method and accuracy values. According to the results, (I) the most successful wavelet function was the Haar wavelet, (II) the most successful classifier was Ensemble Learning, (III) using the feature vector consisting of energy, entropy, and variance features yielded higher accuracy than using any one of these features alone, and (IV) the highest performances were obtained for the frequency pairs "6–10", "6.5–10", "7–10", and "7.5–10" Hz.
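The Haar-wavelet features named in result (III) can be sketched from scratch (a single-level Haar transform with energy, entropy, and variance features on a toy oscillation; the study itself used multi-level decompositions and 115 features):

```python
import math

def haar_level(x):
    """One level of the Haar DWT: approximation and detail coefficients.
    Assumes len(x) is even."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return approx, detail

def band_features(coeffs):
    """Energy, Shannon entropy of normalized energies, and variance."""
    energy = sum(c * c for c in coeffs)
    probs = [c * c / energy for c in coeffs if c != 0]
    entropy = -sum(p * math.log(p) for p in probs)
    mean = sum(coeffs) / len(coeffs)
    variance = sum((c - mean) ** 2 for c in coeffs) / len(coeffs)
    return energy, entropy, variance

# A toy 8-cycle oscillation sampled at 64 points (stand-in for an SSVEP epoch).
signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
approx, detail = haar_level(signal)
features = band_features(detail)
```

Stacking such (energy, entropy, variance) triples over several decomposition levels yields a feature vector of the kind fed to the classifiers above.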
APA, Harvard, Vancouver, ISO, and other styles
7

Andrade, J. S. de, and M. P. Almeida. "A Hamiltonian Approach for Tsallis Thermostatistics." In Nonextensive Entropy. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780195159769.003.0012.

Full text
Abstract:
Since the pioneering work of Tsallis in 1988 [15], in which a nonextensive generalization of the Boltzmann-Gibbs (BG) formalism for statistical mechanics was proposed, intensive research has been dedicated to the development of the conceptual framework behind this new thermodynamical approach and to its application to realistic physical systems. In order to justify the Tsallis generalization, it has been frequently argued that BG statistical mechanics has a domain of applicability restricted to systems with short-range interactions and non-(multi)fractal boundary conditions [14]. Moreover, it has been recalled that anomalies displayed by mesoscopic dissipative systems and strongly non-Markovian processes represent clear evidence of the departure from BG thermostatistics. These types of arguments have been duly reinforced by recent convincing examples of physical systems that are far better described in terms of the generalized formalism than in the usual context of BG thermodynamics (see Tsallis [14] and references therein). It thus became evident that the intrinsic nonlinear features present in the Tsallis formalism, which lead naturally to power laws, represent powerful ingredients for the description of complex systems. In the majority of studies dealing with the Tsallis thermostatistics, the starting point is the expression for the generalized entropy S_q, where k is a positive constant, q a parameter, and f the probability distribution. Under a different framework, some interesting studies [8] have shown that the parameter q can be somehow linked to the system's sensitivity to initial conditions. Few works have been devoted to substantiating the form of entropy (1) in physical systems based entirely on first principles [1, 13]. For example, it has been demonstrated that it is possible to develop dynamical thermostat schemes that are compatible with the generalized canonical ensemble [12].
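For reference, the generalized entropy denoted S_q above has the standard Tsallis form (reproduced from the general literature, since the formula itself did not survive extraction):

```latex
S_q = k\, \frac{1 - \int [f(x)]^{q} \, dx}{q - 1},
\qquad
\lim_{q \to 1} S_q = -k \int f(x) \ln f(x)\, dx ,
```

which recovers the Boltzmann-Gibbs entropy in the limit q → 1.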
APA, Harvard, Vancouver, ISO, and other styles
8

Dunham, Ian, and Don Powell. "From Genes to Genomes: A Historical Perspective." In Genomics and Clinical Medicine, 3–16. Oxford University PressNew York, NY, 2008. http://dx.doi.org/10.1093/oso/9780195188134.003.0001.

Full text
Abstract:
Abstract You may or may not have noticed, but during the past 10 or so years, you have just experienced the genomics revolution. A precise start point is difficult to define, but for argument's sake we can identify the first publication of the genome sequence of a free-living organism (Haemophilus influenzae) (Fleischmann et al., 1995) as the harbinger of the flood of genomic information that would appear over the next 10 years. Notably this flood included the first complete sequence of a human chromosome (Dunham et al., 1999), the completed sequence of the human genome (IHGSC, 2004), and draft sequences of the mouse (Waterston et al., 2002) and rat (Gibbs et al., 2004) genomes. After the wealth of whole-genome sequences have come the beginnings of genome re-sequencing, as human genomes have been sampled to generate single nucleotide polymorphisms (SNPs) (Sachidanandam et al., 2001) and these SNPs have been used to build maps of the sequence content of different individual human genomes (Altshuler et al., 2005). All of this information is freely available through public-domain Web servers such as Ensembl (Hubbard et al., 2005), so any researcher with access to the Internet can interrogate the human genome. It is now hard to imagine a time when genome sequence information was not central to human genetics. However, looking back to the time before genome sequencing was a reality, there were genuine questions as to whether we could afford or indeed needed the human genome sequence (Lewin, 1986). Thus it is worth looking back to see how we got to the position we are now in and what the motivating factors were behind the drive toward genome sequencing (Table 1–1).
APA, Harvard, Vancouver, ISO, and other styles
9

Brown, John H. "Beauty." In Routledge Encyclopedia of Philosophy. London: Routledge, 2021. http://dx.doi.org/10.4324/9780415249126-m014-3.

Full text
Abstract:
Article Summary The resumption of serious and sustained analysis of the concept of aesthetic value in the twentieth century which was described in Section 5 of the article ‘Beauty’ in the 1998 edition of the Encyclopedia (hereafter ‘Beauty1998§5’) broke new ground but failed to solve the problems that stand in the way of a credibly unified theory. For instance, Guy Sircello provides a keen analysis of beauty-making properties, which he takes to be ‘properties of qualitative degree’ (‘PQDs’) such as the vividness or softness of colour, the brilliance or harshness of sound, the melancholy or joviality of a mood. These admit of no quantitative analysis and range over a wide swath of domains. They are not in themselves positively or negatively value-laden, which is essential to avoid circularity. Their aesthetic value is determined by their being intense, non-defective, and non-defective seeming on a nonaesthetic basis (again, to avoid circularity). By intensity is meant that the PQD exists to a high degree. Thus the seeming suede-softness of hills as seen from a distance in a certain light is intensely suede-soft. In a different context the softness might be drab. But he finds little to tell us about the ontological and epistemological standing of these properties or about the rank properly assigned to the overall value of the complex ensembles of properties of a thing. A second standout is the reflection of Kendall Walton (see Walton, K. (1939–)) on what we can call secondary appreciation aroused by awe or wonder at the capacity of things to elicit pleasurable admiration even when they have little claim to direct aesthetic appeal but impress us for their practical, moral, intellectual, or natural value, or for their radically avant-garde strangeness. The virtue of this is to expand the reach of aesthetic admiration beyond traditional limits without losing connection with more paradigmatic beauties. 
But it raises deep questions about how far we should extend our vision of aesthetic appreciation and how to understand the ontology and epistemology of beauty. What criterion of accuracy will apply to wonder and awe? (Aside from that question there is an important connection between it and aesthetic evaluation. Awe heightens a person’s disinterestedness: ‘Awe basically shuts down self-interest and self-representation and the nagging voice of the self,’ said Dacher Keltner, a professor of psychology at the University of California, Berkeley, ‘two- to three-minute “micro awe” experiences (like gazing at a reflection on the water or visiting a nostalgic playground). … Both can have a profound impact on one’s quality of life.’ https://www.nytimes.com/2021/07/05/well/live/awe-microadventure-exploration.html?action=click&module=Editors%20Picks&pgtype=Homepage). If we scan the scene of philosophical aesthetics honestly, we can hardly fail to be impressed by how much disarray it presents, ranging from presumption to disinclination to take a position. Consider the presumption of Mary Mothersill’s claim that the aesthetic properties of a work depend on literally every feature being what it is. This is certainly not true for the wavelike contour she cites in El Greco’s Burial of Count Orgaz. Much detail could be altered without affecting that contour. If this were not so then all changes in the work, even its aging, would affect the aesthetic properties we confidently ascribe to it. The opposite of presumption is evasion, as when philosophers refuse to engage with the issue of how aesthetic value can be a single species of evaluation, doubtless wide but still possessed of unity. A case can also be made for the evasiveness saturating the prevalence of studies of what a historical philosopher may or may not have meant in this or that work, rather than direct confrontation with the central theoretical issues. 
The conclusion one comes to is that philosophical aesthetics is at present a strikingly undercultivated domain. We have much work to do. At the same time our aesthetic commerce with the things that move us (positively or negatively) is a source of major satisfaction. That is a deeply important truism. The oddity is that our passion does not come with deep understanding. It is to this mismatch that the best twenty-first century contributors are devoting themselves. To their contributions we now turn.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Ensemble non dominé"

1

Khan, Abdul Saboor, Salahaldeen Alqallabi, Anish Phade, Arne Skorstad, Faisal Al-Jenaibi, Mohamed Tarik Gacem, Mustapha Adli, Sheharyar Mansur, and Lyes Malla. "Demonstrating Flexibility and Cost-Efficiency of Integrated Ensemble-Based Modeling – One Approach on Three Reservoirs." In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207738-ms.

Abstract:
Abstract The aim of this study is to demonstrate the value of an integrated ensemble-based modeling approach for multiple reservoirs of varying complexity. Three different carbonate reservoirs are selected with varying challenges to showcase the flexibility of the approach to subsurface teams. Modeling uncertainties are included in both static and dynamic domains and valuable insights are attained in a short reservoir modeling cycle time. Integrated workflows are established with guidance from multi-disciplinary teams to incorporate recommended static and dynamic modeling processes in parallel to overcome the modeling challenges of the individual reservoirs. Challenges such as zonal communication, presence of baffles, high permeability streaks, communication from neighboring fields, water saturation modeling uncertainties, relative permeability with hysteresis, fluid contact depth shift, etc., are considered when accounting for uncertainties. All the uncertainties in sedimentology, structure and dynamic reservoir parameters are set through common dialogue and collaboration between subsurface teams to ensure that modeling best practices are adhered to. Adaptive pluri-Gaussian simulation is used for facies modeling and uncertainties are propagated in the dynamic response of the geologically plausible ensembles. These equiprobable models are then history-matched simultaneously using an ensemble-based conditioning tool to match the available observed field production data within a specified tolerance, with the reservoirs differing in number of wells, number of grid cells, and length of production history. This approach results in a significantly reduced modeling cycle time compared to the traditional approach, regardless of the inherent complexity of the reservoir, while producing better history-matched models that honor the geology and the correlations in the input data. 
These models are created with only the level of detail required by the modeling objectives, leaving more time to extract insights from the ensemble of models. Uncertainties in data from the various domains are not kept isolated but rather propagated throughout, as these might have an important role in another domain, or in the total response uncertainty. Similarly, the approach encourages a collaborative effort in reservoir modeling and fosters trust between geo-scientists and engineers, ensuring that models remain consistent across all subsurface domains. It allows for the flexibility to incorporate modeling practices fit for individual reservoirs. Moreover, analysis of the history-matched ensemble yields added insights into the reservoirs, such as the location and possible extent of features like high permeability streaks and baffles that are not explicitly modeled in the process initially. Forecast strategies subsequently run on these ensembles of equiprobable models capture realistic uncertainties in the dynamic responses, which can help make informed reservoir management decisions. The integrated ensemble-based modeling approach is successfully applied to three different reservoir cases, with different levels of complexity. The fast-tracked process from model building to decision making enabled rapid insights for all domains involved.
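The abstract does not name the conditioning algorithm, but ensemble-based history matching of this kind is commonly built on an ensemble-smoother (Kalman-type) update. The sketch below is a minimal illustration of one such update step; the function name, array shapes, and the linear toy example are all invented for illustration, not taken from the paper.

```python
import numpy as np

def ensemble_smoother_update(params, preds, obs, obs_err_std, rng=None):
    """One ensemble-smoother update step: shift every parameter
    realization toward values whose simulated data better match the
    observed field data.

    params      : (n_params, n_ens) prior parameter realizations
    preds       : (n_obs, n_ens) simulated data, one column per realization
    obs         : (n_obs,) observed production data
    obs_err_std : (n_obs,) observation-error standard deviations
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_obs, n_ens = preds.shape
    # Perturb the observations once per ensemble member (stochastic scheme).
    obs_pert = obs[:, None] + obs_err_std[:, None] * rng.standard_normal((n_obs, n_ens))
    # Anomalies: deviations of each member from the ensemble mean.
    A = params - params.mean(axis=1, keepdims=True)
    D = preds - preds.mean(axis=1, keepdims=True)
    # Kalman-style gain assembled from ensemble covariances.
    C_md = A @ D.T / (n_ens - 1)                        # parameter-data covariance
    C_dd = D @ D.T / (n_ens - 1) + np.diag(obs_err_std ** 2)
    K = C_md @ np.linalg.inv(C_dd)
    return params + K @ (obs_pert - preds)
```

Iterative variants (e.g. ES-MDA) repeat this step with inflated observation errors; the production tool referenced in the paper is not specified in the abstract.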
2

Wang, Zengmao, Chaoyang Zhou, Bo Du, and Fengxiang He. "Self-paced Supervision for Multi-source Domain Adaptation." In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22). California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/493.

Abstract:
Multi-source domain adaptation has attracted great attention in the machine learning community. Most existing methods focus on weighting the predictions produced by the adaptation networks of the different domains. As a result, the domain shifts between certain source domains and the target domain are not effectively reduced, so these domains are not fully exploited and may even have a negative influence on the multi-source domain adaptation task. To address this challenge, we propose a multi-source domain adaptation method that gradually improves the adaptation ability of each source domain by producing more high-confidence pseudo-labels with self-paced learning for conditional distribution alignment. The proposed method first trains several separate domain branch networks on single domains and an ensemble branch network on all domains. We then obtain high-confidence pseudo-labels with the branch networks and learn the branch-specific pseudo-labels with self-paced learning. Each branch network reduces the domain gap by aligning the conditional distribution with its branch-specific pseudo-labels and with the pseudo-labels provided by all branch networks. Experiments on Office31, Office-Home and DomainNet show that the proposed method outperforms the state-of-the-art methods.
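The abstract describes the mechanism only at a high level; one common way to realize self-paced pseudo-label selection is a confidence threshold that is relaxed as training matures. The following sketch illustrates that idea only; the schedule, threshold values, and function name are assumptions, not taken from the paper.

```python
def select_pseudo_labels(probs, age):
    """Self-paced selection of pseudo-labels: keep only those target
    samples whose predicted class probability exceeds a threshold that
    is gradually relaxed as training proceeds.

    probs : per-sample lists of class probabilities on unlabeled data
    age   : training progress in [0, 1] (0 = early/strict, 1 = late/permissive)
    Returns (sample_index, pseudo_label) pairs for the selected samples.
    """
    threshold = 0.95 - 0.25 * age  # illustrative schedule, not the paper's
    selected = []
    for i, p in enumerate(probs):
        confidence = max(p)
        if confidence >= threshold:
            selected.append((i, p.index(confidence)))
    return selected
```

Early in training only near-certain predictions become pseudo-labels; later rounds admit more samples, so each branch network sees a growing, conditionally aligned training set.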
3

Menezes, Davi Eber Sanches de, Susana Margarida da Graça Santos, Antonio Alberto de Souza dos Santos, João Carlos von Hohendorff Filho, and Denis José Schiozer. "Construction of Single-Porosity and Single-Permeability Models as Low-Fidelity Alternative to Represent Fractured Carbonate Reservoirs Subject to WAG-CO2 Injection Under Uncertainty." In SPE EuropEC - Europe Energy Conference featured at the 83rd EAGE Annual Conference & Exhibition. SPE, 2022. http://dx.doi.org/10.2118/209692-ms.

Abstract:
Abstract Fractured carbonate reservoirs are typically modeled in a system of dual-porosity and dual-permeability (DP/DP), where fractures, vugs, karsts and rock matrix are represented in different domains. The DP/DP modeling allows for a more accurate reservoir description but implies a higher computational cost than the single-porosity and single-permeability (SP/SP) approach. Simulation time may be a limitation for cases that require many simulations, such as production optimization under uncertainty. This computational cost is more challenging when we couple DP/DP models with compositional fluid models, such as in the case of fractured light-oil reservoirs where the production strategy accounts for water-alternating-gas (WAG) injection. In this context, low-fidelity models (LFM) can be an interesting alternative for initial studies. This work shows the potential of compositional single-porosity and single-permeability models based on pseudo-properties (SP/SP-P) as LFM applied to a fractured benchmark carbonate reservoir, subject to WAG-CO2 injection and gas recycle. Two workflows are proposed to assist the construction of SP/SP-P models for studies based on (i) a nominal approach and (ii) a probabilistic approach to reservoir properties. Both workflows begin with a parametrization step, in which the pseudo-properties are optimized for a base case in order to minimize the mismatch between forecasts of the SP/SP-P and DP/DP models. The new parametrization methods proposed in this work proved to be viable for the construction of the SP/SP-P models. For studies under uncertainties, the workflow proposes obtaining pseudo-properties by robust optimizations based on representative models from a DP/DP ensemble, which proved to be an effective method. The case study is the benchmark UNISIM-II-D-CO with an ensemble of 197 DP/DP models and two different production strategies. 
The risk curves for production, injection and economic indicators obtained from the DP/DP and SP/SP-P ensembles showed a good match, and simulations of the SP/SP-P ensemble were, on average, 81% faster than those of the DP/DP models. Finally, the responses obtained from both ensembles were validated in a reference model (UNISIM-II-R) that represents the true response and is not part of the ensemble. The results indicate that SP/SP-P modeling is a good LFM for preliminary assessments of highly time-consuming studies. In addition, the workflows proposed in this work can be very useful for assisting the construction of SP/SP-P models for different case studies. However, we recommend the use of the high-fidelity models to support the final decision.
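The risk curves mentioned above are, in essence, exceedance-probability curves computed over the ensemble. A minimal sketch (the function name and the tiny test data are illustrative, not from the paper):

```python
def risk_curve(values):
    """Exceedance-probability (risk) curve from an ensemble of an
    indicator (e.g. cumulative oil production or NPV, one value per
    equiprobable model): for each value, the fraction of models in
    which the indicator meets or exceeds it.

    Returns (values sorted best-first, exceedance probabilities).
    """
    ordered = sorted(values, reverse=True)
    n = len(ordered)
    probs = [(i + 1) / n for i in range(n)]
    return ordered, probs
```

Two ensembles "show a good match" when their curves nearly coincide, which is how the SP/SP-P proxy is judged against the DP/DP reference here.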
5

Fokoue, Achille, Ibrahim Abdelaziz, Maxwell Crouse, Shajith Ikbal, Akihiro Kishimoto, Guilherme Lima, Ndivhuwo Makondo, and Radu Marinescu. "An Ensemble Approach for Automated Theorem Proving Based on Efficient Name Invariant Graph Neural Representations." In Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/359.

Abstract:
Using reinforcement learning for automated theorem proving has recently received much attention. Current approaches use representations of logical statements that often rely on the names used in these statements and, as a result, the models are generally not transferable from one domain to another. The size of these representations and whether to include the whole theory or part of it are other important decisions that affect the performance of these approaches as well as their runtime efficiency. In this paper, we present NIAGRA, an ensemble Name InvAriant Graph RepresentAtion. NIAGRA addresses this problem by using (1) improved Graph Neural Networks, tailored to the unique characteristics of logical formulas, for learning name-invariant formula representations, and (2) an efficient ensemble approach for automated theorem proving. Our experimental evaluation shows state-of-the-art performance on multiple datasets from different domains with improvements of up to 10% compared to the best learning-based approaches. Furthermore, transfer learning experiments show that our approach significantly outperforms other learning-based approaches by up to 28%.
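Name invariance means that two formulas differing only in user-chosen symbol names should map to the same representation. The paper's actual graph encoding is not given in the abstract; the sketch below only illustrates the underlying idea by canonically renaming symbols in a nested (functor, args...) term. The connective set and placeholder scheme are assumptions.

```python
def name_invariant_form(term):
    """Rewrite a nested (functor, args...) term so that user-chosen
    symbol names are replaced by canonical placeholders in order of
    first appearance; logical connectives (a small assumed set) are
    kept as-is. Two terms that differ only by a renaming of symbols
    then compare equal."""
    LOGICAL = {"and", "or", "not", "implies", "forall", "exists"}
    mapping = {}  # original name -> canonical placeholder

    def walk(t):
        if isinstance(t, tuple):
            head, *args = t
            if head not in LOGICAL:
                head = mapping.setdefault(head, f"s{len(mapping)}")
            return (head, *[walk(a) for a in args])
        return mapping.setdefault(t, f"s{len(mapping)}")

    return walk(term)
```

A graph-based encoder would build node features from such canonical forms rather than from raw names, so learned models transfer across domains with different vocabularies.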
6

Khan, Hikmat, Charles Johnson, Nidhal Bouaynaya, Ghulam Rasool, Lacey Thompson, and Tyler Travis. "Deep Ensemble for Rotorcraft Attitude Prediction." In Vertical Flight Society 77th Annual Forum & Technology Display. The Vertical Flight Society, 2021. http://dx.doi.org/10.4050/f-0077-2021-16854.

Abstract:
Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. To date, traditional methods applied to reduce incident rates have not proven hugely successful for the rotorcraft community. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques may provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record the flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems can accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm was able to correctly identify rotorcraft attitude with an accuracy in the range of 80%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94%. Our current approach, which is referred to as ensembled prediction, significantly increased the reliability of the predicted attitude (i.e., pitch and yaw). For example, in some camera views, the horizon may be obstructed or not visible. The proposed ensemble method can combine visual details recorded from other cameras and predict the attitude with high reliability. 
In our setup, the five onboard camera views included pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained a variety of convolutional neural networks (CNNs), which achieved prediction accuracy in the range of 79% to 90%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensembled accuracy of 93.3%. Our efforts could potentially provide a cost-effective means to supplement traditional Flight Data Recorders (FDR), a technology that to date has been challenging to incorporate into the fleets of most rotorcraft operators due to cost and resource constraints. Such cost-effective solutions can gradually increase the rotorcraft community's participation in various safety programs, enhancing safety and opening up helicopter flight data monitoring (HFDM) to historically underrepresented segments of the vertical flight community.
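The fusion rule that lifts per-view accuracy (79% to 90%) to the ensembled 93.3% is not spelled out in the abstract; simple soft voting over the per-view class probabilities is one standard choice and is sketched below. The function name and toy probabilities are illustrative only.

```python
def ensemble_attitude_vote(view_probs):
    """Average per-view class probabilities (e.g. one CNN per camera
    view predicting a discretized pitch/yaw bin) and return the class
    with the highest mean probability. Simple soft voting; the paper's
    exact fusion rule is not specified in the abstract.

    view_probs : list of per-view probability lists, all the same length
    Returns (winning_class_index, mean_probabilities).
    """
    n_views = len(view_probs)
    n_classes = len(view_probs[0])
    mean_probs = [
        sum(view[c] for view in view_probs) / n_views for c in range(n_classes)
    ]
    return max(range(n_classes), key=mean_probs.__getitem__), mean_probs
```

When the horizon is obstructed in one view, that view's probabilities flatten out and the remaining views dominate the vote, which is the qualitative behavior the abstract describes.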
7

Sadigov, Subhi, Siti Bahjam, and Alf Sebastian Lackner. "Locating Infill Targets of an Offshore Field Using an Ensemble Based Integrated Uncertainty Centric Modeling Approach." In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210144-ms.

Abstract:
Abstract The aim of the study is to demonstrate the value of an integrated ensemble-based modeling approach for improving reservoir management of a mature offshore field located on the Norwegian Continental Shelf. Automated workflows are created to include subsurface uncertainties from both static and dynamic domains within a short reservoir modeling timeline. Potential infill targets for producers are located and evaluated efficiently as a result of implementing the proposed methodology. For ensemble-based methodologies, an ensemble of equiprobable reservoir models is created with the guidance of a multi-disciplinary team to represent realistic reservoir uncertainties. Automated workflows are established to capture and propagate subsurface uncertainties spanning grid-structure creation, petrophysical modeling, and dynamic modeling while honoring all firm well data. These equiprobable models are then conditioned to the historic production data using an iterative ensemble-based data assimilation algorithm. The proposed method supports the conditioning of a large number of reservoir parameters in a consistent manner on both a local level (e.g., facies and petrophysical properties) and a global level (e.g., aquifer size and relative permeability curves). The conditioned ensemble is then used for robust forecasting studies for making important reservoir management decisions under uncertainty. As a result of the proposed methodology, an ensemble of history-matched reservoir models is created in a remarkably short modeling time. Analysis of the updates made during the data assimilation process provides crucial insights into the reservoir, such as the connectivity between the existing wells, the communication between different segments of the field, and critical flow dynamics affecting drainage decisions. 
Such insights would not have been obtained through a traditional history-matching (HM) exercise in which a single reservoir model is tuned to the dynamic data and all trust is placed in that one model. Several potential infill targets for producer wells are identified by using the proposed integrated approach and ranked based on their potential risk level and added value. The forecasting studies carried out to evaluate the value of each of these targets capture the subsurface uncertainties on the dynamic response, which enables asset teams to make informed field development decisions by quantifying and ranking alternatives. The next steps will be to augment the workflow to improve the match of the back-produced injected-water rates and to modify the base grid structure. This should allow for even better control of the formation water breakthrough timing in certain parts of the reservoir. This paper proposes a systematic and integrated workflow using an ensemble-based method for solving reservoir modeling challenges. The automated nature of the methodology significantly reduces the reservoir modeling timeline and enables it to be utilized on a wide range of hydrocarbon assets to maximize recovery. The infill targets that were identified through the proposed approach are very similar to the targets identified through a traditional thickness-map approach. However, the results are obtained in a shorter time span and with a better grasp of critical uncertainties using this integrated method compared to traditional methods.
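Ranking infill targets "based on their potential risk level and added value" can be illustrated with a mean-minus-risk score over the history-matched ensemble; the scoring rule, weight, and names below are invented for illustration, not taken from the paper.

```python
from statistics import mean, pstdev

def rank_targets(target_values, risk_weight=0.5):
    """Rank candidate infill targets given, for each target, the added
    value it delivers in every realization of a history-matched
    ensemble. Score = ensemble mean minus risk_weight * standard
    deviation, so a target that looks good on average but is highly
    uncertain is penalized. Returns (target, score) pairs, best first.
    """
    scored = {
        name: mean(values) - risk_weight * pstdev(values)
        for name, values in target_values.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)
```

With equal ensemble means, the lower-variance target wins, which mirrors the idea of quantifying and ranking alternatives under uncertainty rather than trusting a single model's estimate.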
8

Freitas, João David, Caio Ponte, Rafael Bomfim, and Carlos Caminha. "The impact of window size on univariate time series forecasting using machine learning." In Symposium on Knowledge Discovery, Mining and Learning. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/kdmile.2023.232916.

Abstract:
In the task of modeling time series prediction problems, the window size (w) is a hyperparameter that defines the number of time units that will be present in each example to be fed into a learning model. This hyperparameter is particularly important due to the need to make the learning model understand both long-term and short-term trends, as well as seasonal patterns, without making it sensitive to random fluctuations. In this article, our aim is to understand the effect that the window size has on the results of machine learning algorithms in univariate time series prediction problems. To achieve this goal, we used 40 time series from two distinct domains, conducting training runs with varying window sizes using four types of machine learning algorithms: Bagging, Boosting, Stacking, and a Recurrent Neural Network architecture. As a result, we observed that increasing the window size can lead to an improvement in the evaluation metric values until reaching a stabilization point, beyond which further increases in the window size do not yield better predictions. The aforementioned stabilization occurred in both studied domains only when w values exceeded 100 time steps. We also observed that Recurrent Neural Network architectures do not outperform ensemble models in several univariate time series prediction scenarios.
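The windowing described above is the standard sliding-window construction for turning a univariate series into supervised examples; a minimal sketch (the function name is ours):

```python
def make_windows(series, w):
    """Turn a univariate series into (X, y) pairs for supervised
    learning: each example is w consecutive values and the target is
    the value that immediately follows them."""
    X = [series[i:i + w] for i in range(len(series) - w)]
    y = [series[i + w] for i in range(len(series) - w)]
    return X, y
```

Each row of X holds w consecutive values and the model learns to predict the next one, so w bounds how far back a non-recurrent learner can see; that is why the choice of w governs which trends and seasonal patterns are even visible to the model.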
9

Crespo, Javier, and Jesús Contreras. "On the Development of a Synchronized Harmonic Balance Method for Multiple Frequencies and its Application to LPT Flows." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-14952.

Abstract:
Abstract The aim of this paper is to describe the development and application of a multi-frequency harmonic balance solver for GPUs, particularly suitable for the simulation of periodic unsteadiness in nonlinear turbomachinery flows composed of a few dominant frequencies, with an unsteady multistage coupling that bolsters the flow continuity across the rotor/stator interface. The formulation is addressed with the time-domain reinterpretation, where several non-equidistant time instants conveniently selected are solved simultaneously. The set of required frequencies in each row is driven into the governing equations with the help of almost-periodic Fourier transforms for time derivatives and time-shifted boundary conditions. The spatial repetitiveness inside each row can be exploited to perform single-passage simulations and the relative circumferential positioning of the rotors or stators and the different blade or vane counts is tackled by means of adding fictitious frequencies referring to non-adjacent rows, thereby taking into account clocking and indexing effects. Existing multistage row coupling techniques of harmonic methods rely on the use of non-reflecting boundary conditions, based on linearizations, or time interpolation, which may lead to the Runge phenomenon with the resulting numerical instabilities and non-preserving flux exchange. Different sets of time instants might be selected in each row but the interpolation in space and time across their interfaces gives rise to robustness issues due to this phenomenon. The so-called synchronized approach, developed in this work, consists of using the same time instants across the whole ensemble of rows, ensuring that flux transfer at sliding planes is applied more robustly. 
The combination of a set of shared non-equidistant time instants plus the use of unequal frequencies (real and fictitious) may spoil the conditioning of the Fourier transforms, but this can be dramatically improved with the help of oversampling and optimization of the instant selection. The resulting multistage coupling naturally addresses typical numerical issues such as flow that might reverse locally across the row interfaces by means of not using boundary conditions but a local flux conservation scheme in the sliding planes. Some examples will be given to illustrate the ability of this new approach to preserve accuracy and robustness while resolving such issues. A brief analysis of results for a fan stage and an LPT multi-row case is presented to demonstrate the correctness of the method, assessing the impact on modeling accuracy of the present approach compared with a conventional time-domain analysis. Regarding the computational performance, the speedup compared to a full-annulus time-domain unsteady simulation is a factor of order 30 when combining the use of single-passage rows and time spectral accuracy.
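The conditioning issue can be made concrete: building the almost-periodic inverse-transform matrix for a chosen set of instants and frequencies and inspecting its condition number shows why clustered instants are fragile and why oversampling and spreading the instants help. The sketch below is our illustration of that diagnostic, not the authors' solver; all names and the sample frequencies are assumptions.

```python
import numpy as np

def apft_matrix(times, freqs):
    """Almost-periodic inverse-DFT matrix E for (possibly
    non-equidistant) time instants and (possibly non-harmonic) angular
    frequencies: x(t_i) = sum_k X_k * exp(j * w_k * t_i). Solving the
    harmonic-balance system requires (pseudo-)inverting E, so its
    condition number controls the robustness of the method."""
    t = np.asarray(times, dtype=float)[:, None]
    w = np.asarray(freqs, dtype=float)[None, :]
    return np.exp(1j * w * t)

def conditioning(times, freqs):
    """2-norm condition number of the almost-periodic transform."""
    return np.linalg.cond(apft_matrix(times, freqs))
```

Clustered instants produce nearly identical rows and a huge condition number; instants spread over the period (or oversampled) keep the pseudo-inversion well behaved.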
10

Sidenko, Vladyslav, and Dmytro Oshurok. "Future temperature and precipitation climate indices changes over the Transcarpathia region on EURO-CORDEX multimodel ensemble." In International Conference of Young Scientists on Meteorology, Hydrology and Environmental Monitoring. Ukrainian Hydrometeorological Institute, 2023. http://dx.doi.org/10.15407/icys-mhem.2023.025.

Abstract:
This study aims to assess potential climate changes in the Transcarpathia Region in the period 2021-2050 relative to the 1991-2020 base period based on the calculation of temperature and precipitation climate indices: annual average air temperature, number of frost days (FD), number of summer days (SU), number of tropical days (TD), amount of winter precipitation, amount of summer precipitation, annual count of days with more than 20 mm precipitation (R20mm) and maximum amount of precipitation for two consecutive days (AMP2). The domain under study includes the Transcarpathia Region of Ukraine and its neighbouring territories and has extremely complicated topography, with an altitude range between 101 m and 2061 m ASL. Such complexity significantly influences an interpolation/downscaling procedure applied to climatological variables (e.g., air temperature, atmospheric precipitation, etc.). In this study, daily data collected at 11 meteorological stations located in the domain was used. Data of four essential climate variables, namely minimum air temperature (tn), mean air temperature (tm), maximum air temperature (tx), and atmospheric precipitation (rr), were used in the calculations. The period covered by the data time series is 1961-2020. Data of climate model simulations (historical and future projections) were obtained from the Coordinated Regional Climate Downscaling Experiment project for the European domain (Euro-CORDEX). In our calculations, the Euro-CORDEX daily data of tn, tm, tx, and rr (converted previously from precipitation flux) was used. We only selected Euro-CORDEX simulations which (1) were performed based on RCP4.5, (2) provide output data in the Gregorian (or similar) calendar, and (3) provide output for all four variables. Thus, a multimodel ensemble of climate simulations utilised in our calculations consists of eleven members (combinations of 5 GCMs and 8 RCMs). 
Quality control of the station time series was performed by means of the INQC software (https://CRAN.R-project.org/package=INQC). Homogenization was performed using the Climatol package (https://CRAN.R-project.org/package=climatol) (Guijarro, 2018). We used the MISH software (Szentimrey and Bihari, 2014) to perform gridding/downscaling of the station data on the grid with the spatial resolution of 0.05° in both horizontal directions (~5 km). Bias-correction of the climate model data was performed by means of linear/variance scaling (for the air temperature data) and quantile matching (for the atmospheric precipitation data) methods. After bias correction of the climate projection data, they were statistically downscaled by means of the MISH software to the 0.05° grid, the same as for the observation data. The downscaling was performed for the period of 2006-2050 for each climate variable and each climate model (GCM-RCM combination). Spatial fields of air temperature (minimum, average and maximum) and amounts of precipitation for each day of the historical period (1961-2020) and the period of climate projections (2006-2050) were obtained. Based on the MISH downscaled climate model data, 8 climate indices were calculated for each year of the projection period (2021-2050) and each grid point of the interpolation grid (with the 0.05° spatial resolution). Finally, differences (anomalies) in the climate indices averaged over 1991-2020 and 2021-2050, i.e., calculated from the base-period observations and from the projections, respectively, were computed. Our calculations showed a moderate increase in air temperature (and other related indices, such as SU and TD) in 2021-2050 compared to 1991-2020. The increase is more intensive in the valleys of the domain, while mountain tops and ridges will experience less intensive changes. Atmospheric precipitation will not change significantly.
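The threshold indices named above (FD, SU, TD, R20mm, AMP2) follow common ETCCDI-style definitions. The sketch below uses those usual thresholds; the exact inequalities and the tropical-day threshold are assumptions that may differ in detail from the authors' choices.

```python
def climate_indices(tn, tx, rr):
    """Annual counts of threshold-based climate indices from daily
    series of minimum temperature tn (deg C), maximum temperature tx
    (deg C) and precipitation rr (mm), using common ETCCDI-style
    thresholds (the paper's exact definitions may differ slightly)."""
    return {
        "FD": sum(t < 0.0 for t in tn),       # frost days
        "SU": sum(t > 25.0 for t in tx),      # summer days
        "TD": sum(t > 30.0 for t in tx),      # tropical days (assumed threshold)
        "R20mm": sum(r >= 20.0 for r in rr),  # heavy-precipitation days
        "AMP2": max(a + b for a, b in zip(rr, rr[1:])),  # max 2-day precipitation
    }
```

In the study these counts would be computed per grid point and per year of the downscaled fields, then averaged over 1991-2020 and 2021-2050 to form the anomalies.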
