Journal articles on the topic "Algorithmie quantique"

To see the other types of publications on this topic, follow the link: Algorithmie quantique.

Consult the top 48 journal articles for research on the topic "Algorithmie quantique".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the citation for the selected work is generated automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Pavel, Ilarion. "Les défis des technologies quantiques". Annales des Mines - Responsabilité et environnement N° 114, no. 2 (April 10, 2024): 81–90. http://dx.doi.org/10.3917/re1.114.0081.

Abstract:
The development of quantum technologies is today the focus of major research efforts, but will the results live up to expectations? A quantum computer can solve certain hard problems that demand prohibitive computation time from a classical computer, and can also attack current encryption schemes. However, building a quantum computer powerful enough to solve practical problems remains a genuine technological challenge. Several hardware implementation technologies exist, each with its advantages and drawbacks. Because they are easier to build, research is also moving towards analog quantum computers and quantum simulators. In parallel, this work has led to the design of extremely sensitive sensors, which have numerous applications in geological prospecting, medical imaging and military technologies, and to the development of new encryption methods that are immune to attack by a quantum algorithm.
2

Blais, A. "Algorithmes et architectures pour ordinateurs quantiques supraconducteurs". Annales de Physique 28, no. 5 (September 2003): 1–148. http://dx.doi.org/10.1051/anphys:2003008.

3

Rahman, Mohammad Arshad. "Quantile regression using metaheuristic algorithms". International Journal of Computational Economics and Econometrics 3, no. 3/4 (2013): 205. http://dx.doi.org/10.1504/ijcee.2013.058498.

4

MOUNT, DAVID M., NATHAN S. NETANYAHU, CHRISTINE D. PIATKO, RUTH SILVERMAN and ANGELA Y. WU. "QUANTILE APPROXIMATION FOR ROBUST STATISTICAL ESTIMATION AND k-ENCLOSING PROBLEMS". International Journal of Computational Geometry & Applications 10, no. 06 (December 2000): 593–608. http://dx.doi.org/10.1142/s0218195900000334.

Abstract:
Given a set P of n points in R^d, a fundamental problem in computational geometry is concerned with finding the smallest shape of some type that encloses all the points of P. Well-known instances of this problem include finding the smallest enclosing box, minimum volume ball, and minimum volume annulus. In this paper we consider the following variant: Given a set of n points in R^d, find the smallest shape in question that contains at least k points or a certain quantile of the data. This type of problem is known as a k-enclosing problem. We present a simple algorithmic framework for computing quantile approximations for the minimum strip, ellipsoid, and annulus containing a given quantile of the points. The algorithms run in O(n log n) time.
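To convey the flavor of a k-enclosing problem, here is a toy one-dimensional analogue in Python (the paper treats far more general shapes in R^d): the shortest interval containing at least k of n points can be found in O(n log n) by sorting and sliding a window over k consecutive points.

# Toy 1-D k-enclosing problem: shortest interval covering at least k points.
# Sort once, then every window of k consecutive sorted points is a candidate.
def shortest_k_enclosing_interval(points, k):
    pts = sorted(points)
    best = min(range(len(pts) - k + 1), key=lambda i: pts[i + k - 1] - pts[i])
    return pts[best], pts[best + k - 1]

print(shortest_k_enclosing_interval([3.1, 0.2, 9.5, 4.0, 4.4, 5.0, 8.7], k=3))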
5

Kibzun, A. I. "Parallelization of the quantile function optimization algorithms". Automation and Remote Control 68, no. 5 (May 2007): 799–810. http://dx.doi.org/10.1134/s0005117907050074.

6

Papacharalampous, Georgia, Hristos Tyralis, Andreas Langousis, Amithirigala W. Jayawardena, Bellie Sivakumar, Nikos Mamassis, Alberto Montanari and Demetris Koutsoyiannis. "Probabilistic Hydrological Post-Processing at Scale: Why and How to Apply Machine-Learning Quantile Regression Algorithms". Water 11, no. 10 (October 14, 2019): 2126. http://dx.doi.org/10.3390/w11102126.

Abstract:
We conduct a large-scale benchmark experiment aiming to advance the use of machine-learning quantile regression algorithms for probabilistic hydrological post-processing “at scale” within operational contexts. The experiment is set up using 34-year-long daily time series of precipitation, temperature, evapotranspiration and streamflow for 511 catchments over the contiguous United States. Point hydrological predictions are obtained using the Génie Rural à 4 paramètres Journalier (GR4J) hydrological model and exploited as predictor variables within quantile regression settings. Six machine-learning quantile regression algorithms and their equal-weight combiner are applied to predict conditional quantiles of the hydrological model errors. The individual algorithms are quantile regression, generalized random forests for quantile regression, generalized random forests for quantile regression emulating quantile regression forests, gradient boosting machine, model-based boosting with linear models as base learners and quantile regression neural networks. The conditional quantiles of the hydrological model errors are transformed to conditional quantiles of daily streamflow, which are finally assessed using proper performance scores and benchmarking. The assessment concerns various levels of predictive quantiles and central prediction intervals, while it is made both independently of the flow magnitude and conditional upon this magnitude. Key aspects of the developed methodological framework are highlighted, and practical recommendations are formulated. In technical hydro-meteorological applications, the algorithms should be applied preferably in a way that maximizes the benefits and reduces the risks from their use. This can be achieved by (i) combining algorithms (e.g., by averaging their predictions) and (ii) integrating algorithms within systematic frameworks (i.e., by using the algorithms according to their identified skills), as our large-scale results point out.
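As a point of reference for this class of methods, the following is a minimal sketch of quantile regression with the pinball loss using scikit-learn's gradient boosting. The synthetic data stand in for point predictions and observations; this is not the paper's GR4J post-processing setup.

# Minimal quantile regression sketch: fit one model per quantile level and
# evaluate each with its proper score, the pinball loss.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_pinball_loss

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 1))          # stand-in point prediction
y = X.ravel() + rng.gamma(2.0, 1.0, size=2000)  # skewed stand-in "observed" target

models = {}
for q in (0.025, 0.5, 0.975):                   # predictive quantile levels
    m = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
    models[q] = m.fit(X, y)

for q, m in models.items():
    print(q, mean_pinball_loss(y, m.predict(X), alpha=q))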
7

Zheng, Songfeng. "Gradient descent algorithms for quantile regression with smooth approximation". International Journal of Machine Learning and Cybernetics 2, no. 3 (July 22, 2011): 191–207. http://dx.doi.org/10.1007/s13042-011-0031-2.

8

Möller, Eva, Gert Grieszbach, Bärbel Schack and Herbert Witte. "Statistical Properties and Control Algorithms of Recursive Quantile Estimators". Biometrical Journal 42, no. 6 (October 2000): 729–46. http://dx.doi.org/10.1002/1521-4036(200010)42:6<729::aid-bimj729>3.0.co;2-w.

9

Xiang, Dao-Hong, Ting Hu and Ding-Xuan Zhou. "Approximation Analysis of Learning Algorithms for Support Vector Regression and Quantile Regression". Journal of Applied Mathematics 2012 (2012): 1–17. http://dx.doi.org/10.1155/2012/902139.

Abstract:
We study learning algorithms generated by regularization schemes in reproducing kernel Hilbert spaces associated with an ϵ-insensitive pinball loss. This loss function is motivated by the ϵ-insensitive loss for support vector regression and the pinball loss for quantile regression. Approximation analysis is conducted for these algorithms by means of a variance-expectation bound when a noise condition is satisfied for the underlying probability measure. The rates are explicitly derived under a priori conditions on approximation and capacity of the reproducing kernel Hilbert space. As an application, we get approximation orders for the support vector regression and the quantile regularized regression.
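For orientation, these are the two standard losses that motivate the paper's ϵ-insensitive pinball loss, in LaTeX (textbook definitions of the SVR ϵ-insensitive loss and the pinball loss at quantile level τ; the paper's combined form is not reproduced here):

% epsilon-insensitive loss of support vector regression:
\[ \ell_\epsilon(u) = \max\{\, |u| - \epsilon,\ 0 \,\} \]
% pinball loss of quantile regression at level \tau:
\[ \rho_\tau(u) = \begin{cases} \tau\, u, & u \ge 0,\\ (\tau - 1)\, u, & u < 0. \end{cases} \]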
10

Cheng, Hao. "Comparison of partial least square algorithms in hierarchical latent variable model with missing data". SIMULATION 96, no. 10 (July 30, 2020): 825–39. http://dx.doi.org/10.1177/0037549720944467.

Abstract:
Missing data is almost inevitable for various reasons in many applications. For hierarchical latent variable models, there usually exist two kinds of missing data problems. One is manifest variables with incomplete observations, the other is latent variables which cannot be observed directly. Missing data in manifest variables can be handled by different methods. For latent variables, there exist several kinds of partial least square (PLS) algorithms which have been widely used to estimate the value of latent variables. In this paper, we not only combine traditional linear regression type PLS algorithms with missing data handling methods, but also introduce quantile regression to improve the performances of PLS algorithms when the relationships among manifest and latent variables are not fixed according to the explored quantile of interest. Thus, we can get the overall view of variables’ relationships at different levels. The main challenges lie in how to introduce quantile regression in PLS algorithms correctly and how well the PLS algorithms perform when missing manifest variables occur. By simulation studies, we compare all the PLS algorithms with missing data handling methods in different settings, and finally build a business sophistication hierarchical latent variable model based on real data.
11

Koutmos, Dimitrios. "Network Activity and Ethereum Gas Prices". Journal of Risk and Financial Management 16, no. 10 (September 30, 2023): 431. http://dx.doi.org/10.3390/jrfm16100431.

Abstract:
This article explores the extent to which network activity can explain changes in Ethereum transaction fees. Such fees are referred to as “gas prices” within the Ethereum blockchain, and are important inputs not only for executing transactions, but also for the deployment of smart contracts within the network. Using a bootstrapped quantile regression model, it can be shown that network activity, such as the sizes of blocks or the number of transactions and contracts, can have a heterogeneous relationship with gas prices across periods of low and high gas price changes. Of all the network activity variables examined herein, the number of intraday transactions within Ethereum’s blockchain is most consistent in explaining gas fees across the full distribution of gas fee changes. From a statistical perspective, the bootstrapped quantile regression approach demonstrates that linear modeling techniques may yield but a partial view of the rich dynamics found in the full range of gas price changes’ conditional distribution. This is an important finding given that Ethereum’s blockchain has undergone fundamental economic and technological regime changes, such as the recent implementation of the Ethereum Improvement Proposal (EIP) 1559, which aims to provide an algorithmic updating rule to estimate Ethereum’s “base fee”.
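A hedged sketch of the core technique, bootstrapped quantile regression, using statsmodels on synthetic stand-in data (the paper's Ethereum covariates and observations are not reproduced):

# Bootstrapped quantile regression: resample rows, refit at each quantile
# level, and read off percentile confidence intervals for the slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=500)                       # stand-in for transaction counts
y = 0.5 * x + rng.standard_t(3, size=500)      # heavy-tailed stand-in "gas" changes
X = sm.add_constant(x)

def slope_at(q, idx):
    return sm.QuantReg(y[idx], X[idx]).fit(q=q).params[1]

n = len(y)
for q in (0.1, 0.5, 0.9):                      # low / median / high quantiles
    boot = [slope_at(q, rng.integers(0, n, n)) for _ in range(200)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"q={q}: slope 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")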
12

Ivkin, Nikita, Edo Liberty, Kevin Lang, Zohar Karnin and Vladimir Braverman. "Streaming Quantiles Algorithms with Small Space and Update Time". Sensors 22, no. 24 (December 8, 2022): 9612. http://dx.doi.org/10.3390/s22249612.

Abstract:
Approximating quantiles and distributions over streaming data has been studied for roughly two decades now. Recently, Karnin, Lang, and Liberty proposed the first asymptotically optimal algorithm for doing so. This manuscript complements their theoretical result by providing practical variants of their algorithm with improved constants. For a given sketch size, our techniques provably reduce the upper bound on the sketch error by a factor of two. These improvements are verified experimentally. Our modified quantile sketch also improves the latency, reducing the worst-case update time from O(1/ε) down to O(log(1/ε)).
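To make the compaction idea concrete, here is a toy sketch of a KLL-style quantile sketch in Python. Real KLL uses carefully varying buffer capacities and a full error analysis; this stripped-down version only illustrates the mechanism of promoting every other element to a higher-weight level.

import random

# Toy KLL-style sketch: levels[i] holds items of weight 2**i. When a level's
# buffer fills, sort it and promote every other element to the next level.
class ToyQuantileSketch:
    def __init__(self, buf_size=128):
        self.buf_size = buf_size
        self.levels = [[]]

    def update(self, x):
        self.levels[0].append(x)
        for i, buf in enumerate(self.levels):
            if len(buf) < self.buf_size:
                break
            buf.sort()
            offset = random.randint(0, 1)     # random parity keeps promotion unbiased
            promoted = buf[offset::2]
            del buf[:]
            if i + 1 == len(self.levels):
                self.levels.append([])
            self.levels[i + 1].extend(promoted)

    def quantile(self, q):
        weighted = sorted((x, 2 ** i) for i, buf in enumerate(self.levels) for x in buf)
        total = sum(w for _, w in weighted)
        acc = 0
        for x, w in weighted:
            acc += w
            if acc >= q * total:
                return x
        return weighted[-1][0]

s = ToyQuantileSketch()
for v in random.sample(range(100000), 100000):
    s.update(v)
print(s.quantile(0.5))   # should land near 50000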
13

Tyralis, Hristos, Georgia Papacharalampous, Andreas Langousis and Simon Michael Papalexiou. "Explanation and Probabilistic Prediction of Hydrological Signatures with Statistical Boosting Algorithms". Remote Sensing 13, no. 3 (January 20, 2021): 333. http://dx.doi.org/10.3390/rs13030333.

Abstract:
Hydrological signatures, i.e., statistical features of streamflow time series, are used to characterize the hydrology of a region. A relevant problem is the prediction of hydrological signatures in ungauged regions using the attributes obtained from remote sensing measurements at ungauged and gauged regions together with estimated hydrological signatures from gauged regions. The relevant framework is formulated as a regression problem, where the attributes are the predictor variables and the hydrological signatures are the dependent variables. Here we aim to provide probabilistic predictions of hydrological signatures using statistical boosting in a regression setting. We predict 12 hydrological signatures using 28 attributes in 667 basins in the contiguous US. We provide formal assessment of probabilistic predictions using quantile scores. We also exploit the statistical boosting properties with respect to the interpretability of derived models. It is shown that probabilistic predictions at quantile levels 2.5% and 97.5% using linear models as base learners exhibit better performance compared to more flexible boosting models that use both linear models and stumps (i.e., one-level decision trees). On the contrary, boosting models that use both linear models and stumps perform better than boosting with linear models when used for point predictions. Moreover, it is shown that climatic indices and topographic characteristics are the most important attributes for predicting hydrological signatures.
14

Huang, Tianbao, Guanglong Ou, Hui Xu, Xiaoli Zhang, Yong Wu, Zihao Liu, Fuyan Zou, Chen Zhang and Can Xu. "Comparing Algorithms for Estimation of Aboveground Biomass in Pinus yunnanensis". Forests 14, no. 9 (August 28, 2023): 1742. http://dx.doi.org/10.3390/f14091742.

Abstract:
Comparing algorithms is crucial for enhancing the accuracy of remote sensing estimations of forest biomass in regions with high heterogeneity. Herein, Sentinel 2A, Sentinel 1A, Landsat 8 OLI, and Digital Elevation Model (DEM) were selected as data sources. A total of 12 algorithms, including 7 types of learners, were utilized for estimating the aboveground biomass (AGB) of Pinus yunnanensis forest. The results showed that: (1) The optimal algorithm (Extreme Gradient Boosting, XGBoost) was selected as the meta-model (referred to as XGBoost-stacking) of the stacking ensemble algorithm, which integrated 11 other algorithms. The R2 value was improved by 0.12 up to 0.61, and RMSE was decreased by 4.53 Mg/ha down to 39.34 Mg/ha compared to XGBoost alone. All algorithms consistently showed severe underestimation of AGB in the Pinus yunnanensis forest of Yunnan Province when AGB exceeded 100 Mg/ha. (2) XGBoost-stacking, XGBoost, BRNN (Bayesian Regularized Neural Network), RF (Random Forest), and QRF (Quantile Random Forest) have good sensitivity to forest AGB. QRNN (Quantile Regression Neural Network), GP (Gaussian Process), and EN (Elastic Network) have more outlier data and their robustness was poor. The SVM-RBF (Radial Basis Function Kernel Support Vector Machine), k-NN (K Nearest Neighbors), and SGB (Stochastic Gradient Boosting) algorithms have good robustness, but their sensitivity was poor, and the QRF and BRNN algorithms can estimate low values with higher accuracy. In conclusion, the XGBoost-stacking, XGBoost, and BRNN algorithms have shown promising application prospects in remote sensing estimation of forest biomass. This study could provide a reference for selecting the suitable algorithm for forest AGB estimation.
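The stacking pattern itself is straightforward; a hedged scikit-learn-only sketch follows (a GradientBoostingRegressor stands in for the XGBoost meta-model, and synthetic data replace the Pinus yunnanensis plots):

# Stacking: a meta-learner is trained on cross-validated predictions of the
# base learners (sklearn's StackingRegressor handles the CV internally).
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("ridge", Ridge())],
    final_estimator=GradientBoostingRegressor(random_state=0),
)
print(cross_val_score(stack, X, y, cv=3, scoring="r2").mean())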
15

Cheng, Hao. "Importance sampling imputation algorithms in quantile regression with their application in CGSS data". Mathematics and Computers in Simulation 188 (October 2021): 498–508. http://dx.doi.org/10.1016/j.matcom.2021.04.014.

16

Arandjelovic, Ognjen, Duc-Son Pham and Svetha Venkatesh. "Two Maximum Entropy-Based Algorithms for Running Quantile Estimation in Nonstationary Data Streams". IEEE Transactions on Circuits and Systems for Video Technology 25, no. 9 (September 2015): 1469–79. http://dx.doi.org/10.1109/tcsvt.2014.2376137.

17

Chuan, Zun Liang, Wan Nur Syahidah Wan Yusoff, Azlyna Senawi, Mohd Romlay Mohd Akramin, Soo-Fen Fam, Wendy Ling Shinyie and Tan Lit Ken. "A Comparative Effectiveness of Hierarchical and Non-hierarchical Regionalisation Algorithms in Regionalising the Homogeneous Rainfall Regions". Pertanika Journal of Science and Technology 30, no. 1 (January 4, 2022): 319–42. http://dx.doi.org/10.47836/pjst.30.1.18.

Abstract:
Descriptive data mining has been widely applied in hydrology as the regionalisation algorithms to identify the statistically homogeneous rainfall regions. However, previous studies employed regionalisation algorithms, namely agglomerative hierarchical and non-hierarchical regionalisation algorithms requiring post-processing techniques to validate and interpret the analysis results. The main objective of this study is to investigate the effectiveness of the automated agglomerative hierarchical and non-hierarchical regionalisation algorithms in identifying the homogeneous rainfall regions based on a new statistically significant difference regionalised feature set. To pursue this objective, this study collected 20 historical monthly rainfall time-series data from the rain gauge stations located in the Kuantan district. In practice, these 20 rain gauge stations can be categorised into two statistically homogeneous rainfall regions, namely distinct spatial and temporal variability in the rainfall amounts. The results of the analysis show that Forgy K-means non-hierarchical (FKNH), Hartigan-Wong K-means non-hierarchical (HKNH), and Lloyd K-means non-hierarchical (LKNH) regionalisation algorithms are superior to other automated agglomerative hierarchical and non-hierarchical regionalisation algorithms. Furthermore, FKNH, HKNH, and LKNH yielded the highest regionalisation accuracy compared to other automated agglomerative hierarchical and non-hierarchical regionalisation algorithms. Based on the regionalisation results yielded in this study, the reliability and accuracy that assessed the risk of extreme hydro-meteorological events for the Kuantan district can be improved. In particular, the regional quantile estimates can provide a more accurate estimation compared to at-site quantile estimates using an appropriate statistical distribution.
18

Watson, Oliver P., Isidro Cortes-Ciriano, Aimee R. Taylor and James A. Watson. "A decision-theoretic approach to the evaluation of machine learning algorithms in computational drug discovery". Bioinformatics 35, no. 22 (May 9, 2019): 4656–63. http://dx.doi.org/10.1093/bioinformatics/btz293.

Abstract:
Motivation: Artificial intelligence, trained via machine learning (e.g. neural nets, random forests) or computational statistical algorithms (e.g. support vector machines, ridge regression), holds much promise for the improvement of small-molecule drug discovery. However, small-molecule structure-activity data are high dimensional with low signal-to-noise ratios and proper validation of predictive methods is difficult. It is poorly understood which, if any, of the currently available machine learning algorithms will best predict new candidate drugs. Results: The quantile-activity bootstrap is proposed as a new model validation framework using quantile splits on the activity distribution function to construct training and testing sets. In addition, we propose two novel rank-based loss functions which penalize only the out-of-sample predicted ranks of high-activity molecules. The combination of these methods was used to assess the performance of neural nets, random forests, support vector machines (regression) and ridge regression applied to 25 diverse high-quality structure-activity datasets publicly available on ChEMBL. Model validation based on random partitioning of available data favours models that overfit and ‘memorize’ the training set, namely random forests and deep neural nets. Partitioning based on quantiles of the activity distribution correctly penalizes extrapolation of models onto structurally different molecules outside of the training data. Simpler, traditional statistical methods such as ridge regression can outperform state-of-the-art machine learning methods in this setting. In addition, our new rank-based loss functions give considerably different results from mean squared error highlighting the necessity to define model optimality with respect to the decision task at hand. Availability and implementation: All software and data are available as Jupyter notebooks found at https://github.com/owatson/QuantileBootstrap. Supplementary information: Supplementary data are available at Bioinformatics online.
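A minimal sketch of the central idea, validation by quantile splits on the activity distribution rather than random partitioning (synthetic data with a hypothetical threshold at the 0.8 quantile; not the paper's ChEMBL protocol):

# Quantile-split validation: train on low-activity samples, test on the most
# active ones, so extrapolation failures are penalized rather than hidden.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
activity = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)

cut = np.quantile(activity, 0.8)               # hold out the top 20% activity
train, test = activity <= cut, activity > cut

for model in (RandomForestRegressor(random_state=0), Ridge()):
    model.fit(X[train], activity[train])
    err = mean_squared_error(activity[test], model.predict(X[test]))
    print(type(model).__name__, round(err, 3))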
19

Tyralis, Hristos, Georgia Papacharalampous, Apostolos Burnetas and Andreas Langousis. "Hydrological post-processing using stacked generalization of quantile regression algorithms: Large-scale application over CONUS". Journal of Hydrology 577 (October 2019): 123957. http://dx.doi.org/10.1016/j.jhydrol.2019.123957.

20

Zhang, Hong-Yan, Wei Sun, Xiao Chen, Rui-Jia Lin and Yu Zhou. "Fixed-point algorithms for solving the critical value and upper tail quantile of Kuiper's statistics". Heliyon 10, no. 7 (April 2024): e28274. http://dx.doi.org/10.1016/j.heliyon.2024.e28274.

21

Arunachalam, Srinivasan, Vojtech Havlicek, Giacomo Nannicini, Kristan Temme and Pawel Wocjan. "Simpler (classical) and faster (quantum) algorithms for Gibbs partition functions". Quantum 6 (September 1, 2022): 789. http://dx.doi.org/10.22331/q-2022-09-01-789.

Abstract:
We present classical and quantum algorithms for approximating partition functions of classical Hamiltonians at a given temperature. Our work has two main contributions: first, we modify the classical algorithm of Stefankovic, Vempala and Vigoda (J. ACM, 56(3), 2009) to improve its sample complexity; second, we quantize this new algorithm, improving upon the previously fastest quantum algorithm for this problem, due to Harrow and Wei (SODA 2020). The conventional approach to estimating partition functions requires approximating the means of Gibbs distributions at a set of inverse temperatures that form the so-called cooling schedule. The length of the cooling schedule directly affects the complexity of the algorithm. Combining our improved version of the algorithm of Stefankovic, Vempala and Vigoda with the paired-product estimator of Huber (Ann. Appl. Probab., 25(2), 2015), our new quantum algorithm uses a shorter cooling schedule than previously known. This length matches the optimal length conjectured by Stefankovic, Vempala and Vigoda. The quantum algorithm also achieves a quadratic advantage in the number of required quantum samples compared to the number of random samples drawn by the best classical algorithm, and its computational complexity has quadratically better dependence on the spectral gap of the Markov chains used to produce the quantum samples.
22

Kibzun, Andrey. "Comparison of two algorithms for solving a two-stage bilinear stochastic programming problem with quantile criterion". Applied Stochastic Models in Business and Industry 31, no. 6 (February 16, 2015): 862–74. http://dx.doi.org/10.1002/asmb.2115.

23

Ahsan, Md Manjurul, M. A. Parvez Mahmud, Pritom Kumar Saha, Kishor Datta Gupta and Zahed Siddique. "Effect of Data Scaling Methods on Machine Learning Algorithms and Model Performance". Technologies 9, no. 3 (July 24, 2021): 52. http://dx.doi.org/10.3390/technologies9030052.

Abstract:
Heart disease, one of the main reasons behind the high mortality rate around the world, requires a sophisticated and expensive diagnosis process. In the recent past, much literature has demonstrated machine learning approaches as an opportunity to efficiently diagnose heart disease patients. However, challenges associated with datasets such as missing data, inconsistent data, and mixed data (containing inconsistent missing data both as numerical and categorical) are often obstacles in medical diagnosis. This inconsistency led to a higher probability of misprediction and a misled result. Data preprocessing steps like feature reduction, data conversion, and data scaling are employed to form a standard dataset—such measures play a crucial role in reducing inaccuracy in final prediction. This paper aims to evaluate eleven machine learning (ML) algorithms—Logistic Regression (LR), Linear Discriminant Analysis (LDA), K-Nearest Neighbors (KNN), Classification and Regression Trees (CART), Naive Bayes (NB), Support Vector Machine (SVM), XGBoost (XGB), Random Forest Classifier (RF), Gradient Boost (GB), AdaBoost (AB), Extra Tree Classifier (ET)—and six different data scaling methods—Normalization (NR), Standscale (SS), MinMax (MM), MaxAbs (MA), Robust Scaler (RS), and Quantile Transformer (QT) on a dataset comprising of information of patients with heart disease. The result shows that CART, along with RS or QT, outperforms all other ML algorithms with 100% accuracy, 100% precision, 99% recall, and 100% F1 score. The study outcomes demonstrate that the model’s performance varies depending on the data scaling method.
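The comparison pattern is easy to reproduce; a hedged sketch with scikit-learn pipelines follows (synthetic data in place of the heart-disease dataset; DecisionTreeClassifier corresponds to CART):

# Compare data scaling methods by cross-validating one pipeline per scaler.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import (MaxAbsScaler, MinMaxScaler, Normalizer,
                                   QuantileTransformer, RobustScaler,
                                   StandardScaler)
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=13, random_state=0)
scalers = [Normalizer(), StandardScaler(), MinMaxScaler(), MaxAbsScaler(),
           RobustScaler(), QuantileTransformer(n_quantiles=100, random_state=0)]
for scaler in scalers:
    pipe = make_pipeline(scaler, DecisionTreeClassifier(random_state=0))
    score = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(type(scaler).__name__, round(score, 3))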
24

Wu, Xiaofeng. "AHP-BP-Based Algorithms for Teaching Quality Evaluation of Flipped English Classrooms in the Context of New Media Communication". International Journal of Information Technologies and Systems Approach 16, no. 2 (April 21, 2023): 1–12. http://dx.doi.org/10.4018/ijitsa.322096.

Abstract:
Big data analytics constitutes a key component in the pursuit of enhancing educational efficiency. This study defines the concept of the flipped classroom in the context of new media communication, evaluates the state of audiovisual instruction in higher education, and advocates for the use of this methodology to enhance college students' English listening and speaking skills. Utilizing multiple linear regression and a conditional quantile model, this research quantifies the range of impact of flipped instruction on college students' acquisition of a foreign language. To address the deficiencies in the current evaluation process for flipped classroom teaching, it proposes a teaching quality evaluation model based on the AHP and BP neural network. The AHP constructs the teaching quality evaluation index system for the flipped classroom and ascertains the combined weights of the indices. The simulated experiment's results show that utilizing the proposed evaluation model to assess flipped classroom instruction enhances objectivity, efficiency, and precision in the evaluation process.
25

Chen, Wei, Zhao Wang, Guirong Wang, Zixin Ning, Boxiang Lian, Shangjie Li, Paraskevas Tsangaratos, Ioanna Ilia and Weifeng Xue. "Optimizing Rotation Forest-Based Decision Tree Algorithms for Groundwater Potential Mapping". Water 15, no. 12 (June 19, 2023): 2287. http://dx.doi.org/10.3390/w15122287.

Abstract:
Groundwater potential mapping is an important prerequisite for evaluating the exploitation, utilization, and recharge of groundwater. The study uses BFT (best-first decision tree classifier), CART (classification and regression tree), FT (functional trees), EBF (evidential belief function) benchmark models, and RF-BFTree, RF-CART, and RF-FT ensemble models to map the groundwater potential of Wuqi County, China. Firstly, select sixteen groundwater spring-related variables, such as altitude, plan curvature, profile curvature, curvature, slope angle, slope aspect, stream power index, topographic wetness index, stream sediment transport index, normalized difference vegetation index, land use, soil, lithology, distance to roads, distance to rivers, and rainfall, and make a correlation analysis of these sixteen groundwater spring-related variables. Secondly, optimize the parameters of the seven models and select the optimal parameters for groundwater modeling in Wuqi County. The predictive performance of each model was evaluated by estimating the area under the receiver operating characteristic (ROC) curve (AUC) and statistical index (accuracy, sensitivity, and specificity). The results show that the seven models have good predictive capabilities, and the ensemble model has a larger AUC value. Among them, the RF-BFT model has the highest success rate (AUC = 0.911), followed by RF-FT (0.898), RF-CART (0.894), FT (0.852), EBF (0.824), CART (0.801), and BFtree (0.784), respectively. Groundwater potential maps of these 7 models were obtained, and four different classification methods (geometric interval, natural breaks, quantile, and equal interval) were used to reclassify the obtained GPM into 5 categories: very low (VLC), low (LC), moderate (MC), high (HC), and very high (VHC). The results show that the natural breaks method has the best classification performance, and the RF-BFT model is the most reliable. The study highlights that the proposed ensemble model has more efficient and accurate performance for groundwater potential mapping.
26

Rascon, Caleb, Oscar Ruiz-Espitia and Jose Martinez-Carranza. "On the Use of the AIRA-UAS Corpus to Evaluate Audio Processing Algorithms in Unmanned Aerial Systems". Sensors 19, no. 18 (September 10, 2019): 3902. http://dx.doi.org/10.3390/s19183902.

Abstract:
Audio analysis over Unmanned Aerial Systems (UAS) is of interest because it is an essential step for on-board sound source localization and separation. This could be useful for search & rescue operations, as well as for detection of unauthorized drone operations. In this paper, an analysis of the previously introduced Acoustic Interactions for Robot Audition (AIRA)-UAS corpus is presented, which is a set of recordings produced by the ego-noise of a drone performing different aerial maneuvers and by other drones flying nearby. It was found that the recordings have a very low Signal-to-Noise Ratio (SNR), that the noise is dynamic depending on the drone’s movements, and that their noise signatures are highly correlated. Three popular filtering techniques were evaluated in this work in terms of noise reduction and signature extraction: Berouti’s Non-Linear Noise Subtraction, Adaptive Quantile Based Noise Estimation, and Improved Minima Controlled Recursive Averaging. Although there was moderate success in noise reduction, no filter was able to keep intact the signature of the drone flying in parallel. These results are evidence of the challenge in audio processing over drones, implying that this is a field ripe for further research.
27

Rajabi, Amirarsalan, and Ozlem Ozmen Garibay. "TabFairGAN: Fair Tabular Data Generation with Generative Adversarial Networks". Machine Learning and Knowledge Extraction 4, no. 2 (May 16, 2022): 488–501. http://dx.doi.org/10.3390/make4020022.

Abstract:
With the increasing reliance on automated decision making, the issue of algorithmic fairness has gained increasing importance. In this paper, we propose a Generative Adversarial Network for tabular data generation. The model includes two phases of training. In the first phase, the model is trained to accurately generate synthetic data similar to the reference dataset. In the second phase we modify the value function to add a fairness constraint, and continue training the network to generate data that is both accurate and fair. We test our results in both cases of unconstrained, and constrained fair data generation. We show that using a fairly simple architecture and applying quantile transformation of numerical attributes the model achieves promising performance. In the unconstrained case, i.e., when the model is only trained in the first phase and is only meant to generate accurate data following the same joint probability distribution of the real data, the results show that the model beats the state-of-the-art GANs proposed in the literature to produce synthetic tabular data. Furthermore, in the constrained case in which the first phase of training is followed by the second phase, we train the network and test it on four datasets studied in the fairness literature and compare our results with another state-of-the-art pre-processing method, and present the promising results that it achieves. Compared to other studies utilizing GANs for fair data generation, our model is comparably more stable by using only one critic, and also avoids major problems of the original GAN model, such as mode-dropping and non-convergence.
28

Ivković, Nikola, Robert Kudelić and Matej Črepinšek. "Probability and Certainty in the Performance of Evolutionary and Swarm Optimization Algorithms". Mathematics 10, no. 22 (November 20, 2022): 4364. http://dx.doi.org/10.3390/math10224364.

Abstract:
Reporting the empirical results of swarm and evolutionary computation algorithms is a challenging task with many possible difficulties. These difficulties stem from the stochastic nature of such algorithms, as well as their inability to guarantee an optimal solution in polynomial time. This research deals with measuring the performance of stochastic optimization algorithms, as well as the confidence intervals of the empirically obtained statistics. Traditionally, the arithmetic mean is used for measuring average performance, but we propose quantiles for measuring average, peak and bad-case performance, and give their interpretations in a relevant context for measuring the performance of the metaheuristics. In order to investigate the differences between arithmetic mean and quantiles, and to confirm possible benefits, we conducted experiments with 7 stochastic algorithms and 20 unconstrained continuous variable optimization problems. The experiments showed that median was a better measure of average performance than arithmetic mean, based on the observed solution quality. Out of 20 problem instances, a discrepancy between the arithmetic mean and median happened in 6 instances, out of which 5 were resolved in favor of median and 1 instance remained unresolved as a near tie. The arithmetic mean was completely inadequate for measuring average performance based on the observed number of function evaluations, while the 0.5 quantile (median) was suitable for that task. The quantiles also showed to be adequate for assessing peak performance and bad-case performance. In this paper, we also proposed a bootstrap method to calculate the confidence intervals of the probability of the empirically obtained quantiles. Considering the many advantages of using quantiles, including the ability to calculate probabilities of success in the case of multiple executions of the algorithm and the practically useful method of calculating confidence intervals, we recommend quantiles as the standard measure of peak, average and bad-case performance of stochastic optimization algorithms.
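A minimal sketch of the recommended reporting style, quantiles of repeated runs with a bootstrap confidence interval for the median (the paper's exact bootstrap procedure may differ in detail):

# Summarize 50 runs of a stochastic optimizer by quantiles, with a simple
# percentile-bootstrap confidence interval for the chosen quantile.
import numpy as np

rng = np.random.default_rng(42)
results = rng.lognormal(mean=0.0, sigma=0.8, size=50)   # stand-in final costs

def bootstrap_quantile_ci(x, q, n_boot=5000, alpha=0.05):
    boots = np.quantile(rng.choice(x, size=(n_boot, len(x)), replace=True),
                        q, axis=1)
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

print("mean:", results.mean())
print("median:", np.quantile(results, 0.5))
print("95% CI for median:", bootstrap_quantile_ci(results, 0.5))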
29

Witkovsky, Viktor. "Numerical inversion of a characteristic function: An alternative tool to form the probability distribution of output quantity in linear measurement models". ACTA IMEKO 5, no. 3 (November 4, 2016): 32. http://dx.doi.org/10.21014/acta_imeko.v5i3.382.

Abstract:
Measurement uncertainty analysis based on combining the state-of-knowledge distributions requires evaluation of the probability density function (PDF), the cumulative distribution function (CDF), and/or the quantile function (QF) of a random variable reasonably associated with the measurand. This can be derived from the characteristic function (CF), which is defined as a Fourier transform of its probability distribution function. Working with CFs provides an alternative and frequently much simpler route than working directly with PDFs and/or CDFs. In particular, derivation of the CF of a weighted sum of independent random variables is a simple and trivial task. However, the analytical derivation of the PDF and/or CDF by using the inverse Fourier transform is available only in special cases. Thus, in most practical situations, a numerical derivation of the PDF/CDF from the CF is an indispensable tool. In metrological applications, such approach can be used to form the probability distribution for the output quantity of a measurement model of additive, linear or generalized linear form. In this paper we propose new original algorithmic implementations of methods for numerical inversion of the characteristic function which are especially suitable for typical metrological applications. The suggested numerical approaches are based on the Gil-Pelaez inverse formulae and on using the approximation by discrete Fourier transform and the fast Fourier transform (FFT) algorithm for computing PDF/CDF of the univariate continuous random variables. As illustrated here, for typical metrological applications based on linear measurement models the suggested methods are an efficient alternative to the standard Monte Carlo methods.
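For reference, the Gil-Pelaez inversion formulae on which the proposed algorithms build, in their standard form (the paper's discretized FFT implementations are not reproduced here); \varphi denotes the characteristic function:

\[ F(x) = \frac{1}{2} - \frac{1}{\pi} \int_0^\infty \frac{\operatorname{Im}\left[ e^{-\mathrm{i}tx}\,\varphi(t) \right]}{t}\, \mathrm{d}t, \qquad f(x) = \frac{1}{\pi} \int_0^\infty \operatorname{Re}\left[ e^{-\mathrm{i}tx}\,\varphi(t) \right] \mathrm{d}t. \]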
30

Beazley, Elizabeth, Anna Bertiger and Kaisa Taipale. "An equivariant rim hook rule for quantum cohomology of Grassmannians". Discrete Mathematics & Theoretical Computer Science DMTCS Proceedings vol. AT,..., Proceedings (January 1, 2014). http://dx.doi.org/10.46298/dmtcs.2377.

Abstract:
A driving question in (quantum) cohomology of flag varieties is to find non-recursive, positive combinatorial formulas for expressing the quantum product in a particularly nice basis, called the Schubert basis. Bertram, Ciocan-Fontanine and Fulton provide a way to compute quantum products of Schubert classes in the Grassmannian of $k$-planes in complex $n$-space by doing classical multiplication and then applying a combinatorial rim hook rule which yields the quantum parameter. In this paper, we provide a generalization of this rim hook rule to the setting in which there is also an action of the complex torus. Combining this result with Knutson and Tao's puzzle rule provides an effective algorithm for computing the equivariant quantum Littlewood-Richardson coefficients. Interestingly, this rule requires a specialization of torus weights that is tantalizingly similar to maps in affine Schubert calculus.
31

Pietrosanu, Matthew, Jueyu Gao, Linglong Kong, Bei Jiang and Di Niu. "Advanced algorithms for penalized quantile and composite quantile regression". Computational Statistics, July 12, 2020. http://dx.doi.org/10.1007/s00180-020-01010-1.

32

Cheng, Hao. "Efficient importance sampling imputation algorithms for quantile and composite quantile regression". Statistical Analysis and Data Mining: The ASA Data Science Journal, November 29, 2021. http://dx.doi.org/10.1002/sam.11565.

33

Dabney, Will, Mark Rowland, Marc Bellemare and Rémi Munos. "Distributional Reinforcement Learning With Quantile Regression". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (April 29, 2018). http://dx.doi.org/10.1609/aaai.v32i1.11791.

Abstract:
In reinforcement learning (RL), an agent interacts with the environment by taking actions and observing the next state and reward. When sampled probabilistically, these state transitions, rewards, and actions can all induce randomness in the observed long-term return. Traditionally, reinforcement learning algorithms average over this randomness to estimate the value function. In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean. That is, we examine methods of learning the value distribution instead of the value function. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare, Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related distributional algorithm C51.
34

Chernozhukov, Victor, Iván Fernández-Val and Blaise Melly. "Fast algorithms for the quantile regression process". Empirical Economics, July 12, 2020. http://dx.doi.org/10.1007/s00181-020-01898-0.

35

Deng, Yan, Huiwen Jia, Shabbir Ahmed, Jon Lee and Siqian Shen. "Scenario Grouping and Decomposition Algorithms for Chance-Constrained Programs". INFORMS Journal on Computing, October 13, 2020. http://dx.doi.org/10.1287/ijoc.2020.0970.

Abstract:
A lower bound for a finite-scenario-based chance-constrained program is the quantile value corresponding to the sorted optimal objective values of scenario subproblems. This quantile bound can be improved by grouping subsets of scenarios at the expense of solving larger subproblems. The quality of the bound depends on how the scenarios are grouped. In this paper, we formulate a mixed-integer bilevel program that optimally groups scenarios to tighten the quantile bounds. For general chance-constrained programs, we propose a branch-and-cut algorithm to optimize the bilevel program, and for chance-constrained linear programs, a mixed-integer linear-programming reformulation is derived. We also propose several heuristics for grouping similar or dissimilar scenarios. Our computational results demonstrate that optimal grouping bounds are much tighter than heuristic bounds, resulting in smaller root-node gaps and better performance of scenario decomposition for solving chance-constrained 0-1 programs. Also, the optimal grouping bounds can be greatly strengthened using larger group size. Summary of Contribution: Chance-constrained programs are in general NP-hard but widely used in practice for lowering the risk of undesirable outcomes during decision making under uncertainty. Assuming finite scenarios of uncertain parameter, chance-constrained programs can be reformulated as mixed-integer linear programs with binary variables representing whether or not the constraints are satisfied in corresponding scenarios. A useful quantile bound for solving chance-constrained programs can be improved by grouping subsets of scenarios at the expense of solving larger subproblems. In this paper, we develop algorithms for optimally and heuristically grouping scenarios to tighten the quantile bounds. We aim to improve both the computation and solution quality of a variety of chance-constrained programs formulated for different Operations Research problems.
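A toy numerical sketch of the quantile bound and the effect of grouping (precomputed numbers stand in for the scenario subproblem solves; no MIP solver is used, and the grouping here is a simplification of the paper's optimized grouping):

import math
import numpy as np

rng = np.random.default_rng(0)
n_scen, eps = 100, 0.1
k = math.floor(eps * n_scen)        # number of scenarios that may be violated

# Stand-in optimal values of the N single-scenario subproblems.
values = rng.normal(loc=10.0, scale=2.0, size=n_scen)

# Any feasible solution satisfies at least N - k scenarios, so its cost is
# at least the (k+1)-th smallest subproblem value.
print("quantile lower bound:", round(np.sort(values)[k], 3))

# Grouping scenarios in pairs: a group subproblem enforces both scenarios, so
# its value is at least the max of its members' values (used as a stand-in).
# At most k groups can contain a violated scenario, so the (k+1)-th smallest
# group value is a valid, typically tighter, bound.
group_values = values.reshape(-1, 2).max(axis=1)
print("grouped lower bound:", round(np.sort(group_values)[k], 3))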
36

Papacharalampous, Georgia, and Andreas Langousis. "Probabilistic water demand forecasting using quantile regression algorithms". Water Resources Research, May 5, 2022. http://dx.doi.org/10.1029/2021wr030216.

37

Wen, Jiawei, Songshan Yang, Christina Dan Wang, Yifan Jiang and Runze Li. "Feature-splitting algorithms for ultrahigh dimensional quantile regression". Journal of Econometrics, March 2023. http://dx.doi.org/10.1016/j.jeconom.2023.01.028.

38

Dolce, Pasquale, Cristina Davino and Domenico Vistocco. "Quantile composite-based path modeling: algorithms, properties and applications". Advances in Data Analysis and Classification, November 2, 2021. http://dx.doi.org/10.1007/s11634-021-00469-0.

Abstract:
Composite-based path modeling aims to study the relationships among a set of constructs, that is a representation of theoretical concepts. Such constructs are operationalized as composites (i.e. linear combinations of observed or manifest variables). The traditional partial least squares approach to composite-based path modeling focuses on the conditional means of the response distributions, being based on ordinary least squares regressions. Several are the cases where limiting to the mean could not reveal interesting effects at other locations of the outcome variables. Among these: when response variables are highly skewed, distributions have heavy tails and the analysis is concerned also about the tail part, heteroscedastic variances of the errors is present, distributions are characterized by outliers and other extreme data. In such cases, the quantile approach to path modeling is a valuable tool to complement the traditional approach, analyzing the entire distribution of outcome variables. Previous research has already shown the benefits of Quantile Composite-based Path Modeling but the methodological properties of the method have never been investigated. This paper offers a complete description of Quantile Composite-based Path Modeling, illustrating in details the method, the algorithms, the partial optimization criteria along with the machinery for validating and assessing the models. The asymptotic properties of the method are investigated through a simulation study. Moreover, an application on chronic kidney disease in diabetic patients is used to provide guidelines for the interpretation of results and to show the potentialities of the method to detect heterogeneity in the variable relationships.
39

Jin, Jun, Shuangzhe Liu and Tiefeng Ma. "Optimal subsampling algorithms for composite quantile regression in massive data". Statistics, July 24, 2023, 1–33. http://dx.doi.org/10.1080/02331888.2023.2239507.

40

Pagès, Gilles, and Abass Sagna. "Weak and strong error analysis of recursive quantization: a general approach with an application to jump diffusions". IMA Journal of Numerical Analysis, September 30, 2020. http://dx.doi.org/10.1093/imanum/draa033.

Abstract:
Observing that the recent developments of spatial discretization schemes based on recursive (product) quantization can be applied to a wide family of discrete time Markov chains, including all standard time discretization schemes of diffusion processes, we establish in this paper a generic strong error bound for such quantized schemes under a Lipschitz propagation assumption. We also establish a marginal weak error estimate that is entirely new to our best knowledge. As an illustration of their generality, we show how to recursively quantize the Euler scheme of a jump diffusion process, including details on the algorithmic aspects: grid computation, transition weight computation, etc. Finally, we test the performance of the recursive quantization algorithm by pricing a European put option in a jump Merton model.
41

Moon, Haeseong, and Wen-Xin Zhou. "High-dimensional composite quantile regression: Optimal statistical guarantees and fast algorithms". Electronic Journal of Statistics 17, no. 2 (January 1, 2023). http://dx.doi.org/10.1214/23-ejs2147.

42

Su, Yang, Huang Zhang, Benoit Gabrielle and David Makowski. "Performances of Machine Learning Algorithms in Predicting the Productivity of Conservation Agriculture at a Global Scale". Frontiers in Environmental Science 10 (February 8, 2022). http://dx.doi.org/10.3389/fenvs.2022.812648.

Abstract:
Assessing the productive performance of conservation agriculture (CA) has become a major issue due to growing concerns about global food security and sustainability. Numerous experiments have been conducted to assess the performance of CA under various local conditions, and meta-analysis has become a standard approach in agricultural sector for analysing and summarizing the experimental data. Meta-analysis provides valuable synthetic information based on mean effect size estimation. However, summarizing large amounts of information by way of a single mean effect value is not always satisfactory, especially when considering agricultural practices. Indeed, their impacts on crop yields are often non-linear, and vary widely depending on a number of factors, including soil properties and local climate conditions. To address this issue, here we present a machine learning approach to produce data-driven global maps describing the spatial distribution of the productivity of CA versus conventional tillage (CT). Our objective is to evaluate and compare several machine-learning models for their ability in estimating the productivity of CA systems, and to analyse uncertainty in the model outputs. We consider different usages, including classification, point regression and quantile regression. Our approach covers the comparison of 12 different machine learning algorithms, model training, tuning with cross-validation, testing, and global projection of results. The performances of these algorithms are compared based on a recent global dataset including more than 4,000 pairs of crop yield data for CA vs. CT. We show that random forest has the best performance in classification and regression, while quantile regression forest performs better than quantile neural networks in quantile regression. The best algorithms are used to map crop productivity of CA vs. CT at the global scale, and results reveal that the performance of CA vs. CT is characterized by a strong spatial variability, and that the probability of yield gain with CA is highly dependent on geographical locations. This result demonstrates that our approach is much more informative than simply presenting average effect sizes produced by standard meta-analyses, and paves the way for such probabilistic, spatially-explicit approaches in many other fields of research.
43

Krabichler, Thomas, and Marcus Wunsch. "Hedging goals". Financial Markets and Portfolio Management, November 17, 2023. http://dx.doi.org/10.1007/s11408-023-00437-y.

Abstract:
Goal-based investing is concerned with reaching a monetary investment goal by a given finite deadline, which differs from mean-variance optimization in modern portfolio theory. In this article, we expand the close connection between goal-based investing and option hedging that was originally discovered in Browne (Adv Appl Probab 31(2):551–577, 1999) by allowing for varying degrees of investor risk aversion using lower partial moments of different orders. Moreover, we show that maximizing the probability of reaching the goal (quantile hedging, cf. Föllmer and Leukert in Finance Stoch 3:251–273, 1999) and minimizing the expected shortfall (efficient hedging, cf. Föllmer and Leukert in Finance Stoch 4:117–146, 2000) yield, in fact, the same optimal investment policy. We furthermore present an innovative and model-free approach to goal-based investing using methods of reinforcement learning. To the best of our knowledge, we offer the first algorithmic approach to goal-based investing that can find optimal solutions in the presence of transaction costs.
44

Gerdt, Vladimir P., and Vladimir V. Kornyak. "An algorithm for analysis of the structure of finitely presented Lie algebras". Discrete Mathematics & Theoretical Computer Science Vol. 1 (January 1, 1997). http://dx.doi.org/10.46298/dmtcs.243.

Abstract:
We consider the following problem: what is the most general Lie algebra satisfying a given set of Lie polynomial equations? The presentation of Lie algebras by a finite set of generators and defining relations is one of the most general mathematical and algorithmic schemes of their analysis. That problem is of great practical importance, covering applications ranging from mathematical physics to combinatorial algebra. Some particular applications are construction of prolongation algebras in the Wahlquist-Estabrook method for integrability analysis of nonlinear partial differential equations and investigation of Lie algebras arising in different physical models. The finite presentations also indicate a way to q-quantize Lie algebras. To solve this problem, one should perform a large volume of algebraic transformations which is sharply increased with growth of the number of generators and relations. For this reason, in practice one needs to use a computer algebra tool. We describe here an algorithm for constructing the basis of a finitely presented Lie algebra and its commutator table, and its implementation in the C language. Some computer results illustrating our algorithm and its actual implementation are also presented.
45

Gholami, Hamid, Aliakbar Mohammadifar, Dieu Tien Bui and Adrian L. Collins. "Mapping wind erosion hazard with regression-based machine learning algorithms". Scientific Reports 10, no. 1 (November 24, 2020). http://dx.doi.org/10.1038/s41598-020-77567-0.

Abstract:
Land susceptibility to wind erosion hazard in Isfahan province, Iran, was mapped by testing 16 advanced regression-based machine learning methods: Robust linear regression (RLR), Cforest, Non-convex penalized quantile regression (NCPQR), Neural network with feature extraction (NNFE), Monotone multi-layer perceptron neural network (MMLPNN), Ridge regression (RR), Boosting generalized linear model (BGLM), Negative binomial generalized linear model (NBGLM), Boosting generalized additive model (BGAM), Spline generalized additive model (SGAM), Spike and slab regression (SSR), Stochastic gradient boosting (SGB), Support vector machine (SVM), Relevance vector machine (RVM), and the Cubist and Adaptive network-based fuzzy inference system (ANFIS). Thirteen factors controlling wind erosion were mapped, and multicollinearity among these factors was quantified using the tolerance coefficient (TC) and variance inflation factor (VIF). Model performance was assessed by RMSE, MAE, MBE, and a Taylor diagram using both training and validation datasets. The results showed that five models (MMLPNN, SGAM, Cforest, BGAM and SGB) are capable of delivering high prediction accuracy for land susceptibility to wind erosion hazard. DEM, precipitation, and vegetation (NDVI) are the most critical factors controlling wind erosion in the study area. Overall, regression-based machine learning models are efficient techniques for mapping land susceptibility to wind erosion hazards.
APA, Harvard, Vancouver, ISO and other citation styles
46

Vogeti, Rishith Kumar, Bhavesh Rahul Mishra and K. Srinivasa Raju. „Machine learning algorithms for streamflow forecasting of Lower Godavari Basin“. H2Open Journal, 01.11.2022. http://dx.doi.org/10.2166/h2oj.2022.240.

The full text of the source
Annotation:
Abstract: The present study applies three machine learning algorithms, namely bi-directional long short-term memory (Bi-LSTM), wavelet neural network (WNN), and eXtreme Gradient Boosting (XGBoost), to assess their suitability for streamflow projections of the Lower Godavari Basin. Historical data covering 39 years of daily rainfall, evapotranspiration, and discharge are used, of which 80% serve for model training and 20% for validation. A random search method is used for hyperparameter tuning. XGBoost performs better than WNN and Bi-LSTM, with an R², RMSE, NSE, and PBIAS of 0.88, 1.48, 0.86, and 29.3% during training and corresponding values of 0.86, 1.63, 0.85, and 28.5% during validation, indicating consistency. It is therefore used for projecting streamflow from a climate change perspective. The global climate model EC-Earth3 is used because of its potential, as observed in previous studies. Four Shared Socioeconomic Pathways (SSPs) are considered. Downscaling of future climate variables is based on empirical quantile mapping. Eight decadal streamflow projections are computed, D1 to D8 (2021–2030 to 2091–2099), exhibiting more pronounced changes within the warming range. They are compared with three historic time horizons, H1 (1982–1994), H2 (1995–2007), and H3 (2008–2020). The highest daily streamflow is observed in D1, D3, D4, D5, and D8 under SSP245, and in D6 and D7 under SSP585, according to the XGBoost analysis.
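Empirical quantile mapping, the downscaling step named above, can be sketched generically as follows: each future model value is located in the historical model's empirical CDF and replaced by the observed value at the same quantile. The series below are synthetic stand-ins, not the basin's data.

```python
import numpy as np

def empirical_quantile_mapping(model_hist, obs_hist, model_future):
    """Bias-correct model_future by mapping each value through the
    historical model CDF onto the observed quantile function."""
    # Empirical CDF of the historical model values at each future value.
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Observed value at the same quantile (with interpolation).
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 3.0, size=5000)     # observed rainfall-like series
model = rng.gamma(2.0, 2.0, size=5000)   # biased historical model output
future = rng.gamma(2.0, 2.2, size=1000)  # future model output
corrected = empirical_quantile_mapping(model, obs, future)
print("raw future mean:", future.mean(), "corrected mean:", corrected.mean())
```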
APA, Harvard, Vancouver, ISO and other citation styles
47

Debbarma, Nilotpal, Parthasarathi Choudhury and Parthajit Roy. „Comparision of performance of multi criteria decision making ensemble-clustering algorithms in rainfall frequency analysis“. Water Practice and Technology, 02.09.2021. http://dx.doi.org/10.2166/wpt.2021.086.

The full text of the source
Annotation:
Abstract: The lack of adequate extreme-rainfall information at a site of interest is addressed through regionalization, in which nearby gauged stations with similar attributes are grouped together. K-Means and Fuzzy C-Means are commonly used methods for regionalizing rainfall, but the application of genetic algorithms has rarely been explored. Genetic algorithms (GA) are highly efficient evolutionary algorithms that, with an appropriate objective function, can effectively perform clustering. In the present study, the Davies-Bouldin index is used as the objective function, and the resulting clusters are validated using a set of validation measures. Taking into account the varied output of each validation measure, an ensemble approach involving multi-criteria decision making is applied to obtain optimally ranked solutions, and the procedure is extended to K-Means and Fuzzy C-Means for comparison. From the results obtained, GA-based clustering outperforms the other two algorithms in forming homogeneous regions, with better performance in the leave-one-out cross-validation (LOOCV) test and in sensitivity analysis. The accuracy of the regional growth curves, assessed using regional relative bias and RMSE, suggests low uncertainty and accurate quantile estimates in the GA regions. Further, the entropy-based information transfer index evaluated among the regions is highest for GA and lowest for K-Means.
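The clustering idea reads naturally as a small evolutionary loop: candidate partitions are label vectors, fitness is the Davies-Bouldin index (lower is better), selection keeps the best half, and mutation reassigns a few points. The sketch below is a toy mutation-only variant on synthetic data, not the study's GA.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=90, centers=3, random_state=0)
K, POP, GENS, MUT = 3, 40, 200, 0.05

def fitness(labels):
    # The Davies-Bouldin index needs at least two non-empty clusters.
    if len(set(labels)) < 2:
        return np.inf
    return davies_bouldin_score(X, labels)

def mutate(labels):
    child = labels.copy()
    flips = rng.random(len(child)) < MUT
    child[flips] = rng.integers(0, K, flips.sum())
    return child

population = [rng.integers(0, K, len(X)) for _ in range(POP)]
for _ in range(GENS):
    elite = sorted(population, key=fitness)[: POP // 2]   # selection
    population = elite + [mutate(p) for p in elite]       # mutation
best = min(population, key=fitness)
print("best Davies-Bouldin index:", fitness(best))
```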
APA, Harvard, Vancouver, ISO and other citation styles
48

Adler, Jakob, Elina Taneva, Thomas Ansorge and Peter R. Mertens. „CKD prevalence based on real-world data: continuous age-dependent lower reference limits of eGFR with CKD–EPI, FAS and EKFC algorithms“. International Urology and Nephrology, 28.04.2022. http://dx.doi.org/10.1007/s11255-022-03210-8.

The full text of the source
Annotation:
Abstract: Purpose: Several recent articles discuss the need for a definition of chronic kidney disease (CKD) that incorporates age-dependency, and its impact on the prevalence of CKD. The relevance derives from the common knowledge that renal function declines with age. The aim of this study was to calculate age-dependent lower reference limits of eGFR and to consider their impact on the prevalence of CKD. Methods: A real-world data set from patients with inconspicuous urinalysis was used to establish two quantile regression models, which were used to calculate continuous age-dependent lower reference limits of CKD–EPI, FAS and EKFC eGFR based on either single eGFR determinations or eGFR values that are stable over a period of at least 3 months (± 10% eGFR). The derived lower reference limits were used to calculate the prevalence of CKD in a validation data set. The prevalence calculation was done once without and once with application of the chronicity criterion. Results: Both models yielded age-dependent lower reference limits of eGFR that are comparable to previously published data. The model using patients with stable eGFR resulted in higher eGFR reference limits. By applying the chronicity criterion, a lower prevalence of CKD was calculated compared to one-time eGFR measurements (CKD–EPI: 9.8% vs. 8.3%, FAS: 8.0% vs. 7.2%, EKFC: 9.0% vs. 7.1%). Conclusion: The application of age-dependent lower reference limits of eGFR together with the chronicity criterion results in a lower prevalence of CKD, which supports the estimates of recently published work and the idea of introducing age-dependency into the definition of CKD.
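A continuous age-dependent lower reference limit of this kind can be sketched with off-the-shelf quantile regression, fitting, say, the 2.5th percentile of eGFR as a function of age. The data below are synthetic, and the linear form and quantile choice are assumptions for illustration, not the study's models.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic eGFR-like data: mean declines with age, with noise.
rng = np.random.default_rng(1)
age = rng.uniform(18, 90, size=3000)
egfr = 110 - 0.8 * (age - 18) + rng.normal(0, 12, size=3000)

# Fit the 2.5th percentile of eGFR as a linear function of age;
# the fitted line serves as a continuous lower reference limit.
X = sm.add_constant(age)
model = sm.QuantReg(egfr, X).fit(q=0.025)
intercept, slope = model.params[0], model.params[1]
for a in (30, 50, 70):
    print(f"age {a}: lower limit ≈ {intercept + slope * a:.1f} mL/min/1.73m²")
```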
APA, Harvard, Vancouver, ISO and other citation styles