Selected scholarly literature on the topic "Non-identically distributed data"


Browse the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Non-identically distributed data".


Journal articles on the topic "Non-identically distributed data"

1

AlSaiary, Zakeia. "Analyzing Order Statistics of Non-Identically Distributed Shifted Exponential Variables in Numerical Data". International Journal of Science and Research (IJSR) 13, no. 11 (November 5, 2024): 1264–70. http://dx.doi.org/10.21275/sr241116231011.

2

Tiurev, Konstantin, Peter-Jan H. S. Derks, Joschka Roffe, Jens Eisert, and Jan-Michael Reiner. "Correcting non-independent and non-identically distributed errors with surface codes". Quantum 7 (September 26, 2023): 1123. http://dx.doi.org/10.22331/q-2023-09-26-1123.

Abstract:
A common approach to studying the performance of quantum error correcting codes is to assume independent and identically distributed single-qubit errors. However, the available experimental data shows that realistic errors in modern multi-qubit devices are typically neither independent nor identical across qubits. In this work, we develop and investigate the properties of topological surface codes adapted to a known noise structure by Clifford conjugations. We show that the surface code locally tailored to non-uniform single-qubit noise in conjunction with a scalable matching decoder yields an increase in error thresholds and exponential suppression of sub-threshold failure rates when compared to the standard surface code. Furthermore, we study the behaviour of the tailored surface code under local two-qubit noise and show the role that code degeneracy plays in correcting such noise. The proposed methods do not require additional overhead in terms of the number of qubits or gates and use a standard matching decoder, hence come at no extra cost compared to the standard surface-code error correction.
3

Zhu, Feng, Jiangshan Hao, Zhong Chen, Yanchao Zhao, Bing Chen, and Xiaoyang Tan. "STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset". Electronics 11, no. 3 (January 20, 2022): 314. http://dx.doi.org/10.3390/electronics11030314.

Abstract:
With the development of the Internet of Things, edge computing applications are paying more and more attention to privacy and real-time. Federated learning, a promising machine learning method that can protect user privacy, has begun to be widely studied. However, traditional synchronous federated learning methods are easily affected by stragglers, and non-independent and identically distributed data sets will also reduce the convergence speed. In this paper, we propose an asynchronous federated learning method, STAFL, where users can upload their updates at any time and the server will immediately aggregate the updates and return the latest global model. Secondly, STAFL will judge the user’s data distribution according to the user’s update and dynamically change the aggregation parameters according to the user’s network weight and staleness to minimize the impact of non-independent and identically distributed data sets on asynchronous updates. The experimental results show that our method performs better on non-independent and identically distributed data sets than existing methods.
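The staleness-aware weighting idea in the abstract above can be sketched as follows. The polynomial decay and the blending rule are illustrative assumptions for a minimal sketch, not STAFL's published update rule:

```python
import numpy as np

def staleness_weight(staleness, decay=0.5):
    # Illustrative polynomial decay: an update that is `staleness`
    # rounds old contributes with weight (1 + staleness)^(-decay).
    return (1.0 + staleness) ** (-decay)

def async_aggregate(global_model, client_update, staleness, decay=0.5):
    # Blend a single asynchronous client update into the global model,
    # attenuated according to how stale the update is.
    w = staleness_weight(staleness, decay)
    return (1.0 - w) * global_model + w * client_update

model = np.zeros(3)
fresh = async_aggregate(model, np.ones(3), staleness=0)  # full weight
stale = async_aggregate(model, np.ones(3), staleness=8)  # attenuated
```

A fresh update (staleness 0) replaces the mixture fully, while an 8-round-old update contributes only about a third of its weight under this decay.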
4

Wu, Jikun, JiaHao Yu, and YuJun Zheng. "Research on Federated Learning Algorithms in Non-Independent Identically Distributed Scenarios". Highlights in Science, Engineering and Technology 85 (March 13, 2024): 104–12. http://dx.doi.org/10.54097/7newsv97.

Abstract:
Federated learning is distributed learning, trained mainly on multiple distributed local devices. After receiving the local parameters, a server performs aggregation, iterating until convergence to a final stable model. However, in actual applications, due to clients' differing preferences and differences in their local data, the data in federated learning may not be independently and identically distributed (non-IID). The main research work of this article is as follows: 1) Analyze and summarize the methods and techniques for solving the non-IID data problem in past experiments. 2) Perform in-depth research on the basic methods of federated learning on non-IID data, such as FedAvg and FedProx. 3) Using the FedAvg algorithm on the CIFAR-10 dataset, simulate non-IID data by controlling the number of classes held by each client and by partitioning the dataset according to a Dirichlet distribution. Detailed data analysis is made of the influence of the data on the accuracy and loss of model training.
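The Dirichlet-based partitioning mentioned above is a standard way to simulate non-IID client data; a minimal sketch, where the client count, concentration parameter, and toy labels standing in for CIFAR-10 targets are illustrative assumptions:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    # Split sample indices across clients so each class is divided
    # according to a Dirichlet(alpha) draw: a small alpha yields highly
    # skewed (non-IID) client datasets, a large alpha approaches IID.
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.1)
```

With alpha = 0.1 most clients end up dominated by a few classes; raising alpha toward 100 makes all five shards nearly identical in class composition.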
5

Jiang, Yingrui, Xuejian Zhao, Hao Li, and Yu Xue. "A Personalized Federated Learning Method Based on Knowledge Distillation and Differential Privacy". Electronics 13, no. 17 (September 6, 2024): 3538. http://dx.doi.org/10.3390/electronics13173538.

Abstract:
Federated learning allows data to remain decentralized, and various devices work together to train a common machine learning model. This method keeps sensitive data local on devices, protecting privacy. However, privacy protection and non-independent and identically distributed data are significant challenges for many FL techniques currently in use. This paper proposes a personalized federated learning method (FedKADP) that integrates knowledge distillation and differential privacy to address the issues of privacy protection and non-independent and identically distributed data in federated learning. The introduction of a bidirectional feedback mechanism enables the establishment of an interactive tuning loop between knowledge distillation and differential privacy, allowing dynamic tuning and continuous performance optimization while protecting user privacy. By closely monitoring privacy overhead through Rényi differential privacy theory, this approach effectively balances model performance and privacy protection. Experimental results using the MNIST and CIFAR-10 datasets demonstrate that FedKADP performs better than conventional federated learning techniques, particularly when handling non-independent and identically distributed data. It successfully lowers the heterogeneity of the model, accelerates global model convergence, and improves validation accuracy, making it a new approach to federated learning.
6

Babar, Muhammad, Basit Qureshi, and Anis Koubaa. "Investigating the impact of data heterogeneity on the performance of federated learning algorithm using medical imaging". PLOS ONE 19, no. 5 (May 15, 2024): e0302539. http://dx.doi.org/10.1371/journal.pone.0302539.

Abstract:
In recent years, Federated Learning (FL) has gained traction as a privacy-centric approach in medical imaging. This study explores the challenges posed by data heterogeneity on FL algorithms, using the COVIDx CXR-3 dataset as a case study. We contrast the performance of the Federated Averaging (FedAvg) algorithm on non-identically and independently distributed (non-IID) data against identically and independently distributed (IID) data. Our findings reveal a notable performance decline with increased data heterogeneity, emphasizing the need for innovative strategies to enhance FL in diverse environments. This research contributes to the practical implementation of FL, extending beyond theoretical concepts and addressing the nuances in medical imaging applications. This research uncovers the inherent challenges in FL due to data diversity. It sets the stage for future advancements in FL strategies to effectively manage data heterogeneity, especially in sensitive fields like healthcare.
7

Layne, Elliot, Erika N. Dort, Richard Hamelin, Yue Li, and Mathieu Blanchette. "Supervised learning on phylogenetically distributed data". Bioinformatics 36, Supplement_2 (December 2020): i895–i902. http://dx.doi.org/10.1093/bioinformatics/btaa842.

Abstract:
Motivation: The ability to develop robust machine-learning (ML) models is considered imperative to the adoption of ML techniques in biology and medicine. This challenge is particularly acute when data available for training are not independent and identically distributed (iid), in which case trained models are vulnerable to out-of-distribution generalization problems. Of particular interest are problems where data correspond to observations made on phylogenetically related samples (e.g. antibiotic resistance data). Results: We introduce DendroNet, a new approach to train neural networks in the context of evolutionary data. DendroNet explicitly accounts for the relatedness of the training/testing data, while allowing the model to evolve along the branches of the phylogenetic tree, hence accommodating potential changes in the rules that relate genotypes to phenotypes. Using simulated data, we demonstrate that DendroNet produces models that can be significantly better than non-phylogenetically aware approaches. DendroNet also outperforms other approaches at two biological tasks of significant practical importance: antibiotic resistance prediction in bacteria and trophic level prediction in fungi. Availability and implementation: https://github.com/BlanchetteLab/DendroNet.
8

Shahrivari, Farzad, and Nikola Zlatanov. "On Supervised Classification of Feature Vectors with Independent and Non-Identically Distributed Elements". Entropy 23, no. 8 (August 13, 2021): 1045. http://dx.doi.org/10.3390/e23081045.

Abstract:
In this paper, we investigate the problem of classifying feature vectors with mutually independent but non-identically distributed elements that take values from a finite alphabet set. First, we show the importance of this problem. Next, we propose a classifier and derive an analytical upper bound on its error probability. We show that the error probability moves to zero as the length of the feature vectors grows, even when there is only one training feature vector per label available. Thereby, we show that for this important problem at least one asymptotically optimal classifier exists. Finally, we provide numerical examples where we show that the performance of the proposed classifier outperforms conventional classification algorithms when the number of training data is small and the length of the feature vectors is sufficiently high.
9

Lv, Yankai, Haiyan Ding, Hao Wu, Yiji Zhao, and Lei Zhang. "FedRDS: Federated Learning on Non-IID Data via Regularization and Data Sharing". Applied Sciences 13, no. 23 (December 4, 2023): 12962. http://dx.doi.org/10.3390/app132312962.

Abstract:
Federated learning (FL) is an emerging decentralized machine learning framework enabling private global model training by collaboratively leveraging local client data without transferring it centrally. Unlike traditional distributed optimization, FL trains the model at the local client and then aggregates it at the server. While this approach reduces communication costs, the local datasets of different clients are non-Independent and Identically Distributed (non-IID), which may make the local model inconsistent. The present study suggests a FL algorithm that leverages regularization and data sharing (FedRDS). The local loss function is adapted by introducing a regularization term in each round of training so that the local model will gradually move closer to the global model. However, when the client data distribution gap becomes large, adding regularization items will increase the degree of client drift. Based on this, we used a data-sharing method in which a portion of server data is taken out as a shared dataset during the initialization. We then evenly distributed these data to each client to mitigate the problem of client drift by reducing the difference in client data distribution. Analysis of experimental outcomes indicates that FedRDS surpasses some known FL methods in various image classification tasks, enhancing both communication efficacy and accuracy.
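The regularization term described above, which pulls each local model toward the global model, can be sketched as a FedProx-style proximal penalty; this is a minimal sketch of that general idea, with mu and the quadratic form chosen for illustration rather than taken from FedRDS:

```python
import numpy as np

def proximal_local_step(w_local, task_grad, w_global, lr=0.1, mu=1.0):
    # One local SGD step on: task_loss + (mu/2) * ||w_local - w_global||^2.
    # The proximal term limits client drift away from the global model
    # when local (non-IID) data would otherwise pull it far off.
    prox_grad = mu * (w_local - w_global)
    return w_local - lr * (task_grad + prox_grad)

w_global = np.zeros(2)
w = np.array([1.0, -1.0])
# With a zero task gradient, the proximal term alone draws w toward w_global.
for _ in range(100):
    w = proximal_local_step(w, task_grad=np.zeros(2), w_global=w_global)
```

The design choice here is the trade-off the abstract notes: a larger mu keeps clients consistent with the server but can amplify drift when local distributions differ strongly, which is why FedRDS pairs it with data sharing.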
10

Zhang, Xufei, and Yiqing Shen. "Non-IID federated learning with Mixed-Data Calibration". Applied and Computational Engineering 45, no. 1 (March 15, 2024): 168–78. http://dx.doi.org/10.54254/2755-2721/45/20241048.

Abstract:
Federated learning (FL) is a privacy-preserving and collaborative machine learning approach for decentralized data across multiple clients. However, the presence of non-independent and non-identically distributed (non-IID) data among clients poses challenges to the performance of the global model. To address this, we propose Mixed Data Calibration (MIDAC). MIDAC mixes M data points to neutralize sensitive information in each individual data point and uses the mixed data to calibrate the global model on the server in a privacy-preserving way. MIDAC improves global model accuracy with low computational overhead while preserving data privacy. Our experiments on CIFAR-10 and BloodMNIST datasets validate the effectiveness of MIDAC in improving the accuracy of federated learning models under non-IID data distributions.
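The mixing step, averaging M data points so that no individual record is exposed while the blend still reflects the overall distribution, can be sketched as follows; M, the plain averaging, and the soft-label handling are assumptions based on the abstract's description, not MIDAC's exact procedure:

```python
import numpy as np

def mix_data(X, y, M, rng):
    # Average disjoint groups of M examples (and their one-hot labels):
    # each mixed point blurs M individual records, yet the collection
    # remains usable for server-side calibration of the global model.
    n_classes = int(y.max()) + 1
    onehot = np.eye(n_classes)[y]
    idx = rng.permutation(len(X))
    groups = np.array_split(idx, len(X) // M)
    mixed_X = np.stack([X[g].mean(axis=0) for g in groups])
    mixed_y = np.stack([onehot[g].mean(axis=0) for g in groups])
    return mixed_X, mixed_y

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))          # 100 samples, 4 features
y = rng.integers(0, 3, size=100)       # 3 classes
Xm, ym = mix_data(X, y, M=5, rng=rng)  # 20 mixed points
```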

Theses on the topic "Non-identically distributed data"

1

Dabo, Issa-Mbenard. "Applications de la théorie des matrices aléatoires en grandes dimensions et des probabilités libres en apprentissage statistique par réseaux de neurones". Electronic Thesis or Diss., Bordeaux, 2025. http://www.theses.fr/2025BORD0021.

Abstract:
The functioning of machine learning algorithms relies heavily on the structure of the data they are given to study. Most research work in machine learning focuses on the study of homogeneous data, often modeled by independent and identically distributed random variables. However, data encountered in practice are often heterogeneous. In this thesis, we propose to consider heterogeneous data by endowing them with a variance profile. This notion, derived from random matrix theory, allows us in particular to study data arising from mixture models. We are particularly interested in the problem of ridge regression through two models: the linear ridge model and the random feature ridge model. In this thesis, we study the performance of these two models in the high-dimensional regime, i.e., when the size of the training sample and the dimension of the data tend to infinity at comparable rates. To this end, we propose asymptotic equivalents for the training error and the test error associated with the models of interest. The derivation of these equivalents relies heavily on spectral analysis from random matrix theory, free probability theory, and traffic theory. Indeed, the performance measurement of many learning models depends on the distribution of the eigenvalues of random matrices. Moreover, these results enabled us to observe phenomena specific to the high-dimensional regime, such as the double descent phenomenon. Our theoretical study is accompanied by numerical experiments illustrating the accuracy of the asymptotic equivalents we provide.

Book chapters on the topic "Non-identically distributed data"

1

"Models with dependent and with non-identically distributed data". In Quantile Regression, 131–62. Oxford: John Wiley & Sons, Ltd, 2014. http://dx.doi.org/10.1002/9781118752685.ch5.

2

Lele, S. "Resampling using estimating equations". In Estimating Functions, 295–304. Oxford: Oxford University Press, 1991. http://dx.doi.org/10.1093/oso/9780198522287.003.0022.

Abstract:
This paper surveys resampling methods for a sequence of non-independent, non-identically distributed random variables. The jackknife method is extended to such a sequence through the use of linear estimating equations. In this paper, we extend Wu's bootstrap to dependent data through the use of linear estimating equations. The main idea is to perturb each component estimating equation by another easy-to-generate sequence of estimating equations with the proper mean, variance, and correlation structure. Some validity results are provided.
3

Tarima, Sergey, and Nancy Flournoy. "Choosing Interim Sample Sizes in Group Sequential Designs". In German Medical Data Sciences: Bringing Data to Life. IOS Press, 2021. http://dx.doi.org/10.3233/shti210043.

Abstract:
This manuscript investigates sample sizes for interim analyses in group sequential designs. Traditional group sequential designs (GSD) rely on "information fraction" arguments to define the interim sample sizes. Then, interim maximum likelihood estimators (MLEs) are used to decide whether to stop early or continue the data collection until the next interim analysis. The possibility of early stopping changes the distribution of interim and final MLEs: possible interim decisions on trial stopping exclude some sample space elements. At each interim analysis the distribution of an interim MLE is a mixture of truncated and untruncated distributions. The distributional form of an MLE becomes more and more complicated with each additional interim analysis. Test statistics that are asymptotically normal without a possibility of early stopping become mixtures of truncated normal distributions under local alternatives. Stage-specific information ratios are equivalent to sample size ratios for independent and identically distributed data. This equivalence is used to justify interim sample sizes in GSDs. Because stage-specific information ratios derived from normally distributed data differ from those derived from non-normally distributed data, the former equivalence is invalid when there is a possibility of early stopping. Tarima and Flournoy [3] have proposed a new GSD where interim sample sizes are determined by a pre-defined sequence of ordered alternative hypotheses, and the calculation of information fractions is not needed. This innovation allows researchers to prescribe interim analyses based on desired power properties. This work compares interim power properties of a classical one-sided three-stage Pocock design with a one-sided three-stage design driven by three ordered alternatives.
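The "information fraction" arithmetic referred to above reduces, for i.i.d. data, to the ratio of interim to final sample size; a minimal illustration (the three-stage design with equally spaced looks is a generic example, not the paper's exact configuration):

```python
def interim_sample_sizes(n_final, fractions):
    # For i.i.d. data, stage-specific information fractions equal
    # sample-size ratios: an interim look at information fraction t_k
    # takes place after n_final * t_k observations.
    return [round(n_final * t) for t in fractions]

# Three equally spaced looks, as in a classical three-stage design.
sizes = interim_sample_sizes(300, [1/3, 2/3, 1.0])  # [100, 200, 300]
```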
4

Zhao, Juan, Yuankai Zhang, Ruixuan Li, Yuhua Li, Haozhao Wang, Xiaoquan Yi, and Zhiying Deng. "XFed: Improving Explainability in Federated Learning by Intersection Over Union Ratio Extended Client Selection". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230628.

Abstract:
Federated Learning (FL) allows massive numbers of clients to collaboratively train a global model without revealing their private data. Because the participants' data are not independently and identically distributed (non-IID), the clients' Deep Neural Network model weights diverge, and more communication rounds are required before training converges. Moreover, models trained from non-IID data may extract biased features, and the rationale behind the model is still not fully analyzed and exploited. In this paper, we propose eXplainable-Fed (XFed), a novel client selection mechanism that takes both accuracy and explainability into account. Specifically, XFed selects participants in each round based on a small test set's accuracy via cross-entropy loss and interpretability via XAI-accuracy. XAI-accuracy is calculated as the Intersection over Union Ratio between the heat map and the ground-truth mask to evaluate the rationale behind the accuracy. Our experiments show that our method has accuracy comparable to state-of-the-art methods specially designed for accuracy, while increasing explainability by 14%-35% in terms of rationality.
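The XAI-accuracy quantity described above, an Intersection over Union between the explanation heat map and the ground-truth mask, can be sketched as follows; the binarization threshold is an illustrative assumption:

```python
import numpy as np

def xai_iou(heatmap, mask, thresh=0.5):
    # Binarize the heat map, then compute Intersection over Union with
    # the ground-truth mask: a proxy for how much of the model's
    # evidence falls on the genuinely relevant region.
    a = heatmap >= thresh
    b = mask.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

heat = np.array([[0.9, 0.2], [0.8, 0.1]])
mask = np.array([[1, 0], [0, 0]])
score = xai_iou(heat, mask)  # intersection 1, union 2 -> 0.5
```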
5

Luo, Zicheng, Xiaohan Li, Demu Zou, and Hao Bai. "Federated Reinforcement Learning Algorithm with Fair Aggregation for Edge Caching". In Advances in Transdisciplinary Engineering. IOS Press, 2024. https://doi.org/10.3233/atde241221.

Abstract:
Edge caching is employed to handle massive data requests while ensuring the quality of the user experience. However, existing edge caching algorithms often overlook user mobility, privacy protection, and the non-identically and independently distributed (non-i.i.d.) characteristics of content requests among base stations. To tackle these challenges, this paper proposes the Federated Reinforcement Learning Algorithm with Fair Aggregation for Edge Caching (FFA-PPO). The paper focuses primarily on the scenario of non-i.i.d. content requests in a multi-base-station, multi-mobile-user network. We model this problem as a Markov Decision Process (MDP) and propose a federated reinforcement learning method to solve it, with the goal of minimizing the content transmission latency of base stations. FFA-PPO resolves gradient conflicts by seeking the optimal gradient vector within a local ball centered at the averaged gradient, which ensures the model's fairness. Simulation results show that the proposed FFA-PPO algorithm outperforms other baseline algorithms in terms of content transmission latency and model fairness.
6

Feng, Chao, Alberto Huertas Celdrán, Janosch Baltensperger, Enrique Tomás Martínez Beltrán, Pedro Miguel Sánchez Sánchez, Gérôme Bovet, and Burkhard Stiller. "Sentinel: An Aggregation Function to Secure Decentralized Federated Learning". In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240686.

Abstract:
Decentralized Federated Learning (DFL) emerges as an innovative paradigm to train collaborative models, addressing the single-point-of-failure limitation. However, the security and trustworthiness of FL and DFL are compromised by poisoning attacks, which negatively impact their performance. Existing defense mechanisms have been designed for centralized FL and do not adequately exploit the particularities of DFL. Thus, this work introduces Sentinel, a defense strategy to counteract poisoning attacks in DFL. Sentinel leverages the accessibility of local data and defines a three-step aggregation protocol consisting of similarity filtering, bootstrap validation, and normalization to safeguard against malicious model updates. Sentinel has been evaluated with diverse datasets, data distributions, poisoning attack types, and threat levels. The results improve on state-of-the-art performance against both untargeted and targeted poisoning attacks when data follow an IID (Independent and Identically Distributed) configuration. In addition, the performance degradation of both Sentinel and other state-of-the-art robust aggregation methods under non-IID configurations is analyzed.

Conference papers on the topic "Non-identically distributed data"

1

Zhou, Zihao, Han Chen, Huageng Liu, Zeyu Ping, and Yuanyuan Song. "Distributed radar incoherent fusion method for independent non-identically distributed fluctuating targets". In 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 1–4. IEEE, 2024. https://doi.org/10.1109/icsidp62679.2024.10868151.

2

Zhang, Bosong, Qian Sun, Hai Wang, Linna Zhang, and Danyang Li. "Federated Learning Greedy Aggregation Optimization for Non-Independently Identically Distributed Data". In 2024 IEEE 23rd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2090–97. IEEE, 2024. https://doi.org/10.1109/trustcom63139.2024.00290.

3

Nie, Wenjing. "Research on federated model algorithm based on non-independent identically distributed data sets". In International Conference on Mechatronics and Intelligent Control (ICMIC 2024), edited by Kun Zhang and Pascal Lorenz, 130. SPIE, 2025. https://doi.org/10.1117/12.3045715.

4

Tillman, Robert E. "Structure learning with independent non-identically distributed data". In the 26th Annual International Conference. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553507.

5

Hu, Liang, Wei Cao, Jian Cao, Guandong Xu, Longbing Cao, and Zhiping Gu. "Bayesian Heteroskedastic Choice Modeling on Non-identically Distributed Linkages". In 2014 IEEE International Conference on Data Mining (ICDM). IEEE, 2014. http://dx.doi.org/10.1109/icdm.2014.84.

6

Li, Haowei, Like Luo, and Haolong Wang. "Federated learning on non-independent and identically distributed data". In Third International Conference on Machine Learning and Computer Application (ICMLCA 2022), edited by Fan Zhou and Shuhong Ba. SPIE, 2023. http://dx.doi.org/10.1117/12.2675255.

7

Mreish, Kinda, and Ivan I. Kholod. "Federated Learning with Non Independent and Identically Distributed Data". In 2024 Conference of Young Researchers in Electrical and Electronic Engineering (ElCon). IEEE, 2024. http://dx.doi.org/10.1109/elcon61730.2024.10468090.

8

Pan, Wentao, and Hui Zhou. "Fairness and Effectiveness in Federated Learning on Non-independent and Identically Distributed Data". In 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI). IEEE, 2023. http://dx.doi.org/10.1109/ccai57533.2023.10201271.

9

Shahrivari, Farzad, and Nikola Zlatanov. "An Asymptotically Optimal Algorithm For Classification of Data Vectors with Independent Non-Identically Distributed Elements". In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518006.

10

Hodea, Octavian, Adriana Vlad, and Octaviana Datcu. "Evaluating the sampling distance to achieve independently and identically distributed data from generalized Hénon map". In 2011 10th International Symposium on Signals, Circuits and Systems (ISSCS). IEEE, 2011. http://dx.doi.org/10.1109/isscs.2011.5978665.

