Academic literature on the topic 'Non-identically distributed data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Non-identically distributed data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Non-identically distributed data"

1. AlSaiary, Zakeia A. "Analyzing Order Statistics of Non-Identically Distributed Shifted Exponential Variables in Numerical Data." International Journal of Science and Research (IJSR) 13, no. 11 (November 5, 2024): 1264–70. http://dx.doi.org/10.21275/sr241116231011.
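This paper studies order statistics when the underlying variables are independent but not identically distributed. As a minimal illustration of that setting, the sketch below simulates order statistics of shifted exponential variables with per-variable shifts and scales in NumPy; the parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent but non-identically distributed shifted exponentials:
# X_i = shift_i + Exp(scale_i), each variable with its own parameters.
shifts = np.array([0.0, 0.5, 1.0, 1.5])
scales = np.array([1.0, 0.8, 0.6, 0.4])

n_sim = 100_000
samples = shifts + rng.exponential(scales, size=(n_sim, len(shifts)))

# Order statistics: sort each draw; column k is the (k+1)-th order statistic.
order_stats = np.sort(samples, axis=1)

# Monte Carlo estimates of each order statistic's mean.
print(order_stats.mean(axis=0))
```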

2. Tiurev, Konstantin, Peter-Jan H. S. Derks, Joschka Roffe, Jens Eisert, and Jan-Michael Reiner. "Correcting non-independent and non-identically distributed errors with surface codes." Quantum 7 (September 26, 2023): 1123. http://dx.doi.org/10.22331/q-2023-09-26-1123.

Abstract:
A common approach to studying the performance of quantum error correcting codes is to assume independent and identically distributed single-qubit errors. However, the available experimental data shows that realistic errors in modern multi-qubit devices are typically neither independent nor identical across qubits. In this work, we develop and investigate the properties of topological surface codes adapted to a known noise structure by Clifford conjugations. We show that the surface code locally tailored to non-uniform single-qubit noise in conjunction with a scalable matching decoder yields an increase in error thresholds and exponential suppression of sub-threshold failure rates when compared to the standard surface code. Furthermore, we study the behaviour of the tailored surface code under local two-qubit noise and show the role that code degeneracy plays in correcting such noise. The proposed methods do not require additional overhead in terms of the number of qubits or gates and use a standard matching decoder, hence come at no extra cost compared to the standard surface-code error correction.
3. Zhu, Feng, Jiangshan Hao, Zhong Chen, Yanchao Zhao, Bing Chen, and Xiaoyang Tan. "STAFL: Staleness-Tolerant Asynchronous Federated Learning on Non-iid Dataset." Electronics 11, no. 3 (January 20, 2022): 314. http://dx.doi.org/10.3390/electronics11030314.

Abstract:
With the development of the Internet of Things, edge computing applications pay increasing attention to privacy and real-time performance. Federated learning, a promising machine learning method that can protect user privacy, has begun to be widely studied. However, traditional synchronous federated learning methods are easily affected by stragglers, and non-independent and identically distributed datasets also reduce the convergence speed. In this paper, we propose an asynchronous federated learning method, STAFL, where users can upload their updates at any time and the server immediately aggregates them and returns the latest global model. In addition, STAFL judges each user's data distribution from their update and dynamically changes the aggregation parameters according to the user's network weight and staleness, minimizing the impact of non-independent and identically distributed datasets on asynchronous updates. The experimental results show that our method performs better on non-independent and identically distributed datasets than existing methods.
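The abstract does not give STAFL's exact aggregation rule, but the core idea of immediate, staleness-aware aggregation can be sketched with a generic decay rule common in asynchronous FL; the decay function and base mixing rate below are assumptions for illustration, not STAFL's published rule.

```python
import numpy as np

def async_aggregate(global_w, client_w, staleness, base_alpha=0.5):
    """Mix one client's update into the global model as soon as it
    arrives, down-weighting stale updates. The polynomial decay is a
    common asynchronous-FL choice, not necessarily STAFL's rule."""
    alpha = base_alpha / (1.0 + staleness)   # older updates count less
    return (1.0 - alpha) * global_w + alpha * client_w

global_w = np.zeros(4)
# A fresh update (staleness 0) moves the model more than a stale one.
global_w = async_aggregate(global_w, np.ones(4), staleness=0)
global_w = async_aggregate(global_w, 2 * np.ones(4), staleness=5)
print(global_w)
```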
4. Wu, Jikun, JiaHao Yu, and YuJun Zheng. "Research on Federated Learning Algorithms in Non-Independent Identically Distributed Scenarios." Highlights in Science, Engineering and Technology 85 (March 13, 2024): 104–12. http://dx.doi.org/10.54097/7newsv97.

Abstract:
Federated learning is distributed learning: training is performed locally on multiple distributed devices, and after receiving the local parameters, a server performs aggregation, iterating until convergence to a final stable model. In practical applications, however, because clients have different preferences and their local data differ, data in federated learning may not be independent and identically distributed (non-IID). The main research work of this article is as follows: 1) Analyze and summarize the methods and techniques for solving the non-IID data problem in past experiments. 2) Perform in-depth research on the basic federated learning methods for non-IID data, such as FedAvg and FedProx. 3) Using the FedAvg algorithm on the CIFAR-10 dataset, simulate the number of classes held by each client and partition the dataset according to a Dirichlet distribution to emulate non-IID data. A detailed analysis is made of the influence of this data on the accuracy and loss of model training.
5. Jiang, Yingrui, Xuejian Zhao, Hao Li, and Yu Xue. "A Personalized Federated Learning Method Based on Knowledge Distillation and Differential Privacy." Electronics 13, no. 17 (September 6, 2024): 3538. http://dx.doi.org/10.3390/electronics13173538.

Abstract:
Federated learning allows data to remain decentralized, and various devices work together to train a common machine learning model. This method keeps sensitive data local on devices, protecting privacy. However, privacy protection and non-independent and identically distributed data are significant challenges for many FL techniques currently in use. This paper proposes a personalized federated learning method (FedKADP) that integrates knowledge distillation and differential privacy to address the issues of privacy protection and non-independent and identically distributed data in federated learning. The introduction of a bidirectional feedback mechanism enables the establishment of an interactive tuning loop between knowledge distillation and differential privacy, allowing dynamic tuning and continuous performance optimization while protecting user privacy. By closely monitoring privacy overhead through Rényi differential privacy theory, this approach effectively balances model performance and privacy protection. Experimental results using the MNIST and CIFAR-10 datasets demonstrate that FedKADP performs better than conventional federated learning techniques, particularly when handling non-independent and identically distributed data. It successfully lowers the heterogeneity of the model, accelerates global model convergence, and improves validation accuracy, making it a new approach to federated learning.
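FedKADP's bidirectional tuning loop is not specified in the abstract, but its two building blocks are standard and easy to sketch. Below is a temperature-softened distillation loss and a clip-and-noise update privatizer; both are simplified assumptions rather than the paper's exact mechanisms (the Rényi accountant, for instance, is omitted).

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class probabilities."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.mean(np.sum(
        p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)))

def privatize_update(update, clip=1.0, sigma=0.5, rng=None):
    """Gaussian-mechanism style: bound the update's L2 norm, add noise."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip, size=update.shape)
```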
6. Babar, Muhammad, Basit Qureshi, and Anis Koubaa. "Investigating the impact of data heterogeneity on the performance of federated learning algorithm using medical imaging." PLOS ONE 19, no. 5 (May 15, 2024): e0302539. http://dx.doi.org/10.1371/journal.pone.0302539.

Abstract:
In recent years, Federated Learning (FL) has gained traction as a privacy-centric approach in medical imaging. This study explores the challenges posed by data heterogeneity on FL algorithms, using the COVIDx CXR-3 dataset as a case study. We contrast the performance of the Federated Averaging (FedAvg) algorithm on non-identically and independently distributed (non-IID) data against identically and independently distributed (IID) data. Our findings reveal a notable performance decline with increased data heterogeneity, emphasizing the need for innovative strategies to enhance FL in diverse environments. This research contributes to the practical implementation of FL, extending beyond theoretical concepts and addressing the nuances in medical imaging applications. This research uncovers the inherent challenges in FL due to data diversity. It sets the stage for future advancements in FL strategies to effectively manage data heterogeneity, especially in sensitive fields like healthcare.
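The FedAvg rule evaluated in this study is the canonical sample-size-weighted average of client models; a minimal sketch, with client models represented as flat NumPy vectors for simplicity:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client models weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

clients = [np.ones(3), 2 * np.ones(3), 4 * np.ones(3)]
print(fedavg(clients, client_sizes=[100, 200, 700]))   # size-weighted mean
```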
7. Layne, Elliot, Erika N. Dort, Richard Hamelin, Yue Li, and Mathieu Blanchette. "Supervised learning on phylogenetically distributed data." Bioinformatics 36, Supplement_2 (December 2020): i895–i902. http://dx.doi.org/10.1093/bioinformatics/btaa842.

Abstract:
Motivation: The ability to develop robust machine-learning (ML) models is considered imperative to the adoption of ML techniques in biology and medicine. This challenge is particularly acute when the data available for training are not independent and identically distributed (iid), in which case trained models are vulnerable to out-of-distribution generalization problems. Of particular interest are problems where data correspond to observations made on phylogenetically related samples (e.g. antibiotic resistance data). Results: We introduce DendroNet, a new approach to train neural networks in the context of evolutionary data. DendroNet explicitly accounts for the relatedness of the training/testing data, while allowing the model to evolve along the branches of the phylogenetic tree, hence accommodating potential changes in the rules that relate genotypes to phenotypes. Using simulated data, we demonstrate that DendroNet produces models that can be significantly better than non-phylogenetically aware approaches. DendroNet also outperforms other approaches at two biological tasks of significant practical importance: antibiotic resistance prediction in bacteria and trophic level prediction in fungi. Availability and implementation: https://github.com/BlanchetteLab/DendroNet.
8. Shahrivari, Farzad, and Nikola Zlatanov. "On Supervised Classification of Feature Vectors with Independent and Non-Identically Distributed Elements." Entropy 23, no. 8 (August 13, 2021): 1045. http://dx.doi.org/10.3390/e23081045.

Abstract:
In this paper, we investigate the problem of classifying feature vectors with mutually independent but non-identically distributed elements that take values from a finite alphabet set. First, we show the importance of this problem. Next, we propose a classifier and derive an analytical upper bound on its error probability. We show that the error probability goes to zero as the length of the feature vectors grows, even when only one training feature vector per label is available. Thereby, we show that for this important problem at least one asymptotically optimal classifier exists. Finally, we provide numerical examples showing that the proposed classifier outperforms conventional classification algorithms when the amount of training data is small and the length of the feature vectors is sufficiently large.
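One natural classifier for this setup, sketched under the assumption of per-position categorical likelihoods (the paper's exact rule and its error bound are in the full text): estimate each element's distribution separately per label, then pick the label maximizing the summed per-position log-likelihoods.

```python
import numpy as np

def fit(train_X, train_y, alphabet_size, smoothing=1.0):
    """For each label y, estimate P(element j = a | y) with additive
    smoothing; elements are independent but non-identically distributed."""
    models = {}
    n_pos = train_X.shape[1]
    for y in np.unique(train_y):
        X = train_X[train_y == y]
        counts = np.full((n_pos, alphabet_size), smoothing)
        for j in range(n_pos):
            vals, cnt = np.unique(X[:, j], return_counts=True)
            counts[j, vals] += cnt
        models[y] = counts / counts.sum(axis=1, keepdims=True)
    return models

def predict(models, x):
    """Maximize the sum of per-position log-likelihoods (independence)."""
    scores = {y: float(np.log(p[np.arange(len(x)), x]).sum())
              for y, p in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
X0 = rng.integers(0, 3, size=(1, 60))   # one training vector per label,
X1 = rng.integers(1, 4, size=(1, 60))   # drawn from different element laws
models = fit(np.vstack([X0, X1]), np.array([0, 1]), alphabet_size=4)
print(predict(models, X0[0]), predict(models, X1[0]))   # expect: 0 1
```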
9. Lv, Yankai, Haiyan Ding, Hao Wu, Yiji Zhao, and Lei Zhang. "FedRDS: Federated Learning on Non-IID Data via Regularization and Data Sharing." Applied Sciences 13, no. 23 (December 4, 2023): 12962. http://dx.doi.org/10.3390/app132312962.

Abstract:
Federated learning (FL) is an emerging decentralized machine learning framework enabling private global model training by collaboratively leveraging local client data without transferring it centrally. Unlike traditional distributed optimization, FL trains the model at the local client and then aggregates it at the server. While this approach reduces communication costs, the local datasets of different clients are non-independent and identically distributed (non-IID), which may make the local models inconsistent. The present study proposes an FL algorithm that leverages regularization and data sharing (FedRDS). The local loss function is adapted by introducing a regularization term in each round of training so that the local model gradually moves closer to the global model. However, when the gap between client data distributions becomes large, adding regularization terms increases the degree of client drift. Based on this, we use a data-sharing method in which a portion of server data is set aside as a shared dataset during initialization and evenly distributed to each client, mitigating client drift by reducing the difference between client data distributions. Analysis of experimental outcomes indicates that FedRDS surpasses some known FL methods in various image classification tasks, enhancing both communication efficiency and accuracy.
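The regularization term described here plays the same role as FedProx's proximal term; below is a minimal sketch of one local update step under that reading (mu and the stand-in gradient are illustrative assumptions, not FedRDS's exact formulation).

```python
import numpy as np

def prox_sgd_step(w, grad, global_w, lr=0.1, mu=0.1):
    """One local step on loss + (mu/2) * ||w - w_global||^2; the added
    term pulls the local model back toward the current global model."""
    return w - lr * (grad + mu * (w - global_w))

w_global = np.zeros(3)
w_local = np.array([1.0, -1.0, 0.5])
grad = np.array([0.2, 0.1, -0.3])     # stand-in local loss gradient
print(prox_sgd_step(w_local, grad, w_global))
```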
10. Zhang, Xufei, and Yiqing Shen. "Non-IID federated learning with Mixed-Data Calibration." Applied and Computational Engineering 45, no. 1 (March 15, 2024): 168–78. http://dx.doi.org/10.54254/2755-2721/45/20241048.

Abstract:
Federated learning (FL) is a privacy-preserving and collaborative machine learning approach for decentralized data across multiple clients. However, the presence of non-independent and non-identically distributed (non-IID) data among clients poses challenges to the performance of the global model. To address this, we propose Mixed Data Calibration (MIDAC). MIDAC mixes M data points to neutralize sensitive information in each individual data point and uses the mixed data to calibrate the global model on the server in a privacy-preserving way. MIDAC improves global model accuracy with low computational overhead while preserving data privacy. Our experiments on CIFAR-10 and BloodMNIST datasets validate the effectiveness of MIDAC in improving the accuracy of federated learning models under non-IID data distributions.
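The abstract leaves the exact mixing operation unspecified; one plausible reading, sketched below, averages M randomly chosen samples (and their one-hot labels) so that no individual point is recoverable from a mixed calibration sample. Treat this as an assumption-laden illustration, not MIDAC's definitive procedure.

```python
import numpy as np

def mix_data(X, y, M, n_mixed, n_classes, rng):
    """Build calibration data by averaging M random samples at a time,
    together with the average of their one-hot labels."""
    onehot = np.eye(n_classes)[y]
    Xm, ym = [], []
    for _ in range(n_mixed):
        idx = rng.choice(len(X), size=M, replace=False)
        Xm.append(X[idx].mean(axis=0))
        ym.append(onehot[idx].mean(axis=0))
    return np.stack(Xm), np.stack(ym)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))            # stand-in features
y = rng.integers(0, 10, size=1000)
X_mix, y_mix = mix_data(X, y, M=8, n_mixed=200, n_classes=10, rng=rng)
print(X_mix.shape, y_mix.shape)            # (200, 32) (200, 10)
```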

Dissertations / Theses on the topic "Non-identically distributed data"

1. Dabo, Issa-Mbenard. "Applications de la théorie des matrices aléatoires en grandes dimensions et des probabilités libres en apprentissage statistique par réseaux de neurones." Electronic Thesis or Diss., Bordeaux, 2025. http://www.theses.fr/2025BORD0021.

Abstract:
The functioning of machine learning algorithms relies heavily on the structure of the data they are given to study. Most research work in machine learning focuses on the study of homogeneous data, often modeled by independent and identically distributed random variables. However, data encountered in practice are often heterogeneous. In this thesis, we propose to consider heterogeneous data by endowing them with a variance profile. This notion, derived from random matrix theory, allows us in particular to study data arising from mixture models. We are particularly interested in the problem of ridge regression through two models: the linear ridge model and the random feature ridge model. In this thesis, we study the performance of these two models in the high-dimensional regime, i.e., when the size of the training sample and the dimension of the data tend to infinity at comparable rates. To this end, we propose asymptotic equivalents for the training error and the test error associated with the models of interest. The derivation of these equivalents relies heavily on spectral analysis from random matrix theory, free probability theory, and traffic theory. Indeed, the performance measurement of many learning models depends on the distribution of the eigenvalues of random matrices. Moreover, these results enabled us to observe phenomena specific to the high-dimensional regime, such as the double descent phenomenon. Our theoretical study is accompanied by numerical experiments illustrating the accuracy of the asymptotic equivalents we provide.
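For reference, the finite-sample objects whose high-dimensional limits the thesis characterizes can be written in their standard form; the thesis's deterministic equivalents under a variance profile are more general than this sketch suggests.

```latex
% Linear ridge regression: estimator, training error, and test error.
\[
  \hat\beta_\lambda
  = \arg\min_{\beta}\ \frac{1}{n}\,\|y - X\beta\|_2^2 + \lambda \|\beta\|_2^2
  = \Bigl(\tfrac{1}{n}X^\top X + \lambda I_d\Bigr)^{-1}\tfrac{1}{n}X^\top y,
\]
\[
  E_{\mathrm{train}} = \frac{1}{n}\,\|y - X\hat\beta_\lambda\|_2^2,
  \qquad
  E_{\mathrm{test}} = \mathbb{E}\bigl[(y_{\mathrm{new}} - x_{\mathrm{new}}^\top \hat\beta_\lambda)^2\bigr],
\]
% both studied as n, d -> infinity with d/n converging to a constant;
% the spectrum of X^T X / n (here under a variance profile) drives both.
```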

Book chapters on the topic "Non-identically distributed data"

1. "Models with dependent and with non-identically distributed data." In Quantile Regression, 131–62. Oxford: John Wiley & Sons, Ltd, 2014. http://dx.doi.org/10.1002/9781118752685.ch5.

2. Lele, S. "Resampling using estimating equations." In Estimating Functions, 295–304. Oxford: Oxford University Press, 1991. http://dx.doi.org/10.1093/oso/9780198522287.003.0022.

Abstract:
This paper surveys resampling methods for a sequence of non-independent, non-identically distributed random variables. The jackknife method is extended to such a sequence through the use of linear estimating equations. We also extend Wu's bootstrap to dependent data through the use of linear estimating equations. The main idea is to perturb each component estimating equation by another easy-to-generate sequence of estimating equations with the proper mean, variance, and correlation structure. Some validity results are provided.
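For the simplest linear estimating equation, the perturbation idea can be sketched concretely: instead of resampling observations, perturb each component equation with i.i.d. positive weights of mean one. The Exp(1) weights and the mean-estimation example below are illustrative assumptions, not the chapter's general dependent-data construction.

```python
import numpy as np

def perturbed_ee_bootstrap(x, n_boot, rng):
    """Solve sum_i w_i * (x_i - theta) = 0 for theta under random
    weights w_i ~ Exp(1) (mean 1, variance 1), once per replicate."""
    thetas = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.exponential(1.0, size=len(x))
        thetas[b] = np.sum(w * x) / np.sum(w)
    return thetas

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)
boot = perturbed_ee_bootstrap(x, n_boot=2000, rng=rng)
print(boot.mean(), boot.std())   # bootstrap mean and standard error
```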
3. Tarima, Sergey, and Nancy Flournoy. "Choosing Interim Sample Sizes in Group Sequential Designs." In German Medical Data Sciences: Bringing Data to Life. IOS Press, 2021. http://dx.doi.org/10.3233/shti210043.

Abstract:
This manuscript investigates sample sizes for interim analyses in group sequential designs. Traditional group sequential designs (GSDs) rely on "information fraction" arguments to define the interim sample sizes. Then, interim maximum likelihood estimators (MLEs) are used to decide whether to stop early or continue the data collection until the next interim analysis. The possibility of early stopping changes the distribution of interim and final MLEs: possible interim decisions on trial stopping exclude some sample space elements. At each interim analysis the distribution of an interim MLE is a mixture of truncated and untruncated distributions. The distributional form of an MLE becomes more and more complicated with each additional interim analysis. Test statistics that are asymptotically normal without the possibility of early stopping become mixtures of truncated normal distributions under local alternatives. Stage-specific information ratios are equivalent to sample size ratios for independent and identically distributed data. This equivalence is used to justify interim sample sizes in GSDs. Because stage-specific information ratios derived from normally distributed data differ from those derived from non-normally distributed data, the former equivalence is invalid when there is a possibility of early stopping. Tarima and Flournoy [3] have proposed a new GSD where interim sample sizes are determined by a pre-defined sequence of ordered alternative hypotheses, and the calculation of information fractions is not needed. This innovation allows researchers to prescribe interim analyses based on desired power properties. This work compares the interim power properties of a classical one-sided three-stage Pocock design with a one-sided three-stage design driven by three ordered alternatives.
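The distributional effect described here is easy to see in simulation: with a two-stage design, the final estimate's sampling distribution is a mixture of a truncated stage-1 component and a continue-to-stage-2 component, and it is biased relative to the fixed-sample MLE. The stage sizes, boundary, and effect size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, c = 50, 50, 1.96     # stage sizes and early-stopping boundary
mu, n_sim = 0.2, 20_000      # true effect under a local alternative

est = np.empty(n_sim)
for s in range(n_sim):
    x1 = rng.normal(mu, 1.0, n1)
    if x1.mean() * np.sqrt(n1) > c:      # stop early: stage-1 data only
        est[s] = x1.mean()
    else:                                # continue: pool both stages
        est[s] = np.concatenate([x1, rng.normal(mu, 1.0, n2)]).mean()

# The mixture of truncated components biases the estimator upward
# relative to the true effect, unlike the fixed-sample MLE.
print(est.mean(), "vs true effect", mu)
```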
4. Zhao, Juan, Yuankai Zhang, Ruixuan Li, Yuhua Li, Haozhao Wang, Xiaoquan Yi, and Zhiying Deng. "XFed: Improving Explainability in Federated Learning by Intersection Over Union Ratio Extended Client Selection." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230628.

Abstract:
Federated Learning (FL) allows massive numbers of clients to collaboratively train a global model without revealing their private data. Because the participants' data are not independently and identically distributed (non-IID), clients' deep neural network model weights diverge, and more communication rounds are required before training converges. Moreover, models trained on non-IID data may extract biased features, and the rationale behind the model is still not fully analyzed and exploited. In this paper, we propose eXplainable-Fed (XFed), a novel client selection mechanism that takes both accuracy and explainability into account. Specifically, XFed selects participants in each round based on a small test set's accuracy via cross-entropy loss and its interpretability via XAI-accuracy. XAI-accuracy is calculated as the Intersection over Union Ratio between the heat map and the ground-truth mask, evaluating the rationale behind the accuracy. Our experiments show that our method achieves accuracy comparable to state-of-the-art methods specially designed for accuracy while increasing explainability by 14%-35% in terms of rationality.
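The XAI-accuracy score reduces to a standard intersection-over-union computation between a binarized saliency heatmap and a ground-truth mask; a minimal sketch (the 0.5 threshold and toy inputs are assumptions):

```python
import numpy as np

def iou(heatmap, mask, thresh=0.5):
    """Intersection over Union between a binarized heatmap and a mask."""
    pred = heatmap >= thresh
    truth = mask.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 0.0

heat = np.random.default_rng(0).random((8, 8))   # stand-in saliency map
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1                               # ground-truth region
print(iou(heat, mask))
```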
5. Luo, Zicheng, Xiaohan Li, Demu Zou, and Hao Bai. "Federated Reinforcement Learning Algorithm with Fair Aggregation for Edge Caching." In Advances in Transdisciplinary Engineering. IOS Press, 2024. https://doi.org/10.3233/atde241221.

Abstract:
Edge caching is employed to handle massive data requests while ensuring the quality of the user experience. However, existing edge caching algorithms often overlook user mobility, privacy protection, and the non-identically and independently distributed (non-i.i.d.) characteristics of content requests among base stations. To tackle these challenges, this paper proposes the Federated Reinforcement Learning Algorithm with Fair Aggregation for Edge Caching (FFA-PPO). The paper primarily focuses on the scenario of non-i.i.d. content requests in a multi-base-station, multi-mobile-user network. We model this problem as a Markov Decision Process (MDP) and propose a federated reinforcement learning method to solve it, with the goal of minimizing the content transmission latency of base stations. FFA-PPO resolves gradient conflicts by seeking the optimal gradient vector within a local ball centered at the averaged gradient, which ensures the model's fairness. Simulation results show that the proposed FFA-PPO algorithm outperforms other baseline algorithms in terms of content transmission latency and model fairness.
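The constrained search described here ("the optimal gradient vector within a local ball centered at the averaged gradient") can be approximated with projected subgradient descent on the worst-case conflict; the objective, radius, and step size below are assumptions about one plausible instantiation, not FFA-PPO's published procedure.

```python
import numpy as np

def fair_direction(grads, radius=0.5, lr=0.05, steps=200):
    """Approximate g = argmin_{||g - g_avg|| <= radius} max_i (-g . g_i):
    a direction near the averaged gradient that least conflicts with
    any single client's gradient."""
    g_avg = grads.mean(axis=0)
    g = g_avg.copy()
    for _ in range(steps):
        worst = min(range(len(grads)), key=lambda i: g @ grads[i])
        g = g + lr * grads[worst]          # subgradient step on the max term
        delta = g - g_avg                  # project back into the ball
        norm = np.linalg.norm(delta)
        if norm > radius:
            g = g_avg + delta * (radius / norm)
    return g

grads = np.stack([np.array([1.0, 0.2]),
                  np.array([0.9, -0.1]),
                  np.array([-0.2, 1.0])])  # conflicting client gradients
print(fair_direction(grads))
```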
6. Feng, Chao, Alberto Huertas Celdrán, Janosch Baltensperger, Enrique Tomás Martínez Beltrán, Pedro Miguel Sánchez Sánchez, Gérôme Bovet, and Burkhard Stiller. "Sentinel: An Aggregation Function to Secure Decentralized Federated Learning." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2024. http://dx.doi.org/10.3233/faia240686.

Abstract:
Decentralized Federated Learning (DFL) emerges as an innovative paradigm for training collaborative models, addressing the single-point-of-failure limitation. However, the security and trustworthiness of FL and DFL are compromised by poisoning attacks, negatively impacting their performance. Existing defense mechanisms have been designed for centralized FL and do not adequately exploit the particularities of DFL. Thus, this work introduces Sentinel, a defense strategy to counteract poisoning attacks in DFL. Sentinel leverages the accessibility of local data and defines a three-step aggregation protocol consisting of similarity filtering, bootstrap validation, and normalization to safeguard against malicious model updates. Sentinel has been evaluated with diverse datasets, data distributions, poisoning attack types, and threat levels. The results improve on state-of-the-art performance against both untargeted and targeted poisoning attacks when data follow an IID (Independent and Identically Distributed) configuration. In addition, under a non-IID configuration, we analyze how the performance of both Sentinel and other state-of-the-art robust aggregation methods degrades.
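Of Sentinel's three steps, similarity filtering is the most self-contained; below is a minimal sketch of a cosine-similarity filter over neighbor updates. The keep ratio and flat-vector model representation are assumptions, and the bootstrap validation and normalization steps are omitted.

```python
import numpy as np

def similarity_filter(local_w, neighbor_ws, keep_ratio=0.5):
    """Keep only the neighbor model updates most cosine-similar to the
    local model, discarding likely-poisoned outliers before aggregation."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sims = np.array([cos(local_w, w) for w in neighbor_ws])
    k = max(1, int(keep_ratio * len(neighbor_ws)))
    keep = np.argsort(sims)[-k:]           # indices of most similar updates
    return [neighbor_ws[i] for i in keep]

local = np.ones(4)
neighbors = [np.ones(4), -np.ones(4), np.array([1.0, 1.0, 0.0, 1.0])]
print(similarity_filter(local, neighbors))   # the opposite update is dropped
```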

Conference papers on the topic "Non-identically distributed data"

1. Zhou, Zihao, Han Chen, Huageng Liu, Zeyu Ping, and Yuanyuan Song. "Distributed radar incoherent fusion method for independent non-identically distributed fluctuating targets." In 2024 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 1–4. IEEE, 2024. https://doi.org/10.1109/icsidp62679.2024.10868151.

2. Zhang, Bosong, Qian Sun, Hai Wang, Linna Zhang, and Danyang Li. "Federated Learning Greedy Aggregation Optimization for Non-Independently Identically Distributed Data." In 2024 IEEE 23rd International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2090–97. IEEE, 2024. https://doi.org/10.1109/trustcom63139.2024.00290.

3. Nie, Wenjing. "Research on federated model algorithm based on non-independent identically distributed data sets." In International Conference on Mechatronics and Intelligent Control (ICMIC 2024), edited by Kun Zhang and Pascal Lorenz, 130. SPIE, 2025. https://doi.org/10.1117/12.3045715.

4. Tillman, Robert E. "Structure learning with independent non-identically distributed data." In Proceedings of the 26th Annual International Conference on Machine Learning (ICML). New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1553374.1553507.

5. Hu, Liang, Wei Cao, Jian Cao, Guandong Xu, Longbing Cao, and Zhiping Gu. "Bayesian Heteroskedastic Choice Modeling on Non-identically Distributed Linkages." In 2014 IEEE International Conference on Data Mining (ICDM). IEEE, 2014. http://dx.doi.org/10.1109/icdm.2014.84.

6. Li, Haowei, Like Luo, and Haolong Wang. "Federated learning on non-independent and identically distributed data." In Third International Conference on Machine Learning and Computer Application (ICMLCA 2022), edited by Fan Zhou and Shuhong Ba. SPIE, 2023. http://dx.doi.org/10.1117/12.2675255.

7. Mreish, Kinda, and Ivan I. Kholod. "Federated Learning with Non Independent and Identically Distributed Data." In 2024 Conference of Young Researchers in Electrical and Electronic Engineering (ElCon). IEEE, 2024. http://dx.doi.org/10.1109/elcon61730.2024.10468090.

8. Pan, Wentao, and Hui Zhou. "Fairness and Effectiveness in Federated Learning on Non-independent and Identically Distributed Data." In 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI). IEEE, 2023. http://dx.doi.org/10.1109/ccai57533.2023.10201271.

9. Shahrivari, Farzad, and Nikola Zlatanov. "An Asymptotically Optimal Algorithm For Classification of Data Vectors with Independent Non-Identically Distributed Elements." In 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021. http://dx.doi.org/10.1109/isit45174.2021.9518006.

10. Hodea, Octavian, Adriana Vlad, and Octaviana Datcu. "Evaluating the sampling distance to achieve independently and identically distributed data from generalized Hénon map." In 2011 10th International Symposium on Signals, Circuits and Systems (ISSCS). IEEE, 2011. http://dx.doi.org/10.1109/isscs.2011.5978665.
