
Journal articles on the topic "Membership Inference"



Consult the top 50 journal articles for research on the topic "Membership Inference".


You can also download the full text of each scientific publication in .pdf format and read its abstract online, when it is included in the metadata.

Browse journal articles from many scientific areas and compile a correct bibliography.

1

Pedersen, Joseph, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu, and Isabelle Guyon. "LTU Attacker for Membership Inference". Algorithms 15, no. 7 (20 July 2022): 254. http://dx.doi.org/10.3390/a15070254.

Full text
Abstract:
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box setting, when the trainer and the trained model are publicly released. The Defender aims at optimizing a dual objective: utility and privacy. Privacy is evaluated with the membership prediction error of a so-called “Leave-Two-Unlabeled” LTU Attacker, having access to all of the Defender and Reserved data, except for the membership label of one sample from each, giving the strongest possible attack scenario. We prove that, under certain conditions, even a “naïve” LTU Attacker can achieve lower bounds on privacy loss with simple attack strategies, leading to concrete necessary conditions to protect privacy, including: preventing over-fitting and adding some amount of randomness. This attack is straightforward to implement against any model trainer, and we demonstrate its performance against MemGuard. However, we also show that such a naïve LTU Attacker can fail to attack the privacy of models known to be vulnerable in the literature, demonstrating that knowledge must be complemented with strong attack strategies to turn the LTU Attacker into a powerful means of evaluating privacy. The LTU Attacker can incorporate any existing attack strategy to compute individual privacy scores for each training sample. Our experiments on the QMNIST, CIFAR-10, and Location-30 datasets validate our theoretical results and confirm the roles of over-fitting prevention and randomness in the algorithms to protect against privacy attacks.
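To make the leave-two-unlabeled setup above more concrete, here is a toy sketch of the naive strategy the abstract alludes to (not the authors' evaluation code): the attacker is handed one Defender training sample and one Reserved sample with membership labels hidden, and simply guesses that the sample with the lower loss under the released model is the member. All loss values below are synthetic placeholders.

```python
import numpy as np

def naive_ltu_guess(loss_a, loss_b):
    """Naive LTU-style attacker: of two unlabeled samples (one from the
    Defender's training data, one from the Reserved data), guess that the one
    with the lower loss under the released model is the training member.
    Returns 0 if sample A is guessed to be the member, 1 otherwise."""
    return 0 if loss_a <= loss_b else 1

def ltu_accuracy(member_losses, reserved_losses):
    """Membership-prediction accuracy over paired trials where sample A is
    always the true member; 0.5 means this naive strategy learns nothing."""
    guesses = [naive_ltu_guess(m, r) for m, r in zip(member_losses, reserved_losses)]
    return 1.0 - float(np.mean(guesses))

# Hypothetical losses: an over-fitted Defender gives members much lower loss,
# so the naive attacker succeeds well above chance.
rng = np.random.default_rng(0)
print(ltu_accuracy(rng.exponential(0.1, 500), rng.exponential(1.0, 500)))
```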
2

Zhao, Yanchao, Jiale Chen, Jiale Zhang, Zilu Yang, Huawei Tu, Hao Han, Kun Zhu, and Bing Chen. "User-Level Membership Inference for Federated Learning in Wireless Network Environment". Wireless Communications and Mobile Computing 2021 (19 October 2021): 1–17. http://dx.doi.org/10.1155/2021/5534270.

Full text
Abstract:
With the rise of privacy concerns in traditional centralized machine learning services, federated learning, which incorporates multiple participants to train a global model across their localized training data, has lately received significant attention in both industry and academia. Bringing federated learning into a wireless network scenario is a great move. The combination of them inspires tremendous power and spawns a number of promising applications. Recent researches reveal the inherent vulnerabilities of the various learning modes for the membership inference attacks that the adversary could infer whether a given data record belongs to the model’s training set. Although the state-of-the-art techniques could successfully deduce the membership information from the centralized machine learning models, it is still challenging to infer the member data at a more confined level, the user level. It is exciting that the common wireless monitor technique in the wireless network environment just provides a good ground for fine-grained membership inference. In this paper, we novelly propose and define a concept of user-level inference attack in federated learning. Specifically, we first give a comprehensive analysis of active and targeted membership inference attacks in the context of federated learning. Then, by considering a more complicated scenario that the adversary can only passively observe the updating models from different iterations, we incorporate the generative adversarial networks into our method, which can enrich the training set for the final membership inference model. In the end, we comprehensively research and implement inferences launched by adversaries of different roles, which makes the attack scenario complete and realistic. The extensive experimental results demonstrate the effectiveness of our proposed attacking approach in the case of single label and multilabel.
3

Hilprecht, Benjamin, Martin Härterich, and Daniel Bernau. "Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models". Proceedings on Privacy Enhancing Technologies 2019, no. 4 (1 October 2019): 232–49. http://dx.doi.org/10.2478/popets-2019-0067.

Full text
Abstract:
Abstract We present two information leakage attacks that outperform previous work on membership inference against generative models. The first attack allows membership inference without assumptions on the type of the generative model. Contrary to previous evaluation metrics for generative models, like Kernel Density Estimation, it only considers samples of the model which are close to training data records. The second attack specifically targets Variational Autoencoders, achieving high membership inference accuracy. Furthermore, previous work mostly considers membership inference adversaries who perform single record membership inference. We argue for considering regulatory actors who perform set membership inference to identify the use of specific datasets for training. The attacks are evaluated on two generative model architectures, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), trained on standard image datasets. Our results show that the two attacks yield success rates superior to previous work on most data sets while at the same time having only very mild assumptions. We envision the two attacks in combination with the membership inference attack type formalization as especially useful. For example, to enforce data privacy standards and automatically assessing model quality in machine learning as a service setups. In practice, our work motivates the use of GANs since they prove less vulnerable against information leakage attacks while producing detailed samples.
4

Bu, Diyue, Xiaofeng Wang, and Haixu Tang. "Haplotype-based membership inference from summary genomic data". Bioinformatics 37, Supplement_1 (1 July 2021): i161–i168. http://dx.doi.org/10.1093/bioinformatics/btab305.

Full text
Abstract:
Abstract Motivation The availability of human genomic data, together with the enhanced capacity to process them, is leading to transformative technological advances in biomedical science and engineering. However, the public dissemination of such data has been difficult due to privacy concerns. Specifically, it has been shown that the presence of a human subject in a case group can be inferred from the shared summary statistics of the group, e.g. the allele frequencies, or even the presence/absence of genetic variants (e.g. shared by the Beacon project) in the group. These methods rely on the availability of the target’s genome, i.e. the DNA profile of a target human subject, and thus are often referred to as the membership inference method. Results In this article, we demonstrate the haplotypes, i.e. the sequence of single nucleotide variations (SNVs) showing strong genetic linkages in human genome databases, may be inferred from the summary of genomic data without using a target’s genome. Furthermore, novel haplotypes that did not appear in the database may be reconstructed solely from the allele frequencies from genomic datasets. These reconstructed haplotypes can be used for a haplotype-based membership inference algorithm to identify target subjects in a case group with greater power than existing methods based on SNVs. Availability and implementation The implementation of the membership inference algorithms is available at https://github.com/diybu/Haplotype-based-membership-inferences.
5

Yang, Ziqi, Lijin Wang, Da Yang, Jie Wan, Ziming Zhao, Ee-Chien Chang, Fan Zhang, and Kui Ren. "Purifier: Defending Data Inference Attacks via Transforming Confidence Scores". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (26 June 2023): 10871–79. http://dx.doi.org/10.1609/aaai.v37i9.26289.

Full text
Abstract:
Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack and the attribute inference attack, where the attacker could infer useful information such as the membership, the reconstruction or the sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier and makes purified confidence scores indistinguishable in individual shape, statistical distribution and prediction label between members and non-members. The experimental results show that PURIFIER helps defend membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods, and also incurs negligible utility loss. Besides, our further experiments show that PURIFIER is also effective in defending adversarial model inversion attacks and attribute inference attacks. For example, the inversion error is raised about 4+ times on the Facescrub530 classifier, and the attribute inference accuracy drops significantly when PURIFIER is deployed in our experiment.
6

Wang, Xiuling, and Wendy Hui Wang. "GCL-Leak: Link Membership Inference Attacks against Graph Contrastive Learning". Proceedings on Privacy Enhancing Technologies 2024, no. 3 (July 2024): 165–85. http://dx.doi.org/10.56553/popets-2024-0073.

Full text
Abstract:
Graph contrastive learning (GCL) has emerged as a successful method for self-supervised graph learning. It involves generating augmented views of a graph by augmenting its edges and aims to learn node embeddings that are invariant to graph augmentation. Despite its effectiveness, the potential privacy risks associated with GCL models have not been thoroughly explored. In this paper, we delve into the privacy vulnerability of GCL models through the lens of link membership inference attacks (LMIA). Specifically, we focus on the federated setting where the adversary has white-box access to the node embeddings of all the augmented views generated by the target GCL model. Designing such white-box LMIAs against GCL models presents a significant and unique challenge due to potential variations in link memberships among node pairs in the target graph and its augmented views. This variability renders members indistinguishable from non-members when relying solely on the similarity of their node embeddings in the augmented views. To address this challenge, our in-depth analysis reveals that the key distinguishing factor lies in the similarity of node embeddings within augmented views where the node pairs share identical link memberships as those in the training graph. However, this poses a second challenge, as information about whether a node pair has identical link membership in both the training graph and augmented views is only available during the attack training phase. This demands the attack classifier to handle the additional “identical-membership" information which is available only for training and not for testing. To overcome this challenge, we propose GCL-LEAK, the first link membership inference attack against GCL models. The key component of GCL-LEAK is a new attack classifier model designed under the “Learning Using Privileged Information (LUPI)” paradigm, where the privileged information of “same-membership” is encoded as part of the attack classifier's structure. Our extensive set of experiments on four representative GCL models showcases the effectiveness of GCL-LEAK. Additionally, we develop two defense mechanisms that introduce perturbation to the node embeddings. Our empirical evaluation demonstrates that both defense mechanisms significantly reduce attack accuracy while preserving the accuracy of GCL models.
7

Jayaraman, Bargav, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, and David Evans. "Revisiting Membership Inference Under Realistic Assumptions". Proceedings on Privacy Enhancing Technologies 2021, no. 2 (29 January 2021): 348–68. http://dx.doi.org/10.2478/popets-2021-0031.

Full text
Abstract:
Abstract We study membership inference in settings where assumptions commonly used in previous research are relaxed. First, we consider cases where only a small fraction of the candidate pool targeted by the adversary are members and develop a PPV-based metric suitable for this setting. This skewed prior setting is more realistic than the balanced prior setting typically considered. Second, we consider adversaries that select inference thresholds according to their attack goals, such as identifying as many members as possible with a given false positive tolerance. We develop a threshold selection designed for achieving particular attack goals. Since previous inference attacks fail in imbalanced prior settings, we develop new inference attacks based on the intuition that inputs corresponding to training set members will be near a local minimum in the loss function. An attack that combines this with thresholds on the per-instance loss can achieve high PPV even in settings where other attacks are ineffective.
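As a rough sketch of the per-instance loss-threshold idea summarised above (not the authors' exact attack), the snippet below calibrates a threshold on the losses of known non-members so that a chosen false-positive tolerance is respected, then flags candidate records whose loss falls below it. The loss values and the calibration rule are illustrative assumptions.

```python
import numpy as np

def loss_threshold_mia(candidate_losses, nonmember_losses, fpr_budget=0.01):
    """Flag candidates as training-set members when their per-instance loss is
    below a threshold calibrated on known non-members (placeholder data).
    Members tend to sit near a loss minimum, so low loss suggests membership."""
    threshold = np.quantile(nonmember_losses, fpr_budget)  # keep FPR within budget
    return candidate_losses <= threshold, threshold

# Toy usage with synthetic losses (illustration only).
rng = np.random.default_rng(0)
member_losses = rng.exponential(0.2, 1000)     # members: typically low loss
nonmember_losses = rng.exponential(1.0, 1000)  # non-members: typically higher loss
flags, thr = loss_threshold_mia(member_losses, nonmember_losses)
print(f"threshold={thr:.3f}, true members flagged: {flags.mean():.1%}")
```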
8

Kulynych, Bogdan, Mohammad Yaghini, Giovanni Cherubin, Michael Veale, and Carmela Troncoso. "Disparate Vulnerability to Membership Inference Attacks". Proceedings on Privacy Enhancing Technologies 2022, no. 1 (20 November 2021): 460–80. http://dx.doi.org/10.2478/popets-2022-0023.

Full text
Abstract:
Abstract A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorithmic fairness and to differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability, and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data finding significant evidence of disparate vulnerability in realistic settings.
9

Tejash Umedbhai Chaudhari, Krunal Balubhai Patel, and Vimal Bhikhubhai Patel. "A study of generalized bell-shaped membership function on Mamdani fuzzy inference system for Students’ Performance Evaluation". World Journal of Advanced Research and Reviews 3, no. 2 (30 August 2019): 083–90. http://dx.doi.org/10.30574/wjarr.2019.3.2.0046.

Full text
Abstract:
This paper presents a Mamdani fuzzy inference system for evaluation of students’ performance based on the generalized bell-shaped membership function, i.e. gbellmf(x; a, b, c) = 1 / (1 + |(x − c) / a|^(2b)). The objective of this research work is to study the control of parameter ‘b’ in the generalized bell-shaped membership function. Experimental Mamdani fuzzy inference systems will keep every condition identical except changing the parameter ‘b’ in the generalized bell-shaped membership function. Different values of parameter ‘b’ in the generalized bell-shaped membership function using the Mamdani fuzzy inference system have been proposed and the results are compared with a statistical tool.
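For reference, the generalized bell-shaped membership function quoted above can be evaluated directly; the snippet below is a plain NumPy transcription of that formula, with parameter values chosen only to illustrate the role of 'b' (they are not taken from the paper).

```python
import numpy as np

def gbellmf(x, a, b, c):
    """Generalized bell-shaped membership function:
    gbellmf(x; a, b, c) = 1 / (1 + |(x - c) / a|**(2 * b)),
    where a controls the width, b the steepness of the shoulders, c the centre."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Illustrative effect of the slope parameter 'b' studied in the paper.
x = np.linspace(0, 100, 5)
for b in (1, 2, 4):  # hypothetical values
    print(b, np.round(gbellmf(x, a=20.0, b=b, c=50.0), 3))
```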
10

Xia, Fan, Yuhao Liu, Bo Jin, Zheng Yu, Xingwei Cai, Hao Li, Zhiyong Zha, Dai Hou, and Kai Peng. "Leveraging Multiple Adversarial Perturbation Distances for Enhanced Membership Inference Attack in Federated Learning". Symmetry 16, no. 12 (18 December 2024): 1677. https://doi.org/10.3390/sym16121677.

Full text
Abstract:
In recent years, federated learning (FL) has gained significant attention for its ability to protect data privacy during distributed training. However, it also introduces new privacy leakage risks. Membership inference attacks (MIAs), which aim to determine whether a specific sample is part of the training dataset, pose a significant threat to federated learning. Existing research on membership inference attacks in federated learning has primarily focused on leveraging intrinsic model parameters or manipulating the training process. However, the widespread adoption of privacy-preserving frameworks in federated learning has significantly diminished the effectiveness of traditional attack methods. To overcome this limitation, this paper aims to explore an efficient Membership Inference Attack algorithm tailored for encrypted federated learning scenarios, providing new perspectives for optimizing privacy-preserving technologies. Specifically, this paper proposes a novel Membership Inference Attack algorithm based on multiple adversarial perturbation distances (MAPD_MIA) by leveraging the asymmetry in adversarial perturbation distributions near decision boundaries between member and non-member samples. By analyzing these asymmetric perturbation characteristics, the algorithm achieves accurate membership identification. Experimental results demonstrate that the proposed algorithm achieves accuracy rates of 63.0%, 68.7%, and 59.5%, and precision rates of 59.0%, 65.9%, and 55.8% on CIFAR10, CIFAR100, and MNIST datasets, respectively, outperforming three mainstream Membership Inference Attack methods. Furthermore, the algorithm exhibits robust attack performance against two common defense mechanisms, MemGuard and DP-SGD. This study provides new benchmarks and methodologies for evaluating membership privacy leakage risks in federated learning scenarios.
11

Oyekan, Basirat. "DEVELOPING PRIVACY-PRESERVING FEDERATED LEARNING MODELS FOR COLLABORATIVE HEALTH DATA ANALYSIS ACROSS MULTIPLE INSTITUTIONS WITHOUT COMPROMISING DATA SECURITY". Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online) 3, no. 3 (25 August 2024): 139–64. http://dx.doi.org/10.60087/jklst.vol3.n3.p139-164.

Full text
Abstract:
Federated learning is an emerging distributed machine learning technique that enables collaborative training of models among devices and servers without exchanging private data. However, several privacy and security risks associated with federated learning need to be addressed for safe adoption. This review provides a comprehensive analysis of the key threats in federated learning and the mitigation strategies used to overcome these threats. Some of the major threats identified include model inversion, membership inference, data attribute inference and model extraction attacks. Model inversion aims to predict the raw data values from the model parameters, which can breach participant privacy. The membership inference determines whether a data sample was used to train the model. Data attribute inference discovers private attributes such as age and gender from the model, whereas model extraction steals intellectual property by reconstructing the global model from participant updates. The review then discusses various mitigation strategies proposed for these threats. Controlled-use protections such as secure multiparty computation, homomorphic encryption and confidential computing enable privacy-preserving computations on encrypted data without decryption. Differential privacy adds noise to query responses to limit privacy breaches from aggregate statistics. Privacy-aware objectives modify the loss function to learn representations that protect privacy. Information obfuscation strategies hide inferences about training data.
12

Zheng, Junxiang, Yongzhi Cao, and Hanpin Wang. "Resisting membership inference attacks through knowledge distillation". Neurocomputing 452 (September 2021): 114–26. http://dx.doi.org/10.1016/j.neucom.2021.04.082.

Full text
13

Zhang, Ziqi, Chao Yan, and Bradley A. Malin. "Membership inference attacks against synthetic health data". Journal of Biomedical Informatics 125 (January 2022): 103977. http://dx.doi.org/10.1016/j.jbi.2021.103977.

Full text
14

Hayes, Jamie, Luca Melis, George Danezis, and Emiliano De Cristofaro. "LOGAN: Membership Inference Attacks Against Generative Models". Proceedings on Privacy Enhancing Technologies 2019, no. 1 (1 January 2019): 133–52. http://dx.doi.org/10.2478/popets-2019-0008.

Full text
Abstract:
Abstract Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator’s capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.
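The black-box intuition described above can be sketched as a ranking step (a simplification, not the LOGAN implementation): given an attacker discriminator already trained to recognise the target generator's output distribution, score the candidate records and treat the highest-scoring ones as likely training members. The discriminator is assumed to exist and is passed in as a callable.

```python
import numpy as np

def rank_candidates_by_discriminator(discriminator, candidates, top_k):
    """Score each candidate record with an attacker discriminator (any callable
    returning a confidence that the input looks like training data) and flag
    the top_k highest-scoring records as predicted members."""
    scores = np.asarray([float(discriminator(x)) for x in candidates])
    flagged = np.zeros(len(candidates), dtype=bool)
    flagged[np.argsort(-scores)[:top_k]] = True
    return flagged, scores

# Toy usage with a stand-in discriminator (illustration only).
rng = np.random.default_rng(0)
fake_discriminator = lambda x: float(np.tanh(x.sum()))
data = [rng.normal(size=8) for _ in range(100)]
flags, scores = rank_candidates_by_discriminator(fake_discriminator, data, top_k=10)
print(flags.sum(), np.round(scores[:3], 3))
```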
15

Hisamoto, Sorami, Matt Post, and Kevin Duh. "Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?" Transactions of the Association for Computational Linguistics 8 (July 2020): 49–63. http://dx.doi.org/10.1162/tacl_a_00299.

Full text
Abstract:
Data privacy is an important issue for “machine learning as a service” providers. We focus on the problem of membership inference attacks: Given a data sample and black-box access to a model’s API, determine whether the sample existed in the model’s training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks.
16

Gao, Junyao, Xinyang Jiang, Huishuai Zhang, Yifan Yang, Shuguang Dou, Dongsheng Li, Duoqian Miao, Cheng Deng, and Cairong Zhao. "Similarity Distribution Based Membership Inference Attack on Person Re-identification". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (26 June 2023): 14820–28. http://dx.doi.org/10.1609/aaai.v37i12.26731.

Full text
Abstract:
While person Re-identification (Re-ID) has progressed rapidly due to its wide real-world applications, it also causes severe risks of leaking personal information from training data. Thus, this paper focuses on quantifying this risk by membership inference (MI) attack. Most of the existing MI attack algorithms focus on classification models, while Re-ID follows a totally different training and inference paradigm. Re-ID is a fine-grained recognition task with complex feature embedding, and model outputs commonly used by existing MI like logits and losses are not accessible during inference. Since Re-ID focuses on modelling the relative relationship between image pairs instead of individual semantics, we conduct a formal and empirical analysis which validates that the distribution shift of the inter-sample similarity between training and test set is a critical criterion for Re-ID membership inference. As a result, we propose a novel membership inference attack method based on the inter-sample similarity distribution. Specifically, a set of anchor images are sampled to represent the similarity distribution conditioned on a target image, and a neural network with a novel anchor selection module is proposed to predict the membership of the target image. Our experiments validate the effectiveness of the proposed approach on both the Re-ID task and conventional classification task.
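A rough sketch of the anchor-based feature construction described above (the anchor-selection module and the attack network are simplified away): represent a target image by the sorted cosine similarities between its embedding and a set of anchor embeddings, and feed this distribution-style feature vector to any membership classifier. Embedding dimensions and anchor counts below are arbitrary placeholders.

```python
import numpy as np

def similarity_distribution_features(target_emb, anchor_embs):
    """Sorted cosine similarities between a target embedding and anchor
    embeddings; the sorted vector is an order-invariant summary of the
    inter-sample similarity distribution used as the attack feature."""
    t = target_emb / np.linalg.norm(target_emb)
    a = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    return np.sort(a @ t)

# Hypothetical usage: build features for known members / non-members of a
# shadow Re-ID model, then train a small classifier on them.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(64, 128))
target = rng.normal(size=128)
print(similarity_distribution_features(target, anchors).shape)  # (64,)
```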
17

Wang, Xiuling, and Wendy Hui Wang. "Subgraph Structure Membership Inference Attacks against Graph Neural Networks". Proceedings on Privacy Enhancing Technologies 2024, no. 4 (October 2024): 268–90. http://dx.doi.org/10.56553/popets-2024-0116.

Full text
Abstract:
Graph Neural Networks (GNNs) have been widely applied to various applications across different domains. However, recent studies have shown that GNNs are susceptible to the membership inference attacks (MIAs) which aim to infer if some particular data samples were included in the model’s training data. While most previous MIAs have focused on inferring the membership of individual nodes and edges within the training graph, we introduce a novel form of membership inference attack called the Structure Membership Inference Attack (SMIA) which aims to determine whether a given set of nodes corresponds to a particular target structure, such as a clique or a multi-hop path, within the original training graph. To address this issue, we present novel black-box SMIA attacks that leverage the prediction outputs generated by the target GNN model for inference. Our approach involves training a three-label classifier, which, in combination with shadow training, aids in enabling the inference attack. Our extensive experimental evaluation of three representative GNN models and three real-world graph datasets demonstrates that our proposed attacks consistently outperform three baseline methods, including the one that employs the conventional link membership inference attacks to infer the subgraph structure. Additionally, we design a defense mechanism that introduces perturbations to the node embeddings thus influencing the corresponding prediction outputs by the target model. Our defense selectively perturbs dimensions within the node embeddings that have the least impact on the model's accuracy. Our empirical results demonstrate that the defense effectiveness of our approach is comparable with two established defense techniques that employ differential privacy. Moreover, our method achieves a better trade-off between defense strength and the accuracy of the target model compared to the two existing defense methods.
18

Lee, Jinhee, and Inseok Song. "Effect of Prior Information on Bayesian Membership Calculations for Nearby Young Star Associations". Proceedings of the International Astronomical Union 10, S314 (November 2015): 67–68. http://dx.doi.org/10.1017/s1743921315006341.

Full text
Abstract:
We present a refined moving group membership diagnostics scheme based on Bayesian inference. Compared to the BANYAN II method, we improved the calculation by updating bona fide members of a moving group, field star treatment, and uniform spatial distribution of moving group members. Here, we present the detailed description of our method and the new results for Bayesian membership calculation. Comparison of our method with BANYAN II shows probability differences up to ~90%. We conclude that more cautious consideration is needed in moving group membership based on Bayesian inference.
19

Moore, Hunter D., Andrew Stephens, and William Scherer. "An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks". Journal of Cybersecurity and Privacy 2, no. 4 (14 December 2022): 882–906. http://dx.doi.org/10.3390/jcp2040045.

Full text
Abstract:
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability to the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresented classes. While previous work in this field has explored model vulnerability to these attacks, this effort explores the vulnerability of datasets themselves to disparate membership inference attacks. This is accomplished through the development of a vulnerability-classification model that classifies datasets as vulnerable or secure to these attacks. To develop this model, a vulnerability-classification dataset is developed from over 100 datasets—including frequently cited datasets within the field. These datasets are described using a feature set of over 100 features and assigned labels developed from a combination of various modeling and attack strategies. By averaging the attack accuracy over 13 different modeling and attack strategies, the authors explore the vulnerabilities of the datasets themselves as opposed to a particular modeling or attack effort. The in-class observational distance, width ratio, and the proportion of discrete features are found to dominate the attributes defining dataset vulnerability to disparate membership inference attacks. These features are explored in deeper detail and used to develop exploratory methods for hardening these class-based sub-datasets against attacks showing preliminary mitigation success with combinations of feature reduction and class-balancing strategies.
20

Zakovorotniy, Alexander, and Artem Kharchenko. "Properties of interval type-2 fuzzy sets in decision support systems". Bulletin of the National Technical University «KhPI» Series: New solutions in modern technologies, no. 4 (10) (30 December 2021): 75–81. http://dx.doi.org/10.20998/2413-4295.2021.04.10.

Full text
Abstract:
Definitions and methods of designing interval type-2 fuzzy sets in fuzzy inference systems for control problems of complex technical objects in conditions of uncertainty are considered. The main types of uncertainties, that arise when designing fuzzy inference systems and depend on the number of expert assessments, are described. Methods for assessing intra-uncertainty and inter-uncertainty are proposed, taking into account the different number of expert assessments at the stage of determining the types and number of membership functions. Factors influencing the parameters and properties of interval type-2 fuzzy during experimental studies are determined. Such factors include the number of experiments performed, external factors, technical parameters of the control object, and the reliability of the components of the computer system decision support system. The properties of the lower and upper membership functions of interval type-2 fuzzy sets are investigated on the example of the Gaussian membership function, which is one of the most used in the problems of fuzzy inference systems design. The main features and differences in the methods of determining the lower and upper membership functions of interval type-2 fuzzy sets for different types of uncertainties are taken into account. Methods for determining the footprint of uncertainty, as well as the dependence of its size on the number of expert assessments, are considered. The footprint of uncertainty is characterized by the lower and upper membership functions, and its size directly affects the accuracy of the obtained solutions. Methods for determining interval type-2 fuzzy sets using regulation factors of membership function parameters for intra-uncertainty and weighting factors of membership functions for inter-uncertainties have been developed. The regulation factor of the function parameters can be used to describe the lower and upper membership functions while determining the size of the footprint of uncertainty. Complex interval type-2 sets are determined to take into account inter-uncertainties in the problems of fuzzy inference systems design.
21

Dionysiou, Antreas, and Elias Athanasopoulos. "SoK: Membership Inference is Harder Than Previously Thought". Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 286–306. http://dx.doi.org/10.56553/popets-2023-0082.

Full text
Abstract:
Membership Inference Attacks (MIAs) can be conducted based on specific settings/assumptions and experience different limitations. In this paper, first, we provide a systematization of knowledge for all representative MIAs found in the literature. Second, we empirically evaluate and compare the MIA success rates achieved on Machine Learning (ML) models trained with some of the most common generalization techniques. Third, we examine the contribution of potential data leaks to successful MIAs. Fourth, we examine if the depth of Artificial Neural Networks (ANNs) affects MIA success rate and to what extent. For the experimental analysis, we focus solely on well-generalizable target models (various architectures trained on multiple datasets), having only black-box access to them. Our results suggest the following: (a) MIAs on well-generalizable targets suffer from significant limitations which undermine their practicality, (b) common generalization techniques result in ML models which are comparably robust against MIAs, (c) data leaks, although effective for overfitted models, do not facilitate MIAs in case of well-generalizable targets, (d) deep ANN architectures are not more vulnerable to MIAs compared to shallower ones or the opposite, and (e) well-generalizable models can be robust against MIAs even when not achieving state-of-the-art performance.
22

Lintilhac, Paul, Henry Scheible, and Nathaniel D. Bastian. "Datamodel Distance: A New Metric for Privacy". Proceedings of the AAAI Symposium Series 4, no. 1 (8 November 2024): 68–75. http://dx.doi.org/10.1609/aaaiss.v4i1.31773.

Full text
Abstract:
Recent work developing Membership Inference Attacks has demonstrated that certain points in the dataset are often intrinsically easier to attack than others. In this paper, we introduce a new pointwise metric, the Datamodel Distance, and show that it is empirically correlated to and establishes a theoretical lower bound for the success probability for a point under the LiRA Membership Inference Attack. This establishes a connection between the concepts of Datamodels and Membership Inference, and also gives new intuitive explanations for why certain points are more susceptible to attack than others. We then use datamodels as a lens through which to investigate the Privacy Onion Effect.
23

GAO, XIN, YUAN GAO, and DAN A. RALESCU. "ON LIU'S INFERENCE RULE FOR UNCERTAIN SYSTEMS". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 18, no. 01 (February 2010): 1–11. http://dx.doi.org/10.1142/s0218488510006349.

Full text
Abstract:
Liu's inference is a process of deriving consequences from uncertain knowledge or evidence via the tool of conditional uncertainty. Using membership functions, this paper derives some expressions of Liu's inference rule for uncertain systems. This paper also discusses Liu's inference rule with multiple antecedents and with multiple if-then rules.
24

Zhao, Lai Gang, and Dao Jiong Chen. "A Fuzzy Control Method in ACC of the Constant Interval Mode". Advanced Materials Research 201-203 (February 2011): 2083–86. http://dx.doi.org/10.4028/www.scientific.net/amr.201-203.2083.

Full text
Abstract:
Fuzzy control methods are increasingly used in Adaptive Cruise Control systems, but there are various limitations in their membership and fuzzy rule acquisition, which over-relies on experts’ knowledge. This paper presented a Sugeno method based on Mamdani using ANFIS (adaptive network-based fuzzy inference system), with optimized membership functions and training processes. The experimental results revealed that it can be embedded into ACC much more appropriately and efficiently.
25

Barraza, Juan, Patricia Melin, Fevrier Valdez, and Claudia I. Gonzalez. "Modeling of Fuzzy Systems Based on the Competitive Neural Network". Applied Sciences 13, no. 24 (8 December 2023): 13091. http://dx.doi.org/10.3390/app132413091.

Full text
Abstract:
This paper presents a method to dynamically model Type-1 fuzzy inference systems using a Competitive Neural Network. The aim is to exploit the potential of Competitive Neural Networks and fuzzy logic systems to generate an intelligent hybrid model with the ability to group and classify any dataset. The approach uses the Competitive Neural Network to cluster the dataset and the fuzzy model to perform the classification. It is important to note that the fuzzy inference system is generated automatically from the classes and centroids obtained with the Competitive Neural Network, namely, all the parameters of the membership functions are adapted according to the values of the input data. In the approach, two fuzzy inference systems, Sugeno and Mamdani, are proposed. Additionally, variations of these models are presented using three types of membership functions, including Trapezoidal, Triangular, and Gaussian functions. The proposed models are applied to three classification datasets: Wine, Iris, and Wisconsin Breast Cancer (WDBC). The simulations and results present higher classification accuracy when implementing the Sugeno fuzzy inference system compared to the Mamdani system, and in both models (Mamdani and Sugeno), better results are obtained when the Gaussian membership function is used.
26

Abbasi Tadi, Ali, Saroj Dayal, Dima Alhadidi, and Noman Mohammed. "Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning". Information 14, no. 11 (19 November 2023): 620. http://dx.doi.org/10.3390/info14110620.

Full text
Abstract:
The vulnerability of machine learning models to membership inference attacks, which aim to determine whether a specific record belongs to the training dataset, is explored in this paper. Federated learning allows multiple parties to independently train a model without sharing or centralizing their data, offering privacy advantages. However, when private datasets are used in federated learning and model access is granted, the risk of membership inference attacks emerges, potentially compromising sensitive data. To address this, effective defenses in a federated learning environment must be developed without compromising the utility of the target model. This study empirically investigates and compares membership inference attack methodologies in both federated and centralized learning environments, utilizing diverse optimizers and assessing attacks with and without defenses on image and tabular datasets. The findings demonstrate that a combination of knowledge distillation and conventional mitigation techniques (such as Gaussian dropout, Gaussian noise, and activity regularization) significantly mitigates the risk of information leakage in both federated and centralized settings.
27

Cheng, Min-Yuan, and Nhat-Duc Hoang. "A Self-Adaptive Fuzzy Inference Model Based on Least Squares SVM for Estimating Compressive Strength of Rubberized Concrete". International Journal of Information Technology & Decision Making 15, no. 03 (May 2016): 603–19. http://dx.doi.org/10.1142/s0219622016500140.

Full text
Abstract:
This paper presents an AI approach named as self-Adaptive fuzzy least squares support vector machines inference model (SFLSIM) for predicting compressive strength of rubberized concrete. The SFLSIM consists of a fuzzification process for converting crisp input data into membership grades and an inference engine which is constructed based on least squares support vector machines (LS-SVM). Moreover, the proposed inference model integrates differential evolution (DE) to adaptively search for the most appropriate profiles of fuzzy membership functions (MFs) as well as the LS-SVM’s tuning parameters. In this study, 70 concrete mix samples are utilized to train and test the SFLSIM. According to experimental results, the SFLSIM can achieve a comparatively low MAPE which is less than 2%.
28

Shejwalkar, Virat, and Amir Houmansadr. "Membership Privacy for Machine Learning Models Through Knowledge Transfer". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (18 May 2021): 9549–57. http://dx.doi.org/10.1609/aaai.v35i11.17150.

Full text
Abstract:
Large capacity machine learning (ML) models are prone to membership inference attacks (MIAs), which aim to infer whether the target sample is a member of the target model's training dataset. The serious privacy concerns due to the membership inference have motivated multiple defenses against MIAs, e.g., differential privacy and adversarial regularization. Unfortunately, these defenses produce ML models with unacceptably low classification performances. Our work proposes a new defense, called distillation for membership privacy (DMP), against MIAs that preserves the utility of the resulting models significantly better than prior defenses. DMP leverages knowledge distillation to train ML models with membership privacy. We provide a novel criterion to tune the data used for knowledge transfer in order to amplify the membership privacy of DMP. Our extensive evaluation shows that DMP provides significantly better tradeoffs between membership privacy and classification accuracies compared to state-of-the-art MIA defenses. For instance, DMP achieves ~100% accuracy improvement over adversarial regularization for DenseNet trained on CIFAR100, for similar membership privacy (measured using MIA risk): when the MIA risk is 53.7%, adversarially regularized DenseNet is 33.6% accurate, while DMP-trained DenseNet is 65.3% accurate. We have released our code at github.com/vrt1shjwlkr/AAAI21-MIA-Defense.
29

Alfarisy, Gusti Ahmad Fanshuri, and Wayan Firdaus Mahmudy. "Rainfall Forecasting in Banyuwangi Using Adaptive Neuro Fuzzy Inference System". Journal of Information Technology and Computer Science 1, no. 2 (8 February 2017): 65. http://dx.doi.org/10.25126/jitecs.20161212.

Full text
Abstract:
Rainfall forecasting is a non-linear forecasting process that varies according to area and is strongly influenced by climate change. It is a difficult process due to the complexity of the rainfall trend in previous events, and the Adaptive Neuro Fuzzy Inference System (ANFIS) with a hybrid learning method is a popular forecasting model that gives high prediction accuracy for rainfall. Thus, in this study we investigate the efficient membership function of ANFIS for predicting rainfall in Banyuwangi, Indonesia. Different numbers of membership functions using the hybrid learning method are compared. The validation process shows that 3 or 4 membership functions give minimum RMSE results, using temperature, wind speed and relative humidity as parameters.
30

Ansaf, Huda, Bahaa Kazem Ansaf, and Sanaa S. Al Samahi. "A Neuro-Fuzzy Technique for the Modeling of β-Glucosidase Activity from Agaricus bisporus". BioChem 1, no. 3 (19 October 2021): 159–73. http://dx.doi.org/10.3390/biochem1030013.

Full text
Abstract:
This paper proposes a neuro-fuzzy system to model β-glucosidase activity based on the reaction’s pH level and temperature. The developed fuzzy inference system includes two input variables (pH level and temperature) and one output (enzyme activity). The multi-input fuzzy inference system was developed in two stages: first, developing a single input-single output fuzzy inference system for each input variable (pH, temperature) separately, using the robust adaptive network-based fuzzy inference system (ANFIS) approach. The neural network learning techniques were used to tune the membership functions based on previously published experimental data for β-glucosidase. Second, each input’s optimized membership functions from the ANFIS technique were embedded in a new fuzzy inference system to simultaneously encompass the impact of temperature and pH level on the activity of β-glucosidase. The required base rules for the developed fuzzy inference system were created to describe the antecedent (pH and temperature) implication to the consequent (enzyme activity), using the singleton Sugeno fuzzy inference technique. The simulation results from the developed models achieved high accuracy. The neuro-fuzzy approach performed very well in predicting β-glucosidase activity through comparative analysis. The proposed approach may be used to predict enzyme kinetics for several nonlinear biosynthetic processes.
31

Tejash U. Chaudhari, Vimal B. Patel, Rahul G. Thakkar, and Chetanpal Singh. "Comparative analysis of Mamdani, Larsen and Tsukamoto methods of fuzzy inference system for students’ academic performance evaluation". International Journal of Science and Research Archive 9, no. 1 (30 June 2023): 517–23. http://dx.doi.org/10.30574/ijsra.2023.9.1.0443.

Full text
Abstract:
Over the last few years, the use of the fuzzy logic technique for evaluating performance in the teaching-learning process is growing rapidly. In this research work, three different fuzzy inference methods: Mamdani fuzzy inference method, Larsen fuzzy inference method and Tsukamoto fuzzy inference method have been proposed for students' academic performance appraisal for multi-input variables. To obtain a degree of satisfaction, the Triangular membership function is used. The results of experiments showed the best fuzzy inference method among Mamdani, Larsen and Tsukamoto. We have also compared the results with the existing statistical method.
32

LI, Jing, and Wei-dong TIAN. "Fuzzy inference system based on B-spline membership function". Journal of Computer Applications 31, no. 2 (6 April 2011): 490–92. http://dx.doi.org/10.3724/sp.j.1087.2011.00490.

Full text
33

Fan, Jianqing, Yingying Fan, Xiao Han, and Jinchi Lv. "SIMPLE: Statistical inference on membership profiles in large networks". Journal of the Royal Statistical Society: Series B (Statistical Methodology) 84, no. 2 (30 March 2022): 630–53. http://dx.doi.org/10.1111/rssb.12505.

Full text
34

Ben Hamida, Sana, Hichem Mrabet, Sana Belguith, Adeeb Alhomoud, and Abderrazak Jemai. "Towards Securing Machine Learning Models Against Membership Inference Attacks". Computers, Materials & Continua 70, no. 3 (2022): 4897–919. http://dx.doi.org/10.32604/cmc.2022.019709.

Full text
35

SAKUMA, Akira. "Inference on the Fuzzy Equivalence with Step Membership Function." Rinsho yakuri/Japanese Journal of Clinical Pharmacology and Therapeutics 23, no. 2 (1992): 521–25. http://dx.doi.org/10.3999/jscpt.23.521.

Full text
36

Zrilic, Djuro G., Jaime Ramirez-Angulo, and Bo Yuan. "Hardware implementations of fuzzy membership functions, operations, and inference". Computers & Electrical Engineering 26, no. 1 (January 2000): 85–105. http://dx.doi.org/10.1016/s0045-7906(99)00025-7.

Full text
37

Bamshad, Michael J., Stephen Wooding, W. Scott Watkins, Christopher T. Ostler, Mark A. Batzer, and Lynn B. Jorde. "Human Population Genetic Structure and Inference of Group Membership". American Journal of Human Genetics 72, no. 3 (March 2003): 578–89. http://dx.doi.org/10.1086/368061.

Full text
38

KWON, Hyun, and Yongchul KIM. "Toward Selective Membership Inference Attack against Deep Learning Model". IEICE Transactions on Information and Systems E105.D, no. 11 (1 November 2022): 1911–15. http://dx.doi.org/10.1587/transinf.2022ngl0001.

Full text
39

Riaz, Shazia, Saqib Ali, Guojun Wang, Muhammad Ahsan Latif, and Muhammad Zafar Iqbal. "Membership inference attack on differentially private block coordinate descent". PeerJ Computer Science 9 (5 October 2023): e1616. http://dx.doi.org/10.7717/peerj-cs.1616.

Full text
Abstract:
The extraordinary success of deep learning is made possible due to the availability of crowd-sourced large-scale training datasets. Mostly, these datasets contain personal and confidential information, thus, have great potential of being misused, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest nowadays. One of the prominent approaches adopted to prevent the leakage of sensitive information about the training data is by implementing differential privacy during training for their differentially private training, which aims to preserve the privacy of deep learning models. Though these models are claimed to be a safeguard against privacy attacks targeting sensitive information, however, least amount of work is found in the literature to practically evaluate their capability by performing a sophisticated attack model on them. Recently, DP-BCD is proposed as an alternative to state-of-the-art DP-SGD, to preserve the privacy of deep-learning models, having low privacy cost and fast convergence speed with highly accurate prediction results. To check its practical capability, in this article, we analytically evaluate the impact of a sophisticated privacy attack called the membership inference attack against it in both black box as well as white box settings. More precisely, we inspect how much information can be inferred from a differentially private deep model’s training data. We evaluate our experiments on benchmark datasets using AUC, attacker advantage, precision, recall, and F1-score performance metrics. The experimental results exhibit that DP-BCD keeps its promise to preserve privacy against strong adversaries while providing acceptable model utility compared to state-of-the-art techniques.
40

Zhang, Yun, and Chaoxia Qin. "A Gaussian-Shaped Fuzzy Inference System for Multi-Source Fuzzy Data". Systems 10, no. 6 (15 December 2022): 258. http://dx.doi.org/10.3390/systems10060258.

Full text
Abstract:
Fuzzy control theory has been extensively used in the construction of complex fuzzy inference systems. However, we argue that existing fuzzy control technologies focus mainly on the single-source fuzzy information system, disregarding the complementary nature of multi-source data. In this paper, we develop a novel Gaussian-shaped Fuzzy Inference System (GFIS) driven by multi-source fuzzy data. To this end, we first propose an interval-value normalization method to address the heterogeneity of multi-source fuzzy data. The contribution of our interval-value normalization method involves mapping heterogeneous fuzzy data to a unified distribution space by adjusting the mean and variance of data from each information source. As a result of combining the normalized descriptions from various sources for an object, we can obtain a fused representation of that object. We then derive an adaptive Gaussian-shaped membership function based on the addition law of the Gaussian distribution. GFIS uses it to dynamically granulate fusion inputs and to design inference rules. This proposed membership function has the advantage of being able to adapt to changing information sources. Finally, we integrate the normalization method and adaptive membership function to the Takagi–Sugeno (T–S) model and present a modified fuzzy inference framework. Applying our methodology to four datasets, we confirm that the data do lend support to the theory implying the improved performance and effectiveness.
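The "addition law of the Gaussian distribution" invoked above can be read as: when independent Gaussian descriptions are summed, their means and variances add. The sketch below uses that law to fuse normalized per-source descriptions into a single Gaussian membership function; the fusion rule and the numbers are simplified placeholders rather than the paper's exact GFIS construction.

```python
import numpy as np

def fuse_gaussian_descriptions(means, variances):
    """Addition law for independent Gaussians: the sum of the sources has
    mean = sum of means and variance = sum of variances."""
    return float(np.sum(means)), float(np.sum(variances))

def gaussian_membership(x, mean, variance):
    """Gaussian-shaped membership degree of x under the fused description."""
    return float(np.exp(-0.5 * (x - mean) ** 2 / variance))

# Hypothetical example: three normalized sources describing one object.
mu, var = fuse_gaussian_descriptions([0.20, 0.25, 0.15], [0.010, 0.015, 0.012])
print(mu, var, gaussian_membership(0.55, mu, var))
```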
41

Ashraf, Ather, Muhammad Akram, and Mansoor Sarwar. "Type-II Fuzzy Decision Support System for Fertilizer". Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/695815.

Full text
Abstract:
Type-II fuzzy sets are used to convey the uncertainties in the membership function of type-I fuzzy sets. Linguistic information in expert rules does not give any information about the geometry of the membership functions. These membership functions are mostly constructed through numerical data or range of classes. But there exists an uncertainty about the shape of the membership, that is, whether to go for a triangle membership function or a trapezoidal membership function. In this paper we use a type-II fuzzy set to overcome this uncertainty, and develop a fuzzy decision support system of fertilizers based on a type-II fuzzy set. This type-II fuzzy system takes cropping time and soil nutrients in the form of spatial surfaces as input, fuzzifies it using a type-II fuzzy membership function, and implies fuzzy rules on it in the fuzzy inference engine. The output of the fuzzy inference engine, which is in the form of interval value type-II fuzzy sets, reduced to an interval type-I fuzzy set, defuzzifies it to a crisp value and generates a spatial surface of fertilizers. This spatial surface shows the spatial trend of the required amount of fertilizer needed to cultivate a specific crop. The complexity of our algorithm is O(mnr), where m is the height of the raster, n is the width of the raster, and r is the number of expert rules.
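To make the footprint-of-uncertainty idea concrete, here is a minimal sketch of an interval type-2 Gaussian membership function with a fixed centre and an uncertain width, where the lower and upper membership functions bound the FOU; the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def it2_gaussian_mf(x, c, sigma_lo, sigma_hi):
    """Interval type-2 Gaussian membership function: fixed centre c, width
    uncertain within [sigma_lo, sigma_hi]. Returns (lower_mf, upper_mf);
    the region between them is the footprint of uncertainty (FOU)."""
    lower = np.exp(-0.5 * ((x - c) / sigma_lo) ** 2)  # narrowest Gaussian
    upper = np.exp(-0.5 * ((x - c) / sigma_hi) ** 2)  # widest Gaussian
    return lower, upper

x = np.linspace(0.0, 10.0, 5)
lo, hi = it2_gaussian_mf(x, c=5.0, sigma_lo=0.8, sigma_hi=1.5)
print(np.round(lo, 3), np.round(hi, 3))
```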
42

Voronina, Ekaterina V., Viktor E. Reutov, Olga B. Yarosh, and Sergei V. Khalezin. "Formation of a Science-Based Real Estate Services Market Management Mechanism". Materials Science Forum 931 (September 2018): 1172–77. http://dx.doi.org/10.4028/www.scientific.net/msf.931.1172.

Full text
Abstract:
The authors approve a mathematical model for estimating the value of an object taking into account trends in the residential real estate market in the MATLAB software package with the assignment and correction of the fuzzy set membership functions. Fuzzy computations are performed using the Mamdani method, which requires defining the membership functions for the output variables. We consider schemes for constructing systems of fuzzy inference for variables (the real estate object state) and (the real estate market state). The results of calculations using the Fuzzy Logic Toolbox are given on the surfaces of fuzzy inference systems that show the dependence of the output variables on the individual input variables.
APA, Harvard, Vancouver, ISO, and other styles
43

Yunda, Leonardo, David Pacheco and Jorge Millan. "A Web-based Fuzzy Inference System Based Tool for Cardiovascular Disease Risk Assessment". Nova 13, no. 24 (15 December 2015): 7. http://dx.doi.org/10.22490/24629448.1712.

Full text
Abstract (summary):
Developing a web-based fuzzy inference tool for cardiovascular risk assessment. The tool uses evidence-based medicine inference rules for membership classification. Methods. The system framework allows adding variables such as gender, age, weight, height, medication intake, and blood pressure, with various types of membership functions based on classification rules. Results. By entering patient clinical data, the tool allows health professionals to obtain a prediction of cardiovascular risk. The tool can also later be used to predict other types of risk, including cognitive and physical disease conditions.
APA, Harvard, Vancouver, ISO, and other styles
44

Hayashi, Kenichiro, Akifumi Otsubo and Kazuhiko Shiranita. "Fuzzy Control Using Piecewise Linear Membership Functions Based on Knowledge of Tuning a PID Controller". Journal of Advanced Computational Intelligence and Intelligent Informatics 5, no. 1 (20 January 2001): 71–77. http://dx.doi.org/10.20965/jaciii.2001.p0071.

Full text
Abstract (summary):
Tuning the control parameters of a fuzzy controller relies on trial and error, and how this can be done efficiently is an important open subject in fuzzy control. We propose a method in which the membership functions of a fuzzy controller can be set up simply, using knowledge of how the parameters of a conventional PID controller are tuned. In this method, fuzzy control is realized as follows: piecewise linear membership functions, determined from the knowledge of PID tuning parameters, are set up in the antecedent parts of four fuzzy control rules with simple structures, and a simplified inference method that enables high-speed inference is then applied to these rules.
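A minimal sketch of the idea of deriving antecedent membership functions from PID tuning knowledge and applying a simplified (weighted-average) inference over four rules; the mapping from the gains to the breakpoints below is an assumption for illustration, not the authors' exact tuning rule:

```python
import numpy as np

def pw_linear(x, x0, x1):
    """Piecewise linear membership rising from 0 at x0 to 1 at x1 (falling if
    x1 < x0); saturates outside the interval."""
    return float(np.clip((x - x0) / (x1 - x0), 0.0, 1.0))

def fuzzy_pid_like(e, de, Kp=2.0, Kd=0.5, u_max=1.0):
    """Four-rule controller whose antecedent breakpoints are scaled from the PID
    gains (illustrative assumption). Simplified inference: weighted average of
    singleton consequents, so no defuzzification loop is needed."""
    e_scale, de_scale = 1.0 / Kp, 1.0 / Kd          # smaller gain -> wider fuzzy set
    e_neg, e_pos = pw_linear(e, e_scale, -e_scale), pw_linear(e, -e_scale, e_scale)
    de_neg, de_pos = pw_linear(de, de_scale, -de_scale), pw_linear(de, -de_scale, de_scale)

    # (firing strength, singleton output) for the four rules.
    rules = [
        (e_neg * de_neg, -u_max),        # error negative and decreasing -> strong negative action
        (e_neg * de_pos, -0.5 * u_max),
        (e_pos * de_neg,  0.5 * u_max),
        (e_pos * de_pos,  u_max),        # error positive and increasing -> strong positive action
    ]
    w = sum(r[0] for r in rules)
    return sum(r[0] * r[1] for r in rules) / w if w > 0 else 0.0

print(fuzzy_pid_like(e=0.3, de=-0.1))
```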
APA, Harvard, Vancouver, ISO, and other styles
45

Sant'Ana, Vitor Taha, and Roberto Mendes Finzi Neto. "A case study on neuro-fuzzy architectures applied to the system identification of a reduced scale fighter aircraft". OBSERVATÓRIO DE LA ECONOMÍA LATINOAMERICANA 22, no. 1 (18 January 2024): 1898–919. http://dx.doi.org/10.55905/oelv22n1-099.

Full text
Abstract (summary):
This study investigates different neuro-fuzzy architectures applied to unsteady aerodynamic modeling based on experimental data from a reduced-scale aircraft known as the Generic Future Fighter (GFF). The comparison considers different fuzzy inference methods, membership function shapes, numbers of membership functions describing the input variables and, in the case of the Takagi-Sugeno inference method, different output functions. All of these comparisons use differential evolution as the optimization tool. The results identify the best neuro-fuzzy configuration for the system identification of the GFF, and the conclusion presents insights into a possible future implementation of the methodology.
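Since the comparison relies on differential evolution to tune the membership-function parameters, a bare-bones DE/rand/1/bin loop fitting a two-rule Takagi-Sugeno model is sketched below; the toy data, hyperparameters, and model structure are illustrative only, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sugeno_predict(params, x):
    """One-input, two-rule Takagi-Sugeno model with Gaussian antecedents and
    constant (zero-order) consequents. params = [c1, s1, y1, c2, s2, y2]."""
    c1, s1, y1, c2, s2, y2 = params
    w1 = np.exp(-0.5 * ((x - c1) / (abs(s1) + 1e-6)) ** 2)
    w2 = np.exp(-0.5 * ((x - c2) / (abs(s2) + 1e-6)) ** 2)
    return (w1 * y1 + w2 * y2) / (w1 + w2 + 1e-12)

def differential_evolution(loss, dim, pop=20, gens=200, F=0.6, CR=0.9, bounds=(-3.0, 3.0)):
    """Minimal DE/rand/1/bin loop, as commonly used to tune membership-function
    parameters (hyperparameters here are illustrative)."""
    P = rng.uniform(bounds[0], bounds[1], size=(pop, dim))
    fit = np.array([loss(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                      # differential mutation
            cross = rng.random(dim) < CR                  # binomial crossover mask
            trial = np.where(cross, mutant, P[i])
            f = loss(trial)
            if f < fit[i]:                                # greedy selection
                P[i], fit[i] = trial, f
    return P[np.argmin(fit)], fit.min()

# Toy identification data (stand-in for the aerodynamic measurements).
x = np.linspace(-2.0, 2.0, 100)
y = np.tanh(x)
best, err = differential_evolution(lambda p: np.mean((sugeno_predict(p, x) - y) ** 2), dim=6)
print(err)
```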
APA, Harvard, Vancouver, ISO, and other styles
46

Mohammad Khan, Farhan, Smriti Sridhar and Rajiv Gupta. "Detection of waterborne bacteria using Adaptive Neuro-Fuzzy Inference System". E3S Web of Conferences 158 (2020): 05002. http://dx.doi.org/10.1051/e3sconf/202015805002.

Full text
Abstract (summary):
The detection of waterborne bacteria is crucial to prevent health risks. Current research uses soft computing techniques based on Artificial Neural Networks (ANN) for the detection of bacterial pollution in water. Relying only on sensor-based water quality analysis for detection is prone to human error, so there is a need to automate real-time bacterial monitoring in order to minimize such errors. To address this issue, we implement an automated process of waterborne bacterial detection using a hybrid technique called the Adaptive Neuro-Fuzzy Inference System (ANFIS), which integrates the learning ability of an ANN with a set of fuzzy if-then rules with appropriate membership functions. The experimental data used as input to the ANFIS model are obtained from the open data platform of the Government of India, comprising 1992 experimental laboratory results from the years 2003-2014. We include the following water quality parameters as the significant factors in the detection of bacteria: temperature, dissolved oxygen (DO), pH, electrical conductivity, and biochemical oxygen demand (BOD). The membership functions change automatically with every iteration during training of the system. The goal of the study is to compare the results obtained with three ANFIS membership functions (triangular, trapezoidal, and bell-shaped) using 3^5 = 243 fuzzy rules. The results show that ANFIS with the generalized bell-shaped membership function performs best, with an average error of 0.00619 at epoch 100.
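The generalized bell membership function and the 3^5 = 243-rule grid mentioned above can be written down directly; the parameters in this sketch are placeholders rather than fitted ANFIS values:

```python
import numpy as np
from itertools import product

def gbell(x, a, b, c):
    """Generalized bell-shaped membership function: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Five water-quality inputs with three linguistic terms each -> 3**5 = 243 rules.
inputs = ["temperature", "DO", "pH", "conductivity", "BOD"]
terms = ["low", "medium", "high"]
rule_grid = list(product(terms, repeat=len(inputs)))
print(len(rule_grid))  # 243

# Membership of a pH reading in a hypothetical 'medium' term (placeholder parameters).
print(gbell(7.2, a=1.0, b=2.0, c=7.0))
```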
APA, Harvard, Vancouver, ISO, and other styles
47

AL-Saedi, Louloua M. AL, Methaq Talib Gaata, Mostafa Abotaleb and Hussein Alkattan. "New Approach of Estimating Sarcasm based on the percentage of happiness of facial Expression using Fuzzy Inference System". Journal of Artificial Intelligence and Metaheuristics 1, no. 1 (2022): 35–44. http://dx.doi.org/10.54216/jaim.010104.

Full text
Abstract (summary):
The detection of micro-expressions is of significant importance because these expressions reflect hidden emotions even when a person tries to conceal them. In this paper, a new approach is proposed to estimate the percentage of sarcasm based on the detected degree of happiness of a facial expression using a fuzzy inference system. Five facial regions (right/left brows, right/left eyes, and mouth) are considered in order to determine a set of active distances from the detected outline points of these regions. As a first step, the membership functions in the proposed fuzzy inference system determine the degree of the happiness expression, based mainly on the computed distances; another membership function is then used to estimate the percentage of sarcasm according to the outcomes of the first step. The proposed approach is validated using face images collected from the SMIC, SAMM, and CAS(ME)2 standard datasets.
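A toy version of this two-stage scheme (facial distances -> degree of happiness -> percentage of sarcasm) might look as follows; the chosen distances and membership parameters are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on points a < b < c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def happiness_degree(mouth_width, brow_eye_gap):
    """Stage 1: map normalized facial distances to a degree of happiness."""
    smile = tri(mouth_width, 0.3, 0.7, 1.0)      # wider mouth -> smiling
    relaxed = tri(brow_eye_gap, 0.2, 0.5, 0.8)   # relaxed brows
    return min(smile, relaxed)

def sarcasm_percentage(happiness):
    """Stage 2: a second membership function maps the happiness degree to a
    sarcasm percentage (an illustrative choice of shape)."""
    return 100.0 * tri(happiness, 0.1, 0.5, 0.9)

h = happiness_degree(mouth_width=0.6, brow_eye_gap=0.45)
print(h, sarcasm_percentage(h))
```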
APA, Harvard, Vancouver, ISO, and other styles
48

Velikanova, A. S., K. A. Polshchykov, R. V. Likhosherstov and A. K. Polshchykova. "The use of virtual reality and fuzzy neural network tools to identify the focus on achieving project results". Journal of Physics: Conference Series 2060, no. 1 (1 October 2021): 012017. http://dx.doi.org/10.1088/1742-6596/2060/1/012017.

Full text
Abstract (summary):
In the process of deciding whether to include an applicant in a project team, it is proposed to take into account their project results targeting (PRT). The article describes a conceptual basis for identifying a person's focus on achieving the results of significant projects, based on virtual reality tools and the capabilities of fuzzy inference systems. A neuro-fuzzy network is proposed for configuring the parameters of the membership functions of the fuzzy inference system and the values of the individual inferences of the fuzzy rules. To obtain a training sample, the participants of teams from completed projects need to pass a series of tests using virtual reality tools. According to the proposed concept, persons with high PRT will be primarily recommended for inclusion in the team of project performers.
APA, Harvard, Vancouver, ISO, and other styles
49

Jarin, Ismat, and Birhanu Eshete. "MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members". Proceedings on Privacy Enhancing Technologies 2023, no. 1 (January 2023): 400–416. http://dx.doi.org/10.56553/popets-2023-0024.

Full text
Abstract (summary):
In membership inference attacks (MIAs), an adversary observes the predictions of a model to determine whether a sample is part of the model's training data. Existing MIA defenses conceal the presence of a target sample through strong regularization, knowledge distillation, confidence masking, or differential privacy. We propose MIAShield, a new MIA defense based on preemptive exclusion of member samples instead of masking a member's presence. MIAShield departs from prior defenses in that it weakens the strong membership signal that stems from the presence of a target sample by preemptively excluding it at prediction time, without compromising model utility. To that end, we design and evaluate a suite of preemptive exclusion oracles leveraging model confidence, exact/approximate sample signatures, and learning-based exclusion of member data points. To be practical, MIAShield splits the training data into disjoint subsets and trains a model on each subset to build an ensemble. The disjointness of the subsets ensures that a target sample belongs to only one subset, which isolates the sample and facilitates the preemptive exclusion goal. We evaluate MIAShield on three benchmark image classification datasets. We show that MIAShield effectively mitigates membership inference (reducing the attacker to near-random guessing) for a wide range of MIAs, achieves a far better privacy/utility trade-off than state-of-the-art defenses, and remains resilient in the face of adaptive attacks.
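A minimal sketch of the disjoint-subset ensemble with preemptive exclusion via an exact sample-signature oracle, one of the oracles named in the abstract; the class name, the hash-based signature, and the toy per-shard model are illustrative, not the authors' implementation:

```python
import hashlib
import numpy as np

class ExclusionEnsemble:
    """Each sub-model is trained on one disjoint shard; at prediction time the
    sub-model whose shard contains the query (detected here by an exact hash
    signature) is left out of the aggregation. Training is stubbed out."""

    def __init__(self, shards, train_fn):
        self.signatures = [{self._sig(x) for x in X} for X, _ in shards]
        self.models = [train_fn(X, y) for X, y in shards]

    @staticmethod
    def _sig(x):
        return hashlib.sha256(np.asarray(x, dtype=np.float64).tobytes()).hexdigest()

    def predict_proba(self, x):
        sig = self._sig(x)
        # Exclude any sub-model whose training shard contains the query sample.
        preds = [m(x) for m, sigs in zip(self.models, self.signatures) if sig not in sigs]
        if not preds:  # degenerate case: the sample matched every shard
            preds = [m(x) for m in self.models]
        return np.mean(preds, axis=0)

# Toy usage with a trivial "model": a per-shard class-frequency predictor.
def train_fn(X, y):
    probs = np.bincount(y, minlength=2) / len(y)
    return lambda x, p=probs: p

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 4))
y = rng.integers(0, 2, size=90)
shards = [(X[i::3], y[i::3]) for i in range(3)]   # three disjoint shards
ens = ExclusionEnsemble(shards, train_fn)
print(ens.predict_proba(X[0]))                    # member sample -> its own shard is excluded
```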
APA, Harvard, Vancouver, ISO, and other styles
50

Park, Cheolhee, Youngsoo Kim, Jong-Geun Park, Dowon Hong and Changho Seo. "Evaluating Differentially Private Generative Adversarial Networks Over Membership Inference Attack". IEEE Access 9 (2021): 167412–25. http://dx.doi.org/10.1109/access.2021.3137278.

Full text
APA, Harvard, Vancouver, ISO, and other styles