
Journal articles on the topic "Private Data Analysis"


Consult the top 50 journal articles for research on the topic "Private Data Analysis".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Shi, Elaine, T. H. Hubert Chan, Eleanor Rieffel, and Dawn Song. "Distributed Private Data Analysis." ACM Transactions on Algorithms 13, no. 4 (December 21, 2017): 1–38. http://dx.doi.org/10.1145/3146549.

2

Abdul Manap, Nazura, Mohamad Rizal Abd Rahman, and Siti Nur Farah Atiqah Salleh. "HEALTH DATA OWNERSHIP IN MALAYSIA PUBLIC AND PRIVATE HEALTHCARE: A LEGAL ANALYSIS OF HEALTH DATA PRIVACY IN THE AGE OF BIG DATA." International Journal of Law, Government and Communication 7, no. 30 (December 31, 2022): 33–41. http://dx.doi.org/10.35631/ijlgc.730004.

Abstract:
Health data ownership in big data is a new legal issue. The problem lies between public and private healthcare as the main proprietors of health data. In Malaysia, health data ownership falls under the jurisdiction of government hospitals and private healthcare providers. Whoever owns the data is responsible for safeguarding it, including its privacy. Various technical methods, such as aggregation and anonymization, are applied to protect health data. The question is whether these technical methods are still reliable for safeguarding privacy in big data. In terms of legal protection, private healthcare is governed by the Personal Data Protection Act 2010, while the same Act does not bind the government. With the advancement of big data, public and private healthcare are trying to extract value from health data by processing big data and its analytical outcomes. Since health data is sensitive by nature, containing personal information of individuals or patients, the question arises whether the proprietor can provide adequate legal protection of health data. The Personal Data Protection Act 2010 still protects health data in private healthcare, but what laws govern health data privacy in public healthcare? This article aims to answer these questions by analyzing legal sources relevant to health data privacy in big data. We propose a regulatory guideline that follows the GDPR as a legal reference model to harmonize public and private healthcare ownership of health data, the better to protect the privacy of individuals in big data.
3

Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. "Calibrating Noise to Sensitivity in Private Data Analysis." Journal of Privacy and Confidentiality 7, no. 3 (May 30, 2017): 17–51. http://dx.doi.org/10.29012/jpc.v7i3.405.

Abstract:
We continue a line of research initiated in Dinur and Nissim (2003); Dwork and Nissim (2004); and Blum et al. (2005) on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called "true answer" is the result of applying f to the database. To protect privacy, the true answer is perturbed by the addition of random noise generated according to a carefully chosen distribution, and this response, the true answer plus noise, is returned to the user. Previous work focused on the case of noisy sums, in which f = Σ_i g(x_i), where x_i denotes the ith row of the database and g maps database rows to [0,1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single argument to f can change its output. The new analysis shows that for several particular applications substantially less noise is needed than was previously understood to be the case. The first step is a very clean definition of privacy, now known as differential privacy, and a measure of its loss. We also provide a set of tools for designing and combining differentially private algorithms, permitting the construction of complex differentially private analytical tools from simple differentially private primitives. Finally, we obtain separation results showing the increased value of interactive statistical release mechanisms over non-interactive ones.
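The calibration described in this abstract can be sketched in a few lines: to answer a query under ε-differential privacy, add Laplace noise with scale sensitivity/ε to the true answer. The sketch below is illustrative only (the function names and toy data are ours, not the paper's); a counting query has sensitivity 1, since adding or removing one row changes the count by at most 1.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling of the zero-mean Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    # Calibrate the noise scale to sensitivity/epsilon: the scale grows
    # with the sensitivity of f and shrinks as epsilon grows.
    return true_answer + laplace_noise(sensitivity / epsilon, rng)

# A counting query has sensitivity 1: one row changes the count by at most 1.
rng = random.Random(0)
rows = [0.2, 0.9, 0.4, 0.7, 0.8]
true_count = sum(1 for x in rows if x > 0.5)  # 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

With a larger ε (weaker privacy), the noise scale shrinks and the released count concentrates around the true value.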
4

Proserpio, Davide, Sharon Goldberg, and Frank McSherry. "Calibrating data to sensitivity in private data analysis." Proceedings of the VLDB Endowment 7, no. 8 (April 2014): 637–48. http://dx.doi.org/10.14778/2732296.2732300.

5

Mandal, Sanjeev Kumar, Amit Sharma, Santosh Kumar Henge, Sumaira Bashir, Madhuresh Shukla, and Asim Tara Pathak. "Secure data encryption key scenario for protecting private data security and privacy." Journal of Discrete Mathematical Sciences and Cryptography 27, no. 2 (2024): 269–81. http://dx.doi.org/10.47974/jdmsc-1881.

Abstract:
Cryptography, specifically encryption, plays a pivotal role in protecting data from unauthorized access. However, not all encryption methods are equally effective, as some exhibit vulnerabilities. This research proposes a novel encryption method that builds upon established techniques to enhance data security. The proposed method combines the strengths of the Feistel structure and the Advanced Encryption Standard (AES) to create an algorithm with superior resistance against attacks. The proposed encryption method successfully mitigates vulnerabilities, demonstrating enhanced resilience against unauthorized access attempts and minimizing the potential for data leakage. By prioritizing security and advancing encryption technologies, it can effectively protect personal information, maintain data confidentiality and integrity, and establish a safer digital environment for individuals and organizations.
6

Appenzeller, Arno, Moritz Leitner, Patrick Philipp, Erik Krempel, and Jürgen Beyerer. "Privacy and Utility of Private Synthetic Data for Medical Data Analyses." Applied Sciences 12, no. 23 (December 1, 2022): 12320. http://dx.doi.org/10.3390/app122312320.

Abstract:
The increasing availability and use of sensitive personal data raises a set of issues regarding the privacy of the individuals behind the data. These concerns become even more important when health data are processed, as they are considered sensitive according to most global regulations. Privacy-enhancing technologies (PETs) attempt to protect the privacy of individuals whilst preserving the utility of data. One of the most popular technologies recently is differential privacy (DP), which was used for the 2020 U.S. Census. Another trend is to combine synthetic data generators with DP to create so-called private synthetic data generators. The objective is to preserve statistical properties as accurately as possible, while the generated data should be as different as possible from the original data regarding private features. While these technologies seem promising, there is a gap between academic research on DP and synthetic data and the practical application and evaluation of these techniques for real-world use cases. In this paper, we evaluate three different private synthetic data generators (MWEM, DP-CTGAN, and PATE-CTGAN) on their use-case-specific privacy and utility. For the use case, continuous heart rate measurements from different individuals are analyzed. This work shows that private synthetic data generators have tremendous advantages over traditional techniques but also require in-depth analysis depending on the use case. Furthermore, each technology has different strengths, so there is no clear winner. However, DP-CTGAN often performs slightly better than the other technologies, so it can be recommended for a continuous medical data use case.
7

Lobo-Vesga, Elisabet, Alejandro Russo, and Marco Gaboardi. "A Programming Language for Data Privacy with Accuracy Estimations." ACM Transactions on Programming Languages and Systems 43, no. 2 (July 2021): 1–42. http://dx.doi.org/10.1145/3452096.

Abstract:
Differential privacy offers a formal framework for reasoning about the privacy and accuracy of computations on private data. It also offers a rich set of building blocks for constructing private data analyses. When carefully calibrated, these analyses simultaneously guarantee the privacy of the individuals contributing their data, and the accuracy of the data analysis results, inferring useful properties about the population. The compositional nature of differential privacy has motivated the design and implementation of several programming languages to ease the implementation of differentially private analyses. Even though these programming languages provide support for reasoning about privacy, most of them disregard reasoning about the accuracy of data analyses. To overcome this limitation, we present DPella, a programming framework providing data analysts with support for reasoning about privacy, accuracy, and their trade-offs. The distinguishing feature of DPella is a novel component that statically tracks the accuracy of different data analyses. To provide tight accuracy estimations, this component leverages taint analysis for automatically inferring statistical independence of the different noise quantities added for guaranteeing privacy. We evaluate our approach by implementing several classical queries from the literature and showing how data analysts can calibrate the privacy parameters to meet the accuracy requirements, and vice versa.
8

Dwork, Cynthia. "A firm foundation for private data analysis." Communications of the ACM 54, no. 1 (January 2011): 86–95. http://dx.doi.org/10.1145/1866739.1866758.

9

Bos, Joppe W., Kristin Lauter, and Michael Naehrig. "Private predictive analysis on encrypted medical data." Journal of Biomedical Informatics 50 (August 2014): 234–43. http://dx.doi.org/10.1016/j.jbi.2014.04.003.

10

Aher, Ujjwala Bal, Amol A. Bhosle, Prachi Palsodkar, Swati Bula Patil, Nishchay Koul, and Purva Mange. "Secure data sharing in collaborative network environments for privacy-preserving mechanisms." Journal of Discrete Mathematical Sciences and Cryptography 27, no. 2-B (2024): 855–65. http://dx.doi.org/10.47974/jdmsc-1961.

Abstract:
In today's world, where everything is linked, shared network settings are necessary for sharing information between companies. However, ensuring that data is safe and private in such settings is very hard. This paper describes a new way to share data safely, focusing on privacy-protecting features that keep private data safe while making it easier for people to work together. To keep data safe at different steps of sharing and processing, the suggested approach uses a mix of cryptography, access controls, and anonymization techniques. Unauthorized access is prevented and privacy is maintained by encrypting data both in transit and at rest. Access controls, based on predefined rules, ensure that only authorized people can see or change private data. Methods such as differential privacy strengthen privacy further by adding noise to query results, which prevents individual records from being identified while still allowing useful analysis. Together, these methods make it possible to share data safely and privately, encouraging cooperation without jeopardizing accuracy or privacy. Overall, this method provides a complete framework for dealing with the tricky issues of data security and privacy in collaborative network settings. It lets businesses share information freely while still following strict privacy rules and keeping private data from getting into the wrong hands or being shared without permission.
11

Sramka, Michal. "Data mining as a tool in privacy-preserving data publishing." Tatra Mountains Mathematical Publications 45, no. 1 (December 1, 2010): 151–59. http://dx.doi.org/10.2478/v10127-010-0011-z.

Abstract:
Many databases contain data about individuals that are valuable for research, marketing, and decision making. Sharing or publishing data about individuals is, however, prone to privacy attacks, breaches, and disclosures. The concern here is individuals' privacy: keeping the sensitive information about individuals private to them. Data mining in this setting has been shown to be a powerful tool to breach privacy and make disclosures. In contrast, data mining can also be used in practice to aid data owners in their decision on how to share and publish their databases. We present and discuss the role and uses of data mining in these scenarios and also briefly discuss other approaches to private data analysis.
12

Cho, Cheol-kyu. "Big Data Analysis Research on Private Investigation Systems." K Association of Education Research 8, no. 3 (September 30, 2023): 273–87. http://dx.doi.org/10.48033/jss.8.3.15.

Abstract:
In this study, we explore how the media portrays the private investigation system through BigKinds, a news big data analysis site. BigKinds provides a big data system for the media and collects data from various media organizations, allowing us to analyze media reports on private investigation. To achieve the purpose of the research, the flow of keywords was identified by dividing the period into the 2000s, 2010s, and 2020s and deriving the number of related articles for each year. As a result of this study, it was found that in the 2000s the frequency of media reporting was low, at a time when public interest in the private investigation system was limited and little research had been conducted in academia. Second, in the 2010s the topic ranked higher than in the 2000s, and various research and education related to government policies and private investigation were conducted in academia and industry. Third, in the 2020s the topic ranked high as related laws were revised, heralding a new change in the private investigation system. It is therefore hoped that the government will move in a direction that greatly helps meet the public's demand for security by strengthening the protection of citizens' rights, minimizing conflicts between occupational domains, and addressing the limitations of national security.
13

Zhu, Tianqing, Gang Li, Wanlei Zhou, and Philip S. Yu. "Differentially Private Data Publishing and Analysis: A Survey." IEEE Transactions on Knowledge and Data Engineering 29, no. 8 (August 1, 2017): 1619–38. http://dx.doi.org/10.1109/tkde.2017.2697856.

14

Hamza, Rafik, Alzubair Hassan, Awad Ali, Mohammed Bakri Bashir, Samar M. Alqhtani, Tawfeeg Mohmmed Tawfeeg, and Adil Yousif. "Towards Secure Big Data Analysis via Fully Homomorphic Encryption Algorithms." Entropy 24, no. 4 (April 6, 2022): 519. http://dx.doi.org/10.3390/e24040519.

Abstract:
Privacy-preserving techniques allow private information to be used without compromising privacy. Most encryption algorithms, such as the Advanced Encryption Standard (AES) algorithm, cannot perform computational operations on encrypted data without first applying the decryption process. Homomorphic encryption algorithms provide innovative solutions to support computations on encrypted data while preserving the content of private information. However, these algorithms have some limitations, such as computational cost as well as the need for modifications for each case study. In this paper, we present a comprehensive overview of various homomorphic encryption tools for Big Data analysis and their applications. We also discuss a security framework for Big Data analysis while preserving privacy using homomorphic encryption algorithms. We highlight the fundamental features and tradeoffs that should be considered when choosing the right approach for Big Data applications in practice. We then present a comparison of popular current homomorphic encryption tools with respect to these identified characteristics. We examine the implementation results of various homomorphic encryption toolkits and compare their performances. Finally, we highlight some important issues and research opportunities. We aim to anticipate how homomorphic encryption technology will be useful for secure Big Data processing, especially to improve the utility and performance of privacy-preserving machine learning.
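As a toy illustration of the core idea of computing on encrypted data (this is not one of the schemes surveyed in the paper, and it is not secure): textbook RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product. The tiny demonstration key below uses the classic example primes p=61, q=53 and exists purely to show the homomorphic property.

```python
# Toy illustration (NOT secure): textbook RSA is multiplicatively
# homomorphic, i.e. Enc(a) * Enc(b) mod n decrypts to a * b mod n.
# Real homomorphic-encryption schemes (e.g. BFV, CKKS) are far more
# involved; these small parameters are for demonstration only.
n, e, d = 3233, 17, 2753  # p=61, q=53; classic textbook RSA key

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n  # computed on ciphertexts only
recovered = dec(product_cipher)         # equals (a * b) % n = 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what enables the Big Data analyses the survey discusses.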
15

Oyekan, Basirat. "DEVELOPING PRIVACY-PRESERVING FEDERATED LEARNING MODELS FOR COLLABORATIVE HEALTH DATA ANALYSIS ACROSS MULTIPLE INSTITUTIONS WITHOUT COMPROMISING DATA SECURITY." Journal of Knowledge Learning and Science Technology ISSN: 2959-6386 (online) 3, no. 3 (August 25, 2024): 139–64. http://dx.doi.org/10.60087/jklst.vol3.n3.p139-164.

Abstract:
Federated learning is an emerging distributed machine learning technique that enables collaborative training of models among devices and servers without exchanging private data. However, several privacy and security risks associated with federated learning need to be addressed for safe adoption. This review provides a comprehensive analysis of the key threats in federated learning and the mitigation strategies used to overcome them. Major threats include model inversion, membership inference, data attribute inference, and model extraction attacks. Model inversion aims to predict the raw data values from the model parameters, which can breach participant privacy. Membership inference determines whether a data sample was used to train the model. Data attribute inference discovers private attributes such as age and gender from the model, whereas model extraction steals intellectual property by reconstructing the global model from participant updates. The review then discusses various mitigation strategies proposed for these threats. Controlled-use protections such as secure multiparty computation, homomorphic encryption, and confidential computing enable privacy-preserving computations on encrypted data without decryption. Differential privacy adds noise to query responses to limit privacy breaches from aggregate statistics. Privacy-aware objectives modify the loss function to learn representations that protect privacy. Information obfuscation strategies hide inferences about training data.
16

Miranda-Pascual, Àlex, Patricia Guerra-Balboa, Javier Parra-Arnau, Jordi Forné, and Thorsten Strufe. "SoK: Differentially Private Publication of Trajectory Data." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 496–516. http://dx.doi.org/10.56553/popets-2023-0065.

Abstract:
Trajectory analysis holds many promises, from improvements in traffic management to routing advice or infrastructure development. However, learning users' paths is extremely privacy-invasive. Therefore, there is a necessity to protect trajectories such that we preserve the global properties, useful for analysis, while specific and private information of individuals remains inaccessible. Trajectories, however, are difficult to protect, since they are sequential, highly dimensional, correlated, bound to geophysical restrictions, and easily mapped to semantic points of interest. This paper aims to establish a systematic framework on protective masking and synthetic-generation measures for trajectory databases with syntactic and differentially private (DP) guarantees, including also utility properties, derived from ideas and limitations of existing proposals. To reach this goal, we systematize the utility metrics used throughout the literature, deeply analyze the DP granularity notions, explore and elaborate on the state of the art on privacy-enhancing mechanisms and their problems, and expose the main limitations of DP notions in the context of trajectories.
17

Ferrara, Pietro, Luca Olivieri, and Fausto Spoto. "Static Privacy Analysis by Flow Reconstruction of Tainted Data." International Journal of Software Engineering and Knowledge Engineering 31, no. 07 (July 2021): 973–1016. http://dx.doi.org/10.1142/s0218194021500303.

Abstract:
Software security vulnerabilities and leakages of private information are two of the main issues in modern software systems. Several different approaches, ranging from design techniques to run-time monitoring, have been applied to prevent, detect and isolate such vulnerabilities. Static taint analysis has been particularly successful in detecting injection vulnerabilities at compile time. However, its extension to detect leakages of sensitive data has been only partially investigated. In this paper, we introduce BackFlow, a backward flow reconstructor that, starting from the results of a generic taint analysis engine, reconstructs the flow of tainted data. If successful, BackFlow provides full information about the flow that such data (e.g. private information or user input) traversed inside the program before reaching a sensitive point (e.g. Internet communication or execution of an SQL query). Such information is needed to extend taint analysis to privacy analyses, since in such a scenario it is important to know which exact type of sensitive data flows to what type of communication channels. BackFlow has been implemented in Julia (an industrial static analyzer for Java, Android and .NET programs), and applied to WebGoat and different benchmarks to detect both injections and privacy issues. The experimental results prove that BackFlow is able to reconstruct the flow of tainted data for most of the true positives, it scales up to industrial applications, and it can be effectively applied to privacy analysis, such as the detection of sensitive data leaks or compliance with a data regulation.
18

Shen, Wenquan, Shuhui Wu, and Yuanhong Tao. "CLDP-pFedAvg: Safeguarding Client Data Privacy in Personalized Federated Averaging." Mathematics 12, no. 22 (November 20, 2024): 3630. http://dx.doi.org/10.3390/math12223630.

Abstract:
The personalized federated averaging algorithm integrates a federated averaging approach with a model-agnostic meta-learning technique. In real-world heterogeneous scenarios, it is essential to implement additional privacy protection techniques for personalized federated learning. We propose a novel differentially private federated meta-learning scheme, CLDP-pFedAvg, which achieves client-level differential privacy guarantees for federated learning involving large heterogeneous clients. The client-level differentially private meta-based FedAvg algorithm enables clients to upload local model parameters for aggregation securely. Furthermore, we provide a convergence analysis of the clipping-enabled differentially private meta-based FedAvg algorithm. The proposed strategy is evaluated across various datasets, and the findings indicate that our approach offers improved privacy protection while maintaining model accuracy.
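The clipping-plus-noise recipe behind client-level differentially private averaging can be sketched as follows. This is an illustration of the general idea, not the paper's CLDP-pFedAvg algorithm; the function and parameter names are ours. Each client update is clipped to a maximum L2 norm (bounding any single client's influence), the clipped updates are averaged, and Gaussian noise proportional to the clipping bound is added.

```python
import math
import random

def clip(update, max_norm):
    # Scale the client update down so its L2 norm is at most max_norm.
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def private_average(updates, max_norm, noise_mult, rng):
    # Clip every client update, average, then add Gaussian noise whose
    # standard deviation is proportional to max_norm (the sensitivity
    # of the average to any single client) and the noise multiplier.
    clipped = [clip(u, max_norm) for u in updates]
    k = len(updates)
    sigma = noise_mult * max_norm / k
    return [sum(col) / k + rng.gauss(0.0, sigma) for col in zip(*clipped)]

rng = random.Random(0)
updates = [[3.0, 4.0], [0.6, 0.8], [-1.0, 0.0]]
avg = private_average(updates, max_norm=1.0, noise_mult=0.0, rng=rng)
# With noise_mult=0 this reduces to plain clipped averaging: [3, 4] has
# norm 5 and clips to [0.6, 0.8]; the other two updates are unchanged.
```

In a real deployment the noise multiplier is chosen via a privacy accountant to meet a target (ε, δ) budget over all training rounds.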
19

Li, Bing, Hong Zhu, and Meiyi Xie. "Releasing Differentially Private Trajectories with Optimized Data Utility." Applied Sciences 12, no. 5 (February 25, 2022): 2406. http://dx.doi.org/10.3390/app12052406.

Abstract:
The ubiquity of GPS-enabled devices has resulted in an abundance of data about individual trajectories. Releasing trajectories enables a range of data analysis tasks, such as urban planning, but it also poses a risk in compromising individual location privacy. To tackle this issue, a number of location privacy protection algorithms are proposed. However, existing works are primarily concerned with maintaining the trajectory data geographic utility and neglect the semantic utility. Thus, many data analysis tasks relying on utility, e.g., semantic annotation, suffer from poor performance. Furthermore, the released trajectories are vulnerable to location inference attacks and de-anonymization attacks due to insufficient privacy guarantee. In this paper, to design a location privacy protection algorithm for releasing an offline trajectory dataset to potentially untrusted analyzers, we propose a utility-optimized and differentially private trajectory synthesizer (UDPT) with two novel features. First, UDPT simultaneously preserves both geographical utility and semantic utility by solving a data utility optimization problem with a genetic algorithm. Second, UDPT provides a formal and provable guarantee against privacy attacks by synthesizing obfuscated trajectories in a differentially private manner. Extensive experimental evaluations on real-world datasets demonstrate UDPT’s outperformance against state-of-the-art works in terms of data utility and privacy.
20

AL-Mafrji, Ahmad Abdullah Mohammed, and Ahmed Burhan Mohammed. "Analysis of Patients Data Using Fuzzy Expert System." Webology 19, no. 1 (January 20, 2022): 4027–34. http://dx.doi.org/10.14704/web/v19i1/web19265.

Abstract:
Many problems face developed and developing countries in the medical field; the most important of these is the analysis and diagnosis of patient data in government and private hospitals. This is due to the lack of experience of medical staff, especially new staff, which affects the provision of correct medical services to patients. It is no secret that these countries are making great efforts to overcome these problems. The study focuses on the use of a fuzzy expert system to analyze patient data based on age and type of review in order to reach an analysis result (intensive care, medium care, no care). This system helps to give advice and a good analysis of patient data, which can increase the speed with which new and inexperienced medical staff gain experience in this field.
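A hypothetical sketch of how such a rule base might map age and type of review to one of the three care levels. The membership function and thresholds below are invented for illustration; they are not taken from the paper.

```python
def triage(age: int, review_type: str) -> str:
    # Hypothetical rule base: membership in "elderly" rises linearly
    # between ages 50 and 80; the review type sets an urgency degree.
    elderly = min(1.0, max(0.0, (age - 50) / 30.0))
    urgency = {"routine": 0.0, "follow-up": 0.3, "emergency": 1.0}[review_type]
    score = max(elderly, urgency)  # fuzzy OR of the two memberships
    if score >= 0.8:
        return "intensive care"
    if score >= 0.4:
        return "medium care"
    return "no care"

print(triage(85, "routine"))    # elderly membership 1.0
print(triage(30, "emergency"))  # urgency 1.0
print(triage(65, "routine"))    # elderly membership 0.5
```

A real fuzzy expert system would use overlapping membership sets and a defuzzification step, but the structure, fuzzify inputs, combine with rules, map to a crisp output, is the same.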
21

Xu, Xiaolong, Xuan Zhao, Feng Ruan, Jie Zhang, Wei Tian, Wanchun Dou, and Alex X. Liu. "Data Placement for Privacy-Aware Applications over Big Data in Hybrid Clouds." Security and Communication Networks 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/2376484.

Abstract:
Nowadays, a large number of organizations choose to deploy their applications to cloud platforms, especially in the big data era. The hybrid cloud is currently one of the most popular computing paradigms for hosting privacy-aware applications, driven by the requirements of privacy protection and cost saving. However, it remains a challenge to place data while considering both the energy consumption of the private cloud and the cost of renting public cloud services. In view of this challenge, a cost- and energy-aware data placement method, named CEDP, for privacy-aware applications over big data in hybrid clouds is proposed. Technically, a formalized analysis of cost, access time, and energy consumption is conducted in the hybrid cloud environment. A corresponding data placement method is then designed to save the cost of renting public cloud services and the energy of task execution within the private cloud. Experimental evaluations validate the efficiency and effectiveness of the proposed method.
22

Ji, Tianxi, Pan Li, Emre Yilmaz, Erman Ayday, Yanfang (Fanny) Ye, and Jinyuan Sun. "Differentially private binary- and matrix-valued data query." Proceedings of the VLDB Endowment 14, no. 5 (January 2021): 849–62. http://dx.doi.org/10.14778/3446095.3446106.

Abstract:
Differential privacy has been widely adopted to release continuous- and scalar-valued information on a database without compromising the privacy of individual data records in it. The problem of querying binary- and matrix-valued information on a database in a differentially private manner has rarely been studied. However, binary- and matrix-valued data are ubiquitous in real-world applications, whose privacy concerns may arise under a variety of circumstances. In this paper, we devise an exclusive-or (XOR) mechanism that perturbs binary- and matrix-valued query results by conducting an XOR operation on the query result with calibrated noise attributed to a matrix-valued Bernoulli distribution. We first rigorously analyze the privacy and utility guarantees of the proposed XOR mechanism. Then, to generate the parameters of the matrix-valued Bernoulli distribution, we develop a heuristic approach to minimize the expected square query error rate under the ε-differential privacy constraint. Additionally, to address the intractability of calculating the probability density function (PDF) of this distribution and to efficiently generate samples from it, we adapt an exact Hamiltonian Monte Carlo based sampling scheme. Finally, we experimentally demonstrate the efficacy of the XOR mechanism by considering binary data classification and social network analysis, all in a differentially private manner. Experiment results show that the XOR mechanism notably outperforms other state-of-the-art differentially private methods in terms of utility (such as classification accuracy and F1 score), and even achieves utility comparable to the non-private mechanisms.
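A simplified sketch of the XOR idea: XOR the binary query result with Bernoulli noise whose flip probability is calibrated to ε. Note the assumption: the paper's mechanism uses a correlated, matrix-valued Bernoulli distribution, whereas here each bit is flipped independently (as in classic randomized response), which gives ε-DP per bit.

```python
import math
import random

def xor_mechanism(bits, epsilon, rng):
    # Simplified XOR mechanism: XOR each bit of the query result with an
    # independent Bernoulli noise bit. Flip probability p = 1/(1 + e^eps)
    # gives epsilon-DP per bit; the paper's mechanism instead draws
    # correlated, matrix-valued Bernoulli noise.
    p = 1.0 / (1.0 + math.exp(epsilon))
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

rng = random.Random(0)
answer = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = xor_mechanism(answer, epsilon=2.0, rng=rng)
```

As ε grows, the flip probability approaches 0 and the released bits converge to the true query result.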
23

Ariful Islam, Md, and Rezwanul Hasan Rana. "Determinants of bank profitability for the selected private commercial banks in Bangladesh: a panel data analysis." Banks and Bank Systems 12, no. 3 (October 18, 2017): 179–92. http://dx.doi.org/10.21511/bbs.12(3-1).2017.03.

Abstract:
This study investigates the determinants of profitability of fifteen selected private commercial banks in Bangladesh over the period 2005–2015. The study emphasizes the internal factors that affect bank profitability. This research uses panel data to explore the impact of nonperforming loans, the cost-to-income ratio, the loan-to-deposit ratio, commission fees, the cost of funds, and operating expenses on profitability indicators of banks such as return on assets and return on equity. The experimental outcomes provide strong evidence that nonperforming loans (NPL) and operating expenses have a significant effect on profitability. Moreover, the results show that a higher NPL may lead to less profit due to the provision for classified loans. Again, a higher loan-to-deposit (LD) ratio and cost of funds contribute towards profitability, but their impacts are not significant in the private commercial banks of Bangladesh.
24

AL-SAGGAF, YESLAM. "The Use of Data Mining by Private Health Insurance Companies and Customers’ Privacy". Cambridge Quarterly of Healthcare Ethics 24, no. 3 (10.06.2015): 281–92. http://dx.doi.org/10.1017/s0963180114000607.

Annotation:
Abstract: This article examines privacy threats arising from the use of data mining by private Australian health insurance companies. Qualitative interviews were conducted with key experts, and Australian governmental and nongovernmental websites relevant to private health insurance were searched. Using Rationale, a critical thinking tool, the themes and considerations elicited through this empirical approach were developed into an argument about the use of data mining by private health insurance companies. The argument is followed by an ethical analysis guided by classical philosophical theories—utilitarianism, Mill’s harm principle, Kant’s deontological theory, and Helen Nissenbaum’s contextual integrity framework. Both the argument and the ethical analysis find the use of data mining by private health insurance companies in Australia to be unethical. Although private health insurance companies in Australia cannot use data mining for risk rating to cherry-pick customers and cannot use customers’ personal information for unintended purposes, this article nonetheless concludes that the secondary use of customers’ personal information and the absence of customers’ consent still suggest that the use of data mining by private health insurance companies is wrong.
25

Avella-Medina, Marco. "The Role of Robust Statistics in Private Data Analysis". CHANCE 33, no. 4 (01.10.2020): 37–42. http://dx.doi.org/10.1080/09332480.2020.1847958.
26

Utaliyeva, Assem, and Yoon-Ho Choi. "Two-Fold Differentially Private Mechanism for Big Data Analysis". Journal of Korean Institute of Communications and Information Sciences 49, no. 3 (31.03.2024): 393–400. http://dx.doi.org/10.7840/kics.2024.49.3.393.
27

Wang, Chunxia, Qiuyu Zhang, and Yan Yan. "Differentially Private Feature Selection Based on Dynamic Relevance for Correlated Data". 電腦學刊 34, no. 1 (February 2023): 157–73. http://dx.doi.org/10.53106/199115992023023401012.

Annotation:
Traditional feature selection methods are concerned only with high relevance between selected features and classes and low redundancy among features, ignoring their interrelations, which partly weakens classification performance. This paper develops a dynamic relevance strategy to measure the dependency among them, where the relevance of each candidate feature is updated dynamically when a new feature is selected. Protecting sensitive information has become an important issue when executing feature selection. However, existing differentially private machine learning algorithms have seldom considered the impact of data correlation, which may cause more privacy leakage than expected. Therefore, the paper proposes a differentially private feature selection method based on a dynamic relevance measure, namely DPFSDR. Firstly, as a correlation analysis technique, a weighted undirected graph model is constructed via the correlated degree, which can reduce the dataset’s dimension and correlated sensitivity. Secondly, as a feature selection criterion, the F-score with differential privacy is adopted to measure the importance of each feature. Finally, to evaluate the effectiveness of feature selection, a differentially private SVM combined with the dynamic relevance measure is utilized to choose features. Experimental results show that the proposed DPFSDR algorithm can effectively obtain the optimal feature subset and improve data utility while preserving data privacy.
28

Batool, Sumaira, Imran Abbs, Fatima Farooq, and Ishtiaq Ahmad. "Comparative Efficiency Analysis of Public and Private Colleges of Multan District: Data Envelope Approach Analysis". Review of Economics and Development Studies 2, no. 1 (30.06.2016): 69–80. http://dx.doi.org/10.26710/reads.v2i1.125.

Annotation:
The purpose of this paper is to evaluate the efficiency of public and private sector colleges in Multan district. We use output-oriented data envelopment analysis (DEA) to measure the technical and scale efficiency of a sample of 40 colleges, using data for the year 2014. DEA, the most popular technique for measuring the relative efficiency of non-profit organizations in the absence of prices or relative values of educational outputs, is employed to compare the efficiency of both types of colleges; moreover, it can handle multiple inputs and outputs with great ease. As public and private colleges work under similar environmental conditions, we use a single frontier incorporating four educational inputs and four outputs. The results demonstrate that private colleges lag behind public colleges in terms of CRS and VRS technical efficiency scores and scale efficiency scores. Our study of colleges contrasts with the dominant paradigm that private colleges outperform state-run colleges.
29

Chen, Z. F., J. J. Shuai, F. J. Tian, W. Y. Li, S. H. Zang, and X. Z. Zhang. "An Improved Privacy Protection Algorithm for Multimodal Data Fusion". Scientific Programming 2022 (23.08.2022): 1–7. http://dx.doi.org/10.1155/2022/4189148.

Annotation:
With the rapid development of Internet technology, the use and sharing of data have brought great opportunities and challenges to mankind. On the one hand, the development of data sharing and analysis technology has promoted the improvement of economic and social benefits; on the other hand, protecting private information has become an urgent issue in the Internet era. In addition, the amount and variety of data are increasing, while most current algorithms can only encrypt a single type of small-scale data and thus cannot meet the demands of the current data environment. It is therefore necessary to study privacy protection algorithms for multimodal data fusion. To improve security, this paper combines an improved version of the traditional spatial steganography algorithm (LSB matching) and an improved version of the traditional transform-domain steganography algorithm (DCT) with the AES encryption algorithm after modifying its S-box, and then combines these with image stitching technology, so as to realize a safe and reliable privacy protection algorithm based on multimodal information fusion. The algorithm achieves hidden communication of private information, ensuring that the receiver can accurately recover the private information during transmission while greatly improving the security of private information transmission.
30

Zhang, Hao, Yewei Xia, Yixin Ren, Jihong Guan, and Shuigeng Zhou. "Differentially Private Nonlinear Causal Discovery from Numerical Data". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (26.06.2023): 12321–28. http://dx.doi.org/10.1609/aaai.v37i10.26452.

Annotation:
Recently, several methods such as private ANM, EM-PC and Priv-PC have been proposed to perform differentially private causal discovery in various scenarios, including bivariate, multivariate Gaussian and categorical cases. However, little effort has been devoted to private nonlinear causal discovery from numerical data. This work addresses that problem. To this end, we propose a method to infer nonlinear causal relations from observed numerical data using a regression-based conditional independence test (RCIT) that consists of kernel ridge regression (KRR) and the Hilbert-Schmidt independence criterion (HSIC) with permutation approximation. A sensitivity analysis for RCIT is given, and a private constraint-based causal discovery framework with a differential privacy guarantee is developed. Extensive simulations and real-world experiments for both conditional independence testing and causal discovery show that our method is effective in handling nonlinear numerical cases and easy to implement. The source code of our method and data are available at https://github.com/Causality-Inference/PCD.
31

Kang, Shujing, Xia Lin, Kaiqi Yang, Jianing Sun, and Daiteng Ren. "Data Elements Empowering Breakthrough Innovation Enterprises: A Current Analysis and Improvement Pathways". Journal of Management and Social Development 1, no. 3 (May 2024): 221–26. http://dx.doi.org/10.62517/jmsd.202412332.

Annotation:
In the era of digitization, data elements have become a significant force driving economic and social development. Their full utilization can help enterprises improve decision-making efficiency and prediction accuracy, as well as enhance production efficiency and profitability, and they play an increasingly crucial role in corporate R&D and innovation activities. Zhejiang, a major economic province in China, relies heavily on private enterprises as the backbone of its economic development. However, faced with increasingly fierce competition and rapidly changing market environments, Zhejiang's private enterprises need new capabilities for breakthrough innovation to maintain their competitive advantage. This article explores how data elements empower breakthrough innovation in Zhejiang's private enterprises, analyzing in depth the current status of data-driven innovation among them. The study finds that Zhejiang's private enterprises have achieved remarkable results in product development, market expansion, supply chain management, and other areas by relying on modern information technologies such as big data and cloud computing, yet they also face challenges such as data silos and data security. To address these issues, the article proposes improvement paths such as strengthening data sharing, improving data governance, and cultivating innovative talent to promote higher-quality breakthrough innovation in Zhejiang's private enterprises.
32

de Jong, Jins, Bart Kamphorst, and Shannon Kroes. "Differentially Private Block Coordinate Descent for Linear Regression on Vertically Partitioned Data". Journal of Cybersecurity and Privacy 2, no. 4 (09.11.2022): 862–81. http://dx.doi.org/10.3390/jcp2040044.

Annotation:
We present a differentially private extension of the block coordinate descent algorithm by means of objective perturbation. The algorithm iteratively performs linear regression in a federated setting on vertically partitioned data. In addition to a privacy guarantee, we derive a utility guarantee; a tolerance parameter indicates how much the differentially private regression may deviate from the analysis without differential privacy. The algorithm’s performance is compared with that of the standard block coordinate descent algorithm on both artificial test data and real-world data. We find that the algorithm is fast and able to generate practical predictions with single-digit privacy budgets, albeit with some accuracy loss.
33

Jia, Dongning, Bo Yin, and Xianqing Huang. "Association Analysis of Private Information in Distributed Social Networks Based on Big Data". Wireless Communications and Mobile Computing 2021 (04.06.2021): 1–12. http://dx.doi.org/10.1155/2021/1181129.

Annotation:
As people’s awareness of privacy leakage continues to grow and the demand for privacy protection increases, effective methods for protecting privacy are urgently needed. Research on location-based privacy services has produced many results and can effectively protect the location privacy of users; however, research on other settings that require privacy protection remains scarce, and existing privacy protection systems need improvement. Addressing the shortcomings of traditional differential privacy protection, this paper designs a differential privacy protection mechanism based on interactive social networks. We prove that this mechanism meets the conditions of differential privacy and prevents the leakage of private information to the greatest extent possible. We establish a network evolution game model in which users only play games with connected users. Based on the game model, a dynamic equation is derived to express how the proportion of users adopting privacy protection settings in the network changes over time, and the impact of the benefit-cost ratio on the evolutionarily stable state is analyzed. A real data set is used to verify the feasibility of the model. Experimental results show that the model can effectively describe the dynamic evolution of social network users’ privacy protection behaviors. This model can help social platforms design effective security services and incentive mechanisms, encourage users to adopt privacy protection settings, and promote the deployment of privacy protection mechanisms in the network.
34

Swanberg, Marika, Ira Globus-Harris, Iris Griffith, Anna Ritz, Adam Groce, and Andrew Bray. "Improved Differentially Private Analysis of Variance". Proceedings on Privacy Enhancing Technologies 2019, no. 3 (01.07.2019): 310–30. http://dx.doi.org/10.2478/popets-2019-0049.

Annotation:
Abstract: Hypothesis testing is one of the most common types of data analysis and forms the backbone of scientific research in many disciplines. Analysis of variance (ANOVA) in particular is used to detect dependence between a categorical and a numerical variable. Here we show how one can carry out this hypothesis test under the restrictions of differential privacy. We show that the F-statistic, the optimal test statistic in the public setting, is no longer optimal in the private setting, and we develop a new test statistic F1 with much higher statistical power. We show how to rigorously compute a reference distribution for the F1 statistic and give an algorithm that outputs accurate p-values. We implement our test and experimentally optimize several parameters. We then compare our test to the only previous work on private ANOVA testing, using the same effect size as that work. We see an order of magnitude improvement, with our test requiring only 7% as much data to detect the effect.
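Private test statistics of this kind are typically built on the Laplace mechanism, which adds noise scaled to the statistic's sensitivity divided by ε. A minimal sketch of that primitive (this is the generic building block, not the paper's F1 statistic; the sensitivity argument is a caller-supplied assumption):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = random.random() - 0.5
    if u == -0.5:  # guard the measure-zero endpoint that would hit log(0)
        u = 0.0
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_statistic(value: float, sensitivity: float, epsilon: float) -> float:
    """Release a statistic under epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    return value + laplace_noise(sensitivity / epsilon)
```

The hard part, which the paper addresses, is that the reference distribution of the noisy statistic changes, so p-values must be recomputed against it rather than the classical tables.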
35

Peng, Shin-yi. "Public–Private Interactions in Privacy Governance". Laws 11, no. 6 (26.10.2022): 80. http://dx.doi.org/10.3390/laws11060080.

Annotation:
This paper addresses the possible roles of private actors when privacy paradigms are in flux. If the traditional “informed consent”-based government-dominated approaches are ill-suited to the big data ecosystem, can private governance fill the gap created by state regulation? In reality, how is public–private partnership being implemented in the privacy protection frameworks? This paper uses cases from APEC’s Cross-Border Privacy Rules (CBPR) and the EU’s General Data Protection Regulation (GDPR) as models for exploration. The analysis in this paper demonstrates the fluidity of interactions across public and private governance realms. Self-regulation and state regulation are opposing ends of a regulatory continuum, with CBPR-type “collaboration” and GDPR-type “coordination” falling somewhere in the middle. The author concludes that there is an evident gap between private actors’ potential governing functions and their current roles in privacy protection regimes. Looking to the future, technological developments and market changes call for further public–private convergence in privacy governance, allowing the public authority and the private sector to simultaneously reshape global privacy norms.
36

Mayuri Arun Gaikwad. "Homomorphic Encryption and Secure Multi-Party Computation: Mathematical Tools for Privacy-Preserving Data Analysis in the Cloud". Panamerican Mathematical Journal 33, no. 2 (04.07.2024): 75–88. http://dx.doi.org/10.52783/pmj.v33.i2.876.

Annotation:
As cloud computing becomes ubiquitous and data are stored and processed remotely, protecting the privacy and security of sensitive data has become critically important. Homomorphic encryption and secure multi-party computation (MPC) are two mathematical tools that offer strong ways to analyze data in the cloud while protecting privacy. Homomorphic encryption allows computations to be performed directly on encrypted data, so data can be processed securely without ever being decrypted; data thus remain protected while operations are carried out, safe from unauthorized access or inspection. Cloud service providers can use homomorphic encryption to perform various kinds of analysis on encrypted data, such as aggregation, search, and machine learning, without revealing the underlying private information. Secure multi-party computation protects privacy in situations where multiple parties jointly analyze data: by spreading computations across multiple entities, each of which holds a piece of the data, MPC enables joint analysis without exposing individual datasets. MPC uses cryptographic protocols to ensure that computations are carried out without revealing private inputs, letting multiple parties collaborate on data analysis while preserving privacy. These tools apply to many data analysis tasks in the cloud, including predictive modeling, machine learning, and statistical analysis, and they make it safe for different groups, such as businesses, academics, and individuals, to share and collaborate on data without compromising data protection. Practical deployment and scalability challenges remain for both homomorphic encryption and secure MPC, mostly related to computational overhead and efficiency. The main goal of ongoing research is to create improved methods and implementations that make these techniques more efficient and easier to use in practice.
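The MPC side can be illustrated with additive secret sharing, the building block behind many secure-sum protocols. A toy sketch under stated assumptions (the modulus choice and helper names are illustrative, not from the article):

```python
import random

MOD = 2 ** 61 - 1  # a large prime modulus for the sharing field (illustrative choice)

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list) -> int:
    """Recombine shares; any proper subset is uniformly random and reveals nothing."""
    return sum(shares) % MOD

def secure_sum(secrets: list, n_parties: int = 3) -> int:
    """Each input is shared; party i adds the i-th shares locally, then partials combine."""
    all_shares = [share(s, n_parties) for s in secrets]
    partial = [sum(col) % MOD for col in zip(*all_shares)]
    return reconstruct(partial)
```

No party ever sees another party's raw input, yet the combined partial sums reconstruct the exact total, which is the essence of the joint-analysis-without-disclosure property described above.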
37

Basha, M. John, T. Satyanarayana Murthy, A. S. Valarmathy, Ahmed Radie Abbas, Djuraeva Gavhar, R. Rajavarman, and N. Parkunam. "Privacy-Preserving Data Mining and Analytics in Big Data". E3S Web of Conferences 399 (2023): 04033. http://dx.doi.org/10.1051/e3sconf/202339904033.

Annotation:
Privacy concerns have received more attention as Big Data has spread. The difficulty of striking a balance between the value of data and individual privacy has made privacy-preserving data mining and analytics a crucial area of research. This abstract gives an overview of the major ideas, methods, and developments in privacy-preserving data mining and analytics in the context of Big Data. Privacy-preserving data mining aims to glean useful insights from huge databases while shielding the private data of individuals. Sharing or pooling data, as is common in traditional data mining, can have serious privacy implications; privacy-preserving strategies instead focus on creating procedures and algorithms that enable analysis without jeopardizing personal information. In sum, privacy-preserving data mining and analytics in the Big Data age bring important challenges and opportunities, underscoring the value of privacy in the era of data-driven decision-making and the need for effective privacy-preserving solutions that safeguard sensitive personal data while still enabling insightful analysis of huge datasets.
38

Wood, Alexander, Vladimir Shpilrain, Kayvan Najarian, and Delaram Kahrobaei. "Private naive bayes classification of personal biomedical data: Application in cancer data analysis". Computers in Biology and Medicine 105 (February 2019): 144–50. http://dx.doi.org/10.1016/j.compbiomed.2018.11.018.
39

Bălă, Raluca-Maria, and Elena-Maria Prada. "Migration and Private Consumption in Europe: A Panel Data Analysis". Procedia Economics and Finance 10 (2014): 141–49. http://dx.doi.org/10.1016/s2212-5671(14)00287-1.
40

Jiang, Yangdi, Yi Liu, Xiaodong Yan, Anne-Sophie Charest, Linglong Kong, and Bei Jiang. "Analysis of Differentially Private Synthetic Data: A Measurement Error Approach". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 19 (24.03.2024): 21206–13. http://dx.doi.org/10.1609/aaai.v38i19.30114.

Annotation:
Differentially private (DP) synthetic datasets have been receiving significant attention from academia, industry, and government. However, little is known about how to perform statistical inference using DP synthetic datasets. Naive approaches that do not take into account the induced uncertainty due to the DP mechanism will result in biased estimators and invalid inferences. In this paper, we present a class of maximum likelihood estimator (MLE)-based easy-to-implement bias-corrected DP estimators with valid asymptotic confidence intervals (CI) for parameters in regression settings, by establishing the connection between additive DP mechanisms and measurement error models. Our simulation shows that our estimator has comparable performance to the widely used sufficient statistic perturbation (SSP) algorithm in some scenarios but with the advantage of releasing a synthetic dataset and obtaining statistically valid asymptotic CIs, which can achieve better coverage when compared to the naive CIs obtained by ignoring the DP mechanism.
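The measurement-error connection can be made concrete for a simple moment: if values are released with additive Laplace(b) noise, the noise variance 2b² is known and can be subtracted from the naive sample variance. A hedged method-of-moments sketch, illustrating the idea rather than the paper's MLE-based estimator:

```python
import statistics

def debiased_variance(noisy_values: list, laplace_scale: float) -> float:
    """Correct a sample variance for known additive Laplace noise.

    Laplace(0, b) noise has variance 2*b**2; because the noise is independent
    of the data, Var(noisy) = Var(true) + 2*b**2, so subtracting the known
    noise variance yields an unbiased variance estimate.
    """
    return statistics.variance(noisy_values) - 2.0 * laplace_scale ** 2
```

Ignoring the correction, as the "naive approaches" criticized above do, systematically inflates variance estimates and invalidates downstream inference.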
41

Xing, Hongjun, and Darchia Maia. "Analysis on the Development Strategy of Private Education Based on Data Mining Algorithm". Mathematical Problems in Engineering 2022 (11.07.2022): 1–10. http://dx.doi.org/10.1155/2022/2783398.

Annotation:
In order to improve the development of private education, this paper analyzes the current situation of private education using a data mining algorithm and explores the problems in its development. Moreover, the paper combines the semi-parametric product estimation method with parameter estimation and applies the resulting estimator to model-assisted sampling estimation. This enhances the accuracy of the sample estimate and widens the model's field of application while improving on classic generalized regression estimation, and it corrects estimation accuracy relative to the linear assumption. The experimental study shows that the data-mining-based analysis approach to private education development proposed in this work is effective, and development strategies for private education are assessed on this basis.
42

DE CAPITANI DI VIMERCATI, SABRINA, SARA FORESTI, GIOVANNI LIVRAGA, and PIERANGELA SAMARATI. "DATA PRIVACY: DEFINITIONS AND TECHNIQUES". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 20, no. 06 (December 2012): 793–817. http://dx.doi.org/10.1142/s0218488512400247.

Annotation:
The proper protection of data privacy is a complex task that requires a careful analysis of what actually has to be kept private. Several definitions of privacy have been proposed over the years, from traditional syntactic privacy definitions, which capture the protection degree enjoyed by data respondents with a numerical value, to more recent semantic privacy definitions, which take into consideration the mechanism chosen for releasing the data. In this paper, we illustrate the evolution of the definitions of privacy, and we survey some data protection techniques devised for enforcing such definitions. We also illustrate some well-known application scenarios in which the discussed data protection techniques have been successfully used, and present some open issues.
43

Li, Yanshu, and Daowei Zhang. "A Spatial Panel Data Analysis of Tree Planting in the US South". Southern Journal of Applied Forestry 31, no. 4 (01.11.2007): 192–98. http://dx.doi.org/10.1093/sjaf/31.4.192.

Annotation:
Abstract: This study used panel data models with spatial error correlation to analyze private tree planting in the US South from 1955 to 2003. Controlling for statewide fixed effects allows us to disentangle the effect of spatial interaction from that of state heterogeneity and omitted variables. The results show significant spatial interdependence among the southern states in private tree planting. Harvest rates, softwood sawtimber price, income levels, cost of capital, and federal and state cost-share programs are important factors affecting nonindustrial private forestland (NIPF) tree planting. Harvest rates, softwood sawtimber and pulpwood prices, and planting cost are important factors affecting forest industry (FI) tree planting. Finally, the Soil Bank Program has had substitution effects on southern FI tree planting and nonsubsidized NIPF tree planting.
44

Li, Yiwei, Shuai Wang, and Qilong Wu. "Convergence Analysis for Differentially Private Federated Averaging in Heterogeneous Settings". Mathematics 13, no. 3 (02.02.2025): 497. https://doi.org/10.3390/math13030497.

Annotation:
Federated learning (FL) has emerged as a prominent approach for distributed machine learning, enabling collaborative model training while preserving data privacy. However, the presence of non-i.i.d. data and the need for robust privacy protection introduce significant challenges in theoretically analyzing the performance of FL algorithms. In this paper, we present novel theoretical analysis on typical differentially private federated averaging (DP-FedAvg) by judiciously considering the impact of non-i.i.d. data on convergence and privacy guarantees. Our contributions are threefold: (i) We introduce a theoretical framework for analyzing the convergence of DP-FedAvg algorithm by considering different client sampling and data sampling strategies, privacy amplification and non-i.i.d. data. (ii) We explore the privacy–utility tradeoff and demonstrate how client strategies interact with differential privacy to affect learning performance. (iii) We provide extensive experimental validation using real-world datasets to verify our theoretical findings.
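The privacy step in DP-FedAvg-style algorithms is per-client update clipping followed by Gaussian noise. A minimal sketch of that step (the clipping norm and noise multiplier are illustrative parameters; the convergence analysis above concerns how this step interacts with sampling and non-i.i.d. data):

```python
import math
import random

def privatize_update(update, clip_norm: float, noise_multiplier: float):
    """Clip a client update to L2 norm clip_norm, then add per-coordinate Gaussian noise."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]        # bounded sensitivity
    sigma = noise_multiplier * clip_norm         # noise stddev per coordinate
    return [x + random.gauss(0.0, sigma) for x in clipped]
```

Clipping bounds each client's contribution (its sensitivity), which is what lets the added Gaussian noise translate into a differential privacy guarantee after accounting for sampling amplification.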
45

Hasan, Fayyad-Kazan, Kassem-Moussa Sondos, Hejase Hussin J, and Hejase Ale J. "Forensic analysis of private browsing mechanisms: Tracing internet activities". Journal of Forensic Science and Research 5, no. 1 (08.03.2021): 012–19. http://dx.doi.org/10.29328/journal.jfsr.1001022.

Annotation:
Forensic analysts face ever greater challenges when conducting deep investigative analysis of digital devices due to technological progression. Among these are the difficulties of analyzing web browser artefacts, which became more complicated when web browser companies introduced private browsing mode, a feature that aims to protect users’ data during a private browsing session by leaving no traces of data on the local device. To investigate whether web browser companies’ claims about the protection private browsing provides are true, and whether it really leaves no browsing data behind, the most popular desktop browsers on Windows were analyzed after surfing them both regularly and privately. The results in this paper suggest that the privacy provided varies among companies, since evidence could be recovered from some of the browsers but not from others.
46

Balaine, Lorraine, Cathal Buckley, and Emma J. Dillon. "Mixed public-private and private extension systems: A comparative analysis using farm-level data from Ireland". Land Use Policy 117 (June 2022): 106086. http://dx.doi.org/10.1016/j.landusepol.2022.106086.
47

Liu, Haifei, Weishu Li, and Yulian Liu. "Research on the Integration of Data Statistics and Analysis in the Training of Private Equity Talents". Scientific Journal of Economics and Management Research 6, no. 12 (27.12.2024): 225–30. https://doi.org/10.54691/wrjjav50.

Annotation:
The private equity industry is in its early stages of development and has huge potential. Schools training private equity talent face many difficulties, the first being the formulation of a training program: the private equity track in our school's finance program was established to meet the economy's huge demand for private equity talent. This article focuses on the application of data statistics and analysis in the training of private equity talent.
48

Senekane, Makhamisa. "Differentially Private Image Classification Using Support Vector Machine and Differential Privacy". Machine Learning and Knowledge Extraction 1, no. 1 (20.02.2019): 483–91. http://dx.doi.org/10.3390/make1010029.

Annotation:
The ubiquity of data, including multimedia data such as images, enables easy mining and analysis of such data. However, such analysis might involve sensitive data such as medical records (including radiological images) and financial records. Privacy-preserving machine learning aims at analyzing such data without compromising privacy. There are various privacy-preserving data analysis approaches, such as k-anonymity, l-diversity, t-closeness and Differential Privacy (DP). Currently, DP is the gold standard of privacy-preserving data analysis due to its robustness against background knowledge attacks. In this paper, we report a scheme for privacy-preserving image classification using a Support Vector Machine (SVM) and DP. SVM is chosen as the classification algorithm because, unlike variants of artificial neural networks, it converges to a global optimum. The SVM kernels used are linear and Radial Basis Function (RBF), while ε-differential privacy is the DP framework used. The proposed scheme achieved an accuracy of up to 98%. The results obtained underline the utility of using SVM and DP for privacy-preserving image classification.
APA, Harvard, Vancouver, ISO, and other citation styles
49

Kulkarni, Shantanu, Pranjali Bawane, Rahul S.S, M. B. Bagwan, and Shailly Gupta. "Surgical Confidentiality and Data Protection: A Legal Analysis". Journal of Neonatal Surgery 14, no. 2S (10.02.2025): 87–96. https://doi.org/10.52783/jns.v14.1661.

Full text of the source
Annotation:
Surgical confidentiality and data protection are important parts of the medical field because they protect patients' privacy and allow for prompt and effective care. As technology advances, medical data is processed and transmitted across ever more platforms, raising concerns about the safety of private data. This paper examines the legal aspects of the current systems that protect surgical confidentiality and data, focusing on how laws such as HIPAA (Health Insurance Portability and Accountability Act), the GDPR (General Data Protection Regulation), and other jurisdiction-specific rules are evolving. It considers how best to balance medical workers' need to share information with patients' rights to privacy so that care can be given effectively. The study also discusses the ethical dilemmas doctors face and the legal consequences of breaches of trust in surgical care. Telemedicine and electronic health records (EHR), two recent developments in healthcare technology that affect patient data protection, are also covered. Key problems such as informed consent, access controls, and data protection are discussed in detail. The final part of the study proposes legal reforms to make data more secure, which would increase patient trust and help ensure that healthcare systems around the world comply with the rules. By examining how law, healthcare, and technology interact, this study explains the difficulties of, and solutions to, maintaining surgical confidentiality in the modern world.
APA, Harvard, Vancouver, ISO, and other citation styles
50

Deruelle, Thibaud, Veronika Kalouguina, Philipp Trein, and Joël Wagner. "Designing privacy in personalized health: An empirical analysis". Big Data & Society 10, no. 1 (January 2023): 205395172311586. http://dx.doi.org/10.1177/20539517231158636.

Full text of the source
Annotation:
A crucial challenge for personalized health is the handling of individuals' data and specifically the protection of their privacy. Secure storage of personal health data is of paramount importance to convince citizens to collect such data. In this survey experiment, we test individuals' willingness to produce and store personal health data based on different storage options and on whether the data is presented as a common good or a private good. We focus on the nonmedical context with two means of self-producing data: connected devices that record physical activity and genetic tests that appraise disease risks. We use data from a survey experiment fielded in Switzerland in March 2020 and perform regression analyses on a representative sample of Swiss citizens in the French- and German-speaking cantons. Our analysis shows that respondents are more likely to use both apps and tests when their data is framed as a private good stored by individuals themselves. Our results demonstrate that concerns regarding the privacy of personal health data storage trump all other variables when it comes to the willingness to use personalized health technologies: individuals prefer a storage format in which they retain control over the data. Ultimately, this study presents results that can inform decision-makers in designing privacy in personalized health initiatives.
APA, Harvard, Vancouver, ISO, and other citation styles