
Journal articles on the topic 'Shared Data Software'



Consult the top 50 journal articles for your research on the topic 'Shared Data Software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mandrykin, M. U., and A. V. Khoroshilov. "Towards deductive verification of C programs with shared data." Programming and Computer Software 42, no. 5 (September 2016): 324–32. http://dx.doi.org/10.1134/s0361768816050054.

2

Robinson, Patrick G., and James D. Arthur. "Distributed process creation within a shared data space framework." Software: Practice and Experience 25, no. 2 (February 1995): 175–91. http://dx.doi.org/10.1002/spe.4380250205.

3

Martin, Bruce. "Concurrent programming vs. concurrency control: shared events or shared data." ACM SIGPLAN Notices 24, no. 4 (April 1989): 142–44. http://dx.doi.org/10.1145/67387.67426.

4

Baldwin, Adrian, and Simon Shiu. "Enabling shared audit data." International Journal of Information Security 4, no. 4 (February 8, 2005): 263–76. http://dx.doi.org/10.1007/s10207-004-0061-9.

5

Skaf, Hala, François Charoy, and Claude Godart. "Maintaining Shared Workspaces Consistency during Software Development." International Journal of Software Engineering and Knowledge Engineering 9, no. 5 (October 1999): 623–42. http://dx.doi.org/10.1142/s0218194099000334.

Abstract:
The development of large software is always done by teams of people working together and struggling to produce quality software within their budget. Each person in these teams generally knows his job and wants to do it without being bothered by other people. However, when people work towards a common goal they have to exchange data and create dependencies between each other regarding these data. If these people have to follow a process, cooperating and synchronizing with co-workers while trying to reach one's own goal becomes too difficult to manage. This may lead to frustration, lower productivity and reluctance to follow the predefined process. This is why some support is needed to avoid common mistakes that occur when people exchange data. In this paper, a hybrid approach to support cooperation is presented. The originality of this approach is its ability to enforce general properties on cooperative interactions while using the semantics of applications to fit particular situations or requirements. This paper briefly outlines the general properties enforced on activity interactions. It describes in detail the semantic rules that control activity results, the impact of cooperation on these rules and how both dimensions interact.
6

Saifan, Ahmad A., and Zainab Lataifeh. "Privacy preserving defect prediction using generalization and entropy-based data reduction." Intelligent Data Analysis 25, no. 6 (October 29, 2021): 1369–405. http://dx.doi.org/10.3233/ida-205504.

Abstract:
The software engineering community produces data that can be analyzed to enhance the quality of future software products, and data regarding software defects can be used by data scientists to create defect predictors. However, sharing such data raises privacy concerns, since sensitive software features are usually considered as business assets that should be protected in accordance with the law. Early research efforts on protecting the privacy of software data found that applying conventional data anonymization to mask sensitive attributes of software features degrades the quality of the shared data. In addition, data produced by such approaches is not immune to attacks such as inference and background knowledge attacks. This research proposes a new approach to sharing a protected release of software defects data that can still be used in data science algorithms. We created a generalization (clustering)-based approach to anonymize sensitive software attributes. Tomek link and AllNN data reduction approaches were used to discard noisy records that may affect the usefulness of the shared data. The proposed approach considers diversity of sensitive attributes as an important factor in avoiding inference and background knowledge attacks on the anonymized data; therefore, discarded data is removed from both defective and non-defective records. We conducted experiments on several benchmark software defect datasets, using both data quality and privacy measures to evaluate the proposed approach. Our findings showed that the proposed approach outperforms existing well-known techniques in terms of accuracy and privacy measures.
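As a rough illustration of the generalization (clustering) step this abstract describes, the sketch below bins a numeric software attribute into equal-width intervals and replaces each raw value with its bin's range. The function name, bin count, and interval representation are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of generalization-based anonymization: values of a
# numeric software attribute are grouped into equal-width bins (a crude
# stand-in for clustering), and each raw value is replaced by its bin's
# interval, masking the exact sensitive value.

def generalize(values, n_bins):
    """Replace each value with the (low, high) interval of its bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1   # avoid zero width if all values equal
    intervals = []
    for v in values:
        b = min(int((v - lo) / width), n_bins - 1)  # clamp max into last bin
        intervals.append((lo + b * width, lo + (b + 1) * width))
    return intervals
```

For instance, `generalize([120, 135, 980, 1010], 2)` maps the two small and the two large values onto two shared intervals, so a reader of the released data sees only ranges, never exact attribute values.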
7

Morris, Donald G., and David K. Lowenthal. "Accurate data redistribution cost estimation in software distributed shared memory systems." ACM SIGPLAN Notices 36, no. 7 (July 2001): 62–71. http://dx.doi.org/10.1145/568014.379570.

8

Focardi, Riccardo, Roberto Lucchi, and Gianluigi Zavattaro. "Secure shared data-space coordination languages: A process algebraic survey." Science of Computer Programming 63, no. 1 (November 2006): 3–15. http://dx.doi.org/10.1016/j.scico.2005.07.011.

9

Osterbye, Kasper. "Abstract data types with shared operations." ACM SIGPLAN Notices 23, no. 6 (June 1988): 91–96. http://dx.doi.org/10.1145/44546.44554.

10

Zhang, Wei Feng. "Software Design for High-Speed Data Capture." Applied Mechanics and Materials 536-537 (April 2014): 536–39. http://dx.doi.org/10.4028/www.scientific.net/amm.536-537.536.

Abstract:
10G Ethernet technology has been widely used in modern high-speed communication systems. As a result, program design for high-speed data capture on 10G Ethernet, the first and most important step in a network monitoring and analysis system, has become a challenging task. This paper proposes a high-speed data capture method based on WinCap and shared memory pool technology, which features high speed, a low packet loss rate, high efficiency and good portability. System tests and data analysis prove that the proposed method can effectively capture data at a speed of 6 Gbps while stably keeping the packet loss rate under 0.03%.
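The "shared memory pool" idea, preallocating buffers so the capture path never allocates while packets arrive at line rate, can be sketched as follows. This is a simplified single-process Python analogue with hypothetical names (`BufferPool`, `acquire`, `release`), not the paper's code.

```python
# Sketch of a shared memory pool for high-rate capture: buffers are
# preallocated once; the capture side takes a free buffer per packet and
# the analysis side recycles it, so no allocation happens on the hot path.
from collections import deque

class BufferPool:
    def __init__(self, n_buffers, buf_size):
        self._free = deque(bytearray(buf_size) for _ in range(n_buffers))

    def acquire(self):
        """Take a free buffer; None signals exhaustion (packet dropped)."""
        return self._free.popleft() if self._free else None

    def release(self, buf):
        """Return a consumed buffer to the pool for reuse."""
        self._free.append(buf)
```

When the pool runs dry the capture side drops the packet rather than blocking, which is how a bounded pool trades a small, measurable loss rate for predictable latency.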
11

Russello, Giovanni, Michel R. V. Chaudron, Maarten van Steen, and Ibrahim Bokharouss. "An experimental evaluation of self-managing availability in shared data spaces." Science of Computer Programming 64, no. 2 (January 2007): 246–62. http://dx.doi.org/10.1016/j.scico.2006.06.007.

12

Rahm, Erhard, and Donald Ferguson. "Cache management for shared sequential data access." Information Systems 18, no. 4 (June 1993): 197–213. http://dx.doi.org/10.1016/0306-4379(93)90017-u.

13

Lysetskyi, Yu M., and S. V. Kozachenko. "Software-defined data storage systems." Mathematical machines and systems 1 (2021): 17–23. http://dx.doi.org/10.34121/1028-9763-2021-1-17-23.

Abstract:
Every year the amount of generated data grows exponentially, which entails an increase in both the number and capacity of data storage systems. The highest capacity is required for data storage systems that are used to store backups and archives, file storages with shared access, testing and development environments, virtual machine storages, and corporate or public web services. To solve such tasks, manufacturers nowadays offer three types of storage systems: block and file storages, which have already become a standard used for implementing IT infrastructures, and software-defined storage systems. The latter allow data storages to be created on non-specialized equipment, such as a group of x86-64 server nodes managed by general-purpose operating systems. The main feature of software-defined data storages is the transfer of storage functions from the hardware level to the software level, where these functions are defined not by physical features of the hardware but by the software selected for solving specific tasks. Today three main technologies stand out, characterized by a scalable architecture that allows efficiency and storage volume to be increased by adding new nodes to a single pool: Ceph, DELL EMC VxFlex OS, and HP StoreVirtual VSA. Software-defined data storages have the following advantages: fault tolerance, efficiency, flexibility and economy. Their utilization makes it possible to increase the efficiency of an IT infrastructure and reduce its maintenance costs; to build a hybrid infrastructure that uses internal and external cloud resources; to increase the efficiency of both services and users by providing a reliable connection through the most convenient devices; and to build a portal as a single point of control for services and resources.
14

Ye, Juan. "Shared learning activity labels across heterogeneous datasets." Journal of Ambient Intelligence and Smart Environments 13, no. 2 (March 26, 2021): 77–94. http://dx.doi.org/10.3233/ais-210590.

Abstract:
Nowadays, the advancement of sensing and communication technologies has made it possible to collect a large amount of sensor data; however, to build a reliable computational model and accurately recognise human activities, we still need annotations on the sensor data. Acquiring high-quality, detailed, continuous annotations is a challenging task. In this paper, we explore the solution space of sharing annotated activities across different datasets in order to enhance recognition accuracy. The main challenge is to resolve the heterogeneity in feature and activity space between datasets; that is, each dataset can have a different number of sensors using heterogeneous sensing technologies deployed in diverse environments, and record various activities on different users. To address this challenge, we have designed and developed sharing-data and sharing-classifier algorithms that feature a knowledge model to enable computationally efficient feature space remapping and uncertainty reasoning to enable effective classifier fusion. We have validated the algorithms on three third-party real-world datasets and demonstrated their effectiveness in recognising activities with annotations from as little as 0.1% of each dataset.
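The classifier-fusion step this abstract mentions can be illustrated with a minimal sketch: each dataset-specific classifier reports per-label confidences, and the fused decision is the label with the highest total confidence. This is an assumed, simplified fusion rule for illustration, not the paper's uncertainty-reasoning method.

```python
# Minimal sketch of classifier fusion: each dataset-specific classifier
# reports per-label confidences, and the fused decision is the label
# with the highest summed confidence across classifiers.

def fuse(predictions):
    """predictions: list of {label: confidence} dicts -> fused label."""
    score = {}
    for p in predictions:
        for label, conf in p.items():
            score[label] = score.get(label, 0.0) + conf
    return max(score, key=score.get)
```

A classifier trained on a dataset that never observed a label simply omits it from its dict, so datasets with different activity vocabularies can still vote together.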
15

Gomez-Diaz, Teresa, and Tomas Recio. "Research Software vs. Research Data I: Towards a Research Data definition in the Open Science context." F1000Research 11 (January 28, 2022): 118. http://dx.doi.org/10.12688/f1000research.78195.1.

Abstract:
Background: Research Software is a concept that has been only recently clarified. In this paper we address the need for a similar enlightenment concerning the Research Data concept. Methods: Our contribution begins by reviewing the Research Software definition, which includes the analysis of software as a legal concept, followed by the study of its production in the research environment and within the Open Science framework. Then we explore the challenges of a data definition and some of the Research Data definitions proposed in the literature. Results: We propose a Research Data concept featuring three characteristics: the data should be produced (collected, processed, analyzed, shared & disseminated) to answer a scientific question, by a scientific team, and have yielded a result published or disseminated in some article or scientific contribution of any kind. Conclusions: The analysis of this definition and the context in which it is proposed provides some answers to Borgman’s conundrum challenges, that is, which Research Data might be shared, by whom, with whom, under what conditions, why, and to what effects. They are completed with answers to the questions: how? and where?
16

Gomez-Diaz, Teresa, and Tomas Recio. "Research Software vs. Research Data I: Towards a Research Data definition in the Open Science context." F1000Research 11 (November 1, 2022): 118. http://dx.doi.org/10.12688/f1000research.78195.2.

Abstract:
Background: Research Software is a concept that has been only recently clarified. In this paper we address the need for a similar enlightenment concerning the Research Data concept. Methods: Our contribution begins by reviewing the Research Software definition, which includes the analysis of software as a legal concept, followed by the study of its production in the research environment and within the Open Science framework. Then we explore the challenges of a data definition and some of the Research Data definitions proposed in the literature. Results: We propose a Research Data concept featuring three characteristics: the data should be produced (collected, processed, analyzed, shared & disseminated) to answer a scientific question, by a scientific team, and have yielded a result published or disseminated in some article or scientific contribution of any kind. Conclusions: The analysis of this definition and the context in which it is proposed provides some answers to Borgman’s conundrum challenges, that is, which Research Data might be shared, by whom, with whom, under what conditions, why, and to what effects. They are completed with answers to the questions: how? and where?
17

Xiong, Yu, Zhiqiang Li, Bin Zhou, and Xiancun Dong. "Cross-layer shared protection strategy towards data plane in software defined optical networks." Optics Communications 412 (April 2018): 66–73. http://dx.doi.org/10.1016/j.optcom.2017.11.085.

18

Koelbel, C., P. Mehrotra, and J. Van Rosendale. "Supporting shared data structures on distributed memory architectures." ACM SIGPLAN Notices 25, no. 3 (March 1990): 177–86. http://dx.doi.org/10.1145/99164.99183.

19

Chandra, Rohit, Ding-Kai Chen, Robert Cox, Dror E. Maydan, Nenad Nedeljkovic, and Jennifer M. Anderson. "Data distribution support on distributed shared memory multiprocessors." ACM SIGPLAN Notices 32, no. 5 (May 1997): 334–45. http://dx.doi.org/10.1145/258916.258945.

20

Zhang, Jingjing, and Yang Chi. "Data Management and Service Mode of Library Based on Data Mining Algorithm." Scientific Programming 2022 (September 21, 2022): 1–12. http://dx.doi.org/10.1155/2022/2414830.

Abstract:
Data management for large-scale data library services with mining procedures improves the availability and readiness of heterogeneous sources. The heterogeneous data sources are assimilated as a single entity through mining procedures to meet the data demands. This article introduces connectivity-persistent data mining method (CDMM) to improve the data handling precision with boosting availability. The proposed method relies on federated learning for identifying the service demands, thereby providing data mining. The learning paradigm accumulates information on shared data library existence over various services. Based on the availability, further mining demands are forwarded to the data management system. If the existence verified by the federated learning is adaptable, then sharing-enabled mining is endorsed for the connected users. The data management then augments several heterogeneous shared libraries to meet the mining requirements. This process is reversible based on the service mode and existence. Therefore, the proposed method improves data availability with less mining and access time and fewer failures.
21

Mead, Nancy R., Dan Shoemaker, and Carol Woody. "Principles and Measurement Models for Software Assurance." International Journal of Secure Software Engineering 4, no. 1 (January 2013): 1–10. http://dx.doi.org/10.4018/jsse.2013010101.

Abstract:
Ensuring and sustaining software product integrity requires that all project stakeholders share a common understanding of the status of the product throughout the development and sustainment processes. Accurately measuring the product’s status helps achieve this shared understanding. This paper presents an effective measurement model organized by seven principles that capture the fundamental managerial and technical concerns of development and sustainment. These principles guided the development of the measures presented in the paper. Data from the quantitative measures help organizational stakeholders make decisions about the performance of their overall software assurance processes. Complementary risk-based data help them make decisions relative to the assessment of risk. The quantitative and risk-based measures form a comprehensive model to assess program and organizational performance. An organization using this model will be able to assess its performance to ensure secure and trustworthy products.
22

Rybalchenko, Alexey, Dennis Klein, Mohammad Al-Turany, and Thorsten Kollegger. "Shared Memory Transport for ALFA." EPJ Web of Conferences 214 (2019): 05029. http://dx.doi.org/10.1051/epjconf/201921405029.

Abstract:
The high data rates expected for the next generation of particle physics experiments (e.g., new experiments at FAIR/GSI and the upgrade of CERN experiments) call for dedicated attention with respect to the design of the needed computing infrastructure. The common ALICE-FAIR framework ALFA is a modern software layer that serves as a platform for simulation, reconstruction and analysis of particle physics experiments. Besides the standard services needed for simulation and reconstruction, ALFA also provides tools for data transport, configuration and deployment. The FairMQ module in ALFA offers building blocks for creating distributed software components (processes) that communicate with each other via message passing. The abstract "message passing" interface in FairMQ currently has three implementations: ZeroMQ, nanomsg and shared memory. The newly developed shared memory transport, which provides significant performance benefits for transferring large data chunks between components on the same node, will be presented. The implementation in FairMQ allows users to switch between the different transports via a trivial configuration change. The design decisions, implementation details and performance numbers of the shared memory transport in FairMQ/ALFA will be highlighted.
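The core idea of a shared-memory transport, passing only a small descriptor between components while the payload stays in one shared region, can be sketched as below. This is a single-process Python analogue with hypothetical names; FairMQ itself is C++ and uses real OS shared-memory segments.

```python
# Sketch of the shared-memory transport idea: payloads live in one shared
# region, and only a tiny descriptor (offset, size) is passed between
# components, so large data chunks are never copied through the queue.

class ShmRegion:
    def __init__(self, size):
        self.buf = bytearray(size)  # stands in for an OS shared-memory segment
        self.top = 0                # bump allocator over the region

    def write(self, payload):
        """Store a payload; return the descriptor that gets 'sent'."""
        off = self.top
        self.buf[off:off + len(payload)] = payload
        self.top += len(payload)
        return (off, len(payload))

    def read(self, descriptor):
        """Receiver resolves the descriptor against the same region."""
        off, size = descriptor
        return bytes(self.buf[off:off + size])
```

Because the descriptor is a few bytes regardless of payload size, the message-queue cost stays constant even for very large event data, which is where the performance benefit on a single node comes from.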
23

Rubart, Jessica, and Peter Dawabi. "Shared data modeling with UML-G." International Journal of Computer Applications in Technology 19, no. 3/4 (2004): 231. http://dx.doi.org/10.1504/ijcat.2004.004071.

24

Sato, Mitsuhisa, Hiroshi Harada, Atsushi Hasegawa, and Yutaka Ishikawa. "Cluster-Enabled OpenMP: An OpenMP Compiler for the SCASH Software Distributed Shared Memory System." Scientific Programming 9, no. 2-3 (2001): 123–30. http://dx.doi.org/10.1155/2001/605217.

Abstract:
OpenMP is attracting widespread interest because of its easy-to-use parallel programming model for shared memory multiprocessors. We have implemented a "cluster-enabled" OpenMP compiler for a page-based software distributed shared memory system, SCASH, which works on a cluster of PCs. It allows OpenMP programs to run transparently in a distributed memory environment. The compiler transforms OpenMP programs into parallel programs using SCASH so that shared global variables are allocated at run time in the shared address space of SCASH. A set of directives is added to specify data mapping and a loop scheduling method that schedules iterations onto threads associated with the data mapping. Our experimental results show that the data mapping may greatly impact the performance of OpenMP programs in the software distributed shared memory system. The performance of some NAS parallel benchmark programs in OpenMP is improved by using our extended directives.
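The data-mapping directives described above follow an owner-computes pattern: loop iterations are scheduled onto the node that owns the block of data they touch, so the software DSM layer moves fewer remote pages. A minimal sketch of that mapping logic (hypothetical helper names, not SCASH code):

```python
# Sketch of owner-computes scheduling under a block data mapping: each
# array element belongs to a node, and each loop iteration is assigned
# to the node owning the element it writes, minimizing remote DSM traffic.

def block_owner(i, n_elems, n_nodes):
    """Node owning element i under a block distribution."""
    block = -(-n_elems // n_nodes)  # ceiling division
    return i // block

def schedule(n_elems, n_nodes):
    """Map each iteration to the node that owns the data it touches."""
    plan = {node: [] for node in range(n_nodes)}
    for i in range(n_elems):
        plan[block_owner(i, n_elems, n_nodes)].append(i)
    return plan
```

With this mapping, a node writing element `i` already holds the page containing it, which is exactly the effect the paper attributes to its extended data-mapping directives.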
25

Wang, Xuehe, and Lingjie Duan. "Economic Analysis of Rollover and Shared Data Plans." IEEE Transactions on Mobile Computing 19, no. 9 (September 1, 2020): 2088–100. http://dx.doi.org/10.1109/tmc.2019.2922132.

26

Paten, Benedict, Mark Diekhans, Brian J. Druker, Stephen Friend, Justin Guinney, Nadine Gassner, Mitchell Guttman, et al. "The NIH BD2K center for big data in translational genomics." Journal of the American Medical Informatics Association 22, no. 6 (July 14, 2015): 1143–47. http://dx.doi.org/10.1093/jamia/ocv047.

Abstract:
The world’s genomics data will never be stored in a single repository – rather, it will be distributed among many sites in many countries. No one site will have enough data to explain genotype to phenotype relationships in rare diseases; therefore, sites must share data. To accomplish this, the genetics community must forge common standards and protocols to make sharing and computing data among many sites a seamless activity. Through the Global Alliance for Genomics and Health, we are pioneering the development of shared application programming interfaces (APIs) to connect the world’s genome repositories. In parallel, we are developing an open source software stack (ADAM) that uses these APIs. This combination will create a cohesive genome informatics ecosystem. Using containers, we are facilitating the deployment of this software in a diverse array of environments. Through benchmarking efforts and big data driver projects, we are ensuring ADAM’s performance and utility.
27

Vergara-Niedermayr, Cristobal, Fusheng Wang, Tony Pan, Tahsin Kurc, and Joel Saltz. "Semantically Interoperable XML Data." International Journal of Semantic Computing 7, no. 3 (September 2013): 237–55. http://dx.doi.org/10.1142/s1793351x13500037.

Abstract:
XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups.
28

Liu, Jing, Yu Jiang, Zechao Li, Zhi-Hua Zhou, and Hanqing Lu. "Partially Shared Latent Factor Learning With Multiview Data." IEEE Transactions on Neural Networks and Learning Systems 26, no. 6 (June 2015): 1233–46. http://dx.doi.org/10.1109/tnnls.2014.2335234.

29

Yu, Qian, Tong Li, Zhong Wen Xie, Na Zhao, and Ying Lin. "Distributed Computing Design Methods for Multicore Application Programming." Advanced Materials Research 756-759 (September 2013): 1295–99. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.1295.

Abstract:
In order to solve the serial execution caused by multithreaded concurrent access to shared data and realize dynamic load balancing of tasks on shared-memory symmetric multi-processor (multi-core) computing platforms, new design methods are presented. By introducing multicore distributed locks, multicore shared data localization, and a multicore distributed queue, the new design methods can greatly decrease the number of accesses to shared data and realize dynamic load balancing of tasks. For illustration, a design scheme for the multicore task manager of server software is given using the new design methods. Results show that the new design methods reduce the number of accesses to shared resources, partially resolve the serial execution of cooperative threads and realize dynamic task balancing in server software, which validates the superiority of this approach.
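The "shared data localization" technique, letting each thread accumulate into private storage and merging once at the end instead of serializing every update on a global lock, can be sketched as follows. This is an illustrative Python analogue with assumed names; the paper targets C/C++ server software on multicore hardware.

```python
# Sketch of shared data localization: each thread accumulates into its own
# private slot (no lock on the hot path) and results are merged once at
# the end, instead of serializing every update on a single shared counter.
import threading

def parallel_sum(chunks):
    local = [0] * len(chunks)           # one private slot per thread
    def work(tid, chunk):
        for x in chunk:
            local[tid] += x             # thread-local: no contention
    threads = [threading.Thread(target=work, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(local)                   # single merge step at the end
```

The shared structure is touched exactly once per thread (at the merge), which is the access-count reduction the abstract claims for localization.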
30

Korpała, Grzegorz. "Data management in CUDA Programming for High Bandwidth Memory in GPU Accelerators." Computer Methods in Material Science 16, no. 3 (2016): 121–26. http://dx.doi.org/10.7494/cmms.2016.3.0580.

Abstract:
The number of applications that use GPU-accelerated calculations is constantly growing. Converting software to this type of calculation is complex but yields enormous energy efficiency and performance. This publication presents a method for handling data sets in which temporary storage takes place in the shared memory of the GPU. Loading values into shared memory via massive data access makes it possible to exploit the full power of High Bandwidth Memory. This is demonstrated by CUDA codes for a Cellular Automaton application and the corresponding indexing of the data in global and shared memory. The real and theoretical speed-ups are described and shown in this publication.
31

Zou, Lida, Qingzhong Li, and Lanju Kong. "Isolated Storage of Multi-Tenant Data Based on Shared Schema." Cybernetics and Information Technologies 16, no. 3 (September 1, 2016): 91–103. http://dx.doi.org/10.1515/cait-2016-0036.

Abstract:
Multi-tenant data management is an important part of supporting the efficient operation of software-as-a-service applications. Multi-tenant data use a shared schema to reduce resource usage cost. However, massive data of different tenants are stored in the same schema, which causes useless data of other tenants to be read when a tenant only needs to access its own disk data. In this paper we focus on a disk storage method for multi-tenant data based on a shared schema to address this low efficiency of data access. According to the isolation requirements of multi-tenant data, we store a tenant's data in contiguous disk blocks. The experimental results illustrate that query efficiency for range and join queries is 1.5–2 times that of the existing storage method, and non-indexed query efficiency improves 10–70 times.
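The storage layout described, reserving a contiguous run of disk blocks per tenant so one tenant's scan never touches another tenant's blocks, can be sketched as a toy allocator. The class and method names below are hypothetical illustrations, not the paper's system.

```python
# Toy allocator for isolated multi-tenant storage: each tenant is given a
# contiguous extent of block ids, so scanning one tenant's data reads a
# single disk range and never touches other tenants' blocks.

class TenantStore:
    def __init__(self, blocks_per_tenant):
        self.bpt = blocks_per_tenant
        self.extent = {}     # tenant -> first block id of its extent
        self.next_block = 0  # next unreserved block id

    def blocks_for(self, tenant):
        """Return the contiguous block range reserved for a tenant."""
        if tenant not in self.extent:
            self.extent[tenant] = self.next_block
            self.next_block += self.bpt
        first = self.extent[tenant]
        return list(range(first, first + self.bpt))
```

A range scan for one tenant then translates into a single sequential read over its extent, which is why range and join queries benefit most in the reported results.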
32

Gupta, Amarnath, and Chaitanya Baru. "An extensible information model for shared scientific data collections." Future Generation Computer Systems 16, no. 1 (November 1999): 9–20. http://dx.doi.org/10.1016/s0167-739x(99)00031-x.

33

Lu, Paul. "Integrating Bulk-Data Transfer into the Aurora Distributed Shared Data System." Journal of Parallel and Distributed Computing 61, no. 11 (November 2001): 1609–32. http://dx.doi.org/10.1006/jpdc.2001.1758.

34

Suh, Ilhyun, and Yon Dohn Chung. "A Workload Assignment Strategy for Efficient ROLAP Data Cube Computation in Distributed Systems." International Journal of Data Warehousing and Mining 12, no. 3 (July 2016): 51–71. http://dx.doi.org/10.4018/ijdwm.2016070104.

Abstract:
Data cube plays a key role in the analysis of multidimensional data. Nowadays, the explosive growth of multidimensional data has made distributed solutions important for data cube computation. Among the architectures for distributed processing, the shared-nothing architecture is known to have the best scalability. However, frequent and massive network communication among the processors can be a performance bottleneck in shared-nothing distributed processing. Therefore, suppressing the amount of data transmission among the processors can be an effective strategy for improving overall performance. In addition, dividing the workload and distributing them evenly to the processors is important. In this paper, the authors present a distributed algorithm for data cube computation that can be adopted in shared-nothing systems. The proposed algorithm gains efficiency by adopting the workload assignment strategy that reduces the total network cost and allocates the workload evenly to each processor, simultaneously.
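A simple way to realize the "allocate the workload evenly" part of such a strategy is greedy longest-processing-time assignment: sort tasks by estimated cost and always hand the next one to the least-loaded processor. This is a generic balancing heuristic offered as a sketch, not the authors' exact algorithm.

```python
# Generic sketch of even workload assignment: tasks (e.g. cube group-bys
# with estimated costs) are handed out greedily, largest first, to the
# least-loaded processor, the classic LPT balancing heuristic.
import heapq

def assign(costs, n_procs):
    """costs: {task: estimated_cost} -> {proc: [tasks]} balanced by load."""
    heap = [(0, p) for p in range(n_procs)]   # (current load, proc id)
    heapq.heapify(heap)
    plan = {p: [] for p in range(n_procs)}
    for task in sorted(costs, key=costs.get, reverse=True):
        load, p = heapq.heappop(heap)         # least-loaded processor
        plan[p].append(task)
        heapq.heappush(heap, (load + costs[task], p))
    return plan
```

Because each task goes to whichever processor is currently lightest, the final loads differ by at most one task's cost, keeping shared-nothing nodes roughly evenly busy.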
35

Begum, M. Shaheda. "Persistent Cohesion with Advanced Ring Signatures for Shared Data in Cloud." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 351–55. http://dx.doi.org/10.22214/ijraset.2021.38797.

Abstract:
Motivated by the exponential growth and huge success of cloud data services, it has become commonplace for data to be not only stored in the cloud but also shared across multiple users. Our scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information. Unfortunately, the integrity of cloud data is subject to skepticism due to the existence of hardware/software failures and human errors. Several mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. However, public auditing on the integrity of shared data with these existing mechanisms will inevitably reveal confidential information (identity privacy) to public verifiers. In this paper, we propose a novel privacy-preserving mechanism that supports public auditing on shared data stored in the cloud. In particular, we exploit ring signatures to compute the verification metadata needed to audit the correctness of shared data. With our mechanism, the identity of the signer on each block in shared data is kept private from public verifiers, who are able to efficiently verify shared data integrity without retrieving the entire file. In addition, our mechanism is able to perform multiple auditing tasks simultaneously instead of verifying them one by one. Our experimental results demonstrate the effectiveness and efficiency of our mechanism when auditing shared data integrity. Keywords: public auditing, privacy-preserving, shared data, cloud computing.
36

Borji, Samaneh, Amir Reza Asnafi, and Maryam Pakdaman Naeini. "A Comparative Study of Social Media Data Archiving Software." Preservation, Digital Technology & Culture 51, no. 3 (October 1, 2022): 111–19. http://dx.doi.org/10.1515/pdtc-2022-0013.

Abstract:
The importance and growth of the amount of data available on social media have made organizations use Social Media Data Archiving Software (SMDAS) to collect and archive their data. This study compares the features of three SMDAS products: ArchiveSocial, Pagefreezer, and Smarsh. First, by surveying the developers’ websites and catalogs, the features of all three software products are identified and classified into four areas. Using statistical methods and the Chi-square test, significant differences among the features of the software in each domain are investigated. “Access to deleted records,” “automatic archiving,” “archiving of native formats,” “archive categorization,” “archive sharing,” “simple and advanced search,” “online service,” and “advanced discovery and monitoring functions” were the shared features. A significant difference was noted in the domain of security and data preservation, with Pagefreezer offering more features than the other two products. In the other areas, no significant difference was observed. Knowledge of SMDAS can help librarians and other information professionals choose and use it wisely. Comparing features may also benefit companies that are developing SMDAS. The literature suggests using the studied software; nevertheless, few studies have discussed the software’s features in detail. This article makes a valuable contribution by comparing the software’s features.
APA, Harvard, Vancouver, ISO, and other styles
37

Crooks, Natacha. "Efficient Data Sharing across Trust Domains." ACM SIGMOD Record 52, no. 2 (August 10, 2023): 36–37. http://dx.doi.org/10.1145/3615952.3615962.

Full text
Abstract:
Cross-Trust-Domain Processing. Data is now a commodity. We know how to compute and store it efficiently and reliably at scale. We have, however, paid less attention to the notion of trust. Yet, data owners today are no longer the entities storing or processing their data (medical records are stored on the cloud, data is shared across banks, etc.). In fact, distributed systems today consist of many different parties, whether it is cloud providers, jurisdictions, organisations or humans. Modern data processing and storage always straddles trust domains.
APA, Harvard, Vancouver, ISO, and other styles
38

Cierniak, Michał, and Wei Li. "Unifying data and control transformations for distributed shared-memory machines." ACM SIGPLAN Notices 30, no. 6 (June 1995): 205–17. http://dx.doi.org/10.1145/223428.207145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dong, Changyu, Giovanni Russello, and Naranker Dulay. "Shared and searchable encrypted data for untrusted servers." Journal of Computer Security 19, no. 3 (May 30, 2011): 367–97. http://dx.doi.org/10.3233/jcs-2010-0415.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Strevell, Michael W., and Harvey G. Cragon. "Data type transformation in heterogeneous shared memory multiprocessors." Journal of Parallel and Distributed Computing 12, no. 2 (June 1991): 164–70. http://dx.doi.org/10.1016/0743-7315(91)90021-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Ying, Frank Dehne, Todd Eavis, and Andrew Rau-Chaplin. "Parallel ROLAP Data Cube Construction on Shared-Nothing Multiprocessors." Distributed and Parallel Databases 15, no. 3 (May 2004): 219–36. http://dx.doi.org/10.1023/b:dapd.0000018572.20283.e0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Demazière, Didier, François Horn, and Marc Zune. "The Functioning of a Free Software Community." Science & Technology Studies 20, no. 2 (January 1, 2007): 34–54. http://dx.doi.org/10.23987/sts.55211.

Full text
Abstract:
The ability to build solid and coherent software from spontaneous, sudden and evanescent involvement is viewed as an enigma by sociologists and economists. The internal heterogeneity of project contributors questions the functioning of collective action: how can commitments that are so dissimilar be put together? Our objective is to consider FLOSS communities as going concerns which necessitate a minimum of order and common, shared, social rules to function. Through an in-depth and diachronic analysis of the Spip project, we present two classical modes of social regulation: a control regulation centred on the product and an autonomous regulation reflecting the differentiated commitments. Our data shows that the meaning, value and legitimacy of contributors’ involvements are defined and rated more collectively, through exchanges, judgments, and evaluations. A third regulation mode, called distributed community regulation and aimed at creating and transforming shared rules that produces recognition and stratification, is then presented.
APA, Harvard, Vancouver, ISO, and other styles
43

Chandra, Abhishek, Weibo Gong, and Prashant Shenoy. "Dynamic resource allocation for shared data centers using online measurements." ACM SIGMETRICS Performance Evaluation Review 31, no. 1 (June 10, 2003): 300–301. http://dx.doi.org/10.1145/885651.781067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Liekovuori, Kaisa, Samu Rautio, and Aatu Härkönen. "Shared Parameter Database of War Gaming Software: Case Study on Commercial Simulation Databases." Security Dimensions 35, no. 35 (March 31, 2021): 82–99. http://dx.doi.org/10.5604/01.3001.0014.8241.

Full text
Abstract:
Background: This research takes the perspective of security-critical information systems and shared parameter databases in the context of processing sensitive data at the Finnish Naval Warfare Centre. It refers to an environment of isolated military war gaming simulation and modeling systems. The research problem is: how can an optimal solution be designed for data distribution across different military war gaming simulation and modeling software? Objectives: The objective is to create a single shared database usable by software at different levels of detail, e.g. high-level scenario simulation, technical system-of-systems simulations, and system-level physical simulations. Methods: The methods are modeling, simulation, and operations analysis. The approach is inductive, the strategy is a qualitative case study, and data collection was implemented by exploring database models and their combinations. The integration was implemented in an object-relational database management system (ORDBMS), PostgreSQL. Results: The shared database led to efficient access to simulation parameters, more straightforward system integration, and improved scalability. Conclusions: The results of modeling and simulation indicated that the integration is feasible to implement.
APA, Harvard, Vancouver, ISO, and other styles
45

Jing, Shao Hong, and Chao Song. "Research on Real-Time Database of Configuration Software." Advanced Materials Research 433-440 (January 2012): 4003–8. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.4003.

Full text
Abstract:
This article combines three kinds of storage structures to store field data: an in-memory database, a relational database, and a document management system. The paper also gives a basic realization scheme for the real-time database, using Windows dynamic link library technology and global shared memory technology to establish the real-time database system.
APA, Harvard, Vancouver, ISO, and other styles
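The in-memory tier of such a real-time database is essentially a shared-memory tag table that several processes read and write. As a minimal cross-platform sketch of that idea, here Python's `multiprocessing.shared_memory` stands in for the Windows global shared memory the abstract describes; the slot layout, class name, and segment name are this sketch's own.

```python
import struct
import time
from multiprocessing import shared_memory

SLOT = struct.Struct("=idd")  # tag id (int32), value (f64), timestamp (f64)
N_SLOTS = 64

class RtdbSegment:
    """Fixed-layout shared-memory segment holding the latest value per tag."""

    def __init__(self, name="rtdb_demo", create=False):
        if create:
            # POSIX/Windows shared memory is zero-initialized on creation.
            self.shm = shared_memory.SharedMemory(
                name=name, create=True, size=SLOT.size * N_SLOTS)
        else:
            # Attach to a segment another process (e.g. the HMI) created.
            self.shm = shared_memory.SharedMemory(name=name)

    def write(self, slot, tag_id, value):
        """Overwrite one tag slot in place; readers see the latest value."""
        SLOT.pack_into(self.shm.buf, slot * SLOT.size,
                       tag_id, value, time.time())

    def read(self, slot):
        """Return (tag_id, value, timestamp) for one slot."""
        return SLOT.unpack_from(self.shm.buf, slot * SLOT.size)

    def close(self, unlink=False):
        self.shm.close()
        if unlink:          # only the owning process should unlink
            self.shm.unlink()
```

A data-acquisition process would create the segment and call `write`, while display and archiving processes attach by name and call `read`; persisting snapshots to the relational tier would be layered on top.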
46

Goebel, Stephan, Ruben Jubeh, and Simon-Lennert Raesch. "A Robotics Environment for Software Engineering Courses." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1874–75. http://dx.doi.org/10.1609/aaai.v25i1.7997.

Full text
Abstract:
The initial idea of using Lego Mindstorms robots for student courses soon had to be expanded with a simulation environment as the student user base grew larger and the need for parallel development and testing arose. The need for an easy-to-use, easy-to-set-up means of providing positioning data led to the creation of an indoor positioning system, so that new users can adapt quickly and successfully, as sensors on the actual robots are difficult to configure and hard to interpret in an environmental context. A global positioning system shared among robots can make local sensors obsolete while delivering more precise information than currently available sensors, and it provides the base the robots need to work effectively on shared tasks as a group. Furthermore, a simulator for robots programmed with Fujaba and Java, developed along the way, can be used by many developers simultaneously and lets them evaluate their code in a simple way while staying close to real-world results.
APA, Harvard, Vancouver, ISO, and other styles
47

Niebur, Glen L., and Thomas R. Chase. "A Grammar Driven Data Translation System for Computer Integrated Manufacturing." Journal of Mechanical Design 124, no. 1 (February 1, 1999): 136–42. http://dx.doi.org/10.1115/1.1434268.

Full text
Abstract:
The ability to share data among the numerous software applications used to design and analyze products is an important aspect of computer integrated manufacturing. In the past, data sharing has been implemented by direct translation of data files, through neutral data formats, and through central shared databases. This paper describes a data sharing architecture that addresses some of the limitations of these systems. It employs a database management system as a central repository for part data in an application independent format. A configurable translator, called Datatrans, is used to transfer data between the database management system and native application data files. In this way software from multiple independent vendors or legacy software can be supported, because applications need not incorporate code specific to any database management system nor have knowledge of the centralized database schema. The translator uses grammars to define and recognize the data to be translated. The grammar is augmented with commands that are passed to the front-end of the database management system or to application programs to store and retrieve data. Thus the query language and the application commands serve as a high level interface to the underlying database and application data files. Entity transformations that are beyond the scope of a grammatical description are performed by database methods, which are available to all applications. An implementation of the system demonstrates the feasibility of this approach.
APA, Harvard, Vancouver, ISO, and other styles
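The entry above describes a translator whose behavior is configured by a grammar: productions recognize fragments of a native application file, and attached actions emit commands for the central repository. As a toy sketch of that pattern (the file format, production set, and command names here are invented, not Datatrans itself):

```python
import re

# A toy "native application file" whose structure a grammar describes.
NATIVE_FILE = """\
PART bracket-7
  MATERIAL steel
  MASS 1.25
END
"""

# Each production pairs a recognizer with an action that emits a
# storage command for the central repository (here simply collected).
GRAMMAR = [
    (re.compile(r"PART\s+(\S+)"),     lambda m: ("create_part", m.group(1))),
    (re.compile(r"MATERIAL\s+(\S+)"), lambda m: ("set_attr", "material",
                                                 m.group(1))),
    (re.compile(r"MASS\s+(\S+)"),     lambda m: ("set_attr", "mass",
                                                 float(m.group(1)))),
    (re.compile(r"END"),              lambda m: ("commit",)),
]

def translate(text):
    """Scan the native file line by line and emit database commands."""
    commands = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        for pattern, action in GRAMMAR:
            m = pattern.fullmatch(line)
            if m:
                commands.append(action(m))
                break
        else:
            raise ValueError(f"line not derivable from grammar: {line!r}")
    return commands
```

Supporting a new application then means writing a new grammar table rather than new translator code, which is the configurability the abstract emphasizes.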
48

Kimmet, James S. "Auto-Generated Coherent Data Store for Concurrent Modular Embedded Systems." ACM SIGAda Ada Letters 41, no. 1 (October 28, 2022): 74–77. http://dx.doi.org/10.1145/3570315.3570321.

Full text
Abstract:
A thread-safe data store has been developed to enforce interface consistency and shared data coherency in a concurrent modular embedded real-time system. Typical messaging techniques may not provide optimal data transfer between software components in all embedded systems, especially if there is a high degree of data interdependency as the number of components increases. The data store paradigm reduces the overall communication load by providing finer data item granularity, and eliminating the copy and transfer of unused message content. The data store described in this paper is implemented with code auto-generation and provides compile-time error checking, ensuring effortless data integrity by automatically rebuilding when software component interfaces are changed. The data store has been successfully employed to rehost a highly-coupled legacy software application into a more modularized component architecture.
APA, Harvard, Vancouver, ISO, and other styles
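The core ideas in the abstract above, fine-grained data items guarded against torn reads and an interface checked against a declared schema, can be sketched in a few lines. This is a hand-written Python stand-in for what the paper auto-generates (the class, schema format, and item names are this sketch's own):

```python
import threading

class DataStore:
    """Thread-safe data store with per-item locks and a declared schema.

    The schema plays the role of the auto-generated, compile-time-checked
    interface: writes to unknown items or with the wrong type are
    rejected, and each item has its own lock for fine granularity.
    """

    def __init__(self, schema):
        self._schema = dict(schema)                 # name -> expected type
        self._values = {name: None for name in self._schema}
        self._locks = {name: threading.Lock() for name in self._schema}

    def put(self, name, value):
        if name not in self._schema:
            raise KeyError(f"unknown data item: {name}")
        if not isinstance(value, self._schema[name]):
            raise TypeError(f"{name} expects {self._schema[name].__name__}")
        with self._locks[name]:                     # one lock per item
            self._values[name] = value

    def get(self, name):
        with self._locks[name]:
            return self._values[name]
```

Components read only the items they need, so no unused message content is copied; in the paper this schema and its checks are regenerated whenever a component interface changes, rather than maintained by hand.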
49

Shin, Donghoon, and P. Bruce Berra. "Computer architectures for logic-oriented data/knowledge bases." Knowledge Engineering Review 4, no. 1 (March 1989): 1–29. http://dx.doi.org/10.1017/s0269888900004720.

Full text
Abstract:
Knowledge base management systems (KBMS) are designed to efficiently retrieve and manipulate large shared knowledge bases. A significant subclass of KBMS, consisting of a combination of logic programming and databases, is often called a logic-oriented knowledge base system (LOKBS). These systems must possess considerable processing and I/O capabilities, so many approaches have been taken to improving their performance. In this paper we review the current performance-enhancing hardware approaches for LOKBS. We include parallelism, both in processing and I/O, algorithms, caching, and physical data organizations.
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Libing, Jing Wang, Sherali Zeadally, and Debiao He. "Privacy-preserving auditing scheme for shared data in public clouds." Journal of Supercomputing 74, no. 11 (August 13, 2018): 6156–83. http://dx.doi.org/10.1007/s11227-018-2527-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles