To view the other types of publications on this topic, follow the link: Client-side storage.

Journal articles on the topic "Client-side storage"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Client-side storage".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.

Browse journal articles from a wide variety of disciplines and organize your bibliography correctly.

1. Kim, Cheiyol, Sangmin Lee, Youngkyun Kim, and Daewha Seo. "Adaptation of Distributed File System to VDI Storage by Client-Side Cache." Journal of Computers 11, no. 1 (January 2016): 10–17. http://dx.doi.org/10.17706/jcp.11.1.10-17.

2. Hatzieleftheriou, Andromachi, and Stergios V. Anastasiadis. "Client-Side Journaling for Durable Shared Storage." ACM Transactions on Storage 13, no. 4 (December 15, 2017): 1–34. http://dx.doi.org/10.1145/3149372.

3. Song, Gyuwon, Suhyun Kim, and Dongmahn Seo. "Saveme: client-side aggregation of cloud storage." IEEE Transactions on Consumer Electronics 61, no. 3 (August 2015): 302–10. http://dx.doi.org/10.1109/tce.2015.7298089.

4. Janik, Arkadiusz, and Szymon Kiebzak. "Client-side Storage. Offline Availability of Data." Journal of Automation, Mobile Robotics and Intelligent Systems 8, no. 4 (December 20, 2014): 3–10. http://dx.doi.org/10.14313/jamris_4-2014/30.

5. Hariyadi, Dedy, Imam Puji Santoso, and Ramadhana Saputra. "IMPLEMENTASI PROTEKSI CLIENT-SIDE PADA PRIVATE CLOUD STORAGE NEXTCLOUD." Jurnal Manajemen Informatika dan Sistem Informasi 2, no. 1 (January 31, 2019): 16. http://dx.doi.org/10.36595/misi.v2i1.65.

Abstract:
Today almost every device is connected to cloud computing technology. A cloud computing technology that offers an attractive service is cloud storage, such as Google Drive, Dropbox, One Drive, Mega, and others. Such cloud storage technology can also be deployed in a private or on-premise environment. Cloud storage software that can be installed in a private environment includes OwnCloud, Nextcloud, SeaFile, and others. Cloud storage deployments call for caution, because they carry security gaps in the transmission of data from client to server and back, and because files stored on the cloud storage server are unprotected. This study presents the results of vulnerability testing for files and directories stored with cloud storage providers, together with a solution for addressing these security issues.
6. Park, Kyungsu, Ji Eun Eom, Jeongsu Park, and Dong Hoon Lee. "Secure and Efficient Client-side Deduplication for Cloud Storage." Journal of the Korea Institute of Information Security and Cryptology 25, no. 1 (February 28, 2015): 83–94. http://dx.doi.org/10.13089/jkiisc.2015.25.1.83.

7. Youn, Taek-Young, Nam-Su Jho, Kyung Hyune Rhee, and Sang Uk Shin. "Authorized Client-Side Deduplication Using CP-ABE in Cloud Storage." Wireless Communications and Mobile Computing 2019 (May 15, 2019): 1–11. http://dx.doi.org/10.1155/2019/7840917.

Abstract:
Since deduplication inevitably implies data sharing, control over access permissions in an encrypted deduplication storage is more important than a traditional encrypted storage. Therefore, in terms of flexibility, data deduplication should be combined with data access control techniques. In this paper, we propose an authorized deduplication scheme using CP-ABE to solve this problem. The proposed scheme provides client-side deduplication while providing confidentiality through client-side encryption to prevent exposure of users’ sensitive data on untrusted cloud servers. Also, unlike existing convergent encryption schemes, it provides authorized convergent encryption by using CP-ABE to allow only authorized users to access critical data. The proposed authorized deduplication scheme provides an adequate trade-off between storage space efficiency and security in cloud environment and is very suitable for the hybrid cloud model considering both the data security and the storage efficiency in a business environment.
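Several of the deduplication papers listed here share the same basic client-side flow: the client first sends a short content fingerprint, and the full upload happens only when the server has never seen that fingerprint. The following is a minimal illustrative sketch of that flow, not the scheme of any specific entry above; the `FakeServer` class, the `client_upload` function, and their interfaces are invented for this demonstration.

```python
import hashlib

class FakeServer:
    """Toy storage server that keeps one copy of each unique blob."""
    def __init__(self):
        self.blobs = {}    # SHA-256 hex digest -> stored data
        self.uploads = 0   # number of full uploads actually received

    def has(self, digest):
        return digest in self.blobs

    def store(self, digest, data):
        self.blobs[digest] = data
        self.uploads += 1

def client_upload(server, data):
    """Client-side dedup: send only the hash first; upload on a miss."""
    digest = hashlib.sha256(data).hexdigest()
    if server.has(digest):       # duplicate already stored: skip the transfer
        return "deduplicated"
    server.store(digest, data)
    return "uploaded"

server = FakeServer()
assert client_upload(server, b"quarterly-report") == "uploaded"
assert client_upload(server, b"quarterly-report") == "deduplicated"
assert server.uploads == 1       # the duplicate never left the client
```

A bare existence check like this is exactly what enables the identification and poison attacks discussed in later entries, which is why the schemes above layer authorization (e.g. CP-ABE), proofs of ownership, or time locks on top of it.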
8. Seo, Dongmahn, Suhyun Kim, and Gyuwon Song. "Mutual exclusion method in client-side aggregation of cloud storage." IEEE Transactions on Consumer Electronics 63, no. 2 (May 2017): 185–90. http://dx.doi.org/10.1109/tce.2017.014838.

9. Shin, Youngjoo, and Kwangjo Kim. "Differentially private client-side data deduplication protocol for cloud storage services." Security and Communication Networks 8, no. 12 (October 30, 2014): 2114–23. http://dx.doi.org/10.1002/sec.1159.

10. Braeken, An. "Highly Efficient Symmetric Key Based Authentication and Key Agreement Protocol Using Keccak." Sensors 20, no. 8 (April 11, 2020): 2160. http://dx.doi.org/10.3390/s20082160.

Abstract:
Efficient authentication and key agreement protocols between two entities are required in many application areas. In particular, for client–server type of architectures, the client is mostly represented by a constrained device and thus highly efficient protocols are needed. We propose in this paper two protocols enabling the construction of a mutual authenticated key ensuring anonymity and unlinkability of the client and resisting the most well known attacks. The main difference between the two proposed protocols is in the storage requirements on the server side. The innovation of our protocols relies on the fact that, thanks to the usage of the sponge construction, available in the newly proposed SHA3 standard with underlying Keccak design, the computation cost can be reduced to only one hash operation on the client side in case of the protocol with storage and two hash operations for the protocol without storage and thus leads to a very efficient solution.
11. Mishra, Bharati, Debasish Jena, Ramasubbareddy Somula, and S. Sankar. "Secure Key Storage and Access Delegation Through Cloud Storage." International Journal of Knowledge and Systems Science 11, no. 4 (October 2020): 45–64. http://dx.doi.org/10.4018/ijkss.2020100104.

Abstract:
Cloud storage is gaining popularity to store and share files. To secure the files, cloud storage providers supply client interfaces with the facility to encrypt the files and upload them into the cloud. When client-side encryption is done, the onus of key management lies with the cloud user. Public key proxy re-encryption mechanisms can be used to distribute the key among stakeholders of the file. However, clients use low powered devices like mobile phones to share their files. Lightweight cryptography operations are needed to carry out the encryption operations. Ring-LWE-based encryption scheme meets this criterion. In this work, a proxy re-encryption scheme is proposed to distribute the file key. The scheme is proved CCA secure under Ring-LWE assumption in the random oracle model. The performance of the scheme is compared with the existing proxy re-encryption schemes which are observed to show better performance for re-encryption and re-key generation.
12. Kim, Won-Bin, and Im Yeong Lee. "Client-Side Deduplication for Protection of a Private Data in Cloud Storage." Advanced Science Letters 22, no. 9 (September 1, 2016): 2448–52. http://dx.doi.org/10.1166/asl.2016.7854.

13. He, Kai, Chuanhe Huang, Hao Zhou, Jiaoli Shi, Xiaomao Wang, and Feng Dan. "Public auditing for encrypted data with client-side deduplication in cloud storage." Wuhan University Journal of Natural Sciences 20, no. 4 (July 9, 2015): 291–98. http://dx.doi.org/10.1007/s11859-015-1095-8.

14. Li, Chao Ling, and Yue Chen. "Merkle Hash Tree Based Deduplication in Cloud Storage." Applied Mechanics and Materials 556-562 (May 2014): 6223–27. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.6223.

Abstract:
To deduplicate sensitive data in a cloud storage center, a scheme called MHT-Dedup, based on the Merkle Hash Tree (MHT), is proposed. It achieves cross-user file-level and local block-level client-side deduplication concurrently. It first encrypts the file at block granularity, then authenticates the file ciphertext to find duplicated files (Proofs of oWnership, PoW) and checks the hashes of block plaintexts to find duplicated blocks. In the PoW protocol of MHT-Dedup, an authenticating binary tree is generated from the tags of encrypted blocks to reliably identify duplicated files. MHT-Dedup resolves the conflict between data deduplication and encryption, achieves file-level and block-level deduplication concurrently, avoids misuse of the storage system by users, resists inside and outside attacks on data confidentiality, and prevents target collision attacks on files and brute-force attacks on blocks.
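The file tag that MHT-Dedup derives from a Merkle hash tree can be illustrated with a short sketch. This is a generic Merkle root over file blocks, assuming SHA-256 and duplication of the last node on odd-sized levels; it is not the paper's exact construction, and the `merkle_root` helper is a name invented here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Build a binary Merkle hash tree over file blocks; return the root."""
    level = [h(b) for b in blocks]          # leaf hashes, one per block
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
tag = merkle_root(blocks)
# Identical content yields an identical tag, so the server can detect a
# duplicate file; changing any single block changes the root.
assert tag == merkle_root([b"block-0", b"block-1", b"block-2"])
assert tag != merkle_root([b"block-0", b"block-X", b"block-2"])
```

Because the root commits to every block, the same tree also supports proofs of ownership: the server can challenge random leaves and verify the client's answers against the stored root.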
15. Bouyer, Asgarali, and Mojtaba Zirak. "Digital Forensics in private Seafile Cloud Storage from both client and server side." International Journal of Electronic Security and Digital Forensics 13, no. 1 (2021): 1. http://dx.doi.org/10.1504/ijesdf.2021.10031878.

16. Youn, Taek-Young, Ku-Young Chang, Kyung-Hyune Rhee, and Sang Uk Shin. "Efficient Client-Side Deduplication of Encrypted Data With Public Auditing in Cloud Storage." IEEE Access 6 (2018): 26578–87. http://dx.doi.org/10.1109/access.2018.2836328.

17. Bouyer, Asgarali, and Mojtaba Zirak. "Digital forensics in private Seafile cloud storage from both client and server side." International Journal of Electronic Security and Digital Forensics 13, no. 3 (2021): 233. http://dx.doi.org/10.1504/ijesdf.2021.114954.

18. Yuan, Jingbin, Jing Zhang, Lijun Shen, Dandan Zhang, Wenhuan Yu, and Hua Han. "Massive Data Management and Sharing Module for Connectome Reconstruction." Brain Sciences 10, no. 5 (May 22, 2020): 314. http://dx.doi.org/10.3390/brainsci10050314.

Abstract:
Recently, with the rapid development of electron microscopy (EM) technology and the increasing demand of neuron circuit reconstruction, the scale of reconstruction data grows significantly. This brings many challenges, one of which is how to effectively manage large-scale data so that researchers can mine valuable information. For this purpose, we developed a data management module equipped with two parts, a storage and retrieval module on the server-side and an image cache module on the client-side. On the server-side, Hadoop and HBase are introduced to resolve massive data storage and retrieval. The pyramid model is adopted to store electron microscope images, which represent multiresolution data of the image. A block storage method is proposed to store volume segmentation results. We design a spatial location-based retrieval method for fast obtaining images and segments by layers rapidly, which achieves a constant time complexity. On the client-side, a three-level image cache module is designed to reduce latency when acquiring data. Through theoretical analysis and practical tests, our tool shows excellent real-time performance when handling large-scale data. Additionally, the server-side can be used as a backend of other similar software or a public database to manage shared datasets, showing strong scalability.
19. Panidi, E. "FOG COMPUTING PERSPECTIVES IN CONNECTION WITH THE CURRENT GEOSPATIAL STANDARDS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W2 (November 16, 2017): 171–74. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w2-171-2017.

Abstract:
Cloud Computing technologies and cloud-based Geographic Information Systems have become widely used in recent decades. However, the complexity and size of geospatial datasets keep growing, sometimes outgrowing the cloud infrastructure paradigm. Additionally, many currently used client devices have sufficient computational resources to store and process some amounts of data directly. Consequently, multilevel management techniques are demanded that support horizontal (client-to-client) data flows in addition to vertical (cloud-to-client) data flows. These tendencies in information technologies in general have led to the appearance of the Fog Computing paradigm, which extends a cloud infrastructure with the computational resources of client devices and implements client-side data storage, management, and interchange. This position paper summarizes and discusses the mentioned tendencies in connection with a number of available Open Geospatial Consortium standards. The paper highlights the standards that can be recognized as a platform for implementing Fog Computing in the geospatial domain and analyzes their strong and weak features from the Fog Computing point of view. The analysis builds upon the author's experience in implementing client-side geospatial Web services.
20. Youn, Taek-Young, Nam-Su Jho, Keonwoo Kim, Ku-Young Chang, and Ki-Woong Park. "Locked Deduplication of Encrypted Data to Counter Identification Attacks in Cloud Storage Platforms." Energies 13, no. 11 (May 29, 2020): 2742. http://dx.doi.org/10.3390/en13112742.

Abstract:
Deduplication of encrypted data is a significant function for both the privacy of stored data and efficient storage management. Several deduplication techniques have been designed to provide improved security or efficiency. In this study, we focus on the client-side deduplication technique, which has more advantages than the server-side deduplication technique, particularly in communication overhead, owing to conditional data transmissions. From a security perspective, poison, dictionary, and identification attacks are considered as threats against client-side deduplication. Unfortunately, in contrast to other attacks, identification attacks and the corresponding countermeasures have not been studied in depth. In identification attacks, an adversary tries to identify the existence of a specific file. Identification attacks should be countered because adversaries can use the attacks to break the privacy of the data owner. Therefore, in the literature, some counter-based countermeasures have been proposed as temporary remedies for such attacks. In this paper, we present an analysis of the security features of deduplication techniques against identification attacks and show that the lack of security of the techniques can be eliminated by providing uncertainness to the conditional responses in the deduplication protocol, which are based on the existence of files. We also present a concrete countermeasure, called the time-locked deduplication technique, which can provide uncertainness to the conditional responses by withholding the operation of the deduplication functionality until a predefined time. An additional cost for locking is incurred only when the file to be stored does not already exist in the server’s storage. Therefore, our technique can improve the security of client-side deduplication against identification attacks at almost the same cost as existing techniques, except in the case of files uploaded for the first time.
21. Li, Shanshan, Chunxiang Xu, and Yuan Zhang. "CSED: Client-Side encrypted deduplication scheme based on proofs of ownership for cloud storage." Journal of Information Security and Applications 46 (June 2019): 250–58. http://dx.doi.org/10.1016/j.jisa.2019.03.015.

22. Wang, Shuang, Xiaoqian Jiang, Feng Chen, Lijuan Cui, and Samuel Cheng. "Streamlined Genome Sequence Compression using Distributed Source Coding." Cancer Informatics 13s1 (January 2014): CIN.S13879. http://dx.doi.org/10.4137/cin.s13879.

Abstract:
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS).
23. Cheng, Wen, Chunyan Li, Lingfang Zeng, Yingjin Qian, Xi Li, and André Brinkmann. "NVMM-Oriented Hierarchical Persistent Client Caching for Lustre." ACM Transactions on Storage 17, no. 1 (February 2, 2021): 1–22. http://dx.doi.org/10.1145/3404190.

Abstract:
In high-performance computing (HPC), data and metadata are stored on special server nodes and client applications access the servers’ data and metadata through a network, which induces network latencies and resource contention. These server nodes are typically equipped with (slow) magnetic disks, while the client nodes store temporary data on fast SSDs or even on non-volatile main memory (NVMM). Therefore, the full potential of parallel file systems can only be reached if fast client side storage devices are included into the overall storage architecture. In this article, we propose an NVMM-based hierarchical persistent client cache for the Lustre file system (NVMM-LPCC for short). NVMM-LPCC implements two caching modes: a read and write mode (RW-NVMM-LPCC for short) and a read only mode (RO-NVMM-LPCC for short). NVMM-LPCC integrates with the Lustre Hierarchical Storage Management (HSM) solution and the Lustre layout lock mechanism to provide consistent persistent caching services for I/O applications running on client nodes, meanwhile maintaining a global unified namespace of the entire Lustre file system. The evaluation results presented in this article show that NVMM-LPCC can increase the average read throughput by up to 35.80 times and the average write throughput by up to 9.83 times compared with the native Lustre system, while providing excellent scalability.
24. Yeo, Hui-Shyong, Xiao-Shen Phang, Hoon-Jae Lee, and Hyotaek Lim. "Leveraging client-side storage techniques for enhanced use of multiple consumer cloud storage services on resource-constrained mobile devices." Journal of Network and Computer Applications 43 (August 2014): 142–56. http://dx.doi.org/10.1016/j.jnca.2014.04.006.

25. Kim, Cheiyol, Youngchul Kim, Youngchang Kim, Sangmin Lee, Youngkyun Kim, and Daewha Seo. "Performance Enhancement of Distributed File System as Virtual Desktop Storage Using Client Side SSD Cache." KIPS Transactions on Computer and Communication Systems 3, no. 12 (December 31, 2014): 433–42. http://dx.doi.org/10.3745/ktccs.2014.3.12.433.

26. Simon, Michal, and Andrew Hanushevsky. "Exploring the virtues of XRootD5: Declarative API." EPJ Web of Conferences 251 (2021): 02063. http://dx.doi.org/10.1051/epjconf/202125102063.

Abstract:
Across the years, as the backbone of numerous data management solutions used within the WLCG collaboration, the XRootD framework and protocol became one of the most important building blocks for storage solutions in the High Energy Physics (HEP) community. The latest big milestone for the project, release 5, introduced a multitude of architectural improvements and functional enhancements, including the new client-side declarative API, which is the main focus of this study. In this contribution, we give an overview of the new client API and discuss its motivation and its positive impact on overall software quality (coupling, cohesion), readability, and composability.
27. Rao, Lu, Tengfei Tu, Hua Zhang, Qiaoyan Wen, and Jia Xiao. "Dynamic Outsourced Proofs of Retrievability Enabling Auditing Migration for Remote Storage Security." Wireless Communications and Mobile Computing 2018 (2018): 1–19. http://dx.doi.org/10.1155/2018/4186243.

Abstract:
Remote data auditing service is important for mobile clients to guarantee the intactness of their outsourced data stored at cloud side. To relieve mobile client from the nonnegligible burden incurred by performing the frequent data auditing, more and more literatures propose that the execution of such data auditing should be migrated from mobile client to third-party auditor (TPA). However, existing public auditing schemes always assume that TPA is reliable, which is the potential risk for outsourced data security. Although Outsourced Proofs of Retrievability (OPOR) have been proposed to further protect against the malicious TPA and collusion among any two entities, the original OPOR scheme applies only to the static data, which is the limitation that should be solved for enabling data dynamics. In this paper, we design a novel authenticated data structure called bv23Tree, which enables client to batch-verify the indices and values of any number of appointed leaves all at once for efficiency. By utilizing bv23Tree and a hierarchical storage structure, we present the first solution for Dynamic OPOR (DOPOR), which extends the OPOR model to support dynamic updates of the outsourced data. Extensive security and performance analyses show the reliability and effectiveness of our proposed scheme.
28. Yang, Chao, Mingyue Zhang, Qi Jiang, Junwei Zhang, Danping Li, Jianfeng Ma, and Jian Ren. "Zero knowledge based client side deduplication for encrypted files of secure cloud storage in smart cities." Pervasive and Mobile Computing 41 (October 2017): 243–58. http://dx.doi.org/10.1016/j.pmcj.2017.03.014.

29. Periasamy, J. K., and B. Latha. "Secure and duplication detection in cloud using cryptographic hashing method." International Journal of Engineering & Technology 7, no. 1.7 (February 5, 2018): 105. http://dx.doi.org/10.14419/ijet.v7i1.7.9585.

Abstract:
De-duplication systems adopt strategies such as client-side or server-side de-duplication. In particular, since the beginning of cloud storage, data de-duplication techniques have added value by reducing the volume of data stored in the cloud. This motivates enterprises and organizations to outsource data storage to cloud service providers, as several case studies have shown. Block-level de-duplication discovers and removes redundancies by comparison with previously stored information. A file is divided into smaller segments of either fixed or variable size, as the system requires. Using a predetermined block size simplifies the computation of block boundaries, while variable-size blocks provide improved de-duplication. Secure Cloud introduces a new concept for auditing entities with the support of a MapReduce cloud, which helps the client generate data tags before uploading and audits the integrity of the data stored in the cloud. In previously completed work, the computational load of tag generation at the user or auditor side is large. In addition, Secure Cloud also enables secure de-duplication. Note that the "validity" considered in Secure Cloud is the prevention of leakage of side-channel information. To check the leakage of such side-channel data, the work adopts and designs a proof-of-ownership protocol between cloud servers and clients, which permits clients to prove to the cloud servers that they indeed own the object data.
30. Zhang, Wen Jun, Ai Min Yang, and Yi Liu. "Orthodontic EMR Cloud Based on 2-Tier Cloud Architecture." Applied Mechanics and Materials 50-51 (February 2011): 812–17. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.812.

Abstract:
Now for orthodontists there are no commercial orthodontic EMR systems suitable for their clinical needs in China, so we study orthodontist’s daily workflow, analyze the requirements, and finally develop the orthodontic EMR Cloud for Peking University School of Stomatology. We propose and adopt 2-Tier Cloud-ARchitecture (2TCAR), which contains rich client tier based on Rich Internet Application (RIA) and server-side Cloud tier based on SimpleDB, to develop the orthodontist’s EMR Cloud for orthodontics according to orthodontist’s workflow. In the 2TCAR the rich client tier is maximized to implement almost all functionalities of user interfaces and transaction logic in the EMR. Functionalities in server-side Cloud tier are simplified only to implement data storage and query. Communication between the two Cloud tiers is also simplified via REST. In the article we research corresponding technologies such as Cloud computing, REST, Flex in RIA and SimpleDB Cloud. Further, in the orthodontic EMR Cloud we use Flex to implement UI presentation & interaction, transaction logic, and REST requests & responses in rich client tier; then design SimpleDB Cloud in server-side Cloud tier, and communication between two tiers via REST. And the EMR Cloud is integrated with existing Resister, Ward and Drugstore information systems. The practice shows that the orthodontic EMR Cloud based on the 2TCAR can fit seamlessly into orthodontist’s daily workflow and effectively replace current paper medical records.
31. Xu, Haiping, and Deepti Bhalerao. "Reliable and Secure Distributed Cloud Data Storage Using Reed-Solomon Codes." International Journal of Software Engineering and Knowledge Engineering 25, no. 09n10 (November 2015): 1611–32. http://dx.doi.org/10.1142/s0218194015400355.

Abstract:
Despite the popularity and many advantages of using cloud data storage, there are still major concerns about the data stored in the cloud, such as security, reliability and confidentiality. In this paper, we propose a reliable and secure distributed cloud data storage schema using Reed-Solomon codes. Different from existing approaches to achieving data reliability with redundancy at the server side, our proposed mechanism relies on multiple cloud service providers (CSP), and protects users’ cloud data from the client side. In our approach, we view multiple cloud-based storage services as virtual independent disks for storing redundant data encoded with erasure codes. Since each CSP has no access to a user’s complete data, the data stored in the cloud would not be easily compromised. Furthermore, the failure or disconnection of a CSP will not result in the loss of a user’s data as the missing data pieces can be readily recovered. To demonstrate the feasibility of our approach, we developed a prototype distributed cloud data storage application using three major CSPs. The experimental results show that, besides the reliability and security related benefits of our approach, the application outperforms each individual CSP for uploading and downloading files.
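The client-side redundancy idea in this entry can be approximated with single-parity XOR, a deliberately simplified stand-in for Reed-Solomon codes (real RS coding tolerates more than one lost provider). In the sketch below, `make_shares` and `recover` are names invented for illustration: the client splits a file into k shares plus one parity share, stores each with a different provider, and can rebuild any single missing share.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(data: bytes, k: int = 3):
    """Client-side encoding: k data shares plus one XOR parity share,
    each destined for a different cloud storage provider."""
    data += b"\x00" * ((-len(data)) % k)       # pad to a multiple of k
    n = len(data) // k
    shares = [data[i * n:(i + 1) * n] for i in range(k)]
    parity = shares[0]
    for s in shares[1:]:
        parity = xor(parity, s)
    return shares + [parity]

def recover(shares):
    """Rebuild the single missing share (marked None) from the others."""
    missing = shares.index(None)
    rest = [s for s in shares if s is not None]
    rebuilt = rest[0]
    for s in rest[1:]:
        rebuilt = xor(rebuilt, s)
    out = list(shares)
    out[missing] = rebuilt
    return out

shares = make_shares(b"client-side redundancy demo")
shares[1] = None                               # one provider goes offline
restored = recover(shares)
assert b"".join(restored[:3]).rstrip(b"\x00") == b"client-side redundancy demo"
```

With k + 1 providers, no single provider holds the complete file, and the loss of any one provider is recoverable; Reed-Solomon codes generalize this to tolerate m simultaneous failures at the cost of m parity shares.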
32. Park, Cheolhee, Hyunil Kim, Dowon Hong, and Changho Seo. "A Symmetric Key Based Deduplicatable Proof of Storage for Encrypted Data in Cloud Storage Environments." Security and Communication Networks 2018 (November 1, 2018): 1–16. http://dx.doi.org/10.1155/2018/2193897.

Abstract:
Over the recent years, cloud storage services have become increasingly popular, where users can outsource data and access the outsourced data anywhere, anytime. Accordingly, the data in the cloud is growing explosively. Among the outsourced data, most of them are duplicated. Cloud storage service providers can save huge amounts of resources via client-side deduplication. On the other hand, for safe outsourcing, clients who use the cloud storage service desire data integrity and confidentiality of the outsourced data. However, ensuring confidentiality and integrity in the cloud storage environment can be difficult. Recently, in order to achieve integrity with deduplication, the notion of deduplicatable proof of storage has emerged, and various schemes have been proposed. However, previous schemes are still inefficient and insecure. In this paper, we propose a symmetric key based deduplicatable proof of storage scheme, which ensures confidentiality with dictionary attack resilience and supports integrity auditing based on symmetric key cryptography. In our proposal, we introduce a bit-level challenge in a deduplicatable proof of storage protocol to minimize data access. In addition, we prove the security of our proposal in the random oracle model with information theory. Implementation results show that our scheme has the best performance.
33. Wang, Guan, Qiang Liu, Jun Zhou, and Jian Zhong Chen. "A Multi-Factors Identity Authentication Scheme in Classified Environment." Advanced Materials Research 765-767 (September 2013): 1734–38. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.1734.

Abstract:
Since traditional methods of identity authentication cannot attest to safety from one platform to another, they cannot be applied in a classified environment. This paper proposes a multi-factor identity authentication scheme for network storage in a classified environment. The scheme installs a TCM (Trusted Cryptograph Module) chip on the client as well as on the authentication server, and makes full use of the features of the TCM to ensure that the platforms on both sides and the entire identity authentication process can be trusted.
34

Shenbaga Bharatha Priya, A., J. Ganesh und Mareeswari M. Devi. „Dynamic Load Rebalancing Algorithm for Private Cloud“. Applied Mechanics and Materials 573 (Juni 2014): 556–59. http://dx.doi.org/10.4028/www.scientific.net/amm.573.556.

Abstract:
Infrastructure-as-a-Service (IaaS) provides the environment underlying any type of cloud. In a distributed file system (DFS), nodes simultaneously serve computing and storage functions, that is, parallel data processing and storage in the cloud. Here, a file is considered a load. Each file is partitioned into a number of file chunks (FC) allocated to distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. Files and nodes can be dynamically created, deleted, and added. This results in load imbalance in the distributed file system; that is, the file chunks are not distributed as uniformly as possible among the chunk servers (CS). Emerging distributed file systems in production strongly depend on a central node for chunk reallocation, or on distributed nodes that maintain global knowledge of all chunks. This dependence is clearly inadequate in a large-scale, failure-prone environment: the central load balancer is put under a workload that scales linearly with system size, and it may thus become a performance bottleneck and a single point of failure, with memory wastage in the distributed nodes. We therefore enhance the client-side module with a server-side module to create, delete, and update file chunks, manage the overall private cloud, and apply a dynamic load-balancing algorithm to perform auto-scaling in the private cloud. In this project, a fully distributed load-rebalancing algorithm is presented to cope with the load-imbalance problem.
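The load-imbalance problem in this abstract is easiest to see in miniature: chunk counts per server should differ by at most one. Below is a centralized toy rebalancer (hypothetical names; note that the paper's contribution is precisely that its algorithm is fully distributed, which this sketch is not):

```python
def rebalance(chunks_by_server):
    """Redistribute file chunks so every chunk server ends up with
    floor(n/k) or ceil(n/k) chunks (n chunks total, k servers)."""
    servers = sorted(chunks_by_server)
    n = sum(len(v) for v in chunks_by_server.values())
    base, extra = divmod(n, len(servers))
    targets = {s: base + (1 if i < extra else 0) for i, s in enumerate(servers)}
    result = {s: list(chunks_by_server[s]) for s in servers}
    surplus = []                      # chunks shed by overloaded servers
    for s in servers:
        while len(result[s]) > targets[s]:
            surplus.append(result[s].pop())
    for s in servers:                 # refill underloaded servers
        while len(result[s]) < targets[s]:
            result[s].append(surplus.pop())
    return result

balanced = rebalance({"a": ["c1", "c2", "c3", "c4"], "b": [], "c": ["c5"]})
assert sorted(len(v) for v in balanced.values()) == [1, 2, 2]
```

A distributed variant would let each node compare its own load against an estimate of the global average and exchange chunks with peers, avoiding the central coordinator this sketch relies on.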
35

Yan, Xiaoyan, Qilin Wu und Youming Sun. „A Homomorphic Encryption and Privacy Protection Method Based on Blockchain and Edge Computing“. Wireless Communications and Mobile Computing 2020 (18.08.2020): 1–9. http://dx.doi.org/10.1155/2020/8832341.

Abstract:
With its decentralization, reliable database, security, and quasi-anonymity, blockchain provides a new solution for data storage and sharing as well as privacy protection. This paper combines the advantages of blockchain and edge computing and constructs key technology solutions for edge computing based on blockchain. On the one hand, it achieves security protection and integrity checking of cloud data; on the other hand, it realizes more extensive secure multiparty computation. To preserve the operating efficiency of the blockchain and alleviate the computational burden on the client, it also introduces the Paillier cryptosystem, which supports additive homomorphism: the task-executing side encrypts all data, while the edge node processes the received ciphertexts and returns the ciphertext of the final result to the client. Simulation experiments show that the proposed algorithm is effective and feasible.
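Additive homomorphism, as used in the scheme above, means that multiplying two Paillier ciphertexts yields an encryption of the sum of their plaintexts, so an edge node can aggregate values it cannot read. A self-contained textbook sketch (toy 7-bit primes purely for illustration; real deployments use moduli of 2048 bits or more):

```python
import math
import random

def keygen(p=101, q=103):
    # Toy primes for illustration only; never use parameters this small.
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)          # valid because we pick g = n + 1
    return n, (lam, mu, n)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:    # r must be a unit modulo n
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    # L(c^lam mod n^2) * mu mod n, with L(x) = (x - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

n, priv = keygen()
c1, c2 = encrypt(n, 17), encrypt(n, 25)
# Homomorphic addition: the ciphertext product decrypts to the plaintext sum.
assert decrypt(priv, (c1 * c2) % (n * n)) == 42
```

This is the standard simplified variant with generator g = n + 1; `pow(lam, -1, n)` for the modular inverse requires Python 3.8+.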
36

Arunambika T. und Senthil Vadivu P. „OCEDS“. International Journal of Distributed Systems and Technologies 12, Nr. 3 (Juli 2021): 48–63. http://dx.doi.org/10.4018/ijdst.2021070103.

Abstract:
Many organizations must handle massive quantities of data. The rapid growth of data leads to demand for large new storage space, and it is impossible to store such bulk data individually; data growth thus compels organizations to search for novel, cost-efficient storage. In cloud computing, reducing execution cost and reducing storage cost are two of several open problems. This work proposes an optimal cost-effective data storage (OCEDS) algorithm for cloud data centres to deal with this problem. Storing the entire database either in the cloud or on the cloud client is not the best approach, as it raises processing costs for both the customer and the cloud service provider (CSP). Execution and storage cost optimization is achieved through the proposed OCEDS algorithm. CSPs offer their clients profit-maximizing services, while clients want to reduce their expenses. Previous works concentrated on only one side of cost optimization (the CSP's or the consumer's point of view), but OCEDS reduces execution and storage costs on both sides.
37

Chen, Shang Liang, Ying Han Hsiao, Yun Yao Chen und You Chen Lin. „A Cloud Multi-Tenant Architecture for Developing Machine Remote Monitoring Systems“. Applied Mechanics and Materials 764-765 (Mai 2015): 775–78. http://dx.doi.org/10.4028/www.scientific.net/amm.764-765.775.

Abstract:
This study proposes an innovative multi-tenant remote monitoring platform architecture based on the cloud, with an injection machine manufacturer acting as the cloud data center. The study develops the machine connection mechanism, monitoring module, and related components together with machine manufacturers. Under this architecture, machine manufacturers can provide virtualization-based remote monitoring systems with which manufacturing buyers can rapidly develop custom monitoring software. All data storage devices such as servers are provided by the machine manufacturer, and the client side can effectively manage injection machine data by simply renting virtual machine space.
38

Myneni, Madhu Bala, L. V. Narasimha Prasad und D. Naveen Kumar. „Intelligent Hybrid Cloud Data Hosting Services with Effective Cost and High Availability“. International Journal of Electrical and Computer Engineering (IJECE) 7, Nr. 4 (01.08.2017): 2176. http://dx.doi.org/10.11591/ijece.v7i4.pp2176-2182.

Abstract:
<p>This paper concentrates on an efficient, user-oriented data hosting service for the hybrid cloud. It provides a friendly transaction scheme that is cost effective and highly available to all users. The framework intelligently places data into the cloud with effective cost and high availability, and it provides a proof of data integrity that the client can use to check the correctness of his data. In this study the major cloud storage vendors in India are considered, along with parameters such as storage space, cost of storage, outgoing bandwidth, and type of transition mode. Based on the available knowledge of these parameters for existing cloud service providers in India, the intelligent hybrid cloud data hosting framework assures customers of low cost and high availability with a mode of transition. It guarantees that the storage required at the customer side is negligible, which is helpful for customers.</p>
39

Al-Saleh, Kholoud, und Abdelfettah Belghith. „Locality Aware Path ORAM: Implementation, Experimentation and Analytical Modeling“. Computers 7, Nr. 4 (29.10.2018): 56. http://dx.doi.org/10.3390/computers7040056.

Abstract:
In this paper, we propose an advanced implementation of Path ORAM to hide the access pattern to data outsourced to the cloud. This implementation takes advantage of data locality and popularity by introducing a small amount of extra storage at the client side. Two replacement strategies are used to manage this extra storage (cache): Least Recently Used (LRU) and Least Frequently Used (LFU). Using the same test bed, the conducted experiments clearly show the superiority of the advanced implementation over the traditional Path ORAM implementation, even for a small cache size and reduced data locality. We then present a mathematical model that provides closed-form solutions when data requests follow a Zipf distribution with non-null parameter. This model is shown to have a small, acceptable relative error and is well validated by the experimental results.
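The client-side cache described in this abstract is easy to picture: a hit serves the block locally and skips an expensive ORAM path access entirely. A minimal LRU sketch of that idea (hypothetical names; the ORAM access itself is stubbed out, and the paper's LFU variant is analogous with a frequency counter instead of recency order):

```python
from collections import OrderedDict

class LRUCache:
    """Small client-side cache; a hit avoids a full (expensive) ORAM access."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # block_id -> data, least recent first
        self.hits = self.misses = 0

    def get(self, block_id, fetch):
        if block_id in self.entries:
            self.entries.move_to_end(block_id)   # refresh recency on a hit
            self.hits += 1
            return self.entries[block_id]
        self.misses += 1
        value = fetch(block_id)                  # full Path ORAM read + write-back
        self.entries[block_id] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict the least recently used
        return value

oram_reads = []                       # records simulated tree-path accesses
def oram_fetch(block_id):
    oram_reads.append(block_id)       # stands in for a costly O(log N) path read
    return "data-%d" % block_id

cache = LRUCache(capacity=2)
for block_id in [1, 2, 1, 1, 3, 2]:  # locality: block 1 is requested repeatedly
    cache.get(block_id, oram_fetch)
assert cache.hits == 2 and cache.misses == 4    # two path accesses saved
```

With higher locality (a more skewed Zipf parameter), the hit rate rises and more path accesses are saved, which matches the trend the paper models analytically.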
40

Jovanovic, Zeljko. „Data stream management system for moving sensor object data“. Serbian Journal of Electrical Engineering 12, Nr. 1 (2015): 117–27. http://dx.doi.org/10.2298/sjee1501117j.

Abstract:
Sensor and communication development has led to new types of applications. Classic database storage becomes inadequate when data streams arrive from multiple sensors: querying and result presentation are inefficient, the desired results are obtained with a delay, and the database fills with a large amount of unnecessary data. To adequately support such applications, Data Stream Management System (DSMS) applications are needed. DSMSs provide real-time data stream processing. In this paper, a client-server system is presented, with the DSMS realized on the server side by the Java WebDSMS application. WebDSMS functionality is tested with simulated data and in real-life usage.
41

Beckham, Olly, Gord Oldman, Julie Karrie und Dorth Craig. „Techniques used to formulate confidential data by means of fragmentation and hybrid encryption“. International research journal of management, IT and social sciences 6, Nr. 6 (15.10.2019): 68–86. http://dx.doi.org/10.21744/irjmis.v6n6.766.

Abstract:
Cloud computing is shifting the approach by which computing resources are deployed and purchased. Even though the cloud has a capable, elastic, and consistent design, several security concerns restrain customers from completely accepting this novel technology and moving from traditional computing to cloud computing. In this article, we present a novel architectural model for offering protection across numerous cloud service providers, with the intention of devising and extending security means for cloud computing. We present a two-tier architecture for security in multi-clouds: one tier at the client side, the other at the server side. The article presents a security governance outline for multi-clouds and supports security needs such as confidentiality, integrity, availability, authorization, and non-repudiation for cloud storage. We propose HBDaSeC, a secure-computation protocol to ease the challenges of enforcing the protection of data for information security in the cloud.
42

You, Weijing, Lei Lei, Bo Chen und Limin Liu. „What If Keys Are Leaked? towards Practical and Secure Re-Encryption in Deduplication-Based Cloud Storage“. Information 12, Nr. 4 (26.03.2021): 142. http://dx.doi.org/10.3390/info12040142.

Abstract:
By storing only a unique copy of duplicate data possessed by different data owners, deduplication can significantly reduce storage cost, and hence is used broadly in public clouds. When combined with confidentiality, deduplication becomes problematic: encryption performed by different data owners may differentiate identical data, which then ceases to be deduplicable. Message-Locked Encryption (MLE) is thus utilized to derive the same encryption key for identical data, so that the encrypted data are still deduplicable after being encrypted by different data owners. As keys may be leaked over time, re-encrypting outsourced data is of paramount importance to ensure continuous confidentiality; this, however, has not been well addressed in the literature. In this paper, we design SEDER, a SEcure client-side Deduplication system enabling Efficient Re-encryption for cloud storage, by (1) leveraging the all-or-nothing transform (AONT), (2) designing a new delegated re-encryption (DRE), and (3) proposing a new proof of ownership scheme for encrypted cloud data (PoWC). Security analysis and experimental evaluation validate the security and efficiency of SEDER, respectively.
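The MLE idea referenced above is simple to demonstrate: derive the key from the message itself, so identical plaintexts encrypt to identical ciphertexts and remain deduplicable. A toy convergent-encryption sketch (a SHA-256 XOR keystream stands in for a real block cipher; this construction is for illustration only and must not be used in production):

```python
import hashlib

def mle_key(message):
    # Message-locked: the key is derived from the message itself, so every
    # owner of the same file derives the same key independently.
    return hashlib.sha256(message).digest()

def keystream(key, length):
    """Expand the key into a deterministic pseudorandom pad (counter hashing)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def mle_encrypt(message):
    pad = keystream(mle_key(message), len(message))
    return bytes(a ^ b for a, b in zip(message, pad))

# Identical plaintexts yield identical ciphertexts, so server-side
# deduplication still works even though the data is encrypted.
assert mle_encrypt(b"shared file") == mle_encrypt(b"shared file")
```

The well-known trade-off, and the reason schemes like the one above add dictionary-attack protections, is that anyone who can guess a plaintext can recompute its key and recognize its ciphertext.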
43

Gatuha, George, und Tao Jiang. „Android Based Naive Bayes Probabilistic Detection Model for Breast Cancer and Mobile Cloud Computing: Design and Implementation“. International Journal of Engineering Research in Africa 21 (Dezember 2015): 197–208. http://dx.doi.org/10.4028/www.scientific.net/jera.21.197.

Abstract:
Mobile phone technology initiatives are revolutionizing healthcare delivery in Africa and other developing countries. M-health services have transformed maternal health, management of communicable diseases such as Ebola, and prevention of chronic diseases. Technological innovations in m-health have improved healthcare efficiency and effectiveness as well as extending health services to remote locations in rural African communities. This paper describes a ubiquitous m-health system based on the user-centric paradigm of Mobile Cloud Computing (MCC) and Android medical data mining techniques. The development of ultra-fast 4G mobile networks and sophisticated smartphones and tablets has brought the cloud computing paradigm to the mobile domain. The system's client side is based on an Android platform for breast bio-data collection, with a data mining technique based on the Naïve Bayes probabilistic classifier (NBC) algorithm for predicting malignancy in breast tissue, and server-side MCC data storage. Experimental results indicate that the Android Naïve Bayes classifier achieves 96.4% accuracy on Wisconsin Breast Cancer (WBC) data from the UCI machine learning database.
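A Naïve Bayes classifier like the one evaluated above combines a class prior with per-feature likelihoods and picks the highest-scoring class. A minimal Gaussian variant on made-up two-feature data (hypothetical names and toy values, not the WBC dataset; the cited paper's exact feature handling may differ):

```python
import math
from collections import defaultdict

def train_gnb(samples, labels):
    """Fit per-class feature means/variances plus class priors."""
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    model = {}
    for y, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9   # variance smoothing
                 for col, m in zip(zip(*rows), means)]
        model[y] = (n / len(samples), means, varis)
    return model

def predict_gnb(model, x):
    """Pick the class maximizing log prior + sum of log Gaussian likelihoods."""
    def score(prior, means, varis):
        s = math.log(prior)
        for v, m, var in zip(x, means, varis):
            s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return s
    return max(model, key=lambda y: score(*model[y]))

# Toy data in the spirit of the WBC features (low values ~ benign, high ~ malignant).
X = [[1, 2], [2, 1], [2, 2], [8, 9], [9, 8], [8, 8]]
y = ["benign", "benign", "benign", "malignant", "malignant", "malignant"]
model = train_gnb(X, y)
assert predict_gnb(model, [1, 1]) == "benign"
assert predict_gnb(model, [9, 9]) == "malignant"
```

Working in log space avoids underflow when many feature likelihoods are multiplied, which matters once the nine WBC-style features are all in play.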
44

Satrya, Gandeva Bayu, und A. Ahmad Nasrullah. „Analisis Forensik Android: Artefak pada Aplikasi Penyimpanan Awan Box“. Jurnal Teknologi Informasi dan Ilmu Komputer 7, Nr. 3 (22.05.2020): 521. http://dx.doi.org/10.25126/jtiik.2020732220.

Abstract:
<p>Storing files in a cloud has many advantages, such as the ability to access them from any location and to keep backups of those files on computers and smartphones. There are many choices for cloud storage services, such as Dropbox, Microsoft OneDrive, Google Drive, and Box. Of these, Box is the only cloud storage service that guarantees uptime reliability 99.99% of the time. At first, Box was intended for business use only, but now it is also freely available for public use. Growth in cloud storage technology use has also resulted in increased opportunities for cybercrime to take place. Digital forensics is the latest solution for system and network security observers, while mobile forensics is a development of digital forensics that is fully focused on smartphone media. Mobile forensics can be performed on both the server and client sides. In this research, mobile forensics was performed on the client side. The case study in this paper focused on an Android operating system (OS) smartphone using Box cloud storage. The purpose of this study was to provide a mobile forensics method for finding artifacts on smartphones that have a Box application installed.</p>
45

EL-SANA, JIHAD, und NETA SOKOLOVSKY. „VIEW-DEPENDENT RENDERING FOR LARGE POLYGONAL MODELS OVER NETWORKS“. International Journal of Image and Graphics 03, Nr. 02 (April 2003): 265–90. http://dx.doi.org/10.1142/s0219467803001007.

Abstract:
In this paper we present a novel approach that enables the rendering of large shared datasets at interactive rates using inexpensive workstations. Our algorithm is based on view-dependent rendering and client-server technology: servers host large datasets and manage the selection of the various levels of detail, while clients receive blocks of update operations which are used to generate the appropriate level of detail incrementally. We assume that servers are capable machines in terms of storage capacity and computational power, while clients are inexpensive workstations with limited 3D rendering capabilities. For optimization purposes we have developed two similar approaches, one for local area networks and the other for wide area networks; in the second we made several changes to adapt to the limitations of wide area networks. To avoid network latency we developed two mechanisms that cache the adapt-operation blocks on the clients' side and predict clients' future view parameters based on their recent behavior. Our approach dramatically reduces the amount of memory used by each client and by the entire computing system, since the dataset is stored only once in the local memory of the server. In addition, it decreases the load on the network as a result of the incremental updates contributed by view-dependent rendering.
46

Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart und Torsten Sattler. „Large-scale, real-time visual–inertial localization revisited“. International Journal of Robotics Research 39, Nr. 9 (07.07.2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
47

Mo, Jiaqing, Zhongwang Hu, Hang Chen und Wei Shen. „An Efficient and Provably Secure Anonymous User Authentication and Key Agreement for Mobile Cloud Computing“. Wireless Communications and Mobile Computing 2019 (04.02.2019): 1–12. http://dx.doi.org/10.1155/2019/4520685.

Abstract:
Nowadays, due to the rapid development and wide deployment of handheld mobile devices, mobile users have begun to use their devices to save resources, access services, and run applications that are stored, deployed, and implemented in cloud computing, which offers huge storage space and massive computing capability. However, the wireless channel is insecure and vulnerable to various attacks that pose a great threat to the transmission of sensitive data. Thus, the security mechanism by which mobile devices and a remote cloud server authenticate each other to create a secure session in the mobile cloud computing environment has aroused the interest of researchers. In this paper, we propose an efficient and provably secure anonymous two-factor user authentication protocol for the mobile cloud computing environment. The proposed scheme not only provides mutual authentication between mobile devices and cloud computing but also fulfills the known security evaluation criteria. Moreover, the use of ECC in our scheme reduces the computing cost for mobile devices, which are limited in computation capability and battery energy. In addition, a formal security proof shows that the proposed scheme is secure under the random oracle model. Security analysis and performance comparisons indicate that the proposed scheme has reasonable computation cost and communication overhead at both the mobile client side and the server side, and is more efficient and more secure than related competitive works.
48

Ježek, Jan. „Data Architecture for Sensor Network“. Geoinformatics FCE CTU 7 (29.12.2011): 31–38. http://dx.doi.org/10.14311/gi.7.3.

Abstract:
Fast hardware development in recent years has made simple sensing devices highly available at minimal cost. As a consequence, there are many sensor networks nowadays. These networks can continuously produce a large amount of observed data, including the location of measurement. An optimal data architecture for such a purpose is a challenging issue due to the large scale and spatio-temporal nature of the data. The aim of this paper is to describe the data architecture used in a particular solution for the storage of sensor data. This solution is based on the relational data model, concretely PostgreSQL and PostGIS. We mention our experience from real-world projects focused on car monitoring and on agricultural sensor networks. We also briefly demonstrate the possibilities of the client-side API and the potential of other open-source libraries that can be used for cartographic visualization (e.g., GeoServer). The main objective is to describe the strengths and weaknesses of using a relational database system for such a purpose and to introduce alternative approaches based on the NoSQL concept.
49

Rudikova, L. V., und O. R. Myslivec. „About a concept of creating a social network users information aggregation and data processing system“. «System analysis and applied information science», Nr. 4 (06.02.2019): 65–72. http://dx.doi.org/10.21122/2309-4923-2018-4-65-72.

Abstract:
The development of a general concept and implementation of a data storage and analysis system for practice-oriented data, one of whose subsystems is an analytical system for the accumulation and analysis of data from users of social networks, is topical. Data that users leave about themselves in social networks can be useful in solving various tasks. The proposed article describes the subject area associated with the collection and storage of data from users of social networks. Proceeding from the subject area, a general architecture for the universal data collection and storage system is proposed, based on the client-server architecture. For the server side of the system, a fragment of the data model associated with the accumulation of data from external sources is provided, and the framework of the system architecture is described. The developed universal system is based on data warehousing technology and has the following aspects: an expandable, complex subject area; integration of stored data coming from various sources; invariance of stored data in time, with mandatory labels; relatively high data stability; the search for a necessary trade-off in data redundancy; modularity of individual system units; flexibility and extensibility of the architecture; and high security requirements for vulnerable data. The proposed system organizes the process of collecting data and filling the database from external sources. To do this, the system has a module for collecting and converting information from third-party Internet sources and sending it to the database. The system is intended for various users interested in analyzing the data of social network users.
50

Nikiel, Sławomir. „A Proposition of Mobile Fractal Image Decompression“. International Journal of Applied Mathematics and Computer Science 17, Nr. 1 (01.03.2007): 129–36. http://dx.doi.org/10.2478/v10006-007-0012-5.

Abstract:
Multimedia are becoming one of the most important elements of the user interface with regard to the acceptance of modern mobile devices. The multimodal content delivered to and available for a wide range of mobile telephony terminals is indispensable to bind users to a system and its services. Currently available mobile devices are equipped with multimedia capabilities and decent processing power and storage. The most crucial factors are then the bandwidth and cost of media transfer. This is particularly visible in mobile gaming, where textures represent the bulk of binary data to be acquired from the content provider. Image textures have traditionally added visual realism to computer graphics, and realism increases with texture resolution. This represents a challenge to the limited bandwidth of mobile-oriented systems, even more so in mobile gaming, where a single image depicts a collection of shots or animation cycles for sprites and a backdrop scenery. In order to increase the efficiency of image and image-texture transfer, a fractal-based compression scheme is proposed. The main idea is to use an asymmetric server-client architecture: the resource-demanding compression process is performed on the server side, while the client part decompresses the highly packed image data. The method offers a very high compression ratio for pictures representing image textures of natural scenes. It aims to minimize the transmission bandwidth, which should speed up the downloading process and minimize the cost and time of data transfer. The paper focuses on the implementation of fractal decompression schemes suitable for most mobile devices and opens a discussion on fractal image models for limited-resource applications.
