A selection of scientific literature on the topic "Coded MapReduce"

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Coded MapReduce".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided these are available in the metadata.

Journal articles on the topic "Coded MapReduce"

1

Xiong, Hui. "Comparative Analysis of Chinese Culture and Hong Kong, Macao, and Taiwan Culture in the Field of Public Health Based on the CNN Model." Journal of Environmental and Public Health 2022 (September 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/9928040.

Full text of the source
Abstract:
In view of the large amount of information on cultural resources and the poor recommendation performance of standalone platforms, a cultural recommendation system based on the Hadoop platform, combined with a convolutional neural network (CNN), was proposed. It aims to improve the adaptability of Chinese culture and Hong Kong, Macao, and Taiwan culture. Firstly, the CNN is used to deeply encode the collected information and map it into a deep feature space. Secondly, an attention mechanism is used to focus the coded features in the deep feature space to improve the classification ability of the features. Then, the model is deployed on the distributed file system of the Hadoop platform, and the MapReduce programming model is used to parallelize the cultural resource recommendation algorithm. Finally, a recommendation simulation experiment on cultural resources is carried out; the results show that the proposed model has good recommendation performance, and it also performs well when tested on open-source data from the real public health field.
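The deployment pattern the abstract describes, scoring items against a user profile in parallel map tasks and merging results in a reduce step, can be sketched as follows. This is a toy model using cosine similarity over hand-made vectors; the paper's CNN and attention features are not reproduced here, and all names and data are ours:

```python
import heapq
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def map_score(profile, shard):
    """Map task: score one shard of item feature vectors."""
    return [(cosine(profile, vec), item) for item, vec in shard]

def reduce_top_n(scored_shards, n):
    """Reduce task: merge shard results and keep the n best items."""
    return heapq.nlargest(n, (s for shard in scored_shards for s in shard))

profile = [0.9, 0.1, 0.4]                      # hypothetical user features
shards = [[("opera", [0.8, 0.2, 0.3]), ("film", [0.1, 0.9, 0.2])],
          [("museum", [0.7, 0.0, 0.5])]]       # items split across map tasks
top = reduce_top_n([map_score(profile, s) for s in shards], 2)
print([item for _, item in top])               # ['opera', 'museum']
```

Each map task touches only its own shard, so the scoring loop parallelizes trivially; only the small per-shard top lists travel to the reducer.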
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Hanafi, Idris, and Amal Abdel-Raouf. "P-Codec: Parallel Compressed File Decompression Algorithm for Hadoop." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 15, no. 8 (May 24, 2016): 6991–98. http://dx.doi.org/10.24297/ijct.v15i8.1500.

Full text of the source
Abstract:
The increasing amount and size of data being handled by data analytic applications running on Hadoop has created a need for faster data processing. One of the effective methods for handling big data sizes is compression. Data compression not only makes network I/O processing faster, but also provides better utilization of resources. However, this approach defeats one of Hadoop’s main purposes, which is the parallelism of map and reduce tasks. The number of map tasks created is determined by the size of the file, so by compressing a large file, the number of mappers is reduced, which in turn decreases parallelism. Consequently, standard Hadoop takes longer to process the data. In this paper, we propose the design and implementation of a Parallel Compressed File Decompressor (P-Codec) that improves the performance of Hadoop when processing compressed data. P-Codec includes two modules; the first module decompresses data upon retrieval by a data node during the phase of uploading the data to the Hadoop Distributed File System (HDFS). This process reduces the runtime of a job by removing the burden of decompression during the MapReduce phase. The second P-Codec module is a decompressed map task divider that increases parallelism by dynamically changing the map task split sizes based on the size of the final decompressed block. Our experimental results using five different MapReduce benchmarks show an average improvement of approximately 80% compared to standard Hadoop.
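The parallelism loss described above follows from Hadoop's one-map-task-per-input-split rule; a toy calculation illustrates it (the function name is ours, and the 128 MiB figure is Hadoop's common default split size):

```python
import math

def num_map_tasks(file_size_bytes, split_size_bytes, splittable=True):
    """Hadoop creates roughly one map task per input split; a
    non-splittable compressed file (e.g. gzip) yields a single task."""
    if not splittable:
        return 1
    return max(1, math.ceil(file_size_bytes / split_size_bytes))

GIB = 1 << 30
MIB = 1 << 20

# A 1 GiB plain-text input with 128 MiB splits runs 8 mappers in parallel;
# the same data as one gzip file collapses to a single mapper.
print(num_map_tasks(GIB, 128 * MIB))                    # 8
print(num_map_tasks(GIB, 128 * MIB, splittable=False))  # 1
```

P-Codec's second module effectively restores the first case by re-dividing the decompressed block into splits.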
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Huang, Xiaohui, Jiabao Li, Jining Yan, and Lizhe Wang. "An adaptive geographic meshing and coding method for remote sensing data." IOP Conference Series: Earth and Environmental Science 1004, no. 1 (March 1, 2022): 012006. http://dx.doi.org/10.1088/1755-1315/1004/1/012006.

Full text of the source
Abstract:
Spatial indexing techniques, inherently data structures, are generally used in portals operated by institutions or organizations to efficiently filter Remote Sensing (RS) images according to their spatial extent, thus providing researchers with fast RS image data discovery. Specifically, space-based spatial indexing approaches are widely adopted to index RS images in distributed environments by mapping RS images in two-dimensional space into several one-dimensional spatial codes. However, current spatial indexing approaches still suffer from the boundary-objects problem, which leads to multiple spatial codes for a boundary-crossing RS image and thus degrades the performance of spatial indexes built on top of these spatial codes. To solve this problem, we propose an adaptive geographic meshing and coding method (AGMD) that combines the well-known subdivision model GeoSOT with XZ-ordering to generate only one spatial code for RS images of different spatial widths. Then, we implement the proposed method with a unified big data programming model (i.e., Apache Beam) to enable its execution on various distributed computing engines (e.g., MapReduce and Apache Spark) in distributed environments. Finally, we conduct a series of experiments on real datasets, the archived Landsat level-2 metadata collection. The results show that the proposed AGMD method performs well, with improvements in storage overhead and time cost of up to 359.7% and 58.02%, respectively.
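The one-dimensional spatial codes and the boundary-objects problem the abstract mentions can be illustrated with a plain Z-order (Morton) interleaving sketch. This is a generic stand-in, not the paper's GeoSOT/XZ-ordering scheme, and the grid size and function names are ours:

```python
def interleave16(x, y):
    """Interleave the bits of two 16-bit integers into one Z-order
    (Morton) code: a standard way to map 2-D grid cells to 1-D keys."""
    code = 0
    for i in range(16):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code

def cell_codes(min_x, min_y, max_x, max_y):
    """Codes of every grid cell an image footprint touches. A footprint
    that crosses a cell boundary gets several codes -- the boundary-objects
    problem that a single-code scheme like AGMD avoids."""
    return sorted(interleave16(x, y)
                  for x in range(min_x, max_x + 1)
                  for y in range(min_y, max_y + 1))

print(cell_codes(2, 3, 2, 3))  # fits one cell  -> one code:  [14]
print(cell_codes(1, 1, 2, 1))  # crosses a cell -> two codes: [3, 6]
```

An index keyed on such codes must store (and later deduplicate) the boundary-crossing image twice, which is exactly the overhead AGMD targets.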
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Al-Fatlawi, Ahmed Abdul Hassan, Ghassan N. Mohammed, and Israa Al Barazanchi. "Optimizing the Performance of Clouds Using Hash Codes in Apache Hadoop and Spark." Journal of Southwest Jiaotong University 54, no. 6 (2019). http://dx.doi.org/10.35741/issn.0258-2724.54.6.3.

Full text of the source
Abstract:
Hash functions are an integral part of MapReduce software, both in Apache Hadoop and in Spark. If the hash function performs badly, the load in the reduce phase will not be balanced and access times will spike. To investigate this problem further, we ran the Wordcount program with numerous different hash functions on Amazon AWS, leveraging in particular the Amazon Elastic MapReduce framework. The paper investigates general-purpose, cryptographic, checksum, and special hash functions. Through this analysis, we present the corresponding runtime results.
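The load-balancing issue the abstract studies comes from hash partitioning of map output. A minimal sketch mimicking Hadoop's default `hash(key) mod R` partitioning rule (the hash choices and all names here are our assumptions) shows how a degenerate hash skews reducer load:

```python
import zlib
from collections import Counter

def partition(keys, num_reducers, h):
    """Count how many map-output keys land on each reducer when keys
    are assigned by hash(key) mod num_reducers, as Hadoop's default
    HashPartitioner does."""
    load = Counter()
    for key in keys:
        load[h(key.encode()) % num_reducers] += 1
    return load

words = [f"word{i}" for i in range(1000)]

# A well-mixed checksum hash spreads keys evenly over four reducers...
good = partition(words, 4, zlib.crc32)

# ...while a degenerate hash (last byte only) overloads two of them.
bad = partition(words, 4, lambda b: b[-1])

print(sorted(good.values()))  # four loads, each close to 250
print(sorted(bad.values()))   # [200, 200, 300, 300]
```

The skewed reducers finish last and dominate the job's wall-clock time, which is why hash quality shows up directly in the runtime measurements.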
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Nowicki, Marek. "Comparison of sort algorithms in Hadoop and PCJ." Journal of Big Data 7, no. 1 (November 16, 2020). http://dx.doi.org/10.1186/s40537-020-00376-9.

Full text of the source
Abstract:
Sorting algorithms are among the most commonly used algorithms in computer science and modern software. Having efficient implementation of sorting is necessary for a wide spectrum of scientific applications. This paper describes the sorting algorithm written using the partitioned global address space (PGAS) model, implemented using the Parallel Computing in Java (PCJ) library. The iterative implementation description is used to outline the possible performance issues and provide means to resolve them. The key idea of the implementation is to have an efficient building block that can be easily integrated into many application codes. This paper also presents the performance comparison of the PCJ implementation with the MapReduce approach, using Apache Hadoop TeraSort implementation. The comparison serves to show that the performance of the implementation is good enough, as the PCJ implementation shows similar efficiency to the Hadoop implementation.
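TeraSort's central idea, sample-based range partitioning so that independently sorted partitions concatenate into a globally sorted result, can be sketched as follows. This is a simplified single-process model under our own naming, not the Hadoop or PCJ implementation:

```python
import random

def terasort_style(data, num_workers, sample_size=64):
    """Sample-based range partitioning, the core idea behind TeraSort:
    pick splitter keys from a random sample, route each record to the
    worker owning its range, then sort each partition independently."""
    sample = sorted(random.sample(data, min(sample_size, len(data))))
    splitters = [sample[(i + 1) * len(sample) // num_workers]
                 for i in range(num_workers - 1)]
    parts = [[] for _ in range(num_workers)]
    for x in data:
        idx = sum(x >= s for s in splitters)  # linear scan; real code bisects
        parts[idx].append(x)
    return [sorted(p) for p in parts]  # concatenation is globally sorted

random.seed(0)
data = [random.randrange(10**6) for _ in range(10000)]
parts = terasort_style(data, 4)
merged = [x for p in parts for x in p]
print(merged == sorted(data))  # True
```

Because every key in partition i is no greater than any key in partition i+1, no merge step is needed after the per-worker sorts; this is what lets both Hadoop and PCJ sort each chunk fully in parallel.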
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Coded MapReduce"

1

Usha, D., and Reshma Raman. "A Forensic Way to Find Solutions for Security Challenges in Cloudserver Through MapReduce Technique." In Advances in Social Networking and Online Communities, 330–38. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9640-1.ch021.

Full text of the source
Abstract:
Cloud computing is a large, distributed repository of user information, but it is also extensively exposed to security threats, which are an active research topic. This chapter attempts to find solutions to these security challenges through the MapReduce technique in a forensic way. Four security challenges are covered in this chapter: loss of user information during the mapping process for various reasons, such as a server shutdown that interrupts parallel or unrelated services; the velocity of attack, which enables security threats to amplify and spread quickly in the cloud; injection of malicious code; and finally, information deletion. A MapReduce and dynamic decomposition-based distributed algorithm, built with Hadoop and JavaBeans and applied in the live forensic method, is used to find a solution to the problem. MapReduce is a software framework, and live forensics is a method that attempts to discover, control, and eliminate threats in a live system environment. This chapter uses Hadoop's cloud simulation techniques, which can give live results.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Verma, Chitresh, and Rajiv Pandey. "Statistical Visualization of Big Data Through Hadoop Streaming in RStudio." In Advances in Data Mining and Database Management, 549–77. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3142-5.ch019.

Full text of the source
Abstract:
Data visualization enables visual representation of a data set for interpretation of the data in a manner meaningful from the human perspective. Statistical visualization calls for various tools, algorithms, and techniques that can support and render graphical modeling. This chapter explores the detailed features of R and RStudio. The combination of Hadoop and R for big data analytics and its data visualization is demonstrated through appropriate code snippets. The integration perspective of R and Hadoop is explained in detail with the help of a utility called the Hadoop streaming jar. The various R packages and their integration with Hadoop operations in the R environment are explained through suitable examples. The process of data streaming is presented using different readers of the Hadoop streaming package. A case-based statistical project is considered in which the data set is visualized after dual execution using Hadoop MapReduce and an R script.
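The Hadoop streaming contract the chapter relies on, executables that read lines from stdin and write tab-separated key/value lines to stdout, can be sketched in Python in place of the chapter's R scripts. This is a toy word count of our own; `sorted()` stands in for Hadoop's sort-and-shuffle between the two phases:

```python
import itertools

def map_stream(lines):
    """Mapper: emit one tab-separated "word<TAB>1" line per input word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reduce_stream(pairs):
    """Reducer: input arrives grouped by key, as after Hadoop's shuffle,
    so consecutive lines with the same word can be summed."""
    keyed = (pair.split("\t") for pair in pairs)
    for word, group in itertools.groupby(keyed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

lines = ["big data big", "data big"]
print(list(reduce_stream(sorted(map_stream(lines)))))  # ['big\t3', 'data\t2']
```

The Hadoop streaming jar wires real processes together the same way: any language that can read stdin and write stdout, R included, can serve as mapper or reducer.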
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Verma, Chitresh, and Rajiv Pandey. "Statistical Visualization of Big Data Through Hadoop Streaming in RStudio." In Research Anthology on Big Data Analytics, Architectures, and Applications, 758–87. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3662-2.ch035.

Full text of the source
Abstract:
Data visualization enables visual representation of a data set for interpretation of the data in a manner meaningful from the human perspective. Statistical visualization calls for various tools, algorithms, and techniques that can support and render graphical modeling. This chapter explores the detailed features of R and RStudio. The combination of Hadoop and R for big data analytics and its data visualization is demonstrated through appropriate code snippets. The integration perspective of R and Hadoop is explained in detail with the help of a utility called the Hadoop streaming jar. The various R packages and their integration with Hadoop operations in the R environment are explained through suitable examples. The process of data streaming is presented using different readers of the Hadoop streaming package. A case-based statistical project is considered in which the data set is visualized after dual execution using Hadoop MapReduce and an R script.
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Coded MapReduce"

1

Li, Songze, Mohammad Ali Maddah-Ali, and A. Salman Avestimehr. "Coded MapReduce." In 2015 53rd Annual Allerton Conference on Communication, Control and Computing (Allerton). IEEE, 2015. http://dx.doi.org/10.1109/allerton.2015.7447112.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Konstantinidis, Konstantinos, and Aditya Ramamoorthy. "CAMR: Coded Aggregated MapReduce." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849227.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Dong, Yamei, Bin Tang, Baoliu Ye, Zhihao Qu, and Sanglu Lu. "Intermediate Value Size Aware Coded MapReduce." In 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2020. http://dx.doi.org/10.1109/icpads51040.2020.00054.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Lampiris, Eleftherios, Daniel Jimenez Zorrilla, and Petros Elia. "Mapping Heterogeneity Does Not Affect Wireless Coded MapReduce." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849492.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Ramkumar, Vinayak, and P. Vijay Kumar. "Coded MapReduce Schemes Based on Placement Delivery Array." In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849570.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Gupta, Sneh, and V. Lalitha. "Locality-aware hybrid coded MapReduce for server-rack architecture." In 2017 IEEE Information Theory Workshop (ITW). IEEE, 2017. http://dx.doi.org/10.1109/itw.2017.8277996.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Pang, Xin, Zhisong Bie, and Xuehong Lin. "Access Point Decoding Coded MapReduce for Tree Fog Network." In 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). IEEE, 2018. http://dx.doi.org/10.1109/icnidc.2018.8525757.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Li, Runhui, and Patrick P. C. Lee. "Making mapreduce scheduling effective in erasure-coded storage clusters." In 2015 IEEE International Workshop on Local and Metropolitan Area Networks (LANMAN). IEEE, 2015. http://dx.doi.org/10.1109/lanman.2015.7114730.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Li, Runhui, Patrick P. C. Lee, and Yuchong Hu. "Degraded-First Scheduling for MapReduce in Erasure-Coded Storage Clusters." In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 2014. http://dx.doi.org/10.1109/dsn.2014.47.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Wang, Yuhan, and Youlong Wu. "Coded MapReduce with Pre-set Data and Reduce Function Assignments." In GLOBECOM 2022 - 2022 IEEE Global Communications Conference. IEEE, 2022. http://dx.doi.org/10.1109/globecom48099.2022.10001706.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
