A ready-made bibliography on the topic "Coded MapReduce"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Coded MapReduce".

Next to each source in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.

Journal articles on the topic "Coded MapReduce"

1

Xiong, Hui. "Comparative Analysis of Chinese Culture and Hong Kong, Macao, and Taiwan Culture in the Field of Public Health Based on the CNN Model". Journal of Environmental and Public Health 2022 (September 6, 2022): 1–10. http://dx.doi.org/10.1155/2022/9928040.

Abstract:
In view of the large amount of information on cultural resources and the poor recommendation performance of a standalone platform, a cultural recommendation system based on the Hadoop platform, combined with a convolutional neural network (CNN), is proposed. It aims to improve the adaptability of Chinese culture and Hong Kong, Macao, and Taiwan culture. Firstly, the CNN is used to encode the collected information deeply and map it into a deep feature space. Secondly, an attention mechanism is used to focus the coded features in the deep feature space to improve the classification ability of the features. Then, the model is deployed using the distributed file system of the Hadoop platform, and the MapReduce programming model is used to implement the cultural resource recommendation algorithm in parallel. Finally, a recommendation simulation experiment on cultural resources is carried out; the results show that the proposed model has good recommendation performance, and when tested on open-source data from the real public health field, it also performs well.
2

Hanafi, Idris, and Amal Abdel-Raouf. "P-Codec: Parallel Compressed File Decompression Algorithm for Hadoop". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 15, no. 8 (May 24, 2016): 6991–98. http://dx.doi.org/10.24297/ijct.v15i8.1500.

Abstract:
The increasing amount and size of data being handled by data analytic applications running on Hadoop has created a need for faster data processing. One of the effective methods for handling big data sizes is compression. Data compression not only makes network I/O processing faster, but also provides better utilization of resources. However, this approach defeats one of Hadoop's main purposes, which is the parallelism of map and reduce tasks. The number of map tasks created is determined by the size of the file, so by compressing a large file, the number of mappers is reduced, which in turn decreases parallelism. Consequently, standard Hadoop takes longer to process compressed inputs. In this paper, we propose the design and implementation of a Parallel Compressed File Decompressor (P-Codec) that improves the performance of Hadoop when processing compressed data. P-Codec includes two modules; the first module decompresses data upon retrieval by a data node during the phase of uploading the data to the Hadoop Distributed File System (HDFS). This process reduces the runtime of a job by removing the burden of decompression during the MapReduce phase. The second P-Codec module is a decompressed map task divider that increases parallelism by dynamically changing the map task split sizes based on the size of the final decompressed block. Our experimental results using five different MapReduce benchmarks show an average improvement of approximately 80% compared to standard Hadoop.
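The split-count relationship the abstract describes (mappers scale with file size unless compression makes the file non-splittable) can be sketched as follows. This is a minimal illustration with made-up sizes, not the P-Codec implementation:

```python
import math

def num_map_tasks(file_size_bytes: int, split_size_bytes: int, splittable: bool) -> int:
    """Number of map tasks Hadoop would launch for one input file.

    Non-splittable compressed formats (e.g. gzip) force a single mapper,
    which is exactly the loss of parallelism the abstract describes.
    """
    if not splittable:
        return 1
    return math.ceil(file_size_bytes / split_size_bytes)

# A 1 GiB file with the common 128 MiB split size:
gib = 1 << 30
mib = 1 << 20
print(num_map_tasks(gib, 128 * mib, splittable=True))   # 8 mappers
print(num_map_tasks(gib, 128 * mib, splittable=False))  # 1 mapper (gzip-style)
```

P-Codec's second module, per the abstract, effectively restores the splittable case by re-dividing map tasks based on the decompressed block size.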
3

Huang, Xiaohui, Jiabao Li, Jining Yan, and Lizhe Wang. "An adaptive geographic meshing and coding method for remote sensing data". IOP Conference Series: Earth and Environmental Science 1004, no. 1 (March 1, 2022): 012006. http://dx.doi.org/10.1088/1755-1315/1004/1/012006.

Abstract:
Spatial indexing techniques, inherently data structures, are generally used in portals opened by institutions or organizations to efficiently filter Remote Sensing (RS) images according to their spatial extent, thus providing researchers with fast RS image data discovery ability. Specifically, space-based spatial indexing approaches are widely adopted to index RS images in distributed environments by mapping RS images in two-dimensional space into several one-dimensional spatial codes. However, current spatial indexing approaches still suffer from the boundary objects problem, which leads to multiple spatial codes for a boundary-crossing RS image and thus degrades the performance of spatial indexes built on top of these spatial codes. To solve this problem, we propose an adaptive geographic meshing and coding method (AGMD) by combining the famous subdivision model GeoSOT and XZ-ordering to generate only one spatial code for RS images with different spatial widths. Then, we implement our proposed method with a unified big data programming model (i.e., Apache Beam) to enable its execution in various distributed computing engines (e.g., MapReduce, Apache Spark, etc.) in distributed environments. Finally, we conduct a series of experiments on real datasets, the archived Landsat metadata collection in level 2. The results show that the proposed AGMD method performs well, with improvements in storage overhead and time cost of up to 359.7% and 58.02%, respectively.
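The 2-D-to-1-D spatial coding the abstract builds on can be illustrated with a plain Morton (Z-order) code, a simpler relative of GeoSOT and XZ-ordering. This sketch conveys only the bit-interleaving idea, not the AGMD method itself:

```python
def morton_code(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of grid coordinates (x, y) into one Z-order code.

    This is the classic space-filling-curve trick: nearby cells in 2-D
    space tend to receive nearby 1-D codes, so a sorted index over the
    codes can answer spatial range filters efficiently.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bit goes to the even position
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bit goes to the odd position
    return code

print(morton_code(0, 0))  # 0
print(morton_code(3, 1))  # x=0b11, y=0b01 -> 0b0111 = 7
```

Schemes like XZ-ordering extend this idea so that an extent crossing a cell boundary still maps to a single code, which is the boundary-objects problem the paper targets.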
4

Al-Fatlawi, Ahmed Abdul Hassan, Ghassan N. Mohammed, and Israa Al Barazanchi. "Optimizing the Performance of Clouds Using Hash Codes in Apache Hadoop and Spark". Journal of Southwest Jiaotong University 54, no. 6 (2019). http://dx.doi.org/10.35741/issn.0258-2724.54.6.3.

Abstract:
Hash functions are an integral part of MapReduce software, in both Apache Hadoop and Spark. If the hash function performs badly, the load in the reduce part will not be balanced and access times will spike. To investigate this problem further, we ran the Wordcount program with numerous different hash functions on Amazon AWS, leveraging the Amazon Elastic MapReduce framework. The paper investigates general-purpose, cryptographic, checksum, and special hash functions. Through the analysis, we present the corresponding runtime results.
5

Nowicki, Marek. "Comparison of sort algorithms in Hadoop and PCJ". Journal of Big Data 7, no. 1 (November 16, 2020). http://dx.doi.org/10.1186/s40537-020-00376-9.

Abstract:
Sorting algorithms are among the most commonly used algorithms in computer science and modern software. Having efficient implementation of sorting is necessary for a wide spectrum of scientific applications. This paper describes the sorting algorithm written using the partitioned global address space (PGAS) model, implemented using the Parallel Computing in Java (PCJ) library. The iterative implementation description is used to outline the possible performance issues and provide means to resolve them. The key idea of the implementation is to have an efficient building block that can be easily integrated into many application codes. This paper also presents the performance comparison of the PCJ implementation with the MapReduce approach, using Apache Hadoop TeraSort implementation. The comparison serves to show that the performance of the implementation is good enough, as the PCJ implementation shows similar efficiency to the Hadoop implementation.
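The TeraSort-style idea underlying the comparison (sample splitters, partition records by range, then sort each bucket independently) can be sketched in a few lines. This is a single-process illustration, not the PCJ or Hadoop code:

```python
import random

def sample_sort(data, num_partitions=4, oversample=8):
    """TeraSort-style sample sort: pick splitters from a random sample,
    bucket every record by splitter range, then sort each bucket locally.

    In a cluster, each bucket would go to a different reducer/worker;
    concatenating the sorted buckets yields a globally sorted output.
    """
    rng = random.Random(42)
    sample = sorted(rng.sample(data, min(len(data), num_partitions * oversample)))
    step = max(1, len(sample) // num_partitions)
    splitters = sample[step::step][: num_partitions - 1]

    buckets = [[] for _ in range(num_partitions)]
    for x in data:
        buckets[sum(x >= s for s in splitters)].append(x)  # count splitters <= x

    return [v for bucket in buckets for v in sorted(bucket)]

data = random.Random(7).sample(range(10_000), 1_000)
assert sample_sort(data) == sorted(data)
print(sample_sort([5, 3, 9, 1]))  # [1, 3, 5, 9]
```

Both TeraSort and the PCJ implementation hinge on choosing splitters well: balanced buckets keep every worker busy for roughly the same time.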

Book chapters on the topic "Coded MapReduce"

1

Usha, D., and Reshma Raman. "A Forensic Way to Find Solutions for Security Challenges in Cloudserver Through MapReduce Technique". In Advances in Social Networking and Online Communities, 330–38. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9640-1.ch021.

Abstract:
Cloud computing is a large, distributed platform and repository of user information, but it also faces extensive security threats, which this chapter examines from a research perspective. The chapter attempts to find solutions to these security challenges through the MapReduce technique in a forensic way. Four security challenges are covered: losing user information during the mapping process for different reasons, such as a server shutdown that causes parallel or unrelated services to be interrupted; the velocity of attack, which enables security threats to amplify and spread quickly in the cloud; injection of malicious code; and, finally, information deletion. A MapReduce and dynamic decomposition-based distributed algorithm, with the help of Hadoop and JavaBeans in the live forensic method, is used to find a solution to the problem. MapReduce is a software framework, and live forensics is a method that attempts to discover, control, and eliminate threats in a live system environment. The chapter uses Hadoop's cloud simulation techniques, which can give live results.
2

Verma, Chitresh, and Rajiv Pandey. "Statistical Visualization of Big Data Through Hadoop Streaming in RStudio". In Advances in Data Mining and Database Management, 549–77. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3142-5.ch019.

Abstract:
Data visualization enables visual representation of a data set for interpreting data in a meaningful manner from a human perspective. Statistical visualization calls for various tools, algorithms, and techniques that can support and render graphical modeling. This chapter explores the detailed features of R and RStudio. The combination of Hadoop and R for big data analytics, and its data visualization, is demonstrated through appropriate code snippets. The integration perspective of R and Hadoop is explained in detail with the help of a utility called the Hadoop streaming jar. The various R packages and their integration with Hadoop operations in the R environment are explained through suitable examples. The process of data streaming is presented using different readers of the Hadoop streaming package. A case-based statistical project is considered in which the data set is visualized after dual execution using Hadoop MapReduce and an R script.
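Hadoop streaming, as the abstract notes, pipes records through any executable's stdin/stdout; the chapter wires in R scripts, but the same contract can be illustrated with a word-count pair in Python. This is a hedged sketch, not code from the chapter:

```python
import itertools

def mapper(lines):
    """Streaming map step: emit one 'word<TAB>1' record per word.
    Under Hadoop streaming, this would read sys.stdin and print each record."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Streaming reduce step. Hadoop's shuffle delivers records sorted by
    key, so consecutive lines with the same word can be summed via groupby."""
    pairs = (line.rsplit("\t", 1) for line in lines)
    for word, group in itertools.groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Simulate the map -> sort/shuffle -> reduce pipeline in-process:
shuffled = sorted(mapper(["to be or not to be"]))
print(list(reducer(shuffled)))  # ['be\t2', 'not\t1', 'or\t1', 'to\t2']
```

On a cluster, such scripts would typically be launched via the streaming jar, along the lines of `hadoop jar hadoop-streaming.jar -mapper ... -reducer ... -input ... -output ...`; the exact jar name and flags depend on the Hadoop distribution.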
3

Verma, Chitresh, and Rajiv Pandey. "Statistical Visualization of Big Data Through Hadoop Streaming in RStudio". In Research Anthology on Big Data Analytics, Architectures, and Applications, 758–87. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3662-2.ch035.

Abstract:
Data visualization enables visual representation of a data set for interpreting data in a meaningful manner from a human perspective. Statistical visualization calls for various tools, algorithms, and techniques that can support and render graphical modeling. This chapter explores the detailed features of R and RStudio. The combination of Hadoop and R for big data analytics, and its data visualization, is demonstrated through appropriate code snippets. The integration perspective of R and Hadoop is explained in detail with the help of a utility called the Hadoop streaming jar. The various R packages and their integration with Hadoop operations in the R environment are explained through suitable examples. The process of data streaming is presented using different readers of the Hadoop streaming package. A case-based statistical project is considered in which the data set is visualized after dual execution using Hadoop MapReduce and an R script.

Conference papers on the topic "Coded MapReduce"

1

Li, Songze, Mohammad Ali Maddah-Ali, and A. Salman Avestimehr. "Coded MapReduce". In 2015 53rd Annual Allerton Conference on Communication, Control and Computing (Allerton). IEEE, 2015. http://dx.doi.org/10.1109/allerton.2015.7447112.

2

Konstantinidis, Konstantinos, and Aditya Ramamoorthy. "CAMR: Coded Aggregated MapReduce". In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849227.

3

Dong, Yamei, Bin Tang, Baoliu Ye, Zhihao Qu, and Sanglu Lu. "Intermediate Value Size Aware Coded MapReduce". In 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2020. http://dx.doi.org/10.1109/icpads51040.2020.00054.

4

Lampiris, Eleftherios, Daniel Jimenez Zorrilla, and Petros Elia. "Mapping Heterogeneity Does Not Affect Wireless Coded MapReduce". In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849492.

5

Ramkumar, Vinayak, and P. Vijay Kumar. "Coded MapReduce Schemes Based on Placement Delivery Array". In 2019 IEEE International Symposium on Information Theory (ISIT). IEEE, 2019. http://dx.doi.org/10.1109/isit.2019.8849570.

6

Gupta, Sneh, and V. Lalitha. "Locality-aware hybrid coded MapReduce for server-rack architecture". In 2017 IEEE Information Theory Workshop (ITW). IEEE, 2017. http://dx.doi.org/10.1109/itw.2017.8277996.

7

Pang, Xin, Zhisong Bie, and Xuehong Lin. "Access Point Decoding Coded MapReduce for Tree Fog Network". In 2018 International Conference on Network Infrastructure and Digital Content (IC-NIDC). IEEE, 2018. http://dx.doi.org/10.1109/icnidc.2018.8525757.

8

Li, Runhui, and Patrick P. C. Lee. "Making MapReduce scheduling effective in erasure-coded storage clusters". In 2015 IEEE International Workshop on Local and Metropolitan Area Networks (LANMAN). IEEE, 2015. http://dx.doi.org/10.1109/lanman.2015.7114730.

9

Li, Runhui, Patrick P. C. Lee, and Yuchong Hu. "Degraded-First Scheduling for MapReduce in Erasure-Coded Storage Clusters". In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 2014. http://dx.doi.org/10.1109/dsn.2014.47.

10

Wang, Yuhan, and Youlong Wu. "Coded MapReduce with Pre-set Data and Reduce Function Assignments". In GLOBECOM 2022 - 2022 IEEE Global Communications Conference. IEEE, 2022. http://dx.doi.org/10.1109/globecom48099.2022.10001706.

