Academic literature on the topic "MapReduce programming model"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Browse the thematic lists of articles, books, theses, conference papers, and other academic sources on the topic "MapReduce programming model".

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "MapReduce programming model"

1

Zhang, Guigang, Chao Li, Yong Zhang, and Chunxiao Xing. "A Semantic++ MapReduce Parallel Programming Model". International Journal of Semantic Computing 8, no. 3 (September 2014): 279–99. http://dx.doi.org/10.1142/s1793351x14400091.

Abstract
Big data plays an increasingly important role in areas such as healthcare, internet finance, culture, and education, and processing it efficiently is a major challenge. MapReduce is an effective parallel programming model for big data, but it has notable shortcomings: it handles complex computations poorly and is not suited to real-time computing. To overcome these shortcomings of MapReduce and its variants, this paper proposes a Semantic++ MapReduce parallel programming model. The study covers: (1) the Semantic++ MapReduce parallel programming model, including its physical and logical frameworks; (2) a Semantic++ extraction and management method for big data; (3) a Semantic++ MapReduce parallel computing framework, including semantic++ map, semantic++ reduce, and semantic++ shuffle; (4) Semantic++ MapReduce for multiple data centers, including its basic framework and its application framework; and (5) a case study of Semantic++ MapReduce across multiple data centers.
2

Lämmel, Ralf. "Google’s MapReduce programming model — Revisited". Science of Computer Programming 70, no. 1 (January 2008): 1–30. http://dx.doi.org/10.1016/j.scico.2007.07.001.

3

Retnowo, Murti. "Syncronize Data Using MapReduce Model Programming". International Journal of Engineering Technology and Natural Sciences 3, no. 2 (December 31, 2021): 82–88. http://dx.doi.org/10.46923/ijets.v3i2.140.

Abstract
Research on data processing shows that larger datasets require increasingly long processing times. Processing huge amounts of data on a single computer has limitations that can be overcome by parallel processing. This study applies the MapReduce programming model to data synchronization, duplicating data from a client database to a server database. MapReduce is a programming model developed to speed up the processing of large datasets. The data are partitioned according to the number of sub-processes (threads), entered into the server database, and the synchronized data are then displayed. Experiments were performed with 1,000, 10,000, 100,000, and 1,000,000 records, using 1, 5, 10, 15, 20, and 25 threads. The results show that the MapReduce programming model can reduce processing time, although creating many threads adds overhead of its own. Overall, the model improves the time efficiency of data synchronization on both single and distributed databases.
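To make the workflow in this abstract concrete, here is a minimal, hypothetical sketch of the map/reduce split it describes: rows from a client table are partitioned across worker threads (map) and the partial results are merged before being written to the server database (reduce). The table structure and function names are invented for illustration and are not the author's code.

```python
# Illustrative sketch only: partition client rows across threads (map),
# then merge the partial results (reduce) before loading them server-side.
from concurrent.futures import ThreadPoolExecutor

def map_partition(rows):
    """Map step: turn each client row into a (primary_key, row) pair."""
    return [(row["id"], row) for row in rows]

def reduce_partitions(partials):
    """Reduce step: merge partial results, last write wins per key."""
    merged = {}
    for partial in partials:
        for key, row in partial:
            merged[key] = row
    return merged

def synchronize(client_rows, num_threads=5):
    # Split the client rows into num_threads roughly equal chunks.
    chunks = [client_rows[i::num_threads] for i in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        partials = list(pool.map(map_partition, chunks))
    return reduce_partitions(partials)  # would then be inserted into the server DB

if __name__ == "__main__":
    rows = [{"id": i, "value": f"record-{i}"} for i in range(1000)]
    print(len(synchronize(rows)))  # 1000 merged rows
```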
4

Garg, Uttama. "Data Analytic Models That Redress the Limitations of MapReduce". International Journal of Web-Based Learning and Teaching Technologies 16, no. 6 (November 2021): 1–15. http://dx.doi.org/10.4018/ijwltt.20211101.oa7.

Abstract
The amount of data in today’s world is increasing exponentially. Effectively analyzing Big Data is a very complex task. The MapReduce programming model created by Google in 2004 revolutionized the big-data computing market. Nowadays the model is being used by many for scientific and research analysis as well as for commercial purposes. The MapReduce model, however, is quite a low-level programming model and has many limitations. Active research is being undertaken to make models that overcome or remove these limitations. In this paper we have studied some popular data analytic models that redress some of the limitations of MapReduce, namely ASTERIX and Pregel (Giraph). We discuss these models briefly and, through the discussion, highlight how they are able to overcome MapReduce’s limitations.
5

Gao, Tilei, Ming Yang, Rong Jiang, Yu Li, and Yao Yao. "Research on Computing Efficiency of MapReduce in Big Data Environment". ITM Web of Conferences 26 (2019): 03002. http://dx.doi.org/10.1051/itmconf/20192603002.

Abstract
The emergence of big data has had a great impact on traditional computing modes, and distributed computing frameworks represented by MapReduce have become an important solution to this problem. This paper studies the principles and framework of MapReduce programming in a big-data setting and, on that basis, compares the time consumption of the distributed MapReduce framework with a traditional computing model through concrete programming experiments. The experiments show that MapReduce has a clear advantage at large data volumes.
6

Siddesh, G. M., Kavya Suresh, K. Y. Madhuri, Madhushree Nijagal, B. R. Rakshitha, and K. G. Srinivasa. "Optimizing Crawler4j using MapReduce Programming Model". Journal of The Institution of Engineers (India): Series B 98, no. 3 (August 12, 2016): 329–36. http://dx.doi.org/10.1007/s40031-016-0267-z.

7

Zhang, Weidong, Boxin He, Yifeng Chen, and Qifei Zhang. "GMR: graph-compatible MapReduce programming model". Multimedia Tools and Applications 78, no. 1 (August 23, 2017): 457–75. http://dx.doi.org/10.1007/s11042-017-5102-2.

8

Durairaj, M., and T. S. Poornappriya. "Importance of MapReduce for Big Data Applications: A Survey". Asian Journal of Computer Science and Technology 7, no. 1 (May 5, 2018): 112–18. http://dx.doi.org/10.51983/ajcst-2018.7.1.1817.

Abstract
The MapReduce framework has attracted significant attention from a wide range of areas. It is now a practical model for data-intensive applications because of its simple programming interface, high elasticity, and fault tolerance. It is also capable of processing very large volumes of data in distributed computing environments (DCE). MapReduce has, on various occasions, proven applicable to a wide range of domains. MapReduce is a parallel programming model, with an associated implementation, introduced by Google. In the programming model, the user specifies the computation through two functions, Map and Reduce. The underlying MapReduce library automatically parallelizes the computation and handles complicated issues such as data distribution, load balancing, and fault tolerance. Huge data spread across numerous machines needs to be processed in parallel; the framework moves the data and provides scheduling and fault tolerance. This paper presents a literature survey of MapReduce programming in different domains and identifies research directions on the basis of that review.
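Since this abstract restates the core of the model (the user supplies a Map and a Reduce function and the library parallelizes the computation), a minimal word-count sketch may help; it simulates the shuffle step in plain Python and is not tied to any particular MapReduce library.

```python
# Minimal word-count sketch of the MapReduce model (illustration only).
from collections import defaultdict

def map_fn(document):
    """Map: emit (word, 1) for every word in the input document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key (normally done by the framework)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_fn(word, counts):
    """Reduce: sum the counts emitted for one word."""
    return word, sum(counts)

if __name__ == "__main__":
    docs = ["map reduce map", "reduce shuffle reduce"]
    intermediate = [pair for doc in docs for pair in map_fn(doc)]
    result = dict(reduce_fn(w, c) for w, c in shuffle(intermediate).items())
    print(result)  # {'map': 2, 'reduce': 3, 'shuffle': 1}
```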
9

Rokhman, Nur, and Amelia Nursanti. "The MapReduce Model on Cascading Platform for Frequent Itemset Mining". IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 12, no. 2 (July 31, 2018): 149. http://dx.doi.org/10.22146/ijccs.34102.

Abstract
The implementation of parallel algorithms has recently become a very active research area. Parallelism is well suited to large-scale data processing, and MapReduce is one of the parallel and distributed programming models. Implementing parallel programs faces many difficulties; Cascading provides an easy scheme on top of the Hadoop system, which implements the MapReduce model. Frequent itemsets are the objects that appear most often in a dataset, and Frequent Itemset Mining (FIM) requires complex computation, making it a complicated problem on large-scale data. This paper discusses the implementation of the MapReduce model on Cascading for FIM. The experiment uses the Amazon product co-purchasing network metadata dataset and shows that the simple mechanism of Cascading can be used to solve the FIM problem. It gives time complexity O(n), more efficient than the non-parallel approach, which has complexity O(n²/m).
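As a rough illustration of how a single MapReduce pass can support frequent itemset mining, the sketch below counts candidate itemsets per transaction chunk in the map step and filters by a minimum support in the reduce step; it is plain Python rather than Cascading, and all names and thresholds are invented.

```python
# Illustrative one-pass itemset counting in MapReduce style (not Cascading).
from collections import defaultdict
from itertools import combinations

def map_fn(transactions, max_size=2):
    """Map: emit (itemset, 1) for every itemset up to max_size in each transaction."""
    for items in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(items), size):
                yield itemset, 1

def reduce_fn(grouped, min_support):
    """Reduce: keep itemsets whose total count reaches the support threshold."""
    return {itemset: count for itemset, count in grouped.items()
            if count >= min_support}

def frequent_itemsets(chunks, min_support=2):
    grouped = defaultdict(int)
    for chunk in chunks:                 # each chunk would be one map task
        for itemset, one in map_fn(chunk):
            grouped[itemset] += one      # stands in for the framework's shuffle
    return reduce_fn(grouped, min_support)

if __name__ == "__main__":
    chunks = [[{"a", "b"}, {"a", "c"}], [{"a", "b", "c"}]]
    print(frequent_itemsets(chunks))
```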
10

Wang, Changjian, Yuxing Peng, Mingxing Tang, Dongsheng Li, Shanshan Li, and Pengfei You. "An Efficient MapReduce Computing Model for Imprecise Applications". International Journal of Web Services Research 13, no. 3 (July 2016): 46–63. http://dx.doi.org/10.4018/ijwsr.2016070103.

Abstract
Optimizing the Map process is important for improving MapReduce performance, and many efforts have been devoted to designing more efficient scheduling strategies. However, there exists a kind of MapReduce application, named imprecise applications, where imprecise results based on only part of the map tasks can satisfy the application's requirements, so the job can be completed once enough map tasks have been processed. Based on this feature, the authors propose an improved MapReduce model, named MapCheckReduce, which can terminate the map process when the requirements of an imprecise application are satisfied. Compared to MapReduce, a Check mechanism and a set of extended programming interfaces are added to MapCheckReduce. The Check mechanism receives and analyzes messages submitted by completed map tasks and then determines whether to terminate the map phase according to the analysis results. The programming interfaces are used by programmers to define the termination conditions of the map process. A data-prefetching mechanism is designed and implemented in MapCheckReduce, which improves its performance effectively. A MapCheckReduce prototype has been implemented, and experimental results verify its feasibility and effectiveness.
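The Check mechanism described here, which stops the map phase once enough map tasks have reported results satisfying the application's precision requirement, can be sketched roughly as follows; the termination condition and names are hypothetical, not the authors' API.

```python
# Rough sketch of a MapCheckReduce-style early stop (hypothetical API).
def run_map_with_check(map_tasks, map_fn, check_fn):
    """Run map tasks in order, asking check_fn after each one whether the
    partial results already satisfy the imprecise application's requirement."""
    partial_results = []
    for task in map_tasks:
        partial_results.append(map_fn(task))
        if check_fn(partial_results, total_tasks=len(map_tasks)):
            break  # terminate the map phase early, as in MapCheckReduce
    return partial_results

def sample_check(partials, total_tasks, min_fraction=0.6):
    # User-defined termination condition: e.g. 60% of map tasks completed.
    return len(partials) / total_tasks >= min_fraction

if __name__ == "__main__":
    tasks = list(range(10))
    done = run_map_with_check(tasks, map_fn=lambda t: t * t, check_fn=sample_check)
    print(len(done), "of", len(tasks), "map tasks ran before the check fired")
```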

Theses on the topic "MapReduce programming model"

1

Elteir, Marwa Khamis. "A MapReduce Framework for Heterogeneous Computing Architectures". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28786.

Abstract
Nowadays, an increasing number of computational systems are equipped with heterogeneous compute resources, i.e., resources following different architectures. This applies at the level of a single chip, a single node, and even supercomputers and large-scale clusters. With an impressive price-to-performance ratio as well as better power efficiency compared to traditional multicore processors, graphics processing units (GPUs) have become an integral part of these systems. GPUs deliver high peak performance; however, efficiently exploiting their computational power requires exploring a multi-dimensional space of optimization methodologies, which is challenging even for the well-trained expert. The complexity of this multi-dimensional space arises not only from the traditionally well-known but arduous task of architecture-aware GPU optimization at design and compile time, but also from the partitioning and scheduling of the computation across these heterogeneous resources. Even with programming models like the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), the developer still needs to manage the data transfer between host and device and vice versa, orchestrate the execution of several kernels, and, more arduously, optimize the kernel code. In this dissertation, we aim to deliver a transparent parallel programming environment for heterogeneous resources by leveraging the power of the MapReduce programming model and the OpenCL programming language. We propose a portable architecture-aware framework that efficiently runs an application across heterogeneous resources, specifically AMD GPUs and NVIDIA GPUs, while hiding complex architectural details from the developer. To further enhance performance portability, we explore approaches for asynchronously and efficiently distributing the computations across heterogeneous resources. When applied to benchmarks and representative applications, our proposed framework significantly enhances performance, including up to 58% improvement over traditional approaches to task assignment and up to a 45-fold improvement over state-of-the-art MapReduce implementations.
Ph. D.
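The scheduling problem mentioned in this abstract, assigning map work across devices with different throughput, can be hinted at with a small capability-weighted partitioning sketch; it runs on plain CPU threads, only stands in for real OpenCL/GPU dispatch, and all numbers are invented.

```python
# Toy sketch of capability-weighted task assignment across heterogeneous devices.
from concurrent.futures import ThreadPoolExecutor

def split_by_capability(tasks, capabilities):
    """Give each device a share of the map tasks proportional to its measured throughput."""
    total = sum(capabilities.values())
    shares, start = {}, 0
    for device, cap in capabilities.items():
        count = round(len(tasks) * cap / total)
        shares[device] = tasks[start:start + count]
        start += count
    shares[device].extend(tasks[start:])  # last device absorbs rounding leftovers
    return shares

if __name__ == "__main__":
    tasks = list(range(100))
    capabilities = {"gpu_amd": 3.0, "gpu_nvidia": 5.0, "cpu": 1.0}  # invented ratios
    shares = split_by_capability(tasks, capabilities)
    with ThreadPoolExecutor(max_workers=len(shares)) as pool:
        results = {d: list(pool.map(lambda x: x * x, chunk)) for d, chunk in shares.items()}
    print({device: len(chunk) for device, chunk in shares.items()})
```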
2

Rivault, Sébastien. "Parallélisme, équilibrage de charges et extensibilité dans le traitement des mégadonnées sur des systèmes à grande échelle". Electronic Thesis or Diss., Orléans, 2024. http://www.theses.fr/2024ORLE1019.

Abstract
Over the past two decades, owing to falling storage, exchange, and processing costs, the volume of data generated each year has continued to explode. The challenges of big data processing are often described by the 3Vs: the volume, variety, and velocity of data creation, analysis, and sharing. To store and analyze these large datasets, it is essential to use clusters of machines and scalable algorithms that are insensitive to the load imbalance that may occur among processing nodes. Applications such as collaborative filtering, deduplication, and entity resolution identify relationships in big datasets by relying on a notion of similarity between records; among other things, they make it possible to find users with similar tastes, clean data, and detect fraud in large datasets. In these cases, similarity join and similarity search operations are used to retrieve all similar records in one or more datasets given a distance and a user-defined threshold. Although parallel equi-joins have been widely studied and successfully implemented on parallel and distributed architectures, those algorithms are not suitable for the similarity join operation, since no hashing or sorting technique in the literature can retrieve all potentially similar record pairs in a single step. Many techniques have been introduced to reduce the search space while ensuring the completeness of the result and avoiding the computation of a Cartesian product and the comparison of all record pairs; however, their scalability is limited and does not allow efficient processing of big datasets on large-scale systems. Approximate methods have been proposed that handle the similarity join by giving up a very small part of the result while providing a probabilistic guarantee on its completeness and reducing the search space. These methods rely on hash functions whose collision probabilities are sensitive to the similarity of the objects. We build on these techniques to propose efficient solutions for similarity join and similarity search, based on LSH (Locality Sensitive Hashing), distributed histograms, and randomized communication schemes, in order to restrict processing, communication, and disk I/O costs to only the relevant data, for various distances and object types. The aim is to propose a generic framework based on the MapReduce programming model that meets the volume, variety, and velocity challenges of big data analysis. The efficiency and scalability of the proposed solutions were studied using a cost model and confirmed by a series of experiments that also measured result completeness and search-space reduction, guaranteeing efficient similarity join processing regardless of data size, data skew, the distance, and the user-defined thresholds.
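To give a flavor of the LSH-based approach described above (hashing records so that similar ones are likely to collide, then comparing only within buckets in a MapReduce-style pass), here is a small MinHash-banding sketch; the parameters and helper names are invented, and this is not the thesis code.

```python
# Minimal MinHash + banding sketch of an LSH-style similarity join (illustration).
import hashlib
from collections import defaultdict
from itertools import combinations

def minhash_signature(tokens, num_hashes=8):
    """One MinHash value per hash seed; similar token sets tend to share values."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(int(hashlib.md5(f"{seed}-{t}".encode()).hexdigest(), 16)
                       for t in tokens))
    return sig

def lsh_buckets(records, band_size=2):
    """Map step: emit (band hash, record id); similar records collide in some band."""
    buckets = defaultdict(set)
    for rec_id, tokens in records.items():
        sig = minhash_signature(tokens)
        for b in range(0, len(sig), band_size):
            buckets[(b, tuple(sig[b:b + band_size]))].add(rec_id)
    return buckets

def candidate_pairs(buckets):
    """Reduce step: compare only records that share at least one bucket."""
    pairs = set()
    for ids in buckets.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs

if __name__ == "__main__":
    records = {"r1": {"big", "data", "join"}, "r2": {"big", "data", "joins"},
               "r3": {"unrelated", "text"}}
    print(candidate_pairs(lsh_buckets(records)))
```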
3

Chen, Jhih-Siang (陳智翔). "A study of distributed sequential pattern mining method based on MapReduce programming model". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/18996078478404490541.

Abstract
Master's thesis, Tamkang University, Master's Program in Information Management, academic year 104 (ROC calendar).
Sequential pattern mining is a data mining method for obtaining frequent sequential patterns from a large sequence database. Conventional sequence mining methods fall into two categories, Apriori-like methods and pattern-growth methods, and these algorithms are mainly executed in standalone environments. They suffer from drawbacks such as long database scanning times, scalability problems, and low efficiency on massive datasets. To improve the performance and scalability of sequential pattern mining, this study presents a distributed sequential pattern mining method based on the Hadoop platform and the MapReduce programming model. Mining tasks are decomposed into many distributed tasks: the Map function mines sequential patterns in a subset of the database, and the Reduce function then merges all the identified patterns. This simplifies the search space and achieves higher mining efficiency. The study further discusses how the user-specified minimum support threshold affects the distributed mining process. According to the experiments, the threshold should be set differently in the Map and Reduce phases to prevent the loss of some frequent patterns.
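The point about using different support thresholds in the Map and Reduce phases can be illustrated with a simplified pattern-counting sketch: each map task mines its own database partition with a lowered local threshold, and the reduce step applies the global threshold to the merged counts. All names and thresholds here are hypothetical, not the thesis implementation.

```python
# Simplified sketch: local mining per partition (map) and global filtering (reduce).
from collections import Counter, defaultdict

def map_mine(partition, local_min_support):
    """Map: count candidate patterns in one partition; a lowered local threshold
    avoids discarding patterns that are only globally frequent."""
    counts = Counter()
    for sequence in partition:
        for pattern in set(sequence):        # stand-in for real sequential patterns
            counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= local_min_support}

def reduce_merge(partials, global_min_support):
    """Reduce: merge the locally frequent patterns and apply the global threshold."""
    total = defaultdict(int)
    for partial in partials:
        for pattern, count in partial.items():
            total[pattern] += count
    return {p: c for p, c in total.items() if c >= global_min_support}

if __name__ == "__main__":
    partitions = [[["a", "b"], ["a", "c"]], [["a", "b"], ["b", "c"]]]
    partials = [map_mine(p, local_min_support=1) for p in partitions]
    print(reduce_merge(partials, global_min_support=2))  # {'a': 3, 'b': 3, 'c': 2}
```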

Book chapters on the topic "MapReduce programming model"

1

Jin, Hai, Shadi Ibrahim, Li Qi, Haijun Cao, Song Wu, and Xuanhua Shi. "The MapReduce Programming Model and Implementations". In Cloud Computing, 373–90. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2011. http://dx.doi.org/10.1002/9780470940105.ch14.

2

Jin, Chao, and Rajkumar Buyya. "MapReduce Programming Model for .NET-Based Cloud Computing". In Lecture Notes in Computer Science, 417–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03869-3_41.

3

Jain, Arushi, Vishal Bhatnagar, and Annavarapu Chandra Sekhara Rao. "Smart Heart Attack Forewarning Model Using MapReduce Programming Paradigm". In Advances in Information Communication Technology and Computing, 37–43. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5421-6_5.

4

Janaki Meena, M., and S. P. Syed Ibrahim. "Statistical and Evolutionary Feature Selection Techniques Parallelized Using MapReduce Programming Model". In Studies in Big Data, 159–80. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-27520-8_8.

5

Indyk, Wojciech, Tomasz Kajdanowicz, and Przemyslaw Kazienko. "Cooperative Decision Making Algorithm for Large Networks Using MapReduce Programming Model". In Lecture Notes in Computer Science, 53–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32609-7_7.

6

Brindha, G. Siva, and M. Gobi. "CryptoDataMR: Enhancing the Data Protection Using Cryptographic Hash and Encryption/Decryption Through MapReduce Programming Model". In International Conference on Innovative Computing and Communications, 95–115. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3315-0_9.

7

Arputhamary, B. "Skew Handling Technique for Scheduling Huge Data Mapper with High End Reducers in MapReduce Programming Model". In Learning and Analytics in Intelligent Systems, 331–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-38501-9_33.

8

Suthaharan, Shan. "MapReduce Programming Platform". In Machine Learning Models and Algorithms for Big Data Classification, 99–119. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7641-3_5.

9

Dimitrov, Vladimir. "Cloud Programming Models (MapReduce)". In Encyclopedia of Cloud Computing, 596–608. Chichester, UK: John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118821930.ch49.

10

Ryczkowska, Magdalena, and Marek Nowicki. "Performance Comparison of Graph BFS Implemented in MapReduce and PGAS Programming Models". In Parallel Processing and Applied Mathematics, 328–37. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78054-2_31.


Conference papers on the topic "MapReduce programming model"

1

Ming, Li, Xu Guang-Hui, Wu Li-Fa, and Ji Yao. "Performance Research on MapReduce Programming Model". In 2011 First International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC). IEEE, 2011. http://dx.doi.org/10.1109/imccc.2011.60.

2

Siddesh, G. M., K. G. Srinivasa, Ishank Mishra, Abhinav Anurag, and Eklavya Uppal. "Phylogenetic Analysis Using MapReduce Programming Model". In 2015 IEEE International Parallel and Distributed Processing Symposium Workshop (IPDPSW). IEEE, 2015. http://dx.doi.org/10.1109/ipdpsw.2015.57.

3

Luo, Yuan, and Beth Plale. "Hierarchical MapReduce Programming Model and Scheduling Algorithms". In 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid). IEEE, 2012. http://dx.doi.org/10.1109/ccgrid.2012.132.

4

Benelallam, Amine, Abel Gómez, and Massimo Tisi. "ATL-MR: model transformation on MapReduce". In SPLASH '15: Conference on Systems, Programming, Languages, and Applications: Software for Humanity. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2837476.2837482.

5

Kang, Yun Hee, and Young B. Park. "Applying MapReduce Programming Model for Handling Scientific Problems". In 2014 International Conference on Information Science and Applications (ICISA). IEEE, 2014. http://dx.doi.org/10.1109/icisa.2014.6847367.

6

Zhao, Junfeng, Wenhui Gai, and Han Wu. "Fortran Code Refactoring Based on MapReduce Programming Model". In The 35th International Conference on Software Engineering and Knowledge Engineering. KSI Research Inc., 2023. http://dx.doi.org/10.18293/seke2023-072.

7

Li, Min, Xin Yang, and Xiaolin Li. "Domain-Based MapReduce Programming Model for Complex Scientific Applications". In 2013 IEEE International Conference on High Performance Computing and Communications (HPCC) & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (EUC). IEEE, 2013. http://dx.doi.org/10.1109/hpcc.and.euc.2013.87.

8

Deshmukh, Rajshree A., Bharathi H. N., and Amiya K. Tripathy. "Parallel Processing of Frequent Itemset Based on MapReduce Programming Model". In 2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA). IEEE, 2019. http://dx.doi.org/10.1109/iccubea47591.2019.9128369.

9

Vione, Engelbertus, and J. B. Budi Darmawan. "Performance of K-means in Hadoop Using MapReduce Programming Model". In International Conference of Science and Technology for the Internet of Things. EAI, 2019. http://dx.doi.org/10.4108/eai.19-10-2018.2282545.

10

Zhang, Fan, Qutaibah M. Malluhi, and Tamer M. Elsyed. "ConMR: Concurrent MapReduce Programming Model for Large Scale Shared-Data Applications". In 2013 42nd International Conference on Parallel Processing (ICPP). IEEE, 2013. http://dx.doi.org/10.1109/icpp.2013.134.
