A selection of scholarly literature on the topic "Language transfer (Language learning) Germany"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Language transfer (Language learning) Germany".
Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in ".pdf" format and read its online abstract, provided these details are present in the metadata.
Journal articles on the topic "Language transfer (Language learning) Germany"
Jajić Novogradec, Marina. "Positive and Negative Lexical Transfer in English Vocabulary Acquisition." ELOPE: English Language Overseas Perspectives and Enquiries 18, no. 2 (December 29, 2021): 139–65. http://dx.doi.org/10.4312/elope.18.2.139-165.
Sadouki, Fatiha. "Examples of cross-linguistic influence in learning German as a foreign language." EduLingua 6, no. 1 (2020): 61–83. http://dx.doi.org/10.14232/edulingua.2020.1.4.
Sabourin, Laura, Laurie A. Stowe, and Ger J. de Haan. "Transfer effects in learning a second language grammatical gender system." Second Language Research 22, no. 1 (January 2006): 1–29. http://dx.doi.org/10.1191/0267658306sr259oa.
Jarosz, Józef. "Wirklichkeitsnah oder stereotyp? Das Bild von Dänemark und den Dänen in ausgewählten deutschen Lehrbüchern für Dänisch als Fremdsprache." Folia Scandinavica Posnaniensia 20, no. 1 (December 1, 2016): 91–104. http://dx.doi.org/10.1515/fsp-2016-0028.
Sokolova, M., and E. Plisov. "CROSS-LINGUISTIC TRANSFER CLASSROOM L3 ACQUISITION IN UNIVERSITY SETTING." Vestnik of Minin University 7, no. 1 (March 17, 2019): 6. http://dx.doi.org/10.26795/2307-1281-2019-7-1-6.
Bawej, Izabela. "Rozumowanie dedukcyjne w procesie uczenia się języka niemieckiego jako drugiego języka obcego na przykładzie podsystemu gramatycznego." Neofilolog, no. 58/1 (April 27, 2022): 85–98. http://dx.doi.org/10.14746/n.2022.58.1.6.
O'Brien, Mary Grantham, Carrie N. Jackson, and Christine E. Gardner. "Cross-linguistic differences in prosodic cues to syntactic disambiguation in German and English." Applied Psycholinguistics 35, no. 1 (August 10, 2012): 27–70. http://dx.doi.org/10.1017/s0142716412000252.
Božinović, Nikolina, and Barbara Perić. "The role of typology and formal similarity in third language acquisition (German and Spanish)." Strani jezici 50, no. 1 (2021): 9–30. http://dx.doi.org/10.22210/strjez/50-1/1.
Hopp, Holger. "Cross-linguistic influence in the child third language acquisition of grammar: Sentence comprehension and production among Turkish-German and German learners of English." International Journal of Bilingualism 23, no. 2 (January 24, 2018): 567–83. http://dx.doi.org/10.1177/1367006917752523.
Odaryuk, Irina V., and Artem S. Gampartsumov. "Development of foreign language communicative competence in the process of academic and professional interaction in a second foreign language." Samara Journal of Science 9, no. 3 (November 20, 2020): 282–86. http://dx.doi.org/10.17816/snv202093307.
Повний текст джерелаДисертації з теми "Language transfer (Language learning) Germany"
Samperio, Sanchez Nahum. "General learning strategies : identification, transfer to language learning and effect on language achievement." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/412008/.
Повний текст джерелаZhang, Yuan Ph D. Massachusetts Institute of Technology. "Transfer learning for low-resource natural language analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108847.
Повний текст джерелаThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 131-142).
Expressive machine learning models such as deep neural networks are highly effective when they can be trained with large amounts of in-domain labeled training data. While such annotations may not be readily available for the target task, it is often possible to find labeled data for another related task. The goal of this thesis is to develop novel transfer learning techniques that can effectively leverage annotations in source tasks to improve performance of the target low-resource task. In particular, we focus on two transfer learning scenarios: (1) transfer across languages and (2) transfer across tasks or domains in the same language. In multilingual transfer, we tackle challenges from two perspectives. First, we show that linguistic prior knowledge can be utilized to guide syntactic parsing with little human intervention, by using a hierarchical low-rank tensor method. In both unsupervised and semi-supervised transfer scenarios, this method consistently outperforms state-of-the-art multilingual transfer parsers and the traditional tensor model across more than ten languages. Second, we study lexical-level multilingual transfer in low-resource settings. We demonstrate that only a few (e.g., ten) word translation pairs suffice for an accurate transfer for part-of-speech (POS) tagging. Averaged across six languages, our approach achieves a 37.5% improvement over the monolingual top-performing method when using a comparable amount of supervision. In the second monolingual transfer scenario, we propose an aspect-augmented adversarial network that allows aspect transfer over the same domain. We use this method to transfer across different aspects in the same pathology reports, where traditional domain adaptation approaches commonly fail. Experimental results demonstrate that our approach outperforms different baselines and model variants, yielding a 24% gain on this pathology dataset.
by Yuan Zhang.
Ph. D.
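Zhang's lexical-level transfer result above rests on a simple intuition: even a handful of word-translation pairs can carry tag information from a resource-rich language into a low-resource one. The sketch below is a deliberately toy illustration of that intuition; the three-word lexicon and all names are hypothetical, and the thesis's actual models learn far richer cross-lingual representations than this dictionary lookup.

```python
# Toy sketch of lexicon-based cross-lingual POS-tag projection.
# SOURCE_TAGS and LEXICON are hypothetical illustrative data.

# A few source-language (English) words with known POS tags.
SOURCE_TAGS = {"house": "NOUN", "run": "VERB", "green": "ADJ"}

# A tiny bilingual lexicon: target (German) word -> English translation.
LEXICON = {"haus": "house", "laufen": "run", "gruen": "green"}

def project_tags(target_tokens):
    """Assign each target token the POS tag of its English translation,
    falling back to 'X' when the token is not covered by the lexicon."""
    tags = []
    for tok in target_tokens:
        translation = LEXICON.get(tok.lower())
        tags.append(SOURCE_TAGS.get(translation, "X"))
    return tags

print(project_tags(["Haus", "laufen", "blau"]))  # ['NOUN', 'VERB', 'X']
```

The fallback tag `X` makes the coverage problem visible: everything outside the tiny lexicon stays untagged, which is exactly the gap the thesis's learned transfer methods close.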
Jin, Di Ph D. Massachusetts Institute of Technology. "Transfer learning and robustness for natural language processing." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129004.
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 189-217).
Teaching machines to understand human language is one of the most elusive and long-standing challenges in Natural Language Processing (NLP). Driven by the fast development of deep learning, state-of-the-art NLP models have already achieved human-level performance on various large benchmark datasets, such as SQuAD, SNLI, and RACE. However, when these strong models are deployed in real-world applications, they often generalize poorly in two situations: 1. only a limited amount of data is available for model training; 2. deployed models may degrade significantly on noisy test data or natural/artificial adversaries. In short, performance degradation on low-resource tasks/datasets and on unseen data with distribution shifts poses serious challenges to the reliability of NLP models and prevents them from being widely deployed in the wild. This dissertation aims to address these two issues.
To address the first, we resort to transfer learning, leveraging knowledge acquired from related data to improve performance on a target low-resource task/dataset. Specifically, we propose different transfer learning methods for three natural language understanding tasks: multi-choice question answering, dialogue state tracking, and sequence labeling, and one natural language generation task: machine translation. These methods build on four basic transfer learning modalities: multi-task learning, sequential transfer learning, domain adaptation, and cross-lingual transfer. Experimental results validate that transferring knowledge from related domains, tasks, and languages can significantly improve performance on the target task/dataset. For the second issue, we propose methods to evaluate the robustness of NLP models on text classification and entailment tasks.
On the one hand, we reveal that although these models can achieve accuracies above 90%, they are still easily broken by paraphrases of the original samples created by changing only around 10% of the words to synonyms. On the other hand, by creating a new challenge set using four adversarial strategies, we find that even the best models for aspect-based sentiment analysis cannot reliably identify the target aspect and recognize its sentiment; instead, they are easily confused by distractor aspects. Overall, these findings raise serious concerns about the robustness of NLP models, which should be enhanced to ensure stable long-term service.
by Di Jin.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Mechanical Engineering
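The synonym-substitution brittleness described in the abstract can be illustrated with a deliberately naive sketch: a keyword-based classifier flips its prediction under a meaning-preserving paraphrase. The classifier, the synonym table, and all names here are hypothetical toys; the attacks studied in the thesis target neural models and select synonym candidates automatically.

```python
# Toy robustness probe: a brittle keyword classifier vs. a
# meaning-preserving synonym substitution (all data hypothetical).

SYNONYMS = {"good": "fine", "movie": "film", "great": "superb"}

def toy_classifier(text):
    """Stand-in sentiment classifier: positive iff it sees 'good' or 'great'."""
    words = text.lower().split()
    return "pos" if ("good" in words or "great" in words) else "neg"

def perturb(text):
    """Replace each word that has a listed synonym, keeping the meaning intact."""
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())

original = "a good movie"
adversarial = perturb(original)  # -> "a fine film"
print(toy_classifier(original), toy_classifier(adversarial))  # pos neg
```

The prediction flips even though a human reader would judge the two texts equivalent, which is the failure mode the dissertation's robustness evaluations quantify at scale.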
Utgof, Darja. "The Perception of Lexical Similarities Between L2 English and L3 Swedish." Thesis, Linköping University, Department of Culture and Communication, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15874.
The present study investigates lexical similarity perceptions by students of Swedish as a foreign language (L3) with a good yet non-native proficiency in English (L2). The general theoretical framework is provided by studies in transfer of learning and its specific instance, transfer in language acquisition.
It is generally accepted that all previous linguistic knowledge facilitates the development of proficiency in a new language. However, a frequently reported phenomenon is that students perceive similarities between two systems differently than linguists and theoreticians of education do. As a consequence, the full facilitative potential of transfer remains unused.
The present research seeks to shed light on these similarity perceptions, focusing on the comprehension of a written text. To elucidate students' views, a form involving similarity judgements and multiple-choice questions for formally similar items was designed, drawing on real language use as documented in corpora. 123 forms were distributed across 6 groups of international students, 4 of them studying Swedish at Level I and 2 at Level II.
The test items in the form vary in the degree of formal, semantic, and functional similarity: from very close cognates, to similar words belonging to different word classes, to items exhibiting category membership and/or standing in a subordinate/superordinate relation to each other, to deceptive cognates. The author proposes expected similarity ratings and compares them to the results obtained. An objective measure of formal similarity is provided by a string-matching algorithm, Levenshtein distance.
The similarity judgements indicate that intermediate similarity values are problematic: similarity ratings between somewhat similar items are usually lower than might be expected. Moreover, a difference in grammatical meaning lowers similarity ratings significantly even when lexical meaning nearly coincides. Thus, the results indicate that in order to exploit similarities to facilitate language learning, more attention should be paid to underlying similarities.
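The study's objective measure of formal similarity, Levenshtein distance, is easy to reproduce. Below is a minimal dynamic-programming sketch; the cognate pair in the example is an illustrative assumption, not drawn from the study's materials.

```python
def levenshtein(a, b):
    """Classic edit distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

# English 'house' vs. Swedish 'hus': formally close cognates.
print(levenshtein("house", "hus"))  # 2
```

Normalizing the distance by the length of the longer string gives a [0, 1] similarity score that can be compared directly against learners' subjective ratings.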
Casula, Camilla. "Transfer Learning for Multilingual Offensive Language Detection with BERT." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412450.
Panzeri-Alvarez, Christina. "Metacognition and language transfer for an English language development transitional program." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1780.
Tse, Siu-ching, and 謝兆政. "Cross linguistic influence in polyglots: encoding of the future by L3 learners of Swedish." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B4842187X.
Master of Arts (Linguistics)
Meftah, Sara. "Neural Transfer Learning for Domain Adaptation in Natural Language Processing." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG021.
Recent approaches based on end-to-end deep neural networks have revolutionised Natural Language Processing (NLP), achieving remarkable results on several tasks and languages. Nevertheless, these approaches are limited by their "gluttony" for annotated data, since they rely on a supervised training paradigm, i.e. training from scratch on large amounts of annotated data. Therefore, there is a wide gap between the capabilities of NLP technologies for high-resource languages and those for the long tail of low-resource languages. Moreover, NLP researchers have focused much of their effort on training models on the news domain, owing to the availability of training data. However, many studies have highlighted that models trained on news fail to work efficiently on out-of-domain data, due to their lack of robustness against domain shifts. This thesis presents a study of transfer learning approaches, through which we propose different methods to exploit knowledge pre-learned in a high-resource domain to enhance the performance of neural NLP models in low-resource settings. Specifically, we apply our approaches to transfer from the news domain to the social media domain. Indeed, despite the value of its content for a variety of applications (e.g. public security, health monitoring, or trend highlighting), this domain is still poor in terms of annotated data. We present several contributions. First, we propose two methods to transfer the knowledge encoded in the neural representations of a source model, pretrained on large labelled datasets from the source domain, to a target model that is further adapted by fine-tuning on a few annotated examples from the target domain. The first method transfers contextualised representations pretrained with supervision, while the second transfers pretrained weights used to initialise the target model's parameters.
Second, we perform a series of analyses to identify the limits of the proposed methods. We find that even though the proposed transfer learning approach improves performance on the social media domain, hidden negative transfer may mitigate the final gain. In addition, an interpretive analysis of the pretrained model shows that pretrained neurons may be biased by what they have learned from the source domain, and thus struggle to learn uncommon target-specific patterns. Third, building on this analysis, we propose a new adaptation scheme that augments the target model with normalised, weighted, and randomly initialised neurons, yielding better adaptation while maintaining the valuable source knowledge. Finally, we propose a model that, in addition to the knowledge pre-learned from the high-resource source domain, takes advantage of various supervised NLP tasks
Mozafari, Marzieh. "Hate speech and offensive language detection using transfer learning approaches." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS007.
The great promise of social media platforms (e.g., Twitter and Facebook) is to provide a safe place for users to communicate their opinions and share information. However, concerns are growing that they also enable abusive behaviors such as threatening or harassing other users, cyberbullying, hate speech, and racial and sexual discrimination. In this thesis, we focus on hate speech as one of the most concerning phenomena in online social media. Given the rapid growth of online hate speech and its severe negative effects, institutions, social media platforms, and researchers have been trying to react as quickly as possible. Recent advancements in Natural Language Processing (NLP) and Machine Learning (ML) can be adapted to develop automatic methods for hate speech detection. The aim of this thesis is to investigate the problem of hate speech and offensive language detection in social media, where we define hate speech as any communication criticizing a person or a group based on characteristics such as gender, sexual orientation, nationality, religion, or race. We propose different approaches in which we adapt advanced Transfer Learning (TL) models and NLP techniques to detect hate speech and offensive content automatically, in both monolingual and multilingual settings. In the first contribution, we focus only on the English language. Firstly, we analyze user-generated textual content by introducing a new framework able to categorize content by topical similarity based on different features. Furthermore, using the Perspective API from Google, we measure and analyze the toxicity of the content. Secondly, we propose a TL approach for the identification of hate speech by combining the unsupervised pre-trained model BERT (Bidirectional Encoder Representations from Transformers) with new supervised fine-tuning strategies.
Finally, we investigate the effect of unintended bias in our pre-trained BERT-based model and propose a new generalization mechanism that reweights training samples and adapts the fine-tuning loss function to mitigate the racial bias propagated through the model. To evaluate the proposed models, we use two publicly available Twitter datasets. In the second contribution, we consider a multilingual setting and focus on low-resource languages for which little or no labeled data is available. First, we present the first corpus of Persian offensive language, consisting of 6k microblog posts from Twitter, to support offensive language detection in Persian as a low-resource language in this domain. After annotating the corpus, we perform extensive experiments to investigate the performance of transformer-based monolingual and multilingual pre-trained language models (e.g., ParsBERT, mBERT, XLM-R) on the downstream task. Furthermore, we propose an ensemble model to boost performance. We then expand our study to a cross-lingual few-shot learning problem, where we have a few labeled examples in the target language, and adapt a meta-learning-based approach to the identification of hate speech and offensive language in low-resource languages.
Mau, Pui-sze Priscilla. "Cross-language transfer of phonological awareness in Chinese-English bilinguals." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36889301.
Books on the topic "Language transfer (Language learning) Germany"
Rosén, Christina. "Warum klingt das nicht deutsch?": Probleme der Informationsstrukturierung in deutschen Texten schwedischer Schüler und Studenten. Stockholm: Almqvist & Wiksell, 2006.
Hansen-Jaax, Dörte. Transfer bei Diglossie: Synchrone Sprachkontaktphänomene im Niederdeutschen. Hamburg: Kovač, 1995.
Schloter, Andreas Leonhard. Interferenzfehler beim Erwerb des Englischen als Fremdsprache: Ein empirischer Beitrag zur Fehlerursachenforschung. München: Tuduv, 1992.
Befähigung zum zusammenhängenden (monologischen) Sprechen durch Transfer der Schreibtätigkeit im Russischunterricht der allgemeinbildenden Schule. Frankfurt am Main: P. Lang, 1994.
Leontiy, Halyna. Multikulturelles Deutschland im Sprachvergleich: Das Deutsche im Fokus der meist verbreiteten Migrantensprachen: ein Handbuch für DaF-Lehrende und Studierende, für Pädagogen/-innen und Erzieher/-innen. Berlin: Lit, 2013.
Gass, Susan M., and Larry Selinker, eds. Language Transfer in Language Learning. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.
Gass, Susan M., and Larry Selinker, eds. Language transfer in language learning. Amsterdam: J. Benjamins Pub. Co., 1992.
Gass, Susan M., and Larry Selinker, eds. Language transfer in language learning. Amsterdam: J. Benjamins Pub. Co., 1994.
Bordag, Denisa. Psycholinguistische Aspekte der Interferenzerscheinungen in der Flexionsmorphologie des Tschechischen als Fremdsprache. Hildesheim: Olms, 2006.
Odlin, Terence. Language transfer: Cross-linguistic influence in language learning. Cambridge: Cambridge University Press, 1989.
Book chapters on the topic "Language transfer (Language learning) Germany"
Angelovska, Tanja, and Angela Hahn. "Written L3 (English): Transfer Phenomena of L2 (German) Lexical and Syntactic Properties." In Second Language Learning and Teaching, 23–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29557-7_2.
Biswas, Rajarshi, Michael Barz, Mareike Hartmann, and Daniel Sonntag. "Improving German Image Captions Using Machine Translation and Transfer Learning." In Statistical Language and Speech Processing, 3–14. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89579-2_1.
Janik, Marta Olga. "5. Positive and Negative Transfer in the L2 Adjective Inflection of English-, German- and Polish-speaking Learners of L2 Norwegian." In Crosslinguistic Influence and Distinctive Patterns of Language Learning, edited by Anne Golden, Scott Jarvis, and Kari Tenfjord, 84–109. Bristol, Blue Ridge Summit: Multilingual Matters, 2017. http://dx.doi.org/10.21832/9781783098774-007.
Broselow, Ellen. "Nonobvious Transfer." In Language Transfer in Language Learning, 71. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.07bro.
Zierdt, M., N. I. Bykov, D. Kley, A. A. Bondarovich, and G. Schmidt. "Technology Learning and Transfer of Knowledge—Practices and Lessons Learned from the German-Language Study Programme in Barnaul." In KULUNDA: Climate Smart Agriculture, 501–6. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-15927-6_38.
Ard, Josh, and Taco Homburg. "Verification of Language Transfer." In Language Transfer in Language Learning, 47. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.06ard.
Selinker, Larry, and Usha Lakshmanan. "Language Transfer And Fossilization." In Language Transfer in Language Learning, 197. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.13sel.
Gass, Susan M., and Larry Selinker. "Introduction." In Language Transfer in Language Learning, 1. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.03gas.
Corder, S. Pit. "A Role for the Mother Tongue." In Language Transfer in Language Learning, 18. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.04cor.
Schachter, Jacquelyn. "A New Account of Language Transfer." In Language Transfer in Language Learning, 32. Amsterdam: John Benjamins Publishing Company, 1992. http://dx.doi.org/10.1075/lald.5.05sch.
Conference papers on the topic "Language transfer (Language learning) Germany"
Gebhard, Christian Alexander. "Who attends our foreign language courses? A preliminary look into the profile of learners of Chinese." In 4th International Conference. Business Meets Technology. València: Editorial Universitat Politècnica de València, 2022. http://dx.doi.org/10.4995/bmt2022.2022.15328.
Gimenez Calpe, Ana. "The Lecture-Performance: Implementing Performative Pedagogy in Literature Class." In Sixth International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2020. http://dx.doi.org/10.4995/head20.2020.11186.
Farhadi, Ali, David Forsyth, and Ryan White. "Transfer Learning in Sign language." In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007. http://dx.doi.org/10.1109/cvpr.2007.383346.
Ruder, Sebastian, Matthew E. Peters, Swabha Swayamdipta, and Thomas Wolf. "Transfer Learning in Natural Language Processing." In Proceedings of the 2019 Conference of the North. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/n19-5004.
Stiehm, Sebastian, Larissa Köttgen, Sebastian Thelen, Mario Weisskopf, Florian Welter, Anja Richert, Ingrid Isenhardt, and Sabina Jeschke. "Blended Learning Through Integrating Lego Mindstorms NXT Robots in Engineering Education." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-51641.
Hintz, Gerold, and Chris Biemann. "Language Transfer Learning for Supervised Lexical Substitution." In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/p16-1012.
Wang, Xu, Chengda Tang, Xiaotian Zhao, Xuancai Li, Zhuolin Jin, Dequan Zheng, and Tiejun Zhao. "Transfer Learning Methods for Spoken Language Understanding." In ICMI '19: International Conference on Multimodal Interaction. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3340555.3356096.
Wang, Dong, and Thomas Fang Zheng. "Transfer learning for speech and language processing." In 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA). IEEE, 2015. http://dx.doi.org/10.1109/apsipa.2015.7415532.
Zhu, Su, and Kai Yu. "Concept Transfer Learning for Adaptive Language Understanding." In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-5047.
Shen, Chia-Hao, Janet Y. Sung, and Hung-Yi Lee. "Language Transfer of Audio Word2Vec: Learning Audio Segment Representations Without Target Language Data." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461305.
Reports of organizations on the topic "Language transfer (Language learning) Germany"
Salter, R., Quyen Dong, Cody Coleman, Maria Seale, Alicia Ruvinsky, LaKenya Walker, and W. Bond. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40203.