Ready-made bibliography on the topic "Data-efficient Deep Learning"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Data-efficient Deep Learning".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant details are available in the metadata.
Journal articles on the topic "Data-efficient Deep Learning"
Chaudhary, Dr Sumit, Ms Neha Singh, and Salaiya Pankaj. "Time-Efficient Algorithm for Data Annotation using Deep Learning". Indian Journal of Artificial Intelligence and Neural Networking 2, no. 5 (August 30, 2022): 8–11. http://dx.doi.org/10.54105/ijainn.e1058.082522.
Biswas, Surojit, Grigory Khimulya, Ethan C. Alley, Kevin M. Esvelt, and George M. Church. "Low-N protein engineering with data-efficient deep learning". Nature Methods 18, no. 4 (April 2021): 389–96. http://dx.doi.org/10.1038/s41592-021-01100-y.
Edstrom, Jonathon, Yifu Gong, Dongliang Chen, Jinhui Wang, and Na Gong. "Data-Driven Intelligent Efficient Synaptic Storage for Deep Learning". IEEE Transactions on Circuits and Systems II: Express Briefs 64, no. 12 (December 2017): 1412–16. http://dx.doi.org/10.1109/tcsii.2017.2767900.
Feng, Wenhui, Chongzhao Han, Feng Lian, and Xia Liu. "A Data-Efficient Training Method for Deep Reinforcement Learning". Electronics 11, no. 24 (December 16, 2022): 4205. http://dx.doi.org/10.3390/electronics11244205.
Hu, Wenjin, Feng Liu, and Jiebo Peng. "An Efficient Data Classification Decision Based on Multimodel Deep Learning". Computational Intelligence and Neuroscience 2022 (May 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/7636705.
Mairittha, Nattaya, Tittaya Mairittha, and Sozo Inoue. "On-Device Deep Learning Inference for Efficient Activity Data Collection". Sensors 19, no. 15 (August 5, 2019): 3434. http://dx.doi.org/10.3390/s19153434.
Duan, Yanjie, Yisheng Lv, Yu-Liang Liu, and Fei-Yue Wang. "An efficient realization of deep learning for traffic data imputation". Transportation Research Part C: Emerging Technologies 72 (November 2016): 168–81. http://dx.doi.org/10.1016/j.trc.2016.09.015.
Sashank, Madipally Sai Krishna, Vijay Souri Maddila, Vikas Boddu, and Y. Radhika. "Efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data". ACTA IMEKO 11, no. 1 (March 31, 2022): 6. http://dx.doi.org/10.21014/acta_imeko.v11i1.1226.
Petrovic, Nenad, and Djordje Kocic. "Data-driven framework for energy-efficient smart cities". Serbian Journal of Electrical Engineering 17, no. 1 (2020): 41–63. http://dx.doi.org/10.2298/sjee2001041p.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
Doctoral dissertations on the topic "Data-efficient Deep Learning"
Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.
Edstrom, Jonathon. "Embracing Visual Experience and Data Knowledge: Efficient Embedded Memory Design for Big Videos and Deep Learning". Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/31558.
Funding: National Science Foundation; ND EPSCoR; Center for Computationally Assisted Science and Technology (CCAST).
Sagen, Markus. "Large-Context Question Answering with Cross-Lingual Transfer". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440704.
Nayak, Gaurav Kumar. "Data-efficient Deep Learning Algorithms for Computer Vision Applications". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6094.
Afreen, Ahmad. "Data Efficient Domain Generalization". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6047.
Wong, Jun Hua. "Efficient Edge Intelligence in the Era of Big Data". Thesis, 2021. http://hdl.handle.net/1805/26385.
Smart wearables, an emerging paradigm for capturing vital big data, have been attracting intense attention. However, one crucial problem is their high power consumption: continuous data streaming drains energy dramatically and requires devices to be charged frequently. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and to design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking the CAE-induced overhead into consideration, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet-compression-based method, our method compresses the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables and thus greatly advance ubiquitous big data applications in the era of smart health. In recent years, there has also been growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually carry heavy computation requirements, which greatly limits their deployment on the edge. We take special interest in smart-health wearable big data mining and inference. Given deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, by transferring knowledge from a teacher model to the on-edge student model, the student can effectively learn the teacher's soft target distribution. Besides this, we propose to introduce adversarial robustness into the student model by training it to correctly identify inputs carrying adversarial perturbations. Experiments demonstrate that the knowledge-distilled student model has performance comparable to the heavy teacher model but a substantially smaller model size. With adversarial learning, the student model effectively preserves its robustness. In this way, we demonstrate that the framework combining knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference but also preserve robustness to perturbed inputs.
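The two techniques this abstract refers to, soft-target knowledge distillation and adversarial training, can be sketched as follows. The snippet is a generic, illustrative example and not code from the thesis; the function names and the hyperparameters (temperature T, weighting alpha, FGSM epsilon) are assumed values.

```python
# Illustrative sketch only (not the thesis implementation): soft-target
# knowledge distillation plus an FGSM-style adversarial example generator.
# All names and hyperparameters (T, alpha, epsilon) are assumed values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Blend the teacher's softened target distribution with the hard-label loss."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # The KL term is scaled by T^2 so its gradient magnitude matches the CE term.
    kd_term = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

def fgsm_examples(model, x, y, epsilon=0.03):
    """One-step FGSM perturbation, so the student can also be trained to
    classify adversarially perturbed inputs correctly."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

In a training loop of this kind, the student would minimize distillation_loss on both clean batches and their FGSM-perturbed copies.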
Schwarzer, Max. "Data-efficient reinforcement learning with self-predictive representations". Thesis, 2020. http://hdl.handle.net/1866/25105.
Data efficiency remains a key challenge in deep reinforcement learning. Although modern techniques have been shown to be capable of attaining high performance in extremely complex tasks, including strategy games such as StarCraft, Chess, Shogi, and Go as well as in challenging visual domains such as Atari games, doing so generally requires enormous amounts of interactional data, limiting how broadly reinforcement learning can be applied. In this thesis, we propose SPR, a method drawing from recent advances in self-supervised representation learning designed to enhance the data efficiency of deep reinforcement learning agents. We evaluate this method on the Atari Learning Environment, and show that it dramatically improves performance with limited computational overhead. When given roughly the same amount of learning time as human testers, a reinforcement learning agent augmented with SPR achieves super-human performance on 7 out of 26 games, an increase of 350% over the previous state of the art, while also strongly improving mean and median performance. We also evaluate this method on a set of continuous control tasks, showing substantial improvements over previous methods. Chapter 1 introduces concepts necessary to understand the work presented, including overviews of Deep Reinforcement Learning and Self-Supervised Representation Learning. Chapter 2 contains a detailed description of our contributions towards leveraging self-supervised representation learning to improve data efficiency in reinforcement learning. Chapter 3 provides some conclusions drawn from this work, including a number of proposals for future work.
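The self-predictive objective described in this abstract can be summarized, purely as an illustration and not as the author's implementation, by the fragment below: an online encoder and a transition model predict the latent of the next observation, which is scored via cosine similarity against a momentum-updated target encoder. The module names and the momentum value tau are assumptions.

```python
# Illustrative SPR-style self-predictive loss (a generic sketch, not the
# thesis code). online_encoder, target_encoder, transition_model and
# predictor are assumed torch.nn.Module instances with compatible shapes.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(online_encoder, target_encoder, tau=0.99):
    """Slowly track the online encoder with an exponential moving average."""
    for p_o, p_t in zip(online_encoder.parameters(), target_encoder.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)

def self_predictive_loss(online_encoder, target_encoder, transition_model,
                         predictor, obs, action, next_obs):
    """Predict the next latent state and score it against the target encoder's
    latent of the actually observed next state (negative cosine similarity)."""
    z = online_encoder(obs)                           # latent of current observation
    z_hat = predictor(transition_model(z, action))    # predicted next latent
    with torch.no_grad():
        z_next = target_encoder(next_obs)             # target latent, no gradient
    return -F.cosine_similarity(z_hat, z_next, dim=-1).mean()
```

An auxiliary loss of this form would be added to the agent's usual reinforcement learning objective at each update.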
Wong, Jun Hua. "Efficient Edge Intelligence In the Era of Big Data". Thesis, 2021.
Ehrler, Matthew. "VConstruct: a computationally efficient method for reconstructing satellite derived Chlorophyll-a data". Thesis, 2021. http://hdl.handle.net/1828/13346.
Books on the topic "Data-efficient Deep Learning"
Jena, Om Prakash, Alok Ranjan Tripathy, Brojo Kishore Mishra, and Ahmed A. Elngar, eds. Augmented Intelligence: Deep Learning, Machine Learning, Cognitive Computing, Educational Data Mining. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/97898150404011220301.
Delgado Martín, Jordi, Andrea Muñoz-Ibáñez, and Ismael Himar Falcón-Suárez. 6th International Workshop on Rock Physics: A Coruña, Spain, 13–17 June 2022: Book of Abstracts. 2022 ed. Servizo de Publicacións da UDC, 2022. http://dx.doi.org/10.17979/spudc.000005.
Book chapters on the topic "Data-efficient Deep Learning"
Sarkar, Tirthajyoti. "Modular and Productive Deep Learning Code". In Productive and Efficient Data Science with Python, 113–56. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8121-5_5.
Vepakomma, Praneeth, and Ramesh Raskar. "Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning". In Federated Learning, 439–51. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0_19.
Maulana, Muhammad Rizki, and Wee Sun Lee. "Ensemble and Auxiliary Tasks for Data-Efficient Deep Reinforcement Learning". In Machine Learning and Knowledge Discovery in Databases. Research Track, 122–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86486-6_8.
Ghantasala, G. S. Pradeep, L. R. Sudha, T. Veni Priya, P. Deepan, and R. Raja Vignesh. "An Efficient Deep Learning Framework for Multimedia Big Data Analytics". In Multimedia Computing Systems and Virtual Reality, 99–127. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003196686-5.
Sharma, Pranav, Marcus Rüb, Daniel Gaida, Heiko Lutz, and Axel Sikora. "Deep Learning in Resource and Data Constrained Edge Computing Systems". In Machine Learning for Cyber Physical Systems, 43–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_5.
Zheng, Yefeng, David Liu, Bogdan Georgescu, Hien Nguyen, and Dorin Comaniciu. "Robust Landmark Detection in Volumetric Data with Efficient 3D Deep Learning". In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 49–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_4.
Symeonidis, C., P. Nousi, P. Tosidis, K. Tsampazis, N. Passalis, A. Tefas, and N. Nikolaidis. "Efficient Realistic Data Generation Framework Leveraging Deep Learning-Based Human Digitization". In Proceedings of the International Neural Networks Society, 271–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80568-5_23.
Rajawat, Anand Singh, Kanishk Barhanpurkar, S. B. Goyal, Pradeep Bedi, Rabindra Nath Shaw, and Ankush Ghosh. "Efficient Deep Learning for Reforming Authentic Content Searching on Big Data". In Advanced Computing and Intelligent Technologies, 319–27. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2164-2_26.
Zheng, Yefeng, David Liu, Bogdan Georgescu, Hien Nguyen, and Dorin Comaniciu. "3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data". In Lecture Notes in Computer Science, 565–72. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24553-9_69.
Ghesu, Florin C., Bogdan Georgescu, Yefeng Zheng, Joachim Hornegger, and Dorin Comaniciu. "Marginal Space Deep Learning: Efficient Architecture for Detection in Volumetric Image Data". In Lecture Notes in Computer Science, 710–18. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24553-9_87.
Conference abstracts on the topic "Data-efficient Deep Learning"
Zhang, Xianchao, Wentao Yang, Xiaotong Zhang, Han Liu, and Guanglu Wang. "Data-Efficient Deep Reinforcement Learning with Symmetric Consistency". In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956417.
Li, Hang, Ju Wang, Xi Chen, Xue Liu, and Gregory Dudek. "Data-Efficient Communication Traffic Prediction With Deep Transfer Learning". In ICC 2022 - IEEE International Conference on Communications. IEEE, 2022. http://dx.doi.org/10.1109/icc45855.2022.9838413.
Wang, Xue. "An efficient federated learning optimization algorithm on non-IID data". In International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022), edited by Sandeep Saxena. SPIE, 2022. http://dx.doi.org/10.1117/12.2640939.
Kok, Ibrahim, Burak H. Corak, Uraz Yavanoglu, and Suat Ozdemir. "Deep Learning based Delay and Bandwidth Efficient Data Transmission in IoT". In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9005680.
Chen, Zhixin, Zhixin Jia, Mengxiang Lin, and Shibo Jian. "Towards Generalization and Data Efficient Learning of Deep Robotic Grasping". In 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2022. http://dx.doi.org/10.1109/iciea54703.2022.10006045.
Xiong, Y., and J. Cheng. "Efficient Seismic Data Interpolation Using Deep Convolutional Networks and Transfer Learning". In 81st EAGE Conference and Exhibition 2019. European Association of Geoscientists & Engineers, 2019. http://dx.doi.org/10.3997/2214-4609.201900768.
Xu, Changming, Hengfeng Ding, Xuejian Zhang, Cong Wang, and Hongji Yang. "A Data-Efficient Method of Deep Reinforcement Learning for Chinese Chess". In 2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C). IEEE, 2022. http://dx.doi.org/10.1109/qrs-c57518.2022.00109.
Wu, Di, Jikun Kang, Yi Tian Xu, Hang Li, Jimmy Li, Xi Chen, Dmitriy Rivkin, et al. "Load Balancing for Communication Networks via Data-Efficient Deep Reinforcement Learning". In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685294.
Awad, Abdalaziz, Philipp Brendel, and Andreas Erdmann. "Data efficient deep learning for imaging with novel EUV mask absorbers". In Optical and EUV Nanolithography XXXV, edited by Anna Lio and Martin Burkhardt. SPIE, 2022. http://dx.doi.org/10.1117/12.2613954.
Melas-Kyriazi, Luke, George Han, and Celine Liang. "Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings". In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-6114.