A selection of scholarly literature on the topic "Data-efficient Deep Learning"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Data-efficient Deep Learning."
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, when these are available in the metadata.
Journal articles on the topic "Data-efficient Deep Learning"
Chaudhary, Sumit, Neha Singh, and Salaiya Pankaj. "Time-Efficient Algorithm for Data Annotation using Deep Learning." Indian Journal of Artificial Intelligence and Neural Networking 2, no. 5 (August 30, 2022): 8–11. http://dx.doi.org/10.54105/ijainn.e1058.082522.
Biswas, Surojit, Grigory Khimulya, Ethan C. Alley, Kevin M. Esvelt, and George M. Church. "Low-N protein engineering with data-efficient deep learning." Nature Methods 18, no. 4 (April 2021): 389–96. http://dx.doi.org/10.1038/s41592-021-01100-y.
Edstrom, Jonathon, Yifu Gong, Dongliang Chen, Jinhui Wang, and Na Gong. "Data-Driven Intelligent Efficient Synaptic Storage for Deep Learning." IEEE Transactions on Circuits and Systems II: Express Briefs 64, no. 12 (December 2017): 1412–16. http://dx.doi.org/10.1109/tcsii.2017.2767900.
Feng, Wenhui, Chongzhao Han, Feng Lian, and Xia Liu. "A Data-Efficient Training Method for Deep Reinforcement Learning." Electronics 11, no. 24 (December 16, 2022): 4205. http://dx.doi.org/10.3390/electronics11244205.
Hu, Wenjin, Feng Liu, and Jiebo Peng. "An Efficient Data Classification Decision Based on Multimodel Deep Learning." Computational Intelligence and Neuroscience 2022 (May 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/7636705.
Mairittha, Nattaya, Tittaya Mairittha, and Sozo Inoue. "On-Device Deep Learning Inference for Efficient Activity Data Collection." Sensors 19, no. 15 (August 5, 2019): 3434. http://dx.doi.org/10.3390/s19153434.
Duan, Yanjie, Yisheng Lv, Yu-Liang Liu, and Fei-Yue Wang. "An efficient realization of deep learning for traffic data imputation." Transportation Research Part C: Emerging Technologies 72 (November 2016): 168–81. http://dx.doi.org/10.1016/j.trc.2016.09.015.
Sashank, Madipally Sai Krishna, Vijay Souri Maddila, Vikas Boddu, and Y. Radhika. "Efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data." ACTA IMEKO 11, no. 1 (March 31, 2022): 6. http://dx.doi.org/10.21014/acta_imeko.v11i1.1226.
Petrovic, Nenad, and Djordje Kocic. "Data-driven framework for energy-efficient smart cities." Serbian Journal of Electrical Engineering 17, no. 1 (2020): 41–63. http://dx.doi.org/10.2298/sjee2001041p.
Yue, Yang, Bingyi Kang, Zhongwen Xu, Gao Huang, and Shuicheng Yan. "Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 11069–77. http://dx.doi.org/10.1609/aaai.v37i9.26311.
Повний текст джерелаДисертації з теми "Data-efficient Deep Learning"
Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.
Edstrom, Jonathon. "Embracing Visual Experience and Data Knowledge: Efficient Embedded Memory Design for Big Videos and Deep Learning." Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/31558.
Повний текст джерелаNational Science Foundation
ND EPSCoR
Center for Computationally Assisted Science and Technology (CCAST)
Sagen, Markus. "Large-Context Question Answering with Cross-Lingual Transfer." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440704.
Повний текст джерелаNayak, Gaurav Kumar. "Data-efficient Deep Learning Algorithms for Computer Vision Applications." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6094.
Повний текст джерелаAfreen, Ahmad. "Data Efficient Domain Generalization." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6047.
Повний текст джерелаWong, Jun Hua. "Efficient Edge Intelligence in the Era of Big Data." Thesis, 2021. http://hdl.handle.net/1805/26385.
Повний текст джерелаSmart wearables, known as emerging paradigms for vital big data capturing, have been attracting intensive attentions. However, one crucial problem is their power-hungriness, i.e., the continuous data streaming consumes energy dramatically and requires devices to be frequently charged. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically, Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in era of smart health. In recent years, there has also been a growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually require heavy computation requirements, thereby greatly limiting their deployment on the edge. We take special interest in the smart health wearable big data mining and inference. Targeting the deep learning’s high computational complexity and large memory and energy requirements, new approaches are urged to make the deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient edge-deployable deep learning model. More specifically, through transferring the knowledge from a teacher model to the on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. 
Besides, we propose to further introduce adversarial robustness to the student model, by stimulating the student model to correctly identify inputs that have adversarial perturbation. Experiments demonstrate that the knowledge distillation student model has comparable performance to the heavy teacher model but owns a substantially smaller model size. With adversarial learning, the student model has effectively preserved its robustness. In such a way, we have demonstrated the framework with knowledge distillation and adversarial learning can, not only advance ultra-efficient edge inference, but also preserve the robustness facing the perturbed input.
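The soft-target transfer described in the abstract above can be sketched as a standard distillation loss: a temperature-scaled KL term against the teacher's softened distribution, blended with cross-entropy on the hard labels. This is a minimal pure-Python sketch of the general technique, not the thesis's implementation; the temperature `T` and mixing weight `alpha` are illustrative hyperparameters.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces a softer distribution.
    z = [l / T for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Soft-target KL(teacher || student) blended with hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL term, scaled by T^2 so its gradient magnitude stays comparable
    # across temperatures (a common convention in distillation).
    kl = sum(pt * (math.log(pt + 1e-12) - math.log(ps + 1e-12))
             for pt, ps in zip(p_t, p_s))
    hard = -math.log(softmax(student_logits)[label] + 1e-12)
    return alpha * (T ** 2) * kl + (1 - alpha) * hard

# A student that matches the teacher's logits incurs zero soft-target loss;
# disagreement with the teacher increases the loss.
student = [2.0, 0.5, -1.0]
teacher = [2.0, 0.5, -1.0]
loss_matched = distillation_loss(student, teacher, label=0)
loss_shifted = distillation_loss(student, [0.0, 3.0, -1.0], label=0)
```

In practice the student is trained by backpropagating this loss through its own logits, with the teacher held fixed.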
Schwarzer, Max. "Data-efficient reinforcement learning with self-predictive representations." Thesis, 2020. http://hdl.handle.net/1866/25105.
Data efficiency remains a key challenge in deep reinforcement learning. Although modern techniques can attain high performance on extremely complex tasks, including strategy games such as StarCraft, chess, shogi, and Go, as well as challenging visual domains such as Atari games, doing so generally requires enormous amounts of interaction data, limiting how broadly reinforcement learning can be applied. In this thesis, we propose SPR, a method drawing on recent advances in self-supervised representation learning and designed to enhance the data efficiency of deep reinforcement learning agents. We evaluate this method on the Arcade Learning Environment and show that it dramatically improves performance with limited computational overhead. When given roughly the same amount of learning time as human testers, a reinforcement learning agent augmented with SPR achieves super-human performance on 7 out of 26 games, an increase of 350% over the previous state of the art, while also strongly improving mean and median performance. We also evaluate this method on a set of continuous control tasks, showing substantial improvements over previous methods. Chapter 1 introduces the concepts necessary to understand this work, including overviews of deep reinforcement learning and self-supervised representation learning. Chapter 2 contains a detailed description of our contributions toward leveraging self-supervised representation learning to improve data efficiency in reinforcement learning. Chapter 3 presents conclusions drawn from this work, including a number of proposals for future work.
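The self-predictive objective behind methods like SPR can be illustrated in miniature: an online encoder's latent for a future observation is matched, via a cosine-similarity loss, against the output of a target encoder that is updated as an exponential moving average (EMA) of the online weights rather than by gradients. This is a toy pure-Python sketch with a linear encoder; the actual method uses the agent's convolutional encoder plus a learned transition model and projection heads, none of which are shown here.

```python
import math
import random

random.seed(0)

def encode(W, x):
    # Toy linear encoder with tanh nonlinearity (stand-in for a CNN).
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

def cosine_loss(pred, target):
    # 2 - 2*cos(pred, target): zero when the two latents are perfectly aligned.
    dot = sum(p * t for p, t in zip(pred, target))
    n_p = math.sqrt(sum(p * p for p in pred)) + 1e-12
    n_t = math.sqrt(sum(t * t for t in target)) + 1e-12
    return 2 - 2 * dot / (n_p * n_t)

dim_in, dim_out = 8, 4
W_online = [[random.gauss(0, 1) for _ in range(dim_in)] for _ in range(dim_out)]
W_target = [row[:] for row in W_online]

tau = 0.99  # EMA coefficient: the target network slowly tracks the online one
for _ in range(10):
    # Stand-in for a gradient step on the online encoder.
    W_online = [[w + 0.01 * random.gauss(0, 1) for w in row] for row in W_online]
    W_target = [[tau * wt + (1 - tau) * wo for wt, wo in zip(rt, ro)]
                for rt, ro in zip(W_target, W_online)]

x_next = [random.gauss(0, 1) for _ in range(dim_in)]
loss = cosine_loss(encode(W_online, x_next), encode(W_target, x_next))
```

Because the target network changes slowly, it provides a stable prediction target, which is the key to avoiding representational collapse in this family of self-supervised objectives.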
Ehrler, Matthew. "VConstruct: a computationally efficient method for reconstructing satellite derived Chlorophyll-a data." Thesis, 2021. http://hdl.handle.net/1828/13346.
Books on the topic "Data-efficient Deep Learning"
Jena, Om Prakash, Alok Ranjan Tripathy, Brojo Kishore Mishra, and Ahmed A. Elngar, eds. Augmented Intelligence: Deep Learning, Machine Learning, Cognitive Computing, Educational Data Mining. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/97898150404011220301.
Delgado Martín, Jordi, Andrea Muñoz-Ibáñez, and Ismael Himar Falcón-Suárez. 6th International Workshop on Rock Physics: A Coruña, Spain, 13–17 June 2022: Book of Abstracts. Servizo de Publicacións da UDC, 2022. http://dx.doi.org/10.17979/spudc.000005.
Book chapters on the topic "Data-efficient Deep Learning"
Sarkar, Tirthajyoti. "Modular and Productive Deep Learning Code." In Productive and Efficient Data Science with Python, 113–56. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8121-5_5.
Vepakomma, Praneeth, and Ramesh Raskar. "Split Learning: A Resource Efficient Model and Data Parallel Approach for Distributed Deep Learning." In Federated Learning, 439–51. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0_19.
Maulana, Muhammad Rizki, and Wee Sun Lee. "Ensemble and Auxiliary Tasks for Data-Efficient Deep Reinforcement Learning." In Machine Learning and Knowledge Discovery in Databases. Research Track, 122–38. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86486-6_8.
Ghantasala, G. S. Pradeep, L. R. Sudha, T. Veni Priya, P. Deepan, and R. Raja Vignesh. "An Efficient Deep Learning Framework for Multimedia Big Data Analytics." In Multimedia Computing Systems and Virtual Reality, 99–127. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003196686-5.
Sharma, Pranav, Marcus Rüb, Daniel Gaida, Heiko Lutz, and Axel Sikora. "Deep Learning in Resource and Data Constrained Edge Computing Systems." In Machine Learning for Cyber Physical Systems, 43–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-62746-4_5.
Zheng, Yefeng, David Liu, Bogdan Georgescu, Hien Nguyen, and Dorin Comaniciu. "Robust Landmark Detection in Volumetric Data with Efficient 3D Deep Learning." In Deep Learning and Convolutional Neural Networks for Medical Image Computing, 49–61. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-42999-1_4.
Symeonidis, C., P. Nousi, P. Tosidis, K. Tsampazis, N. Passalis, A. Tefas, and N. Nikolaidis. "Efficient Realistic Data Generation Framework Leveraging Deep Learning-Based Human Digitization." In Proceedings of the International Neural Networks Society, 271–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80568-5_23.
Rajawat, Anand Singh, Kanishk Barhanpurkar, S. B. Goyal, Pradeep Bedi, Rabindra Nath Shaw, and Ankush Ghosh. "Efficient Deep Learning for Reforming Authentic Content Searching on Big Data." In Advanced Computing and Intelligent Technologies, 319–27. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-2164-2_26.
Zheng, Yefeng, David Liu, Bogdan Georgescu, Hien Nguyen, and Dorin Comaniciu. "3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data." In Lecture Notes in Computer Science, 565–72. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24553-9_69.
Ghesu, Florin C., Bogdan Georgescu, Yefeng Zheng, Joachim Hornegger, and Dorin Comaniciu. "Marginal Space Deep Learning: Efficient Architecture for Detection in Volumetric Image Data." In Lecture Notes in Computer Science, 710–18. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24553-9_87.
Conference papers on the topic "Data-efficient Deep Learning"
Zhang, Xianchao, Wentao Yang, Xiaotong Zhang, Han Liu, and Guanglu Wang. "Data-Efficient Deep Reinforcement Learning with Symmetric Consistency." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956417.
Li, Hang, Ju Wang, Xi Chen, Xue Liu, and Gregory Dudek. "Data-Efficient Communication Traffic Prediction With Deep Transfer Learning." In ICC 2022 - IEEE International Conference on Communications. IEEE, 2022. http://dx.doi.org/10.1109/icc45855.2022.9838413.
Wang, Xue. "An efficient federated learning optimization algorithm on non-IID data." In International Conference on Cloud Computing, Performance Computing, and Deep Learning (CCPCDL 2022), edited by Sandeep Saxena. SPIE, 2022. http://dx.doi.org/10.1117/12.2640939.
Kok, Ibrahim, Burak H. Corak, Uraz Yavanoglu, and Suat Ozdemir. "Deep Learning based Delay and Bandwidth Efficient Data Transmission in IoT." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9005680.
Chen, Zhixin, Zhixin Jia, Mengxiang Lin, and Shibo Jian. "Towards Generalization and Data Efficient Learning of Deep Robotic Grasping." In 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA). IEEE, 2022. http://dx.doi.org/10.1109/iciea54703.2022.10006045.
Xiong, Y., and J. Cheng. "Efficient Seismic Data Interpolation Using Deep Convolutional Networks and Transfer Learning." In 81st EAGE Conference and Exhibition 2019. European Association of Geoscientists & Engineers, 2019. http://dx.doi.org/10.3997/2214-4609.201900768.
Xu, Changming, Hengfeng Ding, Xuejian Zhang, Cong Wang, and Hongji Yang. "A Data-Efficient Method of Deep Reinforcement Learning for Chinese Chess." In 2022 IEEE 22nd International Conference on Software Quality, Reliability, and Security Companion (QRS-C). IEEE, 2022. http://dx.doi.org/10.1109/qrs-c57518.2022.00109.
Wu, Di, Jikun Kang, Yi Tian Xu, Hang Li, Jimmy Li, Xi Chen, Dmitriy Rivkin, et al. "Load Balancing for Communication Networks via Data-Efficient Deep Reinforcement Learning." In GLOBECOM 2021 - 2021 IEEE Global Communications Conference. IEEE, 2021. http://dx.doi.org/10.1109/globecom46510.2021.9685294.
Awad, Abdalaziz, Philipp Brendel, and Andreas Erdmann. "Data efficient deep learning for imaging with novel EUV mask absorbers." In Optical and EUV Nanolithography XXXV, edited by Anna Lio and Martin Burkhardt. SPIE, 2022. http://dx.doi.org/10.1117/12.2613954.
Melas-Kyriazi, Luke, George Han, and Celine Liang. "Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings." In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019). Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-6114.