Table of contents
Selection of scholarly literature on the topic "Interpolation-Based data augmentation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Interpolation-Based data augmentation".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Interpolation-Based data augmentation"
Oh, Cheolhwan, Seungmin Han, and Jongpil Jeong. "Time-Series Data Augmentation based on Interpolation". Procedia Computer Science 175 (2020): 64–71. http://dx.doi.org/10.1016/j.procs.2020.07.012.
Li, Yuliang, Xiaolan Wang, Zhengjie Miao, and Wang-Chiew Tan. "Data augmentation for ML-driven data preparation and integration". Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3182–85. http://dx.doi.org/10.14778/3476311.3476403.
Huang, Chenhui, and Akinobu Shibuya. "High Accuracy Geochemical Map Generation Method by a Spatial Autocorrelation-Based Mixture Interpolation Using Remote Sensing Data". Remote Sensing 12, no. 12 (June 21, 2020): 1991. http://dx.doi.org/10.3390/rs12121991.
Tsourtis, Anastasios, Georgios Papoutsoglou, and Yannis Pantazis. "GAN-Based Training of Semi-Interpretable Generators for Biological Data Interpolation and Augmentation". Applied Sciences 12, no. 11 (May 27, 2022): 5434. http://dx.doi.org/10.3390/app12115434.
Bi, Xiao-ying, Bo Li, Wen-long Lu, and Xin-zhi Zhou. "Daily runoff forecasting based on data-augmented neural network model". Journal of Hydroinformatics 22, no. 4 (May 16, 2020): 900–915. http://dx.doi.org/10.2166/hydro.2020.017.
de Rojas, Ana Lazcano. "Data augmentation in economic time series: Behavior and improvements in predictions". AIMS Mathematics 8, no. 10 (2023): 24528–44. http://dx.doi.org/10.3934/math.20231251.
Xie, Xiangjin, Li Yangning, Wang Chen, Kai Ouyang, Zuotong Xie, and Hai-Tao Zheng. "Global Mixup: Eliminating Ambiguity with Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13798–806. http://dx.doi.org/10.1609/aaai.v37i11.26616.
Guo, Hongyu. "Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4044–51. http://dx.doi.org/10.1609/aaai.v34i04.5822.
Lim, Seong-Su, and Oh-Wook Kwon. "FrameAugment: A Simple Data Augmentation Method for Encoder–Decoder Speech Recognition". Applied Sciences 12, no. 15 (July 28, 2022): 7619. http://dx.doi.org/10.3390/app12157619.
Xie, Kai, Yuxuan Gao, Yadang Chen, and Xun Che. "Mask Mixup Model: Enhanced Contrastive Learning for Few-Shot Learning". Applied Sciences 14, no. 14 (July 11, 2024): 6063. http://dx.doi.org/10.3390/app14146063.
Dissertations on the topic "Interpolation-Based data augmentation"
Venkataramanan, Shashanka. "Metric learning for instance and category-level visual representation". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS022.
The primary goal in computer vision is to enable machines to extract meaningful information from visual data, such as images and videos, and leverage this information to perform a wide range of tasks. To this end, substantial research has focused on developing deep learning models capable of encoding comprehensive and robust visual representations. A prominent strategy in this context involves pretraining models on large-scale datasets, such as ImageNet, to learn representations that exhibit cross-task applicability and facilitate the successful handling of diverse downstream tasks with minimal effort. To facilitate learning on these large-scale datasets and encode good representations, complex data augmentation strategies have been used. However, these augmentations can be limited in their scope, either being hand-crafted and lacking diversity, or generating images that appear unnatural. Moreover, the focus of these augmentation techniques has primarily been on the ImageNet dataset and its downstream tasks, limiting their applicability to a broader range of computer vision problems.

In this thesis, we aim to tackle these limitations by exploring different approaches to enhance the efficiency and effectiveness of representation learning. The common thread across the works presented is the use of interpolation-based techniques, such as mixup, to generate diverse and informative training examples beyond the original dataset. In the first work, we are motivated by the idea of deformation as a natural way of interpolating images rather than using a convex combination. We show that geometrically aligning the two images in the feature space allows for a more natural interpolation that retains the geometry of one image and the texture of the other, connecting it to style transfer. Drawing from these observations, we explore the combination of mixup and deep metric learning. We develop a generalized formulation that accommodates mixup in metric learning, leading to improved representations that explore areas of the embedding space beyond the training classes.

Building on these insights, we revisit the original motivation of mixup and generate a larger number of interpolated examples beyond the mini-batch size by interpolating in the embedding space. This approach allows us to sample on the entire convex hull of the mini-batch, rather than just along linear segments between pairs of examples. Finally, we investigate the potential of using natural augmentations of objects from videos. We introduce a "Walking Tours" dataset of first-person egocentric videos, which capture a diverse range of objects and actions in natural scene transitions. We then propose a novel self-supervised pretraining method called DoRA, which detects and tracks objects in video frames, deriving multiple views from the tracks and using them in a self-supervised manner.
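The embedding-space interpolation summarized in this abstract can be made concrete with a short sketch. The snippet below is a minimal NumPy illustration of the general idea, not the thesis's actual implementation: it contrasts classic pairwise mixup (convex combinations along segments between random pairs) with Dirichlet-weighted interpolation over the whole mini-batch, which samples anywhere on its convex hull and can produce more examples than the batch contains. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_mixup(z, y, alpha=0.2):
    # Classic mixup: convex combinations along segments between random pairs.
    # z: (B, D) mini-batch of embeddings, y: (B, C) one-hot labels.
    B = z.shape[0]
    lam = rng.beta(alpha, alpha, size=(B, 1))   # one mixing weight per pair
    perm = rng.permutation(B)                   # random partner for each example
    return lam * z + (1 - lam) * z[perm], lam * y + (1 - lam) * y[perm]

def convex_hull_mixup(z, y, n_samples, alpha=1.0):
    # Interpolate over ALL B examples at once: each Dirichlet draw is a
    # convex-combination weight vector, so samples fall anywhere on the
    # convex hull of the mini-batch and can outnumber the batch itself.
    B = z.shape[0]
    w = rng.dirichlet(np.full(B, alpha), size=n_samples)  # (n_samples, B), rows sum to 1
    return w @ z, w @ y

# Toy usage: 8 embeddings of dimension 4, 3 classes.
z = rng.normal(size=(8, 4))
y = np.eye(3)[rng.integers(0, 3, size=8)]
z_pair, y_pair = pairwise_mixup(z, y)                   # as many outputs as inputs
z_hull, y_hull = convex_hull_mixup(z, y, n_samples=32)  # more outputs than batch size
print(z_pair.shape, z_hull.shape)  # (8, 4) (32, 4)
```

Drawing the weights from a Dirichlet over all B examples generalizes the Beta-distributed pairwise case (a two-component Dirichlet reduces to a Beta), which is precisely the distinction the abstract draws between linear segments and the full convex hull.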
Book chapters on the topic "Interpolation-Based data augmentation"
Rabah, Mohamed Louay, Nedra Mellouli, and Imed Riadh Farah. "Interpolation and Prediction of Piezometric Multivariate Time Series Based on Data Augmentation and Transformers". In Lecture Notes in Networks and Systems, 327–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-47724-9_22.
Conference papers on the topic "Interpolation-Based data augmentation"
Ye, Mao, Haitao Wang, and Zheqian Chen. "MSMix: An Interpolation-Based Text Data Augmentation Method Manifold Swap Mixup". In 4th International Conference on Natural Language Processing and Machine Learning. Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130806.
Heo, Jaeseung, Seungbeom Lee, Sungsoo Ahn, and Dongwoo Kim. "EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost". In Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/455.
Li, Chen, Xutan Peng, Hao Peng, Jianxin Li, and Lihong Wang. "TextGTL: Graph-based Transductive Learning for Semi-supervised Text Classification via Structure-Sensitive Interpolation". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/369.