Selected scientific literature on the topic "Interpolation-Based data augmentation"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Interpolation-Based data augmentation".
Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Journal articles on the topic "Interpolation-Based data augmentation"
Oh, Cheolhwan, Seungmin Han, and Jongpil Jeong. "Time-Series Data Augmentation based on Interpolation." Procedia Computer Science 175 (2020): 64–71. http://dx.doi.org/10.1016/j.procs.2020.07.012.
Li, Yuliang, Xiaolan Wang, Zhengjie Miao, and Wang-Chiew Tan. "Data augmentation for ML-driven data preparation and integration." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3182–85. http://dx.doi.org/10.14778/3476311.3476403.
Huang, Chenhui, and Akinobu Shibuya. "High Accuracy Geochemical Map Generation Method by a Spatial Autocorrelation-Based Mixture Interpolation Using Remote Sensing Data." Remote Sensing 12, no. 12 (June 21, 2020): 1991. http://dx.doi.org/10.3390/rs12121991.
Tsourtis, Anastasios, Georgios Papoutsoglou, and Yannis Pantazis. "GAN-Based Training of Semi-Interpretable Generators for Biological Data Interpolation and Augmentation." Applied Sciences 12, no. 11 (May 27, 2022): 5434. http://dx.doi.org/10.3390/app12115434.
Bi, Xiao-ying, Bo Li, Wen-long Lu, and Xin-zhi Zhou. "Daily runoff forecasting based on data-augmented neural network model." Journal of Hydroinformatics 22, no. 4 (May 16, 2020): 900–915. http://dx.doi.org/10.2166/hydro.2020.017.
de Rojas, Ana Lazcano. "Data augmentation in economic time series: Behavior and improvements in predictions." AIMS Mathematics 8, no. 10 (2023): 24528–44. http://dx.doi.org/10.3934/math.20231251.
Xie, Xiangjin, Li Yangning, Wang Chen, Kai Ouyang, Zuotong Xie, and Hai-Tao Zheng. "Global Mixup: Eliminating Ambiguity with Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13798–806. http://dx.doi.org/10.1609/aaai.v37i11.26616.
Guo, Hongyu. "Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 4044–51. http://dx.doi.org/10.1609/aaai.v34i04.5822.
Lim, Seong-Su, and Oh-Wook Kwon. "FrameAugment: A Simple Data Augmentation Method for Encoder–Decoder Speech Recognition." Applied Sciences 12, no. 15 (July 28, 2022): 7619. http://dx.doi.org/10.3390/app12157619.
Xie, Kai, Yuxuan Gao, Yadang Chen, and Xun Che. "Mask Mixup Model: Enhanced Contrastive Learning for Few-Shot Learning." Applied Sciences 14, no. 14 (July 11, 2024): 6063. http://dx.doi.org/10.3390/app14146063.
Theses / dissertations on the topic "Interpolation-Based data augmentation"
Venkataramanan, Shashanka. "Metric learning for instance and category-level visual representation". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. http://www.theses.fr/2024URENS022.
The primary goal in computer vision is to enable machines to extract meaningful information from visual data, such as images and videos, and to leverage this information to perform a wide range of tasks. To this end, substantial research has focused on developing deep learning models capable of encoding comprehensive and robust visual representations. A prominent strategy in this context involves pretraining models on large-scale datasets, such as ImageNet, to learn representations that exhibit cross-task applicability and facilitate the successful handling of diverse downstream tasks with minimal effort. To facilitate learning on these large-scale datasets and encode good representations, complex data augmentation strategies have been used. However, these augmentations can be limited in their scope, either being hand-crafted and lacking diversity, or generating images that appear unnatural. Moreover, these augmentation techniques have focused primarily on the ImageNet dataset and its downstream tasks, limiting their applicability to a broader range of computer vision problems. In this thesis, we aim to tackle these limitations by exploring different approaches to enhance the efficiency and effectiveness of representation learning. The common thread across the works presented is the use of interpolation-based techniques, such as mixup, to generate diverse and informative training examples beyond the original dataset. In the first work, we are motivated by the idea of deformation as a natural way of interpolating images, rather than using a convex combination. We show that geometrically aligning the two images in the feature space allows for a more natural interpolation that retains the geometry of one image and the texture of the other, connecting it to style transfer. Drawing from these observations, we explore the combination of mixup and deep metric learning.
We develop a generalized formulation that accommodates mixup in metric learning, leading to improved representations that explore areas of the embedding space beyond the training classes. Building on these insights, we revisit the original motivation of mixup and generate a larger number of interpolated examples beyond the mini-batch size by interpolating in the embedding space. This approach allows us to sample on the entire convex hull of the mini-batch, rather than just along linear segments between pairs of examples. Finally, we investigate the potential of using natural augmentations of objects from videos. We introduce a "Walking Tours" dataset of first-person egocentric videos, which capture a diverse range of objects and actions in natural scene transitions. We then propose a novel self-supervised pretraining method called DoRA, which detects and tracks objects in video frames, deriving multiple views from the tracks and using them in a self-supervised manner.
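The abstract contrasts classic pairwise mixup (a convex combination of two examples) with interpolating over the entire convex hull of a mini-batch. A minimal NumPy sketch of both ideas follows; it is an illustrative simplification under assumed array shapes, not the thesis's actual implementation (which interpolates in a learned embedding space):

```python
import numpy as np


def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Classic mixup: a convex combination of two examples and their labels,
    with the mixing weight drawn from a Beta(alpha, alpha) distribution."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y


def convex_hull_mixup(batch_x, batch_y, rng=None):
    """Batch-level generalization: draw Dirichlet weights over the whole
    mini-batch, yielding a point anywhere in its convex hull rather than
    on a line segment between a single pair of examples."""
    rng = rng or np.random.default_rng()
    w = rng.dirichlet(np.ones(len(batch_x)))   # non-negative, sums to 1
    x = np.tensordot(w, batch_x, axes=1)        # weighted sum of inputs
    y = np.tensordot(w, batch_y, axes=1)        # weighted sum of labels
    return x, y
```

With only two batch elements, `convex_hull_mixup` reduces to pairwise mixup; larger batches are where hull sampling produces interpolates that pairwise mixing cannot reach.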
Book chapters on the topic "Interpolation-Based data augmentation"
Rabah, Mohamed Louay, Nedra Mellouli, and Imed Riadh Farah. "Interpolation and Prediction of Piezometric Multivariate Time Series Based on Data Augmentation and Transformers." In Lecture Notes in Networks and Systems, 327–44. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-47724-9_22.
Conference papers on the topic "Interpolation-Based data augmentation"
Ye, Mao, Haitao Wang, and Zheqian Chen. "MSMix: An Interpolation-Based Text Data Augmentation Method Manifold Swap Mixup." In 4th International Conference on Natural Language Processing and Machine Learning. Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130806.
Heo, Jaeseung, Seungbeom Lee, Sungsoo Ahn, and Dongwoo Kim. "EPIC: Graph Augmentation with Edit Path Interpolation via Learnable Cost." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/455.
Li, Chen, Xutan Peng, Hao Peng, Jianxin Li, and Lihong Wang. "TextGTL: Graph-based Transductive Learning for Semi-supervised Text Classification via Structure-Sensitive Interpolation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/369.