A selection of scholarly literature on the topic "Low-rank adaptation"

Cite a source in APA, MLA, Chicago, Harvard, or any other citation style

Choose a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Low-rank adaptation".

Next to every entry in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a publication as a PDF and read its online abstract, provided the corresponding parameters are available in the work's metadata.

Journal articles on the topic "Low-rank adaptation"

1

Yang, Weiqi, and Michael Spece. "Implicit Adaptation to Low Rank Structure in Online Learning". International Journal of Machine Learning and Computing 11, no. 5 (September 2021): 339–44. http://dx.doi.org/10.18178/ijmlc.2021.11.5.1058.

2

Chen, Yanran. "A concise analysis of low-rank adaptation". Applied and Computational Engineering 42, no. 1 (February 23, 2024): 76–82. http://dx.doi.org/10.54254/2755-2721/42/20230688.

Abstract:
In recent years, pre-trained language models have proved to be a transformative technology within the domain of Natural Language Processing (NLP). From early word embeddings to modern transformer-based architectures, the success of models like BERT, GPT-3, and their variants has led to remarkable advances on various NLP tasks. Building on the Transformer model, this paper explores and summarizes the application of the lightweight fine-tuning technique LoRA to pre-trained language models, as well as improvements and derived techniques based on LoRA. It categorizes these techniques into two main directions: enhancing training efficiency and improving training performance. Under these two directions, several representative optimization and derived techniques are summarized and analyzed. Furthermore, the paper offers a perspective on hot topics and future prospects in this research area, proposing several directions that hold value for future exploration, such as avenues for further optimization and integration with other lightweight techniques.
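The core reparameterization behind LoRA, as surveyed above, can be illustrated with a minimal NumPy sketch. The shapes, the scaling factor `alpha`, and the function name here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4               # rank r is much smaller than d
W0 = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# LoRA parameterizes the weight update as a low-rank product B @ A.
# Common initialization: A small random, B zero, so training starts at W0.
A = 0.01 * rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))
alpha = 8.0                              # scaling hyperparameter

def lora_forward(x):
    # Frozen path plus scaled low-rank update: (W0 + (alpha / r) * B @ A) @ x
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted model reproduces the pretrained one exactly.
assert np.allclose(lora_forward(x), W0 @ x)

# Trainable parameters: r * (d_in + d_out) instead of d_out * d_in.
print(d_out * d_in, "->", r * (d_in + d_out))  # 4096 -> 512
```

Because only A and B are trained, the low-rank update can be merged into W0 after fine-tuning, which is what makes the technique attractive for the efficiency-oriented variants discussed in the survey.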
3

Filatov, N., and M. Kindulov. "Low Rank Adaptation for Stable Domain Adaptation of Vision Transformers". Optical Memory and Neural Networks 32, S2 (November 28, 2023): S277–S283. http://dx.doi.org/10.3103/s1060992x2306005x.

4

Xu, Bingrong, Jianhua Yin, Cheng Lian, Yixin Su, and Zhigang Zeng. "Low-Rank Optimal Transport for Robust Domain Adaptation". IEEE/CAA Journal of Automatica Sinica 11, no. 7 (July 2024): 1667–80. http://dx.doi.org/10.1109/jas.2024.124344.

5

Hu, Yahao, Yifei Xie, Tianfeng Wang, Man Chen, and Zhisong Pan. "Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning". Mathematics 11, no. 20 (October 17, 2023): 4317. http://dx.doi.org/10.3390/math11204317.

Abstract:
With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible. Therefore, parameter-efficient adaptation techniques for PLMs have been proposed to learn through incremental updates of pre-trained weights, such as in low-rank adaptation (LoRA). However, LoRA relies on heuristics to select the modules and layers to which it is applied, and assigns them the same rank. As a consequence, any fine-tuning that ignores the structural information between modules and layers is suboptimal. In this work, we propose structure-aware low-rank adaptation (SaLoRA), which adaptively learns the intrinsic rank of each incremental matrix by removing rank-0 components during training. We conduct comprehensive experiments using pre-trained models of different scales in both task-oriented (GLUE) and task-agnostic (Yelp and GYAFC) settings. The experimental results show that SaLoRA effectively captures the structure-aware intrinsic rank. Moreover, our method consistently outperforms LoRA without significantly compromising training efficiency.
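One common way to realize the rank-adaptive idea described above is to place a diagonal gate between the two low-rank factors and prune components whose gates reach zero. The sketch below is a generic illustration of that mechanism, not SaLoRA's exact parameterization; the gate values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

d, r = 32, 8
A = rng.standard_normal((r, d))
B = rng.standard_normal((d, r))
# Hypothetical diagonal gate: entries driven to zero during training
# (e.g. by a sparsity penalty) switch off individual rank-1 components.
g = np.array([1.3, 0.0, 0.7, 0.0, 0.0, 2.1, 0.0, 0.4])

delta = B @ np.diag(g) @ A
# The learned ("intrinsic") rank equals the number of surviving gates.
effective_rank = int(np.count_nonzero(g))
print(effective_rank)  # 4
assert np.linalg.matrix_rank(delta) <= effective_rank
```

Pruning the zero-gated rows of A and columns of B then yields a smaller adapter with the same update, which is how a method of this kind can learn a different rank per module.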
6

Li, Wen, Zheng Xu, Dong Xu, Dengxin Dai, and Luc Van Gool. "Domain Generalization and Adaptation Using Low Rank Exemplar SVMs". IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 5 (May 1, 2018): 1114–27. http://dx.doi.org/10.1109/tpami.2017.2704624.

7

Jaech, Aaron, and Mari Ostendorf. "Low-Rank RNN Adaptation for Context-Aware Language Modeling". Transactions of the Association for Computational Linguistics 6 (December 2018): 497–510. http://dx.doi.org/10.1162/tacl_a_00035.

Abstract:
A context-aware language model uses location, user and/or domain metadata (context) to adapt its predictions. In neural language models, context information is typically represented as an embedding and it is given to the RNN as an additional input, which has been shown to be useful in many applications. We introduce a more powerful mechanism for using context to adapt an RNN by letting the context vector control a low-rank transformation of the recurrent layer weight matrix. Experiments show that allowing a greater fraction of the model parameters to be adjusted has benefits in terms of perplexity and classification for several different types of context.
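The mechanism this abstract describes, a context vector controlling a low-rank transformation of the recurrent weight matrix, can be sketched roughly as follows. The factorization and all names here are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

h, k, r = 32, 8, 3   # hidden size, context embedding size, rank (illustrative)
W = rng.standard_normal((h, h))   # recurrent weight matrix of the base model

# Hypothetical factors: the context embedding c is mapped to an r-dimensional
# core that modulates a fixed pair of low-rank bases L and R.
L = rng.standard_normal((h, r))
R = rng.standard_normal((r, h))
M = rng.standard_normal((r, k))

def adapted_recurrent_weights(c):
    # The context vector controls a rank-r additive transformation of W:
    #   W' = W + L @ diag(M @ c) @ R
    return W + L @ np.diag(M @ c) @ R

c = rng.standard_normal(k)        # context embedding for one input
W_c = adapted_recurrent_weights(c)
assert W_c.shape == (h, h)
# The context-dependent update has rank at most r.
assert np.linalg.matrix_rank(W_c - W) <= r
```

The point of such a scheme, per the abstract, is that the context adjusts a large fraction of the recurrent parameters while itself adding only the small factors L, R, and M.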
8

Ruff, Douglas A., Cheng Xue, Lily E. Kramer, Faisal Baqai, and Marlene R. Cohen. "Low rank mechanisms underlying flexible visual representations". Proceedings of the National Academy of Sciences 117, no. 47 (November 23, 2020): 29321–29. http://dx.doi.org/10.1073/pnas.2005797117.

Abstract:
Neuronal population responses to sensory stimuli are remarkably flexible. The responses of neurons in visual cortex have heterogeneous dependence on stimulus properties (e.g., contrast), processes that affect all stages of visual processing (e.g., adaptation), and cognitive processes (e.g., attention or task switching). Understanding whether these processes affect similar neuronal populations and whether they have similar effects on entire populations can provide insight into whether they utilize analogous mechanisms. In particular, it has recently been demonstrated that attention has low rank effects on the covariability of populations of visual neurons, which impacts perception and strongly constrains mechanistic models. We hypothesized that measuring changes in population covariability associated with other sensory and cognitive processes could clarify whether they utilize similar mechanisms or computations. Our experimental design included measurements in multiple visual areas using four distinct sensory and cognitive processes. We found that contrast, adaptation, attention, and task switching affect the variability of responses of populations of neurons in primate visual cortex in a similarly low rank way. These results suggest that a given circuit may use similar mechanisms to perform many forms of modulation and likely reflects a general principle that applies to a wide range of brain areas and sensory, cognitive, and motor processes.
9

Jeong, Y., and H. S. Kim. "Speaker adaptation using generalised low rank approximations of training matrices". Electronics Letters 46, no. 10 (2010): 724. http://dx.doi.org/10.1049/el.2010.0466.

10

Kim, Juhyeong, Gyunyeop Kim, and Sangwoo Kang. "Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning". Mathematics 12, no. 23 (November 28, 2024): 3744. http://dx.doi.org/10.3390/math12233744.

Abstract:
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning. Low-rank decomposition adaptation (LoRA) significantly reduces the parameter count to 0.03% of that in full fine-tuning, maintaining satisfactory performance when training only two low-rank parameters. However, limitations remain due to the lack of task-specific parameters involved in training. To mitigate these issues, we propose the Lottery Rank-Pruning Adaptation (LoRPA) method, which utilizes the Lottery Ticket Hypothesis to prune less significant parameters based on their magnitudes following initial training. Initially, LoRPA trains with a relatively large rank size and then applies pruning to enhance performance in subsequent training with fewer parameters. We conducted experiments to compare LoRPA with LoRA baselines, including a setting with a relatively large rank size. Experimental results on the GLUE dataset with RoBERTa demonstrate that LoRPA achieves comparable results on the base scale while outperforming LoRA with various rank sizes by 0.04% to 0.74% on a large scale across multiple tasks. Additionally, on generative summarization tasks using BART-base on the CNN/DailyMail and XSum datasets, LoRPA outperformed LoRA at the standard rank size and other PEFT methods in most of the metrics. These results validate the efficacy of lottery pruning for LoRA in downstream natural-language understanding and generation tasks.
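The train-large-then-prune idea described above can be illustrated with a generic magnitude-based rank-pruning sketch. This is not LoRPA's exact scoring criterion or training schedule; the shapes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

d, r_large, r_keep = 64, 16, 4
# Start from a LoRA factor pair trained at a deliberately large rank.
A = rng.standard_normal((r_large, d))
B = rng.standard_normal((d, r_large))

# Score each rank-1 component B[:, i] A[i, :] by its magnitude and
# keep only the r_keep strongest before continuing training.
scores = np.linalg.norm(B, axis=0) * np.linalg.norm(A, axis=1)
keep = np.sort(np.argsort(scores)[-r_keep:])

A_pruned, B_pruned = A[keep, :], B[:, keep]
assert A_pruned.shape == (r_keep, d)
assert B_pruned.shape == (d, r_keep)

# The pruned update is the sum of the retained rank-1 components.
delta = B_pruned @ A_pruned
assert np.linalg.matrix_rank(delta) <= r_keep
```

The surviving components then serve as the initialization for the subsequent lower-rank training phase, in the spirit of the Lottery Ticket Hypothesis the abstract invokes.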

Dissertations on the topic "Low-rank adaptation"

1

Grativol, Ribeiro Lucas. "Neural network compression in the context of federated learning and edge devices". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0444.

Abstract:
Federated learning is a collaborative, decentralized machine learning framework driven by growing concerns about data privacy. By shifting model training to local nodes and keeping data local, it enables more privacy-conscious training. However, this approach imposes additional communication and computation overhead on those who adopt it. In this manuscript, we examine the key challenges in federated learning and propose solutions to increase efficiency and reduce hardware requirements. Specifically, we explore classic compression techniques, such as pruning, and low-rank approximations to lower the costs associated with federated learning. For scenarios where participants have limited communication capabilities, we introduce a co-design methodology for an embedded few-shot learning algorithm. Our proposed solution integrates hardware constraints into a deployment pipeline for FPGA platforms, resulting in a low-latency algorithm that can also be leveraged to implement post-federated learning models.
2

Breloy, Arnaud. "Algorithmes d'estimation et de détection en contexte hétérogène rang faible". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLN021/document.

Abstract:
One purpose of array processing is the detection and localization of a target in a noisy environment. In most cases (such as RADAR or active SONAR), statistical properties of the noise, especially its covariance matrix, have to be estimated using i.i.d. samples. Within this context, several hypotheses are usually made: Gaussian distribution, training data containing only noise, and perfect hardware. Nevertheless, it is well known that a Gaussian distribution does not provide a good empirical fit to RADAR clutter data, which is why noise is now modeled by elliptical processes, mainly Spherically Invariant Random Vectors (SIRV). In this new context, the use of the Sample Covariance Matrix (SCM), a classical estimate of the covariance matrix, leads to a loss of performance of detectors and estimators. More efficient estimators have been developed, such as the Fixed Point Estimator and M-estimators. If the noise is modeled as low-rank clutter plus white Gaussian noise, the total covariance matrix is structured as low rank plus identity. This information can be used in the estimation process to reduce the number of samples required to reach acceptable performance. Moreover, it is possible to estimate the basis vectors of the clutter-plus-noise orthogonal subspace rather than the total covariance matrix of the clutter, which requires less data and is more robust to outliers. The orthogonal projector onto the clutter-plus-noise subspace is usually calculated from an estimate of the covariance matrix. Nevertheless, the state of the art does not provide estimators that are both robust to various distributions and adapted to the low-rank structure of the data. This thesis therefore develops new estimators fitting the considered context, to fill this gap. The contributions follow three axes. First, a precise statistical model is presented, namely low-rank heterogeneous sources embedded in white Gaussian noise, and the maximum likelihood estimator of the covariance matrix is derived for this context; since this estimator has no closed form, several algorithms are developed to reach it efficiently. Second, direct clutter subspace estimators are developed that do not require an intermediate covariance matrix estimate. Third, the performance of the proposed methods and of the state of the art is studied on a Space Time Adaptive Processing (STAP) application for airborne radar, on both synthetic and real data.
3

Combernoux, Alice. "Détection et filtrage rang faible pour le traitement d'antenne utilisant la théorie des matrices aléatoires en grandes dimensions". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC016/document.

Abstract:
Nowadays, more and more applications deal with data of increasing dimension, so it seems relevant to exploit appropriate tools such as random matrix theory in the large dimensional regime. More particularly, in specific array processing applications such as STAP and MIMO-STAP radar, we are interested in the processing of a signal of interest corrupted by an additive noise composed of a low-rank component and white Gaussian noise. The aim of this thesis is therefore to study low-rank filtering and detection (functions of projectors) in the large dimensional regime for array processing, using random matrix theory tools. The thesis makes three main contributions in the context of the asymptotic analysis of projector functionals. First, the large dimensional regime allows an approximation/prediction of theoretical non-asymptotic performance that is much more precise than the literature in the classical asymptotic regime (where the number of estimation samples tends to infinity at fixed dimension). Second, two new low-rank adaptive filters and two new detectors are proposed and shown to have better performance as a function of the system parameters, in terms of SINR loss, false alarm probability, and detection probability. Finally, the results are validated on a jamming application and then applied to STAP and sparse MIMO-STAP processing. The study highlights a noticeable difference from the jamming application, related to the covariance matrix models considered in this thesis.

Book chapters on the topic "Low-rank adaptation"

1

Raab, Christoph, and Frank-Michael Schleif. "Low-Rank Subspace Override for Unsupervised Domain Adaptation". In Lecture Notes in Computer Science, 132–47. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58285-2_10.

2

Benaglia, Riccardo, Angelo Porrello, Pietro Buzzega, Simone Calderara, and Rita Cucchiara. "Trajectory Forecasting Through Low-Rank Adaptation of Discrete Latent Codes". In Lecture Notes in Computer Science, 236–51. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78444-6_16.

3

Fang, Zhengyi, Yue Wang, Ran Yi, and Lizhuang Ma. "Dropout Mixture Low-Rank Adaptation for Visual Parameters-Efficient Fine-Tuning". In Lecture Notes in Computer Science, 369–86. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72667-5_21.

4

Paranjape, Jay N., Shameema Sikder, S. Swaroop Vedula, and Vishal M. Patel. "Low-Rank Adaptation of Segment Anything Model for Surgical Scene Segmentation". In Lecture Notes in Computer Science, 187–202. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78198-8_13.

5

Cappelletti, Silvia, Lorenzo Baraldi, Federico Cocchi, Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. "Adapt to Scarcity: Few-Shot Deepfake Detection via Low-Rank Adaptation". In Lecture Notes in Computer Science, 111–26. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78305-0_8.

6

Park, Dongwon, Hayeon Kim, and Se Young Chun. "Contribution-Based Low-Rank Adaptation with Pre-training Model for Real Image Restoration". In Lecture Notes in Computer Science, 87–105. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73039-9_6.

7

Lotey, Taveena, Aman Verma, and Partha Pratim Roy. "EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters". In Lecture Notes in Computer Science, 309–24. Cham: Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-78195-7_21.

8

Chari, Martin Munashe, Hamisai Hamandawana, and Leocadia Zhou. "Socioeconomically Informed Use of Geostatistics to Track Adaptation of Resource-Poor Communities to Climate Change". In African Handbook of Climate Change Adaptation, 1555–81. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_122.

Abstract:
As the Green Climate Fund continues to make concerted efforts to leverage funding for resource-constrained communities in the global south under the aegis of increasing climate change impacts in sub-Saharan Africa, there is urgent and compelling need for tools that assist organizations to track the effectiveness of adaptation interventions in reducing vulnerability. This chapter offers a cost-effective methodology to track adaptation by using a case-study-based identification of communities with diminishing coping capacities in Raymond Mhlaba Local Municipality in the Eastern Cape Province of South Africa. Multistep geostatistical techniques were utilized in the ArcGIS 10.5 software environment to rank and spatialize changes in adaptation by using demographic census data for the years 2001 and 2011. Results of the analysis revealed that 12 communities had declining or static adaptive capacities between 2001 and 2011, while 10 communities had long-term decrease in adaptive capacities from 2001 to 2011 from a sampling universe of 134 communities. These findings are important because they demonstrate that the methodology can be effectively used to provide actionable information on the prevalence of low adaptation capacities at appropriate temporal and spatial scales, in order to guide the allocation of limited resources to the most deserving communities.
9

Wang, Meng, Tian Lin, Ting Xu, Ke Zou, Haoyu Chen, Huazhu Fu, and Ching-Yu Cheng. "Enhancing Large Foundation Models to Identify Fundus Diseases Based on Contrastive Enhanced Low-Rank Adaptation Prompt". In Lecture Notes in Computer Science, 157–66. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-73119-8_16.

10

Zhu, Vince, Zhanghexuan Ji, Dazhou Guo, Puyang Wang, Yingda Xia, Le Lu, Xianghua Ye, Wei Zhu, and Dakai Jin. "Low-Rank Continual Pyramid Vision Transformer: Incrementally Segment Whole-Body Organs in CT with Light-Weighted Adaptation". In Lecture Notes in Computer Science, 371–81. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-72111-3_35.


Conference papers on the topic "Low-rank adaptation"

1

Wu, Taiqiang, Jiahao Wang, Zhe Zhao, and Ngai Wong. "Mixture-of-Subspaces in Low-Rank Adaptation". In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 7880–99. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.450.

2

Grativol, Lucas, Mathieu Léonardon, Guillaume Muller, Virginie Fresse, and Matthieu Arzel. "FLoCoRA: Federated Learning Compression with Low-Rank Adaptation". In 2024 32nd European Signal Processing Conference (EUSIPCO), 1786–90. IEEE, 2024. http://dx.doi.org/10.23919/eusipco63174.2024.10715461.

3

Liang, Yan-Shuo, and Wu-Jun Li. "InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 23638–47. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.02231.

4

Zanella, Maxime, and Ismail Ben Ayed. "Low-Rank Few-Shot Adaptation of Vision-Language Models". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1593–603. IEEE, 2024. http://dx.doi.org/10.1109/cvprw63382.2024.00166.

5

Ferreira-Caballero, Sebastián, Diego P. Pinto-Roa, José Luis Vázquez Noguera, Jordan Ayala, Pedro E. Gardel-Sotomayor, and Pastor Pérez-Estigarribia. "Low-Rank Adaptation Applied to Multiclass Diabetic Retinopathy Classification". In 2024 L Latin American Computer Conference (CLEI), 1–9. IEEE, 2024. http://dx.doi.org/10.1109/clei64178.2024.10700586.

6

Li, Yinqiao, Linqi Song, and Hanxu Hou. "LoRAN: Improved Low-Rank Adaptation by a Non-Linear Transformation". In Findings of the Association for Computational Linguistics: EMNLP 2024, 3134–43. Stroudsburg, PA, USA: Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.findings-emnlp.177.

7

Zhang, Yiwei, Kun Li, Liang Yuan, Jiawen Cheng, Yunquan Zhang, Ting Cao, and Mao Yang. "LoRAStencil: Low-Rank Adaptation of Stencil Computation on Tensor Cores". In SC24: International Conference for High Performance Computing, Networking, Storage and Analysis, 1–17. IEEE, 2024. https://doi.org/10.1109/sc41406.2024.00059.

8

Agiza, Ahmed, Marina Neseem, and Sherief Reda. "MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning". In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16196–205. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.01533.

9

Li, Linfeng, and Lei Guo. "Dynamic Low-Rank Adaptation Based Pruning Algorithm for Large Language Models". In 2024 7th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), 1094–99. IEEE, 2024. https://doi.org/10.1109/prai62207.2024.10826600.

10

Yang, Peng, Hong Ying, Jianxin Duan, Linyue Shi, and Chen Yang. "Quantized Low-Rank Adaptation Based Parameter-efficient Tuning for Low-resource Visual Question Answering". In 2024 6th International Conference on Electronic Engineering and Informatics (EEI), 1318–22. IEEE, 2024. http://dx.doi.org/10.1109/eei63073.2024.10696314.

