
Journal articles on the topic 'Low-rank adaptation'

Consult the top 50 journal articles for your research on the topic 'Low-rank adaptation.'


1

Yang, Weiqi, and Michael Spece. "Implicit Adaptation to Low Rank Structure in Online Learning." International Journal of Machine Learning and Computing 11, no. 5 (September 2021): 339–44. http://dx.doi.org/10.18178/ijmlc.2021.11.5.1058.

2

Chen, Yanran. "A concise analysis of low-rank adaptation." Applied and Computational Engineering 42, no. 1 (February 23, 2024): 76–82. http://dx.doi.org/10.54254/2755-2721/42/20230688.

Abstract:
In recent years, pre-trained language models have proven to be a transformative technology within the domain of Natural Language Processing (NLP). From early word embeddings to modern transformer-based architectures, the success of models like BERT, GPT-3, and their variants has led to remarkable advancements in various NLP tasks. Building on the Transformer model, this paper explores and summarizes the application of the lightweight fine-tuning technique LoRA to pre-trained language models, as well as improvements and derived techniques based on LoRA. The paper categorizes these techniques into two main directions: enhancing training efficiency and improving training performance. Under these two directions, several representative optimization and derived techniques are summarized and analyzed. Furthermore, the paper offers a perspective on open questions and future prospects for this research subject, and proposes several directions that hold exploration value, such as further optimization and integration with other lightweight techniques.
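The core mechanism the LoRA papers in this list build on, a frozen pretrained weight plus a trainable low-rank update, can be sketched in a few lines (a minimal NumPy illustration under assumed dimensions and scaling, not any particular paper's implementation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Forward pass of a linear layer with a LoRA update.

    The pretrained weight W stays frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) would be trained. B starts at zero,
    so the adapted layer initially reproduces the pretrained one.
    """
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 2
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))                  # zero init: update is a no-op at start

x = rng.standard_normal((4, d_in))
# Before training, the LoRA layer matches the frozen pretrained layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

With rank r much smaller than the layer dimensions, only r * (d_in + d_out) parameters are trained instead of d_in * d_out, which is the source of the efficiency gains the surveyed papers report.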
3

Filatov, N., and M. Kindulov. "Low Rank Adaptation for Stable Domain Adaptation of Vision Transformers." Optical Memory and Neural Networks 32, S2 (November 28, 2023): S277–S283. http://dx.doi.org/10.3103/s1060992x2306005x.

4

Xu, Bingrong, Jianhua Yin, Cheng Lian, Yixin Su, and Zhigang Zeng. "Low-Rank Optimal Transport for Robust Domain Adaptation." IEEE/CAA Journal of Automatica Sinica 11, no. 7 (July 2024): 1667–80. http://dx.doi.org/10.1109/jas.2024.124344.

5

Hu, Yahao, Yifei Xie, Tianfeng Wang, Man Chen, and Zhisong Pan. "Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning." Mathematics 11, no. 20 (October 17, 2023): 4317. http://dx.doi.org/10.3390/math11204317.

Abstract:
With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible. Therefore, parameter-efficient adaptation techniques for PLMs have been proposed to learn through incremental updates of pre-trained weights, such as in low-rank adaptation (LoRA). However, LoRA relies on heuristics to select the modules and layers to which it is applied, and assigns them the same rank. As a consequence, any fine-tuning that ignores the structural information between modules and layers is suboptimal. In this work, we propose structure-aware low-rank adaptation (SaLoRA), which adaptively learns the intrinsic rank of each incremental matrix by removing rank-0 components during training. We conduct comprehensive experiments using pre-trained models of different scales in both task-oriented (GLUE) and task-agnostic (Yelp and GYAFC) settings. The experimental results show that SaLoRA effectively captures the structure-aware intrinsic rank. Moreover, our method consistently outperforms LoRA without significantly compromising training efficiency.
6

Li, Wen, Zheng Xu, Dong Xu, Dengxin Dai, and Luc Van Gool. "Domain Generalization and Adaptation Using Low Rank Exemplar SVMs." IEEE Transactions on Pattern Analysis and Machine Intelligence 40, no. 5 (May 1, 2018): 1114–27. http://dx.doi.org/10.1109/tpami.2017.2704624.

7

Jaech, Aaron, and Mari Ostendorf. "Low-Rank RNN Adaptation for Context-Aware Language Modeling." Transactions of the Association for Computational Linguistics 6 (December 2018): 497–510. http://dx.doi.org/10.1162/tacl_a_00035.

Abstract:
A context-aware language model uses location, user and/or domain metadata (context) to adapt its predictions. In neural language models, context information is typically represented as an embedding and it is given to the RNN as an additional input, which has been shown to be useful in many applications. We introduce a more powerful mechanism for using context to adapt an RNN by letting the context vector control a low-rank transformation of the recurrent layer weight matrix. Experiments show that allowing a greater fraction of the model parameters to be adjusted has benefits in terms of perplexity and classification for several different types of context.
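The mechanism described in this abstract, a context embedding controlling a low-rank change to the recurrent weight matrix, can be sketched roughly as follows (a simplified NumPy illustration; the factor names and the diagonal gating form are assumptions for exposition, not the paper's exact parameterization):

```python
import numpy as np

def adapt_recurrent_weights(W_h, L_f, R_f, context):
    """Context-dependent recurrent weights: W' = W_h + L_f @ diag(c) @ R_f.

    The additive adaptation has rank at most len(context), so a
    k-dimensional context vector controls a rank-k transformation of
    the shared recurrent weight matrix W_h.
    """
    return W_h + L_f @ np.diag(context) @ R_f

rng = np.random.default_rng(1)
h, k = 16, 3                       # hidden size, context dimension
W_h = rng.standard_normal((h, h))  # shared recurrent weights
L_f = rng.standard_normal((h, k))  # left adaptation factor (learned)
R_f = rng.standard_normal((k, h))  # right adaptation factor (learned)
c = rng.standard_normal(k)         # context embedding for one input

W_adapted = adapt_recurrent_weights(W_h, L_f, R_f, c)
# The context-driven change to the weights is low rank by construction.
assert np.linalg.matrix_rank(W_adapted - W_h) <= k
```

Compared with feeding the context embedding as an extra input, this lets the context adjust a much larger fraction of the model's parameters, which is the benefit the abstract highlights.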
8

Ruff, Douglas A., Cheng Xue, Lily E. Kramer, Faisal Baqai, and Marlene R. Cohen. "Low rank mechanisms underlying flexible visual representations." Proceedings of the National Academy of Sciences 117, no. 47 (November 23, 2020): 29321–29. http://dx.doi.org/10.1073/pnas.2005797117.

Abstract:
Neuronal population responses to sensory stimuli are remarkably flexible. The responses of neurons in visual cortex have heterogeneous dependence on stimulus properties (e.g., contrast), processes that affect all stages of visual processing (e.g., adaptation), and cognitive processes (e.g., attention or task switching). Understanding whether these processes affect similar neuronal populations and whether they have similar effects on entire populations can provide insight into whether they utilize analogous mechanisms. In particular, it has recently been demonstrated that attention has low rank effects on the covariability of populations of visual neurons, which impacts perception and strongly constrains mechanistic models. We hypothesized that measuring changes in population covariability associated with other sensory and cognitive processes could clarify whether they utilize similar mechanisms or computations. Our experimental design included measurements in multiple visual areas using four distinct sensory and cognitive processes. We found that contrast, adaptation, attention, and task switching affect the variability of responses of populations of neurons in primate visual cortex in a similarly low rank way. These results suggest that a given circuit may use similar mechanisms to perform many forms of modulation and likely reflects a general principle that applies to a wide range of brain areas and sensory, cognitive, and motor processes.
9

Jeong, Y., and H. S. Kim. "Speaker adaptation using generalised low rank approximations of training matrices." Electronics Letters 46, no. 10 (2010): 724. http://dx.doi.org/10.1049/el.2010.0466.

10

Kim, Juhyeong, Gyunyeop Kim, and Sangwoo Kang. "Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning." Mathematics 12, no. 23 (November 28, 2024): 3744. http://dx.doi.org/10.3390/math12233744.

Abstract:
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning. Low-rank decomposition adaptation (LoRA) significantly reduces the parameter count to 0.03% of that in full fine-tuning, maintaining satisfactory performance when training only two low-rank parameters. However, limitations remain due to the lack of task-specific parameters involved in training. To mitigate these issues, we propose the Lottery Rank-Pruning Adaptation (LoRPA) method, which utilizes the Lottery Ticket Hypothesis to prune less significant parameters based on their magnitudes following initial training. Initially, LoRPA trains with a relatively large rank size and then applies pruning to enhance performance in subsequent training with fewer parameters. We conducted experiments to compare LoRPA with LoRA baselines, including a setting with a relatively large rank size. Experimental results on the GLUE dataset with RoBERTa demonstrate that LoRPA achieves comparable results on the base scale while outperforming LoRA with various rank sizes by 0.04% to 0.74% on a large scale across multiple tasks. Additionally, on generative summarization tasks using BART-base on the CNN/DailyMail and XSum datasets, LoRPA outperformed LoRA at the standard rank size and other PEFT methods in most of the metrics. These results validate the efficacy of lottery pruning for LoRA in downstream natural-language understanding and generation tasks.
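The pruning step this abstract describes, training with a large rank and then discarding the least significant rank components, can be sketched as follows (a hedged NumPy illustration; scoring each component by the product of its factor norms is an assumed criterion, not necessarily LoRPA's exact rule):

```python
import numpy as np

def prune_lora_ranks(A, B, keep):
    """Drop the least significant rank components of a LoRA factor pair.

    Rank component i contributes the outer product B[:, i] @ A[i, :];
    here it is scored by the product of the two factors' norms, and only
    the top `keep` components are retained for continued training.
    """
    scores = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=0)
    idx = np.sort(np.argsort(scores)[-keep:])  # preserve component order
    return A[idx, :], B[:, idx]

rng = np.random.default_rng(2)
d_in, d_out, r_big, r_small = 10, 8, 8, 2
A = rng.standard_normal((r_big, d_in))   # factors after initial training
B = rng.standard_normal((d_out, r_big))

A_p, B_p = prune_lora_ranks(A, B, keep=r_small)
assert A_p.shape == (r_small, d_in) and B_p.shape == (d_out, r_small)
# The pruned update is still low rank, now bounded by the smaller r.
assert np.linalg.matrix_rank(B_p @ A_p) <= r_small
```

The surviving components would then be fine-tuned further, in the spirit of the Lottery Ticket Hypothesis the paper invokes.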
11

Tao, JianWen, Dawei Song, Shiting Wen, and Wenjun Hu. "Robust multi-source adaptation visual classification using supervised low-rank representation." Pattern Recognition 61 (January 2017): 47–65. http://dx.doi.org/10.1016/j.patcog.2016.07.006.

12

Tao, JianWen, Shiting Wen, and Wenjun Hu. "Robust domain adaptation image classification via sparse and low rank representation." Journal of Visual Communication and Image Representation 33 (November 2015): 134–48. http://dx.doi.org/10.1016/j.jvcir.2015.09.005.

13

Ren, Chuan-Xian, Xiao-Lin Xu, and Hong Yan. "Generalized Conditional Domain Adaptation: A Causal Perspective With Low-Rank Translators." IEEE Transactions on Cybernetics 50, no. 2 (February 2020): 821–34. http://dx.doi.org/10.1109/tcyb.2018.2874219.

14

Wu, Hanrui, and Michael K. Ng. "Multiple Graphs and Low-Rank Embedding for Multi-Source Heterogeneous Domain Adaptation." ACM Transactions on Knowledge Discovery from Data 16, no. 4 (August 31, 2022): 1–25. http://dx.doi.org/10.1145/3492804.

Abstract:
Multi-source domain adaptation is a challenging topic in transfer learning, especially when the data of each domain are represented by different kinds of features, i.e., Multi-source Heterogeneous Domain Adaptation (MHDA). It is important to take advantage of the knowledge extracted from multiple sources as well as bridge the heterogeneous spaces for handling the MHDA paradigm. This article proposes a novel method named Multiple Graphs and Low-rank Embedding (MGLE), which models the local structure information of multiple domains using multiple graphs and learns the low-rank embedding of the target domain. Then, MGLE augments the learned embedding with the original target data. Specifically, we introduce the modules of both domain discrepancy and domain relevance into the multiple graphs and low-rank embedding learning procedure. Subsequently, we develop an iterative optimization algorithm to solve the resulting problem. We evaluate the effectiveness of the proposed method on several real-world datasets. Promising results show that the performance of MGLE is better than that of the baseline methods in terms of several metrics, such as AUC, MAE, accuracy, precision, F1 score, and MCC, demonstrating the effectiveness of the proposed method.
15

Hong, Chaoqun, Zhiqiang Zeng, Rongsheng Xie, Weiwei Zhuang, and Xiaodong Wang. "Domain adaptation with low-rank alignment for weakly supervised hand pose recovery." Signal Processing 142 (January 2018): 223–30. http://dx.doi.org/10.1016/j.sigpro.2017.07.032.

16

Yang, Liran, Min Men, Yiming Xue, and Ping Zhong. "Low-rank representation-based regularized subspace learning method for unsupervised domain adaptation." Multimedia Tools and Applications 79, no. 3-4 (December 5, 2019): 3031–47. http://dx.doi.org/10.1007/s11042-019-08474-4.

17

Tao, Jianwen, Haote Xu, and Jianjing Fu. "Low-Rank Constrained Latent Domain Adaptation Co-Regression for Robust Depression Recognition." IEEE Access 7 (2019): 145406–25. http://dx.doi.org/10.1109/access.2019.2944211.

18

Xiao, Ting, Cangning Fan, Peng Liu, and Hongwei Liu. "Simultaneously Improve Transferability and Discriminability for Adversarial Domain Adaptation." Entropy 24, no. 1 (December 27, 2021): 44. http://dx.doi.org/10.3390/e24010044.

Abstract:
Although adversarial domain adaptation enhances feature transferability, the feature discriminability will be degraded in the process of adversarial learning. Moreover, most domain adaptation methods only focus on distribution matching in the feature space; however, shifts in the joint distributions of input features and output labels linger in the network, and thus, the transferability is not fully exploited. In this paper, we propose a matrix rank embedding (MRE) method to enhance feature discriminability and transferability simultaneously. MRE restores a low-rank structure for data in the same class and enforces a maximum separation structure for data in different classes. In this manner, the variations within the subspace are reduced, and the separation between the subspaces is increased, resulting in improved discriminability. In addition to statistically aligning the class-conditional distribution in the feature space, MRE forces the data of the same class in different domains to exhibit an approximate low-rank structure, thereby aligning the class-conditional distribution in the label space, resulting in improved transferability. MRE is computationally efficient and can be used as a plug-and-play term for other adversarial domain adaptation networks. Comprehensive experiments demonstrate that MRE can advance state-of-the-art domain adaptation methods.
19

Wang, Mingliang, Daoqiang Zhang, Jiashuang Huang, Pew-Thian Yap, Dinggang Shen, and Mingxia Liu. "Identifying Autism Spectrum Disorder With Multi-Site fMRI via Low-Rank Domain Adaptation." IEEE Transactions on Medical Imaging 39, no. 3 (March 2020): 644–55. http://dx.doi.org/10.1109/tmi.2019.2933160.

20

Zhu, Chenyang, Lanlan Zhang, Weibin Luo, Guangqi Jiang, and Qian Wang. "Tensorial multiview low-rank high-order graph learning for context-enhanced domain adaptation." Neural Networks 181 (January 2025): 106859. http://dx.doi.org/10.1016/j.neunet.2024.106859.

21

Trust, Paul, and Rosane Minghim. "A Study on Text Classification in the Age of Large Language Models." Machine Learning and Knowledge Extraction 6, no. 4 (November 21, 2024): 2688–721. http://dx.doi.org/10.3390/make6040129.

Abstract:
Large language models (LLMs) have recently made significant advances, excelling in tasks like question answering, summarization, and machine translation. However, their enormous size and hardware requirements make them less accessible to many in the machine learning community. To address this, techniques such as quantization, prefix tuning, weak supervision, low-rank adaptation, and prompting have been developed to customize these models for specific applications. While these methods have mainly improved text generation, their implications for text classification are not thoroughly studied. Our research intends to bridge this gap by investigating how variations such as model size, pre-training objective, quantization, low-rank adaptation, prompting, and various hyperparameters influence text classification tasks. Our overall conclusions are as follows: (1) even with synthetic labels, fine-tuning works better than prompting techniques, and increasing model size does not always improve classification performance; (2) discriminatively trained models generally perform better than generatively pre-trained models; and (3) fine-tuning models at 16-bit precision works much better than using 8-bit or 4-bit models, but the performance drop from 8-bit to 4-bit is smaller than from 16-bit to 8-bit. In another part of our study, we conducted experiments with different settings for low-rank adaptation (LoRA) and quantization, finding that increasing LoRA dropout negatively affects classification performance. We did not find a clear link between the LoRA attention dimension (rank) and performance, observing only small differences between standard LoRA and its variants such as rank-stabilized LoRA and weight-decomposed LoRA. Additional observations to support model setup for classification tasks are presented in our analyses.
22

Le, Khoi M., Trinh Pham, Tho Quan, and Anh Tuan Luu. "LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18435–43. http://dx.doi.org/10.1609/aaai.v38i16.29804.

Abstract:
Paraphrases are texts that convey the same meaning while using different words or sentence structures. Paraphrasing can be used as an automatic data-augmentation tool for many Natural Language Processing tasks, especially when dealing with low-resource languages, where data shortage is a significant problem. To generate paraphrases in multilingual settings, previous studies have leveraged knowledge from the machine translation field, i.e., forming a paraphrase through zero-shot machine translation within the same language. Despite good performance in human evaluation, those methods still require parallel translation datasets, making them inapplicable to languages without parallel corpora. To mitigate this problem, we propose the first unsupervised multilingual paraphrasing model, LAMPAT (Low-rank Adaptation for Multilingual Paraphrasing using Adversarial Training), for which a monolingual dataset is sufficient to generate human-like and diverse sentences. Throughout the experiments, we found that our method not only works well for English but also generalizes to unseen languages. Data and code are available at https://github.com/phkhanhtrinh23/LAMPAT.
23

Zdunek, Rafał, and Tomasz Sadowski. "Image Completion with Hybrid Interpolation in Tensor Representation." Applied Sciences 10, no. 3 (January 22, 2020): 797. http://dx.doi.org/10.3390/app10030797.

Abstract:
The issue of image completion has developed considerably over the last two decades, and many computational strategies have been proposed to fill in missing regions of an incomplete image. When the incomplete image contains many small irregular missing areas, a good alternative seems to be matrix or tensor decomposition algorithms that yield low-rank approximations. However, this approach uses heuristic rank-adaptation techniques, especially for images with many details. To tackle the obstacles of low-rank completion methods, we propose to model the incomplete images with overlapping blocks of Tucker decomposition representations, where the factor matrices are determined by a hybrid of Gaussian radial basis function and polynomial interpolation. The experiments, carried out for various image completion and resolution up-scaling problems, demonstrate that our approach considerably outperforms the baseline and state-of-the-art low-rank completion methods.
24

Mavaddaty, Samira, Seyed Mohammad Ahadi, and Sanaz Seyedin. "A novel speech enhancement method by learnable sparse and low-rank decomposition and domain adaptation." Speech Communication 76 (February 2016): 42–60. http://dx.doi.org/10.1016/j.specom.2015.11.003.

25

Hong, Zhenchen, Jingwei Xiong, Han Yang, and Yu K. Mo. "Lightweight Low-Rank Adaptation Vision Transformer Framework for Cervical Cancer Detection and Cervix Type Classification." Bioengineering 11, no. 5 (May 8, 2024): 468. http://dx.doi.org/10.3390/bioengineering11050468.

Abstract:
Cervical cancer is a major health concern worldwide, highlighting the urgent need for better early detection methods to improve outcomes for patients. In this study, we present a novel digital pathology classification approach that combines Low-Rank Adaptation (LoRA) with the Vision Transformer (ViT) model. This method is aimed at making cervix type classification more efficient through a deep learning classifier that does not require as much data. The key innovation is the use of LoRA, which allows for the effective training of the model with smaller datasets, making the most of the ability of ViT to represent visual information. This approach performs better than traditional Convolutional Neural Network (CNN) models, including Residual Networks (ResNets), especially when it comes to performance and the ability to generalize in situations where data are limited. Through thorough experiments and analysis on various dataset sizes, we found that our more streamlined classifier is highly accurate in spotting various cervical anomalies across several cases. This work advances the development of sophisticated computer-aided diagnostic systems, facilitating more rapid and accurate detection of cervical cancer, thereby significantly enhancing patient care outcomes.
26

Hu, Yaopeng. "Optimizing e-commerce recommendation systems through conditional image generation: Merging LoRA and cGANs for improved performance." Applied and Computational Engineering 32, no. 1 (January 22, 2024): 177–84. http://dx.doi.org/10.54254/2755-2721/32/20230207.

Abstract:
This research concentrates on the integration of Low-Rank Adaptation for Text-to-Image Diffusion Fine-tuning and Conditional Image Generation in e-commerce recommendation systems. Low-Rank Adaptation for Text-to-Image Diffusion Fine-tuning, skilled in producing precise and diverse images from aesthetic descriptions provided by users, is extremely valuable for personalizing product suggestions. The enhancement of the interpretation of textual prompts and consequent image generation is accomplished through the fine-tuning of cross-attention layers in the Stable Diffusion model. In an effort to advance personalization further, Conditional Generative Adversarial Networks are employed to transform these textual descriptions into corresponding product images. In order to assure effective data communication, particularly in areas with low connectivity, the system makes use of Long Range technology, thereby improving system accessibility. Preliminary results demonstrate a considerable improvement in recommendation precision, user engagement, and conversion rates. These results underscore the potential impact of integrating such advanced artificial intelligence techniques in e-commerce, optimizing the shopping experience by generating personalized, accurate, and visually appealing product suggestions.
27

Tatianchenko, Natalia Petrovna. "Psychological conditions for the formation of adaptation potential of an individual in the learning process." Психология и Психотехника, no. 1 (January 2021): 62–77. http://dx.doi.org/10.7256/2454-0722.2021.1.32485.

Abstract:
This article examines the psychological conditions for the formation of an individual's adaptation potential in the learning process. It is established that personal adaptation potential consists of interconnected psychological characteristics that determine the success of adaptation to the external environment. Drawing on an analysis of the literature, the author built a research model that reveals the following factors affecting the formation of adaptation potential: psychic stability, adequate self-esteem, communication skills, behavioral regulation, coping behavior, and group interaction skills. To assess the students' adaptive capabilities and individual psychological characteristics, the author applied empirical methods (the multi-level personal questionnaire "Adaptability" by A. G. Maklakov and S. V. Chermyanin; the "Socio-psychological comfort of the environment" methodology by A. G. Maklakov; the coping test of R. Lazarus and S. Folkman, translated by T. L. Kryukova, E. V. Kuftyak, and M. S. Zamyshlyaeva; the Sixteen Personality Factor Questionnaire) and mathematical-statistical methods (Spearman's rank correlation coefficient; the Wilcoxon signed-rank test for two related samples). The scientific novelty of the presented materials lies in developing a scientific account of the psychological conditions for the formation of adaptation potential in the learning process, as well as in identifying the causes of maladaptation (emotional instability, low self-control, a propensity for authoritarian behavior, and low conformity and normativity of behavior). The experimental data on the students' level of adaptation potential make it possible to predict the success of their learning. The materials can be used by psychologists and pedagogues in organizing work aimed at preventing maladaptive conditions among adolescents.
28

Yan, Chaokun, Haicao Yan, Wenjuan Liang, Menghan Yin, Huimin Luo, and Junwei Luo. "DP-SSLoRA: A privacy-preserving medical classification model combining differential privacy with self-supervised low-rank adaptation." Computers in Biology and Medicine 179 (September 2024): 108792. http://dx.doi.org/10.1016/j.compbiomed.2024.108792.

29

Hong, Yang, Xiaowei Zhou, Ruzhuang Hua, Qingxuan Lv, and Junyu Dong. "WaterSAM: Adapting SAM for Underwater Object Segmentation." Journal of Marine Science and Engineering 12, no. 9 (September 11, 2024): 1616. http://dx.doi.org/10.3390/jmse12091616.

Abstract:
Object segmentation, a key type of image segmentation, focuses on detecting and delineating individual objects within an image, essential for applications like robotic vision and augmented reality. Despite advancements in deep learning improving object segmentation, underwater object segmentation remains challenging due to unique underwater complexities such as turbulence diffusion, light absorption, noise, low contrast, uneven illumination, and intricate backgrounds. The scarcity of underwater datasets further complicates these challenges. The Segment Anything Model (SAM) has shown potential in addressing these issues, but its adaptation for underwater environments, AquaSAM, requires fine-tuning all parameters, demanding more labeled data and high computational costs. In this paper, we propose WaterSAM, an adapted model for underwater object segmentation. Inspired by Low-Rank Adaptation (LoRA), WaterSAM incorporates trainable rank decomposition matrices into the Transformer’s layers, specifically enhancing the image encoder. This approach significantly reduces the number of trainable parameters to 6.7% of SAM’s parameters, lowering computational costs. We validated WaterSAM on three underwater image datasets: COD10K, SUIM, and UIIS. Results demonstrate that WaterSAM significantly outperforms pre-trained SAM in underwater segmentation tasks, contributing to advancements in marine biology, underwater archaeology, and environmental monitoring.
30

Ica Wahyuni, Nonok Karlina, and Citra Setyo Dwi Andhini. "Correlation Of Self Efficacy With Stress Adaptation On Chronic Kidney Failure Patients Hemodialysis In Waled General Hospital, Cirebon District." Jurnal Kesehatan Mahardika 6, no. 2 (September 1, 2019): 12–16. http://dx.doi.org/10.54867/jkm.v6i2.41.

Abstract:
The number of patients with chronic kidney failure increases by 3.8 percent each year. To manage kidney failure, hemodialysis therapy is needed, but patients undergoing hemodialysis experience problems such as discomfort and increased stress, which affect quality of life. To reduce these problems, interventions that build patients' self-efficacy are needed, since self-efficacy also affects the stress adaptation of patients undergoing hemodialysis. The purpose of this research was to determine the correlation of self-efficacy with stress adaptation in chronic kidney failure patients undergoing hemodialysis at Waled Hospital, Cirebon. The design was descriptive-correlational with a cross-sectional approach. The sample, selected by purposive sampling, comprised 99 respondents, and the instrument was a questionnaire. Data were analyzed using the Spearman rank test. The results showed that self-efficacy in the hemodialysis room of Waled Hospital, Cirebon District, was mostly in the low category (90; 90.9%), and stress adaptation was mostly in the very severe category (54; 54.5%). The Spearman rank test (p = 0.000; α = 0.05; r = 0.546) showed that H0 was rejected. The study concludes that there is a correlation between self-efficacy and stress adaptation in chronic kidney failure patients undergoing hemodialysis at Waled Hospital, Cirebon.
31

Tian, Qing, and Canyu Sun. "Structure preserved ordinal unsupervised domain adaptation." Electronic Research Archive 32, no. 11 (2024): 6338–63. http://dx.doi.org/10.3934/era.2024295.

Abstract:
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. The main challenge of UDA stems from the domain shift between the source and target domains. Currently, in discrete classification problems, most existing UDA methods adopt a distribution alignment strategy while forcing unstable instances to pass through low-density areas. However, the scenario of ordinal regression (OR) is rarely researched in UDA, and traditional UDA methods cannot handle OR well, since they do not preserve the order relationships in data labels, as in human age estimation. To address this issue, we propose a structure-oriented adaptation strategy, namely structure preserved ordinal unsupervised domain adaptation (SPODA). More specifically, on one hand, global structure information is modeled and embedded into an auto-encoder framework via a low-rank transferred structure matrix. On the other hand, local structure information is preserved through a weighted pair-wise strategy in the latent space. Guided by both the local and global structure information, a well-performing latent space is generated, whose geometric structure is adopted to obtain a more discriminant ordinal regressor. To further enhance its generalization, a counterpart of SPODA with a deep architecture was developed. Finally, extensive experiments indicated that, in addressing the OR problem, SPODA is more effective and advanced than existing related domain adaptation methods.
APA, Harvard, Vancouver, ISO, and other styles
32

Yashchenko, Elena Fedorovna, Ekaterina Galiulovna Shchelokova, and Olga Vasilievna Lazorak. "PERSONAL FEATURES OF FOREIGN STUDENTS WITH A HIGH AND LOW LEVEL OF SELF-ACTUALIZATION DURING SOCIO-PSYCHOLOGICAL ADAPTATION." Психология. Психофизиология 13, no. 2 (July 20, 2020): 62–75. http://dx.doi.org/10.14529/jpps200206.

Full text
Abstract:
Internationalization is one of the current directions of education development. Specially organized university work and identification of features of self-actualization in foreign students will contribute to the development of more effective programs of psychological support during socio-psychological adaptation. Aim. The paper aims to identify the personal features of foreign students with a high and low level of self-actualization during socio-psychological adaptation. Materials and methods. 52 foreign students aged from 18 to 32 years studying in London were examined. The study was based on the following methods: Yankovsky questionnaire of adaptation to a new socio-cultural environment, the Rogers and Diamond technique of diagnosis of socio-psychological adaptation, the Jones and Crandall Short Index of self-actualization, etc. The results for interpretation were obtained by statistical analysis (Mann – Whitney criterion) and correlation analysis (Spearman rank correlation coefficient) using SPSS 22.0 statistical software package. Results. Foreign students with a high level of self-actualization experience subjective well-being, and social surrounding is significant to them; however, subjective well-being is reduced when it is impossible to preserve individuality and choose a conformal type of adaptation. Foreign students with a low level of self-actualization have a subjective disadvantage that lessens when maintaining their individuality, pragmatic orientation, the acceptance of others, and self-actualization. Nevertheless, foreign students with a low level of self-actualization demonstrated a correlation between self-actualization and the interactive type of adaptation. Conclusion. The prospects for self-actualization of foreign students with a high and low level of self-actualization and the specifics of their socio-psychological adaptation have been studied.
APA, Harvard, Vancouver, ISO, and other styles
33

Hou, Zejiang, Julian Salazar, and George Polovets. "Meta-Learning the Difference: Preparing Large Language Models for Efficient Adaptation." Transactions of the Association for Computational Linguistics 10 (2022): 1249–65. http://dx.doi.org/10.1162/tacl_a_00517.

Full text
Abstract:
Large pretrained language models (PLMs) are often domain- or task-adapted via finetuning or prompting. Finetuning requires modifying all of the parameters and having enough data to avoid overfitting, while prompting requires no training and few examples but limits performance. Instead, we prepare PLMs for data- and parameter-efficient adaptation by learning to learn the difference between general and adapted PLMs. This difference is expressed in terms of model weights and sublayer structure through our proposed dynamic low-rank reparameterization and learned architecture controller. Experiments on few-shot dialogue completion, low-resource abstractive summarization, and multi-domain language modeling show improvements in adaptation time and performance over direct finetuning or preparation via domain-adaptive pretraining. Ablations show our task-adaptive reparameterization (TARP) and model search (TAMS) components individually improve on other parameter-efficient transfer like adapters and structure-learning methods like learned sparsification.
APA, Harvard, Vancouver, ISO, and other styles
34

Qian Shi, Bo Du, and Liangpei Zhang. "Domain Adaptation for Remote Sensing Image Classification: A Low-Rank Reconstruction and Instance Weighting Label Propagation Inspired Algorithm." IEEE Transactions on Geoscience and Remote Sensing 53, no. 10 (October 2015): 5677–89. http://dx.doi.org/10.1109/tgrs.2015.2427791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Utomo, Hanung Addi Chandra, Yuris Mulya Saputra, and Agi Prasetiadi. "Implementasi Sistem Konfigurasi Router Berbasis Natural Language Processing dengan Pendekatan Low Rank Adaptation Finetuning dan 8-Bit Quantization." Journal of Internet and Software Engineering 4, no. 2 (December 1, 2023): 1–7. http://dx.doi.org/10.22146/jise.v4i2.9093.

Full text
Abstract:
Router configuration is an essential part of computer networking. The process requires an understanding of specialized languages and syntax, which can be very time-consuming for someone unfamiliar with them. Applying natural language processing can help address this problem. To achieve this, fine-tuning must be performed on an existing model such as GPT-J-6B, which was trained with 6 billion parameters. Using a dataset consisting of router configurations, the fine-tuning process is expected to improve the model's ability to detect the intent of natural-language input and then produce the commands that match the given instruction. In addition, techniques such as Low Rank Adaptation (LoRA) can be used to make the fine-tuning process more efficient without reducing model performance, and 8-bit quantization can be used to reduce resource usage when running the model. With these techniques, fine-tuning can be performed stably in Google Colaboratory. Therefore, implementing NLP for router configuration with the techniques above can improve the effectiveness of network management while using time and resources efficiently. Through this research, an NLP-based router configuration model with an accuracy of 98% was obtained.
APA, Harvard, Vancouver, ISO, and other styles
36

Kashina, Yuliya V., Irina L. Cherednik, and Svetlana V. Polishchuk. "Students’ index of adaptation to the educational process depending on the personality type." Journal of Medical and Biological Research, no. 3 (October 10, 2022): 213–20. http://dx.doi.org/10.37482/2687-1491-z108.

Full text
Abstract:
The purpose of this study was to establish a correlation between the index of adaptation to the educational process and the personality type of medical students. Materials and methods. The research involved 184 second- and fifth-year medical students. We determined their personality types (according to Eysenck Personality Inventory (EPI), variant A) and index of regulatory and adaptive status (IRAS) using the cardiorespiratory synchronism test (V.M. Pokrovsky) on the VNS-Mikro device (Neurosoft, Russia) at the beginning and at the end of the academic year. The adaptation level was determined by calculating the integrative quantitative indicator, i.e. the adaptation index (ratio of IRAS at the end of the academic year to IRAS at the beginning of the academic year, multiplied by 100). Results. Students with different, genetically predetermined personality types demonstrated different adaptation index values (p < 0.001): phlegmatic students (n = 26) 81.9 ± 1.0 (high adaptation level); choleric (n = 22) 72.1 ± 1.0 (high adaptation level); sanguine (n = 22) 34.1 ± 1.2 (moderate adaptation level); melancholic (n = 20) 22.6 ± 0.8 (low adaptation level); phlegmatic/sanguine (n = 20) 79.4 ± 0.8 (high adaptation level); sanguine/choleric (n = 26) 43.2 ± 0.9 (moderate adaptation level); phlegmatic/melancholic (n = 30) 36.6 ± 1.1 (moderate adaptation level); melancholic/choleric (n = 18) 25.2 ± 0.6 (low adaptation level). Correlation analysis with Spearman's rank correlation coefficient (interpreted using the Chaddock scale) revealed a statistically significant relationship between IRAS values at the beginning and at the end of the academic year (r = 0.53). The data obtained showed that all students had a decrease in IRAS at the end of the academic year, with personality type affecting the indicator's annual dynamics. Melancholic and melancholic/choleric medical students had the lowest adaptation level.
The identified risk groups require special attention and an individual approach when planning the educational process.
APA, Harvard, Vancouver, ISO, and other styles
37

Shumakov, Vadim Anatolevich, Darya Aleksandrovna Dubrovina, and Anna Vladimirovna Platonova. "SOCIAL AND PSYCHOLOGICAL ADAPTATION OF YOUNGER SCHOOLCHILDREN TO THE LEARNING ENVIRONMENT AS A FACTOR OF THEIR EMOTIONAL WELL-BEING." Психология. Психофизиология 12, no. 4 (January 15, 2020): 63–70. http://dx.doi.org/10.14529/jpps190407.

Full text
Abstract:
The article considers the phenomenon of socio-psychological adaptation of younger schoolchildren to learning at school. In this period, the usual daily routine changes, and children must obey the rules of school life and fulfill the requirements of the teacher. Aim. The purpose of the article is to identify the role of socio-psychological adaptation of first-graders to schooling in the formation of their emotional well-being. Materials and methods. 107 first-graders (42 boys and 65 girls) aged from 7 to 8 years (average age 7.5 ± 0.5 years) participated in the study. The following psychodiagnostic techniques were used: the "Ladder" technique (V.G. Schur), which assesses the level of emotional well-being; the "School drawing" methodology, which determines the attitude of a first-grader to school and the level of school anxiety; and the diagnosis of school anxiety (A.M. Prikhozhan) in relations between children and in communication with adults and the teacher. Mathematical and statistical processing was carried out using Spearman's rank correlation coefficients, cluster analysis, and qualitative analysis of the research results. The calculations were performed using SPSS Statistics v. 17.0. Results. Three levels of socio-psychological adaptation were revealed: a high level of adaptation (n = 52) – students with an emotionally favorable attitude to school; an average level of adaptation (n = 35) – students with an emotionally neutral attitude to school; and a low level of adaptation (n = 20) – students with an emotionally negative attitude towards school. Conclusion. Younger schoolchildren with different indicators of socio-psychological adaptation differ in terms of emotional well-being. It is shown that with a high level of adaptation, first-graders display an emotionally favorable attitude towards school; with an average level of adaptation, an emotionally neutral attitude; and with a low level of adaptation, an emotionally negative attitude towards school.
APA, Harvard, Vancouver, ISO, and other styles
38

Martini, Luca, Saverio Iacono, Daniele Zolezzi, and Gianni Viardo Vercelli. "Advancing Persistent Character Generation: Comparative Analysis of Fine-Tuning Techniques for Diffusion Models." AI 5, no. 4 (September 29, 2024): 1779–92. http://dx.doi.org/10.3390/ai5040088.

Full text
Abstract:
In the evolving field of artificial intelligence, fine-tuning diffusion models is crucial for generating contextually coherent digital characters across various media. This paper examines four advanced fine-tuning techniques: Low-Rank Adaptation (LoRA), DreamBooth, Hypernetworks, and Textual Inversion. Each technique enhances the specificity and consistency of character generation, expanding the applications of diffusion models in digital content creation. LoRA efficiently adapts models to new tasks with minimal adjustments, making it ideal for environments with limited computational resources. It excels in low VRAM contexts due to its targeted fine-tuning of low-rank matrices within cross-attention layers, enabling faster training and efficient parameter tweaking. DreamBooth generates highly detailed, subject-specific images but is computationally intensive and suited for robust hardware environments. Hypernetworks introduce auxiliary networks that dynamically adjust the model’s behavior, allowing for flexibility during inference and on-the-fly model switching. This adaptability, however, can result in slightly lower image quality. Textual Inversion embeds new concepts directly into the model’s embedding space, allowing for rapid adaptation to novel styles or concepts, but is less effective for precise character generation. This analysis shows that LoRA is the most efficient for producing high-quality outputs with minimal computational overhead. In contrast, DreamBooth excels in high-fidelity images at the cost of longer training. Hypernetworks provide adaptability with some tradeoffs in quality, while Textual Inversion serves as a lightweight option for style integration. These techniques collectively enhance the creative capabilities of diffusion models, delivering high-quality, contextually relevant outputs.
APA, Harvard, Vancouver, ISO, and other styles
39

Mahendra, Anton, and Styawati Styawati. "Implementasi Low-Rank Adaptation of Large Language Model (LoRA) Untuk Effisiensi Large Language Model." JIPI (Jurnal Ilmiah Penelitian dan Pembelajaran Informatika) 9, no. 4 (November 19, 2024): 1881–90. https://doi.org/10.29100/jipi.v9i4.5519.

Full text
Abstract:
Transformer models such as LLaMA 2 are very powerful for a wide range of natural language tasks, but their substantial compute and memory requirements make them difficult to deploy. The biggest challenges lie in the large storage consumption and the need for massive computational power. To address these problems, a solution was developed in the form of implementing LoRA (Low Rank Adaptation). LoRA, specifically for LLaMA 2, uses an adaptive approach to compress the Transformer model with low-rank adapters. Applying LoRA to this model reduces the number of floating-point operations, thereby speeding up training and inference and significantly reducing power consumption and memory usage. The main goal of applying LoRA to LLaMA 2 is to optimize model efficiency, focusing on reducing floating-point operations and improving GPU memory usage.
APA, Harvard, Vancouver, ISO, and other styles
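The LoRA idea summarized in the entry above, freezing the pretrained weight and training only a low-rank update, can be sketched in a few lines of NumPy. The layer dimensions, rank, and scaling below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class LoRALinear:
    """A frozen dense layer W plus a trainable low-rank update B @ A (LoRA sketch)."""

    def __init__(self, d_out, d_in, rank=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
        self.A = 0.01 * rng.normal(size=(rank, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, rank))               # trainable up-projection, zero-init
        self.scale = alpha / rank                      # LoRA scaling factor

    def __call__(self, x):
        # y = W x + (alpha/r) * B A x; with B = 0 at init, output equals the frozen layer
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(d_out=4096, d_in=4096, rank=8)
x = np.ones(4096)
assert np.allclose(layer(x), layer.W @ x)  # zero-initialized update leaves outputs unchanged
print(layer.trainable_params(), layer.W.size)  # 65536 trainable vs. 16777216 frozen
```

Only A and B would receive gradients during fine-tuning; storing the frozen W in a compressed 8-bit format is the kind of memory saving the quantization step in the abstract targets.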
40

Arian, Md Sahadul Hasan, Faisal Ahmed Sifat, Saif Ahmed, Nabeel Mohammed, Taseef Hasan Farook, and James Dudley. "Dental Loop Chatbot: A Prototype Large Language Model Framework for Dentistry." Software 3, no. 4 (December 17, 2024): 587–94. https://doi.org/10.3390/software3040029.

Full text
Abstract:
The Dental Loop Chatbot was developed as a real-time, evidence-based guidance system for dental practitioners using a fine-tuned large language model (LLM) and Retrieval-Augmented Generation (RAG). This paper outlines the development and preliminary evaluation of the chatbot as a scalable clinical decision-support tool designed for resource-limited settings. The system’s architecture incorporates Quantized Low-Rank Adaptation (QLoRA) for efficient fine-tuning, while dynamic retrieval mechanisms ensure contextually accurate and relevant responses. This prototype lays the groundwork for future triaging and diagnostic support systems tailored specifically to the field of dentistry.
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Haokun. "Large language models capsule: A research analysis of In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) methods." Applied and Computational Engineering 43, no. 1 (February 26, 2024): 327–31. http://dx.doi.org/10.54254/2755-2721/43/20230858.

Full text
Abstract:
In the context of natural language processing (NLP), this paper addresses the growing need for efficient adaptation techniques for pre-trained language models. It begins by summarizing the current landscape of NLP, highlighting the challenges associated with fine-tuning large language models like BERT and Transformer. The paper then introduces and analyzes three categories of parameter-efficient fine-tuning (PEFT) approaches, namely, In-Context Learning (ICL)-inspired Fine-Tuning, Low-Rank Adaptation PEFTs (LoRA), and Activation-based PEFTs. Within these categories, it explores techniques such as prefix-tuning, prompt tuning, (IA)3, and LoRA, shedding light on their advantages and applications. Through a comprehensive examination, this paper concludes by emphasizing the interplay between performance, parameter efficiency, and adaptability in the context of NLP models. It also provides insights into the future prospects of these techniques in advancing the field of NLP. To summarize, this paper offers a detailed analysis of PEFT methods and their potential to democratize access to cutting-edge NLP capabilities, paving the way for more efficient model adaptation in various applications.
APA, Harvard, Vancouver, ISO, and other styles
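The parameter-efficiency trade-off surveyed above can be made concrete with a back-of-the-envelope count of trainable weights when LoRA wraps the attention projections; the model dimensions below are hypothetical, chosen only for illustration.

```python
def lora_param_counts(d_model: int, n_layers: int, rank: int, adapted_mats: int = 2):
    """Compare fully fine-tuned vs. LoRA-trainable parameters when LoRA of rank `rank`
    wraps `adapted_mats` d_model x d_model projection matrices per transformer layer."""
    full = n_layers * adapted_mats * d_model * d_model   # every weight updated
    lora = n_layers * adapted_mats * 2 * rank * d_model  # A (rank x d) plus B (d x rank)
    return full, lora

# Hypothetical 24-layer model with d_model = 1024, LoRA rank 8 on two projections per layer
full, lora = lora_param_counts(d_model=1024, n_layers=24, rank=8)
print(f"full: {full:,}  lora: {lora:,}  fraction: {lora / full:.4%}")
```

For these assumed sizes the LoRA parameters come to roughly 1.6% of the wrapped weights, which is the kind of reduction that makes the PEFT methods above attractive.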
42

Adams, Henry, Lara Kassab, and Deanna Needell. "An adaptation for iterative structured matrix completion." Foundations of Data Science 3, no. 4 (2021): 769. http://dx.doi.org/10.3934/fods.2021028.

Full text
Abstract:
The task of predicting missing entries of a matrix, from a subset of known entries, is known as matrix completion. In today's data-driven world, data completion is essential whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account sparsity-based structure in the missing entries. This notion of structure is motivated in many settings including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present a robust array of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable with the adjusted NNM on small-sized matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size 1000 × 1000.
APA, Harvard, Vancouver, ISO, and other styles
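As a toy illustration of low-rank completion in the spirit of the entry above (not the adjusted IRLS algorithm of the paper), one can alternate between a hard rank projection via the SVD and re-imposing the observed entries. The matrix sizes, rank, and sampling rate below are invented for the example.

```python
import numpy as np

def complete_low_rank(M, mask, rank, n_iter=200):
    """Naive alternating-projection completion: truncate to rank `rank` with an SVD,
    then restore the observed entries, and repeat. Illustrative only."""
    X = np.where(mask, M, 0.0)  # start with zeros in the missing positions
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-`rank` approximation
        X[mask] = M[mask]                         # keep known entries fixed
    return X

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # exact rank-2 matrix
mask = rng.random(truth.shape) > 0.4                         # observe roughly 60% of entries
recovered = complete_low_rank(truth, mask, rank=2)
rel_err = np.linalg.norm(recovered - truth) / np.linalg.norm(truth)
print(rel_err)  # small when the matrix is well sampled
```

With entries missing uniformly at random this simple scheme recovers the matrix well; the structured (value-dependent) missingness studied in the paper is exactly the regime where such unweighted iterations degrade and a reweighted variant is needed.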
43

Eker, Oktay, Murat Avcı, Selen Çiğdem, Oğuzhan Özdemir, Fatih Nar, and Dmitry Kudinov. "Integrating SAM and LoRA for DSM-Based Planar Region Extraction in Building Footprints." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W10-2024 (May 31, 2024): 57–64. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w10-2024-57-2024.

Full text
Abstract:
In this paper, we present a novel approach for segmenting planar regions in Digital Surface Models (DSMs) by adapting the Segment Anything Model (SAM), an open-source framework. Our approach specifically tailors SAM to recognize planar regions within given building footprints, employing the Low-Rank Adaptation (LoRA) technique. This adaptation benefits from a detailed and realistic synthetic dataset, coupled with a novel labeling strategy for planar regions in our ground truth, enhancing the model’s effectiveness and reproducibility. Unlike traditional plane detection techniques, our method consistently and accurately identifies equivalent planar regions across identical DSM inputs. Following the segmentation phase, we introduced a novel plane fitting algorithm to determine the parameters for each planar region. This enables us to refine the edges of these areas and utilize the resulting plane equations to construct precise, watertight 3D models of buildings. Despite its training on synthetic data, our model exhibits remarkable performance on both synthetic and real-world datasets, exemplified by its application to the Zurich dataset.
APA, Harvard, Vancouver, ISO, and other styles
44

Shvyrov, V. V., D. A. Kapustin, R. N. Sentyay, and T. I. Shulika. "Using Large Language Models to Classify Some Vulnerabilities in Program Code." Programmnaya Ingeneria 15, no. 9 (September 9, 2024): 465–75. http://dx.doi.org/10.17587/prin.15.465-475.

Full text
Abstract:
The paper studies the effectiveness of using large language models to detect common types of vulnerabilities in Python program code. In particular, using the low-rank adaptation (LoRA) technique, fine-tuning of the CodeBERT-python model is performed. To train the models, we use the authors' dataset, which consists of marked-up Python program code. The trained models are used to detect and classify potential vulnerabilities. To evaluate the effectiveness of the models, the numbers of false positives, false negatives, true positives, and true negatives are determined. Accuracy, recall, and F1-measure are also calculated on a test data set for various configurations of model-training macro parameters.
APA, Harvard, Vancouver, ISO, and other styles
45

Kim, Sanghyeon, Hyunmo Yang, Younghyun Kim, Youngjoon Hong, and Eunbyung Park. "Corrigendum to “Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning” [Neural Networks Volume 178, October (2024), 1-11/106414]]." Neural Networks 181 (January 2025): 106878. http://dx.doi.org/10.1016/j.neunet.2024.106878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Yuxi, Yang Song, Yi Liu, Hui Zhang, and Feng Liu. "High-Performance Binocular Disparity Prediction Algorithm for Edge Computing." Sensors 24, no. 14 (July 14, 2024): 4563. http://dx.doi.org/10.3390/s24144563.

Full text
Abstract:
End-to-end disparity estimation algorithms based on cost volume deployed in edge-end neural network accelerators have the problem of structural adaptation and need to ensure accuracy under the condition of adaptation operator. Therefore, this paper proposes a novel disparity calculation algorithm that uses low-rank approximation to approximately replace 3D convolution and transposed 3D convolution, WReLU to reduce data compression caused by the activation function, and unimodal cost volume filtering and a confidence estimation network to regularize cost volume. It alleviates the problem of disparity-matching cost distribution being far away from the true distribution and greatly reduces the computational complexity and number of parameters of the algorithm while improving accuracy. Experimental results show that compared with a typical disparity estimation network, the absolute error of the proposed algorithm is reduced by 38.3%, the three-pixel error is reduced to 1.41%, and the number of parameters is reduced by 67.3%. The calculation accuracy is better than that of other algorithms, it is easier to deploy, and it has strong structural adaptability and better practicability.
APA, Harvard, Vancouver, ISO, and other styles
47

Mitrofanov, Igor. "SOCIO-PSYCHOLOGICAL ADAPTATION IN ADOLESCENTS WITH INTERNET-DEPENDENT BEHAVIOR." Child in a Digital World 1, no. 1 (2023): 64. http://dx.doi.org/10.61365/forum.2023.049.

Full text
Abstract:
Relevance. The problem of the modern teenager lies in building relationships with society and forming his adaptation in it. Digital technologies are changing the structure of a teenager's activity and his personal development. The purpose of this study was to determine whether stable Internet-dependent behavior in adolescents is a factor influencing socio-psychological adaptation. Research methods and sampling. S. Chen's Internet Addiction Scale (CTAS) and the Questionnaire of socio-psychological adaptation (SPA) by K. Rogers in the adaptation by A.K. Ositsky were used. The study was carried out on the basis of MOBU Secondary schools No.  and No.  in Sochi. The mathematical analysis was carried out on a sample of  people, aged - years. Main results show that of the  participants, .% have a tendency to develop Internet-dependent behavior, % belong to the group of formed Internet-dependent behavior, and .% have a low level of dependence. According to the results of Spearman's rank correlation, statistically significant relationships were determined. (1) Internet-dependent behavior has an impact on interaction with other people; joint activity also indicates an increase in the rejection of the other with the recognition of uniqueness and shortcomings. (2) Intrapersonal problems are interrelated with the rejection of others, which affects self-acceptance, reflections on the mental health of the individual, and adaptation to the requirements of society. (3) The scale of intrapersonal and health-related problems and the influence of the Internet space on the establishment of the daily routine are interrelated with the individual's predisposition to an external locus of control, when the tendency to attribute the causes of what is happening to external factors dominates. They are also interrelated with the level of adaptation of a teenager to existence in society. The level of adaptation is reduced. Conclusion.
The results obtained indicate that stable Internet-dependent behavior is a factor influencing socio-psychological adaptation. In this regard, a teenager prefers to spend more time in the space where fewer difficulties are expected.
APA, Harvard, Vancouver, ISO, and other styles
48

Bazi, Yakoub, Laila Bashmal, Mohamad Mahmoud Al Rahhal, Riccardo Ricci, and Farid Melgani. "RS-LLaVA: A Large Vision-Language Model for Joint Captioning and Question Answering in Remote Sensing Imagery." Remote Sensing 16, no. 9 (April 23, 2024): 1477. http://dx.doi.org/10.3390/rs16091477.

Full text
Abstract:
In this paper, we delve into the innovative application of large language models (LLMs) and their extension, large vision-language models (LVLMs), in the field of remote sensing (RS) image analysis. We particularly emphasize their multi-tasking potential with a focus on image captioning and visual question answering (VQA). In particular, we introduce an improved version of the Large Language and Vision Assistant Model (LLaVA), specifically adapted for RS imagery through a low-rank adaptation approach. To evaluate the model performance, we create the RS-instructions dataset, a comprehensive benchmark dataset that integrates four diverse single-task datasets related to captioning and VQA. The experimental results confirm the model’s effectiveness, marking a step forward toward the development of efficient multi-task models for RS image analysis.
APA, Harvard, Vancouver, ISO, and other styles
49

Hu, Haotian, Alex Jie Yang, Sanhong Deng, Dongbo Wang, Min Song, and Si Shen. "A Generative Drug–Drug Interaction Triplets Extraction Framework Based on Large Language Models." Proceedings of the Association for Information Science and Technology 60, no. 1 (October 2023): 980–82. http://dx.doi.org/10.1002/pra2.918.

Full text
Abstract:
Drug–Drug Interaction (DDI) may affect the activity and efficacy of drugs, potentially leading to diminished therapeutic effect or even serious side effects. Therefore, automatic recognition of drug entities and relations involved in DDI is of great significance for pharmaceutical and medical care. In this paper, we propose a generative DDI triplets extraction framework based on Large Language Models (LLMs). We comprehensively apply various training methods, such as In‐context learning, Instruction‐tuning, and Task‐tuning, to investigate the biomedical information extraction capabilities of GPT‐3, OPT, and LLaMA. We also introduce Low‐Rank Adaptation (LoRA) technology to significantly reduce trainable parameters. The proposed method achieves satisfactory results in DDI triplet extraction, and demonstrates strong generalization ability on similar corpus.
APA, Harvard, Vancouver, ISO, and other styles
50

Makaricheva, Elvira V., and Maria S. Burguvan. "Specificity and dynamics of psychological adaptation during the COVID-19 pandemic." Neurology Bulletin LIV, no. 2 (July 19, 2022): 23–32. http://dx.doi.org/10.17816/nb106247.

Full text
Abstract:
BACKGROUND. The relevance is due to the negative consequences caused by the COVID-19 pandemic for individuals and for society as a whole, covering almost all aspects of life at the macro and individual levels, and the lack of detailed studies of the psychological state of the population. AIM. Study of the specifics and dynamics of psychological adaptation in subjects during the COVID-19 pandemic. MATERIAL AND METHODS. The following were used: the method of studying personality accentuations of K. Leonhard (modified by S. Shmishek); diagnostics of the state of aggression (Buss-Durkee questionnaire); the multilevel personality questionnaire "Adaptiveness" by A.G. Maklakov and S.V. Chermyanin; the test-questionnaire "Health, activity, mood"; and the clinical questionnaire for the detection and evaluation of neurotic conditions (Yakhin K.K., Mendelevich D.M.). Statistical analysis of the data was performed using Spearman's rank correlation coefficient and Student's t-test for independent and for dependent samples. The study involved 51 people (16% men and 84% women), selected by a random continuous method, with an average age of 21.3 ± 1.87 years. The study was carried out in 2 stages. The first stage: the end of April 2020, 21 days after the start of voluntary self-isolation; the second stage: the end of September – beginning of November 2020. RESULTS. The subjects were found to have such character accentuations as exaltation (94%), hyperthymism (88%), and emotivity (86%), as well as a low level of personal adaptive potential (2.1 ± 1.43); neurotic depression prevailed (43%), followed by obsessive-phobic disorders (33%) and conversion disorders (27%). Aggression was expressed mainly through verbal aggression (6.35 ± 2.43), guilt (5.59 ± 1.72), and irritation (5.37 ± 1.92). CONCLUSION.
The subjects have a low level of personal adaptive potential, which increased with the end of self-isolation, accompanied by a gradual acceptance of what is happening, stabilization of the growth in the number of sick and dead, news about the development of measures to combat the spread of the virus, methods of treatment and prevention.
APA, Harvard, Vancouver, ISO, and other styles
