Journal articles on the topic "Online continual learning"

The 50 best journal articles on the topic "Online continual learning" are listed below.


1

Li, Guozheng, Peng Wang, Qiqing Luo, Yanhe Liu, and Wenjun Ke. "Online Noisy Continual Relation Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13059–66. http://dx.doi.org/10.1609/aaai.v37i11.26534.

Abstract:
Recent work on continual relation learning has achieved remarkable progress. However, most existing methods focus only on tackling catastrophic forgetting to improve performance in the existing setup, while continually learning relations in the real world must overcome many other challenges. One is that the data possibly comes in an online streaming fashion, with data distributions gradually changing and without distinct task boundaries. Another is that noisy labels are inevitable in real-world data, as relation samples may be contaminated by label inconsistencies or labeled with distant supervision. In this work, therefore, we propose a novel continual relation learning framework that simultaneously addresses both online and noisy relation learning challenges. Our framework contains three key modules: (i) a sample separated online purifying module that divides the online data stream into clean and noisy samples, (ii) a self-supervised online learning module that circumvents inferior training signals caused by noisy data, and (iii) a semi-supervised offline finetuning module that ensures the participation of both clean and noisy samples. Experimental results on FewRel, TACRED and NYT-H with real-world noise demonstrate that our framework greatly outperforms combinations of state-of-the-art online continual learning and noisy label learning methods.
2

Alfarra, Motasem, Zhipeng Cai, Adel Bibi, Bernard Ghanem, and Matthias Müller. "SimCS: Simulation for Domain Incremental Online Continual Segmentation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 10795–803. http://dx.doi.org/10.1609/aaai.v38i10.28952.

Abstract:
Continual Learning is a step towards lifelong intelligence where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup with clear task boundaries and unlimited computational budget. This work explores the problem of Online Domain-Incremental Continual Segmentation (ODICS), where the model is continually trained over batches of densely labeled images from different domains, with limited computation and no information about the task boundaries. ODICS arises in many practical applications. In autonomous driving, this may correspond to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they perform poorly in this setting despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that uses simulated data to regularize continual learning. Experiments show that SimCS provides consistent improvements when combined with different CL methods.
3

Shim, Dongsub, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, and Jongseong Jang. "Online Class-Incremental Continual Learning with Adversarial Shapley Value". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (May 18, 2021): 9630–38. http://dx.doi.org/10.1609/aaai.v35i11.17159.

Abstract:
As image-based deep learning becomes pervasive on every device, from cell phones to smart watches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for this task of continual learning, the best method for selecting which buffered images to replay is still an open question. In this paper, we specifically focus on the online class-incremental setting where a model needs to learn new classes continually from an online data stream. To this end, we contribute a novel Adversarial Shapley value scoring method that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
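The online class-incremental setting described here rests on a small replay memory updated from the stream; the standard way to fill that memory, before a scoring scheme such as ASER re-ranks candidates, is reservoir sampling. A minimal sketch of the generic buffer mechanism (not ASER's Shapley-value scoring itself, which additionally requires model latents):

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory for online continual learning.

    Reservoir sampling keeps every stream example with equal probability,
    so the buffer approximates a uniform sample of all data seen so far.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def update(self, example):
        """Offer one stream example to the buffer."""
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a replay mini-batch (without replacement)."""
        return self.rng.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=100)
for x in range(10_000):
    buf.update(x)
print(len(buf.data))  # 100
```

A method such as ASER would then score the contents of `buf.data` against the current model and choose which samples to rehearse, instead of sampling uniformly.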
4

Cheng, Kan, Yongxin Ma, Guanglu Wang, Linlin Zong, and Xinyue Liu. "NLOCL: Noise-Labeled Online Continual Learning". Electronics 13, no. 13 (June 29, 2024): 2560. http://dx.doi.org/10.3390/electronics13132560.

Abstract:
Continual learning (CL) from infinite data streams has become a challenge for neural network models in real-world scenarios. Catastrophic forgetting of previous knowledge occurs in this learning setting, and existing supervised CL methods rely excessively on accurately labeled samples. However, real-world data labels are usually corrupted by noise, which misleads CL agents and aggravates forgetting. To address this problem, we propose a method named noise-labeled online continual learning (NLOCL), which implements the online CL model with noise-labeled data streams. NLOCL uses an empirical replay strategy to retain crucial examples, separates data streams by small-loss criteria, and includes semi-supervised fine-tuning for labeled and unlabeled samples. In addition, NLOCL combines small loss with class diversity measures and eliminates online memory partitioning. Furthermore, we optimize the experience replay stage to enhance model performance by retaining significant clean-labeled examples and carefully selecting suitable samples. In the experiments, we designed noise-labeled data streams by injecting noisy labels into multiple datasets and partitioning tasks to simulate infinite data streams realistically. The experimental results demonstrate the superior performance and robust learning capabilities of our proposed method.
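The small-loss criterion mentioned above is simple to state in code: rank a batch by per-sample loss and treat the lowest-loss fraction as (presumed) clean, routing the rest to the semi-supervised path. The selection ratio below is an illustrative assumption, not NLOCL's exact schedule:

```python
import numpy as np

def small_loss_split(losses, clean_ratio=0.7):
    """Split a batch into presumed-clean and presumed-noisy indices.

    Networks tend to fit correctly labeled samples before noisy ones,
    so the low-loss samples within a batch are treated as clean.
    """
    losses = np.asarray(losses, dtype=float)
    n_clean = int(round(clean_ratio * len(losses)))
    order = np.argsort(losses)          # indices sorted by ascending loss
    return order[:n_clean], order[n_clean:]

clean_idx, noisy_idx = small_loss_split([0.1, 2.3, 0.2, 1.9, 0.05], clean_ratio=0.6)
print(clean_idx, noisy_idx)  # [4 0 2] [3 1]
```

In a full pipeline the clean split keeps its labels for supervised replay while the noisy split is treated as unlabeled during fine-tuning.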
5

Li, Jiyong, Dilshod Azizov, Yang Li, and Shangsong Liang. "Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13554–62. http://dx.doi.org/10.1609/aaai.v38i12.29259.

Abstract:
Recently, because of the high-quality representations produced by contrastive learning methods, rehearsal-based contrastive continual learning has been proposed to explore how to continually learn transferable representation embeddings and avoid the catastrophic forgetting issue of traditional continual settings. Based on this framework, we propose Contrastive Continual Learning via Importance Sampling (CCLIS) to preserve knowledge by recovering previous data distributions with a new strategy for Replay Buffer Selection (RBS), which minimizes the estimated variance to save high-quality hard negative samples for representation learning. Furthermore, we present the Prototype-instance Relation Distillation (PRD) loss, a technique designed to maintain the relationship between prototypes and sample representations using a self-distillation process. Experiments on standard continual learning benchmarks reveal that our method notably outperforms existing baselines in terms of knowledge preservation and thereby effectively counteracts catastrophic forgetting in online contexts. The code is available at https://github.com/lijy373/CCLIS.
6

Kim, Doyoung, Dongmin Park, Yooju Shin, Jihwan Bang, Hwanjun Song, and Jae-Gil Lee. "Adaptive Shortcut Debiasing for Online Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13122–31. http://dx.doi.org/10.1609/aaai.v38i12.29211.

Abstract:
We propose a novel framework, DropTop, that suppresses the shortcut bias in online continual learning (OCL) while adapting to the varying degree of shortcut bias incurred by a continuously changing environment. Based on the observed high-attention property of the shortcut bias, highly activated features are considered candidates for debiasing. More importantly, to resolve the limitation of the online environment, where prior knowledge and auxiliary data are not available, two novel techniques, feature map fusion and adaptive intensity shifting, enable us to automatically determine the appropriate level and proportion of candidate shortcut features to be dropped. Extensive experiments on five benchmark datasets demonstrate that, when combined with various OCL algorithms, DropTop increases the average accuracy by up to 10.4% and decreases the forgetting by up to 63.2%.
7

Liu, Bing. "Learning on the Job: Online Lifelong and Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 09 (April 3, 2020): 13544–49. http://dx.doi.org/10.1609/aaai.v34i09.7079.

Abstract:
One of the hallmarks of human intelligence is the ability to learn continuously, accumulate the knowledge learned in the past, and use that knowledge to help learn more and learn better. It is hard to imagine a truly intelligent system without this capability. This type of learning differs significantly from the classic machine learning (ML) paradigm of isolated single-task learning. Although there is already research on learning a sequence of tasks incrementally under the names of lifelong learning or continual learning, such work still follows the traditional two-phase separate training and testing paradigm for each task, and the tasks are given by the user. This paper adds on-the-job learning to the mix to emphasize the need to learn during application (thus online) after the model has been deployed, which traditional ML cannot do. It aims to leverage the learned knowledge to discover new tasks, interact with humans and the environment, make inferences, and incrementally learn the new tasks on the fly during applications in a self-supervised and interactive manner. This is analogous to human on-the-job learning after formal training. We use chatbots and self-driving cars as examples to discuss the need, some initial work, and key challenges and opportunities in building this capability.
8

Ha, Donghee, Mooseop Kim, and Chi Yoon Jeong. "Online Continual Learning in Acoustic Scene Classification: An Empirical Study". Sensors 23, no. 15 (August 3, 2023): 6893. http://dx.doi.org/10.3390/s23156893.

Abstract:
Numerous deep learning methods for acoustic scene classification (ASC) have been proposed to improve the classification accuracy of sound events. However, only a few studies have focused on continual learning (CL), wherein a model continually learns to solve issues with task changes. Therefore, in this study, we systematically analyzed the performance of ten recent CL methods to provide guidelines regarding their performance. The CL methods included two regularization-based methods and eight replay-based methods. First, we defined realistic and difficult scenarios, such as online class-incremental (OCI) and online domain-incremental (ODI) cases, for three public sound datasets. Then, we systematically analyzed the performance of each CL method in terms of average accuracy, average forgetting, and training time. In OCI scenarios, iCaRL and SCR showed the best performance for small buffer sizes, and GDumb showed the best performance for large buffer sizes. In ODI scenarios, SCR, which adopts supervised contrastive learning, consistently outperformed the other methods regardless of the memory buffer size. Most replay-based methods have an almost constant training time regardless of the memory buffer size, and their performance increases with the memory buffer size. Based on these results, GDumb and SCR should be the first continual learning methods considered for ASC.
9

Hihn, Heinke, and Daniel A. Braun. "Online continual learning through unsupervised mutual information maximization". Neurocomputing 578 (April 2024): 127422. http://dx.doi.org/10.1016/j.neucom.2024.127422.

10

Ye, Fei, Adrian G. Bors, and Kun Zhang. "Continual Unsupervised Generative Modelling via Online Optimal Transport". Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (April 11, 2025): 22092–100. https://doi.org/10.1609/aaai.v39i21.34362.

Abstract:
Lately, deep generative models have achieved excellent results after learning pre-defined and static data distributions. Meanwhile, their performance on continual learning suffers from degeneration caused by catastrophic forgetting. In this paper, we study unsupervised generative modelling in a more realistic continual learning scenario, where class and task information are absent during both training and inference. To implement this goal, the proposed memory approach consists of a temporary memory system, which stores data examples, while a dynamic expansion memory system gradually preserves those samples that are crucial for long-term memorization. A novel memory expansion mechanism is then proposed, employing optimal transport distances between the statistics of memorized samples and each newly seen datum. This paper proposes the Sinkhorn-based Dual Dynamic Memory (SDDM) method, which uses the Sinkhorn distance as an optimal transport measure for evaluating the significance of the data to be stored in the memory buffer. The Sinkhorn transport algorithm preserves a diversity of samples within a compact memory capacity. The memory buffering approach does not interact with the model's training process and can be optimized independently in both supervised and unsupervised learning without any modifications. Moreover, we also propose a novel dynamic model expansion mechanism to automatically increase the model's capacity whenever necessary, which can deal with infinite data streams and further improve the model's performance. Experimental results show that the proposed approach achieves state-of-the-art performance in both supervised and unsupervised learning.
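The Sinkhorn distance used by SDDM is an entropy-regularized optimal transport cost and can be computed with a short fixed-point iteration. A NumPy sketch for discrete histograms (the regularization strength and iteration count are illustrative choices, not the paper's settings):

```python
import numpy as np

def sinkhorn_distance(a, b, cost, eps=0.05, n_iters=500):
    """Entropy-regularized optimal transport cost between histograms a and b.

    Sinkhorn-Knopp alternately rescales the Gibbs kernel K = exp(-cost/eps)
    until the transport plan's marginals match a and b, then returns the
    transport cost <plan, cost>.
    """
    K = np.exp(-cost / eps)
    u = np.ones_like(a, dtype=float)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]
    return float(np.sum(plan * cost))

cost = np.array([[0.0, 1.0], [1.0, 0.0]])
d_same = sinkhorn_distance(np.array([0.5, 0.5]), np.array([0.5, 0.5]), cost)
d_far = sinkhorn_distance(np.array([0.9, 0.1]), np.array([0.1, 0.9]), cost)
```

Here `d_same` is near zero while `d_far` is close to 0.8 (the mass that must move); a score of this kind is what lets a memory system rank how far a new datum lies from the statistics of already-memorized samples.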
11

Guo, Yiduo, Wenpeng Hu, Dongyan Zhao, and Bing Liu. "Adaptive Orthogonal Projection for Batch and Online Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 6 (June 28, 2022): 6783–91. http://dx.doi.org/10.1609/aaai.v36i6.20634.

Abstract:
Catastrophic forgetting is a key obstacle to continual learning. One of the state-of-the-art approaches is orthogonal projection. The idea of this approach is to learn each task by updating the network parameters or weights only in the direction orthogonal to the subspace spanned by all previous task inputs. This ensures no interference with tasks that have been learned. The OWM system, which uses this idea, performs very well against other state-of-the-art systems. In this paper, we first discuss an issue that we discovered in the mathematical derivation of this approach and then propose a novel method, called AOP (Adaptive Orthogonal Projection), to resolve it. AOP yields significant accuracy gains in empirical evaluations in both the batch and online continual learning settings, without saving any previous training data as replay-based methods do.
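For a single linear layer, the orthogonal-projection idea can be made concrete: project the gradient onto the (approximate) null space of previous task inputs before stepping, so outputs on old inputs barely move. A sketch of the OWM-style update that AOP refines (the shapes, learning rate, and regularizer alpha are illustrative):

```python
import numpy as np

def orthogonal_update(W, grad, X_prev, lr=0.1, alpha=1e-3):
    """Gradient step restricted to directions orthogonal to past inputs.

    W:      (out, in) weights of a linear layer y = W x.
    grad:   (out, in) loss gradient for the current task.
    X_prev: (n, in) inputs from previous tasks, one per row.
    P projects onto the regularized null space of span(X_prev), so the
    update can barely change outputs on old inputs.
    """
    A = X_prev.T  # (in, n): columns span the old input subspace
    P = np.eye(A.shape[0]) - A @ np.linalg.inv(A.T @ A + alpha * np.eye(A.shape[1])) @ A.T
    return W - lr * (grad @ P)

rng = np.random.default_rng(0)
X_prev = rng.standard_normal((3, 5))            # three old inputs in R^5
W = rng.standard_normal((2, 5))
W_new = orthogonal_update(W, rng.standard_normal((2, 5)), X_prev)
drift = np.max(np.abs((W_new - W) @ X_prev.T))  # change in outputs on old inputs
```

`drift` stays near zero even though `W_new` differs from `W` in the directions the old inputs do not span; AOP's contribution concerns an issue in how this kind of projector is derived.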
12

Mai, Zheda, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo Kim, and Scott Sanner. "Online continual learning in image classification: An empirical survey". Neurocomputing 469 (January 2022): 28–51. http://dx.doi.org/10.1016/j.neucom.2021.10.021.

13

Gu, Jianyang, Kai Wang, Wei Jiang, and Yang You. "Summarizing Stream Data for Memory-Constrained Online Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12217–25. http://dx.doi.org/10.1609/aaai.v38i11.29111.

Abstract:
Replay-based methods have proved their effectiveness in online continual learning by rehearsing past samples from an auxiliary memory. While much effort has gone into improving memory-based training schemes, the information carried by each sample in the memory remains under-investigated. Under circumstances with restricted storage space, the informativeness of the memory becomes critical for effective replay. Although some works design specific strategies to select representative samples, by employing only a small number of original images the storage space is still not well utilized. To this end, we propose to Summarize the knowledge from the Stream Data (SSD) into more informative samples by distilling the training characteristics of real images. By maintaining the consistency of training gradients and the relationship to past tasks, the summarized samples are more representative of the stream data than the original images. Extensive experiments are conducted on multiple online continual learning benchmarks to show that the proposed SSD method significantly enhances the replay effects. We demonstrate that with limited extra computational overhead, SSD provides more than a 3% accuracy boost on sequential CIFAR-100 under an extremely restricted memory buffer. Code at https://github.com/vimar-gu/SSD.
14

Jacobsen, Andrew, Matthew Schlegel, Cameron Linke, Thomas Degris, Adam White, and Martha White. "Meta-Descent for Online, Continual Prediction". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3943–50. http://dx.doi.org/10.1609/aaai.v33i01.33013943.

Abstract:
This paper investigates different vector step-size adaptation approaches for non-stationary online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update, that is, a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters to minimize prediction error. These meta-descent strategies are promising for non-stationary problems but have not been as extensively explored as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations such as RMSProp. We provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems and is competitive with the state-of-the-art method on a large-scale time-series prediction problem with real data from a mobile robot.
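A classic member of the meta-descent family surveyed here is Sutton's IDBD, which adapts one log step-size per weight by gradient descent on the prediction error; AdaGain generalizes this idea to semi-gradient and accelerated updates. A minimal sketch for linear online regression (the meta step-size and initialization are illustrative):

```python
import numpy as np

def idbd_step(w, beta, h, x, y, meta_lr=0.005):
    """One IDBD update for a linear predictor y_hat = w @ x.

    Each weight carries its own log step-size beta; h traces how past
    step-size changes have influenced that weight, which supplies the
    meta-gradient for adapting beta.
    """
    delta = y - w @ x                        # prediction error
    beta = beta + meta_lr * delta * x * h    # meta-descent on log step-sizes
    alpha = np.exp(beta)                     # per-weight step-sizes
    w = w + alpha * delta * x                # LMS step with adapted rates
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h

# Track a fixed linear target from a stream of examples.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 3.0])
w, beta, h = np.zeros(3), np.full(3, np.log(0.02)), np.zeros(3)
errs = []
for _ in range(3000):
    x = rng.standard_normal(3)
    y = float(w_true @ x)
    errs.append(abs(y - w @ x))
    w, beta, h = idbd_step(w, beta, h, x, y)
```

On this stationary stream the error shrinks toward zero; the advantage of meta-descent shows up when the target drifts and the per-weight step-sizes must stay large exactly for the drifting components.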
15

Riemer, Matthew, Tim Klinger, Djallel Bouneffouf, and Michele Franceschini. "Scalable Recollections for Continual Lifelong Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1352–59. http://dx.doi.org/10.1609/aaai.v33i01.33011352.

Abstract:
Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is that of continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A successful continual lifelong learning system must have three key capabilities: it must learn and adapt over time, it must not forget what it has learned, and it must be efficient in both training time and memory. Recent techniques have focused their efforts primarily on the first two capabilities while questions of efficiency remain largely unexplored. In this paper, we consider the problem of efficient and effective storage of experiences over very large time-frames. In particular, we consider the case where typical experiences are O(n) bits and memories are limited to O(k) bits for k ≪ n.
16

Wang, Kai, Joost van de Weijer, and Luis Herranz. "ACAE-REMIND for online continual learning with compressed feature replay". Pattern Recognition Letters 150 (October 2021): 122–29. http://dx.doi.org/10.1016/j.patrec.2021.06.025.

17

Yu, Da, Mingyi Zhang, Mantian Li, Fusheng Zha, Junge Zhang, Lining Sun, and Kaiqi Huang. "Squeezing More Past Knowledge for Online Class-Incremental Continual Learning". IEEE/CAA Journal of Automatica Sinica 10, no. 3 (March 2023): 722–36. http://dx.doi.org/10.1109/jas.2023.123090.

18

Oros, Ramona Georgiana, Andreas Pester, and Caterina Berbenni-Rehm. "Knowledge Management Platform for Online Training". International Journal of Advanced Corporate Learning (iJAC) 8, no. 3 (October 8, 2015): 30. http://dx.doi.org/10.3991/ijac.v8i3.4907.

Abstract:
In our rapidly changing world, where technology is continually updated and altered, the learning process should (must) be continual as well. To ensure the highest quality of work performance, there are three requirements: 1) clearly structured learning processes; 2) well-organized and structured learning; and 3) knowledge and skill pools. These are especially necessary for novel and rapidly changing technologies such as online engineering. This paper presents an implementation option for structuring remote-labs knowledge using PROMIS®, a knowledge and process management platform.
19

Lee, Byung Hyun, Min-hwan Oh, and Se Young Chun. "Doubly Perturbed Task Free Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 13346–54. http://dx.doi.org/10.1609/aaai.v38i12.29236.

Abstract:
Task-free online continual learning (TF-CL) is a challenging problem where the model incrementally learns tasks without explicit task information. Although training with the entire data from past, present, and future is considered the gold standard, naive approaches in TF-CL that use only the current samples may conflict with learning from future samples, leading to catastrophic forgetting and poor plasticity. Thus, proactively considering unseen future samples in TF-CL becomes imperative. Motivated by this intuition, we propose a novel TF-CL framework that considers future samples and show that injecting adversarial perturbations on both the input data and decision-making is effective. We then propose a novel method named Doubly Perturbed Continual Learning (DPCL) to efficiently implement these input and decision-making perturbations. Specifically, for input perturbation, we propose an approximate perturbation method that injects noise into the input data as well as the feature vector and then interpolates the two perturbed samples. For decision-making perturbation, we devise multiple stochastic classifiers. We also investigate a memory management scheme and learning rate scheduling reflecting our proposed double perturbations. We demonstrate that our proposed method outperforms state-of-the-art baseline methods by large margins on various TF-CL benchmarks.
20

Jha, Kishlay, Guangxu Xun, and Aidong Zhang. "Continual representation learning for evolving biomedical bipartite networks". Bioinformatics 37, no. 15 (February 3, 2021): 2190–97. http://dx.doi.org/10.1093/bioinformatics/btab067.

Abstract:
Motivation: Many real-world biomedical interactions such as ‘gene-disease’, ‘disease-symptom’ and ‘drug-target’ are modeled as a bipartite network structure. Learning meaningful representations for such networks is a fundamental problem in the research area of Network Representation Learning (NRL). NRL approaches aim to translate the network structure into low-dimensional vector representations that are useful to a variety of biomedical applications. Despite significant advances, the existing approaches still have certain limitations. First, a majority of these approaches do not model the unique topological properties of bipartite networks. Consequently, their straightforward application to bipartite graphs yields unsatisfactory results. Second, the existing approaches typically learn representations from static networks. This is limiting for biomedical bipartite networks that evolve at a rapid pace and thus necessitate approaches that can update the representations in an online fashion. Results: In this research, we propose a novel representation learning approach that accurately preserves the intricate bipartite structure and efficiently updates the node representations. Specifically, we design a customized autoencoder that captures the proximity relationship between nodes participating in bipartite bicliques (2 × 2 sub-graphs), while preserving both the global and local structures. Moreover, the proposed structure-preserving technique is carefully interleaved with the central tenets of continual machine learning to design an incremental learning strategy that updates the node representations in an online manner. Taken together, the proposed approach produces meaningful representations with high fidelity and computational efficiency. Extensive experiments conducted on several biomedical bipartite networks validate the effectiveness and rationality of the proposed approach.
21

Han, Ya-nan, and Jian-wei Liu. "Online continual learning via the knowledge invariant and spread-out properties". Expert Systems with Applications 213 (March 2023): 119004. http://dx.doi.org/10.1016/j.eswa.2022.119004.

22

Kim, Junsu, and Suhyun Kim. "Salient Frequency-aware Exemplar Compression for Resource-constrained Online Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (April 11, 2025): 17895–903. https://doi.org/10.1609/aaai.v39i17.33968.

Abstract:
Online Class-Incremental Learning (OCIL) enables a model to learn new classes from a data stream. Since data stream samples are seen only once and the capacity of storage is constrained, OCIL is particularly susceptible to Catastrophic Forgetting (CF). While exemplar replay methods alleviate CF by storing representative samples, the limited capacity of the buffer inhibits capturing the entire old data distribution, leading to CF. In this regard, recent papers suggest image compression for better memory usage. However, existing methods raise two concerns: computational overhead and compression defects. On one hand, computational overhead can limit their applicability in OCIL settings, as models might miss learning opportunities from the current streaming data if computational resources are budgeted and preoccupied with compression. On the other hand, typical compression schemes demanding low computational overhead, such as JPEG, introduce noise detrimental to training. To address these issues, we propose Salient Frequency-aware Exemplar Compression (SFEC), an efficient and effective JPEG-based compression framework. SFEC exploits saliency information in the frequency domain to reduce negative impacts from compression artifacts for learning. Moreover, SFEC employs weighted sampling for exemplar elimination based on the distance between raw and compressed data to mitigate artifacts further. Our experiments employing the baseline OCIL method on benchmark datasets such as CIFAR-100 and Mini-ImageNet demonstrate the superiority of SFEC over previous exemplar compression methods in streaming scenarios.
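The distance-based exemplar elimination step can be sketched independently of the JPEG details: exemplars whose compressed versions stray furthest from the originals are the most likely to be dropped. The Euclidean distance and proportional weighting below are assumptions for illustration, not SFEC's exact formulation:

```python
import numpy as np

def eliminate_distorted_exemplars(raw, compressed, n_drop, rng=None):
    """Weighted sampling for exemplar elimination.

    raw, compressed: (n, d) arrays of original and compressed exemplars.
    Each exemplar is dropped with probability proportional to its
    compression distortion, pushing heavily degraded samples out of the
    buffer first. Returns the indices of the kept exemplars.
    """
    rng = rng or np.random.default_rng(0)
    dist = np.linalg.norm(raw - compressed, axis=1)   # per-exemplar distortion
    p = dist / dist.sum()
    drop = rng.choice(len(raw), size=n_drop, replace=False, p=p)
    return np.setdiff1d(np.arange(len(raw)), drop)

raw = np.zeros((4, 2))
compressed = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [5.0, 5.0]])
kept = eliminate_distorted_exemplars(raw, compressed, n_drop=1)
```

Exemplar 0 is compressed without any distortion, so its drop probability is zero and it always survives, while the heavily distorted exemplar 3 is the most likely to be eliminated.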
23

Chin, Matthew, and Roberto Corizzo. "Continual Semi-Supervised Malware Detection". Machine Learning and Knowledge Extraction 6, no. 4 (December 10, 2024): 2829–54. https://doi.org/10.3390/make6040135.

Abstract:
Detecting malware has become extremely important with the increasing exposure of computational systems and mobile devices to online services. However, the rapidly evolving nature of malicious software makes this task particularly challenging. Despite the significant number of machine learning works for malware detection proposed in the last few years, limited interest has been devoted to continual learning approaches, which could allow models to showcase effective performance in challenging and dynamic scenarios while being computationally efficient. Moreover, most of the research works proposed thus far adopt a fully supervised setting, which relies on fully labelled data and appears to be impractical in a rapidly evolving malware landscape. In this paper, we address malware detection from a continual semi-supervised one-class learning perspective, which only requires normal/benign data and empowers models with a greater degree of flexibility, allowing them to detect multiple malware types with different morphology. Specifically, we assess the effectiveness of two replay strategies on anomaly detection models and analyze their performance in continual learning scenarios with three popular malware detection datasets (CIC-AndMal2017, CIC-MalMem-2022, and CIC-Evasive-PDFMal2022). Our evaluation shows that replay-based strategies can achieve competitive performance in terms of continual ROC-AUC with respect to the considered baselines and bring new perspectives and insights on this topic.
24

Matuga, Julia M., Deborah Wooldridge, and Sandra Poirier. "Assuring Quality in Online Course Delivery". International Journal of Adult Vocational Education and Technology 2, no. 1 (January 2011): 36–49. http://dx.doi.org/10.4018/javet.2011010104.

Abstract:
This paper examines the critical issue of assuring quality online course delivery by examining four key components of online teaching and learning. The topic of course delivery is viewed as a cultural issue that permeates processes from the design of an online course to its evaluation. First, the authors examine and review key components of, and tools for, designing high-impact online courses that support student learning. Second, the authors provide suggestions for faculty teaching online courses to assist in creating high-quality online courses that support teaching and, consequently, facilitate opportunities for student learning. Quality online course delivery is also contingent on the support of faculty by administration. Lastly, this paper provides suggestions for conducting course evaluation and feedback loops for the continual improvement of online learning and teaching. These four components are essential elements in assuring quality online courses.
25

Yu, Hsien-Hua, Ru-Ping Hu, and Mei-Lien Chen. "Global Pandemic Prevention Continual Learning—Taking Online Learning as an Example: The Relevance of Self-Regulation, Mind-Unwandered, and Online Learning Ineffectiveness". Sustainability 14, no. 11 (May 27, 2022): 6571. http://dx.doi.org/10.3390/su14116571.

Full text of the source
Abstract:
Since the global COVID-19 pandemic began, online learning has gained increasing importance as learners are socially isolated by physical and psychological threats, and have to face the epidemic and take preventive measures to ensure non-stop learning. Based on socially situated cognition theory, this study explored the relevance of online learning ineffectiveness (OLI) as predicted by self-regulated learning (SRL) in different phases of learning (preparation, performance, and self-reflection) and its interaction with mind-unwandered during the COVID-19 pandemic. The subjects of the study were senior general and technical high school students. After they completed the online questionnaire, the partial least squares structural equation modeling (PLS-SEM) method was used to analyze the data. Results demonstrated that self-regulation in the two phases of preparation (i.e., cognitive strategy and emotional adjustment) and performance (i.e., mission strategy and environmental adjustment) in SRL is positively related to mind-unwandered in online learning. Moreover, mind-unwandered in online learning was positively related to the self-reflection phase (i.e., time management and help-seeking) of SRL. Additionally, self-reflection in SRL was negatively related to online learning ineffectiveness. PLS assessments found that the preparation and performance sub-constructs of SRL were negatively related to online learning ineffectiveness, mediated by mind-unwandered and self-reflection of SRL. The results suggest that teachers can enhance their students’ self-regulation in online learning and assist them in being more mind-unwandered in online learning.
26

T Subramaniam, Thirumeni, and Nur Amalina Diyana Suhaimi. "Continual Quality Improvement of Online Course Delivery Using Perceived Course Learning Outcomes". Malaysian Journal of Distance Education 21, no. 1 (2019): 43–55. http://dx.doi.org/10.21315/mjde2019.21.1.3.

Full text of the source
27

Zeng, Ziqian, Jianwei Wang, Lin Wu, Weikai Lu, and Huiping Zhuang. "3D-AOCL: Analytic online continual learning for imbalanced 3D point cloud classification". Alexandria Engineering Journal 111 (January 2025): 530–39. http://dx.doi.org/10.1016/j.aej.2024.10.037.

Full text of the source
28

Yang, Shuangming, Jiangtong Tan, and Badong Chen. "Robust Spike-Based Continual Meta-Learning Improved by Restricted Minimum Error Entropy Criterion". Entropy 24, no. 4 (March 25, 2022): 455. http://dx.doi.org/10.3390/e24040455.

Full text of the source
Abstract:
The spiking neural network (SNN) is regarded as a promising candidate to deal with the great challenges presented by current machine learning techniques, including the high energy consumption induced by deep neural networks. However, there is still a great gap between SNNs and the online meta-learning performance of artificial neural networks. Importantly, existing spike-based online meta-learning models do not target the robust learning based on spatio-temporal dynamics and superior machine learning theory. In this invited article, we propose a novel spike-based framework with minimum error entropy, called MeMEE, using the entropy theory to establish the gradient-based online meta-learning scheme in a recurrent SNN architecture. We examine the performance based on various types of tasks, including autonomous navigation and the working memory test. The experimental results show that the proposed MeMEE model can effectively improve the accuracy and the robustness of the spike-based meta-learning performance. More importantly, the proposed MeMEE model emphasizes the application of the modern information theoretic learning approach on the state-of-the-art spike-based learning algorithms. Therefore, in this invited paper, we provide new perspectives for further integration of advanced information theory in machine learning to improve the learning performance of SNNs, which could be of great merit to applied developments with spike-based neuromorphic systems.
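The minimum error entropy criterion underlying MeMEE is commonly implemented by maximizing the Gaussian-kernel information potential of the prediction errors. A minimal NumPy sketch of that loss follows; the function name and kernel width are illustrative assumptions, not the MeMEE model itself:

```python
import numpy as np

def error_entropy_loss(errors, sigma=1.0):
    """Negative Gaussian-kernel information potential of the errors.
    Minimizing this (i.e., maximizing the information potential)
    concentrates the error distribution, which is the core of the
    minimum error entropy criterion."""
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                      # all pairwise error gaps
    ip = np.mean(np.exp(-(diffs ** 2) / (2.0 * sigma ** 2)))
    return -ip                                           # in (-1, 0]; -1 when all errors coincide
```

Tightly clustered errors (low error entropy) give a lower loss than widely spread ones, regardless of the errors' mean, which is what makes the criterion robust to heavy-tailed noise.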
29

S. Lu, Hwangji, and Robert Smiles. "The Role of Collaborative Learning in the Online Education". International Journal of Economics, Business and Management Research 06, no. 06 (2022): 125–37. http://dx.doi.org/10.51505/ijebmr.2022.6608.

Full text of the source
Abstract:
Collaboration or teamwork is indispensable in today’s workplace. The continual and increasingly rapid change in the external environment requires professionals equipped with unique skill sets to work together and solve problems collaboratively. It is the higher education institution’s responsibility to prepare students with the skills expected by employers. Collaborative learning is an active learning approach in which two or more students team up toward common goals. Knowledge and products are created through an active, social, and engaging process. Students develop communication, interpersonal, metacognitive thinking, and problem-solving skills, as well as their understanding of diverse perspectives for real-world, profession-related situations. Furthermore, working in a team can reduce students’ feelings of isolation in the online learning environment. As such, more and more online programs incorporate collaborative learning into their curricula. In this paper, the benefits, challenges, and solutions to challenges in collaborative learning are presented and discussed.
30

Han, Ya-nan, and Jian-wei Liu. "Online Continual Learning via the Meta-learning update with Multi-scale Knowledge Distillation and Data Augmentation". Engineering Applications of Artificial Intelligence 113 (August 2022): 104966. http://dx.doi.org/10.1016/j.engappai.2022.104966.

Full text of the source
31

Yao, Jiaqi, Bowen Zheng, and Julia Kowal. "Continual learning for online state of charge estimation across diverse lithium-ion batteries". Journal of Energy Storage 117 (May 2025): 116086. https://doi.org/10.1016/j.est.2025.116086.

Full text of the source
32

Adaimi, Rebecca, and Edison Thomaz. "Lifelong Adaptive Machine Learning for Sensor-Based Human Activity Recognition Using Prototypical Networks". Sensors 22, no. 18 (September 12, 2022): 6881. http://dx.doi.org/10.3390/s22186881.

Full text of the source
Abstract:
Continual learning (CL), also known as lifelong learning, is an emerging research topic that has been attracting increasing interest in the field of machine learning. With human activity recognition (HAR) playing a key role in enabling numerous real-world applications, an essential step towards the long-term deployment of such systems is to extend the activity model to dynamically adapt to changes in people’s everyday behavior. Research on CL applied to the HAR domain is still under-explored, with researchers mostly transferring existing methods developed for computer vision to HAR. Moreover, analysis has so far focused on task-incremental or class-incremental learning paradigms where task boundaries are known. This impedes the applicability of such methods to real-world systems. To push this field forward, we build on recent advances in the area of continual learning and design a lifelong adaptive learning framework using Prototypical Networks, LAPNet-HAR, that processes sensor-based data streams in a task-free data-incremental fashion and mitigates catastrophic forgetting using experience replay and continual prototype adaptation. Online learning is further facilitated using a contrastive loss to enforce inter-class separation. LAPNet-HAR is evaluated on five publicly available activity datasets in terms of its ability to acquire new information while preserving previous knowledge. Our extensive empirical results demonstrate the effectiveness of LAPNet-HAR in task-free CL and uncover useful insights for future challenges.
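The inter-class separation term mentioned in this abstract is typically realized as a pairwise contrastive loss. The sketch below illustrates that generic idea under that assumption; it is not the exact LAPNet-HAR objective, and the function name and margin value are illustrative:

```python
import numpy as np

def contrastive_separation_loss(feats, labels, margin=1.0):
    """Pairwise contrastive loss: pull same-class features together and
    push different-class features at least `margin` apart."""
    f = np.asarray(feats, dtype=float)
    total, pairs = 0.0, 0
    for i in range(len(f)):
        for j in range(i + 1, len(f)):
            d = float(np.linalg.norm(f[i] - f[j]))
            if labels[i] == labels[j]:
                total += d ** 2                      # attract positive pairs
            else:
                total += max(0.0, margin - d) ** 2   # repel negatives up to the margin
            pairs += 1
    return total / pairs
```

A feature space where classes form tight, well-separated clusters scores lower than one where classes are interleaved, which is the property that keeps prototypes discriminative in a task-free stream.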
33

Liu, Yanhe, Peng Wang, Wenjun Ke, Guozheng Li, Xiye Chen, Jiteng Zhao, and Ziyu Shang. "Unify Named Entity Recognition Scenarios via Contrastive Real-Time Updating Prototype". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 12 (March 24, 2024): 14035–43. http://dx.doi.org/10.1609/aaai.v38i12.29312.

Full text of the source
Abstract:
Supervised named entity recognition (NER) aims to classify entity mentions into a fixed number of pre-defined types. However, in real-world scenarios, unknown entity types continually emerge. Naive fine-tuning will result in catastrophic forgetting of old entity types. Existing continual methods usually depend on knowledge distillation to alleviate forgetting, which is less effective on long task sequences. Moreover, most of them are specific to the class-incremental scenario and cannot adapt to the online scenario, which is more common in practice. In this paper, we propose a unified framework called Contrastive Real-time Updating Prototype (CRUP) that can handle different scenarios for NER. Specifically, we train a Gaussian projection model with a regularized contrastive objective. After training on each batch, we store the mean vectors of representations belonging to new entity types as their prototypes. Meanwhile, we update existing prototypes belonging to old types based only on representations from the current batch. The final prototypes are used for nearest class mean classification. In this way, CRUP can handle different scenarios through its batch-wise learning. Moreover, CRUP can alleviate forgetting in continual scenarios with only current data instead of old data. To comprehensively evaluate CRUP, we construct extensive benchmarks based on various datasets. Experimental results show that CRUP significantly outperforms baselines in continual scenarios and is also competitive in the supervised scenario.
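The batch-wise prototype bookkeeping this abstract describes — running class means updated from the current batch only, with nearest-class-mean prediction — can be sketched as below. CRUP's Gaussian projection model and contrastive objective are omitted, and all names are illustrative:

```python
import numpy as np

class PrototypeStore:
    """Per-type mean prototypes maintained batch by batch; prediction is
    nearest class mean. No old data is stored, only running sums."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, feats, labels):
        # Called once per incoming batch; handles both new and old types.
        for f, y in zip(np.asarray(feats, dtype=float), labels):
            if y not in self.sums:
                self.sums[y] = np.zeros_like(f)
                self.counts[y] = 0
            self.sums[y] += f
            self.counts[y] += 1

    def prototype(self, y):
        return self.sums[y] / self.counts[y]

    def classify(self, feat):
        feat = np.asarray(feat, dtype=float)
        return min(self.sums, key=lambda y: np.linalg.norm(feat - self.prototype(y)))
```

Because only sums and counts are kept, memory stays constant in the number of types rather than the number of seen examples, which is what makes the scheme viable in the online scenario.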
34

Wu, Tiandeng, Qijiong Liu, Yi Cao, Yao Huang, Xiao-Ming Wu, and Jiandong Ding. "Continual Graph Convolutional Network for Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 13754–62. http://dx.doi.org/10.1609/aaai.v37i11.26611.

Full text of the source
Abstract:
Graph convolutional network (GCN) has been successfully applied to capture global non-consecutive and long-distance semantic information for text classification. However, while GCN-based methods have shown promising results in offline evaluations, they commonly follow a seen-token-seen-document paradigm by constructing a fixed document-token graph and cannot make inferences on new documents. It is a challenge to deploy them in online systems to infer streaming text data. In this work, we present a continual GCN model (ContGCN) to generalize inferences from observed documents to unobserved documents. Concretely, we propose a new all-token-any-document paradigm to dynamically update the document-token graph in every batch during both the training and testing phases of an online system. Moreover, we design an occurrence memory module and a self-supervised contrastive learning objective to update ContGCN in a label-free manner. A 3-month A/B test on Huawei public opinion analysis system shows ContGCN achieves 8.86% performance gain compared with state-of-the-art methods. Offline experiments on five public datasets also show ContGCN can improve inference quality. The source code will be released at https://github.com/Jyonn/ContGCN.
35

Huo, Fushuo, Wenchao Xu, Jingcai Guo, Haozhao Wang, and Yunfeng Fan. "Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (March 24, 2024): 12698–707. http://dx.doi.org/10.1609/aaai.v38i11.29165.

Full text of the source
Abstract:
This paper investigates a new, practical, but challenging problem named Non-exemplar Online Class-incremental continual Learning (NO-CL), which aims to preserve the discernibility of base classes without buffering data examples and to efficiently learn novel classes continuously in a single-pass (i.e., online) data stream. The challenges of this task are mainly two-fold: (1) both base and novel classes suffer from severe catastrophic forgetting as no previous samples are available for replay; (2) as the online data can only be observed once, there is no way to fully re-train the whole model, e.g., to re-calibrate the decision boundaries via prototype alignment or feature distillation. In this paper, we propose a novel Dual-prototype Self-augment and Refinement method (DSR) for the NO-CL problem, which consists of two strategies: (1) dual class prototypes: vanilla and high-dimensional prototypes are exploited to utilize the pre-trained information and obtain robust quasi-orthogonal representations rather than example buffers, for both privacy preservation and memory reduction; (2) self-augment and refinement: instead of updating the whole network, we optimize the high-dimensional prototypes alternately with the extra projection module based on the self-augmented vanilla prototypes, through a bi-level optimization problem. Extensive experiments demonstrate the effectiveness and superiority of the proposed DSR on the NO-CL problem.
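The quasi-orthogonality that the high-dimensional prototypes above rely on can be checked directly: random unit vectors in a high-dimensional space are nearly orthogonal. A small illustrative demonstration of that property (not code from the paper):

```python
import numpy as np

def quasi_orthogonal_prototypes(n_classes, dim, seed=0):
    """Random unit vectors: as `dim` grows, pairwise cosines concentrate
    near zero, giving quasi-orthogonal class prototypes without storing
    any examples."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((n_classes, dim))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def max_offdiag_cosine(protos):
    """Largest absolute cosine similarity between distinct prototypes."""
    g = protos @ protos.T
    np.fill_diagonal(g, 0.0)
    return float(np.abs(g).max())
```

In a few thousand dimensions, ten random prototypes are already close to mutually orthogonal, whereas the same construction in two dimensions is not; this is why high-dimensional prototypes can stand in for an example buffer.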
36

N, Kavyashree, Shailaja L K, and Anitha J. "Real-time Attention Span Tracking in Online Education". International Journal of Innovative Technology and Exploring Engineering 11, no. 9 (August 30, 2022): 11–17. http://dx.doi.org/10.35940/ijitee.g9191.0811922.

Full text of the source
Abstract:
E-learning has changed how students learn over the past ten years by allowing them access to high-quality education whenever and wherever they need it. However, students frequently become distracted for various reasons, which affects their overall learning ability. Many researchers have been striving to improve the quality of online education, but a comprehensive solution to this problem is still needed. This paper presents a method for monitoring students’ continuing attention during online classes using the surveillance camera and oral input. We investigate different image-processing techniques and machine learning algorithms throughout this study. We propose a framework that makes use of five specific non-verbal cues to calculate a student’s attention score during computer-based tasks and generate continual feedback for both the institution and the student. The output can be used as a heuristic to investigate how both speakers and students generally present themselves.
37

Stewart, Heather, Rod Gapp, and Luke Houghton. "Large Online First Year Learning and Teaching: the Lived Experience of Developing a Student-Centred Continual Learning Practice". Systemic Practice and Action Research 33, no. 4 (May 23, 2019): 435–51. http://dx.doi.org/10.1007/s11213-019-09492-x.

Full text of the source
38

Michel, Nicolas, Giovanni Chierchia, Romain Negrel, and Jean-François Bercher. "Learning Representations on the Unit Sphere: Investigating Angular Gaussian and Von Mises-Fisher Distributions for Online Continual Learning". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 14350–58. http://dx.doi.org/10.1609/aaai.v38i13.29348.

Full text of the source
Abstract:
We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We propose to use the angular Gaussian distribution, which corresponds to a Gaussian projected on the unit sphere, and derive the associated loss function. We also consider the von Mises-Fisher distribution, which is the conditional distribution of a Gaussian restricted to the unit sphere. The learned representations are pushed toward fixed directions, which are the prior means of the Gaussians, allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://github.com/Nicolas1203/ocl-fd.
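A generic version of the fixed-direction idea described above — normalized features scored by concentration-scaled cosine similarity to fixed per-class mean directions, as in a von Mises-Fisher mixture with shared concentration — might look like the sketch below. The paper's derived losses may differ in detail; the function name and the `kappa` default are illustrative:

```python
import numpy as np

def fixed_direction_vmf_loss(feats, labels, class_dirs, kappa=10.0):
    """Cross-entropy over von Mises-Fisher-style logits: features are
    normalized to the unit sphere and scored by concentration-scaled
    cosine similarity to fixed per-class mean directions."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    mu = class_dirs / np.linalg.norm(class_dirs, axis=1, keepdims=True)
    logits = kappa * z @ mu.T                         # concentration-scaled cosines
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(log_probs[np.arange(len(labels)), labels].mean())
```

Because the class directions are fixed in advance rather than learned, the targets do not move as the data stream drifts, which is the resilience property the abstract highlights.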
39

Phillips, Craig, and Jacqueline O'Flaherty. "Evaluating nursing students' engagement in an online course using flipped virtual classrooms". Student Success 10, no. 1 (March 7, 2019): 59–71. http://dx.doi.org/10.5204/ssj.v10i1.1098.

Full text of the source
Abstract:
Flipped classroom models allocate more time for active learning approaches compared with more traditional pedagogies; however, what is less clear with the utilisation of flipped learning is whether students in flipped classes are given more opportunities to develop higher order thinking skills (HOTs) to effect deep learning compared with traditional ways of teaching. Focussing on this gap, this study compares on-campus and off-campus student engagement in two courses using different deliveries: online face-to-face (f2f) mixed mode (on-campus students attend traditional f2f classes and off-campus students study exclusively online) versus fully online mode utilising flipped classes (all students study off campus, engaging in flipped virtual classes). Final course grades were similar for both deliveries; however, the study suggests flipped classes offered students more opportunities to develop HOTs and engage more deeply in the learning process. Students’ evaluations of the online flipped delivery were mixed, with those students previously enrolled exclusively on campus particularly dissatisfied with fully online delivery and the virtual class tutor experience. Recommendations are made concerning both the timing of the introduction of fully online delivery in a program and the need for continual up-skilling of staff who teach in online environments.
40

Lee, Po-Lei, Sheng-Hao Chen, Tzu-Chien Chang, Wei-Kung Lee, Hao-Teng Hsu, and Hsiao-Huang Chang. "Continual Learning of a Transformer-Based Deep Learning Classifier Using an Initial Model from Action Observation EEG Data to Online Motor Imagery Classification". Bioengineering 10, no. 2 (February 1, 2023): 186. http://dx.doi.org/10.3390/bioengineering10020186.

Full text of the source
Abstract:
The motor imagery (MI)-based brain computer interface (BCI) is an intuitive interface that enables users to communicate with external environments through their minds. However, current MI-BCI systems ask naïve subjects to perform unfamiliar MI tasks with simple textual instruction or a visual/auditory cue. The unclear instruction for MI execution not only results in large inter-subject variability in the measured EEG patterns but also causes difficulty in grouping cross-subject data for big-data training. In this study, we designed a BCI training method in a virtual reality (VR) environment. Subjects wore a head-mounted device (HMD) and executed action observation (AO) concurrently with MI (i.e., AO + MI) in VR environments. EEG signals recorded in the AO + MI task were used to train an initial model, and the initial model was continually improved by the provision of EEG data in the following BCI training sessions. We recruited five healthy subjects, and each subject was requested to participate in three kinds of tasks, including an AO + MI task, an MI task, and the task of MI with visual feedback (MI-FB) three times. This study adopted a transformer-based spatial-temporal network (TSTN) to decode the user’s MI intentions. In contrast to other convolutional neural network (CNN) or recurrent neural network (RNN) approaches, the TSTN extracts spatial and temporal features and applies attention mechanisms along the spatial and temporal dimensions to perceive the global dependencies. The mean detection accuracies of the TSTN were 0.63, 0.68, 0.75, and 0.77 in the MI, first MI-FB, second MI-FB, and third MI-FB sessions, respectively. This study demonstrated that AO + MI gave subjects an easier way to perform their imagined actions, and the BCI performance improved with the continual learning of the MI-FB training process.
41

Qian, Yiming. "An enhanced Transformer framework with incremental learning for online stock price prediction". PLOS ONE 20, no. 1 (January 13, 2025): e0316955. https://doi.org/10.1371/journal.pone.0316955.

Full text of the source
Abstract:
To address the limitations of existing stock price prediction models in handling real-time data streams—such as poor scalability, declining predictive performance due to dynamic changes in data distribution, and difficulties in accurately forecasting non-stationary stock prices—this paper proposes an incremental learning-based enhanced Transformer framework (IL-ETransformer) for online stock price prediction. This method leverages a multi-head self-attention mechanism to deeply explore the complex temporal dependencies between stock prices and feature factors. Additionally, a continual normalization mechanism is employed to stabilize the data stream, enhancing the model’s adaptability to dynamic changes. To ensure that the model retains prior knowledge while integrating new information, a time series elastic weight consolidation (TSEWC) algorithm is introduced to enable efficient incremental training with incoming data. Experiments conducted on five publicly available datasets demonstrate that the proposed method not only effectively captures the temporal information in the data but also fully exploits the correlations among multi-dimensional features, significantly improving stock price prediction accuracy. Notably, the method shows robust performance in coping with non-stationary and frequently changing financial market data.
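The TSEWC algorithm above builds on elastic weight consolidation. Shown below is the standard EWC penalty it extends — not the paper's time-series variant — which anchors parameters to their previously learned values in proportion to estimated Fisher information:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Elastic weight consolidation: a quadratic penalty that anchors each
    parameter to its previously learned value, weighted by an estimate of
    its Fisher information (i.e., its importance to past data)."""
    p, o, f = (np.asarray(a, dtype=float) for a in (params, old_params, fisher))
    return 0.5 * lam * float(np.sum(f * (p - o) ** 2))
```

During incremental training the total objective is the new-data loss plus this penalty, so parameters important to earlier market regimes resist change while unimportant ones remain free to adapt.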
42

Milligan, Colin, Allison Littlejohn, and Obiageli Ukadike. "Professional Learning through Massive Open Online Courses". Proceedings of the International Conference on Networked Learning 9 (April 7, 2014): 368–71. http://dx.doi.org/10.54337/nlc.v9.9014.

Full text of the source
Abstract:
This study explores the role of Massive Open Online Courses (MOOCs) in supporting and enabling professional learning, or learning for work. The research examines how professionals self-regulate their learning in MOOCs. The study is informed by contemporary theories of professional learning, that argue that conventional forms of learning are no longer effective in knowledge intensive domains. As work roles evolve and learning for work becomes continual and personalised, self-regulation is becoming a critical element of professional learning. Yet, established forms of professional learning generally have not taken advantage of the affordances of social, semantic technologies to support self-regulated learning. MOOCs present a potentially useful approach to professional learning that may be designed to encourage self-regulated learning. The study is contextualised within ‘Fundamentals of clinical trials', a MOOC for health professionals designed and run by the Harvard Medical School, Harvard School of Public Health, and Harvard Catalyst, the Harvard Clinical and Translational Science Center, and offered by edX. The research design builds on the authors' previous studies in the areas of Technology Enhanced Learning and Professional Learning and in particular, research which explored the learning behaviours of education professionals in the Change 11 MOOC. The previous studies demonstrated a link between individual learners SRL profile and their goal setting behaviour in the Change 11 MOOC as well as uncovering other factors which influenced their engagement with the MOOC environment. The present study extends the original study by further focusing on specific aspects of self-regulation identified by the Change11 studies and our parallel studies of self-regulated learning in knowledge workers. 
The analysis of learner behaviour in the Fundamentals of Clinical Trials is complemented by additional exploration of the design considerations of the MOOC, to determine the extent to which course design can support or inhibit self-regulation of learning. The study poses three research questions: How are Massive Open Online Courses currently designed to support self-regulated learning? What self-regulated learning strategies and behaviours do professionals adopt? and How can MOOCs be designed to encourage professionals to self-regulate their learning? Validated methods and instruments from the original study will be adapted and employed. The research is unique in providing evidence around two critical aspects of MOOCs that are not well understood: the skills and dispositions necessary for self-regulated learning in MOOC environments, and how MOOCs can be designed to encourage the development and emergence of SRL behaviours.
43

Lin, Fangyuan. "Sentiment analysis in online education: An analytical approach and application". Applied and Computational Engineering 33, no. 1 (January 22, 2024): 9–17. http://dx.doi.org/10.54254/2755-2721/33/20230225.

Full text of the source
Abstract:
This paper presents a groundbreaking approach to the application of sentiment analysis within the domain of online education. By introducing an innovative methodology, the aim is to streamline the process of automatically evaluating sentiments and extracting opinions from the vast sea of content produced by learners during their online interactions. This not only aids educators in swiftly gauging the general mood and perspective of their student body, but also allows them to delve deeper into the nuanced feedback provided, thus ensuring the continual improvement of course quality. In an era where digital learning platforms are growing exponentially, understanding students' attitudes, concerns, and overall satisfaction is paramount. Our methodology, therefore, is not just a technical advancement, but also a strategic tool for educational institutions aiming to thrive in the digital age. The current research landscape, while expansive, has often overlooked the significance of real-time sentiment analysis in e-learning environments. This study, therefore, bridges an important gap, bringing to the forefront the importance of harnessing student feedback in a digital format, allowing educators to tailor their approach for optimal student engagement and success.
44

Sood, Parul, Alka Thapa, Nidhi Sharma, Navdeep Kaur, Niyati Chitkara, Honey Chitkara, and Prabhjot Kaur. "Action Research for Continual Improvement of Online Learning of Kindergarten Students at Chitkara International School During Covid-19". ECS Transactions 107, no. 1 (April 24, 2022): 6889–903. http://dx.doi.org/10.1149/10701.6889ecst.

Full text of the source
Abstract:
This study examines the effect of strategies adopted by Chitkara International School for the continuous improvement of online learning of kindergarten students during the transition of classes from offline mode to online mode. The objectives of this study were to increase the interest of the students in online classes, to increase the engagement of the parents in the child’s learning process, and to upskill the teachers for handling online classes during the COVID-19 period. The study was conducted on 373 kindergarten students, their parents, and 25 teachers of Chitkara International School, using a pre-test/post-test single-group design. The sample was analyzed based on qualitative and quantitative data. The findings of the study reported that there was significant improvement in the attendance of the students of the kindergarten classes, the cooperation and involvement of the parents increased, and there was a significant increase in the participation of the students in all the activities. This study was conducted with the purpose of providing quality education to students even during COVID times, and all efforts were made in sync with Sustainable Development Goal 4, which focuses on quality education.
45

Ye, Fei, and Adrian G. Bors. "Continual compression model for online continual learning". Applied Soft Computing, November 2024, 112427. http://dx.doi.org/10.1016/j.asoc.2024.112427.

Full text of the source
46

Du, Zhekai, Zhe Xiao, Ruijing Wang, Ruimeng Gan, and Jingjing Li. "Online Continual Learning with Declarative Memory". SSRN Electronic Journal, 2022. http://dx.doi.org/10.2139/ssrn.4293723.

Full text of the source
47

Yu, Yang, Zhekai Du, Lichao Meng, Jingjing Li, and Jiang Hu. "Adaptive online continual multi-view learning". Information Fusion, September 2023, 102020. http://dx.doi.org/10.1016/j.inffus.2023.102020.

Full text of the source
48

Xiao, Zhe, Zhekai Du, Ruijin Wang, Ruimeng Gan, and Jingjing Li. "Online continual learning with declarative memory". Neural Networks, March 2023. http://dx.doi.org/10.1016/j.neunet.2023.03.025.

Full text of the source
49

Schiemer, Martin, Lei Fang, Simon Dobson, and Juan Ye. "Online continual learning for human activity recognition". Pervasive and Mobile Computing, June 2023, 101817. http://dx.doi.org/10.1016/j.pmcj.2023.101817.

Full text of the source
50

Schiemer, Martin, Lei Fang, Simon Dobson, and Juan Ye. "Online Continual Learning for Human Activity Recognition". SSRN Electronic Journal, 2023. http://dx.doi.org/10.2139/ssrn.4357622.

Full text of the source