Journal articles on the topic 'Complex activity recognition'

Consult the top 50 journal articles for your research on the topic 'Complex activity recognition.'

1

Jalloul, Nahed, Fabienne Porée, Geoffrey Viardot, Philippe L'Hostis, and Guy Carrault. "Activity Recognition Using Complex Network Analysis." IEEE Journal of Biomedical and Health Informatics 22, no. 4 (July 2018): 989–1000. http://dx.doi.org/10.1109/jbhi.2017.2762404.

2

Fusier, Florent, Valéry Valentin, François Brémond, Monique Thonnat, Mark Borg, David Thirde, and James Ferryman. "Video understanding for complex activity recognition." Machine Vision and Applications 18, no. 3-4 (February 13, 2007): 167–88. http://dx.doi.org/10.1007/s00138-006-0054-y.

3

Li, Wei-Xin, and Nuno Vasconcelos. "Complex Activity Recognition Via Attribute Dynamics." International Journal of Computer Vision 122, no. 2 (June 21, 2016): 334–70. http://dx.doi.org/10.1007/s11263-016-0918-1.

4

Saguna, Saguna, Arkady Zaslavsky, and Dipanjan Chakraborty. "Complex activity recognition using context-driven activity theory and activity signatures." ACM Transactions on Computer-Human Interaction 20, no. 6 (December 2013): 1–34. http://dx.doi.org/10.1145/2490832.

5

Maradani, Lohitha. "Human Activity Recognition." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 1983–88. http://dx.doi.org/10.22214/ijraset.2022.45630.

Abstract:
Human Activity Recognition (HAR) is one of the active research areas in computer vision and human-computer interaction. However, it remains a very complex task due to unresolved challenges such as sensor motion, sensor placement, cluttered backgrounds, and the inherent variability in the way different people perform activities. HAR is the ability to interpret human body gestures or motion via sensors and determine the underlying activity or action. Many daily tasks can be simplified or automated if they can be recognized by a HAR system. Typically, a HAR system is either supervised or unsupervised: a supervised system requires prior training with dedicated datasets, while an unsupervised system is configured with a set of rules during development. HAR is considered an important component in various research contexts such as surveillance, healthcare, and human-computer interaction.
6

Kashanian, Hooman, Saeed Sharif, and Ralf Akildyz. "Estimation of Walking rate in Complex activity recognition." International Journal of Computer Applications Technology and Research 5, no. 9 (September 4, 2016): 568–77. http://dx.doi.org/10.7753/ijcatr0509.1003.

7

Xia, Li-min, Xiao-ting Shi, and Hong-bin Tu. "An approach for complex activity recognition by key frames." Journal of Central South University 22, no. 9 (September 2015): 3450–57. http://dx.doi.org/10.1007/s11771-015-2885-z.

8

Omolaja, Adebola, Abayomi Otebolaku, and Ali Alfoudi. "Context-Aware Complex Human Activity Recognition Using Hybrid Deep Learning Models." Applied Sciences 12, no. 18 (September 16, 2022): 9305. http://dx.doi.org/10.3390/app12189305.

Abstract:
Smart devices, such as smartphones, smartwatches, etc., are examples of promising platforms for automatic recognition of human activities. However, it is difficult to accurately monitor complex human activities on these platforms due to interclass pattern similarities, which occur when different human activities exhibit similar signal patterns or characteristics. Current smartphone-based recognition systems depend on traditional sensors, such as accelerometers and gyroscopes, which are built into these devices. Therefore, apart from using information from the traditional sensors, these systems lack the contextual information to support automatic activity recognition. In this article, we explore environmental contexts, such as illumination (light conditions) and noise level, to support sensory data obtained from the traditional sensors using a hybrid of Convolutional Neural Network and Long Short-Term Memory (CNN–LSTM) learning models. The models performed sensor fusion by augmenting low-level sensor signals with rich contextual data to improve the models’ recognition accuracy and generalization. Two sets of experiments were performed to validate the proposed solution. The first set of experiments used triaxial inertial sensing signals to train baseline models, while the second set of experiments combined the inertial signals with contextual information from environmental sensors. The obtained results demonstrate that hybrid deep learning models using contextual information, such as environmental noise level and light conditions, achieved better recognition accuracy than the traditional baseline activity recognition models without contextual information.
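
For orientation, the sketch below illustrates the general CNN–LSTM fusion idea described in this abstract: convolutional layers summarise raw inertial windows, an LSTM models their temporal order, and contextual features are concatenated before classification. The layer sizes, the two context inputs, and the class count are illustrative assumptions, not the authors' configuration.

```python
# Minimal PyTorch sketch of a CNN-LSTM that fuses inertial windows with
# contextual features (e.g., light and noise level). Sizes are illustrative.
import torch
import torch.nn as nn

class ContextCNNLSTM(nn.Module):
    def __init__(self, n_channels=6, n_context=2, n_classes=8):
        super().__init__()
        # 1-D convolutions extract local motion patterns from the raw sensor axes.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models the temporal ordering of the convolutional features.
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        # Contextual features are concatenated before the classifier head.
        self.head = nn.Sequential(
            nn.Linear(128 + n_context, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x, context):
        # x: (batch, channels, time), context: (batch, n_context)
        feats = self.cnn(x)                    # (batch, 64, time/2)
        feats = feats.transpose(1, 2)          # (batch, time/2, 64)
        _, (h, _) = self.lstm(feats)           # h: (1, batch, 128)
        fused = torch.cat([h[-1], context], dim=1)
        return self.head(fused)                # unnormalized class scores

model = ContextCNNLSTM()
scores = model(torch.randn(4, 6, 128), torch.randn(4, 2))  # (4, 8)
```
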
9

Huan, Ruohong, Chengxi Jiang, Luoqi Ge, Jia Shu, Ziwei Zhan, Peng Chen, Kaikai Chi, and Ronghua Liang. "Human Complex Activity Recognition With Sensor Data Using Multiple Features." IEEE Sensors Journal 22, no. 1 (January 1, 2022): 757–75. http://dx.doi.org/10.1109/jsen.2021.3130913.

10

Atif Hanif, Muhammad, Tallha Akram, Aamir Shahzad, Muhammad Attique Khan, Usman Tariq, Jung-In Choi, Yunyoung Nam, and Zanib Zulfiqar. "Smart Devices Based Multisensory Approach for Complex Human Activity Recognition." Computers, Materials & Continua 70, no. 2 (2022): 3221–34. http://dx.doi.org/10.32604/cmc.2022.019815.

11

Xu, Cheng, Duo Chai, Jie He, Xiaotong Zhang, and Shihong Duan. "InnoHAR: A Deep Neural Network for Complex Human Activity Recognition." IEEE Access 7 (2019): 9893–902. http://dx.doi.org/10.1109/access.2018.2890675.

12

Peng, Liangying, Ling Chen, Menghan Wu, and Gencai Chen. "Complex Activity Recognition Using Acceleration, Vital Sign, and Location Data." IEEE Transactions on Mobile Computing 18, no. 7 (July 1, 2019): 1488–98. http://dx.doi.org/10.1109/tmc.2018.2863292.

13

Mobark, Mohammed, Suriayati Chuprat, Teddy Mantoro, and Azizul Azizan. "Utilization of Mobile Phone Sensors for Complex Human Activity Recognition." Advanced Science Letters 23, no. 6 (June 1, 2017): 5466–71. http://dx.doi.org/10.1166/asl.2017.7401.

14

Ferber, G. "Syntactic Pattern Recognition of Intermittent EEG Activity." Methods of Information in Medicine 24, no. 02 (April 1985): 79–84. http://dx.doi.org/10.1055/s-0038-1635362.

Abstract:
Up to now, computerised processing of EEG signals has entered the domain of clinical application at most with respect to background activity and the recognition of some intermittent basic patterns. Although the EEG is a multichannel signal, this recognition is performed separately for each channel, taking into account at most the immediate past and future. The result is a set of intermittent basic patterns. They are to be looked at as constituents of “complex patterns” which correspond to the entities used in the visual assessment. In this paper we present a method of uniting these basic patterns by means of syntactic pattern recognition algorithms. Together with this process the basic patterns are validated or devalidated, and the resulting complex EEG pattern is allocated to one of several pattern classes. To demonstrate how this procedure works, an example of artifact recognition is used. In order to get acceptable performance, the process of syntactic pattern recognition is divided into a sequence of three steps. The resulting algorithms can be used for assessing clinical routine EEG. Some results are reported.
15

Mekruksavanich, Sakorn, and Anuchit Jitpattanakul. "RNN-based deep learning for physical activity recognition using smartwatch sensors: A case study of simple and complex activity recognition." Mathematical Biosciences and Engineering 19, no. 6 (2022): 5671–98. http://dx.doi.org/10.3934/mbe.2022265.

Abstract:
Currently, identification of complex human activities is experiencing exponential growth through the use of deep learning algorithms. Conventional strategies for recognizing human activity generally rely on handcrafted characteristics from heuristic processes in time and frequency domains. The advancement of deep learning algorithms has addressed most of these issues by automatically extracting features from multimodal sensors to correctly classify human physical activity. This study proposed an attention-based bidirectional gated recurrent unit as Att-BiGRU to enhance recurrent neural networks. This deep learning model allowed flexible forwarding and reverse sequences to extract temporal-dependent characteristics for efficient complex activity recognition. The retrieved temporal characteristics were then used to exemplify essential information through an attention mechanism. A human activity recognition (HAR) methodology combined with our proposed model was evaluated using the publicly available datasets containing physical activity data collected by accelerometers and gyroscopes incorporated in a wristwatch. Simulation experiments showed that attention mechanisms significantly enhanced performance in recognizing complex human activity.
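
A minimal sketch of an attention-based bidirectional GRU of the kind described above: the GRU reads the sensor window in both directions, and an attention layer weights time steps before pooling. The hidden size, window length, and class count are assumptions, not the parameters used in the paper.

```python
# Sketch of an attention-based bidirectional GRU (Att-BiGRU-style) classifier.
# The attention layer scores each time step before pooling; sizes are assumed.
import torch
import torch.nn as nn

class AttBiGRU(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per time step
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels) raw accelerometer/gyroscope window
        h, _ = self.gru(x)                         # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        pooled = (w * h).sum(dim=1)                # weighted temporal pooling
        return self.fc(pooled)

logits = AttBiGRU()(torch.randn(4, 200, 6))        # (4, 10)
```
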
16

Sanchez Guinea, Alejandro, Mehran Sarabchian, and Max Mühlhäuser. "Improving Wearable-Based Activity Recognition Using Image Representations." Sensors 22, no. 5 (February 25, 2022): 1840. http://dx.doi.org/10.3390/s22051840.

Abstract:
Activity recognition based on inertial sensors is an essential task in mobile and ubiquitous computing. To date, the best performing approaches in this task are based on deep learning models. Although the performance of the approaches has been increasingly improving, a number of issues still remain. Specifically, in this paper we focus on the dependence of today’s state-of-the-art approaches on complex ad hoc deep learning convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both, which require specialized knowledge and considerable effort for their construction and optimal tuning. To address this issue, in this paper we propose an approach that automatically transforms the inertial sensors time-series data into images that represent in pixel form patterns found over time, allowing even a simple CNN to outperform complex ad hoc deep learning models that combine RNNs and CNNs for activity recognition. We conducted an extensive evaluation considering seven benchmark datasets that are among the most relevant in activity recognition. Our results demonstrate that our approach is able to outperform the state of the art in all cases, based on image representations that are generated through a process that is easy to implement, modify, and extend further, without the need of developing complex deep learning models.
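
The abstract does not spell out the transformation used, so the sketch below uses a Gramian Angular Summation Field purely as one common way of rendering a 1-D inertial signal as an image that a plain CNN can consume; it is not the authors' encoding.

```python
# Illustrative time-series-to-image encoding (Gramian Angular Summation Field).
# The paper's own transformation may differ; this only shows the general idea
# of turning a 1-D inertial signal into a 2-D "image" a simple CNN can consume.
import numpy as np

def gasf_image(signal):
    s = np.asarray(signal, dtype=float)
    # Rescale to [-1, 1] so values can be read as cosines of angles.
    s = 2 * (s - s.min()) / (s.max() - s.min() + 1e-12) - 1
    phi = np.arccos(np.clip(s, -1.0, 1.0))
    # GASF(i, j) = cos(phi_i + phi_j): pairwise temporal relations become pixels.
    return np.cos(phi[:, None] + phi[None, :])

window = np.sin(np.linspace(0, 6 * np.pi, 128))     # stand-in accelerometer axis
image = gasf_image(window)                          # (128, 128) array for a CNN
```
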
17

Ding, Xue, Chunlei Hu, Weiliang Xie, Yi Zhong, Jianfei Yang, and Ting Jiang. "Device-Free Multi-Location Human Activity Recognition Using Deep Complex Network." Sensors 22, no. 16 (August 18, 2022): 6178. http://dx.doi.org/10.3390/s22166178.

Abstract:
Wi-Fi-based human activity recognition has attracted broad attention for its advantages, which include being device-free, privacy-protected, unaffected by light, etc. Owing to the development of artificial intelligence techniques, existing methods have made great improvements in sensing accuracy. However, the performance of multi-location recognition is still a challenging issue. According to the principle of wireless sensing, wireless signals that characterize activity are also seriously affected by location variations. Existing solutions depend on adequate data samples at different locations, which are labor-intensive. To solve the above concerns, we present an amplitude- and phase-enhanced deep complex network (AP-DCN)-based multi-location human activity recognition method, which can fully utilize the amplitude and phase information simultaneously so as to mine more abundant information from limited data samples. Furthermore, considering the unbalanced sample number at different locations, we propose a perception method based on the deep complex network-transfer learning (DCN-TL) structure, which effectively realizes knowledge sharing among various locations. To fully evaluate the performance of the proposed method, comprehensive experiments have been carried out with a dataset collected in an office environment with 24 locations and five activities. The experimental results illustrate that the approaches can achieve 96.85% and 94.02% recognition accuracy, respectively.
18

Sakr, Nehal A., Mervat Abu-ElKheir, A. Atwan, and H. H. Soliman. "A multilabel classification approach for complex human activities using a combination of emerging patterns and fuzzy sets." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 4 (August 1, 2019): 2993. http://dx.doi.org/10.11591/ijece.v9i4.pp2993-3001.

Abstract:
In our daily lives, humans perform different Activities of Daily Living (ADL), such as cooking and studying. According to their nature, humans perform these activities in a sequential/simple or an overlapping/complex scenario. Many research attempts addressed simple activity recognition, but complex activity recognition is still a challenging issue. Recognition of complex activities is a multilabel classification problem, such that a test instance is assigned to multiple overlapping activities. Existing data-driven techniques for complex activity recognition can recognize a maximum of two overlapping activities and require a training dataset of complex (i.e., multilabel) activities. In this paper, we propose a multilabel classification approach for complex activity recognition using a combination of Emerging Patterns and Fuzzy Sets. In our approach, we require a training dataset of only simple (i.e., single-label) activities. First, we use a pattern mining technique to extract discriminative features called Strong Jumping Emerging Patterns (SJEPs) that exclusively represent each activity. Then, our scoring function takes SJEPs and fuzzy membership values of incoming sensor data and outputs the activity label(s). We validate our approach using two different datasets. Experimental results demonstrate the efficiency and superiority of our approach against other approaches.
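
To make the scoring idea concrete, the toy sketch below matches incoming sensor readings against per-activity patterns and combines fuzzy membership values to allow multiple labels at once; the patterns, membership functions, and threshold are invented for illustration and are not the mined SJEPs from the paper.

```python
# Toy sketch of scoring overlapping activities with emerging patterns plus
# fuzzy memberships. Patterns, membership functions, and the threshold are
# invented for illustration; they are not the paper's mined SJEPs.
def tri(x, a, b, c):
    """Triangular fuzzy membership of x in (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each activity maps to patterns: lists of (sensor, membership parameters).
PATTERNS = {
    "cooking":  [[("stove_temp", 40, 80, 120), ("kitchen_motion", 0.3, 0.7, 1.0)]],
    "studying": [[("desk_light", 200, 400, 600), ("kitchen_motion", 0.0, 0.0, 0.2)]],
}

def recognize(reading, threshold=0.5):
    """Return every activity whose best-matching pattern exceeds the threshold."""
    labels = []
    for activity, patterns in PATTERNS.items():
        best = max(
            min(tri(reading.get(sensor, 0.0), a, b, c) for sensor, a, b, c in pat)
            for pat in patterns
        )
        if best >= threshold:
            labels.append(activity)
    return labels

print(recognize({"stove_temp": 90, "kitchen_motion": 0.8, "desk_light": 350}))
```
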
19

Shahad, Rabiah Adawiyah, Mohamad Hanif Md Saad, and Aini Hussain. "Activity Recognition for Smart Building Application Using Complex Event Processing Approach." International Journal on Advanced Science, Engineering and Information Technology 8, no. 2 (March 31, 2018): 315. http://dx.doi.org/10.18517/ijaseit.8.2.2575.

20

Shoaib, Muhammad, Stephan Bosch, Ozlem Incel, Hans Scholten, and Paul Havinga. "Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors." Sensors 16, no. 4 (March 24, 2016): 426. http://dx.doi.org/10.3390/s16040426.

21

Mohamed, Raihani, Thinagaran Perumal, Md Nasir Sulaiman, and Norwati Mustapha. "Multi Resident Complex Activity Recognition in Smart Home: A Literature Review." International Journal of Smart Home 11, no. 6 (June 30, 2017): 21–32. http://dx.doi.org/10.14257/ijsh.2017.11.6.03.

22

Bharti, Pratool, Debraj De, Sriram Chellappan, and Sajal K. Das. "HuMAn: Complex Activity Recognition with Multi-Modal Multi-Positional Body Sensing." IEEE Transactions on Mobile Computing 18, no. 4 (April 1, 2019): 857–70. http://dx.doi.org/10.1109/tmc.2018.2841905.

23

Guo, De Feng, Bin Liu, Xiao Tian Jin, and Hong Jian Liu. "Human Activity Recognition Using Smart-Phone Sensors." Applied Mechanics and Materials 571-572 (June 2014): 1019–29. http://dx.doi.org/10.4028/www.scientific.net/amm.571-572.1019.

Abstract:
Activity recognition is a challenging problem for context-aware systems and applications. Many studies in this field have mainly adopted techniques based on supervised or semi-supervised learning algorithms to recognize activities from movement patterns gathered through sensors, but these existing systems struggle with feature representation of sensor data and multi-sensor integration. In this paper, we propose a novel feature learning method for activity recognition based on entropy and construct an activity recognition model with a multi-class AdaBoost algorithm. Experiments on sensor data from a real dataset demonstrate the significant potential of our method to extract features for activity recognition. The experimental results also show that the recognition model based on multi-class AdaBoost is effective. The average precision and recall for six activities are 95.9% and 95.9%, respectively, which are higher than the results obtained by using other methods such as Support Vector Machine (SVM) or K-Nearest Neighbor (KNN).
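
The sketch below shows the shape of such a pipeline: a histogram-based entropy feature per sensor axis and window, fed to scikit-learn's multi-class AdaBoost with cross-validation. The synthetic data and hyperparameters are placeholders, not the paper's setup.

```python
# Sketch of the pipeline shape: entropy features per sensor window, then a
# multi-class AdaBoost classifier. Data here is synthetic; parameters are
# placeholders rather than the paper's settings.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def window_entropy(window, bins=16):
    """Shannon entropy of each sensor axis in a (time, axes) window."""
    feats = []
    for axis in window.T:
        counts, _ = np.histogram(axis, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]
        feats.append(-(p * np.log2(p)).sum())
    return feats

rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 128, 3))            # 300 windows, 3-axis signal
X = np.array([window_entropy(w) for w in windows])  # (300, 3) entropy features
y = rng.integers(0, 6, size=300)                    # 6 activity labels

clf = AdaBoostClassifier(n_estimators=100)
print(cross_val_score(clf, X, y, cv=5).mean())
```
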
24

Zhang, Mingyuan, Shuo Chen, Xuefeng Zhao, and Zhen Yang. "Research on Construction Workers’ Activity Recognition Based on Smartphone." Sensors 18, no. 8 (August 14, 2018): 2667. http://dx.doi.org/10.3390/s18082667.

Abstract:
This research on identification and classification of construction workers’ activity contributes to the monitoring and management of individuals. Since a single sensor cannot meet management requirements of a complex construction environment, and integrated multiple sensors usually lack systemic flexibility and stability, this paper proposes an approach to construction-activity recognition based on smartphones. The accelerometers and gyroscopes embedded in smartphones were utilized to collect three-axis acceleration and angle data of eight main activities with relatively high frequency in simulated floor-reinforcing steel work. Data acquisition from multiple body parts enhanced the dimensionality of activity features to better distinguish between different activities. The CART algorithm of a decision tree was adopted to build a classification training model whose effectiveness was evaluated and verified through cross-validation. The results showed that the accuracy of classification for overall samples was up to 89.85% and the accuracy of prediction was 94.91%. The feasibility of using smartphones as data-acquisition tools in construction management was verified. Moreover, it was proved that the combination of a decision-tree algorithm with smartphones could achieve complex activity classification and identification.
25

Mohammad, Mohammad, Randall D. York, Jonathan Hommel, and Geoffrey M. Kapler. "Characterization of a Novel Origin Recognition Complex-Like Complex: Implications for DNA Recognition, Cell Cycle Control, and Locus-Specific Gene Amplification." Molecular and Cellular Biology 23, no. 14 (July 15, 2003): 5005–17. http://dx.doi.org/10.1128/mcb.23.14.5005-5017.2003.

Abstract:
The origin recognition complex (ORC) plays a central role in eukaryotic DNA replication. Here we describe a unique ORC-like complex in Tetrahymena thermophila, TIF4, which bound in an ATP-dependent manner to sequences required for cell cycle-controlled replication and gene amplification (ribosomal DNA [rDNA] type I elements). TIF4's mode of DNA recognition was distinct from that of other characterized ORCs, as it bound exclusively to single-stranded DNA. In contrast to yeast ORCs, TIF4 DNA binding activity was cell cycle regulated and peaked during S phase, coincident with the redistribution of the Orc2-related subunit, p69, from the cytoplasm to the macronucleus. Origin-binding activity and nuclear p69 immunoreactivity were further regulated during development, where they distinguished replicating from nonreplicating nuclei. Both activities were lost from germ line micronuclei following the programmed arrest of micronuclear replication. Replicating macronuclei stained with Orc2 antibodies throughout development in wild-type cells but failed to do so in the amplification-defective rmm11 mutant. Collectively, these findings indicate that the regulation of TIF4 is intimately tied to the cell cycle and developmentally programmed replication cycles. They further implicate TIF4 in rDNA gene amplification. As type I elements interact with other sequence-specific single-strand breaks (in vitro and in vivo), the dynamic interplay of Orc-like (TIF4) and non-ORC-like proteins with this replication determinant may provide a novel mechanism for regulation.
26

Razzaq, Muhammad Asif, Ian Cleland, Chris Nugent, and Sungyoung Lee. "SemImput: Bridging Semantic Imputation with Deep Learning for Complex Human Activity Recognition." Sensors 20, no. 10 (May 13, 2020): 2771. http://dx.doi.org/10.3390/s20102771.

Abstract:
The recognition of activities of daily living (ADL) in smart environments is a well-known and important research area, which presents the real-time state of humans in pervasive computing. The process of recognizing human activities generally involves deploying a set of obtrusive and unobtrusive sensors, pre-processing the raw data, and building classification models using machine learning (ML) algorithms. Integrating data from multiple sensors is a challenging task due to the dynamic nature of data sources. This is further complicated by semantic and syntactic differences in these data sources. These differences become even more complex if the data generated are imperfect, which ultimately has a direct impact on their usefulness in yielding an accurate classifier. In this study, we propose a semantic imputation framework to improve the quality of sensor data using ontology-based semantic similarity learning. This is achieved by identifying semantic correlations among sensor events through SPARQL queries, and by performing a time-series longitudinal imputation. Furthermore, we applied a deep learning (DL)-based artificial neural network (ANN) to public datasets to demonstrate the applicability and validity of the proposed approach. The results showed higher accuracy with semantically imputed datasets using the ANN. We also present a detailed comparative analysis, comparing the results with the state of the art from the literature. We found that our semantically imputed datasets improved classification accuracy, reaching up to 95.78%, thus proving the effectiveness and robustness of the learned models.
27

Zhang, Yongmian, Yifan Zhang, E. Swears, N. Larios, Ziheng Wang, and Qiang Ji. "Modeling Temporal Interactions with Interval Temporal Bayesian Networks for Complex Activity Recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 10 (October 2013): 2468–83. http://dx.doi.org/10.1109/tpami.2013.33.

28

Voulodimos, Athanasios, Dimitrios Kosmopoulos, Georgios Vasileiou, Emmanuel Sardis, Vasileios Anagnostopoulos, Constantinos Lalos, Anastasios Doulamis, and Theodora Varvarigou. "A Threefold Dataset for Activity and Workflow Recognition in Complex Industrial Environments." IEEE Multimedia 19, no. 3 (July 2012): 42–52. http://dx.doi.org/10.1109/mmul.2012.31.

29

Liu, Li, Shu Wang, Guoxin Su, Zi-Gang Huang, and Ming Liu. "Towards complex activity recognition using a Bayesian network-based probabilistic generative framework." Pattern Recognition 68 (August 2017): 295–309. http://dx.doi.org/10.1016/j.patcog.2017.02.028.

30

Liu, Li, Yuxin Peng, Shu Wang, Ming Liu, and Zigang Huang. "Complex activity recognition using time series pattern dictionary learned from ubiquitous sensors." Information Sciences 340-341 (May 2016): 41–57. http://dx.doi.org/10.1016/j.ins.2016.01.020.

31

Triboan, Darpan, Liming Chen, Feng Chen, and Zumin Wang. "Semantic segmentation of real-time sensor data stream for complex activity recognition." Personal and Ubiquitous Computing 21, no. 3 (February 18, 2017): 411–25. http://dx.doi.org/10.1007/s00779-017-1005-5.

32

Qi, Jin, Zhangjing Wang, Xiancheng Lin, and Chunming Li. "Learning Complex Spatio-Temporal Configurations of Body Joints for Online Activity Recognition." IEEE Transactions on Human-Machine Systems 48, no. 6 (December 2018): 637–47. http://dx.doi.org/10.1109/thms.2018.2850301.

33

Jiang, Yajun, Paolo Rossi, and Charalampos G. Kalodimos. "Structural basis for client recognition and activity of Hsp40 chaperones." Science 365, no. 6459 (September 19, 2019): 1313–19. http://dx.doi.org/10.1126/science.aax1280.

Abstract:
Hsp70 and Hsp40 chaperones work synergistically in a wide range of biological processes including protein synthesis, membrane translocation, and folding. We used nuclear magnetic resonance spectroscopy to determine the solution structure and dynamic features of an Hsp40 in complex with an unfolded client protein. Atomic structures of the various binding sites in the client complexed to the binding domains of the Hsp40 reveal the recognition pattern. Hsp40 engages the client in a highly dynamic fashion using a multivalent binding mechanism that alters the folding properties of the client. Different Hsp40 family members have different numbers of client-binding sites with distinct sequence selectivity, providing additional mechanisms for activity regulation and function modification. Hsp70 binding to Hsp40 displaces the unfolded client. The activity of Hsp40 is altered in its complex with Hsp70, further regulating client binding and release.
34

Patil, Sonali, Siddhi Shelke, Shivani Joldapke, Vikrant Jumle, and Sakshi Chikhale. "Review on Human Activity Recognition for Military Restricted Areas." International Journal for Research in Applied Science and Engineering Technology 10, no. 12 (December 31, 2022): 603–6. http://dx.doi.org/10.22214/ijraset.2022.47926.

Abstract:
Human Activity Recognition (HAR) is an active field of research and scientific development in which various models have been proposed for identifying and categorizing activities using machine learning. HAR has reached a remarkable milestone in the area of computer vision. Besides applications in human-computer interaction, surveillance systems, and robotics, it has lately extended its applicability to healthcare, multimedia retrieval, social networking, and education. It aims to bring the latest technologies together to develop complex assistive systems with adaptive capability and learning behaviour. HAR interprets human motion using computer and machine vision technologies to identify and detect simple and complex actions in the real world. This paper presents research on the surveillance of restricted military areas. Our scope is to develop a live monitoring system for tracking illegal activities in restricted areas for border security, which has been an issue of concern for decades. We introduce a deep learning model that learns to classify human actions without prior knowledge. Features of the image or video set are extracted and used to classify whether an activity is illegal or not. Many harmful actions can be avoided, or at least have their negative effects reduced, as a result of adopting this concept. Finally, the activity recognition rate showed good performance.
35

Kee, Y. J., M. N. Shah Zainudin, M. I. Idris, R. H. Ramlee, and M. R. Kamarudin. "Activity Recognition on Subject Independent Using Machine Learning." Cybernetics and Information Technologies 20, no. 3 (September 1, 2020): 64–74. http://dx.doi.org/10.2478/cait-2020-0028.

Abstract:
Recent Activities of Daily Living (ADL) research not only tackles simple activities but also caters to a wide range of complex activities. Even when the same activity is carried out under the same environmental conditions, the acceleration signal obtained from each subject differs considerably. This happens because the pattern of motion generated by each subject varies with several aspects such as age, gender, emotion, and personality. This project therefore compares the accuracy of various machine learning models for ADL classification. On top of that, this research work also scrutinizes the effectiveness of various feature selection methods to identify the most relevant attributes for ADL classification. As a result, Random Forest was able to achieve the highest accuracy of 83.3% in the subject-independent setting for ADL classification. Meanwhile, the CFS Subset Evaluator is considered to be a good feature selector, as it successfully selected the 8 most relevant features compared with the Correlation and Information Gain evaluators.
36

Lateef Haroon P.S, Abdul, and U. Eranna. "A simplified machine learning approach for recognizing human activity." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 5 (October 1, 2019): 3465. http://dx.doi.org/10.11591/ijece.v9i5.pp3465-3473.

Abstract:
With the wide range of devices capturing real-time event feeds, there has been significant progress in digital image processing for activity detection and recognition. Despite the presence of such devices, they are not adequate to meet the dynamic monitoring demands of visual surveillance systems, and their features are highly limited for complex human activity recognition. A review of existing systems confirms that there is still large scope for enhancement, as they lack applicability to real-life events and do not offer optimal system performance. Therefore, this manuscript presents a model for an activity recognition system in which recognition accuracy and system performance are kept in good balance. The study presents a simplified process for extracting features from the spatial and temporal traits of event feeds, which are then passed to a machine learning mechanism to boost recognition performance.
37

Li, Chenglin, Carrie Lu Tong, Di Niu, Bei Jiang, Xiao Zuo, Lei Cheng, Jian Xiong, and Jianming Yang. "Similarity Embedding Networks for Robust Human Activity Recognition." ACM Transactions on Knowledge Discovery from Data 15, no. 6 (May 19, 2021): 1–17. http://dx.doi.org/10.1145/3448021.

Abstract:
Deep learning models for human activity recognition (HAR) based on sensor data have been heavily studied recently. However, the generalization ability of deep models on complex real-world HAR data is limited by the availability of high-quality labeled activity data, which are hard to obtain. In this article, we design a similarity embedding neural network that maps input sensor signals onto real vectors through carefully designed convolutional and Long Short-Term Memory (LSTM) layers. The embedding network is trained with a pairwise similarity loss, encouraging the clustering of samples from the same class in the embedded real space, and can be effectively trained on a small dataset and even on a noisy dataset with mislabeled samples. Based on the learned embeddings, we further propose both nonparametric and parametric approaches for activity recognition. Extensive evaluation based on two public datasets has shown that the proposed similarity embedding network significantly outperforms state-of-the-art deep models on HAR classification tasks, is robust to mislabeled samples in the training set, and can also be used to effectively denoise a noisy dataset.
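
A minimal sketch of the pairwise similarity training idea described above: an encoder maps sensor windows to embeddings, same-class pairs are pulled together, and different-class pairs are pushed apart by a margin. The encoder and margin are simplified stand-ins, not the paper's convolutional-LSTM architecture.

```python
# Sketch of a pairwise similarity (contrastive) loss on an embedding network:
# windows from the same activity are pulled together, different ones pushed apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                     # maps a flattened sensor window
    nn.Linear(128 * 3, 256), nn.ReLU(),      # to a 32-dimensional embedding
    nn.Linear(256, 32),
)

def pairwise_similarity_loss(emb, labels, margin=1.0):
    dist = torch.cdist(emb, emb)                         # pairwise distances
    same = (labels[:, None] == labels[None, :]).float()  # 1 if same activity
    pull = same * dist.pow(2)                            # attract same-class pairs
    push = (1 - same) * F.relu(margin - dist).pow(2)     # repel different-class pairs
    return (pull + push).mean()

x = torch.randn(16, 128 * 3)                 # 16 flattened sensor windows
labels = torch.randint(0, 5, (16,))
loss = pairwise_similarity_loss(encoder(x), labels)
loss.backward()
```
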
38

Doniger, Glen M., John J. Foxe, Micah M. Murray, Beth A. Higgins, Joan Gay Snodgrass, Charles E. Schroeder, and Daniel C. Javitt. "Activation Timecourse of Ventral Visual Stream Object-recognition Areas: High Density Electrical Mapping of Perceptual Closure Processes." Journal of Cognitive Neuroscience 12, no. 4 (July 2000): 615–21. http://dx.doi.org/10.1162/089892900562372.

Abstract:
Object recognition is achieved even in circumstances when only partial information is available to the observer. Perceptual closure processes are essential in enabling such recognitions to occur. We presented successively less fragmented images while recording high-density event-related potentials (ERPs), which permitted us to monitor brain activity during the perceptual closure processes leading up to object recognition. We reveal a bilateral ERP component (Ncl) that tracks these processes (onsets ∼ 230 msec, maximal at ∼290 msec). Scalp-current density mapping of the Ncl revealed bilateral occipito-temporal scalp foci, which are consistent with generators in the human ventral visual stream, and specifically the lateral-occipital or LO complex as defined by hemodynamic studies of object recognition.
39

Peng, Liangying, Ling Chen, Xiaojie Wu, Haodong Guo, and Gencai Chen. "Hierarchical Complex Activity Representation and Recognition Using Topic Model and Classifier Level Fusion." IEEE Transactions on Biomedical Engineering 64, no. 6 (June 2017): 1369–79. http://dx.doi.org/10.1109/tbme.2016.2604856.

40

Sawyer, Robert G., and Timothy L. Pruett. "Cellular mechanisms of abscess formation: Macrophage procoagulant activity and major histocompatibility complex recognition." Surgery 120, no. 3 (September 1996): 488–95. http://dx.doi.org/10.1016/s0039-6060(96)80068-7.

41

Liu, Li, Shu Wang, Guoxin Su, Bin Hu, Yuxin Peng, Qingyu Xiong, and Junhao Wen. "A framework of mining semantic-based probabilistic event relations for complex activity recognition." Information Sciences 418-419 (December 2017): 13–33. http://dx.doi.org/10.1016/j.ins.2017.07.022.

42

Tabatabaee Malazi, Hadi, and Mohammad Davari. "Combining emerging patterns with random forest for complex activity recognition in smart homes." Applied Intelligence 48, no. 2 (July 5, 2017): 315–30. http://dx.doi.org/10.1007/s10489-017-0976-2.

43

Soro, Andrea, Gino Brunner, Simon Tanner, and Roger Wattenhofer. "Recognition and Repetition Counting for Complex Physical Exercises with Deep Learning." Sensors 19, no. 3 (February 10, 2019): 714. http://dx.doi.org/10.3390/s19030714.

Abstract:
Activity recognition using off-the-shelf smartwatches is an important problem in human activity recognition. In this paper, we present an end-to-end deep learning approach, able to provide probability distributions over activities from raw sensor data. We apply our methods to 10 complex full-body exercises typical in CrossFit, and achieve a classification accuracy of 99.96%. We additionally show that the same neural network used for exercise recognition can also be used in repetition counting. To the best of our knowledge, our approach to repetition counting is novel and performs well, counting correctly within an error of ±1 repetitions in 91% of the performed sets.
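
The paper performs counting with the same neural network used for recognition; as a far simpler stand-in, the sketch below counts repetitions by peak detection on the acceleration magnitude, purely to illustrate the counting task itself.

```python
# Simple repetition counter: peak detection on the acceleration magnitude.
# This is a stand-in illustration, not the paper's neural-network counter.
import numpy as np
from scipy.signal import find_peaks

def count_repetitions(acc_xyz, fs=50, min_period_s=1.0):
    """Rough repetition count from a (time, 3) accelerometer recording."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    magnitude = magnitude - magnitude.mean()
    # One repetition is assumed to produce one dominant peak; enforce a
    # minimum spacing so sensor jitter is not counted twice.
    peaks, _ = find_peaks(magnitude, distance=int(fs * min_period_s),
                          prominence=magnitude.std())
    return len(peaks)

t = np.linspace(0, 20, 20 * 50)                        # 20 s at 50 Hz
z = 9.8 + np.sin(2 * np.pi * 0.5 * t)                  # ~10 slow repetitions
fake_set = np.stack([np.zeros_like(t), np.zeros_like(t), z], axis=1)
print(count_repetitions(fake_set))                     # -> 10
```
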
44

Zhou, Song, and Tianhan Gao. "Brain Activity Recognition Method Based on Attention-Based RNN Mode." Applied Sciences 11, no. 21 (November 5, 2021): 10425. http://dx.doi.org/10.3390/app112110425.

Abstract:
Brain activity recognition based on electroencephalography (EEG) marks a major research orientation in intelligent medicine, especially in human intention prediction, human–computer control and neurological diagnosis. The literature mainly focuses on the recognition of single-person binary brain activity, which is limited in more extensive and complex scenarios. Therefore, brain activity recognition in multiperson and multi-objective scenarios has aroused increasingly more attention. Another challenge is the reduction of recognition accuracy caused by the interference of external noise as well as EEG’s low signal-to-noise ratio. In addition, traditional EEG feature analysis proves to be time-intensive and relies heavily on mature experience. The paper proposes a novel EEG recognition method to address the above issues. The basic feature of EEG is first analyzed according to the band of EEG. The attention-based RNN model is then adopted to eliminate the interference and achieve automatic recognition of the original EEG signal. Finally, we evaluate the proposed method with public and local EEG data sets and perform extensive tests to investigate how various factors affect the recognition results. As shown by the test results, compared with some typical EEG recognition methods, the proposed method offers better recognition accuracy and suitability in multi-objective task scenarios.
45

Fernandez-Olivares, Juan, and Raul Perez. "Driver Activity Recognition by Means of Temporal HTN Planning." Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 375–83. http://dx.doi.org/10.1609/icaps.v30i1.6683.

Abstract:
When delivering a transport service, scheduled driver workplans have to be aligned with complex worldwide hours of service (HoS) regulations, which constrain the amount of working and driving time without resting. The activities of such workplans are recorded by onboard sensors in large temporal event logs. Transport companies are interested in recognizing what a driver is doing, based on the temporal observations from event logs, considering the terms defined by HoS regulations. This work presents an application of temporal HTN planning to plan and goal recognition that, starting from a real event log extracted from a tachograph, identifies different sub-sequences of a driver's daily and weekly driving activity and labels them according to the terms defined by the HoS regulation.
46

Roy, Patrice C., Bruno Bouchard, Abdenour Bouzouane, and Sylvain Giroux. "Ambient Activity Recognition in Smart Environments for Cognitive Assistance." International Journal of Robotics Applications and Technologies 1, no. 1 (January 2013): 29–56. http://dx.doi.org/10.4018/ijrat.2013010103.

Abstract:
In this paper, the authors investigate the challenging key issues that emerge from research in the field of ambient intelligence in smart environments, under the context of activity recognition. The authors clearly describe the specific functional needs inherent in cognitive assistance for effective activity recognition, and then the authors present the fundamental research that addresses this problem in such a context. This paper is more of a survey and an analysis of existing works that have been studied for potential integration into our laboratories, rather than a focused evaluation report. The authors’ objective is to identify gaps in the capabilities of current techniques and to suggest the most productive lines of research to address this complex issue. As such, the contribution is of both theoretical and practical significance.
47

Mekruksavanich, Sakorn, and Anuchit Jitpattanakul. "Deep Convolutional Neural Network with RNNs for Complex Activity Recognition Using Wrist-Worn Wearable Sensor Data." Electronics 10, no. 14 (July 14, 2021): 1685. http://dx.doi.org/10.3390/electronics10141685.

Abstract:
Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including for healthcare monitoring, smart home controlling, and daily sport tracking. However, the growing requirements of many current applications for recognizing complex human activities (CHA) have begun to attract the attention of the HAR research field when compared with simple human activities (SHA). S-HAR has shown that deep learning (DL), a type of machine learning based on complicated artificial neural networks, has a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two different types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focused on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) that performed complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with the efficient RNN-based models was also studied. Experimental studies on the UTwente dataset demonstrated that the suggested hybrid RNN-based models achieved a high level of recognition performance along with a variety of performance indicators, including accuracy, F1-score, and confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in other scenarios (99.44% by using only simple activity data and 98.78% with a combination of simple and complex activities).
48

Buslepp, Jennifer, Rui Zhao, Debora Donnini, Douglas Loftus, Mohamed Saad, Ettore Appella, and Edward J. Collins. "T Cell Activity Correlates with Oligomeric Peptide-Major Histocompatibility Complex Binding on T Cell Surface." Journal of Biological Chemistry 276, no. 50 (October 2, 2001): 47320–28. http://dx.doi.org/10.1074/jbc.m109231200.

Abstract:
Recognition of virally infected cells by CD8+ T cells requires differentiation between self and nonself peptide–class I major histocompatibility complexes (pMHC). Recognition of foreign pMHC by host T cells is a major factor in the rejection of transplanted organs from the same species (allotransplant) or different species (xenotransplant). AHIII12.2 is a murine T cell clone that recognizes the xenogeneic (human) class I MHC HLA-A2.1 molecule (A2) and the syngeneic murine class I MHC H-2 Db molecule (Db). Recognition of both A2 and Db is peptide-dependent, and the sequences of the peptides recognized have been determined. Alterations in the antigenic peptides bound to A2 cause large changes in AHIII12.2 T cell responsiveness. Crystal structures of three representative peptides (agonist, null, and antagonist) bound to A2 partially explain the changes in AHIII12.2 responsiveness. Using class I pMHC octamers, a strong correlation is seen between T cell activity and the affinity of pMHC complexes for the T cell receptor. However, contrary to previous studies, we see similar half-lives for the pMHC multimers bound to the AHIII12.2 cell surface.
49

des Georges, Amédée, Yaser Hashem, Anett Unbehaun, Robert A. Grassucci, Derek Taylor, Christopher U. T. Hellen, Tatyana V. Pestova, and Joachim Frank. "Structure of the mammalian ribosomal pre-termination complex associated with eRF1•eRF3•GDPNP." Nucleic Acids Research 42, no. 5 (December 11, 2013): 3409–18. http://dx.doi.org/10.1093/nar/gkt1279.

Abstract:
Eukaryotic translation termination results from the complex functional interplay between two release factors, eRF1 and eRF3, in which GTP hydrolysis by eRF3 couples codon recognition with peptidyl-tRNA hydrolysis by eRF1. Here, we present a cryo-electron microscopy structure of pre-termination complexes associated with eRF1•eRF3•GDPNP at 9.7-Å resolution, which corresponds to the initial pre-GTP hydrolysis stage of factor attachment and stop codon recognition. It reveals the ribosomal positions of eRFs and provides insights into the mechanisms of stop codon recognition and triggering of eRF3’s GTPase activity.
50

Freedman, Richard, Hee-Tae Jung, and Shlomo Zilberstein. "Plan and Activity Recognition from a Topic Modeling Perspective." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 360–64. http://dx.doi.org/10.1609/icaps.v24i1.13683.

Abstract:
We examine new ways to perform plan recognition (PR) using natural language processing (NLP) techniques. PR often focuses on the structural relationships between consecutive observations and ordered activities that comprise plans. However, NLP commonly treats text as a bag-of-words, omitting such structural relationships and using topic models to break down the distribution of concepts discussed in documents. In this paper, we examine an analogous treatment of plans as distributions of activities. We explore the application of Latent Dirichlet Allocation topic models to human skeletal data of plan execution traces obtained from a RGB-D sensor. This investigation focuses on representing the data as text and interpreting learned activities as a form of activity recognition (AR). Additionally, we explain how the system may perform PR. The initial empirical results suggest that such NLP methods can be useful in complex PR and AR tasks.
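
To illustrate the bag-of-activities view described above, the sketch below treats each execution trace as a count vector over low-level actions and fits a Latent Dirichlet Allocation model with scikit-learn; the action vocabulary and traces are toy stand-ins for the paper's skeletal data.

```python
# Sketch of the bag-of-activities view: each execution trace becomes a count
# vector over low-level actions, and LDA recovers latent "plan topics".
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

traces = [                                    # each trace = one observed plan
    "reach grasp pour stir pour stir",
    "reach grasp pour pour stir",
    "sit type type scroll type",
    "sit scroll type type",
]
counts = CountVectorizer().fit_transform(traces)    # bag-of-actions matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
plan_mixture = lda.fit_transform(counts)      # per-trace distribution over plans
print(np.round(plan_mixture, 2))
```
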