Academic literature on the topic '3DCNNs'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '3DCNNs.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "3DCNNs"

1

Paralic, Martin, Kamil Zelenak, Patrik Kamencay, and Robert Hudec. "Automatic Approach for Brain Aneurysm Detection Using Convolutional Neural Networks." Applied Sciences 13, no. 24 (December 16, 2023): 13313. http://dx.doi.org/10.3390/app132413313.

Abstract:
The paper introduces an approach for detecting brain aneurysms, a critical medical condition, by utilizing a combination of 3D convolutional neural networks (3DCNNs) and Convolutional Long Short-Term Memory (ConvLSTM). Brain aneurysms pose a significant health risk, and early detection is vital for effective treatment. Traditional methods for aneurysm detection often rely on complex and time-consuming procedures. A radiologist annotates each aneurysm and supports our work with ground-truth annotations. From the annotated data, we extract images to train the proposed neural networks. The paper experiments with several network types, specifically 2D convolutional neural networks (2DCNNs), 3D convolutional neural networks (3DCNNs), and Convolutional Long Short-Term Memory (ConvLSTM). Our goal is to create a virtual assistant that improves the search for aneurysm locations. Subsequently, a radiologist confirms or rejects the presence of an aneurysm, reducing the time spent on the search process and revealing hidden aneurysms. Our experimental results demonstrate the superior performance of the proposed approach compared to existing methods, showcasing its potential as a valuable tool in clinical settings for early and accurate brain aneurysm detection. This fusion of 3DCNN and ConvLSTM (3DCNN-ConvLSTM) techniques not only improves diagnostic precision but also holds promise for advancing the field of medical image analysis, particularly in the domain of neurovascular diseases. Overall, our research underscores the potential of neural networks for the machine detection of brain aneurysms.
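Several works in this list, including this one and the next, pair a 3DCNN front end with ConvLSTM layers. As a minimal sketch of that generic pattern (not the authors' published architecture; the clip shape and layer sizes below are assumptions), a Keras model might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3dcnn_convlstm(num_classes=2, clip_shape=(16, 64, 64, 1)):
    """Minimal 3DCNN + ConvLSTM classifier: 3D convolutions extract local
    spatiotemporal features, then a ConvLSTM aggregates them over time."""
    inp = layers.Input(shape=clip_shape)                 # (time, H, W, channels)
    x = layers.Conv3D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling3D(pool_size=(1, 2, 2))(x)      # downsample space only
    x = layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    x = layers.ConvLSTM2D(32, 3, padding="same")(x)      # recurrence over time
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_3dcnn_convlstm()
model.summary()
```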
2

Vrskova, Roberta, Patrik Kamencay, Robert Hudec, and Peter Sykora. "A New Deep-Learning Method for Human Activity Recognition." Sensors 23, no. 5 (March 4, 2023): 2816. http://dx.doi.org/10.3390/s23052816.

Abstract:
Currently, three-dimensional convolutional neural networks (3DCNNs) are a popular approach in the field of human activity recognition. However, given the variety of methods used for human activity recognition, we propose a new deep-learning model in this paper. The main objective of our work is to optimize the traditional 3DCNN and propose a new model that combines 3DCNN with Convolutional Long Short-Term Memory (ConvLSTM) layers. Our experimental results, obtained using the LoDVP Abnormal Activities, UCF50, and MOD20 datasets, demonstrate the superiority of the 3DCNN + ConvLSTM combination for recognizing human activities. Furthermore, the proposed model is well-suited for real-time human activity recognition applications and can be further enhanced by incorporating additional sensor data. To evaluate the proposed 3DCNN + ConvLSTM architecture comprehensively, we compared our experimental results across these datasets. We achieved a precision of 89.12% on the LoDVP Abnormal Activities dataset, while the precision obtained on the modified UCF50 dataset (UCF50mini) and the MOD20 dataset was 83.89% and 87.76%, respectively. Overall, our work demonstrates that the combination of 3DCNN and ConvLSTM layers can improve the accuracy of human activity recognition tasks, and the proposed model shows promise for real-time applications.
3

Wang, Dingheng, Guangshe Zhao, Guoqi Li, Lei Deng, and Yang Wu. "Compressing 3DCNNs based on tensor train decomposition." Neural Networks 131 (November 2020): 215–30. http://dx.doi.org/10.1016/j.neunet.2020.07.028.

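The entry above compresses 3DCNN weights with tensor-train (TT) decomposition. The sketch below shows the generic TT-SVD factorization that underlies such schemes, applied to a hypothetical 3x3x3x64x64 convolution kernel; it illustrates the idea, not the paper's specific method:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Factor a dense tensor into tensor-train cores via successive truncated SVDs."""
    shape, cores, rank = tensor.shape, [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, shape[k], r))        # TT core k
        mat = (s[:r, None] * vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))                # last core
    return cores

# A hypothetical 3D conv kernel: 3x3x3 window, 64 input and 64 output channels.
kernel = np.random.randn(3, 3, 3, 64, 64)
cores = tt_svd(kernel, max_rank=16)
print(kernel.size, "->", sum(c.size for c in cores), "parameters")
```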
4

Hong, Qingqing, Xinyi Zhong, Weitong Chen, Zhenghua Zhang, Bin Li, Hao Sun, Tianbao Yang, and Changwei Tan. "SATNet: A Spatial Attention Based Network for Hyperspectral Image Classification." Remote Sensing 14, no. 22 (November 21, 2022): 5902. http://dx.doi.org/10.3390/rs14225902.

Abstract:
Hyperspectral images (HSIs) are widely used to categorize feature classes by capturing subtle differences, owing to their rich spectral-spatial information. Neural networks based on 3D convolutions (3DCNNs) have been widely used in HSI classification because of their powerful feature extraction capability. However, 3DCNN-based HSI classification approaches can only extract local features, and the feature maps they produce contain substantial spatial redundancy, which lowers classification accuracy. To solve these problems, we propose a spatial attention network (SATNet) that combines 3D OctConv and ViT. First, 3D OctConv divides the feature maps into high-frequency and low-frequency maps to reduce spatial redundancy. Second, the ViT model is used to obtain global features and effectively combine local and global features for classification. To verify the effectiveness of the method, we compared it with various mainstream methods on three publicly available datasets, and the results showed the superiority of the proposed method in terms of classification performance.
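SATNet itself combines 3D OctConv with a ViT; the toy sketch below conveys only the local-global idea, a 3D convolutional stem followed by a transformer encoder over spatial tokens. The OctConv frequency split is omitted, and every size here is an assumption:

```python
import torch
import torch.nn as nn

class LocalGlobalHSI(nn.Module):
    """Toy local-global hybrid for HSI patch classification: a 3D conv stem
    extracts local spectral-spatial features; a transformer encoder then
    models global relations among spatial tokens (loosely in the spirit of
    a 3D-conv + ViT combination, not the published SATNet architecture)."""
    def __init__(self, bands=30, num_classes=9, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
        )
        self.proj = nn.Linear(8 * bands, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                       # x: (B, 1, bands, H, W)
        f = self.stem(x)                        # (B, 8, bands, H, W)
        B, C, S, H, W = f.shape
        tokens = f.permute(0, 3, 4, 1, 2).reshape(B, H * W, C * S)
        z = self.encoder(self.proj(tokens))     # global attention over pixels
        return self.head(z.mean(dim=1))         # average-pool tokens, classify

print(LocalGlobalHSI()(torch.randn(2, 1, 30, 9, 9)).shape)  # torch.Size([2, 9])
```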
5

Gomez-Donoso, Francisco, Felix Escalona, and Miguel Cazorla. "Par3DNet: Using 3DCNNs for Object Recognition on Tridimensional Partial Views." Applied Sciences 10, no. 10 (May 14, 2020): 3409. http://dx.doi.org/10.3390/app10103409.

Abstract:
Deep learning-based methods have proven to be the best performers for object recognition both in images and in tridimensional data. Nonetheless, when it comes to 3D object recognition, most authors convert the 3D data to images and then perform classification. Despite its accuracy, this approach has some issues. In this work, we present a deep learning pipeline for object recognition that takes a point cloud as input and provides classification probabilities as output. Our proposal is trained on synthetic CAD objects and is able to perform accurately when fed with real data provided by commercial sensors. Unlike most approaches, our method is specifically trained to work on partial views of the objects rather than on a full representation, since a full representation is not how commercial sensors capture objects. We trained our proposal on the ModelNet10 dataset and achieved 78.39% accuracy. We also tested it by adding noise to the dataset, as well as against a number of other datasets and real data, with high success.
6

Motamed, Sara, and Elham Askari. "Detection of handgun using 3D convolutional neural network model (3DCNNs)." Signal and Data Processing 20, no. 2 (September 1, 2023): 69–79. http://dx.doi.org/10.61186/jsdp.20.2.69.

7

Firsov, Nikita, Evgeny Myasnikov, Valeriy Lobanov, Roman Khabibullin, Nikolay Kazanskiy, Svetlana Khonina, Muhammad A. Butt, and Artem Nikonorov. "HyperKAN: Kolmogorov–Arnold Networks Make Hyperspectral Image Classifiers Smarter." Sensors 24, no. 23 (November 30, 2024): 7683. https://doi.org/10.3390/s24237683.

Abstract:
In traditional neural network designs, a multilayer perceptron (MLP) is typically employed as a classification block following the feature extraction stage. However, the Kolmogorov–Arnold Network (KAN) presents a promising alternative to MLP, offering the potential to enhance prediction accuracy. In this paper, we studied KAN-based networks for pixel-wise classification of hyperspectral images. Initially, we compared baseline MLP and KAN networks with varying numbers of neurons in their hidden layers. Subsequently, we replaced the linear, convolutional, and attention layers of traditional neural networks with their KAN-based counterparts. Specifically, six cutting-edge neural networks were modified, including 1D (1DCNN), 2D (2DCNN), and 3D convolutional networks (two different 3DCNNs, NM3DCNN), as well as transformer (SSFTT). Experiments conducted using seven publicly available hyperspectral datasets demonstrated a substantial improvement in classification accuracy across all the networks. The best classification quality was achieved using a KAN-based transformer architecture.
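For intuition about what a KAN-based block replaces, the toy layer below puts a learnable univariate function on every input-output edge, built from Gaussian bumps plus a linear term. Real KANs use B-spline bases; this is only a simplified, hypothetical stand-in:

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """Toy KAN-style layer: each input-output edge applies a learnable
    univariate function, here a weighted sum of fixed Gaussian bumps plus
    a linear term (real KANs use B-splines; sizes are illustrative)."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))
        self.lin = nn.Parameter(0.1 * torch.randn(out_dim, in_dim))

    def forward(self, x):                                     # x: (batch, in_dim)
        phi = torch.exp(-(x[..., None] - self.centers) ** 2)  # (batch, in, basis)
        return torch.einsum("bip,oip->bo", phi, self.coef) + x @ self.lin.T

# Drop-in replacement for an MLP classification head, e.g. on 64-d features:
head = nn.Sequential(ToyKANLayer(64, 32), ToyKANLayer(32, 9))
print(head(torch.randn(5, 64)).shape)                         # torch.Size([5, 9])
```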
8

Alharbi, Yasser F., and Yousef A. Alotaibi. "Decoding Imagined Speech from EEG Data: A Hybrid Deep Learning Approach to Capturing Spatial and Temporal Features." Life 14, no. 11 (November 18, 2024): 1501. http://dx.doi.org/10.3390/life14111501.

Abstract:
Neuroimaging is revolutionizing our ability to investigate the brain’s structural and functional properties, enabling us to visualize brain activity during diverse mental processes and actions. One of the most widely used neuroimaging techniques is electroencephalography (EEG), which records electrical activity from the brain using electrodes positioned on the scalp. EEG signals capture both spatial (brain region) and temporal (time-based) data. While a high temporal resolution is achievable with EEG, spatial resolution is comparatively limited. Consequently, capturing both spatial and temporal information from EEG data to recognize mental activities remains challenging. In this paper, we represent spatial and temporal information obtained from EEG signals by transforming EEG data into sequential topographic brain maps. We then apply hybrid deep learning models to capture the spatiotemporal features of the EEG topographic images and classify imagined English words. The hybrid framework utilizes a sequential combination of three-dimensional convolutional neural networks (3DCNNs) and recurrent neural networks (RNNs). The experimental results reveal the effectiveness of the proposed approach, achieving an average accuracy of 77.8% in identifying imagined English speech.
9

Wei, Minghua, and Feng Lin. "A novel multi-dimensional features fusion algorithm for the EEG signal recognition of brain's sensorimotor region activated tasks." International Journal of Intelligent Computing and Cybernetics 13, no. 2 (June 8, 2020): 239–60. http://dx.doi.org/10.1108/ijicc-02-2020-0019.

Abstract:
Purpose: To address the poor performance, low efficiency, and weak robustness of existing methods for recognizing EEG signals generated by tasks that activate the brain's sensorimotor region, this paper proposes an EEG signal classification method based on multi-dimensional fusion features. Design/methodology/approach: First, an improved Morlet wavelet is used to extract spectrum feature maps from the EEG signals. Then, spatial-frequency features are extracted from the PSD maps using a three-dimensional convolutional neural network (3DCNN) model. Finally, the spatial-frequency features are fed into bidirectional gated recurrent unit (Bi-GRU) models to extract spatial-frequency-sequential multi-dimensional fusion features for recognition of tasks that activate the brain's sensorimotor region. Findings: In comparative experiments, datasets of motor imagery (MI), action observation (AO), and action execution (AE) tasks are selected to test the classification performance and robustness of the proposed algorithm. In addition, the impact of the extracted features on the sensorimotor region and on the classification processing is analyzed by visualization during the experiments. Originality/value: The experimental results show that the proposed algorithm extracts the corresponding brain activation features for different action-related tasks, achieving more stable classification performance on AO/MI/AE tasks and the best robustness on EEG signals from different subjects.
10

Torres, Felipe Soares, Shazia Akbar, Srinivas Raman, Kazuhiro Yasufuku, Felix Baldauf-Lenschen, and Natasha B. Leighl. "Automated imaging-based stratification of early-stage lung cancer patients prior to receiving surgical resection using deep learning applied to CTs." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 1552. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.1552.

Abstract:
Background: Computed tomography (CT) imaging is an important tool to guide further investigation and treatment in patients with lung cancer. For patients with early-stage lung cancer, surgery remains an optimal treatment option. Artificial intelligence applied to pretreatment CTs may be able to quantify mortality risk and stratify patients for more individualized diagnostic, treatment, and monitoring decisions. Methods: A fully automated, end-to-end model was designed to localize the 36 cm x 36 cm x 36 cm space centered on the lungs and learn deep prognostic features using a 3-dimensional convolutional neural network (3DCNN) to predict 5-year mortality risk. The 3DCNN was trained and validated in a 5-fold cross-validation using 2,924 CTs of 1,689 lung cancer patients from 6 public datasets made available in The Cancer Imaging Archive. We evaluated the 3DCNN's ability to stratify stage I and II patients who received surgery into mortality risk quintiles using the Cox proportional hazards model. Results: 260 of the 1,689 lung cancer patients in the withheld validation dataset were diagnosed as stage I or II, received a surgical resection within 6 months of their pretreatment CT, and had known 5-year disease and survival outcomes. Based on the 3DCNN's predicted mortality risk, patients in the highest risk quintile had a 14.2-fold (95% CI 4.3-46.8, p < 0.001) increase in 5-year mortality hazard compared to patients in the lowest risk quintile. Conclusions: Deep learning applied to pretreatment CTs provides personalised prognostic insights for early-stage lung cancer patients who received surgery and has the potential to inform treatment and monitoring decisions.
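The quintile stratification reported above can be reproduced in outline with standard survival tooling. The sketch below uses synthetic data (not the study's) to bin a model's predicted risk into quintiles and estimate per-quintile hazard ratios with a Cox model via the lifelines package:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data: a model's predicted risk plus survival follow-up.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "risk": rng.normal(size=500),                      # e.g. a 3DCNN's output
    "time": rng.exponential(scale=60, size=500),       # months of follow-up
    "event": rng.integers(0, 2, size=500),             # 1 = death observed
})
df["quintile"] = pd.qcut(df["risk"], 5, labels=False)  # 0 = lowest risk

# Hazard ratio of each quintile vs. the lowest, via a Cox proportional
# hazards model fitted on quintile indicator variables.
design = pd.get_dummies(df["quintile"], prefix="q", drop_first=True).astype(float)
cph = CoxPHFitter().fit(pd.concat([design, df[["time", "event"]]], axis=1),
                        duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```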

Dissertations / Theses on the topic "3DCNNs"

1

Ali, Abid. "Analyse vidéo à l'aide de réseaux de neurones profonds : une application pour l'autisme." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4066.

Abstract:
Understanding actions in videos is a crucial element of computer vision with significant implications across various fields. As our dependence on visual data grows, comprehending and interpreting human actions in videos becomes essential for advancing technologies in surveillance, healthcare, autonomous systems, and human-computer interaction. The accurate interpretation of actions in videos is fundamental for creating intelligent systems that can effectively navigate and respond to the complexities of the real world. In this context, advances in action understanding push the boundaries of computer vision and play a crucial role in shaping the landscape of cutting-edge applications that impact our daily lives. Computer vision has made significant progress with the rise of deep learning methods such as convolutional neural networks (CNNs), enabling the community to advance in many domains, including image segmentation, object detection, scene understanding, and more. However, video processing remains limited compared to static images. In this thesis, we focus on action understanding, dividing it into two main parts, action recognition and action detection, and their application in the medical domain for autism analysis. We explore the various aspects and challenges of video understanding from both a general and an application-specific perspective, and we present our contributions and solutions to address these challenges. In addition, we introduce the ACTIVIS dataset, designed to diagnose autism in young children. Our work is divided into two main parts: generic modeling and applied models. Initially, we focus on adapting image models for action recognition tasks by incorporating temporal modeling using parameter-efficient fine-tuning (PEFT) techniques. We also address real-time action detection and anticipation by proposing a new joint model for action anticipation and online action detection in real-life scenarios. Furthermore, we introduce a new task called 'loose-interaction' in dyadic situations and its applications in autism analysis. Finally, we concentrate on the applied aspect of video understanding by proposing an action recognition model for repetitive behaviors in videos of autistic individuals. We conclude by proposing a weakly-supervised method to estimate the severity score of autistic children in long videos.
2

Botina-Monsalve, Deivid. "Remote photoplethysmography measurement and filtering using deep learning based methods." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK061.

Abstract:
RPPG (remote photoplethysmography) is a technique developed to measure the blood volume pulse signal and then estimate physiological data such as pulse rate, breathing rate, and pulse rate variability. Due to the multiple sources of noise that deteriorate the quality of the RPPG signal, conventional filters are commonly used. However, some alterations remain that an experienced eye can easily identify. In the first part of this thesis, we propose the Long Short-Term Memory Deep-Filter (LSTMDF) network for the RPPG filtering task. We use different protocols to analyze the performance of the method and demonstrate how the network can be efficiently trained with only a few signals. Our study demonstrates experimentally the superiority of the LSTM-based filter compared with conventional filters, and we found that the network's sensitivity is related to the average signal-to-noise ratio of the RPPG signals. Approaches based on convolutional networks such as 3DCNNs have recently outperformed traditional hand-crafted methods in the RPPG measurement task. However, it is well known that large 3DCNN models have high computational costs and may be unsuitable for real-time applications. As the second contribution of this thesis, we study a 3DCNN architecture, finding the best compromise between pulse rate measurement precision and inference time. We use an ablation study in which we decrease the input size, propose a custom loss function, and evaluate the impact of different input color spaces. The result is Real-Time RPPG (RTRPPG), an end-to-end RPPG measurement framework that can be used on GPU and CPU. We also propose a data augmentation method that aims to improve the performance of deep learning networks when the database has specific characteristics (e.g., fitness movements) and when not enough data are available.
3

Castelli, Filippo Maria. "3D CNN methods in biomedical image segmentation." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18796/.

Abstract:
A definite trend in Biomedical Imaging is the integration of increasingly complex interpretative layers into the pure data acquisition process. One of the most interesting and anticipated goals in the field is the automatic segmentation of objects of interest in extensive acquisition data, a target that would allow Biomedical Imaging to look beyond its use as a purely assistive tool and become a cornerstone in ambitious large-scale challenges like the extensive quantitative study of the Human Brain. In 2019, Convolutional Neural Networks represent the state of the art in Biomedical Image segmentation, and scientific interests from a variety of fields, spanning from automotive to natural resource exploration, converge on their development. While most applications of CNNs focus on single-image segmentation, biomedical image data (be it MRI, CT scans, microscopy, etc.) often benefits from a three-dimensional volumetric expression. This work explores a reformulation of the CNN segmentation problem that is native to the 3D nature of the data, with particular interest in applications to Fluorescence Microscopy volumetric data produced at the European Laboratories for Nonlinear Spectroscopy in the context of two large international human brain study projects: the Human Brain Project and the White House BRAIN Initiative.
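As a minimal illustration of the volume-native formulation this thesis explores (a toy encoder-decoder, not the author's architecture; all sizes are assumptions), a 3D segmentation network can be sketched as:

```python
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """Toy 3D encoder-decoder for volumetric binary segmentation: Conv3d
    layers process the whole volume instead of independent 2D slices."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # halve depth, height, width
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),  # upsample back
            nn.Conv3d(8, 1, 1),                    # per-voxel logit
        )

    def forward(self, x):                          # x: (batch, 1, D, H, W)
        return self.dec(self.enc(x))

logits = Tiny3DSegNet()(torch.randn(1, 1, 32, 64, 64))
print(logits.shape)                                # torch.Size([1, 1, 32, 64, 64])
```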
4

Casserfelt, Karl. "A Deep Learning Approach to Video Processing for Scene Recognition in Smart Office Environments." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20429.

Abstract:
The field of computer vision, where the goal is to allow computer systems to interpret and understand image data, has in recent years seen great advances with the emergence of deep learning. Deep learning, a technique that emulates the information processing of the human brain, has been shown to nearly solve the problem of object recognition in image data. One of the next big challenges in computer vision is to allow computers to recognize not only objects but also activities. This study is an exploration of the capabilities of deep learning for the specific problem area of activity recognition in office environments. The study used a re-labeled subset of the AMI Meeting Corpus video dataset to comparatively evaluate the performance of different neural network models in the given problem area, and then evaluated the best-performing model on a new dataset of office activities captured in a research lab at Malmö University. The results showed that the best-performing model was a 3D convolutional neural network (3DCNN) with temporal information in the third dimension; however, a recurrent convolutional network (RCNN) that used a pre-trained VGG16 model to extract features and fed them into a recurrent neural network with a unidirectional Long Short-Term Memory (LSTM) layer performed almost as well with the right configuration. An analysis of the results suggests that a 3DCNN's performance is dependent on the camera angle, specifically how well movement is spatially distributed between the people in frame.
5

Lyu, Kai-Dy (呂凱迪). "Investigating the effects of hypoxia-induction on MMP-2, MMP-9 in A549 cells." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3dcrn2.

Abstract:
Master's thesis (in Chinese)
Chung Shan Medical University
Master's Program, Department of Medical Laboratory and Biotechnology
Academic year 102 (ROC calendar)
Hypoxia, caused by oxygen deprivation, is a common feature of solid tumors. Hypoxia-inducible factors (HIFs) are stress-responsive transcriptional regulators of cellular and physiological processes involved in oxygen metabolism. HIF-1α is overexpressed when cells are under hypoxic conditions. In order to validate the CoCl2-induced hypoxia cell model, A549 cells were treated with 100 μM CoCl2 for 0-24 hours, or with various concentrations of CoCl2 (0-200 μM) for 24 hours. Western blot analysis was used to examine the expression of HIF-1α protein. The protein expression level of HIF-1α was increased by the addition of CoCl2 in a time-dependent and dose-dependent manner. The matrix metalloproteinases (MMPs) are a family of Zn2+- and Ca2+-dependent proteolytic enzymes. MMPs have a potent ability to degrade structural proteins of the extracellular matrix (ECM) and to remodel tissues for morphogenesis, angiogenesis, neurogenesis, and tissue repair. However, MMPs also have detrimental effects in carcinogenesis, including migration (adhesion/dispersion), differentiation, angiogenesis, and apoptosis. Previous studies have confirmed that MMP-2 and MMP-9 are the gelatinases responsible for the degradation of gelatin, the release of cell surface receptors, apoptosis, and chemokine/cytokine/activator cleavage. Overexpression of MMP-2 and MMP-9 can increase tumor cell growth, angiogenesis, invasion, and tumor progression. In this study, exposure of A549 cells to CoCl2 increased the mRNA and protein expression of MMP-2 and MMP-9 in a dose-dependent manner. Enzyme activities of MMP-2 and MMP-9 were investigated using gelatin zymography; A549 cells treated with CoCl2 showed increased activities of MMP-2 and MMP-9. Taken together, our data show that the viability and migration of A549 cells were stimulated under hypoxic conditions. Hypoxia induction can also increase the protein expression, mRNA expression, and enzyme activities of MMP-2 and MMP-9. These two surface molecules may participate in important mechanisms in cancer cells under hypoxia.
6

Tsai, Bing-Shiuan (蔡秉軒). "AJAX based Modularized EMV framework." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3dc4n4.

Abstract:
Master's thesis (in Chinese)
National Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Information Management
Academic year 106 (ROC calendar)
With the rapid development of smart mobile devices and the Internet, it has become easier for the public to obtain information, and applications and commercial systems are gradually going mobile in the form of Hybrid Apps. The MVC architecture is the most common software architecture model for web page and system development. It divides a system into multiple parts, each performing its own function, and has the advantages of reducing code repetition and increasing scalability. Therefore, most software frameworks use MVC as their benchmark. Well-known PHP frameworks such as Laravel and Phalcon have their own advantages in system development, but if they are used to write the web pages needed for a Hybrid App, the conversion is not well suited because of the different system carriers. The purpose of the current study is therefore to build on common object-oriented and event-driven practice, use AJAX for data transfer, and combine CSS, HTML, JavaScript, and PHP, in the spirit of MVC, to launch a new framework suitable for Hybrid App development. It is expected to improve on the deficiencies of existing frameworks and make application development easier and faster in the future.
7

Hsieh, Ming-Chuan (謝明娟). "Role of Iron-containing Alcohol Dehydrogenases in Alcohol Metabolism and Stress Resistance in Acinetobacter baumannii." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3dcndg.


Book chapters on the topic "3DCNNs"

1

Wang, Yingdong, Qingfeng Wu, and Qunsheng Ruan. "EEG Emotion Classification Using 2D-3DCNN." In Knowledge Science, Engineering and Management, 645–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_52.

2

Tian, Zhenhuan, Yizhuan Jia, Xuejun Men, and Zhongwei Sun. "3DCNN for Pulmonary Nodule Segmentation and Classification." In Lecture Notes in Computer Science, 386–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50516-5_34.

3

Wang, Yahui, Huimin Ma, Xinpeng Xing, and Zeyu Pan. "Eulerian Motion Based 3DCNN Architecture for Facial Micro-Expression Recognition." In MultiMedia Modeling, 266–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_22.

4

Liu, Jihong, Jing Zhang, Hui Zhang, Xi Liang, and Li Zhuo. "Extracting Deep Video Feature for Mobile Video Classification with ELU-3DCNN." In Communications in Computer and Information Science, 151–59. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_15.

5

Raju, Manu, and Ajin R. Nair. "Abnormal Cardiac Condition Classification of ECG Using 3DCNN - A Novel Approach." In IFMBE Proceedings, 219–30. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51120-2_24.

6

Elangovan, Taranya, R. Arockia Xavier Annie, Keerthana Sundaresan, and J. D. Pradhakshya. "Hand Gesture Recognition for Sign Languages Using 3DCNN for Efficient Detection." In Computer Methods, Imaging and Visualization in Biomechanics and Biomedical Engineering II, 215–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10015-4_19.

7

Jiang, Siyu, and Yimin Chen. "Hand Gesture Recognition by Using 3DCNN and LSTM with Adam Optimizer." In Advances in Multimedia Information Processing – PCM 2017, 743–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77380-3_71.

8

Jaswal, Gaurav, Seshan Srirangarajan, and Sumantra Dutta Roy. "Range-Doppler Hand Gesture Recognition Using Deep Residual-3DCNN with Transformer Network." In Pattern Recognition. ICPR International Workshops and Challenges, 759–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68780-9_57.

9

Islam, Md Sajjatul, Yuan Gao, Zhilong Ji, Jiancheng Lv, Adam Ahmed Qaid Mohammed, and Yongsheng Sang. "3DCNN Backed Conv-LSTM Auto Encoder for Micro Facial Expression Video Recognition." In Machine Learning and Intelligent Communications, 90–105. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04409-0_9.

10

Luo, Tengqi, Yueming Ding, Rongxi Cui, Xingwang Lu, and Qinyue Tan. "Short-Term Photovoltaic Power Prediction Based on 3DCNN and CLSTM Hybrid Model." In Lecture Notes in Electrical Engineering, 679–86. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0877-2_71.


Conference papers on the topic "3DCNNs"

1

Power, David, and Ihsan Ullah. "Automated Assessment of Simulated Laparoscopic Surgical Performance using 3DCNN." In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10782160.

2

Zhong, Jiangnan, Ling Zhang, Cheng Li, Jiong Niu, Zhaokai Liu, Cheng Wang, and Zongtai Li. "Target Detection in Clutter Regions Based on 3DCNN for HFSWR." In OCEANS 2024 - SINGAPORE, 1–4. IEEE, 2024. http://dx.doi.org/10.1109/oceans51537.2024.10682236.

3

Men, Yutao, Jian Luo, Zixian Zhao, Hang Wu, Feng Luo, Guang Zhang, and Ming Yu. "Surgical Gesture Recognition in Open Surgery Based on 3DCNN and SlowFast." In 2024 IEEE 7th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 429–33. IEEE, 2024. http://dx.doi.org/10.1109/itnec60942.2024.10733142.

4

Fang, Chun-Ting, Tsung-Jung Liu, and Kuan-Hsien Liu. "Micro-Expression Recognition Based On 3DCNN Combined With GRU and New Attention Mechanism." In 2024 IEEE International Conference on Image Processing (ICIP), 2466–72. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10648137.

5

Chen, Siyu, Fei Luo, Jianhua Du, and Liang Cao. "Short-Term Forecasting of High-Altitude Wind Fields Along Flight Routes Based on CAM-ConvLSTM-3DCNN." In 2024 2nd International Conference on Algorithm, Image Processing and Machine Vision (AIPMV), 337–42. IEEE, 2024. http://dx.doi.org/10.1109/aipmv62663.2024.10691906.

6

Kopuz, Barış, and Nihan Kahraman. "Comparison and Analysis of LSTM-Capsule Networks and 3DConv-LSTM Autoencoder in Ambient Anomaly Detection." In 2024 15th National Conference on Electrical and Electronics Engineering (ELECO), 1–5. IEEE, 2024. https://doi.org/10.1109/eleco64362.2024.10847263.

7

Yi, Yang, Feng Ni, Yuexin Ma, Xinge Zhu, Yuankai Qi, Riming Qiu, Shijie Zhao, Feng Li, and Yongtao Wang. "High Performance Gesture Recognition via Effective and Efficient Temporal Modeling." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/141.

Abstract:
State-of-the-art hand gesture recognition methods have investigated spatiotemporal features based on 3D convolutional neural networks (3DCNNs) or convolutional long short-term memory (ConvLSTM). However, they often suffer from inefficiency due to the high computational complexity of their network structures. In this paper, we focus instead on 1D convolutional neural networks and propose a simple and efficient architectural unit, the Multi-Kernel Temporal Block (MKTB), that models multi-scale temporal responses by explicitly applying different temporal kernels. We then present a Global Refinement Block (GRB), an attention module for shaping the global temporal features based on cross-channel similarity. By incorporating the MKTB and GRB, our architecture can effectively explore spatiotemporal features at a tolerable computational cost. Extensive experiments conducted on public datasets demonstrate that our proposed model achieves state-of-the-art performance with higher efficiency. Moreover, the proposed MKTB and GRB are plug-and-play modules, and experiments on other tasks, such as video understanding and video-based person re-identification, also display their good efficiency and capability for generalization.
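The MKTB idea, parallel temporal convolutions with different kernel sizes concatenated channel-wise, can be sketched as follows. This is a reconstruction from the abstract, not the authors' exact block, and the channel and kernel settings are assumptions:

```python
import torch
import torch.nn as nn

class MultiKernelTemporalBlock(nn.Module):
    """Sketch of a multi-kernel temporal block: parallel 1D convolutions with
    different (odd) kernel sizes over the time axis, outputs concatenated.
    `channels` must be divisible by the number of kernel sizes."""
    def __init__(self, channels, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels // len(kernel_sizes), k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):                       # x: (batch, channels, time)
        return torch.cat([b(x) for b in self.branches], dim=1)

y = MultiKernelTemporalBlock(64)(torch.randn(2, 64, 32))
print(y.shape)                                  # torch.Size([2, 64, 32])
```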
8

Lin, Yangping, Zhiqiang Ning, Jia Liu, Mingshu Zhang, Pei Chen, and Xiaoyuan Yang. "Video steganography network based on 3DCNN." In 2021 International Conference on Digital Society and Intelligent Systems (DSInS). IEEE, 2021. http://dx.doi.org/10.1109/dsins54396.2021.9670614.

9

Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng, and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.

Abstract:
While the abuse of deepfake technology has had a serious impact on human society, the detection of deepfake videos is still very challenging because of their highly photorealistic synthesis on each frame. To address this, this paper aims to leverage possible inconsistency cues among video frames and proposes a Temporal Dropout 3-Dimensional Convolutional Neural Network (TD-3DCNN) to detect deepfake videos. In the approach, fixed-length frame volumes sampled from a video are fed into a 3-dimensional convolutional neural network (3DCNN) to extract features across different scales and identify whether they are real or fake. In particular, a temporal dropout operation is introduced to randomly sample frames in each batch. It serves as a simple yet effective data augmentation technique and can enhance representation and generalization ability, avoiding model overfitting and improving detection accuracy. In this way, the resulting video-level classifier is accurate and effective at identifying deepfake videos. Extensive experiments on benchmarks including Celeb-DF(v2) and DFDC clearly demonstrate the effectiveness and generalization capacity of our approach.
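The temporal dropout operation described above amounts to randomly subsampling frames before the 3DCNN. A minimal sketch, where the frame counts are assumptions and this is not the authors' exact implementation:

```python
import torch

def temporal_dropout(clip, keep):
    """Randomly keep `keep` frames (in order, without replacement) from a
    (batch, time, C, H, W) clip before feeding it to a 3DCNN."""
    t = clip.shape[1]
    idx, _ = torch.sort(torch.randperm(t)[:keep])
    return clip[:, idx]

clip = torch.randn(4, 32, 3, 112, 112)        # hypothetical frame volumes
print(temporal_dropout(clip, keep=16).shape)  # torch.Size([4, 16, 3, 112, 112])
```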
10

Tatebe, Yoshiki, Daisuke Deguchi, Yasutomo Kawanishi, Ichiro Ide, Hiroshi Murase, and Utsushi Sakai. "Pedestrian detection from sparse point-cloud using 3DCNN." In 2018 International Workshop on Advanced Image Technology (IWAIT). IEEE, 2018. http://dx.doi.org/10.1109/iwait.2018.8369680.
