A ready-made bibliography on "3DCNNs"
Create correct references in APA, MLA, Chicago, Harvard, and many other styles
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "3DCNNs".
An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these details are available in the metadata.
Journal articles on "3DCNNs"
Paralic, Martin, Kamil Zelenak, Patrik Kamencay, and Robert Hudec. "Automatic Approach for Brain Aneurysm Detection Using Convolutional Neural Networks". Applied Sciences 13, no. 24 (December 16, 2023): 13313. http://dx.doi.org/10.3390/app132413313.
Full text source
Vrskova, Roberta, Patrik Kamencay, Robert Hudec, and Peter Sykora. "A New Deep-Learning Method for Human Activity Recognition". Sensors 23, no. 5 (March 4, 2023): 2816. http://dx.doi.org/10.3390/s23052816.
Full text source
Wang, Dingheng, Guangshe Zhao, Guoqi Li, Lei Deng, and Yang Wu. "Compressing 3DCNNs based on tensor train decomposition". Neural Networks 131 (November 2020): 215–30. http://dx.doi.org/10.1016/j.neunet.2020.07.028.
Full text source
Hong, Qingqing, Xinyi Zhong, Weitong Chen, Zhenghua Zhang, Bin Li, Hao Sun, Tianbao Yang, and Changwei Tan. "SATNet: A Spatial Attention Based Network for Hyperspectral Image Classification". Remote Sensing 14, no. 22 (November 21, 2022): 5902. http://dx.doi.org/10.3390/rs14225902.
Full text source
Gomez-Donoso, Francisco, Felix Escalona, and Miguel Cazorla. "Par3DNet: Using 3DCNNs for Object Recognition on Tridimensional Partial Views". Applied Sciences 10, no. 10 (May 14, 2020): 3409. http://dx.doi.org/10.3390/app10103409.
Full text source
Motamed, Sara, and Elham Askari. "Detection of handgun using 3D convolutional neural network model (3DCNNs)". Signal and Data Processing 20, no. 2 (September 1, 2023): 69–79. http://dx.doi.org/10.61186/jsdp.20.2.69.
Full text source
Firsov, Nikita, Evgeny Myasnikov, Valeriy Lobanov, Roman Khabibullin, Nikolay Kazanskiy, Svetlana Khonina, Muhammad A. Butt, and Artem Nikonorov. "HyperKAN: Kolmogorov–Arnold Networks Make Hyperspectral Image Classifiers Smarter". Sensors 24, no. 23 (November 30, 2024): 7683. https://doi.org/10.3390/s24237683.
Full text source
Alharbi, Yasser F., and Yousef A. Alotaibi. "Decoding Imagined Speech from EEG Data: A Hybrid Deep Learning Approach to Capturing Spatial and Temporal Features". Life 14, no. 11 (November 18, 2024): 1501. http://dx.doi.org/10.3390/life14111501.
Full text source
Wei, Minghua, and Feng Lin. "A novel multi-dimensional features fusion algorithm for the EEG signal recognition of brain's sensorimotor region activated tasks". International Journal of Intelligent Computing and Cybernetics 13, no. 2 (June 8, 2020): 239–60. http://dx.doi.org/10.1108/ijicc-02-2020-0019.
Full text source
Torres, Felipe Soares, Shazia Akbar, Srinivas Raman, Kazuhiro Yasufuku, Felix Baldauf-Lenschen, and Natasha B. Leighl. "Automated imaging-based stratification of early-stage lung cancer patients prior to receiving surgical resection using deep learning applied to CTs." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 1552. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.1552.
Full text source
Doctoral dissertations on "3DCNNs"
Ali, Abid. "Analyse vidéo à l'aide de réseaux de neurones profonds : une application pour l'autisme". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4066.
Full text source
Understanding actions in videos is a crucial element of computer vision with significant implications across various fields. As our dependence on visual data grows, comprehending and interpreting human actions in videos becomes essential for advancing technologies in surveillance, healthcare, autonomous systems, and human-computer interaction. Accurate interpretation of actions in videos is fundamental to building intelligent systems that can navigate and respond to the complexities of the real world, and advances in action understanding shape the landscape of cutting-edge applications that affect our daily lives. Computer vision has made significant progress with the rise of deep learning methods such as convolutional neural networks (CNNs), which have enabled advances in many domains, including image segmentation, object detection, and scene understanding. Video processing, however, remains limited compared to static images. In this thesis, we focus on action understanding, dividing it into two main parts, action recognition and action detection, and on their application in the medical domain for autism analysis. We explore the various aspects and challenges of video understanding from both a general and an application-specific perspective, and we present our contributions and solutions to address these challenges. In addition, we introduce the ACTIVIS dataset, designed to diagnose autism in young children. Our work is divided into two main parts: generic modeling and applied models. Initially, we focus on adapting image models to action recognition tasks by incorporating temporal modeling using parameter-efficient fine-tuning (PEFT) techniques.
We also address real-time action detection and anticipation by proposing a new joint model for action anticipation and online action detection in real-life scenarios. Furthermore, we introduce a new task called "loose interaction" in dyadic situations and its applications in autism analysis. Finally, we concentrate on the applied side of video understanding by proposing an action recognition model for repetitive behaviors in videos of autistic individuals, and we conclude with a weakly-supervised method to estimate the severity score of autistic children in long videos.
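Several works listed here (Par3DNet, the thesis above) build on 3D convolutions, which extend 2D image filters with a temporal axis so that a single kernel can respond to motion across frames. As a rough illustration — a naive single-channel sketch in numpy, not any cited author's implementation; the function name and toy filter are our own — a 3D convolution with a temporal-difference kernel reacts exactly where frames change:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive single-channel 3D convolution (no padding, stride 1).

    clip:   (T, H, W) video volume
    kernel: (kt, kh, kw) spatio-temporal filter
    Returns an output volume of shape (T-kt+1, H-kh+1, W-kw+1).
    """
    T, H, W = clip.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(clip[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

# A temporal-difference filter: it responds to change between consecutive
# frames, the motion cue that a purely 2D CNN cannot see.
clip = np.zeros((4, 5, 5))
clip[2:] = 1.0                  # "motion": pixels switch on at frame 2
kernel = np.zeros((2, 3, 3))
kernel[1] = 1.0 / 9             # mean of the next frame window...
kernel[0] = -1.0 / 9            # ...minus mean of the current frame window
response = conv3d_valid(clip, kernel)
print(response.shape)           # (3, 3, 3)
print(float(response[1, 0, 0])) # ~1.0 only at the frame transition
```

Production 3DCNNs stack many such learned kernels (with channels, padding, and striding), but the sliding spatio-temporal window is the same idea.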
Botina-Monsalve, Deivid. "Remote photoplethysmography measurement and filtering using deep learning based methods". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK061.
Full text source
rPPG is a technique developed to measure the blood volume pulse signal and then estimate physiological data such as pulse rate, breathing rate, and pulse rate variability. Because multiple sources of noise deteriorate the quality of the rPPG signal, conventional filters are commonly used. Some alterations nevertheless remain, yet, interestingly, an experienced eye can identify them easily. In the first part of this thesis, we propose the Long Short-Term Memory Deep-Filter (LSTMDF) network for the rPPG filtering task. We use different protocols to analyze the performance of the method and demonstrate that the network can be trained efficiently with a few signals. Our study shows experimentally the superiority of the LSTM-based filter over conventional filters, and we find a network sensitivity related to the average signal-to-noise ratio of the rPPG signals. Approaches based on convolutional networks such as 3DCNNs have recently outperformed traditional hand-crafted methods in the rPPG measurement task. However, it is well known that large 3DCNN models have high computational costs and may be unsuitable for real-time applications. As the second contribution of this thesis, we study a 3DCNN architecture to find the best compromise between pulse rate measurement precision and inference time. Using an ablation study, we decrease the input size, propose a custom loss function, and evaluate the impact of different input color spaces. The result is Real-Time rPPG (RTRPPG), an end-to-end rPPG measurement framework that can run on GPU and CPU. We also propose a data augmentation method that aims to improve the performance of deep learning networks when the database has specific characteristics (e.g., fitness movement) and not enough data is available.
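The pulse-rate estimation step the abstract refers to is commonly done in the frequency domain: take the dominant spectral peak of the (filtered) signal inside a plausible heart-rate band. The sketch below is a generic baseline under assumed parameters (band limits, sampling rate, synthetic signal), not the RTRPPG pipeline itself:

```python
import numpy as np

def estimate_pulse_rate(signal, fs, lo=0.7, hi=3.0):
    """Estimate pulse rate in BPM as the dominant spectral peak of an
    rPPG-like signal, restricted to a 0.7-3.0 Hz band (42-180 BPM)."""
    signal = signal - np.mean(signal)                  # remove DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)   # frequency axis
    power = np.abs(np.fft.rfft(signal)) ** 2           # power spectrum
    band = (freqs >= lo) & (freqs <= hi)               # heart-rate band only
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic "pulse" at 1.2 Hz (72 BPM) buried in noise: 20 s at 30 fps,
# roughly the frame rate of the webcam videos used in rPPG work.
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(t.size)
bpm = estimate_pulse_rate(signal, fs)
print(round(bpm, 1))   # 72.0 with this seed
```

With a 20 s window the spectral resolution is 0.05 Hz (3 BPM), which is why longer windows or interpolation around the peak are used when finer estimates are needed.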
Castelli, Filippo Maria. "3D CNN methods in biomedical image segmentation". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18796/.
Full text source
Casserfelt, Karl. "A Deep Learning Approach to Video Processing for Scene Recognition in Smart Office Environments". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20429.
Pełny tekst źródłaLyu, Kai-Dy, i 呂凱迪. "Investigating the effects of hypoxia-induction on MMP-2, MMP-9 in A549 cells". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3dcrn2.
Full text source
Chung Shan Medical University (中山醫學大學)
Master's Program, Department of Medical Laboratory and Biotechnology (醫學檢驗暨生物技術學系碩士班)
102
Hypoxia, caused by oxygen deprivation, is a common feature of solid tumors. Hypoxia-inducible factors (HIFs) are stress-responsive transcriptional regulators of cellular and physiological processes involved in oxygen metabolism, and HIF-1α is overexpressed when cells are under hypoxic conditions. To validate the CoCl2-induced hypoxia cell model, A549 cells were treated with 100 μM CoCl2 for 0–24 hours, or with various concentrations of CoCl2 (0–200 μM) for 24 hours, and Western blot analysis was used to examine the expression of HIF-1α protein. The protein expression level of HIF-1α was increased by the addition of CoCl2 in a time-dependent and dose-dependent manner. Matrix metalloproteinases (MMPs) are a family of Zn2+- and Ca2+-dependent proteolytic enzymes. MMPs have a potent ability to degrade structural proteins of the extracellular matrix (ECM) and to remodel tissues for morphogenesis, angiogenesis, neurogenesis, and tissue repair, but they also have detrimental effects in carcinogenesis, including on migration (adhesion/dispersion), differentiation, angiogenesis, and apoptosis. Previous studies have confirmed that MMP-2 and MMP-9 are the gelatinases responsible for the degradation of gelatin, release of cell surface receptors, apoptosis, and chemokine/cytokine/activator cleavage. Overexpression of MMP-2 and MMP-9 can increase tumor cell growth, angiogenesis, invasion, and tumor progression. In this study, exposure of A549 cells to CoCl2 increased the mRNA and protein expression of MMP-2 and MMP-9 in a dose-dependent manner. Enzyme activities of MMP-2 and MMP-9 were investigated using gelatin zymography, which showed that CoCl2 treatment increased the activities of both. Taken together, our data show that the viability and migration of A549 cells were stimulated under hypoxic conditions, and that hypoxia induction also increased the protein expression, mRNA expression, and enzyme activities of MMP-2 and MMP-9.
These two molecules may participate in important mechanisms in cancer cells under hypoxia.
TSAI, BING-SHIUAN, and 蔡秉軒. "AJAX based Modularized EMV framework". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3dc4n4.
Full text source
National Kaohsiung University of Applied Sciences (國立高雄應用科技大學)
Master's Program, Graduate Institute of Information Management (資訊管理研究所碩士班)
106
With the rapid development of smart mobile devices and the Internet, it has become easier for the public to obtain information, and applications and commercial systems are gradually going mobile in the form of Hybrid Apps. The MVC architecture is the most common software architecture model for web page and system development. It divides a system into multiple parts, each performing its own function, which reduces code repetition and increases scalability, so most software frameworks use MVC as their benchmark. Well-known PHP frameworks such as Laravel and Phalcon have their own advantages in system development, but when they are used to write the web pages needed for a Hybrid App, the conversion is not a good fit because the system carriers differ. The purpose of the current study is therefore to build on common object-oriented and event-driven practice, use AJAX for data transfer, and combine CSS, HTML, JavaScript, and PHP, in the spirit of MVC, into a new framework suitable for Hybrid App development. It is expected to improve on the deficiencies of existing frameworks and make application development easier and faster in the future.
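The MVC separation the abstract describes can be shown in a few lines. This is a language-agnostic sketch in Python — the thesis's framework is PHP-based, and the class and method names here are illustrative only — where the controller mediates between a data model and a view that emits the JSON-like payload an AJAX endpoint would return:

```python
class Model:
    """Holds application data; knows nothing about presentation."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def all(self):
        return list(self._items)


class View:
    """Renders data; knows nothing about storage. Returns a JSON-like
    string, mimicking what an AJAX endpoint sends to a Hybrid App."""
    @staticmethod
    def render(items):
        return '{"items": [' + ", ".join(f'"{i}"' for i in items) + "]}"


class Controller:
    """Receives 'requests', updates the model, asks the view to render."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, action, payload=None):
        if action == "add":
            self.model.add(payload)
        return self.view.render(self.model.all())


app = Controller(Model(), View())
app.handle("add", "first")
response = app.handle("add", "second")
print(response)   # {"items": ["first", "second"]}
```

Because the view only ever sees plain data, it can be swapped (HTML page, JSON for AJAX, native Hybrid App shell) without touching model or controller — the scalability benefit the abstract attributes to MVC.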
HSIEH, MING-CHUAN, and 謝明娟. "Role of Iron-containing Alcohol Dehydrogenases in Alcohol Metabolism and Stress Resistance in Acinetobacter baumannii". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3dcndg.
Full text source
Book chapters on "3DCNNs"
Wang, Yingdong, Qingfeng Wu, and Qunsheng Ruan. "EEG Emotion Classification Using 2D-3DCNN". In Knowledge Science, Engineering and Management, 645–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_52.
Full text source
Tian, Zhenhuan, Yizhuan Jia, Xuejun Men, and Zhongwei Sun. "3DCNN for Pulmonary Nodule Segmentation and Classification". In Lecture Notes in Computer Science, 386–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50516-5_34.
Full text source
Wang, Yahui, Huimin Ma, Xinpeng Xing, and Zeyu Pan. "Eulerian Motion Based 3DCNN Architecture for Facial Micro-Expression Recognition". In MultiMedia Modeling, 266–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_22.
Full text source
Liu, Jihong, Jing Zhang, Hui Zhang, Xi Liang, and Li Zhuo. "Extracting Deep Video Feature for Mobile Video Classification with ELU-3DCNN". In Communications in Computer and Information Science, 151–59. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_15.
Full text source
Raju, Manu, and Ajin R. Nair. "Abnormal Cardiac Condition Classification of ECG Using 3DCNN - A Novel Approach". In IFMBE Proceedings, 219–30. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51120-2_24.
Full text source
Elangovan, Taranya, R. Arockia Xavier Annie, Keerthana Sundaresan, and J. D. Pradhakshya. "Hand Gesture Recognition for Sign Languages Using 3DCNN for Efficient Detection". In Computer Methods, Imaging and Visualization in Biomechanics and Biomedical Engineering II, 215–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10015-4_19.
Full text source
Jiang, Siyu, and Yimin Chen. "Hand Gesture Recognition by Using 3DCNN and LSTM with Adam Optimizer". In Advances in Multimedia Information Processing – PCM 2017, 743–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77380-3_71.
Full text source
Jaswal, Gaurav, Seshan Srirangarajan, and Sumantra Dutta Roy. "Range-Doppler Hand Gesture Recognition Using Deep Residual-3DCNN with Transformer Network". In Pattern Recognition. ICPR International Workshops and Challenges, 759–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68780-9_57.
Full text source
Islam, Md Sajjatul, Yuan Gao, Zhilong Ji, Jiancheng Lv, Adam Ahmed Qaid Mohammed, and Yongsheng Sang. "3DCNN Backed Conv-LSTM Auto Encoder for Micro Facial Expression Video Recognition". In Machine Learning and Intelligent Communications, 90–105. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04409-0_9.
Full text source
Luo, Tengqi, Yueming Ding, Rongxi Cui, Xingwang Lu, and Qinyue Tan. "Short-Term Photovoltaic Power Prediction Based on 3DCNN and CLSTM Hybrid Model". In Lecture Notes in Electrical Engineering, 679–86. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0877-2_71.
Full text source
Conference abstracts on "3DCNNs"
Power, David, and Ihsan Ullah. "Automated Assessment of Simulated Laparoscopic Surgical Performance using 3DCNN". In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10782160.
Full text source
Zhong, Jiangnan, Ling Zhang, Cheng Li, Jiong Niu, Zhaokai Liu, Cheng Wang, and Zongtai Li. "Target Detection in Clutter Regions Based on 3DCNN for HFSWR". In OCEANS 2024 - SINGAPORE, 1–4. IEEE, 2024. http://dx.doi.org/10.1109/oceans51537.2024.10682236.
Full text source
Men, Yutao, Jian Luo, Zixian Zhao, Hang Wu, Feng Luo, Guang Zhang, and Ming Yu. "Surgical Gesture Recognition in Open Surgery Based on 3DCNN and SlowFast". In 2024 IEEE 7th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 429–33. IEEE, 2024. http://dx.doi.org/10.1109/itnec60942.2024.10733142.
Full text source
Fang, Chun-Ting, Tsung-Jung Liu, and Kuan-Hsien Liu. "Micro-Expression Recognition Based On 3DCNN Combined With GRU and New Attention Mechanism". In 2024 IEEE International Conference on Image Processing (ICIP), 2466–72. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10648137.
Full text source
Chen, Siyu, Fei Luo, Jianhua Du, and Liang Cao. "Short-Term Forecasting of High-Altitude Wind Fields Along Flight Routes Based on CAM-ConvLSTM-3DCNN". In 2024 2nd International Conference on Algorithm, Image Processing and Machine Vision (AIPMV), 337–42. IEEE, 2024. http://dx.doi.org/10.1109/aipmv62663.2024.10691906.
Full text source
Kopuz, Barış, and Nihan Kahraman. "Comparison and Analysis of LSTM-Capsule Networks and 3DConv-LSTM Autoencoder in Ambient Anomaly Detection". In 2024 15th National Conference on Electrical and Electronics Engineering (ELECO), 1–5. IEEE, 2024. https://doi.org/10.1109/eleco64362.2024.10847263.
Full text source
Yi, Yang, Feng Ni, Yuexin Ma, Xinge Zhu, Yuankai Qi, Riming Qiu, Shijie Zhao, Feng Li, and Yongtao Wang. "High Performance Gesture Recognition via Effective and Efficient Temporal Modeling". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/141.
Full text source
Lin, Yangping, Zhiqiang Ning, Jia Liu, Mingshu Zhang, Pei Chen, and Xiaoyuan Yang. "Video steganography network based on 3DCNN". In 2021 International Conference on Digital Society and Intelligent Systems (DSInS). IEEE, 2021. http://dx.doi.org/10.1109/dsins54396.2021.9670614.
Full text source
Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng, and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.
Full text source
Tatebe, Yoshiki, Daisuke Deguchi, Yasutomo Kawanishi, Ichiro Ide, Hiroshi Murase, and Utsushi Sakai. "Pedestrian detection from sparse point-cloud using 3DCNN". In 2018 International Workshop on Advanced Image Technology (IWAIT). IEEE, 2018. http://dx.doi.org/10.1109/iwait.2018.8369680.
Full text source