Table of contents
A selection of scientific literature on the topic "3DCNNs"
Cite a source in APA, MLA, Chicago, Harvard, or another citation style
Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "3DCNNs."
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever these are available in the work's metadata.
Journal articles on the topic "3DCNNs"
Paralic, Martin, Kamil Zelenak, Patrik Kamencay, and Robert Hudec. "Automatic Approach for Brain Aneurysm Detection Using Convolutional Neural Networks". Applied Sciences 13, no. 24 (December 16, 2023): 13313. http://dx.doi.org/10.3390/app132413313.
Vrskova, Roberta, Patrik Kamencay, Robert Hudec, and Peter Sykora. "A New Deep-Learning Method for Human Activity Recognition". Sensors 23, no. 5 (March 4, 2023): 2816. http://dx.doi.org/10.3390/s23052816.
Wang, Dingheng, Guangshe Zhao, Guoqi Li, Lei Deng, and Yang Wu. "Compressing 3DCNNs based on tensor train decomposition". Neural Networks 131 (November 2020): 215–30. http://dx.doi.org/10.1016/j.neunet.2020.07.028.
Hong, Qingqing, Xinyi Zhong, Weitong Chen, Zhenghua Zhang, Bin Li, Hao Sun, Tianbao Yang, and Changwei Tan. "SATNet: A Spatial Attention Based Network for Hyperspectral Image Classification". Remote Sensing 14, no. 22 (November 21, 2022): 5902. http://dx.doi.org/10.3390/rs14225902.
Gomez-Donoso, Francisco, Felix Escalona, and Miguel Cazorla. "Par3DNet: Using 3DCNNs for Object Recognition on Tridimensional Partial Views". Applied Sciences 10, no. 10 (May 14, 2020): 3409. http://dx.doi.org/10.3390/app10103409.
Motamed, Sara, and Elham Askari. "Detection of handgun using 3D convolutional neural network model (3DCNNs)". Signal and Data Processing 20, no. 2 (September 1, 2023): 69–79. http://dx.doi.org/10.61186/jsdp.20.2.69.
Firsov, Nikita, Evgeny Myasnikov, Valeriy Lobanov, Roman Khabibullin, Nikolay Kazanskiy, Svetlana Khonina, Muhammad A. Butt, and Artem Nikonorov. "HyperKAN: Kolmogorov–Arnold Networks Make Hyperspectral Image Classifiers Smarter". Sensors 24, no. 23 (November 30, 2024): 7683. https://doi.org/10.3390/s24237683.
Alharbi, Yasser F., and Yousef A. Alotaibi. "Decoding Imagined Speech from EEG Data: A Hybrid Deep Learning Approach to Capturing Spatial and Temporal Features". Life 14, no. 11 (November 18, 2024): 1501. http://dx.doi.org/10.3390/life14111501.
Wei, Minghua, and Feng Lin. "A novel multi-dimensional features fusion algorithm for the EEG signal recognition of brain's sensorimotor region activated tasks". International Journal of Intelligent Computing and Cybernetics 13, no. 2 (June 8, 2020): 239–60. http://dx.doi.org/10.1108/ijicc-02-2020-0019.
Torres, Felipe Soares, Shazia Akbar, Srinivas Raman, Kazuhiro Yasufuku, Felix Baldauf-Lenschen, and Natasha B. Leighl. "Automated imaging-based stratification of early-stage lung cancer patients prior to receiving surgical resection using deep learning applied to CTs". Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 1552. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.1552.
Dissertations on the topic "3DCNNs"
Ali, Abid. "Analyse vidéo à l'aide de réseaux de neurones profonds : une application pour l'autisme". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4066.
Understanding actions in videos is a crucial element of computer vision with significant implications across various fields. As our dependence on visual data grows, comprehending and interpreting human actions in videos becomes essential for advancing technologies in surveillance, healthcare, autonomous systems, and human-computer interaction. The accurate interpretation of actions in videos is fundamental for creating intelligent systems that can effectively navigate and respond to the complexities of the real world. In this context, advances in action understanding push the boundaries of computer vision and play a crucial role in shaping the landscape of cutting-edge applications that impact our daily lives. Computer vision has made significant progress with the rise of deep learning methods such as convolutional neural networks (CNNs), enabling the community to advance in many domains, including image segmentation, object detection, scene understanding, and more. However, video processing remains limited compared to static images. In this thesis, we focus on action understanding, dividing it into two main parts, action recognition and action detection, and their application in the medical domain for autism analysis. We explore the various aspects and challenges of video understanding from a general and an application-specific perspective, and then present our contributions and solutions to address these challenges. In addition, we introduce the ACTIVIS dataset, designed to diagnose autism in young children. Our work is divided into two main parts: generic modeling and applied models. Initially, we focus on adapting image models for action recognition tasks by incorporating temporal modeling using parameter-efficient fine-tuning (PEFT) techniques.
We also address real-time action detection and anticipation by proposing a new joint model for action anticipation and online action detection in real-life scenarios. Furthermore, we introduce a new task called 'loose-interaction' in dyadic situations and its applications in autism analysis. Finally, we concentrate on the applied aspect of video understanding by proposing an action recognition model for repetitive behaviors in videos of autistic individuals. We conclude by proposing a weakly supervised method to estimate the severity score of autistic children in long videos.
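The parameter-efficient fine-tuning idea mentioned in the abstract above can be illustrated with a small low-rank (LoRA-style) adapter sketch: the pretrained weight stays frozen, and only two small factors are trained. All names, shapes, and sizes below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Sketch of parameter-efficient fine-tuning via a low-rank adapter.
# The frozen pretrained weight W is never updated; only the small
# factors A and B would receive gradients during fine-tuning.
# Sizes (768, rank 8) are illustrative, not from the thesis.

def lowrank_adapter(d_in=768, d_out=768, rank=8, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
    A = np.zeros((d_out, rank))                  # trainable up-projection,
                                                 # zero-init so the adapter
                                                 # starts as a no-op
    B = rng.standard_normal((rank, d_in)) * 0.01 # trainable down-projection

    def forward(x):
        # Effective weight is W + A @ B; only A and B are trainable.
        return (W + A @ B) @ x

    trainable = A.size + B.size
    frozen = W.size
    return forward, trainable, frozen

forward, trainable, frozen = lowrank_adapter()
print(trainable, frozen)  # → 12288 589824
```

The point of the sketch is the parameter count: the adapter trains roughly 2% of the parameters a full fine-tune of `W` would touch, which is what makes adapting large pretrained image models to video tasks tractable.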
Botina Monsalve, Deivid. "Remote photoplethysmography measurement and filtering using deep learning based methods". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCK061.
Remote photoplethysmography (RPPG) is a technique developed to measure the blood volume pulse signal and then estimate physiological data such as pulse rate, breathing rate, and pulse rate variability. Due to the multiple sources of noise that deteriorate the quality of the RPPG signal, conventional filters are commonly used. However, some alterations remain, and interestingly, an experienced eye can easily identify them. In the first part of this thesis, we propose the Long Short-Term Memory Deep-Filter (LSTMDF) network for the RPPG filtering task. We use different protocols to analyze the performance of the method and demonstrate how the network can be trained efficiently with a few signals. Our study demonstrates experimentally the superiority of the LSTM-based filter compared with conventional filters. We found a network sensitivity related to the average signal-to-noise ratio of the RPPG signals. Approaches based on convolutional networks such as 3DCNNs have recently outperformed traditional hand-crafted methods in the RPPG measurement task. However, it is well known that large 3DCNN models have high computational costs and may be unsuitable for real-time applications. As the second contribution of this thesis, we propose a study of a 3DCNN architecture, finding the best compromise between pulse rate measurement precision and inference time. We use an ablation study where we decrease the input size, propose a custom loss function, and evaluate the impact of different input color spaces. The result is the Real-Time RPPG (RTRPPG), an end-to-end RPPG measurement framework that can be used on GPU and CPU. We also propose a data augmentation method that aims to improve the performance of deep learning networks when the database has specific characteristics (e.g., fitness movement) and when there is not enough data available.
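The 3DCNN measurement networks discussed in the abstract above rest on one core operation: convolving a filter jointly over the temporal and spatial axes of a video volume. A minimal NumPy sketch of a "valid" single-channel 3D convolution follows; shapes and values are illustrative, not taken from the thesis.

```python
import numpy as np

# Minimal valid-mode 3D convolution over a single-channel video volume
# of shape (T, H, W), showing how 3DCNNs filter jointly over time and
# space. Real networks add channels, learned kernels, and stride/padding.

def conv3d_valid(video, kernel):
    t, h, w = video.shape
    kt, kh, kw = kernel.shape
    out = np.empty((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value summarizes a small spatiotemporal cube.
                out[i, j, k] = np.sum(video[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

video = np.random.default_rng(1).standard_normal((16, 8, 8))  # T, H, W
kernel = np.ones((3, 3, 3)) / 27.0  # 3x3x3 spatiotemporal averaging filter
features = conv3d_valid(video, kernel)
print(features.shape)  # → (14, 6, 6)
```

Each output voxel depends on a 3-frame temporal window, which is exactly what lets such networks pick up periodic intensity changes (like a pulse signal) that a per-frame 2D convolution cannot see; the cost is the extra temporal loop, motivating the inference-time trade-offs the thesis studies.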
Castelli, Filippo Maria. "3D CNN methods in biomedical image segmentation". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18796/.
Casserfelt, Karl. "A Deep Learning Approach to Video Processing for Scene Recognition in Smart Office Environments". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20429.
Der volle Inhalt der QuelleLyu, Kai-Dy, und 呂凱迪. „Investigating the effects of hypoxia-induction on MMP-2, MMP-9 in A549 cells“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3dcrn2.
Chung Shan Medical University
Master's Program, Department of Medical Laboratory and Biotechnology
102 (ROC calendar, i.e., 2013)
Hypoxia, caused by oxygen deprivation, is a common feature of solid tumors. Hypoxia-inducible factors (HIFs) are stress-responsive transcriptional regulators of cellular and physiological processes involved in oxygen metabolism. HIF-1α is overexpressed when cells are under hypoxic conditions. To validate the CoCl2-induced hypoxia cell model, A549 cells were treated with 100 μM CoCl2 for 0-24 hours, or with various concentrations of CoCl2 (0-200 μM) for 24 hours. Western blot analysis was used to examine the expression of HIF-1α protein, which increased upon the addition of CoCl2 in a time- and dose-dependent manner. Matrix metalloproteinases (MMPs) are a family of Zn2+- and Ca2+-dependent proteolytic enzymes. MMPs have a potent ability to degrade structural proteins of the extracellular matrix (ECM) and to remodel tissues for morphogenesis, angiogenesis, neurogenesis, and tissue repair. However, MMPs also have detrimental effects in carcinogenesis, including on migration (adhesion/dispersion), differentiation, angiogenesis, and apoptosis. Previous studies have confirmed that MMP-2 and MMP-9 are the gelatinases responsible for the degradation of gelatin, the release of cell surface receptors, apoptosis, and chemokine/cytokine/activator cleavage. Overexpression of MMP-2 and MMP-9 can increase tumor cell growth, angiogenesis, invasion, and tumor progression. In this study, exposure of A549 cells to CoCl2 increased mRNA and protein expression of MMP-2 and MMP-9 in a dose-dependent manner. Enzyme activities of MMP-2 and MMP-9 were investigated using gelatin zymography, which showed that CoCl2 treatment increased the activities of both. Taken together, our data show that the viability and migration of A549 cells were stimulated under hypoxic conditions, and that hypoxia induction also increases the protein expression, mRNA expression, and enzyme activities of MMP-2 and MMP-9.
These two molecules may participate in important mechanisms in cancer cells under hypoxia.
TSAI, BING-SHIUAN (蔡秉軒). "AJAX based Modularized EMV framework". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/3dc4n4.
National Kaohsiung University of Applied Sciences
Master's Program, Graduate Institute of Information Management
106 (ROC calendar, i.e., 2017)
With the rapid development of smart mobile devices and the Internet, it has become easier for the public to obtain information, and applications and commercial systems are gradually going mobile in the form of Hybrid Apps. The MVC architecture is the most common software architecture pattern for web page and system development. It divides a system into multiple parts, each performing its own function, which reduces code repetition and increases scalability; consequently, most software frameworks use MVC as their benchmark. Well-known PHP frameworks such as Laravel and Phalcon have their own advantages in system development, but if they are used to write the web pages needed for a Hybrid App, the conversion is unsuitable because of the different system carriers. The purpose of the current study is therefore to build on common object-oriented and event-driven practice, using AJAX for data transfer and combining CSS, HTML, JavaScript, and PHP in the spirit of MVC, to launch a new framework suitable for Hybrid App development. It is expected to improve on the deficiencies of existing frameworks and make future application development easier and faster.
HSIEH, MING-CHUAN (謝明娟). "Role of Iron-containing Alcohol Dehydrogenases in Alcohol Metabolism and Stress Resistance in Acinetobacter baumannii". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3dcndg.
Book chapters on the topic "3DCNNs"
Wang, Yingdong, Qingfeng Wu, and Qunsheng Ruan. "EEG Emotion Classification Using 2D-3DCNN". In Knowledge Science, Engineering and Management, 645–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10986-7_52.
Tian, Zhenhuan, Yizhuan Jia, Xuejun Men, and Zhongwei Sun. "3DCNN for Pulmonary Nodule Segmentation and Classification". In Lecture Notes in Computer Science, 386–95. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50516-5_34.
Wang, Yahui, Huimin Ma, Xinpeng Xing, and Zeyu Pan. "Eulerian Motion Based 3DCNN Architecture for Facial Micro-Expression Recognition". In MultiMedia Modeling, 266–77. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37731-1_22.
Liu, Jihong, Jing Zhang, Hui Zhang, Xi Liang, and Li Zhuo. "Extracting Deep Video Feature for Mobile Video Classification with ELU-3DCNN". In Communications in Computer and Information Science, 151–59. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8530-7_15.
Raju, Manu, and Ajin R. Nair. "Abnormal Cardiac Condition Classification of ECG Using 3DCNN - A Novel Approach". In IFMBE Proceedings, 219–30. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51120-2_24.
Elangovan, Taranya, R. Arockia Xavier Annie, Keerthana Sundaresan, and J. D. Pradhakshya. "Hand Gesture Recognition for Sign Languages Using 3DCNN for Efficient Detection". In Computer Methods, Imaging and Visualization in Biomechanics and Biomedical Engineering II, 215–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10015-4_19.
Jiang, Siyu, and Yimin Chen. "Hand Gesture Recognition by Using 3DCNN and LSTM with Adam Optimizer". In Advances in Multimedia Information Processing – PCM 2017, 743–53. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77380-3_71.
Jaswal, Gaurav, Seshan Srirangarajan, and Sumantra Dutta Roy. "Range-Doppler Hand Gesture Recognition Using Deep Residual-3DCNN with Transformer Network". In Pattern Recognition. ICPR International Workshops and Challenges, 759–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68780-9_57.
Islam, Md Sajjatul, Yuan Gao, Zhilong Ji, Jiancheng Lv, Adam Ahmed Qaid Mohammed, and Yongsheng Sang. "3DCNN Backed Conv-LSTM Auto Encoder for Micro Facial Expression Video Recognition". In Machine Learning and Intelligent Communications, 90–105. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04409-0_9.
Luo, Tengqi, Yueming Ding, Rongxi Cui, Xingwang Lu, and Qinyue Tan. "Short-Term Photovoltaic Power Prediction Based on 3DCNN and CLSTM Hybrid Model". In Lecture Notes in Electrical Engineering, 679–86. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-0877-2_71.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "3DCNNs"
Power, David, and Ihsan Ullah. "Automated Assessment of Simulated Laparoscopic Surgical Performance using 3DCNN". In 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–4. IEEE, 2024. https://doi.org/10.1109/embc53108.2024.10782160.
Zhong, Jiangnan, Ling Zhang, Cheng Li, Jiong Niu, Zhaokai Liu, Cheng Wang, and Zongtai Li. "Target Detection in Clutter Regions Based on 3DCNN for HFSWR". In OCEANS 2024 - SINGAPORE, 1–4. IEEE, 2024. http://dx.doi.org/10.1109/oceans51537.2024.10682236.
Men, Yutao, Jian Luo, Zixian Zhao, Hang Wu, Feng Luo, Guang Zhang, and Ming Yu. "Surgical Gesture Recognition in Open Surgery Based on 3DCNN and SlowFast". In 2024 IEEE 7th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), 429–33. IEEE, 2024. http://dx.doi.org/10.1109/itnec60942.2024.10733142.
Fang, Chun-Ting, Tsung-Jung Liu, and Kuan-Hsien Liu. "Micro-Expression Recognition Based On 3DCNN Combined With GRU and New Attention Mechanism". In 2024 IEEE International Conference on Image Processing (ICIP), 2466–72. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10648137.
Chen, Siyu, Fei Luo, Jianhua Du, and Liang Cao. "Short-Term Forecasting of High-Altitude Wind Fields Along Flight Routes Based on CAM-ConvLSTM-3DCNN". In 2024 2nd International Conference on Algorithm, Image Processing and Machine Vision (AIPMV), 337–42. IEEE, 2024. http://dx.doi.org/10.1109/aipmv62663.2024.10691906.
Kopuz, Barış, and Nihan Kahraman. "Comparison and Analysis of LSTM-Capsule Networks and 3DConv-LSTM Autoencoder in Ambient Anomaly Detection". In 2024 15th National Conference on Electrical and Electronics Engineering (ELECO), 1–5. IEEE, 2024. https://doi.org/10.1109/eleco64362.2024.10847263.
Yi, Yang, Feng Ni, Yuexin Ma, Xinge Zhu, Yuankai Qi, Riming Qiu, Shijie Zhao, Feng Li, and Yongtao Wang. "High Performance Gesture Recognition via Effective and Efficient Temporal Modeling". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/141.
Lin, Yangping, Zhiqiang Ning, Jia Liu, Mingshu Zhang, Pei Chen, and Xiaoyuan Yang. "Video steganography network based on 3DCNN". In 2021 International Conference on Digital Society and Intelligent Systems (DSInS). IEEE, 2021. http://dx.doi.org/10.1109/dsins54396.2021.9670614.
Zhang, Daichi, Chenyu Li, Fanzhao Lin, Dan Zeng, and Shiming Ge. "Detecting Deepfake Videos with Temporal Dropout 3DCNN". In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/178.
Tatebe, Yoshiki, Daisuke Deguchi, Yasutomo Kawanishi, Ichiro Ide, Hiroshi Murase, and Utsushi Sakai. "Pedestrian detection from sparse point-cloud using 3DCNN". In 2018 International Workshop on Advanced Image Technology (IWAIT). IEEE, 2018. http://dx.doi.org/10.1109/iwait.2018.8369680.