A ready-made bibliography on the topic “VGG16 MODEL”

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic “VGG16 MODEL”.

An “Add to bibliography” button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.

Journal articles on the topic “VGG16 MODEL”

1. Sukegawa, Shintaro, Kazumasa Yoshii, Takeshi Hara, Katsusuke Yamashita, Keisuke Nakano, Norio Yamamoto, Hitoshi Nagatsuka, and Yoshihiko Furuki. "Deep Neural Networks for Dental Implant System Classification". Biomolecules 10, no. 7 (July 1, 2020): 984. http://dx.doi.org/10.3390/biom10070984.

Abstract:
In this study, we used panoramic X-ray images to classify and clarify the accuracy of different dental implant brands via deep convolutional neural networks (CNNs) with transfer-learning strategies. For objective labeling, 8859 implant images of 11 implant systems were used from digital panoramic radiographs obtained from patients who underwent dental implant treatment at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2019. Five deep CNN models (specifically, a basic CNN with three convolutional layers, VGG16 and VGG19 transfer-learning models, and finely tuned VGG16 and VGG19) were evaluated for implant classification. Among the five models, the finely tuned VGG16 model exhibited the highest implant classification performance, followed by the finely tuned VGG19 and the plain transfer-learning VGG16. We confirmed that the finely tuned VGG16 and VGG19 CNNs could accurately classify the 11 dental implant systems from panoramic X-ray images.
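
The distinction the abstract draws between plain transfer learning and fine-tuning can be made concrete with a short Keras sketch; the 11-class output comes from the abstract, while the input size, head layers, and learning rate are illustrative assumptions.

# Sketch of the transfer-learning vs. fine-tuning setup compared above.
# The 11 implant-system classes come from the abstract; head layers and
# hyperparameters are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Plain transfer learning freezes every convolutional layer; fine-tuning
# additionally re-trains the last convolutional block (block5).
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(11, activation="softmax"),     # 11 implant systems
])
model.compile(optimizer=optimizers.Adam(1e-5),  # small LR while fine-tuning
              loss="categorical_crossentropy", metrics=["accuracy"])
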
2. Kumar, Vijay, Anis Zarrad, Rahul Gupta, and Omar Cheikhrouhou. "COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques". Journal of Healthcare Engineering 2022 (April 11, 2022): 1–13. http://dx.doi.org/10.1155/2022/6216273.

Abstract:
In this paper, modifications of recent architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 from chest X-rays. The proposed architectures, termed "COV-DLS", consist of two phases: head-model construction and classification. The head-model construction phase uses four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. These architectures are modified by incorporating an average pooling layer and dense layers; a dropout layer is added to prevent overfitting, and two dense layers with different activation functions are appended. The outputs of these modified models are then used in the classification phase, in which COV-DLS is applied to a COVID-19 chest X-ray image dataset. Classification accuracy of 98.61% is achieved by Modified-VGG16, 97.22% by Modified-VGG19, 95.13% by Modified-ResNet50, and 99.31% by Modified-InceptionV3. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
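
One plausible reading of the described modification, an added average-pooling layer, a dropout layer, and two dense layers with different activations on top of a pretrained backbone, sketched in Keras (layer widths, the dropout rate, and the two-class output are assumptions):

# Sketch of a "Modified-VGG16" head as described above; widths, the dropout
# rate, and the binary COVID/normal output are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),        # the added average-pooling layer
    layers.Dense(128, activation="relu"),   # first dense layer
    layers.Dropout(0.5),                    # dropout against overfitting
    layers.Dense(2, activation="softmax"),  # second dense layer, different activation
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
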
3. Lai, Ren Yu, Kim Gaik Tay, Audrey Huong, Chang Choon Chew, and Shuhaida Ismail. "Dorsal hand Vein Authentication System Using Convolution Neural Network". International Journal of Emerging Technology and Advanced Engineering 12, no. 8 (August 2, 2022): 83–90. http://dx.doi.org/10.46338/ijetae0822_11.

Abstract:
This study proposes a dorsal hand vein authentication system using transfer learning from the VGG16 and VGG19 convolutional neural network models. The required images were obtained from the Bosphorus Hand Vein Database. Among the 100 users, the first 80 were treated as registered users and the remaining 20 as unregistered. 960 left-hand images of the registered users were used during the training phase, while 100 images, comprising 80 registered and 20 unregistered users, were randomly selected to test the authentication application. Our results showed that VGG19 produced superior validation accuracy compared to VGG16 (96.9% versus 94.3%). The testing accuracies of VGG16 and VGG19 were 99% and 100%, respectively. Since VGG19 outperformed its shallower counterpart, we implemented a User Interface (UI) based on the VGG19 model for dorsal hand vein identification. These findings indicate that our system may be deployed for biometric authentication in the future for a more efficient and secure implementation of person identification and imposter detection.
4. Bodavarapu, Pavan Nageswar Reddy, and P. V. V. S. Srinivas. "Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques". Indian Journal of Science and Technology 14, no. 12 (March 27, 2021): 971–83. http://dx.doi.org/10.17485/ijst/v14i12.14.

Abstract:
Background/Objectives: Only limited research has been done on facial expression recognition for low-resolution images. In the real world, most images are low resolution and may also contain noise, so this study designs a novel convolutional neural network model (FERConvNet) that performs better on low-resolution images. Methods: We proposed a model and compared it with state-of-the-art models on the FER2013 dataset. Since no publicly available dataset contains low-resolution images for facial expression recognition (anger, sad, disgust, happy, surprise, neutral, fear), we created a Low Resolution Facial Expression (LRFE) dataset containing more than 6000 images of the seven facial expressions. The existing FER2013 dataset and the LRFE dataset were used, each divided in an 80:20 ratio for training versus testing and validation. We also propose an HDM, a hybrid denoising method combining a Gaussian filter, a bilateral filter, and a non-local means denoising filter; this hybrid denoising helps increase the performance of the convolutional neural network. The proposed model was then compared with the VGG16 and VGG19 models. Findings: The experimental results show that the proposed FERConvNet_HDM approach is more effective than VGG16 and VGG19 for facial expression recognition on both the FER2013 and LRFE datasets. FERConvNet_HDM achieved 85% accuracy on FER2013, outperforming the VGG16 and VGG19 models, whose accuracies are 60% and 53%, respectively; on the LRFE dataset it achieved 95% accuracy. Novelty/Applications: Combining HDM with convolutional neural networks increases their performance in facial expression recognition. Keywords: Facial expression recognition; facial emotion; convolutional neural network; deep learning; computer vision
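
The abstract names the three filters in the HDM but not their settings; one natural OpenCV chaining, with assumed kernel sizes and filter strengths, is:

# Sketch of a hybrid denoising method (HDM) chaining the three filters the
# abstract names. Kernel sizes and strengths are illustrative assumptions.
import cv2

def hybrid_denoise(gray_image):
    out = cv2.GaussianBlur(gray_image, (5, 5), 0)      # Gaussian filter
    out = cv2.bilateralFilter(out, 9, 75, 75)          # bilateral filter
    out = cv2.fastNlMeansDenoising(out, None, h=10,    # non-local means filter
                                   templateWindowSize=7, searchWindowSize=21)
    return out

# Usage (hypothetical file name):
# clean = hybrid_denoise(cv2.imread("face.png", cv2.IMREAD_GRAYSCALE))
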
5. Shinde, Krishna K., and C. N. Kayte. "Fingerprint Recognition Based on Deep Learning Pre-Train with Our Best CNN Model for Person Identification". ECS Transactions 107, no. 1 (April 24, 2022): 2209–20. http://dx.doi.org/10.1149/10701.2209ecst.

Abstract:
In this article, we use VGG16, VGG19, and ResNet50 pre-trained with ImageNet weights, together with our best CNN model, to identify human fingerprint patterns. The system includes a pre-processing phase in which the input fingerprint images are first cropped and resized to remove unwanted regions and normalize their dimensions, then enhanced to reduce noise along the ridgelines, and finally smoothed with a Gaussian filter before Canny edge detection. Each model is then applied to the KVKR fingerprint dataset. Our best CNN model extracts features automatically, and the RMSprop optimizer is used to train the classifier on these features. The experiments cover each pre-processed dataset and test the three models with different sizes of training, test, and validation data. The VGG16 model achieved better recognition accuracy than the VGG19 and ResNet50 models.
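
The preprocessing chain described (cropping and dimension normalization, enhancement, and Gaussian-smoothed Canny edge detection) might look roughly as follows in OpenCV; the crop margin, target size, and thresholds are assumptions:

# Rough OpenCV sketch of the described preprocessing chain: crop/normalize
# dimensions, enhance contrast, then Gaussian-smoothed Canny edge detection.
# The crop margin, target size, and thresholds are illustrative assumptions.
import cv2

def preprocess_fingerprint(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = img[20:-20, 20:-20]               # crop away an unwanted border
    img = cv2.resize(img, (224, 224))       # normalize dimensions
    img = cv2.equalizeHist(img)             # enhance ridge contrast
    img = cv2.GaussianBlur(img, (5, 5), 0)  # Gaussian smoothing against noise
    edges = cv2.Canny(img, 100, 200)        # Canny edge detection
    return edges
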
6. Athavale, Vijay Anant, Suresh Chand Gupta, Deepak Kumar, and Savita. "Human Action Recognition Using CNN-SVM Model". Advances in Science and Technology 105 (April 2021): 282–90. http://dx.doi.org/10.4028/www.scientific.net/ast.105.282.

Abstract:
In this paper, a pre-trained CNN model, VGG16, with an SVM classifier is presented for the HAR task. The deep features are learned via the VGG16 pre-trained CNN model. The VGG16 network has previously been used for image classification; here we use it to classify human-activity signals recorded by the accelerometer sensor of a mobile phone. The UniMiB dataset contains 11771 samples of daily human activities, recorded by a smartphone's accelerometer. The features are learned via the fifth max-pooling layer of the VGG16 CNN model and fed to the SVM classifier, which replaces the fully connected layer of the VGG16 model. The proposed VGG16-SVM model achieves effective and efficient results and is compared with previously used schemes. With classification accuracy and F-score as the evaluation parameters, the proposed method provided 79.55% accuracy and a 71.63% F-score.
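
The key design point, taking features from VGG16's fifth max-pooling layer and handing them to an SVM in place of the fully connected head, can be sketched with Keras and scikit-learn (the input shape and SVM settings are assumptions; in the paper the accelerometer signals are first rendered as images):

# Sketch: extract features at VGG16's fifth max-pooling layer ("block5_pool")
# and train an SVM on them, replacing the fully connected head. Input shape
# and SVM hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from sklearn.svm import SVC

vgg = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
extractor = Model(inputs=vgg.input,
                  outputs=vgg.get_layer("block5_pool").output)

def features(x):                   # x: (n, 224, 224, 3) preprocessed array
    f = extractor.predict(x)       # (n, 7, 7, 512) feature maps
    return f.reshape(len(f), -1)   # flatten for the SVM

svm = SVC(kernel="rbf")            # the classifier replacing the FC head
# svm.fit(features(x_train), y_train); svm.predict(features(x_test))
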
7. Ko, Kyung-Kyu, and Eun-Sung Jung. "Improving Air Pollution Prediction System through Multimodal Deep Learning Model Optimization". Applied Sciences 12, no. 20 (October 15, 2022): 10405. http://dx.doi.org/10.3390/app122010405.

Abstract:
Many forms of air pollution increase as science and technology rapidly advance. In particular, fine dust harms the human body, causing or worsening heart- and lung-related diseases. In this study, the level of fine dust in Seoul 8 h ahead is predicted to prevent health damage in advance. We construct a dataset by combining two modalities (i.e., numerical and image data) for accurate prediction. In addition, we propose a multimodal deep learning model combining a Long Short-Term Memory (LSTM) network and a Convolutional Neural Network (CNN). An LSTM autoencoder is chosen for processing the numerical time-series data, alongside a basic CNN. A Visual Geometry Group network (VGGNet) (VGG16, VGG19), a standard deep CNN architecture with multiple layers, is also chosen as the CNN model for image processing, to compare performance differences according to network depth. Our multimodal deep learning model using two modalities (numerical and image data) showed better performance than a single deep learning model using only one modality (numerical data). Specifically, the performance improved by up to 14.16% when the deeper VGG19 model was used rather than VGG16.
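
A two-branch model of the kind described, an LSTM branch for the numerical time series and a VGG16 branch for the images, fused into one prediction head, can be wired with the Keras functional API; all shapes, widths, and the single fine-dust output are assumptions:

# Sketch of a two-branch multimodal model: an LSTM branch for numerical time
# series and a VGG16 branch for images, concatenated into one regression head.
# All shapes, layer widths, and the single output are assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

num_in = layers.Input(shape=(24, 8))      # e.g., 24 time steps, 8 measurements
x1 = layers.LSTM(64)(num_in)

img_in = layers.Input(shape=(224, 224, 3))
vgg = VGG16(weights="imagenet", include_top=False)
vgg.trainable = False
x2 = layers.GlobalAveragePooling2D()(vgg(img_in))

fused = layers.concatenate([x1, x2])      # combine the two modalities
out = layers.Dense(1)(fused)              # predicted fine-dust level

model = models.Model([num_in, img_in], out)
model.compile(optimizer="adam", loss="mse")
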
8. Hasan, Moh Arie, Yan Riyanto, and Dwiza Riana. "Grape leaf image disease classification using CNN-VGG16 model". Jurnal Teknologi dan Sistem Komputer 9, no. 4 (July 5, 2021): 218–23. http://dx.doi.org/10.14710/jtsiskom.2021.14013.

Abstract:
This study aims to classify disease images on grape leaves using image processing. Segmentation uses the k-means clustering algorithm, feature extraction uses the VGG16 transfer-learning technique, and classification uses a CNN. The dataset, from Kaggle, comprises 4000 grape leaf images in four classes: leaves with black measles, leaf spot, blight, and healthy leaves. One hundred Google images were also used as test data from outside the dataset. The training accuracy of the CNN model is 99.50%. Classification yields an accuracy of 97.25% on the test data, and 95% on the test images from outside the dataset. The designed image processing method can be applied to identify and classify disease images on grape leaves.
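
The k-means segmentation stage can be sketched with OpenCV by clustering pixel colors; the cluster count and termination criteria are assumptions, and the file name is hypothetical:

# Sketch of k-means color segmentation for a leaf image, the first stage of
# the described pipeline. Cluster count and criteria are assumptions.
import cv2
import numpy as np

img = cv2.imread("grape_leaf.jpg")          # hypothetical file name
pixels = img.reshape(-1, 3).astype(np.float32)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
# `segmented` is then passed to the VGG16 feature extractor and CNN classifier.
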
9. Singh, Tajinder Pal, Sheifali Gupta, Meenu Garg, Amit Verma, V. V. Hung, H. H. Thien, and Md Khairul Islam. "Transfer and Deep Learning-Based Gurmukhi Handwritten Word Classification Model". Mathematical Problems in Engineering 2023 (May 3, 2023): 1–20. http://dx.doi.org/10.1155/2023/4768630.

Abstract:
The world has a vast collection of text holding an abundance of knowledge. However, manually reading and recognizing text written in the many regional scripts is a difficult and time-consuming process. The task is even more challenging for Gurmukhi script because of the complex structure of its characters, which motivates the challenge of designing an error-free and accurate classification model for Gurmukhi characters. In this paper, the authors customize a convolutional neural network model to classify handwritten Gurmukhi words. Furthermore, a dataset of 24000 handwritten Gurmukhi word images has been prepared, with 12 classes representing the names of the months, collected from 500 users of heterogeneous professions and age groups. The dataset has been evaluated with the proposed CNN model as well as with the pretrained ResNet50, VGG19, and VGG16 models, at 100 epochs and a batch size of 40. The proposed CNN model obtained the best accuracy of 0.9973, whereas ResNet50 obtained an accuracy of 0.4015, VGG19 an accuracy of 0.7758, and VGG16 an accuracy of 0.8056. With its current accuracy rate, uncomplicated architectural pattern, and the prowess gained through learning different writing styles, the proposed CNN model will be of great benefit to researchers working in this area, who can use it in other ImageNet-based classification problems.
10. Shakhovska, Nataliya, and Pavlo Pukach. "Comparative Analysis of Backbone Networks for Deep Knee MRI Classification Models". Big Data and Cognitive Computing 6, no. 3 (June 21, 2022): 69. http://dx.doi.org/10.3390/bdcc6030069.

Abstract:
This paper focuses on different types of backbone networks for machine learning architectures that classify knee Magnetic Resonance Imaging (MRI) images. It aims to compare different feature extraction networks for the same classification task in terms of accuracy and performance. Multiple variations of machine learning models were trained based on the MRNet architecture, choosing AlexNet, ResNet, VGG-11, VGG-16, and EfficientNet as the backbone. The models were evaluated on the MRNet validation dataset, computing the Area Under the Receiver Operating Characteristics Curve (ROC-AUC), accuracy, F1 score, and Cohen's Kappa as evaluation metrics. The MRNet-VGG16 model variant shows the best results for Anterior Cruciate Ligament (ACL) tear detection. For general abnormality detection, MRNet-VGG16 is dominated by MRNet-ResNet at confidence between 0.5 and 0.75 and by MRNet-VGG11 at confidence above 0.8. Due to the non-uniform performance of the backbone networks on different MRI planes, it is advisable to use a logistic regression (LR) ensemble of: VGG16 on the coronal plane for all classification tasks and on the axial plane for abnormality and ACL tear detection; AlexNet on the sagittal plane for abnormality detection and on the axial plane for meniscal tear detection; and VGG11 on the sagittal plane for ACL tear detection. The results also indicate that the Cohen's Kappa metric is valuable in model evaluation for the MRNet dataset, as it provides deeper insights into classification decisions.
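
The recommended plane-wise combination is a logistic regression ensemble over the per-plane model outputs; a minimal scikit-learn sketch, assuming the per-plane probabilities have already been computed by trained models, is:

# Minimal sketch of the logistic-regression (LR) ensemble over per-plane
# backbone outputs, e.g., for ACL tear detection. The probability arrays are
# assumed to come from already-trained per-plane models.
import numpy as np
from sklearn.linear_model import LogisticRegression

# p_coronal, p_axial, p_sagittal: shape (n_exams,) probabilities from the
# recommended per-plane backbones -- assumed precomputed.
def fit_lr_ensemble(p_coronal, p_axial, p_sagittal, y):
    X = np.column_stack([p_coronal, p_axial, p_sagittal])
    return LogisticRegression().fit(X, y)   # learns per-plane weights
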

Doctoral dissertations on the topic “VGG16 MODEL”

1. Albert, Florea George, and Filip Weilid. "Deep Learning Models for Human Activity Recognition". Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20201.

Abstract:
The Augmented Multi-party Interaction (AMI) Meeting Corpus database is used to investigate group activity recognition in an office environment. The AMI Meeting Corpus provides researchers with remote-controlled and natural meetings in an office environment: a meeting scenario in a four-person office room. To achieve group activity recognition, video frames and 2-dimensional audio spectrograms were extracted from the AMI database. The video frames were RGB color images, and the audio spectrograms had one color channel. The video frames were produced in batches so that temporal features could be evaluated together with the audio spectrograms. It has been shown that including temporal features both during model training and when predicting the behavior of an activity increases validation accuracy compared to models that only use spatial features [1]. Deep learning architectures have been implemented to recognize different human activities in the AMI office environment using the extracted data from the AMI database. The neural network models were built using the Keras API together with the TensorFlow library. The architecture types investigated in this project were Residual Neural Network, Visual Geometry Group 16, Inception V3, and RCNN (Recurrent Neural Network). ImageNet weights, provided by the Keras API and optimized for each base model [2], were used to initialize the weights of the neural network base models, which use them when extracting features from the input data. Feature extraction using ImageNet weights or random weights together with the base models showed promising results. Both the deep learning approach using dense layers and the LSTM spatio-temporal sequence prediction were implemented successfully.
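
The base-model setup the thesis describes, Keras application models initialized with ImageNet weights and used as frozen feature extractors, follows the standard pattern below; the input size and pooling choice are assumptions:

# Standard Keras pattern for the setup described: a base model initialized
# with ImageNet weights and used to extract features from input frames.
# Input size and the pooling choice are illustrative assumptions.
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

def make_extractor(arch=VGG16):
    base = arch(weights="imagenet",         # ImageNet weights via the Keras API
                include_top=False,          # drop the ImageNet classifier head
                pooling="avg",              # one feature vector per frame
                input_shape=(224, 224, 3))
    base.trainable = False                  # features only; no weight updates
    return base

# frame_features = make_extractor(VGG16).predict(frames)  # (n, 512) vectors
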
2. GUPTA, RASHI. "IMAGE FORGERY DETECTION USING CNN MODEL". Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19175.

Abstract:
Image forgery detection has become more relevant in recent years, since it is so easy to alter an image and share it across social media, which can quickly spread fake news and rumors all over the world. Image-editing software has posed a significant challenge to image forensics, prompting various methods and strategies for detecting image counterfeiting. There have been a variety of traditional approaches to forgery detection, but they focus on simple feature extraction and are specialized to the type of forgery. As research advances, multiple deep learning approaches are being applied to identify forgeries in images, and they have demonstrated exceptional results compared to traditional methods. This work discusses the various sorts of image forgery, presents and compares applied and proven detection approaches, and gives a comprehensive literature analysis of deep learning algorithms for detecting various types of image counterfeiting. A CNN network is also built based on a prior study, and its performance is compared on two different datasets. Furthermore, the impact of a data augmentation approach and of several hyperparameters on classification accuracy is assessed. Our findings imply that the dataset's difficulty has a significant influence on the outcomes. This study also detects image forgery using a deep learning approach: a CNN model is used together with an ELA extraction step, and two further CNN models, VGG16 and VGG19, are used for better comparison and understanding.
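
Error Level Analysis (ELA), the extraction step used in front of the CNN, recompresses an image as JPEG and amplifies the pixel-wise difference; a common PIL-based sketch (the quality setting and scaling are assumptions) is:

# Common sketch of Error Level Analysis (ELA): re-save the image as JPEG and
# amplify the pixel-wise difference; tampered regions tend to stand out.
# The quality setting (90) and scratch file name are illustrative assumptions.
from PIL import Image, ImageChops, ImageEnhance

def ela_image(path, quality=90):
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)  # scratch file
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()                  # per-channel (min, max)
    max_diff = max(hi for _, hi in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# The scaled ELA image is what gets fed to the CNN / VGG16 / VGG19 models.
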

Book chapters on the topic “VGG16 MODEL”

1. Gupta, Pranjal Raaj, Disha Sharma, and Nidhi Goel. "Image Forgery Detection by CNN and Pretrained VGG16 Model". In Advances in Intelligent Systems and Computing, 141–52. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-6887-6_13.

2. Anju, T. E., and S. Vimala. "Finetuned-VGG16 CNN Model for Tissue Classification of Colorectal Cancer". In Intelligent Sustainable Systems, 73–84. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1726-6_7.

3. Lincy, R. Babitha, and R. Gayathri. "Off-Line Tamil Handwritten Character Recognition Based on Convolutional Neural Network with VGG16 and VGG19 Model". In Advances in Automation, Signal Processing, Instrumentation, and Control, 1935–45. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8221-9_180.

4. Ramya Manaswi, V., and B. Sankarababu. "A Flexible Accession on Brain Tumour Detection and Classification Using VGG16 Model". In Smart Intelligent Computing and Applications, Volume 1, 225–38. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9669-5_21.

5. Vidya, D., Shivanand Rumma, and Mallikarjun Hangargi. "Apple Classification Based on MRI Images Using VGG16 Convolutional Deep Learning Model". In Proceedings of the First International Conference on Advances in Computer Vision and Artificial Intelligence Technologies (ACVAIT 2022), 114–21. Dordrecht: Atlantis Press International BV, 2023. http://dx.doi.org/10.2991/978-94-6463-196-8_10.

6. Kumar, Ashish, Raied Razi, Anshul Singh, and Himansu Das. "Res-VGG: A Novel Model for Plant Disease Detection by Fusing VGG16 and ResNet Models". In Communications in Computer and Information Science, 383–400. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6318-8_32.

7. Ranjan, Amit, Chandrashekhar Kumar, Rohit Kumar Gupta, and Rajiv Misra. "Transfer Learning Based Approach for Pneumonia Detection Using Customized VGG16 Deep Learning Model". In Internet of Things and Connected Technologies, 17–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94507-7_2.

8. Ahmed, Mohammed Junaid, Ashutosh Satapathy, Ch Raga Madhuri, K. Yashwanth Chowdary, and A. Naveen Sai. "A Hybrid Model Built on VGG16 and Random Forest Algorithm for Land Classification". In Inventive Systems and Control, 267–80. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1624-5_20.

9. Chhabra, Mohit, and Rajneesh Kumar. "An Advanced VGG16 Architecture-Based Deep Learning Model to Detect Pneumonia from Medical Images". In Lecture Notes in Electrical Engineering, 457–71. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8774-7_37.

10. Hason Rudd, David, Huan Huo, and Guandong Xu. "An Extended Variational Mode Decomposition Algorithm Developed Speech Emotion Recognition Performance". In Advances in Knowledge Discovery and Data Mining, 219–31. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-33380-4_17.

Abstract:
Emotion recognition (ER) from speech signals is a robust approach, since speech cannot be imitated in the way facial expressions or text-based sentiment can. The valuable information underlying the emotions is significant for human-computer interactions, enabling intelligent machines to interact with sensitivity in the real world. Previous ER studies through speech signal processing have focused exclusively on associations between different signal-mode decomposition methods and hidden informative features. However, improper decomposition parameter selection leads to the loss of informative signal components through mode duplication and mixing. In contrast, the current study proposes VGG-optiVMD, an empowered variational mode decomposition algorithm, to distinguish meaningful speech features and automatically select the number of decomposed modes and the optimum balancing parameter for the data fidelity constraint by assessing their effects on the VGG16 flattening output layer. Various feature vectors were employed to train the VGG16 network on different databases and to assess the reproducibility and reliability of VGG-optiVMD. One-, two-, and three-dimensional feature vectors were constructed by concatenating Mel-frequency cepstral coefficients, chromagrams, Mel spectrograms, Tonnetz diagrams, and spectral centroids. The results confirmed a synergistic relationship between the fine-tuning of the signal sample rate and decomposition parameters and classification accuracy, achieving a state-of-the-art 96.09% accuracy in predicting seven emotions on the Berlin EMO-DB database.
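
The feature vectors described, concatenations of Mel-frequency cepstral coefficients, chromagram, Mel spectrogram, Tonnetz, and spectral centroid, can be assembled with librosa roughly as follows; the parameter values and frame trimming are assumptions:

# Rough librosa sketch of the described feature construction: MFCCs,
# chromagram, Mel spectrogram, Tonnetz, and spectral centroid concatenated
# along the feature axis. Parameter values are illustrative assumptions.
import numpy as np
import librosa

def speech_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=y, sr=sr)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    # Frame counts can differ slightly between features; trim to the shortest.
    n = min(m.shape[1] for m in (mfcc, chroma, mel, tonnetz, centroid))
    return np.concatenate([m[:, :n] for m in
                           (mfcc, chroma, mel, tonnetz, centroid)], axis=0)
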

Conference abstracts on the topic “VGG16 MODEL”

1. Zhang, Jing-Wei, Kuang-Chyi Lee, and Gadi Ashok Kumar Reddy. "Rubber Gasket Defect Classification by VGG16 model". In 2022 IEEE 4th Eurasia Conference on IOT, Communication and Engineering (ECICE). IEEE, 2022. http://dx.doi.org/10.1109/ecice55674.2022.10042837.

2. Surekha, G., Patlolla Sai Keerthana, Nallantla Jaswanth Varma, and Tummala Sai Gopi. "Hybrid Image Classification Model using ResNet101 and VGG16". In 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2023. http://dx.doi.org/10.1109/icaaic56838.2023.10140790.

3. Antonio, Elbren, Cyrus Rael, and Elmer Buenavides. "Changing Input Shape Dimension Using VGG16 Network Model". In 2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS). IEEE, 2021. http://dx.doi.org/10.1109/i2cacis52118.2021.9495858.

4. Li, Ziheng, Yuelong Zhang, Jiankai Zuo, Yupeng Zou, and Mingxuan Song. "Improved image-based lung opacity detection of VGG16 model". In 2nd International Conference on Computer Vision, Image and Deep Learning, edited by Fengjie Cen and Badrul Hisham bin Ahmad. SPIE, 2021. http://dx.doi.org/10.1117/12.2604522.

5. Albashish, Dheeb, Rizik Al-Sayyed, Azizi Abdullah, Mohammad Hashem Ryalat, and Nedaa Ahmad Almansour. "Deep CNN Model based on VGG16 for Breast Cancer Classification". In 2021 International Conference on Information Technology (ICIT). IEEE, 2021. http://dx.doi.org/10.1109/icit52682.2021.9491631.

6. Panthakkan, Alavikunhu, S. M. Anzar, Saeed Al Mansoori, and Hussain Al Ahmad. "Accurate Prediction of COVID-19 (+) Using AI Deep VGG16 Model". In 2020 3rd International Conference on Signal Processing and Information Security (ICSPIS). IEEE, 2020. http://dx.doi.org/10.1109/icspis51252.2020.9340145.

7. Ziyue, Chen, and Gao Yuanyuan. "Primate Recognition System Design Based on Deep Learning Model VGG16". In 2022 7th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2022. http://dx.doi.org/10.1109/icivc55077.2022.9886310.

8. N, Valarmathi, Bavya S, Deepika P, Dharani L, and Hemalatha P. "Deep Learning Model for Automated Kidney Stone Detection using VGG16". In 2023 Second International Conference on Electronics and Renewable Systems (ICEARS). IEEE, 2023. http://dx.doi.org/10.1109/icears56392.2023.10085509.

9. Qassim, Hussam, Abhishek Verma, and David Feinzimer. "Compressed residual-VGG16 CNN model for big data places image recognition". In 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2018. http://dx.doi.org/10.1109/ccwc.2018.8301729.

10. Jin, Xuesong, Xin Du, and Huiyuan Sun. "VGG-S: Improved Small Sample Image Recognition Model Based on VGG16". In 2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM). IEEE, 2021. http://dx.doi.org/10.1109/aiam54119.2021.00054.

