Academic literature on the topic 'VGG-16 convolutional'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'VGG-16 convolutional.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "VGG-16 convolutional"

1

Agus, Minarno Eko, Sasongko Yoni Bagas, Munarko Yuda, Nugroho Adi Hanung, and Zaidah Ibrahim. "Convolutional Neural Network featuring VGG-16 Model for Glioma Classification." JOIV: International Journal on Informatics Visualization 6, no. 3 (September 30, 2022): 660. http://dx.doi.org/10.30630/joiv.6.3.1230.

Full text
Abstract:
Magnetic Resonance Imaging (MRI) is a body sensing technique that can produce detailed images of the condition of organs and tissues. Specifically related to brain tumors, the resulting images can be analyzed using image detection techniques so that tumor stages can be classified automatically. Detection of brain tumors requires a high level of accuracy because it is related to the effectiveness of medical actions and patient safety. So far, the Convolutional Neural Network (CNN), or its combination with GA, has given good results. For this reason, in this study we used a similar method but with a variant of the VGG-16 architecture. The VGG-16 variant adds 16 layers by modifying the dropout layer (using softmax activation) to reduce overfitting and avoid using many hyper-parameters. We also experimented with augmentation techniques to compensate for data limitations. The experiment uses data from The Cancer Imaging Archive (TCIA) - The Repository of Molecular Brain Neoplasia Data (REMBRANDT), which contains 520 MRI images of 130 patients with different ailments, grades, races, and ages. The tumor type was glioma, and the images were divided into grades II, III, and IV, with 226, 101, and 193 images, respectively. The data are split 68% and 32% for training and testing purposes. We found that VGG-16 was more effective for brain tumor image classification, with an accuracy of up to 100%.
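As a rough illustration of the head modification the abstract above describes (dropout for regularization, softmax for class probabilities), here is a minimal plain-Python sketch; it is not the authors' code, and the feature and logit values are made up:

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def inverted_dropout(activations, p, rng=random.Random(0)):
    """Zero each unit with probability p and rescale survivors by 1/(1-p),
    so the expected activation is unchanged at inference time."""
    if p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() >= p else 0.0 for a in activations]

# A 3-class head: dropout on penultimate features, softmax on the logits.
features = [0.5, -1.2, 3.0, 0.7]          # hypothetical activations
dropped = inverted_dropout(features, p=0.5)
probs = softmax([2.0, 1.0, 0.1])          # hypothetical logits
```

The probabilities always sum to 1, which is what makes softmax suitable for grading tumor classes against one another.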
2

Moumen, Rajae, Raddouane Chiheb, and Rdouan Faizi. "Real-time Arabic scene text detection using fully convolutional neural networks." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 2 (April 1, 2021): 1634. http://dx.doi.org/10.11591/ijece.v11i2.pp1634-1640.

Full text
Abstract:
The aim of this research is to propose a fully convolutional approach to the problem of real-time scene text detection for the Arabic language. Text detection is performed using a two-step multi-scale approach. The first step uses a lightweight fully convolutional network, TextBlockDetector FCN, an adaptation of VGG-16, to eliminate non-textual elements, localize wide-scale text, and estimate text scale. The second step determines the narrow scale range of text using a fully convolutional network for maximum performance. To evaluate the system, we compare the framework's results with those obtained with a single VGG-16 fully deployed for one-shot text detection, as well as with previous state-of-the-art results. For training and testing, we created a dataset of 575 manually processed images, along with data augmentation to enrich the training process. The system scores a precision of 0.651 vs. 0.64 for the state of the art, and an FPS of 24.3 vs. 31.7 for a fully deployed VGG-16.
3

Leong, Mei Chee, Dilip K. Prasad, Yong Tsui Lee, and Feng Lin. "Semi-CNN Architecture for Effective Spatio-Temporal Learning in Action Recognition." Applied Sciences 10, no. 2 (January 12, 2020): 557. http://dx.doi.org/10.3390/app10020557.

Full text
Abstract:
This paper introduces a fusion convolutional architecture for efficient learning of spatio-temporal features in video action recognition. Unlike 2D convolutional neural networks (CNNs), 3D CNNs can be applied directly to consecutive frames to extract spatio-temporal features. The aim of this work is to fuse the convolution layers from 2D and 3D CNNs to allow temporal encoding with fewer parameters than 3D CNNs. We adopt transfer learning from pre-trained 2D CNNs for spatial extraction, followed by temporal encoding, before connecting to 3D convolution layers at the top of the architecture. We construct our fusion architecture, semi-CNN, based on three popular models, VGG-16, ResNets, and DenseNets, and compare its performance with that of their corresponding 3D models. Our empirical results on the action recognition dataset UCF-101 demonstrate that our fusion of 1D, 2D, and 3D convolutions outperforms a 3D model of the same depth, with fewer parameters and reduced overfitting. Our semi-CNN architecture achieved an average 16–30% boost in top-1 accuracy when evaluated on input videos of 16 frames.
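The parameter savings that motivate fusing 2D and 3D convolutions, as in the abstract above, can be illustrated with a back-of-the-envelope count; the kernel size and channel counts below are illustrative, not taken from the paper:

```python
def conv2d_params(k, c_in, c_out):
    """Weights (k*k*c_in per filter) plus one bias per output channel."""
    return k * k * c_in * c_out + c_out

def conv3d_params(k, c_in, c_out):
    """A 3D kernel also spans k frames in time, multiplying weights by k."""
    return k * k * k * c_in * c_out + c_out

# Example: a 3x3 layer with 256 input and 512 output channels.
p2d = conv2d_params(3, 256, 512)
p3d = conv3d_params(3, 256, 512)
ratio = p3d / p2d   # a 3D layer costs roughly 3x its 2D counterpart here
```

Replacing lower 3D layers with 2D ones therefore cuts the parameter count by nearly a factor of the temporal kernel size, which is the intuition behind the semi-CNN fusion.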
4

Agustina, Regita, Rita Magdalena, and Nor Kumalasari Caecar Pratiwi. "Klasifikasi Kanker Kulit menggunakan Metode Convolutional Neural Network dengan Arsitektur VGG-16." ELKOMIKA: Jurnal Teknik Energi Elektrik, Teknik Telekomunikasi, & Teknik Elektronika 10, no. 2 (April 12, 2022): 446. http://dx.doi.org/10.26760/elkomika.v10i2.446.

Full text
Abstract:
Skin cancer is a disease caused by changes in the characteristics of skin cells from normal to malignant, which cause the cells to divide uncontrollably and damage DNA. Early detection and accurate diagnosis are necessary to help the public identify whether a lesion is skin cancer or just a common skin disorder. In this study, a system was designed that can classify skin cancer from images of patients' skin, processed using the Convolutional Neural Network (CNN) method with the VGG-16 architecture. The dataset consists of 4000 images of cancer tissue. The process begins with image input, pre-processing, model training, and system testing. The best results were obtained in testing without CLAHE and Gaussian-filter pre-processing, using the SGD optimizer, a learning rate of 0.001, 50 epochs, and a batch size of 32. The accuracy obtained was 99.70%, with a loss of 0.0055, precision of 0.9975, recall of 0.9975, and F1-score of 0.9950. Keywords: skin cancer, CNN, VGG-16
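The precision, recall, and F1 figures reported in abstracts like the one above come from confusion-matrix counts; here is a minimal sketch, with hypothetical counts chosen only for illustration (they are not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 398 true positives, 1 false positive, 1 false negative.
p, r, f1 = precision_recall_f1(tp=398, fp=1, fn=1)
```

When precision and recall are equal, the F1 score (their harmonic mean) equals both, which is a quick sanity check on reported metric triples.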
5

Cakra, Cakra, Syafruddin Syarif, Hamdan Gani, and Andi Patombongi. "ANALISIS KESEGARAN IKAN MUJAIR DAN IKAN NILA DENGAN METODE CONVOLUTIONAL NEURAL NETWORK." Simtek : jurnal sistem informasi dan teknik komputer 7, no. 2 (August 10, 2022): 74–79. http://dx.doi.org/10.51876/simtek.v7i2.138.

Full text
Abstract:
In this research, we experimented with classifying the freshness of mozambique tilapia and nile tilapia (fresh vs. not fresh) based on the fish's eyes, using transfer learning from six CNNs: ResNet, AlexNet, VGG-16, SqueezeNet, DenseNet, and Inception. The two-class freshness classification of mozambique tilapia using 451 images showed that VGG achieved the best performance among the architectures, with a classification accuracy of 73%, and is therefore relatively more suitable for this task. For nile tilapia, using 574 images, VGG again performed better than the other architectures, with a classification accuracy of 57.9%, making VGG relatively more suitable for two-class freshness classification of nile tilapia as well.
6

劉怡. "Research of Art Point of Interest Recommendation Algorithm Based on Modified VGG-16 Network." 電腦學刊 33, no. 1 (February 2022): 71–85. http://dx.doi.org/10.53106/199115992022023301008.

Full text
Abstract:
Traditional point of interest (POI) recommendation algorithms ignore the semantic context of comment information. Integrating convolutional neural networks into recommendation systems has become one of the hotspots in art POI recommendation research. To solve the above problems, this paper proposes a new art POI recommendation model based on an improved VGG-16. Building on the original VGG-16, the improved method optimizes the fully connected layer and uses transfer learning to share the weight parameters of each layer of the VGG-16 pre-trained model for subsequent training. The new model fuses review information and user check-in information to improve the performance of POI recommendation. Experiments on real check-in data sets show that the proposed model has better recommendation performance than other advanced point-of-interest recommendation methods.
7

Kamalova, Yu B., and N. A. Andriyanov. "Recognition of microscopic images of pollen grains using the convolutional neural network VGG-16." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 22, no. 3 (2022): 39–46. http://dx.doi.org/10.14529/ctcr220304.

Full text
Abstract:
The article presents the results of an experiment applying transfer learning with the Visual Geometry Group 16-layer (VGG-16) convolutional neural network to the problem of recognizing pollen grains in images. An analysis of the literature on applying machine learning algorithms to pollen-grain classification over the past few years has shown the need to develop a new method for recognizing images of pollen grains obtained with an optical microscope. Automatic classification for pollen identification is currently becoming a very active area of research, and the article substantiates the task of automating it. The aim of the study is to analyze the efficiency and accuracy of classifying microscopic images of pollen grains using transfer learning of the VGG-16 convolutional neural network. VGG-16 has 13 convolutional layers grouped into 5 blocks with pooling, plus 3 output layers. Since transfer learning is used, only the output layers are retrained while feature extraction remains as in the classical model, so the number of training epochs can be kept small; 10 epochs were used. The other hyper-parameters are: dropout regularization with a probability of 50%, the Adam optimizer, a sigmoid activation function, a cross-entropy loss function, and a batch size of 32 images. By adjusting the model's hyper-parameters and using augmentation, a correct-recognition rate of about 80% was achieved. Because of the differing numbers of training examples, the per-class results vary somewhat: precision and recall reach up to 94% and 83%, respectively, for the dandelion class. In future studies, augmentation is planned as a preprocessing step to create a balanced sample. By applying the VGG-16 convolutional neural network to pollen-grain image recognition, the method achieved high accuracy and efficiency.
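The architecture summary in the abstract above (13 convolutional layers in 5 blocks, plus 3 output layers) matches the standard VGG-16 configuration, whose well-known total of roughly 138 million parameters can be reproduced with a short sketch:

```python
# Standard VGG-16 configuration: 13 conv layers in 5 blocks ('M' = max-pool),
# followed by 3 fully connected layers.
CONV_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
            512, 512, 512, "M", 512, 512, 512, "M"]
FC_CFG = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]

def vgg16_param_count(conv_cfg=CONV_CFG, fc_cfg=FC_CFG, k=3):
    total, c_in = 0, 3  # RGB input
    for v in conv_cfg:
        if v == "M":
            continue  # pooling layers have no parameters
        total += k * k * c_in * v + v  # 3x3 weights + biases
        c_in = v
    for fan_in, fan_out in fc_cfg:
        total += fan_in * fan_out + fan_out
    return total

n_conv = sum(1 for v in CONV_CFG if v != "M")
total_params = vgg16_param_count()
```

Note how heavy the fully connected head is relative to the convolutional stack, which is one reason transfer-learning setups like the one above retrain only the output layers.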
8

J, Samson Immanuel, Manoj G, and Divya P. S. "Performance Metric Estimation of Fast RCNN with VGG-16 Architecture for Emotional Recognition." International Journal of Applied Mathematics, Computational Science and Systems Engineering 4 (June 25, 2022): 30–38. http://dx.doi.org/10.37394/232026.2022.4.4.

Full text
Abstract:
Faster R-CNN is a state-of-the-art universal object detection approach based on a convolutional neural network that simultaneously predicts object bounds and objectness scores at each location in an image. To hypothesize object locations, state-of-the-art object detection networks rely on region proposal techniques. The accuracy of ML/DL models has previously been limited by a range of issues, including wavelength selection, spatial resolution, and hyper-parameter selection and tuning. The goal of this study is to create a new automated emotion detection system based on the CK+ database. Fast R-CNN has lowered the detection network's running time, revealing region proposal computation as a bottleneck. In this paper we develop a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. The proposed VGG-16 Fast R-CNN model obtained user accuracy close to 100 percent in the emotion class, followed by VGG-16 (99.79 percent), AlexNet (98.58 percent), and GoogLeNet (98.32 percent). After extensive hyper-parameter tuning for emotion recognition, the resulting Fast R-CNN VGG-16 model showed an overall accuracy of 99.79 percent, far higher than previously published results.
9

Wang, Jiwei, and Man Dang. "Theoretical Model and Implementation Path of Party Building Intelligent Networks in Colleges and Universities from the Perspective of Artificial Intelligence." Mobile Information Systems 2022 (May 16, 2022): 1–11. http://dx.doi.org/10.1155/2022/3926970.

Full text
Abstract:
The paper aims to promote the growth of party building work in colleges and universities to improve school party organization, team management and strengthen party member ideological construction and overall party quality. We design intelligent party member business knowledge learning classrooms using deep learning to improve the quality of party members. First, we develop a convolutional neural network (CNN)-based classroom face recognition system and improve its loss function using the associated theory of the Visual Geometry Group 16 (VGG-16) model. Then, using the Single Shot Multi-Box Detector (SSD), we establish a classroom standing behavior identification system. The experimental results demonstrate that the accuracy rate of the conventional VGG-16 in the face recognition system is 93.5%, while the upgraded VGG-16 is 96.5%, with a 3.2% increase over the baseline models.
10

Saleh, Yaser, and Nesreen Otoum. "Road-Type Classification through Deep Learning Networks Fine-Tuning." Journal of Information & Knowledge Management 19, no. 01 (March 2020): 2040020. http://dx.doi.org/10.1142/s0219649220400201.

Full text
Abstract:
Road-type classification is increasingly important to embed in interactive maps to provide additional useful information for users. The ubiquity of smartphones equipped with high-definition cameras offers a rich source of information that can be utilised by machine learning techniques. In this paper, we propose a novel Convolutional Neural Network (CNN)-based approach to classify road types using a collection of publicly available images. To overcome the challenge of needing a huge dataset to train and test CNNs, our approach employs fine-tuning. We conducted an experiment in which the VGG-16, VGG-S, and GoogLeNet networks were constructed and fine-tuned on the gathered dataset. Our approach achieved an accuracy of 99% with VGG-16 and 100% with VGG-S, while the GoogLeNet model produced results of up to 98%.

Book chapters on the topic "VGG-16 convolutional"

1

Rawat, Jyoti, Doina Logofătu, and Sruthi Chiramel. "Factors Affecting Accuracy of Convolutional Neural Network Using VGG-16." In Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference, 251–60. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48791-1_19.

Full text
2

Singh, Inderpreet, Sunil Kr Singh, Sudhakar Kumar, and Kriti Aggarwal. "Dropout-VGG Based Convolutional Neural Network for Traffic Sign Categorization." In Lecture Notes on Data Engineering and Communications Technologies, 247–61. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9416-5_18.

Full text
3

Suedumrong, Chaichana, Komgrit Leksakul, Pranprach Wattana, and Poti Chaopaisarn. "Application of Deep Convolutional Neural Networks VGG-16 and GoogLeNet for Level Diabetic Retinopathy Detection." In Lecture Notes in Networks and Systems, 56–65. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89880-9_5.

Full text
4

Dandotiya, Monika, and Madhukar Dubey. "A VGG-16 Framework for an Efficient Indoor-Outdoor." In SCRS CONFERENCE PROCEEDINGS ON INTELLIGENT SYSTEMS, 321–30. Soft Computing Research Society, 2021. http://dx.doi.org/10.52458/978-93-91842-08-6-32.

Full text
Abstract:
Computer vision has reached a level that allows robots to leave the limits of laboratories and explore the outside world. Even with progress in this area, robots struggle to understand their location. Scene classification is an important step in understanding a scene, and it can be used in many applications, such as surveillance cameras, self-driving, household robots, and database imaging systems. Monitoring cameras are now installed everywhere, yet the accuracy of indoor-outdoor scene classification techniques is weak. Using the VGG-16 convolutional neural network model, this study attempts to improve that accuracy. This research presents a new method for classifying images into classes using VGG-16. The algorithm's outputs are validated on the SUN397 indoor-outdoor dataset, and the outcomes demonstrate that the suggested methodology outperforms existing technologies for indoor-outdoor scene classification. In this paper, we implement "Very Deep Convolutional Networks for Large-Scale Image Recognition." On ImageNet, a dataset of over 14 million images belonging to 1000 classes, the model achieves 92.7 percent top-5 test accuracy. It outperforms AlexNet by sequentially replacing large kernel-sized filters (11 and 5 in the first and second convolutional layers, respectively) with multiple 3×3 kernel-sized filters. In our projected work we attain a training loss of 10 percent and a training accuracy of 96 percent.
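The abstract's point about replacing large kernels with stacked 3×3 filters rests on a receptive-field equivalence that is easy to check; here is a small sketch (assuming stride-1 convolutions and no dilation):

```python
def stacked_receptive_field(kernel_sizes):
    """Effective receptive field of stacked stride-1 convolutions:
    each k x k layer grows the field by k - 1."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# Two stacked 3x3 layers see as far as one 5x5; three see as far as one 7x7,
# with fewer weights and more non-linearities in between.
rf_two = stacked_receptive_field([3, 3])
rf_three = stacked_receptive_field([3, 3, 3])
```

Two 3×3 layers use 2 × 9 = 18 weights per channel pair versus 25 for a single 5×5, which is the weight saving VGG exploits.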
5

S., Sudha, Srinivasan A., T. Gayathri Devi, and Mardeni Bin Roslee. "Detection and Classification of Diabetic Retinopathy Using Image Processing Algorithms, Convolutional Neural Network, and Signal Processing Techniques." In Advances in Computational Intelligence and Robotics, 270–80. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8892-5.ch017.

Full text
Abstract:
Diabetic retinopathy (DR) affects blood vessels in the retina and arises due to complications of diabetes. Diabetes is a serious health issue that must be considered and taken care of at the right time. Modern lifestyle, stress at workplaces, and unhealthy food habits affect the health conditions of our body. So the detection of lesions and treatment at an early stage is required. The detection and classification of early signs of diabetic retinopathy can be done by three different approaches. In Approach 1, an image processing algorithm is proposed. In Approach 2, convolutional neural network (CNN-VGG Net 16) is proposed for the classification of fundus images into normal and DR images. In Approach 3, a signal processing method is used for the detection of diabetic retinopathy using electro retinogram signal (ERG). Finally, the performance measures are calculated for all three approaches, and it is found that detection using CNN improves the accuracy.
6

Abd Hamid, Nur Amirah, Mohd Ibrahim Shapiai, Uzma Batool, Ranjit Singh Sarban Singh, Muhamad Kamal Mohammed Amin, and Khairil Ashraf Elias. "Incorporating Attention Mechanism in Enhancing Classification of Alzheimer’s Disease." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia210048.

Full text
Abstract:
Alzheimer’s disease (AD) is a progressive and irreversible neurodegenerative disease that requires attentive medical evaluation. Diagnosing AD accurately is therefore crucial to provide patients with appropriate treatment to slow down the progression of AD and to facilitate treatment interventions. To date, deep learning by means of convolutional neural networks (CNNs) has been widely used in diagnosing AD. Several well-established CNN architectures have been used in the image classification domain for magnetic resonance imaging (MRI) analysis, such as LeNet-5, Inception-V4, VGG-16, and Residual Networks. However, these existing deep learning-based methods lack spatial invariance to the input data and overlook salient local features of the region of interest (ROI) (i.e., the hippocampal region). In medical image analysis, local features of MRI images are hard to exploit due to the small pixel size of the ROI. Moreover, CNNs require large datasets to perform well, but the limited number of MRI images available for training leads to overfitting. Therefore, we propose a novel deep learning-based model without pre-processing techniques, incorporating an attention mechanism and a global average pooling (GAP) layer into the VGG-16 architecture to capture the salient features of the MRI image for subtle discrimination of AD and normal control (NC). We also utilize transfer learning to overcome the overfitting issue. Experiments are performed on data collected from the Open Access Series of Imaging Studies (OASIS) database. The accuracy of the proposed method for binary classification (AD vs NC) significantly outperforms the existing methods, 12-layered CNNs (trained from scratch) and Inception-V4 (transfer learning), increasing accuracy by 1.93% and 3.43%, respectively. In conclusion, the Attention-GAP model is capable of achieving notable classification accuracy in diagnosing AD.
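The global average pooling (GAP) layer mentioned in the abstract above collapses each feature map to a single value per channel; here is a minimal plain-Python sketch with made-up feature values:

```python
def global_average_pool(feature_maps):
    """Collapse each channel's H x W map to its mean, one value per channel.
    feature_maps: list of channels, each a list of rows of floats."""
    pooled = []
    for channel in feature_maps:
        values = [v for row in channel for v in row]
        pooled.append(sum(values) / len(values))
    return pooled

# Two 2x2 channels -> a 2-element descriptor, regardless of spatial size.
fmap = [[[1.0, 2.0], [3.0, 4.0]],
        [[0.0, 0.0], [0.0, 8.0]]]
descriptor = global_average_pool(fmap)
```

Because the output size depends only on the channel count, GAP replaces the large fully connected head and adds no trainable parameters, which helps against overfitting on small MRI datasets.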

Conference papers on the topic "VGG-16 convolutional"

1

Menezes, Richardson Santiago Teles, Angelo Marcelino Cordeiro, Rafael Magalhães, and Helton Maia. "Classification of Paintings Authorship Using Convolutional Neural Network." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-116.

Full text
Abstract:
In this paper, state-of-the-art Convolutional Neural Network (CNN) architectures are explained and compared with respect to authorship classification of famous paintings. The chosen CNN architectures were VGG-16, VGG-19, Residual Neural Networks (ResNet), and Xception. The dataset used is available on Kaggle under the title "Best Artworks of All Time". Weighted classes were created for each artist with more than 200 paintings in the dataset to represent and classify each artist's style. The experiments resulted in an accuracy of up to 95% for the Xception architecture with an average F1-score of 0.87, and 92% accuracy with an average F1-score of 0.83 for ResNet in its 50-layer configuration, while neither VGG architecture produced satisfactory results for the same number of epochs, achieving at most 60% accuracy.
2

Pereira, Amanda Lucas, Manoela Kohler, and Marco Aurélio C. Pacheco. "Evolutionary Convolutional Neural Network: a case study." In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-129.

Full text
Abstract:
Most state-of-the-art Convolutional Neural Network (CNN) architectures are manually crafted by experts, usually with background knowledge from extensive working experience in this research field. Designing CNNs this way is therefore highly limited, and many approaches have been developed to make the procedure more automatic. This paper presents a case study in tackling the architecture search problem by using a Genetic Algorithm (GA) to optimize an existing CNN architecture. The proposed methodology uses VGG-16 convolutional blocks as its building blocks, and each individual in the GA corresponds to a possible model built from these blocks with varying filter sizes, keeping the original network's connections fixed. The fittest individuals are selected according to their weighted F1-score when trained from scratch on the available data. To evaluate the best individual found, its performance is compared to a VGG-16 model trained from scratch on the same data.
3

Tolulope Ibitoye, Oladapo, and Oluwafunso Oluwole Osaloni. "Masked Faces Classification using Deep Convolutional Neural Network with VGG-16 Architecture." In 2022 3rd International Conference on Electrical Engineering and Informatics (ICon EEI). IEEE, 2022. http://dx.doi.org/10.1109/iconeei55709.2022.9972288.

Full text
4

Rodrigues Filho, Marcelo Luis, and Omar Andres Carmona Cortes. "Efficient Breast Cancer Classification Using Histopathological Images and a Simpler VGG." In Brazilian e-Science Workshop. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/bresci.2021.15783.

Full text
Abstract:
Breast cancer is the second most deadly disease worldwide. This severe condition led to 627,000 deaths in 2018. Thus, early detection is critical for improving patients' lifetimes or even curing them. In this context, we can appeal to Medicine 4.0, which exploits machine learning capabilities to obtain a faster and more efficient diagnosis. This work applies a simpler convolutional neural network, called VGG-7, to classifying breast cancer in histopathological images. Results show that VGG-7 outperforms VGG-16 and VGG-19, with an accuracy of 98%, a precision of 99%, a recall of 98%, and an F1-score of 98%.
5

Alippi, Cesare, Simone Disabato, and Manuel Roveri. "Moving Convolutional Neural Networks to Embedded Systems: The AlexNet and VGG-16 Case." In 2018 17th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN). IEEE, 2018. http://dx.doi.org/10.1109/ipsn.2018.00049.

Full text
6

Defaoui, Mahassine, Lahcen Koutti, Mohamed El Ansari, Redouan Lahmyed, and Lhoussaine Masmoudi. "PedVis-VGG-16: A Fine-tuned deep convolutional neural network for pedestrian image classifications." In 2022 9th International Conference on Wireless Networks and Mobile Communications (WINCOM). IEEE, 2022. http://dx.doi.org/10.1109/wincom55661.2022.9966465.

Full text
7

Li, Ming-Ai, and Dong-Qin Xu. "A Transfer Learning Method based on VGG-16 Convolutional Neural Network for MI Classification." In 2021 33rd Chinese Control and Decision Conference (CCDC). IEEE, 2021. http://dx.doi.org/10.1109/ccdc52312.2021.9602818.

Full text
8

Vo, Duc-Dung, Ba-Viet Ngo, Truong-Duy Nguyen, Thanh-Hai Nguyen, and Thanh-Nghia Nguyen. "A Convolutional Neural Network with The VGG-16 Model for Classifying Human Brain Tumor." In 2022 6th International Conference on Green Technology and Sustainable Development (GTSD). IEEE, 2022. http://dx.doi.org/10.1109/gtsd54989.2022.9989206.

Full text
9

Pahinkar, Ajinkya, Prajval Mohan, Ankita Mandal, and Krishnamoorthy A. "Faster Region Based Convolutional Neural Network and VGG 16 for Multi-Class Tyre Defect Detection." In 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2021. http://dx.doi.org/10.1109/icccnt51525.2021.9579855.

Full text
10

Kayali, Devrim, Priscilla Olawale, Yoney Kirsal-Ever, and Kamil Dimililer. "The effect of Compressor-Decompressor Networks with different image sizes on Mask Detection using Convolutional Neural Networks - VGG-16." In 2022 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, 2022. http://dx.doi.org/10.1109/asyu56188.2022.9925317.

Full text