Published Papers Search Service
Title: A Remote Sensing Scene Classification Model Based on EfficientNet-V2L Deep Neural Networks

Authors: Atif A. Aljabri, Abdullah Alshanqiti, Ahmad B. Alkhodre, Ayyub Alzahem, and Ahmed Hagag

Citation: Vol. 22, No. 10, pp. 406-412

Abstract:
Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Conventional techniques for remote sensing image classification have not met real-world application requirements. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective at extracting features, and these approaches rely primarily on semantic information to improve classification performance. Because abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high interclass similarity, such networks achieve low classification accuracy. We propose a VHR remote sensing image classification model that extracts the global feature from the original VHR image using an EfficientNet-V2L CNN pre-trained to detect similar classes; the image is then classified using a multilayer perceptron (MLP). The method was evaluated on two benchmark remote sensing datasets: the 21-class UC Merced and the 38-class PatternNet. Compared with other state-of-the-art models, the proposed model significantly improves performance.
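The abstract describes a two-stage pipeline: a pre-trained EfficientNet-V2L backbone produces a global feature vector per image, and an MLP head assigns the scene class. The sketch below illustrates only the MLP classification stage in NumPy, under stated assumptions: random vectors stand in for the backbone's 1280-dimensional pooled features, and the hidden width and single-hidden-layer layout are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 1280   # EfficientNet-V2L global-pooled feature size
HIDDEN = 256         # assumed MLP hidden width (not from the paper)
NUM_CLASSES = 21     # UC Merced has 21 scene classes

# Placeholder for the backbone output: one global feature per image.
features = rng.standard_normal((4, FEATURE_DIM))

# MLP head: one hidden layer with ReLU, then softmax over scene classes.
W1 = rng.standard_normal((FEATURE_DIM, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, NUM_CLASSES)) * 0.01
b2 = np.zeros(NUM_CLASSES)

def mlp_classify(x):
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)        # softmax probabilities

probs = mlp_classify(features)
print(probs.shape)   # one probability distribution per input image
```

In practice the placeholder `features` would be replaced by the pooled output of a pre-trained EfficientNet-V2L (e.g. from a deep learning framework's model zoo), and the MLP weights would be learned on the target dataset.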
Keywords: VHR, remote sensing, scene classification, deep learning, EfficientNet

URL: http://paper.ijcsns.org/07_book/202210/20221053.pdf