Vision Eagle Attention: a new lens for advancing image classification
| Published in: | arXiv.org (Dec 9, 2024), p. n/a |
|---|---|
| Main author: | Hasan, Mahmudul |
| Published: | Cornell University Library, arXiv.org |
| Subjects: | Feature extraction; Attention; Image classification; Optical tracking; Visual tasks; Visual discrimination; Computer vision; Performance evaluation; Object recognition; Image segmentation; Artificial neural networks; Classification |
| Online access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| LEADER | 00000nab a2200000uu 4500 | ||
|---|---|---|---|
| 001 | 3130501255 | ||
| 003 | UK-CbPIL | ||
| 022 | |a 2331-8422 | ||
| 035 | |a 3130501255 | ||
| 045 | 0 | |b d20241209 | |
| 100 | 1 | |a Hasan, Mahmudul | |
| 245 | 1 | |a Vision Eagle Attention: a new lens for advancing image classification | |
| 260 | |b Cornell University Library, arXiv.org |c Dec 9, 2024 | ||
| 513 | |a Working Paper | ||
| 520 | 3 | |a In computer vision tasks, the ability to focus on relevant regions within an image is crucial for improving model performance, particularly when key features are small, subtle, or spatially dispersed. Convolutional neural networks (CNNs) typically treat all regions of an image equally, which can lead to inefficient feature extraction. To address this challenge, I have introduced Vision Eagle Attention, a novel attention mechanism that enhances visual feature extraction using convolutional spatial attention. The model applies convolution to capture local spatial features and generates an attention map that selectively emphasizes the most informative regions of the image. This attention mechanism enables the model to focus on discriminative features while suppressing irrelevant background information. I have integrated Vision Eagle Attention into a lightweight ResNet-18 architecture, demonstrating that this combination results in an efficient and powerful model. I have evaluated the performance of the proposed model on three widely used benchmark datasets: FashionMNIST, Intel Image Classification, and OracleMNIST, with a primary focus on image classification. Experimental results show that the proposed approach improves classification accuracy. Additionally, this method has the potential to be extended to other vision tasks, such as object detection, segmentation, and visual tracking, offering a computationally efficient solution for a wide range of vision-based applications. Code is available at: https://github.com/MahmudulHasan11085/Vision-Eagle-Attention.git | |
| 653 | |a Feature extraction | ||
| 653 | |a Attention | ||
| 653 | |a Image classification | ||
| 653 | |a Optical tracking | ||
| 653 | |a Visual tasks | ||
| 653 | |a Visual discrimination | ||
| 653 | |a Computer vision | ||
| 653 | |a Performance evaluation | ||
| 653 | |a Object recognition | ||
| 653 | |a Image segmentation | ||
| 653 | |a Artificial neural networks | ||
| 653 | |a Classification | ||
| 773 | 0 | |t arXiv.org |g (Dec 9, 2024), p. n/a | |
| 786 | 0 | |d ProQuest |t Engineering Database | |
| 856 | 4 | 1 | |3 Citation/Abstract |u https://www.proquest.com/docview/3130501255/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch |
| 856 | 4 | 0 | |3 Full text outside of ProQuest |u http://arxiv.org/abs/2411.10564 |
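The abstract in field 520 describes the mechanism only at a high level: a convolution captures local spatial features and produces an attention map that re-weights the feature map, and the block is integrated into a ResNet-18 backbone. The sketch below is a minimal PyTorch illustration of that idea, assuming a sigmoid-gated convolutional block whose output is multiplied element-wise with early ResNet-18 features; the class names, kernel size, and placement after `layer1` are assumptions made for illustration, not the authors' implementation (the actual code is in the GitHub repository linked in the abstract).

```python
# Hypothetical sketch of a convolutional spatial-attention block in the spirit of
# the abstract above; all design choices here are assumptions, not the paper's code.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ConvSpatialAttention(nn.Module):
    """Convolution -> sigmoid attention map -> element-wise re-weighting."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),  # per-location weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Emphasize informative regions, suppress background.
        return x * self.attn(x)


class ResNet18WithAttention(nn.Module):
    """ResNet-18 with one attention block after layer1 (assumed placement)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone
        self.attention = ConvSpatialAttention(channels=64)  # layer1 outputs 64 channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = self.backbone
        x = b.maxpool(b.relu(b.bn1(b.conv1(x))))
        x = self.attention(b.layer1(x))        # re-weight early spatial features
        x = b.layer4(b.layer3(b.layer2(x)))
        x = torch.flatten(b.avgpool(x), 1)
        return b.fc(x)


if __name__ == "__main__":
    # Dummy RGB input; the benchmark datasets named in the abstract would need
    # their own preprocessing (e.g. FashionMNIST is single-channel).
    model = ResNet18WithAttention(num_classes=10)
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 10])
```

Placing the block after an early stage keeps the added cost small because the attention convolution runs on a low-channel feature map; other placements (or several blocks) would be equally consistent with the abstract's description.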