Novel dual convolution adaptive focus neural network for book genre classification

Bibliographic Details
Published in: PLoS One vol. 20, no. 11 (Nov 2025), p. e0331011
Main Author: Zeng, Qingtao
Other Authors: Zhang, Lixin, Zhao, Jiefeng, Xu, Anping, Qi, Yali, Yu, Liqin, Li, Wenjing, Xia, Haochang
Published: Public Library of Science
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3269854692
003 UK-CbPIL
022 |a 1932-6203 
024 7 |a 10.1371/journal.pone.0331011  |2 doi 
035 |a 3269854692 
045 2 |b d20251101  |b d20251130 
084 |a 174835  |2 nlm 
100 1 |a Zeng, Qingtao 
245 1 |a Novel dual convolution adaptive focus neural network for book genre classification 
260 |b Public Library of Science  |c Nov 2025 
513 |a Journal Article 
520 3 |a Book covers typically contain a wealth of information. With the number of books published increasing every year, deep learning has been applied to the automatic identification and classification of book covers. This approach overcomes the inefficiency of traditional manual classification and improves the management efficiency of modern book retrieval systems. In computer vision, the YOLO family of algorithms has garnered significant attention owing to its strong performance across a range of visual tasks. This study therefore introduces CPPDE-YOLO, a novel dual-convolution adaptive focus neural network that integrates the PConv and PWConv operators with dynamic sampling and efficient multi-scale attention. By incorporating these enhancements, the original YOLOv8 framework is optimised for superior performance in book cover classification, with the aim of significantly improving image classification accuracy. Effective book cover classification must capture complex global feature information while keeping computational costs manageable. To address this, we propose a hybrid module that combines PConv with pointwise convolution (PWConv) in the backbone network, integrated into the DualConv framework to capture complex feature information. Moreover, we embed the efficient multi-scale attention mechanism into each cross-stage partial network fusion residual block in the head so that the network focuses on learning key features for more precise classification. A dynamic sampling method replaces the traditional Upsample operator to overcome its inherent limitations. Finally, experiments on real datasets show that the proposed CPPDE-YOLO structure outperforms the original YOLOv8 classification structure, improving Top-1 and Top-5 accuracy by 1.1% and 1.0%, respectively. This underscores the effectiveness of the proposed algorithm for book genre classification. 
653 |a Visual tasks 
653 |a Accuracy 
653 |a Deep learning 
653 |a Algorithms 
653 |a Sampling methods 
653 |a Books 
653 |a Convolution 
653 |a Neural networks 
653 |a Attention 
653 |a Computer vision 
653 |a Classification 
653 |a Sampling 
653 |a Machine learning 
653 |a Efficiency 
653 |a Performance enhancement 
653 |a Novels 
653 |a Effectiveness 
653 |a Image classification 
653 |a Design 
653 |a Information processing 
653 |a Object recognition 
653 |a Economic 
653 |a Environmental 
700 1 |a Zhang, Lixin 
700 1 |a Zhao, Jiefeng 
700 1 |a Xu, Anping 
700 1 |a Qi, Yali 
700 1 |a Yu, Liqin 
700 1 |a Li, Wenjing 
700 1 |a Xia, Haochang 
773 0 |t PLoS One  |g vol. 20, no. 11 (Nov 2025), p. e0331011 
786 0 |d ProQuest  |t Health & Medical Collection 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3269854692/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3269854692/fulltext/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3269854692/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch
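
The abstract describes three mechanisms: a PConv + PWConv hybrid in the backbone (within the DualConv framework), efficient multi-scale attention in the head's residual blocks, and a dynamic sampling operator in place of Upsample. As a rough illustration only, the following minimal PyTorch sketch shows one common reading of the PConv + PWConv pairing, assuming PConv denotes a FasterNet-style partial convolution; the record does not define the operator, and the class names, channel split ratio, and layer choices here are hypothetical rather than taken from the article.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PConv(nn.Module):
    # Hypothetical partial convolution: a 3x3 conv is applied to only the
    # first dim // ratio channels; the remaining channels pass through
    # untouched, which is what keeps the operator cheap.
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.dim_conv = dim // ratio
        self.dim_pass = dim - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, 3, 1, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        conv_part, pass_part = torch.split(x, [self.dim_conv, self.dim_pass], dim=1)
        return torch.cat((self.conv(conv_part), pass_part), dim=1)

class PConvPWConvBlock(nn.Module):
    # PConv for cheap spatial mixing, followed by a 1x1 pointwise conv
    # (PWConv) to mix information across all channels.
    def __init__(self, dim: int):
        super().__init__()
        self.pconv = PConv(dim)
        self.pwconv = nn.Conv2d(dim, dim, 1, bias=False)
        self.bn = nn.BatchNorm2d(dim)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pwconv(self.pconv(x))))

In the same illustrative spirit, a learned, offset-based upsampler sketches what dynamic sampling can mean in contrast to a fixed Upsample: a 1x1 conv predicts per-pixel sampling offsets and grid_sample resamples the feature map at the perturbed positions. The offset scaling and layout below are assumptions, not the paper's design.

class DySampleLite(nn.Module):
    # Hypothetical dynamic upsampler: predicts 2 offset coordinates for each
    # of the scale**2 sub-positions, rearranges them to full resolution with
    # pixel_shuffle, and perturbs a regular sampling grid.
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.offset = nn.Conv2d(channels, 2 * scale * scale, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        hs, ws = h * self.scale, w * self.scale
        # (b, 2, hs, ws): small learned perturbations in normalised coordinates
        offsets = F.pixel_shuffle(self.offset(x) * 0.25, self.scale)
        ys = torch.linspace(-1, 1, hs, device=x.device)
        xs = torch.linspace(-1, 1, ws, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        return F.grid_sample(x, grid + offsets.permute(0, 2, 3, 1),
                             align_corners=True)

x = torch.randn(1, 64, 56, 56)
print(PConvPWConvBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
print(DySampleLite(64)(x).shape)      # torch.Size([1, 64, 112, 112])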