MAF: An algorithm based on multi-agent characteristics for infrared and visible video fusion
Saved in:
| Published in: | PLoS One vol. 20, no. 3 (Mar 2025), p. e0315266 |
|---|---|
| Main Author: | Liu, Yandong |
| Other Authors: | Ji, Linna; Yang, Fengbao; Guo, Xiaoming |
| Published: | Public Library of Science |
| Subjects: | Deep learning; Collaboration; Video; Algorithms; Multilayers; Neural networks; Decision making; Methods; Multiagent systems; Real time; Infrared imaging; Efficiency; Semantics; System theory; Environmental |
| Online Access: | Citation/Abstract; Full Text; Full Text - PDF |
MARC
| LEADER | | | 00000nab a2200000uu 4500 |
|---|---|---|---|
| 001 | | | 3173975065 |
| 003 | | | UK-CbPIL |
| 022 | | | \|a 1932-6203 |
| 024 | 7 | | \|a 10.1371/journal.pone.0315266 \|2 doi |
| 035 | | | \|a 3173975065 |
| 045 | 2 | | \|b d20250301 \|b d20250331 |
| 084 | | | \|a 174835 \|2 nlm |
| 100 | 1 | | \|a Liu, Yandong |
| 245 | 1 | | \|a MAF: An algorithm based on multi-agent characteristics for infrared and visible video fusion |
| 260 | | | \|b Public Library of Science \|c Mar 2025 |
| 513 | | | \|a Journal Article |
| 520 | 3 | | \|a Addressing the limitation of existing infrared and visible video fusion models, which fail to dynamically adjust fusion strategies based on video differences and therefore often produce suboptimal or failed outcomes, we propose an infrared and visible video fusion algorithm that leverages the autonomous and flexible characteristics of multi-agent systems. First, we analyze the functional architecture of agents and the inherent properties of multi-agent systems to construct a multi-agent fusion model and the corresponding fusion agents. Next, we identify regions of interest in each frame of the video sequence, focusing on frames that exhibit significant changes. The multi-agent fusion model then perceives the key distinguishing features between the images to be fused, deploys the appropriate fusion agents, and uses fusion effectiveness to infer and determine the fusion algorithms, rules, and parameters, ultimately selecting the optimal fusion strategy. Finally, when the fusion process is complex, the multi-agent fusion model performs the fusion task through the collaborative interaction of multiple fusion agents. This approach establishes a multi-layered, dynamically adaptable fusion model, enabling real-time adjustments to the fusion algorithm during infrared and visible video fusion. Experimental results demonstrate that our method outperforms existing approaches in preserving key targets in infrared videos and structural details in visible videos. Evaluation metrics indicate that the fusion outcomes obtained with our method achieve optimal values in 66.7% of cases, with sub-optimal or better values accounting for 80.9%, significantly surpassing the performance of traditional single fusion methods. |
| 653 | | | \|a Deep learning |
| 653 | | | \|a Collaboration |
| 653 | | | \|a Video |
| 653 | | | \|a Algorithms |
| 653 | | | \|a Multilayers |
| 653 | | | \|a Neural networks |
| 653 | | | \|a Decision making |
| 653 | | | \|a Methods |
| 653 | | | \|a Multiagent systems |
| 653 | | | \|a Real time |
| 653 | | | \|a Infrared imaging |
| 653 | | | \|a Efficiency |
| 653 | | | \|a Semantics |
| 653 | | | \|a System theory |
| 653 | | | \|a Environmental |
| 700 | 1 | | \|a Ji, Linna |
| 700 | 1 | | \|a Yang, Fengbao |
| 700 | 1 | | \|a Guo, Xiaoming |
| 773 | 0 | | \|t PLoS One \|g vol. 20, no. 3 (Mar 2025), p. e0315266 |
| 786 | 0 | | \|d ProQuest \|t Health & Medical Collection |
| 856 | 4 | 1 | \|3 Citation/Abstract \|u https://www.proquest.com/docview/3173975065/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch |
| 856 | 4 | 0 | \|3 Full Text \|u https://www.proquest.com/docview/3173975065/fulltext/embedded/6A8EOT78XXH2IG52?source=fedsrch |
| 856 | 4 | 0 | \|3 Full Text - PDF \|u https://www.proquest.com/docview/3173975065/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch |
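The abstract above (field 520) describes a per-frame workflow: detect frames with significant change, perceive the differences between the infrared and visible frames, and then select and apply an appropriate fusion agent. The following is a minimal illustrative sketch of that workflow only; every function name, threshold, and the two toy fusion rules are assumptions made here for exposition, not the authors' MAF implementation, which infers algorithms, rules, and parameters through multi-agent collaboration.

```python
# Hypothetical sketch of the frame-change -> perceive -> select-agent -> fuse loop
# described in the abstract. All names and rules below are illustrative assumptions.
import numpy as np

def frame_changed(prev, curr, threshold=0.05):
    """Flag a frame as 'significantly changed' via normalized mean absolute difference."""
    if prev is None:
        return True
    return np.mean(np.abs(curr.astype(float) - prev.astype(float))) / 255.0 > threshold

def perceive_difference(ir, vis):
    """Toy perception step: summarize the contrast of the two frames to be fused."""
    return {"ir_contrast": float(np.std(ir)), "vis_contrast": float(np.std(vis))}

def max_agent(ir, vis):
    """Toy fusion agent favouring bright infrared targets (pixel-wise maximum)."""
    return np.maximum(ir, vis)

def average_agent(ir, vis):
    """Toy fusion agent favouring visible structural detail (pixel-wise average)."""
    return ((ir.astype(np.uint16) + vis.astype(np.uint16)) // 2).astype(np.uint8)

def select_agent(features):
    """Stand-in for the model's inference over fusion algorithms, rules, and parameters."""
    return max_agent if features["ir_contrast"] >= features["vis_contrast"] else average_agent

def fuse_video(ir_frames, vis_frames):
    """Fuse aligned IR/visible frame pairs, re-selecting the strategy only on changed frames."""
    fused, prev_ir, agent = [], None, average_agent
    for ir, vis in zip(ir_frames, vis_frames):
        if frame_changed(prev_ir, ir):          # frame of interest detected
            agent = select_agent(perceive_difference(ir, vis))
        fused.append(agent(ir, vis))
        prev_ir = ir
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(4)]
    vis = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(4)]
    print(len(fuse_video(ir, vis)), "frames fused")
```

In this sketch the "strategy" is just a choice between two pixel-wise rules; the paper's model instead coordinates multiple fusion agents and evaluates fusion effectiveness to adapt the strategy in real time.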