Ultrafast vision perception by neuromorphic optical flow
| Published: | arXiv.org (Sep 10, 2024), p. n/a |
|---|---|
| Author: | Wang, Shengbo |
| Other Authors: | Gao, Shuo; Pu, Tongming; Zhao, Liangbing; Nathan, Arokia |
| Publication Info: | Cornell University Library, arXiv.org |
| Subjects: | Time domain analysis; Visual tasks; Three dimensional flow; Two dimensional flow; Optical data processing; Data processing; Algorithms; Visual perception; Optical flow (image analysis); Three dimensional motion; Visual perception driven algorithms; Two dimensional analysis |
| Online Access: | Citation/Abstract; Full text outside of ProQuest |
MARC
| LEADER | | | 00000nab a2200000uu 4500 |
|---|---|---|---|
| 001 | | | 3109526242 |
| 003 | | | UK-CbPIL |
| 022 | | | $a 2331-8422 |
| 035 | | | $a 3109526242 |
| 045 | 0 | | $b d20240910 |
| 100 | 1 | | $a Wang, Shengbo |
| 245 | 1 | | $a Ultrafast vision perception by neuromorphic optical flow |
| 260 | | | $b Cornell University Library, arXiv.org $c Sep 10, 2024 |
| 513 | | | $a Working Paper |
| 520 | 3 | | $a Optical flow is crucial for robotic visual perception, yet current methods primarily operate in a 2D format, capturing movement velocities only in horizontal and vertical dimensions. This limitation results in incomplete motion cues, such as missing regions of interest or detailed motion analysis of different regions, leading to delays in processing high-volume visual data in real-world settings. Here, we report a 3D neuromorphic optical flow method that leverages the time-domain processing capability of memristors to embed external motion features directly into hardware, thereby completing motion cues and dramatically accelerating the computation of movement velocities and subsequent task-specific algorithms. In our demonstration, this approach reduces visual data processing time by an average of 0.3 seconds while maintaining or improving the accuracy of motion prediction, object tracking, and object segmentation. Interframe visual processing is achieved for the first time in UAV scenarios. Furthermore, the neuromorphic optical flow algorithm's flexibility allows seamless integration with existing algorithms, ensuring broad applicability. These advancements open unprecedented avenues for robotic perception, without the trade-off between accuracy and efficiency. |
| 653 | | | $a Time domain analysis |
| 653 | | | $a Visual tasks |
| 653 | | | $a Three dimensional flow |
| 653 | | | $a Two dimensional flow |
| 653 | | | $a Optical data processing |
| 653 | | | $a Data processing |
| 653 | | | $a Algorithms |
| 653 | | | $a Visual perception |
| 653 | | | $a Optical flow (image analysis) |
| 653 | | | $a Three dimensional motion |
| 653 | | | $a Visual perception driven algorithms |
| 653 | | | $a Two dimensional analysis |
| 700 | 1 | | $a Gao, Shuo |
| 700 | 1 | | $a Pu, Tongming |
| 700 | 1 | | $a Zhao, Liangbing |
| 700 | 1 | | $a Nathan, Arokia |
| 773 | 0 | | $t arXiv.org $g (Sep 10, 2024), p. n/a |
| 786 | 0 | | $d ProQuest $t Engineering Database |
| 856 | 4 | 1 | $3 Citation/Abstract $u https://www.proquest.com/docview/3109526242/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch |
| 856 | 4 | 0 | $3 Full text outside of ProQuest $u http://arxiv.org/abs/2409.15345 |