PhenoBench -- A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain

Bibliographic Details
Published in: arXiv.org (Jul 24, 2024), p. n/a
Main Author: Weyler, Jan
Other Authors: Magistri, Federico, Marks, Elias, Yue Linn Chong, Sodano, Matteo, Roggiolani, Gianmarco, Chebrolu, Nived, Stachniss, Cyrill, Behley, Jens
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2823796831
003 UK-CbPIL
022 |a 2331-8422 
024 7 |a 10.1109/TPAMI.2024.3419548  |2 doi 
035 |a 2823796831 
045 0 |b d20240724 
100 1 |a Weyler, Jan 
245 1 |a PhenoBench -- A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain 
260 |b Cornell University Library, arXiv.org  |c Jul 24, 2024 
513 |a Working Paper 
520 3 |a The production of food, feed, fiber, and fuel is a key task of agriculture, which has to cope with many challenges in the upcoming decades, e.g., higher demand, climate change, lack of workers, and the limited availability of arable land. Vision systems can support making better and more sustainable field management decisions, but also support the breeding of new crop varieties by allowing temporally dense and reproducible measurements. Recently, agricultural robotics has attracted increasing interest in the vision and robotics communities, since it is a promising avenue for coping with the aforementioned lack of workers and enabling more sustainable production. While large datasets and benchmarks in other domains are readily available and enable significant progress, agricultural datasets and benchmarks are comparably rare. We present an annotated dataset and benchmarks for the semantic interpretation of real agricultural fields. Our dataset, recorded with a UAV, provides high-quality, pixel-wise annotations of crops and weeds, as well as crop leaf instances at the same time. Furthermore, we provide benchmarks for various tasks on a hidden test set comprised of different fields: known fields covered by the training data and a completely unseen field. Our dataset, benchmarks, and code are available at \url{https://www.phenobench.org}. 
653 |a Robotics 
653 |a Visual tasks 
653 |a Datasets 
653 |a Agricultural production 
653 |a Visual perception 
653 |a Image segmentation 
653 |a Vision systems 
653 |a Crops 
653 |a Arable land 
653 |a Availability 
653 |a Crop production 
653 |a Computer vision 
653 |a Semantic segmentation 
653 |a Plants (botany) 
653 |a Domains 
653 |a Benchmarks 
653 |a Visual perception driven algorithms 
653 |a Semantics 
700 1 |a Magistri, Federico 
700 1 |a Marks, Elias 
700 1 |a Yue Linn Chong 
700 1 |a Sodano, Matteo 
700 1 |a Roggiolani, Gianmarco 
700 1 |a Chebrolu, Nived 
700 1 |a Stachniss, Cyrill 
700 1 |a Behley, Jens 
773 0 |t arXiv.org  |g (Jul 24, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2823796831/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2306.04557