Small, sparse, but substantial: techniques for segmenting small agricultural fields using sparse ground data
Abstract
The recent push toward digital agriculture (DA) has renewed significant research interest in the automated delineation of agricultural fields. Most prior work on this problem has focused on detecting medium to large fields, yet there is strong evidence that around 40% of fields worldwide, and 70% of fields in Asia and Africa, are small. The lack of adequate labelled images for small fields, wide variations in their colour, texture, and shape, and the faint boundary lines separating them make it difficult to develop an end-to-end learning model for detecting such fields. Hence, in this paper, we present a multi-stage approach that combines machine learning and image-processing techniques. In the first stage, we leverage state-of-the-art edge-detection algorithms such as holistically nested edge detection (HED) to extract first-level contours and polygons. In the second stage, we propose image-processing techniques to identify and eliminate polygons that are non-fields, over-segmentations, or noise. The next stage tackles under-segmentation using a combination of a novel ‘cut-point’ based technique and localized second-level edge detection to obtain individual parcels. Since small, non-cropped but vegetated or built-up pockets can be interspersed in areas that are predominantly cropland, in the final stage we train a classifier to label each parcel from the previous stage as an agricultural field or not. In an evaluation using high-resolution (1.19 m) imagery sourced from DigitalGlobe, our approach achieves an encouraging F-score of 0.84 in areas with large fields and a reasonable 0.73 in areas with small fields. We also apply our techniques, without any additional specialized processing, to coarser-resolution open satellite imagery and find the approach reasonable for delineating large fields in such imagery.
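As a rough, hypothetical illustration of the first two stages (edge detection followed by polygon filtering), the sketch below assumes a HED-style edge-probability map is already available and converts it into candidate field polygons, discarding tiny regions. The function name extract_field_polygons, the binarization threshold, and the area cut-off are illustrative assumptions, not the paper's actual implementation.

```python
import cv2
import numpy as np

def extract_field_polygons(edge_prob, thresh=0.5, min_area_px=200):
    """Turn an edge-probability map (e.g. HED output in [0, 1]) into
    candidate field polygons, discarding tiny noise regions."""
    # Binarize the edge map; boundary pixels become 255.
    edges = (edge_prob >= thresh).astype(np.uint8) * 255
    # Invert so field interiors (non-edge regions) are foreground.
    interiors = cv2.bitwise_not(edges)
    # Each connected interior region enclosed by edges is a candidate parcel.
    contours, _ = cv2.findContours(interiors, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for c in contours:
        # Stage-2-style filtering: drop polygons too small to be fields.
        if cv2.contourArea(c) < min_area_px:
            continue
        # Simplify the contour into a polygonal outline.
        polygons.append(cv2.approxPolyDP(c, 2.0, True))
    return polygons

if __name__ == "__main__":
    # Synthetic edge map: a grid of boundary lines on a 200x200 tile.
    prob = np.zeros((200, 200), dtype=np.float32)
    prob[::50, :] = 1.0
    prob[:, ::50] = 1.0
    print(len(extract_field_polygons(prob)), "candidate parcels")
```

The area threshold stands in for the paper's broader noise and over-segmentation filters; handling under-segmented parcels (the cut-point stage) and classifying each parcel as field or non-field would require the later stages described above.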