A Study of Segmentation Methods for Corals in Benthic Photographs

22/05/2023

Paper by Yohan Runhaar
Data contributed by the Seaview survey photo-quadrat and image classification dataset (Manuel González-Rivero, Alberto Rodriguez-Ramirez, Oscar Beijbom, Peter Dalton, Emma V. Kennedy, Benjamin P. Neal, Julie Vercelloni, Pim Bongaerts, Anjani Ganase, Dominic E. P. Bryant, et al., University of Queensland Library, 2019).
Supervised by
Dr. Peter Heemskerk and PhD candidate Rick Groenendijk
Blog Written by Marcel Kempers

Introduction to the Project: Advancing Coral Reef Monitoring: Exploring Automated Methods for Precise Coral Segmentation

Marine research and coral monitoring play a crucial role in addressing the dire outlook for coral reefs worldwide. To assess the health of reef sites over time, marine biologists have traditionally relied on in situ data collection and sparse label classification, a labor-intensive process. There is therefore a growing need for automated approaches that save researchers valuable time. In 2013, CoralNet introduced a semi-automated patch-based classifier for coral species in benthic photographs. Despite this progress, both manual methods and existing platforms like CoralNet fall short of providing the dense segmentation masks needed for precise decision-making and long-term ecosystem analysis.

In his paper, Yohan conducted a study to evaluate the current state-of-the-art methods for coral classification and segmentation. Yohan investigated and partially replicated these methods to explore their effectiveness. The findings show that patch-based classification, combined with a sliding window algorithm, proves to be an effective approach for identifying coral presence in underwater images and achieving an approximate semantic segmentation. Additionally, Yohan highlights the potential of combining Gabor filters and Sobel kernel edge detection with other methods for semantic segmentation. Yohan also demonstrates that superpixel algorithms show promise for achieving full dense segmentation of benthic imagery. These algorithms can efficiently convert sparsely labeled datasets into densely labeled masks, which can be used as training data for state-of-the-art supervised segmentation methods. However, further research is needed for each discussed result, and Yohan presents potential future research directions accordingly.

This research is interesting because it aims to address the time-consuming manual processes and the limitations of existing semi-automated approaches. By leveraging advanced computer vision techniques and developing dense segmentation masks, this study offers a pathway to enhance the efficiency and precision of coral reef health assessment. The automation of classification and segmentation tasks empowers marine biologists to allocate more time to analysis and conservation efforts.

Yohan would like to acknowledge the support of the University of Amsterdam's Project Artificial Intelligence. We also extend our sincere gratitude to Peter Heemskerk and PhD candidate Rick Groenendijk for their invaluable supervision and assistance throughout the project. Additionally, we are grateful to the University of Queensland for providing the open and detailed Seaview dataset, which facilitated further research in the field of computer vision for the betterment of our oceans.

The Literature on Coral Segmentation

In a research paper by Aloun et al. [11], a modified version of the JSEG color image segmentation algorithm was presented. The original method aimed to separate color-texture regions in images or videos. However, it assumed uniformly distributed color-texture regions, which limited its effectiveness in underwater image segmentation where colors are influenced by water conditions. To address this, the researchers proposed an improved JSEG algorithm (see Figure 2 above) that processed images in the Lab color space and utilized the k-means algorithm for color segmentation. Experimental results showed that the updated algorithm achieved more accurate segmentation of coral reefs compared to the original method.
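The color-quantization step at the heart of this idea can be sketched in a few lines. The snippet below is a minimal illustration, assuming scikit-image and scikit-learn, of converting an image to the Lab color space and clustering pixel colors with k-means; it deliberately omits the J-image computation and region-growing stages of the full (modified) JSEG pipeline, and the file name is hypothetical.

```python
# Minimal sketch of Lab-space color quantization with k-means.
# This is NOT the full modified JSEG algorithm, only its color-clustering step.
from skimage import io, color
from sklearn.cluster import KMeans

def quantize_lab_colors(image_rgb, n_clusters=8, random_state=0):
    """Cluster pixel colors in Lab space; return a per-pixel cluster index map."""
    lab = color.rgb2lab(image_rgb)          # H x W x 3 array in CIELAB space
    pixels = lab.reshape(-1, 3)             # flatten to N x 3 for clustering
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    cluster_ids = kmeans.fit_predict(pixels)
    return cluster_ids.reshape(lab.shape[:2])

# Example usage (hypothetical file name):
# cluster_map = quantize_lab_colors(io.imread("benthic_image.jpg"), n_clusters=8)
```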

Creating dense image masks for training supervised segmentation models is a time-consuming task. To overcome this challenge, Alonso and Murillo [7] proposed a solution using adaptive superpixel segmentation propagation (see Figure 3 below). They demonstrated that sparse point labels scattered on the image can be augmented into dense representations, which can then be used to train segmentation models. The resulting dense masks were comparable to those generated by training with dense ground truth data. This approach greatly reduces the labeling effort required to achieve detailed segmentation.
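As a rough illustration of the propagation idea (not the authors' implementation), the sketch below assigns each superpixel the majority class of the sparse point labels that fall inside it, using scikit-image's SLIC to generate the superpixels. The `(row, col, class_id)` point format is an assumption for illustration.

```python
# Single-level sketch of sparse-to-dense label propagation: each superpixel
# inherits the majority label of the point annotations that fall inside it.
import numpy as np
from skimage.segmentation import slic

def propagate_point_labels(image_rgb, points, n_segments=200):
    """points: iterable of (row, col, class_id) sparse annotations."""
    segments = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    dense = np.full(segments.shape, -1, dtype=np.int64)   # -1 marks unlabeled pixels
    # Collect the point labels that fall inside each superpixel.
    votes = {}
    for row, col, class_id in points:
        votes.setdefault(segments[row, col], []).append(class_id)
    # Assign each labeled superpixel the majority class of its points.
    for seg_id, class_ids in votes.items():
        dense[segments == seg_id] = np.bincount(class_ids).argmax()
    return dense, segments
```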



The methodology for sparse label augmentation is based on superpixel segmentation. Traditional single-level superpixel approaches rely on a predefined number of superpixels, which leaves regions without any point label unlabeled in the resulting mask. To address this, the authors proposed a multi-level superpixel segmentation method: an iterative process in which the superpixel size increases until all pixels are labeled. This ensures that no regions are left unlabeled, improving the accuracy of the dense masks (see Figure 4 below).
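Building on the `propagate_point_labels` sketch above, the multi-level idea could be approximated as follows: re-run the propagation with progressively fewer (hence larger) superpixels and only fill in pixels that are still unlabeled. The level schedule and stopping rule here are simplified assumptions, not the paper's exact algorithm.

```python
# Multi-level sketch: fill the dense mask level by level, small to large superpixels.
import numpy as np

def multilevel_dense_mask(image_rgb, points, levels=(600, 300, 150, 50)):
    dense = np.full(image_rgb.shape[:2], -1, dtype=np.int64)        # -1 = unlabeled
    for n_segments in levels:                                        # fewer segments = larger superpixels
        level_mask, _ = propagate_point_labels(image_rgb, points, n_segments=n_segments)
        fill = (dense == -1) & (level_mask != -1)                    # only still-unlabeled pixels
        dense[fill] = level_mask[fill]
        if not (dense == -1).any():                                  # fully labeled: stop early
            break
    return dense
```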



In a follow-up paper, Alonso et al. [8] introduced an algorithm that can train semantic segmentation models from sparsely labeled data. It builds upon their previous multi-level superpixel segmentation implementation, achieving a significant reduction in processing time. The proposed method yields comparable results to training with extensive annotations, significantly reducing the labeling effort while achieving detailed segmentation. The augmented dense ground truth masks, along with paired images, are then used as training data for the DeepLabv3+ semantic segmentation model, enabling accurate segmentation of unseen coral images.

In the point label-aware superpixels paper, Raine et al. [9] compared their method against existing implementations such as CoralSeg [8] and the Superpixel Sampling Network (SSN) [18]. The point label-aware implementation produced more precise dense masks (Figure 6 below).

Patch-based Classification

One approach to segmenting coral images is through patch-based classification. Building on the success of patch-based classification for automated point label annotation highlighted by the CoralNet paper [6], the study explores the use of a patch-based classification model for full image segmentation. A pre-trained ResNet network, known for its accuracy, is fine-tuned on the Seaview dataset to replicate the patch-based classification method described in CoralNet.

To use the Seaview dataset, each image is cropped into multiple patches based on its sparse point labels: each label serves as the center of a patch extracted from the image. Two crop sizes, 224 × 224 and 112 × 112 pixels, are used. Figures 9 and 10 showcase sample cropped patches from an image at these crop sizes.
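A minimal sketch of this cropping step is shown below; edge handling via reflection padding and the `(row, col, class_id)` point format are assumptions for illustration, and the study's exact preprocessing may differ.

```python
# Crop fixed-size patches centered on each point label.
import numpy as np

def extract_patches(image_rgb, points, patch_size=224):
    """points: iterable of (row, col, class_id); returns (patches, classes)."""
    half = patch_size // 2
    # Reflect-pad so patches centered near the border stay fully inside the array.
    padded = np.pad(image_rgb, ((half, half), (half, half), (0, 0)), mode="reflect")
    patches, classes = [], []
    for row, col, class_id in points:
        r, c = row + half, col + half                  # shift into padded coordinates
        patches.append(padded[r - half:r + half, c - half:c + half])
        classes.append(class_id)
    return np.stack(patches), np.array(classes)

# The same labels can be cropped at both scales:
# patches_224, labels = extract_patches(image, sparse_points, patch_size=224)
# patches_112, _      = extract_patches(image, sparse_points, patch_size=112)
```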

The modified ResNet network is trained to classify patches as either coral or not coral. Training runs for 15 epochs with a learning rate of 1e-2, the Adam optimizer, and a cross-entropy loss function. Both the 224 × 224 and 112 × 112 patch datasets are rebalanced by undersampling to address class imbalance. The data is split into training and test sets (80:20), and the training set is split again to create a validation set.
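The training setup described above could look roughly like the following PyTorch sketch (assuming a recent torchvision); the exact ResNet depth, data pipeline, and rebalancing code used in the study are not reproduced here, so resnet50 is only an illustrative choice.

```python
# Sketch of the fine-tuning setup: pre-trained ResNet with a two-class head,
# Adam at lr=1e-2, cross-entropy loss. ResNet depth is an assumption.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)   # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)                      # coral / not coral

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

def train_one_epoch(loader, device="cuda"):
    """loader yields (patch_batch, label_batch) from the rebalanced patch dataset."""
    model.to(device).train()
    for patches, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(patches.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()

# Training would run for 15 epochs on the 80:20 train/test split, with a further
# validation split carved out of the training portion.
```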

The results show that the learning curve and validation accuracies for both crop sizes converge to similar values. There is no significant difference in performance between the two crop sizes for patch-based classification, as depicted in Figures 11a, 11b, 11c, and 11d.

"Super"pixels to the Rescue

Superpixels are an important technique that has shown promising results in generating dense image segmentation masks using sparse annotations [7][8][17]. Inspired by the work of Alonso et al., who found the SLIC method [14] to be effective in generating superpixels, we investigate this method in our experiments.

The SLIC method is a variant of the classical k-means clustering algorithm that groups pixels with similar colors and close proximity in the image plane, effectively generating superpixels.
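For reference, the SLIC paper [14] combines the color and spatial distances between a pixel and a cluster center into a single measure, with a compactness parameter m trading off spatial regularity against color homogeneity (S is the grid interval between the initial cluster centers):

```latex
% Combined distance used by SLIC (Achanta et al. [14]):
% d_c = Euclidean distance in CIELAB color space, d_s = spatial (x, y) distance,
% S = grid interval between initial cluster centers, m = compactness parameter.
D = \sqrt{d_c^{2} + \left(\frac{d_s}{S}\right)^{2} m^{2}}
```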

In his study, Yohan explores the claims made by the superpixel papers [7][8][9] regarding the success of the SLIC method for generating superpixels. For this experiment, he applies SLIC to two images from the Atlantic subset of the Seaview dataset, 17001652802.jpg and 17004348702.jpg, varying the number of segments to generate: 100, 200, and 300.
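A short sketch of how this experiment could be reproduced with scikit-image's SLIC implementation is shown below; the compactness value is an assumed default, since the exact parameters are not listed here.

```python
# Segment the two Seaview images with 100, 200 and 300 superpixels and draw boundaries.
import matplotlib.pyplot as plt
from skimage import io
from skimage.segmentation import slic, mark_boundaries

images = ["17001652802.jpg", "17004348702.jpg"]   # Atlantic subset of the Seaview dataset
segment_counts = [100, 200, 300]

fig, axes = plt.subplots(len(images), len(segment_counts), figsize=(15, 10))
for i, path in enumerate(images):
    image = io.imread(path)
    for j, n in enumerate(segment_counts):
        segments = slic(image, n_segments=n, compactness=10, start_label=1)
        axes[i, j].imshow(mark_boundaries(image, segments))
        axes[i, j].set_title(f"{n} superpixels")
        axes[i, j].axis("off")
plt.tight_layout()
plt.show()
```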

The results (shown in Figures 15 and 16 below) demonstrate the effectiveness of the SLIC method in properly clustering pixels based on their similarity to neighboring pixels. Interestingly, the best results for both images are achieved with 200 superpixels. When using only 100 superpixels, coral pixels tend to be clustered into larger groups along with non-coral pixels. Conversely, 300 superpixels result in unnecessary splits within already well-grouped clusters.

Moreover, when overlaying the labels (blue for coral labels and red for non-coral labels) on the images with 200 superpixels, we observe that no superpixel contains both coral and non-coral labels. However, it is important to note that there are some superpixels that do not contain any labels at all, as highlighted when overlaying the labels on the images.
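The overlay check itself can be expressed as a small consistency test: map each sparse point onto its superpixel, then report superpixels that receive mixed labels and those that receive none. The sketch below assumes the same illustrative `(row, col, class_id)` point format as before.

```python
# Check which superpixels contain mixed labels and which contain no labels at all.
import numpy as np
from skimage.segmentation import slic

def check_superpixel_labels(image_rgb, points, n_segments=200):
    """Return superpixel ids with mixed labels and ids with no labels."""
    segments = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    per_segment = {}
    for row, col, class_id in points:                 # e.g. 1 = coral, 0 = non-coral
        per_segment.setdefault(segments[row, col], set()).add(class_id)
    all_ids = set(np.unique(segments).tolist())
    mixed = [s for s, classes in per_segment.items() if len(classes) > 1]
    unlabeled = sorted(all_ids - per_segment.keys())
    return mixed, unlabeled
```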

Conclusions

Of the experiments performed, the superpixel results are arguably the most promising. The SLIC method is already capable of generating proper superpixels without requiring any adaptation to work with benthic images. That said, overlaying the labels on the generated superpixels does show that some superpixel clusters contain no labels at all (coral or non-coral), nor do they neighbor clusters that contain any labels.

While this blog post provides an overview without delving deep into the methodology and results of two other research methods, namely Sliding Window Segmentation and Image Feature Extraction Methods, we invite you to contact us for the full paper and a detailed description of these approaches.

By continuing to explore these avenues and developing these techniques, we can enhance the accuracy and efficiency of coral segmentation. Another interesting area for future research would be to combine both classification and superpixel methods to investigate the accuracy of utilizing a well-trained classifier to label superpixels that do not already contain a label.

Join us in the pursuit of better coral analysis and conservation efforts by uploading your data and experiencing the capabilities of our open coral AI system firsthand. Together, we can make a positive impact on the future of our coral ecosystems.

Conduct a research project with us

This project is very close to our hearts at Reef Support. It isn't only about helping researchers work with coastal communities in their effort to innovate with digital technologies and monitor the reefs, but also about fostering widespread marine knowledge. If you are a student interested in taking on a reef-related research challenge, contact us and let us know about your idea, or ask about current opportunities and we will see where you can play a role.

References

[1] Lauretta Burke, Katie Reytar, Mark Spalding, and Allison Perry. Reefs at risk revisited. World Resources Institute, Nov 2012.
[2] Coral reef ecosystems. National Oceanic and Atmospheric Administration.
[3] David Obura, Greta Aeby, Natchanon Amornthammarong, Ward Appeltans, Nic Bax, Joe Bishop, Russell Brainard, Samuel Chan, Pamela Fletcher, Timothy Lamont, Lew Gramer, Mishal Gudka, John Halas, James Hendee, Gregor Hodgson, Danwei Huang, Mike Jankulak, Albert Jones, Tadashi Kimura, and Supin Wongbusarakum. Coral reef monitoring, reef assessment technologies, and ecosystem-based management. Frontiers in Marine Science, 6:580, 09 2019.
[4] Reef Monitoring Techniques. https://newheavenreefconservation.org/learning-resources/explore-topics/reef-monitoring-techniques. [Accessed 15 Nov 2022].
[5] Hannah Murphy and Gregory Jenkins. Observational methods used in marine spatial monitoring of fishes and associated habitats: A review. Marine and Freshwater Research - MAR FRESHWATER RES, 61, 01 2010.
[6] Qimin Chen, Oscar Beijbom, Stephen Chan, Jessica Bouwmeester, and David Kriegman. A new deep learning engine for CoralNet. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 3686–3695, 2021.
[7] Inigo Alonso and Ana C. Murillo. Semantic segmentation from sparse labeling using multilevel superpixels. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5785–5792, 2018.
[8] Iñigo Alonso, Matan Yuval, Gal Eyal, Tali Treibitz, and Ana C. Murillo. CoralSeg: Learning coral segmentation from sparse annotations. Journal of Field Robotics, 36(8):1456–1477, 2019.
[9] Scarlett Raine, Ross Marchant, Brano Kusy, Frederic Maire, and Tobias Fischer. Point label aware superpixels for multi-species segmentation of underwater imagery. IEEE Robotics and Automation Letters, 7(3):8291–8298, jul 2022.
[10] Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. 2019.
[11] Mohammad Aloun, Muhammad Hitam, Wan Nural Jawahir Hj Wan Yussof, Abdul Aziz Abdul Hamid, Zainudin Bachok, and Mohd Safuan Che Din. Improved coral reef images segmentation using modified JSEG algorithm. Journal of Telecommunication, Electronic and Computer Engineering, 11 2017.
[12] Y. Deng and B.S. Manjunath. Unsupervised segmentation of color-texture regions in images and video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(8):800–810, 2001.
[13] Jing Zhang, Yong-Wei Gao, Sheng-Wei Feng, Zhi-Hua Chen, and Yu-Bo Yuan. Image segmentation with texture clustering based JSEG. In 2015 International Conference on Machine Learning and Cybernetics (ICMLC), volume 2, pages 599–603, 2015.
[14] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274–2282, 2012.
[15] Christian Conrad, Matthias Mertz, and Rudolf Mester. Contour-relaxed superpixels. In Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), volume 8081 of Lecture Notes in Computer Science, 2013.
[16] Yuhang Zhang, Richard Hartley, John Mashford, and Stewart Burn. Superpixels via pseudo-boolean optimization. In 2011 International Conference on Computer Vision, pages 1387–1394, 2011.
[17] Scarlett Raine, Ross Marchant, Brano Kusy, Frederic Maire, and Tobias Fischer. Point label aware superpixels for multi-species segmentation of underwater imagery. IEEE Robotics and Automation Letters, 7(3):8291–8298, jul 2022.
[18] Varun Jampani, Deqing Sun, Ming-Yu Liu, Ming-Hsuan Yang, and Jan Kautz. Superpixel sampling networks. CoRR, abs/1807.10174, 2018.
[19] Manuel González-Rivero, Alberto Rodriguez-Ramirez, Oscar Beijbom, Peter Dalton, Emma V Kennedy, Benjamin P Neal, Julie Vercelloni, Pim Bongaerts, Anjani Ganase, Dominic EP Bryant, et al. Seaview survey photo-quadrat and image classification dataset. University of Queensland Library, 2019.
[20] Jinsu Lee, Junseong Bang, and Seong-Il Yang. Object detection with sliding window in images including multiple similar objects. In 2017 International Conference on Information and Communication Technology Convergence (ICTC), pages 803–806, 2017.
[21] Xuewen Wang, Xiaoqing Ding, and Changsong Liu. Gabor filters-based feature extraction for character recognition. Pattern Recognition, 38(3):369–379, 2005.
[22] Susovan Jana, Ranjan Parekh, and Bijan Sarkar. Chapter 3 - A semi-supervised approach
for automatic detection and segmentation of optic disc from retinal fundus image. In Janmenjoy Nayak, Bighnaraj Naik, Danilo Pelusi, and Asit Kumar Das, editors, Handbook of Computational Intelligence in Biomedical Engineering and Healthcare, pages 65–91. Academic Press, January 2021.
[23] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679–698, 1986.
[24] Satoshi Suzuki and Keiichi Abe. Topological structural analysis of digitized binary images by border following. Computer Vision, Graphics, and Image Processing, 30(1):32–46, 1985.
[25] Bernhard Preim and Charl Botha. Chapter 4 - Image Analysis for Medical Visualization. In Bernhard Preim and Charl Botha, editors, Visual Computing for Medicine (Second Edition), pages 111–175. Morgan Kaufmann, Boston, second edition, 2014.
[26] Tao Hu, Yao Wang, Yisong Chen, Peng Lu, Heng Wang, and Guoping Wang. Sobel heuristic kernel for aerial semantic segmentation. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 3074–3078, 2018.