Water Segmentation and Fast Reconstruction of Flood Hydrographs

Project Overview

We study new semi-supervised video object segmentation (VOS) techniques for objects with rapid appearance changes, such as water (floods, waves, etc.).

Accurate semantic segmentation of water and surrounding objects from videos captured in the field (e.g., surveillance cameras, traffic cameras) has direct applications in estimating water levels and constructing flood hydrographs in urban areas in real time during flash floods, hurricanes, and other extreme weather events. It nevertheless remains a difficult task.

Semantic segmentation of water is technically challenging because water often has a rapidly changing appearance caused by free-form self-deformation, environmental illumination, reflections, waves, ripples, turbulence, sediment concentration, etc. Such rapidly changing appearance often leads to poor water segmentation in videos.

We maintain a public benchmark dataset for video water segmentation and a series of models for effective water segmentation.


Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement

Yongqing Liang, Xin Li, Navid Jafari, Qin Chen

Neural Information Processing Systems (NeurIPS), 2020

[Paper] [Supplementary Doc] [Codes]

WaterNet: An adaptive matching pipeline for segmenting water with volatile appearance

Yongqing Liang, Navid Jafari, Xing Luo, Qin Chen, Yanpeng Cao, and Xin Li

Computational Visual Media, 6:65-78, 2020


Real-Time Water Level Monitoring using Live Cameras and Computer Vision Techniques

Navid H. Jafari, Xin Li, Qin Chen, Canyu Le, Logan P. Betzer, and Yongqing Liang

Computers and Geosciences, Vol. 147, Article 104642, 2021


Dataset and Pre-trained Model

The WaterV1 dataset contains:

  • Training Set: 7 video clips with annotations.
  • Test Set: 1 evaluation video.

The WaterV2 dataset contains:

  • Training Set: 2,188 water images with annotations.
  • Test Set: 20 evaluation videos.

A pretrained WaterNet model is also provided, trained on the WaterV2 Training Set for 200 epochs.
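Segmentation quality on the evaluation videos is commonly measured by region similarity, i.e., the intersection-over-union (Jaccard index) between the predicted and ground-truth water masks. A minimal sketch of that metric, assuming binary masks stored as NumPy arrays (the function name `mask_iou` is illustrative and not part of the released code):

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / float(union)

# Toy example: two 4x4 masks that overlap on 2 of 6 foreground pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
gt = np.zeros((4, 4), dtype=np.uint8)
pred[0, :] = 1   # 4 predicted water pixels in row 0
gt[0, 2:] = 1    # ground truth covers cols 2-3 of rows 0 and 1
gt[1, 2:] = 1
print(mask_iou(pred, gt))  # overlap 2, union 6 -> 0.333...
```

Per-video scores are typically averaged over all annotated frames; the same routine applies frame by frame to each test video.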


We gratefully acknowledge the support of the National Science Foundation (EAR-1760582) and the Louisiana State Board of Regents (ITRS LEQSF(2018-21)-RD-B-03) for this research.
