# Precipitation Video Resolution Upscaling

The repository for NI-MVI semestral work on video resolution upscaling.

### Assignment

Take a short 5-10 second video and create a generator that produces a higher-resolution version of it. Use both architectures: GAN and U-Net.
Compare the results.

## Report

### SR of the precipitation video
![SR with U-Net DIV2K model](data/output_unet_div2k.mp4)
![SR with SRResNet model](data/output_srresnet.mp4)

## Milestone
The objective of this work is to upscale the resolution of the `data/target.mp4` video by a factor of 4. The video contains 24 hours of weather radar precipitation data at a resolution of 480x270 pixels. Weather radar data is generally noisy, which poses the secondary challenge of denoising the data during SR.

![Target Video](data/target.mp4)

### Planned approach to the semestral work

The upscaling of video resolution can be decomposed into upscaling of individual frames. Thus, I will focus on image super-resolution (SR) ML models and use the best-performing one to generate the target video.
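As a sketch of this decomposition, the snippet below applies a per-frame upscaler independently to every frame of a clip. The 4x `upscale_frame` here is a nearest-neighbour placeholder of my own (an assumption for illustration); in the actual work, a trained SR model would take its place.

```python
import numpy as np

def upscale_frame(frame: np.ndarray, scale: int = 4) -> np.ndarray:
    """Placeholder per-frame upscaler: nearest-neighbour repetition.
    A trained SR model (U-net / SRGAN) would replace this function."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_video(frames: np.ndarray, scale: int = 4) -> np.ndarray:
    """Apply the per-frame upscaler independently to every frame.
    `frames` has shape (T, H, W, C); the result is (T, H*scale, W*scale, C)."""
    return np.stack([upscale_frame(f, scale) for f in frames])

# A toy clip for the example: 3 RGB frames of 27x48 pixels.
clip = np.random.randint(0, 256, size=(3, 27, 48, 3), dtype=np.uint8)
print(upscale_video(clip).shape)  # (3, 108, 192, 3)
```

Because frames are processed independently, this ignores temporal consistency between consecutive frames, which is an accepted simplification of the planned approach.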

I plan to build and train the following models:
  * U-net, initially motivated by my bachelor's thesis [1]. From my experience, this architecture is able to pick the low-hanging fruit in various computer vision tasks. The U-net described in [2] won second place at the NTIRE 2019 challenge [4], which supports this claim. I plan to utilize the findings from [2] when training the U-net.
  * SRGAN [5], which is the first utilization of the GAN framework for the SR task.
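Both architectures need an upsampling stage; SRGAN's generator [5] uses sub-pixel (pixel-shuffle) layers for this. A minimal NumPy sketch of the pixel-shuffle rearrangement (my own illustration of the operation, not code from this project):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).
    This is the sub-pixel upsampling used in SRGAN's generator:
    a convolution produces r^2 channel groups, which are then
    interleaved into a higher-resolution spatial grid."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

feat = np.arange(16 * 4 * 4, dtype=np.float32).reshape(16, 4, 4)
print(pixel_shuffle(feat, 4).shape)  # (1, 16, 16)
```

The advantage over transposed convolutions is that all computation happens at the low resolution, with upsampling reduced to a cheap memory rearrangement.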

I will use the DIV2K dataset [6] for training. For validation, I will use the Set14 benchmark [7] and a weather radar validation set created for this work. I will use both qualitative evaluation and quantitative evaluation with the PSNR and SSIM metrics.
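For the quantitative part, PSNR follows directly from the mean squared error between the SR output and the ground truth (SSIM is more involved; `skimage.metrics.structural_similarity` provides a reference implementation). A sketch of PSNR, assuming 8-bit images:

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # 28.13
```

Higher PSNR is better, but it correlates imperfectly with perceived quality, which is why the qualitative evaluation is kept alongside it.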

![weather radar image](data/examples/radar.png "1920x1080 weather radar image")

I am posing the following questions:
  * Can an SR model trained on camera images generate weather radar images?
  * Does evaluation on the Set14 benchmark correlate with evaluation on the weather radar validation set?
  * Can training or fine-tuning on weather radar data improve performance?

> All of the weather radar data was provided by the Czech company Meteopress.

### Literature
  * [1] Choma, Matej. *Interpolation and Extrapolation of Subsequent Weather Radar Images.*
  * [2] Feng, Ruicheng, et al. *Suppressing Model Overfitting for Image Super-Resolution Networks.*
  * [3] Lugmayr, Andreas, Martin Danelljan, and Radu Timofte. *NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results.*
  * [4] Cai, Jianrui, et al. *NTIRE 2019 Challenge on Real Image Super-Resolution: Methods and Results.*
  * [5] Ledig, Christian, et al. *Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.*
  * [6] Agustsson, Eirikur, and Radu Timofte. *NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study.*
  * [7] Zeyde, Roman, Michael Elad, and Matan Protter. *On Single Image Scale-Up Using Sparse-Representations.*