Creating Segmentation Maps using StyleGAN


Project GitHub Repository | Project Report (Hebrew) | YouTube Demo of the StyleGAN training process

Students: Inbal Aharoni, Shani Israelov. Supervised by: Idan Kligvasser

The use of GANs has drastically affected low-level vision and graphics, particularly in tasks related to image generation and image-to-image translation. Despite all the latest developments, however, the training process is still unstable.

Given a semantic segmentation map, in which every pixel in the image is tagged with the class it represents, we can use a GAN to produce images based on this map and hope to reach a more stable model. Building on the success of GANs, we produced segmentation maps ourselves. With these maps and a generative model, we can gain a semantic understanding of the dataset and even create completely new scenes.
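To make the representation concrete, here is a minimal sketch of how a segmentation map can be stored as a per-pixel class-index tensor and one-hot encoded before being fed to a generator. The class count and resolution below are assumptions for illustration, not the project's actual values.

```python
# Minimal sketch: a segmentation map as per-pixel class indices,
# one-hot encoded into one channel per class (assumed sizes).
import torch
import torch.nn.functional as F

NUM_CLASSES = 8          # assumed number of semantic classes
H, W = 64, 64            # assumed map resolution

# A segmentation map: one integer class label per pixel.
seg_map = torch.randint(low=0, high=NUM_CLASSES, size=(H, W))

# One-hot encoding: shape (NUM_CLASSES, H, W), one channel per class.
one_hot = F.one_hot(seg_map, num_classes=NUM_CLASSES)    # (H, W, C)
one_hot = one_hot.permute(2, 0, 1).float()               # (C, H, W)

print(one_hot.shape)  # torch.Size([8, 64, 64])
```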

We simplified the training process of unconditional GANs by splitting it into two stages. We first generate a segmentation map and then create a realistic image from it with the help of SPADE. We then compared the results of our training process, both by FID and visually, to results from a process that uses only an unconditional GAN. Our training process achieved better results.
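As an illustration of this two-stage idea, the sketch below wires a stage-1 generator that produces a segmentation map from a latent code into a SPADE-like stage-2 generator that turns the map into an image. All module names, shapes, and architectures here are placeholder assumptions, not the project's actual networks.

```python
# Minimal two-stage pipeline sketch: latent code -> segmentation map -> image.
# Both generators are toy stand-ins (assumed shapes and architectures).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, H, W, Z_DIM = 8, 64, 64, 128   # assumed sizes

class SegMapGenerator(nn.Module):
    """Stage 1: unconditional generator that outputs per-pixel class logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(Z_DIM, NUM_CLASSES * H * W)

    def forward(self, z):
        return self.net(z).view(-1, NUM_CLASSES, H, W)

class SpadeLikeGenerator(nn.Module):
    """Stage 2: image generator conditioned on the segmentation map
    (a stand-in for SPADE)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(NUM_CLASSES, 3, kernel_size=3, padding=1)

    def forward(self, one_hot_seg):
        return torch.tanh(self.net(one_hot_seg))

# Stage 1: sample a latent code and generate a segmentation map.
z = torch.randn(1, Z_DIM)
seg_logits = SegMapGenerator()(z)
seg_map = seg_logits.argmax(dim=1)                       # (1, H, W) class IDs

# Stage 2: one-hot encode the map and synthesize an image from it.
one_hot = F.one_hot(seg_map, NUM_CLASSES)                # (1, H, W, C)
one_hot = one_hot.permute(0, 3, 1, 2).float()            # (1, C, H, W)
fake_image = SpadeLikeGenerator()(one_hot)               # (1, 3, H, W)
print(fake_image.shape)
```

In the actual project, stage 1 would be a trained unconditional GAN (e.g. StyleGAN) and stage 2 the SPADE generator; FID would then be computed between the generated images and real images to compare this pipeline against a single unconditional GAN.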