The Wayback Machine - https://web.archive.org/web/20200304113149/https://github.com/topics/image-to-image-translation

image-to-image-translation

Here are 63 public repositories matching this topic...

lkiani
lkiani commented May 18, 2019

Which images should I use in the evaluation code?
I compared the generated images with the ground-truth images, and the generated images with the labels, but I got an error and all of the metrics came out as zero.
Should the generated images be converted to labels first?
Solving this problem is very important to me.
Thank you.

shinigami1992
shinigami1992 commented Oct 13, 2019

Hi, thanks for your great work.

I'm trying to apply StarGAN to my own custom anime dataset to learn facial expression transfer.
My dataset includes 2,000 images per emotion (2k happy, 2k sad, etc.).

I ran the training process for about 100k iterations.

The problem is that I only see small changes, mostly in the mouth area. Do you have any suggestions? How can I make the network more f

xinario
xinario commented Jan 22, 2019
  1. Which are the 850 images mentioned in your paper?
    Inside each experiment folder (SE0, SE1, ..., SE28) there are 906 images. To get the exact 850 images, first reorder the image sequence by the [SliceLocation] field of the DICOM header (sorted in ascending order), which arranges the images from pelvis to head. Then keep slices 21 to 870 and discard the rest.
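The selection procedure described above (sort ascending by SliceLocation, keep slices 21 through 870) can be sketched as follows. This is a minimal illustration, not code from the repository: it assumes the SliceLocation values have already been read from the DICOM headers (e.g. with pydicom) into `(slice_location, filename)` pairs, and the function name `select_slices` is hypothetical.

```python
def select_slices(slices, first=21, last=870):
    """Reproduce the 850-image selection for one experiment folder.

    slices: list of (slice_location, filename) pairs, one per DICOM image
            in the folder (906 per folder, in arbitrary order).
    Sorts ascending by SliceLocation (pelvis -> head), then keeps
    slices `first`..`last` inclusive, i.e. 870 - 21 + 1 = 850 images.
    """
    ordered = sorted(slices, key=lambda s: s[0])
    return ordered[first - 1:last]
```

With 906 input slices this returns exactly 850 pairs, so a quick length check is an easy way to confirm the folder was processed correctly.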

Improve this page

Add a description, image, and links to the image-to-image-translation topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the image-to-image-translation topic, visit your repo's landing page and select "manage topics."

Learn more
