Implementation of the paper "Deep Context-Aware Descreening and Rescreening of Halftone Images" in PyTorch. Nov 2018 – Present
This project implements an automated descreening process. Descreening is the task of reconstructing the original continuous-tone image from its halftoned version (halftoning is a mandatory step whenever images interact with printers, scanners, monitors, etc.) while minimizing the amount of data loss.
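To make the halftone patterns that descreening must remove concrete, here is a minimal sketch of ordered dithering with a 2x2 Bayer threshold matrix. This is a generic illustration of halftoning, not code from the paper or this repository:

```python
import numpy as np

# 2x2 Bayer threshold matrix, normalized to [0, 1).
BAYER_2X2 = np.array([[0, 2],
                      [3, 1]]) / 4.0

def ordered_dither(gray):
    """Halftone a grayscale image (values in [0, 1]) via ordered dithering."""
    h, w = gray.shape
    # Tile the threshold matrix so it covers the whole image.
    thresh = np.tile(BAYER_2X2, (h // 2 + 1, w // 2 + 1))[:h, :w]
    # Binarize: each pixel becomes 0 or 1 depending on its local threshold.
    return (gray > thresh).astype(np.uint8)

# A flat 50%-gray patch turns into a checkerboard-like binary pattern,
# which is exactly the kind of structure a descreening model must undo.
patch = np.full((4, 4), 0.5)
print(ordered_dither(patch))
```

Descreening then amounts to inverting this lossy binarization back to a plausible continuous-tone image.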
wikipedia/halftone - Paper
The authors have not published any code for this paper, so this implementation is, to our knowledge, the first one, and it is written entirely in PyTorch.
The implementation can be divided into the following sub-projects:
- CoarseNet: Modified version of the U-Net architecture introduced in "U-Net: Convolutional Networks for Biomedical Image Segmentation", acting as a low-pass filter that removes halftone patterns.
- DetailsNet: A deep CNN generator and two discriminators, trained simultaneously to improve image quality.
- EdgeNet: A simple CNN model that extracts Canny edge features, which are required as part of the end-to-end learning procedure.
- ObjectNet: Modified version of the "Pyramid Scene Parsing Network" that returns only 25 major segmentation classes instead of 150, adapting it to the halftoning process.
- Halftoning-Algorithms: Implementation of several halftoning algorithms from recent digital color halftoning books, used to prepare the input data for the whole project.
- Places365-Preprocessing: A custom, extendable implementation of PyTorch's Dataset abstract class that handles lazy loading of huge datasets, utilizing the CPU for preprocessing and the GPU for training.
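The lazy-loading pattern behind the Places365 preprocessing can be sketched as follows. `torch.utils.data.Dataset` subclasses only need `__len__` and `__getitem__`; the sketch below uses a plain Python class with the same interface, and the class name, `load_fn`, and `transform` hook are illustrative assumptions, not names from the repository:

```python
class LazyImageDataset:
    """Minimal lazy-loading dataset sketch (hypothetical names).

    Mirrors the __len__/__getitem__ interface torch.utils.data.Dataset
    expects: only file paths are kept in memory, and each sample is
    loaded and preprocessed on the CPU when it is first indexed,
    e.g. by a DataLoader worker, while the GPU is kept busy training.
    """

    def __init__(self, paths, load_fn, transform=None):
        self.paths = list(paths)    # cheap: store paths only, no pixel data
        self.load_fn = load_fn      # e.g. PIL.Image.open for real images
        self.transform = transform  # optional CPU-side preprocessing

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        sample = self.load_fn(self.paths[idx])  # actual I/O happens here
        if self.transform is not None:
            sample = self.transform(sample)
        return sample

# Usage with a stand-in loader (real code would decode image files):
ds = LazyImageDataset(["a.png", "b.png"], load_fn=lambda p: p.upper())
print(len(ds), ds[0])
```

In real use, such a dataset is wrapped in a `DataLoader` with multiple workers so preprocessing overlaps with GPU training.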