ED2 dataset

Citation Author(s):
  Tofayel Ahammad Ovee, Auburn University
  Eftakhar Ahmed Arnob, Auburn University
  Jean-Francois Louf, Auburn University

Submitted by: Jean-Francois Louf
Last updated: Sat, 03/22/2025 - 15:03
DOI: 10.21227/k52q-qs41

Abstract 

Shape completion remains a fundamental challenge in computer vision and image processing, particularly for tasks involving hand-drawn sketches and occluded objects. Traditional deep learning methods such as Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) often suffer from high computational costs and poor generalization on sparse, abstract structures. We introduce the Efficiency-Driven Encoder-Decoder (ED²) model, a novel neural architecture designed to achieve state-of-the-art shape reconstruction quality while significantly reducing computational overhead. Unlike conventional methods, ED² utilizes a compact encoder-decoder framework optimized to minimize structural discrepancies through an adaptive loss function, ensuring high-fidelity reconstruction with reduced artifacts. Extensive evaluations on diverse datasets demonstrate ED²'s superior perceptual quality, achieving Structural Similarity Index (SSIM) values of 80–90% even for inputs with up to 75% missing information. Furthermore, our approach enhances edge continuity, geometric consistency, and shape plausibility, outperforming state-of-the-art models such as GAN*, SketchGAN*, and U-Net* in both visual realism and computational efficiency. The model's lightweight design enables real-time deployment in image inpainting, sketch-based design, and augmented reality applications, making it well-suited for resource-constrained systems. By bridging the gap between efficiency and perceptual quality, ED² sets a new benchmark for robust and scalable shape completion in image processing.

Instructions: 

Efficiency-Driven Neural Network for Shape Completion - Source Code Documentation

This repository contains the implementation of an Efficiency-Driven Encoder-Decoder (ED²) model for shape completion in image processing. The model is designed to achieve state-of-the-art shape reconstruction quality while significantly reducing computational overhead.

Repository Structure

The repository should be organized as follows:

/
├── Source_Code.ipynb     # Main Jupyter notebook containing all the code
├── dataset/              # Directory containing the image datasets
│   ├── complete/         # Complete shape images
│   │   ├── circle/       # Circle images
│   │   ├── rectangle/    # Rectangle images
│   │   ├── heart/        # Heart images
│   │   ├── star/         # Star images
│   │   └── triangle/     # Triangle images
│   ├── partial/          # Partial shape images with same structure
│   │   ├── circle/       # Circle images
│   │   ├── rectangle/    # Rectangle images
│   │   ├── heart/        # Heart images
│   │   ├── star/         # Star images
│   │   └── triangle/     # Triangle images
│   └── SSIM_Images/      # Images for SSIM analysis
├── Data/                 # Directory containing analysis data
│   ├── data.txt          # SSIM vs Partiality index data
│   ├── component_data.txt          # Component analysis data
│   └── frequency_component_1_and_2.txt  # PCA analysis data
└── README.md             # This file

Requirements

The code requires the following libraries:

  • Python 3.x
  • TensorFlow 2.x
  • NumPy
  • Matplotlib
  • OpenCV (cv2)
  • Pandas
  • scikit-learn
  • scikit-image
  • SciPy

You can install the required packages using:

pip install tensorflow numpy matplotlib opencv-python pandas scikit-learn scikit-image scipy
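
A quick sanity check (not part of the original notebook) can confirm that the required packages resolve before you open the notebook; the module names below are the import names corresponding to the packages listed above:

```python
# Verify that all libraries required by Source_Code.ipynb are importable.
import importlib.util

REQUIRED = ["tensorflow", "numpy", "matplotlib", "cv2",
            "pandas", "sklearn", "skimage", "scipy"]

missing = [m for m in REQUIRED if importlib.util.find_spec(m) is None]
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are available.")
```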

Running the Code

  1. The main code and graph generation codes are contained in Source_Code.ipynb, which can be run using Jupyter Notebook or Jupyter Lab.
  2. Ensure that the dataset and Data folders are placed in the correct directory as per the structure mentioned above.
  3. Open the Source_Code.ipynb file in Jupyter Notebook or any compatible environment.
  4. Run the notebook cells sequentially to execute the code and generate the results.
  5. The code will automatically handle training the model, evaluating it, and generating various plots and results.
  6. The code will generate and save multiple PDF files with the plotted graphs in the same directory as the notebook.

Important Notes

  1. Path Configuration:
    • The code uses relative paths (e.g., ./dataset/, ./Data/). Make sure your current working directory is set correctly.
    • If you encounter path-related errors, you may need to modify the path variables in the code:
      • DATASET_RELATIVE_PATH = './dataset/'
      • BASE_DIR = '.'
  2. macOS-Specific Code:
    • The section under "Figure 5 - a, b, c, d" contains code that only works on macOS, due to how TensorFlow handles model files on different operating systems.
    • Look for the comment: [IMPORTANT: This code only works on Mac OS due to how tensorflow handles model files in different OS]
    • If you're using Windows or Linux, this specific code block may not work properly, but the rest of the notebook should run fine.
  3. Dataset Loading:
    • The code expects image datasets in the proper structure. Make sure your dataset is properly organized as described in the repository structure.
    • If the dataset is not found or images fail to load, check the console for error messages about file paths.
  4. Checkpoint Management:
    • The code attempts to find an existing trained model in the directory structure. If no model is found, it will train a new one.
    • Trained models will be saved in directories with names like results-{train_acc:.2f}-{val_acc:.2f}/.
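
The checkpoint-discovery behavior in note 4 can be sketched as follows; the helper name `find_trained_model`, the regex, and the choice to rank by validation accuracy are assumptions, not the notebook's actual code:

```python
# Hedged sketch: find an existing results-{train_acc:.2f}-{val_acc:.2f}/
# directory, or return None so the caller knows to train a new model.
import os
import re

def find_trained_model(base_dir='.'):
    """Return the saved model directory with the best val accuracy, or None."""
    pattern = re.compile(r'^results-(\d+\.\d{2})-(\d+\.\d{2})$')
    candidates = []
    for name in os.listdir(base_dir):
        match = pattern.match(name)
        if match and os.path.isdir(os.path.join(base_dir, name)):
            # Rank by validation accuracy (the second number in the name).
            candidates.append((float(match.group(2)),
                               os.path.join(base_dir, name)))
    return max(candidates)[1] if candidates else None
```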

Code Sections

The notebook is organized into several sections:

  1. Main Source Code: Dataset processing, model training, model evaluation, and results plotting.
  2. Figure 4: Dataset cloud representation for visualizing the training data.
  3. Figure 5: PCA analysis and visualization of the model's internal representations (macOS-specific).
  4. Figure 6: Training and testing results visualization.
  5. Figure 7: SSIM index plots for comparing true and predicted shapes.
  6. Figure 9: SSIM index vs Partiality index analysis - Shows how the model maintains high structural similarity even when large portions of shapes are missing.
  7. Figure 10: Resource efficiency comparison - Compares the computational efficiency of the ED² model with other state-of-the-art models.

Troubleshooting

If you encounter issues:

  1. Path errors: Ensure all the file paths are correct for your system. You may need to modify path variables.
  2. Missing datasets: Make sure the dataset directory structure is correct.
  3. macOS-specific code: Skip the "Figure 5" section if you're not using macOS or adapt it for your operating system.
