Digital Twin (DT)-CycleGAN: Enabling Zero-Shot Sim-to-Real Transfer of Visual Grasping Models

Citation Author(s): David Liu, Yuzhong Chen, Zihao Wu
Submitted by: Zihao Wu
Last updated: Tue, 03/14/2023 - 17:14
DOI: 10.21227/8s9p-ak63

Abstract 

Deep learning has revolutionized the field of robotics. To deal with the lack of annotated training samples for learning deep models in robotics, Sim-to-Real transfer has been widely adopted. However, deep models trained in simulation environments typically do not transfer well to the real world due to the challenging “reality gap”. In response, this letter presents a conceptually new Digital Twin (DT)-CycleGAN framework that integrates the advantages of both the DT methodology and the CycleGAN model so that the reality gap can be effectively bridged. Our core innovation is that the real and virtual DT robots are forced to mimic each other so that the gaps between simulated and realistic robotic behaviors are minimized. To realize this innovation, visual grasping is employed as an exemplar robotic task, and the reality gap in zero-shot Sim-to-Real transfer of visual grasping models is formulated as grasping action consistency losses that are intrinsically penalized during DT-CycleGAN training in realistic simulation environments. Specifically, first, cycle consistency losses between real visual images and simulation images are defined and minimized to reduce the reality gap in visual appearance during visual grasping tasks. Second, the grasping agent’s action consistency losses are defined and penalized to minimize the inconsistency of the agent’s actions between the virtual states generated by the DT-CycleGAN generator and the real visual states. Extensive experiments demonstrate the effectiveness and efficiency of the DT-CycleGAN framework for zero-shot Sim-to-Real transfer.

This entry provides the source code for the above work to facilitate further research in this promising direction. Implementation details can also be found at https://github.com/YuzhongChen-98/DigitalTwin-CycleGAN.
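
For intuition, the two consistency losses described in the abstract can be sketched in PyTorch roughly as follows. This is a minimal illustrative sketch, not code from the repository; G_sim2real, G_real2sim, and agent are hypothetical placeholders for the two CycleGAN generators and the grasping policy network, and the exact loss forms are defined in the paper.

import torch.nn.functional as F

def cycle_consistency_loss(G_sim2real, G_real2sim, sim_img, real_img):
    # Translate each image to the other domain and back; the round trip
    # should reproduce the original image (standard CycleGAN cycle loss).
    sim_rec = G_real2sim(G_sim2real(sim_img))
    real_rec = G_sim2real(G_real2sim(real_img))
    return F.l1_loss(sim_rec, sim_img) + F.l1_loss(real_rec, real_img)

def action_consistency_loss(agent, G_real2sim, real_img):
    # The grasping agent's action on a real visual state should match its
    # action on the virtual state the generator produces from that state.
    action_real = agent(real_img)
    action_virtual = agent(G_real2sim(real_img))
    # MSE is one plausible choice of penalty; see the paper for the exact form.
    return F.mse_loss(action_virtual, action_real)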

Instructions: 

DEMO Video
===============
A standard .mp4 file that can be played in most media players.

DEMO Code
===============

###########Requirements
Python packages required for this project:
numpy==1.21.6
pybullet==3.2.5
timm==0.4.12
torch==1.9.1
torchvision==0.10.1
tqdm==4.62.3
opencv_python_headless==4.5.5.64
Pillow==9.2.0
visdom==0.1.8.9
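
These can be installed with pip; assuming the list above is saved as requirements.txt:

pip install -r requirements.txt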

###########File tree
The files are organized as follows:
├─GAN // Train the DT-CycleGAN, CycleGAN, and RetinaGAN models
│ ├─Data // Data for GAN training
│ │ └─train
│ │ ├─real_block
│ │ ├─real_blue
│ │ ├─real_red
│ │ └─simu_pure_1k
│ └─output // Output trained GAN model
│ ├─CycleGAN
│ ├─DT-CycleGAN
│ │ ├─blocks-complex
│ │ ├─blocks-pure
│ │ ├─blue
│ │ └─red
│ └─RetinaGAN
└─Robot-FTC // Train the model in the simulation environment
├─checkpoints
├─Env
│ └─__pycache__
├─ftc_robot // robot model files for PyBullet (see the sketch after this tree)
│ ├─config
│ ├─launch
│ ├─meshes
│ └─urdf
├─Mesh // background mesh images
│ └─swirly
├─objects // grasp targets
├─runs // logs recorded during training
│ ├─Object_detect_Mesh_resnet26d
│ ├─Object_detect_Mesh_swin_tiny
│ ├─Object_detect_Mesh_vit_tiny
│ ├─Object_detect_pure_swin-ti
│ ├─Object_detect_pure_swin-ti-cycleagan
│ ├─Object_detect_pure_swin-ti-cyclegan
│ ├─Object_detect_pure_swin-ti-retinagan
│ ├─Object_detect_pure_swin-ti_10k
│ ├─Object_detect_pure_swin-ti_12k
│ ├─Object_detect_pure_swin-ti_14k
│ ├─Object_detect_pure_swin-ti_16k
│ ├─Object_detect_pure_swin-ti_18k
│ ├─Object_detect_pure_swin-ti_2000
│ ├─Object_detect_pure_swin-ti_20k
│ └─Object_detect_pure_swin-ti_4000
└─__pycache__
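
A minimal PyBullet sketch of how a robot URDF under ./Robot-FTC/ftc_robot/urdf could be loaded for simulation. The URDF file name below is a hypothetical placeholder; check that folder for the actual file.

import pybullet as p
import pybullet_data

# Connect to the physics server (use p.GUI for a visual window).
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)

# Load a ground plane shipped with pybullet_data.
p.loadURDF("plane.urdf")

# Load the robot model; "ftc_robot.urdf" is an assumed file name.
robot_id = p.loadURDF("Robot-FTC/ftc_robot/urdf/ftc_robot.urdf",
                      basePosition=[0, 0, 0], useFixedBase=True)

# Step the simulation for a short while.
for _ in range(240):
    p.stepSimulation()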

###########Usage
- ./Robot-FTC contains the code and model files for robots trained in the simulation environment. See ./Robot-FTC/README.md for more details.
- ./GAN contains the code and data for our DT-CycleGAN method, along with the RetinaGAN and CycleGAN baselines. See ./GAN/README.md for more details.
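
If the GAN training scripts visualize progress with visdom (the package is listed in the requirements above), start the visdom server before training and open http://localhost:8097 in a browser:

python -m visdom.server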

###########Contact

Yuzhong Chen - chenyuzhong211@gmail.com

Documentation

Attachment: ReadMe.txt (2.46 KB)