The next generation of Internet services, such as the Metaverse, relies on mixed reality (MR) technology to provide immersive user experiences. However, the limited computation power of MR head-mounted devices (HMDs) hinders the deployment of such services. Therefore, we propose an efficient information sharing scheme based on full-duplex device-to-device (D2D) semantic communications to address this issue. Our approach enables users to avoid heavy and repetitive computational tasks, such as artificial intelligence-generated content (AIGC) in the view images of all MR users. Specifically, a user can transmit the generated content and semantic information extracted from their view image to nearby users, who can then use this information to obtain spatially matched computation results for their own view images. We analyze the performance of full-duplex D2D communications, including the achievable rate and bit error probability, using generalized small-scale fading models. To facilitate semantic information sharing among users, we design a contract-theoretic, AI-generated incentive mechanism. The proposed diffusion model generates the optimal contract design, outperforming two deep reinforcement learning algorithms, i.e., proximal policy optimization (PPO) and soft actor-critic (SAC). Our numerical analyses and experiments demonstrate the effectiveness of the proposed methods.
This repository hosts a demonstration of the semantic encoder and decoder algorithm as presented in the paper:
"AI-Generated Incentive Mechanism and Full-Duplex Semantic Communications for Information Sharing"
authored by Hongyang Du, Jiacheng Wang, Dusit Niyato, Jiawen Kang, Zehui Xiong, and Dong In Kim, and accepted by IEEE JSAC.
To create a new conda environment, execute the following command:
conda create --name sems python==3.7
Activate the created environment with:
conda activate sems
The following packages can be installed using pip:
pip install matplotlib==3.1.3
pip install torch
pip install opencv-python==4.1.2.30
pip install scipy
pip install yacs
pip install torchvision
pip install scikit-image
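After installation, a quick sanity check (not part of the repository) can confirm that the environment imports cleanly; the package list simply mirrors the pip commands above:

```python
# Optional sanity check: confirm the packages installed above import cleanly.
import matplotlib
import torch
import cv2          # opencv-python
import scipy
import yacs         # yacs may not expose a version string, so it is only imported
import torchvision
import skimage      # scikit-image

print("matplotlib  ", matplotlib.__version__)
print("torch       ", torch.__version__)
print("opencv      ", cv2.__version__)
print("scipy       ", scipy.__version__)
print("torchvision ", torchvision.__version__)
print("scikit-image", skimage.__version__)
```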
Download the checkpoint files by referring to:
SemSharing\jsr_code\checkpoints\googledown.txt
Run run.py to start the program.
In this demo, we consider two users, whose view images are:
After running the code, several results can be viewed in PyCharm:
For instance, the safe walk area calculated by the first user:
Semantic matching results of two view images:
Another way to show semantic matching results of two view images:
How the second user transforms the view image of the first user to match their own view image:
The safe walk area information that the second user obtains based on the semantic information shared by the first user:
Then, without performing the safe-walk-area detection task, the second user knows that the road ahead is safe.
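To make the sharing idea above concrete, the following is a minimal sketch, not the repository's implementation: it matches keypoints between the two view images with classical ORB features, estimates a homography, and warps the first user's safe-walk-area mask into the second user's view. The file names are placeholders.

```python
# Illustrative sketch only: warp user 1's safe-walk-area mask into user 2's view
# by matching the two view images and estimating a homography.
import cv2
import numpy as np

view1 = cv2.imread("user1_view.png", cv2.IMREAD_GRAYSCALE)       # placeholder path
view2 = cv2.imread("user2_view.png", cv2.IMREAD_GRAYSCALE)       # placeholder path
mask1 = cv2.imread("user1_safe_area.png", cv2.IMREAD_GRAYSCALE)  # result computed by user 1

# Detect and match local features (ORB + brute-force Hamming matcher).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(view1, None)
kp2, des2 = orb.detectAndCompute(view2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

# Estimate the homography that maps user 1's view onto user 2's view.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the shared safe-walk-area mask into user 2's view, so user 2 avoids recomputation.
h, w = view2.shape
mask_in_view2 = cv2.warpPerspective(mask1, H, (w, h))
cv2.imwrite("user2_safe_area_from_user1.png", mask_in_view2)
```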
Should our code assist in your research, please acknowledge our work by citing:
@article{du2023ai,
title={{AI}-generated incentive mechanism and full-duplex semantic communications for information sharing},
author={Du, Hongyang and Wang, Jiacheng and Niyato, Dusit and Kang, Jiawen and Xiong, Zehui and Kim, Dong In},
journal={IEEE Journal on Selected Areas in Communications},
year={2023},
publisher={IEEE}
}
As stated in our paper, this repository uses code from the following papers:
Please consider citing these papers if you use their code in your research.
For the AI-generated incentive part of the paper, please refer to our tutorial paper, "Beyond Deep Reinforcement Learning: A Tutorial on Generative Diffusion Models in Network Optimization," and its accompanying code.
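For readers who want a feel for how a diffusion model can output decision variables such as contract terms, below is a minimal, illustrative sketch of DDPM-style reverse sampling conditioned on an observed state. The network, dimensions, and conditioning are hypothetical and are not the contract-design model used in the paper.

```python
# Illustrative sketch only: a DDPM-style reverse-sampling loop that maps an
# observed state vector to contract terms. Architecture and dimensions are hypothetical.
import torch
import torch.nn as nn

STATE_DIM = 8      # hypothetical: features observed by the incentive designer
CONTRACT_DIM = 4   # hypothetical: e.g., reward / required-effort pairs
T = 50             # number of diffusion steps

class DenoiseNet(nn.Module):
    """Predicts the noise added to a contract vector, conditioned on state and step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CONTRACT_DIM + STATE_DIM + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, CONTRACT_DIM),
        )
    def forward(self, x_t, state, t):
        t_feat = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([x_t, state, t_feat], dim=-1))

@torch.no_grad()
def sample_contract(model, state, betas):
    """Reverse diffusion: start from Gaussian noise and denoise into contract terms."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(state.shape[0], CONTRACT_DIM)
    for t in reversed(range(T)):
        t_batch = torch.full((state.shape[0],), t)
        eps = model(x, state, t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

model = DenoiseNet()
betas = torch.linspace(1e-4, 0.02, T)
state = torch.randn(1, STATE_DIM)            # hypothetical observed environment state
print(sample_contract(model, state, betas))  # untrained network: output shown for shape only
```

An actual incentive design would train the denoising network so that the sampled contracts maximize the designer's utility; here the untrained network only demonstrates the sampling loop.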