We propose SymbOmni, which addresses the continual-learning bottleneck in visual generation by introducing a Symbolic Concept Box and Verbalized Backpropagation. It achieves state-of-the-art results on AIGC benchmarks while reducing token usage by over 40%, outperforming mainstream closed-source models. The work received three positive reviews (scores of 4 out of 6) at CVPR 2026, and the revised manuscript is currently under review.
@article{li2025symbomni,
  title={{SymbOmni}: Evolving Agentic Omni Models via Symbolic Concept Learning},
  author={Liu, Jinxiu and Li, Jianru and Kuang, Tanqing and Liu, Xuanming and Mei, Kangfu and Wen, Yandong and Liu, Weiyang},
  year={2025}
}