Mingkun Xu
Ph.D., Principal Investigator
Email: xumingkun@gdiist.cn
Personal Profile
Dr. Mingkun Xu is the Principal Investigator of the Laboratory of Brain-Inspired Algorithm and Model. He received his Bachelor's degree from Xidian University in 2018 and his Ph.D. from Tsinghua University in 2023. From 2023 to 2025 he completed a postdoctoral fellowship at the Guangdong Institute of Intelligent Science and Technology. Dr. Xu's work centers on brain-inspired algorithms, advanced AI models, and AI-for-Science applications. He has published 30+ papers in leading venues, including Nature Communications, Science Robotics, and IEEE Transactions on Neural Networks and Learning Systems, as well as top AI conferences such as ICML, AAAI, CVPR, IJCAI, and EMNLP. He also holds several authorized national invention patents.
Laboratory of Brain-Inspired Algorithm and Model
The Laboratory of Brain-Inspired Algorithm and Model operates at the intersection of artificial intelligence, foundation models, and neuroscience. Our mission is to engineer the next generation of AI systems that combine biological interpretability with broad cognitive competence. Guided by neuroscientific insights and empowered by machine learning and high-performance computing, we investigate neural computation from the single-neuron level up to full cognitive architectures, building bottom-up frameworks endowed with reasoning, memory, continual learning, and autonomous adaptation. Through close interdisciplinary collaboration, we strive to uncover the fundamental principles of brain-inspired computation and translate them into transformative technologies and applications for robotics, autonomous driving, biomedicine, and beyond.
Current Core Research Directions include:
1. Brain-Inspired Algorithms
Leveraging the spatio-temporal dynamics of spiking neural networks, we devise low-power, scalable learning rules and computational models grounded in key neurobiological principles: memory consolidation, continual learning, dendritic non-linearity, homeostatic regulation, feature binding, and cognitive mapping. Our work emphasises synaptic plasticity, representation learning, meta-learning-driven rapid task adaptation, and activity-dependent structural reconfiguration. The goal is to build cross-modal, multi-task agents endowed with interpretability, transferability, and incremental evolution, thereby providing a biologically grounded algorithmic substrate for downstream applications and seamless integration with foundation models.
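The spiking dynamics described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, a standard building block of spiking neural networks: membrane potential integrates input, leaks toward rest, and emits a binary spike on crossing a threshold. This NumPy sketch is illustrative only; the function name and parameter values are assumptions, not the laboratory's actual models.

```python
import numpy as np

def lif_simulate(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return a binary spike train."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_t in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input current
        v += (dt / tau) * (-v) + i_t
        if v >= v_thresh:
            spikes[t] = 1     # emit a spike...
            v = v_reset       # ...and reset the membrane potential
    return spikes

rng = np.random.default_rng(0)
spikes = lif_simulate(rng.uniform(0.0, 0.3, size=100))
print(int(spikes.sum()), "spikes in 100 steps")
```

The binary, event-driven output is what makes such models attractive for low-power neuromorphic hardware: computation happens only when spikes occur.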
2. Deep Learning and Foundation Models
Our efforts centre on three intertwined axes: model architecture, parameter-efficient fine-tuning, and fast inference. Architecturally, we extend Transformer- and state-space-inspired designs toward unified, vision-to-multimodal foundation models. On the training side, we integrate lightweight adaptation techniques (e.g., LoRA) with retrieval-augmented strategies to enable cost-effective fine-tuning and continual learning. For deployment, we focus on KV-cache optimisation, low-bit quantisation, and edge-cloud co-inference to achieve low-latency, low-energy operation. Complementary studies in generative models, graph representation learning, and reinforcement-learning-driven agent frameworks endow our models with a closed loop of perception, memory, and decision making, forming a robust algorithmic pillar for brain-inspired applications.
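The lightweight adaptation idea behind LoRA, mentioned above, can be sketched in a few lines: the frozen pretrained weight matrix W is augmented with a trainable low-rank update B·A, so fine-tuning touches only a small fraction of the parameters. The class name and hyperparameters below are illustrative assumptions, not the laboratory's implementation.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer W plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, W, r=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                       # frozen pretrained weight (out, in)
        out_dim, in_dim = W.shape
        self.A = rng.normal(0.0, 0.01, size=(r, in_dim)) # trainable down-projection
        self.B = np.zeros((out_dim, r))                  # trainable up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x):
        # Effective weight is W + scale * B @ A; only A and B receive gradients
        return x @ (self.W + self.scale * self.B @ self.A).T

W = np.random.default_rng(1).normal(size=(8, 16))
layer = LoRALinear(W)
x = np.ones((2, 16))
# B is zero-initialised, so before adaptation the layer reproduces the base model
print(np.allclose(layer.forward(x), x @ W.T))  # → True
```

With rank r = 4 here, the trainable update holds r × (in + out) = 96 parameters versus 128 in the frozen base weight; at realistic model sizes this ratio drops to well under one percent, which is what makes such fine-tuning cost-effective.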
3. Brain-Inspired Applications and BI-for-Science Synergies
We translate the above algorithms and models into end-to-end pipelines for autonomous driving, collaborative robotics, and on-device online learning, markedly enhancing adaptability and energy efficiency in dynamic, uncertain environments. In parallel, we fuse brain-inspired methods with large-scale models to create task-oriented analytical tools for medical diagnosis, nucleic-acid detection, protein property prediction, and related AI-for-Science challenges, delivering breakthrough capabilities to the life-science and materials-science domains.
Selected Publications:
1. Xu M, Liu F, Hu Y, Li H, Wei Y, Zhong S, Pei J* and Deng L*. Adaptive Synaptic Scaling in Spiking Networks for Continual Learning and Enhanced Robustness[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024.
2. Xu M, Wu Y, Deng L, Liu F, Li G and Pei J*. Exploiting spiking dynamics with spatial-temporal feature normalization in graph learning. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), pages 3207-3213, 2021.
3. Yang Y#, Xu M#(Co-first author), Jia S, Wang B, Xu L, Wang X, Liu H, Liu Y, Guo Y, Wang L, Duan S, Liu K, Zhu M, Pei J, Duan W, Liu D, Li H*. A new opportunity for the emerging tellurium semiconductor: making resistive switching devices[J]. Nature Communications, 2021, 12(1): 1-12.
4. Wu Y#, Zhao R#, Zhu J#, Chen F#, Xu M#(Co-first author), Li G, Song S, Deng L, Wang G, Zheng H, Ma S, Pei J, Zhang Y, Zhao M, Shi L*. Brain-inspired global-local learning incorporated with neuromorphic computing[J]. Nature Communications, 2022, 13(1): 1-14.
5. Xu M*. Exploiting homeostatic synaptic modulation in spiking neural networks for semi-supervised graph learning[C]//Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM). 2023: 5193-5195.
6. Chen Y, Song A, Yin H, Zhong S, Chen F, Xu Q, Wang S*, Xu M*(Corresponding author). Multi-view incremental learning with structured Hebbian plasticity for enhanced fusion efficiency[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2025, 39(2): 1265-1273.
7. Wang Y, Fang X, Yin H, Li D, Li G, Xu Q*, Xu Y, Zhong S, Xu M*(Corresponding author). BIG-FUSION: Brain-Inspired Global-Local Context Fusion Framework for Multimodal Emotion Recognition in Conversations[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2025, 39(2): 1574-1582.
8. Wang Y, Tan S, Shen J, Xu Y, Song H, Xu Q, Tiwari P, Xu M*(Corresponding author). Enhancing Graph Contrastive Learning for Protein Graphs from Perspective of Invariance[C]// Forty-Second International Conference on Machine Learning (ICML). (Conference Track). 2025.
9. Xu M, Zheng H, Pei J, Deng L*. A Unified Structured Framework for AGI: Bridging Cognition and Neuromorphic Computing [C]//Artificial General Intelligence: 16th International Conference, AGI 2023, Stockholm, Sweden, June 16–19, 2023, Proceedings. Cham: Springer Nature Switzerland, 2023: 345-356.
10. Wang Y, Li D, Shen J, Xu Y, Zhong S, Xu M*(Corresponding author). ClingTP: Curriculum Learning based Multi-style Title Prefix Generation[C]//ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025: 1-5.
11. Xu M, Liu F, Pei J*. Endowing spiking neural networks with homeostatic adaptivity for APS-DVS bimodal scenarios[C]//Companion Publication of the 2022 International Conference on Multimodal Interaction (ICMI). 2022: 12-17.
12. Ran X#, Xu M#(Co-first author), Mei L, Xu Q, Liu Q*. Detecting out-of-distribution samples via variational auto-encoder with reliable uncertainty estimation[J]. Neural Networks, 2022, 145: 199-208.
13. Liu F#, Xu M#(Co-first author), Li G, Pei J, Shi L, Zhao R*. Adversarial symmetric GANs: Bridging adversarial samples and adversarial networks[J]. Neural Networks, 2021, 133: 148-156.
14. Wang Y#, Zhang Z#, Xu M#(Co-first author), Yang Y, Ma M, Li H, Pei J, Shi L*. Self-doping memristors with equivalently synaptic ion dynamics for neuromorphic computing[J]. ACS applied materials & interfaces, 2019, 11(27): 24230-24240.
15. Song A, Chen Y, Wang Y, Zhong S, Xu M*(Corresponding author). Orchestrating Plasticity and Stability: A Continual Knowledge Graph Embedding Framework with Bio-Inspired Dual-Mask Mechanism[C]//The 16th Asian Conference on Machine Learning (Conference Track). 2024.
16. Zhong S*, Su L, Xu M, Loke D, Yu B, Zhang Y, Zhao R*. Recent Advances in Artificial Sensory Neurons: Biological Fundamentals, Devices, Applications, and Challenges[J]. Nano-Micro Letters, 2025, 17(1): 61.
17. Liu Z, Chen J, Xu M, Ho S, Wei Y*, Ho HP, Yong KT*. Engineered multi-domain lipid nanoparticles for targeted delivery[J]. Chemical Society Reviews, 2025.
18. Yu F, Wu Y, Ma S, Xu M, Li H, Qu H, Song C, Wang T, Zhao R, Shi L*. NeuroGPR: Brain-inspired General Place Recognition with Neuromorphic Computing [J]. Science Robotics, 2023, 8(78): eabm6996.
19. Zhao R, Yang Z, Zheng H, Wu Y, Liu F, Wu Z, Li L, Chen F, Song S, Zhu J, Zhang W, Huang H, Xu M, Sheng K, Yin Q, Pei J, Li G, Zhang Y, Zhao M, Shi L*. A framework for the general design and computation of hybrid neural networks[J]. Nature Communications, 2022, 13(1): 1-12.
Copyright © 2021 Guangdong Institute of Intelligence Science and Technology
粵ICP備2021109615號