About

Short Bio

Chao Wang

Ph.D. candidate

School of Artificial Intelligence

Xidian University, China

Advisors: Kai Wu, Jing Liu, Licheng Jiao

Email: xiaofengxd@126.com [Google Scholar] [ResearchGate Profile] [CSDN Profile] [GitHub Profile] [AMiner]

Research Topics: Multitasking Optimization and Learning, Evolutionary Computation, and Complex Networks (Graphs).

My Group: EvoIGroup

Publications and Preprints

Preprints

Network Collaborator: Knowledge Transfer Between Network Reconstruction and Community Detection from Dynamics, arXiv preprint arXiv:2201.01134, 2023. Submitted to a journal. [paper] [code]

Nature-inspired Optimization: A Comprehensive Survey. Submitted to Proceedings of the IEEE.

Pareto Automatic Multi-Task Learning on Graphs, 2023. Submitted to IEEE TPAMI.

Knowledge-assisted Evolutionary Graph Neural Architecture Search, 2023. Submitted to IEEE CIM.

Automatic Graph Topology-Aware Transformer, 2023. Submitted to IEEE TNNLS.

A Match Made in Consistency Heaven: When Large Language Models Meet Evolutionary Algorithms, 2024. Submitted to a journal. [paper]

Journal Papers

J. Zhao, L. Jiao*, C. Wang, X. Liu, F. Liu, L. Li, S. Yang, “GeoFormer: A Geometric Representation Transformer for Change Detection,” in IEEE Transactions on Geoscience and Remote Sensing (IF: 8.2, JCR I, CCF B), accepted, 2023. [paper]

H. Zhao*, X. Ning, X. Liu, C. Wang, J. Liu, “What Makes Evolutionary Multi-task Optimization Better: A Comprehensive Survey,” in Applied Soft Computing (IF: 8.263, JCR I), accepted, 2023. [paper]

C. Wang, L. Jiao*, J. Zhao, L. Li, X. Liu, F. Liu, S. Yang, “Bi-level Multi-objective Evolutionary Learning: A Case Study on Multi-task Graph Neural Topology Search,” in IEEE Transactions on Evolutionary Computation (IF: 16.497, JCR I, CCF B), accepted, 2022. [paper] [code]

C. Wang, J. Zhao, L. Li, L. Jiao*, J. Liu, K. Wu, “A Multi-Transformation Evolutionary Framework for Influence Maximization in Social Networks,” in IEEE Computational Intelligence Magazine (IF: 9.809, JCR I), vol. 18, no. 1, pp. 52-67, Feb. 2023. [paper] [code]

C. Wang, K. Wu*, J. Liu, “Evolutionary Multitasking AUC Optimization [Research Frontier],” in IEEE Computational Intelligence Magazine (IF: 9.809, JCR I), vol. 17, no. 2, pp. 67-82, May 2022. [paper] [code]

C. Wang, J. Liu*, K. Wu, Z. Wu, “Solving Multitask Optimization Problems With Adaptive Knowledge Transfer via Anomaly Detection,” in IEEE Transactions on Evolutionary Computation (IF: 16.497, JCR I, CCF B), vol. 26, no. 2, pp. 304-318, April 2022. [paper] [code]

K. Wu, C. Wang*, J. Liu, “Evolutionary Multitasking Multilayer Network Reconstruction,” in IEEE Transactions on Cybernetics (IF: 19.118, JCR I, CCF B), 2022. [paper] [code]

C. Wang, J. Liu*, K. Wu, C. Ying, “Learning Large-scale Fuzzy Cognitive Maps Using an Evolutionary Many-task Algorithm,” Applied Soft Computing (IF: 8.263, JCR I), vol. 108, 2021, 107441. [paper] [code]

K. Wu, C. Wang, J. Liu*, “Multilayer Nonlinear Dynamical Network Reconstruction from Streaming Data,” SCIENTIA SINICA Technologica (in Chinese), vol. 52, no. 6, pp. 971-982, 2022. [paper] [code]

C. Ying, J. Liu, K. Wu, C. Wang, “A Multiobjective Evolutionary Approach for Solving Large-Scale Network Reconstruction Problems via Logistic Principal Component Analysis,” in IEEE Transactions on Cybernetics (IF: 19.118, JCR I, CCF B), 2022. [paper] [code]

Conference Papers

K. Wu, J. Liu*, C. Wang, K. Yuan, “Pareto Optimization for Influence Maximization in Social Networks,” in Evolutionary Multi-Criterion Optimization (EMO 2021), 2021. [paper] [code]

Research Topics

My main research directions are multitasking/transfer optimization and learning, natural evolution strategies, and complex networks (graphs).

Evolutionary Multitasking Optimization and Transfer Optimization [1]-[2] [Summary]

Evolutionary multitasking optimization is a paradigm for solving multiple self-contained optimization tasks simultaneously. Inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics, the key motivation behind multitask optimization is that if optimization tasks are related to each other (in terms of their optimal solutions, or the general characteristics of their function landscapes), then search progress on one task can be transferred to substantially speed up the search on the others. Notably, the success of the paradigm is not limited to one-way knowledge transfer from simpler to more complex tasks: in attempting to solve a harder task intentionally, several simpler ones may be solved unintentionally along the way. In recent years there has also been growing interest in evolutionary transfer optimization, a paradigm that integrates evolutionary solvers with knowledge learning and transfer across related domains to achieve better optimization efficiency and performance.
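To make the mechanism concrete, here is a minimal sketch of a multifactorial EA in the spirit of [1]. The two quadratic tasks, the random mating probability (rmp), and all other settings are illustrative choices rather than the configuration of any paper above; a full MFEA also uses factorial-rank-based selection instead of the simple per-task elitism shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two related minimization tasks on a unified search space [0, 1]^d.
tasks = [
    lambda x: float(np.sum((x - 0.3) ** 2)),  # task 0: optimum at (0.3, ..., 0.3)
    lambda x: float(np.sum((x - 0.4) ** 2)),  # task 1: optimum at (0.4, ..., 0.4)
]
d, pop_size, rmp, gens = 10, 40, 0.3, 100     # rmp: random mating probability

pop = rng.random((pop_size, d))
skill = rng.integers(len(tasks), size=pop_size)            # skill factor per individual
fit = np.array([tasks[t](x) for x, t in zip(pop, skill)])

for _ in range(gens):
    kids, kid_skill = [], []
    while len(kids) < pop_size:
        i, j = rng.integers(len(pop), size=2)
        if skill[i] == skill[j] or rng.random() < rmp:
            # Crossover (possibly across tasks): knowledge transfer happens here.
            a = rng.random(d)
            kids += [a * pop[i] + (1 - a) * pop[j],
                     a * pop[j] + (1 - a) * pop[i]]
            # Each child imitates the skill factor of a randomly chosen parent.
            kid_skill += list(rng.choice([skill[i], skill[j]], size=2))
        else:
            # Otherwise, mutate within each parent's own task.
            kids += [np.clip(pop[i] + 0.1 * rng.standard_normal(d), 0, 1),
                     np.clip(pop[j] + 0.1 * rng.standard_normal(d), 0, 1)]
            kid_skill += [skill[i], skill[j]]
    all_pop = np.vstack([pop, np.asarray(kids)])
    all_skill = np.concatenate([skill, np.asarray(kid_skill)])
    all_fit = np.array([tasks[t](x) for x, t in zip(all_pop, all_skill)])
    # Elitist selection per task; each individual is evaluated only on its own task.
    keep = []
    for t in range(len(tasks)):
        idx = np.flatnonzero(all_skill == t)
        keep += list(idx[np.argsort(all_fit[idx])][: pop_size // len(tasks)])
    pop, skill, fit = all_pop[keep], all_skill[keep], all_fit[keep]

for t in range(len(tasks)):
    print(f"task {t}: best f = {fit[skill == t].min():.5f}")
```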

Multi-task Learning as Multi-objective Optimization [3] [Summary]

In multi-task learning, multiple tasks are solved jointly, sharing inductive bias between them. Multi-task learning is inherently a multi-objective problem because different tasks may conflict, necessitating a trade-off. We explicitly cast multi-task learning as multi-objective optimization, with the overall objective of finding a set of Pareto-optimal solutions.
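As a toy illustration of the gradient-based view in [3], the sketch below applies the closed-form two-task min-norm combination (the two-objective special case of MGDA) to a pair of conflicting quadratic losses. The losses, learning rate, and iteration count are invented for illustration.

```python
import numpy as np

def loss_grads(theta):
    # Two conflicting quadratic "task losses" with different optima.
    g1 = 2 * (theta - np.array([1.0, 0.0]))  # grad of ||theta - (1, 0)||^2
    g2 = 2 * (theta - np.array([0.0, 1.0]))  # grad of ||theta - (0, 1)||^2
    return g1, g2

theta, lr = np.zeros(2), 0.1
for _ in range(200):
    g1, g2 = loss_grads(theta)
    # Min-norm coefficient: alpha = clip(((g2 - g1) . g2) / ||g1 - g2||^2, 0, 1),
    # the minimizer of ||alpha * g1 + (1 - alpha) * g2||^2 over alpha in [0, 1].
    diff = g1 - g2
    denom = float(diff @ diff)
    alpha = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)) if denom > 1e-12 else 0.5
    d = alpha * g1 + (1 - alpha) * g2  # common descent direction for both tasks
    theta -= lr * d

print(theta)  # converges to a Pareto-stationary point between the two task optima
```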

Automatic Graph Representation Learning and Multi-task Learning [4] [Summary]

Graph machine learning has been extensively studied in both academia and industry. However, as the literature on graph learning booms with a vast number of emerging methods and techniques, it becomes increasingly difficult to manually design the optimal machine learning algorithm for different graph-related tasks. To tackle this challenge, automated graph machine learning, which aims to discover the best hyper-parameter and neural architecture configuration for different graph tasks/data without manual design, is attracting increasing attention from the research community. We extensively discuss automated graph machine learning approaches, covering hyper-parameter optimization (HPO) and neural architecture search (NAS) for multi-task graph machine learning.
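As a deliberately simplified sketch of the search loop underlying graph HPO/NAS [4], the code below runs random search over a joint hyper-parameter/architecture space. The search space and the train_and_eval proxy are hypothetical stand-ins; in practice the evaluator would train a GNN with the sampled configuration and return its validation score.

```python
import random

# Hypothetical joint architecture + hyper-parameter space for a GNN.
search_space = {
    "conv":    ["gcn", "gat", "sage", "gin"],  # message-passing operator
    "layers":  [2, 3, 4],
    "hidden":  [64, 128, 256],
    "lr":      [1e-2, 5e-3, 1e-3],
    "dropout": [0.0, 0.3, 0.5],
}

def sample_config(rng):
    return {k: rng.choice(v) for k, v in search_space.items()}

def train_and_eval(cfg):
    # Stand-in proxy score; replace with real GNN training + validation accuracy.
    score = 0.6 + 0.1 * (cfg["conv"] == "gat") + 0.05 * (cfg["layers"] == 3)
    return score - 0.02 * abs(cfg["dropout"] - 0.3)

rng = random.Random(0)
best_cfg, best_score = None, float("-inf")
for _ in range(50):  # search budget: 50 trials
    cfg = sample_config(rng)
    score = train_and_eval(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score

print(best_score, best_cfg)
```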

Influence Maximization with Reinforcement Learning [5] [Summary]

Influence Maximization (IM), which selects a set of k users (called the seed set) from a social network to maximize the expected number of influenced users (called the influence spread), is a key algorithmic problem in social influence analysis. Owing to its immense application potential and enormous technical challenges, IM has been extensively studied over the past decade. We focus on the following key aspects: (1) proxy models, and (2) a rigorous theoretical treatment of IM algorithms with reinforcement learning.
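For concreteness, here is a minimal sketch of the classic greedy baseline under the independent cascade (IC) model, which the learning-based approaches surveyed in [5] aim to improve upon. The toy graph, activation probabilities, and budget are invented for illustration.

```python
import random

# Toy directed graph: node -> list of (neighbor, activation probability).
graph = {
    0: [(1, 0.4), (2, 0.4)],
    1: [(3, 0.3)],
    2: [(3, 0.3), (4, 0.5)],
    3: [(5, 0.2)],
    4: [(5, 0.5)],
    5: [],
}

def ic_spread(seeds, rng):
    """One Monte Carlo run of independent-cascade diffusion from `seeds`."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph[u]:
            if v not in active and rng.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def estimate_spread(seeds, rng, runs=1000):
    return sum(ic_spread(seeds, rng) for _ in range(runs)) / runs

rng = random.Random(0)
k, seeds = 2, set()
for _ in range(k):
    # Greedily add the node with the largest estimated marginal gain.
    gains = {v: estimate_spread(seeds | {v}, rng) for v in graph if v not in seeds}
    seeds.add(max(gains, key=gains.get))

print(seeds, estimate_spread(seeds, rng))
```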

Natural Evolution Strategy and Learning to Optimize [6,7] [Summary]

Natural evolution strategies (NES) are a family of numerical optimization algorithms for black-box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) parameters of a search distribution by following the natural gradient towards higher expected fitness. Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming to reduce the laborious iterations of hand engineering. It automates the design of an optimization method based on its performance on a set of training problems.
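A minimal NES-style update for an isotropic Gaussian search distribution [6], with rank-based fitness shaping, might look like the sketch below. The sphere objective and all hyper-parameters are illustrative; a full NES would additionally adapt sigma and precondition the update with the Fisher information to obtain the natural gradient.

```python
import numpy as np

def f(x):
    return -np.sum(x ** 2)  # black-box objective (sphere); NES maximizes, so negate

rng = np.random.default_rng(0)
d, n, sigma, lr = 5, 50, 0.1, 0.05
mu = rng.standard_normal(d)  # mean of the Gaussian search distribution

for _ in range(300):
    eps = rng.standard_normal((n, d))                    # sampled perturbations
    fitness = np.array([f(mu + sigma * e) for e in eps])
    # Rank-based fitness shaping makes the update invariant to fitness scaling.
    ranks = fitness.argsort().argsort()
    shaped = ranks / (n - 1) - 0.5                       # in [-0.5, 0.5], best = +0.5
    # Monte Carlo estimate of the search gradient of expected fitness w.r.t. mu.
    grad = (shaped[:, None] * eps).sum(axis=0) / (n * sigma)
    mu += lr * grad

print(f(mu))  # approaches 0 as mu approaches the optimum
```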

Evolutionary Algorithms and Large Language Models [8] [Summary]

We study the coupling of evolutionary algorithms and large language models, for example using language models as variation operators via few-shot prompting [8].
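As a sketch of the idea behind language model crossover [8]: parent solutions are concatenated into a few-shot prompt, and the model's continuation is parsed as a child that implicitly recombines the parents. The llm_complete function below is a hypothetical stand-in so the example runs end to end; in practice it would call an actual LLM completion API.

```python
import random

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM completion call: it merely recombines
    # tokens from the prompt so this sketch is self-contained and runnable.
    tokens = prompt.split()
    random.shuffle(tokens)
    return " ".join(tokens[: max(1, len(tokens) // 2)])

def lm_crossover(parents: list[str]) -> str:
    # Few-shot prompt: list the parents, one per line; the model's continuation
    # is read back as a child solution.
    prompt = "\n".join(parents) + "\n"
    return llm_complete(prompt).strip().splitlines()[0]

print(lm_crossover(["def add(a, b): return a + b",
                    "def add3(a, b, c): return a + b + c"]))
```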

[1] Gupta A, Ong Y S, Feng L. Multifactorial evolution: toward evolutionary multitasking[J]. IEEE Transactions on Evolutionary Computation, 2015, 20(3): 343-357.

[2] Tan K C, Feng L, Jiang M. Evolutionary Transfer Optimization - A New Frontier in Evolutionary Computation Research[J]. IEEE Computational Intelligence Magazine, 2021, 16(1): 22-33.

[3] Sener O, Koltun V. Multi-task learning as multi-objective optimization[J]. Advances in neural information processing systems, 2018, 31.

[4] Wang X, Zhang Z, Zhu W. Automated Graph Machine Learning: Approaches, Libraries and Directions[J]. arXiv preprint arXiv:2201.01288, 2022.

[5] Li Y, Fan J, Wang Y, et al. Influence maximization on social graphs: A survey[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(10): 1852-1872.

[6] Wierstra D, Schaul T, Glasmachers T, et al. Natural evolution strategies[J]. The Journal of Machine Learning Research, 2014, 15(1): 949-980.

[7] Chen T, Chen X, Chen W, et al. Learning to optimize: A primer and a benchmark[J]. arXiv preprint arXiv:2103.12828, 2021.

[8] Meyerson E, Nelson M J, Bradley H, Moradi A, Hoover A K, Lehman J. Language model crossover: Variation through few-shot prompting[J]. arXiv preprint arXiv:2302.12170, 2023.

This Site

This blog mainly records and shares problems I have encountered in my scientific research.

Posts

Posts are in one of the following statuses.

Status     Meaning
Completed  This post is considered complete, but I might edit it when I come up with something new.
Writing    This post is being actively edited.
Paused     This post is of low priority; I will come back to it later.
Archived   This post is outdated and I probably won’t update it anymore.

Quick Links

Conferences for EC: FOGA, GECCO, PPSN, CEC

IEEE CIS IEEE CIM IEEE TEC IEEE TNNLS IEEE TFS IEEE TAI IEEE TETCI IEEE TCYB IEEE TSMC

JMLR AIJ ECJ ACM TELO SWEVO ARTL SCIS

IEEE CS IEEE TPAMI IEEE TKDE IEEE TPDS Proc. IEEE

CCF

NMI NC PNAS Nature Science

ArXiv-CS-Neural-and-Evolutionary-Computing