Postdoctoral researcher (key-funded research track) at the Faculty of Information Science and Engineering, Ocean University of China. He received his Ph.D. from the School of Computer and Communication Engineering, University of Science and Technology Beijing, advised by Prof. Rui Wang. From June 2018 to July 2019 he worked in the Infrastructure Department of Transwarp Technology (Shanghai) Co., Ltd., developing a distributed database platform that integrates multiple heterogeneous distributed computing and storage engines to accelerate SQL processing. In July 2023 he joined Prof. Yanwei Yu's group at Ocean University of China. His research interests include edge computing, edge intelligence, emerging network architectures such as information-centric networking, and Age of Information. He has published more than 20 papers in international journals and conferences such as ToN, IoTJ, IJCAI, and CIKM, and maintains three open-source community projects. He has led several funded projects, including the China Postdoctoral Science Foundation Special Fund, the China Postdoctoral Science Foundation General Fund, and the Shandong Province Postdoctoral Innovation Project. He serves as an executive committee member of the CCF Technical Committee on Services Computing and as a reviewer for journals including IEEE Internet of Things Journal and The Journal of Supercomputing.
Temporal motifs are compact subgraph patterns that recur frequently within a sequence of timestamps. They reveal implicit insights in graph data and guide informed decision-making. However, existing exact temporal motif counting methods suffer from high time complexity, are often inapplicable to motifs involving four nodes, and struggle to scale to large temporal graphs. In this paper, we propose a novel exact counting framework tailored to 4-node, 3-edge and 4-edge single-interaction temporal motifs whose time window size is constrained to a fixed interval. To speed up counting, we begin by categorizing all 4-node temporal motifs based on their structural characteristics. We then present three fast and exact sub-algorithms, each dedicated to counting the motifs within one category. To further expedite the process, we implement a series of simple yet highly effective counters. Our algorithm uses these counters to identify and record all temporal motif instances based on edge information and the interrelationships among edges, significantly improving counting efficiency, especially on large-scale temporal graphs. Extensive experiments on 14 large-scale real-world temporal graphs demonstrate the efficiency of our approach: it significantly outperforms all state-of-the-art baselines and achieves a remarkable speedup of up to 25,816-fold.
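For intuition only, the sketch below counts a far simpler window-constrained pattern (temporal 2-paths) using per-node timestamp lists and binary search; the function name and interface are hypothetical, and it does not reproduce the paper's category-specific 4-node algorithms.

```python
from collections import defaultdict
from bisect import bisect_right

def count_temporal_two_paths(edges, delta):
    """Count ordered temporal 2-paths u -> v -> w whose two edges satisfy
    0 < t2 - t1 <= delta.

    `edges` is an iterable of (src, dst, timestamp) tuples. This toy counter
    only illustrates the idea of incrementing lightweight counters under a
    time-window constraint; it is not the paper's 4-node counting framework.
    """
    # For every node, collect the sorted timestamps of its outgoing edges.
    out_by_src = defaultdict(list)
    for u, _, t in edges:
        out_by_src[u].append(t)
    for ts in out_by_src.values():
        ts.sort()

    total = 0
    for _, v, t1 in edges:                  # first edge u -> v at time t1
        ts = out_by_src.get(v, [])
        lo = bisect_right(ts, t1)           # second edge strictly later than t1 ...
        hi = bisect_right(ts, t1 + delta)   # ... and no later than t1 + delta
        total += hi - lo
    return total

# Usage sketch on a tiny timestamped edge list with a window of 10.
print(count_temporal_two_paths([(1, 2, 5), (2, 3, 9), (2, 4, 20)], delta=10))
```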
@article{vldbj2025mc,title={Efficiently Counting Four-Node Motifs in Large-Scale Temporal Graphs},author={Zhang#, Zhihao and Qi#, Jianpeng and Cao, Lei and Dong, Junyu and Yu, Yanwei},journal={The VLDB Journal},volume={34},number={44},pages={1-27},year={2025},month=may,publisher={Springer},doi={10.1007/s00778-025-00926-8},url={https://link.springer.com/article/10.1007/s00778-025-00926-8},}
@inproceedings{motto2024cikm,author={Li#, Jiantao and Qi#, Jianpeng and Huang, Yueling and Cao, Lei and Yu, Yanwei and Dong, Junyu},booktitle={Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM)},title={MoTTo: Scalable motif counting with time-aware topology constraint for large-scale temporal graphs},year={2024},}
Edge computing scenarios call for modeling dedicated features and the functions of heterogeneous devices, as well as integrating multiple complex scenarios with diverse objectives and frequent interactions. However, existing platforms model the device as a whole and ignore the independence between functional components, which limits the scenarios they can support. We propose an open-source simulator named EasiEI. EasiEI addresses the need for higher-level feature replaceability and independence when modeling complex edge scenarios through independent functional component-level modeling and a microkernel architecture. This approach enables users to assemble independent functional components in a plug-and-play manner for heterogeneous devices or different application requirements. EasiEI is fully compatible with all existing built-in modules in NS-3 (a widely used discrete-event network simulator). To verify the flexibility and extensibility of EasiEI, we implement several centralized and decentralized computing paradigm cases step by step. These cases reproduce and simulate the performance state of various real devices in real time, meeting the requirements for verifying edge computing ideas such as distributed task scheduling. Results show that the simulations reflect real-world characteristics well and that complex environments can be constructed flexibly.
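EasiEI itself is built on NS-3 in C++; purely to illustrate the component-level, plug-and-play modeling idea described above, here is a minimal Python sketch with hypothetical class and method names (not EasiEI's actual API).

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """A hypothetical independent functional component (e.g., CPU, NIC, scheduler)."""

    @abstractmethod
    def on_event(self, event):
        ...

class Device:
    """A hypothetical device assembled from interchangeable components, mirroring
    component-level (rather than whole-device) modeling."""

    def __init__(self):
        self._components = {}

    def plug(self, name, component: Component):
        # Components can be swapped per scenario without touching the device model.
        self._components[name] = component
        return self

    def dispatch(self, name, event):
        return self._components[name].on_event(event)

# Usage sketch: assemble a heterogeneous edge node from independent parts.
class FifoScheduler(Component):
    def __init__(self):
        self.queue = []

    def on_event(self, task):
        self.queue.append(task)
        return self.queue.pop(0)   # serve tasks in arrival order

edge_node = Device().plug("scheduler", FifoScheduler())
print(edge_node.dispatch("scheduler", {"task": "inference", "size_kb": 64}))
```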
@article{easiei2024iotj,title={EasiEI: A simulator to flexibly modeling complex edge computing environments},author={Su#, Xiao and Qi#, Jianpeng and Wang, Jiahao and Wang, Rui},journal={IEEE Internet of Things Journal},volume={11},pages={1558-1571},number={1},year={2024},month=jun,publisher={IEEE},doi={10.1109/JIOT.2023.3289870},url={https://ieeexplore.ieee.org/document/10164279},google_scholar_id={qUcmZB5y_30C},}
Despite placing services and computing resources at the edge of the network for ultra-low latency, we still face the challenge of centralized scheduling costs, including delays from additional request forwarding and resource selection. To address this challenge, we propose SmartBuoy, a new computing paradigm. Our approach starts with a service coverage concept that assumes users within the coverage have high access availability. To enable users to perceive service status, we design a distributed metric table that synchronizes service status periodically and distributively. We propose coverage indicator updating principles to make the updating process more effective. We then implement two distributed methods, SmartBuoy-Time and SmartBuoy-Reliability, that enable users to perceive service capability directly and immediately. To determine the metric table update window size, we provide an analysis method based on user access patterns and offer a theoretical upper bound in a dynamic environment, making SmartBuoy easy to use. Finally, we implement the proposed methods distributively on an open-source edge computing simulator. Experiments on a real-world network topology dataset demonstrate the efficiency of SmartBuoy in reducing delays and improving the success rate.
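As a rough illustration of the distributed metric table idea (rows advertised by in-coverage nodes, refreshed within an update window, and queried directly by the user), the following Python sketch uses hypothetical names and a simplistic freshness rule rather than SmartBuoy's actual protocol.

```python
import time

class MetricTable:
    """A hypothetical per-user metric table: each row caches the service status
    (e.g., estimated response time in ms) advertised by an in-coverage node.
    Rows older than the update window are treated as stale."""

    def __init__(self, window_s):
        self.window_s = window_s
        self.rows = {}                        # node_id -> (metric, last_update)

    def update(self, node_id, metric, now=None):
        self.rows[node_id] = (metric, now if now is not None else time.time())

    def best_node(self, now=None):
        now = now if now is not None else time.time()
        fresh = {n: m for n, (m, ts) in self.rows.items()
                 if now - ts <= self.window_s}
        return min(fresh, key=fresh.get) if fresh else None   # lowest delay wins

# Usage sketch: the user perceives service capability directly from its table.
table = MetricTable(window_s=5.0)
table.update("node-a", metric=12.0)
table.update("node-b", metric=7.5)
print(table.best_node())   # -> "node-b"
```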
@article{smartbuoy2024tnet,author={Qi, Jianpeng and Su, Xiao and Wang, Rui},title={Towards distributively build time-sensitive-service coverage in compute first networking},journal={IEEE/ACM Transactions on Networking},volume={32},publisher={IEEE},number={1},pages={582-597},doi={10.1109/TNET.2023.3289830},year={2024},url={https://ieeexplore.ieee.org/document/10172050/},google_scholar_id={hC7cP41nSMkC},}
Named data networking (NDN) constructs a network by names, providing a flexible and decentralized way to manage resources within the edge computing continuum. This paper aims to answer the question, “Given a function with its parameters and metadata, how do we select the executor in a distributed manner and obtain the result in NDN?” To answer it, we design R2, which involves the following stages. First, we design a name structure that includes data names, function names, and other function parameters. Second, we develop a two-phase mechanism: in the first phase, the function request from a client first reaches the data source and retrieves the metadata; the best node is then selected while the metadata is returned to the client. In the second phase, the chosen node directly retrieves the data, executes the function, and delivers the result to the client. Furthermore, we propose a stop condition that intelligently reduces the processing time of the first phase, together with a simple proof and a range analysis. Simulations confirm that R2 outperforms current solutions in terms of resource allocation, especially when the data volume and the function complexity are high. In our experiments, when the data size is 100 KiB and the function complexity is O(n²), the speedup ratio is 4.61. To further evaluate R2, we also implement a general intermediate data processing logic named “Bolt” at the application level in ndnSIM. We believe that R2 will help researchers and developers verify their ideas smoothly.
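To make the two-phase flow concrete, here is a minimal Python sketch; the node attributes, scoring rule, and helper names are illustrative assumptions, not R2's actual selection logic or ndnSIM code.

```python
def phase_one(path_nodes, data_size_kb, complexity):
    """Phase 1 (sketch): as the request travels toward the data source, each
    on-path node scores itself; the best-scoring node is selected while the
    metadata is returned to the client."""
    return min(
        path_nodes,
        key=lambda n: n["cpu_ms_per_kb"] * complexity * data_size_kb
                      + n["hops_to_data"] * n["link_ms"],
    )

def fetch_from_source(executor, data_size_kb):
    # Placeholder retrieval: in ndnSIM this would be expressed as Interests
    # for the named data; here we just fabricate a payload of the right size.
    return bytes(data_size_kb * 1024)

def phase_two(executor, data_size_kb, func):
    """Phase 2 (sketch): the chosen node retrieves the data directly, runs the
    function, and returns the result to the client."""
    data = fetch_from_source(executor, data_size_kb)
    return func(data)

# Usage sketch with made-up on-path nodes.
path = [
    {"name": "edge-1", "cpu_ms_per_kb": 0.4, "hops_to_data": 3, "link_ms": 2.0},
    {"name": "edge-2", "cpu_ms_per_kb": 0.9, "hops_to_data": 1, "link_ms": 2.0},
]
chosen = phase_one(path, data_size_kb=100, complexity=2)
print(chosen["name"], len(phase_two(chosen, 100, lambda d: d[:16])))
```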
@article{r22023tnet,author={Qi, Jianpeng and Wang, Rui},title={R2: A distributed remote function execution mechanism with built-in metadata},journal={IEEE/ACM Transactions on Networking},publisher={IEEE},volume={31},year={2023},number={2},pages={710-723},doi={10.1109/TNET.2022.3198467},url={https://doi.org/10.1109/TNET.2022.3198467},google_scholar_id={aqlVkmm33-oC},}
Computation- and/or communication-intensive collaborative services composed of several distributed tasks/components, such as services in the Internet of Things, are everywhere nowadays. These services are usually consumed by users at the Internet edge, which makes cloud computing struggle with high end-to-end latency. Edge computing, which pushes resources to the edge, can satisfy these low-latency goals. However, in real scenarios, especially in dynamic edge computing networks, resources such as computing capacity, bandwidth, and nodes change over time. Meanwhile, the data packets (or flows) exchanged among the collaborative tasks/components of a service may not be conserved either. These characteristics make service reliability hard to guarantee and render existing reliability evaluation methods inaccurate. To study the effect of distributed and collaborative service deployment strategies under this background, we propose a reliability evaluation method (REMR). We first find the solution sets that can meet the time constraints. Then, we calculate the reliability of the service supported by those solution sets based on the principle of inclusion–exclusion, using the distributions of available transmission bandwidth and computing resources. Finally, we provide an illustrative example with several real-world data sets to make REMR easy to follow. To validate REMR, we also propose and implement a Monte Carlo simulation method. Experiments show that the reliability calculated by REMR closely matches the simulation results, and that both latency and jitter remain at a low level.
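The Monte Carlo cross-check can be sketched in a few lines. The distributions and the latency model below are illustrative assumptions (a single two-hop, one-task deployment), not the deployment model analyzed in the REMR paper; they only show how a time-constrained reliability figure can be estimated by sampling.

```python
import random

def monte_carlo_reliability(deadline_ms, n_trials=100_000, seed=0):
    """Estimate the probability that a toy service deployment meets its deadline
    when available bandwidth and computing resources fluctuate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        bw_mbps = rng.uniform(20, 100)        # available bandwidth on the path
        cpu_ghz = rng.uniform(1.0, 3.0)       # available computing resource
        transfer_ms = 8.0 * 512 / bw_mbps     # ship a 512 KB flow over the path
        compute_ms = 150.0 / cpu_ghz          # 150 M cycles of work on the node
        if transfer_ms + compute_ms <= deadline_ms:
            hits += 1
    return hits / n_trials

# Usage sketch: estimated reliability under a 150 ms end-to-end constraint.
print(f"estimated reliability: {monte_carlo_reliability(150.0):.3f}")
```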
@article{remr2022iotj,author={Chen#, Liang and Qi#, Jianpeng and Su, Xiao and Wang, Rui},title={REMR: A reliability evaluation method for dynamic edge computing network under time constraints},journal={IEEE Internet of Things Journal},volume={10},number={5},pages={4281-4291},publisher={IEEE},month=mar,url={https://doi.org/10.1109/JIOT.2022.3216056},doi={10.1109/JIOT.2022.3216056},google_scholar_id={mVmsd5A6BfQC},year={2023},}