Supporting 100K-Scale Scheduling: A Fresh SnailJob Performance Benchmark Report
Modern enterprise systems are complex: requirements for task scheduling, failure retry, access control, and monitoring/alerting keep piling up, while many traditional solutions suffer from painful integration, high scaling costs, and simplistic retry mechanisms.
SnailJob was created precisely to solve these problems.
Platform Overview
SnailJob is a platform focused on distributed task scheduling and retry. Its partition-and-bucket architecture gives it high scalability and fault tolerance; it delivers second-level scheduling and sophisticated retry strategies without any external middleware, and ships with a modern UI plus complete permission and alerting support.
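The partition-and-bucket idea can be sketched as follows. This is a conceptual illustration only, not SnailJob's actual implementation: the bucket count, hash function, and modulo assignment are all assumptions made for the example. The point is that tasks hash into a fixed set of buckets and buckets are divided among live server nodes, so rebalancing moves buckets, not individual tasks.

```python
import hashlib

# Conceptual sketch of partition/bucket scheduling (NOT SnailJob's real code).
BUCKET_TOTAL = 128  # assumed bucket count

def bucket_of(task_id: str) -> int:
    """Map a task to a stable bucket via hashing."""
    digest = hashlib.md5(task_id.encode()).hexdigest()
    return int(digest, 16) % BUCKET_TOTAL

def buckets_for_node(node_index: int, node_count: int) -> list:
    """Evenly divide buckets among server nodes (simple modulo split)."""
    return [b for b in range(BUCKET_TOTAL) if b % node_count == node_index]

# Each server node only scans tasks whose buckets it owns:
owned = buckets_for_node(0, 4)
print(len(owned))  # 32 buckets per node with 4 nodes
```

Because a task's bucket is stable, every node independently agrees on who owns which task without coordinating through external middleware.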
SnailJob Performance Benchmark Report
- Report date: 2025-08-25
- Version: 1.7.2
- Author: rpei
Test Objective
The goal of this benchmark is to verify the maximum number of scheduled tasks a single SnailJob server node can sustain under stable conditions, and to evaluate overall system performance under highly concurrent task dispatching.
Test Environment
Database
- Type: Alibaba Cloud RDS MySQL 8.0
- Instance spec: mysql.n2.xlarge.1 (8 vCPU, 16 GB RAM)
- Storage: 100 GB, InnoDB engine
- Version: MySQL_InnoDB_8.0_Default
Application Deployment
- Server: Alibaba Cloud ECS g6.4xlarge
- SnailJob Server: single instance (4 vCPU, 8 GB RAM)
- SnailJob Client: 16 instances (1 vCPU, 1 GB RAM each)
Server Configuration
Pekko configuration (snail-job-server-starter/src/main/resources/snailjob.conf). Note that Pekko sizes each thread-pool-executor as core-pool-size-factor × available processors, clamped between core-pool-size-min and core-pool-size-max; with a factor of 1.0 on a 4-vCPU host, the min values below effectively determine the pool sizes.
pekko {
  actor {
    common-log-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 16
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    common-scan-task-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 64
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    netty-receive-request-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    retry-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    retry-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    retry-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 32
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    job-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 128
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    job-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    job-task-executor-call-client-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    job-task-executor-result-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 160
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    workflow-task-prepare-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 256
      }
      throughput = 10
    }
    workflow-task-executor-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      thread-pool-executor {
        core-pool-size-min = 4
        core-pool-size-factor = 1.0
        core-pool-size-max = 512
      }
      throughput = 10
    }
  }
}

System configuration file (snail-job-server-starter/src/main/resources/application.yml)
server:
  port: 8080
  servlet:
    context-path: /snail-job
spring:
  main:
    banner-mode: off
  profiles:
    active: dev
  datasource:
    name: snail_job
    ## mysql
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://ex-snailjob-mysql-svc:3306/snail_job?useSSL=false&characterEncoding=utf8&useUnicode=true
    username: root
    password: Ab1234567
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      connection-timeout: 30000
      minimum-idle: 16
      maximum-pool-size: 256
      auto-commit: true
      idle-timeout: 30000
      pool-name: snail_job
      max-lifetime: 1800000
  web:
    resources:
      static-locations: classpath:admin/
mybatis-plus:
  typeAliasesPackage: com.aizuda.snailjob.template.datasource.persistence.po
  global-config:
    db-config:
      where-strategy: NOT_EMPTY
      capital-mode: false
      logic-delete-value: 1
      logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
logging:
  config: /usr/snailjob/config/logback.xml
snail-job:
  retry-pull-page-size: 2000 # batch size when pulling retry data
  job-pull-page-size: 2000 # batch size when pulling job data
  server-port: 17888 # server port
  log-storage: 7 # log retention (unit: days)
  rpc-type: grpc
  summary-day: 0
  server-rpc:
    keep-alive-time: 45s # heartbeat interval: 45 s
    keep-alive-timeout: 15s # heartbeat timeout: 15 s
    permit-keep-alive-time: 30s # minimum permitted heartbeat interval: 30 s
    dispatcher-tp: # dispatcher thread pool
      core-pool-size: 100
      maximum-pool-size: 100
  client-rpc:
    keep-alive-time: 45s # heartbeat interval: 45 s
    keep-alive-timeout: 15s # heartbeat timeout: 15 s
    client-tp: # client thread pool
      core-pool-size: 100
      maximum-pool-size: 100

Test Scenario
- Execution period of each scheduled task: 60 seconds
- Average execution time per task: 200 milliseconds
- Goal: measure how many tasks a single SnailJob Server node can schedule stably
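The scenario parameters turn into load figures with simple arithmetic. The sketch below is a back-of-envelope estimate using the 30,000-task capacity measured in this report; the per-client split assumes work is spread evenly across the 16 client instances.

```python
# Back-of-envelope load estimate from the scenario parameters.
TASK_COUNT = 30_000   # scheduled tasks sustained by one server node (measured)
PERIOD_S = 60         # each task fires once every 60 seconds
AVG_EXEC_S = 0.2      # average task execution time: 200 ms
CLIENTS = 16          # client instances in this deployment

dispatch_rate = TASK_COUNT / PERIOD_S    # task triggers per second
in_flight = dispatch_rate * AVG_EXEC_S   # concurrent executions (Little's law)
per_client = in_flight / CLIENTS         # average in-flight tasks per client

print(dispatch_rate)  # 500.0 dispatches/second
print(in_flight)      # 100.0 concurrent executions
print(per_client)     # 6.25 per client instance
```

So the server must sustain roughly 500 dispatches per second, while each client only sees about 6-7 tasks in flight on average, which explains why the clients can be so small (1 vCPU / 1 GB).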
Test Results
On a single node (4C/8G), SnailJob Server stably handled 30,000 scheduled tasks, dispatching every task on time within its 60-second period. Database load sat at only about 20%, indicating ample headroom for growth. By horizontally scaling server nodes, the system can in theory support 100,000+ scheduled tasks, covering the vast majority of enterprise scenarios. In addition, the SnailJob Pro edition introduces a Redis cache layer and log offloading (backed by MongoDB), further improving scheduling capacity and stability.
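Under the assumption that capacity scales roughly linearly with server nodes (an optimistic simplification, since database contention grows non-linearly), the 100,000+ claim reduces to a small calculation:

```python
import math

PER_NODE_CAPACITY = 30_000   # measured single-node capacity
TARGET_TASKS = 100_000       # the horizontal-scaling target
DB_LOAD_AT_30K = 0.20        # database load observed at 30k tasks

# Server nodes needed to reach the target:
nodes_needed = math.ceil(TARGET_TASKS / PER_NODE_CAPACITY)
print(nodes_needed)  # 4 server nodes

# Linear extrapolation of database load at the target (optimistic):
db_load_at_target = DB_LOAD_AT_30K * (TARGET_TASKS / PER_NODE_CAPACITY)
print(round(db_load_at_target, 2))  # ~0.67 of database capacity
```

Even under this rough model the shared database, not the server tier, is what approaches saturation first, which matches the bottleneck analysis in the summary.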
Resource consumption (company confidentiality rules prevent publishing screenshots; only the benchmark figures are shared here)
| Metric | Value |
| --- | --- |
| SnailJob server CPU usage | avg 71%, peak 82% |
| SnailJob server memory usage | ~32% |
| Database instance IOPS usage | peak 40% (5-second sampling interval) |
| Database instance CPU usage | ~20% |
| Database instance memory usage | ~55% |
Summary
SnailJob's performance bottleneck lies mainly in database storage. Scheduling generates a large volume of task-batch and log writes, which puts significant pressure on database IOPS. When deploying SnailJob, we therefore recommend:
- Deploy the database on a dedicated instance; do not share it with other business services;
- Prefer high-performance disks to improve write throughput;
- Enable asynchronous writes to further reduce database write latency.
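The asynchronous-write recommendation can be illustrated with a minimal batching writer. This is a generic sketch, not SnailJob's implementation; the class name, batch size, and flush interval are all assumptions. Records are buffered in a queue and flushed to storage in batches, trading a little latency for far fewer write operations against the database.

```python
import queue
import threading

class BatchWriter:
    """Buffer records in memory and flush them to a sink in batches.
    Generic illustration of async batched writes (not SnailJob code)."""

    def __init__(self, sink, batch_size=100, flush_interval=0.05):
        self.sink = sink                  # callable that receives a list of records
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.q = queue.Queue()
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def write(self, record):
        self.q.put(record)                # producers never block on storage

    def _run(self):
        batch = []
        while not self._stop.is_set() or not self.q.empty():
            try:
                batch.append(self.q.get(timeout=self.flush_interval))
            except queue.Empty:
                pass
            if batch and (len(batch) >= self.batch_size or self.q.empty()):
                self.sink(batch)          # one bulk write instead of many small ones
                batch = []
        if batch:                         # flush whatever remains on shutdown
            self.sink(batch)

    def close(self):
        self._stop.set()
        self._worker.join()

# Usage: collect flushed batches in a list instead of hitting a real database.
batches = []
writer = BatchWriter(batches.append, batch_size=50)
for i in range(120):
    writer.write(f"log-{i}")
writer.close()
print(sum(len(b) for b in batches))  # 120 records delivered, in far fewer writes
```

Each flush costs one bulk write rather than one write per record, which directly relieves the IOPS pressure identified above; the trade-off is that records buffered in memory can be lost on a crash.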