Subspace Institute

LAPLACE Event Fetcher

Network suddenly dropped mid-stream? Realized after going live that you forgot to launch the chat client? This feature continuously monitors live room events even when the chat client is not open and periodically syncs them to your local machine, so you truly never miss a gift. Cloud event sync is only available in console mode

  • Cloud events cover all paid gift events from the last 72 hours (configurable)
  • The console automatically syncs with the cloud every 5 minutes to pick up gifts the streamer may have missed due to network issues

Configuring this feature requires some computer literacy. If you are a streamer, please hand this document to the appropriate technical staff in your club/guild

Prerequisites

  • Familiarity with basic container or Kubernetes operations
  • High school education or above (if you are a middle schooler and can fully understand the deployment docs below, you may contact me via Discord)

Please confirm you meet the conditions above; otherwise, reading further is not recommended

Minimum Server Requirements

  • linux/amd64
  • CPU: at least 0.25 cores
  • RAM: at least 256 MB, 512 MB or more recommended. Memory usage grows linearly with the number of monitored live rooms
  • The server must be able to maintain stable, long-lived connections to the Bilibili danmaku servers; nodes in Japan, Singapore, or China (domain must be ICP-registered) are recommended
  • PostgreSQL
  • Redis (optional)

Serverless Requirements

Permutations of Koyeb/Vercel + Supabase/Neon/Render have been tested so far

Available Docker Tags

  • latest: the latest stable release
  • edge: the latest development build
  • sha-<hash>: the build for a specific commit hash

See the Docker Hub page for more tags and versions

Installation

  • Deploy via Docker, Docker Compose, or Kubernetes
  • Set up public access: HTTPS is required; it can be configured via Traefik, Nginx, Caddy, or a serverless cloud service
  • Enter the API: once everything is set up, fill in your API address in LAPLACE Chat under Configurator - Advanced - Custom Cloud Event API
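For a quick single-container trial of the Docker route above, a minimal docker run sketch might look like this (assuming PostgreSQL is already reachable; the connection string and room IDs below are placeholders to substitute with your own):

```shell
# Minimal single-container run; assumes an existing, reachable PostgreSQL.
# DATABASE_URL and ROOMS are placeholder values - replace them with your own.
docker run -d \
  --name lef \
  -p 8080:8080 \
  -e DATABASE_URL="postgresql://lef:lef@db.example.com:5432/lef" \
  -e ROOMS="456117" \
  -e TZ="Asia/Shanghai" \
  sparanoid/laplace-event-fetcher:latest
```

For anything beyond a quick test, prefer the Docker Compose or Kubernetes examples below, which also run the database.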

Docker Compose Example

The following is an example configuration for deploying LAPLACE Event Fetcher with Docker Compose

docker-compose.yml
services:
  lef:
    image: sparanoid/laplace-event-fetcher:latest
    environment:
      DATABASE_URL: postgresql://lef:lef@lef-pg:5432/lef
      ROOMS: 25034104,456117
      TZ: Asia/Shanghai # Recommended; this ensures all cron tasks run in CST
    depends_on:
      - lef-pg
      - lef-redis # See below
    restart: always

  lef-pg:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: lef
      POSTGRES_USER: lef
      POSTGRES_PASSWORD: lef
    volumes:
      - lef-db:/var/lib/postgresql/data
    restart: always
    healthcheck:
      test: pg_isready -U lef -h 127.0.0.1
      interval: 5s

  # Redis is optional, but recommended for caching
  lef-redis:
    image: redis:latest
    volumes:
      - lef-redis:/data
    restart: always

volumes:
  lef-db:
  lef-redis:
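Assuming the file above is saved as docker-compose.yml, bringing the stack up follows the usual Compose workflow:

```shell
docker compose up -d          # start lef, PostgreSQL, and Redis in the background
docker compose ps             # confirm all three services are running
docker compose logs -f lef    # follow the fetcher's logs
```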

Kubernetes Example

The following is an example configuration for deploying LAPLACE Event Fetcher on Kubernetes

configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: lef-config
data:
  ROOMS: '25034104,456117'
  TZ: 'Asia/Shanghai'
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: lef-secret
type: Opaque
stringData:
  DATABASE_URL: 'postgresql://lef:lef@lef-pg:5432/lef'
  POSTGRES_DB: 'lef'
  POSTGRES_USER: 'lef'
  POSTGRES_PASSWORD: 'lef'
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lef-pg-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lef-pg
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lef-pg
  template:
    metadata:
      labels:
        app: lef-pg
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          envFrom:
            - secretRef:
                name: lef-secret
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - lef
                - -h
                - 127.0.0.1
            initialDelaySeconds: 30
            periodSeconds: 10
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: lef-pg-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: lef-pg
spec:
  selector:
    app: lef-pg
  ports:
    - port: 5432
event-fetcher.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lef
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lef
  template:
    metadata:
      labels:
        app: lef
    spec:
      containers:
        - name: lef
          image: sparanoid/laplace-event-fetcher:latest
          envFrom:
            - configMapRef:
                name: lef-config
            - secretRef:
                name: lef-secret
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: lef
spec:
  selector:
    app: lef
  ports:
    - port: 80
      targetPort: 8080

Configure Ingress and HTTPS according to your actual needs

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lef-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  rules:
    - host: lef.example.tld
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: lef
                port:
                  number: 80
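The manifests above can be applied with kubectl in dependency order; a sketch, assuming the files sit in the current directory:

```shell
# Config, credentials, and storage first, then the workloads
kubectl apply -f configmap.yaml -f secret.yaml -f pvc.yaml
kubectl apply -f postgres.yaml
kubectl apply -f event-fetcher.yaml
kubectl apply -f ingress.yaml

# Verify the rollout
kubectl get pods -l app=lef
kubectl logs deploy/lef --tail=50
```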

Deploy: Koyeb + Neon

First, you need to create a PostgreSQL database on Neon:

  • Create a new database in the Neon dashboard
  • Change the region to AWS US East (N. Virginia)
  • Remember the database password and write down your connection string from Connection Details - Connection string

Then, we need to deploy the event fetcher on Koyeb:

  • Create a new app and choose Docker as your deployment method
  • Use docker.io/sparanoid/laplace-event-fetcher as your image. Leave the tag blank for the latest stable image, or type edge for the latest beta image
  • Change the region to WAS (Washington, D.C., close to the database you created)
  • Choose the eMicro instance (a larger instance can handle more rooms)
  • Click the Advanced button and add the following environment variables: ROOMS and DATABASE_URL. The DATABASE_URL variable should be something like postgresql://postgres:<DB_PASSWORD>@xxxxxxxxxxxxxxxxxxxx.us-east-2.aws.neon.tech/neondb?sslmode=require you got from the previous step
  • Click Deploy

Tested Serverless Platforms

The following combinations are tested and working:

  • Koyeb + Supabase
  • Koyeb + Neon
  • Render
  • Render + Neon

WebSocket API (Bridge Mode)

The server provides a WebSocket mode at the root path (/) for real-time event streaming. This feature can be enabled by setting WEBSOCKET_BRIDGE=1 or WEBSOCKET_BRIDGE=true. When enabled, the server will accept WebSocket connections from LAPLACE Event Bridge SDK.

Authentication

If WEBSOCKET_BRIDGE_AUTH is configured, clients must authenticate using the Sec-WebSocket-Protocol header. For example:

const ws = new WebSocket('ws://localhost:8080/', ['client', 'your-auth-key'])

Then you'll receive real-time LaplaceEvent messages as they are processed by the server.

Testing

A test client is included at websocket-test.html that you can open in your browser to test the WebSocket connection and see live events.

You can also use wscat to test the WebSocket connection:

wscat -c ws://localhost:8080 -s client -s <auth-token>

OpenTelemetry

The service includes optional OpenTelemetry support for distributed tracing and observability. When configured, it automatically instruments PostgreSQL operations.

Axiom Logging (Optional)

If you're using Axiom and want structured logging in addition to tracing, the service can automatically configure Pino logger to send logs to Axiom when it detects Axiom credentials in the OTLP headers:

  • Logs are automatically sent when both X-Axiom-Dataset and Authorization are present in OTEL_EXPORTER_OTLP_HEADERS
  • Log level is set to debug by default
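Based on the header names above, the Axiom setup reduces to two variables. The endpoint below is Axiom's public API endpoint; the token and dataset name are placeholders:

```shell
# Axiom's API endpoint; token and dataset are placeholders to fill in
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.axiom.co"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <AXIOM_API_TOKEN>,X-Axiom-Dataset=<DATASET_NAME>"
```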

Instrumented Components

When OpenTelemetry is enabled, the following components are automatically instrumented:

  • PostgreSQL: All database queries and operations
  • HTTP requests: All incoming HTTP requests and responses

API

  • GET /events/<room_id>: Get events from the database by room id
    • queries:
      • ?full=1: Get all events
  • POST /upload: Upload events and store them in the database
    • headers:
      • Authorization: Bearer <UPLOAD_KEY>
    • body:
      • Raw JSON LaplaceEvent[] or LAPLACE Chat Archive (.lca)
  • GET /ping: Check if the server is running. This will connect to the database and return pong in JSON format with a 200 status code, or a 500 status code if it fails.
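The endpoints above can be exercised with curl; a sketch against a local instance (the room ID and keys are placeholders, and the Authorization header is only needed when AUTH_KEY/UPLOAD_KEY are configured):

```shell
# Health check - returns pong as JSON with HTTP 200, or 500 on DB failure
curl http://localhost:8080/ping

# Fetch recent events for a room
curl http://localhost:8080/events/456117

# Fetch all stored events for a room
curl "http://localhost:8080/events/456117?full=1"

# Upload events (requires UPLOAD_KEY to be configured on the server)
curl -X POST http://localhost:8080/upload \
  -H "Authorization: Bearer <UPLOAD_KEY>" \
  -H "Content-Type: application/json" \
  --data @events.json
```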

Environment Variables

  • PORT (optional): Server port to listen. Default: 8080
  • ROOMS: Rooms to fetch, comma-separated. Default: 456117. Keeping fewer than 10 rooms per node is recommended; otherwise Bilibili may block you from fetching events.
  • DATABASE_URL: Database connection string, e.g. postgresql://username:password@pg:5432/lef
  • REDIS_URL (optional): Redis connection string, e.g. redis://username:password@redis:6379
  • EVENTS_KEEP (optional): Events older than this value will be discarded. Default: 72 (hours)
  • RESTART_WAIT (optional): Time to wait before re-establishing connections. Default: 2000 (ms)
  • RESTART_INTERVAL (optional): Connection restart interval in cron format. Default: 0 6,18 * * * (Every day at 6:00 AM and 6:00 PM)
  • LOGIN_SYNC_TOKEN (optional): Token from LAPLACE Login Sync extension. Default: undefined. Multiple keys can be separated by commas.
  • LOGIN_SYNC_SERVER (optional): Custom sync server. Default: undefined
  • AUTH_KEY (optional): A long random string as credentials to access the REST API. Useful if you only want to expose the API to authorized users. Default: undefined. Multiple keys can be separated by commas.
  • UPLOAD_KEY (optional): A long random string as credentials to upload events to the cloud. Default: undefined. You must set this if you want to upload events. You can use openssl rand -hex 32 to generate a random string for this.
  • WEBSOCKET_BRIDGE (optional): Enable WebSocket bridge mode for real-time event streaming. Set to 1 or true to enable. Default: undefined (disabled)
  • WEBSOCKET_BRIDGE_AUTH (optional): Password for WebSocket authentication. When set, clients must provide this password to connect to the WebSocket endpoint. Default: undefined
  • SENTRY_DSN (optional): Sentry DSN client key. Default: undefined
  • SENTRY_SAMPLE_RATE (optional): Sentry sample rate. Default: 1.0
  • OTEL_EXPORTER_OTLP_ENDPOINT (optional): The OpenTelemetry endpoint URL (e.g., https://api.example.com). When not set, OpenTelemetry is disabled.
  • OTEL_EXPORTER_OTLP_HEADERS (optional): Headers to send with OpenTelemetry requests in comma-separated key=value format (e.g., Authorization=Bearer <token>,X-Dataset=my-dataset).
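Pulling the variables above together, a typical minimal configuration might look like this env fragment (all values are placeholders):

```shell
# Required
ROOMS=25034104,456117
DATABASE_URL=postgresql://lef:lef@lef-pg:5432/lef

# Optional but recommended
REDIS_URL=redis://lef-redis:6379
TZ=Asia/Shanghai
EVENTS_KEEP=72

# Generate each key with: openssl rand -hex 32
AUTH_KEY=<random-string>
UPLOAD_KEY=<random-string>
```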

Development

bun run dev # session 1
bunx drizzle-kit studio # session 2

# Apply migrations to initialize/update the database
bunx drizzle-kit migrate

# Generate migrations after changing schemas
bunx drizzle-kit generate