OCPP Server: Architecture, Implementation, and Scaling Guide
An OCPP server is the central system that every charge point in a network connects to. It handles the WebSocket connections, processes protocol messages, manages charging sessions, and stores the operational data that drives billing, monitoring, and fleet management. Without it, chargers are disconnected hardware.
Building an OCPP server that works in a lab is straightforward. Building one that runs reliably in production — handling thousands of concurrent connections, multiple protocol versions, and the full range of real-world charger behavior — requires architectural decisions that most general-purpose web development teams haven’t encountered before.
This guide covers the architecture, OCPP server implementation steps, open-source options, and scaling patterns for production deployments.
What an OCPP Server Does in the Charging Ecosystem
Central System vs Charge Point — The Client-Server Model
In OCPP terminology, the charge point is the client and the OCPP central system is the server. The charge point initiates the WebSocket connection to the server and maintains it persistently. All communication flows over this connection — the server can push messages to the charge point (like RemoteStartTransaction or SetChargingProfile) without waiting for a request.
OCPP Transport Layers: WebSocket (JSON) and SOAP
OCPP supports two transport bindings:
- OCPP-J (JSON over WebSocket) — Used by OCPP 1.6J, 2.0.1, and 2.1. This is the modern standard. Messages are JSON-encoded and transported over persistent WebSocket connections with TLS.
- OCPP-S (SOAP over HTTP) — Used by OCPP 1.5 and earlier. Legacy transport that uses XML-encoded SOAP messages over HTTP. Still encountered in older deployments but not recommended for new implementations.
New OCPP server implementations should target JSON over WebSocket exclusively. The Open Charge Alliance publishes the full specification for both transport bindings.
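The OCPP-J wire format itself is compact: every message is a JSON array whose first element identifies the frame type (2 for CALL, 3 for CALLRESULT, 4 for CALLERROR, per the OCPP-J specification). A minimal sketch of building and parsing these frames:

```python
import json

# OCPP-J frame type identifiers (per the OCPP-J specification)
CALL, CALLRESULT, CALLERROR = 2, 3, 4

def make_call(unique_id: str, action: str, payload: dict) -> str:
    """Encode a CALL frame: [2, "<uniqueId>", "<Action>", {payload}]."""
    return json.dumps([CALL, unique_id, action, payload])

def parse_frame(raw: str):
    """Decode any OCPP-J frame into (frameType, uniqueId, remaining fields)."""
    frame = json.loads(raw)
    return frame[0], frame[1], frame[2:]

# A charge point announcing itself on boot:
msg = make_call("19223201", "BootNotification",
                {"chargePointVendor": "VendorX", "chargePointModel": "M1"})
frame_type, msg_id, rest = parse_frame(msg)
```

The `uniqueId` correlates each CALL with its eventual CALLRESULT or CALLERROR, which is what lets both sides pipeline requests over a single connection.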
OCPP Server Architecture — The Core Components
WebSocket Connection Manager
The connection manager is the first layer a charge point interacts with. It handles TLS termination (encrypted connections per OCPP security profiles), connection lifecycle (accepting new connections, managing heartbeat intervals, detecting stale connections), and a connection registry mapping each WebSocket connection to a charge point identity.
At scale, the connection manager must handle thousands of concurrent WebSocket connections. This is where architectural choices diverge from typical web applications — a traditional HTTP server handles short-lived connections, while an OCPP server maintains long-lived, stateful connections.
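A minimal sketch of the connection registry described above (class and method names are illustrative, not from any OCPP library): each connection is keyed by charge point identity, and a connection is flagged stale when no message arrives within the heartbeat interval.

```python
import time

class ConnectionRegistry:
    """Maps charge point IDs to live connections and last-heartbeat times."""

    def __init__(self, heartbeat_interval: float = 300.0):
        self.heartbeat_interval = heartbeat_interval
        self._connections = {}   # charge_point_id -> (websocket, last_seen)

    def register(self, charge_point_id, websocket):
        self._connections[charge_point_id] = (websocket, time.monotonic())

    def touch(self, charge_point_id):
        """Call on every Heartbeat (or any inbound message)."""
        ws, _ = self._connections[charge_point_id]
        self._connections[charge_point_id] = (ws, time.monotonic())

    def stale(self):
        """Charge points whose last message is older than the interval."""
        now = time.monotonic()
        return [cp for cp, (_, seen) in self._connections.items()
                if now - seen > self.heartbeat_interval]
```

In production this registry must also survive server restarts and coordinate across instances, which is where the shared state store discussed later comes in.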
Message Router and Action Handlers
Every OCPP message has an action type (BootNotification, StatusNotification, Authorize, StartTransaction, MeterValues, etc.). The message router deserializes incoming messages, validates them against the OCPP schema, and dispatches them to the appropriate action handler.
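The dispatch pattern can be sketched as a handler table keyed by action name; handler names and replies below are illustrative (real handlers would validate payloads against the published JSON schemas):

```python
import json
from datetime import datetime, timezone

def now_iso() -> str:
    return datetime.now(timezone.utc).isoformat()

def handle_boot_notification(payload: dict) -> dict:
    # Accept the charge point and tell it how often to heartbeat.
    return {"status": "Accepted", "currentTime": now_iso(), "interval": 300}

def handle_heartbeat(payload: dict) -> dict:
    return {"currentTime": now_iso()}

HANDLERS = {
    "BootNotification": handle_boot_notification,
    "Heartbeat": handle_heartbeat,
}

def route(raw: str) -> str:
    """Deserialize a CALL frame, dispatch by action, return the reply frame."""
    frame = json.loads(raw)
    if frame[0] != 2:                 # only CALL frames carry an action
        raise ValueError("expected a CALL frame")
    _, unique_id, action, payload = frame
    handler = HANDLERS.get(action)
    if handler is None:               # unknown action -> CALLERROR frame
        return json.dumps([4, unique_id, "NotImplemented", action, {}])
    return json.dumps([3, unique_id, handler(payload)])   # CALLRESULT
```

Returning a CALLERROR with `NotImplemented` for unknown actions, rather than dropping the message, keeps badly behaved chargers from stalling while they wait for a reply.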
Transaction and CDR Processing
Transaction management is the financial core of the OCPP server. It tracks session lifecycle (StartTransaction → MeterValues → StopTransaction), energy accumulation from meter readings, and CDR (Charge Detail Record) generation for billing systems. CDR accuracy is non-negotiable — it directly affects revenue.
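As a sketch, session energy can be derived from the first and last energy register readings (Wh), with the CDR assembled from the session boundaries. Field names below are illustrative, not a billing-standard schema:

```python
from dataclasses import dataclass

@dataclass
class MeterValue:
    timestamp: str
    energy_wh: int      # Energy.Active.Import.Register reading, in Wh

def build_cdr(transaction_id: int, meter_values: list) -> dict:
    """Assemble a Charge Detail Record from a session's meter readings."""
    start, stop = meter_values[0], meter_values[-1]
    return {
        "transactionId": transaction_id,
        "startTime": start.timestamp,
        "stopTime": stop.timestamp,
        # Energy delivered = last register value minus first.
        "energyDeliveredKWh": (stop.energy_wh - start.energy_wh) / 1000,
    }

cdr = build_cdr(42, [
    MeterValue("2024-05-01T10:00:00Z", 120_000),
    MeterValue("2024-05-01T10:30:00Z", 127_500),
    MeterValue("2024-05-01T11:00:00Z", 135_400),
])
# cdr["energyDeliveredKWh"] == 15.4
```

The intermediate readings still matter: they let the server detect gaps or register resets mid-session instead of silently billing from a bad delta.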
Persistence Layer and Monitoring
The persistence layer stores charge point configurations, transaction history, authorization caches, and operational logs. Choose a database architecture that separates time-series data (meter values, status changes) from transactional data (sessions, CDRs) and configuration data (charger profiles, firmware versions).
Implementing an OCPP Server — Step by Step
Choosing Your Protocol Version (1.6J, 2.0.1, 2.1)
Start with OCPP 1.6J — it’s what the majority of deployed chargers support. Add 2.0.1 or 2.1 support in parallel for newer hardware. Your server should handle version detection during the WebSocket handshake (OCPP uses WebSocket subprotocol headers to negotiate the version).
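Version detection can be sketched as picking the best mutually supported value from the `Sec-WebSocket-Protocol` header offered by the charge point (`ocpp1.6`, `ocpp2.0.1`, and `ocpp2.1` are the subprotocol names defined by the specifications):

```python
# Server-side preference order: newest version first.
SUPPORTED = ["ocpp2.1", "ocpp2.0.1", "ocpp1.6"]

def negotiate(sec_websocket_protocol: str):
    """Pick the best mutually supported subprotocol, or None to reject."""
    offered = [p.strip() for p in sec_websocket_protocol.split(",")]
    for version in SUPPORTED:          # first match wins: newest preferred
        if version in offered:
            return version
    return None

# A charge point offering both 1.6 and 2.0.1 gets upgraded to 2.0.1:
chosen = negotiate("ocpp1.6, ocpp2.0.1")
```

Per RFC 6455, the server echoes the chosen subprotocol back in the handshake response; if there is no overlap it should reject the upgrade rather than guess.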
Tech Stack Options
| Language / Runtime | WebSocket Maturity | Async I/O Model | OCPP Libraries | Production Fit |
|---|---|---|---|---|
| Node.js | Excellent (ws, Socket.io) | Event loop (native) | Several (node-ocpp, ocpp-rpc) | Strong for I/O-bound workloads. Natural fit for WebSocket-heavy applications. |
| Java (Spring / Vert.x) | Excellent | Thread pool (Spring) or event loop (Vert.x) | SteVe (1.6J), custom libraries | Enterprise-grade. Vert.x offers async performance comparable to Node.js. |
| Go | Good (gorilla/websocket) | Goroutines (lightweight threads) | Limited | Excellent concurrency model. Fewer OCPP-specific libraries. |
| Python | Moderate (websockets, asyncio) | asyncio event loop | ocpp (Python package) | Good for prototyping. Performance ceiling for high-concurrency production use. |
| .NET (C#) | Good (SignalR) | Task-based async (TAP) | OCPP.Core | Strong for .NET shops. Azure-native deployment. |
The language choice matters less than the WebSocket framework maturity and async I/O capabilities. OCPP servers are I/O-bound (thousands of persistent connections), not CPU-bound.
The Credential Handshake and Security
OCPP 1.6J uses HTTP Basic Authentication during the WebSocket upgrade. OCPP 2.0.1 formalizes three security profiles of increasing rigor: Profile 1 uses HTTP Basic Authentication over an unsecured transport, Profile 2 adds server-side TLS certificates, and Profile 3 adds mutual TLS with client certificates. Implementing Profile 3 requires a PKI (Public Key Infrastructure) for certificate management.
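For the Basic Authentication profiles, the credential arrives as a standard `Authorization: Basic <base64>` header on the upgrade request. A sketch of the server-side check, assuming an illustrative in-memory credential store (a real deployment would back this with a database and hashed keys):

```python
import base64
import hmac

# Illustrative credential store: charge point ID -> authorization key.
CREDENTIALS = {"CP001": "s3cr3t-key"}

def check_basic_auth(header: str) -> bool:
    """Validate an 'Authorization: Basic <b64>' header value."""
    scheme, _, encoded = header.partition(" ")
    if scheme != "Basic":
        return False
    try:
        username, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:                      # malformed base64 or encoding
        return False
    expected = CREDENTIALS.get(username)
    # hmac.compare_digest gives a constant-time comparison (no timing leak).
    return expected is not None and hmac.compare_digest(password, expected)

token = base64.b64encode(b"CP001:s3cr3t-key").decode()
ok = check_basic_auth("Basic " + token)
```

By convention the username is the charge point identity, so a mismatch between the authenticated identity and the identity in the connection URL should also be rejected.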
Testing with OCPP Simulators
Before connecting real hardware, test against OCPP charge point simulators. Test every message type, edge case (out-of-order messages, network interruptions, partial messages), and the full transaction lifecycle. Real chargers behave less predictably than simulators — vendor-specific quirks are common — but simulators catch the majority of protocol-level bugs.
Open-Source OCPP Servers: What’s Available
SteVe (Java)
SteVe is the most established open-source OCPP server. Built on Java/Spring, it supports OCPP 1.2, 1.5, and 1.6J. It provides a web interface for charger management and a database-backed persistence layer. SteVe is well-suited for testing, development, and small-scale deployments.
OCPP.Core (.NET)
OCPP.Core is a lightweight .NET implementation supporting OCPP 1.6 and 2.0. It’s simpler than SteVe, making it a good starting point for .NET teams that want to understand the protocol structure before building production infrastructure.
OpenOCPP (Embedded C++)
OpenOCPP targets embedded environments and resource-constrained deployments. Written in C++, it’s designed for scenarios where the OCPP central system runs on edge hardware rather than cloud infrastructure.
Limitations of Open-Source for Production
| Project | Language | OCPP Versions | Multi-Tenancy | HA / Clustering | Security Profiles | Production Ready |
|---|---|---|---|---|---|---|
| SteVe | Java (Spring) | 1.2, 1.5, 1.6J | No | No | Basic Auth only | Testing / small-scale |
| OCPP.Core | .NET (C#) | 1.6, 2.0 | No | No | Basic Auth only | Prototyping |
| OpenOCPP | C++ | 1.6, 2.0.1 | No | No | Profiles 1-3 | Embedded / edge |
Scaling an OCPP Server for Production
Connection Pooling and Load Balancing
WebSocket connections are stateful, which complicates traditional load balancing. Sticky sessions ensure a charge point's connection always routes to the same server instance, but this creates hot spots. Alternatives include a shared connection state store (Redis) and a message broker (RabbitMQ, Kafka) that together decouple the WebSocket layer from the business logic layer.
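The decoupling pattern can be sketched with an in-memory queue and dict standing in for the broker and shared store: the WebSocket tier only tags frames with the charge point identity and forwards them, and any worker instance can process them because session state lives in the shared store rather than on the server that holds the socket.

```python
import queue

broker = queue.Queue()     # stand-in for a RabbitMQ/Kafka topic
shared_state = {}          # stand-in for Redis: charge_point_id -> frames

def websocket_tier(charge_point_id: str, raw_frame: str):
    """Thin edge layer: no business logic, just tag and forward."""
    broker.put({"chargePointId": charge_point_id, "frame": raw_frame})

def worker():
    """Any instance can consume any message — state is in the shared store."""
    msg = broker.get()
    sessions = shared_state.setdefault(msg["chargePointId"], [])
    sessions.append(msg["frame"])

websocket_tier("CP001", '[2,"1","Heartbeat",{}]')
worker()
```

The trade-off is added latency and operational surface for the broker; for small fleets, sticky sessions are often simpler and good enough.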
Multi-Tenancy Considerations
Commercial OCPP platforms serve multiple clients. Multi-tenancy requires data isolation (separate charger registries, transaction stores, and CDR pipelines per tenant), tenant-specific configuration (tariffs, authorization rules, branding), and tenant-scoped access control for operator dashboards.
Protocol Version Coexistence (1.6 + 2.x)
Running OCPP 1.6J and 2.0.1+ on the same server instance requires a version-aware message dispatcher. The internal data model should be version-agnostic so that both protocol versions map to the same session, transaction, and charger entities.
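A sketch of that mapping layer, assuming an illustrative internal `Session` entity: OCPP 1.6 `StartTransaction` and OCPP 2.0.1 `TransactionEvent` (with `eventType: Started`) both normalize to it. The payload field names follow the respective specifications; the internal model is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Version-agnostic internal entity shared by all protocol versions."""
    charge_point_id: str
    id_token: str
    meter_start_wh: int

def from_ocpp16_start_transaction(cp_id: str, payload: dict) -> Session:
    # OCPP 1.6 StartTransaction.req carries idTag and meterStart directly.
    return Session(cp_id, payload["idTag"], payload["meterStart"])

def from_ocpp201_transaction_event(cp_id: str, payload: dict) -> Session:
    # OCPP 2.0.1 TransactionEvent nests the token and meter reading.
    assert payload["eventType"] == "Started"
    return Session(
        cp_id,
        payload["idToken"]["idToken"],
        payload["meterValue"][0]["sampledValue"][0]["value"],
    )

a = from_ocpp16_start_transaction(
    "CP1", {"idTag": "ABC123", "meterStart": 1000})
b = from_ocpp201_transaction_event(
    "CP1", {"eventType": "Started",
            "idToken": {"idToken": "ABC123"},
            "meterValue": [{"sampledValue": [{"value": 1000}]}]})
```

With this shape, billing, dashboards, and analytics query one `Session` table regardless of which protocol version the charger speaks.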
In a project with IMP PAN — a Horizon 2020 research initiative — Codibly built a scalable OCPP 1.6J server supporting multi-site management across Poland, Denmark, and the Netherlands, with V2G capabilities.
For teams evaluating whether to build or buy their OCPP server infrastructure, see our build vs buy decision framework. Codibly’s OCPP Accelerators provide a production-tested foundation. For full OCPP implementation services, the service page covers the engagement model.
The Server Is the Foundation — Build It for the Protocol You’ll Need Tomorrow
An OCPP server is infrastructure that your entire charging platform depends on. The architectural decisions you make — connection management, multi-version support, security profiles, multi-tenancy — determine not just what works today but what’s possible as your fleet grows and protocol requirements evolve.
Start with OCPP 1.6J because it’s what chargers run today. Architect for 2.0.1 and 2.1 because it’s what chargers will run tomorrow. The OCPP 2.1 features — V2G support, dynamic pricing, display messaging, improved security — are where the protocol is heading. Build a server foundation that can absorb those features when your fleet demands them.
Frequently Asked Questions
What is the difference between an OCPP server and a CSMS?
An OCPP server is the protocol communication layer — it handles WebSocket connections and OCPP message processing. A CSMS (Charging Station Management System) is the full platform built on top: charger management, billing, driver apps, analytics, and operator dashboards. Every CSMS includes an OCPP server, but an OCPP server alone isn’t a CSMS. Think of the OCPP server as the engine and the CSMS as the entire vehicle.
What are the best open-source OCPP servers?
SteVe (Java) is the most mature, with OCPP 1.6J support and a functional web interface. OCPP.Core (.NET) is lighter and easier to extend. OpenOCPP targets embedded C++ environments. All three are suitable for development, testing, and small-scale deployments. None are production-ready for commercial-scale operations without significant hardening.
Can one OCPP server support multiple protocol versions?
Yes, and most production servers must. The WebSocket subprotocol header carries the version identifier during the handshake. The server inspects this header and routes the connection to version-specific message handlers. The key architectural requirement is a version-agnostic internal data model — so a session started via OCPP 1.6J and one started via 2.0.1 both map to the same transaction entity in your database.