Cluster basic concepts

This article introduces a few new concepts such as Match-making, Load Balancing and Orchestration. Once we learn about these new components and what role they play in the cluster, we'll be able to start building our client- and server-side code.

Additionally, we'll highlight some of the main differences between developing for standalone SFS2X and developing for the cluster.

Client connection and login

The initial steps to connect and log in to the Overcast Cluster are the same as with standalone SmartFoxServer: we use the regular client API calls to establish a connection (TCP or WebSocket) and send the login credentials.
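
To make this concrete, here is a minimal sketch of the connect-and-login flow using the SFS2X Java client API; the host, port, Zone name and credentials are placeholders.

```java
import sfs2x.client.SmartFox;
import sfs2x.client.core.BaseEvent;
import sfs2x.client.core.SFSEvent;
import sfs2x.client.requests.LoginRequest;

public class LobbyConnector
{
    private final SmartFox sfs = new SmartFox();

    public void start()
    {
        // Fired when the connection attempt succeeds or fails
        sfs.addEventListener(SFSEvent.CONNECTION, (BaseEvent evt) ->
        {
            if ((Boolean) evt.getArguments().get("success"))
                sfs.send(new LoginRequest("playerName", "password", "MyZone"));
        });

        // Fired when the Zone login is accepted
        sfs.addEventListener(SFSEvent.LOGIN, (BaseEvent evt) ->
        {
            // Logged in to the Lobby: match-making can start from here
        });

        sfs.connect("lobby.example.com", 9933);
    }
}
```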

On the Lobby's server side we can use a Zone-level Extension to handle the login request and validate the credentials against the database.
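
Below is a minimal sketch of such a Zone-level Extension, following the standard SFS2X server-side Java API for custom logins; loadPasswordFromDb() is a hypothetical helper standing in for your own database query.

```java
import com.smartfoxserver.bitswarm.sessions.ISession;
import com.smartfoxserver.v2.core.ISFSEvent;
import com.smartfoxserver.v2.core.SFSEventParam;
import com.smartfoxserver.v2.core.SFSEventType;
import com.smartfoxserver.v2.exceptions.SFSErrorCode;
import com.smartfoxserver.v2.exceptions.SFSErrorData;
import com.smartfoxserver.v2.exceptions.SFSException;
import com.smartfoxserver.v2.exceptions.SFSLoginException;
import com.smartfoxserver.v2.extensions.BaseServerEventHandler;
import com.smartfoxserver.v2.extensions.SFSExtension;

public class LobbyExtension extends SFSExtension
{
    @Override
    public void init()
    {
        // Intercept the login request at Zone level
        addEventHandler(SFSEventType.USER_LOGIN, LoginHandler.class);
    }

    public static class LoginHandler extends BaseServerEventHandler
    {
        @Override
        public void handleServerEvent(ISFSEvent event) throws SFSException
        {
            String userName = (String) event.getParameter(SFSEventParam.LOGIN_NAME);
            String cryptedPass = (String) event.getParameter(SFSEventParam.LOGIN_PASSWORD);
            ISession session = (ISession) event.getParameter(SFSEventParam.SESSION);

            // Placeholder for your own data layer
            String dbPassword = loadPasswordFromDb(userName);

            // Compare the client-submitted password with the stored one
            if (!getApi().checkSecurePassword(session, dbPassword, cryptedPass))
            {
                SFSErrorData errData = new SFSErrorData(SFSErrorCode.LOGIN_BAD_PASSWORD);
                errData.addParameter(userName);
                throw new SFSLoginException("Invalid credentials for: " + userName, errData);
            }
        }

        private String loadPasswordFromDb(String userName)
        {
            return "secret"; // replace with a real database query
        }
    }
}
```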

So far, so good. Everything works as it used to. Next, we'd like to jump into a game by either searching for an existing game or starting a new one. This is accomplished via a new set of match-making API calls.

Match-making

One of the most important differences when using the Overcast Cluster is in match-making. While the API for this task is very similar to the SFS2X one, there are several new concepts to keep in mind.

Match-making is always done using Match Expressions, which are part of the standard SFS2X API. If you're not familiar with them, you can learn more here.

In a cluster environment there can be hundreds of thousands of Game Rooms distributed among many servers, so managing them manually, or presenting Room lists for the player to browse, would be impractical. In the Overcast Cluster, the selection of a Room is instead done through specific filters provided by Match Expressions.

Let's say we have three types of games (e.g. "Texas Hold'em", "Gin Rummy" and "Pinochle") each with different sub-rules, minimum rank to join, number of players required, etc. Match Expressions allow us to filter through thousands of Rooms with specific criteria, find those we want and start playing.
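
As an illustration, a filter for the poker example above could look like the following sketch, built with the SFS2X Java client API; "gameType" and "rank" are hypothetical Room Variables defined by the game.

```java
import sfs2x.client.entities.match.MatchExpression;
import sfs2x.client.entities.match.NumberMatch;
import sfs2x.client.entities.match.StringMatch;

public class GameFilters
{
    // Match "Texas Hold'em" tables whose rank Room Variable is above 10
    public static MatchExpression holdemFilter()
    {
        return new MatchExpression("gameType", StringMatch.EQUALS, "TexasHoldem")
               .and("rank", NumberMatch.GREATER_THAN, 10);
    }
}
```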

In the Overcast Cluster there are several new steps happening behind the scenes when running a match-making request:

  • The Lobby executes the Load Balancing logic and obtains a reference to a Game Node that is best suited for the job.
  • The Match Expression is run on the Game Node to find (or create) a game matching the provided criteria.
  • When a game is found (either an existing one or a newly created one), the client receives a new event called CONNECTION_REQUIRED.

The CONNECTION_REQUIRED event tells the client to start a new connection towards a specific Game Node. Once the connection is established, both the login and join requests are executed automatically by the Game Node itself, so the player can immediately jump into the game.

In other words, a successful match-making request creates a reserved "seat" for the player on the Game Node, and upon login the client is immediately sent to the designated Room.
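
Conceptually, the client reacts by opening a second connection, as in the sketch below. Note that the parameter keys carrying the Game Node address ("host" and "port") are assumptions used for illustration; check the cluster client API reference for the actual event signature.

```java
import sfs2x.client.SmartFox;
import sfs2x.client.core.BaseEvent;

public class GameNodeConnector
{
    // A second SmartFox instance, separate from the Lobby connection
    private final SmartFox gameSfs = new SmartFox();

    // Invoked when the CONNECTION_REQUIRED event is received;
    // "host" and "port" are hypothetical parameter names
    public void onConnectionRequired(BaseEvent evt)
    {
        String host = (String) evt.getArguments().get("host");
        int port = (Integer) evt.getArguments().get("port");

        // Open the connection towards the designated Game Node; login and
        // Room join are then performed automatically by the Game Node itself
        gameSfs.connect(host, port);
    }
}
```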

Multiple client-side connections

As you may have intuited, one of the main differences when developing a cluster client is the use of multiple SmartFox connections, essentially one per server.

The typical flow of a cluster-based application is the following:

  1. Connect and login to the Lobby.
  2. Send a match-making request.
  3. When the CONNECTION_REQUIRED event is received, connect to the Game Node and start playing.
  4. When the game is over, leave the game Room and disconnect from the Game Node (see the sketch below).
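
Step 4 of the list uses only standard client API calls, as in this minimal sketch, where gameSfs is assumed to be the SmartFox instance connected to the Game Node:

```java
import sfs2x.client.SmartFox;
import sfs2x.client.requests.LeaveRoomRequest;

public class GameTeardown
{
    // Leave the game Room and close only the Game Node connection;
    // the Lobby connection remains available for the next match-making request
    public static void onGameComplete(SmartFox gameSfs)
    {
        gameSfs.send(new LeaveRoomRequest());
        gameSfs.disconnect();
    }
}
```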

This doesn't exhaust all possible use cases: there can be scenarios where more than two connections are active at the same time. We'll discuss these aspects in more detail in the next chapter about client-side development.

Load Balancing

Load Balancing (LB) is a new mechanism built into the Lobby server of an Overcast Cluster that aims at distributing clients as evenly as possible across all the active Game Nodes.

The default LB algorithm is called the Least Connection LB, as it searches for the least loaded server on every match-making request. This loads every server efficiently, always directing new players to the Game Node with the least traffic.

Besides the default implementation, custom LB algorithms can be plugged in at the Lobby level to replace the default behavior. For more details on the default LB, or on how to write your own LB algorithm, take a look at the specific guide in this documentation.
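
As a rough illustration of the idea (this is not the actual cluster API or its extension points), a least-connection strategy boils down to picking the node with the fewest connected clients:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class LeastConnectionExample
{
    // Hypothetical per-node statistics; the real LB works on the
    // cluster's own node state, not on this simplified type
    public record NodeStats(String nodeId, int connectedClients) {}

    // Pick the Game Node currently serving the fewest clients
    public static Optional<NodeStats> pickNode(List<NodeStats> activeNodes)
    {
        return activeNodes.stream()
                          .min(Comparator.comparingInt(NodeStats::connectedClients));
    }
}
```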

Game Node Orchestration

Another important component in the cluster is the Conductor, which runs on the Lobby Node and orchestrates the number of Game Nodes available in the system.

The Conductor monitors the load of Game Nodes and dynamically triggers the addition or removal of servers based on the current state. In particular, it relies on two separate pieces of logic:

  • ScaleUpCondition: defines the threshold that must be reached before new Game Nodes are added.
  • ScaleDownCondition: defines the threshold that must be reached before existing Game Nodes are removed.

These components are objects in the system that can be configured independently and act as triggers for the Conductor. For example, the ScaleUpCondition can be configured to trigger at 2000 CCU (average load per Game Node) while the ScaleDownCondition can be set to 100 CCU. When these thresholds are met, an event is fired in the Conductor, which in turn starts the process of adding or removing servers.
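
Using the numbers above, the two conditions effectively compare the average CCU per Game Node against their thresholds, as in this simplified illustration; the class and method names are hypothetical, not the Conductor's actual API.

```java
import java.util.List;

public class ScalingConditionsExample
{
    static final int SCALE_UP_CCU = 2000;  // avg CCU per node before adding servers
    static final int SCALE_DOWN_CCU = 100; // avg CCU per node before removing servers

    // True when the cluster is loaded enough to warrant a new Game Node
    static boolean shouldScaleUp(List<Integer> ccuPerNode)
    {
        return average(ccuPerNode) >= SCALE_UP_CCU;
    }

    // True when the cluster is idle enough to release a Game Node
    // (never removing the last one)
    static boolean shouldScaleDown(List<Integer> ccuPerNode)
    {
        return ccuPerNode.size() > 1 && average(ccuPerNode) <= SCALE_DOWN_CCU;
    }

    private static double average(List<Integer> values)
    {
        return values.stream().mapToInt(Integer::intValue).average().orElse(0);
    }
}
```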

There is much more to these components than we can discuss here, but these are advanced topics that we leave to their respective articles in this documentation.

Also, generally speaking, these mechanisms work for you behind the scenes in a semi-transparent way. Most of the time, all you need to do is tune a few parameters via the AdminTool and forget about it.