Configuring High Availability

The Panel platform supports high availability at all levels of the application architecture. Licensing rules permit high-availability architectures for the Sync Panel Data Center edition and higher.

The full-text search service, Panel Service, and PanelTool may be installed on multiple independent servers; the database may be configured as a fault-tolerant replica set; and the web application may be load-balanced.

Configuring high availability is considered an advanced deployment scenario, and customer support is available to guide you through the process. The following documentation provides a high-level overview without delving into the minutiae of high-availability architecture design.

Database Replica Set

The MongoDB database used by the Panel platform supports fault tolerance through replica sets. A replica set consists of a primary server and at least two secondary servers, one of which may be a non-data-storing voting server (an arbiter).

A replica set is created by installing separate database instances and then merging them into a replica set. For instructions, see: Converting to a Replica Set.

  • Install Identity Panel fully on at least two servers
  • Install Database component on a third server
  • On each database server:
    • Stop the database service
    • Navigate to the application database directory ( C:\Program Files\SoftwareIDM\IdentityPanelWeb\MongoDB )
    • Edit mongodb.cfg on each server and add replSet=rs0
    • Start the database service
  • On the primary server, navigate to the MongoDB\bin folder, open a command prompt, and run mongo.exe
  • Enter rs.initiate()
  • Run rs.conf() and rs.status() to verify that the replica set is operational
  • Run rs.add("host:27017") for each server to be joined to the primary.
    • If one of the replica set members should be an arbiter instead of a data-bearing member, edit the config file for that MongoDB instance to set journal=false, then run rs.addArb("host:27017") from the primary
  • Edit the MongoConnection string in Application\Web.config for each Sync Panel web application so that it references every server in the replica set, then restart the application, e.g. mongodb://server1:27017,server2:27017,server3:27017/?replicaSet=rs0&readPreference=primaryPreferred
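As an illustration of the final step, the connection string entry in Application\Web.config might look like the snippet below. The appSettings placement is an assumption; check your installed Web.config for where the MongoConnection setting actually lives. Note that the ampersand in the query string must be escaped as &amp; in XML.

```xml
<appSettings>
  <!-- MongoConnection listing every replica set member (assumed placement) -->
  <add key="MongoConnection"
       value="mongodb://server1:27017,server2:27017,server3:27017/?replicaSet=rs0&amp;readPreference=primaryPreferred" />
</appSettings>
```

With readPreference=primaryPreferred, reads go to the primary when it is available and fall back to a secondary during failover.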

Search Engine Fault Tolerance

The full-text search engine is made fault tolerant with a load-balanced, multi-server Elasticsearch deployment. The config.json file for each web application should have its SearchConnectionString setting set to the load-balanced URL.

The default port for search communications is 9200. If a web application must communicate with a remote search server, the connection may be secured with HTTPS. However, the standard deployment configuration uses only local search indices.
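For example, the search connection setting might be configured as follows; the SearchConnectionString key name is taken from this document, while the surrounding file layout and the hostname are assumptions for illustration.

```json
{
  "SearchConnectionString": "https://search-lb.example.com:9200"
}
```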

Load Balanced Web Application

The Panel platform web application may be placed behind most software and hardware load balancers. Session pinning (sticky sessions) is required when using the Aggregation functions in the Workflow module, because aggregation counters are not written to the database and are kept only in the .NET Session container. Although it is not recommended, this behavior may be modified by using the published SoftwareIDM.RuleFunctions project to change the aggregate functions' implementation.

Panel Service Fault Tolerance

Fault tolerance for Panel Service is achieved simply by installing it on multiple servers.

The settings for Schedules, Health Checks, and Workflow allow each action to be assigned to a preferred service instance. If the preferred service is not available, a failover service is chosen at random. A service is considered unavailable if it has made no schedule API queries for longer than the inactive threshold, which is ten minutes by default.

Each action that can be assigned to a preferred service may also be designated "Only Preferred". If this option is selected, the action will be skipped if the preferred service is offline.
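The failover rules described above can be sketched in a few lines of Python. This is a minimal illustration, not the product's implementation: the function name, the action fields (`preferred`, `only_preferred`), and the representation of services as a name-to-last-query-time mapping are all assumptions made for the example.

```python
import random
from datetime import datetime, timedelta

# Default inactive threshold from the documentation: ten minutes.
INACTIVE_THRESHOLD = timedelta(minutes=10)

def choose_service(action, services, now):
    """Pick the service instance that should run an action.

    services: maps service name -> datetime of its last schedule API query.
    action:   dict with 'preferred' (service name) and 'only_preferred' (bool).
    Returns the chosen service name, or None if the action should be skipped.
    """
    def is_available(name):
        last_seen = services.get(name)
        return last_seen is not None and now - last_seen <= INACTIVE_THRESHOLD

    # Use the preferred service whenever it is available.
    if is_available(action["preferred"]):
        return action["preferred"]
    # "Only Preferred" actions are skipped if the preferred service is offline.
    if action["only_preferred"]:
        return None
    # Otherwise, fail over to a randomly chosen available service.
    candidates = [name for name in services if is_available(name)]
    return random.choice(candidates) if candidates else None
```

Availability is judged purely by the time since the last schedule API query, matching the inactive-threshold behavior described above.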

Copyright © SoftwareIDM
