
Space Clustering SLA (Magic xpi 4.9)




Space clustering enforces the number of Space partitions, the number of partition backups, and the way they are spread on the available grid containers (GSCs).

In the three-node cluster covered in this document, the recommendation is to deploy the Space with a minimum of two partitions, each with one backup. In addition, the SLA should enforce that a partition and its backup never reside on the same host. The MAGIC_SPACE SLA should look like this:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:os-sla="http://www.openspaces.org/schema/sla"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.openspaces.org/schema/sla http://www.openspaces.org/schema/8.0/sla/openspaces-sla.xsd">

<os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="1" max-instances-per-machine="1">

</os-sla:sla>

</beans>

Space clustering is governed by the SLA definitions. This means that the grid will always try to maintain the defined clustering when deploying the Space.

Clustering SLA is defined in the two SLA files located in the config folder:

  • For the MAGIC_INFO space, clustering is defined in the magicinfo_sla.xml file.

  • For the MAGIC_SPACE space, clustering is defined in the magicxpi_sla.xml file.

The datasource.xml file, also located in the config folder, contains the JDBC connection details to the database and is used by both spaces.
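The article does not reproduce datasource.xml. As a rough orientation only, a Spring-style JDBC datasource definition typically looks like the following; the bean id, driver class, URL, and credentials below are illustrative assumptions, not taken from the product:

```xml
<!-- Illustrative sketch only: the actual bean names and properties in
     Magic xpi's datasource.xml may differ from this generic example. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://dbhost:3306/magicxpi"/>
    <property name="username" value="magic"/>
    <property name="password" value="changeit"/>
</bean>
```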

By default, the SLA files define two partitions with one backup each (four in total), and with a restriction that a primary partition and its backup partition cannot run under the same process.

For the grid to comply with the SLA definition, you need to ensure that you define enough GSCs.

In the above default configuration, since a primary partition and its backup cannot run under the same process, at least two GSCs must be running for a successful Space deployment.
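The required number of GSCs can be worked out from the SLA values alone. Here is a small sketch of the arithmetic; the function name is ours, not part of the product:

```python
def deployment_footprint(number_of_instances: int, number_of_backups: int):
    """Instances deployed, and minimum GSCs needed, when max-instances-per-vm="1".

    Total instances = partitions x (primary + backups). Because only
    instances of the *same* partition are barred from sharing a JVM, one
    GSC can still host one instance of every partition, so the minimum
    number of GSCs equals the copies per partition: 1 + backups.
    """
    total_instances = number_of_instances * (1 + number_of_backups)
    min_gscs = 1 + number_of_backups
    return total_instances, min_gscs

# Default SLA: two partitions with one backup each.
print(deployment_footprint(2, 1))  # (4, 2) -> 4 instances, at least 2 GSCs
```

The same arithmetic gives the figures quoted later in this article: one partition with two backups needs at least three GSCs.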

The most common SLA settings are:

  1. cluster-schema – This should always be set to partitioned-sync2backup, which means that data can be in partitions and each partition can have a backup that is synchronized with it.

  2. number-of-instances – The required number of Space partitions, meaning instances of the Magic processing unit, which will be loaded. The default is 1. If you have a lot of data in memory, you may need to increase this number.

  3. number-of-backups – The number of backup partitions for each primary partition. During development you can decide that you do not need a backup and you can set this value to 0. If the number-of-instances="2" and the number-of-backups="1", there will be four instances of the Magic processing unit.

  4. max-instances-per-vm – The number of instances of the same partition that can be deployed in the same JVM (GSC), that is, under the same process. If you leave the default as is, max-instances-per-vm="1", the primary and backup instances of the same partition will not be deployed on the same GSC.

  5. max-instances-per-machine – When this is set to 1, you ensure that a primary partition and its backup(s) cannot be provisioned to the same machine. Setting this to 1 should be restricted to a cluster containing a minimum of three machines; then, if one of the machines fails, the lost partitions will move to the third machine. It can also be used in a two-machine cluster, but there is a risk of having primary partitions with no backup until the second machine is back up and running.
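Taken together, all of the settings above sit on a single SLA element. The fragment below is an illustrative combination, not one of the shipped defaults:

```xml
<!-- Illustrative: combines the settings described above in one element -->
<os-sla:sla cluster-schema="partitioned-sync2backup"
            number-of-instances="2"
            number-of-backups="1"
            max-instances-per-vm="1"
            max-instances-per-machine="1">
</os-sla:sla>
```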

Here are some SLA examples:

  1. For a single partition with two backups, and primary and backup partitions on separate GSCs, set the following in the magicxpi_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="1"
    number-of-backups="2" max-instances-per-vm="1">

    The above example requires at least three containers on a single machine. Each container will hold a single instance of the partition.

  • Using two backups is not recommended. This example is shown here to illustrate how the required number of GSCs is calculated.

  • Users should update the magicinfo_sla.xml and magicxpi_sla.xml files on both machines in the cluster. If this is not done, MAGIC_SPACE might enter a compromised state.

  2. For two partitions with one backup each, and primary and backup partitions on separate GSCs, set the following in the magicxpi_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2"
    number-of-backups="1" max-instances-per-vm="1">

    The above example requires at least two containers on a single machine. Each container will hold two instances (the primary of one partition and the backup of the other).

  3. For two partitions with one backup each, and primary and backup partitions on separate machines, set the following in the magicxpi_sla.xml file:

    <os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2"
    number-of-backups="1" max-instances-per-machine="1">

    The above example requires at least two machines with at least one container on each machine. Each machine's container will hold two instances. If there is a cluster of two machines and one of the machines fails, the Magic Space deployment will be incomplete (compromised), and no backup partition will replace the lost backup partitions until the failed machine starts up again.

*** The use of max-instances-per-machine="1" should be restricted to a cluster containing a minimum of three machines. Then, if one of the machines fails, the lost partitions will move to the third machine.

*** The number of GSCs is defined in the magicxpi-gs-agent.bat file, found under the GigaSpaces-xpi\bin folder. In the command starting with call gs-agent.bat, you should define the number of GSCs to match the number of required partitions by modifying the number next to the gsa.gsc parameter.
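As an orientation, the relevant line in magicxpi-gs-agent.bat typically resembles the following. The exact set of gsa.* arguments varies by installation, so treat everything except the gsa.gsc count as an assumption; only the number after gsa.gsc is the point here:

```bat
rem Illustrative: start 2 GSCs to satisfy the default SLA (2 partitions, 1 backup each)
call gs-agent.bat gsa.global.lus 2 gsa.global.gsm 2 gsa.gsc 2
```

To move to, say, one partition with two backups on one machine, the gsa.gsc value would be raised to 3 to match the three required containers.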
