Space Deployment (Magic xpi 4.9)

Magic xpi has two spaces and a processing unit:

  • MAGIC_SPACE – This space is responsible for the project's metadata, management, recovery, and messaging.

  • MAGIC_INFO – This space holds the Activity Log as well as statistics for the Monitor and ODS data. The MAGIC_INFO space can also read from the database if ODS records that are not in the space are requested (Since version: 4.5).

  • MGMirror – This processing unit (PU) is responsible for managing the write operation of the Activity Log and ODS data to the database (Since version: 4.5).

The Magic xpi OS service starts the Grid Service Agent (GSA), which in turn runs the following from the <Magic xpi installation>\Runtime\GigaSpaces\config\gsa folder:

  • An application called mgdeploy that is responsible for the Magic Space (MAGIC_SPACE) deployment.

  • An application called mginfo that is responsible for the MAGIC_INFO space deployment.

  • An application called mgmirror that is responsible for the MGMirror processing unit deployment.

The deployment process uses SLA settings defined in the magicxpi_sla.xml file and the magicinfo_sla.xml file (as explained in the Space Clustering SLA topic).
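
To give a sense of what these settings look like, the fragment below is a minimal sketch of an SLA definition in GigaSpaces' os-sla XML format. The cluster schema, instance counts, and max-instances-per-machine value shown here are illustrative assumptions only; the actual contents of your magicxpi_sla.xml may differ, so treat the Space Clustering SLA topic as the authoritative reference:

    <os-sla:sla cluster-schema="partitioned-sync2backup"
                number-of-instances="2"
                number-of-backups="1"
                max-instances-per-machine="1" />

With number-of-backups="1", every partition gets a backup instance, and max-instances-per-machine="1" keeps a partition's primary and backup instances from being provisioned on the same machine.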

The deployment process will attempt to spread the partitions on the available containers (GSCs) in such a way that a single server failure will not affect the Magic Space operation and will not cause any data loss.

This provisioning process is automatic, but once it completes, the partitions are not rearranged on their own.

If only one machine was running during the Magic Space deployment process, and the SLA definition did not restrict the number of instances per machine (max-instances-per-machine), that machine will hold all of the partitions. Containers started on other machines after the deployment completes will not hold any Magic Space partitions, and the single machine currently running the Magic Space is a single point of failure.

When you have more than one machine that is part of the grid, you will want to have control over when the Magic Space is deployed. When the Grid Service Agent (GSA) loads, and the machine becomes a part of the grid, that machine will not host a part of the Magic Space if there is already a Magic Space deployed on the grid.

To spread the partitions over multiple machines when one machine holds all of the partitions, you have the following options:

  1. You can use the max-instances-per-machine restriction in the SLA. This method should only be used with a cluster of at least three machines, and it ensures that at least two machines in the grid will run the Space partitions.

    1. In the magicxpi_sla.xml file, define the max-instances-per-machine="1" entry as explained in the Space Clustering SLA topic.

    2. When the automatic deployment process starts, it will not complete until at least two machines are hosting the Magic Space partitions.

  2. Magic xpi can automatically monitor and rebalance the single points of failure of both primary and backup partitions running on the same host. Periodic checks are made to determine whether a rebalance of a partition's instance is required. This mechanism is controlled by the following two properties that are defined in the mgdeploy.xml file (located in the <Magic xpi installation>\Runtime\GigaSpaces\config\gsa directory):

    • rebalance-partitions – When this property is set to true (the default), or when it does not exist, the rebalancing mechanism is activated. (An example of disabling it appears after the snippet below.)
    • rebalance-interval – This property defines the interval, in minutes, between rebalance checks. If the property does not exist, the default is 5 minutes.

    These properties appear in the mgdeploy.xml file as follows:

    <argument>-rebalance-partitions</argument>
    <argument>true</argument>
    <argument>-rebalance-interval</argument>
    <argument>5</argument>

    Since version: 4.1
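
    For example, to switch the periodic rebalancing off, you would change the value that follows the -rebalance-partitions argument from true to false, using the same file and format shown above:

    <argument>-rebalance-partitions</argument>
    <argument>false</argument>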

  3. You can manually rearrange the partitions from the GigaSpaces UI. To do this, open the GigaSpaces UI Hosts tab, and select the Hosts entry at the top of the hierarchy tree on the left. In the Services pane, on the right side of the GigaSpaces UI screen, you will see a tree of containers and partitions. You can now select a partition (either primary or backup) and drag it to a different container.

  4. You can restart the GSC that hosts the backup partition, and GigaSpaces will re-provision the grid. You do this as follows:

    1. In the GigaSpaces UI, select the GSC node of the backup partition.

    2. From the context menu, select Restart.

GigaSpaces will attempt to place the backup container on the second computer, which provides redundancy for your application. If the secondary machine is not available, GigaSpaces will create the backup partition on the current machine. When the secondary machine becomes available again, GigaSpaces may not automatically move the backup back to the secondary computer, so you may need to perform the operation manually.

Grid Components – Memory Allocation

Memory allocation for the various GigaSpaces entities is determined in a batch file called magicxpi-gs-agent.bat. This file is found under the <Magic xpi installation>\Runtime\GigaSpaces-xpi\bin folder.

In this batch file, you will find the GigaSpaces memory-related settings section.

The GSA, GSM, and LUS entities have quite a small memory footprint, so you can leave their settings as they are. The GSC is the container that runs the Space partitions and holds all of the data that flows through the projects. If you encounter any memory-related issues with the GSC, you should consider increasing the GSC memory allocation to at least 1024 MB.
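
As an illustration, the memory settings in that section boil down to Java heap options passed to each grid component. The variable names below follow GigaSpaces' conventional *_JAVA_OPTIONS environment variables and are an assumption on our part; open your own magicxpi-gs-agent.bat to see the exact names and defaults it uses:

    REM Small, fixed-footprint components can usually stay at their defaults.
    set GSA_JAVA_OPTIONS=-Xmx256m
    set GSM_JAVA_OPTIONS=-Xmx256m
    set LUS_JAVA_OPTIONS=-Xmx256m
    REM The GSC hosts the Space partitions and all project data, so give it
    REM the most heap - at least 1024m if you hit memory-related issues.
    set GSC_JAVA_OPTIONS=-Xmx1024m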

GigaSpaces Services Settings

In the magicxpi-gs-agent.bat file, you will see the following line:

call gs-agent.bat gsa.gsc 2 gsa.global.gsm 2 gsa.lus 1 gsa.global.lus 0 gsa.mgmirror 1 gsa.mgdeploy 1 gsa.mginfo 1

This line has the following entries:

  • gsa.gsc – The number of GSCs to deploy. This number should match the number of required partitions (see the example after this list).

  • gsa.global.gsm – The number of GSMs to be deployed and managed globally on the grid.

  • gsa.lus – The number of LUSs to start locally. The value of this entry depends on whether the machine is responsible for running the LUS or not.

  • gsa.global.lus – The number of LUSs to be deployed and managed globally on the grid.

  • gsa.mgmirror – The number of mirrors to deploy.

  • gsa.mgdeploy – The number of MAGIC_SPACEs to deploy.

  • gsa.mginfo – The number of MAGIC_INFO spaces to deploy.
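
As a hypothetical example, if your SLA required four partitions instead of two, you would raise the gsa.gsc count to match and leave the other entries unchanged:

    call gs-agent.bat gsa.gsc 4 gsa.global.gsm 2 gsa.lus 1 gsa.global.lus 0 gsa.mgmirror 1 gsa.mgdeploy 1 gsa.mginfo 1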

For additional information regarding GigaSpaces configuration, refer to: http://docs.gigaspaces.com/xap110adm/moving-into-production-checklist.html
