B2B Data Persistence - IBM Websphere

We describe B2B Persistence in detail.

Transaction store

The XB60 requires a transaction store to persist B2B transaction metadata and to provide state management for processing AS messages. The embedded database that is used for metadata persistence is not accessible to users of the system except through the B2B Transaction Viewer.

The major issue with persisting data to a database and hard drive is that the database or hard drive can fill to capacity extremely quickly when supporting high transaction volumes. To prevent the database and hard drive from filling up with B2B data, you can set an Archive and Purge process for each B2B Gateway Service, as described in “Archive tab”. The amount of disk space allowed for the local persistence store can be changed to control how much the database file is allowed to grow; this setting applies to all B2B data in all domains on the XB60. The B2B Persistence object is only available to the Admin user in the Default domain and can be configured by using the following procedure:

  1. Log on to the Appliance as Admin into the Default domain.
  2. Select Objects → B2B Persistence from the left navigation menu.
  3. The Configure B2B Persistence view is displayed.
  4. You have the option of setting the Admin State and RAID Volume fields; under normal circumstances, you do not need to change the defaults.
  5. Change the Storage Size field to a higher number, from 1024 to 65536 megabytes. The default is 1024 megabytes.
  6. Click Apply to save the object to the running configuration.
  7. Optionally, click Save Config to save the object to the startup configuration.
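The same change can be sketched from the CLI. The object and property names below (b2b-persistence, storage-size) are assumptions modeled on the WebGUI labels, not verified syntax:

```
xb60# configure terminal
xb60(config)# b2b-persistence
# Allow the embedded database to grow to 4096 MB (default is 1024 MB)
xb60(config b2b-persistence)# storage-size 4096
xb60(config b2b-persistence)# exit
# Persist the change to the startup configuration
xb60(config)# write memory
```

Keep in mind that the storage size cannot be lowered again after it has been raised.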

Important: When you change the B2B Persistence Store value, you allocate the maximum size that the embedded database can grow on the hard drive. After you change the value of the B2B Persistence Store to a higher number, it cannot be changed to a lower number.

Configure B2B Persistence


Document storage

The DataPower XB60 device utilizes a pair of mirrored hard disk drives for metadata and document persistence. The drives contain an encrypted area and an unencrypted area. Transaction metadata extracted from each message is retained in the B2B Persistence store (a database file) in the unencrypted area of the hard disk; this area must not be used for document storage. This space can be shared with configuration and log files if the appliance is configured to use the local:///ondisk/ directory. The default setup of the device does not use the hard disk for anything other than B2B message contents and metadata.

By default, the encrypted area of the hard disk drives is used for storing B2B payloads for non-repudiation and viewing purposes. The encrypted portion of the local drive is only 73 GB in size, so it becomes extremely important to Archive and Purge data frequently when using the default encrypted disk location. The XB60 also gives you the option of storing payloads off the device using either an NFS mount point or on an iSCSI disk subsystem.

The document storage location for messages is defined on a service-by-service basis in the Document Storage Location parameter in the B2B Gateway Main tab. This location needs to be a locally accessible file system: iSCSI, NFS, or RAID. The following sections describe how to configure both iSCSI and NFS on the XB60.

Note: We advise that you do not set the document storage location to a directory on the flash drive (local:///). Storing B2B payload data on the flash drive can fill the flash drive to capacity.

iSCSI
The DataPower XB60 appliance can connect to a Storage Area Network (SAN) using the iSCSI protocol starting with the 9004-based platforms. SANs are networks designed specifically for storage traffic. iSCSI (Internet Small Computer System Interface), defined by Request for Comments (RFC) 3720, carries SCSI commands over IP networks. Like other SAN protocols, iSCSI is a block-level protocol: an initiator (client) connects to a target (server), and from the initiator's perspective the connection looks like a SCSI device.

Currently, iSCSI on the XB60 appliance is implemented via a host bus adapter (HBA). The network ports eth1 and eth2 on the XB60 function both as Ethernet devices and as HBAs. When functioning as Ethernet devices, the physical ports are defined on the eth1 and eth2 interfaces; when functioning as HBA devices, they are defined on iscsi1 and iscsi2. The iscsi1 and eth1 interfaces share the same physical Ethernet port, as do iscsi2 and eth2. From a configuration point of view, iscsi1 and eth1 are separate devices that share the same physical port and require different IP addresses.

Key factors for using an external iSCSI device are:

  • Data storage capacity compared to local drives.
  • Flexibility and scalability when more storage space is needed.
  • XB60 device failure will not cause any loss of historical payload data.

Note: Data is not encrypted on the iSCSI location. The external drive subsystem configuration in use is responsible for all data security.

iSCSI reference objects
This section outlines the objects that are needed to configure the XB60 device as an iSCSI initiator. The appliance, through the iSCSI HBA, can use the iSCSI protocol to communicate with the remote iSCSI server. The iSCSI HBA, acting as the software initiator, establishes connectivity, and when connected, an iSCSI session is started. The appliance provides two Ethernet ports that support iSCSI.

The following components need to be configured to use iSCSI for document storage:

  • iSCSI host bus adapter object: The HBA establishes communications between the appliance and the remote iSCSI server.
  • iSCSI target object: The target defines the connection information to the remote iSCSI server.
  • iSCSI remote server: The remote server communicates with the iSCSI initiator using the iSCSI protocol. The iSCSI remote server does not need to be a real storage device; several free software targets can be downloaded and easily configured.
  • Initialized iSCSI Volume object: Initializing the iSCSI volume allows it to be made active. The iSCSI volume must be disabled before it can be initialized.

Configuring the iSCSI host bus adapter
This section describes the information that is needed to configure the iSCSI host bus adapter (HBA) on the device. The HBA is the hardware that is responsible for managing iSCSI communications; it initiates the iSCSI session between the DataPower appliance and the remote iSCSI target.

The HBA object can be found on the WebGUI by clicking Objects → Network - Settings → iSCSI Host Bus Adapter.

iSCSI-enabled host bus adapter


We need the following information to enable the iSCSI HBA object:

  • iSCSI Name: A valid iSCSI name for this HBA instance. There is a predefined iSCSI Qualified Name (IQN), but it is not visible. If you leave the field blank, it defaults to the predefined name. It is possible to assign your own iSCSI name. To view this value, select Status → Other Network → iSCSI Host Bus Adapter Status. There is currently no way to revert to the default IQN after the predefined iSCSI name has been modified.

iSCSI host bus adapter status


  • DHCP (Dynamic Host Configuration Protocol): This setting determines whether or not DHCP will be used for this interface. This setting is optional if a valid IP Address is supplied.
  • IP Address: The IP address assigned to this interface followed by the subnet mask. The subnet mask can be in the Classless Inter-Domain Routing (CIDR) format as a suffix onto the end of the IP address or in dotted quad format. This setting is optional if DHCP is set to On. In our example, we use DHCP.
  • Default Gateway: This field is optional. It is the default gateway for this interface.
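Putting these fields together, a hedged CLI sketch for enabling the iscsi1 HBA with DHCP might look as follows; the interface keywords shown are assumptions modeled on the WebGUI fields, not verified syntax:

```
xb60(config)# interface iscsi1
# Leave the iSCSI Name blank to keep the predefined IQN;
# assigning your own name is irreversible, as noted above
xb60(config interface iscsi1)# use-dhcp on
xb60(config interface iscsi1)# admin-state enabled
xb60(config interface iscsi1)# exit
```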

Default HBA objects


As you can see in the figure above, there are two predefined interfaces that can be used for iSCSI: iscsi1 and iscsi2. The Add button is grayed out because you cannot add any additional interfaces for iSCSI. The parameters that we used to enable one of the iSCSI HBA interfaces are shown in the following figure.

The iscsi1 HBA instance with a user-defined IQN name


Configuring the iSCSI target object
The iSCSI Target object is depicted below and has three required fields: the IQN name of the remote iSCSI server, the host name or IP address of the remote server, and the iscsi1 HBA instance that we enabled in the previous section. The iSCSI target waits for SCSI commands; it cannot initiate an iSCSI session. The iSCSI Target object is a connection instance to a remote iSCSI target.

The iSCSI Target object can be found by using the WebGUI in your application domain and clicking Objects → Network - Settings → iSCSI Target.

iSCSI Target connection instance of the remote iSCSI target


Configuring the iSCSI Volume
The iSCSI Volume object is depicted below and has three required fields: the directory where the file system is mounted, the logical unit number (LUN) that is provided by the remote iSCSI server, and the iSCSI Target object that we defined in the previous section. The iSCSI Target object is actually the remote connection instance to the iSCSI server. After the object has been configured for the first time, you need to issue the “Initialize File System” action. This action is available from the WebGUI or from the CLI by using the init-fs command; it partitions and formats the iSCSI device. After the object is in the “up” admin-state, the volume is mounted.
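A hedged CLI sketch of this sequence, using the init-fs action named above; the surrounding object keywords and the target name iSCSITarget1 are assumptions for illustration:

```
xb60(config)# iscsi-volume foo
# Mount directory, LUN from the remote server, and the target object
xb60(config iscsi-volume foo)# directory foo
xb60(config iscsi-volume foo)# lun 0
xb60(config iscsi-volume foo)# target iSCSITarget1
# The volume must be disabled before it can be initialized
xb60(config iscsi-volume foo)# admin-state disabled
xb60(config iscsi-volume foo)# init-fs
xb60(config iscsi-volume foo)# admin-state enabled
xb60(config iscsi-volume foo)# exit
```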

The iSCSI Volume object can be found on the WebGUI in your application domain by selecting Objects → Network - Settings → iSCSI Volume.

iSCSI Volume


After the file system has been initialized and the volume object has been enabled, you will see the foo subdirectory under local and logstore. Each application domain contains these subdirectories. These subdirectories are not shared across application domains.

File system directory


Setting the Document Storage Location of the B2B Gateway
To store documents off the appliance, set the Document Storage Location of the B2B Gateway to an iSCSI server or an NFS mount. These locations cannot be set using the WebGUI; they must be set in the CLI using the doc-location command, and the setting is not visible in the WebGUI until it has been applied to the B2B Gateway Service. The example below shows how to use the CLI to set the Document Storage Location to an iSCSI server on the B2B Gateway Service called HubOwner. To learn more about the CLI, refer to the user documentation.

Setting the Document Storage Location using the CLI
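The example itself is not reproduced here; what follows is a hedged reconstruction. HubOwner and the doc-location command come from the text, while the URL local:///foo is an assumption based on the foo subdirectory that enabling the iSCSI Volume creates under local:

```
xb60# configure terminal
xb60(config)# b2b-gateway HubOwner
# Point document storage at the iSCSI-backed subdirectory
xb60(config b2b-gateway HubOwner)# doc-location local:///foo
xb60(config b2b-gateway HubOwner)# exit
xb60(config)# write memory
```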

Here you can see that the Document Storage Location is now set to the subdirectory foo. This subdirectory was created when we enabled the iSCSI Volume.

B2B Gateway Document Storage Location set to iSCSI server


NFS
The Network File System (NFS) protocol is another way of storing payload data off the appliance. NFS is a network file system protocol that allows a client application access to files over a network as though the network devices were attached to its local file system. Ports must be opened through the inner firewall to support NFS (2049 and 111 both TCP and User Datagram Protocol (UDP)).

NFS mounts can be static or dynamic. A dynamic mount is constructed via a URL of the form dpnfs://hostname/path/file, which causes the directory hostname:/path to be automatically mounted by NFS; it remains mounted until it times out due to inactivity. Defining a static mount allows the NFS Static Mount object to be referenced in the Document Storage Location URL and avoids the connection overhead associated with dynamic mounting. Mounted NFS exports are exposed as folders within the appliance's file system. The following section provides details about how to configure the Document Storage Location to write files to a static mount point defined on an external server.

Key factors for using an NFS mount point:

  • Data storage capacity compared to local drives.
  • Flexibility and scalability when more storage space is needed.
  • XB60 device failure will not cause any loss of historical data.

NFS reference objects
This section outlines the objects that need to be configured to use a static mount point to store copies of payload data:

  • NFS Client Settings: This field contains the client properties for either the dynamic mount or static mount. This object must be enabled in the default domain by a device administrator and the Mount Refresh Time must be set. If authentication is required, DataPower supports NFS Version 4. This version of the protocol provides access to files on mounted file systems that use Kerberos security. To access the configuration object, click Objects → Network - Settings → NFS Client Settings in the left navigation menu.
  • NFS Static Mounts: This object defines the connection information to the remote static mount.

Configuring the NFS Static Mounts
This section describes the information needed to configure the NFS Static Mounts. Defining a static mount allows you to reference the NFS Static Mount object in the document storage location URL.

The NFS Static Mounts object is depicted in Figure below and has one required field: the Remote NFS Export. This field uses the following format host:/path (notice only a single slash is used), where host is the DNS name or IP address of the NFS server, and path is the path exported by the host to mount.

The NFS server must be configured to accept requests from the IP address of the DataPower XB60 device and any firewalls that are between the XB60 and the NFS must be configured to allow the connection. This example uses AUTH_SYS authentication, and the NFS server must also be configured to accept that form of authentication. Kerberos can alternately be used for authentication. It might be a better choice, because it provides data integrity and confidentiality if it is supported by the NFS server.
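For comparison, a hedged CLI sketch of a static mount definition; the object keyword nfs-static-mount, the object name b2bmount, and the host and path are assumptions for illustration:

```
xb60(config)# nfs-static-mount b2bmount
# Remote NFS Export in host:/path format (note the single slash)
xb60(config nfs-static-mount b2bmount)# remote nfsserver.example.com:/export/b2b
xb60(config nfs-static-mount b2bmount)# exit
```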

The NFS Static Mounts object can be found in the WebGUI in your application domain by using the left navigation menu to select Objects → Network - Settings → NFS Static Mounts.

Sample NFS Static Mounts object


NFS mount locations cannot be set using the WebGUI. An NFS mount location must be set in the CLI using the doc-location command and is not visible until it has been set on the B2B Gateway Service. The URL must be in the format nfs-[object name]:, where [object name] is the name of the NFS Static Mount object. When a path is not specified, the file is written to wherever the static mount points. Refer to the example below to set the Document Storage Location on the B2B Gateway Service called HubOwner using the CLI.

CLI for NFS mount point
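The example is not reproduced here; a hedged reconstruction follows, using the nfs-[object name]: URL format described above with an assumed static mount object named b2bmount:

```
xb60# configure terminal
xb60(config)# b2b-gateway HubOwner
# No path is given, so files land wherever the static mount points
xb60(config b2b-gateway HubOwner)# doc-location nfs-b2bmount:
xb60(config b2b-gateway HubOwner)# exit
xb60(config)# write memory
```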

This setting is not visible in the WebGUI; the Document Storage Location is displayed as (default). As stated previously, the CLI is required for verification. When the mount is created correctly, you see the mounted directory (nfs-b2bmount in our example) in a directory listing from the CLI.

Monitoring hard drive space

The hard drive space is shared across all B2B Gateway objects, so it is a good idea to monitor the available space. In Version 3.7.3 of the firmware, there is no WebGUI interface available for monitoring.

CLI
There are two commands that you can run in the CLI to monitor the disk space and the size of the persistence storage. Example below shows the results of the two commands.

Commands to monitor disk space
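The example output is not reproduced here. The first command is likely show filesystem, which reports free and total space for the appliance's file systems; the name of the second, persistence-specific command is an assumption:

```
xb60# show filesystem
# Reports free/total space for the encrypted, temporary, and internal areas
xb60# show b2b-persistence
# Assumed status command for the B2B Persistence object and its storage size
```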

