
Continuent Tungsten 1.5 Manual

Copyright © 2013 and beyond Continuent, Inc.

Abstract

This manual documents Continuent Tungsten 1.5, providing Upgrade and Release Note information.

Build date: 2015-01-24, Revision: 1231

Up to date builds of this document: Continuent Tungsten 1.5 Manual (Online), Continuent Tungsten 1.5 Manual (PDF)


Table of Contents

Preface
    1. Legal Notice
    2. Conventions
1. Deployment
    1.1. Starting and Stopping Continuent Tungsten
        1.1.1. Restarting the Replicator Service
        1.1.2. Restarting the Connector Service
        1.1.3. Restarting the Manager Service
    1.2. Configuring Startup on Boot
    1.3. Upgrading Continuent Tungsten
    1.4. Downgrading from 2.0.1 to 1.5.4
A. Release Notes
    A.1. Continuent Tungsten 1.5.4 GA (Not yet released)

List of Tables

1.1. Key Terminology


Preface

1. Legal Notice

The trademarks, logos, and service marks in this Document are the property of Continuent or other third parties. You are not permitted to use these Marks without the prior written consent of Continuent or such appropriate third party. Continuent, Tungsten, uni/cluster, m/cluster, p/cluster, uc/connector, and the Continuent logo are trademarks or registered trademarks of Continuent in the United States, France, Finland and other countries.

All Materials on this Document are (and shall continue to be) owned exclusively by Continuent or other respective third party owners and are protected under applicable copyrights, patents, trademarks, trade dress and/or other proprietary rights. Under no circumstances will you acquire any ownership rights or other interest in any Materials by or through your access or use of the Materials. All right, title and interest not expressly granted is reserved to Continuent.

All rights reserved.

2. Conventions

This documentation uses a number of text and style conventions to indicate and differentiate between different types of information:

• Text in this style is used to show an important element or piece of information. It may be used and combined with other text styles as appropriate to the context.

• Text in this style is used to show a section heading, table heading, or particularly important emphasis of some kind.

• Program or configuration options are formatted using this style. Options are also automatically linked to their respective documentation page when this is known. For example, tpm --hosts links automatically to the corresponding reference page.

• Parameters or information explicitly used to set values to commands or options is formatted using this style.

• Option values, for example on the command-line, are marked up using this format: --help. Where possible, all option values are directly linked to the reference information for that option.

• Commands, including sub-commands to a command-line tool, are formatted using Text in this style. Commands are also automatically linked to their respective documentation page when this is known. For example, tpm links automatically to the corresponding reference page.

• Text in this style indicates literal or character sequence text used to show a specific value.

• Filenames, directories or paths are shown like this /etc/passwd. Filenames and paths are automatically linked to the corresponding reference page if available.

Bulleted lists are used to show lists, or detailed information for a list of items. Where this information is optional, a magnifying glass symbol enables you to expand, or collapse, the detailed instructions.

Code listings are used to show sample programs, code, configuration files and other elements. These can include both user input and replaceable values:

shell> cd /opt/staging
shell> unzip continuent-tungsten-1.5.4-360.zip

In the above example, command-lines to be entered into a shell are prefixed using shell. This shell is typically sh, ksh, or bash on Linux and Unix platforms, or Cmd.exe or PowerShell on Windows.

If commands are to be executed using administrator privileges, each line will be prefixed with root-shell, for example:

root-shell> vi /etc/passwd

To make the selection of text easier for copy/pasting, ignorable text, such as the shell> prompt, is ignored during selection. This allows multi-line instructions to be copied without modification, for example:

mysql> create database test_selection;
mysql> drop database test_selection;

Lines prefixed with mysql> should be entered within the mysql command-line.

If a command-line or program listing entry contains lines that are too wide to be displayed within the documentation, they are marked using the » character:


the first line has been extended by using a » continuation line

They should be adjusted to be entered on a single line.

Text marked up with this style is information that is entered by the user (as opposed to generated by the system). Text formatted using this style should be replaced with the appropriate file, version number or other variable information according to the operation being performed.

In the HTML versions of the manual, blocks or examples that contain user input can be easily copied from the program listing. Where there are multiple entries or steps, use the 'Show copy-friendly text' link at the end of each section. This provides a copy of all the user-enterable text.


Chapter 1. Deployment

Creating a Continuent Tungsten Dataservice using Continuent Tungsten combines a number of different components, systems, and functionality, to support a running database dataservice that is capable of handling database failures, complex replication topologies, and management of the client/database connection for both load balancing and failover scenarios.

Before covering the basics of creating different dataservice types, there are some key terms that will be used throughout the setup and installation process that identify different components of the system. These are summarised in Table 1.1, “Key Terminology”.

Table 1.1. Key Terminology

Tungsten Term Traditional Term Description

composite dataservice Multi-Site Cluster A configured Continuent Tungsten service consisting of multiple dataservices, typically at different physical locations.

dataservice Cluster A configured Continuent Tungsten service consisting of dataservers, datasources and connectors.

dataserver Database The database on a host. Dataservers include MySQL, PostgreSQL or Oracle.

datasource Host or Node One member of a dataservice and the associated Tungsten components.

staging host - The machine from which Continuent Tungsten is installed and configured. The machine does not need to be the same as any of the existing hosts in the cluster.

staging directory - The directory where the installation files are located and the installer is executed. Further configuration and updates must be performed from this directory.

connector - A connector is a routing service that provides management for connectivity between application services and the underlying dataserver.

Witness host - A witness host is a host that can be contacted using the ping protocol to act as a network check for the other nodes of the cluster. Witness hosts should be on the same network and segment as the other nodes in the dataservice.

Before attempting installation, there are a number of prerequisite tasks which must be completed to set up your hosts, database, and Continuent Tungsten service:

1. Set up a staging host in [Continuent Tungsten 2.0 Manual] from which you will configure and manage your installation.

2. Configure each host in [Continuent Tungsten 2.0 Manual] that will be used within your dataservice.

3. Configure your MySQL installation in [Continuent Tungsten 2.0 Manual], so that Continuent Tungsten can work with the database.

The following sections provide guidance and instructions for creating a number of different deployment scenarios using Continuent Tungsten.

1.1. Starting and Stopping Continuent Tungsten

To stop all of the services associated with a dataservice node, use the stopall script:

shell> stopall
Stopping Tungsten Connector...
Stopped Tungsten Connector.
Stopping Tungsten Replicator Service...
Stopped Tungsten Replicator Service.
Stopping Tungsten Manager Service...
Stopped Tungsten Manager Service.

To start all services, use the startall script:

shell> startall
Starting Tungsten Manager Service...

Starting Tungsten Replicator Service...

Starting Tungsten Connector...

1.1.1. Restarting the Replicator Service

Warning

Restarting a running replicator temporarily stops and restarts replication. If the datasource has not been shunned, a failover will occur. Either set maintenance mode within cctrl (see Performing Database or OS Maintenance in [Continuent Tungsten 2.0 Manual]) or shun the datasource before restarting the replicator (Shunning a Datasource in [Continuent Tungsten 2.0 Manual]).

To shutdown a running Tungsten Replicator you must switch off the replicator:

shell> replicator stop
Stopping Tungsten Replicator Service...
Stopped Tungsten Replicator Service.

To start the replicator service if it is not already running:

shell> replicator start
Starting Tungsten Replicator Service...

1.1.2. Restarting the Connector Service

Warning

Restarting the connector service will interrupt the communication of any running application or client connecting through the connector to MySQL.

To shutdown a running Tungsten Connector you must switch off the connector:

shell> connector stop
Stopping Tungsten Connector Service...
Stopped Tungsten Connector Service.

To start the connector service if it is not already running:

shell> connector start
Starting Tungsten Connector Service...
Waiting for Tungsten Connector Service.....
running: PID:12338

If the cluster was configured with auto-enable=false then you will need to put each node online individually.

1.1.3. Restarting the Manager Service

The manager service is designed to monitor the status and operation of each of the datasources within the dataservice. In the event that the manager has become confused with the current configuration, for example due to a network or node failure, the managers can be restarted. This forces the managers to update their current status and topology information.

Before restarting managers, the dataservice should be placed in maintenance policy mode. In maintenance mode, the connectors will continue to service requests and the manager restart will not be treated as a failure.

To restart the managers across an entire dataservice, each manager will need to be restarted. The dataservice must be placed in maintenance policy mode first, then:

1. To set the maintenance policy mode:

[LOGICAL:EXPERT] /dsone > set policy maintenance

2. On each datasource in the dataservice:

a. Stop the service:

shell> manager stop

b. Then start the manager service:

shell> manager start

3. Once all the managers have been restarted, set the policy mode back to automatic:

[LOGICAL:EXPERT] /dsone > set policy automatic
policy mode is now AUTOMATIC
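The restart sequence above can be collected into a small script. This is a sketch only, not from the manual: the host names, the tungsten login, and the use of ssh are assumptions, and the policy changes are left as interactive cctrl steps. DRY_RUN=echo (the default here) prints each command instead of executing it.

```shell
#!/bin/sh
# Sketch of a rolling manager restart (hypothetical host names and
# login; the cctrl policy changes are performed interactively).
DRY_RUN=${DRY_RUN:-echo}   # set DRY_RUN= to really execute

rolling_manager_restart() {
    echo "# in cctrl first: set policy maintenance"
    for host in "$@"; do
        # Stop, then start, the manager on each datasource in turn.
        $DRY_RUN ssh tungsten@"$host" manager stop
        $DRY_RUN ssh tungsten@"$host" manager start
    done
    echo "# in cctrl afterwards: set policy automatic"
}

rolling_manager_restart host1 host2 host3
```

Because the connectors keep servicing requests in maintenance mode, the per-host loop can run at any pace; the only hard requirement is restoring automatic policy mode at the end.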

1.2. Configuring Startup on Boot

By default, Continuent Tungsten does not start automatically on boot. To enable Continuent Tungsten to start at boot time, use the deployall script provided in the installation directory to create the necessary boot scripts:

shell> sudo /opt/continuent/tungsten/dataservice-home/bin/deployall
Adding system startup for /etc/init.d/tmanager ...


/etc/rc0.d/K80tmanager -> ../init.d/tmanager
/etc/rc1.d/K80tmanager -> ../init.d/tmanager
/etc/rc6.d/K80tmanager -> ../init.d/tmanager
/etc/rc2.d/S80tmanager -> ../init.d/tmanager
/etc/rc3.d/S80tmanager -> ../init.d/tmanager
/etc/rc4.d/S80tmanager -> ../init.d/tmanager
/etc/rc5.d/S80tmanager -> ../init.d/tmanager
Adding system startup for /etc/init.d/treplicator ...
/etc/rc0.d/K81treplicator -> ../init.d/treplicator
/etc/rc1.d/K81treplicator -> ../init.d/treplicator
/etc/rc6.d/K81treplicator -> ../init.d/treplicator
/etc/rc2.d/S81treplicator -> ../init.d/treplicator
/etc/rc3.d/S81treplicator -> ../init.d/treplicator
/etc/rc4.d/S81treplicator -> ../init.d/treplicator
/etc/rc5.d/S81treplicator -> ../init.d/treplicator
Adding system startup for /etc/init.d/tconnector ...
/etc/rc0.d/K82tconnector -> ../init.d/tconnector
/etc/rc1.d/K82tconnector -> ../init.d/tconnector
/etc/rc6.d/K82tconnector -> ../init.d/tconnector
/etc/rc2.d/S82tconnector -> ../init.d/tconnector
/etc/rc3.d/S82tconnector -> ../init.d/tconnector
/etc/rc4.d/S82tconnector -> ../init.d/tconnector
/etc/rc5.d/S82tconnector -> ../init.d/tconnector

To disable automatic startup at boot time, use the undeployall command:

shell> sudo /opt/continuent/tungsten/dataservice-home/bin/undeployall
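A quick way to confirm what deployall or undeployall left behind is to look for the init scripts themselves. A minimal sketch, assuming the /etc layout shown above; the optional base-directory argument is an addition here purely so the check can be exercised against a test directory.

```shell
#!/bin/sh
# Sketch: report which Tungsten init scripts are currently installed.
# Service names follow the deployall output above; the optional
# base-directory argument defaults to /etc.
check_boot_scripts() {
    base=${1:-/etc}
    for svc in tmanager treplicator tconnector; do
        if [ -e "$base/init.d/$svc" ]; then
            echo "$svc: installed"
        else
            echo "$svc: missing"
        fi
    done
}

check_boot_scripts
```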

1.3. Upgrading Continuent Tungsten

To upgrade an existing installation of Continuent Tungsten, the new distribution must be downloaded and unpacked, and the included tpm command used to update the installation. The upgrade process implies a small period of downtime for the cluster as the updated versions of the tools are restarted, but downtime is deliberately kept to a minimum, and the cluster should be in the same operation state once the upgrade has finished as it was when the upgrade was started.

Upgrading with ssh Access

To perform an upgrade of an entire cluster, where you have ssh access to the other hosts in the cluster:

1. On your staging server, download the release package.

2. Unpack the release package:

shell> tar zxf tungsten-enterprise-1.5.4-254.tar.gz

3. Change to the unpackaged directory:

shell> cd tungsten-enterprise-1.5.4-254

4. Fetch a copy of the existing configuration information:

shell> ./tools/tpm fetch --hosts=host1,host2,host3 \
    --user=tungsten --release-directory=/opt/continuent

Important

You must use the version of tpm from within the staging directory (./tools/tpm) of the new release, not the tpm installed with the current release.

The fetch command to tpm supports the following arguments:

• --hosts

A comma-separated list of the known hosts in the cluster. If autodetect is included, then tpm will attempt to determine other hosts in the cluster by checking the configuration files for host values.

• --user

The username to be used when logging in to other hosts.

• --release-directory

The installation directory of the current Continuent Tungsten installation. If autodetect is specified, then tpm will look for the installation directory by checking any running Continuent Tungsten processes.

The current configuration information will be retrieved to be used for the upgrade:


shell> ./tools/tpm fetch --hosts=host1,host2,host3 --user=tungsten --release-directory=autodetect
.......

5. Optionally check that the current configuration matches what you expect by using tpm reverse:

shell> ./tools/tpm reverse
# Options for the alpha data service
tools/tpm configure alpha \
--connector-listen-port=3306 \
--connector-password=password \
--connector-user=app_user \
--dataservice-connectors=host1,host2,host3 \
--dataservice-hosts=host1,host2,host3 \
--dataservice-master-host=host1 \
--datasource-log-directory=/var/lib/mysql \
--datasource-password=password \
--datasource-port=13306 \
--datasource-user=tungsten \
--home-directory=/opt/continuent \
--mysql-connectorj-path=/usr/share/java/mysql-connector-java-5.1.16.jar \
'--profile-script=~/.bashrc' \
--start-and-report=true \
--user=tungsten

6. Run the upgrade process:

shell> ./tools/tpm update

Note

During the update process, tpm may report errors or warnings that were not previously reported as problems. This is due to new features or functionality in different MySQL releases and Continuent Tungsten updates. These issues should be addressed and the update command re-executed.

A successful update will report the cluster status as determined from each host in the cluster:

.................................................
Getting cluster status on host1.
Tungsten Enterprise 1.5.4 build 254
dsone: session established
[LOGICAL] /dsone > ls

COORDINATOR[host1:AUTOMATIC:ONLINE]

ROUTERS:
+----------------------------------------------------------------------------+
|connector@host1[26013](ONLINE, created=0, active=0) |
|connector@host2[32319](ONLINE, created=0, active=0) |
|connector@host3[13791](ONLINE, created=0, active=0) |
+----------------------------------------------------------------------------+

DATASOURCES:
+----------------------------------------------------------------------------+
|host1(master:ONLINE, progress=2, THL latency=0.072) |
|STATUS [OK] [2013/10/26 06:30:25 PM BST] |
+----------------------------------------------------------------------------+
| MANAGER(state=ONLINE) |
| REPLICATOR(role=master, state=ONLINE) |
| DATASERVER(state=ONLINE) |
| CONNECTIONS(created=0, active=0) |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host2(slave:ONLINE, progress=2, latency=0.265) |
|STATUS [OK] [2013/10/26 06:30:21 PM BST] |
+----------------------------------------------------------------------------+
| MANAGER(state=ONLINE) |
| REPLICATOR(role=slave, master=host1, state=ONLINE) |
| DATASERVER(state=ONLINE) |
| CONNECTIONS(created=0, active=0) |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host3(slave:ONLINE, progress=2, latency=0.087) |
|STATUS [OK] [2013/10/26 06:30:26 PM BST] |
+----------------------------------------------------------------------------+
| MANAGER(state=ONLINE) |
| REPLICATOR(role=slave, master=host1, state=ONLINE) |
| DATASERVER(state=ONLINE) |
| CONNECTIONS(created=0, active=0) |


+----------------------------------------------------------------------------+
...
Exiting...

#####################################################################
# Next Steps
#####################################################################
Unless automatically started, you must start the Tungsten services before the cluster will be available. Use the tpm command to start the services:

tools/tpm start

Wait a minute for the services to start up and configure themselves. After that you may proceed.

We have added Tungsten environment variables to ~/.bashrc.
Run `source ~/.bashrc` to rebuild your environment.

Once your services start successfully you may begin to use the cluster.
To look at services and perform administration, run the following command
from any host that is a cluster member.

$CONTINUENT_ROOT/tungsten/tungsten-manager/bin/cctrl

Configuration is now complete. For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.

NOTE >> Command successfully completed

The update process should now be complete. The current version can be confirmed by starting cctrl.
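The staging-host steps above can be collected into a single script. A sketch only: the release file name, host list, and install path are taken from the examples in this section, and DRY_RUN=echo (the default here) prints each command rather than running it.

```shell
#!/bin/sh
# Sketch of the staging-host upgrade: unpack the new release, fetch the
# existing configuration with the NEW release's tpm, review, update.
DRY_RUN=${DRY_RUN:-echo}   # set DRY_RUN= to really execute
RELEASE=tungsten-enterprise-1.5.4-254
HOSTS=host1,host2,host3

upgrade_cluster() {
    $DRY_RUN tar zxf "$RELEASE.tar.gz"
    $DRY_RUN cd "$RELEASE"
    $DRY_RUN ./tools/tpm fetch --hosts="$HOSTS" --user=tungsten \
        --release-directory=/opt/continuent
    $DRY_RUN ./tools/tpm reverse   # optional: confirm the configuration
    $DRY_RUN ./tools/tpm update
}

upgrade_cluster
```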

Upgrading without ssh Access

To perform an upgrade of an individual node, tpm can be used on the individual host. The same method can be used to upgrade an entire cluster without requiring tpm to have ssh access to the other hosts in the dataservice.

To upgrade a cluster using this method:

1. Upgrade the slaves in the dataservice

2. Switch the current master to one of the upgraded slaves

3. Upgrade the master

4. Switch the master back to the original master

For more information on performing maintenance across a cluster, see Performing Maintenance on an Entire Dataservice in [Continuent Tungsten 2.0 Manual].
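The four-step ordering above can be expressed as a simple plan generator. A sketch only: the host names are hypothetical, the switch operations are performed in cctrl, and each tpm update runs locally on the host concerned.

```shell
#!/bin/sh
# Sketch: print the host-by-host order for a no-ssh rolling upgrade.
# First argument is the current master; the rest are slaves.
plan_rolling_upgrade() {
    master=$1; shift
    for slave in "$@"; do
        echo "on $slave: ./tools/tpm update --directory=/opt/continuent"
    done
    echo "in cctrl: switch (promote an upgraded slave, e.g. $1)"
    echo "on $master: ./tools/tpm update --directory=/opt/continuent"
    echo "in cctrl: switch to $master"
}

plan_rolling_upgrade host1 host2 host3
```

Upgrading the slaves first means the master only ever replicates to same-or-newer versions, and the final switch restores the original topology.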

To upgrade a single host with tpm:

1. Download the release package.

2. Unpack the release package:

shell> tar zxf tungsten-enterprise-1.5.4-254.tar.gz

3. Change to the unpackaged directory:

shell> cd tungsten-enterprise-1.5.4-254

4. Execute tpm update, specifying the installation directory. This will update only this host:

shell> ./tools/tpm update --directory=/opt/continuent
Getting cluster status on host1.
Tungsten Enterprise 1.5.4 build 254
dsone: session established
[LOGICAL] /alpha > ls

....
Exiting...

#####################################################################
# Next Steps
#####################################################################
Unless automatically started, you must start the Tungsten services before the cluster will be available. Use the tpm command to start the services:

tools/tpm start


Wait a minute for the services to start up and configure themselves. After that you may proceed.

We have added Tungsten environment variables to ~/.bashrc.
Run `source ~/.bashrc` to rebuild your environment.

Once your services start successfully you may begin to use the cluster.
To look at services and perform administration, run the following command
from any host that is a cluster member.

$CONTINUENT_ROOT/tungsten/tungsten-manager/bin/cctrl

Configuration is now complete. For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.

NOTE >> Command successfully completed

To update all of the nodes within a cluster, the steps above will need to be performed individually on each host.

1.4. Downgrading from 2.0.1 to 1.5.4

If after upgrading to Continuent Tungsten 2.0.1 you are experiencing problems, and Continuent Support have suggested that you downgrade to Continuent Tungsten 1.5.4, follow these steps to revert your existing Continuent Tungsten installation.

1. Redirect all users directly to the MySQL server on the master. This may require changing applications and clients to point directly to the MySQL servers. You cannot use Tungsten Connector to handle this for you, since the entire cluster, including the Tungsten Connector services, will be removed.

2. Stop Tungsten services on all servers:

shell> stopall

3. Rebuild the tungsten schema on all servers. This requires a number of different steps:

First, disable logging the statements to the binary log; this information does not need to be replicated around the cluster, even after restart:

mysql> SET SESSION SQL_LOG_BIN=0;

Now delete the tungsten schema in preparation for it to be recreated. Within Continuent Tungsten 1.5.4, information about the replication state is stored in the tungsten schema; within Continuent Tungsten 2.0.1 the information is stored within a schema matching the service name, for example the service alpha would be stored in the schema tungsten_alpha.

mysql> DROP SCHEMA IF EXISTS `tungsten`;
mysql> CREATE SCHEMA `tungsten`;
mysql> USE tungsten;

Now create the tables to store the status information:

mysql> CREATE TABLE `consistency` (
  `db` char(64) NOT NULL DEFAULT '',
  `tbl` char(64) NOT NULL DEFAULT '',
  `id` int(11) NOT NULL DEFAULT '0',
  `row_offset` int(11) NOT NULL,
  `row_limit` int(11) NOT NULL,
  `this_crc` char(40) DEFAULT NULL,
  `this_cnt` int(11) DEFAULT NULL,
  `master_crc` char(40) DEFAULT NULL,
  `master_cnt` int(11) DEFAULT NULL,
  `ts` timestamp NULL DEFAULT NULL,
  `method` char(32) DEFAULT NULL,
  PRIMARY KEY (`db`,`tbl`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `heartbeat` (
  `id` bigint(20) NOT NULL DEFAULT '0',
  `seqno` bigint(20) DEFAULT NULL,
  `eventid` varchar(32) DEFAULT NULL,
  `source_tstamp` timestamp NULL DEFAULT NULL,
  `target_tstamp` timestamp NULL DEFAULT NULL,
  `lag_millis` bigint(20) DEFAULT NULL,
  `salt` bigint(20) DEFAULT NULL,
  `name` varchar(128) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `history` (
  `seqno` bigint(20) NOT NULL DEFAULT '0',
  `fragno` smallint(6) NOT NULL DEFAULT '0',
  `last_frag` char(1) DEFAULT NULL,
  `source_id` varchar(128) DEFAULT NULL,
  `type` tinyint(4) DEFAULT NULL,
  `epoch_number` bigint(20) DEFAULT NULL,
  `source_tstamp` timestamp NULL DEFAULT NULL,
  `local_enqueue_tstamp` timestamp NULL DEFAULT NULL,
  `processed_tstamp` timestamp NULL DEFAULT NULL,
  `status` tinyint(4) DEFAULT NULL,
  `comments` varchar(128) DEFAULT NULL,
  `eventid` varchar(128) DEFAULT NULL,
  `event` longblob,
  PRIMARY KEY (`seqno`,`fragno`),
  KEY `eventid` (`eventid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `trep_commit_seqno` (
  `seqno` bigint(20) DEFAULT NULL,
  `fragno` smallint(6) DEFAULT NULL,
  `last_frag` char(1) DEFAULT NULL,
  `source_id` varchar(128) DEFAULT NULL,
  `epoch_number` bigint(20) DEFAULT NULL,
  `eventid` varchar(128) DEFAULT NULL,
  `applied_latency` int(11) DEFAULT NULL,
  `update_timestamp` timestamp NULL DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Now import the current sequence number from the existing Continuent Tungsten trep_commit_seqno table:

mysql> INSERT INTO tungsten.trep_commit_seqno
    (seqno, fragno, last_frag, source_id, epoch_number, eventid, applied_latency, update_timestamp)
    SELECT seqno, fragno, last_frag, source_id, epoch_number, eventid, applied_latency, update_timestamp
    FROM TUNGSTEN_SERVICE_SCHEMA.trep_commit_seqno;

Check the sequence number:

mysql> SELECT * FROM tungsten.trep_commit_seqno;

If the sequence number doesn't match on all servers, update the tungsten schema on the master with the earliest information:

mysql> SET SQL_LOG_BIN=0;
mysql> UPDATE tungsten.trep_commit_seqno SET seqno=###,epoch_number=###,eventid=SSSSS;

4. Configure the 1.5.x staging directory, extracting the software and then using tpm fetch to retrieve the current configuration.

shell> ./tools/tpm fetch --user=tungsten --hosts=host1,host2,host3,host4 \
    --release-directory=/opt/continuent

Note

In the event that the tpm fetch operation fails to detect the current configuration, run tpm reverse on one of the machines in the configured service. This will output the current configuration. If necessary, execute tpm reverse on multiple hosts to determine whether the information matches.

If you execute the returned text from tpm reverse, it will configure the service within the local directory, and the installation can then be updated.

Ensure that the current master is listed as the master within the configuration.

Now update Continuent Tungsten to deploy Continuent Tungsten 1.5.4:

shell> ./tools/tpm update

5. Start all the services on the master:

shell> startall

Confirm that the current master is correct within trepctl and cctrl.

6. Start the services on remaining servers:

shell> startall

7. If you were using a composite data service, you must recreate the composite dataservice configuration manually.

8. Once all the services are back up and running, it is safe to point users and applications at Tungsten Connector and return to normal operations.
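The cross-server sequence-number check from step 3 can be sketched as a loop over the hosts. The host list and the tungsten MySQL account are assumptions; DRY_RUN=echo (the default here) prints the queries instead of executing them.

```shell
#!/bin/sh
# Sketch: print (or run) the trep_commit_seqno query against each host
# so the values can be compared before completing the downgrade.
DRY_RUN=${DRY_RUN:-echo}   # set DRY_RUN= to really execute

check_seqno() {
    for host in "$@"; do
        $DRY_RUN mysql -h "$host" -u tungsten -p \
            -e "SELECT seqno, epoch_number, eventid FROM tungsten.trep_commit_seqno"
    done
}

check_seqno host1 host2 host3
```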


Appendix A. Release Notes

A.1. Continuent Tungsten 1.5.4 GA (Not yet released)

Continuent Tungsten 1.5.4 is a maintenance release that adds important bug fixes to the Tungsten 1.5.3 release currently in use by most Tungsten customers. It contains the following key improvements:

• Introduces quorum into Tungsten clusters to help avoid split brain problems due to network partitions. Cluster members vote whenever a node becomes unresponsive and only continue operating if they are in the majority. This feature greatly reduces the chances of multiple live masters.

• Enables automatic restart of managers after network hangs that disrupt communications between managers. This feature enables clusters to ride out transient problems with physical hosts such as storage becoming inaccessible or high CPU usage that would otherwise cause cluster members to lose contact with each other, thereby causing application outages or manager non-responsiveness.

• Adds "witness-only managers" which replace the previous witness hosts. Witness-only managers participate in quorum computation but do not manage a DBMS. This feature allows 2 node clusters to operate reliably across Amazon availability zones and any environment where managers run on separate networks.

• Numerous minor improvements to cluster configuration files to eliminate and/or document product settings for simpler and more reliable operation.
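The majority rule behind the quorum feature described above can be illustrated with a couple of lines of shell arithmetic. A sketch, not Tungsten code: with N voting members (witness-only managers count as voters), a partition keeps operating only while it holds a strict majority of (N/2)+1 votes.

```shell
#!/bin/sh
# Sketch of the quorum majority rule (not Tungsten code): with N voting
# members, a partition needs (N/2)+1 votes to keep operating.
quorum_needed() { echo $(( $1 / 2 + 1 )); }

has_quorum() {
    # $1 = cluster size, $2 = reachable voters in this partition
    if [ "$2" -ge "$(quorum_needed "$1")" ]; then echo yes; else echo no; fi
}

quorum_needed 3   # prints 2: a 3-node cluster needs 2 votes
has_quorum 3 1    # prints no: an isolated node must stop
```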

Continuent recommends that customers who are awaiting specific fixes for the 1.5.3 release consider upgrading to Continuent Tungsten 1.5.4 as soon as it is generally available. All other customers should consider upgrading to Continuent Tungsten 2.0.1 as soon as it is convenient. In addition, we recommend all new projects start out with version 2.0.1.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new behavior:

• Failover could be rolled back because of a failure to release a Virtual IP. The failure has been updated to trigger a warning, not a rollback of the failover.

Issues: TUC-1666

• An 'UnknownHostException' would cause a failover. The behavior has been updated to result in a suspect DB server.

Issues: TUC-1667

• Failover does not occur if the manager is not running on the master host before the time that the database server is stopped.

Issues: TUC-1900

Improvements, new features and functionality

• Installation and Deployment

• tpm should validate connector defaults that would fail an upgrade.

Issues: TUC-1850

• Improve tpm error message when running from wrong directory.

Issues: TUC-1853

• Tungsten Connector

• Add support for MySQL cursors in the connector.

Issues: TUC-1411

• Connector must forbid zero keepAliveTimeout.

Issues: TUC-1714
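The intent of this check can be sketched in a few lines. This is a hypothetical illustration; the actual connector performs this validation internally in Java, and the function name here is an assumption:

```python
def validate_keep_alive_timeout(timeout_ms):
    # A zero (or negative) keepAliveTimeout would effectively disable
    # keep-alive checking of connections, so the connector now rejects
    # such values at configuration time instead of accepting them.
    if timeout_ms <= 0:
        raise ValueError("keepAliveTimeout must be greater than zero")
    return timeout_ms
```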

• In SOR deployments only, Connector logs show relay data service being added twice.

Issues: TUC-1720

• Change the default delayBeforeOfflineIfNoManager router property to 30s and constrain it to a maximum of 60s in the code.


Issues: TUC-1752
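The default-and-cap behaviour described for this property can be sketched as follows (names are illustrative; the real constraint is applied inside the router code):

```python
DEFAULT_DELAY_S = 30   # new default for delayBeforeOfflineIfNoManager
MAX_DELAY_S = 60       # hard cap now enforced in the code

def effective_delay(configured_s=None):
    # Absent settings fall back to the 30s default; any configured
    # value above 60s is capped rather than honoured.
    if configured_s is None:
        return DEFAULT_DELAY_S
    return min(configured_s, MAX_DELAY_S)
```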

• Router Manager connection timeout should be a property.

Issues: TUC-1754

• Reject server versions that don't start with a number.

Issues: TUC-1776
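The shape of this check can be sketched as follows. This is an assumed illustration of the rule as stated, not the connector's actual implementation:

```python
def is_acceptable_server_version(version):
    # A MySQL-style version such as "5.5.30-log" begins with a digit
    # and is accepted; strings like "unknown-build" are rejected.
    return bool(version) and version[0].isdigit()
```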

• Add client IP and port when logging connector message.

Issues: TUC-1810

• Make the tungsten cluster status output more SQL-like and reduce the amount of information displayed.

Issues: TUC-1814

• Allow connections without a schema name.

Issues: TUC-1829

• Other Issues

• Remove old/extra/redundant configuration files.

Issues: TUC-1721

Bug Fixes

• Installation and Deployment

• Within tpm the witness host was previously required and was not validated

Issues: TUC-1733

• Ruby tests should abort if installation fails

Issues: TUC-1736

• Test witness hosts on startup of the manager and have the manager exit if there are any invalid witness hosts.

Issues: TUC-1773

• Installation fails with Ruby 1.9.

Issues: TUC-1800

• When using tpm to start from a specific event, the correct directory would not be used for the selected method.

Issues: TUC-1824

• When specifying a witness host check with tpm, the check works for IP addresses but fails when using host names.

Issues: TUC-1833

• Cluster members do not reliably form a group following installation.

Issues: TUC-1852

• Installation fails with Ruby 1.9.1.

Issues: TUC-1868

• Command-line Tools

• Nagios check scripts not picking up shunned datasources

Issues: TUC-1689

• Cookbook Utility

• Cookbook should not specify witness hosts in default configuration files etc.


Issues: TUC-1734

• Backup and Restore

• Restore with xtrabackup empties the data directory and then fails.

Issues: TUC-1849

• A recovered datasource does not always come online when in automatic policy mode

Issues: TUC-1851

• Restore on datasource in slave dataservice fails to reload.

Issues: TUC-1856

• After a restore, datasource is welcomed and put online, but never gets to the online state.

Issues: TUC-1861

• A restore that occurs immediately after a recover from dataserver failure always fails.

Issues: TUC-1870

• Core Replicator

• LOAD (LOCAL) DATA INFILE would fail if the request starts with whitespace.

Issues: TUC-1639
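The essence of this fix amounts to tolerating leading whitespace when recognising the statement. The regular expression below is an illustrative sketch; the replicator's actual parser differs:

```python
import re

# Match LOAD DATA [LOCAL] INFILE without anchoring the keyword at
# position zero, so leading whitespace no longer defeats the match.
LOAD_DATA_RE = re.compile(r"^\s*LOAD\s+DATA\s+(?:LOCAL\s+)?INFILE\b",
                          re.IGNORECASE)

def is_load_data(statement):
    return LOAD_DATA_RE.match(statement) is not None
```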

• Null values are not correctly handled in keys for row events

Issues: TUC-1823

• Tungsten Connector

• Connector fails to send back full result of stored procedure called by prepared statement (pass through mode on).

Issues: TUC-36

• Router gateway can prevent manager startup if the connector is started before the manager

Issues: TUC-850

• The Tungsten show processlist command would throw NPE errors.

Issues: TUC-1136

• The default SQL Router properties use the wrong load balancer

Issues: TUC-1437

• Router must go into fail-safe mode if it loses connectivity to a manager during a critical command.

Issues: TUC-1549

• When in a SOR deployment, the Connector will never return connection requests with qos=RO_RELAXED and affinity set to a relay-node-only site.

Issues: TUC-1620

• Affinity not honored when using direct connections.

Issues: TUC-1628

• An attempt to load a driver listener class can cause the connector to hang at startup.

Issues: TUC-1669

• Broken connections were returned to the c3p0 pool; further use of these connections will show errors.

Issues: TUC-1683


• The connector tungsten flush privileges command causes a temporary outage (denies new connection requests).

Issues: TUC-1730

• Connector should require a valid manager to operate even when in maintenance mode.

Issues: TUC-1781

• Session variables support for row replication

Issues: TUC-1784

• Connector allows connections to an offline/on-hold composite dataservice.

Issues: TUC-1787

• Router notifications are being sent to routers via GCS. This is unnecessary, since a manager only updates routers that are connected to it.

Issues: TUC-1790

• Pass-through mode does not correctly handle multiple results in 1.5.4.

Issues: TUC-1792

• SmartScale will fail when creating a database and using it immediately.

Issues: TUC-1836

• Tungsten Manager

• A manager that cannot see itself as a part of a group should fail safe and restart

Issues: TUC-1722

• Retry of tests for networking failure does not work in the manager/rules

Issues: TUC-1723

• The 'vip check' command produces a scary message in the manager log if a VIP is not defined

Issues: TUC-1772

• Restored Slave did not change to correct master

Issues: TUC-1794

• If a manager leaves a group due to a brief outage and does not restart, it remains stranded from the rest of the group but 'thinks' it is still a part of the group. This was a major cause of hanging/restarts during operations.

Issues: TUC-1830

• Failover of relay aborts when relay host reboots, leaving data sources of slave service in shunned or offline state.

Issues: TUC-1832

• The recover command completes but cannot welcome the datasource, leading to a failure in tests.

Issues: TUC-1837

• After failover on primary master, relay datasource points to wrong master and has invalid role.

Issues: TUC-1858

• A stopped dataserver would not be detected if cluster was in maintenance mode when it was stopped.

Issues: TUC-1860

• The manager attempts to get the status of a remote replicator from the local service, causing a failure to catch up from a relay.

Issues: TUC-1864


• Using the recover using command can result in more than one service in a composite service having a master; if this happens, the composite service will have two masters.

Issues: TUC-1874

• Using the recover using command, the operation recovers a datasource to a master when it should recover it to a relay.

Issues: TUC-1882

• ClusterManagementHandler can read/write datasources directly from the local disk, which can cause corruption of cluster configuration information.

Issues: TUC-1899

• Platform Specific Deployments

• FreeBSD: the replicator hangs when going offline, which can cause a switch to hang or abort.

Issues: TUC-1668


Recommended