
LAB - Perforce Large Scale & Multi-Site Implementations

Description:
Using the Edge server solution you can streamline how replication is set up and designed within your Perforce environment. Key configurables, options and topologies for replication in Perforce will be shown, allowing you to live on the edge and get the best performance and use out of our improved replication solutions.
Transcript

#

Tim O’MahonyTechnical Support

LAB - Perforce Large Scale & Multi-Site Implementations

#

• Previously… in Global Distributed Perforce
• Don’t do that… do this!
• Living on the Edge
• Not just for multi-site, but everywhere

Agenda

#

Previously…in Global Distributed Perforce

#

• Provide warm standby servers
• Reduce load and downtime on a primary server
• Provide support for build farms
• Alternative to Proxy in some places

R/O replicas & Build Farms

#

• Pros
  – Process commands locally
  – Metadata and archive files
  – Great for a remote site that’s browsing and submitting little
  – Great for offloading on local fast LAN sites

Forwarding Replicas

#

• Cons
  – Forwards all write commands to the Master Server
  – Trade-off vs Proxy; requires a higher level of machine provisioning and administrative consideration

Forwarding Replicas
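
A minimal sketch of how a forwarding replica can be declared; the ServerID and master address below are placeholders, not values from the slides.

# Hypothetical 'p4 server' spec for a forwarding replica
ServerID:  myForwarder
Type:      server
Services:  forwarding-replica

$ p4 configure set myForwarder#P4TARGET=master.example.com:1666   # placeholder master address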

#

But? Isn’t this what we have?

“Duplication of services and metadata, only to go back to the master when I need it locally”

#

Don’t do that… do this!

#

• If that replica could handle 98% of things?
• Regular commands happen on the replica
• Reduce remote users’ time waiting for version management tasks

Wouldn’t it be nice…

#

Introducing the Edge from U2

“…’cause every band, I mean Perforce Versioning Service, needs one (or two, maybe three)”

#

• Commit Server
  – Stores the canonical archives and permanent metadata. Similar to a Perforce master server, but may not contain all workspace information.

• Edge Server
  – An edge server contains a replicated copy of the commit server data and a unique, local copy of some workspace and work-in-progress information. It can process read-only operations and operations that only write to the local data.

Concepts
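
A minimal sketch of how the two roles can be expressed as server specs; the ServerIDs (myCommit, myEdge) and descriptions are placeholders, not values from the slides.

# Hypothetical 'p4 server' spec for the commit server
ServerID:     myCommit
Type:         server
Services:     commit-server
Description:  Canonical archives and permanent metadata

# Hypothetical 'p4 server' spec for an edge server
ServerID:     myEdge
Type:         server
Services:     edge-server
Description:  Local workspace and work-in-progress data for one site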

#

• Each edge server must be backed up separately from the commit server
• Exclusive locks are global
• Shelves created on an edge server are not usually shared between edge servers
• You can promote shelves in 2014.1
• Auto-creation of users is not possible

Notes and Considerations
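
Promoting a shelf makes it visible beyond its home edge server. A minimal sketch, assuming a pending changelist 1234 on the edge (the change number is a placeholder):

$ p4 shelve -p -c 1234     # -p promotes the shelf to the commit server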

#

• Labels – global, or local to the edge
• Triggers
• Logs and audits – the edge has its own
• Unload depot may be different on the edge
• Time zone needs to be the same
• Upgrade Commit and Edge at the same time

Notes and Considerations
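
For labels, a minimal sketch of the global variant; the label name and path are placeholders, and -g requests a global label rather than one local to the edge:

$ p4 label -g myGlobalLabel                       # create/update the label on the commit server
$ p4 tag -g -l myGlobalLabel //depot/main/...     # tag files against the global label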

#

Performance Difference

“Benchmark of Perforce operations with 128 ms network latency between client and server. The file-related commands operated against 7,000 files”

#

Living on the Edge

#

• Options
  – From scratch
  – Utilize existing forwarding replicas or build farms

• Turn the Master into the Commit Server (see the sketch after this list)
  – Choose a ServerID and use p4 serverid to save it
  – Server spec Services: commit-server

Where to Start
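
A minimal sketch of that conversion, assuming the master is given the placeholder ServerID myCommit:

$ p4 serverid myCommit     # save the ServerID on the running master
$ p4 server myCommit       # edit the server spec and set:
                           #   Services: commit-server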

#

• Set up a replica
  – Services: edge-server
• Take a filtered checkpoint on the commit server
  – p4d -r $P4ROOT -K db.have,db.working,db.resolve,db.locks,db.revsh,db.workingx,db.resolvex -jd -z filtered.gz
• Restore & start up the Edge (see the sketch after this list)

The basic formula
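
A minimal sketch of the restore and start-up steps; paths, ports, and the commit server address are placeholders, and the configure commands are run against the commit server:

$ p4d -r /p4/edge/root -z -jr filtered.gz           # restore the filtered checkpoint
$ p4d -r /p4/edge/root -xD myEdge                   # stamp the edge's ServerID
$ p4 configure set myEdge#P4TARGET=commit.example.com:1666
$ p4 configure set "myEdge#startup.1=pull -i 1"     # metadata pull thread
$ p4 configure set "myEdge#startup.2=pull -u -i 1"  # archive pull thread
$ p4d -r /p4/edge/root -p 1667 -d                   # start the edge server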

#

• Migrate workspaces to the Edge
  – Have users submit or revert pending work first
• Unload the workspace
  – p4 unload -c workspace
• Reload the workspace on the edge
  – p4 reload -c workspace -p protocol:host:port
• protocol:host:port refers to the commit or remote edge server the workspace is being migrated from (see the example after this list)

The basic formula
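
A worked example of the migration commands; the workspace name and server addresses are placeholders, not values from the slides:

$ p4 unload -c bruno_ws                            # run against the server that owns the workspace
$ p4 -p edge.example.com:1667 reload -c bruno_ws -p commit.example.com:1666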

#

• Run “p4 -Ztag info”

Is it working?

$ p4 -Ztag info
... serverVersion P4D/DARWIN90X86_64/2014.1/821990 (2014/04/08)
... ServerID myEdge
... serverServices edge-server
... changeServer change.perforce.com.au:1666
... serverLicense Perforce Software Pty Ltd 500 users (expires 2015/01/06)
... serverLicense-ip 127.0.0.1
... caseHandling insensitive
... replica commit.perforce.com.au:1666
... minClient 97.1

#

• Triggers (see the trigger table sketch after this list)
  – edge-submit
    • Like a pre-submit trigger, fired on the edge server
  – edge-content
    • A mid-submit trigger on the edge server
    • Fires after file transfer from the client to the edge server
    • Fires prior to file transfer to the commit server
    • At this point, the changelist is shelved

Other settings
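
Hypothetical trigger table entries (p4 triggers) for the two edge trigger types; the trigger names and script paths are placeholders:

Triggers:
    edge_check edge-submit  //depot/... "/p4/scripts/edge_check.sh %changelist%"
    edge_scan  edge-content //depot/... "/p4/scripts/edge_scan.sh %changelist%"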

#

• Peeking (improved concurrency through lockless reads)
  – p4 configure set db.peeking=2
• Consider filtering if in remote areas
• Backup strategies
• Build servers chained off the Edge Server

Other settings

#

Not just multi-site; edge everywhere.

#

Edge everywhere

“Local and distributed edge setup

#

• Lots of Edges – have the Commit just commit
• lbr.replication=shared – leverage the same storage solution
  – Commit and Edge point to the same storage
  – Automatic promotion of shelves
• Clustered Perforce

Local LAN Edge Configurations
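
A minimal sketch of the shared-storage setting, assuming the edge's ServerID is the placeholder myEdge and that commit and edge mount the same archive filesystem:

$ p4 configure set myEdge#lbr.replication=shared   # edge reads archives from shared storage instead of pulling copies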

#

Tim O’Mahony
LAB - Perforce Large Scale & Multi-Site Implementations
Perforce Software

Tim O’Mahony is a Technical Support Manager from the Australian office at Perforce. He has a wide and diverse knowledge of Perforce products, specializing in its server technology since 2004. Before joining Perforce, he focused on network simulation and Java programming.

#

Clustered Perforce Installations

#

• Why would I consider this model?
  – When servers are in a data center
  – If the primary Perforce server is very high spec
  – With a large number of client workspaces
  – For a very high number of transactions

Clustered Perforce Installations

#

• Delegates load from the primary server
• Transparent to end users
• Provides failover
• Capacity is easy to increase
• Improved service levels to end users
  – Increase capacity
  – Backup
  – Failover

Clustered Perforce Benefits

#

Anatomy of a Perforce Cluster

#

How do I build one of those ?

#

• Packaged command-line utility, p4cmgr
  – Configuration
  – Control
  – Administration

• Technologies
  – Linux
  – SaltStack
  – Python
  – Apache ZooKeeper

Perforce Cluster Manager

#

• p4cmgr <command> <options>
• p4cmgr --help

optional arguments:

--help show this help message and exit

subcommands: {init,add,start,stop,restart,status,backup}

init Initialise a new cluster and create a depot master

add Add a service into a cluster

start Start a service or services on a host or cluster

stop Stop a service or services on a host or cluster

restart Restart a service or services on a host or cluster

status Get a simple debug style output for all nodes

backup Perform a backup of the cluster

P4CMGR Commands - General

#

• p4cmgr init <cluster> <node> [-s <service>]
  – Configures a new cluster
  – Installs salt-minion
  – Defines the first Zookeeper
  – Deploys the depot-master onto the given node
  – Establishes a baseline for subsequent Perforce servers

P4CMGR Commands - Configuration

#

• p4cmgr add <type> <node>
• Supported types
  – Zookeeper
  – Depot standby
  – Workspace servers
  – Workspace router
• Actions
  – Installs salt-minion
  – Deploys relevant components onto the node

P4CMGR Commands - Configuration
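
A hypothetical session stringing the commands above together; the cluster name, host names and type keywords are placeholders and may not match the real p4cmgr arguments:

$ p4cmgr init mycluster node1          # new cluster, depot master deployed on node1
$ p4cmgr add workspace-server node2    # placeholder type keyword
$ p4cmgr add workspace-router node3    # placeholder type keyword
$ p4cmgr start                         # bring the cluster up in the correct order
$ p4cmgr status                        # verify composition and state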

#

• p4cmgr start/stop
  – Brings the cluster up/down in the correct sequence
  – Router started last and stopped first

• p4cmgr restart
  – Stop then start

• p4cmgr status
  – Prints composition, configuration and status
  – Verbose output

P4CMGR Commands - Control

#

• p4cmgr backup
  – Still open for business
  – Processing load delegated to the standby
    • Admin checkpoint on the standby
    • Journal rotate on the master

• Still need off-site o/s backups
  – Checkpoint
  – Journal
  – Archives

P4CMGR Commands - Administration

#

Over to you…

#

Darrell Robins
LAB - Perforce Datacenter Configuration
Perforce Software

Darrell Robins is a Software Developer based in the Perforce UK office. He has been with Perforce since 2011, working mainly on web-based projects such as OnDemand, Commons and Insights. Life before Perforce was a mixture of web, Java and C programming.

