Scaling ArangoDB on Mesosphere DCOS
Max Neunhöffer
Hamburg, 1 October 2015
www.arangodb.com
Features

ArangoDB
- is a multi-model database (document store & graph database),
- offers convenient queries (via HTTP/REST and AQL), including joins between different collections,
- has configurable consistency guarantees using transactions,
- has an API extensible by JS code in the Foxx Microservice Framework.
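As a sketch of what such a query looks like: the AQL below joins two hypothetical collections (`users` and `orders` — names made up for illustration), and the helper builds the JSON body for ArangoDB's HTTP cursor API (POST /_api/cursor).

```python
import json

# An AQL join between two collections, as mentioned above.
# Collection and field names are hypothetical.
AQL_JOIN = """
FOR u IN users
  FOR o IN orders
    FILTER o.userId == u._key
    RETURN { name: u.name, total: o.total }
"""

def cursor_request(query, bind_vars=None, batch_size=100):
    """Build the JSON body for ArangoDB's HTTP cursor API (POST /_api/cursor)."""
    return json.dumps({
        "query": query,
        "bindVars": bind_vars or {},
        "batchSize": batch_size,
    })

body = cursor_request(AQL_JOIN)
```

The body would be POSTed to `/_api/cursor` on a coordinator; the response contains the first batch of results and a cursor id for fetching the rest.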
Replication and Sharding — horizontal scalability

ArangoDB provides
- easy setup of (asynchronous) replication,
- sharding with automatic data distribution,
- MongoDB-style replication in the cluster,
- full integration with Apache Mesos and Mesosphere.

Work in progress:
- synchronous replication in cluster mode,
- fault tolerance by automatic failover, and
- zero administration by a self-repairing and self-balancing cluster architecture,
all based on the Apache Mesos infrastructure.
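Automatic data distribution can be pictured as a hash of the document key selecting a shard. This is only a sketch of the idea; ArangoDB's actual shard-assignment function is not reproduced here.

```python
import hashlib

def shard_for(key, num_shards):
    """Deterministically map a document key to a shard.
    Illustration of hash-based data distribution, not ArangoDB's
    real assignment function."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_shards

# Every server computes the same mapping, so no lookup table is needed.
assignment = {k: shard_for(k, 4) for k in ("alice", "bob", "carol")}
```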
[Diagram series: Mesos cluster architecture — Mesos Agents, each running several Tasks; three Mesos Masters, one of them the elected leader; and a Zookeeper ensemble coordinating the Masters.]
[Diagram sequence: how a framework runs on Mesos and DCOS]
- The dcos CLI talks to Marathon, which schedules frameworks; Marathon "starts" the Framework (this is a big lie — actually, Marathon is itself a framework).
- 1. The Mesos Agent reports free resources to the Mesos Master. 2. The Master makes resource offers to the Framework. 3. The Framework accepts or declines them. 4. The Master tells the Agent to execute.
- The Agent "executes" the Task (this is a small lie — actually, it uses an "executor"). The Framework registers with the Master, which stores its state in Zookeeper.
- If a Task dies, the Agent notices and reports it; the Master reports it to the Framework, which restarts the Task.
- If the Framework dies, Marathon restarts it; the Framework reconnects to the Master, gets the state and reconciles.
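The four-step offer cycle can be simulated with a toy model — a sketch of the protocol's shape, not the Mesos API.

```python
class Framework:
    """Toy framework: accepts an offer if it carries enough CPUs."""
    def __init__(self, needed_cpus):
        self.needed = needed_cpus

    def consider(self, offer):
        # 3. accept the resource offer if it is big enough, else decline
        if offer["cpus"] >= self.needed:
            return "task-on-" + offer["agent"]
        return None

class Master:
    def __init__(self):
        self.free = {}  # agent -> CPUs reported free

    def report(self, agent, cpus):
        # 1. an agent reports its free resources
        self.free[agent] = cpus

    def offer_cycle(self, framework):
        launched = []
        for agent, cpus in list(self.free.items()):
            # 2. make a resource offer to the framework
            task = framework.consider({"agent": agent, "cpus": cpus})
            if task is not None:
                # 4. tell the agent to execute the task
                launched.append((agent, task))
                del self.free[agent]
        return launched

m = Master()
m.report("agent1", 4)
m.report("agent2", 1)
launched = m.offer_cycle(Framework(needed_cpus=2))
```

Only agent1's offer is large enough here; agent2's resources stay free and would be re-offered in the next cycle.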
Persistent primitives

offer received 20151001-105738-2905319616-5050-2640-O0 with
  {cpus(*):4; mem(*):10895; disk(*):119761; ports(*):[31000-32000]}
trying to reserve 20151001-105738-2905319616-5050-2640-O0 with
  {cpus(arangodb, pri):0.2; mem(arangodb, pri):512; disk(arangodb, pri):512}
offer received 20151001-105738-2905319616-5050-2640-O1 with
  {cpus(*):3.8; mem(*):10383; disk(*):119249; ports(*):[31000-32000];
   cpus(arangodb, pri):0.2; mem(arangodb, pri):512; disk(arangodb, pri):512}
trying to make 20151001-105738-2905319616-5050-2640-O1 persistent for
  disk(arangodb, pri)[AGENCY_c2bb93ce-46d3-4802-bed5-cf254c6f16df:dataxyz]:512
offer received 20151001-105738-2905319616-5050-2640-O2 with
  {cpus(*):3.8; mem(*):10383; disk(*):119249; ports(*):[31000-32000];
   cpus(arangodb, pri):0.2; mem(arangodb, pri):512;
   disk(arangodb, pri)[AGENCY_c2bb93ce-46d3-4802-bed5-cf254c6f16df:dataxyz]:512}
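A scheduler has to read resource lists like the ones above before deciding what to reserve. Here is a small parsing sketch; as a simplification it skips ranged resources (ports) and disk entries carrying a persistence volume id.

```python
import re

def parse_resources(s):
    """Parse a Mesos-style resource list such as
    'cpus(*):4; mem(arangodb, pri):512' into {(name, role): value}.
    Simplification: ports(*):[31000-32000] and volume-tagged disk
    entries do not match the numeric pattern and are skipped."""
    out = {}
    for part in s.split(";"):
        m = re.match(r"(\w+)\(([^)]*)\):([\d.]+)$", part.strip())
        if m:
            name, role, value = m.groups()
            out[(name, role)] = float(value)
    return out

res = parse_resources("cpus(*):4; mem(*):10895; disk(arangodb, pri):512")
```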
Deployment — Docker and GitHub

One container image, arangodb/arangodb-mesos, is used to run
- the ArangoDB framework (a C++ executable),
- all ArangoDB instances in the cluster,
- the Agency (etcd).

The dcos CLI by Mesosphere is a Python program (virtualenv, pip).
The ArangoDB subcommand is a Python program that talks JSON/REST with the framework and plugs into dcos; it is deployed from a GitHub repository.
The GitHub repository mesosphere/universe has all certified frameworks.
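The JSON/REST conversation between the subcommand and the framework boils down to building plain HTTP requests. In this sketch the base URL and path are placeholders, not the framework's real endpoints.

```python
import json
from urllib.request import Request

def framework_request(base_url, path, payload=None):
    """Build a JSON/REST request for a framework scheduler.
    base_url and path are hypothetical placeholders; the real
    endpoints are defined by the framework itself."""
    url = base_url.rstrip("/") + "/" + path.lstrip("/")
    data = json.dumps(payload).encode() if payload is not None else None
    return Request(url, data=data,
                   headers={"Content-Type": "application/json"},
                   method="POST" if data else "GET")

# Hypothetical query for cluster endpoints:
req = framework_request("http://master.mesos:8080", "v1/endpoints")
```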
Scaling ArangoDB

Ultimate aim with a distributed database: horizontal scalability.

Devise a test to show linear scaling:
- use N = 8, 16, 24, 32, 40, 48, 56, 64, 72, 80 nodes with 8 vCPUs each,
- run N/2 DBServers, N/2 asynchronous replicas and N/2 Coordinators,
- use single-document reads, writes and a 50%/50% mix,
- drive the load from N/2 load servers in the same Mesosphere cluster,
- up to 640 vCPUs; the goal is to write as many k docs/(s * vCPU) as possible.
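The target metric normalises throughput by the total number of vCPUs, so linear scaling shows up as a roughly constant value as N grows. The figures below are made-up illustrations, not measured results.

```python
def per_vcpu_throughput(docs_per_second, nodes, vcpus_per_node=8):
    """k docs / (s * vCPU): cluster throughput normalised by vCPU count."""
    total_vcpus = nodes * vcpus_per_node
    return docs_per_second / 1000.0 / total_vcpus

# With perfectly linear scaling, the normalised value is constant.
# (Illustrative numbers only.)
small = per_vcpu_throughput(800_000, nodes=8)    # 64 vCPUs
large = per_vcpu_throughput(8_000_000, nodes=80)  # 640 vCPUs
```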
Deployment of load servers — Docker and ArangoDB

Use a central ArangoDB instance to
- collect results,
- evaluate them,
- and synchronise the load servers.

Each load server runs the Waiter in a Docker container. The Waiter
- waits, most of the time,
- observes a collection and notices new "work" documents,
- fires up load processes,
- reports termination as a "done" document.

A single JavaScript program directs the whole experiment. We deploy the Waiter using Marathon.
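The Waiter's poll loop can be sketched as follows. `MemoryCollection` stands in for a real ArangoDB collection client, and the load run is stubbed; the deck's actual Waiter is a JavaScript program.

```python
class MemoryCollection:
    """In-memory stand-in for an ArangoDB collection."""
    def __init__(self, docs):
        self.docs = list(docs)

    def find(self, status):
        return [d for d in self.docs if d["status"] == status]

    def insert(self, doc):
        self.docs.append(doc)

class Waiter:
    """Sketch of the Waiter: watch for "work" documents, run the load,
    report a "done" document. run_load is a stub for firing up the
    real load processes."""
    def __init__(self, collection, run_load):
        self.collection = collection
        self.run_load = run_load

    def poll_once(self):
        processed = 0
        for work in self.collection.find("work"):
            result = self.run_load(work)  # fire up load processes
            self.collection.insert({"status": "done", "result": result})
            processed += 1
        return processed

coll = MemoryCollection([{"status": "work", "n": 1}])
processed = Waiter(coll, lambda d: d["n"] * 2).poll_once()
```

A real Waiter would sleep between polls and mark "work" documents as claimed so that several load servers do not pick up the same job.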
DEMO?
Links
https://www.arangodb.com
https://docs.arangodb.com/cookbook/index.html
https://github.com/ArangoDB/guesser
http://mesos.apache.org/
https://mesosphere.com/
https://mesosphere.github.io/marathon/