NanoCloud – cloud scale JVM
Alexey Ragozin
Feb 2014
Long time ago …
2009 – building Coherence demo for Amazon EC2
• hybrid HPC / DHT cluster
• a lot of debugging required
• single button deployment

2009 – 2014 – developing cluster applications
• demand to test distributed cases locally
• Singleton syndrome of Coherence and GemFire
Distributed object paradigm
CORBA, RMI
• Exposed remote interfaces
  Interface is functional contract
  Remote protocol is NFR-driven implementation
• Heavy infrastructure
  Brokers
  Complex connectivity topology
Remoting by convention
If an object implements a “remotable” interface, it is converted to a remote stub when passing between process boundaries.
A “remotable” object can be as deep in the object graph as you like.
A stub is resolved back to the original object if passed back.
Encapsulating RPC
Wrapper class
• Implements functional contract
• Has private instance of remotable service

Remote service
• Not exposed beyond wrapper class
• Managed by wrapper class
• Automatically exported if wrapper class instance is transferred to another process
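The wrapper pattern above can be sketched in plain Java. This is a minimal, illustrative example, not NanoCloud's actual API: the names (KeyValueStore, KeyValueStoreWrapper, etc.) are invented here. The key idea is that callers only see the functional contract, while the remote contract stays a private field that the framework would swap for a stub during serialization.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class RpcWrapperSketch {

    // Functional contract exposed to application code
    interface KeyValueStore {
        void put(String key, String value);
        String get(String key);
    }

    // Remote contract; in NanoCloud this would extend java.rmi.Remote,
    // so it is auto-exported when it crosses a process boundary
    interface RemoteKeyValueStore {
        void put(String key, String value);
        String get(String key);
    }

    // Wrapper: implements the functional contract and hides the remote service.
    // If the wrapper were serialized to another process, the private remote
    // field would be replaced by a stub during serialization.
    static class KeyValueStoreWrapper implements KeyValueStore, Serializable {
        private final RemoteKeyValueStore remote;

        KeyValueStoreWrapper(RemoteKeyValueStore remote) {
            this.remote = remote;
        }

        @Override
        public void put(String key, String value) {
            remote.put(key, value); // delegate; callers never see the remote contract
        }

        @Override
        public String get(String key) {
            return remote.get(key);
        }
    }

    public static void main(String[] args) {
        // Local stand-in for the remote service, just to show the delegation
        final Map<String, String> backing = new HashMap<>();
        RemoteKeyValueStore service = new RemoteKeyValueStore() {
            public void put(String key, String value) { backing.put(key, value); }
            public String get(String key) { return backing.get(key); }
        };
        KeyValueStore store = new KeyValueStoreWrapper(service);
        store.put("A", "aaa");
        System.out.println(store.get("A"));
    }
}
```

Because both behaviors go through the same wrapper, local and remote usage stay consistent, which is exactly the PRO listed on the next slide.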
Encapsulating RPC
PRO
+ Decoupling of functional and remote contracts
+ Consistent local/remote behavior
CON
– Homogeneous codebase required
– Synchronous RPC syndrome is not addressed
NanoCloud’s Zero RMI
Own implementation of RMI
• Standard Java serialization
• Serialization of anonymous Runnable/Callable
• Auto export of Remote interfaces during serialization of object graph
• Single communication socket
Bidirectional communications
public interface RemotePut extends Remote {
    public void put(Object key, Object value);
}
@SuppressWarnings("unused")
@Test
public void bidirectional_remoting() {
    // Preset for typical single node cluster
    cloud.all().presetFastLocalCluster();
    cloud.node("storage.**").localStorage(true);
    cloud.node("client.**").localStorage(false);
    // Simulates DefaultCacheServer based process
    cloud.node("storage.**").autoStartServices();
    // declaring specific nodes to be created
    CohNode storage = cloud.node("storage.1");
    CohNode client1 = cloud.node("client.1");
    CohNode client2 = cloud.node("client.2");
    // now we have 3 specific nodes in cloud
    // all of them will be initialized in parallel
    cloud.all().ensureCluster();

    final String cacheName = "distr-a";
    RemotePut remoteService = client1.exec(new Callable<RemotePut>() {
        @Override
        public RemotePut call() {
            final NamedCache cache = CacheFactory.getCache(cacheName);
            return new RemotePut() {
                @Override
                public void put(Object key, Object value) {
                    cache.put(key, value);
                }
            };
        }
    });

    remoteService.put("A", "aaa");
    client2.exec(new Runnable() {
        @Override
        public void run() {
            NamedCache cache = CacheFactory.getCache(cacheName);
            Assert.assertEquals("aaa", cache.get("A"));
        }
    });
}
Extending java.rmi.Remote marks an interface for auto export.
Unlike Java RMI, there is no need to declare RemoteException for every method.
The result of the callable is serialized and transferred to the caller.
Objects implementing remote interfaces are automatically replaced with a remote stub during serialization.
Here we get a remote stub, not the real implementation of the interface.
A call to the stub is converted into a “remote” call to the instance we created in the “virtualized” node a few lines above.
Casual provisioning
Normally you would
• build and package your code
• deploy / copy the code artifact
• go to the server via SSH console
  – in the worst case, to each of your servers
• start your processes via some script
• repeat 20-30 times per day
• configuration aspects are not considered
  – a lot of spare time while your files are crossing the Atlantic
Casual provisioning
What will NanoCloud do for you?
• Package your runtime classpath
• Copy changed artifacts via SFTP
• Start remote process via SSH
• Do all RMI configuration/handshaking
• Route console output to you
• … and kill slave processes once you are done
No more coffee breaks. Turnaround in a few seconds.
As easy as …
@Test
public void remote_hello_world() throws InterruptedException {
    ViManager cloud = CloudFactory.createSimpleSshCloud();
    cloud.node("myserver.uk.db.com");
    cloud.node("**").exec(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            String localHost = InetAddress.getLocalHost().toString();
            System.out.println("Hi! I'm running on " + localHost);
            return null;
        }
    });
}
All you need is …
NanoCloud requirements
• SSHd
• Java (1.6 and above) present

Works through NAT and firewalls
Works on Amazon EC2
Works everywhere where SSH works
Master – slave communications
(Diagram: the master process holds a single SSH TCP connection to each slave host. An agent on the slave host multiplexes the slave streams — RMI over TCP, std in / std out / std err, and diagnostics — between the master’s slave controllers and the slave JVMs.)
Death clock is ticking
Master JVM kills slave processes, unless
• the SSH session was interrupted
• someone kill -9 the master JVM
• the master JVM has crashed (e.g. under debugger)
Death clock is ticking on the slave, though: if the master is not responding, the slave process will terminate itself.
No zombies allowed
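The death-clock idea above can be sketched as a simple heartbeat watchdog. This is an illustrative sketch only, assuming a heartbeat-style protocol; NanoCloud's real implementation and class names differ. The slave records the last time it heard from the master, and a daemon thread would halt the JVM once the silence exceeds a timeout.

```java
public class DeathClockSketch {

    static class DeathClock {
        private final long timeoutMs;
        private volatile long lastHeartbeat;

        DeathClock(long timeoutMs) {
            this.timeoutMs = timeoutMs;
            this.lastHeartbeat = System.currentTimeMillis();
        }

        // Called whenever the master pings; the slave records the time
        void heartbeat() {
            lastHeartbeat = System.currentTimeMillis();
        }

        // A daemon thread on the slave would poll this and call
        // Runtime.getRuntime().halt(...) once it returns true
        boolean isExpired() {
            return System.currentTimeMillis() - lastHeartbeat > timeoutMs;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DeathClock clock = new DeathClock(100);
        clock.heartbeat();
        System.out.println(clock.isExpired()); // false: heartbeat just arrived
        Thread.sleep(200);                     // simulate an unresponsive master
        System.out.println(clock.isExpired()); // true: the slave would terminate itself
    }
}
```

This covers both directions: the master actively kills slaves in the normal case, and the watchdog catches the cases where the master dies without cleaning up.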
Cloud scale JVM
Same API – different topologies: in-process (debug), local, remote (distributed)
• Transparent remoting
• SSH to manage remote servers
• Automatic classpath replication (with caching)
• Zero infrastructure

• Any OS for the master host
• SSHd + JVM for slave hosts

200+ slave topologies are in routine use
Road map
NanoCloud 0.7.23
• 0.7.X in use since Mar 2013
• Last fix Sep 2013
NanoCloud 0.8.2 – unstable (stable ETA 2014 Q3)
• Programmatic console stream access
• Byte code instrumentation
• JVM version verification
• Option NOT to use Java SSH client (planned)
• Consistent error reporting (planned)
Sneak peek: Instrumentation
System.exit() is still fatal
Some cases need “virtual time”
Tweaking monolithic code
Fault injection
Mock injection
Sneak peek: Instrumentation
PowerMock – recompiles everything (Coherence ~ 5000 classes)
AspectJ – static interceptors
ByteMan – uses an agent + weird language
Sneak peek: Instrumentation
ViNode node = ...

ViHookBuilder.newCallSiteHook()
    .onTypes(System.class)
    .onMethod("exit")
    .doReturn(null)
    .apply(node);

node.exec(new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        System.exit(0);
        return null;
    }
});
API is subject to change
New opportunities
Performance testing
• deploy system under test
• deploy load generators
• deploy monitoring agents
• gather all results in one place

Deployment (remote execution task for ANT)
Replace your putty with a Java IDE
• log scraping
• parallel execution
Coding for 200+ processes
Driver concept
• Driver – a Java interface encapsulating a test action
• One-way methods
• Friendly for remoting and parallel invocation
+ some utilities for parallel execution, workflow etc.

Example:
https://gridkit.googlecode.com/svn/grid-lab/trunk/examples/zk-benchmark-sample
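The driver concept above can be sketched as follows. The names here (LoadDriver, HttpLoadDriver) are invented for illustration and are not part of NanoCloud or the linked sample: the point is that a driver is a plain, serializable Java object with one-way (void) methods, so the same instance can be shipped to every node and invoked in parallel.

```java
public class DriverSketch {

    // Driver: encapsulates one test action; methods are one-way (void),
    // which makes them friendly for remoting and parallel invocation
    interface LoadDriver {
        void warmUp();
        void runLoad(int requests);
    }

    // Serializable so the instance can cross process boundaries
    static class HttpLoadDriver implements LoadDriver, java.io.Serializable {
        @Override
        public void warmUp() {
            System.out.println("warming up");
        }

        @Override
        public void runLoad(int requests) {
            System.out.println("sending " + requests + " requests");
        }
    }

    public static void main(String[] args) {
        // In NanoCloud this driver would be shipped to every node,
        // e.g. via cloud.node("**").exec(...); here we call it locally
        LoadDriver driver = new HttpLoadDriver();
        driver.warmUp();
        driver.runLoad(3);
    }
}
```

Keeping the test action behind an interface like this is what lets the same code run in-process for debugging and on 200+ remote slaves unchanged.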
Links
NanoCloud• https://code.google.com/p/gridkit/wiki/NanoCloudTutorial• Maven Central: org.gridkit.lab:telecontrol-ssh:0.7.23• http://blog.ragozin.info/2013/01/remote-code-execution-in-java-made.html
ANT task• https://github.com/gridkit/gridant
ChTest (Coherence test tool)• https://code.google.com/p/gridkit/wiki/ChTest• Maven Central: org.gridkit.coherence-tools:chtest:0.2.4
Thank you
Alexey Ragozin [email protected]
http://blog.ragozin.info - my articles
http://code.google.com/p/gridkit
http://github.com/gridkit - my open source code
http://aragozin.timepad.ru - community events in Moscow
Managing artifacts
… a bunch of black magic to find the local repo and manage the classpath, as easy as …
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.8</version>
    <executions>
        <execution>
            <id>viconcurrent-0.7.15</id>
            <phase>test-compile</phase>
            <goals>
                <goal>get</goal>
            </goals>
            <configuration>
                <artifact>org.gridkit.lab:viconcurrent:0.7.15</artifact>
            </configuration>
        </execution>
    </executions>
</plugin>
Managing artifacts
How to get a needed artifact onto the local disk
- Maven will disallow two versions of the same artifact
- but we can trick it …
Transitive dependencies are not included, though.
ViNode node;
…
node.x(MAVEN).replace("org.gridkit.lab", "viconcurrent", "0.7.15");