Highload reactive server with Netty
Dmitriy Dumanskiy, CTO at Blynk
Java blog: https://habrahabr.ru/users/doom369/topics
DOU: https://dou.ua/users/DOOM/articles/
Makers problem
● Http/s
● Mqtt
● WebSockets
● Own binary protocol
Blynk
● 10,000 req/sec
● 3 VMs * 2 cores, $60
● 25% load
● 10k local installations
Why netty?
● Cassandra
● Apache Spark
● Elasticsearch
● Graylog
● Neo4j
● Vert.x
● HornetQ
● Infinispan
● Finagle
● Async-http-client
● Firebase
● Akka
● Couchbase
● Play framework
● Redisson
~700k servers
Why netty?
● Less GC
● Optimized for Linux based OS
● High performance buffers
● Well defined threading model
● HTTP, HTTP/2, SPDY, SCTP, TCP, UDP, UDT, MQTT, etc.
When to use?
● Performance is critical
● Own protocol
● Full control over network (so_reuseport, tcp_cork, tcp_fastopen, tcp_nodelay, etc)
● Game engines (agario, slither, minecraft)
● <3 reactive
Non-Blocking
● Few threads
● No context switching
● Low memory consumption
Non-Blocking
Diagram: many channels (new Channel, read/write) are multiplexed through a single Selector onto one Thread.
java.nio.channels.Selector

Selector selector = Selector.open(); // creating selector
channel.configureBlocking(false);
// registering channel with selector, listening for READ events only
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
while (true) {
    selector.select(); // blocks until we get some READ events
    // now we have channels with some data
    Set<SelectionKey> selectedKeys = selector.selectedKeys();
    Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
    while (keyIterator.hasNext()) {
        key = keyIterator.next();
        keyIterator.remove(); // the selected set is not cleared automatically
        if (key.isReadable()) {
            // do something with the data: key.channel() ...
        }
    }
}
Flow
Selector -> SelectionKey -> Channel -> ChannelPipeline

Flow
ChannelPipeline: fireEvent() -> invokeChannelRead() -> executor.execute() -> invokeChannelRead()
Minimal setup

ServerBootstrap b = new ServerBootstrap();
b.group(
        new NioEventLoopGroup(1),  // IO (boss) thread
        new NioEventLoopGroup())   // worker threads
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer() {...}); // pipeline init

ChannelFuture f = b.bind(8080).sync();
f.channel().closeFuture().sync();
Minimal setup
new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        final ChannelPipeline pipeline = ch.pipeline();
        pipeline.addLast(new MyLogicHere());
    }
};
ChannelPipeline
● Inbound event -> ChannelInboundHandler (CIHA)
● Outbound event -> ChannelOutboundHandler (COHA)
ChannelInboundHandler

public interface ChannelInboundHandler extends ChannelHandler {
    ...
    void channelRegistered(ChannelHandlerContext ctx);
    void channelActive(ChannelHandlerContext ctx);
    void channelRead(ChannelHandlerContext ctx, Object msg);
    void userEventTriggered(ChannelHandlerContext ctx, Object evt);
    void channelWritabilityChanged(ChannelHandlerContext ctx);
    ...
}
void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(new MyProtocolDecoder())
        .addLast(new MyProtocolEncoder())
        .addLast(new MyLogicHandler());
}
Own tcp/ip server
Inbound: Channel -> MyProtocolDecoder -> MyLogicHandler
Outbound: MyLogicHandler -> MyProtocolEncoder -> Channel
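For illustration, the decoder on that inbound path might look roughly like this (a minimal sketch; the 1-byte id + 2-byte length wire format and the MyMessage class are assumptions for the example, not Blynk's actual protocol):

public class MyProtocolDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 3) {
            return; // wait for the full header: 1 byte id + 2 bytes body length
        }
        in.markReaderIndex();
        byte messageId = in.readByte();
        int bodyLength = in.readUnsignedShort();
        if (in.readableBytes() < bodyLength) {
            in.resetReaderIndex(); // body not fully arrived yet, try again on the next read
            return;
        }
        String body = in.readCharSequence(bodyLength, CharsetUtil.UTF_8).toString();
        out.add(new MyMessage(messageId, body)); // hypothetical message class, consumed by MyLogicHandler
    }
}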
Handlers
● HttpServerCodec
● ChannelTrafficShapingHandler
● IdleStateHandler
● ReadTimeoutHandler
● ChunkedWriteHandler
● SslHandler
● LoggingHandler
● RuleBasedIpFilter
● StringDecoder
● JsonObjectDecoder
● Base64Decoder
● JZlibDecoder
● Lz4FrameDecoder
● ProtobufDecoder
● ObjectDecoder
● XmlFrameDecoder
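Most of these drop straight into the pipeline; for example, idle detection and read timeouts could be wired in like this (a sketch, the timeout values are arbitrary):

void initChannel(SocketChannel ch) {
    ch.pipeline()
        // fires IdleStateEvent after 60s without reads or 30s without writes
        .addLast(new IdleStateHandler(60, 30, 0))
        // raises ReadTimeoutException if nothing is read for 120 seconds
        .addLast(new ReadTimeoutHandler(120))
        .addLast(new MyLogicHandler());
}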
void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(new HttpRequestDecoder())
        .addLast(new HttpResponseEncoder())
        .addLast(new MyHttpHandler());
}
Http Server
void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(new HttpServerCodec())
        .addLast(new MyHttpHandler());
}
OR
void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(sslCtx.newHandler(ch.alloc()))
        .addLast(new HttpServerCodec())
        .addLast(new MyHttpHandler());
}
Https Server
void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(sslCtx.newHandler(ch.alloc()))
        .addLast(new HttpServerCodec())
        .addLast(new HttpContentCompressor())
        .addLast(new MyHttpHandler());
}
Https Server + content gzip
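The sslCtx used above is built once at startup and shared across channels; a minimal sketch (the certificate and key file names are placeholders):

// cert.pem / key.pem are placeholder names for the certificate chain and private key
SslContext sslCtx = SslContextBuilder
        .forServer(new File("cert.pem"), new File("key.pem"))
        .build();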
@Override
public void channelRead(Context ctx, Object msg) {
    // pass flow processing to the next handler
    super.channelRead(ctx, msg);
}
Pipeline flow
@Override
public void channelRead(Context ctx, Object msg) {
    // stop request processing
    return;
}
Pipeline flow
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof LoginMessage) {
        LoginMessage login = (LoginMessage) msg;
        if (isSuperAdmin(login)) {
            ctx.pipeline().remove(this);
            ctx.pipeline().addLast(new SuperAdminHandler());
        }
    }
}
Pipeline flow on the fly
public void channelRead(Context ctx, Object msg) {
    ChannelFuture cf = ctx.writeAndFlush(response);
    cf.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            future.channel().close();
        }
    });
}
Pipeline futures
@Override
public void channelRead(Context ctx, Object msg) {
    ChannelFuture cf = ctx.writeAndFlush(response);
    // close the connection after the message has been delivered
    cf.addListener(ChannelFutureListener.CLOSE);
}
Pipeline futures
@Override
public void channelRead(Context ctx, Object msg) {
    ...
    ChannelFuture cf = ctx.writeAndFlush(response);
    cf.addListener(future -> {
        ...
    });
}
Pipeline futures
public void channelRead(Context ctx, Object msg) {
    ChannelFuture cf = session.sendMsgToFriend(msg);
    cf.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            future.channel().writeAndFlush("Delivered!");
        }
    });
}
Pipeline futures
Pipeline blocking IO
Non blocking pools: IO Event Loops, Worker Event Loops
Blocking pools: DB, Mailing, File system
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        HttpRequest req = (HttpRequest) msg;
        if (req.method() == GET && req.uri().equals("/users")) {
            Users users = dbManager.userDao.getAllUsers();
            ctx.writeAndFlush(new Response(users));
        }
    }
}
Pipeline blocking IO
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        HttpRequest req = (HttpRequest) msg;
        if (req.method() == POST && req.uri().equals("/email")) {
            mailManager.sendEmail();
        }
    }
}
Pipeline blocking IO
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        HttpRequest req = (HttpRequest) msg;
        if (req.method() == GET && req.uri().equals("/property")) {
            String property = fileManager.readProperty();
            ctx.writeAndFlush(new Response(property));
        }
    }
}
Pipeline blocking IO
public void channelRead(Context ctx, Object msg) {
    ...
    blockingThreadPool.execute(() -> {
        Users users = dbManager.userDao.getAllUsers();
        ctx.writeAndFlush(new Response(users));
    });
}
Pipeline blocking IO
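The blockingThreadPool above is just a separate executor kept away from the event loops; one option is a plain JDK pool, another is Netty's EventExecutorGroup attached to specific handlers (a sketch, the pool size of 16 is an arbitrary example):

// plain JDK pool, used from inside channelRead() as in the example above
ExecutorService blockingThreadPool = Executors.newFixedThreadPool(16);

// or let Netty run a whole handler off the IO loop
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);
pipeline.addLast(blockingGroup, new MyBlockingDbHandler()); // hypothetical handler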
Pipeline blocking IO
● Thread.sleep()
● java.util.concurrent.*
● Intensive operations
● Any blocking IO (files, db, smtp, etc)
@Override
public void channelInactive(Context ctx) {
    HardwareState state = getState(ctx.channel());
    if (state != null) {
        ctx.executor().schedule(
            new DelayedPush(state),
            state.period, SECONDS
        );
    }
}
EventLoop is Executor!
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof FullHttpRequest) {
        FullHttpRequest request = (FullHttpRequest) msg;
        User user = sessionDao.checkCookie(request);
        ...
    }
    super.channelRead(ctx, msg);
}
Request state
private static AttributeKey<User> USER_KEY = AttributeKey.valueOf("user");
ctx.channel().attr(USER_KEY).set(user);
Request state
public void channelRead(Context ctx, Object msg) {
    if (msg instanceof FullHttpRequest) {
        FullHttpRequest request = (FullHttpRequest) msg;
        User user = sessionDao.checkCookie(request);
        ctx.channel().attr(USER_KEY).set(user);
    }
    super.channelRead(ctx, msg);
}
Request state
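Any later handler on the same channel can then pull the user back out of the attribute (a sketch; unauthorized() is a hypothetical response helper):

public void channelRead(Context ctx, Object msg) {
    User user = ctx.channel().attr(USER_KEY).get();
    if (user == null) {
        ctx.writeAndFlush(unauthorized()); // hypothetical 401 response
        return;
    }
    // proceed with an authenticated request
}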
if (isSsl(in)) {
    enableSsl(ctx);
} else {
    if (isGzip()) {
        enableGzip(ctx);
    } else if (isHttp(in)) {
        switchToHttp(ctx);
    }
}
Port unification
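The isSsl(in) check can lean on Netty's SslHandler.isEncrypted(); the HTTP check below is only a naive sketch that peeks at the first bytes of the request (an assumption, not the canonical implementation):

private static boolean isSsl(ByteBuf in) {
    // SslHandler can recognize a TLS/SSL record from the first 5 bytes
    return in.readableBytes() >= 5 && SslHandler.isEncrypted(in);
}

private static boolean isHttp(ByteBuf in) {
    if (in.readableBytes() < 2) {
        return false;
    }
    // naive check: a plain-text HTTP request starts with its method name
    int first = in.getUnsignedByte(in.readerIndex());
    int second = in.getUnsignedByte(in.readerIndex() + 1);
    return (first == 'G' && second == 'E')   // GET
        || (first == 'P' && second == 'O')   // POST
        || (first == 'P' && second == 'U')   // PUT
        || (first == 'H' && second == 'E')   // HEAD
        || (first == 'D' && second == 'E');  // DELETE
}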
Back pressure
if (channel.isWritable()) {
    channel.writeAndFlush(msg);
}
Back pressure
BackPressureHandler
coming soon...
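Until a dedicated handler lands, a common workaround is to stop reading from the socket while the outbound buffer is above the high watermark (a minimal sketch, not an official BackPressureHandler):

public class SimpleBackPressureHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        // pause reads while the outbound buffer is full, resume once it drains
        ctx.channel().config().setAutoRead(ctx.channel().isWritable());
        ctx.fireChannelWritabilityChanged();
    }
}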
Performance
https://www.techempower.com/benchmarks/#section=data-r13&hw=ph&test=plaintext
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-transport-native-epoll</artifactId>
    <version>${netty.version}</version>
    <classifier>${os}</classifier>
</dependency>
Native transport
Bootstrap b = new Bootstrap();
b.group(new EpollEventLoopGroup());
b.channel(EpollSocketChannel.class);
Native transport
SslContextBuilder.forServer(...).sslProvider(SslProvider.OPENSSL);
JNI OpenSslEngine
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-tcnative-boringssl-static</artifactId>
    <version>${netty.boring.ssl.version}</version>
    <classifier>${os}</classifier>
</dependency>
JNI OpenSslEngine
● netty-tcnative
● netty-tcnative-libressl
● netty-tcnative-boringssl-static
JNI OpenSslEngine
Own ByteBuf
● Reference counted
● Pooling by default
● Direct memory by default
● LeakDetector by default
● Reduced branches, range-checks
Own ByteBuf
● ByteBufAllocator.buffer(size);
● ctx.alloc().buffer(size);
● channel.alloc().buffer(size);
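Because buffers are reference counted, anything allocated but never written to a channel has to be released by hand, otherwise the LeakDetector will eventually complain (a short sketch; process() is a hypothetical consumer):

ByteBuf buf = ctx.alloc().buffer(16); // pooled and direct by default
try {
    buf.writeLong(System.currentTimeMillis());
    process(buf); // hypothetical consumer that does not take ownership
} finally {
    buf.release(); // hand the buffer back to the pool
}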
Less system calls
for (Message msg : messages) {
    ctx.writeAndFlush(msg);
}
Less system calls
for (Message msg : messages) {
    ctx.write(msg);
}
ctx.flush();
Thread Model
ChannelFuture inCf = ctx.deregister();
inCf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture cf) {
targetLoop.register(cf.channel())
.addListener(completeHandler);
}
});
Reusing Event Loop
new ServerBootstrap().group(
    new EpollEventLoopGroup(1),
    new EpollEventLoopGroup()
).bind(80);
Reusing Event Loop

EventLoopGroup boss = new EpollEventLoopGroup(1);
EventLoopGroup workers = new EpollEventLoopGroup();
new ServerBootstrap().group(
boss,
workers
).bind(80);
new ServerBootstrap().group(
boss,
workers
).bind(443);
Use direct buffers
ctx.writeAndFlush(
new ResponseMessage(messageId, OK)
);
Use direct buffers
ByteBuf buf = ctx.alloc().buffer(3); // pooled
buf.writeByte(messageId);
buf.writeShort(OK);
ctx.writeAndFlush(buf);
Less allocations
ByteBuf msg = makeResponse(...);
msg.retain(targets.size() - 1);

for (Channel ch : targets) {
    ch.writeAndFlush(msg);
}
Void promise
ctx.writeAndFlush(
response
);
Void promise
ctx.writeAndFlush(
response, ctx.voidPromise()
);
Reuse handlers
@Sharable
public class StringDecoder extends MessageToMessageDecoder<ByteBuf> {
...
}
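A @Sharable handler is stateless, so one instance can be created up front and added to every pipeline instead of allocating a new handler per connection (a sketch):

private static final StringDecoder SHARED_STRING_DECODER = new StringDecoder();

void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(SHARED_STRING_DECODER) // same instance for every channel
        .addLast(new MyLogicHandler());
}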
Prefer context

ctx.channel().writeAndFlush(); // starts at the tail, traverses every outbound handler

ctx.writeAndFlush(); // starts from the current handler, shorter path
Simpler - faster
● ChannelInboundHandlerAdapter: does nothing, but fast
● ByteToMessageDecoder: does some work, but slower
● ReplayingDecoder: does the job for you, but slowest (see the sketch below)
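The trade-off is easiest to see on a simple length-prefixed frame (a sketch assuming a 4-byte length prefix): ByteToMessageDecoder makes you guard readableBytes() yourself, while ReplayingDecoder does that bookkeeping for you by replaying decode() on underflow, which costs performance.

public class LengthFieldDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return; // length prefix not here yet
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex(); // wait for the rest of the frame
            return;
        }
        out.add(in.readRetainedSlice(length));
    }
}

public class ReplayingLengthFieldDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        // ReplayingDecoder re-runs this method until enough bytes have arrived
        int length = in.readInt();
        out.add(in.readRetainedSlice(length));
    }
}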
Turn off leak detection
ResourceLeakDetector.setLevel(
ResourceLeakDetector.Level.DISABLED);
What else?
● AsciiString
● FastThreadLocal
● Unsafe
● Optimized Encoders
Summary
Pros:
● Really fast
● Low GC load
● Flexible
● Rapidly evolving
● Cool support
Cons:
● Hard
● Memory leaks
● Still have issues
https://github.com/blynkkk/blynk-server