MySQL 5.7 Release Candidate -- May 30th, 2015 at CakeFEST

Transcript

MySQL 5.7

Dave Stokes, MySQL Community Manager

Oracle Corporation

[email protected] @Stoker

Slideshare.net/davestokes

Opensourcedba.wordpress.com

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle. Take anything I say about 5.7 'with a grain of salt' until it reaches GA status!

Happy Birthday MySQL!!!

● MySQL is 20 years old!

● InnoDB is 10!

So, what is in MySQL 5.7?

MySQL 5.7 Release Candidate

● MySQL 5.7.7 – April 8th 2015

● New Features

● Patches

● Contributions

● Enhancements

● http://dev.mysql.com/doc/relnotes/mysql/5.7/en/ for the details

JSON as a Data Type

mysql> CREATE TABLE employees (data JSON);
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO employees VALUES ('{"id": 1, "name": "Jane"}');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO employees VALUES ('{"id": 2, "name": "Joe"}');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM employees;
+---------------------------+
| data                      |
+---------------------------+
| {"id": 1, "name": "Jane"} |
| {"id": 2, "name": "Joe"}  |
+---------------------------+
2 rows in set (0.00 sec)

JSON Data Type

● Document Validation

– Only valid JSON documents can be stored in a JSON column, so you get automatic validation of your data. If you try to store an invalid JSON document in a JSON column, you will get an error.

● Efficient Access

– A JSON document stored in a JSON column is stored in an optimized binary format that allows quicker access to object members and array elements.

– The binary format of a JSON column contains a preamble with a lookup table. The lookup table has pointers to every key/value pair in the JSON document, sorted on the key. This allows the JSN_EXTRACT function to perform a binary search for the ‘name’ key in the table and read the corresponding value directly, without having to parse the ‘id’ key/value pair that precedes it within the JSON document.
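As a sketch of the lookup in action, against the employees table above (function names as shipped in the 5.7.7 RC; they were later renamed from JSN_* to JSON_* before GA):

```sql
-- Pull one member out of each document via the binary lookup table;
-- the 'id' pair before it is never parsed
SELECT jsn_extract(data, '$.name') FROM employees;
-- "Jane"
-- "Joe"
```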

JSON Data Type Functions

● Manipulating JSON documents:

– jsn_array()

– jsn_object()

– jsn_insert()

– jsn_remove()

– jsn_set()

– jsn_replace()

– jsn_append()

– jsn_merge()

– jsn_extract()

● JSON Query:

– JSN_SEARCH()

– JSN_CONTAINS()

– JSN_CONTAINS_PATH()

– JSN_VALID()

– JSN_TYPE()

– JSN_KEYS()

– JSN_LENGTH()

– JSN_DEPTH()

– JSN_UNQUOTE()

– JSN_QUOTE()
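A minimal sketch combining a few of the functions above (again using the JSN_* names from the 5.7.7 RC; the dept path here is a made-up example):

```sql
-- Build a document from scalars
SELECT jsn_object('id', 3, 'name', 'Ann');

-- Add or overwrite a member in stored documents
UPDATE employees
   SET data = jsn_set(data, '$.dept', 'Sales')
 WHERE jsn_extract(data, '$.id') = 1;

-- Sanity checks
SELECT jsn_valid(data), jsn_keys(data) FROM employees;
```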

JSON via HTTP Plug-in

● MySQL has a new plugin that lets HTTP clients and JavaScript users connect to MySQL using HTTP.

– The development preview brings three APIs:

● Key-document for nested JSON documents

● CRUD for JSON-mapped SQL tables

● SQL with JSON replies

Security

● Secure by default

– You will get a root password

– Bye-bye anonymous account

● User='' and password=''

– No test database

– Password rotation at your control

● Proxy logins

● The C client library now attempts to establish an SSL connection by default whenever the server is enabled to support SSL

Performance Schema & SYS Schema

● You asked for SYS Schema included by default

● Better instrumentation

– Can be turned on/off, instruments on/off

– Cost 2.5-3% Overhead

– Similar to Oracle V$ variables

– Use MySQL Workbench for dashboard

SYS Schema

● The Performance Schema feature is an amazing resource for runtime instrumentation within MySQL. There is a lot of information available, but sometimes it is hard to distill. The MySQL SYS schema builds on both the performance_schema and INFORMATION_SCHEMA databases, exposing a set of views, functions, and procedures modeled directly for many day-to-day administrator tasks, such as:

– Analyzing user load

– Analyzing database object load

– Analyzing resource utilization

– Digging into poorly performing SQL

– Deeply analyzing what connections are doing
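For example, two of the sys views that cover the tasks above (a sketch; run against a server with the sys schema installed):

```sql
-- Digging into poorly performing SQL: statements doing full table scans
SELECT query, exec_count, no_index_used_count
  FROM sys.statements_with_full_table_scans
 LIMIT 5;

-- Analyzing user load: per-user statement and I/O summary
SELECT * FROM sys.user_summary;
```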

Spatial Support

● Working closely with Boost.Geometry

● Superset of OpenGIS

Replication

● Multi-threaded – Now parallel within a database

● Multi-source replication

● CHANGE REPLICATION FILTER – Without restarting

● CHANGE MASTER – Without restarting

● SHOW SLAVE STATUS – Non blocking

● Performance Schema – Replication details
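A sketch of the online filter change (the SQL thread must be stopped, but the server itself keeps running; db1 and db2 are placeholder schema names):

```sql
-- Change a replication filter without restarting the slave server
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_DO_DB = (db1, db2);
START SLAVE SQL_THREAD;

-- Status check no longer blocks behind a stalled I/O thread
SHOW SLAVE STATUS\G
```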

Generated columns

● Two kinds of Generated Columns:

– virtual (default)

● the column will be calculated on the fly when a record is read from the table

– stored

● the column will be calculated when a new record is written to the table, and after that it will be treated as a regular field
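Both kinds in one table, as a sketch (table and column names are made up):

```sql
CREATE TABLE rectangles (
  width  DOUBLE,
  height DOUBLE,
  -- computed on the fly at read time
  area   DOUBLE GENERATED ALWAYS AS (width * height) VIRTUAL,
  -- computed at write time, then stored like a regular field
  perim  DOUBLE GENERATED ALWAYS AS (2 * (width + height)) STORED
);

INSERT INTO rectangles (width, height) VALUES (3, 4);
SELECT area, perim FROM rectangles;  -- 12, 14
```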

Better upgrade (and downgrade)

● Yes, we know upgrading can be painful!

– Upgrading (and downgrading) now part of our daily QA Testing.

Better connection handling

● In some application scenarios (e.g. PHP applications) client connections have very short life spans, perhaps only executing a single query. This means that the time spent processing connects and disconnects can have a large impact on the overall performance. In 5.7 we have offloaded thread initialization and network initialization to a worker thread (WL#6606) and more than doubled MySQL’s ability to do high frequency connect/disconnect cycles, from 26K to 56K connect/disconnect cycles per second. See also Jon Olav Hauglid’s article “Improving connect/disconnect performance”.

Bulk Data Load

● Bulk Load for Create Index (WL#7277). This work implements bulk sorted index builds, thus making CREATE INDEX operations much faster. Prior to this work InnoDB looped through the base table and created one index record at a time for each record in the base table. After this work InnoDB reads many records from the base table, sorts the records using the index key, and then creates a chunked set of index records in one single bulk operation.
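Nothing changes in the syntax; any ordinary secondary index build now takes the bulk sorted path (staff is a placeholder table):

```sql
-- Both forms benefit from the sorted bulk build in 5.7
CREATE INDEX idx_last_name ON staff (last_name);
ALTER TABLE staff ADD INDEX idx_hired (hire_date);
```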

InnoDB Full Text

● Better than MyISAM version

● CJK (Chinese, Japanese, Korean) support

– no fixed separators for individual words

– each word can be composed of multiple characters

● MeCab – Morphological parser

– Character sets: eucjpms (ujis), cp932 (sjis), and utf8 (utf8mb4)

Set levels of error logging

● --log_error_verbosity=3

● Pick your level

– Errors only

– Errors and Warnings

– Errors, Warnings, and Notes

● Set on the fly!
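The on-the-fly part, as a sketch:

```sql
-- 1 = errors only, 2 = errors + warnings, 3 = errors + warnings + notes
SET GLOBAL log_error_verbosity = 3;
SHOW GLOBAL VARIABLES LIKE 'log_error_verbosity';
```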

InnoDB

● INNODB_BUFFER_POOL_SIZE – Dynamic

● Transportable Table Spaces

● INNODB_BUFFER_POOL_DUMP_PCT – percentage of most recently used buffer pool pages to save at dump time
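Both variables can be set online in 5.7; a sketch (the 8 GB figure is an arbitrary example):

```sql
-- Resize the buffer pool without restarting; the value is rounded up
-- to a multiple of chunk size * instances
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;

-- Dump only the most recently used 25% of pages
SET GLOBAL innodb_buffer_pool_dump_pct = 25;
```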

MySQL Fabric

● If you can set up MySQL Replication, you can set up MySQL Fabric

● Series of scripts to set up server farms for

– High Availability

– Sharding

● The Connector has the 'smarts' about the farms; no need to change the application to re-shard or when a master fails

MySQL Utilities

● 28 Scripts written in Python

– Automatic server fail over

– Grep for process and kill it

– Grep for column names

– Check replication

– Copy databases

– Copy grants/permissions

– Diff

– Disk usage

– Clone Servers

Only 40 Minutes – and a lot more features!

Q&A

[email protected] @Stoker

Slideshare.net/davestokes
