Erlang Central

MySQL native client

From ErlangCentral Wiki

Revision as of 19:57, 30 September 2011


Introduction

The Erlang client for MySQL server is a native client: it connects to the server directly and speaks the low-level (network-socket-level) connection protocol. The advantage of a native client is speed and performance. The disadvantage is that the client's design is tightly coupled to the MySQL connection protocol, although the MySQL team changes that protocol conservatively.

Design

The MySQL client is implemented as an Erlang/OTP application. The root supervisor of the client manages a set of datasource processes, each of which can point to a different MySQL server, or to the same server with a different configuration. To create a datasource process (or object) we can use a function from the client API:

my:new_datasource(my_ds, #datasource{
  host = "localhost",
  port = 3306,
  database = "",
  user = "root",
  password = "root"}
).

A datasource is a supervisor in its own turn: it manages one connection pool process and a few active connection processes to the server. The connection pool stores reusable TCP/IP sockets and tracks their life cycles. When a new connection process is created, the pool tries to provide an existing but idle socket. If the set of idle sockets is empty, the pool creates a new one. When a connection process is closed, its socket becomes idle and returns to the pool's set of idle sockets.
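The socket-reuse decision described above can be sketched like this. This is a hypothetical illustration, not the client's actual code: the #pool{} record, acquire_socket/1, and release_socket/2 are invented for this sketch.

```erlang
-record(pool, {host, port, idle = []}).

%% Hypothetical sketch of the pool's socket-reuse decision:
%% hand out an idle socket if one exists, otherwise open a new one.
acquire_socket(#pool{idle = [Sock | Rest]} = Pool) ->
    {Sock, Pool#pool{idle = Rest}};
acquire_socket(#pool{idle = [], host = Host, port = Port} = Pool) ->
    {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, false}]),
    {Sock, Pool}.

%% When a connection process closes, its socket returns to the idle set.
release_socket(Sock, #pool{idle = Idle} = Pool) ->
    Pool#pool{idle = [Sock | Idle]}.
```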

Supported features

The client supports the following client features of MySQL server:

  • configuring the client connection using the #client_options{} record
  • compression mode for data transfer over the connection
  • transfer of very long packets as multiple sub-packets
  • prepared statement execution
  • transfer of long statement parameters (blob parameters)
  • fetching from a server-side cursor previously generated by a prepared statement
  • transaction execution

Getting started

Obtain connection

We have already created a datasource object above, so now we can obtain a connection to our server:

Conn = my:get_connection(my_ds)

Create database

Let's create a new database (or schema) on the server:

my:execute_query(Conn, "CREATE DATABASE IF NOT EXISTS testDB")

Create table

The next step is table creation:

my:execute_query(Conn, 
  "CREATE TABLE testDB.sample_table ("
    "id bigint(20) NOT NULL AUTO_INCREMENT,"
    "name varchar(45) DEFAULT NULL,"
    "PRIMARY KEY (id)"
  ") ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8"
)

Insert and select

It is time to insert something into the new table:

my:execute_query(Conn, 
  "INSERT INTO testDB.sample_table(name) "
  "VALUES ('Alex'), ('John')"
)

And finally we can extract these rows from the table:

{Metadata, Rows} = my:execute_query(Conn, "SELECT * FROM testDB.sample_table"),
io:format("Result: ~p~n", [Rows])

On the console we can see something like this:

Result: [{rs_row_data, [1,"Alex"]},{rs_row_data, [2,"John"]}]

The query returns a tuple containing two members. The first is a metadata record that describes the table fields, and the second is a list of records representing the rows of the query result. Each row contains the values of the fields in the SELECT statement; in our case these are the 'id' and 'name' fields.
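As a sketch, assuming the {rs_row_data, [...]} row shape shown above, the result rows can be processed with ordinary pattern matching:

```erlang
%% Sketch: iterate over the rows returned by execute_query.
%% Assumes the {rs_row_data, [Id, Name]} row shape shown above.
{_Metadata, Rows} = my:execute_query(Conn, "SELECT * FROM testDB.sample_table"),
lists:foreach(
  fun({rs_row_data, [Id, Name]}) ->
      io:format("id=~p name=~s~n", [Id, Name])
  end,
  Rows)
```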

Prepared statement

Let's prepare a very simple statement:

Handle = my:get_prepared_statement_handle(
  Conn,                                               % - connection handle
  "SELECT name FROM testDB.sample_table WHERE id = ?" % - SQL prepared statement with ? placeholders
)

The function returns a prepared statement handle for subsequent operations. Now execute it:

{Metadata, Rows} = my:execute_statement(
  Conn,                    % - connection handle
  Handle,                  % - prepared statement handle
  [?MYSQL_TYPE_LONGLONG],  % - list of parameter types
  [1]                      % - list of parameter values, corresponding to the statement's ? placeholders
),
io:format("Result: ~p~n", [Rows])

We need to pass the statement's parameter types and values to the function that executes the prepared statement. The result should be:

Result: [{rs_row_data, ["Alex"]}]

Compressed mode

The compression protocol is supported by the client if the given MySQL instance supports it. To activate this feature, set the compress field in the #client_options record and pass the record to the datasource definition when creating a new datasource object.

my:new_datasource(
  #datasource{
    name = my_ds,
    host = "localhost",
    port = 3306,
    database = "",
    user = "root",
    password = "root",
    flags = #client_options{compress=1}
  }
),
Conn = my:get_connection(my_ds)

The established connection Conn then talks to the server with compression. Note that when using compression we save on packet size but spend extra processor time.

Blob transfer

The MySQL client/server protocol allows transferring data for a BLOB-type field as a separate operation. Suppose we have a column of LONGBLOB type named longblob_col in our sample_table above:

  CREATE TABLE testDB.sample_table (
    id bigint(20) NOT NULL AUTO_INCREMENT,
    longblob_col longblob,
    name varchar(45) DEFAULT NULL,
    PRIMARY KEY (id)
  ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8

Suppose we need to update the field with a huge chunk of data. The command is part of the prepared statement execution cycle, so we need to prepare the statement first:

  Handle = my:get_prepared_statement_handle(Conn, 
    "UPDATE testDB.sample_table SET longblob_col= ? WHERE id = ?"
  )

After that we can send the server a long block of data of 1,000,000 bytes:

my:send_statement_long_parameter(Conn, Handle, 0, <<16#AA:8000000>>),

The third parameter of the function is the position of the given parameter in the prepared statement. We can apply send_statement_long_parameter/4 several times, and all the chunks will be merged into one huge data block. Now that we have completed sending the statement parameter value to the server, we can finally execute the statement:

my:execute_statement(Conn, Handle, [?LONG_BLOB, ?LONG], [null, 1]),

During execution we do not need to send the blob parameter value, because it is already on the server.
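The chunked transfer mentioned above can be sketched as follows. This uses only the documented send_statement_long_parameter/4 call; the 1,000-byte chunk size and the byte pattern are arbitrary choices for illustration.

```erlang
%% Sketch: stream a 1,000,000-byte blob in 1,000 chunks of 1,000 bytes each.
%% The server merges successive chunks sent for parameter position 0.
Chunk = binary:copy(<<16#AA>>, 1000),  %% 1,000 arbitrary bytes per chunk
lists:foreach(
  fun(_) ->
      my:send_statement_long_parameter(Conn, Handle, 0, Chunk)
  end,
  lists:seq(1, 1000))
```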

Cursor fetch

After a statement is prepared we can execute it in two modes. The first kind of execution is the default and immediately returns the query's result set. The second does not return a result set but creates a cursor on the server side. To retrieve data from this cursor we can use the fetch_statement command like this:

Handle = my:get_prepared_statement_handle(
  Conn, 
  "SELECT * FROM testDB.sample_table WHERE id < ?"
),

{Metadata,_} = my:execute_statement(
  Conn, 
  Handle, 
  [?LONGLONG], 
  [1], 
  ?CURSOR_TYPE_READ_ONLY, 
  true
),

{_,R} = my:fetch_statement(Conn, Handle, Metadata, 2),

io:format("Result: ~p~n", [R])

The execute command runs the prepared statement in cursor mode and does not return any rows, but we need its metadata record for the following command. The next line is a fetch command that returns the first 2 rows from the server-side cursor. A fetch command returns only the binary packets of the result set and skips the field metadata, so we have to pass the metadata record as a parameter to the fetch command in order to parse the row data properly. The result should be:

Result: [{rs_row_data, [2,"John"]}]

Author

User:Krasnopolski

20:57, 30 September 2011 (BST)