
MySQL Proxy learns R/W Splitting



The trunk version of MySQL Proxy 0.6.0 just learned to change backends within a running connection. It is now up to the Lua script to decide which backend a request should be sent to.

We wrote a complete tutorial which covers everything from:

  • building and maintaining a connection pool with high and low water marks
  • transparent authentication (no extra auth against the proxy)
  • deciding at the query level which backend to use

to implementing a transparent read/write splitter which sends all non-transactional queries to the slaves and the rest to the master.

As the splitting is in the hands of the Lua scripting level, you can use the same mechanism to implement sharding or other rules to route traffic at the statement level.

Connection Pooling

For R/W splitting we need a connection pool. We only switch to another backend if we already have an authenticated connection open to that backend.

The MySQL protocol first does a challenge-response handshake. When we enter the query/result stage it is too late to authenticate new connections. We have to make sure that we have enough open connections to operate nicely.

In the keepalive tutorial we spent quite some code on connection management. The whole connect_server() function exists only to create new connections for all pools.

  1. create one connection to each backend
  2. create new connections until we reach min-idle-connections
  3. if the two above conditions are met, use a connection from the pool

Let's take a glimpse at the code:

<code>--- config
--
-- connection pool
local min_idle_connections = 4
local max_idle_connections = 8

---
-- get a connection to a backend
--
-- as long as we don't have enough connections in the pool, create new connections
--
function connect_server()
    -- make sure that we connect to each backend at least once to
    -- keep the connections to the servers alive
    --
    -- on read_query we can switch the backends again to another backend

    local least_idle_conns_ndx = 0
    local least_idle_conns = 0

    for i = 1, #proxy.servers do
        local s = proxy.servers[i]

        if s.state ~= proxy.BACKEND_STATE_DOWN then
            -- try to connect to each backend once at least
            if s.idling_connections == 0 then
                proxy.connection.backend_ndx = i
                return
            end

            -- try to open at least min_idle_connections
            if least_idle_conns_ndx == 0 or
               ( s.idling_connections < min_idle_connections and
                 s.idling_connections < least_idle_conns ) then
                least_idle_conns_ndx = i
                least_idle_conns = s.idling_connections
            end
        end
    end

    if least_idle_conns_ndx > 0 then
        proxy.connection.backend_ndx = least_idle_conns_ndx
    end

    if proxy.connection.backend_ndx > 0 and
       proxy.servers[proxy.connection.backend_ndx].idling_connections >= min_idle_connections then
        -- we have 4 idling connections in the pool, that's good enough
        return proxy.PROXY_IGNORE_RESULT
    end

    -- open a new connection
end
</code>

The real trick is in read_auth_result():

<code>---
-- put the authed connection into the connection pool
function read_auth_result(packet)
    -- disconnect from the server
    proxy.connection.backend_ndx = 0
end
</code>

With proxy.connection.backend_ndx = 0 we disconnect from the current backend (Lua starts indexing at 1; 0 is out of bounds). If a second connection comes in now, it can use this authed connection too, as it is idling in the pool.

By setting proxy.connection.backend_ndx you control which backend your packets are sent to. A backend is defined as an entry in the proxy.servers table. Each connection has (zero or) one backend. Each backend has an address, a type (RW or RO) and a state (UP or DOWN).
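For illustration, a standalone debugging script (not part of the tutorial script) could dump these fields from a connect_server() hook; this is just a sketch using the fields shown in the code above:

<code>-- debugging sketch: list all backends with their state
-- (uses only the proxy.servers fields described above)
function connect_server()
    for i = 1, #proxy.servers do
        local s = proxy.servers[i]

        print("backend " .. i ..
              ": type=" .. s.type ..       -- BACKEND_TYPE_RW or BACKEND_TYPE_RO
              " state=" .. s.state ..      -- UP or DOWN
              " idling=" .. s.idling_connections)
    end
end
</code>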

As we might also end up with too many open connections in the pool, we close them again on client shutdown if necessary:

<code>---
-- close the connections if we have enough connections in the pool
--
-- @return nil - close connection
--         IGNORE_RESULT - store connection in the pool
function disconnect_client()
    if proxy.connection.backend_ndx == 0 then
        -- currently we don't have a server backend assigned
        --
        -- pick a server which has too many idling connections and close one
        for i = 1, #proxy.servers do
            local s = proxy.servers[i]

            if s.state ~= proxy.BACKEND_STATE_DOWN and
               s.idling_connections > max_idle_connections then
                -- try to disconnect a backend
                proxy.connection.backend_ndx = i
                return
            end
        end
    end
end
</code>

We only search for a backend which has too many idling connections and pick it before we enter the default behaviour of disconnect_client(): shutting down the server connection. The check if proxy.connection.backend_ndx == 0 then means "we don't have a backend associated right now". We already saw this in read_auth_result().

Read/Write Splitting

That is our maintenance of the pool: connect_server() adds new auth'ed connections to the pool, disconnect_client() closes them again. The read/write splitting is part of the query/result cycle:

<code>-- read/write splitting
function read_query( packet )
    if packet:byte() == proxy.COM_QUIT then
        -- don't send COM_QUIT to the backend. We manage the connection
        -- in all aspects.
        proxy.response = {
            type = proxy.MYSQLD_PACKET_ERR,
            errmsg = "ignored the COM_QUIT"
        }

        return proxy.PROXY_SEND_RESULT
    end

    -- as we switch between different connections we have to make sure that
    -- we always use the same DB
    if packet:byte() == proxy.COM_INIT_DB then
        -- default_db is connection global
        default_db = packet:sub(2)
    end

    if proxy.connection.backend_ndx == 0 then
        -- we don't have a backend right now
        --
        -- let's pick a master as a good default
        for i = 1, #proxy.servers do
            local s = proxy.servers[i]

            if s.idling_connections > 0 and
               s.state ~= proxy.BACKEND_STATE_DOWN and
               s.type == proxy.BACKEND_TYPE_RW then
                proxy.connection.backend_ndx = i
                break
            end
        end
    end

    if packet:byte() == proxy.COM_QUERY and default_db then
        -- how can I know the db of the server connection ?
        proxy.queries:append(2, string.char(proxy.COM_INIT_DB) .. default_db)
    end

    proxy.queries:append(1, packet)
</code>

Up to now it is only making sure that we behave nicely:

  • don't forward COM_QUIT to the backend as it would close the connection on us
  • intercept the COM_INIT_DB to know which DB the client wants to work on. If we switch to another backend we have to make sure the same DB is used.

The read/write splitting is now following a simple rule:

  • send all non-transactional SELECTs to a slave
  • everything else goes to the master

We are still in read_query():

<code>    -- read/write splitting
    --
    -- send all non-transactional SELECTs to a slave
    if is_in_transaction == 0 and
       packet:byte() == proxy.COM_QUERY and
       packet:sub(2, 7) == "SELECT" then
        local max_conns = -1
        local max_conns_ndx = 0

        for i = 1, #proxy.servers do
            local s = proxy.servers[i]

            -- pick a slave which has some idling connections
            if s.type == proxy.BACKEND_TYPE_RO and
               s.idling_connections > 0 then
                if max_conns == -1 or
                   s.connected_clients < max_conns then
                    max_conns = s.connected_clients
                    max_conns_ndx = i
                end
            end
        end

        -- we found a slave which has an idling connection
        if max_conns_ndx > 0 then
            proxy.connection.backend_ndx = max_conns_ndx
        end
    else
        -- send to master
    end

    return proxy.PROXY_SEND_QUERY
end
</code>

If we find a slave host which has an idling connection, we pick it. If all slaves are busy or down, we just send the query to the master.

As soon as we don't need the connection anymore we give it back to the pool:

<code>---
-- as long as we are in a transaction keep the connection
-- otherwise release it so another client can use it
function read_query_result( inj )
    local res = assert(inj.resultset)
    local flags = res.flags

    if inj.id ~= 1 then
        -- ignore the result of the USE <default_db>
        return proxy.PROXY_IGNORE_RESULT
    end

    is_in_transaction = flags.in_trans

    if is_in_transaction == 0 then
        -- release the backend
        proxy.connection.backend_ndx = 0
    end
end
</code>

The MySQL protocol is nice and offers us an in-transaction flag. It reflects the state of the transaction and works across all engines. If you want to make sure that several statements go to the same backend, open a transaction with BEGIN, no matter which storage engine you use.
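For example, a client session could pin a group of statements to one backend like this (a hypothetical session; the table and column names are made up, the point is that the in-transaction flag keeps the connection on the master until COMMIT):

<code>BEGIN;                                       -- in-transaction flag set: connection stays on the master
SELECT balance FROM accounts WHERE id = 1;   -- still on the master, despite being a SELECT
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;                                      -- flag cleared: the connection is released to the pool
</code>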

Possible extensions

While we are at this part of the code, think about another use case:

  • if the master is down, ban all writing queries and only allow reading selects against the slaves.

It keeps your site up and running even if your master is gone. You only have to handle errors on write-statements and transactions.
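A rough sketch of that idea, using the same proxy.* fields as the code above (the error message and the simplistic "starts with SELECT" check are our own choices here, not part of the tutorial):

<code>-- inside read_query(), before picking a backend:
-- check whether any RW backend is still up
local master_is_up = false
for i = 1, #proxy.servers do
    local s = proxy.servers[i]

    if s.type == proxy.BACKEND_TYPE_RW and
       s.state ~= proxy.BACKEND_STATE_DOWN then
        master_is_up = true
        break
    end
end

if not master_is_up and
   (packet:byte() ~= proxy.COM_QUERY or packet:sub(2, 7) ~= "SELECT") then
    -- master is gone: refuse everything that isn't a plain SELECT
    proxy.response = {
        type = proxy.MYSQLD_PACKET_ERR,
        errmsg = "master is down, only SELECTs are allowed"
    }

    return proxy.PROXY_SEND_RESULT
end
</code>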

Known Problems

We might have a race condition where an idling connection closes before we can use it. In that case we are in trouble right now and will close the connection to the client.

To handle this we will later have to add queuing of connections, waking them up when a connection becomes available again.

Next Steps

Testing, testing, testing.

<code>$ mysql-proxy \
    --proxy-backend-addresses=10.0.0.1:3306 \
    --proxy-read-only-backend-addresses=10.0.0.10:3306 \
    --proxy-read-only-backend-addresses=10.0.0.12:3306 \
    --proxy-lua-script=examples/tutorial-keepalive.lua
</code>

The above code works for my tests, but I don't have any real load. Nor can I create all the error-cases you have in your real-life setups. Please send all your comments, concerns and ideas to the MySQL Proxy forum.

Another upcoming step is externalizing all the load-balancer code and moving it into modules, to make the code easier to understand and reusable.
