Saturday, November 7, 2009

8.5. Copying Databases and Starting Replication

If you're setting up replication with an existing server that already
contains data, you will need to make an initial backup of the databases and copy the backup to the slave server. I'll list
the recommended method first, followed by some alternatives and their
limitations.

To get a snapshot of the database in a consistent state, you need to
shut down the server while you make a copy of the data, or at least
prevent users from changing data. Considering that once you set up
replication you may never have to shut down your master server for backups
again, explain to management that it's worth inconveniencing the users
this one time to get a clean, consistent backup. The following sections
will explain how to lock the tables. Note that you can allow users to make
changes as soon as your copy is made. If they make changes before
replication starts, MySQL can easily recognize and incorporate those
changes into the slave.

8.5.1. Using mysqldump

This utility, described in Chapter 16,
creates a file of SQL statements that can later be executed to recreate
databases and their contents. For the purposes of setting up
replication, use the following options while running the utility from
the command line on the master server:

mysqldump --user=root --password=my_pwd \
--extended-insert --all-databases \
--ignore-table=mysql.user --master-data > /tmp/backup.sql


The result is a text file (backup.sql)
containing SQL statements to create all of the master's databases and
tables and insert their data. Here is an explanation of some of the
special options shown:


--extended-insert

This option creates multiple-row INSERT
statements and thereby makes the resulting dump file smaller. It also
allows the backup to run faster.
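To illustrate the difference, here is a sketch of the kind of statements the dump file contains with and without this option (the table name and data are invented for the example):

```sql
-- Without --extended-insert: one INSERT statement per row
INSERT INTO clients VALUES (1,'Marie');
INSERT INTO clients VALUES (2,'Jean');

-- With --extended-insert: one multiple-row INSERT statement
INSERT INTO clients VALUES (1,'Marie'),(2,'Jean');
```

Fewer statements means less text to parse and execute, which is why the dump file is smaller and restores faster.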


--ignore-table

This option is used here so that usernames and passwords
won't be copied. This is a good security precaution if the slave
will have different users, and especially if it will be used only
for backups of the master. Unfortunately, there is no easy way to
exclude the entire mysql database containing
user information. You could exclude all the tables in that database,
but each one has to be listed with a separate option, and that
becomes cumbersome. The only table that contains passwords is the
user table, so it may be the only one that
matters. However, that depends on whether you set privileges on a
per-database, per-table, or other basis, and therefore want to
protect that grant information as well.
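To see how cumbersome excluding the whole mysql database gets, here is a sketch that builds the repeated --ignore-table options in a shell variable. The table list is partial and version-dependent, so treat it only as an illustration:

```shell
# Build one --ignore-table option per table in the mysql database.
# This table list is incomplete and varies by server version.
IGNORE_OPTS=""
for t in user db host func tables_priv columns_priv procs_priv; do
  IGNORE_OPTS="$IGNORE_OPTS --ignore-table=mysql.$t"
done

# Show the mysqldump command that would result:
echo "mysqldump --all-databases$IGNORE_OPTS --master-data"
```
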


--master-data

This option locks all of the tables during the dump to
prevent data from being changed, but allows users to continue
reading the tables. This option also adds a few lines like the
following near the beginning of the dump file:

--
-- Position to start replication from
--

CHANGE MASTER TO MASTER_LOG_FILE='bin.000846', MASTER_LOG_POS=427;


When the dump file is executed on the slave server, these
lines will record the name of the master's binary log file and the
position in the log at the time of the backup, while the tables
were locked. When replication is started, the slave will use this
information to request entries from the master beginning at that
point in the master's binary log. This ensures that any data that
changes while you set up the slave server isn't missed.

To execute the dump file and thereby set up the databases and data
on the slave server, copy the dump file generated by
mysqldump to the slave server. The MySQL server needs
to be running on the slave, but not replication. Run the mysql client through a command
such as the following on the slave:

mysql --user=root --password=my_pwd < /tmp/backup.sql


This will execute all of the SQL statements in the dump file,
creating a copy of the master's databases and data on the slave.
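Once the dump file has been loaded, replication can be started on the slave. A minimal sketch follows; the host, user, and password are placeholder values to replace with your own, and the binary log coordinates were already set by the CHANGE MASTER lines in the dump file:

```sql
-- Run on the slave; the connection values here are examples only.
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='replicant',
  MASTER_PASSWORD='my_pwd';

-- The dump file has already recorded MASTER_LOG_FILE and MASTER_LOG_POS.
START SLAVE;

-- Confirm that the slave's I/O and SQL threads are both running:
SHOW SLAVE STATUS\G
```

This assumes a replication user account already exists on the master with the REPLICATION SLAVE privilege.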

8.5.2. Alternative Methods for Making Copies

If you peruse the MySQL documentation, you might get the idea
that the LOAD DATA FROM MASTER statement is ideal for making a
copy, but it is actually not very practical. First, it works only on
MyISAM tables. Second, because it performs a global read lock on the
master while it is making a backup, it prevents the master from serving
users for some time. Finally, it can be very slow and depends on good
network connectivity (so it can time out while copying data). Basically,
the statement is a nice idea, but it's not very practical or dependable
in most situations. It has been deprecated by MySQL AB and will be
removed from future releases.

A better alternative is to drop down to the operating system level
and copy the raw files containing your schemas and data. To leave the
server up but prevent changes to data before you make a copy of the
MySQL data directory, you could put a read-only lock on the tables by entering the following
command:

FLUSH TABLES WITH READ LOCK;


This statement will commit any transactions that may be occurring
on the server, so be careful and make sure the lock is actually in place
before you continue. Then, without disconnecting the client that issued
the statement, copy the data directory to an alternative directory. Once
this is completed, issue an UNLOCK TABLES statement
in the client that flushed and locked the tables. After that, the master
responds to updates as usual, while you need only transfer the copy of
the data directory to the slave server, putting it into the slave
server's data directory. Be sure to change the ownership of all of the
files and directories to mysql. In Linux, this is
done by entering the following statement as
root:

chown -R mysql:mysql /path_to_data
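The whole sequence can be sketched as a single client session. The directory paths are examples only, and the session must stay connected until the copy finishes, because disconnecting releases the lock:

```sql
FLUSH TABLES WITH READ LOCK;

-- Record the binary log file and position for CHANGE MASTER later:
SHOW MASTER STATUS;

-- From a separate shell, while this session stays connected:
--   cp -a /var/lib/mysql /tmp/mysql-copy    (example paths)
-- Then transfer the copy to the slave and fix ownership there.

UNLOCK TABLES;
```

Noting the output of SHOW MASTER STATUS while the lock is held gives you the coordinates the slave will need to start replication from this copy.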


You will run into a complication with this method of copying the
data directory if you have InnoDB tables in your databases, because
their data lives in shared tablespace files that may be configured to
reside outside the data directory. Also, if you don't have
administrative access to the filesystem to be able to manually copy the
data directory, you won't be able to use this method. This is why
mysqldump remains the recommended method for copying
the master's data.







