As of MySQL Version 3.23.6, you can choose between three basic
table formats (ISAM, HEAP, and MyISAM). Newer versions of
MySQL may support additional table types, depending on how you
compile it.
When you create a new table, you can tell MySQL which table
type it should use for the table. MySQL will always create a
.frm
file to hold the table and column definitions. Depending on
the table type, the index and data will be stored in other files.
The default table type in MySQL is MyISAM. If you try
to use a table type that is not compiled in or activated,
MySQL will instead create a table of type MyISAM.
You can convert tables between different types with the ALTER TABLE
statement. See section 7.8 ALTER TABLE Syntax.
Note that MySQL supports two different kinds of tables:
transaction-safe tables (BDB, INNOBASE, or GEMINI) and
non-transaction-safe tables (HEAP, ISAM, MERGE, and MyISAM).
Advantages of transaction-safe tables (TST):
- You can combine many statements and accept them all in one go with the
COMMIT command.
- You can execute ROLLBACK to ignore your changes (if you are not
running in auto commit mode).
Advantages of non-transaction-safe tables (NTST):
- Much faster, as there is no transaction overhead.
- Use less disk space, as there is no transaction overhead.
- Use less memory to perform updates.
You can combine TST and NTST tables in the same statements to get the best of both worlds.
MyISAM is the default table type in MySQL Version 3.23. It's
based on the ISAM code and has a lot of useful extensions.
The index is stored in a file with the .MYI (MYIndex) extension,
and the data is stored in a file with the .MYD (MYData) extension.
You can check/repair MyISAM tables with the myisamchk utility.
See section 15.4 Using myisamchk for Crash Recovery.
The following is new in MyISAM:
- If mysqld is started with --myisam-recover, MyISAM tables will
automatically be repaired on open if the table wasn't closed properly.
- You can INSERT new rows in a table without deleted rows,
while other threads are reading from the table.
- Support for an AUTO_INCREMENT column. MyISAM will automatically
update this on INSERT/UPDATE. The AUTO_INCREMENT value can be reset
with myisamchk. This makes AUTO_INCREMENT columns faster (at least 10%)
and old numbers will not be reused as with the old ISAM. Note that when an
AUTO_INCREMENT is defined on the end of a multi-part key, the old
behavior is still present.
- When inserting in sorted order (as when you are using an AUTO_INCREMENT
column), the key tree will be split so that the high node only contains one
key. This improves the space utilization in the key tree.
- BLOB and TEXT columns can be indexed.
- NULL values are allowed in indexed columns. This takes 0-1
bytes/key.
- The maximum number of keys per table can be enlarged without having to
recompile myisamchk.
- There is a flag in the MyISAM file that indicates whether or not the
table was closed correctly. This will soon be used for automatic repair
in the MySQL server.
- myisamchk will mark tables as checked if one runs it with
--update-state. myisamchk --fast will only check those
tables that don't have this mark.
- myisamchk -a stores statistics for key parts (and not only for
whole keys as in ISAM).
- myisampack can pack BLOB and VARCHAR columns.
MyISAM also supports the following things, which MySQL
will be able to use in the near future:
- A true VARCHAR type; a VARCHAR column starts
with a length stored in 2 bytes.
- Tables with VARCHAR may have fixed or dynamic record length.
- VARCHAR and CHAR may be up to 64K.
- All key segments have their own language definition. This will enable
MySQL to have different language definitions per column.
- A hashed computed index can be used for UNIQUE. This will allow
you to have UNIQUE on any combination of columns in a table. (You
can't search on a UNIQUE computed index, however.)
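The two-byte length prefix mentioned above can be illustrated with a small sketch. This shows the general length-prefix technique only; the exact byte order MyISAM uses on disk is not specified here.

```python
import struct

# Sketch of a length-prefixed VARCHAR value: a 2-byte length followed by
# the data. Two bytes allow lengths up to 65535, which is why VARCHAR and
# CHAR may be up to 64K. This is an illustration, not MyISAM's row format.

def pack_varchar(value: bytes) -> bytes:
    """Encode value as <2-byte little-endian length><data>."""
    return struct.pack("<H", len(value)) + value

def unpack_varchar(buf: bytes, pos: int = 0):
    """Decode one length-prefixed value; return (value, next_position)."""
    (length,) = struct.unpack_from("<H", buf, pos)
    start = pos + 2
    return buf[start:start + length], start + length

encoded = pack_varchar(b"hello")
print(encoded)        # b'\x05\x00hello'
value, end = unpack_varchar(encoded)
print(value, end)     # b'hello' 7
```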
Note that index files are usually much smaller with MyISAM than with
ISAM. This means that MyISAM will normally use less
system resources than ISAM, but will need more CPU when inserting
data into a compressed index.
The following options to mysqld
can be used to change the behavior of
MyISAM
tables:
Option | Meaning |
--myisam-recover=# | Automatic recovery of crashed tables. |
-O myisam_sort_buffer_size=# | Buffer used when recovering tables. |
--delay-key-write-for-all-tables | Don't flush key buffers between writes for any MyISAM table. |
The automatic recovery is activated if you start mysqld with
--myisam-recover=#
. See section 4.16.4 Command-line Options.
On open, the table is checked if it's marked as crashed or if the open
count variable for the table is not 0 and you are running with
--skip-locking. If either of the above is true, the table is checked
for errors and repaired if necessary.
If the recovery wouldn't be able to recover all rows from a previously
completed statement and you didn't specify FORCE as an option to
myisam-recover, the automatic repair will abort with an error
message in the error file:
Error: Couldn't repair table: test.g00pages
If in this case you had used the FORCE option, you would instead have got
a warning in the error file:
Warning: Found 344 of 354 rows when repairing ./test/g00pages
Note that if you run automatic recovery with the BACKUP option,
you should have a cron script that automatically moves files with names
like `tablename-datetime.BAK' from the database directories to a
backup medium.
See section 4.16.4 Command-line Options.
MySQL can support different index types, but the normal type is
ISAM or MyISAM. These use a B-tree index, and you can roughly calculate
the size of the index file as (key_length+4)/0.67, summed over
all keys. (This is the worst case, when all keys are inserted in
sorted order and no keys are compressed.)
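As a quick sanity check, the rule of thumb above can be computed directly. The row count and key lengths in this sketch are invented example values.

```python
# Worst-case MyISAM/ISAM index file size, per the rule of thumb above:
# (key_length + 4) / 0.67, summed over all keys, times the number of rows.
# 0.67 is the worst-case B-tree fill factor; the inputs are made-up examples.

def index_file_size(num_rows: int, key_lengths) -> float:
    """Estimate the index file size in bytes."""
    return num_rows * sum((length + 4) / 0.67 for length in key_lengths)

# One 4-byte integer key and one 20-byte string key:
per_row = index_file_size(1, [4, 20])
print(round(per_row))                              # about 48 bytes per row
print(round(index_file_size(1_000_000, [4, 20])))  # roughly 48 MB in total
```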
String indexes are space compressed. If the first index part is a
string, it will also be prefix compressed. Space compression makes the
index file smaller than the above figures if the string column has a lot
of trailing space or is a VARCHAR
column that is not always used
to the full length. Prefix compression is used on keys that start
with a string. Prefix compression helps if there are many strings
with an identical prefix.
In MyISAM
tables, you can also prefix compress numbers by specifying
PACK_KEYS=1
when you create the table. This helps when you have
many integer keys that have an identical prefix when the numbers are stored
high-byte first.
MyISAM supports three different table formats. Two of them are chosen
automatically depending on the types of columns you are using. The third,
compressed tables, can only be created with the myisampack tool.
Static (fixed-length) format. This is the default format. It's used when
the table contains no VARCHAR, BLOB, or TEXT columns.
This format is the simplest and most secure format. It is also the fastest
of the on-disk formats, because rows are easy to find on disk: to look up
a row via an index in a static-format table, you simply multiply the row
number by the row length.
Also, when scanning a table it is very easy to read a constant number of
records with each disk read.
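The fixed-length lookup described above can be sketched in a few lines. This is a hypothetical illustration of the arithmetic, not MyISAM's actual file code.

```python
# With a static row format, the offset of row N is simply N * row_length
# (numbering rows from 0), so no per-row bookkeeping is needed to find it.

def row_offset(row_number: int, row_length: int) -> int:
    """Byte offset of a row in a fixed-length data file."""
    return row_number * row_length

def read_row(data: bytes, row_number: int, row_length: int) -> bytes:
    """Slice one fixed-length row out of an in-memory table image."""
    start = row_offset(row_number, row_length)
    return data[start:start + row_length]

# Three 8-byte rows packed back to back:
table = b"row-one!row-two!row-3..."
print(read_row(table, 1, 8))  # b'row-two!'
```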
This format is also safe: if your computer crashes while writing to a
fixed-size MyISAM file, myisamchk can easily figure out where each
row starts and ends, so it can usually reclaim all records except the
partially written one. Note that in MySQL all indexes can always be
reconstructed. Static-format tables have these characteristics:
- All CHAR, NUMERIC, and DECIMAL columns are space-padded
to the column width.
- Tables don't need to be reorganized (with myisamchk) unless a huge
number of records are deleted and you want to return free disk space to
the operating system.
Dynamic format. This format is used if the table contains any VARCHAR,
BLOB, or TEXT columns, or if the table was created with
ROW_FORMAT=dynamic.
This format is a little more complex, because each row has to have a
header that says how long it is. A record can also end up in more than
one location when it is made longer by an update.
You can use OPTIMIZE TABLE or myisamchk to defragment a
table. If you have static data that you access/change a lot in the same
table as some VARCHAR or BLOB columns, it might be a good
idea to move the dynamic columns to other tables just to avoid
fragmentation. Dynamic-format tables have these characteristics:
- Each row is preceded by a bitmap that indicates which string columns
contain the empty string ('') and which numeric columns contain
zero. (This isn't the same as columns containing NULL values.)
If a string column has a length of zero after removal of trailing spaces,
or a numeric column has a value of zero, it is marked in the bitmap and
not saved to disk. Non-empty strings are saved as a length byte plus the
string contents.
- You should run myisamchk -r from time to time to get better
performance. Use myisamchk -ei tbl_name for some statistics.
- The expected row length for a dynamic-sized record is:
3 + (number of columns + 7) / 8 + (number of char columns)
  + packed size of numeric columns + length of strings
  + (number of NULL columns + 7) / 8
There is a penalty of 6 bytes for each link. A dynamic record is linked
whenever an update causes an enlargement of the record. Each new link will
be at least 20 bytes, so the next enlargement will probably go in the same
link. If not, there will be another link. You may check how many links
there are with myisamchk -ed. All links may be removed with myisamchk -r.
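The expected-length formula and the 6-byte link penalty above can be combined into a small calculator. This is a sketch; the column counts in the example are invented.

```python
# Expected dynamic-row length, per the formula above:
#   3 + (columns + 7)/8 + char_columns + packed_numeric_size
#     + string_lengths + (null_columns + 7)/8
# plus 6 bytes for every link created when an update enlarges the record.

def dynamic_row_length(columns: int, char_columns: int,
                       packed_numeric_size: int, string_lengths: int,
                       null_columns: int, links: int = 0) -> int:
    """Estimate the on-disk size in bytes of one dynamic-format row."""
    base = (3
            + (columns + 7) // 8          # bitmap of empty columns
            + char_columns
            + packed_numeric_size
            + string_lengths
            + (null_columns + 7) // 8)    # NULL bitmap
    return base + 6 * links               # 6-byte penalty per link

# 5 columns, 1 CHAR column, 4 bytes of packed numbers, 30 bytes of
# strings, no NULLable columns, never enlarged:
print(dynamic_row_length(5, 1, 4, 30, 0))  # 39
```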
Compressed format. This is a read-only type that is generated with the
optional myisampack tool (pack_isam for ISAM tables).
Compressed tables have these characteristics:
- Each record is compressed separately, and each column may be compressed
with a different compression type chosen by myisampack.
- Numbers with a value of 0 are stored using 1 bit.
- If values in an integer column have a small range, the column is stored
using the smallest possible type. For example, a BIGINT column (8 bytes)
may be stored as a TINYINT column (1 byte) if all values are in the range
0 to 255.
- If a column has only a small set of possible values, the column type is
converted to ENUM.
- Can handle fixed- or dynamic-length records and BLOB or TEXT columns.
- Can be uncompressed with myisamchk.
MERGE tables are new in MySQL Version 3.23.25. The code
is still in beta, but should stabilize soon!
A MERGE table is a collection of identical MyISAM tables
that can be used as one. You can only SELECT, DELETE, and
UPDATE from the collection of tables. If you DROP the
MERGE table, you are only dropping the MERGE specification.
Note that DELETE FROM merge_table used without a WHERE clause
will only clear the mapping for the table, not delete everything in the
mapped tables. (We plan to fix this in 4.0.)
By identical tables we mean that all tables are created with identical
column information. You can't put a MERGE over tables where the columns
are packed differently or that don't have exactly the same columns.
Some of the tables can, however, be compressed with myisampack.
See section 14.11 The MySQL Compressed Read-only Table Generator.
When you create a MERGE table, you will get a .frm table
definition file and a .MRG table list file. The .MRG file just
contains a list of the index files (.MYI files) that should
be used as one.
For the moment, you need to have SELECT, UPDATE, and
DELETE privileges on the tables you map to a MERGE table.
MERGE tables can help you solve the following problems:
- Easily manage a set of log tables. For example, you can put data from
different months into separate tables, compress some of them with
myisampack, and then create a MERGE table to use these as one.
- Obtain more speed. You can split a big read-only table based on some
criteria and put the different table parts on different disks. A
MERGE table on this could be much faster than using
the big table. (You can, of course, also use a RAID to get the same
kind of benefits.)
- Do more efficient searches. If you know exactly what you are looking
for, you can search in just one of the split tables for some queries and
use a MERGE table for others. You can even have many different
MERGE tables active, with possible overlapping files.
- Do more efficient repairs. It's easier to repair the individual files
that are mapped to a MERGE file than trying to repair a really big file.
- Instantly map many files as one. A MERGE table uses the
index of the individual tables; it doesn't need an index of its own.
This makes MERGE table collections VERY fast to make or remap.
- If you have a set of tables that you join to a big table on demand or
batch, you should instead create a MERGE table on them on demand.
This is much faster and will save a lot of disk space.
The disadvantages with MERGE tables are:
- You can't use INSERT on MERGE tables, as MySQL
can't know into which of the tables it should insert the row.
- You can only use identical MyISAM tables for a MERGE table.
- MERGE tables use more file descriptors. If 10 users are using a
MERGE table that maps over 10 tables, you are using 10*10 + 10
file descriptors. (10 data files for each of the 10 users,
plus 10 shared index files.)
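The descriptor count above follows a simple rule, sketched here directly from the arithmetic stated in the text:

```python
# File descriptors needed for a MERGE table: each user needs its own
# descriptor for every mapped data file, while the index files are
# shared among all users.

def merge_descriptors(mapped_tables: int, users: int) -> int:
    """Data descriptors per user plus one shared index descriptor per table."""
    return mapped_tables * users + mapped_tables

print(merge_descriptors(10, 10))  # 110, matching the 10*10 + 10 example
```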
- Key reads are slower. When you read a key, the MERGE handler needs to
issue a read on all underlying tables to check
which one most closely matches the given key. If you then do a 'read-next',
the MERGE handler needs to search the read buffers
to find the next key. Only when one key buffer is used up does the handler
need to read the next key block. This makes MERGE keys much slower
on eq_ref searches, but not much slower on ref searches.
See section 7.29 EXPLAIN Syntax (Get Information About a SELECT).
- You can't do DROP TABLE, ALTER TABLE, or DELETE FROM
table_name without a WHERE clause on any of the tables that are
mapped by a MERGE table that is 'open'. If you do this, the
MERGE table may still refer to the original table and you will
get unexpected results.
The following example shows you how to use MERGE
tables:
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
CREATE TABLE t2 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
INSERT INTO t1 (message) VALUES ("Testing"),("table"),("t1");
INSERT INTO t2 (message) VALUES ("Testing"),("table"),("t2");
CREATE TABLE total (a INT NOT NULL, message CHAR(20), KEY(a)) TYPE=MERGE UNION=(t1,t2);
Note that we didn't create a UNIQUE or PRIMARY KEY in the
total table, as the key isn't going to be unique in the total
table.
Note that you can also manipulate the .MRG file directly from
outside the MySQL server:
shell> cd /mysql-data-directory/current-database
shell> ls -1 t1.MYI t2.MYI > total.MRG
shell> mysqladmin flush-tables
Now you can do things like:
mysql> select * from total;
+---+---------+
| a | message |
+---+---------+
| 1 | Testing |
| 2 | table   |
| 3 | t1      |
| 1 | Testing |
| 2 | table   |
| 3 | t2      |
+---+---------+
To remap a MERGE table you can do one of the following:
- DROP the table and re-create it.
- Use ALTER TABLE table_name UNION(...).
- Change the .MRG file and issue a FLUSH TABLE on the
MERGE table and all underlying tables to force the handler to
read the new definition file.
You can also use the deprecated ISAM table type. This will disappear
rather soon, because MyISAM is a better implementation of the same
thing. ISAM uses a B-tree index. The index is stored in a file
with the .ISM extension, and the data is stored in a file with the
.ISD extension. You can check/repair ISAM tables with the
isamchk utility. See section 15.4 Using myisamchk for Crash Recovery.
Most of the things that are true for MyISAM tables are also true for
ISAM tables. See section 8.1 MyISAM Tables. The major differences
compared to MyISAM tables are:
- Compressed ISAM tables are created with pack_isam rather than with
myisampack.
HEAP tables use a hashed index and are stored in memory. This
makes them very fast, but if MySQL crashes you will lose all
data stored in them. HEAP is very useful for temporary tables!
The MySQL internal HEAP tables use 100% dynamic hashing
without overflow areas, and no extra space is needed for free lists.
HEAP tables also don't have the delete + insert problems that are
common with hashed tables:
mysql> CREATE TABLE test TYPE=HEAP SELECT ip,SUM(downloads) as down FROM log_table GROUP BY ip;
mysql> SELECT COUNT(ip),AVG(down) FROM test;
mysql> DROP TABLE test;
Here are some things you should consider when you use HEAP tables:
- You should always specify MAX_ROWS in the CREATE statement
to ensure that you don't accidentally use all memory.
- Indexes will only be used with = and <=> (but are VERY fast).
- HEAP tables can only use whole keys to search for a row; compare this
to MyISAM tables, where any prefix of the key can be used to find rows.
- HEAP tables use a fixed record length format.
- HEAP doesn't support BLOB/TEXT columns.
- HEAP doesn't support AUTO_INCREMENT columns.
- HEAP doesn't support an index on a NULL column.
- You can have non-unique keys in a HEAP table (this isn't common for
hashed tables).
- HEAP tables are shared between all clients (just like any other
table).
- You can't search for the next entry in order (that is, use the index
to do an ORDER BY).
- Data for HEAP tables is allocated in small blocks. The tables
are 100% dynamic (on inserting). No overflow areas and no extra key
space are needed. Deleted rows are put in a linked list and are
reused when you insert new data into the table.
- To free memory, you should execute DELETE FROM heap_table,
TRUNCATE heap_table, or DROP TABLE heap_table.
MyISAM
table to a HEAP
table.
- You can't create HEAP tables bigger than max_heap_table_size.
The memory needed for one row in a HEAP table is:
SUM_OVER_ALL_KEYS(max_length_of_key + sizeof(char*) * 2)
+ ALIGN(length_of_row+1, sizeof(char*))
sizeof(char*) is 4 on 32-bit machines and 8 on 64-bit machines.
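That formula can be evaluated directly, as in the following sketch. The key and row lengths are invented example values.

```python
# Memory per HEAP-table row, per the formula above:
#   SUM_OVER_ALL_KEYS(max_length_of_key + sizeof(char*) * 2)
#   + ALIGN(length_of_row + 1, sizeof(char*))

def align(value: int, boundary: int) -> int:
    """Round value up to the next multiple of boundary."""
    return (value + boundary - 1) // boundary * boundary

def heap_row_memory(key_lengths, row_length: int, pointer_size: int = 4) -> int:
    """Estimate bytes per row; pointer_size is 4 on 32-bit, 8 on 64-bit."""
    keys = sum(k + pointer_size * 2 for k in key_lengths)
    return keys + align(row_length + 1, pointer_size)

# One 4-byte key and a 30-byte row on a 32-bit machine:
print(heap_row_memory([4], 30))     # 44
print(heap_row_memory([4], 30, 8))  # 52 on a 64-bit machine
```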
Berkeley DB (http://www.sleepycat.com) has provided
MySQL with a transaction-safe table handler. It will survive
crashes and also provides COMMIT and ROLLBACK on
transactions. In order to build MySQL Version 3.23.x (BDB support first
appeared in Version 3.23.15) with support for BDB tables, you
will need Berkeley DB Version 3.2.3h or newer, which can be downloaded from
http://www.mysql.com/downloads/mysql-3.23.html. This is a patched
version of Berkeley DB that is only available from MySQL; the
standard Berkeley DB will not yet work with MySQL.
If you have downloaded a binary version of MySQL that includes support for Berkeley DB, simply follow the instructions for installing a binary version of MySQL. See section 4.6 Installing a MySQL Binary Distribution.
To compile MySQL with Berkeley DB support, first uncompress the
BDB distribution into the MySQL top-level source
directory, and follow the instructions for building MySQL from
source. Configure will automatically detect and use the Berkeley
DB source you just uncompressed. See section 4.7 Installing a MySQL Source Distribution.
cd /path/to/source/of/mysql-3.23.31
gzip -cd /tmp/db-3.2.3h.tar.gz | tar xf -
./configure  # this will use Berkeley DB automatically
If you would like to install Berkeley DB separately, to use with
other applications as well as MySQL, this is possible. Follow the directions
for installing Berkeley DB in the Berkeley DB README file. Then pass
the --with-berkeley-db=DIR option to MySQL's configure,
where DIR refers to the installation prefix used when installing
Berkeley DB (by default it is /usr/local/BerkeleyDB.3.2).
You can give additional options to MySQL's configure,
--with-berkeley-db-includes=DIR and --with-berkeley-db-libs=DIR,
if the BDB includes and/or libs
directory is not under the first directory (by default they are).
Then complete the MySQL installation as normal.
Please refer to the manual provided with the BDB distribution for
more up-to-date information.
Even though Berkeley DB itself is well tested and reliable, the MySQL interface is still considered beta quality. We are actively improving and optimizing it to get it stable very soon.
If you are running with AUTOCOMMIT=0, your changes in BDB
tables will not be updated until you execute COMMIT. Instead of
committing, you can execute ROLLBACK to forget your changes.
See section 7.31 BEGIN/COMMIT/ROLLBACK Syntax.
If you are running with AUTOCOMMIT=1 (the default), your changes
will be committed immediately. You can start an extended transaction with
the BEGIN WORK SQL command, after which your changes will not be
committed until you execute COMMIT (or decide to ROLLBACK
the changes).
The following options to mysqld
can be used to change the behavior of
BDB tables:
Option | Meaning |
--bdb-home=directory | Base directory for BDB tables. This should be the same directory you use for --datadir. |
--bdb-lock-detect=# | Berkeley lock detect. One of (DEFAULT, OLDEST, RANDOM, or YOUNGEST). |
--bdb-logdir=directory | Berkeley DB log file directory. |
--bdb-no-sync | Don't synchronously flush logs. |
--bdb-no-recover | Don't start Berkeley DB in recover mode. |
--bdb-shared-data | Start Berkeley DB in multi-process mode (don't use DB_PRIVATE when initializing Berkeley DB). |
--bdb-tmpdir=directory | Berkeley DB tempfile name. |
--skip-bdb | Don't use Berkeley DB. |
-O bdb_max_lock=1000 | Set the maximum number of locks possible. See section 7.28.4 SHOW VARIABLES. |
If you use --skip-bdb, MySQL will not initialize the
Berkeley DB library, and this will save a lot of memory. Of course,
you cannot use BDB tables if you are using this option.
Normally you should start mysqld without --bdb-no-recover
if you
intend to use BDB tables. This may, however, give you problems when you
try to start mysqld if the BDB log files are corrupted. See section 4.16.2 Problems Starting the MySQL Server.
With bdb_max_lock you can specify the maximum number of locks
(10,000 by default) you can have active on a BDB table. You should
increase this if you get errors of type bdb: Lock table is out of
available locks or Got error 12 from ... when you execute long
transactions or when mysqld has to examine a lot of rows to
calculate a query.
You may also want to change binlog_cache_size
and
max_binlog_cache_size
if you are using big multi-line transactions.
See section 7.31 BEGIN/COMMIT/ROLLBACK
Syntax.
Some characteristics of BDB tables:
- To be able to roll back transactions, BDB maintains log files. For
maximum performance you should place these on a different disk than
your databases by using the --bdb_log_dir option.
- You can run FLUSH LOGS at any time
to checkpoint the Berkeley DB tables.
- For disaster recovery, one should use table backups plus
MySQL's binary log. See section 21.2 Database Backups.
Warning: If you delete old log files that are in use, BDB will
not be able to do recovery at all and you may lose data if something
goes wrong.
- MySQL requires a PRIMARY KEY in each BDB table to be
able to refer to previously read rows. If you don't create one,
MySQL will create and maintain a hidden PRIMARY KEY for
you. The hidden key has a length of 5 bytes and is incremented for each
insert attempt.
- If all columns you access in a BDB table are part of the same index or
part of the primary key, then MySQL can execute the query
without having to access the actual row. In a MyISAM table the
above holds only if the columns are part of the same index.
- The PRIMARY KEY will be faster than any other key, as the
PRIMARY KEY is stored together with the row data. As the other keys are
stored as the key data + the PRIMARY KEY, it's important to keep the
PRIMARY KEY as short as possible to save disk space and get better speed.
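The storage rule above — every secondary key entry carries a full copy of the primary key — can be sketched as follows. This is a hypothetical illustration, not Berkeley DB's actual on-disk format.

```python
# Why a short PRIMARY KEY matters in BDB-style storage: each secondary
# index entry stores its own key data plus a copy of the primary key,
# so a long primary key inflates every secondary index.

def secondary_entry_size(key_length: int, primary_key_length: int) -> int:
    """Bytes for one secondary index entry: key data + primary key copy."""
    return key_length + primary_key_length

# The same 20-byte secondary key with a 4-byte vs. a 40-byte primary key:
print(secondary_entry_size(20, 4))   # 24 bytes per entry
print(secondary_entry_size(20, 40))  # 60 bytes per entry
```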
- LOCK TABLES works on BDB tables as with other tables. If
you don't use LOCK TABLE, MySQL will issue an internal
multiple-write lock on the table to ensure that the table will be
properly locked if another thread issues a table lock.
- Internal locking in BDB tables is done at the page level.
- SELECT COUNT(*) FROM table_name is slow, as BDB tables don't
maintain a count of the number of rows in the table.
- Sequential scanning is slower than with MyISAM tables, as the data in
BDB tables is stored in B-trees and not in a separate data file.
- The application must always be prepared to handle cases where any change
of a BDB table may cause an automatic rollback and any
read may fail with a deadlock error.
- Keys are not compressed to previous keys as with ISAM or MyISAM tables.
In other words, the key information takes a little more space in
BDB tables compared to MyISAM tables that don't use PACK_KEYS=0.
- If you don't do a lot of DELETEs or ROLLBACKs, the internal row count
should be accurate enough for the MySQL optimizer, but as MySQL
only stores the number on close, it may be wrong if MySQL dies
unexpectedly. It should not be fatal even if this number is not 100%
correct. One can update the number of rows by executing ANALYZE
TABLE or OPTIMIZE TABLE. See section 7.15 ANALYZE TABLE
Syntax. See section 7.11 OPTIMIZE TABLE Syntax.
- If the disk becomes full while you are adding data to a BDB table,
you will get an error (probably error 28) and the transaction should roll
back. This is in contrast to MyISAM and ISAM tables, where mysqld will
wait for enough free disk space before continuing.
- Opening many BDB tables at the same time may be quite slow. If you are
going to use BDB tables, you should not have a very big table cache and
you should use --no-auto-rehash with the mysql
client. We plan to partly fix this in 4.0.
- SHOW TABLE STATUS doesn't yet provide that much information for BDB
tables.
- If you are not running in auto_commit mode and delete a table you
are using, you may get the following error messages in the MySQL
error file:
001119 23:43:56  bdb:  Missing log fileid entry
001119 23:43:56  bdb:  txn_abort: Log undo failed for LSN: 1 3644744: Invalid
This is not fatal, but we don't recommend that you delete tables if you are
not in auto_commit mode, until this problem is fixed (the fix is
not trivial).
Innobase is included in the MySQL source distribution starting from 3.23.34 and will be activated in the MySQL-max binary.
Innobase provides MySQL with a transaction-safe table handler with
commit, rollback, and crash recovery capabilities. Innobase does
locking on the row level, and also provides an Oracle-style consistent
non-locking read in SELECTs, which increases transaction
concurrency. There is no need for lock escalation in Innobase,
because row-level locks in Innobase fit in very little space.
Innobase is a table handler that is under the GNU GPL License Version 2 (of June 1991). In the source distribution of MySQL, Innobase appears as a subdirectory.
Technically, Innobase is a database backend placed under MySQL. Innobase
has its own buffer pool for caching data and indexes in main
memory. Innobase stores its tables and indexes in a tablespace, which
may consist of several files. This is different from, for example,
MyISAM tables, where each table is stored as a separate file.
To create a table in the Innobase format you must specify
TYPE = INNOBASE in the table creation SQL command:
CREATE TABLE CUSTOMERS (A INT, B CHAR (20), INDEX (A)) TYPE = INNOBASE;
A consistent non-locking read is the default locking behavior when you
do a SELECT from an Innobase table. For a searched update and an
insert, row-level exclusive locking is performed.
To use Innobase tables you must specify configuration parameters
in the [mysqld] section of the MySQL configuration file. Below is an
example of possible configuration parameters in my.cnf for Innobase:
innobase_data_home_dir = c:\ibdata\
innobase_data_file_path = ibdata1:25M;ibdata2:37M;ibdata3:100M;ibdata4:300M
set-variable = innobase_mirrored_log_groups=1
innobase_log_group_home_dir = c:\iblogs\
set-variable = innobase_log_files_in_group=3
set-variable = innobase_log_file_size=5M
set-variable = innobase_log_buffer_size=8M
innobase_flush_log_at_trx_commit=1
innobase_log_arch_dir = c:\iblogs\
innobase_log_archive=0
set-variable = innobase_buffer_pool_size=16M
set-variable = innobase_additional_mem_pool_size=2M
set-variable = innobase_file_io_threads=4
set-variable = innobase_lock_wait_timeout=50
The meanings of the configuration parameters are the following:
innobase_data_home_dir | The common part of the directory path for all innobase data files. |
innobase_data_file_path | Paths to individual data files and their sizes. The full directory path to each data file is acquired by concatenating innobase_data_home_dir to the paths specified here. The file sizes are specified in megabytes, hence the 'M' after the size specification above. Do not set a file size bigger than 4000M, and on most operating systems not bigger than 2000M. |
innobase_mirrored_log_groups | Number of identical copies of log groups we keep for the database. Currently this should be set to 1. |
innobase_log_group_home_dir | Directory path to Innobase log files. |
innobase_log_files_in_group | Number of log files in the log group. Innobase writes to the files in a circular fashion. Value 3 is recommended here. |
innobase_log_file_size | Size of each log file in a log group in megabytes. Sensible values range from 1M to the size of the buffer pool specified below. The bigger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk i/o. But bigger log files also mean that recovery will be slower in case of a crash. File size restriction as for a data file. |
innobase_log_buffer_size | The size of the buffer which Innobase uses to write log to the log files on disk. Sensible values range from 1M to half the combined size of log files. A big log buffer allows large transactions to run without a need to write the log to disk until the transaction commit. Thus, if you have big transactions, making the log buffer big will save disk i/o. |
innobase_flush_log_at_trx_commit | Normally this is set to 1, meaning that at a transaction commit the log is flushed to disk, and the modifications made by the transaction become permanent, and survive a database crash. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 to reduce disk i/o to the logs. |
innobase_log_arch_dir | The directory where fully written log files would be archived if we used log archiving. The value of this parameter should currently be set the same as innobase_log_group_home_dir. |
innobase_log_archive | This value should currently be set to 0. As recovery from a backup is done by MySQL using its own log files, there is currently no need to archive Innobase log files. |
innobase_buffer_pool_size | The size of the memory buffer Innobase uses to cache data and indexes of its tables. The bigger you set this the less disk i/o is needed to access data in tables. On a dedicated database server you may set this parameter up to 90 % of the machine physical memory size. Do not set it too large, though, because competition of the physical memory may cause paging in the operating system. |
innobase_additional_mem_pool_size | Size of a memory pool Innobase uses to store data dictionary information and other internal data structures. A sensible value for this might be 2M, but the more tables you have in your application the more you will need to allocate here. If Innobase runs out of memory in this pool, it will start to allocate memory from the operating system, and write warning messages to the MySQL error log. |
innobase_file_io_threads | Number of file i/o threads in Innobase. Normally, this should be 4, but on Windows NT disk i/o may benefit from a larger number. |
innobase_lock_wait_timeout | Timeout in seconds an Innobase transaction may wait for a lock before being rolled back. Innobase automatically detects transaction deadlocks in its own lock table and rolls back the transaction. If you use the LOCK TABLES command, or other transaction-safe table handlers than Innobase in the same transaction, then a deadlock may arise which Innobase cannot notice. In cases like this the timeout is useful to resolve the situation. |
You can query the amount of free space in the Innobase tablespace (=
the data files you specified in my.cnf) by issuing the SHOW TABLE STATUS
command for any table you have created with TYPE = INNOBASE. The amount
of free space in the tablespace then appears in
the table comment section in the output of SHOW. An example:
SHOW TABLE STATUS FROM TEST LIKE 'CUSTOMER'
if you have created a table named CUSTOMER in a database named TEST. Note that the statistics SHOW gives about Innobase tables are only approximate: they are used in SQL optimization. Table and index reserved sizes in bytes are accurate, though.
Note that in addition to your tables, the rollback segment uses space from the tablespace.
Since Innobase is a multiversioned database, it must keep information of old versions of rows in the tablespace. This information is stored in a data structure called a rollback segment, like in Oracle. In contrast to Oracle, you do not need to configure the rollback segment in any way in Innobase. If you issue SELECTs, which by default do a consistent read in Innobase, remember to commit your transaction regularly. Otherwise the rollback segment will grow because it has to preserve the information needed for further consistent reads in your transaction: in Innobase all consistent reads within one transaction will see the same timepoint snapshot of the database: the reads are also 'consistent' with respect to each other.
Some Innobase errors: If you run out of file space in the tablespace, you will get the MySQL 'Table is full' error. If you want to make your tablespace bigger, you have to shut down MySQL and add a new data file specification to the innobase_data_file_path parameter in my.cnf.
A transaction deadlock or a timeout in a lock wait will give 'Table handler error 1000000'.
Contact information of Innobase Oy, producer of the Innobase engine:
Website: Being registered, probably http://www.innobase.fi. This should open about March 3rd, 2001.
phone: 358-9-6969 3250 (office), 358-40-5617367 (mobile)
Innobase Oy Inc.
World Trade Center Helsinki
Aleksanterinkatu 17
P.O.Box 800
00101 Helsinki
Finland