How to Optimize MySQL Queries: The Basics of Proper Implementation

Working with the database is often the weakest point in the performance of many web applications. And it's not just DBAs who have to worry about this: programmers must choose the correct table structures, write optimized queries, and write good code. Below are methods of optimizing work with MySQL for programmers.

1. Optimize Queries for Query Cache

Query caching is enabled on most MySQL servers. One of the best ways to improve performance is simply to leave caching to the database itself: when a query is repeated many times, its result is taken from the cache, which is much faster than accessing the database directly. The main problem is that many people simply use queries that cannot be cached:

// this query will not be cached
$r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()");

// but this one will be!
$today = date("Y-m-d");
$r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'");

The reason is that the first query uses the CURDATE() function. The same applies to NOW(), RAND(), and any other function whose result is non-deterministic: if the result of a function can change, MySQL does not cache the query. In this example, caching is preserved by calculating the date before executing the query.

2. Use EXPLAIN for your SELECT queries

The EXPLAIN keyword shows how MySQL executes your SELECT: which indexes are used, in what order tables are joined, and how many rows are examined. Prefix a query with EXPLAIN to spot missing indexes before they become a problem.
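A minimal sketch, reusing the user table and signup_date column from tip 1 (the schema here is an assumption):

EXPLAIN SELECT username FROM user WHERE signup_date >= '2009-10-01';
-- the key column shows which index was chosen (NULL means a full table scan),
-- and the rows column shows how many rows MySQL expects to examine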

12. Use Prepared Statements

A prepared statement is parsed by the server only once, values are bound separately, and it protects against SQL injection:

// create a prepared statement
if ($stmt = $mysqli->prepare("SELECT username FROM user WHERE state=?")) {
    // bind values
    $stmt->bind_param("s", $state);

    // execute
    $stmt->execute();

    // bind the result
    $stmt->bind_result($username);

    // get data
    $stmt->fetch();

    printf("%s is from %s\n", $username, $state);
}

"mysql_unbuffered_query() sends an SQL query to MySQL without retrieving or automatically buffering the result rows as mysql_query() does. On the one hand, this saves a significant amount of memory for SQL queries that produce large result sets. On the other hand, you can start working on the slicing result set after you get the first row: you don't have to wait for the full SQL query to run."

However, there are certain restrictions: you have to read all the rows or call mysql_free_result() before you can run another query, and you cannot use mysql_num_rows() or mysql_data_seek() on the function's result.

14. Store IP in UNSIGNED INT

Many programmers store IP addresses in a VARCHAR(15) field, not realizing that an IP can be stored in integer form. An INT takes 4 bytes and has a fixed field size.
Make sure to use UNSIGNED INT, because an IP address can be written as a 32-bit unsigned number.
Use INET_ATON() in your query to convert the IP address to a number, and INET_NTOA() to convert it back. The same functions exist in PHP as ip2long() and long2ip() (note that in PHP these functions can also return negative values; a note from The_Lion).

$r = "UPDATE users SET ip = INET_ATON("($_SERVER["REMOTE_ADDR"])") WHERE user_id = $user_id";

15. Fixed size tables (static) are faster

If every column in a table has a fixed size, the table is called "static" or "fixed-size". Examples of non-fixed-length columns: VARCHAR, TEXT, BLOB. If you include even one such field, the table stops being fixed and is handled differently by MySQL.
Using such tables improves efficiency, because MySQL can look up records in them faster: when it needs a particular row, it can calculate the row's position very quickly. If the record size is not fixed, the row has to be located through the index.
These tables are also easier to cache and to restore after a database crash. For example, if you convert a VARCHAR(20) to a CHAR(20), the field will always occupy 20 bytes, regardless of its actual content.
Using the "vertical partitioning" method, you can move columns with variable row lengths into a separate table.

16. Vertical separation

Vertical partitioning means dividing a table by columns to improve performance.
Example 1: addresses are stored in the users table, but it's far from certain that you need them often. You can split the table and keep the addresses in a separate table. The users table becomes smaller, and performance increases.
Example 2: a table has a last_login field that is updated every time a user logs into the site. But every change to a table clears its query cache. By storing this field in another table, you keep changes to the users table to a minimum.
But if you constantly join these tables back together, it will lead to poor performance.
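A sketch of example 2, with illustrative table and column names:

-- the main table changes rarely, so its query cache stays warm
CREATE TABLE users (
    user_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) NOT NULL
);

-- the frequently updated field lives in its own table
CREATE TABLE user_last_login (
    user_id INT UNSIGNED NOT NULL PRIMARY KEY,
    last_login DATETIME NOT NULL
);

UPDATE user_last_login SET last_login = NOW() WHERE user_id = 123;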

17. Separate large DELETE and INSERT queries

If you need to run a large query that deletes or inserts data, be careful not to break the application: a big query can lock the table and bring the whole application to a halt.
Apache can run many parallel processes at the same time, so it works most efficiently when scripts execute as quickly as possible.
If you lock tables for a long time (for example, 30 seconds or more), then on a high-traffic site a large queue of processes and requests can build up, which may lead to a slow site or even a server crash.
If you have queries like this, use LIMIT to run them in small batches:

while (1) {
    mysql_query("DELETE FROM logs WHERE log_date <= '2009-10-01' LIMIT 10000");
    if (mysql_affected_rows() == 0) {
        // everything has been deleted
        break;
    }
    // short pause
    usleep(50000);
}

18. Small columns are faster
For a database, working with the hard drive is perhaps the weakest point. Small, compact records are usually better for performance, because they reduce disk work.
The MySQL documentation lists the storage requirements for every data type.
If a table will store few rows, there is no reason to make the primary key an INT; MEDIUMINT, SMALLINT, or even TINYINT may be better. If you don't need to store the time of day, use DATE instead of DATETIME.

However, be careful that things don't turn out like Slashdot.
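A sketch of choosing compact types for a small reference table (the names are illustrative):

CREATE TABLE countries (
    country_id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, -- up to 255 rows is enough here
    iso_code CHAR(2) NOT NULL,
    added DATE NOT NULL -- DATE instead of DATETIME: the time of day is not needed
);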

19. Choose the right table type

In short: MyISAM is fast for read-heavy workloads but locks entire tables on writes, while InnoDB supports transactions and row-level locking and handles write-heavy loads better. Pick the storage engine per table based on how the table is used.

20. Use ORM

An ORM (Object-Relational Mapper) can apply many optimizations automatically, such as lazy loading of data and batching of statements. The trade-off is less control over the exact SQL being executed.

21. Be careful with persistent connections (mysql_pconnect() in PHP)

This only sounds good in theory. From my personal experience (and the experience of others), using this feature is not justified: you run into serious problems with connection limits, memory limits, and so on.
Apache creates many parallel threads, and this is the main reason why persistent connections don't work as well as we would like. Before using mysql_pconnect(), consult your system administrator.

From the author: a friend of mine decided to optimize his car. First he took off one wheel, then he cut off the roof, then the engine... In the end, he now walks. These are the consequences of the wrong approach! So for your DBMS to keep running, MySQL optimization must be done correctly.

When to optimize and why?

It's not worth diving into the server settings and changing parameter values at every opportunity (especially if you don't know how it might end). And if we look at this topic from the standpoint of improving the performance of web resources, it is so extensive that an entire seven-volume scientific publication would have to be devoted to it.

But I clearly don't have that kind of patience as a writer, and neither do you as a reader. We will keep it simpler and dig only a little way into the thicket of optimizing the MySQL server and its components. By setting all the DBMS parameters optimally, several goals can be achieved:

Increase the speed of query execution.

Improve overall server performance.

Reduce the waiting time for resource pages to load.

Reduce consumption of hosting server capacity.

Reduce the amount of disk space consumed.

We will try to break the whole topic of optimization into several points, so it becomes more or less clear what makes the whole thing tick.

Why set up a server

In MySQL, performance optimization should start with the server: first of all, speed up its operation and reduce the time it takes to process queries. A universal means of achieving all the goals above is to enable caching. Don't know what that is? I'll explain everything now.

If caching is enabled on your server instance, MySQL automatically "remembers" the query entered by the user. The next time the query is repeated, its result (for a SELECT) is not computed again, but taken from memory. In this way the server "saves" time on producing a response, and the site's response speed increases. This also applies to the overall page load speed.

In MySQL, query optimization applies to engines and CMSs that run on top of this DBMS and PHP. Here, the code written in the programming language requests some of a page's structural parts and content (posts, archives, and other taxonomies) from the database in order to generate a dynamic web page.

With caching enabled in MySQL, queries to the DBMS server execute much faster. This raises the loading speed of the whole resource, which has a positive effect both on the user experience and on the site's position in search results.

Enable and configure caching

But let's get back from "boring" theory to interesting practice. We will continue optimizing the MySQL database by checking the caching status on your database server. To do this, we will display the values of the relevant system variables with a special query:
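A query along these lines does the job (a sketch; the variable names below are the standard MySQL ones):

SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache%';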


Let's take a quick look at the returned values that are useful for optimizing MySQL databases:

have_query_cache – shows whether the server supports query caching at all.

query_cache_type – displays the active cache type. We want the value "ON": it means caching is enabled for all SELECT queries, except those that use the SQL_NO_CACHE hint (which forbids caching the result of that query).

We have all the settings set correctly.

We measure the cache for indexes and keys

Now you need to check how much RAM is allocated for indexes and keys. This parameter, important for MySQL database optimization, is recommended to be set to 20-30% of the RAM available to the server. For example, if 4 GB is allocated to the DBMS instance, around 1 GB for the key buffer is a reasonable starting point. But it all depends on the characteristics of the particular database and the structure (types) of its tables.

To set the parameter value, edit the my.ini configuration file, which in Denwer is located at: F:\Webserver\usr\local\mysql-5.5

Open the file in Notepad, find the key_buffer_size parameter, and set a size appropriate for your system (depending on how much RAM it has). After that, restart the database server.
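The same parameter can also be inspected and changed at runtime without a restart; a sketch (the 1 GB value is illustrative):

SHOW VARIABLES LIKE 'key_buffer_size';

-- key_buffer_size is a dynamic variable
SET GLOBAL key_buffer_size = 1073741824; -- illustrative: 1 GB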

The DBMS uses several additional low-level subsystems, and all their main settings are also specified in this configuration file. So if you need to optimize MySQL InnoDB, this is the place. We will study that topic in more detail in one of the next materials.

Measuring the level of indices

Using indexes in tables significantly increases the speed of processing queries and generating the DBMS's response. MySQL constantly "measures" how much indexes and keys are used in each database. To get this value, use the query:

SHOW STATUS LIKE 'Handler_read%';

In the result, we are interested in the value on the Handler_read_key line. If that number is small, it means indexes are almost never used in this database, and that's bad (as in our case).
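If Handler_read_key stays low, EXPLAIN your frequent queries and add the missing indexes. A sketch with illustrative table and column names:

-- check how a frequent query is executed
EXPLAIN SELECT user_id FROM users WHERE email = 'user@example.com';

-- if the key column comes back NULL, index the filtered column
ALTER TABLE users ADD INDEX idx_email (email);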

MySQL is still the world's most popular relational database, but it is also the least optimized. Many people stay with the default settings without digging deeper. In this article we'll look at some MySQL optimization tips combined with some new features that have come out relatively recently.

Configuration optimization

The first thing every MySQL user should do to improve performance is tweak the configuration. However, most people skip this step. In 5.7 (the current version), the default settings are much better than those of its predecessors, but it is still possible and easy to improve them.

We hope you are using Linux or something like a Vagrant box (like our Homestead Improved), so your configuration file will be located at /etc/mysql/my.cnf. It is possible that your installation actually loads an additional configuration file into this one, so if my.cnf doesn't contain much, look in /etc/mysql/mysql.conf.d/mysqld.cnf.

Manual tuning

The following settings are worth changing right out of the box. Per these tips, add them to the config file under the [mysqld] section:

innodb_buffer_pool_size = 1G # (adjust to about 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # can be changed to 2 or 0
innodb_flush_method = O_DIRECT

  • innodb_buffer_pool_size. The buffer pool is a kind of "warehouse" for caching data and indexes in memory, used to keep frequently accessed data close at hand. When you use a dedicated or virtual server where the database is often the bottleneck, it makes sense to give it the majority of the RAM; hence 50-70% of total RAM. There is a guide to sizing this pool in the MySQL documentation.
  • innodb_log_file_size. The log file size determines how much data is stored in the log before it is flushed. Note that the log in this case is not an error log, but a kind of delta snapshot of changes that have not yet been flushed to disk in the main InnoDB files. MySQL writes them in the background, but this still affects performance at write time. A larger log file means higher performance because fewer checkpoints are created, but also a longer recovery time after a crash (more data has to be replayed into the database).
  • innodb_flush_log_at_trx_commit controls what happens with the log file. A value of 1 is the safest, because the log is flushed to disk after every transaction. With values of 0 and 2, fewer ACID guarantees are given, but you get more performance. The difference is usually not large enough to outweigh the stability benefits of 1.
  • innodb_flush_method. To top off the flushing topic, this setting should be set to O_DIRECT to avoid double buffering. I advise always doing this, unless the I/O system is very low-performing. On most hosting, like DigitalOcean, you will have SSD drives, where double buffering only wastes memory.

There is a tool from Percona that helps find the remaining problems automatically. Note that if we had run it without this manual tuning, it would have flagged only 1 of the 4 settings above, since the other 3 depend on the user's preferences and the application environment.

Variable Inspector

Installing variable inspector on Ubuntu:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-toolkit

For other systems, follow these instructions.

Then run the toolkit:

pt-variable-advisor h=localhost,u=homestead,p=secret

You will see this result:

# WARN delay_key_write: MyISAM index blocks are never flushed until necessary.
# NOTE max_binlog_size: The max_binlog_size is smaller than the default of 1GB.
# NOTE sort_buffer_size-1: The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it.
# NOTE innodb_data_file_path: Auto-extending InnoDB files can consume a lot of disk space that is very difficult to reclaim later.
# WARN log_bin: Binary logging is disabled, so point-in-time recovery and replication are not possible.

Translator's note: on my local machine, in addition to the above, I also received the following warning:

# NOTE innodb_flush_method: Most production database servers that use InnoDB should set innodb_flush_method to O_DIRECT to avoid double-buffering, unless the I/O system is very low performance.

Why innodb_flush_method needs to be set to O_DIRECT was discussed above; if you followed the tuning sequence in this article, you will not see this warning.

None of these warnings (the ones listed by the author) are critical; they don't have to be fixed. The only thing worth fixing is setting up the binary log for replication and snapshots:

Note: in new versions the default binlog size is 1G and this warning will not occur.

max_binlog_size = 1G
log_bin = /var/log/mysql/mysql-bin.log
server-id = master-01
binlog-format = "ROW"

  • max_binlog_size. Determines how large binary logs can get. The logs record your transactions and queries and make checkpoints. If a transaction exceeds the maximum, a log file may exceed this size when saved to disk; otherwise, MySQL keeps the files within the limit.
  • log_bin. This option enables binary logging as such. Without it, snapshots and replication are impossible. Note that binary logs can have a big impact on your disk space. server-id is a required option when binary logging is enabled, so the logs "know" which server they came from (for replication), and binlog-format is simply the format in which they are written.

As you can see, the new MySQL has defaults that are almost production ready. Of course, each application is different and has additional tricks and tweaks it applies.

MySQL Tuner

Another helper worth knowing is MySQL Tuner (the mysqltuner.pl script): it connects to a running server and, based on the accumulated statistics, suggests configuration changes.

Supporting tools: Percona Toolkit for identifying duplicate indexes

The Percona Toolkit we previously installed also has a tool for detecting duplicate indexes, which can be useful when using third-party CMSs or just checking yourself if you accidentally added more indexes than necessary. For example, the default WordPress installation has duplicate indexes on the wp_posts table:

pt-duplicate-key-checker h=localhost,u=homestead,p=secret

# ########################################################################
# homestead.wp_posts
# ########################################################################

# Key type_status_date ends with a prefix of the clustered index
# Key definitions:
#   KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
#   PRIMARY KEY (`ID`),
# Column types:
#   `post_type` varchar(20) collate utf8mb4_unicode_520_ci not null default 'post'
#   `post_status` varchar(20) collate utf8mb4_unicode_520_ci not null default 'publish'
#   `post_date` datetime not null default '0000-00-00 00:00:00'
#   `id` bigint(20) unsigned not null auto_increment
# To shorten this duplicate clustered index, execute:
ALTER TABLE `homestead`.`wp_posts` DROP INDEX `type_status_date`, ADD INDEX `type_status_date` (`post_type`,`post_status`,`post_date`);

As you can see from the last line, this tool also gives you tips on how to get rid of duplicate indexes.

Supporting tools: Percona Toolkit for unused indexes

Percona Toolkit can also detect unused indexes. If you are logging slow queries (see bottlenecks section below), you can run the utility and it will check if and how those queries use indexes on tables.

pt-index-usage /var/log/mysql/mysql-slow.log

For detailed information on using this utility, see the Percona Toolkit documentation.

Bottlenecks

This section describes how to detect and monitor database bottlenecks.

First, let's enable logging of slow queries:

slow_query_log = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes = 1

The lines above must be added to the MySQL configuration. The database will track queries that took longer than 1 second to execute and queries that do not use indexes.

Once there is some data in this log, you can analyze it for index usage with the pt-index-usage utility above, or with pt-query-digest, which produces results like this:

pt-query-digest /var/log/mysql/mysql-slow.log

# 360ms user time, 20ms system time, 24.66M rss, 92.02M vsz
# Current date: Thu Feb 13 22:39:29 2014
# Hostname: *
# Files: mysql-slow.log
# Overall: 8 total, 6 unique, 1.14 QPS, 0.00x concurrency ________________
# Time range: 2014-02-13 22:23:52 to 22:23:59
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            3ms   267us   406us   343us   403us    39us   348us
# Lock time          827us    88us   125us   103us   119us    12us    98us
# Rows sent             36       1      15    4.50   14.52    4.18    3.89
# Rows examine          87       4      30   10.88   28.75    7.37    7.70
# Query size         2.15k     153     296  245.11  284.79   48.90  258.32

# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0x728E539F7617C14D  0.0011 41.0%     3 0.0004  0.00 SELECT blog_article
#    2 0x1290EEE0B201F3FF  0.0003 12.8%     1 0.0003  0.00 SELECT portfolio_item
#    3 0x31DE4535BDBFA465  0.0003 12.6%     1 0.0003  0.00 SELECT portfolio_item
#    4 0xF14E15D0F47A5742  0.0003 12.1%     1 0.0003  0.00 SELECT portfolio_category
#    5 0x8F848005A09C9588  0.0003 11.8%     1 0.0003  0.00 SELECT blog_category
#    6 0x55F49C753CA2ED64  0.0003  9.7%     1 0.0003  0.00 SELECT blog_article

# Query 1: 0 QPS, 0x concurrency, ID 0x728E539F7617C14D at byte 736 ______
# Scores: V/M = 0.00
# Time range: all events occurred at 2014-02-13 22:23:52
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         37       3
# Exec time     40     1ms   352us   406us   375us   403us    22us   366us
# Lock time     42   351us   103us   125us   117us   119us     9us   119us
# Rows sent     25       9       1       4       3    3.89    1.37    3.89
# Rows examine  24      21       5       8       7    7.70    1.29    7.70
# Query size    47   1.02k     261     262  261.25  258.32       0  258.32
# String:
# Hosts        localhost
# Users        *
# Query_time distribution
#   1us
#  10us
# 100us  ################################################################
#   1ms
#  10ms
# 100ms
#   1s
#  10s+
# Tables
#    SHOW TABLE STATUS LIKE 'blog_article'\G
#    SHOW CREATE TABLE `blog_article`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT b0_.id AS id0, b0_.slug AS slug1, b0_.title AS title2, b0_.excerpt AS excerpt3, b0_.external_link AS external_link4, b0_.description AS description5, b0_.created AS created6, b0_.updated AS updated7 FROM blog_article b0_ ORDER BY b0_.created DESC LIMIT 10

If you prefer to analyze these logs manually, you can do the same, but first you will need to export the log to a more parsable format. This can be done like this:

mysqldumpslow /var/log/mysql/mysql-slow.log

With additional options, you can filter the data to export only what you need. For example, the top 10 queries, sorted by average execution time:

mysqldumpslow -t 10 -s at /var/log/mysql/localhost-slow.log

Conclusion

In this comprehensive MySQL optimization post, we have covered various methods and techniques through which we can make our MySQL fly.

We worked through configuration optimization, cleaned up our indexes, and got rid of some bottlenecks. This was mostly theory, but all of it is applicable to real-world applications.


Optimizing MySQL Queries


In everyday work, you run into fairly similar mistakes when writing queries.

In this article I would like to give examples of how NOT to write queries.

  • Select all fields
    SELECT * FROM table

    When writing queries, do not select all fields with "*". List only the fields you really need: this reduces the amount of data fetched and sent, and it lets covering indexes do their job (see the sketch below). Even if you actually need all the fields in the table, it is better to list them. First, it improves readability: with an asterisk, it is impossible to know which fields the query uses without looking at the table. Second, the number of columns in your table may change over time; if today it holds five INT columns, in a month TEXT and BLOB fields may be added that will slow down the selection.
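    A sketch of a covering index (the index name idx_signup and the columns are illustrative): if an index contains every column the query touches, MySQL can answer from the index alone, without reading the rows:

    ALTER TABLE user ADD INDEX idx_signup (signup_date, username);

    -- both columns are in idx_signup, so EXPLAIN shows "Using index"
    SELECT username FROM user WHERE signup_date >= '2009-10-01';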

  • Requests in a cycle.
    You need to clearly understand that SQL is a language that operates on sets. Programmers accustomed to thinking in terms of procedural languages sometimes find it hard to switch to thinking in sets. This can be done quite simply, by adopting one simple rule: "never execute queries in a loop." Examples of how this is done:

    1. Samples
    $news_ids = get_list("SELECT news_id FROM today_news ");
    while($news_id = get_next($news_ids))
    $news = get_row("SELECT title, body FROM news WHERE news_id = ". $news_id);

    The rule is very simple: the fewer queries, the better (although, like any rule, this one has exceptions). Don't forget about the IN() construct. The code above can be written as a single query:
    SELECT title, body FROM today_news INNER JOIN news USING(news_id)

    2. Inserts
    $log = parse_log();
    while($record = next($log))
    query("INSERT INTO logs SET value = '". $record["value"] ."'");

    It is much more efficient to concatenate and execute one query:
    INSERT INTO logs (value) VALUES (...), (...)

    3. Updates
    Sometimes you need to update several rows in one table. If the updated value is the same, then everything is simple:
    UPDATE news SET title="test" WHERE id IN (1, 2, 3).!}

    If the value being changed is different for each record, then this can be done with the following query:
    UPDATE news SET
    title = CASE
    WHEN news_id = 1 THEN "aa"
    WHEN news_id = 2 THEN "bb" END
    WHERE news_id IN (1, 2)

    Our tests show that such a query runs 2-3 times faster than several separate queries.

  • Performing operations on indexed fields
    SELECT user_id FROM users WHERE blogs_count * 2 = $value

    This query will not use the index, even though the blogs_count column is indexed. For an index to be used, no transformations may be applied to the indexed field inside the query. For such queries, move the transformation to the constant side of the comparison:
    SELECT user_id FROM users WHERE blogs_count = $value / 2;

    Similar example:
    SELECT user_id FROM users WHERE TO_DAYS(CURRENT_DATE) - TO_DAYS(registered) <= 10;

    Will not use an index on the registered field, whereas
    SELECT user_id FROM users WHERE registered >= DATE_SUB(CURRENT_DATE, INTERVAL 10 DAY);
    will.

  • Fetching rows only to count their number
    $result = mysql_query("SELECT * FROM table", $link);
    $num_rows = mysql_num_rows($result);
    If you need to select the number of rows that satisfy a certain condition, use the SELECT COUNT(*) FROM table query rather than selecting all the rows just to count the number of rows.
  • Fetching extra rows
    $result = mysql_query("SELECT * FROM table1", $link);
    while($row = mysql_fetch_assoc($result) && $i< 20) {

    }
    If you only need n fetch rows, use LIMIT instead of discarding the extra rows in the application.
  • Using ORDER BY RAND()
    SELECT * FROM table ORDER BY RAND() LIMIT 1;

    If the table has more than 4-5 thousand rows, then ORDER BY RAND() will work very slowly. It would be much more efficient to run two queries:

    If the table has an auto_increment primary key and no gaps:
    $rnd = rand(1, query("SELECT MAX(id) FROM table"));
    $row = query("SELECT * FROM table WHERE id = ".$rnd);

    Or:
    $cnt = query("SELECT COUNT(*) FROM table");
    $row = query("SELECT * FROM table LIMIT ".$cnt.", 1");
    which, however, can also be slow if there are a very large number of rows in the table.

  • Using a large number of JOINs
    SELECT
    v.video_id,
    a.name,
    g.genre
    FROM
    videos AS v
    LEFT JOIN
    link_actors_videos AS la ON la.video_id = v.video_id
    LEFT JOIN
    actors AS a ON a.actor_id = la.actor_id
    LEFT JOIN
    link_genre_video AS lg ON lg.video_id = v.video_id
    LEFT JOIN
    genres AS g ON g.genre_id = lg.genre_id

    Remember that with one-to-many joins, the number of rows in the result set grows with each successive JOIN. For such cases, it is often faster to split the query into several simple ones, as sketched below.
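    A sketch of such a split, keeping the tables from the example above (the title column and the concrete video_id = 42 are illustrative):

    SELECT video_id, title FROM videos WHERE video_id = 42;

    SELECT a.name
    FROM actors AS a
    INNER JOIN link_actors_videos AS la ON la.actor_id = a.actor_id
    WHERE la.video_id = 42;

    SELECT g.genre
    FROM genres AS g
    INNER JOIN link_genre_video AS lg ON lg.genre_id = g.genre_id
    WHERE lg.video_id = 42;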

  • Using LIMIT
    SELECT… FROM table LIMIT $start, $per_page

    Many people assume such a query will return $per_page records (usually 10-20) and will therefore run fast. It does run fast for the first few pages. But if the number of records is large and you need to execute something like SELECT ... FROM table LIMIT 1000000, 20, then MySQL will first select 1,000,020 records, discard the first million, and return 20. This may not be fast at all. There are no trivial ways around the problem. Many simply limit the number of available pages to a reasonable number. You can also speed up such queries with covering indexes (see the sketch below) or third-party solutions (for example, Sphinx).
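    One known workaround is a deferred join: page through only the primary key (which can be resolved from the index) and then fetch the full rows. A sketch, assuming an id primary key:

    SELECT t.*
    FROM table AS t
    INNER JOIN (
        SELECT id FROM table ORDER BY id LIMIT 1000000, 20
    ) AS page USING (id);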

  • Not using ON DUPLICATE KEY UPDATE
    $row = query("SELECT * FROM table WHERE id=1");

    if ($row)
        query("UPDATE table SET column = column + 1 WHERE id=1");
    else
        query("INSERT INTO table SET column = 1, id=1");

    A similar construction can be replaced with one query, provided that there is a primary or unique key for the id field:
    INSERT INTO table SET column = 1, id=1 ON DUPLICATE KEY UPDATE column = column + 1
