I've collected a semantic core, what's next? Is it worth ordering one from specialists? What to do with the semantic core

Many websites and publications talk about the importance of the semantic core.

Similar texts are available on our website "What to Do." Yet they often cover only the general theoretical side of the issue, while the practice remains unclear.

All experienced webmasters insist that you need to create a basis for promotion, but only a few clearly explain how to use it in practice. To lift the veil of secrecy from this issue, we decided to highlight the practical side of using the semantic core.

Why do you need a semantic core?

This is, first of all, the basis and plan for further filling and promoting the site. The semantic basis, distributed across the structure of the web resource, serves as a set of signposts toward systematic and targeted development of the site.

If you have such a foundation, you don't have to think up the topic of each next article: you just follow the list. With a core, website promotion moves much faster and gains clarity and transparency.

How to use a semantic core in practice

To begin with, it is worth understanding how a semantic basis is compiled at all. Essentially, it is a list of key phrases for your future project, supplemented with the frequency of each query.

Collecting such information is not difficult using the Yandex Wordstat service:

http://wordstat.yandex.ru/

or any other special service or program. The procedure will be as follows...

How to create a semantic core in practice

1. Collect in a single file (Excel, Notepad, Word) all the queries on your key topic taken from the statistics. Also include phrases "out of your head": logically plausible phrases, morphological variants (the way you yourself would search for your topic) and even variants with typos!

2. Sort the queries by frequency, from the most popular down to the least popular.

3. Remove from the semantic basis all junk queries that do not match the theme or focus of your site. For example, if you tell people about washing machines for free but don't sell them, you don't need to use words like:

  • "buy"
  • "wholesale"
  • "delivery"
  • "order"
  • "cheap"
  • "video" (if there are no videos on the site)…

The point: do not mislead users! Otherwise, your site will get a huge number of bounces, which will affect its rankings. And this is important!

4. When the main list has been cleared of unnecessary phrases and queries and includes a sufficient number of items, you can use the semantic core in practice. A minimal script sketch of steps 2 and 3 follows below.
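Here is a rough sketch of that cleaning and sorting in Python. The file name, the two-column layout (query, frequency) and the stop-word list are assumptions for illustration, not part of any particular tool:

```python
import csv

# Hypothetical stop words for an informational site that sells nothing.
STOP_WORDS = {"buy", "wholesale", "delivery", "order", "cheap", "video"}

def load_queries(path):
    """Read (query, frequency) pairs from a two-column CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row[0], int(row[1])) for row in csv.reader(f)]

def clean_and_sort(queries):
    """Drop queries containing stop words, then sort by frequency, descending."""
    cleaned = [
        (q, freq) for q, freq in queries
        if not any(word in q.lower().split() for word in STOP_WORDS)
    ]
    return sorted(cleaned, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    core = clean_and_sort(load_queries("raw_queries.csv"))
    for query, freq in core:
        print(f"{freq:>8}  {query}")
```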

IMPORTANT: a semantic list can never be considered completely finished. In any topic, you will have to update and supplement the core with new phrases and queries, periodically monitoring innovations and changes.

IMPORTANT: the number of items in the list determines the number of articles on the future site. Consequently, it also affects the volume of required content, the working hours of the author of the articles, and how long it will take to fill the resource.

Mapping the semantic core onto the site structure

For the whole collected list to make sense, you need to distribute the queries (depending on frequency) across the site structure. It is difficult to give specific numbers here, since the scale and the spread in frequency can differ significantly between projects.

If, for example, you take a query with a frequency of a million as the basis, even a phrase with 10,000 queries will seem mediocre.

On the other hand, when your main query has a frequency of 10,000, a mid-frequency query will be around 5,000 requests per month. That is, a certain relativity applies:

"HF – MF – LF" or "Maximum – Medium – Minimum"

But in any case (even just visually) you need to divide the entire core into 3 categories (a rough sketch follows the list):

  1. high-frequency queries (HF: short phrases with maximum frequency);
  2. mid-frequency queries (MF: all the average queries in the middle of your list);
  3. low-frequency queries (LF: rarely requested phrases and word combinations with low frequency).
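As a rough illustration of this relative split, here is a sketch in Python; the 10% and 1% cutoffs are invented for the example and should be tuned to each project's scale:

```python
def split_core(core):
    """core: list of (query, frequency) tuples; returns HF, MF, LF buckets."""
    max_freq = max(freq for _, freq in core)
    hf, mf, lf = [], [], []
    for query, freq in core:
        if freq >= max_freq * 0.10:       # within 10% of the maximum: HF
            hf.append((query, freq))
        elif freq >= max_freq * 0.01:     # within 1% of the maximum: MF
            mf.append((query, freq))
        else:                             # everything rarer: LF
            lf.append((query, freq))
    return hf, mf, lf

hf, mf, lf = split_core([("site promotion", 100000),
                         ("site promotion with articles", 8000),
                         ("inexpensive site promotion with links", 450)])
```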

The next step is to assign 1 or more (maximum 3) queries to the main page. These phrases should have the highest possible frequency: high-frequency queries go on the main page!

Next, following the overall logic of the semantic core, single out several main key phrases from which the sections (categories) of the site will be created. Here you can use high-frequency queries with a lower frequency than the main one, or better yet, mid-frequency queries.

The remaining low-frequency phrases are sorted into categories (under the created sections) and turned into topics for future publications on the site. But it's easier to understand with an example.

EXAMPLE

A clear example of using the semantic core in practice:

1. Home page (HF): the high-frequency query "site promotion".

2. Section pages (SP): "custom website promotion", "independent promotion", "site promotion with articles", "site promotion with links". Or simply (if adapted for the menu):

Section No. 1 – "to order"
Section No. 2 – "on your own"
Section No. 3 – "article promotion"
Section No. 4 – "link promotion"

All this is very similar to the data structure on your computer: logical drive (main) - folders (sections) - files (articles).

3. Pages of articles and publications (AP): "quick site promotion for free", "cheap promotion to order", "how to promote a site with articles", "promotion of a project on the Internet to order", "inexpensive site promotion with links", etc.

This list will contain the largest number of diverse phrases, from which you will create further publications for the site.

How to use a ready-made semantic core in practice

Using the query list is internal content optimization. The secret is to optimize (adjust) each page of the web resource to the corresponding core item. That is, you take a key phrase and write the most relevant article and page for it. A special relevance assessment service, available at the following link, will help you here:

To have at least some benchmarks in your SEO work, it is better to first check the relevance of the sites in the TOP search results for specific queries.

For example, if you are writing text for the low-frequency phrase "inexpensive website promotion with links", first simply enter it in the search and evaluate the TOP 5 sites in the results using the relevance assessment service.

If the service shows that the TOP 5 sites for the query "inexpensive website promotion with links" have a relevance of 18% to 30%, you should aim for the same percentages. Better yet, create unique text with keywords and a relevance of approximately 35-50%. By getting slightly ahead of competitors at this stage, you will lay a good foundation for further promotion.
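The relevance service's exact formula is not public, so, purely as a rough proxy, here is a sketch that measures what share of a text's words belong to the key phrase:

```python
import re

def keyword_relevance(text, key_phrase):
    """Percentage of words in `text` that also occur in `key_phrase` (naive proxy)."""
    words = re.findall(r"\w+", text.lower())
    key_words = set(re.findall(r"\w+", key_phrase.lower()))
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in key_words)
    return 100.0 * hits / len(words)

sample = "Inexpensive site promotion with links: how links help promotion."
print(round(keyword_relevance(sample, "inexpensive site promotion with links"), 1))
```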

IMPORTANT: using the semantic core in practice implies that one phrase corresponds to one unique page of the resource. The maximum here is 2 queries per article.

The more fully the semantic core is revealed, the more informative your project will be. But if you are not ready for long work and thousands of new articles, there is no need to take on broad thematic niches. Even a narrow specialized area, developed 100%, will bring more traffic than an unfinished large website.

For example, you could take as the basis of the site not the high-frequency key "site promotion" (where competition is enormous), but a phrase with a lower frequency and narrower specialization, such as "article site promotion" or "link promotion", and then reveal this topic to the maximum in all the articles on the site. The effect will be higher.

Useful information for the future

Further use of your semantic core in practice will consist only of the following:

  • adjust and update the list;
  • write optimized texts with high relevance and uniqueness;
  • publish articles on the website (1 request – 1 article);
  • increase the usefulness of the material (edit ready-made texts);
  • improve the quality of articles and the site as a whole, monitor competitors;
  • mark in the core list those queries that have already been used;
  • complement optimization with other internal and external factors (links, usability, design, usefulness, videos, online help tools).

Note: the above is a very simplified version of events. In fact, based on the core, you can create sublevels, deeply nested structures, and branches into forums, blogs, and chats. But the principle will always be the same.

BONUS: a useful tool for collecting the core in the Mozilla Firefox browser -

If you are asking "How do I compose a semantic core?", then before deciding, you must first figure out what you are dealing with.

The semantic core of a site is a list of phrases that users enter into search engines. Accordingly, the promoted pages must answer those user queries. Of course, you can't shove a bunch of different types of key phrases onto the same page. One main search query = one page.

It is important that the keywords correspond to the theme of the site, do not have grammatical errors, have a reasonable frequency, and also correspond to a number of other characteristics.

The semantic core is usually stored in an Excel table. This table can be stored/created anywhere - on a flash drive, in Google Docs, on Yandex.Disk or somewhere else.

Here is a clear example of the simplest layout:

Features of selecting the semantic core of a site

First, you need to understand (at least roughly) what phrases your audience uses when working with a search engine. This will be quite enough for working with tools for selecting key phrases.

What keywords does the audience use?

Keys are precisely those phrases that users enter into search engines to obtain this or that information. For example, if a person wants to buy a refrigerator, he writes "buy a refrigerator", or "buy an inexpensive refrigerator", "buy a Samsung refrigerator", etc., depending on his preferences.

Now let's look at the characteristics by which keys can be classified.

Criterion 1: popularity. Here the keys can be roughly divided into high-frequency, mid-frequency and low-frequency.

Low-frequency queries (sometimes referred to as LF) have a frequency of up to 100 impressions per month, mid-frequency (MF) - up to 1000, and high-frequency (HF) - from 1000.

However, these figures are purely conditional, because there are many exceptions to the rule. Take the topic of cryptocurrency: here it is much more correct to consider queries with up to 10,000 impressions per month low-frequency, those from 10 to 100 thousand mid-frequency, and everything else high-frequency. Today, the keyword "cryptocurrency" has a frequency of more than 1.5 million impressions per month, and "bitcoin" has exceeded 3 million.

And although "cryptocurrency" and "bitcoin" look, at first glance, like very tasty search queries, it is much more correct (at least in the initial stages) to focus on low-frequency queries. Firstly, they are more precise queries, which means preparing relevant content will be simpler. Secondly, there are ALWAYS tens to hundreds of times more low-frequency queries than high-frequency or mid-frequency ones (and in 99.5% of cases, more than both combined). Thirdly, the "low-frequency core" is much easier and faster to expand than the other two. BUT... this does not mean that the mid- and high-frequency queries should be ignored.

Criterion 2: user needs. Here we can roughly divide queries into 3 groups (a small classification sketch follows the list):

  • transactional: imply some kind of action (contain the words "buy", "download", "order", "delivery", etc.)
  • informational: simply searching for certain information ("what will happen if", "what is better", "how to do it correctly", "how to do it", "description", "characteristics", etc.)
  • others: a special category where it is not clear what exactly the user wants. For example, take the query "cake". "Cake" what? Buy one? Order one? Bake one from a recipe? Look at photos? Unclear.
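As a toy illustration of this split, here is a sketch that sorts queries by marker words; the marker lists are invented for the example, and the "other" group should still be checked against the actual search results:

```python
# Illustrative marker lists; extend them for your own niche.
TRANSACTIONAL = {"buy", "download", "order", "delivery", "price"}
INFORMATIONAL = {"how", "what", "why", "review", "description", "characteristics"}

def classify(query):
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"  # e.g. "cake": intent unclear, check the SERP manually

for q in ("buy a cake", "how to bake a cake", "cake"):
    print(q, "->", classify(q))
```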

Now about the application of the second sign.

Firstly, it is better not to "mix" these queries. For example, take 3 search queries: "dell laptop 5565 amd a10 8 GB hd buy", "dell 5565 amd a10 8 GB hd laptop review" and "dell 5565 amd a10 8 GB hd laptop". The keys are almost completely identical, yet it is the differences that play the decisive role. In the first case, we have a "transactional" query, for which a product card should be promoted. The second is "informational" and the third is "other". And while the informational key needs a separate page, it is logical to ask what to do with the third key. Very simple: look at the Yandex and Google TOP 10 for this query. If there are many commercial offers, the query is in fact commercial; if not, it is informational.

Secondly, transactional queries can also be divided into "commercial" and "non-commercial". With commercial queries you will have to compete with the "heavyweights". For example, for the query "buy samsung galaxy" you will compete with Euroset and Svyaznoy, and for "buy an ariston oven" with M.Video and Eldorado. So what to do? Very simple: aim at queries with a much lower frequency. For example, today the query "buy samsung galaxy" has a frequency of about 200,000 impressions per month, while "buy samsung galaxy a8" (a very specific model in the line) has 3,600 impressions per month. The difference in frequency is enormous, but the second query (precisely because a very specific model is implied) can bring you much more traffic than the first.

Anatomy of search queries

A key phrase can be divided into 3 parts: body, specifier, tail.

For clarity, let's take the previously discussed "other" query, "cake". What the user wants is unclear, because the query consists only of a body and has no specifier or tail. It is high-frequency, which means fierce competition in the search results. Yet 99.9% of the visitors it brings will say "no, this is not what I was looking for" and simply leave, and this is a negative behavioral factor.

Let's add the specifier "buy" and get the transactional (and, as a bonus, commercial) query "buy a cake". The word "buy" reflects the user's intent.

Let's change the specifier to "photo" and get the query "cake photo", which is no longer transactional: the user is simply looking for photos of cakes and is not going to buy anything.

That is, it is the specifier that determines what kind of query it is: transactional, informational or other.

We've sorted out the sale of cakes. Now let's add the phrase "for a wedding" to the query "buy a cake"; this will be the "tail" of the query. Tails make queries more specific and more detailed without cancelling the user's intent. In this case, since the cake is for a wedding, cakes with the inscription "happy birthday" are immediately discarded: they are not suitable by definition.

That is, if we take the queries:

  • buy a birthday cake
  • buy a wedding cake
  • buy an anniversary cake

then we will see that the user's goal is the same, "buy a cake", while "for a birthday", "for a wedding" and "for an anniversary" reflect the need in more detail.

Now that you know the anatomy of search queries, we can derive a certain formula for selecting a semantic core. First you define some basic terms directly related to your activity, and then collect the most suitable specifiers and tails (we'll tell you about that a little later).
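Here is that formula as a small combinatorial sketch; the word lists are illustrative, and every generated candidate still needs its real frequency checked in a statistics tool:

```python
from itertools import product

bodies = ["cake"]
specifiers = ["buy", "order", "photo"]
tails = ["", "for a wedding", "for a birthday"]  # "" keeps the tail-less variant

# Combine body + specifier + tail into candidate queries.
candidates = [
    " ".join(filter(None, (spec, body, tail)))
    for body, spec, tail in product(bodies, specifiers, tails)
]
print(candidates)  # ['buy cake', 'buy cake for a wedding', ...]
```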

Clustering of the semantic core

Clustering is the distribution of previously collected queries across pages (even if the pages have not yet been created). This process is often called "grouping the semantic core".

And here many people make the same mistake: queries must be grouped by meaning, not by the number of pages already available on the site or in a section. Pages can always be created when necessary.

Now let's figure out which keys should be distributed where. Let's do this using the example of a structure that already has several sections and groups:

  1. Home page. For it, only the most important, competitive and high-frequency queries are selected, which are the basis for promoting the site as a whole. (“beauty salon in St. Petersburg”).
  2. Categories of services/products. It is quite logical to place queries here that do not contain any particular specifics. In the case of a “beauty salon in St. Petersburg”, it is quite logical to create several categories using the keys “make-up artist services”, “men’s room”, “women’s room”, etc.
  3. Services/products. More specific queries should already appear here - “wedding hairstyles”, “manicure”, “evening hairstyles”, “coloring”, etc. To some extent, these are “categories within a category.”
  4. Blog. Informational queries belong here. There are many more of them than transactional ones, so there should be more pages relevant to them.
  5. News. Keys that are most suitable for creating short news notes are highlighted here.

How Query Clustering Is Performed

There are 2 main methods of clustering - manual and automatic.

Manual clustering has 2 main disadvantages: it is long and labor-intensive. However, you control the entire process personally, which means you can achieve very high quality. For manual clustering, Excel, Google Sheets or Yandex.Disk will be quite sufficient. The main thing is to be able to filter and sort the data by certain parameters.

Many people use the Keyword Assistant service for clustering. Essentially, this is manual clustering with elements of automation.

Now let’s look at the pros and cons of automatic grouping; fortunately, there are many services (both free and paid) and there is plenty to choose from.

For example, the free clustering service from the SEOintellect team is worthy of attention. It is suitable for working with small semantic cores.

For "serious" volumes (several thousand keys), it makes sense to use paid services (for example, Topvisor, SerpStat and Rush Analytics). They work as follows: you load in your key queries, and at the end you receive a ready-made Excel file. The 3 services mentioned above work by roughly the same scheme: they group by meaning, analyze the intersection of phrases, and scan the TOP-30 search results for each query to see on how many of the same URLs the phrases appear together. Based on this, the queries are distributed into groups. All of it happens "in the background".
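As a sketch of the underlying idea (not the actual algorithm of any of these services): two queries can be placed in one cluster when their top search results share enough URLs. The SERP data below is hard-coded for illustration; a real tool fetches it from the search engines:

```python
def cluster(serps, min_shared=3):
    """serps: dict mapping query -> set of its top-ranking URLs."""
    clusters = []
    for query, urls in serps.items():
        for group in clusters:
            if len(urls & group["urls"]) >= min_shared:
                group["queries"].append(query)
                group["urls"] |= urls
                break
        else:
            clusters.append({"queries": [query], "urls": set(urls)})
    return [group["queries"] for group in clusters]

serps = {
    "wedding hairstyles": {"a.com", "b.com", "c.com", "d.com"},
    "hairstyles for a wedding": {"a.com", "b.com", "c.com", "e.com"},
    "manicure": {"x.com", "y.com", "z.com"},
}
print(cluster(serps))  # [['wedding hairstyles', 'hairstyles for a wedding'], ['manicure']]
```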

Programs for creating a semantic core

To collect relevant search queries, there are many paid and free tools to choose from.

Let's start with the free ones.

The wordstat.yandex.ru service. This is a free service. For convenience, it is recommended to install the Wordstat Assistant plugin in your browser, which is why we will consider these 2 tools as a pair.

How it works?

Very simple.

For example, let's put together a small core on travel packages to Antalya. Our "base" query will be "tours to Antalya" (in this case, the number of "base" queries is not important).

Now go to https://wordstat.yandex.ru/, log in, insert the first "base" query and get a list of keys. Then, using the plus signs, we add suitable keys to the list. Note that if a key phrase is colored blue and marked with a plus on the left, it can be added to the list. If the phrase is "discolored" and marked with a minus, it has already been added, and clicking the "minus" will remove it from the list. By the way, the list of keys on the left and the pluses and minuses are exactly the features of the Wordstat Assistant plugin, without which working in Yandex.Wordstat makes little sense.

It is also worth noting that the list is kept until you personally correct or clear it. That is, if you type "Samsung TVs" into the line, the list of Yandex.Wordstat keys will be refreshed, but the previously collected keys will remain in the plugin list.

Following this scheme, we run all the pre-prepared "base" keys through Wordstat, collect everything we need, and then copy the collected list to the clipboard by clicking one of two buttons. Note that the button with two sheets copies the list without frequencies, and the one with two sheets and the number 42 copies it with frequencies.

The list copied to the clipboard can then be pasted into an Excel spreadsheet.

Also during the collection process, you can view impression statistics by region. For this purpose, Yandex.Wordstat has the following switch.

Well, as a bonus, you can look at the request history - find out when the frequency increased and when it decreased.

This feature can be useful in determining the seasonality of a request, as well as for identifying a decline/growth in popularity.

Another interesting feature is the statistics of impressions for a specified phrase and its forms. To get it, enclose the query in quotation marks.

And if you also add an exclamation mark before each word, the statistics will show the number of impressions for the key without taking word forms into account.

No less useful is the minus operator. It removes key phrases that contain the word (or several words) you specify.

There is another tricky operator: the vertical bar. It is needed to combine several lists of same-type keys into one. For example, let's take two keys: "tours to Antalya" and "vouchers to Antalya". We write them in the Yandex.Wordstat line as follows and get the 2 lists for these keys combined into one:

As you can see, we got a lot of keys that contain "tours" but not "vouchers", and vice versa.
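For reference, the operators covered above can be summarized like this (the phrases are illustrative, and Wordstat itself expects Russian queries):

```python
# Yandex.Wordstat query operators, as described above:
queries = [
    '"tours to Antalya"',           # quotes: only this phrase and its word forms
    '"!tours !to !Antalya"',        # exclamation marks: exact word forms only
    'tours to Antalya -cheap',      # minus: exclude phrases containing "cheap"
    '(tours|vouchers) to Antalya',  # vertical bar: merge same-type key lists
]
```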

Another important feature is the frequency binding to the region. You can select your region here.

Using Wordstat to collect a semantic core is suitable if you are collecting mini-cores for some individual pages, or do not plan large cores (up to 1000 keys).

SlovoEB and Key Collector

We're not kidding, that's exactly what the program is called. In a nutshell, it lets you do exactly the same things, but in automatic mode.

The program was developed by the LegatoSoft team, the same team that created Key Collector, which we will also talk about. In essence, Slovoeb is a heavily trimmed (but free) version of Key Collector, yet it is quite capable of collecting small semantic cores.

Especially for Slovoeb (or Key Collector) it makes sense to create a separate account on Yandex (if they ban you, it’s not a pity).

You need to make a few small one-time adjustments.

The login-password pair must be entered separated by a colon and without spaces. That is, if your login is my_login and the password is 15101510ioioio, the pair will look like this: my_login:15101510ioioio

Please note that there is no need to enter @yandex.ru in your login.

This setup is a one-time event.

Let's make a couple of points clear:

  • How many projects to create for each site is up to you to decide
  • Without creating a project, the program will not work.

Now let's look at the functionality.

To collect keys from Yandex.Wordstat, on the "Data Collection" tab click the "Batch collection of words from the left column of Yandex.Wordstat" button, insert the list of previously prepared key phrases, click "Start collection" and wait for it to finish. The only drawback of this collection method is that after parsing you have to delete the unnecessary keys manually.

At the output we get a table with the keywords collected from Wordstat and their base frequency of impressions.

But remember the quotation marks and the exclamation mark? That is exactly what we will use, since this functionality is implemented in Slovoeb too.

We start collecting frequencies in quotes and watch the data gradually appear in the table.

The only negative is that the data is collected through the Yandex.Wordstat service, which means that even collecting frequencies for 100 keys will take quite a lot of time. However, this problem is solved in Key Collector.

And one more function worth mentioning: collecting search suggestions. To do this, copy the list of previously parsed keys to the clipboard, click the search suggestion collection button, paste the list, select the search engines from which suggestions should be collected, click "Start collection" and wait for it to finish.

As a result, we get an expanded list of key phrases.

Now let’s move on to Slovoeb’s “big brother” - Key Collector.

Key Collector is paid, but has much wider functionality. So, if you are professionally involved in website promotion or marketing, Key Collector is simply a must-have, because Slovoeb will no longer be enough. In short, Key Collector can:

  • Parse keys from Wordstat*.
  • Parse search suggestions*.
  • Cut off search phrases using stop words*.
  • Sort queries by frequency*.
  • Identify duplicate queries.
  • Identify seasonal queries.
  • Collect statistics from Liveinternet.ru, Metrica, Google Analytics, Google AdWords, Direct, Vkontakte and others.
  • Determine the pages relevant to a particular query.

(the * marks functionality that is also available in Slovoeb)

The process of collecting keywords from Wordstat and collecting search suggestions is absolutely identical in Slovoeb. However, frequency collection is implemented in two ways: through Wordstat (as in Slovoeb) and through Direct, where it is several times faster.

This is done as follows: click the D button (short for "Direct"), check the box to fill in the Wordstat statistics columns, tick (if necessary) which frequencies you want to get (base, in quotes, or in quotes with exclamation marks), click "Get data" and wait for the collection to complete.

Collecting data through Yandex.Direct takes much less time than through Wordstat. There is one drawback, however: statistics may not be collected for all keys (for example, if a key phrase is too long). This minus is compensated by collecting the remaining data through Wordstat.

Google Keyword Planner

This tool is extremely useful for collecting a core based on the needs of Google search engine users.

Using Google Keyword Planner, you can find new queries by query (no matter how strange it may sound), and even by site/topic. Well, as a bonus, you can even predict traffic and combine new search queries.

For existing queries, statistics can be obtained by selecting the appropriate option on the main page of the service. If necessary, you can select a region and negative keywords. The result is exported in CSV format.
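A minimal sketch of post-processing such an export; the file name and column headers are assumptions, since they vary with the interface language and version of Keyword Planner:

```python
import csv

def to_int(value):
    """Best-effort numeric parse; exports sometimes contain ranges or blanks."""
    try:
        return int(str(value).replace(",", ""))
    except ValueError:
        return 0

with open("keyword_planner_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Sort ideas by average monthly searches, highest first (assumed header names).
rows.sort(key=lambda r: to_int(r.get("Avg. monthly searches", "")), reverse=True)
for row in rows[:20]:
    print(row.get("Keyword"), row.get("Avg. monthly searches"))
```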

How to find out the semantic core of a competitor’s website

Competitors can also be our friends, since you can borrow keyword ideas from them. For almost any page you can get the list of keywords it is optimized for, even manually.

The first way is to study the page content and the Title, Description, H1 and Keywords meta tags. You can do all of this manually.

The second way is to use Advego or Istio services. This is quite enough to analyze specific pages.

If you need to perform a comprehensive analysis of the semantic core of the site, then it makes sense to use more powerful tools:

  • SEMrush
  • Searchmetrics
  • SpyWords
  • Google Trends
  • Wordtracker
  • WordStream
  • Ubersuggest
  • Topvisor

However, the above tools are better suited to those who professionally promote several sites at once. "For yourself", even the manual method will be quite enough (with Advego as a fallback).

Errors when compiling a semantic core

The most common mistake is a very small semantic core

Of course, if this is a highly specialized niche (for example, hand-made production of elite musical instruments), then there will be few keys in any case (a hundred, a hundred and fifty, two hundred).

The larger the semantic core (but without “garbage”), the better. In some niches, the semantic core can consist of several... MILLIONS of keys.

The second mistake is synonymization. More precisely, its absence

Remember the Antalya example. In that context, "tours" and "vouchers" mean the same thing, yet the 2 lists of keys they produce can differ radically. A "stripper" may well be searched for as a "wire stripper" or an "insulation removal tool".

At the bottom of the search results, Google and Yandex have this block:

It is there that you can often spot synonyms.

Compiling a semantic core exclusively from high-frequency queries

Remember what we said at the beginning of the post about low-frequency queries, and the question "why is this a mistake?" will no longer arise. Low-frequency queries will bring the bulk of the traffic.

"Garbage", i.e. non-targeted requests

It is necessary to remove from the assembled kernel all requests that do not suit you. If you have a store cell phones, then for you the request “cell phone sales” will be targeted, and “cell phone repair” will be garbage. In the case of a service center for repairing cell phones, everything is exactly the opposite: “repair of cell phones” is targeted, and “sale of cell phones” is garbage. The third option is if you have a cell phone store with a service center “attached” to it, then both requests will be targeted.

Once again, there should be no garbage in the kernel.

No grouping of requests

It is strictly necessary to split the core into groups.

Firstly, this will allow you to create a competent site structure.

Secondly, there will be no "key conflicts". For example, take a page promoted for both "buy self-leveling floor" and "buy acer laptop". The search engine may get confused and, as a result, the page will fail for both keys. But for the queries "hp 15-006 laptop buy" and "hp 15-006 laptop price" it does make sense to promote one page. Moreover, it doesn't just "make sense": it is the only correct solution.

Thirdly, clustering will let you estimate how many pages still need to be created to cover the core completely (and, most importantly, whether that is even necessary).

Errors in separating commercial and information requests

The main mistake: queries that do not contain the words "buy", "order", "delivery", etc. can also turn out to be commercial.

For example, the request "". How to determine whether a request is commercial or informational? It’s very simple - look at the search results.

Google tells us that this is a commercial query: the first 3 positions in its results are occupied by documents with the word "buy", and although the fourth position is taken by "reviews", look at the address: it is a fairly well-known online store.

With Yandex everything turned out to be less simple: in the TOP 5 we have 3 pages with reviews and 2 pages with commercial offers.

Nevertheless, this query still counts as commercial, because there are commercial offers in both sets of results.

However, there is also a tool for mass-checking keys for "commerce": Semparser.

We picked up “empty” queries

Both the base and the quoted frequencies must be collected. If the frequency in quotes is zero, it is better to delete the query: it is a dummy. It often happens that the base frequency exceeds several thousand impressions per month while the frequency in quotes is zero. Here is a specific example: the key "inexpensive skin cream". Base frequency: 1032 impressions. Looks delicious, doesn't it?

But all the flavor is lost as soon as you put the same phrase in quotation marks.
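A sketch of this "dummy" filter; the data structure is invented for illustration:

```python
keys = [
    {"query": "inexpensive skin cream", "base": 1032, "quoted": 0},
    {"query": "buy skin cream", "base": 880, "quoted": 140},
]

# Keep only keys that real people actually type as a phrase.
real_keys = [k for k in keys if k["quoted"] > 0]
dummies = [k for k in keys if k["quoted"] == 0]
print("kept:", [k["query"] for k in real_keys])
print("dropped as dummies:", [k["query"] for k in dummies])
```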

Not all users type without errors, and because of them "crooked" key queries end up in the database. Including them in the semantic core is pointless, since search engines redirect the user to the corrected query anyway.

It's exactly the same in Yandex.

So we delete "crooked" queries (even high-frequency ones) without regret.

An example of the semantic core of a site

Now let's move from theory to practice. After collection and clustering, the semantic core should look something like this:

Bottom line

What do we need to compile a semantic core?

  • at least a little of a businessman's (or at least a marketer's) thinking;
  • at least some SEO skills;
  • special attention to the site structure;
  • an idea of what queries users might use to search for the information they need;
  • based on these estimates, a list of the most suitable queries (Yandex.Wordstat + Wordstat Assistant, Slovoeb, Key Collector, Google Keyword Planner) with frequencies collected both with word forms (without quotes) and without them (in quotes), and with the "garbage" removed;
  • the collected keys grouped, i.e. distributed across the site pages (even if those pages have not yet been created).

No time? Contact us, we will do everything for you!

Hello everyone!

What to do with the semantic core? This question is probably asked by all beginners in SEO promotion (judging by myself), and for good reason. In reality, at the initial stage a person does not understand why he spent so long collecting keywords for the site with one tool or another. Since I too struggled with this question for a long time, I will publish a lesson on the topic.

What is the purpose of collecting the semantic core?

First, let's figure out why we collected the semantic core in the first place. All SEO promotion is based on the keywords that users enter into search bars. On their basis, things like the site structure and its content are created, which are essentially the main factors in promotion.

Also, do not forget about external optimization, in which the semantic core plays an important role. But more on that in the next lessons.

To summarize, the semantic core is necessary for:

  • Creating a site structure that is understandable to both search engines and ordinary users;
  • Creating content. Content is now the main way to promote a website in search results: the higher its quality, the higher the site ranks. Read more about creating quality content;

What to do with the semantic core after compilation?

So, after you have compiled a semantic core, that is, collected the keywords, cleaned them and grouped them, you can begin to form the site structure. Essentially, when you grouped the queries the way we did in lesson #145, you already created the structure of your web resource:

You just need to implement it on the website, and that's it. This way you form the structure based not on what you happen to have in stock, but on consumer demand. By doing so, you benefit the web resource from an SEO point of view and also do the right thing for the business as a whole. It's not for nothing that they say: where there is demand, there must be supply.

We seem to have sorted out the structure; now let's move on to the content. Once again: by grouping the queries in Key Collector, you have found the topics of your future content, with which you will fill the pages. For example, let's take the "Mountain Bikes" group and divide it into small subgroups:


Thus, we created two subgroups with key queries for individual pages. Your task at this stage is to form groups (clusters) so that each cluster contains semantically identical keywords, that is, keywords identical in meaning.

Remember one rule: each cluster gets a separate page.

Grouping like this is, of course, not very easy for beginners, since it requires a certain skill, so I will show you another way to form topics for articles. This time we'll use Excel:


Based on the resulting data, you can create separate pages.

This is how I carry out clustering (grouping) and I am quite happy with everything. I think that now you understand what to do with the semantic core after compilation.

Perhaps the example given in this lesson is too general and does not paint a concrete picture. I just want to convey the essence of the process; after that you can use your own head. So I apologize in advance.

If this lesson was useful for you and helped you solve the problem, please share the link on social networks. And, of course, subscribe to blog updates if you haven’t already.

Good luck, friends!

See you soon!


The semantic core is a scary name that SEOs came up with to denote a rather simple thing. We just need to select the key queries for which we will promote our site.

And in this article I will show you how to correctly compile a semantic core so that your site quickly reaches the TOP rather than stagnating for months. There are some "secrets" here too.

And before we move on to compiling the core, let's figure out what it is and what we should end up with.

What is the semantic core in simple words

Oddly enough, the semantic core is an ordinary Excel file which lists the key queries for which you (or your copywriter) will write articles for the site.

For example, this is what my semantic core looks like:

I have marked in green those key queries for which I have already written articles. Yellow - those for which I plan to write articles in the near future. And colorless cells mean that these requests will come a little later.

For each key query I have determined the frequency and competitiveness, and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my core consists of 150 keywords. This means I have "material" for at least 5 months in advance (even if I write one article a day).

Below we will talk about what to prepare for if you decide to order the collection of a semantic core from specialists. Here I will say briefly: they will give you the same kind of list, only for thousands of "keys". However, in a semantic core it is not quantity that matters, but quality. And that is what we will focus on.

Why do we need a semantic core at all?

But really, why all this torment? You can, after all, just write high-quality articles and attract an audience, right? Yes, you can write, but you won't attract anyone.

The main mistake of 90% of bloggers is simply writing high-quality articles. I'm not kidding: they have genuinely interesting and useful materials. But search engines don't know about them. They are not psychics, just robots. Accordingly, they do not rank such articles in the TOP.

There is another subtle point concerning the title. For example, you have a very high-quality article on the topic "How to properly conduct business on Facebook". There you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the highest-quality, most useful and interesting one on this topic on the Internet. Nothing else even comes close. But it still won't help you.

Why high-quality articles fall from the TOP

Imagine that your website was visited not by a robot but by a live inspector (assessor) from Yandex. He realized that you have the coolest article and manually put you in first place in the search results for the query "Promoting a community on Facebook".

Do you know what happens next? You will fly out of there very soon anyway, because no one will click on your article, even in first place. People enter the query "Promoting a community on Facebook", and your headline reads "How to properly run a business on Facebook". Original, fresh, funny, but... not the query. People want to see exactly what they were looking for, not your creativity.

Accordingly, your article will vacate its place in the TOP search results. And a living assessor, an ardent admirer of your work, can beg his superiors as much as he likes to keep you at least in the TOP 10. It won't help. All the first places will be taken by empty articles, like husks of seeds, that yesterday's schoolchildren copied from each other.

But these articles will have the correct, "relevant" title: "Promoting a community on Facebook from scratch" (step by step, in 5 steps, from A to Z, free, etc.). Offensive? Sure. Well then, let's fight the injustice and create a competent semantic core so that your articles take the well-deserved first places.

Another reason to start compiling a semantic core right now

There is one more thing that for some reason people don’t think much about. You need to write articles often - at least every week, and preferably 2-3 times a week - to gain more traffic and faster.

Everyone knows this, but almost no one does it. And all because of "creative stagnation", "just can't force myself", "simply lazy". But in fact, the whole problem lies in the absence of a specific semantic core.

I entered one of my basic keys, "smm", into the search field, and Yandex immediately gave me a dozen suggestions of what else might interest people who are interested in "smm". All I have to do is copy these keys into a notebook. Then I will check each of them in the same way and collect the suggestions for them as well.

After the first stage of collecting keywords, you should end up with a text document containing 10-30 broad basic keys, which we will work with further.

Step #2 — Parsing basic keys in SlovoEB

Of course, if you write an article for the request “webinar” or “smm”, then a miracle will not happen. You will never be able to reach the TOP for such a broad request. We need to break the basic key into many small queries on this topic. And we will do this using a special program.

I use Key Collector, but it is paid. You can use a free analogue, the SlovoEB program. You can download it from the official website.

The most difficult thing about working with this program is setting it up correctly. I have already shown how to properly set up and use Slovoeb, but in that article I focused on selecting keys for Yandex Direct.

And here let’s look step by step at the features of using this program for creating a semantic core for SEO.

First we create a new project and name it after the broad key we want to parse.

I usually give the project the same name as my base key to avoid confusion later. And I'll warn you against one more mistake: don't try to parse all the base keys at once, or it will be very difficult to sift the "empty" key queries from the golden grains. Parse one key at a time.

After creating the project, we carry out the basic operation: we parse the key through Yandex Wordstat. To do this, click the "Wordstat" button in the program interface, enter your base key, and click "Start collection".

For example, let's parse the base key for my blog “contextual advertising”.

After this, the process starts, and after some time the program gives us the result: up to 2000 key queries that contain "contextual advertising".

Next to each query there will also be a "dirty" frequency: how many times this key (plus its word forms and tails) was searched per month through Yandex. But I do not advise drawing any conclusions from these figures.

Step #3 - Collecting the exact frequency for the keys

The dirty frequency will not show us anything. If you rely on it, don't be surprised when your key with 1000 requests does not bring a single click per month.

We need to identify the pure frequency. To do this, first select all the found keys with checkmarks, then click the "Yandex Direct" button and start the process again. Now Slovoeb will look up the exact monthly frequency for each key.

Now we have an objective picture of how many times each query was entered by Internet users over the last month. I now propose grouping all the key queries by frequency to make them easier to work with.

To do this, click the "filter" icon in the exact-frequency column and specify: filter out keys with a value of "less than or equal to 10".

Now the program will show only those queries whose frequency is less than or equal to 10. You can delete these queries or copy them to another group of key queries for future use. Less than 10 is very little: writing articles for these queries is a waste of time.
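The same threshold filter is easy to reproduce outside the program; a sketch with illustrative data:

```python
MIN_EXACT_FREQ = 10  # mirrors the "less than or equal to 10" filter above

keys = [("contextual advertising setup", 320),
        ("contextual advertising for florists", 7),
        ("order contextual advertising", 140)]

worth_writing = [(q, f) for q, f in keys if f > MIN_EXACT_FREQ]
set_aside = [(q, f) for q, f in keys if f <= MIN_EXACT_FREQ]
```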

Now we need to select those key queries that will bring us more or less good traffic. And for this we need to find out one more parameter - the level of competitiveness of the request.

Step #4 — Checking the competitiveness of requests

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF) and low-frequency (LF). They can also be highly competitive (HC), moderately competitive (MC) and low-competitive (LC).

As a rule, HF queries are also HC. That is, if a query is searched often, there are a lot of sites wanting to rank for it. But this is not always the case; there are happy exceptions.

The art of compiling a semantic core lies precisely in finding queries that have a high frequency and a low level of competition. Determining the level of competition manually is very difficult.

You can look at indicators such as the number of main pages in the TOP 10, the length and quality of the texts, and the trust and TIC (thematic citation index) of the sites in the TOP results for the query. All of this will give you some idea of how tough the competition for rankings is for this particular query.

But I recommend using the Mutagen service. It takes into account all the parameters I mentioned above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value: the level of competition for this query.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competitiveness of "more than 25", which is the maximum value it reports. And this query has only 11 views per month, so it definitely doesn't suit us.

We can copy all the keys found in Slovoeb and run a mass check in Mutagen. After that, all we have to do is look through the list and take the queries with many requests and a low level of competition.

Mutagen is a paid service. But you can do 10 checks per day for free. In addition, the cost of testing is very low. In all the time I have been working with him, I have not yet spent even 300 rubles.

By the way, about the level of competition. If you have a young site, then it is better to choose queries with a competition level of 3-5. And if you have been promoting for more than a year, then you can take 10-15.

By the way, regarding the frequency of requests. We now need to take the final step, which will allow you to attract a lot of traffic even for low-frequency queries.

Step #5 — Collecting “tails” for the selected keys

As has been proven and tested many times, your site will receive the bulk of its traffic not from the main keywords but from the so-called "tails". These are the strange key queries people type into the search bar with a frequency of 1-2 per month, but there are a lot of such queries.

To see the "tail", simply go to Yandex and enter your chosen key query into the search bar. Here's roughly what you'll see.

Now you just need to write out these additional words in a separate document and use them in your article. You don't need to always place them next to the main key; otherwise search engines will see "over-optimization" and your articles will fall in the search results.

Just use them in different places in your article, and then you will receive additional traffic from them as well. I would also recommend trying to use as many word forms and synonyms as possible for your main key query.

For example, we have the query "Setting up contextual advertising". Here's how it can be reformulated:

  • Setting up = set up, make, create, run, launch, enable, place...
  • Contextual advertising = context, direct, teaser, YAN, adwords, kms...

You never know exactly how people will search for information. Add all these additional words to your semantic core and use them when writing texts.
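A sketch of this expansion using substitution lists; the synonym sets are illustrative, and every generated variant still needs its frequency checked before it goes into the core:

```python
# Illustrative synonym sets for the example key above.
SYNONYMS = {
    "setting up": ["setting up", "creating", "launching", "enabling"],
    "contextual advertising": ["contextual advertising", "context", "direct", "adwords"],
}

def expand(action, subject):
    """All action/subject combinations from the substitution lists."""
    return [f"{a} {s}" for a in SYNONYMS[action] for s in SYNONYMS[subject]]

variants = expand("setting up", "contextual advertising")  # 16 variants
```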

So we collect a list of 100-150 key queries. If you are compiling a semantic core for the first time, this may take you several weeks.

Or maybe you shouldn't strain your eyes over it? Maybe the compilation of the core can be delegated to specialists who will do it better and faster? Yes, there are such specialists, but you don't always need their services.

Is it worth ordering a semantic core from specialists?

By and large, specialists in compiling semantic cores will only do steps 1-3 of our scheme for you. Sometimes, for a large additional fee, they will also do steps 4-5 (collecting tails and checking the competitiveness of queries).

After that, they will give you several thousand key queries that you will need to work with further.

And the question here is whether you are going to write the articles yourself or hire copywriters. If you want to focus on quality rather than quantity, you need to write them yourself. But then a bare list of keys won't be enough for you. You will need to choose topics you understand well enough to write a quality article.

And here the question arises: why, then, do we actually need these specialists? Agree, parsing a base key and collecting the exact frequencies (steps 1-3) is not difficult at all. It will take you literally half an hour.

The most difficult thing is choosing HF queries with low competition. And then, as it turns out, you need those HF-LC queries on which you can write a good article. This is exactly what will take up 99% of your time working on the semantic core. And no specialist will do it for you. Well, is it worth spending money on such services?

When are the services of such specialists useful?

It's another matter if you initially plan to hire copywriters. Then you don't have to understand the subject of a query. Your copywriters won't understand it either: they will simply take several articles on the topic and compile "their own" text from them.

Such articles will be empty, miserable, almost useless. But there will be many of them. On your own, you can write at most 2-3 quality articles a week, while an army of copywriters will provide you with 2-3 shoddy texts a day. At the same time, they will be optimized for queries, which means they will attract some traffic.

In this case, yes, calmly hire the specialists. Let them also draw up technical specifications for the copywriters at the same time. But you understand, this will also cost money.

Summary

Let's go over the main ideas in the article again to reinforce the information.

  • The semantic core is simply a list of key queries for which you will write articles to promote the site.
  • Texts must be optimized for exact key queries, otherwise even your highest-quality articles will never reach the TOP.
  • The core works like a content plan for social networks: it keeps you out of a "creative crisis", and you always know exactly what you will write about tomorrow, the day after tomorrow and in a month.
  • To compile a semantic core, the free Slovoeb program is convenient; it is all you really need.
  • The five steps of compiling the core are: 1 - selecting basic keys; 2 - parsing the basic keys; 3 - collecting the exact frequency of the queries; 4 - checking the competitiveness of the keys; 5 - collecting the "tails".
  • If you want to write the articles yourself, it is better to build the semantic core yourself, for yourself. Specialists in compiling cores will not be able to help you here.
  • If you want to work on quantity and use copywriters for the articles, then delegating the compilation of the semantic core is quite possible. As long as there is enough money for everything.

I hope this instruction was useful to you. Save it to your favorites so you don't lose it, and share it with your friends. Don't forget to download my book: there I show the fastest way from zero to the first million on the Internet (a summary of 10 years of personal experience =)

See you later!

Yours, Dmitry Novoselov

Organic search is the most effective source of attracting targeted traffic. To use it, you need to make the site interesting and visible to users of the Yandex and Google search engines. There is no need to reinvent the wheel here: it is enough to determine what the audience of your project is interested in and how they search for information. This problem is solved when constructing a semantic core.

The semantic core is a set of words and phrases that reflect the theme and structure of the site. Semantics is the branch of linguistics that studies the meaning of language units, so the terms "semantic core" and "meaning core" are identical. Remember this remark: it will keep you from slipping into keyword stuffing, that is, cramming content with keywords.

By creating a semantic core, you answer the global question: what information can be found on the site. Since customer focus is considered one of the main principles of business and marketing, the creation of a semantic core can be looked at from a different perspective. You need to determine what search queries users use to find information that will be published on the site.

Building a semantic core solves another problem as well: the distribution of search phrases across the pages of the resource. By working with the core, you determine which page answers a specific search query or group of queries most accurately.

There are two approaches to solving this problem.

  • The first assumes creating the website structure based on the results of analyzing user search queries. In this case, the semantic core determines the framework and architecture of the resource.
  • The second approach involves planning the resource structure in advance, before analyzing search queries. In this case, the semantic core is distributed over the finished framework.

Both approaches work one way or another. But it is more logical to first plan the site structure and only then determine the queries by which users can find a given page. That way you remain proactive: you choose what you want to tell potential clients. If you tailor the resource structure to the keys, you remain an object that reacts to the environment instead of actively changing it.

Here it is necessary to clearly emphasize the difference between the "SEO" and marketing approaches to building the core. The logic of a typical old-school SEO goes like this: to create a website, you need to find keywords and select phrases that will easily get you to the top of the search results. After that, you create the site structure and distribute the keys among the pages, and the page content is optimized for the key phrases.

Here is the logic of a businessman or marketer: you need to decide what information to broadcast to the audience using the site. To do this, you need to know your industry and business well. First you plan the approximate structure of the site and a preliminary list of pages. After that, when building the semantic core, you find out how the audience searches for information. With the help of content, you answer the questions the audience asks.

What negative consequences does the "SEO" approach lead to in practice? Because development starts from the wrong end, the informational value of the resource decreases. A business should set trends and choose what to tell customers. It should not limit itself to reacting to search-phrase statistics and creating pages just for the sake of optimizing the site for some key.

The planned result of building a semantic core is a list of key queries distributed across the pages of the site. It contains the page URLs, the search queries, and an indication of their frequency.

How to build a website structure

The site structure is a hierarchical layout of pages. With its help, you solve several problems: plan information policy and logic for presenting information, ensure the usability of the resource, and ensure that the site meets the requirements of search engines.

To build a structure, use a tool convenient for you: table editors, Word or other software. You can also draw the structure on a piece of paper.

When planning your hierarchy, answer two questions:

  1. What information do you want to communicate to users?
  2. Where should this or that information block be published?

Imagine that you are planning the website structure of a small confectionery shop. The resource includes information pages, a publications section, and a showcase or product catalog. Visually, the structure might look like this:

For further work with the semantic core, arrange the site structure in the form of a table. In it, list the page names and indicate their subordination. Also include columns for page URLs, keywords, and their frequency. The table might look like this:

You'll fill in the URL, Keys, and Frequency columns later. Now move on to searching for keywords.
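If it is more convenient to keep this table in code than in a spreadsheet, here is a minimal Python sketch of the same layout. The page names and the CSV output are illustrative assumptions, not part of any particular tool.

```python
import csv

# Hypothetical structure table for the confectionery example: names and
# subordination are filled in now; URL, keys and frequency come later.
pages = [
    {"page": "Home",    "parent": "",        "url": "", "keys": "", "frequency": ""},
    {"page": "Catalog", "parent": "Home",    "url": "", "keys": "", "frequency": ""},
    {"page": "Cakes",   "parent": "Catalog", "url": "", "keys": "", "frequency": ""},
    {"page": "Blog",    "parent": "Home",    "url": "", "keys": "", "frequency": ""},
]

# Save the skeleton so the remaining columns can be filled in at later stages.
with open("site_structure.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["page", "parent", "url", "keys", "frequency"])
    writer.writeheader()
    writer.writerows(pages)
```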

What you need to know about keywords

To select a semantic core, you must understand what keywords are and which keywords the audience uses. With this knowledge, you will be able to use keyword research tools correctly.

What keywords does the audience use?

Keys are the words or phrases that potential clients use to find the information they need. For example, to make a cake, a user enters the query "Napoleon recipe with photo" into the search bar.

Keywords are classified according to several criteria. By popularity, high-, mid- and low-frequency queries are distinguished. According to various sources, search phrases are grouped as follows (a small classification sketch in code follows the list):

  • Low-frequency queries have up to 100 impressions per month. Some specialists extend the group to queries with up to 1,000 impressions.
  • Mid-frequency queries have up to 1,000 impressions. Some experts raise the threshold to 5,000 impressions.
  • High-frequency queries are phrases with 1,000 impressions or more. Some authors consider only keys with 5,000 or even 10,000 queries to be high-frequency.
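As a rough illustration, the classification can be sketched like this. The boundary values are parameters precisely because, as noted above, specialists draw them differently depending on the topic.

```python
def classify_frequency(impressions: int, low: int = 100, high: int = 1000) -> str:
    """Classify a query by monthly impressions; thresholds are topic-dependent."""
    if impressions <= low:
        return "low-frequency"
    if impressions < high:
        return "mid-frequency"
    return "high-frequency"

print(classify_frequency(6000))              # high-frequency with default thresholds
print(classify_frequency(6000, high=10000))  # mid-frequency in a more popular topic
```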

The difference in frequency estimates is due to the differing popularity of topics. If you are creating a core for an online store that sells laptops, the phrase "buy a Samsung laptop" with about 6,000 impressions per month will be mid-frequency. If you are creating a core for a sports club website, the query "aikido section" with a frequency of about 1,000 queries will be high-frequency.

What do you need to know about frequency when compiling a semantic core? According to various sources, from two-thirds to four-fifths of all user requests are low-frequency. Therefore, you need to build the broadest possible semantic core. In practice, it should be constantly expanded to include low-frequency phrases.

Does this mean that high- and mid-frequency queries can be ignored? No, you can't do without them. But consider low-frequency keywords as the main resource for attracting target visitors.

According to user needs, keys are combined into the following groups:

  • Information. The audience uses them to find some information. Examples of information requests: “how to properly store baked goods”, “how to separate the yolk from the white”.
  • Transactional. Users enter them when they plan to take an action. This group includes the keys “buy a bread machine”, “download a recipe book”, “order pizza for delivery”.
  • Other queries. These are key phrases from which it is difficult to determine the user's intent. For example, when a person uses the key "cake", he may be planning to buy a culinary product or to prepare it himself. In addition, the user may be interested in information about cakes: definitions, characteristics, classification, etc.

Some experts single out a separate group of navigational queries. With their help, the audience searches for information on specific sites. Here are some examples: "connected laptops", "city express track delivery", "register on LinkedIn". Navigational queries that are not specific to your business can be ignored when compiling a semantic core.
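A crude way to pre-sort queries by need is to look for marker words. The marker lists below are illustrative assumptions; in practice you would build them from your own semantics.

```python
# Illustrative marker lists, not an exhaustive classification.
TRANSACTIONAL = {"buy", "order", "download", "price", "delivery"}
INFORMATIONAL = {"how", "why", "what", "recipe"}

def classify_intent(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"  # e.g. the bare query "cake"

print(classify_intent("buy a bread machine"))                # transactional
print(classify_intent("how to properly store baked goods"))  # informational
print(classify_intent("cake"))                               # other
```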

How do you use this classification when building a semantic core? First, consider the needs of your audience when distributing keywords across pages and creating a content plan. Everything is obvious here: publications in the information sections must answer informational queries, and most key phrases without an expressed intent also belong there. Transactional queries should be answered by pages from the "Store" or "Showcase" sections.

Secondly, remember that many transactional queries are commercial. To attract organic traffic for the query "buy a Samsung smartphone," you will have to compete with Euroset, Eldorado and other business heavyweights. You can avoid unequal competition by following the recommendation given above: expand the core as much as possible and descend into lower frequencies. For example, the frequency of the query "buy a smartphone Samsung Galaxy s6" is an order of magnitude lower than the frequency of the key "buy a Samsung Galaxy smartphone."

What you need to know about the anatomy of search queries

Search phrases consist of several parts: a body, a specifier, and a tail. This is easiest to see with an example.

Consider the query "cake". It cannot be used to determine the user's intent. It is high-frequency, which means high competition in the search results. Using this query for promotion will bring a large share of untargeted traffic, which negatively affects behavioral metrics. The high frequency and non-specificity of the query "cake" are determined by its anatomy: it consists only of a body.

Now look at the query "buy a cake". It consists of the body "cake" and the specifier "buy". The latter determines the user's intent: it is specifiers that indicate whether a key is transactional or informational. Look at the examples:

  • Buy a cake.
  • Cake recipes.
  • How to serve the cake.

Specifiers can also express opposite intentions. A simple example: in the queries "buy a car" and "sell a car", the body is the same, but the specifiers point to users with opposite plans.

Now look at the query "buy a cake with delivery". It consists of a body, a specifier, and a tail. The tail does not change the user's intent or information need, but details it. Look at the examples:

  • Buy cake online.
  • Buy a cake in Tula with delivery.
  • Buy homemade cake in Orel.

In each case, the person’s intention to purchase the cake is clear. And the tail of the key phrase details this need.

Knowledge of the anatomy of search phrases allows you to derive a conditional formula for selecting keys for the semantic core. You must define core terms related to your business, product, and user needs. For example, customers of a confectionery company are interested in cakes, pastries, cookies, pastries, cupcakes and other confectionery products.

After that, you need to find the tails and specifiers that the project's audience uses with the base terms. With tail phrases, you simultaneously increase your reach and reduce competition within the core.
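This "body + specifier + tail" formula can be sketched as simple combinatorics. The word lists below are examples, and every generated phrase would still have to be verified against real frequency data.

```python
from itertools import product

bodies = ["cake", "cupcakes"]             # base terms from brainstorming
specifiers = ["buy", "order"]             # express the user's intent
tails = ["", "with delivery", "in Tula"]  # detail the need

candidates = [
    " ".join(part for part in (spec, body, tail) if part)
    for body, spec, tail in product(bodies, specifiers, tails)
]
print(candidates)  # ['buy cake', 'buy cake with delivery', 'buy cake in Tula', ...]
```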

The long tail is a term describing a strategy of promoting a resource through low-frequency key queries. It consists of using the maximum number of keys with a low level of competition. Promotion through low frequencies ensures high efficiency of marketing campaigns. This is due to the following factors:

  • Promotion using low-frequency keywords requires less effort compared to promotion using high-frequency competitive queries.
  • Working with long-tail queries reliably brings results, although marketers cannot always predict exactly which keywords will generate traffic. When working with high-frequency queries, honest marketers cannot guarantee results at all.
  • Low-frequency keys match search results to user needs with higher specificity.

For large sites, the semantic core can contain tens of thousands of queries, and it is almost impossible to select and correctly group them manually.

Services for compiling a semantic core

There are quite a lot of tools for selecting keywords. You can build the core using paid or free services and programs. Choose a specific tool depending on the tasks you face.

Key Collector

You cannot do without this tool if you are engaged in Internet marketing professionally, develop several sites, or form the core of a large site. Here is a list of the main tasks that the program solves:

  • Selection of keywords. Key Collector collects requests through Yandex's Wordstat.
  • Parsing search suggestions.
  • Cutting off inappropriate search phrases using stop words.
  • Filtering requests by frequency.
  • Finding implicit duplicate queries.
  • Determination of seasonal requests.
  • Collecting statistics from third-party services and platforms: Liveinternet.ru, Yandex.Metrica, Google Analytics, Google AdWords, Yandex.Direct, VKontakte and others.
  • Search for pages relevant to the query.
  • Clustering of search queries.

Key Collector is a multifunctional tool that automates the operations needed to build a semantic core. The program is paid. Everything Key Collector does can be done with free alternatives, but you will have to combine several services and programs.

SlovoEB

This is a free tool from the creators of Key Collector. The program collects keywords through Wordstat, determines the frequency of queries, and parses search suggestions.

To use the program, enter the login and password for your Direct account in the settings. Do not use your main account, as Yandex may block it for automatic queries.

Create a new project. On the Data tab, select the Add Phrases option. Indicate the search phrases that the project's audience is likely to use to find information about products.

In the “Collect keywords and statistics” menu section, select the desired option and run the program. For example, determine the frequency of key phrases.

The tool allows you to select keywords, as well as automatically perform some tasks related to analyzing and grouping queries.

Search queries in Yandex.Webmaster and Google Search Console

To see which phrases a page is shown for in Yandex results, open the "Search Queries" -> "Last queries" tab in the Yandex.Webmaster panel.

Here you see the phrases for which the site's snippet was shown or clicked in Yandex's TOP 50 over the last 7 days.

To view data only for the page that interests us, we need to use filters.

The possibilities for searching for additional phrases in Yandex.Webmaster are not limited to this.

Go to the “Search Queries” tab -> "Recommended queries."

There may not be many phrases here, but you can find additional ones for which the promoted page does not yet reach the TOP 50.

Query history

The big disadvantage of visibility analysis in Yandex.Webmaster, of course, is that the data is available only for the last 7 days. To get around this limitation a little, you can try to supplement the list using the “Search Queries” tab -> "Request History".

Here you will need to select “Popular Searches”.

You will receive a list of the most popular phrases for the last 3 months.

To get phrases from Google Search Console, go to the "Search Traffic" -> "Search Analytics" tab. Then select "Impressions", "CTR" and "Clicks" to see more data that can be useful when analyzing phrases.

By default, the tab displays data for 28 days, but you can expand the range to 90 days. You can also select the desired country.

As a result, we get a list of queries like the one shown in the screenshot.

New version of Search Console

Google has already made some tools available in the new version of the panel. To view queries for a page, go to the "Status" -> "Performance" tab.

In the new version, the filters are located differently, but the filtering logic remains the same, so there is no point in dwelling on it. The significant difference worth noting is the ability to analyze data over a longer period, not just 90 days. This is a real advantage compared with Yandex.Webmaster and its 7 days.

Competitive website analysis services

Competitors' websites are a great source of keyword ideas. If you are interested in a specific page, you can manually determine the search phrases it is optimized for. To find the main keys, it is usually enough to read the material or check the contents of the keywords meta tag in the page code. You can also use semantic text analysis services, for example Istio or Advego.

If you need to analyze the entire site, use comprehensive competitive analysis services.

You can also collect keywords with other tools, for example Google Trends, WordTracker, WordStream, Ubersuggest or Topvisor. But don't rush to master all the services and programs at once. If you are creating a semantic core for your own small website, use a free tool such as the Yandex keyword selection service or Google Keyword Planner.

How to choose keywords for the semantic core

The process of selecting key phrases consists of several stages:

  1. First, you will identify the basic keywords with which the audience searches for your product or business.
  2. The second stage is devoted to expanding the semantic core.
  3. In the third step, you will remove inappropriate search phrases.

Defining base keys

List the common search phrases related to your business and products in a spreadsheet or on paper. Gather your colleagues and brainstorm; record all proposed ideas without discussion.

Your list will look something like this:

Most of the keys you have written down are characterized by high frequency and low specificity. To get mid- and low-frequency search phrases with high specificity, you need to expand the core as much as possible.

Expanding the semantic core

You will solve this problem using keyword research tools such as Wordstat. If your business is tied to a region, select the appropriate region in the settings.

Using the key phrase selection service, you need to analyze all the keys recorded at the previous stage.

Copy the phrases from the left column of Wordstat and paste them into the table. Pay attention to the right column of Wordstat. In it, Yandex offers phrases that people used along with the main request. Depending on the content, you can immediately select the appropriate keywords from the right column or copy the entire list. In the second case, unsuitable requests will be eliminated at the next stage.

The result of this stage is a list of search phrases for each basic key obtained during brainstorming. The lists may contain hundreds or even thousands of queries.
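Merging the per-key lists into one table might look like the sketch below. The data is made up, since in reality it is pasted in from Wordstat.

```python
# Made-up Wordstat output: base key -> [(phrase, frequency), ...]
wordstat_results = {
    "custom cakes": [("custom cakes for a birthday", 250), ("custom wedding cakes", 180)],
    "cake recipes": [("sponge cake recipe", 900), ("custom cakes for a birthday", 250)],
}

core = {}
for phrases in wordstat_results.values():
    for phrase, freq in phrases:
        core.setdefault(phrase, freq)  # exact duplicates across base keys are kept once

# Sort from the highest frequency down, as recommended earlier.
for phrase, freq in sorted(core.items(), key=lambda item: -item[1]):
    print(phrase, freq)
```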

Removing inappropriate search phrases

This is the most labor-intensive stage of working with the core. Inappropriate search phrases have to be removed from it manually.

Do not use frequency, competition or other purely “SEO” metrics as a criterion for evaluating keys. Do you know why old-school SEOs consider certain search phrases to be trash? For example, take the key “diet cake”. The Wordstat service predicts 3 impressions per month for it in the Cherepovets region.

To promote pages for specific keywords, old-school SEOs bought or rented links. By the way, some experts still use this approach. It is clear that search phrases with low frequency in most cases do not recoup the funds spent on buying links.

Now look at the phrase "diet cakes" through the eyes of an ordinary marketer. Some representatives of the confectionery company's target audience are really interested in such products, so the key can and should be included in the semantic core. If the confectionery prepares the corresponding products, the phrase will be useful in the product descriptions section. If for some reason the company does not work with diet cakes, the key can be used as a content idea for the information section.

What phrases can be safely excluded from the list? Here are some examples (a filtering sketch in code follows the list):

  • Keys mentioning competing brands.
  • Keys mentioning goods or services that you do not sell and do not plan to sell.
  • Keys that include the words “inexpensive”, “cheap”, “at a discount”. If you are not dumping, cut off cheap lovers so as not to spoil behavioral metrics.
  • Duplicate keys. For example, of three variants of "custom cakes for a birthday" that differ only in word order, it is enough to leave one.
  • Keys that mention inappropriate regions or addresses. For example, if you serve residents of the Northern district of Cherepovets, the key “custom cakes industrial district” is not suitable for you.
  • Phrases entered with errors or typos. Search engines understand that the user is looking for croissants even if he types the key with a typo in the search bar.
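A minimal sketch of such cleaning, assuming hypothetical stop lists. Implicit duplicates are caught here by comparing word sets, which deliberately ignores word order.

```python
# Hypothetical stop list: competitor brands, unsold products, wrong regions, etc.
STOP_WORDS = {"cheap", "inexpensive", "discount", "industrial"}

def clean_core(phrases):
    cleaned, seen = [], set()
    for phrase in phrases:
        words = phrase.lower().split()
        if set(words) & STOP_WORDS:
            continue                   # cut off phrases with stop words
        signature = frozenset(words)   # word order is ignored on purpose
        if signature in seen:
            continue                   # drop implicit duplicates
        seen.add(signature)
        cleaned.append(phrase)
    return cleaned

print(clean_core([
    "custom cakes for a birthday",
    "for a birthday custom cakes",  # implicit duplicate, removed
    "cheap custom cakes",           # stop word, removed
]))
```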

After removing inappropriate phrases, you received a list of queries for the base key “custom cakes”. The same lists need to be compiled for other basic keys obtained during the brainstorming stage. After that, move on to grouping key phrases.

How to group keywords and build a relevance map

The search phrases with which users find (or will find) your site are combined into semantic clusters; this process is called search query clustering. A cluster is a group of closely related queries. For example, the semantic cluster "Cake" includes all key phrases associated with this word: cake recipes, order a cake, photos of cakes, wedding cake, etc.

A semantic cluster is a group of queries united by meaning, and it is a multi-level structure. Inside the first-order cluster "Cake" there are second-order clusters "Cake recipes", "Ordering cakes", "Photos of cakes". Within the second-order cluster "Cake recipes", a third clustering level can theoretically be distinguished: "Recipes for cakes with mastic", "Recipes for sponge cakes", "Recipes for shortbread cakes". The number of levels depends on the breadth of the topic. In practice, in most topics it is enough to identify the business-specific second-order clusters within first-order clusters.

In theory, a semantic cluster can have many levels; in practice, you will work with clusters of the first and second levels.
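One possible in-code representation of such a two-level hierarchy, using the "Cake" example from above; the phrases themselves are illustrative.

```python
clusters = {
    "Cake": {  # first-order cluster
        "Cake recipes":    ["sponge cake recipe", "mastic cake recipe"],
        "Ordering cakes":  ["custom cakes", "order a cake with delivery"],
        "Photos of cakes": ["wedding cake photos"],
    }
}

for first, seconds in clusters.items():
    for second, phrases in seconds.items():
        print(f"{first} -> {second}: {len(phrases)} phrases")
```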

You identified most of the first level clusters during brainstorming when you wrote down basic key phrases. To do this, it is enough to understand your own business, as well as look at the site diagram that you drew up before starting work on the semantic core.

It is very important to perform clustering at the second level correctly. Here, search phrases are modified with specifiers that indicate user intent. A simple example is the "cake recipes" and "custom cakes" clusters. The phrases of the first cluster are used by people who need information; the keys of the second cluster are used by clients who want to buy a cake.

You identified the search phrases for the “custom cakes” cluster using Wordstat and manual screening. They must be distributed between the pages of the “Cakes” section.

For example, the cluster contains word-order variants of the search query "custom football cakes".

If the company's assortment includes a corresponding product, create a corresponding page in the "Mastic Cakes" section. Add it to the site structure: indicate its name, URL, and search phrases with frequency.

Use keyword research tools to see what other search phrases potential customers use to find football-themed cakes, and add the relevant keys to the page's list.

In the list of cluster search phrases, mark the distributed keys in a way convenient for you. Distribute the remaining search phrases.

If necessary, change the site structure: create new sections and categories. For example, the page “custom cakes for PAW Patrol” should be included in the “Children’s Cakes” section. At the same time, it can be included in the “Mastic Cakes” section.

Please note two things. First, the cluster may not contain suitable phrases for a page you are planning to create. This can happen for various reasons: for example, imperfect tools for collecting search phrases, their incorrect use, or low popularity of the product.

The absence of a suitable key in the cluster is not a reason to refuse to create a page and sell a product. For example, imagine that a confectionery company sells children's cakes featuring characters from the cartoon Peppa Pig. If the list does not include the relevant keywords, clarify the needs of your audience using Wordstat or another service. In most cases, suitable requests will be found.

Secondly, even after removing unnecessary keys, there may still be search phrases in the cluster that are not suitable for the created and planned pages. They can be ignored or used in another cluster. For example, if for some reason a confectionery shop fundamentally does not sell Napoleon cake, the corresponding key phrases can be used in the “Recipes” section.

Clustering search queries

Search queries can be grouped manually, using programs such as Excel or Google Sheets, or automatically, using special applications and services.

Clustering allows you to understand how queries should be distributed across website pages for the fastest and most effective promotion.

Automatic clustering or grouping of search queries of the semantic core is carried out based on the analysis of sites included in the TOP 10 search engine results of Google and Yandex.

How automatic query grouping works: for each query, the TOP 10 results are examined. If at least 4-6 of the results match, the queries can be grouped together and placed on one page.
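A simplified sketch of this logic, assuming the TOP 10 URLs for each query have already been collected (in practice, the services listed below do this). The SERP data here is invented.

```python
# Invented SERP data: query -> set of URLs in its TOP 10.
serp = {
    "custom cakes":       {"a.com", "b.com", "c.com", "d.com", "e.com"},
    "order a cake":       {"a.com", "b.com", "c.com", "d.com", "f.com"},
    "sponge cake recipe": {"x.com", "y.com", "z.com"},
}
THRESHOLD = 4  # within the 4-6 matches mentioned above

def can_group(q1, q2):
    return len(serp[q1] & serp[q2]) >= THRESHOLD

groups = []
for query in serp:
    for group in groups:
        # "Hard" grouping: the query must match every member of the group.
        if all(can_group(query, member) for member in group):
            group.append(query)
            break
    else:
        groups.append([query])

print(groups)  # [['custom cakes', 'order a cake'], ['sponge cake recipe']]
```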

Automatic grouping is the fastest and most effective way to combine keywords into a site structure that is almost ready for use.

If the site structure is formed and queries are distributed across its pages incorrectly from the point of view of search engine statistics, it will, alas, be impossible to promote the pages to the TOP.

Applications and services for automatic grouping of search queries

Among the services that automate the grouping of keywords, it is worth highlighting:

  • Key Collector.
  • Rush Analytics.
  • TopVisor.

After all the keys have been distributed, you will receive a list of existing and planned site pages indicating the URL, search phrases and frequency. What to do with them next?

What to do with the semantic core

A table with a semantic core should become your road map and the main source of ideas when forming the site's content plan.

Look: you have a list of pre-titled pages and search phrases that reflect the needs of the audience. When drawing up a content plan, you just need to refine the name of each page or publication and include its main search query. This is not always the most popular key: besides popularity, the query in the title should best reflect the need of the page's audience.

Use the remaining search phrases as an answer to the question "what to write about". Remember, you don't have to work every search phrase into an information piece or a product description. The content should cover the topic and answer users' questions. Note once again: focus on information needs, not on search phrases and their inclusion in the text.
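As a sketch of turning one row of the core table into a content-plan entry: here the most frequent query becomes the working title, though, as just noted, a less popular key may reflect the audience's need better. The row itself is made up.

```python
# One made-up row of the core table.
page = {
    "url": "/cakes/football/",
    "queries": [("custom football cakes", 90), ("football cake for a boy", 40)],
}

main_query = max(page["queries"], key=lambda q: q[1])[0]
secondary = [q for q, _ in page["queries"] if q != main_query]

print("Working title:", main_query.capitalize())
print("Also cover:", ", ".join(secondary))
```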

Semantic core for online stores

The specifics of collecting and clustering semantics for an online store lie in four groups of pages that are especially important for subsequent work:

  • Home page.
  • Pages of sections and subsections of the catalog.
  • Product card pages.
  • Blog article pages.

We have already talked above about the different types of search queries: informational, transactional, commercial, navigational. For the section and product pages of an online store, transactional queries are of primary interest, i.e. queries entered by users who want to find sites where they can make a purchase.

You need to start forming a core with a list of products that you already sell or plan to sell.

For online stores:

  • as " body»requests will be made product names;
  • as " specifiers" phrases: " buy», « price», « sale», « order», « photo», « description», «