Yandex will start using Paid Links as a negative ranking signal!

Today Yandex’s head of search, Alexander Sadovsky, announced that Yandex will start using paid links as a negative ranking signal. The official announcement: Новый этап в борьбе со ссылочным спамом (“A new stage in the fight against link spam”). After discounting links in a few specific niches and in the Moscow region last year, they have now decided to go a step further and use paid links as a negative signal.

Paid links are still used massively in the Russian search market. After removing links from its algorithm, Yandex expected SEOs to start acquiring links in different ways. Nothing changed at all: 70% of the links in popular, commercially interesting niches are still considered paid links by Yandex. Yandex found the SEO market to be stubborn (what a surprise!): the number of paid links used in those niches dropped by just 16% last year. Links will be back in the main algorithm, but can now have a positive, neutral or negative effect on a website’s performance in the search engine result pages. Yandex is quite confident that its machine learning algorithms are now good enough to detect paid links.

The rollout of this updated algorithm will start in May. Personally, I really like the research and efforts of Yandex’s research department. Compared to the other major search engines, they publicly try to battle the flood of crappy SEO spam. Good job!

Optimising Google’s Knowledge Graph – #SMX Munich

Many of you will have noticed that search engine result pages currently contain more elements than they did a few years back. No longer just ten blue links accompanied by 3 to 10 AdWords advertisements: for many queries we now get additional cards related to the query. These additional cards are a result of Google’s Knowledge Graph. Not familiar with the Knowledge Graph concept? Just ask Google: “The Knowledge Graph is a knowledge base used by Google to enhance its search engine’s results with semantic-search information gathered from a wide variety of sources.”
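If you want to see which entities Google has in its Knowledge Graph for a given query, the Knowledge Graph Search API offers a quick way to look under the hood. A minimal JavaScript sketch, assuming you have an API key from the Google Developers Console (the “Taj Mahal” query is just an example):

var key = 'YOUR_API_KEY';
var query = 'Taj Mahal';
var url = 'https://kgsearch.googleapis.com/v1/entities:search' +
          '?query=' + encodeURIComponent(query) +
          '&limit=3&key=' + key;

// Fetch the top matching entities and print name, description and score
fetch(url)
  .then(function (response) { return response.json(); })
  .then(function (data) {
    data.itemListElement.forEach(function (item) {
      console.log(item.result.name + ' – ' + (item.result.description || '') +
                  ' (score: ' + item.resultScore + ')');
    });
  });

Every result comes with a resultScore, a rough indication of how confident Google is that the entity matches your query.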

Continue reading

Searchmetrics bookmarklet

Quick post today: since I use Searchmetrics every day, I got tired of manually typing in all the URLs I want to check, so I created a quick bookmarklet that instantly takes you to the Searchmetrics interface for the URL you are currently on. Add the following code as a bookmark in your favorites:

javascript:void(window.open('http://suite.searchmetrics.com/en/research?url='+window.location.href,'_blank'));

You can also add a bookmark, give it a name (e.g. Instant Searchmetrics) and copy and paste the above code as the URL. Update 23-12: people asked me whether this is possible for every tool. Basically, if it is web based: yes. For example, the URL the Google Mobile Friendly test uses is https://www.google.com/webmasters/tools/mobile-friendly/?url=http://www.notprovided.eu, so that gives the following JavaScript code to add to your bookmarks:

javascript:void(window.open('https://www.google.com/webmasters/tools/mobile-friendly/?url='+window.location.href,'_blank'));

For SEMRush:

javascript:void(window.open('http://www.semrush.com/info/'+window.location.href,'_blank'));
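The same pattern works for any web-based tool that accepts a URL as a query parameter. A generic template you can adapt (tool.example.com is a placeholder for the tool’s endpoint); wrapping window.location.href in encodeURIComponent is a good idea when the page you are on can have a query string of its own:

javascript:void(window.open('https://tool.example.com/research?url='+encodeURIComponent(window.location.href),'_blank'));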


Scraping Webmaster Tools with FMiner

Screen Scraping Webmaster Tools!

The biggest problem I am having with Google Webmaster Tools (after the problem with its data quality) is that you can’t export all the data for external analysis. Luckily the guys from the FMiner.com web scraping tool contacted me a few weeks ago to test their tool. The catch with Webmaster Tools is that you can’t use web-based scrapers, and the other screen scraping tools I tried were not very good at handling the steps you need to take to get to the data within Webmaster Tools. The software is available for Windows and Mac OS X users.

FMiner is a classic screen scraping app that is installed on your desktop; since you need to emulate real browser behaviour, a desktop installation is required. No coding is needed and the interface is visual, which makes it possible to start scraping within minutes. Another possibility I like is uploading a set of keywords, for example to scrape internal search engine result pages, something that is missing in a lot of other tools. If you need to scrape a lot of accounts, the tool provides multi-browser crawling, which decreases the time needed.
This tool can be used for a lot of scraping jobs, including Google SERPs, Facebook Graph Search, downloading files and images, and collecting e-mail addresses. And for the real heavy scrapers, there is also a built-in captcha solving API, so passing captchas while scraping is no problem.
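FMiner itself is visual and needs no coding, but conceptually the keyword-upload feature boils down to a loop like the following JavaScript sketch, which fetches an internal search result page for every keyword in your list (the example.com search URL is hypothetical, and this is not FMiner’s actual implementation):

// Conceptual sketch of keyword-based scraping, not FMiner's actual code
var keywords = ['red shoes', 'blue shoes', 'running shoes'];

keywords.forEach(function (keyword) {
  var url = 'http://www.example.com/search?q=' + encodeURIComponent(keyword);
  fetch(url)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      // A real scraper would parse titles, result counts, etc. from the HTML
      console.log(keyword + ': fetched ' + html.length + ' bytes');
    });
});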

Below you can find an introduction to the tool, with one of their tutorial videos about scraping IMDB.com:
Continue reading

Cheatsheet: managing search robot behaviour

Search Robot Management Cheatsheet

Many discussions take place about the differences between crawling, indexing and caching. The behaviour of search engine robots can be controlled in many ways, and because of all the different possibilities I find myself clarifying my point of view over and over again. So, to make sure everyone is clear about how you can control the crawling and indexing behaviour of the major search engines (Google, Bing, Yandex and Baidu), remember the following table, or print it and hang it next to your screen to win the next discussion with your fellow SEOs 🙂
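To give one minimal example of why the distinction matters (the /private/ path is hypothetical): robots.txt only controls crawling, so a URL that is blocked there can still end up in the index when other pages link to it:

# robots.txt: stops robots from CRAWLING matching URLs,
# it does not remove the URL itself from the index
User-agent: *
Disallow: /private/

To keep a crawlable page out of the index and out of the cache, use <meta name="robots" content="noindex, noarchive"> in the page’s head or the equivalent X-Robots-Tag HTTP header instead; a robot has to be able to crawl the page to see that instruction, so don’t combine it with a robots.txt block.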
Continue reading