Today Yandex’s head of search, Alexander Sadovsky, announced that Yandex will start using paid links as a negative ranking signal. The official announcement, published today: “Новый этап в борьбе со ссылочным спамом” (“A new stage in the fight against link spam”). After discounting links in a few specific niches and areas around Moscow last year, they have now decided to turn it around and use paid links as a negative signal.
Paid links are still used massively in the Russian search market. After removing links from their algorithm, Yandex expected SEOs to start using different ways of getting links. Nothing changed at all: 70% of the links in popular, commercially interesting niches are still considered paid links by Yandex. Yandex found the SEO market to be stubborn (what a surprise!): the number of paid links used in those niches dropped just 16% last year. Links will be back in the main algorithm, but can now have a positive, neutral or negative effect on a website’s performance in the search engine result pages. Yandex is quite confident that its machine learning algorithms are now good enough to detect paid links.
The rollout of this updated algorithm will start in May. Personally, I really like the research and efforts of the Yandex research department. Compared to the other major search engines, they publicly try to battle the amount of crappy SEO spam. Good job!
Many of you will have noticed that the search engine result pages currently contain more elements than a few years back. No longer just 10 blue links accompanied by 3 to 10 AdWords advertisements: for many queries we now get additional cards related to the query. These additional cards are a result of Google’s Knowledge Graph. Not familiar with the concept of the Knowledge Graph? Just ask Google: “The Knowledge Graph is a knowledge base used by Google to enhance its search engine’s results with semantic-search information gathered from a wide variety of sources.”
My Friends of Search started the evening before, catching up with some of the search buddies I’ve met at previous conferences and meeting some interesting new people! For me, most conferences are still about meeting equally smart or smarter people within the world of search I’m working in. Conferences like these keep your thoughts about what you’re doing fresh and trick you into thinking differently. The day itself was filled with interesting sessions, and I have summed up a few of them. I will update the post once all the SlideShare decks are available.
Quick post today: since I’m using Searchmetrics every day, I got tired of manually typing in all the URLs I want to check, so I created a quick bookmarklet that instantly takes you to the Searchmetrics interface for the URL you are currently on. Add the following code as a bookmark in your favorites:
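The bookmarklet code itself did not survive on this page, so here is a minimal sketch of what such a bookmarklet could look like. Note that the Searchmetrics endpoint and the `url` query parameter below are assumptions, not the confirmed interface; adjust them to whatever the Searchmetrics suite currently uses.

```javascript
// Hypothetical helper: build a Searchmetrics research URL for a given page.
// The host and "url" parameter name are assumptions for illustration only.
function searchmetricsUrl(pageUrl) {
  return 'https://suite.searchmetrics.com/en/research?url=' +
         encodeURIComponent(pageUrl);
}

// The one-line bookmarklet form, to paste as a bookmark's URL field.
// It redirects the current tab to the (assumed) Searchmetrics page.
var bookmarklet =
  "javascript:location.href=" +
  "'https://suite.searchmetrics.com/en/research?url='" +
  "+encodeURIComponent(location.href);";
```

To use it, create a new bookmark in your browser and paste the `javascript:` one-liner as its URL; clicking the bookmark on any page then opens that page in the tool.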
Google Mobile Friendly test:
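The code for this one is also missing from the page, so here is a same-pattern sketch. The test endpoint below is an assumption: Google has moved the Mobile-Friendly Test around over the years, so verify the current URL before relying on it.

```javascript
// Hypothetical bookmarklet for Google's Mobile-Friendly Test.
// The endpoint is an assumption (the tool has been relocated over time).
function mobileFriendlyUrl(pageUrl) {
  return 'https://search.google.com/test/mobile-friendly?url=' +
         encodeURIComponent(pageUrl);
}

// One-line bookmarklet form, to paste as a bookmark's URL field.
var bookmarklet =
  "javascript:location.href=" +
  "'https://search.google.com/test/mobile-friendly?url='" +
  "+encodeURIComponent(location.href);";
```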
The biggest problem (after their data quality) I have with Google Webmaster Tools is that you can’t export all the data for external analysis. Luckily, the people behind the FMiner.com web scraping tool contacted me a few weeks ago to test their tool. The problem with Webmaster Tools is that you can’t use web-based scrapers, and the other screen scraping tools I tried were not that good at handling the steps you need to take to get to the data within Webmaster Tools. The software is available for Windows and Mac OS X users.
FMiner is a classical screen scraping app installed on your desktop; since it needs to emulate real browser behaviour, it has to run there. No coding is required, and the visual interface makes it possible to start scraping within minutes. Another feature I like is the ability to upload a set of keywords, for example to scrape internal search engine result pages, something that is missing in a lot of other tools. If you need to scrape a lot of accounts, the tool provides multi-browser crawling, which decreases the time needed.
The tool can be used for a lot of scraping jobs, including Google SERPs, Facebook Graph Search, downloading files and images, and collecting e-mail addresses. And for the real heavy scrapers, it has a built-in captcha solving API, so passing captchas while scraping is no problem.
Below you can find an introduction to the tool, with one of their tutorial videos about scraping IMDB.com: