Recap of Friends of Search Amsterdam #fos15

My Friends of Search started the evening before, catching up with some of my search buddies I'd met at previous conferences and meeting some interesting new people! For me, most conferences are still about meeting equally smart or smarter people within the world of search I'm working in. Conferences like these keep your everyday thinking fresh and trick you into looking at things differently. The day itself was filled with interesting sessions, and I've summed up a few of them below. I will update the post once all the SlideShare decks are available.

One step ahead of the rest in Search by Michael King
With his great personality, Michael got the conference going by making sure everyone understands that algorithm chasing will not benefit you at all, let alone your clients. Doing the same as everyone else will never bring you the best possible results. Google is getting smarter by the day, so don't waste your time focusing on Google's every move. Think about how human behaviour is changing and adapt your marketing strategies accordingly. Consider the latest changes, all resulting from user behaviour and informational needs: the internet of things, Google Now, structured data and, above all, the transition to mobile.

Strategic thinking for Search (aka. One trick ponies get shot!) by Ian Lurie

I could write a few words about the takeaways of Ian’s session, but he actually wrote an extensive post about it earlier: Digital Marketing Strategy in 2015: One-trick ponies get shot.

SEO Quick Wins: The Small Things that Make The Big Differences by Richard Baxter

I didn't attend this session myself, but the slides provide some useful quick wins!

Keywords are the past – The Future of Search by Marcus Tober

The future of search is concepts, entities and user-centered topics, according to Marcus. To make a strong argument, he shared some of his secret tactics from the past. Since I have done a few sessions on that topic in the last 12 months, I have to agree with him wholeheartedly :) A few lessons Marcus shared:

  • Forget keywords; never focus on one specific keyword anymore.
  • Hummingbird changed everything: there is a shift from keywords to concepts and entities.
  • Think about the intention of your page.

What UX, IA and SEO can teach each other by Marianne Sweeny

During her talk, she used a unique point of view to look at the world of optimizing for search engines. The reason she thinks differently is that the way people currently think makes no sense to her. According to her (I have to disagree :) ), people did not see Panda and Penguin coming. The same goes for a lot of the earlier updates, like Florida; can you still remember that one? Google started out by indexing the web and ranking websites with the help of the PageRank scoring method. A lot has changed since then: it now tries to incorporate human signals, but isn't succeeding very well. Marianne shared a few examples of Google getting it completely wrong, and she is not happy about the fact that Google is not showing any useful information in Google Webmaster Tools. Also read my article: Why you should not use Google's WMT data.

Since it is still important to make Google happy, focus on the UX aspects of your website. Think about your site architecture and try not to build more than three levels deep. Do a content audit and make sure there is no duplicate content, since it is a negative factor and wastes a lot of crawl budget. Searching is UX, so think about Google's next step.

Beyond Search: The next playing fields for marketing tech geeks by Pascal Fantou

The CMOs of tomorrow will be search marketers. Today, search is about the balance between IT, data and marketing. Make sure you understand the individual areas of expertise and you'll have a future in search. To make your life easier, understand your business, get the data and, if you're smart, automate all these processes. Some of the required areas of expertise:

  • JavaScript
  • Regular Expressions
  • APIs
  • Browser Automation

Pascal also shared a few of the tools he is using, which ruined my weekend schedule (thanks for the nice chats, Pascal :)).

Jaws in Space – How to develop & pitch creative ideas by Hannah Smith

I always enjoy Hannah's talks. Not because of all the cases, since I've seen so many of them already, but because the way she presents makes the session really inspirational and puts your brain into gear. She shared seven easy-to-remember factors to take into consideration when doing content marketing.

Using creative campaigns to WIN in SEO by Lisa Myers

Starting with the obvious: buying links is risky in 2015. Instead, focus on making good shit. Focus on getting the big links by targeting very high-authority websites, and get secondary links as a result. Especially in smaller markets like the Nordic countries, there is not that much link equity available, so really target your content at a few specific websites and make it relevant for them. Don't think you can't do it: the difference between someone working at McDonald's and Richard Branson is not the money, it's the way of thinking.

Conclusions

Overall, the conference met my expectations. A high number of quality international speakers, and Amsterdam makes for a great atmosphere. My recommendations to the organizers to keep next year on the same level as the previous two:

  • Divide the sessions over two days. There is too much to choose from in the current program and not enough time.
  • With a two-day program, more people stay in town, so the social part of the conference will be better too :)
  • Try to get some localized content. It is good to have people share content marketing cases, but search consultants in the Netherlands often have to deal with lower budgets than companies in the UK and US.

I thoroughly enjoyed Amsterdam and want to thank the organisation! See you next year?!

Searchmetrics bookmarklet


Quick post today: since I'm using Searchmetrics every day, I got tired of manually typing in all the URLs I want to check, so I created a quick bookmarklet that instantly takes you to the Searchmetrics interface for the URL you are currently on. Add the following code as a bookmark in your favorites:

javascript:void(window.open('http://suite.searchmetrics.com/en/research?url='+encodeURIComponent(window.location.href),'_blank'));

You can also add a bookmark, give it a name (e.g. Instant Searchmetrics) and copy and paste the above code as the URL. Update 23-12: people asked me if this is possible for every tool. Basically, if it is web-based: yes. For example, the URL the Google Mobile Friendly test uses is https://www.google.com/webmasters/tools/mobile-friendly/?url=http://www.notprovided.eu, so that gives the following JavaScript code to add to your bookmarks:

javascript:void(window.open('https://www.google.com/webmasters/tools/mobile-friendly/?url='+encodeURIComponent(window.location.href),'_blank'));
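Since these bookmarklets only differ in the tool's base URL, you can also generate them. A small sketch; the `makeBookmarklet` helper is something I made up for illustration, not part of any tool:

```javascript
// Hypothetical helper: build a bookmarklet for any web-based tool that
// accepts the current page's URL as a query parameter.
function makeBookmarklet(baseUrl) {
  // encodeURIComponent keeps query strings and fragments in the current
  // page's URL from breaking the tool's own ?url= parameter.
  return "javascript:void(window.open('" + baseUrl +
         "'+encodeURIComponent(window.location.href),'_blank'));";
}

console.log(makeBookmarklet('https://www.google.com/webmasters/tools/mobile-friendly/?url='));
```

Note that tools which take the URL as a path segment instead of a query parameter (like SEMRush's /info/ below) may expect the raw, unencoded URL.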

For SEMRush:

javascript:void(window.open('http://www.semrush.com/info/'+window.location.href,'_blank'));


Scraping Webmaster Tools with FMiner

Screen Scraping Webmaster Tools!

The biggest problem I have with Google Webmaster Tools (after the problems with its data quality) is that you can't export all the data for external analysis. Luckily, the people behind the FMiner.com web scraping tool contacted me a few weeks ago to test their tool. The problem with Webmaster Tools is that you can't use web-based scrapers, and the other screen scraping tools I tried were not that good at handling the steps needed to get to the data within Webmaster Tools. The software is available for Windows and Mac OS X.

FMiner is a classical screen scraping app: because it needs to emulate real browser behaviour, it is installed on your desktop. There is no coding required and the interface is visual, which makes it possible to start scraping within minutes. Another feature I like is uploading a set of keywords, for example to scrape internal search engine result pages, something that is missing in a lot of other tools. If you need to scrape a lot of accounts, the tool provides multi-browser crawling, which decreases the time needed.
This tool can be used for a lot of scraping jobs, including Google SERPs, Facebook Graph Search, downloading files and images, and collecting e-mail addresses. And for the real heavy scrapers, there is also a built-in captcha solving API, so passing captchas while scraping is no problem.

Below you can find an introduction to the tool, with one of their tutorial videos about scraping IMDB.com:

Cheatsheet: managing search robot behaviour

Search Robot Management Cheatsheet

Many discussions take place about the differences between crawling, indexing and caching. The behaviour of search engine robots can be controlled in many ways, and because of all the different possibilities I keep having to clarify my point of view over and over again. So, to make sure everyone is clear about how you can control the crawling and indexing behaviour of the major search engines (Google, Bing, Yandex and Baidu), remember the following table, or print it and hang it next to your screen to win the next discussion with your fellow SEOs :)
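The core distinction behind the table: robots.txt controls crawling, while a meta robots tag or an X-Robots-Tag response header controls indexing. For non-HTML files such as PDFs, the header is your only option. A sketch with a made-up helper (the directive names are the documented ones; the function itself is invented for illustration):

```javascript
// Hypothetical helper: build an X-Robots-Tag header value from the
// desired robot behaviour. Useful for non-HTML responses, where a
// <meta name="robots"> tag is not possible.
function xRobotsTag({ index = true, follow = true, archive = true } = {}) {
  const directives = [];
  if (!index) directives.push('noindex');     // keep the URL out of the index
  if (!follow) directives.push('nofollow');   // don't follow links on the page
  if (!archive) directives.push('noarchive'); // don't show a cached copy
  return directives.length ? directives.join(', ') : 'all';
}

// e.g. keep a PDF out of the index and out of the cache:
console.log(xRobotsTag({ index: false, archive: false })); // "noindex, noarchive"
```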

GWT Hack: Fetching external websites without verifying

For a simple quick scan of a random website, you can't use the standard Fetch as Googlebot functionality without verifying the domain first. Since this is becoming more and more important (Google now also looks at hidden layers of content, ads, et cetera, and you want to test the smartphone bot too), you can use a simple workaround.

Create a simple, clean HTML file on a domain you already own. In my case, I use notprovided.eu, which is verified in my Google Webmaster Tools account. Within this HTML file you can easily add an iframe or embed element containing the URL you want to test with the Fetch as Google function.

Within WMT, you fetch the URL that includes the iframe or embed section. This will show the external website, parsed by the selected Googlebot. Make sure the file is not blocked from crawling by robots.txt; to keep it out of the index, add a noindex tag instead. If you want to fetch an HTTPS URL, you also need an HTTPS domain verified in WMT to include such a URL via iframe or embed codes.
