In an apparent move intended to be evil, Google have just rolled out a new SERP interface.

First impressions are:

  1. Although it seems to have more space, it also seems to make everything harder to read
  2. Ads are much more ‘discreet’ (ie they blend in with the ‘organic’ listings)
  3. The Goooooooooogle pagination no longer straddles the main content and the right-hand column, making it appear tiny.

Harder to Read

The reason it’s harder to read is that they’ve done away with the underline. Compare the following two screens taken just now, with new Google on the left and old Google on the right (taken from Google News SERPs, as they seem unaffected by the change):

[Screenshot: new Google (left) vs old Google (right)]


Goooooooooogle is smaller

At first glance, the Gooooooogle pagination looked a lot smaller than it actually is – the difference is just 3px.

[Screenshot: old vs new Goooooooooogle pagination]

Ads are more ‘discreet’

Well, I say ‘discreet’; what I really mean is that if you didn’t already know where ads are usually placed on a Google SERP, you would just start clicking ads without realising.

[Screenshot: new Google SERP with ads blended into the organic listings]

If you are having difficulty deciphering which results are adverts, take a look at the image below.

[Screenshot: the same SERP with the adverts marked]

Wait, Google said “Don’t be evil”, right?

Well, those days are long gone. If Google believed they were building a better ‘search experience’ for the consumers then they wouldn’t have made shops pay to be included in the Shopping search results.

It’s all about the money, honey. And that’s a real damn shame.

I watched the Tom Cruise sci-fi movie Oblivion the other day and wondered, in a geeky sort of way, where the co-ordinates that flash up on a computer screen actually point to.

Set in a dystopian future, a shuttle falls to earth at a lat/long of 41.146576,-73.975739.

And that equates to 17 Mein Drive.
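
If you fancy checking co-ordinates like these yourself, a reverse-geocode lookup does the job. Below is a minimal sketch using OpenStreetMap’s Nominatim service – the endpoint and response field are written from memory, so treat them as assumptions rather than gospel:

// reverse-geocode the Oblivion crash site (sketch; run in a browser console)
const resp = await fetch('https://nominatim.openstreetmap.org/reverse?format=jsonv2&lat=41.146576&lon=-73.975739');
const place = await resp.json();
console.log(place.display_name); // hopefully something on Mein Drive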

Now, if I were in charge of making up co-ordinates for a movie, I would have chosen something ironic – maybe the Oblivion Alton Towers ride, or the Oblivion Taproom, Florida – but 17 Mein Drive? Didn’t make sense.

So, I googled 17 Mein Drive and found the following (from LocateFamily.com): ‘17 Mein Drive – David Feinsilber’.

It didn’t mean much, but it was an unusual name, so on to IMDb, where we find that David Feinsilber was the visual effects production supervisor for Oblivion.

Great perk of the job.

I tried to get in touch with David to see if he would confirm this, but so far he hasn’t responded. If he does, I’ll update the post.

Hands on a computer - ooooOOoooh. #technology

Alexa vs Reality

I’d always wondered how closely Alexa’s traffic graphs mirror reality. In a recent article on how the Sun’s traffic was diving uncontrollably, I used an Alexa comparison graph to illustrate my point, but I’d never really put the time in to test Alexa’s statistics. I think it’s about time that I did.

Slapdash Methodology

My methodology was to use two high-traffic sites so that the margin of error would be smaller. Pretty basic stuff. I’m sure someone with more high-traffic sites and more time could do a better comparison, but I couldn’t see anything out there.

Site Number 1

Site 1's GA graph

So I took a screengrab of the last 2 years’ GA.

Then I got the same site’s graph from Alexa.

Site 1's Alexa Graph

Then I stretched the Alexa graph to make sure the legends matched up.

[Image: the Alexa graph stretched so the legends line up]

And having tweaked the colours and removed the Alexa watermark, away we go.

[Image: GA graph with the Alexa graph overlaid]

Then I made some final compensations for baseline disparity to get something that looks like this…

The Results of Site 1

[Image: the final GA/Alexa overlay for site 1]

I was a little surprised by this, because the results are much more accurate than I would have imagined. From April through to January, most spikes are faithfully reproduced in Alexa.

Spike A is perfect, but on the Alexa graph it is the same size as the traffic in September 2013, whereas according to GA we don’t hit traffic like that until October/November 2013.

Spike B is matched again, but this time the GA spike towers over the Alexa graph, and again with spike C, although the dip and subsequent spike are matched perfectly. This happens once more with spike D.

What Does This Mean?

Broadly speaking, traffic peaks and troughs seem to match pretty well. The only discrepancy is the scale of the graph. With Alexa, the traffic increases are, in general, disproportionate. But we should expect this, as Alexa’s metrics are not traffic related, but ‘reach’ related.

Alexa’s analytics are gathered from ‘thousands’ of plugins, and a site’s metrics are based on how it compares to all the others.

The Alexa Traffic Rank of a given website isn’t determined solely by the traffic to that site, but takes into account the traffic to all sites and ranks sites relative to each other. Since your site is ranked relative to other sites, changes in traffic to other sites affect your site’s rank.


So, if your traffic goes up by 10%, but all the sites above you increase theirs by 20%, Alexa will show a drop in traffic on their graphs.
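
To make that concrete, here’s a tiny back-of-the-envelope sketch (the numbers are made up purely for illustration) of how a site whose traffic grows can still register a falling ‘reach’:

// reach = your panel visits as a share of everyone's panel visits
function reach(siteVisits, totalVisits){
    return siteVisits / totalVisits;
}
var before = reach(100, 10000);  // 1% of panel traffic
var after = reach(110, 12000);   // your traffic +10%, the panel +20%
console.log(before, after);      // 0.01 vs ~0.0092 - Alexa draws a dip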

How about site 2?

Results for site 2 mostly mimicked site 1: peaks and troughs mainly mirrored, but the overall scale of the graph looked a little off. So, I decided to overlay the two sites’ Alexa graphs and their two GA graphs. This, I’m afraid, was harder to do than it sounds.

Whilst site 1 and site 2 have similar traffic levels, the spikes in site 1 throw the scales out a bit. Whilst this was easy to compensate for with GA, with Alexa it was tricky. Alexa graphs use a logarithmic axis to denote reach, and when I scale the image in Photoshop it does this by default in a linear fashion. I would have had to distort the image to get it to play ball and match up the lines, and perhaps I shall one day, but for now, we’ll have to live with a graph whose axes are slightly out.
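
For anyone who does want to do the distortion properly, the mapping is simple enough, it just isn’t a linear stretch. A rough sketch of reading a value off a log axis from a pixel position (the variable names are mine, not Alexa’s):

// convert a pixel position along a logarithmic axis back into a value;
// a linear stretch gets this wrong because equal pixel distances on a
// log axis represent equal ratios, not equal differences
function logAxisValue(px, pxMin, pxMax, valMin, valMax){
    var t = (px - pxMin) / (pxMax - pxMin);        // 0..1 along the axis
    return valMin * Math.pow(valMax / valMin, t);  // geometric interpolation
}
console.log(logAxisValue(50, 0, 100, 0.001, 0.1)); // halfway is 0.01, not ~0.05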

[Images: site 2 vs site 1 on Alexa, and the same comparison in GA]

As you can see, blue (site 1) is regularly above red (site 2), and the period from January to June is shown as a big disparity between the sites. The period from August to December is shown on Alexa as a big increase for blue, but in reality the blue site trounces the red with mammoth spikes in traffic.

Conclusion

1. Alexa will show which site is ahead of another, but don’t expect the differences to be as marked as they appear in the graph

2. Alexa will show most peaks and troughs of all sites.

3. Alexa is not a proper analytics tool. Please use responsibly.


I was looking through Google Analytics stats recently to see how many poor old sods there are who still use IE8, and it surprised me that the audience share of the Internet Explorer suite of browsers was less than 11%.

The reason that surprised me was that IE was still in the top 3 browsers (at #3, but still hanging in) but getting a titchy market share.

When I started web development, IE was the top of the tree; in fact, IE even had a version for Mac. Internet Explorer commanded the web: Netscape was near retirement and starting to hit the bottle, Mozilla was in short trousers, Opera was struggling to pick up girls at the disco, and Firefox, Chrome and Safari were just glints in the tech entrepreneurs’ eyes. By 2004 IE commanded 95% of all browser traffic.

Fast forward 10 years and the top 3 IE browser versions (10,9,8) account for 10.8% of all browser traffic and 95.5% of all IE traffic. These days the top 3 browsers take less than 60% of the market share.

The 10 most popular browsers (based on millions of impressions last month) were:

  1. Safari (26%)
  2. Chrome (22.3%)
  3. IE (11%)
  4. Android (9.9%)
  5. in-app Safari (9.8%)
  6. Firefox (7.9%)
  7. Opera mini (6%)
  8. Mozilla (2.5%)
  9. Mozilla Compatible Agent (1.2%)
  10. Blackberry (1%)

The Magic 5%

Up until very recently, the rule of thumb when designing a website was that if a browser was used by more than 5% of the current (or predicted) user base, the site should behave perfectly in it. So reproduce those rounded corners, make sure that any browser quirks were hacked around, and go that extra mile to ensure everyone has the same user experience.

Those browsers used by fewer than 5% of the audience would not be tested on. Or at least, not properly tested; one would ensure that users could view the site without it looking like a broken mess.

Sounds sensible. It was sensible. But things have to change…

Today’s 5 percenters

I broke the stats down by browser and version, and out of the whole lot just five combinations had over 5% of the market share. FIVE.

  1. Safari 7
  2. Chrome 31.0.1650.57
  3. Safari in-app
  4. Android 4.0
  5. Firefox 25.0

Internet Explorer 10.0 missed the cut by a tiny margin, so if we’re being generous, we can say that developers these days should just develop to 6 browsers, right?

Well, we can’t really develop for in-app Safari, as the app can use a myriad of settings, so we’ll bring it down to 5 browsers again.

But what if the client is using Firefox? Or an Amazon Kindle? Or Opera mini? What if there’s a slight difference between Chrome 31.0.1650.57 and 31.0.1650.63?

Well, maybe we can see how many browser/version combinations there are and create some kind of cut-off.

Good idea, but last month alone we recorded 4,470 different browser/version combinations in Google Analytics (ie no spiders or bots) and a mind-blowing 725 unique browsers.
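
If you did want to script such a cut-off, it would only take a few lines; a sketch, assuming you’ve exported the browser/version shares from GA into an array (the figures below are illustrative, not last month’s real data):

// hypothetical GA export: one entry per browser/version combination
var stats = [
    { browser: 'Safari', version: '7', share: 26.0 },
    { browser: 'Chrome', version: '31.0.1650.57', share: 22.3 },
    { browser: 'IE', version: '10.0', share: 4.9 }
    // ...and 4,467 more rows
];
// keep only the combinations that clear the magic 5%
var mustSupport = stats.filter(function(b){ return b.share >= 5; });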

4,470 Browsers

This is why the market share of the top 5 browsers is so titchy in comparison to previous years and leaves developers staring into an Escher staircase of browser testing.

A year ago, we had 2,984 browser/version combos and 652 unique browsers; 7 years ago, we had fewer than 40 browsers to deal with.

Browser Growth 2006-2013

So what next?

Well, I sure as hell ain’t browser testing 5,000 browsers next month, but I have been in situations where the site is deemed successful if it works on the CEO’s machines, regardless of how niche they are.

The most reasonable method for testing would be to ensure the site works on an agreed set of the most popular browsers – IE10, Chrome, Safari, Firefox – and on an agreed set of tablets/phones.

If a CEO gets in touch saying that the site looks funny on a device launched after the site went live, or that a jQuery/CSS effect flickers on his obscure phone, then the account handlers have to have an awkward conversation.

Hopefully telling them this story of the 4,500 browsers will save you a bit of cash.

It’s my party and I’ll fly if I want to

It’s my birthday on Saturday, and to celebrate, I thought I’d bring you all this amazing video of an incredible cake. The stop-motion film detailing the making of the cake is actually a light-hearted take on the video of the making of the Dreamliner.

If you take a look at that video, you can see that the Boeing Dreamliners are put together like a flatpack wardrobe, with some of the larger pieces being transported on an enormous plane (‘The Dreamlifter‘) built specifically for the task.

I’m not sure which is more impressive. I mean, sure the Royal Brunei Airlines’ plane is more environmentally friendly and quieter than other planes, but a cake that uses toothpicks for structural integrity is something I’ve never seen on The Great British Bake Off. Ruby Tandoh take note.


Sponsored by Royal Brunei

When OK! Magazine relaunched its website recently, I wasn’t expecting much: another (somewhat) high-profile responsive site that completely disintegrates in IE6, IE7 & IE8.

New OK! site rendered in IE8

Remember that for users of Windows XP, IE8 is the latest Internet Explorer they can run, and it is still the 2nd most popular IE version, with 25% of the IE stats.

So if you’re building a site this winter, make sure you think of the poor buggers who get a broken looking page and not the finished article.


New Relic


We had a problem today with a 3rd party aggregator, News Now, who have been using wget to scrape the Daily Express site to gather content. All of a sudden, the files they fetched were being cut short by around 100 characters.

I tried it myself with curl and reproduced the same problem, but there were a few odd things about this:

  1. The source code for the same page was not missing any characters.
  2. My working copy on a Unix machine for the same story was intact when using curl and wget.
  3. The HTML on the staging server on EC2 was also intact using wget and curl.

So, after eliminating the impossible (which I won’t bore you with), we were left with a problem that looked very improbable: New Relic were inserting JS code into the head and before the closing html tag to monitor users, but were not updating the HTTP Content-Length header.

Browsers are smart enough to ignore the Content-Length if it’s missing or incorrect, but wget and curl are set up by default to adhere strictly to the content length, hence the discrepancy.
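
You can see the mismatch for yourself without reaching for wget; a quick sketch using fetch, run from the affected page’s own console (note that compressed responses can legitimately report a smaller Content-Length, so this is indicative rather than conclusive):

// compare the declared Content-Length with the bytes actually received
const res = await fetch(location.href);
const declared = Number(res.headers.get('content-length'));
const body = await res.arrayBuffer();
console.log('declared:', declared, 'received:', body.byteLength);
// browsers shrug off the difference; wget and curl stop reading at 'declared'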

Short Term Solutions

1. Add the ‘--ignore-length’ option to wget.

2. Take New Relic off the live servers.

Medium Term Solution

We spoke to New Relic, who told us we could take the automated JS injection off and instead insert it ourselves onto every page. Doesn’t sound like much fun.

Long Term Solution

The long term solution for this would be for New Relic to update the Content-Length after it has messed around with the HTML, or even remove it entirely, but it doesn’t look like this is going to happen.
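
For what it’s worth, the fix is tiny wherever the injection happens. Here’s my guess, in Node, at the shape of ‘update the header after rewriting the body’ – a sketch, not New Relic’s actual code:

// rewrite the HTML, then recompute Content-Length from the new body
function injectSnippet(html, snippet){
    var out = html.replace('</body>', snippet + '</body>');
    return {
        body: out,
        contentLength: Buffer.byteLength(out, 'utf8')  // no longer stale
    };
}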

iPad


The latest in a series of Eureka moments concerning deviously tricky problems, this one drove me nuts. But it’s over now. Monster’s gone. Think of this as therapy.

The Problem

One of our clients (The Daily Express) had an issue affecting YouTube videos on their site. When they embedded the new(-ish) iFrame code on the site, it wouldn’t play for iPad users.

It wasn’t a problem at YouTube’s end, as the play button appeared properly, but there was an invisible layer that prevented the red play button from being pressed.

Travails

Well, the first suspect in anyone’s line-up would be z-index. I used WebKit’s web inspector to up the z-index to absurd levels, starting at 50 and ending up at 9,999,999 before I cut my losses and moved on.

More Clues

The web inspector didn’t show anything in front of the movie, and the text above and below it was selectable. Even the link to YouTube’s page was still clickable/touchable.

Trial by Trial and Error

So now I cut through the HTML, taking out huge chunks of code until the player started working. By reducing the code bit by bit, I could get the player working if I removed all the input boxes on the page using display:none.

Thinking glory was just a minute away, I turned off all custom styles on input boxes expecting to reproduce the success I had by hiding them. Alas, nothing.

To cut a long story short (too late), I made a local copy of the page with a local CSS file, cut the CSS (2,000+ lines) in half and kept whichever half still reproduced the problem. Repeat until you’re down to one line.
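
If you ever need to do the same, the halving can be driven from the console rather than a text editor. A rough sketch, assuming the page’s CSS all lives in its first stylesheet:

// delete the second half of the stylesheet's rules; if the player springs
// to life, the culprit was in that half, otherwise it's in the first.
// Reload and repeat on the guilty half until one rule remains.
var sheet = document.styleSheets[0];
var half = Math.floor(sheet.cssRules.length / 2);
for (var i = sheet.cssRules.length - 1; i >= half; i--){
    sheet.deleteRule(i);
}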

J’accuse

So what was it?

a, object, input, :active, :focus {outline:none}

And dissecting further it became clear that just this

:active {outline:none}

is enough to stop YouTube / Brightcove / other iFrame content working on iOS devices.

The Solution

As we only really care about a:active and input:active, we were able to manipulate the CSS line to look like this

a, object, input, a:active, a:focus, input:active, input:focus {outline:none}

and still work.

But Why, But Why?

Good question. One to which I don’t have an answer. The content of an iFrame can’t take any CSS from the parent container, and yet the links at the top of the page were working while the play button was not, which makes me wonder. And why would it work when I took the input boxes out of the page?

Perhaps if I have the time I can investigate, but for now I’m just relieved that it now works and we have a happy client once more.

So you can all have a tinker, I’ve included a video below and embedded that line of code.
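
If you’d rather reproduce it from scratch, injecting the offending rule into any page with an iFrame player should do the trick; a quick console sketch:

// add the culprit rule to the current page; on an iOS 7 device, embedded
// iFrame players (YouTube, Brightcove etc) should then stop responding
var s = document.createElement('style');
s.textContent = ':active {outline:none}';
document.head.appendChild(s);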

The Proof

A few weeks ago I wrote an article on how The Sun allowed GoogleBot to access its site. It appears that this was in contravention of Google’s terms and News International have subsequently revoked GoogleBot’s privileged access, which has resulted in no more access for sneaky chaps like me and a further plummet in traffic for the website.

[Alexa graph: The Sun’s traffic decline]


The Sun in violation of Google Webmaster Guidelines

Before The Sun’s decision to block GoogleBot, it was in clear violation of Google Webmaster guidelines on cloaking, which clearly state that “Cloaking refers to the practice of presenting different content or URLs to human users and search engines. Cloaking is considered a violation of Google’s Webmaster Guidelines because it provides our users with different results than they expected”.

Cloaking of this general sort was often used by ‘black hat’ SEO companies and punished harshly, as BMW found to their cost in 2006, but with Google News, the ‘grey hat’ technique of serving content to GoogleBot (and hence having your site properly indexed) while blocking the same pages to users is explicitly banned: “If you cloak for Googlebot, your site may be subject to Google Webmaster penalties”.

One cannot help thinking that Google were helpfully notified of this transgression by their friends in the media.

The Sun Dips Below The Mirror On Alexa

For the first time ever, The Sun’s main rival is now ahead in Alexa traffic (presumably the reason that The Sun has withdrawn from the ABC web traffic audit this month). The difference right now is slight, but blocking GoogleBot from its site will ensure that The Sun’s traffic continues to go into free fall for some time to come.

Mirror.co.uk beating thesun.co.uk for the first time


The Sun Almost Disappears from Google News

Google News now only indexes ~200 articles, compared with 10,000 for The Express, and each link comes with a parenthesis of death:

[Screenshots: Google News listings for The Sun, each carrying the parenthesis of death]


UPDATE DEC 2013

It looks like the Sun keeps on falling and will soon be overtaken by the Daily Star online:

Sun vs Mirror vs Daily Star

iOS 7

Seems that iOS 7 has had mixed reviews since its launch less than a week ago, but today I came across an issue which, in certain circumstances, will bring your site down for iOS 7 users.

Select Boxes now ignore onBlur

The site we had the problem with is a car warranty specialist site. During the quote process, we use AJAX and onChange events to load data directly into select boxes. For example, a user chooses their car make (‘Ford’), which then populates the range field (‘C-Max’, ‘Cougar’ … ‘StreetKa’, ‘Tourneo’).

With the previous OS, we had to attach an onChange listener to the select box in order to blur the user’s focus. If we didn’t do this, the user could tap the ‘next’ button on their keyboard and focus on the range field. The browser would then receive the data, but would be unable to amend the DOM as the select box was being viewed. Essentially, we were preventing the user from using the next button on certain fields.

iOS 7, however, prevents the developer from doing this. Now the select box ignores the blur function, which leaves the user with an empty select box node.

Luckily, this is relatively easy to get around:

Old Code:

// get range data after a manufacturer is selected
$("#make").blur(function(){
    getmake();
}).change(function(){
    $(this).blur();
});

So, in the first line, we are adding a blur listener to the select box with the id ‘make’. When ‘make’ is blurred (ie it loses focus; the user taps off it, somewhere else on the screen), we run a function called ‘getmake()’. We then chain another listener, an onChange event, to the same select box and tell it to blur whenever the value changes (ie when someone chooses a different option).

Essentially we are saying that if the user taps the ‘done’ button on their keyboard (which counts as losing focus from our ‘make’ element), we should run the ‘getmake()’ function. If the user taps the ‘next’ button on their keyboard, it triggers the onChange listener and we then tell the browser to lose focus from the ‘make’ element. This, in turn, will fire the blur listener from the first line.

New Code:

// get range data after a manufacturer is selected
$("#make").blur(function(){
    getmake();
}).change(function(){
    $(this).blur();
}).focus(function(){
    if($(this).children().length==1 && $(this).children().first().attr('value')=='error'){
        $(this).blur();
    }
});

So now we add an onFocus listener. What this does is check to see if the select element has one child and if so, it checks that the child has a value of ‘error‘. If both conditions are true, it then runs the blur function – ie tells the browser to lose focus.

We then make sure our select boxes have one child (before DOM manipulation), like so:

<option value="error">Please wait...</option>

Why does this solution work? 

It seems that Safari in iOS 7 will skip blur listeners if they fire synchronously. So, the next button/link in iOS 7 essentially pauses slightly before running the focus, probably to render the blurry soft-focus effect. Therefore, our onChange event does still happen, but the browser then runs a focus event a little while afterwards.

By adding a focus listener with conditions, we are restoring the balance of how we want the site to work for the user.

Mixing onChange and alerts freezes Safari in iOS7

But as these things tend to go, once you find one error another soon surfaces, and the next one was massive. It elicited this response from one annoyed customer: “Will you tell the clowns who set up the page to fix the frigging the website so I can leave it”.

So what could have gone so badly wrong?

On the same site, we don’t allow customers with older cars to purchase a warranty. The customer chooses a registration date from two select boxes. If the chosen month and year mean that the car is over 12 years old, we show an alert box and disable the form.

These were triggered by an onChange listener that would run a function called ‘checkcardate‘:

function checkcardate(){
    var d = new Date();
    var year = d.getFullYear();
    var month = d.getMonth(); // 0-11, hence the +1 below
    var regyear = $('#regyear').val();
    var regmonth = $('#regmonth').val();
    // is the registration date more than 12 years ago?
    if((regyear < year-12) || ((regyear == (year-12)) && (regmonth < (month+1)))){
        alert("As your vehicle is over 12 years old we are unable to provide a warranty quote");
        window.document.quoteform.action='index.html';
        $('#submit').hide();
        return false;
    } else {
        $('#submit').show();
        window.document.quoteform.action='';
        return true;
    }
}

But when customers on iOS 7 chose an option that triggered the listener function, they would get the following screen:

[Photo: the frozen alert screen on iOS 7]

And they could not escape from the screen. All elements in Safari became unresponsive. The only way to exit was to hit the home button, switch to multitasking and swipe Safari out of the list.

We first tried blurring the browser focus as we did on the first problem, but that proved ineffective. The only solution was to introduce a slight delay in the alert box call.

Instead of <select onChange="checkcardate()"… we used <select onChange="ios7checkcardate()"… and created a new function like so:

var ios7timer = null;
function ios7checkcardate(){
    // give Safari ~500ms to finish its focus/rendering work before the alert fires
    ios7timer = setTimeout(checkcardate, 500);
}

That half a second delay was enough for Safari to draw whatever it needed to before displaying the alert. Before we introduced this, it looked as though Safari had the user’s focus trapped somewhere on the page, and then set a modal layer on top of everything that required a response. Until the user taps OK on the alert box, the page will remain unresponsive, but as the focus is underneath this layer, the user is trapped forever.

Conclusion

Please check your sites for onChange / onBlur listeners when used in conjunction with alert functions or DOM manipulation.

Apple being Apple, they’re bound to fix this sooner or later, but you don’t want to be picking up emails at 11pm entitled “Your pathetic website” when it has nothing to do with your code and everything to do with Apple’s.
