SEO Forecasting Headaches: Time To Rank

So you’ve got your handy chart of keyword volumes? You’ve filtered out the off-topic items, and even have a decent list of terms you’re confident you CAN rank for.

Can You Predict Outcomes Though On A Specific Schedule?

How Long Does It Take To Rank #1?

In 2017, Ahrefs ran a study examining how long it took currently top-ranking content to BECOME top-ranking content after it was published. At the time of the study, the average #1 ranking page had been published for around 930 days, around 850 days for #2 content, dwindling to an average of around 650 days lower on the page. However, this is all based on averages, which is an imperfect answer: it obscures what percentage of content ranking on page one is very old, and how much of it is genuinely fresh.

The study also found that less than 22% of page-one search results were published in the last year. Unfortunately, the stats don’t give us detail or averages within that group; instead they break down, position by position, the percentage of ranking content published in the first year. About 1.5% of #1 ranking pages were published in the last year, rising to about 4.2% for position 10 results.

Of course, the huge volume of the internet obscures some valuable insights that should be explored. Is there a statistically significant difference between types of content and queries and the age of the ranking document/page? For example, if a query is topical in nature and mostly blog content ranks for it, does that content tend to be younger than for dictionary- or definition-type searches? So if you’re trying to forecast your SEO outcomes, how does time factor in?

How Long Do Professional SEOs Expect Before It Ranks? It Depends

The more competition there is, the longer it takes, because that’s almost always dependent on inbound links. If you shoot for 80% relevancy on [a content relevance tool like orbitwise], you have a good opportunity for quick success.

Terry VanHorne

If the keyword is brand new and nobody has written anything, you’ll rank #1 as soon as the content is published and Google indexes it.
If it’s highly competitive like “car insurance quotes”, “best credit cards” or “mesothelioma”, you might never get to #1.
So, I’d say that the average time across every query imaginable probably doesn’t matter. What does, is how long it will take in your specific vertical or niche.
So, as they say, it depends.

Jonas Sickler

There is no set time. It’s dependent on the strength of the competing pages. You can take the top spot very fast if the term is barely contested and the best competitors are weak, unoptimized, and only coincidentally ranking.
Alternatively, you may be facing a battle that goes on for years: clawing your way to position 2, taking position 1, only to lose it again within hours or days. This normally happens against mega-brand sites, for terms they have a ton of content on and lots of link equity.

Lyndon – Darth Na

To add, it depends on the already established E-E-A-T factors of the whole website. If the whole website, or the majority of it, has strong E-E-A-T factors, then a new page could quickly get to #1.

Sannidhi

It depends.
It really does. For one thing, brand new, sparkly fresh content has a massive advantage in QDF (Query Deserves Freshness) query spaces, while ‘evergreen’ content just isn’t. Additionally, ANY query can suddenly become QDF-affected when a word or phrase trends, changes context, or makes news.

Ammon Johns

How long is a string? That’s the boring answer. It can take a day, or it can take multiple years. For a lot of our clients, I’ve seen it take a couple of months. Ranking #1 for “online casino” is going to be an uphill battle, but ranking #1 for “Dental Clinic Kuala Lumpur” would most likely be a lot quicker.

Axel Hannson

0–50%: same day. >95%: 3 months. Time is an observer, not an input; the time decay is the time it takes to update all of the inputs into the page, plus the competitors updating theirs.

David Quaid

How Does Time To Rank Impact ROI Calculus?

The impetus is not “chasing rankings”, but trying to quantify the difficulties you’ll face once you’ve ALREADY done the keyword filtering and targeting. The queries and traffic that are close to conversion (high ROI) are a SUBSET of the total number of relevant terms.

So even if you filter down and create content targeting just a very specific niche of valuable, high-conversion terms, if you don’t know whether your published post will rank in 1 day or 1 year, then your content creation strategy is shooting in the dark.

Gathering insights, data, and experience, and having this conversation, is important. If you know, for example, that less than 22% of pages on the first page of search are less than a year old, and that less than ~1.2% of #1 rankings are less than a year old, that SHOULD impact your strategy. While the sheer SCALE of content online skews that data to be less meaningful (and there’s no detail on which types of content tend to have older vs. newer results), it logically points toward publishing and optimizing content over time being a better strategy than taking repeated NEW shots with different content pieces. No?

This is all to explore the headaches and challenges you face when trying to do even basic “forecasting”, market size research, and calculating the effort/resources (investment) needed to be spent to GET the return.

Jeremy Rivera

Ergodicity & SEO Forecasting

Ergodicity. The premise implies that freshness is a linear factor across all queries, when it isn’t – so it would only impact your strategy if, on multiple occasions, Google is ranking “fresher” content for the same query or query cluster. That can’t be extrapolated into a strategy for anything and everything.

Not everything is ergodic in real life. It’s used as an economics term, for example: the experience of a single individual over time might not represent the average experience of the entire population at a point in time. Another example is coin flips: observing one person flip a coin 10 times, versus 10 people each flipping a coin 10 times, you’ll get varying averages.
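As a toy illustration of that coin-flip point (a hypothetical sketch, not anyone’s actual methodology), a few lines of Python show how 10-flip averages scatter, both for one flipper over time and across a group of flippers, even though the true mean is 0.5:

```python
# Illustrative sketch only: with just 10 flips, a single person's
# time average and the per-person averages across a group both
# scatter around the true 0.5 mean.
import random

random.seed(42)  # arbitrary seed, chosen for reproducibility

def flip_average(n_flips):
    """Average of n_flips fair-coin flips (1 = heads, 0 = tails)."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

one_person = flip_average(10)                  # one individual over time
group = [flip_average(10) for _ in range(10)]  # ten individuals at once

print(one_person, [round(g, 1) for g in group])
```

The averages only converge as the number of flips grows, which is the point: small-sample observations (one site, one keyword set) are a shaky basis for population-level forecasts.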

When I do forecasting, I don’t look at time to rank, as it varies too heavily across keyword sets. Instead, I look at incremental gains: e.g., if we made a 10% improvement in ranking for this cluster of keywords, then based on MSV, estimated CTR, and the site’s CVR, what would the potential traffic and revenue/lead impact on the business be? You can then map this against real historical traffic numbers and create a BSTS forecast with upper and lower bounds to visually help any stakeholder understand.
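A minimal sketch of that incremental-gain arithmetic might look like the following. Everything here is an assumption for illustration: the CTR-by-position curve, the keyword volumes, the 2% CVR, and the $150 value per conversion are invented numbers, and this is not Dan’s actual model (a real BSTS forecast would be fit to historical traffic with a dedicated library):

```python
# Hypothetical incremental-gain forecast. The CTR curve and all
# keyword figures below are illustrative assumptions, not real data.

CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def projected_monthly_value(keywords, cvr, value_per_conversion):
    """Estimate clicks and revenue if each keyword hits its target rank."""
    total_clicks = 0.0
    for kw in keywords:
        ctr = CTR_BY_POSITION.get(kw["target_rank"], 0.01)
        total_clicks += kw["msv"] * ctr  # monthly search volume * est. CTR
    conversions = total_clicks * cvr
    return total_clicks, conversions * value_per_conversion

# Invented example cluster
cluster = [
    {"kw": "dental clinic kuala lumpur", "msv": 2400, "target_rank": 3},
    {"kw": "dentist kl price",           "msv": 880,  "target_rank": 5},
]

clicks, revenue = projected_monthly_value(cluster, cvr=0.02,
                                          value_per_conversion=150)
print(f"~{clicks:.0f} clicks/month, ~${revenue:.0f}/month")
```

Rerunning the same function with the current ranks versus the target ranks gives the *incremental* lift, which is the number stakeholders actually care about.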

Likewise, I ignore keyword difficulty scores, as they are averaged at keyword level across the top 10 ranking results – but they don’t take into account the source type of those results, and the quality threshold for each site (and result type) to be indexed and served for that query will vary from the others on that page.

I understand what you’re trying to achieve and the question being asked. In theory, I think it can be modeled; statistically, however, given the number of elastic variables, I don’t think it can be to any meaningful degree unless it’s done at a tactical level, at keyword level – in which case, that’s just SEO?

Dan Taylor