Category Archives: BI

Are VCs just stabbing around in the Big Data dark? – Part II

This post is a continuation of Part I of this blog, where I covered “Database/Platform” and “Querying / Analytics”.

Business Intelligence

Once we have some kind of standardized SQL-like language for analyzing the data stored in the “big data” databases/platforms (see Part I of this blog), I think a lot of investors are concluding that people will want nice technologies to visualize and interact with this data, ergo BI companies also make a good investment. I partially agree with this. The specific issue I have is that most of the BI companies getting funded today look exactly the same as all the old BI companies, and they all make the exact same set of claims (that’s a topic for another post, but in short, everyone says they are special because they are “beautiful”, “fast”, “easy”, “no IT”, “cloud”, etc.). Nobody is able to stand out from a messaging point of view since everyone says the same thing, and nobody stands out from a product point of view since there are few real differentiators between any of the products. As a result, I’m not sure any of those companies is likely to disrupt the existing BI firms.

I do believe, however, that this is exactly why there is huge potential in the BI sector. QlikTech and Tableau were the last vendors who really innovated, and their market caps in the billions show what can happen when you do something special here (I’m always on the lookout for people doing innovative work, so please do ping me if you feel I’m overlooking anyone). But I don’t think it’s fair to say no one is innovating without offering my own view of where the opportunity in BI lies, so I’ll try to cover quickly where I see the future of BI. First, though, remember:

“An innovation that is disruptive allows a whole new population of consumers access to a product or service that was historically only accessible to consumers with a lot of money or a lot of skill.” – Clayton Christensen, The Innovator’s Dilemma

Despite the marketing hype, BI today is still limited to those with skill, and as a result it is usually rolled out to only a small subset (<20%) of a company. So I personally think the big opportunity lies in making BI as simple to use as Google. Specifically, it should be a combination of Google Search PLUS Google Knowledge Graph. I don’t just mean a search interface to BI (an NLP search interface for a BI tool sounds like a great concept, but is actually terrible when you see it implemented).

[Image: Google Knowledge Graph result for a “forecast 94104” query]

What I mean is a system that understands the actual meaning of the data (i.e. it understands the difference between a customer and a product, and what operations might be relevant to each), as well as the context of who you are and what you typed into a search box. This should work the same way that GKG understands that when I type in “forecast 94104”, I want to see the weather for San Francisco right there in my search results… not a link to webpages that contain the keywords “forecast” and “94104”. This type of search has little to do with keywords and a lot to do with semantics, and BI needs the same level of understanding going forward.
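To make that concrete, here is a minimal, hypothetical sketch of the difference: instead of matching keywords, a query is resolved against a small semantic model of measures, dimensions, and time phrases. Everything here (the `Intent` class, the `SEMANTIC_MODEL` dictionary, the example query) is invented purely for illustration, not a description of any existing product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    measure: str                 # what to aggregate, e.g. "revenue"
    group_by: Optional[str]      # how to slice it, e.g. "region"
    time_filter: Optional[str]   # e.g. "last_quarter"

# A toy semantic model: the system "knows" what a measure, a dimension,
# and a time phrase are, rather than treating the query as a bag of keywords.
SEMANTIC_MODEL = {
    "measures": {"revenue", "orders", "margin"},
    "dimensions": {"region", "customer", "product"},
    "time_phrases": {"last quarter": "last_quarter", "this year": "this_year"},
}

def interpret(query: str) -> Intent:
    """Resolve a free-text question into a structured intent."""
    q = query.lower()
    measure = next((m for m in SEMANTIC_MODEL["measures"] if m in q), None)
    group_by = next((d for d in SEMANTIC_MODEL["dimensions"] if d in q), None)
    time_filter = next(
        (v for k, v in SEMANTIC_MODEL["time_phrases"].items() if k in q), None
    )
    if measure is None:
        raise ValueError("no known measure in query")
    return Intent(measure, group_by, time_filter)

print(interpret("revenue by region last quarter"))
# Intent(measure='revenue', group_by='region', time_filter='last_quarter')
```

A real system would obviously need far more than substring matching (synonyms, user context, learned rankings), but the point is that the output is a structured question about the business, not a list of documents containing the right words.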

A BI system should also be able to leverage public data sets as easily as private ones. There is a huge amount of data available online these days, and an intelligent system should be able to take advantage of it automatically, without someone needing to identify the data, “ETL” it into the system, and model how it relates to your existing data sets. Data, whether private or public, should be automatically connected together where it makes sense to do so. Think of a graph of entities rather than tables/views and star schemas.
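As a rough illustration of what “automatically connected” might look like, the hypothetical sketch below links records from a private customer table and a public company list into one entity graph whenever a crude matching rule decides they refer to the same thing, with no analyst-defined joins or star schema. The data, the matching rule, and all names are made up; real entity resolution would rely on much richer signals and machine learning.

```python
from collections import defaultdict

# Private data: a toy CRM table.
private_customers = [
    {"id": "c1", "name": "Acme Corp", "revenue": 120_000},
    {"id": "c2", "name": "Globex Inc", "revenue": 80_000},
]

# Public data: a toy list of publicly listed companies.
public_companies = [
    {"ticker": "ACME", "name": "ACME Corporation", "employees": 5_000},
    {"ticker": "GBX", "name": "Globex, Inc.", "employees": 1_200},
]

def normalize(name: str) -> str:
    """Crude canonical form of a company name; a real system would use
    ML-based entity resolution rather than token stripping."""
    drop = {"corp", "corporation", "inc"}
    tokens = [t.strip(",.").lower() for t in name.split()]
    return " ".join(t for t in tokens if t not in drop)

# Build the entity graph: nodes are record ids, an edge means
# "these two records probably describe the same real-world entity".
graph = defaultdict(set)
for c in private_customers:
    for p in public_companies:
        if normalize(c["name"]) == normalize(p["name"]):
            graph[c["id"]].add(p["ticker"])
            graph[p["ticker"]].add(c["id"])

print(dict(graph))
# {'c1': {'ACME'}, 'ACME': {'c1'}, 'c2': {'GBX'}, 'GBX': {'c2'}}
```

The occasional mis-link such a system would make is exactly the “slop at the microscale” Chris Anderson describes below, tolerated in exchange for never having to model the data by hand.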

Note that I said the data would be automatically connected together, which means no modeling, joins, etc. No modeling and no ETL work sounds like a recipe for a poor-quality system, right? Sort of. I am skipping details for brevity, but I am also suggesting that business analytics has pursued the vision of a “single version of the truth” for a long time, that it’s an unobtainable goal, and that we need to think differently going forward. Chris Anderson describes this best in The Long Tail:

“These probabilistic systems [Google, Wikipedia, etc.] aren’t perfect, but they are statistically optimized to excel over time and large numbers. They’re designed to scale, and to improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale.”

Business analytics has always followed the Britannica model: highly curated to be as accurate as possible, but exceptionally slow to react to changes in the business and always falling further and further behind. We need to start favoring speed over precision in analytics (which sounds scary to some: what about my financial reports, for example?). The way we achieve that speed and flexibility is to build highly adaptive, intelligent systems that do much of the heavy lifting behind the scenes (hint: through semantics + machine learning, instead of IT users configuring the system). For the first time, we have technology capable of actually pulling all of those things together into a single product, which I believe would completely disrupt the BI market again. I’m sure people have other ideas, but that’s where I see the future of BI.

Overall: I think investments in the BI market can make a lot of sense, but only if there is something truly innovative there. That doesn’t mean it has to be the same vision I have for where BI should go next, but if there’s no more to it than another “we make BI beautiful and simple”, then I’d skip it.

In Part III of this blog, I’ll cover the final category: Analytic Applications.