Author Archives: obeleask

About obeleask

Currently VP Products at SignalDemand, an analytics startup for commodity manufacturers. Previously VP Product Management at SAP, where I was most recently leading a project to create a disruptive technology in the area of business analytics, leveraging graph databases, search, in-memory columnar databases, machine learning and mobility (several patents pending – it was pretty cool stuff). I’ve led product management for OutlookSoft (post-acquisition), been involved in a huge number of M&A and partnership transactions/evaluations, run the Data & Analytics technology strategy for SAP, and held roles in sales operations, architecture & engineering. I also spent 5 years in consulting, where I worked for a ton of different companies. I’ve worked in basically all areas of Enterprise Analytics (Business Intelligence, Data Warehousing & Big Data, Enterprise Performance Management, Enterprise Information Management, Analytic Applications, Data Mining / Machine Learning / Predictive Analytics). I live in the SF Bay Area. Husband, dad, car guy, interested in startups & VC, space, enterprise software, analytics, technology, stronglifts, and anything innovative that piques my interest. I’m currently getting my MBA, but don’t hold it against me – I can code, and I’m only doing the MBA to accelerate getting a green card (7 years and counting on visas!).

Are VCs just stabbing around in the Big Data dark? – Part II

This post is a continuation from Part I of this blog, where I covered “Database/Platform” and “Querying / Analytics”.

Business Intelligence

Once we have some kind of standardized SQL-like language for analyzing the data stored in the “big data” databases/platforms (see Part I of this blog), I think a lot of investors are concluding that people will want nice technologies to visualize and interact with this data; ergo, BI companies also make a good investment. I partially agree with this. The specific issue I have is that most of the BI companies getting funded today look exactly the same as all the old BI companies, and they all try to make the exact same set of claims (that’s a topic for another post, but in short, everyone says they are special because they are “beautiful”, “fast”, “easy”, “no IT”, “cloud”, etc). Nobody is able to stand out from a messaging point of view since everyone says the same thing, and nobody stands out from a product point of view as there are few real differentiators between any of the products. As a result, I’m not sure why any of those companies is likely to be able to disrupt the existing BI firms.

I do believe, however, that as a result there is huge potential in the BI sector. Qliktech and Tableau were the last vendors who really innovated, and their market caps in the billions show what can happen when you do something special here (I’m always on the lookout for people doing innovative work, so please do ping me if you feel I’m overlooking anyone). But I don’t think it’s fair for me to say no one is innovating without offering up my own view of where the opportunity in BI lies, so I’ll try to cover quickly where I see the future of BI. Firstly though, remember:

“An innovation that is disruptive allows a whole new population of consumers access to a product or service that was historically only accessible to consumers with a lot of money or a lot of skill.” – Clayton Christensen, The Innovator’s Dilemma

Despite the marketing hype, BI today is still limited to those with skill, and as a result, it is usually rolled out to a small subset (<20%) of a company. So I personally think the big opportunity lies in making BI as simple to use as Google. Specifically, it should be a combination of Google Search PLUS the Google Knowledge Graph. I don’t just mean a search interface bolted onto BI (an NLP search interface for a BI tool sounds like a great concept, but is actually terrible when you see it implemented).

[Image: Google Knowledge Graph result for “forecast 94104”]

So what I mean is a system that understands the actual meaning of the data (i.e. it understands the difference between a customer and a product, and what operations might be relevant to each), as well as the context of who you are and what you typed into the search box. This should work the same way the Google Knowledge Graph understands that when I type in “forecast 94104”, I want to see the weather for San Francisco right there in my search results… not links to webpages that contain the keywords “forecast” and “94104”. This type of search has little to do with keywords and a lot to do with semantics, and BI needs the same level of understanding going forward.
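To make the contrast concrete, here is a minimal sketch of the difference between keyword matching and resolving a query to an intent and an entity. Everything here is hypothetical (the tiny knowledge base, the intent table, the function names) – it’s an illustration of the idea, not any real product’s API:

```python
# Toy "knowledge base" and intent table (hypothetical data).
ZIP_TO_CITY = {"94104": "San Francisco"}
INTENTS = {"forecast": "weather", "revenue": "finance"}

def interpret(query: str) -> dict:
    """Map free text to a structured intent + entity, instead of keywords."""
    tokens = query.lower().split()
    intent = next((INTENTS[t] for t in tokens if t in INTENTS), None)
    entity = next((ZIP_TO_CITY[t] for t in tokens if t in ZIP_TO_CITY), None)
    return {"intent": intent, "entity": entity}

print(interpret("forecast 94104"))
# A keyword engine would return documents containing "forecast" and "94104";
# a semantic engine resolves the intent (weather) and the entity (the city)
# and can answer the question directly.
```

A real system would obviously need far richer semantics than a lookup table, but the shape of the problem is the same: resolve meaning first, retrieve second.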

A BI system should also be able to leverage public data sets as easily as it can private data. There is a huge amount of data available online these days, and an intelligent system should be able to automatically leverage it without someone needing to identify the data, “ETL” it into the system, and model how it relates to your existing data sets. Data, whether private or public, should be automatically connected together where it makes sense to do so. Think of a graph of entities rather than tables/views and star schemas.
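As an illustrative sketch of what “automatically connected” could mean (the data sets, the function, and the exact-value matching rule are all hypothetical – a real system would need fuzzy matching and semantic understanding, not literal equality):

```python
# Toy example: connect a private table and a public data set automatically,
# producing edges in an entity graph instead of a hand-modeled join.

private_sales = [{"customer": "Acme", "city": "Austin", "revenue": 120}]
public_census = [{"city": "Austin", "population": 960000}]

def auto_link(rows_a, rows_b):
    """Create an edge between any two records that share an attribute value."""
    edges = []
    for a in rows_a:
        for b in rows_b:
            shared = dict(set(a.items()) & set(b.items()))
            if shared:  # records overlap on at least one attribute
                edges.append((a, b, shared))
    return edges

for a, b, shared in auto_link(private_sales, public_census):
    print(f"linked on {shared}: {a['customer']} <-> population {b['population']}")
```

The point of the sketch is the output shape: edges between entities, discovered from the data itself, rather than joins an IT user modeled up front.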

Note that I said the data would be automatically connected together, which means no modeling/joins/etc. No modeling and no ETL work sounds like a recipe for a system with poor quality, right? Sort of. I am skipping details for brevity here, but I am also suggesting that business analytics has pursued the vision of a “single version of the truth” for a long time, and it’s an unobtainable goal – we need to think differently going forward. Chris Anderson describes this best in The Long Tail:

“These probabilistic systems [Google, Wikipedia, etc] aren’t perfect, but they are statistically optimized to excel over time and large numbers. They’re designed to scale, and to improve with size. And a little slop at the microscale is the price of such efficiency at the macroscale”

Business Analytics has always followed the Britannica model: highly curated to be as accurate as possible, but exceptionally slow to react to changes in the business and always falling further and further behind. We need to start favoring speed over precision in analytics (which sounds scary to some – e.g. what about my financial reports?). The way we achieve that speed and flexibility is to build highly adaptive systems that are intelligent and can do much of the heavy lifting behind the scenes (hint: through semantics + machine learning, instead of IT users configuring the system). For the first time ever, we have technology available that is capable of actually pulling all those things together into a single product, which I believe would completely disrupt the BI market again. I’m sure people have other ideas, but that’s where I see the future of BI.

Overall: I think investments into the BI market can make a lot of sense, but only if there is something truly innovative there. This doesn’t mean it has to be the same vision that I have for where BI should go next, but if there’s no more to it than another “we make BI beautiful and simple”, then I’d skip it.

In Part III of this blog, I’ll cover the final category: Analytic Applications.


Are VCs just stabbing around in the Big Data dark? – Part I

Big Data

Image Source: The Internet (i.e. I don’t remember where I found it)

For the past few years, we’ve seen Big Data company after Big Data company getting funded. For a while it was my job to stay on top of all the new technologies, but I still do it because I’m interested in the space and where things are going. Unfortunately, I am almost always underwhelmed by what I see getting funded, as there is a significant amount of “me too” out there. It seemed like if you had “Big Data” in your pitch, and your founders came from Google, Facebook or a few other companies, you could probably raise a few million. Unfortunately, I believe a lot of these companies lack true enterprise experience, and end up with solutions chasing a problem. So what’s going on? Well, I can’t explain the investment rationale behind what has happened (even though I have a few theories), so instead, I am going to try to categorize the big data market into a few high-level segments, and talk about where I see the over-funded vs. under-funded (i.e. opportunity!) areas.

Database / Platform

Between NoSQL, NewSQL, Hadoop + affiliated technologies, etc, there is a plethora of products in this category (10gen/MongoDB, DataStax/Cassandra, Couchbase, MarkLogic, Redis, Cloudera & Hortonworks for Hadoop, MapR, Neo4j, Basho/Riak, VoltDB, etc etc). And this is a very small list of a few of the more popular ones – there are hundreds more – and that’s without even including the MPP or in-memory crowd. It is easily the most over-funded category of “big data” (IMHO)… although the good news is that the rate of new investment here finally appears to have slowed.

I remember when these technologies first came out, I’d often try them out, or at least try to understand which scenarios each one was better suited for. There were lots of discussions about the CAP theorem then, and how each new product tied back to it. You’d see lots of blog posts/questions/etc on NoSQL A vs. NoSQL B, or “why I moved from NoSQL A to NoSQL B”, etc. But after a while, I would (and I think most people would) ignore a lot of the new products coming out, because the amount of differentiation between each new company/product was asymptoting towards zero.

The market is tired of these products. Most people have had their fill of playing with new technologies, and want to get back to the business of actually building their own product. As a result, my personal belief is that the success of these companies will now be driven more by network effects than by the value of their underlying technology. Yes, developers want to use the best tools available to them, but the reality is that if there is no community, no information, no tooling, etc behind something newly built, you probably aren’t going to invest your time and skills into it unless it is massively better (or cheaper) than what’s already out there. If it’s incrementally better, most will go with the safer “mature” options that already have traction. So I see the established companies continuing to get bigger and bigger now. Of course, there will always be exceptions and new technologies can still break through, but the market today is not the same one as in ~2008/2009.

Overall: I’d personally want to see some extremely compelling reason to fund another product in this space right now.

Querying / Analytics

Given the proliferation of database companies, we are beginning to see investment go into the next layers up the stack. That is, with so many databases out there, there has been a dearth of standardization in querying them, so SQL is getting much love again. Further, since many of these databases were designed for transactional processing and not analytics, we have also seen investment go towards the problem of analyzing “big” data with real-time performance instead of batch analysis. Some of the technologies I put into this category are Impala, Stinger, Spark/Shark, Apache Drill, Platfora (although they span the next category too), Google BigQuery, and Hadapt. The query language and analytical processing problem is a hard and necessary one, and a huge market too, but I’m skeptical about it making a good investment for a startup right now. Let me explain why.

The combination of really powerful computing at lower and lower cost means that a single technology stack that can do transactions and analytics, batch analysis and real-time, etc, is the direction the market is swimming in. And all the vendors who own the underlying platforms are well aware of the need for querying, analytics and real-time performance, so most of them are furiously working towards building solutions. I believe they are the ones who will ultimately dominate, and a startup that tries to do just analytics (as a standalone platform) will have to fight upstream against this market direction.

Anyone who has been around the analytics world knows how hard it is to actually manage data by moving it between systems (integration, quality, transforms, etc), which is why getting one platform to do more and more in the same stack is an attractive value prop. At this point in time, Hadoop in particular has created enough publicity (and value) that you (as NewCo) would have to fight really hard to get mindshare with most managers/IT departments (i.e. you’d have to put a lot of marketing dollars to work). I’d love to be proven wrong on this, as I think this market has huge potential while being under-served by startups (there are tons of existing companies with expensive solutions here, but few startups). But I don’t know that I personally would want to bet on a company in this space unless they had some seriously interesting technology (which I haven’t seen yet).

You could argue: but wait, a startup could still build the best solution for an existing DB/platform, instead of rolling a completely new product. Sure they could, but then I would (especially if I were an investor) be worried about building a company that is completely dependent on somebody else’s platform (à la Zynga on Facebook). It might work for a while, but the platform may release something that changes the game on you. Perhaps in this category, you could argue this is the case with Platfora building their product exclusively for Hadoop, and then Impala and Stinger coming out. To be fair to Platfora, this is their message on how they see themselves relative to Impala, and nobody “owns” the Hadoop platform, so you have a bit more control than on a truly private platform (or at least a sense of control anyway).

Overall: I think this market has huge potential, and there aren’t a ton of startups here, but I’m not sure it would be a good area to invest in either. This is because (1) the incumbents are all working towards solutions, (2) it’s not clear which platform to bet on, (3) tying your company to one platform is very risky anyway, and (4) people are unlikely to buy standalone if the integrated solution from the DB/platform vendor is already good enough.

In Part II of this blog, I cover Business Intelligence. Part III of this blog will discuss Analytic Applications.

Dealing With Conflicting Advice As A Product Manager

One of the most underrated skills in Product Management is the ability to synthesize a lot of data, much of it simultaneously pointing you in conflicting directions. It’s not dissimilar to trying to make a decision by looking at what “internet experts” recommend…

As a Product Manager, you will be getting input from a lot of different sources: customers (of course), analysts, competitors, prospects, advisors, industry experts, colleagues, partners, management and more. If you think you’re likely to hear basically the same thing over and over from all these stakeholders, so you can just focus on what you hear most frequently (or loudest), you’d be wrong (for more than one reason). In fact, if these types of conflicts give you a headache, then Product Management is probably not the job for you.

Fred Wilson recently wrote about a similar experience that startups in an accelerator program go through, with every person they meet as part of the program (smart, experienced people) offering them solid but contradictory advice. He called that specific phenomenon mentor/investor whiplash. What these new startups & CEOs are going through is what a Product Manager also has to deal with, day in, day out.

But I believe conflicting information should not pose a problem, for a few reasons. Firstly, you should already be absolutely crystal clear on your vision for where your product is going… what it will do, what it will not do, who will use it, why they will use it, what it looks like, what makes it unique, etc. If the feedback you are getting would steer you away from your vision, then you want to apply a very high discount factor to it. I’m not saying ignore it or don’t be open to new ideas, but it’s critical you have total commitment to your vision.

There’s nothing worse than a Product Manager who flip-flops on their product strategy based on who they spoke to last (believe me – I’ve worked for more than one of them). You will lose the confidence of your team very quickly, and you’re going to end up with a crappy product in the not-so-distant future. As long as you have a clear picture of what your future looks like, it is very easy to decide how some feedback/advice/idea falls into that picture (or not). If you’re operating without a vision, you have no “filing cabinet” to put the feedback into, so you can’t distinguish good/bad, helpful/useless, relevant/irrelevant.

One of the character traits I can’t stand is surrounding yourself with “yes men”. Hearing only one viewpoint before you decide something is the worst-case scenario: you miss implications and opportunities. And this leads me to the second reason why I don’t see conflicting information as a bad thing: when you get differing data points, you can understand everybody’s perspective, get new ideas, and see both sides of any decision. This in turn should give you new ideas and things to consider, and stretch your own thinking.

People tell me I tend to ask a lot of hard questions, but I just don’t see it that way. I think if you don’t have answers to all of the questions coming your way, then maybe you haven’t thought everything through in the first place. And if that’s the case, I personally would appreciate the back-and-forth dialog, not complain that the questions are too hard. Questions are not criticism – they’re a way to make sure you’ve thought everything through. I always ask people on my team to push back hard on anything I say that they disagree with – it’s much better to hear it early than after you’ve finished executing on a bad assumption.

Lastly, getting conflicting information will often prompt you to really make sure you get to the root of the feedback/suggestion. Some people might describe this as asking “why” 5 times. Let’s say a customer requests that your product add feature XYZ; it looks logical and like something other customers will want, so you put it on your backlog. But then another customer asks you for the exact opposite of feature XYZ. Because of the resulting cognitive dissonance, you will ask “why” they want that. Only when you ask why will you get to the real feedback/insight. This is what you should have done the first time around anyway, but my point is that conflicting information often forces you to have that discipline, because your brain will want to resolve the two opposing viewpoints.

So as you can tell by now, I think if you are a Product Manager, you should be seeking out conflicting information, not shying away from it – it is a good thing, not a bad thing. Whenever my managers used to tell me I did a good job on something, I would say “don’t tell me what I’m doing well – tell me what I could be doing better”. So I was very glad to read the following earlier this week:

“They won’t always be right, but I find the single biggest error people make is to ignore constructive, negative feedback. Don’t tell me what you like, tell me what you don’t like.” – Elon Musk

Note that this does not mean you have to agree with the negative feedback and change your approach, but always be seeking alternative views. And on that bombshell, I’ll open up the comments to any hard questions you may have for me 🙂

9 Ways People Screw Up Their Product Demos


I don’t know about you, but a bad product demo can turn me off a deal/product/company really quickly. A poor demo shows a lack of pride in your work, or laziness – neither is a good sign. Here is a list of the 9 most common mistakes I have seen people make in their demos.

1. They show features and don’t tell a story. Your product is only as good as the problems it can solve for someone. What I want to hear during a demo is what problems you are solving and for whom, not a laundry list of features in your product. Tell me a story about whose “day in the life” we are about to embark on. Some people like to get really specific (e.g. Sally is the VP of Sales at XYZ Co, etc); some prefer to just use a title. I’ve heard people argue the merits of each approach, but I believe it doesn’t matter. What is critical is that you need to be a storyteller! Make it compelling, interesting, unexpected… something that grabs my attention and keeps it. It’s even better if you can work in a story from an actual customer that benefited from your product. A week later, people will have forgotten the demo, but they will likely still remember the stories.

2. They try to show everything in the product. You may love all your cool new features, and remember how and why each one was created. But I don’t, and I usually just want a quick overview and a feel for the product. Most people aren’t going to love your product like you do, so leave them wanting more… don’t bore them to tears.

3. Executives bring in somebody else to do the demo. I’m not saying your execs need to be able to answer all product questions, demo everything in the product, etc – but if you have a standard demo that you pull out for meetings with partners, conferences, etc, everyone should be capable of showing it without any help. This one is particularly egregious if you talk about how “easy to use” your product is – if it’s so easy, why can’t your own executives learn how to use it for the demo?

4. They give no context before the demo. Before you begin a demo, please at least tell us what your company does – your sub 1 minute elevator pitch – and what problems you’re trying to solve. Otherwise I spend the entire demo trying to work out what box to put you in, instead of actually analyzing the product.

5. They don’t reinforce their USPs throughout the demo. As I wrote in my last blog, you need to know what makes you unique. Make sure you draw people’s attention to these key differentiators during your product demo, and make sure your demo actually supports what you are saying.

6. They show features that are not in production. This one seems so obvious that you may wonder what it is doing in here, but I’ve seen it so many times now that it has to get included. People show things their product can’t actually do (yet) as part of the standard demo, without disclaiming that what I’m actually looking at is a PoC, WIP, whatever. Please, especially for a “first meeting” type of situation, just show real product.

7. They use really old data. If the data in your demo is 2 years old, you’re fine. If your data is 5 years old, we have an issue. If you show 5-year-old data, I am going to assume one of the following: you don’t care about putting your best foot forward; your product hasn’t changed enough in the last 5 years for you to even need to update your data; or your product is so complex to configure that changing the data is more work than anyone has the time/motivation for. Please make sure your data is reasonably up to date.

8. They don’t practice. Technical glitches happen – especially during demos – so most of the time you’re going to get a pass on those types of issues. What you never get a pass on is not knowing your demo script. You should never, ever be saying “I click here and then… oh wait… no, I click on this…”. Please don’t let this be you.

9. In the world of Business Analytics / Intelligence, they follow a pre-defined discovery path to insight. I left this one until last because it only applies to Enterprise Analytics companies, but this is one of my biggest demo pet peeves. I can’t stand demos of BI or Analytics software that go something like: “I can see Sales are down in Europe this quarter, so I filter to Europe and drill down by Product Group. I can see Product Group A is causing the issue, so I drill down by Sales Rep, and find that it’s Bill who has been underperforming for this particular Product Group…”. I think almost every BI vendor demos this way, and in my opinion, they are all doing it wrong. An “intelligent” system would be telling you what the issue is, not making you hunt around in the data looking for answers. In your demo, you knew to drill down by Product Group then Rep. In the real world, the business analyst looking at the data might have needed an hour to find the same thing. Putting all the onus on the user to find out what is going on is a crappy way to build a product, so it bugs me to see a demo that tries to sell this back to me as a good way to analyze your data!
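To sketch what I mean by the system telling you the issue (toy data and a deliberately naive metric – a real product would use proper anomaly detection across every dimension, not a hand-picked dictionary):

```python
# Toy illustration: surface the worst-declining slice automatically instead
# of making the user drill down Region -> Product Group -> Rep by hand.

quarterly_sales = {
    # (region, product group, rep): (last quarter, this quarter)
    ("Europe", "Product Group A", "Bill"): (100, 40),
    ("Europe", "Product Group A", "Jane"): (90, 88),
    ("Europe", "Product Group B", "Bill"): (80, 79),
}

def biggest_decline(data):
    """Return the slice whose quarter-over-quarter change is most negative."""
    return min(data.items(), key=lambda kv: kv[1][1] - kv[1][0])

slice_, (prev, now) = biggest_decline(quarterly_sales)
print(f"Largest decline: {slice_} went from {prev} to {now}")
```

The user never has to guess which dimension to drill into – the system scans the slices and leads with the answer (here, Bill on Product Group A in Europe).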

So that’s my list, but I’m sure I missed some other good ones. Let me know what your demo pet peeves are in the comments.

4 Steps To Effectively Handle Competitive Landmines

Competitive Landmine

Image Source: Sarah Pickering

Boom… you’re in the middle of a deal when your prospect asks about an area where you’re weak. A lot of people don’t handle this situation well, and come out of the meeting worried about the deal, scrambling to get the feature added to their roadmap. Oftentimes, the source of the issue is actually a competitor who planted the landmine you just walked into. You can always tell if a competitor was the original source because:

  • The question comes out of nowhere
  • It’s a feature where one of your competitors is strong
  • The prospect asks the question in terms of a specific feature and not in terms of their business requirements
  • When you ask them “why” they’d actually need this feature, you get lots of “ums and ahs”, and a vague reference to something they might want to do, possibly, one day in the future

Mark Suster’s phenomenal blog post on knowing your USPs (Unique Selling Propositions) touches on this topic:

“It’s equally useful to know what your weaknesses are against your key competitors and honestly capturing those. You’ll need it so that you can work on “objection handling” when prospects naturally bring up these areas … Knowing how your competitors position their USPs against you is a very important part of winning”

What Mark doesn’t talk about though, is what typically happens from the Product point of view:

  • Sales comes out of the meeting, usually with a few battle scars from walking into the landmine
  • Sales jumps on a call with Product to tell them we are getting beat up by not having this feature in our product
  • Product team adds said feature to the roadmap/backlog (if it wasn’t already sitting on there anyway)
  • Competition learns you didn’t handle this particular landmine well, so they try to repeat the move in as many other deals as possible
  • Sales feels more heat, so puts even more pressure on Product team to deliver it
  • Product team now prioritizes it, and feature gets added
  • The competition then says your implementation is crappy… which it probably is. You just wanted a “marketing check box” so you could claim to have this feature anyway… you don’t really expect anyone to actually use it, and it wasn’t really what you cared about for this release – you just had to do it to keep sales happy
  • The competitor then usually moves on to a new landmine to use, and the cycle repeats

Mark Suster says handling this request needs to be part of your sales methodology, but let’s be clear – this cycle is not the fault of sales. This is another symptom of poor Product Management. Here are the four things your Product team should be doing in this type of scenario:

  1. Go Meet With The Prospect Directly – Get out of the office. Hear their concerns, and find an effective way to neutralize the issue. Some examples might include explaining how your product doesn’t need this feature because you have an alternative and better way of doing it, or how nobody actually uses this feature in reality and it’s nothing more than demo-ware because of X, Y and Z. Learn what is the most effective argument to handle this objection.
  2. Plant Some Landmines Of Your Own – After you have neutralized the issue, turn it around quickly by reinforcing your own USPs. If you’re doing your job well, you should have been able to work out which competitor the initial landmine came from, and you should know where they are weak, so you can place your own landmine accordingly. If you don’t know your competition – get studying. If you don’t know who planted the landmine, at a minimum you should be reinforcing your USPs. It is also much more effective if you can plant a landmine for your competitor that relates to an actual business requirement of the prospect. Most people don’t take this extra step – they pull out the same landmine for the same competitor every time. That’s why, at the beginning of this article, I said that most prospects have no real clue why they might actually need the feature, which usually makes it fairly easy to neutralize. You will be significantly more effective if you can tie your competitor’s weakness back to a current pain point of the prospect.
  3. Evaluate The Real Importance Of This Feature – Don’t add the feature to your product just to get the marketing (or RFP) check box. That is how you build a crappy product. In fact, default to an assumption that says this feature should NOT be added to your roadmap (if it was relevant, you’d hear about it from your actual customers, and your prospect would have a real business requirement in mind for it). Always evaluate it with an open perspective, but just make sure you apply sound critical thinking before you blindly add it to your roadmap.
  4. Educate Your Field – Once you know how to handle the objection and turn the conversation around to your USPs, make it repeatable. Roll this information out to your field so that the sales team can handle the objection themselves in the future. That is why I said this issue is not the fault of sales – they first have to be enabled to deal with the situation. Also explain to Sales whether this feature is coming on your roadmap or not, and if not, why not.

Oftentimes, a landmine serves two purposes: to get the prospect questioning you as a valid selection, and to keep your product team busy on something that doesn’t really matter. You can see in the typical cycle above that your competitor can waste a lot of your cycles working on what might be a completely irrelevant feature. So make sure you don’t fall into the trap of having your roadmap indirectly controlled by your competitor. Know your USPs, your customers, your competitors, your market, and your vision – and make strategic choices about your roadmap.

I’d love to get your feedback in the comments on any other tips or tricks you might have in handling (or placing) competitive landmines too.

Do Your Product Managers Suffer From Featuritis?

The Featuritis Curve

Image Source: Kathy Sierra

I think The Featuritis Curve is one of the most critical things for a good Product Manager to really internalize. People tend to look at this curve and nod their heads, smile, say they “get it”, perhaps even look at me like “of course, why are you even bringing this up”, then they go right back to the business of jamming as many features into their product as they can fit.

Does this story sound familiar to you? Product Management and Engineering are fighting again. Engineering is coming back to Product Management to tell them they have to start dropping some previously agreed upon features if they want the release to come out on time. Product Management is saying that they are all critical features, and we won’t make our revenue targets if we drop them. PM blames engineering for under-delivering again, and there’s lots of tension between the teams. If it does sound familiar, it’s likely that your Product Managers suffer from the deadly condition known as “FEATURITIS”…

PMs jamming features

The Featuritis Curve carries a similar message to The Innovator’s Dilemma, which essentially says that disruptive innovation comes from building products that are less functional and significantly cheaper than the market leaders’. In doing this, you can capture customers who were not previously willing to buy from the market leaders due to their price and/or complexity. So it is probably even more critical for startups to let this message sink in if they want to disrupt existing businesses.

Of course, there are several companies who abide by this, and they tend to be the most successful ones – so by definition, those are likely the companies you HAVE heard of. But think about all the other startups (especially enterprise software startups) that fail, that you don’t know about, and take a look at their products. A lot of them seem to be doing their best to get straight onto the right-hand side of the curve. Why does this keep happening over and over?

I will offer up one overarching reason: poor Product Management (or, in the case of many startups, no dedicated Product Management at all). It takes A LOT of discipline not to slide down the curve. In fact, the natural evolution of a product is to move from left to right along the curve, because everybody you talk to will have feature requests for your backlog. It’s your job as a Product Manager to fight for your users, though. When you don’t understand your users and what they REALLY want, or you don’t have a clear vision for your product, or you add a feature just because a competitor has it (the topic of my next post), you will end up on that side of the curve. Poor Product Management equates more features with a more viable product (Kathy Sierra called this “fear” – fear that your product is inferior because it has fewer features).

Aside from poor Product Management though, perhaps the best argument against The Featuritis Curve I have seen was laid out by Joel Spolsky. I will paraphrase his key argument as: even though it might be true that feature usage follows the Pareto principle (80% of customers use 20% of the features), each customer uses a different 20% of the features – there is no magical 20% set of features that appeals to everyone. Therefore, more features = more ability to attract new customers. I would add that his argument is particularly true if you are designing products that are inherently complex (e.g. an ERP suite vs an iPod). However, there are a few things missing in this argument:

  1. Customer Satisfaction and Churn – you might be signing on new customers, but are all those extra features making your software more complex, and therefore reducing user happiness? What’s most troubling is that the numbers won’t show this until it’s too late – customers won’t churn on you instantly (especially in Enterprise Software). It will likely take some time, at which point you are far down the right side of The Featuritis Curve. And once you are down there, you can almost never travel back up towards the top – it’s a one-way street.
  2. Technical Debt – ceteris paribus (all else being equal), the more features you add to your product, the more likely it is that you are accumulating Technical Debt. The big problem with Technical Debt is that it then becomes even more difficult to add further features. So now you’re on the far right side of the curve, unable to move back up the hill AND unable to add any new features without breaking something else. If you ever look at a product from a big company and think it looks almost identical to what it did 5 years ago despite lots of new releases, you now know the reason.
  3. Crossing The Chasm – a false dichotomy in Spolsky’s argument is that people buy your software only because of your features. In reality, there are many other reasons why people buy (especially in the world of Enterprise Software). For just one example, consider Crossing The Chasm – the Late Majority and Laggards buy because everybody else is buying. They might rationalize it (even to themselves) as you now having the features they want, but it’s much more likely that they just needed the social proof of others having bought before them.
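Incidentally, the mechanics of Spolsky’s “different 20%” point are easy to see with a toy simulation (the numbers below are purely illustrative, not from Spolsky): give each of 10 hypothetical customers a random 20% of 50 features, and count how many features end up used by at least one customer. Any individual uses only 10 features, but collectively the customer base touches most of the catalog.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

NUM_FEATURES = 50
NUM_CUSTOMERS = 10
FRACTION_USED = 0.2  # each customer uses ~20% of the features

per_customer = int(NUM_FEATURES * FRACTION_USED)  # 10 features each
used_by_someone = set()

for _ in range(NUM_CUSTOMERS):
    # Each customer draws their own random 20% subset of the features
    used_by_someone.update(random.sample(range(NUM_FEATURES), per_customer))

print(f"Any single customer uses {per_customer} features")
print(f"Across {NUM_CUSTOMERS} customers, {len(used_by_someone)} "
      f"of {NUM_FEATURES} features are used by somebody")
```

The union is far larger than any one customer’s 10 features – which is exactly why cutting “rarely used” features is riskier than the raw Pareto numbers suggest, and why the rebuttals above matter: the cost of carrying all those features shows up elsewhere.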

Despite saying all this, I will grant that Spolsky is somewhat correct – you will always be enhancing your product in new releases, and at least some of those enhancements will be new features, so your feature count will always go up over time. You don’t hit the top of the curve, fire all your PM and Dev teams, and expect the revenue to come rolling in because you’re now at the perfect peak. The Featuritis Curve is overly simplistic to make a point, and in doing so it conflates a few different concepts, like design and usability, into the “features” axis.

So rather than never adding features, the point is that you should be extremely diligent about adding them, and make sure the features you do add maximize user happiness. Unfortunately, most Product Managers do suffer from Featuritis. I believe the true test of good Product Management is knowing which features NOT to add to your product. That means fighting to keep things out, not fighting to jam more in.

Why I’m starting a blog, and why you should too

no reviews

If you don’t have an online presence today, the reality is that you’re the equivalent of the product with no reviews on Amazon. And how often do you buy products with no reviews, or try a new restaurant with no reviews on Yelp/OpenTable?

I believe there has been a general shift in people’s thinking, where a lack of social proof is now an indictment of mediocrity (or worse). In the past, all products, businesses, and people were given “the benefit of the doubt”, but now, having no information available online is actually taken as a negative signal.

I noticed this in my own behavior fairly recently when my wife and I were trying to pick a pediatrician after our daughter was born. It was so natural for us to go online and check out reviews that it didn’t occur to me until afterwards that we were no doubt excluding a bunch of great doctors who just happen to have no online presence. Yet even if that’s true, how would I even evaluate those other doctors to decide who to go and see?

Note that this behavior is not really about reviews – it’s about getting more data so you can make more informed decisions. If I wanted to find a personal trainer right now, it would be important that they share the same core beliefs as me. For example, a lot of people will tell you squats are bad because they hurt your knees, which I disagree with (as long as you have good form). I also don’t agree with a lot of the nutritional advice out there, so I wouldn’t want someone who is going to try to convert me to their beliefs. As a result, a 5-star rating means nothing to me if that person is a cardio-specialist vegan who’s never squatted a day in their life, because that’s not at all what I’m looking for. Without that person putting out into the public domain what they believe and what they know, how would I know which personal trainer is the best choice for me?

So, I guess it has sunk in for me that if you have any kind of ambition in your professional life – whether it be starting your own company, getting promoted, changing careers, changing companies, hiring great people to come work with you, or selling your products or services – you need to put yourself out there in public… just like any other product or service on the web.

And just like in the personal trainer example, it’s not about getting everyone to love you so you get 5 star reviews. It’s just about putting your own thoughts out there to find like-minded people. I’m not shooting for “internet famous” here with this blog – I just believe that not having a public persona is no longer a tenable strategy if you’re ambitious.