Monthly Archives: July 2013

9 Ways People Screw Up Their Product Demos


I don’t know about you, but a bad product demo can turn me off a deal/product/company really quickly. A poor demo shows a lack of pride in your work, or laziness – neither is a good sign. Here is a list of the 9 most common mistakes I have seen people make in their demos.

1. They show features and don’t tell a story. Your product is only as good as the problems it can solve for someone. What I want to hear during a demo is what problems you are solving and for whom, not a laundry list of features in your product. Tell me whose “day in the life” we are about to embark on. Some people like to get really specific (e.g. Sally is the VP of Sales at XYZ Co, etc), some prefer to use just a title. I’ve heard people argue about the merits of each approach, but I believe it doesn’t matter. What is critical is that you need to be a storyteller! Make it compelling, interesting, unexpected… something that grabs my attention and keeps it. It’s even better if you can work in a story from an actual customer that benefited from your product. A week later, people will have forgotten the demo, but they will likely still remember the stories.

2. They try to show everything in the product. You may love all your new cool features, and remember how and why each one was created. But I don’t, and I usually just want to get a quick overview and a feel for the product. Most people aren’t going to love your product like you do, so leave them wanting more… don’t bore them to tears.

3. Executives bring in somebody else to do the demo. I’m not saying your execs need to be able to answer all product questions, demo everything in the product, etc – but if you have a standard demo that you pull out for meetings with partners, conferences, etc, everyone should be capable of showing it without any help. This one is particularly egregious if you talk about how “easy to use” your product is – if it’s so easy, why can’t your own executives learn how to use it to show the demo?

4. They give no context before the demo. Before you begin a demo, please at least tell us what your company does – your sub-one-minute elevator pitch – and what problems you’re trying to solve. Otherwise I spend the entire demo trying to work out what box to put you in, instead of actually analyzing the product.

5. They don’t reinforce their USPs throughout the demo. As I wrote in my last blog, you need to know what makes you unique. Make sure you draw people’s attention to these key differentiators during your product demo, and make sure your demo is actually supporting what you are saying.

6. They show features that are not in production. This one seems so obvious that you may wonder what it is doing in here, but I’ve seen it so many times now that it has to get included. People show things their product can’t actually do (yet) as part of the standard demo, without disclaiming that what I’m actually looking at is a PoC, WIP, whatever. Please, especially for a “first meeting” type of situation, just show real product.

7. They use really old data. If the data in your demo is 2 years old, you’re fine. If your data is 5 years old, we have an issue. If you show 5 year old data, I am going to assume one of the following: you don’t care about putting your best foot forward; your product hasn’t changed enough in the last 5 years for you to even need to update your data; or your product is so complex to configure that nobody has the time/motivation to go through the work of changing the data. Please make sure your data is reasonably up to date.

8. They don’t practice. Technical glitches happen – especially during demos – so most of the time you’re going to get a pass for those types of issues. What you never get a pass on is not knowing your demo script. You should never, ever be saying “I click here and then… oh wait… no, I click on this…”. Please don’t let this be you.

9. In the world of Business Analytics / Intelligence, they have a pre-defined discovery path to insight. I left this one until last, because it only applies to Enterprise Analytics companies, but it is one of my biggest demo pet peeves. I can’t stand demos of BI or Analytics software that go something like: “I can see Sales are down in Europe this quarter, so I filter to Europe and drill down by Product Group. I can see Product Group A is causing the issue, so I drill down by Sales Rep, and find that it’s Bill who has been underperforming for this particular Product Group…”. I think almost every BI vendor demos this way, and in my opinion, they are all doing it wrong. An “intelligent” system would be telling you what the issue is, not making you hunt around in the data looking for answers. In your demo, you knew to drill down by Product Group then Rep. In the real world, the business analyst looking at the data might have needed an hour to find the same thing. Putting all the onus on the user to find out what is going on is a crappy way to build a product, so it bugs me to see a demo that tries to sell this back to me as a good way to analyze your data!

So that’s my list, but I’m sure I missed some other good ones. Let me know what your demo pet peeves are in the comments.


4 Steps To Effectively Handle Competitive Landmines

Competitive Landmine

Image Source: Sarah Pickering

Boom… You’re in the middle of a deal when your prospect asks about an area where you’re weak. A lot of people don’t handle this situation well, and come out of the meeting worried about the deal, scrambling to get the feature added to their roadmap. Oftentimes, the source of this issue is a competitor who planted the landmine that you just walked into. You can always tell if a competitor was the original source because:

  • The question comes out of nowhere
  • It’s a feature where one of your competitors is strong
  • The prospect asks the question in terms of a specific feature and not in terms of their business requirements
  • When you ask them “why” they’d actually need this feature, you get lots of “ums and ahs”, and a vague reference to something they might want to do, possibly, one day in the future

Mark Suster’s phenomenal blog post on knowing your USPs (Unique Selling Propositions) touches on this topic:

“It’s equally useful to know what your weaknesses are against your key competitors and honestly capturing those. You’ll need it so that you can work on “objection handling” when prospects naturally bring up these areas … Knowing how your competitors position their USPs against you is a very important part of winning”

What Mark doesn’t talk about though, is what typically happens from the Product point of view:

  • Sales comes out of the meeting, usually with a few battle scars from walking into the landmine
  • Sales jumps on a call with Product to tell them we are getting beat up by not having this feature in our product
  • Product team adds said feature to the roadmap/backlog (if it wasn’t already sitting on there anyway)
  • Competition learns you didn’t handle this particular landmine well, so they try to repeat this move in as many other deals as possible
  • Sales feels more heat, so puts even more pressure on Product team to deliver it
  • Product team now prioritizes it, and feature gets added
  • The competition then says your implementation is crappy… which it probably is. You just wanted a “marketing check box” so you could claim to have this feature anyway… you don’t really expect anyone to actually use it, and it wasn’t really what you cared about for this release – you just had to do it to keep sales happy
  • The competitor then usually moves on to a new landmine to use, and the cycle repeats

Mark Suster says handling this request needs to be part of your sales methodology, but let’s be clear – this cycle is not the fault of sales. This is another symptom of poor Product Management. Here are the four things your Product team should be doing in this type of scenario:

  1. Go Meet With The Prospect Directly – Get out of the office. Hear their concerns, and find an effective way to neutralize the issue. Some examples might include explaining how your product doesn’t need this feature because you have an alternative, better way of doing it, or how nobody actually uses this feature in reality and it’s nothing more than demo-ware because of X, Y and Z. Learn which argument handles this objection most effectively.
  2. Plant Some Landmines Of Your Own – After you have neutralized the issue, turn it around quickly by reinforcing your own USPs. If you’re doing your job well, you should have been able to work out which competitor the initial landmine came from, and you should know where they are weak, so you can place your own landmine accordingly. If you don’t know your competition – get studying. If you don’t know who planted the landmine, at a minimum you should definitely be reinforcing your USPs. It is also much more effective if you can plant a landmine for your competitor that is related to an actual business requirement of the prospect. Most people don’t take this extra step – they pull out the same landmine for the same competitor all of the time. That’s why at the beginning of this article, I said that most prospects have no real clue why they might actually need this feature, and that usually makes it fairly easy to neutralize. You will be significantly more effective if you can tie your competitor’s weakness back to a current pain point of the prospect.
  3. Evaluate The Real Importance Of This Feature – Don’t add the feature to your product just to get the marketing (or RFP) check box. That is how you build a crappy product. In fact, default to an assumption that says this feature should NOT be added to your roadmap (if it was relevant, you’d hear about it from your actual customers, and your prospect would have a real business requirement in mind for it). Always evaluate it with an open perspective, but just make sure you apply sound critical thinking before you blindly add it to your roadmap.
  4. Educate Your Field – Once you know how to handle the objection and turn the conversation around to your USPs, make it repeatable. Roll this information out to your field so that the sales team can handle this complaint themselves in the future. That is why I said this issue is not the fault of sales – they first have to be enabled to deal with the situation. Also explain to Sales whether this feature is coming in your roadmap or not, and if not, why not.

Oftentimes, a landmine serves two purposes: to get the prospect questioning you as a valid selection, and to keep your product team busy on something that doesn’t really matter. As you can see in the typical cycle above, your competitor can waste a lot of your cycles on what might be a completely irrelevant feature. So make sure you don’t fall into the trap of having your roadmap indirectly controlled by your competitor. Know your USPs, your customers, your competitors, your market, and your vision – and make strategic choices about your roadmap.

I’d love to get your feedback in the comments on any other tips or tricks you might have in handling (or placing) competitive landmines too.

Do Your Product Managers Suffer From Featuritis?

The Featuritis Curve

Image Source: Kathy Sierra

I think The Featuritis Curve is one of the most critical things for a good Product Manager to really internalize. People tend to look at this curve and nod their heads, smile, say they “get it”, perhaps even look at me like “of course, why are you even bringing this up”, then they go right back to the business of jamming as many features into their product as they can fit.

Does this story sound familiar to you? Product Management and Engineering are fighting again. Engineering is coming back to Product Management to tell them they have to start dropping some previously agreed-upon features if they want the release to come out on time. Product Management is saying that they are all critical features, and we won’t make our revenue targets if we drop them. PM blames engineering for under-delivering again, and there’s lots of tension between the teams. If it does sound familiar, it’s likely that your Product Managers suffer from the deadly condition known as “FEATURITIS”…

PMs jamming features

The Featuritis Curve actually delivers a similar message to The Innovator’s Dilemma, which essentially says disruptive innovation comes from building products that are less functional and significantly cheaper than the market leaders’. In doing this, you can capture the needs of customers who were not previously willing to buy from the market leaders due to their price and/or complexity. So it is probably even more critical for startups to really have this message sink in if they want to disrupt existing businesses.

Of course, there are several companies that abide by this, and they tend to be the most successful companies, so those are likely the ones you HAVE heard of. But think about all the failed startups (especially enterprise software startups) you don’t know about, and take a look at their products. A lot of them seem to be doing their best to get straight onto the right-hand side of the curve. Why does this continue to happen over and over?

I will offer up one overarching reason: poor Product Management (or in the case of many startups, no dedicated product management at all). It takes A LOT of discipline not to slide down the curve. In fact, it is the natural evolution of a product to move from left to right along the curve, as everybody you talk to will have feature requests for your backlog. It’s your job as a Product Manager to fight for your users, though. When you don’t understand your users and know what they REALLY want, or you don’t have a clear vision for your product, or you add a feature just because a competitor has it (the topic of my next post), you will end up on that side of the curve. Poor Product Management equates more features with a more viable product (Kathy Sierra called this “fear” – fear that your product is inferior with fewer features).

Aside from poor Product Management though, perhaps the best argument against The Featuritis Curve I have seen was laid out by Joel Spolsky. I will paraphrase his key argument as: even though it might be true that feature usage follows the Pareto principle (80% of customers use 20% of the features), each customer uses a different 20% of the features – there is no magical 20% set of features that appeals to everyone. Therefore, more features = more ability to attract new customers. I would add that his argument is particularly true if you are designing products that are inherently complex (e.g. an ERP suite vs an iPod). However, there are a few things missing in this argument:

  1. Customer Satisfaction and Churn – you might be signing on new customers, but are all your extra features making your software more complex, and therefore reducing user happiness? What’s most troubling about this is that the numbers won’t show this to you until it’s too late – customers won’t churn on you instantly (especially in Enterprise Software). It will likely take some time, at which point you are far down the right side of The Featuritis Curve. And once you are down there, you can almost never travel back up towards the top – it’s a one way street.
  2. Technical Debt – Ceteris Paribus (all things being equal), the more features you are adding to your product, the more likely it is that you are accumulating Technical Debt. The big problem with Technical Debt is that it then becomes even more difficult for you to add more features to your product. So now you’re on the far right side of the curve, unable to move back up the hill AND unable to add any new features without breaking something else. If you ever look at a product from a big company and think that it looks almost identical to what it did 5 years ago despite lots of new releases coming out, you now know the reason for it.
  3. Crossing The Chasm – An unstated assumption in Spolsky’s argument is that people are buying your software only because of your features. In reality, there are so many other reasons why people buy (especially in the world of Enterprise Software). For just one example, consider Crossing The Chasm – the Late Majority and Laggards are buying because everybody else is buying. They might rationalize it (to themselves even) as now you have the features they want, but it’s much more likely to be that they just needed the social proof of others having bought you before them.
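To be fair, Spolsky’s premise (each customer needs their own 20% of the features) is easy to sanity-check with a toy simulation. All the numbers below are made up purely for illustration, not from his post or mine: it just shows that if every customer’s 20% is random, almost nobody is fully served until you ship close to the entire feature set.

```python
import random

random.seed(42)

NUM_FEATURES = 100          # hypothetical total feature set
NUM_CUSTOMERS = 1000        # hypothetical customer base
FEATURES_PER_CUSTOMER = 20  # each customer needs "their own 20%"

# Each customer needs a different random 20% of the features.
customers = [set(random.sample(range(NUM_FEATURES), FEATURES_PER_CUSTOMER))
             for _ in range(NUM_CUSTOMERS)]

# Rank features by overall popularity, then check how many customers
# are FULLY served by a product that ships only the top-k features.
usage = {f: sum(f in c for c in customers) for f in range(NUM_FEATURES)}
ranked = sorted(range(NUM_FEATURES), key=usage.get, reverse=True)

served = {}
for k in (20, 50, 80, 100):
    shipped = set(ranked[:k])
    served[k] = sum(c <= shipped for c in customers)
    print(f"shipping top {k:3d} features fully serves "
          f"{served[k]:4d}/{NUM_CUSTOMERS} customers")
```

With these numbers, shipping even the 20 or 50 most popular features fully serves almost no one; only near the complete set does every customer find their personal 20%. That supports Spolsky’s point about attracting customers, but says nothing about the complexity cost each shipped feature imposes on everyone else.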

Despite saying all this, I will grant that Spolsky is somewhat correct – you will always be enhancing your product in new releases, and at least some of those enhancements will be new features, so your feature count will always go up over time. You don’t hit the top of the curve, fire all your PM and Dev teams, and expect the revenue to come rolling in because you’re now at the perfect peak. The Featuritis Curve is overly simplistic to make a point, and in doing so it conflates a few different concepts like design and usability into the “features” axis.

So rather than saying you shouldn’t add features, it’s that you should be extremely diligent about adding them, and make sure that the features you do add are focusing on maximizing your user happiness. Unfortunately, most Product Managers do suffer from Featuritis. I believe this is the true test for good Product Management – knowing what features to NOT add to your product. This means fighting to keep things out, not fighting to jam more in.

Why I’m starting a blog, and why you should too

no reviews

If you don’t have an online presence today, the reality is that you’re the equivalent of the product with no reviews on Amazon. And just how often do you buy products with no reviews, or try a new restaurant with no reviews on Yelp/OpenTable?

I believe there has been a general shift in people’s thinking, where a lack of social proof is now an indictment of mediocrity (or worse). In the past, all products, businesses, and people were given “the benefit of the doubt”, but now, having no information available online is actually taken as a negative signal.

I noticed this in my own behavior fairly recently when my wife and I were trying to pick a pediatrician after our daughter was born. It was so natural for us to go online and check out reviews that it didn’t occur to me until afterwards that we were no doubt excluding a bunch of great doctors who just happen to have no online presence. Yet even if that’s true, how would I even evaluate these other doctors to decide who to go and see?

Note that this behavior is not about reviews, it’s about getting more data so you can make more informed decisions. If I wanted to find a personal trainer right now, it would be important that they share the same core beliefs as me. For example, a lot of people tell you squats are bad because they hurt your knees, which I disagree with (as long as you have good form). I also don’t agree with a lot of the nutritional advice out there, so I wouldn’t want someone who is going to try to convert me over to their beliefs. As a result, a 5-star rating means nothing to me if that person is a cardio-specialist vegan who’s never squatted a day in their life, because it’s not at all what I’m looking for. Without that person putting what they believe in and what they know out there in the public domain, how do I know which personal trainer would be the best choice for me?

So, I guess it has sunk in for me that if you have any kind of ambition in your professional life – whether it be starting your own company, getting a promotion, changing careers, changing companies, hiring great people to come work with you, selling your products or services, or whatever really – you need to put yourself out there in public… just like any other product or service on the web.

And just like in the personal trainer example, it’s not about getting everyone to love you so you get 5 star reviews. It’s just about putting your own thoughts out there to find like-minded people. I’m not shooting for “internet famous” here with this blog – I just believe that not having a public persona is no longer a tenable strategy if you’re ambitious.