Whatever Needs Doing
Product Leadership, Software Teams, and Startups
whateverneedsdoing · 6 years ago
User Experience and Measurement Lessons From Bike Sharing at the Beach
Let’s pretend for a second that the bike and scooter sharing startups have sound business models.  They’ll break even on the bikes/software/maintenance, and make money selling the route data to mapping and advertising companies or something.  Have you suspended your disbelief?  Great.  Now picture the perfect setup for customer acquisition for these startups…
It might look something like this:
A sunny day at Alki Beach in Seattle.  A jogging & biking path that goes on for 4 miles of beautiful views and people-watching.  A place where it’s really hard to find parking, there are tons of people outside, and they can’t walk all the way back to their cars in flip flops.  Where couples and families come specifically to move around in the sun, to see all the views, and to make it all the way from one end to the other and back.
People at Alki need temporary bikes.  Pedestrians see folks riding colorful mobile-app-rented bikes all around them.  It’s like free advertising to a perfect target market.
This is the situation I found myself in a few weekends ago.  On a beautiful sunset walk with my wife.  Enjoying the views with tired legs and a hungry belly.  Bikers, skateboarders, and foursomes on rented rickshaws zooming by us.  We saw a number of people using bright green Limebikes, yellow Ofo bikes, and orange Spin bikes.  We parked our car and walked two miles west to a pirate-infested beach bar with bacon pizza and Adirondack chairs (no joke about the pirates).  On the way back, we wanted to pedal.  I’d used Limebike twice before, so I knew what to expect: a heavy, poorly-fitting bike that we could ride for a few miles for a dollar.  My wife was a bikeshare n00b.
Bike Sharing #ProductFail x3 Companies
So what was the #ProductFail?  Where are the lessons?  Basically everything about the user experience starting at the point where I decided I wanted us to bike back to our car.
There was a Limebike right outside our pirate beach bar.  “Perfect!” I thought, and started fiddling with my phone to unlock it.
Problem #1: “This bike no worky.”
No dice.  “OK - well let’s walk a little farther away from the car to get a bike - I think I see another Limebike way down there.”
Problem #2: Two would-be bikers interested at the same time.
Just as we made it to the bike at the end of the line, another guy was walking toward it with his friend.  We talked about it awkwardly for a minute and he decided to let me have the bike.  OK, time to try unlocking it.
Problem #3: Out of battery
Two bad bikes in a row, ouch.  
“Wait, can’t we pedal an electric bike that’s out of battery?  No.  I can’t believe this.  OK, let’s start walking back the way we came from and keep our eyes peeled, there are lots of bikes around.”
After 5 minutes of walking back the way we’d come from… “Here’s a Yellow one.  I guess I can download another app and enter my credit card info - they all work the same way, one brand’s as good as another, right?”
Wait for the download over cellular.  Open the app and create an account.  Fiddle with my credit card out on the sidewalk - you can’t get your first free ride without a credit card entered...
Problem #4: This yellow bike no worky either :-(
At this point I was wondering whether I just had unbelievably bad luck, or whether all of these bikes run on such a rickety operational model and software platform that they’re basically unsuited for busy outdoor areas.
It also occurred to me that when rideshare bike demand is high, there’s a much higher chance that any idle bikes which appear to be available are actually the ones no one can get to work.
...more walking with the wife… she’s resigned to walk back, but my feet hurt and I’m more determined than ever to get a working bike to ride.
Problem #5: Third green bike no worky.
“OMG - this is a laughably bad experience.  I’m gonna blog about it.”
...more walking...
Problem #6: Vandalism.
“We don’t want a third app, do we?”
“Don’t bother, it has no seat.”  (That was the orange one.)
...more walking...
Problem #7: I use my app to unlock a bike, but someone standing nearby says it’s his.
We see a bunch of parked yellow bikes and I’m thinking “finally, these must work!”  
I unlock the first one.  But then a kid looking at the view with his family says “that’s my bike, wasn’t it already unlocked?”  But my app has the bike checked out to me…  “Oh, sorry, but my app is counting time now.  There are four yellow bikes here and three of you - is that OK?”  He agrees, and I get a sweet, sweet ride on a heavy, poorly-fitting bike.
Problem #8: A second Ofo bike is “closed for maintenance.”
The only problem is that the other yellow bike my wife tries (after downloading the same new app to her own phone, creating an account, and entering a credit card) also doesn’t work and shows an error message.
So we pretend we’re 10 again and actually try riding two to a bike.  I thought it was hilarious.  But my wife can’t keep from sliding off the seat, and this is a bit nuts - we don’t really want to explain to our kids a bike crash with both of us on one bike and no helmets.  We decide to walk back.
===
Building a Better Bike Sharing Product
OK - so we can all agree that was a horrible customer experience.  How could the bike share companies build better products?
Let’s follow a straightforward approach that starts with understanding users and the problems they run into, prioritizes a good user experience, and tracks success in the metrics.
Understanding users and their bike share problems:
User feedback will come in through many mechanisms, including app store reviews and Twitter… but to tangibly improve our user understanding, we want to survey some small percentage of our users.  When?  At all the important times: right after they’ve finished a bike ride, right after they’ve encountered a problem, and when they’ve used the app for an unusually long time (this indicates a problem or an uncommon use case).  And we should couple this quantitative survey work with qualitative work (e.g. tag-alongs or diary studies involving frequent bike share users, and in-person street intercepts of new users in busy bike-sharing areas like Alki Beach).
Prioritize a good user experience and track success:
In addition to our “understand work” above, we want to log all the bad experiences we can catch (such as “bike inoperable” error messages), and set goals to drive down the number of users having bad experiences per day/week/month.  With a goal set to minimize bad experiences, it will be easy to prioritize fixes to problems, and we can rank those problems by how often they are experienced multiplied by the severity of their effect on user sentiment (e.g. Net Promoter Score).
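To make that ranking concrete, here’s a minimal sketch of the scoring.  The problem names, weekly counts, and sentiment weights are all hypothetical - none of them come from real bike-share data.

```python
# Rank known problem types by (weekly occurrences) x (estimated sentiment impact).
# All names and numbers below are hypothetical, for illustration only.
problems = [
    {"name": "bike shows 'inoperable' error",  "weekly_occurrences": 4200, "sentiment_impact": 0.9},
    {"name": "bike out of battery",            "weekly_occurrences": 3100, "sentiment_impact": 0.7},
    {"name": "bike vandalized (missing seat)", "weekly_occurrences": 600,  "sentiment_impact": 0.8},
    {"name": "two riders want the same bike",  "weekly_occurrences": 900,  "sentiment_impact": 0.3},
]

for p in problems:
    p["priority_score"] = p["weekly_occurrences"] * p["sentiment_impact"]

# Highest score = fix first.
for p in sorted(problems, key=lambda p: p["priority_score"], reverse=True):
    print(f"{p['priority_score']:8.0f}  {p['name']}")
```

However you estimate the sentiment weights (NPS deltas, survey scores, churn correlation), the point is that the ranking falls out mechanically once the bad experiences are logged.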
Some ways to fix the problems I encountered:
Bike inoperable product fixes: 
In all the cases above where I hit “bike inoperable” error messages, I was clearly, obviously “having a bad time.”  This is the equivalent of a crash or hang in most software.  I was all ready to ride a bike and pay for my ride, but I could not.
The product teams could make major improvements by doing the following:
Log all these errors by type
Goal on reducing them both in terms of raw counts and in terms of total people affected
Do follow-up field research in areas where error rates are abnormally high
Goal the maintenance teams on reducing the length of time bikes are inoperable and out in the field generating bad user experiences
Help users avoid these situations in the first place by much more obviously directing them to the closest working bikes, and by warning them boldly in-app (both visually and audibly) about any very nearby bikes that are offline for maintenance or out of battery (a rough sketch of this steering logic follows the list)
Help users who hit a bike inoperable case find a working ride as quickly as possible, and both apologize and compensate them for the bad experience to turn a negative situation into a positive if possible.
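Here’s a minimal sketch of that bike-steering idea, assuming a hypothetical list of bikes with GPS coordinates and a status field, and an arbitrary 150-meter “warn me” radius.  It is illustrative only, not any company’s actual logic.

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def guide_user(user_lat, user_lon, bikes, warn_radius_m=150):
    """Return (closest working bike, nearby broken bikes worth warning about)."""
    for b in bikes:
        b["distance_m"] = distance_m(user_lat, user_lon, b["lat"], b["lon"])
    working = [b for b in bikes if b["status"] == "ok"]
    broken_nearby = [b for b in bikes
                     if b["status"] != "ok" and b["distance_m"] <= warn_radius_m]
    closest = min(working, key=lambda b: b["distance_m"], default=None)
    return closest, broken_nearby

# Hypothetical data: bike B2 is out of battery right next to the user.
bikes = [
    {"id": "B1", "lat": 47.5812, "lon": -122.4090, "status": "ok"},
    {"id": "B2", "lat": 47.5790, "lon": -122.4102, "status": "out_of_battery"},
]
print(guide_user(47.5791, -122.4101, bikes))
```

The interesting product decision is what to do with `broken_nearby`: a bold in-app warning before someone walks up to a dead bike is worth far more than an error message after they’ve tried to unlock it.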
Vandalism product fixes:
A bike with a missing seat, missing front wheel, or missing chain is just another form of inoperable bike.
The product team could invest in:
Detecting missing bike components via Bluetooth or RFID sensors
Treating any bike with a missing component just like an inoperable bike (with the techniques above)
Social product fixes:
People will occasionally want the same bike at the same time.  But we can both minimize these cases and help folks navigate them via the product.  The product team could:
Attempt to steer nearby users with the app open to different nearby bikes
Institute rules about scanning bikes that are still on someone else’s clock - buzzing both phones and asking the current bike holder if they’re willing to release the bike to someone new or not.
Put a “two people want this bike” button front and center on the app’s map view.  Hitting this button could suggest the next closest bike, offer some suggestions for communication and decision-making, and have an option to randomly select a “winner” in the case that neither person is willing to defer.  Underneath the random assignment function, the team could prioritize any person who just came off a bad experience, such as losing a random assignment or finding an inoperable bike recently.
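A minimal sketch of that tie-break idea, assuming a hypothetical `recent_bad_experience` flag kept per user (the weighting factor of 3 is an arbitrary illustration, not a recommendation):

```python
import random

def pick_winner(contenders):
    """Randomly assign a contested bike, favoring whoever most recently had a
    bad experience (lost a previous tie-break or hit an inoperable bike)."""
    weights = [3.0 if c["recent_bad_experience"] else 1.0 for c in contenders]
    return random.choices(contenders, weights=weights, k=1)[0]

contenders = [
    {"user_id": "u123", "recent_bad_experience": True},   # hit a dead bike 5 minutes ago
    {"user_id": "u456", "recent_bad_experience": False},
]
print(pick_winner(contenders)["user_id"])
```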
Wrap-up:
Many products are a combination of physical goods, software, and operations.  Many products are used in complex social environments that can complicate usage.  Some products are local, and may perform well in terms of metrics at the top level (like across a city or a country), while still harboring very broken user experiences in lower-level locations and for a minority of users.  But any product with these attributes will hit a growth ceiling, as the bad experiences result in churn and negative word-of-mouth that creates a drag on the overall product.
Today we looked at three high-flying bike sharing companies with plenty of VC money that are likely in this zone right now: showing positive top-level growth and usage numbers, but perhaps also struggling with slowing growth and metrics ceilings.  Thinking through your best-case and worst-case scenarios, doing a deeper level of “understand work,” and then prioritizing and thoroughly solving your top user problems can unstick your product and unlock a new round of growth.
Have a story like this?  A horrible user experience or a product with strong top-level metrics that’s slowing down or burdened by some bad user experiences?  Ping me on LinkedIn or Facebook Messenger - I’d love to hear about it.
--
- Mike    
whateverneedsdoing · 7 years ago
3 Kinds of Product Consistency (how not to design the world’s worst elevators)
I work in an amazing building... with awful elevators.
Visitors tend not to notice the problems.  The elevators are new.  They look modern.  They have touch-screen interfaces.  They’re reasonably fast in carrying people up and down.  Plus, who really cares about how an elevator is designed?  But people who work here feel the design problems deeply.  I’ve lived with the aggravation, I’ve surveyed my coworkers, and I can’t help but write about this #productfail as the perfect cautionary tale of what happens when you break certain kinds of consistency expectations in product design.
I’d use these three categories of “Must Fix Problems” to cover the very worst kinds of design flaws for elevators:
Elevator kills or maims people
Elevator stops working, trapping people inside and forcing others to use the stairs
Elevator makes people intensely angry at random intervals
Our elevators fail at #3 all the time.  Many of us are taken on rides up and down without reaching our intended floor.  Many of us have been stuck inside (briefly) waiting for someone new to call the elevator.  We can’t seem to re-train ourselves to the UI well enough to avoid the problems.  And whenever these things happen, we’re helpless to correct the problem from inside the elevator.  We have to get out on the wrong floor to fix the problem, then get back in.  I’ve been here 18 freaking months, and just last week I went on another “ride of shame” to the wrong floor.  My coworker has been here 7 months and confessed to me that she got stuck inside the elevator recently and waited 5 minutes for another employee to call it to a floor.
Why are we suffering?  Because someone made the design decision that there would be no controls to direct the elevator inside the elevator.  But as human beings who have visited and lived in cities, we’ve been trained our entire lives that we can safely get into an elevator without thinking, then tell it our intended destination.
I’m sure there were valid reasons for making this design choice.  These elevators could be 12% more efficient because people going to the same floors are better grouped into elevators, leading to fewer stops on average.  But as users, we’d really like to stop being stuck, sent to the wrong place, and made to feel stupid.  Our elevators are inconsistent with all the other elevators we’ve ever used, and re-training is no fun at all.
Does consistency matter for products?
The importance of consistency is well-known in product management, design, and marketing.  When your product is “inconsistent,” it hurts usability and leads to confusion.  It can aggravate your loyal users (like these elevators) and hurt your overall retention rates.  It can even result in ground-swell community backlash efforts which in turn scare away new users and limit growth.
This isn’t new.  It was a hot topic in the Microsoft hallways 20 years ago when we planned user experience updates for new versions of Office.  The biggest design offense of the web 1.0 era was a lack of consistency across web sites (remember Flash-based websites for restaurants?).  In 2012-13, a desire for consistency in our everyday products was at the core of user backlashes against Apple’s change to the flat, content-first design of iOS 7 and Microsoft’s decision to drop the Start menu and better support touch-screen computers with Windows 8.  And a healthy respect for consistency in design is just as relevant today.  Snapchat just aggravated 83% of its user base with its January 2018 UX changes.
If you are working on software and interacting with designers, you are probably aware of their efforts to define and adhere to consistent design patterns.  If you work with marketers, you’ve probably heard them defend the importance of consistent marketing messages, content, and tone.  If you work on a platform, you’re probably providing common controls and enforcing guidelines for developers that use your platform… all so that the end users can have a consistent experience.
What does consistency mean, exactly?
Consistency in product design is a bit subtle.  There are 3 important kinds of consistency worth striving for:
Your product should be consistent with itself.
Your product should be consistent from version to version.
Your product should be consistent with what your users already know and expect.
#1 is straightforward to get right as long as someone on the team is putting in the effort.  Most designers strive for this automatically, pointing out consistency flaws in your current product design and making their new designs consistent with each other.  Engineers may tend to reuse existing UX code in their area, perpetuating inconsistencies across a large project, but these inconsistencies will be visible with a little testing and some usability work.  If you prioritize consistency fixes from time to time, you should stay on track.
#2 is harder, because there are often very good reasons to make substantial product changes.  You may have a product with a stable user base that’s not growing.  Bold new ideas can jump-start growth, but they require substantial changes to both functionality and design.  You may want to rebrand your product in a minor way to reach new people and/or re-engage users.  Nearly every major mobile app has done this (most more than once).  Deciding which changes are worth the “costs” of the consistency breaks, and how these changes should be rolled out is one of the key ways you’ll earn your paycheck in innovative releases.
#3 really deserves your attention though, because chances are that no one else on the team is thinking deeply about it.  Assume from the start of any project that you need a broad, real-world understanding of your users and the world they’ve been living in.  Supplement your team’s instincts with user research and go through thought exercises on user empathy and expectations as early as possible.  “Where within our product should we innovate and where within our product should we copy what people are used to?” is one of the questions you should be ready to answer at every stage of your product development cycle.
How does a savvy product leader balance consistency and innovation?
Very carefully :-)  
You have some real obligations when you break product consistency from version to version (type #2):
To your users
To your organization
To your product funnels
User obligations: I’m sure you’re already planning to test major changes with a fraction of your users to understand any usability problems and positive/negative behavioral impacts.  You might also want to consider pre-announcing the change and your reasons for it before pushing it to a majority of customers.
Organizational obligations: You’re probably already on top of stakeholder agreement on vision, priorities, and goals.  You have your team’s buy-in for your execution plan too.  Now put some work into understanding the risks of breaking consistency across versions, and ensure the organization is bought into these risks as being justified, despite the likely difficulty in winning over users, press, etc.
Minding your funnels: Change can be good, but it comes with risk.  Take stock of the likely range of outcomes for your upcoming break with consistency and whether your product can survive the change.  For example, if you have a mobile product where your churned user numbers are close to your retained new user numbers, then any meaningful drop in new user sign-ups or retention will lead to a decline in overall active users.  Or, if your monetization funnel is dominated by a few percent of long-term, happy customers - you may see a disproportionate impact to revenue by breaking with consistency in your new version.
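To make that funnel math concrete, here’s a toy projection with entirely made-up numbers, showing how a modest drop in new-user retention after a consistency-breaking release can tip active users from flat to declining:

```python
# Toy model of monthly active users: lose a fixed churn percentage each month,
# and add new signups of which only a fraction are retained.
# All numbers are invented purely to illustrate the shape of the risk.
def project(active, new_signups, signup_retention, churn_rate, months=6):
    history = [round(active)]
    for _ in range(months):
        active = active * (1 - churn_rate) + new_signups * signup_retention
        history.append(round(active))
    return history

print("before release:", project(100_000, 10_000, 0.80, 0.08))  # roughly flat
print("after release: ", project(100_000, 10_000, 0.60, 0.08))  # steady decline
```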
Mistakes I’ve learned from:
Product consistency decisions are hard.  I’ve messed them up on more than one occasion.  
A version-to-version screw-up: In one case, we sought to 10x our mobile app’s success through significant changes and new capabilities, but the organization was not willing to risk the current user base.  Instead of making a major version-to-version change, we should have spun up a new mobile app with the new features and design and used hooks in our current mobile app to encourage users to try the new app.  By the framework here, I didn’t correctly assess my org’s commitment to a type #2 break with consistency.
Internal product consistency arguments: I’ve seen more than one team get into a temporarily dysfunctional state over disagreements on the importance of “design debt.”  This kind of problem results from a lack of communication and team buy-in to the priority of type #1 consistency.  At times it’s not the right call to prioritize small user experience changes (and it’s often hard to measure the impact of these fixes in the short run).  A good technique here is to set a fixed budget (e.g. schedule a “design debt” week every six months) to address these types of issues.
User expectation problems: Having worked on several new versions of Microsoft Office where our biggest competitor was the last version of Microsoft Office, I’ve had the importance of #3 beaten into me from the start of my career.  For those product updates, external user expectations and version-to-version consistency expectations often amounted to the same thing.  When I see these types of flaws in other products like our elevators above (or software products with gratuitous new design paradigms for common tasks), they are often real killers.
Nevertheless, these mistakes aren’t that rare.  Got an example you find particularly aggravating?  Let me know about it.
Related Reading:
Consistency as a design principle
Snapchat’s consistency struggles
Justification for Apple’s iOS 7 design changes
Major apps rebranding their mobile icons over time
-Mike
whateverneedsdoing · 9 years ago
How to Work with Sales, Marketing, and That Platform Team You Depend On
 AKA: How to Handle More than You Can Handle
I was talking to a high growth startup recently about their process challenges when the product manager confessed that she and the engineering team were totally overloaded. “Have you worked with sales teams before? We have constant new feature requests from our customers and sales prospects. We can’t keep up, and I worry that it’s leading to an overly complex product. I’d really like to improve some key things, but I don’t know when we’ll be able to get to them.” We talked about a few techniques she could use to get a handle on the situation, but her dilemma stuck with me as a nearly universal problem.
There are always more feature ideas than time to build, test, and deliver them. That’s the natural order of things in software. But every team comes up with a way to prioritize new work. What I want to cover here is a slightly more complicated situation: multiple competing priorities and constituents. In the example above, the unstated priorities were: (1) land new customer accounts, (2) retain customers and keep them satisfied, (3) improve the product to provide unique new value and a terrific user experience, (4) keep the product UI uncluttered, and (5) keep the product codebase and architecture relatively easy to maintain and improve. #1 was being met, and possibly over-served. #2 was definitely being over-served. #3, #4, and #5 were being under-served.
Without planning and attention, a situation with multiple priorities will generally result in some priorities over-served and other priorities starved. The more fluid the context, and the more often prioritization decisions are made, the faster things tend to get out of whack. Sales, marketing, and direct customer request channels are commonly over-served because of their internal champions and obvious importance to the business. But ANY product priority can be over-served. The optimal mix changes continually along with the business context and goals. Even technology or tools teams serving internal customers struggle with their “multiple masters.”
So how can you deal with a situation like this? How can you work on multiple priorities at the same time without starving a critical area of investment or letting down one or more of your constituent groups? Here are six techniques that have worked for me and many others who have walked in shoes like yours.
Clarify the plan
Establish a process for changes
Include your constituents in the decisions
Make customers pay for their new requests
Use budgets for competing priorities
Use proportional efforts for lower priorities
Clarify the plan:
If you haven’t made a plan for your efforts over the next month/quarter/year or you haven’t taken the time to clearly communicate it across your constituents, then start here. People will always ask for something new that’s top of mind when they don’t know what they’re giving up by asking for the new thing. Your plan will establish a clear ordering of your team’s priorities, and your constituents will either acknowledge this or push for a different prioritization. If there are disagreements, you want to have the discussion as early as possible. After you’ve done a good job setting and communicating your plan, your constituents may start asking you “how soon can I have the new release” instead of “can I have this one new thing by next week?” Who knows, you may even have ideas for how they can help you deliver the release sooner (by gathering feedback, testing an early build, etc.).
Establish a process for changes:
Plans change, often, and even if your plan is great there will likely be new business opportunities, customer pain points, or operational problems that arise and take precedence over executing your plan on time and on budget. What you need to stay sane amid a cacophony of requests is a clear process and rhythm for gathering up new input and deciding what to do about it. Your process can be as simple as “any time a new issue has the potential to really affect our business, email [email protected], and product/engineering/sales/marketing will review and evaluate the issues every Tuesday and Thursday.” In a larger organization, you may want to more carefully define examples and criteria to try to limit the volume of incoming requests before the review stage, or even have champions for key business functions pre-filter the requests from their teams.
Include your constituents in the decisions:
Maybe you noticed that in the process example above, sales and marketing had a say in priorities. When customer, partner, or internal efficiency-oriented requests take a significant amount of work, the best thing to do is lay out the whole capacity of your engineering/design/product team, and organize the big new requests alongside the tasks from your team’s plan. Then have the constituents discuss how to prioritize and fit the most important work into the timeline. I’ve run very effective versions of this meeting, which I called “Schedule Tetris,” with paper cut-outs for capacity and project length. It was an eye-opening experience for our sales and business development leaders, and by the end of the meeting we had an updated plan that everyone accepted as the best path forward.
Make customers pay for their new requests:
With big name customers and million dollar contracts in play, it can be very hard for account managers to ask for more money, and even harder for engineers to push back on new requests. Everyone knows how important that big account is to the company, so shouldn’t anything they ask for be a pretty top priority so that we can exceed their expectations? Well, yes, if they’re your only customer, and if you can still turn a profit on everything they are asking you to do. In any scaled situation, 100% free customer support and product customization is unsustainable. An amazing technique to focus a customer’s decision-making process is to bid out additional costs for their requests. It may take new discipline and exert a slight strain on the customer relationship at first, but it will drastically cut down on customer feature requests. It will also give you valuable insight into those few customer needs that are so darn important they’re willing to pay you more to meet them.
Use budgets for competing priorities:
Budgets can be defined in terms of money, people, time allocated, or any kind of resources. When two or more priorities are really important at the same time, and no one person is in a perfect position to understand the exact importance and consequences across all of the areas, budgeting is a very good way to deal with the situation. For example, imagine you have to both boost sales this quarter and invest in new technology for a new product next year. You can split into two separate teams and give each team one of these missions, or you can establish a budget guideline like “70% of our monthly story points are allocated to sales-boosting work, and 30% are allocated to new technology prototypes.” No matter how cool the new technology is, once the effort spent gets above 30%, everyone knows that some of the cool new work will have to wait.
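If your team already tracks work in story points, the budget check itself can be almost trivial. A minimal sketch, with hypothetical stories, buckets, and point values:

```python
# Compare this month's story points against the agreed budget split.
# Story names, buckets, and point values are all hypothetical.
BUDGET = {"sales_boost": 0.70, "new_tech": 0.30}

stories = [
    ("CRM export for enterprise deal", "sales_boost", 8),
    ("Pricing page experiment",        "sales_boost", 5),
    ("Prototype: realtime sync",       "new_tech",    13),
    ("Prototype: ranking model spike", "new_tech",    8),
]

total = sum(points for _, _, points in stories)
for bucket, target_share in BUDGET.items():
    spent = sum(points for _, b, points in stories if b == bucket)
    share = spent / total
    flag = "OVER BUDGET" if share > target_share else "ok"
    print(f"{bucket:12s} {share:5.0%} of points (target {target_share:.0%}) -> {flag}")
```

Running a report like this at sprint planning turns the budget from a slogan into something everyone can see and argue about with real numbers.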
Use proportional efforts for lower priorities:
Eric Ries has a great startup blog, and one of my favorite articles is Five Whys. I like it because it illustrates a commitment to quality by fully fixing both the code and team processes that led to problems. But I love it for how it lays out a framework for investing an appropriate amount of effort given the severity of the problem being fixed. There will be many times when someone argues that the right thing to do is re-write a code base from scratch, start over with a whole new design, or hire 3 new people to focus on devops. Often, the “big response” isn’t the best use of resources for the business. Investing new efforts in proportion to the time lost or damage done by the error is a rational way to improve product maintainability, team processes, and even customer satisfaction, one bite-sized chunk at a time. Progress gets made in the areas that feel so pressing, without over-investment or interference with the team’s plan.
Every software team I’ve been a part of has had an appetite for new features that outstripped its capacity to build, test, and deliver them. This was true at Microsoft, where imposed resource constraints were critical to driving forward progress in regularly spaced releases. It was true at startups, where we were always under-resourced for our ambitious plans and short deadlines. I’ve been on the constituent side as well, trying to influence a key dependency, offer help, or work-around the team that just couldn’t meet my needs on a near-term schedule.
I’ve heard board members explain that sometimes what it takes to succeed is to knowingly over commit to your early customers and then somehow find a way to deliver what you didn’t think was possible. I believe this – there are those critical times when the best available plan is for a team to work its butt off and try to achieve something incredible. But heroic product development is not a sustainable process, and even the most heroic team has a limit.
When priorities come from a single paying customer or a single product plan, it’s not hard to stay sane and make forward progress. But as soon as multiple customers start asking for different work or we have 3 distinct priorities to execute on at the same time, we enter the more complex world of multiple overflowing priority queues, and we need a toolkit of techniques to deal with them. Pick one of these six to try in your situation this week.
-Mike
  (Title photo courtesy of Rafael Castillo)
whateverneedsdoing · 9 years ago
How To Answer Your Product’s 2 Most Important Questions
In my last post, I went through 5 categories of input that you should be using to drive the best product decisions. It’s a useful checklist I like to revisit a few times a year.
This post dives much more deeply into how to learn the most important things every time you put a variation of your marketing or product experience in front of real users: (1) what those users are doing and (2) why.
Science for Product Managers
To bring some science into product management and marketing you have to observe the natural behavior of your users. Your goal is to predict the behavior of a larger market based on the behavior of a smaller sample group. If the sample group loves the product update, uses the new features, clicks through the ads, or pays for the subscription - then it’s time to invest more resources and deploy the updates much more broadly. If the sample group ignores the new stuff, is confused by the update, or generally underperforms on your metrics compared to past variations, then you likely need to change some things. The closer you can get to this ideal of observing the natural behavior of your users, the more accurate your predictions will be.
The Ideal Setup for Experiments:
If you’re working on improving a popular product, site, or service you already have the best possible setup for learning and validation - your own active users. The trick is making sure your experiments are done with an appropriate level of risk. E.g. if you’re working on Expedia flight and hotel listings, I’m sure you already have an optimization framework in place that lets you filter a fraction of 1% of your traffic through new variations. When new variations perform poorly, you’ve only risked a fraction of 1% of sales revenue. When new variations perform well, you can quickly roll them out to the other 99%. This kind of framework allows for rapid experimentation, minimizing #fails by default and allowing you to maximize #wins.
So what about the rest of us? There are all kinds of situations that are less than ideal for product and marketing experiments, but with just as great a need: 
Early stage startups that don’t have customers yet, or have too few customers to collect accurate data in a reasonable amount of time.
Established companies going after new target markets (e.g. Amazon.com went after 3rd party sellers, advertisers, and cloud developers - all very different markets than online shoppers).
Product delivery situations limited by partnerships, contracts, or market makers (iOS and Android apps, software bundled into other products, white labeled services for major brands, etc.).
Products without sophisticated optimization platforms in place.
If one of these setups describes your situation - the key is to get as close to the ideal setup as practical and go forward with your experiments. Imperfect learning is light years better than none.
Get real users (not friends, auditors, or testers) to discover your product through a real marketing channel (digital ads aimed at your target market are a clean way to do this).
Get enough users to draw conclusions. You can start to make really useful predictions with about 1,000 daily active users, but in some cases you can evaluate a new idea with as few as 150 or 200 users. The math is a little involved, but free calculators can help (a back-of-the-envelope sketch follows this list).
Present your product as released (not a beta or trial), so people make real decisions about purchasing and quality. I’ve been surprised by strong positive reactions to products the internal team thought weren’t ready, so we released faster than planned! As per lean methodologies, you can do this with new features inside a product and even marketing campaigns before you have a product. People who think they’re evaluating a real, finished thing act normally. Beta testers do not.
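If you’d rather compute the sample size than use an online calculator, here’s a back-of-the-envelope sketch using the standard two-proportion z-test approximation. The baseline and target conversion rates are made up, and it assumes SciPy is installed.

```python
# Rough per-group sample size for comparing two conversion rates with a
# two-sided z-test (alpha = 0.05, power = 0.80). Rates below are hypothetical.
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = 2 * ((z_alpha + z_beta) ** 2) * p_bar * (1 - p_bar) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a lift from a 10% to a 15% signup rate:
print(sample_size_per_group(0.10, 0.15))  # roughly 700 users per variation
```

The smaller the lift you need to detect, the more users you need - which is why a bold change to a small sample can be evaluated quickly, while a subtle tweak cannot.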
What Are They Doing? Why?
Once you have users interacting with something they think is real, the behavioral data you get from your product or marketing analytics packages can answer really important questions like these:
Are people responding to the new thing? (If not, present a different offer or dig deeper into user needs)
“What are users doing in my product?” (Invest more in these areas)
“What aren’t they doing?” (Change the approach, or cut the feature to simplify the product)
“What are the effects of our latest updates in terms of user actions?” (Even better if you run A/B tests with new updates)
“Are people staying engaged long enough to grow this as a business?” (Most easily viewed as cohort tables)
“Where are the biggest drop-offs in my user lifecycle?” (Best viewed as progression funnels)
We’ll call this the WHAT data. It’s vital, and you should record everything your users do, down to the individual clickstream level. You record everything for two reasons: (1) you won’t know all the important questions in the beginning - new, important questions will come up as the product evolves - and (2) you don’t want to end up with metrics debt: your team should get into the habit of recording all user interactions with the product, so your metrics are always up to date with the latest changes and features.
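The exact schema matters less than having one consistent, append-only stream of events. Here’s a minimal sketch of what “record everything” can look like; the event names and properties are hypothetical, and in practice you’d send these to your analytics pipeline rather than a local file.

```python
# Minimal event-tracking shape: every user interaction becomes one small,
# uniformly structured record.
import json
import time
import uuid

def track(user_id, event, **properties):
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "event": event,
        "properties": properties,  # anything specific to this interaction
    }
    with open("events.log", "a") as f:  # stand-in for your analytics pipeline
        f.write(json.dumps(record) + "\n")

track("u123", "signup_completed", referrer="facebook_ad_42")
track("u123", "profile_photo_uploaded", source="camera")
```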
Set up reports for the most important behavioral metrics, share these metrics with the whole team, and judge your product progress based on them. User engagement measurements are vital to most products. User progressions through product stages or a product lifecycle can also yield big insights. Social metrics (Likes, Shares, Reads, K-factor) and sales figures (Last purchase, Average purchase size, LTV) help you identify your best customers and guide marketing spending. Use charts and visualizations that work for the various kinds of data, and review them at least once a week to spot problems and opportunities.
But behavioral data like this isn’t the whole picture. What your users are doing will sometimes be confusing or frustrating. If people are churning off your service after a few days, the important question is WHY. When you understand WHY, you can fix it. You’ll naturally form hypotheses as to WHY on your own whenever the data is surprising, but the main ways to figure out the true WHYs are by holding Q&A sessions with users who fit the confusing behavior pattern (e.g. pop-up surveys or phone calls), or by much more closely observing the way a few people behave and react to the product to draw accurate conclusions.
Use Qualitative Research to Learn Why: 
At many companies, qualitative research is underutilized or completely missing. This is a waste, because it’s the best way to gain deeper insights into what excites people, what confuses them, and what their unmet needs are. These unmet needs will become the future hallmark features and differentiators for your product. You don’t need to be an expert to do this well. Read some articles, and go do your first contextual inquiry session. You simply go to where users are, and you watch them do the things you think your product might be able to improve for them, just like a non-interfering scientific observer/ethnographer. I used to do this years ago with the Microsoft Office team. Just watch people do their jobs, whether or not they were using our products. Some of the best new feature ideas came from what people were trying to do with pen and paper (OneNote) or coordinating with their work teams (SharePoint). At Ontela, we went and watched Cellular South retail reps sell new phones, knowing that we’d have to adapt the product to their sales environment. At INRIX, we set up “drive-along” sessions to observe drivers struggling with traffic and think of new ways to meet their needs. I recommend you do this kind of in-person learning whenever you’re planning something big and important. You can do it before you even have a product.
As for usability tests, everyone’s heard of them, but few people do them regularly. I believe in the value of user testing, yet some of my teams have still gotten stuck in the pattern of a usability test only every 3 months. But with the newer online services available for designing, recruiting, running, and recording studies, there’s really no valid excuse anymore. It’s *so* worth it. You can now spend 1 hour and $200 to get some meaningful insights into how people are reacting to a product or planned update. At Mazlo, we incorporated this pattern into marketing optimization tests. We’d build a variant, run two hours of user tests in the evening, and if there weren’t any major surprises, we’d go live with ad campaigns the next morning to get useful conversion data. This process is lightweight enough to do multiple times a week.
Another thing that can limit the value of usability tests is guiding users too narrowly through a list of tasks. This is the traditional technique, and it’s designed to come up with success rate information. “5/8 users tested successfully completed the task of uploading a profile photo during setup.” Not so revelatory, is it?  In the early stages of product development, you can learn a lot more by running usability tests without clearly defined tasks. E.g. “Your friend Mike just sent you this email, giving you $10 off a new service. He said he likes it and he thinks you would like it too. Go ahead and do what you’d normally do in this situation, and speak your thoughts aloud as you do.” Now the user you’re testing will set her own goal, maybe to figure out if this website is worth her time and attention, and you’ll likely learn deeper truths about user expectations and problems, in addition to spotting the usability gotchas along the way.
How Your What and Why Inform Your Who
You can go far understanding what your users are doing and why. But there’s one more vital question that requires another technique altogether: “Who are my best customers and how do I find more of them?” The clues you need to answer this question are hidden in your behavioral data. By looking at the set of users who have the highest lifetime value, the greatest engagement, who’ve made the most referrals, or who’ve contributed the highest quality content to your ecosystem - you’ve already identified a subset of users who are by some measure ideal for your product. Now the trick is to figure out what they have in common. Which marketing campaigns or referring web sites brought them in to the product to begin with? Do they share a demographic profile? What features do they use the most and how does this differ from the overall averages? Break out your surveys, phone follow-ups, and qualitative techniques for these users specifically. By understanding them more deeply - why they love your product, what problems of theirs it solves, what pain points they still have - you’ll be able to steer your future marketing efforts and product investments to find, onboard, and retain more people like them. And more customers like your best customers will have a supersized impact on revenue, community cohesion, and viral growth.
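One simple way to start that analysis is to slice your behavioral data by a “best customer” cut and compare it to everyone else. The sketch below uses pandas with a tiny inline dataset; the column names, channels, and numbers are all hypothetical stand-ins for whatever your own warehouse provides.

```python
# Compare the acquisition channels and engagement of high-LTV users vs all users.
import pandas as pd

users = pd.DataFrame({
    "user_id":             ["u1", "u2", "u3", "u4", "u5", "u6"],
    "ltv":                 [310,  12,   290,  8,    15,   22],
    "acquisition_channel": ["referral", "display_ad", "referral",
                            "display_ad", "search_ad", "search_ad"],
    "weekly_sessions":     [9, 1, 7, 0, 2, 3],
})

best = users[users["ltv"] >= users["ltv"].quantile(0.75)]  # top slice by LTV

print("Channel mix, best customers vs everyone:")
print(pd.concat(
    [best["acquisition_channel"].value_counts(normalize=True).rename("best"),
     users["acquisition_channel"].value_counts(normalize=True).rename("all")],
    axis=1).fillna(0))

print("\nAvg weekly sessions:", best["weekly_sessions"].mean(),
      "(best) vs", users["weekly_sessions"].mean(), "(all)")
```

Whatever the over-indexed traits turn out to be, they become hypotheses to confirm with the surveys and qualitative follow-ups described above.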
Wrapping Up
Using these techniques and your own creativity, you should be able to move closer to an ideal setup for your product and marketing experiments in just a few weeks. Then you’ll start getting good answers to your What and Why questions that will lead you to make your marketing and product better with each iteration. Your next step is to turn this whole cycle into a natural, habitual process for your team. What metrics do you look at every week? What metrics are you aiming to improve with new feature ideas? How will you set up the right amount of process for your team to incorporate behavioral data and qualitative research into a regular rhythm?
Then when you have a good learning process and active users, take the time to figure out what your very best customers have in common. The answers will help you direct your marketing to find more people like them, and better meet their needs.
Want a quick assignment to learn something new about your product and users next week? Spend 1 hour right now to set up your first online usability test against your marketing web site. Or send out a customer email asking for volunteers who will let you come watch them in action for 2 hours in order to make the product better.
Happy experimenting!
--
-Mike  
(Title photo courtesy of Gabriela Talarico, usability test sign photo courtesy of Aaron Fulkerson, both under the CC license)
whateverneedsdoing · 9 years ago
This is a more personal article that I published over on the Lifehacker family of blogs, but it’s timely if you’re thinking about how to make a few meaningful changes in 2016.
whateverneedsdoing · 9 years ago
Make Better Product Decisions with 5 Vital Input Channels
OK, let’s say you’ve formed a solid product vision and the team is moving forward. There are new features being demoed, tested, and deployed every week. You’ve got 200+ user stories in a backlog.  The company is jazzed about the product and information from the outside world is coming in fast: how people reacted at the latest sales demo, new things the competition is doing, customer support tickets, data from your user analytics, click-through rates from marketing campaigns... There’s a lot of energy and enthusiasm, and more than a little information overload. Congratulations on reaching this point!
By putting your product in market and opening up to feedback, you’re on your way to avoiding two pitfalls that still affect many product teams: (1) failing to design something people really want (see Paul Graham’s #1 reason startups fail) and (2) burning too much time perfecting the product prior to feedback (see “perfect is the enemy of the good”).
A key part of your job now is to make sure you have the right information coming in, quickly enough, and being shared openly enough that the team can make good product decisions.  It also helps to know what questions are most important and which data to act on vs. ignore.
The web is full of good presentations on product analytics (see Dave McClure’s Startup Metrics for Pirates), well-documented tools for running optimization experiments, and easy reads on how to run your own usability tests.  My goal here is to give you a checklist and framework for thinking about the health of your product data loops.  Think of all the various lines of product feedback as tubes and sensors connected to a patient on the operating table. All the decisions you’re making as the surgeon (aka: product leader) have real consequences, and it’s critical that you have the right instruments and tools providing timely feedback.
#1: The Business Context
Business goals for the product
Company strategy
Desired timelines
Available resources
Contractual and partnership limitations
This may seem like the easiest one to get right because you can arrive at the answers through discussion with internal stakeholders, and sometimes the business context is stable for quarters or even years. But the business context can change quickly, and sometimes you’ll be the first to realize that it needs to change. It’s easy for a little bit of misalignment here to have a major impact on the team’s effectiveness, so you have to make sure context is clear and risks are understood.
Examples of the business context needing your attention: The most common example here is a product timeline that’s turning out to be too long for the business context. Scope/resource/timeline decisions need to be made clearly and with outside participation. A more subtle example is the product team thinking the goal is to build a successful new online service, but the real business goal being to determine whether the new service is a promising enough opportunity for further investment. These aren’t the same thing, and since a timeline is always part of the equation, you can significantly improve your chances of hitting the business goals by making sure efforts are exactly aligned with the business context.
Frequency: Communicate/repeat the business context to the team every week. Resolve any mismatches or impossibilities ASAP.
#2 The Product Vision
What we’re building
For whom we’re building it
Value proposition
Key points of differentiation
UX principles
Your product vision is your framework for weighing opportunities and navigating feedback from all the other channels. It’s important to communicate it clearly so it can drive priorities. And it’s important to update it when things change. Feedback is often in conflict. There will be a never-ending stream of support requests, behavioral data anomalies, funnel optimization ideas, usability fixes, and technical debt. But it’s rarely the right call to spend a year just solving and fixing and updating based on feedback. You also need to move the product forward in order to grow users, increase revenue, and keep ahead of the competition.
Examples of the vision needing your attention: Sometimes early experiments show that a great differentiating feature idea from the vision document isn’t well received by users. Judgement is needed in this case - should the idea be abandoned, changed, tested with a different market?  What’s been learned and what you want to learn next need to ripple through the plan. In early stage start-ups, the high level vision may be understood but accompanied by so much churn on the fundamentals (Who are our best users? What do they want? How much will they pay? Is it selling?) that you’ll get into a weekly learning cycle perfectly appropriate for your stage. You should still write a short vision doc (even if it’s just a 2 page doc or a few slides), and revisit whenever team members start asking questions about what to focus on next and why.
Frequency: Within an Agile process, create small vision statements at the Epic level. Revisit the product-level or major-release-level vision as needed to keep up with new information and market changes. 
#3 Users Reacting to Your Product
Behavioral data (your analytics, and any reports or dashboards you build)
Qualitative research (usability, contextual inquiry, follow-up calls)
Customer support requests, reviews, & tweets
Customer satisfaction and referral scores
If you’ve ever read about Lean Startups and Minimum Viable Products, it’s all about getting users reacting to your product (and product ideas) as soon as possible. While there is some risk of user backlash, abandonment, or false negative responses to experiments if you put out half-baked software, you really don’t know jack until you see how people interact with your product. And in software, everyone pretty much gets this now. Almost every product team today has some kind of analytics package (Google Analytics, MixPanel, Flurry, etc.) and collects some usage data. And if you release to any kind of online store, you’re almost certainly on top of app reviews and social channels. What teams aren’t doing as consistently is qualitative research and measuring customer satisfaction and referrals. These really aren’t hard to do, and yield information about your users that you can’t get any other way.
Examples of the user input channel needing your attention: The most common issues you’ll see with behavioral analytics are holes in the data collection or filtering capabilities (“Oh, I can’t view the actions of only those users who are having a specific problem? Then how can we figure out what’s going wrong?”). Your product analytics might also be misaligned with marketing statistics (Ugh - one system says we had 52,000 new users last week, and the behavioral system says 43,500). Or they could be in such a hard-to-use format that really important questions about the product go unanswered. If team members are making guesses about any question that fits into the framework of “our current users are doing XYZ,” that points to a hole in your analytics data or the team’s abilities or habits related to using it. 
As for qualitative research, whenever there’s an internal discussion about users in the form “why are they doing this?” or "why aren’t they doing that?” then you have a gap in your qualitative research. Some techniques in this area are usability tests, focus groups, and ethnographic studies. If you know all about these but think you just don’t have the time and/or money and/or expertise to run them, it’s time to revisit the latest online tools. You can now do really useful studies in a few hours for a few hundred dollars - my next blog post will go into the details.
Frequency: Daily behavioral data. Real-time when you’re looking at how a new release is received. Qualitative research as part of planning and implementing new features. Weekly reviews of support requests.
#4 People Reacting to Your Marketing
Campaign results (e.g. impressions, clicks, new users or sales) 
Channel tests
Landing page analytics, replays, and chat
Current bests: Customer Acquisition Costs, Messaging, Target markets
Competitive research
Marketing results are just as important as user behavior to validate your product and business. They get huge emphasis in the MVP/Lean methodology, and many writers argue you should do throw-away marketing tests before even building a usable product. If you do this, your ad campaign or sales presentation is your MVP for testing. Speaking from experience, it’s worth being creative and running marketing tests as early as possible, because if you discover that the marketing isn’t working it often means you have the wrong market, the wrong messaging, or the wrong value proposition. Changing to the right ones will require a real pivot with significant changes to your product. You should try to do those as early and efficiently as you can.
Examples of marketing data needing your attention: Where teams usually struggle is in tying everything together between marketing, product, and business context. Almost every advertising platform has built-in reporting, but it’s worth putting in the effort to connect the channel and campaign through which you acquired each user to the actions they’re taking inside your product. This will make it possible to figure out not only your return on each ad campaign and channel, but also where your most engaged users are being found.
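A minimal sketch of that join, with pandas and entirely made-up campaign names, spend, and revenue figures (the table shapes are stand-ins, not any particular ad platform’s schema):

```python
# Join ad spend per campaign with the revenue and engagement of the users each
# campaign brought in, yielding CAC and ROI per campaign.
import pandas as pd

campaigns = pd.DataFrame({
    "campaign_id": ["fb_beach_video", "search_brand", "display_retarget"],
    "spend":       [5000, 2000, 3000],
})

users = pd.DataFrame({
    "user_id":         ["u1", "u2", "u3", "u4", "u5"],
    "campaign_id":     ["fb_beach_video", "fb_beach_video", "search_brand",
                        "search_brand", "display_retarget"],
    "revenue":         [120, 0, 450, 380, 40],
    "weekly_sessions": [2, 0, 6, 5, 1],
})

per_campaign = (users.groupby("campaign_id")
                     .agg(new_users=("user_id", "count"),
                          revenue=("revenue", "sum"),
                          avg_sessions=("weekly_sessions", "mean"))
                     .reset_index())

report = campaigns.merge(per_campaign, on="campaign_id", how="left")
report["cac"] = report["spend"] / report["new_users"]
report["roi"] = (report["revenue"] - report["spend"]) / report["spend"]
print(report.sort_values("roi", ascending=False))
```

The same joined table answers both questions at once: which campaigns pay for themselves, and which ones deliver the most engaged users.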
If acquisition costs are too high, you may also need to explore business model or strategic changes. If you’re off by 10x, look for a new channel or approach.  If you’re just off by 2x - try optimizing within your current channel to get to a sustainable marketing routine. Traction is a great, quick read that details this process. 
Current Bests: The market segments, messaging, and visual imagery that are performing best are important in the same way that the product vision and business context are important. Incorporate the latest into the vision and drive product updates that ensure your product experience is consistent with your marketing promises and tuned for your best customers.
Competitive Research: From now on, whenever someone at your company points out the competition and what they’re doing, I want you to imagine Admiral Ackbar from Star Wars yelling “It’s a traaaap.” It’s not that you shouldn’t look at competitors, but it’s very easy to waste a lot of time and energy focusing on your competitors when you’d be better off focusing on your customers and/or prospects. I’ve listened to CEOs confess that they copied parts of competitors’ designs only to find out later that those competitive products weren’t working. I’ve worked through business school case studies in which companies have made false moves on purpose to get their competitors to waste time and energy by following. Unless your primary market segment is unhappy customers of a much bigger competitor, your users deserve 95% of your attention. When should you put in this 5%? When you’re assessing alternatives to your product (including those that aren’t traditional competitive products) in order to establish your value proposition, positioning, and differentiation. And rarely, when there’s such a great innovative thing happening with a competitor that all your customers will come to demand it. Think of the iPhone’s touch screen.
Frequency: As quickly as you can for Channel tests until something looks promising. Daily for campaign results and landing page stats. Review marketing as needed when new best performers emerge. Limit competitive research on purpose to just when you’re establishing your differentiation, or searching for promising new features from a whole variety of sources. 
#5 Your Own Internal Pain Points
R&D pain (technical debt, technical support, risk mitigation, tools)
Operational pain (user administration, customer support, billing, reporting)
Sales and marketing pain (demos, free trials, etc.)
Whenever a group of people work together, each person forms unique perspectives on their jobs and the processes that touch them: what’s hard to do, which work is error-prone, what seems redundant and like a waste of time that could be better spent elsewhere, and how the company, the product, and the workflows could be better and more automatic than they are today. That’s the natural output of smart people working, and you want all of that input to come in, because sometimes internal pain points can severely limit your ability to scale and succeed as a business, further develop your product, or maintain and grow a happy and productive team.
The thing to keep in mind is that the people you work with know how to reach you 100x more effectively than your users and sales prospects. Plus you will naturally want to help your co-workers by devoting energy to solving internal problems. Because of that, you have to listen, but automatically turn down the volume of the input to be about 1/3 as important as it sounds. When your job is to make the best product and business, the right call is often one more partial improvement to code maintainability or admin tools, rather than a larger, more complete investment, so that the bulk of your team’s energy can continue to go into growth. If your product is about to fall over from unmaintainable spaghetti code or over-taxed manual workarounds in business operations, you will hear about it 50 times or more. 
Wrapping Up
When you’re leading product development, there’s always too much to do and too little time. Depending on the situation, it can be tempting to stick to an established plan despite feedback that calls the plan into question, or to focus all the team’s energy on a single feedback channel to try to make one group of users or internal stakeholders happy. But the best path forward usually needs to be discovered, first in broad strokes, and then in more detail through a process of learning and optimization. To be in the best position to make these discoveries and validate your product, ensure you have these 5 distinct input channels flowing into your prioritization and roadmap process and being shared across your teams. Drill into your user data and marketing data to make sure these are connected and set up to answer the right questions. And ramp up on qualitative techniques to get at the reasons why people are reacting the way they do.
What about your story? Missing one of these input channels? Stuck getting the team to use it? Let me know in the comments.
--
-Mike  
whateverneedsdoing · 9 years ago
Adding New Use Cases To Your Products? Keep Your Original Brand Promise Sacred.
If your business and product lines are growing, congratulations! It’s terrific if you’re in a position to add new use cases to provide more value to your customers.
In this post we'll take a look at how Zillow and Netflix changed their core user experiences over time and how easy it is (even for great product teams like these) to forget why their current customers were visiting every day as they worked hard to plan, design, and build new capabilities.
Case Study: Zillow
For much of the last year, I’ve been actively looking for a new home in Seattle.  My wife and I have visited about 40 houses and made a few offers.  Along the way, we spent tens of hours using two well-known Seattle real estate technology services: Zillow and Redfin.  I used Zillow and my wife used Redfin.  I chose Zillow without even thinking about the choice.  The brand was burned into my brain as the best way to find information about homes.
The Original Use Case
Why was Zillow burned into my brain?  Because 9 years ago I learned about the company from the mother-in-law of a high school friend living in Montana.  The circumstances were so weird that I still remember it.  She told me this new Zillow web site was great and I had to try it, and I wondered why a grandmother in Montana knew all about a Seattle startup that I hadn’t even heard about yet.  I used the site the same way everyone used it at first - to look up my home’s estimated worth, see the history of how it had appreciated over time, and compare my statistics to those of my neighbors’ homes.
5 years ago Rich Barton described his early-stage strategy to my TechStars class, and it turns out that my personal story of discovering Zillow fit right into his playbook.  Rich said he focused his companies on unlocking useful information and nailing a “provocative use case” that created word-of-mouth traffic to the site.  Utility was very important, but so was satisfying consumer curiosity and providing a service worth talking about.  
More Uses Over Time
By 2010, of course, Zillow had picked up more consumer use cases such as finding homes listed for sale, finding a real estate agent, and finding a loan.  As with so many advertising-driven businesses, real estate and mortgage professionals had become Zillow’s paying customers, and consumers interested in buying and selling homes had become part of Zillow’s product offering.  This additional complexity is a natural evolution of a product line in a growing business, and the positioning of Zillow in the minds of millions of users evolved over the years to accommodate these new scenarios.
Much like these millions of users, I was already used to using Zillow’s cool map, historical information, and price estimates.  So I turned to the same service last year to try to find homes where my family might want to live.  But before long, I started to notice that many of the listings on Zillow were out of date.  In many cases, when I found a home I wanted to visit and called the listing agent, she’d already sold the home.  I wondered: were the realtors that forgetful, or were they leaving homes listed on purpose to generate free leads of interested home buyers?
That was wasted time and a lousy consumer experience for me... but still, Zillow had a great map, great search, and notifications based on new search results... so I kept using it.  My wife told me she liked Redfin better, and I took a quick look, but it looked pretty much the same to me.  I wasn't interested in Redfin agents, I just wanted the best map and home search.  Zillow still seemed like the best option for me even though some agents might be abusing it.  Such is the power of brand familiarity and rationalization.
A Broken Promise
My wife and I continued our split real estate service usage for a few more months.  Then in February 2015, I decided to update my calculations for some build and remodel options.  Homes were appreciating monthly and I needed a current estimate of my home’s worth to do the remodel math.  So I did something I’ve done at least 20 times in the last decade: I opened up Zillow.com and typed my own address into the home page search box.
But my home did not show up.  No address.  No tax history.  No price estimate.  Instead, I got a map of my overall neighborhood, zoomed too far out to see individual addresses, and a list of homes currently for sale near my home.  
(Your address isn’t interesting. Why don’t you buy one of these homes?)
I was dumbfounded.  I went back to the home page and tried typing my address in again, then a 3rd time with the city name and zip code.  I tried exploring the navigation of the Zillow site.  I started to get angry.  And finally I googled for a solution and found my way to this page, a special page created just in case someone like me wanted to do the thing I’d spent a decade believing Zillow was invented to do.
That same night, after 30 minutes of frustration trying to make Zillow behave like Zillow again, I set up a Redfin account, installed Redfin on my iPad, and set up Redfin search alerts.
By the way, if you are looking to buy in an aggressive real estate market like Seattle, I strongly recommend Redfin no matter who you want to work with as an agent.  They are accurate up to the minute with the MLS real estate listing service.  You’ll basically have the same information your real estate agent has with a much nicer user experience.  But I wouldn’t have bothered to learn this if it weren’t for my Zillow broken promise experience.
What To Take Away
Outside the heat of that moment as a frustrated user, I’m not surprised this happened.  I’m sure there was a discussion in a Zillow office about ideas for growing revenue by promoting high revenue features.  Steering users toward houses that are actually for sale rather than information on inactive properties seems like a sensible plan.  Why not test it for a few months and collect the data?  One could argue that this is an easy experiment, easy to undo, and the potential rewards justified the risk of change.  But compare this natural line of optimization thinking to the company founder’s attribution of early success to that first “provocative use case.”  The experiment being justified is an abrupt and total departure from that.  One that I experienced as a broken brand promise.
I don’t expect the behavioral data they’ll use to judge this design change to tell the whole story, either.  There could be a short-term uptick in lead generation revenue over a few months, and it could be that not very many people complain through support channels.  Average time on site or monthly unique visitor stats may not noticeably decrease in the first few months.  But all the while, some users like me are sure to be so flabbergasted that they turn to alternative real estate services.  That slow trickle of defections might become noticeable in the traffic stats several months down the road, without an obvious explanation.
The key lesson here for entrepreneurs, product folks, marketers, and design leaders is that one of the fastest ways you can kill a really strong customer relationship is to completely neglect your original use case.  That use case can be such a basic expectation that your users will view any problems with it as a broken brand promise.  And broken brand promises cause people to rethink their relationship with the brand and look for alternatives.  There are always many potential design solutions - and some of these will be compatible with what your users already expect.  Zillow could highlight homes for sale all along the paths that their users take to get the answers they’re looking for.  They could show both the exact address that I typed in and nearby homes for sale on the same screen, thereby keeping that original use case intact while testing revenue improvements.  
Here’s an alternate mock-up that took me 10 minutes in the Mac OS X Preview app:
(You want to know about 724 Arguello Blvd? Of course, that’s what Zillow does. By the way, if you want a professional estimate you might contact these good people.  If you want to refinance, we’ll help you get the best rate on a loan.  If you want to price your home to sell it, or buy a nearby property, here are some comparables we can flip through one at a time on the bottom...)
There are always design solutions that build from a starting point rather than begin with an abrupt departure.  The critical thing is simply to remember and revere what the majority of your users are already expecting from your products.
Note: if you type in a specific residential address on Zillow’s home page today (July 4, 2015), you will get that home’s public information and price estimate at least some of the time.  I’m happy to report that at some point between February and now, Zillow partially corrected the lapse in their original use case, or dialed back the frequency of testing the new “promote homes for sale near the searched-for address” behavior.
Case Study: Netflix
I wasn’t an early Netflix user.  By the time I signed up for Netflix in 2009, they’d already started offering limited content for streaming, and you could get most of the DVDs in Blu-ray if you preferred it.  But despite being late to the party, I liked the service right away.  
The Original Use Case
What I liked about Netflix from the start was what I’ll call “search and add to playlist.”  You could type in any movie or TV show you’d ever heard of, and Netflix would show you the listing, and whether you could stream it, get it on DVD, or had to wait a few months until it was officially released for rentals.  This worked even for pre-release movies.  Every film (so it seemed) showed up in Netflix search results, and regardless of the specifics, you could add it to your list of stuff to watch and forget about it.  DVDs would come to your house each week, and you’d watch them.  You could stream some things right away.  You never had to remember that a friend told you how much they liked Who Killed the Electric Car? or Let the Right One In at lunch, you just added it to your queue and you’d get around to watching it some day soon.  Life was a little simpler because of Netflix.
More Uses Over Time
Of course Netflix was also known for content recommendations - a feature that was good for users as well as company operations.  At first, this was a virtualization of the video store experience, and it was very compatible with the “search and add to playlist” use case.  With recommended titles at your finger tips plus the ability to browse genres and new releases, it was easy to find more good content and add those titles to your playlists along with the movies you already knew you wanted to watch.  Later, as more and more content became available for streaming, recommendations and browse-ability made for a new use case - users who wanted something vaguely interesting to watch right now could use Netflix as a reasonable alternative to network or cable channel flipping. 
In 2010, I got familiar with another way to use Netflix - binge watching TV shows by the season. I plowed through 75 episodes of the 2004 version of Battlestar Galactica in about 3 months, then caught up on 2 seasons of Breaking Bad and the first season of The Walking Dead.  And additional use cases kept coming.  My kids started to watch kid-oriented shows on Netflix (which led to wildly inappropriate recommendations as adult and kid titles blended into one list).  My family started to think about dropping cable entirely and using a combination of network TV streaming and Netflix.  In 2013, Netflix moved aggressively into creating their own original content just like HBO and AMC, and they’ve undeniably produced some popular and compelling television.  In fact, I just heard a radio ad this morning that called Netflix “the world’s leading internet television network.”  Quite an evolution in use cases.
A Broken Promise
Even after 4 years, Netflix’s decision to split DVD by mail and streaming video into two separate companies (in late 2011) is still seen as one of the biggest customer expectations blunders of the decade.  It took them only 3 weeks to partially reverse their decision amidst the public outcry, but it took 3 full years to repair the damage to their market value. 
(Netflix vs. the NASDAQ Composite index)
Yes, Netflix users were angry about what they viewed as a sudden price hike.  They were also let down by having to make a choice they didn’t want to make (Which service to continue? Both?) and by the usability problems caused by the separation.  But the biggest issue is one that Netflix still hasn’t fixed.  For anyone who continued with Netflix streaming, “search and add to playlist” - the original use case for me and millions of other customers - has been abandoned since late 2011.
(Harry Potter?  You don’t want to watch that.  Why don’t you watch something else?)
I’ve considered canceling Netflix a few times in the last year.  Why?  Nearly every time I actually try to find a movie by name, I don’t even see the listing in the Netflix search.  To add insult to injury, my carefully crafted playlist dating back to 2009 has lost nearly 50% of its contents as Netflix’s licensing deals have expired over the years.  I had to resuscitate my old IMDB wish list since Netflix just doesn’t do it any more.
And because they have so few of the movies I want to watch available for streaming now, I find that I’m using Google Play and Chromecast to get through my IMDB playlist.  The quality is high, the price is reasonable, and Netflix has been displaced from my original use case.
Netflix is also letting me down in my second use case these days, because they’re always a full season behind for the TV series I like.  My problems started with The Walking Dead when I didn’t want to wait a year to see the new episodes.  Amazon Instant Video was my solution to that problem.  Archer is my latest vice, and I won’t be waiting a year to finish season 6.
What To Take Away
I know that Netflix has to negotiate with lots of different media companies, and the terms of their contracts likely restrict them from giving me a perfect user experience.  Their options are constrained in a way that many other technology companies’ options are not.  In addition, Netflix probably couldn’t afford to keep offering both DVDs by mail and a larger and larger streaming catalog with growing bandwidth needs for the same low price.
But what they did was decide to split their brand, raise prices, make their services harder to use, and break their original use case all at the same time.
They had better alternatives for their users.  If they had made maintaining the “search and add to playlist” use case a top priority, they could have simply kept their search catalog omniscient, and offered users the specific options available for each title at the prices allowed by their deal terms.  Got that movie ready for instant streaming?  Terrific!  The latest season of this TV show has to be purchased for $2 an episode?  Well, OK.  This documentary can’t be streamed, but the DVD can be mailed to my house for $4?  Deal.  Their membership catalog could continue to vary as they continually renegotiated deals with Universal, Sony, Disney, and the rest.  Netflix could have retained its position as the center of my video and TV content world, and made more money from my household each month simply by allowing the same instant purchases and rentals I can get from Google, Apple, Amazon, and Blockbuster as a la carte options above and beyond their membership catalog.
Wrapping Up
A lot has been written about keeping things simple and avoiding complexity, and that’s good advice in many situations.  But in a growth company, it is natural for the complexity of the customer relationship and the product offering to increase.
What we should learn from some of the most successful internet brands like Zillow and Netflix is how easy it is to forget why your current customers are visiting every day as you work hard to plan, design, and build your future experiences.  We should realize how serious the consequences of forgetting these reasons can be.  We should take steps every release to remember these reasons and treat them as sacred.  You can think of it as your brand promise, your primary use case, or your core user scenario.  By holding this up as a vital customer expectation, and making sure everyone on the team understands it, you’ll be in a position to design and code your new capabilities in a way that enriches your customers’ experience rather than leaving them scratching their heads and looking for alternatives.
Got a story to share about adding new capabilities and what happened to your original use case?  I’d love to hear about it in the comments.
--
-Mike  
(Banner photo above courtesy of Neal Whitehouse Piper and the CC license)
whateverneedsdoing · 10 years ago
Need a Better Way to Drive Change? Stop Enforcing Process & Design Team Habits.
Most Process Changes Don’t Stick
OK, let’s do a quick show of hands of everyone who’s come up with a new process to solve an important problem.  Keep your hand up if you’ve had a personal experience in which a week or a month later the new process wasn’t being followed.  All of us then?
Most managers learn to "think process" early in their careers and develop the skills to spot and fix process flaws.  A few study the art of driving process and culture change: communicating their reasons, getting buy-in, and maintaining ongoing accountability.  I’ve worked to improve team processes at big tech companies and designed new processes to fit the ever-changing goals at startups.  I've learned to appreciate great startup cultures and tried to re-create similar cultures at bigger companies.
What I learned from these experiences is that even when people agree a new process is a good idea, it's extremely hard to follow it consistently over time.  A new process can succeed if you can make it the top priority and maintain that focus for months.  But in a chaotic environment with many competing priorities (e.g. a startup) or a long-standing corporate culture (e.g. any company that’s been around for 10 years), the deck is stacked against you. 
Given that iterative change is necessary to compete in business and build high-functioning teams, we need some new tools in our toolkit.  Tools with a higher success rate that require much less effort.
Behavior Science: A Powerful New Tool
About a year and a half ago I read The Power of Habit by Charles Duhigg.  It was a great book, as useful in product design and marketing as one of my older favorites, Influence: The Psychology of Persuasion.  It was eye-opening to learn that some very accomplished leaders spend the majority of their time thinking about the right habits to instill in their teams rather than who to hire, what processes to follow, or what goals to set.  I was excited to understand habit design as a new framework for performance management, a different way to evolve team culture, and a better way to achieve goals.  I smiled to discover a new movement based on skipping quarterly goals and instead focusing on the repeated activities a team needs to perform to produce great outcomes.
I also found the book meaningful on a personal level.  It made me think about my own habits and failed new year’s resolutions of Januaries past.  My prior setbacks in behavior change certainly weren't due to a lack of motivation, or a lack of planning ability, or any difficulty visualizing success.  The issue was poor habit design.
Then, last summer, I joined the team at Mazlo.me, a startup combining expert coaches and mobile technology to make positive behavior change practical for anyone who can dedicate a little bit of time each day.  With their last company, our founders helped two million people quit smoking by applying a deep expertise in behavior change.  Suffice it to say that for the last nine months I've been drinking from the firehose when it comes to what works and why for new habits.
Habits > Culture > Process 
Studying behavior science while creating a new product to help people form positive habits has led me to a few conclusions:
Designing habits is hacking brains.  UX designers, product managers, and marketers are already hacking human behavior whether or not they understand how it works.  You can do it better by learning how it works.
There are multiple techniques you can try to change people's habits, but most don't stick.  For practical purposes, there's a "right way" to do habit design.
The difference between team processes and team habits is significant.  Processes are externally imposed, and will be ignored when they conflict with culture, motivations, or daily priorities.  Habits are internal and automatic.  It's the habits that determine your outcomes. 
Over the long term, the sum of a team's habits becomes that team's culture.
Bottom line: Habits are more powerful than culture which is more powerful than process.
Learning It The Hard Way
I’ve worked with several teams to handle challenges related to product quality: Microsoft teams with high bug counts about to enter stabilization and code-chill periods, startup teams stretched to their limits to deliver apps across different phone platforms, game dev teams in which in-app purchases went offline on our least-tested app stores, and navigation apps that got overloaded during peak rush hour periods.
In these situations we threw everything but the kitchen sink at the problems: hiring more testers or co-opting temporary testers, investing in better test plans and deployment checklists, instituting sign-off processes, requiring code reviews, investing in automated unit, functional, and load tests, re-architecting the most bug-prone areas, etc.  Some of these techniques helped. Most didn't help enough.  The solutions with staying power - the ones that made a permanent change for the better - were the ones we stuck with as a regular part of our daily and weekly routine for months.  That is, the ones that became habits.
Real World Example 1: Thinking about requiring code reviews for 3 weeks to stabilize a product before shipping?  It’s better than nothing, but you’ll end up in the same situation again.  I have in the past.  Now we've gotten in the habit of jumping on code reviews in an open chat channel and rewarding the most prolific code reviewer each week with a bottle of their favorite liquor.  It's become an automatic habit that has led to a permanent culture change.  
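If you want the weekly reward to be automatic and visible rather than relying on someone to keep score, the tally itself is easy to script.  Here’s a rough sketch (not the tooling we actually used), assuming a GitHub-hosted repo; the owner, repo, and token values are placeholders.

```python
# Rough sketch, not our actual tooling: tally review activity for the past week so the
# "most prolific reviewer" reward is automatic and visible. OWNER, REPO, and the
# token below are placeholders you'd swap for your own.
from collections import Counter
from datetime import datetime, timedelta, timezone

import requests

OWNER, REPO = "your-org", "your-repo"                     # placeholders
API = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}    # placeholder token

one_week_ago = datetime.now(timezone.utc) - timedelta(days=7)
review_counts = Counter()

# Walk the 50 most recently updated pull requests and count who submitted reviews.
pulls = requests.get(
    f"{API}/pulls",
    headers=HEADERS,
    params={"state": "all", "sort": "updated", "direction": "desc", "per_page": 50},
).json()

for pr in pulls:
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews", headers=HEADERS).json()
    for review in reviews:
        submitted_at = review.get("submitted_at")
        if not submitted_at:
            continue  # skip pending reviews, which have no submission time
        submitted = datetime.fromisoformat(submitted_at.replace("Z", "+00:00"))
        if submitted >= one_week_ago:
            review_counts[review["user"]["login"]] += 1

for reviewer, count in review_counts.most_common(5):
    print(f"{reviewer}: {count} reviews this week")
```

Run something like this from a Friday chat bot and the week’s winner announces itself.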
Real World Example 2: Want to add code to track key user stats to figure out how users are progressing through your product lifecycle?  Good idea.  But because people are busy they will go back to focusing on their daily tasks pretty quickly after seeing the first few reports.  To create a permanent team focus on user funnels, we do daily reviews of user progression stats. Do this for a few months and it becomes natural and automatic.  That’s what we’ve done with an ongoing series of Mazlo pilot programs to guide our product iterations.
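To make that idea concrete, here’s a minimal sketch of what a daily progression check might look like, assuming a hypothetical events export with user_id and event columns; the funnel step names are invented for the example.

```python
# Minimal sketch of a daily funnel check, assuming a hypothetical events.csv export
# with user_id and event columns. The step names are made up for the example, and
# event ordering is ignored to keep the sketch short.
import pandas as pd

FUNNEL_STEPS = ["signed_up", "created_first_project", "invited_teammate", "upgraded"]

events = pd.read_csv("events.csv")  # hypothetical export from your analytics store

step_counts = []
eligible = set(events.loc[events["event"] == FUNNEL_STEPS[0], "user_id"])
for step in FUNNEL_STEPS:
    reached = eligible & set(events.loc[events["event"] == step, "user_id"])
    step_counts.append((step, len(reached)))
    eligible = reached  # only users who hit this step can count toward the next one

top_of_funnel = step_counts[0][1] or 1  # avoid dividing by zero on an empty export
for step, count in step_counts:
    print(f"{step:<25}{count:>7}  ({count / top_of_funnel:.0%} of {FUNNEL_STEPS[0]})")
```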
Right now you might be nodding your head, thinking “Sure, but those examples sound suspiciously like new processes.  So what’s the difference between a new team process and a new team habit?”
How to Design a Habit:
Others have written quite a lot about this, so I’ll keep it super short and add some links below.  A habit is done again and again.  A habit becomes automatic.  Many team processes are driven externally (e.g., a manager schedules a meeting and walks through a regular list of items), but habits are internal and happen without effort or attention.
To get to this magic point where an important action is habitual for team members, you have to design the habit with three factors in mind.
Trigger: When this thing happens, people will *always* do the habit.  
Action: This is the new thing that people will do.
Reward: This reinforces the habit and helps it become automatic with regular repetition.
To design a habit well, follow these pro tips:
Focus on the trigger first.  You want this to be as automatic and reliable as possible.  Ideally, it’s something that already happens every work day, or every check-in, or every new design.  Even after you think about this carefully, you will almost always have the wrong trigger the first time.  When the habit isn’t sticking after a few days, experiment with changes to the trigger. If your trigger is “the dev manager will ask each developer about their unit tests each week,” you have a lousy trigger that will make more work for the dev manager and be unreliable in the long run.  If instead your trigger is “every time someone accepts a code review, the first thing they’ll review is the unit tests,” that could actually work.
Keep the action as small as possible.   Before the new behavior becomes an automatic habit, people will think about it consciously each time and decide whether or not to do it.  You need the thing to be a small enough effort that even when people are feeling busy/overwhelmed/lazy, they still do it. If your action is “round up all of the stakeholders for a design review meeting” that’s too much work for someone to do every time they have a new design.  It will likely never become a habit.  If instead your action is “post the InVision link to the #DesignReview Slack channel,” that’s such a small thing that it could develop into a permanent habit.
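If your tools allow it, you can shrink the action even further by wiring the design tool (or a tiny script) to the review channel, so sharing a design costs almost nothing.  The sketch below shows the idea with a Slack incoming webhook; the webhook URL and the link are placeholders rather than a real integration.

```python
# Sketch only: making the "post the link" action nearly effortless by wiring a tiny
# script (or a hook in the design tool) to the review channel. The webhook URL and
# the design link are placeholders, not a real integration.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_design_for_review(link: str, author: str) -> None:
    """Drop a new design link into the channel the webhook points at."""
    message = {"text": f"New design from {author}, ready for feedback: {link}"}
    requests.post(SLACK_WEBHOOK_URL, json=message, timeout=10)

post_design_for_review("https://invis.io/EXAMPLE-LINK", "mike")  # placeholder link
```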
Summary:
At the end of the day, "The Process" as spelled out and upheld by project managers or functional managers is much less impactful than team habits.  You can adjust the intended process, but it's the habits (the automatic behaviors) that determine your success or failure over the long run and ultimately change the team culture.  
You can learn to design habits well, and you should.  Then you can use your new super power for the good of your team.
Got a story to share about process, culture, or team habits?  I’d love to hear about it in the comments.
--
-Mike  
(Photo above courtesy of Government and Heritage Library of NC and the CC license)
Resources: 
The Power of Habit by Charles Duhigg
Hooked: How to Build Habit-Forming Products by Nir Eyal
Persuasive Technology by B.J. Fogg
James Clear (on designing habits that don’t rely on willpower) 
Tiago Forte’s Introduction to Behavior Design Class
whateverneedsdoing · 10 years ago
One Size Doesn’t Fit All: 7 PM Job Descriptions Your Team Needs to Cover
Every discipline has sub fields and specializations, but PM roles are often the least well defined and least consistent from team to team. This can make it hard to figure out if product engineering teams really have all the support they need to be successful, and what jobs within the scope of “all the work that isn’t getting done right now” should become your top priorities to cover next. 
At Microsoft in the early 2000s we used to talk about how the program manager job title had been adapted to so many different kinds of jobs that we couldn’t count on a common skill set. There were program managers who figured out what new functionality to build into the marquee Microsoft products, wrote the specs, and worked with the teams to build and ship new releases. Other PMs focused more on getting content published to MSN properties, worked with scientists in the research division, worked with external standards bodies on XML and accessibility guidelines, guided internal IT projects, and implemented specific enterprise solutions for Microsoft Consulting Services. PM-like title variations across the company included product managers in the marketing department, project managers and product planners attached to product engineering teams, people focused on engineering excellence across the company, etc. The day to day PM work with standards-bodies or enterprise consulting clients was so different than designing and building the next release of Outlook, that it actually required different skills and talents for success.
Working with early stage startups is the polar opposite of the big technology company environment. The team starts small, and the company fills roles when the pain of the missing skill set or the unassigned workload gets too great. In the meantime, people “wear multiple hats” or “pinch-hit” by doing whatever needs doing, even if it’s work they’ve never done before. Consultants or outside firms are called in to fill temporary expertise and capacity gaps, and startup employees and founders constantly reach out to friends/colleagues/friends of friends to get the perspective of prior experience in the midst of tackling partial first-time roles. As the company grows and becomes more successful, a few additional roles are created, but even successful companies that have grown past the startup stage and hold valuations in the $500M+ range are likely to have gotten by without many of the specializations and distinct job functions that a world-renowned technology company has. I know a pre-IPO tech company that has gotten by for years with a few technical program managers and is just now creating a separate product management discipline – it’s not that unusual.
Given these extremes of a few technology titans with lots of PM sub-specialties and title confusion problems, versus many more companies operating daily with missing PM roles and skills, how can a product manager best find the right fit? What’s a good way to chart a PM career? How can a team best identify and prioritize the skills and contributions it may need next?
The Engineering-PM-Marketing Job Spectrum:
Here’s a tool that I created to help organize my thoughts and prioritize future hires a few years ago. When I made the first version of this table, I was responsible for both product and marketing at my startup, and I was thinking through how I could best cover all the most important work in the PM-Marketing parts of the spectrum with just a few people.  In doing so, I looked at other startups and larger companies and their job descriptions in addition to drawing from our company’s day to day gaps and needs. 
(Table: the Engineering-PM-Marketing job spectrum)
The seven PM roles within the white band above make me think of this part of the Engineering-PM-Marketing spectrum as the PM Rainbow. :-)
Note almost no product organizations have separate full time employees in all seven of these roles. But it’s very inconsistent as to which roles/responsibilities get grouped together into a single PM position. Almost no job candidates are talented and experienced in all these areas either – which means it’s especially tricky to find role-PM candidate fit, even though there are plenty of people with PM experience on their resumes.
Let’s dig into specific PM sub-roles:
1. Project Management and Process Facilitation:
Specialized job titles: Scrum Master, Project Manager.
Key responsibilities: work estimates, setting and driving to schedules, load balancing, tracking project metrics, status communications, identifying inefficiencies and correcting them, project management tools and encouraging their usage, providing the right frameworks for decision-making and resource allocation, minimizing distractions and switching costs, ensuring that all disciplines are set up to do their best work, driving tradeoff decisions on time/resources/product scope.
2. Technical Program Management:
Specialized job titles: Program Manager, TPM.
Key responsibilities: feature and technical design, specs, product/project plan communication and buy-in, answering all questions about feature details and decisions, ensuring functionality and quality meet the bar, decisions about bugs and feature changes, keeping development and test unblocked, whatever needs doing to finish the release.
3. User Experience:
Specialized job titles: UX Program Manager, Interaction designer, Information architect
Key responsibilities: innovative early designs in planning, usable UI designs for products, validation of designs (usability tests, customer satisfaction, A/B tests and metrics), management of creation/delivery of graphic design assets, ensuring consistency in designs (design guidelines), maintaining relationships with a set of customers or customer proxies for product management purposes.
4. Whole Product Management:
I’ll write a future article about this, but the concept is that every touch point between customers and the company matters from a product perspective, and a drastic improvement in any of these touch points can transform your business. E.g. Zappos and shipping – if they’d only worked on their web experience, would they have built the customer loyalty they did? Differentiated themselves the way they did?
Specialized job titles: Product Manager
Key responsibilities: Product strategy and vision, product goals, product roadmap, product and release priorities, requirements and feature set, tradeoffs, usage metrics, whole-product experience (from customer awareness, to installation, to first use, to subsequent use, customer support, payment, upgrades, etc.), understanding all facets of product value, understanding all customer needs (partners, end users, customers, etc.). 
5. Traditional Product Management (Marketing):
Specialized job titles: Product Manager, Product Marketing Manager
Key responsibilities: product positioning, value proposition, product naming, benefits statements, product marketing materials and “packaging” (e.g. website content, tradeshow flyers, sales decks, etc.), product demos at events, to customers, in sales training, etc.
6. Product Planning and Market Research:
Specialized job titles: Product Planner, Research Analyst
Key responsibilities: market sizing, trends, opportunities, and opportunity ranking/internal pitching, identifying unserved and underserved customer needs, assessing willingness to pay, and ranking opportunities, segmentation studies and evaluating go-to-market opportunities, competitive research (including category alternatives) and strategy, potential channel and partnership opportunities, maintaining outside industry and customer contacts for research purposes, contributions to company revenue growth strategy and product pricing strategy, focus groups and research into existing customers to identify best segments for future growth. 
7. Product metrics, analytics, and optimization:
Specialized job titles: Analytics Manager, Marketing Sciences Consultant
Key responsibilities: Create product metrics model, report on product metrics, identify flaws and opportunities based on metrics, A/B and multivariate tests, product funnels and funnel optimization, cohort analyses and acquisition channel assessment, etc.  
Summary:
Very few people (maybe none of us) can become experts across all the skills and sub-fields in this spectrum. But by knowing what they all are and getting familiar with what a great job looks like in each area you have the right frame of reference to lead. You can prioritize what new responsibilities to tackle yourself for the sake of your team’s success and your own personal growth. You can figure out what areas to ask your colleagues to cover for the sake of the product. Perhaps most importantly, you can rationally pick some things that just aren’t going to get done right now because it's more important to focus elsewhere. If you’re hiring, you'll want to go through a similar exercise to figure out what skills and expertise your team needs most, and how to best “cover” all the work that needs to be done to build the best products you can build.
Share your thoughts: what PM role gaps are the most important for you to fill right now on your team? What are your concerns with hiring for this or further loading up the existing team members?
--
-Mike
(Hand photo above courtesy of Rain Sun and the CC license)
(Prism photo above courtesy of Kenji Oka and the CC license)
whateverneedsdoing · 10 years ago
The 7 Facets of Product Value
A few weeks ago I read a really great article by Daniel Elizalde about building APIs and managing your open APIs just like you’d manage a first-class product. If you’re already involved in building APIs, you should read this. If you never think about APIs or how other teams or companies could build on top of what you’re building, you should read it right now, because a well-designed platform could lead to faster internal progress and increase the value of your product and company.
Ask a seasoned tech entrepreneur about how they are building value, and they’ll tell you that while the API/platform facet of their product is a part of the grand plan, there are several other facets of their product value that they are thinking about as well. Depending on the stage of the company, one or two of these facets will make up the primary strategy now, but they are all important to the future. Some of the best startup pivots are product value pivots that lead to a stronger market demand, a working business model, or faster growth without requiring completely new technologies or expertise.  
Product managers today should be versed in these 7 different facets of product value to keep the broadest perspective and be best positioned to lead their teams and companies to the most successful outcomes.
Customer benefits as product value: Marketers and MBAs learn to describe benefits rather than features in order to get customer attention, and any product with features designed to meet a need or solve a problem for the person who actually pays (the customer) has this straightforward kind of product value. With Netflix, I pay $9/month, and I can watch anything they have, any time, on any device.  Including binge-watching 5 episodes of Orange Is the New Black in a row if I want to. If Netflix didn’t have software that was easily available and worked reliably on my iPads, my Chromecast, my laptop, and my Xbox, then they wouldn’t have as much value for me, no matter how much content they created or licensed.
User experience as product value: If users feel better, happier, smarter, more competent, more connected, more entertained, more relaxed, or more excited while using a product than they do using any of the alternatives – then the product has real value in the user experience. The iPhone didn’t really give owners any unique customer benefits. It made calls, played voicemails, sent email and text messages, took pictures, and browsed the web just like BlackBerry and Windows phones that sold for $49-$99. Many of these things it didn’t even do as well. But man, did the iPhone provide a better experience than the small-screen, no-touch devices it was competing against. People paid hundreds of dollars more for an iPhone because they felt better using it.
Users as product value: We’ve all heard that “users who don’t pay for a product are the product.” This is a cute phrase that holds a gem of truth. But it’s more accurate to say “your user base always makes up part of your product’s value.” Advertising-supported media businesses are the classic example here.  TechCrunch writes articles, users read them, and advertisers are the customers who pay for access to TechCrunch’s users. Without the advertisers, TechCrunch runs out of money and dies. Without the users, they have nothing to sell, and without the content, TechCrunch’s users would quickly disappear. But free users are valuable in all other business models as well, as some percentage of them can become paying customers for other content, products, and services in the future, and zero-cost marketing channels to known market segments are very valuable indeed, even if your business would never sell this access to a 3rd party. The key in a model like this is that even as you’re focusing on meeting the needs of your paying customers to drive revenues, you can never fail to deliver on the expectations of your free users, or they will evaporate.
Content as product value: “Content is king” is another one of those entertainment business sound bites, and I agree that truly compelling content can pull free users or paying customers across brands, channels, to new devices and more – so it does seem like the head of many content-rich products rather than the tail. The main thing to realize in this post web 2.0 age is that even if you’re crowd-sourcing your content or your users are generating it as part of using your service, that this content is an independent valuable asset. Think of Tumblr, Flickr, YouTube, and Epinions. Even if they closed off all uploads and visits to their sites at 4pm today, they would still hold tremendous value in all the content they’ve collected over the years. People will still want to read/watch/look at/use that content, and the companies could continue to create new users and customers from it.
Data as product value: Big data and machine learning are high-investment technology fields right now because they have the power to enable new scenarios and transform many technologies and user experiences for the better. Now, more than ever, companies are collecting all the data they can, and storing it for as long as possible, under the assumption that it’s very hard to predict what data will be useful and what data science advances will enable in the not-too-distant future. Some of this data is related to people and remains a hot topic for privacy-related discussions, but other data is quite separate from people and their interests and habits. My last company, INRIX, is a data company at its core. INRIX constantly computes real-time traffic conditions, speeds, road closures, and incidents (accidents, construction, etc.) on 4 million miles of roads around the world. GPS companies, auto OEMs, government agencies, and media/news companies then use this data to save time, save money, and report on problems. The INRIX data ages quickly, but while it’s fresh, it’s very valuable.
Customer relationships as product value: Paying customers are valuable beyond the revenues they generate. Case studies and references from existing customers increase your credibility and help convert new leads to new customers. Cross-selling and account expansion can be a faster and more cost-effective way to increase revenues than winning more new business. Customers who play an active role in roadmap discussions, focus groups, and product tests can help you create even better and more profitable products in the future. In some cases, customers even become strategic investors or bank-roll new development efforts. Love your customers, not just your sales targets, and respect how much value they can bring if you work with them in a multi-faceted way.
Platform/ecosystem as product value: This is where we started in this article, and companies that successfully create a thriving ecosystem around their platforms are some of the most powerful and valuable in the world. Microsoft created the most vibrant ecosystem for PC software, and Windows was dominant because of this. Game consoles slug it out over exclusive games and the depth of their game title catalogues for precisely this reason – their device sales and company revenues are directly driven by game sales (game developers are both API partners and customers to game console companies). I first thought I might switch from iPhone to Android in 2010. But I didn’t do it until May, 2014. Why? Because iPhone had transitioned over 7 years from a product where 90% of its value was in the user experience to a product for which 90% of its value was in the ecosystem and rich catalogue of apps made for it. Android had grown to offering a better experience and more benefits in several aspects, but its platform ecosystem was not as strong.
Are you picking up on how all these facets are so tightly coupled? You can view them as independent assets, and emphasize one or a few of these for the main thrust of your product's evolution or growth strategy. But almost all modern technology products rate on several of these “value axes” simultaneously, and at all stages in their lifecycles. Even the simplest technology products are multi-faceted, or could be if it makes sense to expand them.
Let’s take a look at a few well-known tech products and companies and plot their product value here:
(Product value chart: Netflix)
Customers = Netflix users/consumers. They have just a few series of original content, so their strongest value is the benefits they offer: a better price point and greater convenience, plus discovery of new movies and TV shows that match consumer preferences. At the same time, without the licensed content, Netflix has almost no value - the depth of their library is still key to their value.
(Product value chart: Facebook)
Customers = Facebook advertisers. Their ad and geographic targeting system is a huge benefit to these customers. Ads made more credible by referencing friends of the ad viewers are also unique in the industry. But the biggest asset by far for Facebook is the ~1.3 billion monthly users, ~1 billion of whom are using Facebook on their smart phones. The users, their interconnections, and their communication and content-creation activities make the service indispensable for a massive cross-section of the planet. Facebook also made a great transition to a leading platform for games and apps back in 2008, and this strength generated even more usage, user stickiness, and revenue. While the strength of the Facebook web app ecosystem is waning, newer efforts in Facebook Connect and Parse/Mobile APIs are a major focus for the company. 
(Product value chart: the early iPhone, before the App Store)
Customers = iPhone buyers/users. Back in the day, iPhone had the best mobile user experience by far, and a huge amount of customer loyalty (consumers who wanted to upgrade to the next iPhone, buy the next iPad, purchase iTunes content, etc.). Until July of 2008, they hadn't yet opened up their platform or created an ecosystem of iOS apps.
(Product value chart: the iPhone today)
Customers = iPhone buyers/users. Today, iPhones are smaller, slower, and selling at a significantly lower rate than their Android competition. There's a steady trickle of former iPhone users switching over to the latest Android phones, indicating that users aren't as loyal as they used to be. But the Apple App store still monetizes better than Google Play on a per-user basis, and mobile developers are split between targeting iPhone and Android equally versus choosing iPhone as their first and best platform.
Summary:
Whether you’re defining a v1 minimum-lovable-product, or forming a strategy to grow and revolutionize a mature product and business, it's a good use of time to think through these 7 different facets of product value and use them in focusing your strategy and setting your priorities.
Share your own thoughts in the comments: What makes up the biggest % of your product's value? What should make up the biggest portion of its value in the future?
--
-Mike
(Photo above courtesy of Jason Samfield and the CC license)
whateverneedsdoing · 10 years ago
Process Mavens & Minimizers: 3 Factors to Determine the Right Process
The extent to which you focus on product and engineering processes is one of the greatest dividers in the product management field. For some PMs, whom I’ll call the “Process Mavens,” it’s the center point of their expertise. On the other side of the fence, there are the “Process Minimizers.” Process Minimizers are happy to squeak by without a well-defined process or follow a process led by someone else, as long as it produces results.
Obviously, we all fall along a spectrum between these two stereotypes. But I’d bet any PM out there $1 that if I put you to the question over a beer, you’d self-identify more strongly as either a Process Maven or a Process Minimizer.
Regardless of whether you love it or hate it, the questions of “do we need a process for this?” and “what’s the right process for this?” will often come up. The sweet spot in my experience is figuring out how to put just enough process in place to get good results. Processes can be really helpful in establishing clarity for the team, managing risks, and producing predictable results. But too much process can be de-motivating and hurt productivity for smart, creative teams, not to mention leading the majority of team members to take shortcuts and skip steps. Let’s dig into some of the key factors that will lead you to the right decisions the next time these questions come up for you.
Factor 1: How many people are involved now, and how many new people will be involved in the future?
The more people involved, the more important it is to have clarity so that the team can actually work as a team toward common goals in a collaborative manner. Microsoft got very good at planning and executing release cycles of complex, mass market software. Even in the 90s they had carefully managed planning processes, scheduling processes, engineering processes, testing processes, build management processes, and tools to support the work of thousands of engineers contributing to the same products and operating systems. Amazon and Google got very good at decoupled service-oriented innovation in the 2000s. They had tools, processes, automation, and key architectural principles that enabled scores of development teams to drive forward in a decoupled manner without destabilizing the rest of their online services. On the other side of the spectrum, many startups, game studios, and software teams rely on the judgment and oversight of a few very experienced people or simply trust in near-constant communication between the team members. Many of them make out OK without much formal process at all.
Rule of thumb: if it’s just a few people who have to do it, make the process as minimal/simple as possible. If the teams are large, invest more in making sure the process is clear to everyone, efficient to follow, and yields the expected results.
Examples of right-sized engineering processes:
Startup Weekend is a great event in which teams spontaneously assemble on a Friday evening, work all weekend, and pitch a new company with a functional product demo on Sunday evening. Teams tend to be 4-10 people. Almost all the teams manage their tasks via variations of physical Kanban boards and have multiple stand-up meetings each day.
Software teams with 15-50+ people, multiple deliverables, and external commitments to keep can be better served with a more detailed process like Scrum, which allows for some forward-looking estimation, assessments of velocity, and project progress tracking. The INRIX mobile teams deliver consumer apps on Android and iOS, white labeled apps for B2B customers, Mobile SDKs, and cloud services to support these. To support this, we organized into three Scrum teams, and managed the interdependencies between the teams.
The Microsoft division working on Office 2007 had thousands of people, and was using a milestone-based vision-spec-engineering process, with planned beta releases and stabilization periods, and lots of testing throughout the cycle. The bug management process dominated most team members’ working hours for the majority of the 2+ year software release cycle, and project risk was managed by postponing/cutting less important features as necessary. This was very thorough, very trackable, and produced shippable releases of coordinated, complex software without too many schedule slips. (Note that many Microsoft teams have adopted Agile processes since then, and just this week Satya Nadella adjusted the decision making process at Microsoft to favor speed and quality for individual engineering teams – process tuning can have a major impact at big tech companies.)
Note: it’s important to think through both the size of the current team, and the new people that will likely be on-boarded over time.  If you have 2 engineers right now, but you’ll be hiring 18 more in the next year, then it may make sense to put a little more formal process in place now than you otherwise would.
Factor 2: What will happen without a process? (aka: What are the consequences of messing up?)
Consider the likely outcomes without a process. If you’re planning ahead, you get to do this as an intellectual exercise.  Just as likely, some mistakes have already been made and the consequences felt by the time you’re looking at adding a new process to your team’s workflow.
Consequences vary, but they can be serious. For embedded software in aircraft and vehicles, communications software, and other mission-critical applications, the stakes can be as high as life and death. When there are legal requirements to be met, a lack of process could result in fines or jail time. For most tech startups, botched processes mean product slips, failed demos, service outages, and other consequences that could lead to running out of money before the company can sustain itself. At larger companies, results are measured in budget overruns and lost revenue.
Rule of thumb: if the stakes are high and the consequences are dire, you likely need an agreed-upon process, regardless of how many people are involved.
Some examples of risk-management processes:
At my first mobile startup, we would actually FedEx phones pre-loaded with our software to industry analysts, reviewers, and decision-makers at wireless carriers we were wooing as customers. At first I sent these demos out myself. I wrote a step by step process for myself, because the opportunities were too important to allow for mistakes. Later, as more people in the company shared in this task, having the process written out helped others get it right the first time and distribute the workload.
At a games company I worked at, a few of the most senior engineers did our first few cloud deployments without any agreed-upon process. Some of the deployments worked, but some had problems. After our second outage, we created a process that involved more people and had a clear pattern – deploy at night, test both new app installs and previously installed apps, then give the all clear or roll-back to the previous service.
At Zipline Games, we had our first game featured on Google Play before we’d carefully designed and tested the monetization model. Two million players installed and played the game, but revenues were disappointing. The new process became live single-market launches with small marketing efforts to attract the first few thousand players and validate the game’s success at keeping players interested and engaged as well as making money.  That worked a lot better for the second title.
For some of the marketing web sites I’ve owned, we’ve simply made periodic backups and worked directly on the live site. The consequences of an HTML mistake are usually pretty low and can be fixed in 10 minutes or less – so why treat content like code?
Factor 3: How well do we know how to do this? Have we done this specific thing before?
If you’re working with real experts, they’re highly likely to do a great job. They’ve done similar things before, and built up their skills through practice, and they’re keenly aware of pitfalls and potential mistakes. On the other extreme, when you’re about to crowdsource work to people you’ve never met (MTurkers, your own users, your YouTube audience), then you have no idea what people’s backgrounds are and whether they even have the knowledge and skills to do good work. Most of our situations will fall somewhere in between these extremes, but this is the third key consideration for figuring out the right process. 
Similarly, even if your team is set and you know everyone’s skills quite well, the times when you’re tackling a new technology platform, a new architecture, a new target customer, or new features that have never been built before call for a different kind of process than when you’re building your 10th REST API or your 20th responsive marketing website together. This is because when the work ahead is reasonably predictable, you reap the benefits of predictable schedules and trackability if you have a little more process in place to estimate and track. When the work isn’t predictable, you want to minimize wasted time for the team and unrealistic plans of the future.
Rule of thumb: the more novice or interchangeable your contributors are, the more process you need to guide, track, and validate the work. If you’re creating something brand new, your process should allow for exploration, learning, and discovery, which usually means a little less formality and trackability.
Some examples:
Software schedule and workload estimation is notoriously hard for things the team hasn’t built before. Scrum’s comparative point estimates only become meaningful as the team tackles more and more iterations of reasonably similar work. If you’re doing something brand new and without a short-term deadline, it often makes sense to work without estimates at all (e.g. with a Trello board or Kanban board), or just guess at complexity with “T-shirt sizes” for features and keep moving forward.
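If it helps to see what that looks like in practice, here's a toy sketch of T-shirt sizing: the size-to-points mapping and the backlog items are invented, and the total is only a gut-check on backlog size, not a commitment to dates.

```python
# Toy sketch of "T-shirt size" guesses in place of detailed estimates.
# The size-to-points mapping and the backlog items are invented for illustration.
TSHIRT_POINTS = {"S": 2, "M": 5, "L": 8, "XL": 13}

backlog = [
    ("New onboarding flow", "L"),
    ("Password reset", "S"),
    ("Payments integration", "XL"),
    ("Settings screen polish", "M"),
]

rough_total = sum(TSHIRT_POINTS[size] for _, size in backlog)
print(f"Very rough backlog size: ~{rough_total} points across {len(backlog)} features")
```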
When we were trying to assess the usability of our services at Ontela versus alternatives in the market, we had to recruit a lot of “average mobile users” and see how successful they were in using our service versus the alternatives. Because our pool of internet participants was global and unqualified (plus ignorant of our service), we invested a lot of thought in surveying, screening, and tasking these participants, as well as assessing their results. We’d often update the surveys, forms, and task definitions 4 or 5 times in the course of a run.
Working with professional game artists in the period when game builds are highly functional is a great example. Speed, quality, and consistency of the artwork all matter quite a bit – but because there’s a high level of expertise and lots of communication, minimal process is needed. The only process that’s really necessary is some way of tracking and ordering all the art that’s needed, plus regular feedback and guidance from the art director.
Summary:
Love process or hate it, one of the keys to getting the job done with your teams is coming up with “just enough” process for critical work. When you’re faced with the question of whether a new process is needed and what it should be – think through the key factors: How many people will be involved? What are the consequences of mistakes? How well do we know how to do this? And set up just enough process given the situation. Tread lightly and keep people focused on what they do best for the majority of their time. Just like with a new product design, “what’s the simplest process we could make?” is nearly always a worthwhile question.
Pro tip: Even if you don’t design or communicate a process for something, you do have one (consisting of what people are doing right now) – it just hasn’t been thought through, and may not be producing the results the business needs.
Pro tip: In practice, ingrained habits trump processes most of the time. I’ll cover this in a future post.
--
Share your own example in the comments. When did your team have too much process?  When did it have too little?
--
-Mike
(Photo above courtesy of Gemma Busquets and the CC license)
whateverneedsdoing · 10 years ago
Text
No Pain, No Gain: 7 Obstacles Blocking Your Path to Scrum Utopia
Six years ago, I was super grumpy about work but didn't yet know it. I joined a former colleague for drinks one night and he asked how things were going.
"Great. We just raised more funding, we're hiring more people, and our photo service is live on Verizon now." I said.
"That's terrific. And how about you? Are you enjoying it?" my buddy pushed back. 
Suddenly I felt the grumpiness bubble up. "Actually, this Scrum process is driving me crazy," I said, and went on for another 15 minutes about how hard the transition was.
Afterward, I was embarrassed that I'd turned a casual drinks meeting into a complaint session. This, at a time when I should have been thrilled about the day-to-day work, because we'd recently hired a new full-time VP of Engineering, and I finally had the time to go deeper with the product team and focus on making our products/services better.
But the pain was there, and it was real. What caused it was a sustained loss of product development pace, a feeling that the team-level competence that we'd built up over the last year had taken a big step backward, and some new difficulties in managing the PM work that distracted me from the work itself.
These issues were all real. But they were also all temporary. Four months later, I was convinced that our new processes, expectations, and habits had made us an even stronger team, with more positive leadership coming from our engineers and testers every day, and the automation habits we needed to keep quality higher while moving faster over the long haul.
Now, after working with Scrum in two more companies, and Kanban in four smaller team environments, I can say with confidence that Scrum is one of the best processes I've seen for a team of senior engineers. It works well for the business, can support high product quality, enables a positive team culture, and provides plenty of opportunity for people to grow in their contributions and careers. 
...
As I found out myself, transitioning to an effective Scrum process is usually hard for both PMs and engineers. There's real pain involved. Think of it as a "Valley of Darkness" you'll have to walk through before you make it to Scrum Utopia. But if you have a strong team, Scrum Utopia is a really great place to live.
7 Obstacles Blocking Your Path to Scrum Utopia:
[1] There will be a period (2-6 months) of reduced team productivity and predictability. Relative point estimates are different from what people are used to, and the expectations for completing the sprints are different too. This improves naturally with practice as the iterations go by, so the main thing you need to do is realize that this is normal and temporary.
If pace and predictability are not improving as fast as you expect, check on two things in particular. (1) Is your process changing too much? Until it stabilizes into a rhythm with set expectations, you won't see much improvement. (2) Is your code complexity too high relative to your automation coverage? If so, you're going to have to invest in automation to reduce risk and improve predictability.
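To make the "it improves with practice" point concrete, here's a rough sketch of watching velocity as a rolling average across sprints. The completed-point numbers are invented; the idea is that you watch the trend, not any single sprint:

```python
# Sketch: track the velocity trend across sprints, not any single sprint.
# The completed-point numbers below are invented for illustration.
completed_points = [14, 11, 19, 17, 21, 20, 22]  # story points finished per sprint


def rolling_velocity(points, window=3):
    """Average of the last `window` sprints: a steadier number for planning."""
    averages = []
    for i in range(len(points)):
        recent = points[max(0, i - window + 1): i + 1]
        averages.append(round(sum(recent) / len(recent), 1))
    return averages


for sprint, (raw, avg) in enumerate(
        zip(completed_points, rolling_velocity(completed_points)), start=1):
    print(f"Sprint {sprint}: completed {raw:2d} pts, rolling velocity {avg}")
```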
[2] It's usually very hard for senior management and external partners/customers to accept planned schedule uncertainty (e.g. the classic Agile burn-down chart), to the point where PMs and engineering managers really will have to translate all the time between the true uncertainty and a delivery date that makes sense to people outside of engineering.
If you're in an environment where you can replace delivery dates with burn-down charts, then do it. If this simply won't work, then apply reasonable schedule padding and use "must" vs. "like" story prioritization to manage the team to your committed dates. Address important bugs along the way (each iteration), rather than saving them all for the end, or you may find yourself stuck.
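For what that translation can look like in numbers, here's a tiny sketch comparing the remaining points in a sprint against an ideal straight-line burn-down. The data is invented; in practice it comes straight out of your tracking tool:

```python
# Sketch of a sprint burn-down: remaining story points vs. an ideal straight line.
# The remaining-by-day data is invented for illustration.
total_points = 40
sprint_days = 10
remaining_by_day = [40, 38, 36, 35, 30, 28, 24, 18, 10, 4, 0]  # day 0 through day 10

for day, remaining in enumerate(remaining_by_day):
    ideal = total_points * (1 - day / sprint_days)
    status = "on/ahead of track" if remaining <= ideal else "behind"
    print(f"Day {day:2d}: remaining {remaining:2d} | ideal {ideal:4.1f} | {status}")
```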
[3] It's very easy for PMs new to the process to create poorly written stories and to establish a poor balance between engineering degrees of freedom and product progress. Too much detail and too many exit criteria, and the engineers will often fail to meet all of the exit criteria for the stories (e.g. you may be adding too much complexity). Too little detail, and you may be doing weeks of "make-up stories" to get to shippable product behavior and dealing with waste in the engineering process (too little product thinking or communication).
Every product and team is different, so the best advice is to pay attention each iteration to find the right balance for your situation. At the end of every sprint, you should feel that the product has made meaningful progress that you could ship. You'll be able to tell the good sprints from the bad in short order, and tune your stories to be a little more complete or open-ended as fits the team.
[4] PMs can get so caught up in prepping stories and working with engineers to finish those stories that they fail to set higher-level release goals, business goals, context, and vision for the product being built. This can be done periodically with a planning sprint, but many people fail to do it, and actually doing it can be cumbersome and hard as well: thinking through the plan for a release 6 months from now by sorting through 250 individual stories (either with physical cards or any digital tool of choice) into a thematically coherent product is a slog.
While there are plenty of reasonable Scrum and Agile management tools out there, it's much easier to organize your high-level product thinking in hierarchical or document form, based on your business goals or end user experiences. Write these vision docs when you need to, then create the stories you need to realize the product vision. Make product release contents and goals clear as well - these days in mobile and web, there are often several releases in the span of one thematic product vision. And even though it could be a tough sell to business management, you do have to take a sprint zero every so often to really ensure that the whole team understands the product vision and personally contributes to making it better.
[5] Teams that are transitioning to and learning Scrum often fail to get all the way done with their stories, or keep accruing technical debt each iteration, which means you're not reaping the commitment, ownership, teamwork, swarming, or stability/sanity benefits of a well-run Agile process.
There are a few techniques for getting this right. The first is investing in unit tests and functional test automation as part of being done with any new features, each iteration. Generally, this should be built into every story's point estimate, and stories that don't yet have unit tests should not be presented for acceptance or demo. The second is that the Scrum team needs to really understand that success is judged by finishing stories all the way, ideally as many as have been committed to. Thus it's far better to swarm and finish the top 3 of 5 stories (100% done) than to finish 80% of the work for all 5 stories but fail to complete any of them. A quick chat about re-tasking needs to happen every day or two within the team in order to optimize for this. The last part is a cultural thing, and the hardest to achieve: the Scrum team that really kicks butt is the one that takes its story commitment seriously. Management and PM can help by asking the right questions at the right times, but it's a sensitive thing. People on the team have to feel that it's possible to meet their commitment, and feel accountable for meeting it. This gets to the root of the team culture, and it's made up of all the 1:1 interactions on the team each day.
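A quick bit of made-up arithmetic shows why swarming to finish fewer stories completely beats getting most of the way through all of them; only stories that fully meet the definition of done count:

```python
# Made-up numbers to illustrate why "3 of 5 stories 100% done" beats "all 5 at 80%."
# Only stories that fully meet the definition of done count toward the sprint.
story_points = [5, 5, 5, 5, 5]        # point estimates for 5 committed stories

swarming = sum(story_points[:3])      # 3 stories completely done -> 15 shippable points
spread_thin = 0                       # all 5 stories ~80% done -> nothing shippable counts

print(f"Swarming:    {swarming} points of shippable, demo-able work")
print(f"Spread thin: {spread_thin} points of shippable, demo-able work")
```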
[6] Engineering teams with dedicated testers will have to find a new way of working together, and often will fail to plan and pitch in in the most complementary way for finishing high-quality engineering work in the first several iterations. Teams with SDETs may find that they start to blur the lines, or possibly drop the distinction between roles altogether. Engineers who are used to working with dedicated testers may be reluctant to tackle testing for Scrum features, or might not be good at it at first.
To help overcome this obstacle, ask questions about testing all the time, call out key test cases in your stories, and scenario-test all the new features yourself. After a few months, the team should work out a new normal: new features need automated tests, and they also need thoughtful manual test cases/plans. Part of tasking is making sure that someone's assigned to create the manual test plan and do the testing while there's still time left in the sprint to make fixes and be ready to pass acceptance. Everyone on the team is responsible for quality.
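As a concrete (and entirely hypothetical) illustration of "new features need automated tests": a story for a small feature like the one sketched below wouldn't be presented for acceptance until checks like these exist alongside the manual test plan. The photo-queue function is invented so the sketch is self-contained:

```python
# Hypothetical example: automated checks that ship with a feature story.
# The queue_photo_for_upload function is invented here so the sketch is
# self-contained; run with pytest, or directly with python.

def queue_photo_for_upload(queue, photo, max_queue_len=100):
    """Pretend feature: add a photo to the upload queue, dropping the oldest if full."""
    if len(queue) >= max_queue_len:
        queue.pop(0)
    queue.append(photo)
    return queue


def test_photo_is_queued():
    assert queue_photo_for_upload([], "img_001.jpg") == ["img_001.jpg"]


def test_queue_never_exceeds_limit():
    queue = [f"img_{i:03d}.jpg" for i in range(100)]
    queue_photo_for_upload(queue, "img_new.jpg")
    assert len(queue) == 100
    assert queue[-1] == "img_new.jpg"


if __name__ == "__main__":
    test_photo_is_queued()
    test_queue_never_exceeds_limit()
    print("All checks passed.")
```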
[7] Teams will often have to experiment with iteration lengths to find one that works well for their work. Many teams find it tempting to set a long iteration period (3 weeks or more) and only partially adjust their work rhythms. While this may feel more comfortable to teams who are transitioning to Scrum, and it may also seem more efficient to have the iteration-based meetings (planning, tasking, retrospective) less frequently, I find that this makes it hard to spot and fix problems with the new process, or can lead to a hybrid or "partially-agile" process that fails to deliver the benefits you're looking for.
After trying this with my last team, I'd suggest shortening your iterations to 1 week from the beginning. This helps you really practice and perfect the processes, habits, and teamwork of Scrum, and improve more rapidly than you would with a less frequent cycle.  Problems will be super obvious in this format, and there will be fewer stories in planning and fewer, more actionable items to address from retrospectives.
...
All in all, I've never found the transitions from longer milestones and waterfall/spiral-style release schedules to Scrum to be easy, and I've seen organizations that follow a few of the trappings of Scrum (stand-up meetings, user stories) but fail to do the automation/TDD/swarming/commitment necessary to really benefit from a fully Agile process.
But for companies that stick with it, and get all the key habits right, I've seen this process make for fully committed engineers, more stable products, and definitely more agile product management.
What are your thoughts? Is your team still in a Valley of Darkness, or have you reached your Scrum Utopia?  What other pitfalls do you see teams struggling with?
--
-Mike
(Photo above courtesy of Felipe Venâncio and the CC license)
whateverneedsdoing · 10 years ago
Text
Personal Story: 1st Startup, 1st PRD.
(photo attribution)
My last post covered how to write a good product vision document, and why it's important to write one from time to time even when you're using an Agile development process.
This post is all about how I went through that process in a new company environment and new technology space several years back.
>>> cue flashback sequence >>>
It was early 2007, before the iPhone. It had been about 3 months since I'd left Microsoft for the startup world, and I was still drinking from the fire hose. I was ramping up on cell phone apps and wireless carriers as fast as I could. Meanwhile, Ontela had raised a series A venture round and we were off to the races.
In the last 30 days we’d hired a head of sales and a VP of engineering, pulled the first public-facing marketing web site together, and retained a PR firm. Board meetings were scheduled every month, and the board expected sales and product plans, and progress toward those plans.
We needed a focused product plan and schedule, and we needed the engineering team (including the new people we were hiring every few weeks) to work together toward a common vision in building our consumer-ready product & service.
While we were building our v1, the sales team would be selling the tech demo and the spec. The board wanted a PRD, and the CEO set a clear goal for me: “Finish a PRD for our first product that the head of sales agrees he can sell, and the head of engineering agrees he can build in the next 60 days.”
The only problems were that I’d never written a PRD before, I didn’t understand our wireless carrier customers yet, and I’d never designed a mobile app before.
What I did have was experience shipping a lot of software, processes I'd learned for figuring out what to do and communicating it clearly, and a love of this part of the job. So I started chugging through the work.
Starting from the basic questions Who? Why? and What?
Who is the document for? I had two main audiences at the time: company management (including sales and the board), and the engineering team responsible for building the product.
Why do they care? Management needed to understand what the company is planning to build. The engineering team needed the exact same thing. We all needed to believe that the plan would produce a product that works for the market/customers. We couldn't do everything we'd talked about in passing, so we all needed to understand which business goals were in scope for this project and which were not.
What should we build? We'd raised money and garnered initial customer attention on the impressive ease of use of our technology demo, so we knew the core use case we'd be delivering, but there were a whole bunch of unknowns about what would make for the right product that could be delivered in the right amount of time...
What #1: Form an understanding of and empathy for the market.
I was starting from a disadvantage here, and getting to the right place took synthesizing information and opinions from several wireless industry veterans, tagging along to sales meetings, watching users try out our product, and pretty much every quick and dirty customer development technique I could fit into my weeks.
Summary: For Ontela’s automatic photo uploading service in 2007, we were selling the service to wireless carriers in the United States, who at the time wanted to increase their ARPU (Average Revenue Per User) with cellular data services and keep their subscriber churn rates down. The retail sales people working in cellular stores across the country were a sub-group within our customers – even if our service added more monthly revenue to the carrier, it also had to be worth remembering for the sales rep in the midst of a phone sale. Our users were consumers who were snapping more and more photos on their phones (which more and more were being chosen for the 1+ megapixel cameras), and who in many cases never managed to save or share the pictures. It was common back then for someone buying a new phone to ask their cellular sales person if they could help transfer the photos from their old phone to their new one. This often involved removing an SD card from behind the battery inside the device.
What #2: Stay focused and narrow the scope.
We were hiring our staff as we built the product and so our people resources were very tight. We had some funding and did use some contract and off-shore staff to accelerate progress, but The Mythical Man Month aspects were real and we had to track closely for risk, quality, and process bottlenecks.
We made several scoping decisions to keep things focused, the most massive of which was targeting only the BREW platform for v1 of the Ontela service. This was very controversial, because it limited our market to about 35% of the US, and even less of the world-wide market, most of whom used a GSM network + J2ME mobile platform combination. But it was essential in that we could release our best user experience on this platform, which was our key differentiator. On top of that, it really helped focus our sales efforts to land our first few accounts.
What #3: Think through all the user interactions with the service.
For Ontela, our key user experience points were:
Awareness/purchase decision: users would be asked at point-of-sale when buying a new phone if they wanted all their photos automatically copied from their phone to their email or PC where they could decide to share/save/print them later.
First use: they would leave the store with a test photo already delivered to their email account, and every subsequent photo set up to be delivered automatically.
Ongoing use: the technology was robust against network outages and phone power cycles, and every photo taken again showed the value of the service.
Expanded use/upgrade: users could upgrade from links in any photo email message to add direct-to-PC folder downloads (much like DropBox) and/or online services (e.g. Photobucket or Facebook). We also had the wireless carrier experience to think about, including ease of service setup, billing, and product support.
What #4: Clarify the keys to the product’s success.
For Ontela, these differentiators for the users were the ease of setup, the simplicity of the product, and the worry-free operation for the whole lifecycle. Grandmas who wanted to print pictures of their grandkids from their new phones could literally forget about this service they’d said yes to when buying their new camera phone, have no idea how to find or interact with the user interface of the on-phone app, and that would be just fine – they’d get every picture from that phone delivered to their email, automatically.  It was a nice design point that the more any user took pictures, the more useful the service would be for them, and the more "happy reminders" they would get that the service was working on their behalf.
For the wireless carriers, this was the only add-on service that had to do with pictures. Whenever the service could be added to a new camera phone sale, it would boost ARPU for the carrier and provide a spot bonus to the salesperson who sold the service.
What #5: Use pictures.
At Ontela, I highlighted key goals, timelines, and the key parts of the user interaction model (as described above) in picture/chart/table form. I showed the wireframe design of the on-device UI screens, the email to users, and PC client. I created a screen flow diagram or two to make the user experience and app behaviors really clear. These diagrams covered things like first run, setup, photo uploads, and settings changes.
Bonus step: over-share and edit, edit, edit.
I went through about 10 drafts of the product document at Ontela in the course of about 4 weeks. I got regular feedback from sales, engineering, BD, the CEO, and an outside voice or two. With every draft, the document got better.
>>>
So where did I end up? Well, not surprisingly, writing the PRD had a way of absolutely clarifying the planning work that was left to do, the questions that were still open about the users and the market, the concerns the team had, the tricky technical parts, and the goals or features that could be put off until a later version or milestone. Progress sped up as the holes in the document got plugged. We had realistic schedule scoping discussions, and adjusted the plan to make the best trade-offs.
In the end, we converged about a week before the 60-day deadline. The sales head said he could sell the product. The engineering head said he could build it. And our company advisors from the wireless industry, and two board members with previous wireless exits all read the document. The consensus was that it was the best PRD they’d ever read. We gave them permission to use it as an example with non-competitive portfolio companies. But we all understood this was just one step in the journey to a successful product and business.
Pro tip: When you have moments like this in which your team and/or management fully get behind your product vision, it feels great. When you have the opposite kind of moment (say resources are cut for a product idea that you still believe has merit) - it feels devastating. That's part of the gig. Just try to always remember that while it's a very important milestone to align the team and get them all pulling in the same direction, this ultimately doesn't mean you've succeeded or failed in your role. Only the market can decide that.
Do the users love the product? Do the customers pay for the product? Those are the ultimate yardsticks of success.
--
I'd love to get comments about your stories of product plan triumphs and stumbles. Post them and we can commiserate here.  ;-)
--
-Mike
whateverneedsdoing · 10 years ago
Text
The Vision Thing and How to Communicate It
For most product leadership roles, you’re expected to own “what the team builds,” to communicate a clear and compelling vision for that future product, and both inspire and help the team to realize this product vision. Communication experts like Edward Tufte make it clear that the best way to do that is with a well-written document, so let’s dig into how to write a good one.
If you’re already thinking “long product docs are dead – everyone’s doing Agile development and user stories,” just stay with me for another few sentences before you click away to Penny Arcade. Obviously, beefy product documents are not needed every two weeks to shepherd an Agile or Kanban process. When you’re in full-on execution mode, live demos, quick design mock-ups, whiteboard drawings, or 1-page wikis get the job done better, faster, and with less wasted effort. But there are lots of cases in which a document from the spectrum of {Product Requirements Document (PRD) / Product Vision / Release Charter / Epic Description / Big ol’ Spec} is just what the doctor ordered. This is a tool you’ll need to have in your toolbox to drive clarity when the team has 5 different ideas of how the product needs to evolve, win support when the marketing team doesn’t know enough to commit a launch budget, or communicate effectively with a CEO who travels a lot.
If you ever find yourself in a similar situation, there’s no need to panic. Start from the basic questions Who? Why? and What?
Who is the document for? You may have two (or more) internal audiences: company management and other internal stakeholders (i.e. the “chickens” in SCRUM lingo), and the team responsible for building the product (i.e. the “pigs”).  You may also have an external audience (like distribution partners, a manufacturing firm, or key customers).
Why do they care? Management needs to understand what the company is planning to build in order to be effective. The engineering team needs the exact same thing. Management needs to believe that the plan will produce a product that will work for the market/customers, and that will likely meet the internal goals of the company. The engineering team needs the exact same thing. The engineering team also needs to understand the business goals related to the product – context to answer the question, “why is this product the most important thing for us to build right now?” The management team needs to understand which of all the business goals on their minds are in scope for this project and which are not.
What should we build? This is the hard question. Answer this well and fame and glory await; answer it poorly and bad things™ will happen. Your desire to be the one in the hot seat to answer this question is why you signed up for the job in the first place, right?
Here are the steps that serve me well in answering “What should we build?”
Form an understanding of and empathy for your market.
Stay focused and narrow the scope.
Think through all the user’s interactions with your product.
Identify the keys to your product’s success.
Use pictures.
#1 Form an understanding of and empathy for your market.
Figure out the different groups that matter in your ecosystem, the problems they have, the benefits you bring to users, customers, and partners, and how these benefits solve their problems. There are a bunch of techniques, most of which require you getting out of the building to watch people, talk to people, and experience things yourself from new perspectives.
#2 Stay focused and narrow the scope.
The company is counting on you to narrow the scope. Resources are always limited, and complexity does more than add schedule risk and make quality more difficult to maintain. It also increases the burden of maintaining and improving your software product/service for the rest of its lifecycle.
#3 Think through all the user’s interactions with your product.
“Whole product management,” as Ian MacMillan from Wharton described it in a leadership class I took a decade ago, is a framework that I still use today. I’ll do a separate post on this way of thinking, but for now, I’ll summarize it as sketching out the lifecycle of a new user interacting with your product, accepting that all of these interactions constitute your end-to-end user experience, and prioritizing the design of the key parts that will make your product work well as an experience and a business.
#4 Identify the keys to your product’s success and make this really clear.
What will really set your product apart from alternatives and competition? What will make the user experience delightful? Or provocative? Worth talking about? Call out just a few of these things, and emphasize them like crazy. Everything else that needs to be built to make for a complete product should be optimized for dev speed, simplicity, and ease of maintenance.
#5 Use pictures.
Sometimes pictures are worth more than many thousands of words. Describing user interactions with yet-to-be-built products is one of those times. I like to highlight key goals, timelines, and the key parts of the user interaction model in picture/chart/table form. Team members who skim the text or forget key points a few weeks down the road will often print those pictures out and keep them handy on their desk while they're coding.
Bonus step: over-share and edit, edit, edit.
You may be tempted to plug away on writing your document, stamping out the unknowns or uncertainties, etc. and hold off on sharing it until you think it’s worthy. My advice: don’t wait. As soon as it’s reasonably coherent and sets a preferred direction for the product, start getting 1:1 input from multiple team members. It will speed the process up, and get the team into the mode of helping to come up with the right product plan, just as you’ll be helping them to create and deliver and market it.
Pro tip: Agile works better with a little vision. 
Your process may be Agile, but your team needs a periodic dose of “The Vision Thing” to really gel. If you find you've been writing nothing but user stories for 6 months, and you're not periodically popping up to the level of the goals of the product, the business goals, and the vision for what you're building together, then ask a few of your team members what they think are the most important things to be doing in the next 90 days. Chances are you'll get lots of different answers, and that means your team is not operating at peak performance, and they're not 100% engaged in creatively helping to drive the product forward.
Pro tip #2: the vision doc shouldn't always be a document.
Depending on how many people you need to influence, and how directly they're interested in what you're doing, you may have a heck of a time getting all the people who should read your document to actually read your document. Last year I ran into this problem and turned to creating a 3-minute product vision video as a more easily consumable format.
--
So what do you think? Have you found time in the last 12 months to put a first-class product document together? What other artifacts and communication tools have you seen that really help fill the vision void?
--
-Mike
(Photo above courtesy of Joe Dyndale and the CC license)
whateverneedsdoing · 10 years ago
Text
The Story Behind the Name "Whatever Needs Doing"
Leadership Without Authority
If you've been a PM, you're probably very familiar with the phrase "leadership without authority." While it's your responsibility to figure out what to build and work with a team to achieve success, your team doesn't report to you and they don't have to listen to you. To achieve anything, you have to be clear and convincing in your vision and goals for the product. You have to be honest when you don't know the right thing to do, and include your team members in the process you lead in order to figure it out.
Some people think this is a great set up. You can lead a whole team to create something great without having to write performance evaluations and deal with the hassles that come with direct team management. Other people think this sounds pretty horrid. "Is every little decision going to be an argument? How can I be responsible for what other people produce if they don't even have to listen to me?"
The secret is that no one has to listen to their boss, either. There's a big difference between minimal compliance to stay employed, and the creative and energetic pursuit of the best possible product. So the way PMs lead by necessity is also the way that most creative and highly skilled functions (engineering, marketing, art, design) are led by those in authority at the world's best technology companies and startups. Whether you're a PM, a functional manager, an exec, or a startup CEO, you're going to be more successful in the role if you continue to develop your skills in getting people to believe in the mission and actually want to do the right things. If you rely on the authority of your position too often to win people over, you're going to have trouble.
Not convinced? Read the Steve Jobs biography again and focus on the passionate energy he spent in the early days convincing people of the importance of every Mac feature and design decision. Or read how Steve Newcomb delegates the most important CEO decision of all - hiring - to the members of his startup team. There are dozens of other examples I could list. Have a personal story that supports or undermines this principle? Please post it in the comments below. >>>
Whatever Needs Doing
Quick story break: when I first became a PM, my first manager (a large athletic man from Finland) described my job to me in a collection of small quips delivered over the first few months.  Feel free to imagine these being spoken in Arnold Schwarzenegger's Terminator voice, and you won't be far off.  :-)
"Other jobs have degrees, credentials, etc.  We don't, we're just people who for no particular reason are trusted to make important decisions."
"You don't have a job description.  There's no line where you get to say 'that's someone else's job.'"
"If the product is awesome, then you're awesome.  If the product sucks, then you suck."
"Always bring the bagels."
I took these quips to heart and they've always stuck with me. As a product leader, your top concerns are "are we building the right product?" and "how can I get the most out of the members of this team and their unique skills?" Sometimes you get the farthest ahead for your product by discovering just the right new feature or the design change that needs to be made to make the product more usable. More often, you have to be the enabler - moving the product forward by getting the right three engineers to hash out an interface before it's too late to test it properly, or asking a whole slew of questions about a technical design to figure out if it can be simplified so that the product can still ship on time. Just as often, you fill in whatever gaps are there to ensure the product succeeds: hiring a contract designer, negotiating with a technology partner, naming a whole bunch of asset files, or setting up another workflow tool, wiki, or web site.

I view this as the PM version of "leading by example." It's not that you want your team members to start doing all these things too; you're doing it in part to keep the engineers, testers, and designers focused and productive. But it proves day after day that you're a part of the team, that your motivation is the success of the product, and that there's no job you're "too good to do." And those are habits and attitudes you want to become a core part of the team's culture.
It certainly doesn't hurt to build a sense of camaraderie and a habit of open conversation over regular Friday bagels, either.
Whatever Needs Doing is the name of the blog because it's one of the real keys to success in product roles. Whether you're a new associate PM, an SVP of Products, or Jeff Bezos, there's some version of "whatever needs doing" that applies to your job. Maybe you could delegate it, but the right thing to do in order to stay humble and to build the right kind of team culture is to take care of it yourself, sweat the details, and take pride in the task that no one else wanted to do or knew how to do, because you helped the product along.
What do you think? What's your example of doing "whatever needs doing" and making the team and product better? Or maybe you have a story of a time when no one wanted to do something in particular and the team and product suffered for it?
--
-Mike
whateverneedsdoing · 10 years ago
Text
First Post: Who, Why, What?
The best way I can think of to kick off a product leadership blog is with three of my favorite product questions applied to the blog itself.  
I ask these Who, What, and Why questions almost every time I'm considering a new product, feature, or project - and one of the keys to answering them well is distilling the answer down to something people can immediately understand and use.
Who is it for? (aka: the target market, and the user problems)
Whatever Needs Doing is a blog for:
People working as PMs who want to keep improving their products, their skills, their breadth, and their scope of responsibility.  
People who want to enter the field for the first time, or who are thinking of transitioning from a fully technical role to a PM role.
Business leaders who may need to hire someone focused on product, and want to know how to make the right hire and define the PM role and org as distinct from R&D, engineering, and/or marketing.
Designers who want to broaden and develop product chops to complement their UX skills.
Engineers who want to better understand what their PMs do, how to work with them better, and if they're doing a good job.
Why should they care?  (aka - the benefits to the users)
Readers will get these benefits from reading:
Every post here will be something product leaders can apply either to their own products or skill development: lessons learned, popular product case studies, quick and dirty startup-style experiments and validations, tools, & techniques. 
Posts will be constructed to be easy to read and remember. Sure, I'll share stories that illustrate a point well, or add a bit of humor, but every post will distill, distill, distill because you are busy and you already have a lot on your mind.
Readers will get useful tools when I have them, not just blog posts. Demographic question blocks for surveys, PowerPoint templates for wireframing that work as well as many dedicated wireframing tools, brainstorming spreadsheet scripts.  I'll share.
What are the goals? (internal business goals & product goals) 
I will count Whatever Needs Doing as a success if it succeeds in:
Producing relevant, useful content for this audience and reaching a readership of 1,000 readers/week by the end of 2014.
Generating an active and engaged community to discuss PM topics and help PMs with tough problems within 12 months.
Facilitating the improvement of at least three live products in the first 12 months, thus making the world just a little bit better.  :-)
What do you think?  Have you thought carefully through these three questions for your own product recently (like in the last 30 days)?
Do you still believe that the team's assumed answers to these questions are the right answers, and that there's some data to back those answers up?