Episode Transcript
Richard: Welcome to the Humanizing Work Show. I'm Richard Lawrence here with Peter Green. If you start an initiative the way most organizations do, you're at huge risk of wasting time and energy and losing credibility. We've developed a systematic way to prevent that from happening though, and on today's episode, we'll walk you through our approach, so you can get to value, learning, and risk reduction early in any project instead of waiting until it's too late.
Peter: I've been so happy that we have this approach, Richard, because over the last year, I would say probably half a dozen clients have brought us in where they've got that big initiative all planned out, or they're in the very early stages of sketching it out, and want some advice.
These projects have a lot of similarities. There are lots of different systems they're trying to integrate, with data from different places that needs to be conformed, de-duplicated, and cleaned. There are dozens of features they consider required just to get to what they're calling an MVP, a minimum viable product. And they're using multiple vendors, different departments, and multiple teams to work on different parts of the system.
And there are lots of untested assumptions about which parts of the initiative really matter. What are customers gonna use? Where's the value? What's never gonna get used? What should we not build because it's not important or it's not gonna work? Thankfully, we've been able to jump in and help them avoid what we usually see here, which is going to one of two places. One is comprehensive analysis, which some of these clients have done: they've mapped everything out, they've got big architectural diagrams, they know exactly where the data lives and what it's gonna look like when it's done. The other is a quick-win approach, where they say, well, we know we're gonna need a new data warehouse, so let's fire up the new data warehouse and start figuring that part out, and once that data warehouse is done we'll be well on our way to success.
Big initiatives often start with one of those steps, and it's the wrong first slice. It's either too big, with multiple teams working on different pieces; too obvious, those quick wins that don't actually teach you anything; or too haphazard. Sometimes organizations say, well, let's just pick something and get started.
And in a recent newsletter called How You Start is Everything, Richard, you wrote that early work on an initiative should do five things. Would you give us a quick summary of those five things?
Richard: Yeah. Early work on any big thing should give you value, should give you some risk mitigation. It should be less risky once you've done your early steps. You should get some learning. You should know more than you did when you started, instead of just confirming the things you already believe. It should create motivation because people are making meaningful progress. And it should get you credibility as other stakeholders see that you're getting meaningful things done.
And the path these clients were taking on their big initiatives was unlikely to give them any of the first three for months or quarters. There wasn't gonna be a lot of value if you're just pulling things together into a data warehouse nobody can see. They weren't really reducing their risk about building the right thing and having it be useful, and they weren't learning a whole lot, especially on the customer side of things.
And motivation and credibility would only really come by proxy. We're getting something done, so that feels motivating. We can tell our stakeholders, oh yeah, things are green, we're getting stuff done. But it's not real. We're not showing real value, risk mitigation, and learning to earn that motivation and credibility.
Peter: Yeah, when we've seen clients take this other approach, often the first slice takes way too long to get to any kind of value, and sometimes things have changed by the time that they actually deliver that. So it's like, you hit the expiration date on when that feature would've been useful.
Sometimes that big thing requires these huge dependency maps and coordination between teams. Check out our last episode on how to reduce the pain of dependencies if you want some advice on that and you can't change the team structure.
Often we find that stakeholders think they're aligned once they have this big plan going into it, only to discover way too late that they actually had a different understanding of what was important.
And then sometimes teams get stuck between those two sides, where you have some people advocating to do more and more analysis to be more and more certain, and other people saying, come on, we just need to get started. You start to get tension on the team between those two poles, when there really is a third and different way to do it that's better.
And that's where Feature Mining comes in. Feature Mining is what we named this technique many years ago. We've now incorporated it into CAPED, and we teach it in all of our product classes. Anytime we're teaching somebody how to get started, we teach the Feature Mining approach.
So, Richard, you invented this years ago. Why don't you give us a quick overview of where this came from, what problem you were trying to solve, and what benefits you saw early on.
Richard: About a decade ago now, I was working with a couple of clients who were starting many new initiatives, and they'd gotten some value out of my story splitting patterns for things where they were building on top of an existing foundation, like a second release of a thing. And they both were asking, how can we get some of the benefits we're seeing with small, valuable stories, but earlier and at a higher level? The first feature, the first release, rather than the first little story.
So we started working through how to come up with early slices that get you value, learning, and risk mitigation, experimenting with different approaches and figuring out what information and which participants needed to be in the conversations.
And the result was what we came to call Feature Mining. A side note: I slightly regret calling it Feature Mining. Those early projects were really software heavy, so the first slice was almost always the first feature. As we've used this more and more, we've figured out that it's really not about features.
It's about designing probes for big, complex things. How do we go straight for the core complexity of a complex thing? And that probe is not always a feature. If you're doing an organizational change and it's complex and you want to probe that complexity, your first slice is probably a pilot of the organizational change and not really a feature at all.
But by the time we realized that, we were several years in, and everybody was calling it Feature Mining. So Feature Mining it is!
Let's talk through an example. I'll use an anonymized composite of a few real examples from different clients to illustrate this. So imagine we've got a big retailer that does sales reporting for store managers once a month. If you're a manager in one of these stores, you find out in mid-January how things went in early December. And of course early December is a huge time for retail, and finding out in January that a certain thing wasn't selling isn't all that useful.
They wanted to move towards: every Friday morning, if you're a store manager, you come in and get an update on how sales are going that week, going into the weekend when things are gonna pick up. And you can move the right things to an end cap, run a special, whatever you do to increase sales there. So it was a move from monthly sales reporting to weekly sales reporting.
The traditional way they would've handled this was to figure out all the changes to the data warehouse and all the changes to the reports, to plan out the whole thing. And they were fairly agile about their delivery, so they would've iterated on that, but they wouldn't have experienced any of the value, or really learned whether they were doing it right, until it was mostly all done and they got that first release. What big companies often call an MVP now: that first release that will disappoint everybody, but equally.
So here's how Feature Mining would work on that. The first move is to get the right people in the conversation. You need several perspectives for this. You need people who understand the problem space: what problem are we trying to solve, and why is that complex? And you need people who understand the solution space: what do our solutions look like, what is our technology, and what's complex about that? So you get that group together. That's often product people and technical people in a software organization, but it could be a broader mix of people with a business perspective and a solution perspective.
Name the thing you're talking about, so you can have a focused conversation about it. In this case, something like weekly sales reporting.
And then the bulk of your conversation is getting information out of people's heads and into a shared visual around four topics.
Number one, the first list you're brainstorming, is: what's the impact? What are we trying to accomplish here? Often a big source of complexity is what's new, and the impact we're trying to create is gonna be tied to that newness.
So what are we trying to accomplish here for this weekly sales reporting? It's things like store managers can respond faster to changes in sales.
Second list: what makes it big? Why is this a big effort? Why is it gonna take a lot of time? Most of what goes here is: what is there many of, what is there much of, what's always time-consuming?
Training the stores is gonna take a long time. Maybe we just have a lot of data. We have different ways of calculating sales in different regions. You brainstorm out all those things. These are mostly things that are known and analyzable, so they're complicated rather than complex in Cynefin terms.
And then two more lists. Risks: what could go wrong that would cause us to fail? We describe those as things going wrong, like, we build it and they don't use it.
And uncertainties: questions we need to answer to be successful, like, how should the weekly report look different than the monthly report? What kinds of things do they wanna do on a weekly basis that they might not do on a monthly basis? We capture those as questions we need to answer. Risks and uncertainties can be technical, or they can be business or customer oriented. We're trying to pull all that together so everybody can see it.
Then in each of those four categories, impact, bigness, risk, uncertainty, the group figures out what's most important. What's the key impact we're going for? What most makes it big? What's the scariest risk? What's the most important question to answer? We often do that through dot voting.
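To make the shape of that board concrete, here's a toy sketch in Python. All the class names and sample items are invented for illustration, loosely drawn from the weekly sales reporting example; Feature Mining itself is a facilitated conversation around a shared visual, not software.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    text: str
    votes: int = 0  # dot votes from the group

@dataclass
class FeatureMiningBoard:
    # The four brainstormed lists
    impacts: list[Item] = field(default_factory=list)
    bigness: list[Item] = field(default_factory=list)
    risks: list[Item] = field(default_factory=list)
    uncertainties: list[Item] = field(default_factory=list)

    @staticmethod
    def winner(category: list[Item]) -> Item:
        # The most-dotted item in a category is that category's winner
        return max(category, key=lambda item: item.votes)

# Illustrative entries from the weekly sales reporting example
board = FeatureMiningBoard(
    impacts=[Item("Store managers respond faster to changes in sales", votes=6)],
    bigness=[
        Item("Different regions calculate sales differently", votes=5),
        Item("Training all the stores takes a long time", votes=2),
    ],
    risks=[Item("We build it and they don't use it", votes=4)],
    uncertainties=[Item("How should the weekly report differ from the monthly one?", votes=3)],
)

print(board.winner(board.bigness).text)
# -> Different regions calculate sales differently
```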
Pick your winners, and then here's the key move. Once you have those winners, we're going to talk about them in a structured way to brainstorm some smaller slices that could get us some impact, some risk mitigation, some uncertainty resolution or learning, without all that bigness.
And you do that by putting some of the winners together. We're gonna say, how can we get some of that top impact without having to take on all that top bigness? So for the weekly sales reporting, it might sound like: how can we begin to respond faster to changes in sales than the current monthly reporting allows, without having to take on all the different regions and their different ways of reporting sales?
And then you ask, what if we just...? We could just do it for one region at first. We could just pick the most common way of calculating sales across the regions. We could just do it for one store and see what happens there. So we're slicing through that bigness in ways that still preserve the impact, and this is how you avoid those quick wins that aren't really wins.
Then we're gonna do the same thing with risk: how can we begin to mitigate that top risk without taking on all the bigness? What if we just... And then with uncertainty: how can we start answering that question without taking on all the bigness? What if we just...
And then I'll usually ask, how could we make it even smaller still, without giving up some of those things we care about? What if we just... 'Cause sometimes there's just arbitrary bigness in there that we can carve out. Only do it for certain kinds of customers, or certain kinds of products, or whatever is relevant in the context. There are usually ways to make it smaller still that don't show up in the first pass.
And then once you've got all those "what if we just..." ideas brainstormed, everybody can look at that list and start proposing some interesting combinations. Like, what if we just do it manually for four weeks for this one store, paper-prototype style? Get it in front of them. That store will get a benefit from it, and we can watch how they use it, interview them, and learn more about that uncertainty around the content of the report and the user experience of the report.
That's gonna be a really nice first slice that's going to address some uncertainty. It's gonna get a little bit of value, but it's actually quite small. And it doesn't really look anything like the so-called MVP that they were going for. It's not, pull all the data together, make it work for all the stores, but only a little bit so nobody's really happy.
It's a lot of value for a narrow slice and a lot of learning for a narrow slice. And then of course you'll iterate from there to make it work for more stores and more variations and all that. But it's a way to go straight after the core complexity.
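Here's that slicing move in the same toy form: each category winner gets paired against the top source of bigness to produce the structured "what if we just..." prompts Richard describes. Again, the function and strings are illustrative, a sketch rather than any actual Humanizing Work tooling.

```python
def slicing_prompts(top_impact: str, top_bigness: str,
                    top_risk: str, top_uncertainty: str) -> list[str]:
    """Pair each category winner against the top bigness to frame small slices."""
    return [
        f"How can we get some of '{top_impact}' "
        f"without taking on '{top_bigness}'? What if we just...",
        f"How can we begin to mitigate '{top_risk}' "
        f"without taking on '{top_bigness}'? What if we just...",
        f"How can we start answering '{top_uncertainty}' "
        f"without taking on '{top_bigness}'? What if we just...",
        # The final pass: carve out any remaining arbitrary bigness
        "How could we make it even smaller still, without giving up "
        "what we care about? What if we just...",
    ]

for prompt in slicing_prompts(
    top_impact="Store managers respond faster to changes in sales",
    top_bigness="Different regions calculate sales differently",
    top_risk="We build it and they don't use it",
    top_uncertainty="How should the weekly report differ from the monthly one?",
):
    print(prompt)
```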
Peter: It strikes me as you describe this, Richard, that the biggest risk is usually business related. It's, will customers want it? Do we know the right thing to build? We have a tendency to start with the technical things, which are maybe still complex, but not as big a risk to the project. If you could build the fanciest, fastest, immediate sales report, where anytime somebody scans an item at the store it immediately generates a new report and everybody has it, but the format of that report isn't useful, it doesn't matter, right?
So it really does ask, what's the most important thing to do? And in this example, that happened to be customer and end-user related. We have seen examples of this where the biggest complexity really is technical.
Richard: Like, can we solve this problem?
Peter: Yeah, exactly. You end up with a slightly different mix of "what if we just..." And then, pretty quickly, if it's feasible to do technically, we get right back into the business questions.
Richard: And for this team, that's exactly what happened. The first complexity was behind that initial question: would it be beneficial? Once they got evidence around that, the very next thing was, can we optimize the performance of the reporting so the reports can actually run overnight every week? They weren't sure they could. So they did have a technical complexity right behind it.
But optimizing the performance when nobody was going to benefit from the more frequent reports would've been a waste of time. So we almost always do customer complexity first and then technical complexity.
Peter: As you've used this over the years, Richard, what are some of the big benefits? Obviously it's stuck around, we teach it all the time. We see benefits all the time for clients. What are some that stand out?
Richard: One of the happy accidents of those first two clients is that they both had strong consensus cultures, so these meetings were way bigger than I initially would've wanted them to be. But the happy accident part is that every stakeholder around these things felt heard, like they had input to it. And it created a lot of alignment for all of them. Everyone understood why we were starting the way we were, even if it wasn't the thing they originally expected.
And a great example of that was where two different stakeholders were both excited about a new initiative. Once everybody brainstormed around impact, we realized they were excited about wildly different value propositions. They wouldn't have discovered that misalignment until much later, because everybody felt so positive.
Getting that clear early, and agreeing we're starting with a focus on this one and not that one, wasn't what that second stakeholder wanted as much, but it avoided some misunderstandings and pain later.
Peter: That's been reinforced a little bit recently for us, Richard, because I think you and I think about the big benefit as early value and learning, a clear place to start that tackles that core complexity.
But we've had multiple clients recently say, yeah, when we did this, everybody left the room feeling so much more aligned. And a client actually hired us recently, saying, the sponsor of this project and the teams working on it keep drifting out of alignment. Can you come in and do Feature Mining so we can get them actually aligned?
So there are clients that are using this primarily as an alignment tactic.
Richard: Right. There are two other benefits we've seen that might be interesting to some of our listeners. One is clarity about whether there's actually a return on investment for a thing. If you're in a context where the loudest voice always wins, or the most senior voice always wins, and they're not always good ideas, this can be a way to make it less about the people and more about the idea, by getting it out on the board for everybody to talk about.
And I've seen bad ideas killed 20 minutes into a Feature Mining conversation with good feelings about it. Everyone's aligned: oh yeah, this wasn't a good idea, let's not do it. Way better to discover that there than months down the road on the project.
And then another benefit that some of our listeners may appreciate is that just going through cycles of this gets the organization thinking in a more agile way. People start being more interested in small slices of value and learning from the get-go, and that creates support for a whole bunch of other agile and iterative practices that are really useful.
Peter: Because this has been such a useful approach, we teach this in lots of different formats. We teach it onsite with clients, and we facilitate Feature Mining live, like we mentioned. You can also learn how to facilitate Feature Mining in our 80/20 Product Backlog Refinement course, which is a self-guided online course. We'll drop a link to that.
We also teach it live in our CSPO and A-CSPO classes, and it's a core part of the CAPED process in the active planning phase. So we'll link to all of those opportunities to learn more about it.
You can also hire us, like many of our clients have, to facilitate a Feature Mining session on your next big strategic initiative. I'll be doing that in a couple of weeks actually, flying out to help a client do that.
Richard: And if you got value from this and you enjoy our show, if you're watching on YouTube, would you do all the things that people ask you to do? Subscribe to the channel so you get new updates from us. Click the bell to get notified. Give this episode a like so other people can find it, and you'll make the YouTube algorithm happy.
And if you've used Feature Mining before, or if you have something where this sounds really compelling, drop us a comment and tell us and other viewers about it.
If you're enjoying this via the podcast, would you get on your favorite podcast platform, give us a five-star review, and tell people how the show is useful to you? That'll help more people who'd benefit from the show find it.
Peter: And thanks for tuning into this episode of The Humanizing Work Show. We'll see you next time.