In our last post we reviewed Intervention Point 5 of the 12 Intervention Points in a System. By addressing Intervention 5: The rules of the system, we were able to increase both the scope of the workflows covered by our approach and the range of team members capable of contributing to those workflows.

In this post, we will examine how the interventions we have implemented thus far have set us up for Intervention Point 4, and what we can do to further optimize as we go deeper.

All of the interventions we have leveraged thus far have been building to Intervention 4: The power to add, change, evolve, or self-organize system structure. The checklists and supporting documentation, the continuous improvement program, the rules by which to uphold our approach… all of these elements come together to create something powerful: a learning system.

4. The power to add, change, evolve, or self-organize system structure

Checklists and documentation establish a foundation for an effective system; continuous improvement delivers insights that drive the elimination of errors; the structure of information flows hosts and disseminates those changes; and as a result of these elements coming together, our approach is now self-organizing against errors. In other words, our system is learning how to eliminate errors.

Now that we have this structure in place, what can we do to get the most out of our learning system? Let’s start by looking inward…

Looking Inward

The first and simplest action we can take to optimize our system is to give it the ability to run A/B tests: in layman’s terms, changing one part of the workflow to see whether the original “A” approach or the new “B” approach performs better.

The current build of this continuous improvement program is designed strictly to eliminate errors, which in turn creates efficiencies through input accuracy, but it won’t drive innovation. Adding an A/B testing program fills that gap, giving the system a way to innovate and further improve the workflows.

There are three sources of data that the A/B testing will draw on: 1) the error log that is part of the continuous improvement program, 2) the machine time (i.e., the time a processing engine spends running), and 3) the human time it takes to complete each workflow.

With these data points available, we can compare the results of the original workflow “A” against the proposed new workflow “B” to see whether the new workflow is faster without creating more errors than the original. Keep in mind that the proposed new workflow should be run through simulations rather than live data, so that we don’t introduce unforeseen errors into live case data.
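To make that concrete, here is a minimal sketch of how such a comparison could be wired up. The record fields and function names are hypothetical, not part of any specific tool: each run captures the three data sources above (logged errors, machine time, human time), and workflow “B” is accepted only if its simulated runs are faster overall without raising the error count.

```python
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class WorkflowRun:
    """One measured run of a workflow. Field names are illustrative assumptions."""
    errors: int           # errors captured by the continuous improvement error log
    machine_hours: float  # processing-engine (machine) time
    human_hours: float    # human time to complete the workflow

def totals(runs: Iterable[WorkflowRun]) -> Tuple[int, float]:
    """Sum error counts and total hours (machine + human) across a set of runs."""
    errors = sum(r.errors for r in runs)
    hours = sum(r.machine_hours + r.human_hours for r in runs)
    return errors, hours

def b_beats_a(runs_a: Iterable[WorkflowRun], runs_b: Iterable[WorkflowRun]) -> bool:
    """Accept workflow B only if it is faster overall and no more error-prone than A.

    runs_b should come from simulations, not live case data.
    """
    errors_a, hours_a = totals(runs_a)
    errors_b, hours_b = totals(runs_b)
    return hours_b < hours_a and errors_b <= errors_a

# Example: "A" measured from production, "B" from simulated runs of the proposed change.
a = [WorkflowRun(errors=3, machine_hours=4.0, human_hours=6.0),
     WorkflowRun(errors=2, machine_hours=3.5, human_hours=5.5)]
b = [WorkflowRun(errors=2, machine_hours=3.0, human_hours=5.0),
     WorkflowRun(errors=1, machine_hours=2.5, human_hours=4.5)]
print("Adopt B:", b_beats_a(a, b))  # True for this illustrative data
```

The acceptance rule here is deliberately conservative: a candidate workflow has to win on total time while holding the error count at or below the baseline, which mirrors the goal of improving speed without undoing the error-elimination work.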

Gone are the days of implementing something new and relying on the hypothesis that it’s an improvement. Adding A/B testing to our continuous improvement program allows us to test new human processes and new software against the current baseline in order to constantly optimize our approach.

Looking Outward

Now that we have intervened internally in order to improve our continuous improvement program, we can look to augment the learning system by exploring external opportunities.

The obvious external opportunity here is to expand the learning system to clients and outside vendors: continuing to eliminate errors while the workflow is outside the organization stops errors from sneaking back in when the work returns.

The less obvious external opportunity is to include your competitors in a move called co-opetition. It might sound a bit crazy to join forces with the enemy, but really, who else is going to be performing the exact same workflows as you are? Sharing the learnings from your learning system with a competitor’s will push your continuous improvement program forward faster than your organization could alone. And of course, we can always outline a set of guidelines to ensure that giving away one competitive advantage creates a new one, such as requiring that information generated by a learning system is only shared with competitors who run a compatible learning system of their own (no freeloaders).

As you exchange information, you may not be getting ahead of your cooperating competitors, but you are all getting ahead of the non-cooperative ones: if there are 10 firms in your market and you pair up with 2 of them, your cooperative learning system is going to differentiate the 3 of you from the other 7. The pond just got smaller, and now you can focus on what you do best: being a kickass law firm, not a technology company (while your non-cooperative competitors are still trying to be both). Co-opetition might seem a bit unusual compared to business-as-usual, but powerful interventions tend to look that way.

The learning system that we have been building over the previous 8 intervention points is powerful in itself. Analyzing how it can be improved via Intervention 4: The power to add, change, evolve, or self-organize system structure, takes it to another level. Once these improvements are implemented, the impact of the peaks and valleys of eDiscovery workloads starts to flatten quickly.

Next post, we will take a look at Intervention 3: The goals of the system, in order to avoid a catastrophic mistake that could derail all of the work done to ease the pain of workload surges, while simultaneously augmenting the effect of previous interventions. Stay tuned…
