
Here’s what happens when engineers experiment like scientists

Presented by Adobe 


In the engineering world, companies are all about deadlines. Development teams typically focus on meeting their deliverables, hitting their target market, and getting the right feature set.
These teams tend to be heavily invested in engineering methodology — particularly agile processes: working in two-week sprints, building larger schedules based on past “project velocity,” retrospecting on the process, and ideally getting better at delivering scope and schedule with each iteration.
The world I work in now, and one that many of us are heading toward, is a world of uncertainty, with much bigger potential gains and much more potential for failure. Project velocity, schedule estimation, and burn-down charts have been replaced by objective functions, experiments, and instrumentation. Many of the experiments will fail, but each one increases what we know, and with it our chance of achieving the audacious goal we set at the beginning.
The world of science can still be agile: we definitely improve over time, but what will be hard, if not impossible, is making a meaningful schedule. You cannot plan on experiments succeeding the way you can plan on sprints. Rather, you must run the experiments, and plan on acquiring knowledge about what sort of worked, until you find out what does work.

The problem with bugs

Engineers and scientists think about deliverables and deadlines very differently. Engineers tend to be focused on eliminating as many bugs as realistically possible before the deadline, in order to get to a minimum viable product — while scientists, by contrast, want to understand their product as thoroughly as possible, and will continue to improve it until it does exactly what it’s trained to do, no matter how long that takes.
When an engineering program manager asks me, "What's your bug ramp?" my response is, "Here is the current performance on the objective function. You can see that we're currently at 80 percent, but we need to be at 94 percent or higher to have a viable feature."
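To make that concrete, here is a minimal sketch of that kind of status report; the function name, threshold, and numbers are hypothetical, not Adobe's actual tooling. Progress is expressed as an objective-function score measured against a viability bar, rather than as a count of open bugs.

```python
# Minimal sketch (hypothetical names and illustrative numbers): progress is
# reported as an objective-function score against a viability threshold,
# not as a bug count.

def report_progress(current_score: float, viability_threshold: float) -> str:
    """Summarize where the model stands relative to the bar for a viable feature."""
    gap = viability_threshold - current_score
    if gap <= 0:
        return f"At {current_score:.0%}: at or above the {viability_threshold:.0%} bar."
    return f"At {current_score:.0%}: {gap:.0%} below the {viability_threshold:.0%} bar; keep experimenting."

print(report_progress(0.80, 0.94))
# -> At 80%: 14% below the 94% bar; keep experimenting.
```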
If this sounds like an impractical way to approach product design, keep in mind that this iterative scientific process doesn’t have to stop at the First Customer Ship. Amazon Alexa, for example, continues to learn, iterate, and improve even as she’s in active use in millions of customers’ homes.
Users don’t think of Alexa’s occasional misunderstandings as “bugs” — they just think, “Well, this month Alexa got 99 percent of my commands right, and that’s pretty impressive.” Meanwhile, the scientists at Amazon are asking, “What can we do to get Alexa’s accuracy to 99.5 percent next month?” It’s not about whacking bugs; it’s about reducing the error rate by a small margin with each iteration.
A/B testing is a familiar example from the world of digital marketing. Each A/B test helps the marketer arrive at a better message and better conversion. Change the color of a button or the wording of some text, and your clickthrough rate spikes or drops in response.
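As a concrete illustration, here is a minimal sketch, with made-up traffic numbers, of how an A/B test readout might compare the clickthrough rates of two variants using a standard two-proportion z-test.

```python
# Minimal sketch of an A/B test readout (illustrative numbers): compare the
# clickthrough rates of two button variants with a two-proportion z-test.
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)          # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))                    # two-sided p-value
    return p_a, p_b, z, p_value

p_a, p_b, z, p = ab_test(clicks_a=480, views_a=10_000, clicks_b=560, views_b=10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z={z:.2f}  p={p:.3f}")
```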
But while A/B tests are amazing, they tend to offer very specific incremental improvements around messaging, or possibly UI constructs. They don’t help create what Warren Buffett calls a “moat” — a killer app or other business innovation that’s nearly impossible for competitors to tackle head-on. To build your moat, you need to approach hard technical problems using a more open-ended experimental framework.

From deadlines to experiments

Of course, engineers iterate, too — but the key disconnect between science and engineering is that scientists regard each iteration as a set of experiments, not as a viable set of feature improvements. They’re willing to run many experiments in parallel. And instead of regarding errors as bugs or failures, they see them as useful data points for the next round of learning.
Scientists know that greatness doesn’t adhere to their schedule. You can measure and track your progress, sure — but beyond that, everything depends on the results of your experiments, and how you respond to them.
What’s more, traditional engineering concerns like bug counts, burn-down charts, and code quality aren’t as relevant in a science context — especially when we’re dealing with machine learning, which is increasingly the norm. An ML-based feature might be only a few dozen lines of code wrapped around a model that took many months to train. We still write code, but the really hard part of our work is buried in our models, which are in turn derived from our data.
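To illustrate how thin the shipped code can be, here is a hypothetical sketch; the file name and the model's predict() interface are assumptions for illustration, not a description of any real Adobe system. The few lines below are the whole feature, while the months of experimental work live inside the serialized model.

```python
# Minimal sketch (hypothetical artifact and interface): the shipped feature is a
# few lines of inference code; the hard work lives in the trained model file.
import pickle

def classify(document_text: str):
    with open("trained_model.pkl", "rb") as f:   # model artifact produced by months of experiments
        model = pickle.load(f)
    return model.predict([document_text])        # assumes a scikit-learn-style predict() interface
```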
In other words, learn to live without the deadline, and focus instead on which experiments in your “portfolio” pay off. A lot of those investments won’t pay off, and that’s fine too. That’s not failure — that’s learning!
Science teams still require a meaningful set of key performance indicators (KPIs). In the case of Alexa, one KPI is her speech recognition error rate. For a commerce site, the crucial KPIs might be clicks and conversions. Ocean Spray, meanwhile, is known for its delicious cranberry juice, not for its data science — but for its $10 million research department, the biggest KPI might be marketable data on cranberry juice’s antioxidant properties.
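For the speech recognition case, the standard way to compute such an error rate is word error rate (WER): the word-level edit distance between what was said and what the system heard, divided by the length of the reference transcript. Here is a minimal sketch with an illustrative example.

```python
# Minimal sketch of a speech recognition KPI: word error rate (WER), computed as
# word-level edit distance divided by the number of words in the reference.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("turn on the kitchen lights", "turn on the kitchen light"))  # 0.2
```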
It’s time to unsilo the science and engineering departments and get their complementary skill sets and approaches working together. Engineers stay focused on deadlines and debugging, while scientists focus on experiments, data gathering, and open-ended iteration. Shifting to an experimental focus requires a lot of culture change, but the payoff will be rich, hard-to-replicate features that customers will delight in. Losing a little thing like schedule certainty seems like a more than reasonable trade-off, doesn’t it?
Adobe relies on this methodology with Adobe Sensei, our artificial intelligence platform, which helps drive innovation across Adobe Creative Cloud, Document Cloud, and Experience Cloud. Learn more about Adobe Sensei here.
David Parmenter is Director of Data and Engineering, Adobe Document Cloud.


Source: VentureBeat
