This article originally appeared on behalf of the Forbes Technology Council, a community for world-class CIOs, CTOs, and technology executives.
When it comes to artificial intelligence (AI), and machine learning in particular, the way we engineer software is fundamentally changing. Traditional engineers didn't have to contemplate the idea of software needing to "learn" in order to be useful. We defined the "rules" we wanted to account for, hard-coded them into the applications we built and released them. Then, we iterated and improved on them in a continuous cycle.
This is different with AI. Rather than hard-coding rules into applications, AI products rely on training data in order to work. For example, when GPS applications first came out, they changed everything — goodbye paper-based maps! About a decade later, the navigation app Waze redefined that experience yet again. Waze figured out that by aggregating data from all of its users, it could not only tell one user where to go next but also the fastest way to get there, and update those recommendations in real time.
As we got smarter at building software applications, we learned that development practices like the waterfall model don’t work because they don’t contemplate the user nearly enough in the software development lifecycle. By the end, users likely have new requirements. So, we’ve moved to new approaches, such as those made famous in books like The Lean Startup. While people today challenge concepts like the “minimum viable product,” the ideas are absolutely right: Start small and get your product into the hands of your users as soon as possible so you can get their feedback and improve the product along the way.
AI should be approached in the same way. It’s tempting to spend years building the perfect artificial intelligence system, trained by huge volumes of perfect data sets. But don’t be surprised if the product is completely obsolete and irrelevant by the time you introduce it to the world.
Maybe your data set reflected old practices that don’t make sense anymore, or your algorithm has never been exposed to a particular idiom. Or perhaps the person you thought would use your product isn’t who ends up using it. An AI trained in a vacuum can only react to what it’s been exposed to. I’m a firm believer in getting your algorithm out there, where it can learn, adapt and improve. Here’s why it’s OK to let your AI start out “dumb.”
Find Your Focus
We already know that AI tools aren't yet capable of replacing people, and we don't expect them to be able to do so in the near future. Keep that in mind when designing your solution. Make your user the focus of your algorithm and deliberately narrow in on one use case this user cares about.
One example here is Textio, an AI-based coaching network focused on helping talent professionals write better job descriptions. That’s a very specific task. They aren’t focused on turning everyone into better writers. They picked one specialty — job descriptions — and went deep. The greatest AI achievements we’ve seen start with one discrete task and then expand. And the more narrowly focused the solution is, the faster the AI will learn.
Don’t Put The Ghost Before The Machine
Once you’ve found your focus, don’t get too excited about changing the world just yet. Just considering the things that have to happen to make an AI system (even a dumb one) function is an intense, exhaustive process that includes:
- Setting up the technical environment
- Setting up the system that stores all the training data
- Setting up the all-important algorithm that trains on the data and renders suggestions back
While the cloud has made these steps easier, they're still a chore. That's why you should focus the bulk of your effort on getting the processes above set up and stable; that will let you move much faster once you start testing your product with potential customers than if you had spent that time perfecting your training data. If you work in a theoretical world and try to gather training data without real customer input, you're working in a vacuum that will feed your preexisting suppositions back to you.
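To make the three setup steps concrete, here is a minimal, hypothetical sketch in Python: a small environment config, a store for training examples, and a model that trains on that data and renders suggestions back. Every name here (Environment, TrainingDataStore, SuggestionModel, the job-description scenario) is invented for illustration, not a real product's API.

```python
# A hedged, minimal sketch of the three setup steps:
# 1) the technical environment, 2) the training-data store,
# 3) the algorithm that trains on the data and suggests back.
from dataclasses import dataclass, field


@dataclass
class Environment:
    """Step 1: the technical environment, reduced to one config object."""
    model_name: str = "starter-model"
    min_examples: int = 10  # don't train until real usage data exists


@dataclass
class TrainingDataStore:
    """Step 2: the system that stores all the training data."""
    examples: list = field(default_factory=list)

    def add(self, text: str, worked: bool) -> None:
        # Each record pairs what a user wrote with how it worked out.
        self.examples.append((text, worked))


class SuggestionModel:
    """Step 3: trains on stored data and renders suggestions back."""

    def __init__(self, env: Environment, store: TrainingDataStore):
        self.env = env
        self.store = store
        self.successful_phrases: set = set()

    def train(self) -> bool:
        # Stay "dumb" until enough real examples arrive -- refusing to
        # learn from a vacuum is the point of the article.
        if len(self.store.examples) < self.env.min_examples:
            return False
        self.successful_phrases = {
            text for text, worked in self.store.examples if worked
        }
        return True

    def suggest(self, draft: str) -> list:
        # Suggest phrases that worked for earlier users and are
        # missing from this draft.
        return sorted(p for p in self.successful_phrases if p not in draft)
```

Notice that the model does nothing clever at first: the entire design effort goes into the plumbing that absorbs data, so the system can start learning the moment real users show up.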
Get Your AI In Front Of People
Your training data is integral to the beginning of the process, but to make an AI product that can get better over time, you have to take the leap to the biggest data set of all: human experience. And doing that requires that you invest in your user experience (UX). The better you can make the experience of using your AI, the more people will want to use it, which means your model will gather more data far more quickly.
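The loop described above — every use of the product becomes a labeled example, so more users means faster learning — can be sketched in a few lines. The route-rating scenario below is invented purely for illustration (echoing the earlier Waze example); it is not how any real navigation product works.

```python
# A hedged sketch of the human-experience feedback loop: each trip a
# user takes is also a training example, so the recommendation
# improves simply because more people use the product.
from collections import defaultdict
from typing import Optional


class RouteRecommender:
    """Recommends whichever route real users have rated best so far."""

    def __init__(self):
        self.trips = defaultdict(int)  # route -> times taken
        self.fast = defaultdict(int)   # route -> times rated fast

    def record_trip(self, route: str, was_fast: bool) -> None:
        # Using the product IS contributing training data.
        self.trips[route] += 1
        if was_fast:
            self.fast[route] += 1

    def recommend(self) -> Optional[str]:
        if not self.trips:
            return None  # no users yet, nothing learned
        # Pick the route with the best observed success rate.
        return max(self.trips, key=lambda r: self.fast[r] / self.trips[r])
```

Before anyone uses it, the recommender has nothing to say; after a handful of trips, it already has an opinion. That is the whole argument for shipping a "dumb" AI early.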
It’s critical to connect the importance of UX with the success of your AI initiative. Unfortunately, most people don't think this way. They get so caught up in the idea of better living through algorithms that they assume AI is about the machinery. The reality is that you’re doing all of this work so you can have access to data. But the data has to come from somewhere.
The often forgotten, fundamental concept to grasp here is that “somewhere” is the people using your software. AI works when you treat it as a partnership between humans and machines. That’s why if you don't have good UX, you're never going to have good AI. If you don't start by saying, "I'm going to create a system that people want to use, that's easy to use and that they’ll use often," then none of the rest matters.
An algorithm can always be adjusted. The longer it’s out there in the real world, the better it gets. It’s less important to have it be brilliant right out of the gate than it is to find the specific problem you want to solve and get your technical environment ready to absorb data. In the end, a smart AI is simply one that works.