Language Understanding and the Future of Software
[Image: DALL-E illustration of a person standing at the edge of a cliff looking outward, hyperwave style]

In the last 5 years, advances in machine learning (specifically transformer-based large language models) have fundamentally improved a computer's ability to understand human language. In the next 5 years, this same technology will fundamentally change how we interact with computers and how software is written.

The impact of language understanding on software goes well beyond traditional speech recognition. When you interact with software today, you typically issue a series of commands (often in the form of clicks and search queries). In 5 years, you will be able to use language to communicate your intent. Software will be designed to understand what you want to do, then generate and execute commands on your behalf. It will also actively solicit feedback to improve performance over time. UX design will transition from command-driven to intent-driven. The best software will leverage language understanding to improve automatically, perpetually adapting to the needs of new users.

Of equal significance, over the next 5 to 10 years, computers will begin to communicate with each other using language. Language understanding will enable software-to-software interfaces (i.e., APIs and SDKs) that are far more flexible than what we have today. One piece of software will literally be able to ask another piece of software for help, using human-like language. Thanks to machine learning and language understanding, computers will be able to dynamically turn a person's intentions into executable code.
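To make this concrete, here is a minimal, hypothetical sketch of the intent-to-code pattern described above: a user's stated intent is wrapped in a prompt and sent to a language model, which returns an executable command. The `complete` function below is a stand-in for a real model call (e.g., to GPT-3); it is stubbed with a canned response so the example runs on its own.

```python
def complete(prompt: str) -> str:
    """Stand-in for a language-model completion call (e.g., GPT-3).

    A real implementation would send the prompt to a model API;
    here we return a canned response so the pattern is runnable.
    """
    canned = {
        "resize all images in ./photos to 50%":
            "mogrify -resize 50% ./photos/*.jpg",
    }
    intent = prompt.rsplit("Intent: ", 1)[-1].strip()
    return canned.get(intent, "echo 'unrecognized intent'")


def intent_to_command(intent: str) -> str:
    """Translate a user's natural-language intent into a shell command."""
    prompt = (
        "Translate the user's intent into a single shell command.\n"
        f"Intent: {intent}"
    )
    return complete(prompt)


print(intent_to_command("resize all images in ./photos to 50%"))
```

The key design point is that the user never learns the command syntax; the software carries the burden of translation, and the generated command can be shown to the user for confirmation before it runs.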

The impact of language-aware, intent-driven software will be huge. It will give us faster development times, more agile workflows, and – what I find most inspiring – products and services that learn to meet users where they are. Today, the burden is on people to learn how to use a computer. In 5 to 10 years, it will be the job of software to learn what people want to accomplish, and how to deliver it.

So that's where we're at... on the cusp of a word-fueled revolution.

My goal, for the first few posts of this newsletter, is to convince you that by 2030, we will look back on today's computers as if they were broken. An app, website, or device that cannot be controlled with language will feel as broken in 2030 as a computer without an internet connection feels today. Starting next week, I'm going to make this case using concrete examples.

In my next post, I'll introduce CLIP and GPT-3, two specific language models developed by OpenAI. I'll describe one simple trick (clickbait alert!), shown to me by my friend and colleague Matt Loper. When I saw this trick a year ago, it was an "Aha!" moment. It immediately opened my eyes to GPT-3's enormous potential to change how software UXs are made. In the last year, I've seen the same design pattern bubble up in a number of other situations. Hopefully, you'll find it as illuminating as I did.

Stay tuned...


Written by Eric Rachlin, proofread by GPT-3

