The 3 Biggest Obstacles to Artificial Intelligence Right Now

Often, our technological threshold runs far ahead of our social threshold. This is when AI proponents need to step back and consider the human impact of their work.

That’s what’s happening with artificial intelligence today, according to Zack Kass, AI futurist and former head of go-to-market at OpenAI. Ultimately, AI running in the background will enable us to interact with machines and apps as easily as we interact with each other, he said, speaking at the recent Precisely conference in Philadelphia.

“My prediction is that it gets weird before it gets great,” Kass said. “And we will have to accept that. All progress has a cost. But one of the most interesting things we have to start preparing for in this transition is understanding the idea of technological thresholds and societal thresholds.”

A technological threshold “is simply asking the question, ‘what can a machine do?’” he explained. “The societal threshold asks, ‘what do we want a machine to do?’ and ‘what are we willing to allow it to do?’”

Meanwhile, there are three obstacles that could slow or hinder progress, he warned: people’s fear of losing control, disproportionate views of AI risks, and low tolerance for machine failure.

These challenges are tied to the rise of autonomous vehicles, which Kass identified as the “bellwether” of AI adoption. Just as the Otis Elevator Company had to ease people’s fear of elevators in the late 1800s and early 1900s, autonomous vehicles today face a similar fear and loathing.

“In the autonomous vehicle, I think we’re about to unlock a tremendous understanding of how we see technology, specifically as it relates to AI,” he said. He described three challenges facing autonomous vehicles – and, by extension, AI:

  • Loss of control. “People want control,” he explained. “We love getting into the car, putting our foot down on the pedal, turning the wheel, and controlling this massive machine.”
  • Disproportionate fear. “Fifteen times more people in the United States fear flying than driving,” Kass said. “One is empirically safe and the other is empirically dangerous. People also underestimate how good autonomous vehicles are. Most people don’t know that today, 50,000 people will ride in autonomous vehicles in Arizona without an accident, and 10,000 in the Bay Area.”
  • Low tolerance for machine failure. “Humans have an incredible tolerance for human failure, and we have zero tolerance for mechanical failure,” he noted. “That’s why 20,000 people can die from drunk drivers a year, and we’re very willing to say that’s just the cost of doing business. But if a Tesla on Autopilot goes into the wrong lane, everyone is calling for the program to be shut down.” Holding machines or AI to a higher standard isn’t necessarily a bad thing, he added. “That’s why the building I’m in will never fall, and why the planes we fly don’t fall out of the sky. It’s never been safer to fly than it is today. We are building so much durability into our mechanical systems. Our expectations for the delivery of these technologies are very high.”

What’s happening, Kass said, is that technology has gotten ahead of people’s ability to deal with it. In the Otis Elevator analogy, people were afraid to ride in elevators, but the company responded with human touches—music, mirrors, and human elevator operators. “It worked. People started using elevators. The technological threshold, once reached, was met by adjustments that brought the social threshold up to match it.”

Likewise, fear or confusion about AI will diminish as we see more human touches added to solutions. One example is agentic AI, or autonomous agents: “We will assign AI tasks or goals and have those systems execute the tasks and goals across apps and browsers. Imagine a world where we separate ourselves from the 100 or 150 apps on our phones.”
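To make that shape of interaction concrete, here is a minimal, purely illustrative Python sketch of an agentic loop. Nothing in it comes from Kass’s talk or any specific product: the tool names, the goal string, and the hard-coded planner are hypothetical stand-ins, and in a real system a language model would produce the plan and the “tools” would call actual apps and browsers.

```python
# Hypothetical sketch of an agentic loop: a user states a goal once, a planner
# breaks it into tool calls, and the agent executes them across "apps".
# The planner below is a hard-coded mock standing in for a language model.

from typing import Callable

# --- Toy stand-ins for the apps an agent might drive on the user's behalf ---
def search_flights(destination: str) -> str:
    return f"Found 3 flights to {destination}"

def add_calendar_event(title: str) -> str:
    return f"Added calendar event: {title}"

def send_email(to: str, body: str) -> str:
    return f"Emailed {to}: {body}"

TOOLS: dict[str, Callable[..., str]] = {
    "search_flights": search_flights,
    "add_calendar_event": add_calendar_event,
    "send_email": send_email,
}

def plan(goal: str) -> list[tuple[str, dict]]:
    """Mock planner: maps a goal to an ordered list of (tool name, arguments).
    A production agent would ask a language model to generate this plan."""
    return [
        ("search_flights", {"destination": "Philadelphia"}),
        ("add_calendar_event", {"title": goal}),
        ("send_email", {"to": "me@example.com", "body": f"Done: {goal}"}),
    ]

def run_agent(goal: str) -> None:
    """Execute the plan step by step, so the user never opens the apps directly."""
    for tool_name, args in plan(goal):
        result = TOOLS[tool_name](**args)
        print(f"[{tool_name}] {result}")

if __name__ == "__main__":
    run_agent("Attend the Precisely conference")
```

The point of the sketch is the division of labor: the person states a goal once, and the system, not the person, walks through the individual applications to carry it out.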

This is being facilitated by natural language operating systems, in which “we’re going to a world where we interact with machines the way we interact with each other,” Kass explained. “The reason is the digital divide we live in today. The systems we design, like the personal computer, are not actually second nature. You have to spend a lot of time getting to know the computer in order to exploit its full potential.”

Even searching for keywords on Google “isn’t very clear to some people,” Kass said. “ChatGPT gives us this first glimpse of what the world will look like in the future, where you can interact with a machine the way you interact with another person. And the natural language operating system that we think will potentially arrive within the next 10 years, or certainly within the next 15, will shift us from this difficult communication with machines to a much more natural communication.”
