On a recent family road trip, like most people, we started our journey by entering the destination into our GPS. Although we knew the destination lay to the south, the GPS told us to head north, and we thoughtlessly followed its directions, assuming it knew better. Predictably, it led us the wrong way, had us make a U-turn, sent us back past where we originally started, and eventually put us on the right course. This got me thinking: why did we so readily trust a computerized system over our first-hand familiarity with the roadways? Even though we recognized that we were being misdirected, we still believed the system was smarter than we were.
How have we come to trust the decisions of computers while downplaying the power of our own human experience? Will this become a growing problem as we continue to accept and rely on artificial intelligence (AI) and automation more frequently in our workplaces and personal lives? It worried me that we were so quick to discount our knowledge, reasoning, and problem-solving abilities in favor of pre-programmed code.
It's become commonplace in nearly every industry, from asset management to fast food, to hear concerns about jobs being replaced by robotics and automation. As part of that same conversation, we also hear talk of jobs being saved from robotic replacement because they require "soft skills" and unique cognitive reasoning that machines can't necessarily replicate. But when it comes to interacting with automation, are these inherently human skills also sabotaging our own basic logic?
As we consider the effects of our current and future synchronicity with emerging tech, perhaps now is the time to ask ourselves some serious questions:
How can we trust that human reasoning will consistently recognize when computer-driven outcomes are not actually best?
How do we avoid becoming blindly dependent on automated systems while our acceptance and expectation of automation grows?
Can we confidently determine the situational limitations of AI and automation while not second guessing our own rationale?
It will be interesting to see how the social psychology of humans and machines takes shape over the next decade and beyond. I hope my children and grandchildren will see a harmonious coexistence between artificial intelligence and human intellect. But as the age of automation evolves, we need to be able to confidently question an algorithm-generated outcome when it seems off. We must maintain our ability to tell real-world right from machine-generated wrong.
Because robots don’t really know everything (yet).