I recently had the pleasure of speaking with Neil Sahota for an episode of the Future Squared podcast.
Sahota is an IBM Master Inventor, a United Nations A.I. subject matter expert, and a professor at UC Irvine.
He's a founding member of the UN's Artificial Intelligence for Social Good Committee, and he co-wrote Own the A.I. Revolution, which provides a future-forward look at A.I., focusing on how businesses can use it to commercialize while doing good in the world.
We explored the depths of the esoteric, covering questions like:
…and so much more.
You can listen to the entire conversation below.
Episode #362: Own the AI Revolution with Neil Sahota
www.futuresquared.xyz
As I like to do, both to reiterate my own learning and to share lessons learned with the world, find below the key takeaways from my conversation with Neil Sahota.
So many companies peddle what they purport to be A.I., including IBM, but for a product to truly earn the label, it needs to meet the following three criteria:
1. It learns from experience and consumption.
2. It understands natural language.
3. It interacts like a human being.
Based on these criteria, IBM's Watson is not actually A.I., despite its success at Jeopardy! all those years ago.
As Berkeley's John Searle noted, Watson manipulates symbols but doesn't understand the meaning behind the symbols as a human would.
AGI: artificial general intelligence (think human cognition; we are not there yet).
ANI: artificial narrow intelligence (think A.I. that does just one thing really well; this is where we are today, e.g. Google Translate).
It's difficult to determine how far away AGI is; it could be next month, or it could be 50 years away.
Commercial incentives don't support the development of AGI.
There are much better rewards to be reaped in the short term by investing in narrow A.I., for which there are numerous use cases and for which the cost of investment and the unknowns are much lower. This makes the ROI on narrow A.I. higher, and it delays the development of AGI.
A.I. helps us make better decisions, but it also presents some key challenges.
People might have different experiences of A.I., as we do with Google search.
This means we might develop different world views as a consequence.
As Sahota put it, "the truth may change but the facts remain the same".
The paperclip maximizer is a thought experiment showing how AGI, even one designed competently and without malice, could ultimately destroy humanity. An extremely powerful A.I. could seek goals that are completely alien to ours, and as a side-effect destroy us by consuming resources essential to our survival.
This tendency to optimize for a particular outcome, at the expense of ethics, morality or reason, is known as "perverse instantiation".
Sahota says that the paperclip maximizer is real, insofar as it is a possibility: "It could actually happen." To counter this, we need to set constraints that avoid adverse outcomes, and with so many potential constraints to account for, that is a real challenge.
A.I. may, in a single generation, produce more technological breakthroughs than humankind has managed during the first 20,000 years of its existence.
47% of US jobs are likely to be automated by 2050.
The goal of A.I. is to free people up for higher-value tasks.
"Jobs will go away, but new jobs will be created," says Sahota.
At the other end of the spectrum, many fear that there won't be enough work to go around, which is why universal basic income (UBI) has become such a big talking point of late.
Since 1980, the gap between US productivity and labor compensation has gotten larger, thanks to technology doing more of what humans once did.
This gap is set to widen. With each disruptive innovation, it can take decades for organizations and societies to reorganize around it, during which time we might experience a downturn in productivity and, potentially, average living standards. This was true of the transition from steam to electricity, and it is known in economic circles as the "productivity paradox".
Given the pushback against big tech that we're seeing, thanks partly to the attention-merchant economy and the Big Brother nature of companies like Facebook, it's critical that we embed philosophy and the arts into the design of technology.
The challenge then becomes: which philosophy? Since Socrates, there have been debates between rival schools of philosophy, and ideas in general. But the gap between commercializing technology and commercializing technology that does good for humanity is painfully evident every time you walk onto a subway train and see an army of people mindlessly staring at their screens.
According to Sahota, A.I. can become conscious. As such, it might demand rights.
This is not just a thought experiment: Hanson Robotics' "Sophia" was granted citizenship in Saudi Arabia.
Question to ponder: Is it harassment if a maintenance worker touches a sentient A.I. bot?
As technology gets smaller (tablet, smartphone, smartwatch, AirPods…), it seems inevitable that we will begin to embed technology and A.I. into our bodies.
Sahota believes in a human-machine integrated future, a cyborg future if you will.
Question to ponder: Once we start embedding technology into our bodies, at what point do we stop being human and start being cyborg?
This could present challenges, as it may at first be affordable only to the wealthy, opening the door to a "powerful get more powerful", or "rich get richer", situation.
However, smartphones provide an optimistic case study. They were once incredibly expensive, but now almost everyone has one, even in the developing world.
Sahota does, however, fear that the powerful may try to prevent the less powerful from embedding true A.I.; the first-mover advantage applies to humans as much as it does to business.
The first-mover advantage in the A.I. space is huge for companies, and hard to make up. It's a bit like trying to compete with Google today, which has an insurmountable advantage thanks to the data it keeps getting fed. Sure, you can use another search engine, but why would you when the quality of your results would be suboptimal?
Startups have a blank slate when it comes to their organizational structure (less legacy, less politics, less inertia), and as such, they are in pole position to leverage A.I.
Large companies tend to look at A.I. through a traditional lens and simply look to incrementally improve (get faster, or cheaper) as opposed to taking a quantum leap in how they do business.
Startups have an opportunity, therefore, to "leapfrog" the competition (see below).
When you visit a developing economy, perhaps in Southeast Asia, you might be surprised to find Wi-Fi that trumps what you find in developed economies. This is because such countries start with a blank slate and don't have vested political and financial interests in existing infrastructure.
Large companies have the resources, but entrepreneurs have the ideas. This is a painful disconnect, and one that might be bridged by large organizations partnering with startups.
Initial opportunities might be in the "urinal cake" sector: the jobs that humans don't like to do but that require some cognition.
Many early-stage opportunities exist for entrepreneurs who want to "sell the shovels" in the A.I. space.
To leverage A.I., businesses need adequate, high-quality data and financial resources.
Fortunately, options exist if data is missing: licensing data, or synthesizing it.
Domain expertise is also required to coach A.I. However, there are misaligned incentives at play here too: cancer researchers are busy researching cures for cancer, not coaching A.I.
Humans have an innate biological tendency to fear the unknown. Once upon a time we feared automobiles and planes too, but few people think twice about boarding a passenger airliner nowadays.
A tool, however, is only as good as how you use it.
In the case of A.I., it clearly presents untold opportunities to drive the human race (or the cyborg race) forward, but at the same time, it presents untold risks and challenges, most of which are yet to be worked out.
Having multi-disciplinary minds come together (not just technologists, but philosophers, artists, economists, and so on) to help solve these challenges with a multi-dimensional approach will be key.
Steve Glaveski is the co-founder of innovation accelerator Collective Campus, host of the Future Squared podcast, author of several books, including Employee to Entrepreneur and Time Rich, and a productivity contributor for Harvard Business Review. He's a chronic autodidact and is into everything from 80s metal and high-intensity workouts to attempting to surf and hold a warrior three pose.