As the innovative head of SpaceX and Tesla, Elon Musk champions cutting-edge technology as both a driver of business and grounds for optimism about a more sustainable future; however, he recognizes this advancement isn't all rainbows and unicorns.

One technological development in particular seems to occupy a troubling amount of space in the entrepreneur's mind: one that, he now says, carries the singularly terrifying potential to initiate World War III if it "decides that a prepemptive [sic] strike is most probable path to victory," Musk tweeted, contrasting how human actors and artificial intelligence assess strategy.

“China, Russia, soon all countries with strong computer science,” he continued tweeting. “Competition for AI superiority at national level most likely cause of WW3.”

Musk's admonition comes just days after Russian President Vladimir Putin told students at the start of the school year that "the future belongs to artificial intelligence," and that whoever manages to harness it will wield unrivaled power.

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin explained on Friday. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

While the latter claim may be debatable, the host of threats AI presents is best evidenced by Musk's own example: the current hostility between North Korea and the United States, which some observers believe balances delicately at a tipping point between de-escalation and open nuclear conflict.

But in the event of the latter, and despite blustery rhetoric from Pyongyang championing the annihilation of the U.S., the SpaceX founder warns that North Korean leader Kim Jong-un and his regime may be less inclined to act aggressively with nuclear weapons than would be the case if AI were calling the shots.

For Pyongyang, Musk added, “launching a nuclear missile would be suicide for their leadership, as South Korea, [the U.S.] and China would invade and end the regime immediately.”

Nor does he believe such a strike would be enough to plunge the world into war, as he noted in a tweet:

“Should be low on our list of concerns for civilizational existential risk. NK has no entangling alliances that wd polarize world into war.”

This wasn't the sustainable energy guru's first dismissal of North Korea, in contrast to artificial intelligence, as the most pressing existential threat to the U.S. In response to Pyongyang's posturing over a nuclear strike on Guam just last month, Musk pulled no punches, cautioning:

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”

And in July, artificial intelligence was deemed a “fundamental risk to the existence of human civilization” by the Tesla founder and CEO, whose warning for humanity continued,

“Once there is awareness, people will be extremely afraid, as they should be. AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to individuals as a whole.”

Musk has distinguished company in sounding alarm bells over the perils of AI: preeminent theoretical physicist Stephen Hawking joined him in signing a cautionary open letter on the topic at the very beginning of 2015. Among several areas of concern, the letter firmly asserted of the planet's push forward, "[o]ur AI systems must do what we want them to do."

Overall, Musk and others have struggled to adequately convey the dangers inherent in fostering artificial intelligence, and how its integration into ordinary life could run horrendously amok, in part because the subject remains vague and abstract to the technologically illiterate.

“I keep sounding the alarm bell,” Musk opined to the Washington Post, “but until people see like robots going down the streets killing people, they don’t know how to react because it seems so ethereal. I think we should be really concerned about AI.”

(Featured image: 'Artificial Intelligence,' Colin Anderson via Getty Images)