
The dark side of AI: Scientists say there’s a 5% chance of AI causing humans to go extinct

In the largest survey of artificial intelligence (AI) researchers to date, a majority say there is at least a 5% chance that the development of superhuman AI could pose an existential threat to humanity.

The findings come from a survey of 2,700 AI researchers who have presented their work at six leading AI conferences. Participants were asked about the likely timelines for forthcoming AI milestones and about the positive or negative societal impacts of these advancements. Nearly 58% of the researchers said there is at least a 5% chance of human extinction or other extremely bad AI-related outcomes, New Scientist reported.

The study also found that researchers expect major AI milestones, such as the creation of music indistinguishable from human-made compositions, to arrive sooner than previously predicted.

“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”

There is nonetheless considerable disagreement and uncertainty within the research community about the scale of these risks. What the results make clear is that a substantial portion of AI researchers do not dismiss the possibility of advanced AI posing a serious threat to humanity; as Grace notes, the breadth of that belief matters more than any specific percentage figure.

However, Émile Torres from Case Western Reserve University suggests that there’s no need for immediate panic. They point out that AI expert surveys have a questionable track record in predicting future developments, citing a 2012 study showing that expert predictions were no more accurate than public opinion.

Compared with a 2022 edition of the survey, many AI researchers now expect AI to reach certain milestones sooner than previously predicted. The shift coincides with the release of ChatGPT in late 2022 and the widespread deployment of similar AI chatbot services.

Despite concerns about AI behaving in ways misaligned with human values, some argue that today's technology is incapable of causing such catastrophic outcomes. Nir Eisikovits, a philosophy professor, contends that current AI systems cannot make complex decisions and do not have autonomous access to critical infrastructure.

While the fear of AI wiping out humanity grabs attention, an editorial in Nature contends that the more immediate societal concerns lie in biased decision-making, job displacement, and the misuse of facial recognition technology by authoritarian regimes. The editorial calls for a focus on these tangible risks, and on actions to address them, rather than on fearmongering narratives.

The prospect of AI with human-level intelligence raises the theoretical possibility of AI systems designing other AI systems, leading to an uncontrollable “superintelligence.” Authors Otto Barten and Joep Meindertsma argue that competition between AI labs incentivizes tech companies to ship products rapidly, possibly at the expense of ethical considerations and safety.

Humans “have historically not been very good at predicting the future externalities of new technologies,” the authors caution.

