Artificial Intelligence (AI) can best be described as a machine’s ability to perform tasks that would ordinarily be performed by human beings, mimicking intelligence at a level similar to that of humans. AI can already handle tasks such as traffic control, mining and medical robotics.
AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.
AI has been hailed as revolutionary and world-changing, but it’s not without risks and drawbacks.
As AI grows more sophisticated and widespread, the voices warning against its potential dangers and risks grow louder by the day.
“These robots could get more intelligent than humans and could decide to take over, and we need to worry now about how we prevent that happening,” said Geoffrey Hinton, known as the “Godfather of AI” for his foundational work on AI algorithms. The renowned computer scientist isn’t alone in his concerns.
The risks of AI are many, including but not limited to automation-spurred job loss, privacy violations, socioeconomic inequality, market volatility and weapons automation.
Whether it’s the increasing automation of certain critical, high-risk jobs like mining, or autonomous weapons that operate without human oversight, unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of doing. The earlier these dangers are identified and contained, the better.
Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides and risks. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.
The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.
AI and machine learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency around how and why AI comes to its conclusions: there is often no explanation of what data an algorithm used, or why it may have made a biased or unsafe decision.
AI-powered job automation is a pressing concern as the technology is adopted in industries like mining, manufacturing, marketing and healthcare. In the near future, tasks that account for up to 30 percent of hours currently worked by humans could be automated. According to some business analysts, over 300 million full-time jobs could be lost to AI automation.
“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” one analyst said. With AI on the rise, though, “I don’t think that’s going to continue.”
As AI robots and machine learning systems become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create more jobs, many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforce with additional training. “It may be that the new jobs require lots of education or training, or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have.”
Even professions that require graduate degrees and additional post-college training aren’t immune to AI displacement.
As technology strategists have pointed out, fields like law and accounting are primed for an AI takeover. In fact, some of them may well be decimated. AI is already having a significant impact on healthcare, mining and marketing. Law and accounting are next, the former poised for “a massive shakeup.”
“Think about the complexity of contracts, and understanding what it takes to create a perfect deal structure,” one strategist said of the legal field. “It’s a lot of attorneys reading through hundreds or thousands of pages of data and documents. It’s really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”
Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints. TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s potential to spread misleading information.
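The feed mechanism described above can be illustrated with a minimal, hypothetical sketch (a toy content-based ranker, not TikTok’s actual algorithm; all tags, video IDs and the `recommend` function are made up for illustration): candidate videos are scored by how many tags they share with the user’s watch history, so the feed keeps narrowing toward previously viewed topics.

```python
from collections import Counter

def recommend(watch_history, catalog, k=3):
    """Rank catalog videos by tag overlap with the user's watch history.

    A toy stand-in for an engagement-driven feed: nothing the user has
    not already shown interest in can score highly.
    """
    # Count how often each tag appears in the user's history.
    seen_tags = Counter(tag for video in watch_history for tag in video["tags"])

    def score(video):
        # A video scores higher the more it resembles past viewing.
        return sum(seen_tags[t] for t in video["tags"])

    return sorted(catalog, key=score, reverse=True)[:k]

# Hypothetical data: the user has mostly watched political content.
history = [{"tags": ["politics", "outrage"]}, {"tags": ["politics"]}]
catalog = [
    {"id": "a", "tags": ["cooking"]},
    {"id": "b", "tags": ["politics", "outrage"]},
    {"id": "c", "tags": ["politics"]},
    {"id": "d", "tags": ["travel"]},
]
print([v["id"] for v in recommend(history, catalog)])  # -> ['b', 'c', 'a']
```

Note how the cooking and travel videos can never outrank the political ones: the ranker optimizes for similarity to past engagement, which is exactly the dynamic critics say fails to surface corrective or diverse content.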
Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers, and deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos and audio clips, or to replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for creating a nightmare scenario in which it can be impossible to distinguish between credible and fraudulent news.
“No one knows what’s real and what’s not,” one analyst said. “So it really leads to a situation where you literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence… That’s going to be a huge issue.”
In addition to its more existential threat, some AI experts are focused on the way AI will adversely affect privacy and security. A prime example is the use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, it may be able to gather enough data to monitor a person’s activities, relationships and political views.
Another example is law enforcement’s embrace of predictive algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates in certain areas. Police departments then double down on those communities, leading to over-policing and to questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.
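The feedback loop described above can be sketched with a toy simulation (all numbers and the `simulate` function are hypothetical, not any real department’s model): patrols are allocated in proportion to past arrests, and recorded arrests scale with patrol presence rather than with true crime, so an initial disparity in the arrest data never corrects itself, even when the underlying crime rates are identical.

```python
def simulate(rounds=20, seed_arrests=(60.0, 40.0), true_crime=(1.0, 1.0)):
    """Toy predictive-policing loop over two areas with EQUAL true crime.

    Each round, 100 patrol units are split in proportion to historical
    arrest counts; newly recorded arrests then scale with patrol
    presence, not with the true crime rate.
    """
    arrests = list(seed_arrests)
    for _ in range(rounds):
        total = sum(arrests)
        # Allocate patrols where past data says crime "is" (the prediction).
        patrols = [100 * a / total for a in arrests]
        # You can only record arrests where you patrol.
        recorded = [p * c for p, c in zip(patrols, true_crime)]
        arrests = [a + r for a, r in zip(arrests, recorded)]
    return arrests[0] / sum(arrests)  # area 0's share of all recorded arrests

# Despite identical true crime rates, the 60/40 seed disparity persists.
print(round(simulate(), 2))  # -> 0.6
```

The point of the sketch is that the data the algorithm learns from is partly a product of its own past decisions: the over-patrolled area keeps generating the arrest records that justify patrolling it.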
“Authoritarian regimes use or are going to use it,” according to some AI pundits. “The question is, how much does it invade people’s privacy in democracies, and what constraints do we put on it?”
“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer interaction, psychology, and Science and Technology Studies.”
Balancing high-tech innovation with human-centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes.
“I think we can talk about all these risks, and they’re very real,” one expert said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”