As More Firms in the Middle East Use AI, Regulating the Tech Becomes Crucial
With limited oversight and no regulations yet from governments in the Middle East, the potential risk of AI bias is even more significant
Artificial intelligence (AI) has been around for decades, but only in the last few years has the technology taken off, testing the bounds of its ethical use and its impact on society.
Its promise to enhance efficiency, reduce costs, and analyze complex data has been met with concerns that it can do societal harm along with economic good.
With limited oversight and no regulations yet from governments in the Middle East, AI can now influence decisions on who gets a job, who is eligible for a loan, and what kind of medical treatment a patient receives, making the potential risk of AI bias even more significant.
Despite growing concerns, the potential impact of AI in the region is massive. According to a recent PwC report, AI will contribute about $320 billion to the Middle East's economy by 2030. The largest gains are expected to accrue in Saudi Arabia, where AI is estimated to contribute over $135.2 billion, or close to 14% of GDP, by 2030.
“With great power comes great responsibility,” says Aliah Yacoub, Artificial Intelligence Philosopher at Synapse Analytics, an Egypt-based AI and data science company. “AI has emerged as an unparalleled force of power, but the answer to whether it’s employed in service of humanity or as a new tool for oppression depends on its ethical development and deployment.”
What Is AI Bias and Why It Matters
The technology walks a tightrope between peril and promise: AI has the power to disrupt and transform economies in the Middle East, making it ever more pressing to ensure that it is implemented transparently, reliably, and with clear justification for its decisions.
“AI algorithms, if not properly designed, can reinforce biases present in the data they’re trained on, leading to unfair outcomes for certain groups and perpetuating existent social inequalities in vulnerable or marginalized communities,” says Yacoub.
Since people write the algorithms, choose the data, and decide how to apply the results of the algorithms, they can pass on their biases, prejudices, and insecurities to AI. That’s why it’s critical to have diverse teams and rigorous testing to identify potential biases in data sets.
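To see what such a test might look like in practice, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than anything described in this article: the toy hiring data, the column names, and the 0.8 cutoff (the "four-fifths rule" commonly used as a rough red flag for disparate impact).

```python
# Hypothetical bias check: compare positive-outcome rates across groups.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Toy historical hiring data, skewed against one group (invented for illustration).
data = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "hired":  [0,   0,   1,   0,   1,   1,   0,   1],
})

ratio = disparate_impact(data, "gender", "hired")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 for this toy data
if ratio < 0.8:  # the "four-fifths rule" red flag
    print("Warning: the data may encode bias against a group.")
```

A check like this is only a starting point; it flags skewed outcomes in the data but says nothing about why they arose.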
One example of this is the underrepresentation of women in AI.
According to the World Economic Forum, women comprise only 22% of AI professionals globally, potentially hindering data accuracy and skewing AI algorithms. “Women’s underrepresentation in tech is not just a ‘social justice’ issue; it’s actually a problem for your dataset, your business, and the whole of society,” says Yacoub. “If datasets used to train AI models are not diverse and inclusive, the resulting algorithms may be inaccurate.”
More importantly, AI makes decisions based on historical data, which means an algorithm trained on biased data will amplify and perpetuate those biases over time, making them increasingly hard to correct. “AI is a massive amplifier,” says Noor Sweid, General Partner at Global Ventures, a UAE-based venture capital firm. “It’s taking existing data and predicting the future based on that pattern, making it harder to change the future.”
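As a sketch of that amplification, assume a toy dataset in which past hiring decisions depended partly on group membership rather than on merit alone (all features and numbers below are invented for illustration). A standard classifier trained on those decisions learns to score otherwise-identical candidates differently:

```python
# Hypothetical "amplifier" demo: a model trained on biased historical
# labels reproduces that pattern when scoring future candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # e.g., a protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)  # identically distributed in both groups
# Past decisions depended on skill AND on group membership (the bias).
hired = (skill + 1.5 * group + rng.normal(0.0, 0.5, n)) > 0.75

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill who differ only in group membership
# receive very different hiring probabilities:
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
```

Because group membership was predictive in the historical data, the model treats it as signal, so the skew carries forward into every future decision, which is the pattern Sweid describes.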
Managing Ethical AI Concerns
Although many businesses and individuals are aware of AI bias, more needs to be done to manage its risks. Carrington Malin, a UAE-based marketing consultant, says it is challenging to use AI ethically when there is no transparency about the source of AI-generated content. “From an end user’s point of view, it’s quite complicated to consider the big picture because there’s no transparency for where this generated AI content comes from.”
A case in point is the recent lawsuit the New York Times filed against Microsoft and OpenAI over the alleged use of its copyrighted articles. Writers and authors have also filed copyright lawsuits against OpenAI, claiming it misused their work in AI training.
For businesses, implementing self-regulatory measures can be challenging. In a 2021 PwC survey, 50% of respondents said responsible AI was one of their top three priorities, and 32% said they would focus on addressing fairness in their AI algorithms. Yet over two-thirds had not taken action to reduce AI bias. According to the survey, one of the main reasons for this gap is that defining and evaluating bias depends heavily on each organization’s algorithms and stakeholders.
Yet, organizations can still take some steps to reduce bias. “The most effective thing is to teach more people about AI,” says Sweid. “To increase representation in AI by teaching more people in the underrepresented segments how to be in AI. With more diversity in the room, you end up with less of an echo chamber. And with less of an echo chamber, you end up with a better conversation that is likely more ethical.”
Regulating AI
While several AI policies and guidelines are in place, no binding AI regulation has yet been enacted in the Middle East. But with many businesses integrating AI into their operations, experts say regulating the technology is crucial for ensuring transparency and fairness.
“AI regulation is the only way we can ensure that it is developed and deployed responsibly and ethically,” says Yacoub. “We need guardrails, as opposed to mere red flags by users or organizations, when engaging with emerging technologies.”
The EU AI Act, passed in March, is the first comprehensive legal framework on AI, and it is likely to prompt other countries to follow suit. Just as the General Data Protection Regulation (GDPR) inspired similar laws in other countries after it took effect in Europe, the AI Act could have the same ripple effect.
“The Middle East is certainly watching the progress of the EU AI Act, and we expect to see AI regulation come into the Middle East,” says Malin. “I think, as with the GDPR Act, it inspired more data protection laws worldwide and continues to do so. And so, that’s a role that the EU AI Act might play.”
As more organizations and individuals increasingly rely on AI, they will need to consider its social and ethical implications and decide what role regulations should play in this space.
“There’s a consensus that AI needs to be central to future planning, and to future economies,” says Malin. “We’re also going to see AI start to become used at the edge of computer networks. So, we’re going from a point where GenAI was on a few computers, in a few places, to the prospect in ten years, where AI will be everywhere. And with that amount of impact, regulation is necessary.”
At the NextTech Summit, the region’s foremost summit on emerging technologies, global experts, MIT professors, industry leaders, policymakers, and futurists will discuss AI Black Box, Quantum Computing, and Enterprise AI, among many other technologies, and their immense potential. The summit, with Astra Tech as Gold Sponsor, will be held on May 29, 2024.