The recent decision to use an algorithm to decide A-level results has put the use of advanced algorithms, underpinned by artificial intelligence (AI) technologies, into the spotlight. What made this situation so unusual is that it was one of the first examples of an algorithm being relied on to make far-reaching decisions at a national level, affecting millions of people in the blink of an eye.
Dr Nicolai Baldin, CEO and founder of Synthesized – an AI platform solving data privacy issues to enable organisations to share data more securely and comply with GDPR regulations – believes artificial intelligence needs to be fed more widely representative data if its algorithms are to reach their full and fair potential and avoid being biased.
“This episode reminded me of the whole impetus for founding my company, Synthesized, which was to find a better way of unlocking the true potential of AI and data but to do so in an ethical way, with fairness at the heart of our approach.
The idea for the company came while I was pursuing my PhD at university, born of the frustration I experienced while working with large, external partners.
The deeper I got into my career, the more my conviction strengthened that there must be a better way to access, manage, and make data available safely and transparently.
Fast forward to today, and the recent A-levels situation makes clear both the benefits and the potential pitfalls of using technology like AI.
The most significant misconception about algorithms and AI is that they themselves play a central role in discriminating.
It is essential to realise that AI technology is not biased before it is even put to use; bias happens when the data fed into the technology is of poor quality or wildly unrepresentative.
AI relies on data being fed into it and from this base, it is trained to create different predictive models.
These models are simply a way to understand and predict certain scenarios and situations to make better, automated, and data-driven decisions.
In practice, this feeding process looks like the input of structured data – data points organised as in a spreadsheet – into the software.
However, if this data contains an unequal representation of different communities or groups, the insights and decisions from the AI tool will simply reflect the under-representation already present.
For example, a company might end up collecting data from one location, such as a capital city, where only one minority group is present; the knock-on effect is that the AI models trained on this skewed data will be skewed themselves.
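To make that skew concrete, here is a minimal sketch (the records, group names, and proportions are all hypothetical, not real Synthesized data) of how a sample dominated by one group leaves a model effectively blind to another:

```python
from collections import Counter

# Hypothetical training records collected in one city:
# each record is (group, outcome). Group "A" dominates the sample.
records = [("A", 1)] * 900 + [("A", 0)] * 50 + [("B", 0)] * 45 + [("B", 1)] * 5

group_counts = Counter(group for group, _ in records)
total = sum(group_counts.values())

for group, count in sorted(group_counts.items()):
    print(f"group {group}: {count / total:.1%} of the sample")

# A naive model fitted to this sample learns the majority outcome
# per group: it predicts 1 for A and 0 for B, so group B's positive
# cases are effectively invisible to it.
```

Here group B makes up only 5% of the sample, so any pattern specific to that group is swamped by the majority.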
Therefore, in my mind, the biggest issue currently facing AI is improving and enriching the quality of the data it is fed.
Yet I believe there are reasons to be hopeful that AI can be a force for good both now and into the future.
Firstly, companies like Synthesized show that the flawed data that plagues many organisations can be fixed. Our AI has already developed to a stage whereby it is trained to identify and avoid making biased decisions.
The technology can rebalance biased data by benchmarking it against a very wide variety of groups, based on a host of sensitive personal attributes, to remove any under-representation.
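The platform's actual method is not public, but one simple illustration of the rebalancing idea – oversampling under-represented groups until each appears equally often (the records and group names below are hypothetical) – can be sketched as:

```python
import random
from collections import Counter

random.seed(0)  # reproducible sketch

# Hypothetical records keyed by a sensitive attribute: group B is under-represented.
records = [{"group": "A", "score": s} for s in range(95)] + \
          [{"group": "B", "score": s} for s in range(5)]

def rebalance(rows, key):
    """Oversample each under-represented group (with replacement)
    until every group matches the size of the largest one."""
    by_group = {}
    for row in rows:
        by_group.setdefault(row[key], []).append(row)
    target = max(len(group_rows) for group_rows in by_group.values())
    balanced = []
    for group_rows in by_group.values():
        balanced.extend(group_rows)
        balanced.extend(random.choices(group_rows, k=target - len(group_rows)))
    return balanced

balanced = rebalance(records, "group")
print(Counter(row["group"] for row in balanced))  # both groups now equally sized
```

Naive oversampling only duplicates existing rows; generating realistic synthetic records for the minority group, as synthetic-data platforms aim to do, is the harder problem.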
In fact, we’re taking a leading role in dealing with the issue of bias by building so-called ‘data clean rooms’.
This is a simulated, virtual environment that preserves the good parts of data, but also identifies biases and fixes them – all within ten minutes.
The idea behind this is that we can tackle issues around bias, but also help everyone access better quality data to make much more informed decisions.
Furthermore, from my own experiences, the topic of AI bias was not clearly understood or even investigated five years ago.
Yet now that the technology has evolved and is increasingly used in everyday life, we’ve seen a profound shift towards investigating the impacts AI may have, especially from an equality and social justice perspective.
By better scrutinising and understanding the technology through research, I think we can use it in a more positive way.
Finally, I strongly believe that in the near future AI will be regulated both at a global and local level.
Regulations will provide clear rules and will focus heavily on protecting people’s privacy and ensuring that AI is used fairly.
By constantly evaluating AI and the issues that affect its performance, particularly as it slowly enters every facet of our lives, I truly believe the technology’s impact could be limitless, and positive for the world.”
Images Copyright – Synthesized (main image) / edited by I Am New Generation Magazine / Shutterstock
Updated – 1st October 2020