Sixty-two per cent of Australian organisations anticipate increased dependence on artificial intelligence (AI) and machine learning (ML) decision-making in the coming years even as 66 per cent admit to suffering from data bias currently, according to a new survey by Progress, which provides application development and infrastructure software.
Biases are often inherited from cultural and personal experiences. When data is collected and used to train machine learning models, the models can inherit the biases of the people building them, producing unexpected and potentially harmful outcomes. Yet, despite the potential legal and financial pitfalls associated with data bias, there is a lack of understanding of the training, processes and technology needed to tackle it successfully.
“AI bias is a growing concern for Australian organisations. But while both business and IT decision makers are aware of the challenges it raises, there seems to be a lot of confusion and lack of understanding around the ways to address it,” said John Yang, VP, Progress, APJ.
The Progress survey indicated that 86 per cent of Australian business and IT decision makers believe data bias will become a bigger concern as AI/ML use increases (higher than the 78 per cent of respondents globally), yet over a third (34 per cent) have not looked at technology and tools to help tackle it.
The biggest barriers they see are a lack of awareness of potential biases, uncertainty about how to identify bias, and a lack of expert resources, such as access to data scientists.
As many as 84 per cent believe they need to do more to understand and address data bias in their organisation, a higher proportion than in any other country surveyed.
According to respondents, the top five obstacles to tackling potential data bias in Australian organisations are:
- Lack of awareness and understanding of potential biases (62 per cent)
- Lack of understanding of how to best identify data bias (50 per cent)
- Lack of understanding of how to best avoid or address data bias (48 per cent)
- Lack of ownership/responsibility (42 per cent)
- Lack of tools for identifying and mitigating data bias (34 per cent)
Sixty-four per cent of Australian respondents say they are concerned about their organisation currently being exposed to data bias when using analytics or AI/ML, including biased datasets; model, algorithm or training bias; and unconscious people bias.
According to Australian business and tech leaders, the main factors that increase the likelihood of data bias in organisations like theirs include:
- Focus on quantity over quality of data (56 per cent)
- Lack of awareness and understanding of data bias generally (54 per cent)
- Lack of skilled data or AI scientists in the business (48 per cent)
- Lack of standardisation and governance over how data is used or how models are produced (48 per cent)
Decision-makers using AI are also more concerned about data bias in certain types of business decisions. These include IT and digital (50 per cent), finance (46 per cent), sales and customer acquisition (38 per cent), and HR, including recruitment (30 per cent).
Asked which AI bias issues would be of most concern for their organisation, respondents identified the following:
- Data bias that creates or contributes to poor customer experience (88 per cent)
- Data bias that negatively impacts diversity and inclusion efforts in the employee base (86 per cent)
- Data bias that encourages unfair stereotyping (82 per cent)
- Data bias that leaves the organisation legally exposed (82 per cent)
“Addressing AI bias requires education around why and how bias is created and spread. Then you need the right tools and technologies to address it, combined with the right human skills. That includes having access to qualified data and AI scientists, as well as putting in place organisation-wide processes that include developers, the ones often at the foundation of building the organisation’s software, apps and digital products,” added Yang.
Encouragingly, many Australian organisations are actively working to address data bias, including by ensuring completeness of datasets (72 per cent), educating and training programmers and users about data bias (50 per cent), evaluating their vendors and suppliers for any data bias that could impact the organisation (48 per cent), and spending more time on AI model training (44 per cent).
However, only 12 per cent are performing regular data bias assessments.
Australian respondents think that the most urgent areas that need to be addressed within their organisation to help address data bias are technology (70 per cent), skills and training (56 per cent), policy framework and governance (44 per cent), and strategy/vision (42 per cent).
A large majority also believe that data bias should be tackled centrally (78 per cent) rather than handled by individual departments and functions (22 per cent). The survey also showed that respondents believe CIOs and CTOs are best placed to take ultimate ownership of addressing data bias (38 per cent), while only 16 per cent think this responsibility should sit with the Chief Data Officer.
A significant proportion of respondents hold the view that much more needs to be done from an industry-wide perspective, such as more collaboration and sharing of best practice within industries (58 per cent), increased investment in multidisciplinary research into bias (58 per cent), more regulatory action (54 per cent), and more exploration into how humans and machines can best work together (48 per cent).
Yang said AI bias has been discussed a lot in the past several years, but with new generative AI tools such as ChatGPT becoming mainstream across Australian organisations and with the general public, reducing AI bias needs to become more than just a conversation. The ethics around AI and decision automation will become more important and urgent over the next two years.
“Organisations wanting to capitalise on the power of AI first need to make sure they are doing more to understand data bias in their organisation. They then need to make sure they train AI models using unbiased data and algorithms as it will allow them to avoid perpetuating or reinforcing existing biases,” he added.
“Eighty-six per cent of Australian business and tech decision makers we surveyed said that organisations have a social responsibility to understand and address data bias. This awareness is very positive, but now there needs to be more action if we want to avoid wider social impacts from enterprises not adequately addressing data bias issues.”
Commissioned by Progress, the survey, “Data Bias: The Hidden Risk of AI,” was conducted by UK-based research firm Insight Avenue. The findings are based on interviews with more than 640 business and IT professionals, director level and above (including 200 across APAC and Australia), who use data to make decisions and are using or plan to use AI and ML to support their decision making.