Making AI responsible by weeding out human bias

Story • 6th Sep 2022 • 2 Min Read

Employee Relations • Employee Engagement • Technology • #Future of Work

Author: Mamta Sharma
9.8K Reads
Artificial Intelligence (AI) can amplify historical biases against certain sections of society and actively neglect them. Organisations therefore need to reflect deliberately and consciously on the impact AI systems have on humans, society and the planet.

A focus purely on building the best artificial intelligence (AI) may lead to unintended consequences that negatively affect human lives, and many large tech organisations are realising this.

Certain sections of society can be actively neglected because AI amplifies historical biases: credit-worthy loan applicants can be denied purely on the basis of gender, access to healthcare can be reduced for some groups, and AI-based recruitment bots can overlook women who can code.

“These are real problems of AI today. Organisations need to deliberately and consciously reflect on the impact AI systems have towards humans, society and the planet. Focusing purely on rational objectives may not necessarily result in outcomes aligned with legal, societal or moral values,” says Akbar Mohammed, lead data scientist at US-based artificial intelligence firm Fractal.

In an interaction with People Matters, Mohammed underscores major biases in AI in organisations and ways to ensure that they are not replicated in 'responsible AI'.

What are some of the major biases we see in AI?

Largely, there are two kinds that often crop up in AI. The first is systemic bias, where institutional operations have historically neglected certain groups or individuals based on their gender, race or even region.

The second is human bias, such as the way people use or interpret data to fill in missing information; for example, a person’s neighbourhood of residence may influence how credit-worthy a loan officer considers them to be.

When human and systemic biases combine with computational biases, they can pose significant risks to both society and individuals, especially when explicit guidance for addressing the risks of AI systems is lacking.
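
To make the mechanism concrete, here is a hypothetical sketch (an editorial addition, not part of the interview) of how a proxy feature such as neighbourhood, correlated with a protected attribute like gender, can carry a historical bias into loan decisions even when the protected attribute is never given to the model. The variable names and numbers are invented for illustration.

```python
# Hypothetical, illustrative sketch: even when a protected attribute such as
# gender is excluded from the training data, a correlated proxy feature
# ("neighbourhood" here) can reintroduce the same historical bias into
# loan-approval decisions. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                              # protected attribute, never shown to the model
neighbourhood = (gender + rng.random(n) > 1.3).astype(int)  # proxy strongly correlated with gender
income = rng.normal(50 + 5 * gender, 10, n)                 # historical inequity baked into the data
approved = (income + 10 * rng.random(n) > 58).astype(int)   # biased historical labels

X = np.column_stack([neighbourhood, rng.normal(size=n)])    # model sees only the proxy and a noise feature
pred = LogisticRegression().fit(X, approved).predict(X)

for g in (0, 1):
    print(f"predicted approval rate, gender={g}: {pred[gender == g].mean():.2f}")
```

Because the proxy encodes the bias already present in the historical labels, the two groups end up with very different predicted approval rates even though gender was never an input.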

How would you define Responsible AI?

Responsible AI is the practice of creating AI ethically, so that it acts, behaves or helps make decisions responsibly towards humans, society and even the planet.

What are the challenges for organisations to ensure responsible AI practices?

The first is simply becoming aware of the risks of AI.

The second is putting in place the right policies, guidelines and governance for responsible AI practices.

Finally, there is encouraging behaviours such as the contestability of any AI or augmented human decision. An organisation where it is safe to openly discuss ethical challenges and issues is less likely to create harmful AI, and this encourages people to tackle the problems not just through technology but also through a human-centred lens.

What are some of the ways to root out AI bias?

Becoming aware of the risks and aligning your organisation around shared principles will initiate the journey.

However, organisations need to go beyond principle-based frameworks and incorporate the right behaviours and toolkits to empower people to put responsible practices into effect.

We see that design- and behavioural-science-driven frameworks, combined with technology-driven toolkits, can deliver responsible AI for any enterprise or government.
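
As a concrete illustration of the kind of check such a technology-driven toolkit might run, the hedged sketch below computes per-group selection rates and a demographic-parity gap for a set of model decisions; the decisions, group labels and any review threshold are hypothetical.

```python
# A minimal, hypothetical sketch of a bias check a toolkit might run.
# Decisions, group labels and the "acceptable gap" threshold are made up.
import numpy as np

def selection_rates(decisions: np.ndarray, group: np.ndarray) -> dict:
    """Share of positive decisions (e.g. loans approved, CVs shortlisted) per group."""
    return {str(g): float(decisions[group == g].mean()) for g in np.unique(group)}

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, group).values()
    return max(rates) - min(rates)

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                   # model outputs (1 = selected)
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])   # sensitive attribute

print(selection_rates(decisions, group))      # {'A': 0.6, 'B': 0.4}
gap = demographic_parity_gap(decisions, group)
print(f"demographic parity gap: {gap:.2f}")   # 0.20 -> flag for review if above a chosen threshold
```

Dedicated open-source toolkits such as Fairlearn and AI Fairness 360 offer richer metrics and mitigation methods, but even a simple check like this makes disparities visible enough to be contested.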

© Copyright People Matters Media Pte. Ltd. All Rights Reserved.
