Making AI responsible by weeding out human bias

Story • 6th Sep 2022 • 2 Min Read

Employee Relations • Employee Engagement • Technology • #FutureOfWork

Author: Mamta Sharma
5.8K Reads
Artificial Intelligence (AI) may amplify historical biases against certain sections of society and actively neglect them; organisations therefore need to deliberately and consciously reflect on the impact AI systems have on humans, society and the planet.

A focus purely on building the best artificial intelligence (AI) may lead to unintended consequences that affect human lives negatively, and many large tech organisations are realising this.

Certain sections of society can be actively neglected because of historical biases that AI can amplify: credit-worthy loan applicants can be denied purely on the basis of gender, access to healthcare can be reduced for some groups, or AI-based recruitment bots can overlook women who can code.

“These are real problems of AI today. Organisations need to deliberately and consciously reflect on the impact AI systems have towards humans, society and the planet. Focusing purely on rational objectives may not necessarily result in outcomes aligned with legal, societal or moral values,” says Akbar Mohammed, lead data scientist at US-based artificial intelligence firm Fractal.

In an interaction with People Matters, Mohammed highlights the major biases in organisational AI and the ways to ensure they are not replicated in 'responsible AI'.

What are some of the major biases we see in AI?

Largely, there are two that often crop up in AI. The first is systemic bias, where institutional operations have historically neglected certain groups or individuals based on their gender, race or even region; this can be a major concern.

The second is human bias, such as the way people use or interpret data to fill in missing information; for example, a person’s neighbourhood of residence influencing how credit-worthy a loan officer considers that person to be.

When human and systemic biases combine with computational biases, they can pose significant risks to both society and individuals, especially when explicit guidance for addressing the risks of leveraging AI systems is lacking.
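
To make this concrete, the sketch below (an illustration, not part of the interview) shows how a team might audit a trained decision system, such as a loan-approval model, for the kind of group-level skew described above, simply by comparing the rate of favourable outcomes across a protected attribute. The column names and toy data are assumptions made for the example.

```python
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "gender",
                      outcome_col: str = "approved") -> pd.Series:
    """Approval rate per group; a large gap is a signal worth investigating."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    print(f"{outcome_col} rate by {group_col}:")
    print(rates.to_string())
    print(f"Largest gap between groups: {rates.max() - rates.min():.0%}")
    return rates

# Toy decisions from a hypothetical loan-approval model (illustrative only).
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [ 0,   1,   0,   0,   1,   1,   1,   0 ],
})
approval_rate_gap(df)
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should trigger the deliberate reflection Mohammed calls for.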

How would you define Responsible AI?

Responsible AI is the practice of creating AI in an ethical manner: AI that can act, behave or help make decisions responsibly towards humans, society and even the planet.

What are the challenges for organisations to ensure responsible AI practices?

The first is simply becoming aware of the risks of AI.

The second is putting the right policies, guidelines and governance around responsible AI practices in place.

Finally, there is encouraging behaviours such as the contestability of any AI or AI-augmented human decision. An organisation that makes it safe to openly discuss ethical challenges and issues is less likely to create harmful AI, and it encourages people to tackle the problems not just through technology but also through a human-centred lens.

What are some of the ways to root out AI bias?

Being aware of the risks and aligning your organisation around a shared set of principles will initiate the journey.

However, organisations need to go beyond principle-based frameworks and incorporate the right behaviours and toolkits to empower people to put responsible practices into action.

We see that design- and behavioural-science-driven frameworks, combined with technology-driven toolkits, can deliver responsible AI for any enterprise or government.
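
As one illustration of what a technology-driven toolkit might contribute (a generic sketch, not a description of Fractal's tooling), the snippet below implements 'reweighing', a well-known pre-processing step: each training sample is weighted so that the protected group and the outcome label look statistically independent, limiting how strongly a model can learn the historical bias baked into its training data. The group labels and outcomes here are toy values.

```python
import numpy as np

def reweighing_weights(groups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-sample weights that make each (group, label) cell count as if
    group membership and outcome were statistically independent."""
    weights = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                expected = (groups == g).mean() * (labels == y).mean()
                observed = cell.mean()
                weights[cell] = expected / observed
    return weights

# Toy data: group "B" is under-represented among favourable outcomes.
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
labels = np.array([ 1,   1,   1,   0,   1,   0,   0,   0 ])
weights = reweighing_weights(groups, labels)
print(np.round(weights, 2))  # many scikit-learn estimators accept these via fit(..., sample_weight=weights)
```

A technical fix like this only goes so far; the behavioural and governance measures described above still need to sit around it.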
