Generative AI Laws in India & Asia: What HR Leaders Must Know

by Akanksha Mishra

If you work in HR and you’re using generative AI, or even thinking about it, you can’t ignore the legal side anymore, especially in India and across Asia, where the rules are evolving fast, often quietly, and sometimes without much clarity.

Let’s break it down. No legalese, no fake urgency. Just what you need to know if you’re leading HR and already experimenting with generative AI, whether in India or anywhere else in the region.

There are no perfect AI laws. But that doesn’t mean you’re off the hook.

India doesn’t have a dedicated AI law yet. But that doesn’t mean there’s a free pass to do whatever you want. The Digital Personal Data Protection Act, 2023 (DPDP Act) has already been enacted, with detailed rules on the way, and it covers a big chunk of what HR leaders care about, especially if your AI tools are collecting, storing, or analyzing candidate or employee data.

What it really means is this: if your AI-powered ATS, chatbot, or resume screener is touching personal data, you’re responsible. Period. Consent, storage, transfer, purpose: it’s all on you.

Same goes for most of Asia. Whether it’s Singapore’s PDPA, Japan’s APPI, or South Korea’s PIPA, the themes are the same. Data control. Transparency. User rights. And now with AI in the mix, regulators are watching how companies collect and use data through automated systems.

If you’re automating decisions, you need to disclose it

This is where it gets real for HR. Let’s say your system uses AI to score candidates or auto-reject resumes. Sounds efficient, right? But legally, this can fall under automated decision-making, and some countries are starting to regulate that hard.

In Singapore and Japan, there are already guidelines recommending that people be told when they’re being evaluated by AI. In India, this will likely be part of upcoming updates to the IT Rules and the AI ethics frameworks the government is working on.

So don’t wait. If your process involves AI-driven evaluations, be transparent. Tell candidates. Add it in your privacy notice. And make sure there’s a human in the loop for important decisions. That’s not just smart, that’s the safest move.
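
To make that concrete, here’s a minimal sketch of what a human-in-the-loop gate can look like. The scoring function and threshold are hypothetical, not any real vendor’s API; the point is the routing logic: strong matches can advance automatically, but nothing gets rejected without a person looking at it.

```python
# A minimal human-in-the-loop gate. AUTO_ADVANCE_THRESHOLD and the score
# input are made up for illustration; plug in your own tool's output.

AUTO_ADVANCE_THRESHOLD = 0.85

def route_candidate(candidate_id: str, ai_score: float) -> str:
    """Route a candidate based on an AI score, without auto-rejecting."""
    if ai_score >= AUTO_ADVANCE_THRESHOLD:
        # Strong matches can move forward automatically...
        return "advance_to_interview"
    # ...but everything else goes to a person, not the bin. Silent
    # auto-rejection is the pattern regulators are scrutinizing hardest.
    return "queue_for_human_review"

print(route_candidate("cand-1042", 0.91))  # advance_to_interview
print(route_candidate("cand-1043", 0.42))  # queue_for_human_review
```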

Cross-border tools? You’d better know where your data is going

A lot of the most popular generative AI tools aren’t built in India. They’re hosted in the US or Europe. So if you’re sending candidate or employee data into those systems, even through integrations, you need to know where that data goes, how it’s stored, and whether that’s compliant with Indian or local laws.

The DPDP Act and its upcoming rules are expected to tighten obligations around cross-border transfers, including what you have to tell individuals. And countries like China have already locked down outbound data flows. Don’t get caught off guard here.

If your vendor can’t tell you exactly where data is hosted and what they do with it, drop them. Fast.

Bias and discrimination can now become legal liabilities

This is the part HR leaders can’t ignore. If your AI tool is screening candidates and shows bias, even unintentionally, that’s not just a tech issue. That’s an HR risk and potentially a legal one.

In India, the current laws around workplace discrimination are mostly limited to protected categories like gender or disability. But courts are starting to treat algorithmic bias like any other bias. In other parts of Asia, especially South Korea and Singapore, AI ethics guidelines are already pushing for fairness, accountability, and explainability in hiring tech.

The bottom line is this: if your AI tool rejects a qualified candidate, and you can’t explain why, you’re exposed. Make sure the vendors you work with can audit their models. And make sure you can explain, in plain English, how decisions are being made.
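
One simple screening test worth knowing is the “four-fifths rule,” a rule of thumb from US employment practice that’s easy to run anywhere: compare selection rates across groups, and flag the tool if the lowest rate falls below 80% of the highest. A rough sketch, with made-up numbers:

```python
# The four-fifths rule as a quick adverse-impact check. Group names and
# counts are invented; this is a screening heuristic, not legal advice.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(48, 100),  # 48% advanced by the screener
    "group_b": selection_rate(30, 100),  # 30% advanced by the screener
}

ratio = impact_ratio(rates)
print(f"impact ratio: {ratio:.2f}")  # 0.62, below the 0.8 rule of thumb
if ratio < 0.8:
    print("Possible adverse impact: investigate before trusting this tool.")
```

It’s not a legal standard in India or most of Asia, but if your vendor can’t even produce numbers like these, that tells you something.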

Here’s what you need to start doing right now

First, get your privacy policy in shape. Most HR teams don’t update their candidate-facing policies often, and almost none mention AI. That needs to change.

Second, vet your AI vendors harder. Ask where their data is stored, what data they collect, whether they retrain on your inputs, and how you can audit their logic if something goes wrong.
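
One low-effort way to keep those answers honest is to record them in a structured form. Here’s a hypothetical due-diligence record; the field names aren’t from any standard, they just capture the questions above:

```python
# A hypothetical vendor due-diligence record. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class VendorDiligence:
    vendor: str
    data_residency: str              # where is the data physically hosted?
    data_collected: list[str]        # exactly what do they ingest?
    retrains_on_customer_data: bool  # do your inputs feed their models?
    audit_access: bool               # can you inspect their decision logic?

screener = VendorDiligence(
    vendor="ExampleScreen AI",       # made-up vendor
    data_residency="unknown",
    data_collected=["resumes", "assessment scores"],
    retrains_on_customer_data=True,
    audit_access=False,
)

# Per the advice above: unknown residency or no audit access means walk away.
if screener.data_residency == "unknown" or not screener.audit_access:
    print(f"{screener.vendor}: fails due diligence")
```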

Third, document your process. If you’re using AI anywhere in hiring or HR, write it down. How it’s used, who reviews it, how bias is checked, how people can contest decisions. This becomes your fallback when someone asks questions, whether that’s a regulator or your own team.
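
Here’s one hypothetical shape for a decision log entry. The fields are illustrative, not drawn from any statute, but they cover who decided, how, and how a candidate can push back:

```python
# A minimal audit-trail entry for an AI-assisted hiring decision.
# Field names are illustrative, not mandated by any law.

import json
from datetime import datetime, timezone

def log_ai_decision(candidate_id: str, tool: str, outcome: str,
                    reviewer: str, contest_channel: str) -> str:
    """Serialize one AI-assisted decision for the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool": tool,                        # which AI system was involved
        "outcome": outcome,                  # what the system recommended
        "human_reviewer": reviewer,          # who signed off; never blank
        "contest_channel": contest_channel,  # how the candidate can appeal
    }
    return json.dumps(record)

print(log_ai_decision("cand-1042", "resume-screener-v2",
                      "advance_to_interview", "hr.lead@example.com",
                      "careers@example.com"))
```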

And finally, don’t be passive. Just because Indian law isn’t heavy-handed yet doesn’t mean you should wait for a court order to take action. The best HR teams are moving ahead of the law, because trust, transparency, and control aren’t legal requirements. They’re how you win the talent game.