Talent & Tech Asia Summit 2025
Legal Resolve: Practical tips for HR to balance AI, compliance and fairness

Adam Hugill, Partner at Hugill & Ip Solicitors, highlights the bias and discrimination risks in using AI, and provides guidance on tackling candidates who may 'game' the system.

AI has taken over a growing share of HR tasks, from administrative routines to recruitment, allowing HR professionals to focus on the more strategic aspects of their roles.

While AI adoption in the workplace has brought significant convenience, it has also raised concerns about data privacy, compliance, and fairness.

As AI and digital tools become integral to HR’s agenda, balancing these complexities has become more crucial than ever.

In this exclusive conversation with HRO’s Tracy Chan, Adam Hugill, Partner at Hugill & Ip Solicitors, shares essential tips to help HR professionals and organisations navigate and avoid potential legal pitfalls, and effectively equip their teams for AI-driven practices.

1. As AI tools process vast amounts of employee data, how can organisations ensure compliance with data protection laws while avoiding risks like data leakage or misuse? Are there any specific legal frameworks they should prioritise?

To ensure compliance with Hong Kong data protection laws while using AI tools to process employee data, organisations must first understand the legal framework. The Personal Data (Privacy) Ordinance, Cap. 486 (“PDPO”) is the primary legislation governing personal data privacy in Hong Kong. Organisations should familiarise themselves with its principles, including:

  i. Data collection: Collect data for lawful and clear purposes;
 ii. Data use: Use data only for the purposes for which it was collected;
iii. Data retention: Retain personal data only as long as necessary; and
iv. Data security: Implement appropriate security measures to protect data.

These principles should form the core of an organisation’s data protection policies which it should then continuously review and update to reflect changes in the legal landscape and emerging risks associated with new technologies.
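As an illustration, the retention principle can be operationalised as a routine check over stored records. The sketch below is a minimal, hypothetical example: the field names and the seven-year retention period are assumptions for illustration, not requirements of the PDPO, which simply requires that data be kept no longer than necessary.

```python
from datetime import datetime, timedelta

# Hypothetical retention period; the PDPO does not fix a number of years.
RETENTION_PERIOD = timedelta(days=7 * 365)

def records_due_for_deletion(records, today=None):
    """Return records whose collection date exceeds the retention period.

    Each record is a dict with assumed fields 'id' and 'collected_on'
    (a datetime). Flagged records should be reviewed and securely erased.
    """
    today = today or datetime.now()
    return [r for r in records if today - r["collected_on"] > RETENTION_PERIOD]

# Example: one stale record, one recent record
records = [
    {"id": "emp-001", "collected_on": datetime(2015, 1, 1)},
    {"id": "emp-002", "collected_on": datetime(2024, 6, 1)},
]
stale = records_due_for_deletion(records, today=datetime(2025, 1, 1))
```

A scheduled job of this kind makes the retention principle auditable rather than aspirational, and its output feeds naturally into the impact assessments discussed below.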

Organisations should also:

  • Conduct regular data protection impact assessments and audits to identify and mitigate risks specifically related to AI data processing activities. This helps in assessing the impact of AI tools on employee data privacy.
  • Utilise other technology to implement robust data security measures to prevent data leakage or misuse, including encryption (both during data transit and storage) and access controls limiting access to authorised personnel only.
  • Maintain a response plan for data breaches, including notification procedures to the relevant authorities and affected individuals as required by the PDPO.
  • Promote a culture of compliance through regular training.

By prioritising these strategies, organisations can effectively navigate the complexities of data protection in Hong Kong while leveraging AI tools for processing employee data.

2. AI has now been increasingly used for recruitment, which has also raised concerns about potential bias and discrimination. What are some common pitfalls organisations should take note of, and what steps can they take to mitigate the risks and ensure fair hiring practices with AI?

When using AI for recruitment, there are several common pitfalls that can lead to bias and discrimination, including:

  • Bias in training data – AI systems often learn from historical data that may reflect existing biases, leading to discriminatory outcomes against certain groups.
  • Lack of transparency – AI algorithms can be opaque, making it difficult for organisations to understand how decisions are made, which can obscure bias issues.
  • Over-reliance on AI – Relying solely on AI for candidate selection can overlook the nuanced understanding that human recruiters provide.
  • Ignoring contextual factors – AI may not account for contextual factors that can affect a candidate's qualifications or potential.
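One concrete check that targets the first two pitfalls is the "four-fifths rule" used in US employment practice: compare each group's selection rate against the most-favoured group and flag any ratio below 0.8 as potential adverse impact. The sketch below is illustrative only; the group labels and the 0.8 threshold come from that convention, not from this article.

```python
def adverse_impact_ratios(outcomes):
    """Compute selection-rate ratios per group against the best-performing group.

    `outcomes` maps a group label to a (selected, applicants) pair.
    A ratio below 0.8 is the conventional "four-fifths rule" red flag.
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: group B's selection rate is half of group A's
ratios = adverse_impact_ratios({"A": (40, 100), "B": (20, 100)})
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Running such a check on each AI screening round, and keeping the results, also generates the audit trail recommended later for handling disputes.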

3. Are you concerned that candidates may try to 'game' the system? As in use keywords that are in demand, despite not having the right experiences. If so, what can be done to manage this practice?

Candidates, especially younger, technically savvy candidates early in their careers, may be tempted to try this, but they should be aware that the practice can skew the hiring process and lead to mismatches between candidate qualifications and job requirements.

However, recruiters can combat this with algorithms that assess the context of keywords, not just their presence, helping to identify candidates who genuinely possess the necessary skills and experience. Natural language processing (NLP) techniques can also analyse resumes and applications more deeply and help identify patterns in language usage that might indicate superficial or exaggerated claims.
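As an illustration of the keyword-context idea, the sketch below flags resumes where in-demand keywords appear at an unusually high density with little surrounding text. It is a deliberately simple heuristic, not a production NLP pipeline; the keyword list and threshold are assumptions for the example.

```python
import re
from collections import Counter

KEYWORDS = {"python", "kubernetes", "tensorflow"}  # illustrative in-demand terms
DENSITY_THRESHOLD = 0.15  # flag if keywords exceed 15% of all words

def keyword_density(text):
    """Fraction of words in `text` that are in-demand keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[k] for k in KEYWORDS)
    return hits / len(words)

def looks_stuffed(text):
    """Heuristic: very high keyword density suggests 'keyword stuffing'."""
    return keyword_density(text) > DENSITY_THRESHOLD

stuffed = "python python kubernetes tensorflow python kubernetes"
genuine = ("Built a reporting service in Python, deployed on Kubernetes, "
           "and maintained data pipelines for three product teams.")
```

A real screening tool would weigh keywords against the sentences they appear in, but even this crude density check separates a bare keyword list from a claim embedded in a described accomplishment.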

That said, nothing replaces a rigorous interview process, ideally in person but also virtually, to verify a candidate’s background. Interviews that incorporate behavioural and situational questions requiring candidates to provide specific examples of their skills can be very revealing and help guard against ‘keyword stuffing’.

If a role requires specific technical knowledge, candidates can be asked to complete practical assessments rather than relying solely on resumes or applications. This can include coding tests, writing samples, or other relevant tasks.

4. If a candidate or employee feels they’ve been unfairly treated by an AI-driven process, how should companies prepare to handle such disputes?

To handle these kinds of disputes effectively, companies should provide candidates with access to clear procedures for reporting concerns or disputes related to AI-driven processes.

Ideally this would include multiple channels for reporting, such as direct supervisors, HR representatives, or dedicated hotlines. HR personnel and management should also be specifically trained on how to address complaints regarding AI-driven decisions, requiring these personnel to have an understanding of and, ideally, be able to explain the AI processes employed by the organisation.

It may sound obvious, but also ensure that any concerns or disputes of this nature are addressed via a human review, ensuring that candidates and employees can appeal unfair decisions.

To prevent disputes rather than merely resolve them, organisations can:

  • Clearly inform candidates that AI will be used in the hiring and evaluation processes. If possible, the criteria, algorithms used, and how decisions are made should also be communicated.
  • Regularly audit AI systems for bias and fairness. This proactive approach can help build trust and show a commitment to equitable practices.
  • Keep detailed records of AI processes, decisions, and communications. This documentation can provide clarity and context when addressing disputes.
  • Engage external or third-party experts or consultants to review AI systems and dispute processes; this can lend credibility to the company’s approach and provide impartial insights.

5. You’ve highlighted the importance of clear policies and training for HR teams. What are the key elements of an effective policy for using AI in HR, and how can organisations ensure their teams are adequately trained to navigate potential legal issues?

An effective policy for using AI in HR should encompass several key elements, including:

  • Purpose and scope – Clearly define the purpose of using AI in HR processes, including recruitment, performance evaluation, and employee engagement. Specify the scope of the policy to cover all AI applications within HR.
  • Compliance with laws and regulations – Ensure that the policy complies with relevant laws, such as data protection regulations (e.g., GDPR, PDPO) and anti-discrimination laws. Include a commitment to uphold legal standards.
  • Transparency and explainability – Establish guidelines for transparency in AI decision-making processes, and ensure that candidates understand how AI systems work and how decisions are made.
  • Data privacy and security – Outline measures for protecting personal data processed by AI systems. This includes data collection, storage, access controls, and data breach response protocols.
  • Bias mitigation strategies – Include procedures for identifying and mitigating biases in AI algorithms, regularly auditing them for fairness and making corrections as necessary.
  • Human oversight – Define roles for human oversight in AI-driven processes, specifying situations where human judgment is required to review AI-generated decisions.
  • Feedback and reporting mechanisms – Create channels for employees and candidates to provide feedback or report concerns regarding AI use, and ensure there is a clear process for addressing these concerns.
  • Continuous monitoring and improvement – Include provisions for the ongoing evaluation of AI systems and policies to adapt to changing legal standards, technological advancements, and societal expectations.

Training strategies for teams:

  • Comprehensive training programmes – Develop training programmes that cover the ethical, legal, and practical aspects of using AI in HR. Include topics like data privacy, bias recognition, and the importance of human oversight.
  • Regular workshops and seminars – Host workshops and seminars that focus on emerging legal issues related to AI, such as recent case studies, regulatory changes, and best practices.
  • Scenario-based training – Use real-world scenarios and case studies to help employees understand the implications of AI decisions and how to handle disputes or concerns effectively.
  • Collaboration with legal experts – Engage legal professionals to conduct training sessions on compliance and risk management related to AI in HR. This ensures that teams are well-informed about potential legal issues.
  • Ongoing support and resources – Provide access to resources, such as guidelines, handbooks, and online courses, for continuous learning. Encourage employees to stay updated on legal developments related to AI.
  • Feedback mechanisms for training – Establish feedback mechanisms to evaluate the effectiveness of training programmes. Use this input to refine and improve training content.

6. As AI continues to evolve, how can organisations future-proof their HR policies to adapt to emerging legal challenges and technological advancements?

To future-proof HR policies in the face of fast-evolving AI technologies, organisations should set aside time to regularly review and update their policies to reflect industry advancements, legal requirements, and societal expectations.

Policies should also be flexible and adaptable to new technologies and practices, for example by including broad language that allows for adjustments as AI tools and legal standards develop.

More than ever, organisations should stay informed and provide continuous education and training on legal developments – monitoring changes in laws and regulations related to data protection, employment, and AI.

In this regard it might be prudent to create cross-functional teams that include HR, legal, IT, and compliance personnel to oversee AI initiatives, with clearly defined roles and responsibilities. All of these stakeholders should stay connected with industry peers and professional organisations to share knowledge, ethical considerations, and best practices regarding AI and HR policies. This will ensure that the values of fairness, transparency, and accountability are embedded into their organisational culture.


Photo / Provided (Adam Hugill, Partner, Hugill & Ip Solicitors)
