Jonathan Isaacs, Asia Pacific Chair, Baker McKenzie’s Employment & Compensation Practice, calls on HR leaders to take decisive steps against deepfake threats — invest in training, stay alert, tighten verification checks, strengthen policies, and promote a culture of reporting.
The rise of technological sophistication is creating a new era for workplace risks. The misuse of artificial intelligence (AI) to create deepfakes has key implications for operational, financial and reputational management for HR leaders. Explore more to find out what deepfakes are, how they are impacting the HR landscape and what key actions leaders should be taking now.
What are deepfakes?
Deepfakes are AI-generated audio, images or videos that manipulate source material to misrepresent a person's image or voice. The subject appears to be saying or doing something that they are not in fact saying or doing, which opens up the risk of malicious use — including spreading misinformation or enabling scammers to commit fraud for financial gain. Deepfake fraud in Asia Pacific is now a real threat.
How can deepfakes in the workplace affect a company’s financial position?
The impersonation of senior-level employees through deepfakes can lead to unauthorised access to sensitive information or company funds. Employees in the finance or accounts department are common targets of such deepfake fraud — tricked into believing that they are following senior executive instructions and executing urgent money transfers out of the organisation, thereby causing financial loss.
Fraudsters achieve this by using AI-generated videos or voice clones of the relevant senior executive to convey instructions to the (often more junior) employee and typically use urgency tactics to minimise time for verification.
Why is deepfake workplace harassment a real concern for HR leaders?
Deepfakes can be used to create manipulated and compromising images of a fellow employee, which are then distributed to others. Dissemination of a sexually compromising deepfake image of a fellow employee could amount to sexual harassment under anti-discrimination legislation.
"Aside from detrimental impacts on wellbeing and the workplace environment, harassment can also expose employers to liability since they can be vicariously liable for the discriminatory acts of their employees."
In Hong Kong, sexual, breastfeeding, disability and racial harassment are prohibited. Under the Sex Discrimination Ordinance, sexual harassment occurs when a person engages in unwelcome conduct of a sexual nature in circumstances in which a reasonable person, having regard to all the circumstances, would have anticipated that the recipient would be offended, humiliated or intimidated. Sexual harassment can also occur if a person engages in conduct of a sexual nature which creates a hostile or intimidating environment for the recipient.
What data privacy risks arise in the age of deepfakes in the workplace?
Threat actors have been known to use deepfake technology to access and misuse personal data. Under the Personal Data (Privacy) Ordinance, an employer has an obligation to take all practicable steps to ensure that any personal data it holds is protected against unauthorised or accidental access, processing, erasure, loss or use. If deepfakes have been created using employer-stored personal data, the employer may face investigation by the Privacy Commissioner for Personal Data.
Why is deepfake fraud through evidence manipulation a key concern for HR and legal leaders?
As deepfake technology becomes more sophisticated, evidence manipulation is becoming more difficult to spot. When conducting internal investigations into employee misconduct, employers may come across AI-manipulated evidence, which adds time and cost to the investigation and complicates due process.
Furthermore, if manipulated evidence is used as a basis for disciplinary action against employees (knowingly or otherwise), the resulting risk of unfair or wrongful dismissal claims may impose further time and cost burdens.
What can HR leaders and employers do to mitigate the risks associated with deepfakes in the workplace?
Five key actions to protect against the risk of deepfakes:
- Invest in training: Provide training to employees on what deepfakes are and how to spot them in calls and video meetings, e.g. video glitching, unusual speech patterns or mannerisms, or unnatural movements.
- Be vigilant: When investigating employee wrongdoing and in particular when collating and evaluating evidence, be mindful of deepfakes and the possibility of evidence manipulation.
- Enforce multi-layer verification: Train employees to be sceptical of unusual, urgent or pressurising requests, particularly requests for personal data or payment. Put in place a second layer of verification for such requests, such as dual authorisation, and implement verification procedures such as call-backs when in doubt.
- Have robust policies: Ensure policies are in place that prohibit the unauthorised use of any employee's image, voice or personal data. Specifically reference the prohibition of deepfake-related harassment in the organisation's anti-harassment policies.
- Promote a positive culture of reporting: Ensure your organisation has an open culture for reporting suspicious activity and that employees are clear on the relevant reporting channels.