Security risk from third-party meeting bots

Executive Summary

  • Third-party virtual meeting bots that intrude into meetings without organisers’ consent pose a growing security risk.

  • Third-party bots can intrude into meetings through a variety of pathways, including several covert ones, which increases the risk of bots going undetected and sensitive information being exfiltrated.

  • Organisations often overlook these risks, allowing sensitive information to leave the organisation and increasing the likelihood of data breaches and serious privacy violations.


If you are unsure how third-party virtual meeting bots affect your organisation

CONTACT US


Virtual meeting bots

Meeting bots are typically used to streamline processes such as transcription, recording, and summary generation. Some bots are built into common productivity suites, such as Gemini within Google Workspace. Others are produced by external vendors. 

The number and use of third-party meeting bots have increased significantly as companies seek to capitalise on growing demand for automation in virtual meetings. Tools such as Fireflies.ai, Read.ai, and Otter.ai offer compatibility with Zoom, Microsoft Teams, and Google Meet.

Security risks from intruding bots

Virtual meetings are key enablers of the modern economy. However, they also pose security risks, as high-profile breaches have shown. As virtual meetings became more common, organisations discovered that poor security practices could allow uninvited guests to join, a practice sometimes termed ‘Zoombombing’.

Recently, however, we have observed that these uninvited guests are increasingly likely to be not humans but virtual meeting bots. The risk is greatest with third-party meeting bots, which can operate outside organisational boundaries.

Common third-party bots may be overlooked by a meeting’s intended participants, who may assume the bot is supposed to be attending, or may not even notice the silent attendee. This poses an obvious security risk: an electronic eavesdropper recording the meeting and capturing transcripts.

The risk is particularly pronounced during meetings with external parties, as external users can covertly introduce bots. The host organisation often cannot enforce its technical controls on external users, leaving it unable to prevent intrusions and the transfer of confidential data.

Risk multipliers

The use of third-party bots to conduct such surveillance is in some respects more damaging than other forms of targeted surveillance. Not only is the information captured by the attacker; it is also likely stored and processed by the third-party service provider. Many of these vendors use broadly worded Terms of Service: Otter.ai, for example, specifies that it may use collected data for any purpose and may use it to improve its platform even after service termination.

The risk of data breach increases significantly when intrusions stem from bot vendors with poor security practices. In June 2025, security researchers discovered that McDonald’s AI hiring chatbot had fundamental security vulnerabilities, including the use of the weak default password ‘123456’, which granted them access to approximately 64 million McDonald’s applicant records. This raises the prospect of data being accessible not only to the attacker and the service provider, but also openly online.

Moreover, intruding bots that automatically record meetings pose serious privacy and legal risks. Recording meetings without informed participant consent can breach Data Protection Act (DPA) and GDPR requirements, potentially leading to fines of up to €20 million or 4% of global annual turnover. Individuals recorded in virtual meetings without consent have reported being ignored by third-party bot vendors when they requested that recordings be deleted. Even if a bot intrudes without detection, the organisation hosting the virtual meeting may still be liable if found to have negligently enabled the bot’s presence through insecure technical controls or policies.

Blocking the intruding bot

Organisations should begin by ensuring that security policies cover the use of third-party meeting bots. Universities including Cambridge, Oxford, and Stanford have produced advisories outlining how to prevent note-taking bots from attending seminars and lectures. Campaigns to raise awareness of the threat among organisational personnel will also be beneficial. 

Awareness efforts should highlight the longer-term risks of using unapproved AI systems. For instance, users who have tested third-party bots and only later discovered the associated risks have struggled for months to purge the technology from their environments. Training exercises that highlight these risks greatly increase the likelihood that the threat is identified and mitigated before damage occurs.

Technical security controls against bot intrusions begin with hardening meeting entry policies on each platform. In our experience, Microsoft Teams has relatively lax baseline entry policies that require hardening to reduce the risk of intrusion. Some of the most effective technical measures include preventing meeting invitation links from being forwarded, introducing waiting lobbies, and restricting which users can admit participants; a programmatic sketch follows below.
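
To illustrate, lobby and presenter restrictions can also be applied per meeting through the Microsoft Graph API. The following is a minimal Python sketch, assuming a valid access token with the OnlineMeetings.ReadWrite permission and a known meeting ID; both placeholders, and the particular setting values chosen, are illustrative rather than a definitive implementation.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    ACCESS_TOKEN = "<token with OnlineMeetings.ReadWrite>"  # placeholder: obtain via your OAuth flow
    MEETING_ID = "<online meeting id>"                      # placeholder: the meeting to harden

    def harden_meeting(token: str, meeting_id: str) -> None:
        """Tighten entry settings on a single Teams meeting via Microsoft Graph."""
        settings = {
            # Send everyone, including internal users, through the waiting lobby.
            "lobbyBypassSettings": {
                "scope": "organizer",
                "isDialInBypassEnabled": False,
            },
            # Only the organiser presents, and therefore admits from the lobby.
            "allowedPresenters": "organizer",
        }
        resp = requests.patch(
            f"{GRAPH}/me/onlineMeetings/{meeting_id}",
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
            json=settings,
            timeout=30,
        )
        resp.raise_for_status()

    harden_meeting(ACCESS_TOKEN, MEETING_ID)

Tenant-wide defaults can also be set centrally; the per-meeting call above is simply the narrowest, least disruptive starting point.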

Challenges arise when external collaborators bring third-party bots into virtual meetings. The situation is aggravated when bots arrive through harder-to-detect vectors, such as deployment through browser plug-ins or automatic attendance after synchronisation with participants’ calendars. In these circumstances, tailored technical and organisational solutions, such as the calendar screening sketched below, are the best option for organisations.
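
One tailored control is to screen upcoming calendar entries for known bot vendors before meetings begin, which catches bots that attach themselves through calendar synchronisation. The sketch below, again a Python illustration against the Microsoft Graph API, flags attendees whose addresses fall under a watchlist of bot domains; the token placeholder and the domain list are assumptions that would need tailoring to your environment.

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    ACCESS_TOKEN = "<token with Calendars.Read>"  # placeholder: obtain via your OAuth flow

    # Illustrative watchlist; a real deployment would maintain a curated list.
    BOT_DOMAINS = {"fireflies.ai", "otter.ai", "read.ai"}

    def flag_bot_attendees(token: str) -> list[tuple[str, str]]:
        """Return (meeting subject, suspect address) pairs for upcoming events."""
        resp = requests.get(
            f"{GRAPH}/me/events",
            headers={"Authorization": f"Bearer {token}"},
            params={"$select": "subject,attendees", "$top": "50"},
            timeout=30,
        )
        resp.raise_for_status()
        suspects = []
        for event in resp.json().get("value", []):
            for attendee in event.get("attendees", []):
                address = attendee.get("emailAddress", {}).get("address", "")
                domain = address.rsplit("@", 1)[-1].lower()
                if domain in BOT_DOMAINS:
                    suspects.append((event.get("subject", ""), address))
        return suspects

    for subject, address in flag_bot_attendees(ACCESS_TOKEN):
        print(f"Possible bot in '{subject}': {address}")

Note that this screens only bots visible as calendar attendees; bots joining through forwarded links or browser plug-ins still require the in-meeting lobby controls described above.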


TYBURN RECOMMENDATIONS

At Tyburn St Raphael, we specialise in countering evolving cyber and hybrid threats to risk-sensitive organisations. Our experts come from UK government, military, and academic backgrounds. We provide training built around impactful exercises to develop security best practices in businesses, and we deliver tailored cybersecurity solutions.

We recommend:

Ensure organisational policies on the use of third-party bots are clear

  • Organisational policies should state clearly whether, and under what conditions, third-party bots may be used in or admitted to meetings.

Integrate security risks from third-party bot intrusions within assurance processes

  • Third-party bot intrusions should be treated as a threat vector when developing exercises to test the organisation’s security posture and strengthen crisis management.

Engage security experts to develop solutions tailored to your environment

  • Third-party bot intrusions can be complex and challenging to counter. Experts can provide specialist advice and solutions tailored to your organisation that reduce this risk.
