A new security risk has recently been brought to my attention. I was on a Teams call that included an attorney who would not let the call continue while an AI notetaker was present. His comment was that the notetaker is listening to everything that is said and transmitting it verbatim to a data center somewhere in the cloud. He said he was aghast that people would hold meetings about sensitive topics and then give everything that was said to unknown parties outside of the call. He used the analogy that having an AI notetaker is the equivalent of inviting a reporter into a meeting.
It didn’t take much research to realize he is right. An AI notetaker records everything said in a meeting so that AI servers somewhere in the cloud can produce a transcript or summary. Every word spoken, from the brilliant to the mundane, is sent to a data center outside the control of the people on the call.
There is no way to know what the folks who control the recording will do with it. At a minimum, it’s almost certain they are using the data to further train AI models, which are voracious for more data. A record of the meeting could be sold to others. It’s possible, and even likely, that somebody skilled at AI prompting could extract what was discussed at a specific corporate meeting.
Of course, the AI notetaker companies can all swear that they don’t use the data for purposes other than making a summary of the meeting. But I have to ask: does anybody have the slightest idea who owns and works at these businesses, and do you trust them?
Nobody would let an unknown stranger sit in on a work meeting, but that’s exactly what companies are doing with AI notetakers. They have begun willingly sharing conversations with the cloud that they might not even want to share with everybody else inside their own company. It’s hard to see this as anything but a self-inflicted data breach.
Before writing this blog, I asked a few people about this. One friend who is an AI expert said that it would be too tempting for anybody in this kind of business to monetize the data they are gathering by selling it to others to train AI models. He said that most AI companies are struggling to be profitable, and that secondary revenue streams have to be tempting (just as it is tempting for ISPs to sell user data). He thought it would be too expensive for companies to routinely sift through the data for tidbits of corporate espionage, but that it would be possible for anybody willing to spend the processing time, or for anybody interested in a specific business or person. He also said he would be worried that AI companies could be using the data to build voice prints of meeting participants, something that would otherwise be hard to obtain for most people.
I have no knowledge that the companies in this line of business are engaging in any nefarious activities with the data they gather, and perhaps they are not. But letting key information out of a closed circle of people on a call is practically the definition of a security risk. There is no way to know if this might harm a business.
A few companies sell notetakers that they claim keep all data on a user’s computer rather than sharing it with the cloud. But the AI engine that summarizes a call is still likely to run in the cloud, so unless that claim can be verified somehow, this still feels like a risk. Tech companies have been lying to the public about how they use the data they gather ever since AOL and the early web companies figured out how to monetize user data.
This is one of the oddest blogs I’ve ever written because it makes me wonder if I’m being paranoid. But that feeling is probably a sign that this is a real concern.