Crisis Text Line has decided to stop sharing conversation data with spun-off AI company Loris.ai after facing scrutiny from data privacy experts. “During these past days, we have listened closely to our community’s concerns,” the hotline service writes in a statement on its website. “We hear you. Crisis Text Line has had an open and public relationship with Loris AI. We understand that you don’t want Crisis Text Line to share any data with Loris, even though the data is handled securely, anonymized and scrubbed of personally identifiable information.” Loris.ai will delete any data it has received from Crisis Text Line.
Politico recently reported that Crisis Text Line (which is not affiliated with the National Suicide Prevention Lifeline) had been sharing conversation data with Loris.ai, which builds AI systems designed to help customer service representatives hold more empathetic conversations. Crisis Text Line is a not-for-profit service that provides a text line for mental health discussions, but it is also a shareholder in Loris.ai and, according to Politico, at one point shared a CEO with the company.
Before hotline users speak with volunteer counselors, they consent to data collection and can read the organization’s data-sharing practices. Politico quoted one former volunteer who claimed that people who contact the line “have an expectation that the conversation is between just the two people that are talking” and who said he was fired in August after raising concerns about CTL’s handling of data. Since then, that same volunteer has started a petition pushing CTL “to reform its data ethics.”
Politico also noted the central role Crisis Text Line says data and AI play in its operations:
“Data science and AI are at the heart of the organization — ensuring, it says, that those in the highest-stakes situations wait no more than 30 seconds before they start messaging with one of its thousands of volunteer counselors. It says it combs the data it collects for insights that can help identify the neediest cases or zero in on people’s troubles, in much the same way that Amazon, Facebook and Google mine trends from likes and searches.”
“A recent story cherry-picked and omitted information about our data privacy policies. We want to clarify and fill in facts that were missing so people understand why ethical data privacy is foundational to our work. A thread:”
— Crisis Text Line (@CrisisTextLine) January 29, 2022
Following the report, Crisis Text Line responded with a statement on its website and a Twitter thread. In the statement, it said it does not “sell or share personally identifiable data with any organization or company.” It went on to claim that “[t]he only for-profit partner that we have shared fully scrubbed and anonymized data with is Loris.ai. We founded Loris.ai to leverage the lessons learned from operating our service to make customer support more human and empathetic. Loris.ai is a for-profit company that helps other for-profit companies employ de-escalation techniques in some of their most notoriously stressful and painful moments between customer service representatives and customers.”
In its defense, Crisis Text Line said its “data scrubbing process has been substantiated by independent privacy watchdogs such as the Electronic Privacy Information Center, which called Crisis Text Line ‘a model steward of personal data.’” It was citing a 2018 letter to the FCC. That defense is shakier now, however, because the Electronic Privacy Information Center (EPIC) has responded with its own statement saying the quote was used outside of its original context:
“Our statements in that letter were based on a discussion with CTL about their data anonymization and scrubbing policies for academic research sharing, not a technical review of their data practices. Our review was not related to, and we did not discuss with CTL, the commercial data transfer arrangement between CTL and Loris.ai. If we had, we could have raised the ethical concerns with the commercial use of intimate message data directly with the organization and their advisors. But we were not, and the reference to our letter now, out of context, is wrong.”
Loris.ai’s website claims that “safeguarding personal data is at the heart of everything we do” and that “we draw our insights from anonymized, aggregated data that have been scrubbed of Personally Identifiable Information (PII).” That’s not enough for EPIC, which argues that Loris and CTL are seeking to “extract commercial value out of the most sensitive, intimate, and vulnerable moments in the lives [of] those individuals seeking mental health assistance and of the hard-working volunteer responders… No data scrubbing technique or statement in a terms of service can resolve that ethical violation.”
Update, 10:15PM ET: This story has been updated to reflect Crisis Text Line’s decision to stop sharing data with Loris.ai.