When New Scientist published a report revealing that Britain’s National Health Service (NHS) had quietly shared the healthcare data of 1.6 million patients with DeepMind, Google’s London-based artificial intelligence (AI) division, criticism spread across the nation’s newspapers.
While the information was ostensibly shared to build an app that would help hospital staff monitor kidney disease, the magazine reported that the agreement went much further. The notion of a tech behemoth poking around the most private of personal data, without direct consent, alarmed patient and privacy groups. The unwillingness of DeepMind and the NHS Trust involved to discuss their plans for using the data — anonymized yet intimate details including patient location, visitor information, pathology reports, HIV status, past drug overdoses and abortions — didn’t help matters.
Among clinicians who analyze large datasets in the hopes of curing disease and improving care, this was a non-story. Seeking insights and trends in anonymized data is common practice and leads to vital medical discoveries. Applying AI to this task promises to extend and accelerate the benefits. Recognizing this, New Scientist commented: “This tension between privacy and progress is a critical issue for modern society. Powerful technology companies can tell us valuable things, but only if we give them control of our data.”
I think this is a false and risky assumption. As a computer scientist who focuses on data mining and serves as the CTO of a cognitive computing company, I have no shortage of enthusiasm for the possibilities of AI and data analytics. But as a former military officer, I also understand why it is important, and often essential, to keep certain data private.