
The contract between the UK’s National Health Service (NHS) and ecommerce giant Amazon — for a health information licensing partnership involving its Alexa voice AI — has been released following a Freedom of Information request.

The government announced the partnership this summer. But the date on the contract, which was published on the gov.uk contracts finder site months after the FOI was filed, shows the open-ended arrangement to funnel nipped-and-tucked health info from the NHS’ website to Alexa users in audio form was inked back in December 2018.

The contract is between the UK government and Amazon US (Amazon Digital Services, Delaware), rather than Amazon UK — although the company confirmed to us that NHS content will only be served to UK Alexa users.

Nor is it a standard NHS Choices content syndication contract. A spokeswoman for the Department of Health and Social Care (DHSC) confirmed the legal agreement uses an Amazon contract template. She told us the department had worked jointly with Amazon to adapt the template to fit the intended use — i.e. access to publicly funded healthcare information from the NHS’ website.

The NHS does make the same information freely available on its website, of course — as well as via API, to some 1,500 organizations (see the sketch below). But Amazon is not just any organization; it’s a powerful US platform giant with a massive ecommerce business.
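For a sense of what that syndication route looks like, here’s a minimal sketch of pulling the same condition content from the NHS website’s content API. The base URL, path and auth header here are our assumptions for illustration; real access requires registering for an API key.

```python
# Minimal sketch (assumed endpoint/auth details) of fetching the same NHS
# content Alexa serves, via the NHS website content API.
import requests

API_BASE = "https://api.nhs.uk"    # assumed base URL
API_KEY = "YOUR-SUBSCRIPTION-KEY"  # hypothetical placeholder, issued on registration

def get_condition(slug: str) -> dict:
    """Fetch the NHS web page content for a condition, e.g. 'chickenpox'."""
    resp = requests.get(
        f"{API_BASE}/conditions/{slug}/",
        headers={"subscription-key": API_KEY},  # assumed auth scheme
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    page = get_condition("chickenpox")
    # The payload is structured content, which is what makes it easy for a
    # partner like Amazon to repackage as audio responses.
    print(page.get("name"), "-", page.get("description"))
```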

The contract reflects that power imbalance: it is not a standard NHS content syndication agreement, but rather Amazon’s standard terms, tweaked by DHSC.

“It was drawn up between both Amazon UK and the Department for Health and Social Care,” a department spokeswoman told us. “Given that Amazon is in the business of holding standard agreements with content providers they provided the template that was used as the starting point for the discussions but it was drawn up in negotiation with the Department for Health and Social Care, and obviously it was altered to apply to UK law rather than US law.”

In July, when the government officially announced the Alexa-NHS partnership, its PR provided a few sample queries of how Amazon’s voice AI might respond to what it dubbed “NHS-verified” information — such as: “Alexa, how do I treat a migraine?”; “Alexa, what are the symptoms of flu?”; “Alexa, what are the symptoms of chickenpox?”.

But of course as anyone who’s ever googled a health symptom could tell you, the types of stuff people are actually likely to ask Alexa — once they realize they can treat it as an NHS-verified info-dispensing robot, and go down the symptom-querying rabbit hole — are likely to range very far beyond the common cold.

At the official launch of what the government couched as a ‘collaboration’ with Amazon, it explained its decision to allow NHS content to be freely piped through Alexa by suggesting that voice technology has “the potential to reduce the pressure on the NHS and GPs by providing information for common illnesses”.

Its PR cited an unattributed claim that “by 2020, half of all searches are expected to be made through voice-assisted technology”.

This prediction is frequently attributed to ComScore, a media measurement firm that was last month charged with fraud by the SEC. However, it actually appears to originate with computer scientist Andrew Ng, from when he was chief scientist at Chinese tech giant Baidu.

Econsultancy noted last year that Mary Meeker included Ng’s claim on a slide in her 2016 Internet Trends report — which is likely how the prediction got so widely amplified.

But on Meeker’s slide you can see that the prediction is in fact “images or speech”, not voice alone…

[Screenshot: the slide from Mary Meeker’s 2016 Internet Trends report showing the “images or speech” prediction]

So it turns out the UK government incorrectly cited a tech giant prediction to push a claim that “voice search has been increasing rapidly” — which in turn served as its justification for funnelling NHS users towards Amazon.

“We want to empower every patient to take better control of their healthcare and technology like this is a great example of how people can access reliable, world-leading NHS advice from the comfort of their home, reducing the pressure on our hardworking GPs and pharmacists,” said health secretary Matt Hancock in a July statement.

Since landing at the health department, the app-loving former digital minister has been pushing a tech-first agenda for transforming the NHS — promising to plug in “healthtech” apps and services, and touting “preventative, predictive and personalised care”. He’s also announced an AI lab housed within a new unit that’s intended to oversee the digitization of the NHS.

Compared with all that, plugging the NHS’ website into Alexa probably seems like an easy ‘on-message’ win. But as soon as the collaboration was announced, concerns were raised that the government is recklessly mixing the streams of critical (and sensitive) national healthcare infrastructure with the rapacious data-appetite of a foreign tech giant that has both an advertising and ecommerce business, plus major ambitions of its own in the healthcare space.

On the latter front, just yesterday news broke of Amazon’s second health-related acquisition: Health Navigator, a startup with an API platform for integrating with health services, such as telemedicine and medical call centers, which offers natural language processing tools for documenting health complaints and care recommendations.

Last year Amazon also picked up online pharmacy PillPack, for just under $1BN. And just last month it launched a pilot of a healthcare service offering for its own employees in and around Seattle, called Amazon Care, which looks intended to be a road test for addressing the broader U.S. market down the line. So the company’s commercial designs on healthcare are becoming increasingly clear.

Returning to the UK, in response to early critical feedback on the Alexa-NHS arrangement, the IT delivery arm of the service, NHS Digital, published a blog post going into more detail about the arrangement — following what it couched as “interesting discussion about the challenges for the NHS of working with large commercial organisations like Amazon”.

A core critical “discussion” point is the question of what Amazon will do with people’s medical voice query data, given the partnership is clearly encouraging people to get used to asking Alexa for health advice.

“We have stuck to the fundamental principle of not agreeing a way of working with Amazon that we would not be willing to consider with any single partner – large or small. We have been careful about data, commercialisation, privacy and liability, and we have spent months working with knowledgeable colleagues to get it right,” NHS Digital claimed in July.

In another section of the blog post, responding to questions about what Amazon will do with the data and “what about privacy”, it further asserted there would be no health profiling of customers — writing:

We have worked with the Amazon team to ensure that we can be totally confident that Amazon is not sharing any of this information with third parties. Amazon has been very clear that it is not selling products or making product recommendations based on this health information, nor is it building a health profile on customers. All information is treated with high confidentiality. Amazon restrict access through multi-factor authentication, services are all encrypted, and regular audits run on their control environment to protect it.

Yet it turns out the contract DHSC signed with Amazon is just a content licensing agreement. There are no terms contained in it concerning what can or can’t be done with the medical voice query data Alexa is collecting with the help of “NHS-verified” information.

Per the contract terms, Amazon is required to attribute content to the NHS when Alexa responds to a query with information from the service’s website. (Though the company says Alexa also makes use of medical content from the Mayo Clinic and Wikipedia.) So, from the user’s point of view, they will at times feel like they’re talking to an NHS-branded service (i.e. when they hear Alexa serving them information attributed to the NHS’ website).

But without any legally binding confidentiality clauses around what can be done with their medical voice queries, it’s not clear how NHS Digital can confidently assert that Amazon isn’t creating health profiles. The situation seems to sum to, er, trust Amazon. (NHS Digital wouldn’t comment, saying it’s only responsible for delivery, not policy setting, and referring us to the DHSC.)

Asked what it does with medical voice query data generated as a result of the NHS collaboration, an Amazon spokesperson told us: “We do not build customer health profiles based on interactions with nhs.uk content or use such requests for marketing purposes.”

But the spokesperson could not point to any legally binding contract clauses in the licensing agreement that restrict what Amazon can do with people’s medical queries.

We also asked the company to confirm whether medical voice queries that return NHS content are being processed in the US. Amazon’s spokeswoman responded without a direct answer — saying only that queries are processed in the “cloud”. (“When you speak to Alexa, a recording of what you asked Alexa is sent to Amazon’s Cloud where we process your request and other information to respond to you.”)

“This collaboration only provides content already available on the NHS.UK website, and absolutely no personal data is being shared by NHS to Amazon or vice versa,” Amazon also told us, eliding the key point that it’s not NHS data being shared with Amazon but NHS users, reassured by the presence of a trusted public brand, being encouraged to feed Alexa sensitive personal data by asking about their ailments and health concerns.

Bizarrely, the Department of Health and Social Care went further. Its spokeswoman claimed in an email that “there will be no data shared, collected or processed by Amazon and this is just an alternative way of providing readily available information from NHS.UK.”

When we spoke to DHSC on the phone prior to this, to raise the issue of medical voice query data generated via the partnership and fed to Amazon — also asking where in the contract are clauses to protect people’s data — the spokeswoman said she would have to get back to us. All of which suggests the government has a very vague idea (to put it generously) of how cloud-powered voice AIs function.

Presumably no one at DHSC bothered to read the information on Amazon’s own Alexa privacy page — although the department spokeswoman was at least aware this page existed (because she knew Amazon had pointed us to what she called its “privacy notice”, which she said “sets out how customers are in control of their data and utterances”).

If you do read the page you’ll find Amazon offers some broad-brush explanation there which tells you that after an Alexa device has been woken by its wake word, the AI will “begin recording and sending your request to Amazon’s secure cloud”.

Ergo data is collected and processed. And indeed stored on Amazon’s servers. So, yes, data is ‘shared’. Not ‘NHS data’, but UK citizens’ personal data.
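To make that concrete, here’s an illustrative toy model — emphatically not Amazon’s actual protocol, and all names are ours — of what leaves the device once the wake word fires: the audio itself, plus identifiers that tie the query to a device and an account.

```python
# Illustrative sketch (not Amazon's actual protocol) of why "no data is
# shared" misunderstands how cloud voice assistants work: once the wake
# word fires, the recording leaves the device along with identifiers
# linking the query to an account.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VoiceQueryUpload:
    audio: bytes      # the recording made after the wake word
    device_id: str    # which device captured it
    account_id: str   # which account it belongs to
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def on_wake_word(mic_buffer: bytes, device_id: str, account_id: str) -> VoiceQueryUpload:
    """Package a captured utterance for cloud processing.

    Whatever the user asked -- "what are the symptoms of flu?" included --
    is inside `audio`, and it is inherently linked to the account.
    """
    return VoiceQueryUpload(audio=mic_buffer, device_id=device_id, account_id=account_id)

upload = on_wake_word(b"<pcm audio>", device_id="echo-kitchen", account_id="user-42")
print(upload.timestamp, upload.account_id, len(upload.audio), "bytes sent to the cloud")
```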

Amazon’s European Privacy Notice, meanwhile, sets out a laundry list of purposes for user data — from improving its services, to generating recommendations and personalization, to advertising. While on its Alexa Terms of Use page it writes: “To provide the Alexa service, personalize it, and improve our services, Amazon processes and retains your Alexa Interactions, such as your voice inputs, music playlists and your Alexa to-do and shopping lists, in the cloud.” [emphasis ours]

The DHSC sees the matter very differently, though.

With no contractual binds covering the health-related queries UK users of Alexa are being encouraged to whisper into Amazon’s robotic ears — data that’s naturally linked to Alexa and Amazon account IDs — the government is accepting the tech giant’s standard data processing terms for a commercial, consumer product that is deeply integrated into its increasingly sprawling business empire.

Terms such as indefinite retention of audio recordings, unless users pro-actively request that they are deleted. And even then Amazon admitted this summer it doesn’t always delete the text transcripts of recordings. So even if you keep deleting all your audio snippets, traces of medical queries may well remain on Amazon’s servers.

On this, Amazon’s spokeswoman told us that voice recordings and related transcripts are deleted when Alexa customers select to delete their recordings — pointing to the Alexa and Alexa Device FAQ where the company writes: “We will delete the voice recordings and the text transcripts of your request that you selected from Amazon’s Cloud”. Although in the same FAQ Amazon also notes: “We may still retain other records of your Alexa interactions, including records of actions Alexa took in response to your request.” So it sounds like some metadata around medical queries may remain, even post-deletion.
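Here’s a toy model of that post-deletion state, based only on the FAQ wording quoted above — the field names and structure are our assumptions, purely to show what “deletion” can still leave behind:

```python
# Toy model (our assumption, modelled on Amazon's FAQ wording) of what
# deletion leaves behind: the voice recording and transcript go, but a
# record of the interaction -- that a health query was made, and what
# Alexa did in response -- may persist.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlexaInteraction:
    interaction_id: str
    audio: Optional[bytes]     # deletable on request
    transcript: Optional[str]  # deletable on request
    action_taken: str          # e.g. "served nhs.uk migraine content" -- may be retained

def delete_recordings(record: AlexaInteraction) -> AlexaInteraction:
    """Honour a user deletion request as the FAQ describes it."""
    record.audio = None
    record.transcript = None
    # `action_taken` stands in for the "other records of your Alexa
    # interactions" the FAQ says may still be retained.
    return record

rec = AlexaInteraction("q-1", b"<audio>", "alexa how do i treat a migraine",
                       "served nhs.uk migraine content")
print(delete_recordings(rec))  # audio/transcript gone; the metadata trail remains
```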

Earlier this year it also emerged the company employs contractors around the world to listen in to Alexa recordings as part of internal efforts to improve the performance of the AI.

A number of tech giants recently admitted to the presence of such ‘speech grading’ programs, as they’re sometimes called — though none had been up front and transparent about the fact their shiny AIs needed an army of external human eavesdroppers to pull off a show of faux intelligence.

It’s been journalists highlighting the privacy risks for users of AI assistants, and media exposure creating the public pressure that forced tech giants to change concealed internal processes — processes that had, by default, treated people’s information as an owned commodity existing to serve their own corporate interests.

Data protection? Only if you interpret the term as meaning your personal data is theirs to capture and that they’ll aggressively defend the IP they generate from it.

So, in other words, actual humans — both employed by Amazon directly and not — may be listening to the medical stuff you’re telling Alexa. Unless the user finds and activates a recently added ‘no human review’ option buried in the Alexa app settings.

Many of these ‘speech grading’ arrangements remain under regulatory scrutiny in Europe. Amazon’s lead data protection regulator in Europe confirmed in August that it’s in discussions with the company over concerns related to its manual reviews of Alexa recordings. So UK citizens — whose taxes fund the NHS — might be forgiven for expecting more care from their own government around such a ‘collaboration’.

Rather than a wholesale swallowing of tech giant T&Cs in exchange for free access to the NHS brand and “NHS-verified” information — which helps Amazon burnish Alexa’s utility and credibility, while allowing it to gather valuable insights for its commercial healthcare ambitions.

To date there has been no recognition from DHSC that the government has a duty of care towards NHS users as regards the potential risks its content partnership might generate, as Alexa harvests their voice queries via a commercial conduit that affords users only very partial controls over what happens to their personal data.

Nor is DHSC considering the value being generously gifted by the state to Amazon — in exchange for a vague supposition that a few citizens might go to the doctor a bit less if a robot tells them what flu symptoms look like.

“The NHS logo is supposed to mean something,” says Sam Smith, coordinator at patient data privacy advocacy group, MedConfidential — one of the organizations that makes use of the NHS’ free APIs for health content (but which he points out did not write its own contract for the government to sign).

“When DHSC signed Amazon’s template contract to put the NHS logo on anything Amazon chooses to do, it left patients to fend for themselves against the business model of Amazon in America.”

In a related development this week, Europe’s data protection supervisor has warned of serious data protection concerns related to standard contracts EU institutions have inked with another tech giant, Microsoft, to use its software and services.

The watchdog recently created a strategic forum that’s intended to bring together the region’s public administrations to work on drawing up standard contracts with fairer terms for the public sector — to shrink the risk of institutions feeling outgunned and pressured into accepting T&Cs written by the same few powerful tech providers.

Such an effort is sorely needed — though it comes too late to hand-hold the UK government into striking more patient-sensitive terms with Amazon US.

This article was updated with a correction to a reference to the Alexa privacy policy. We originally referenced content from the privacy policy of another Amazon-owned Internet marketing company that’s also called Alexa, which is in fact a different service to Amazon’s Alexa voice assistant. We also updated the report to include additional responses from Amazon.

Read more: https://techcrunch.com/2019/10/24/alexa-where-are-the-legal-limits-on-what-amazon-can-do-with-my-health-data/

The UK government has rejected a parliamentary committee’s call for a levy on social media firms to fund digital literacy lessons to combat the impact of disinformation online.

The recommendation of a levy on social media platforms was made by the Digital, Culture, Media and Sport committee three months ago, in a preliminary report following a multi-month investigation into the impact of so-called ‘fake news’ on democratic processes.

The committee suggested the terms ‘misinformation’ and ‘disinformation’ be used instead, to better pin down exact types of problematic inauthentic content — and on that at least the government agrees. But just not on very much else. At least not yet.

Among around 50 policy suggestions in the interim report — which the committee put out quickly precisely in order to call for “urgent action” to ‘defend democracy’ — it urged the government to put forward proposals for an education levy on social media.

But in its response, released by the committee today, the government writes that it is “continuing to build the evidence base on a social media levy to inform our approach in this area”.

“We are aware that companies and charities are undertaking a wide range of work to tackle online harms and would want to ensure we do not negatively impact existing work,” it adds, suggesting it’s most keen not to be accused of making a tricky problem worse.

Earlier this year the government did announce plans to set up a dedicated national security unit to combat state-led disinformation campaigns, with the unit expected to monitor social media platforms to support faster debunking of online fakes — by being able to react more quickly to co-ordinated interference efforts by foreign states.

But going a step further and requiring social media platforms themselves to pay a levy to fund domestic education programs — to arm citizens with critical thinking capabilities so people can more intelligently parse content being algorithmically pushed at them — is not, apparently, forming part of government’s current thinking.

Though it is not taking the idea of some form of future social media tax off the table entirely, as it continues seeking ways to make big tech pay a fairer share of earnings into the public purse, also noting in its response: “We will be considering any levy in the context of existing work being led by HM Treasury in relation to corporate tax and the digital economy.”

As a whole, the government’s response to the DCMS committee’s laundry list of policy recommendations around the democratic risks of online disinformation can be summed up in a word as ‘cautious’ — with only three of the report’s forty-two recommendations being accepted outright, as the committee tells it, and four fully rejected.

Most of the rest are being filed under ‘come back later — we’re still looking into it’.

So if you take the view that ‘fake news’ online has already had a tangible and worrying impact on democratic debate the government’s response will come across as underwhelming and lacking in critical urgency. (Though it’s hardly alone on that front.)

The committee has reacted with disappointment — with chair Damian Collins dubbing the government response “disappointing and a missed opportunity”, and also accusing ministers of hiding behind ‘ongoing investigations’ to avoid commenting on the committee’s call that the UK’s National Crime Agency urgently carry out its own investigation into “allegations involving a number of companies”.

Earlier this month Collins also called for the Met Police to explain why they had not opened an investigation into Brexit-related campaign spending breaches.

It has also this month emerged that the force will not examine claims of Russian meddling in the referendum.

Meanwhile the political circus and business uncertainty triggered by the Brexit vote goes on.

Holding pattern

The bulk of the government’s response to the DCMS interim report entails flagging a number of existing and/or ongoing consultations and reviews — such as the ‘Protecting the Debate: Intimidation, Influence and Information’ consultation, which it launched this summer.

But by saying it’s continuing to gather evidence on a number of fronts the government is also saying it does not feel it’s necessary to rush through any regulatory responses to technology-accelerated, socially divisive/politically sensitive viral nonsense — claiming also that it hasn’t seen any evidence that malicious misinformation has been able to skew genuine democratic debate on the domestic front.

It’ll be music to Facebook’s ears given the awkward scrutiny the company has faced from lawmakers at home and, indeed, elsewhere in Europe — in the wake of a major data misuse scandal with a deeply political angle.

The government also points multiple times to a forthcoming oversight body which is in the process of being established — aka the Centre for Data Ethics and Innovation — saying it expects this to grapple with a number of the issues of concern raised by the committee, such as ad transparency and targeting; and to work towards agreeing best practices in areas such as “targeting, fairness, transparency and liability around the use of algorithms and data-driven technologies”.

Identifying “potential new regulations” is another stated role for the future body. Though given it’s not yet actively grappling with any of these issues, the UK’s democratically concerned citizens are simply being told to wait.

“The government recognises that as technological advancements are made, and the use of data and AI becomes more complex, our existing governance frameworks may need to be strengthened and updated. That is why we are setting up the Centre,” the government writes, still apparently questioning whether legislative updates are needed — this in response to the committee’s call, informed by its close questioning of tech firms and data experts, for an oversight body to be able to audit “non-financial” aspects of technology companies (including security mechanisms and algorithms) to “ensure they are operating responsibly”.

“As set out in the recent consultation on the Centre, we expect it to look closely at issues around the use of algorithms, such as fairness, transparency, and targeting,” the government continues, noting that details of the body’s initial work program will be published in the fall — when it says it will also put out its response to the aforementioned consultation.

It does not specify when the ethics body will be in any kind of position to hit this shifting ground running. So again there’s zero sense the government intends to act at a pace commensurate with the fast-changing technologies in question.

Then, where the committee’s recommendations touch on the work of existing UK oversight bodies, such as the Competition and Markets Authority, the ICO data watchdog, the Electoral Commission and the National Crime Agency, the government dodges specific concerns by suggesting it’s not appropriate for it to comment “on independent bodies or ongoing investigations”.

Also notable: it continues to reject entirely the idea that Russian-backed disinformation campaigns have had any impact on domestic democratic processes at all — despite public remarks by prime minister Theresa May last year generally attacking Putin for weaponizing disinformation for election interference purposes.

Instead it writes:

We want to reiterate, however, that the Government has not seen evidence of successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. But we are not being complacent and the Government is actively engaging with partners to develop robust policies to tackle this issue.

Its response on this point also makes no reference to the extensive use of social media platforms to run political ads targeting the 2016 Brexit referendum.

Nor does it make any note of the historic lack of transparency of such ad platforms. Which means that it’s simply not possible to determine where all the ad money came from to fund digital campaigning on domestic issues — with Facebook only just launching a public repository of who is paying for political ads and badging them as such in the UK, for example.
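For reference, the repository Facebook was rolling out also became queryable programmatically. Below is a hedged sketch of searching it via the Graph API’s ads_archive endpoint — the endpoint, parameter and field names follow our reading of Facebook’s public docs and should be treated as assumptions, and access requires a token issued after identity verification:

```python
# Hedged sketch (details assumed from public Graph API docs) of checking
# who is paying for UK political ads via Facebook's Ad Library API.
import requests

TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder; issued after identity verification

resp = requests.get(
    "https://graph.facebook.com/v4.0/ads_archive",
    params={
        "search_terms": "brexit",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['GB']",
        "fields": "page_name,funding_entity,ad_delivery_start_time",
        "access_token": TOKEN,
    },
    timeout=10,
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    # `funding_entity` is the self-declared payer badged on the ad.
    print(ad.get("funding_entity"), "-", ad.get("page_name"))
```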

The elephant in the room is of course that ‘lack of evidence’ is not necessarily evidence of a lack of success, especially when it’s so hard to extract data from opaque adtech platforms in the first place.

Moreover, just this week fresh concerns have been raised about how platforms like Facebook are still enabling dark ads to target political messages at citizens — without it being transparently clear who is actually behind and paying for such campaigns…

New ‘Dark Ads’ pro-Brexit Facebook campaign may have reached over 10M people, say researchers

In turn triggering calls from opposition MPs for updates to UK election law…

Yet the government, busily embroiled as it still is with trying to deliver some kind of Brexit outcome, is seemingly unconcerned by all this unregulated, background ongoing political advertising.

It also directly brushes off the committee’s call for it to state how many investigations are currently being carried out into Russian interference in UK politics, saying only that it has taken steps to ensure there is a “coordinated structure across all relevant UK authorities to defend against hostile foreign interference in British politics, whether from Russia or any other State”, before reiterating: “There has, however, been no evidence to date of any successful foreign interference.”

This summer the Electoral Commission found that the official Vote Leave campaign in the UK’s in/out EU referendum had broken campaign spending rules — with social media platforms being repurposed as the unregulated playing field where election law could be diddled at such scale. That much is clear.

The DCMS committee had backed the Commission’s call for digital imprint requirements for electronic campaigns to level the playing field between digital and print ads.

However the government has failed to back even that pretty uncontroversial call, merely pointing again to a public consultation (which ends today) on proposed changes to electoral law. So it’s yet more wait and see.

The committee is also disappointed about the lack of government response to its call for the Commission to establish a code for advertising through social media during election periods; and its recommendation that “Facebook and other platforms take responsibility for the way their platforms are used” — noting also the government made “no response to Facebook’s failure to respond adequately to the Committee’s inquiry and Mark Zuckerberg’s reluctance to appear as a witness“. (A reluctance that really enraged the committee.)

In a statement on the government’s response, committee chair Damian Collins writes: “The government’s response to our interim report on disinformation and ‘fake news’ is disappointing and a missed opportunity. It uses other ongoing investigations to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

“We need to see a more coordinated approach across government to combat campaigns of disinformation being organised by Russian agencies seeking to disrupt and undermine our democracy. The government’s response gives us no real indication of what action is being taken on this important issue.”

Collins finds one slender crumb of comfort, though, that the government might have some appetite to rein in big tech.

After the committee had called for government to “demonstrate how seriously it takes Facebook’s apparent collusion in spreading disinformation in Burma, at the earliest opportunity”, the government writes that it: “has made it clear to Facebook, and other social media companies, that they must do more to remove illegal and harmful content”; and noting also that its forthcoming Online Harms White Paper will include “a range of policies to tackle harmful content”.

“We welcome though the strong words from the Government in its demand for action by Facebook to tackle the hate speech that has contributed to the ethnic cleansing of the Rohingya in Burma,” notes Collins, adding: “We will be looking for the government to make progress on these and other areas in response to our final report which will be published in December.

“We will also be raising these issues with the Secretary of State for DCMS, Jeremy Wright, when he gives evidence to the Committee on Wednesday this week.”

(Wright being the new minister in charge of the UK’s digital brief, after Matt Hancock moved over to health.)

We’ve reached out to Facebook for comment on the government’s call for a more robust approach to illegal hate speech. Update: A company spokesperson has now emailed the following statement: “The Committee has raised important issues and we’re committed to working with Government to make the UK the safest place to be online. Transparency around political advertising is good for democracy, and good for the electoral process and we’re pleased the Government welcomed our recent new tools to ensure that political ads on Facebook are open for public scrutiny. We also share the Committee’s concern to keep harmful content off Facebook and have doubled the number of people working on safety and security to 20,000 globally.” 

Last week the company announced it had hired former UK deputy prime minister, Nick Clegg, to be its new head of global policy and comms — apparently signalling a willingness to pay a bit more attention to European regulators.

Read more: https://techcrunch.com/2018/10/22/fake-news-threat-to-democracy-gets-back-burner-response-from-uk-govt/