
Robin Wilton on GDPR compliance, AI and IoT challenges

Friday, May 11, 2018

Robin Wilton, Technical Outreach Director for Identity and Privacy at the Internet Society, provides expert opinion on the hottest cybersecurity, privacy and regulation topics.


Robin Wilton, Technical Outreach Director for Identity and Privacy, the Internet Society.

Robin Wilton brings 28 years of industry experience to the role of Technical Outreach Director for Identity and Privacy, in the Internet Society's Trust and Identity Initiatives group. Since 2001 he has specialised in digital identity, privacy and public policy, building a reputation as a thought leader, communicator and translator between different stakeholder groups.

Before joining the Internet Society, Robin spent two years as a research analyst in Gartner's Identity and Privacy Strategies team, where, in addition to his privacy work, he specialised in public key infrastructure, electronic signatures, single sign-on and federated identity.

Robin's experience includes: 12 years with IBM in systems engineering, technical support and consulting roles, at the UK and EMEA level; 3 years as Principal Consultant at JCP Trustbase Ltd., a start-up specialising in Java cryptography and PKI-enabling middleware; 8 years with Sun Microsystems in technical pre-sales and the CTO team; 18 months establishing Future Identity Ltd. as an independent consultancy on privacy and digital identity. During his time at Future Identity he was also Director of Privacy and Public Policy for the Kantara Initiative.

Robin has an MA in Philosophy and Modern Languages, and is a Fellow of the British Computer Society with Chartered IT Professional status.

Robin agreed to answer ten questions about the hottest cybersecurity, privacy and regulation challenges:

1. Do you think that countries should adopt national privacy laws, such as GDPR for the EU, or develop coherent and globally-compatible regulations to harmonize privacy in civilized society?

The Internet is inherently cross-border, as are many of the services it enables, but different cultures and countries have different laws and expectations regarding privacy. We have to find ways of ensuring that people's privacy expectations and legal entitlements are met, even when the services they use are located in other jurisdictions. GDPR attempts to empower users with respect to their own personal data and to protect their fundamental right to privacy, even when (and perhaps especially when) their data is collected and used outside the EU. GDPR is also a step towards greater international consistency around data protection laws and rights: it seeks to protect individuals' data whether or not they are EU citizens, and backs this up with provisions for heavy administrative fines for non-compliance, and potential denial of data transfers to third countries that lack adequate levels of protection.

Non-EU countries have a choice as to how to respond to GDPR: adopt, adapt, conform... or ignore.

  • They can just adopt the legislation wholesale;

  • They can take GDPR and adapt it to suit their national laws and circumstances;

  • They can change their own laws so as to conform with GDPR requirements, but without adopting GDPR text itself;

  • Or, of course, they can do nothing and stick with their own current arrangements.

Whichever option they choose, cross-border cooperation and the harmonization of laws on privacy and data protection are important steps, without which we incur a greater risk of privacy violations and undermine some of the fundamental benefits of a global Internet.

However, the effort should be towards a globally-compatible framework. The GDPR is a significant step forward for the right of privacy, not least because it tries to address the issue of extraterritoriality. That issue may raise complex questions, but unless they are answered, we risk confusion, inconsistent laws in different jurisdictions, and obstacles to enforcement. The international community should work towards an internationally workable framework for privacy.


2. Do you think that GDPR merely protects the necessary minimum of privacy or places an excessive burden on businesses, which may raise their prices in response?

I think that's probably a false opposition: it's not a question of "minimum privacy protection" versus "excessive burden." First, GDPR can help data controllers minimise their own risk (of data breach, insurance cost, reputational damage) by encouraging them to consider privacy as a set of requirements, a business asset, and a competitive differentiator. Designing privacy in by default helps the bottom line, much as security does if you build it in from the outset rather than trying to bolt it on afterwards.

Second, what GDPR tries to do is establish an over-arching set of privacy principles, so that data controllers (whether commercial or not) know where they stand. There isn't one set of rules for telcos and another for retailers, or one set for medical data and another for video rental records. This isn't about setting the minimum requirement for privacy as a "must exceed" threshold. It's about acknowledging privacy as a fundamental right that benefits society and the individual, and then recognising the implications which flow from that right. We also cannot ignore the overall economic impact of better privacy and data protection, as the GDPR provisions promoting accountability, transparency and governance also aim to enhance the data-driven economy through stronger protection of personal data.


3. How can manufacturers implement privacy by design into their IoT devices without raising the costs of production?

Perhaps we can reframe the problem. The cost of producing widget A (without privacy) and widget B (with privacy) might differ, but is the privacy option necessarily more expensive? Imagine two kinds of smart light bulb systems:

  • Bulb A talks to its controller, which then talks to a server where data about usage is collected, analysed and mined.

  • Bulb B talks to its controller, which only talks to the server to check for a firmware update. It has less functionality than the first version, and requires no data-handling functionality at the server.

The second of these options would actually be simpler and cheaper to produce, but more privacy-respecting.
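
To make the contrast concrete, here is a minimal sketch of the two behaviours in Python. The endpoints and function names are invented for illustration, not taken from any real product:

  import json
  import urllib.request

  # Hypothetical endpoints, invented for this sketch.
  FIRMWARE_URL = "https://updates.example.com/bulb/latest"
  TELEMETRY_URL = "https://analytics.example.com/bulb/usage"

  def bulb_a_report_usage(on_off_events):
      """Bulb A: ships household usage patterns to a server for collection and mining."""
      payload = json.dumps({"events": on_off_events}).encode("utf-8")
      request = urllib.request.Request(
          TELEMETRY_URL, data=payload,
          headers={"Content-Type": "application/json"})
      urllib.request.urlopen(request)  # personal usage data leaves the home

  def bulb_b_check_firmware(current_version):
      """Bulb B: only contacts the server to ask whether a firmware update exists."""
      with urllib.request.urlopen(FIRMWARE_URL) as response:
          latest = json.load(response)["version"]
      return latest != current_version  # no usage data is ever transmitted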

If what you do is offset the cost of production against the mining of data gathered from your IoT device, is it really clear to the consumer why they are getting an apparently "cheaper" device? And what are the hidden costs in terms of personal data collection and privacy compromise?

So perhaps the real, underlying question is this: If "respect for privacy" means we can't offset product prices against the monetization of personal data from those products, and we therefore offer the consumer a cheaper widget A and a more expensive widget B, how can we make the compelling case for widget B?

In my view, the answer to a manufacturer is: "You need to acknowledge that the shift from dumb light bulb to smart light bulb changes the relationship you have with your customer." They used to buy your light bulbs because they were reliable and good value. They even preferred them to other cheaper bulbs, and so they came to trust your brand.

The trust equation has changed, but you haven't yet reflected that in your relationship with the customer. You're still trading on the fact that they trust your brand for reliability and value for money; that's not the same as trusting you not to misuse the personal data you're now able to collect about their household. You're no longer just selling a light bulb; update your relationship with the customer so that you both understand the need for a privacy-respecting product, and then it becomes a competitive advantage for your business.

Conversely, if the customer thinks they are still in a "dumb lightbulb" relationship and then discovers that you have been recording and mining the activity patterns of their household... that does not create competitive advantage for your business.


4. Do you think that AI technologies, which often require Big Data and PII usage, can be implemented in a privacy-friendly manner?

Yes, but subject to a very standard set of privacy principles: transparency, accountability, fairness, choice and respect. The implementers of AI need to take responsibility for the outcomes of AI and machine-learning processes, regardless of what happens "inside the black box." If you design an algorithm to modify its own rules, you're still responsible for the outcomes.

A couple of privacy considerations:

  • AI and machine-learning algorithms have been shown to exhibit bias if there is inherent bias in the "training" data they are fed. That can have implications in terms of discriminatory outcomes, which may have a disproportionate impact on some stakeholder groups, including privacy impact (a simple check is sketched after this list).

  • Regardless of the input data (big data, PII, etc.) and the processing method (AI/ML, human, manual, etc.), the privacy outcomes have to be a measure of whether the technology is being used acceptably or not.
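
As a concrete illustration of the first point, the following minimal sketch compares favourable-outcome rates across groups in a toy training set. The data and the single "gap" measure are illustrative assumptions, not a complete fairness audit:

  from collections import defaultdict

  # Toy training records: (group, label), where label 1 is the favourable outcome.
  records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

  totals = defaultdict(int)
  favourable = defaultdict(int)
  for group, label in records:
      totals[group] += 1
      favourable[group] += label

  rates = {group: favourable[group] / totals[group] for group in totals}
  gap = max(rates.values()) - min(rates.values())
  print("favourable-outcome rate per group:", rates)
  print("gap between groups:", round(gap, 2))

  # A large gap suggests the training data already encodes a disparity
  # that a model trained on it could learn and reproduce.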


5. Should the government regulate all privacy-related matters or should industries have some capacity to develop and implement their own privacy regulations adapted to their markets?

Attitudes to this question will vary (particularly on either side of the Atlantic). There's no single right answer. The EU approach (as mentioned above) is to establish the link between a fundamental right to privacy, and the practicalities of privacy and data protection in a digital age. The law sets a general and codifiable baseline regardless of industry sector; this can, however, be complemented by additional regulation for specific kinds of data (sensitive personal information is subject to stricter regulations, for instance) or individual industries (such as financial services).

There is also, of course, scope for industry-specific self-regulation or codes of conduct within such a framework; this opens the way for service providers to define and adopt a code of conduct, for instance, if they see it as a way of adding value and differentiating themselves from competing offers. (Outside the privacy context, "Fair Trade" goods are an example of such an approach; it's not legally mandated, but a number of producers and retailers perceive the value of a self-imposed code of conduct to themselves and their customers). That said, governments should strive to be flexible and pragmatic in their approach when implementing regulations, providing principles and then allowing industries flexibility on how best to comply, as is the approach with the GDPR.


6. What can organizations do to find a competent Data Protection Officer (DPO) in light of the cybersecurity skills shortage?

One step they might take is to draw a distinction between cybersecurity and data privacy. The two disciplines are complementary, but not the same. Separating the two into distinct roles might simplify the search process because it avoids the need to find a single person who is competent in both disciplines.


7. Do you think that public schools and universities should start offering Bachelor's and Master's degrees in privacy management?

The approach here will differ depending on the country and the structure of the education system, but I certainly think that tertiary education should offer modules, courses and potentially full degree courses in privacy (setting the scope intentionally broadly, to include privacy management, digital and offline privacy, societal and ethical implications, economics of data usage, technical and design factors, and so on), and that secondary education should prepare students for such courses by laying the foundations in related disciplines such as information technology, software design and programming, civics, business studies, and so on. To the extent that any subject is dependent on IT these days, it should explore the ethical and privacy-related factors as a matter of course.


8. Do you think that in the future privacy will become a fundamental human right recognized by all civilized countries?

Under Article 12 of the Universal Declaration of Human Rights, it already is...

The right to respect for private and family life, and the right to protection of personal data, are also expressed in Articles 7 and 8, respectively, of the EU Charter of Fundamental Rights.

While not explicit in the US Constitution, certain rights to privacy have been construed as arising from the First and Fourth Amendments in particular.

So, as things stand, I think most countries do acknowledge that privacy is a fundamental right: how much they do as a consequence of that acknowledgement is another matter...


9. What should organizations do in case of a data breach, assuming there are no regulations or obligatory disclosure requirements in their industry?

They should act in the interests of the people/organisations on which the data breach might have an impact. Their actions should reflect an organisational principle of respect for the interests of their users, customers and partners, and be linked to a corporate risk management process.

The reaction of Johnson & Johnson to the 1982 Tylenol poisoning crisis is instructive. The company's chairman set two priorities: first, protect the people; second, save the product. There was no hesitation or ambiguity about his priorities. Now, in his case, he was dealing with a potential threat to life. Data breaches may sometimes reach that level of risk, but are unlikely to do so in general; hence the need for a graduated response informed by a more general risk management process.

The corporate risk management strategy for data breaches should consider, at a minimum: risks to users/customers/partners; financial and reputational risk to the organisation; and risk to the organisation's assets and operational viability.
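
As an illustration only, those minimum considerations could be captured in a simple breach-assessment record along the following lines; the field names and scoring scale are assumptions made for the sketch, not an established standard:

  from dataclasses import dataclass

  @dataclass
  class BreachRiskAssessment:
      incident: str
      # Each dimension scored 0 (negligible) to 5 (severe); the scale is an assumption.
      risk_to_users: int = 0        # users, customers and partners
      financial_risk: int = 0       # fines, remediation, insurance costs
      reputational_risk: int = 0    # damage to brand and trust
      operational_risk: int = 0     # the organisation's assets and operational viability

      def severity(self) -> int:
          """Let the worst single dimension drive the graduated response."""
          return max(self.risk_to_users, self.financial_risk,
                     self.reputational_risk, self.operational_risk)

  assessment = BreachRiskAssessment("lost laptop holding customer records",
                                    risk_to_users=4, reputational_risk=3)
  print("severity:", assessment.severity())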


10. How can ordinary people enhance their privacy protections, and monitor and prosecute violations of their privacy rights?

As far as prosecution is concerned, this will depend on the jurisdiction, and on the nature of the violation. Where formal legal protections and recourse are available, these are most likely to be through data protection and/or consumer protection bodies (for example, the U.S. Federal Trade Commission). Class action suits may be an option, again depending on the jurisdiction. In exceptional cases, it may be possible for an individual to "escalate" a legal action through national data protection bodies to a supra-national authority, as in the case of Max Schrems, Facebook and the Court of Justice of the European Union (CJEU).

In terms of monitoring and enhancing their protection against privacy violations, individuals will always tend to be at risk because of the dependency we have on others in our online interactions. Everything we do online is mediated through at least one third party, and even if we secure our data using end-to-end encryption (to keep it private from intermediaries), we still want the intended recipient to be able to read and process it; at that point, there is little we can do, technically, to keep it private.
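
As a minimal sketch of that limit (assuming the third-party Python cryptography package), an intermediary relaying an end-to-end-encrypted message sees only ciphertext, while the intended recipient, by design, reads the plaintext:

  from cryptography.fernet import Fernet  # third-party package: pip install cryptography

  key = Fernet.generate_key()      # shared only by sender and recipient
  sender = Fernet(key)

  ciphertext = sender.encrypt(b"meet at 10:00")

  # An intermediary relaying the message sees only opaque bytes...
  print("intermediary sees:", ciphertext[:16], "...")

  # ...but the intended recipient decrypts it; from that point on,
  # privacy depends on the recipient's behaviour, not on technology.
  recipient = Fernet(key)
  print("recipient reads:", recipient.decrypt(ciphertext).decode())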

For the individual, protection will often be a matter of changing one's behaviour to reflect the value one places on privacy (as opposed to, say, convenience). Some practical steps to monitor and mitigate your privacy risks include:

  • Ensuring that your browser asks for an encrypted session (for instance, by using a plug-in like HTTPS Everywhere)

  • Using a trusted VPN service to reduce the information your browser leaks about your location and destinations

  • Using an email service that allows you to generate different email addresses for different sites that ask for one. This will make it more obvious if a site has disclosed that email address to a third party who subsequently spams you (see the sketch after this list)

  • Using separate browsers, VPN connections and even devices, to keep separate contexts separate (work, home, online banking, instant messaging, social media, etc.)

  • Using separate payment cards: reduce the risk of card fraud by using a low-limit debit card for online purchases

  • Being conscious of the different contexts in which you interact online, and selective about the information you disclose across contexts.
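
As a small example of the per-site address idea (see the item above), the following sketch derives a distinct alias using the common "plus addressing" convention; whether a particular mail provider supports it is an assumption to verify:

  def site_alias(mailbox: str, site: str) -> str:
      """Derive a distinct, recognisable address for each site that asks for one."""
      user, domain = mailbox.split("@", 1)
      tag = "".join(ch for ch in site.lower() if ch.isalnum())
      return f"{user}+{tag}@{domain}"

  print(site_alias("alice@example.com", "Shop-42"))     # alice+shop42@example.com
  print(site_alias("alice@example.com", "News Site"))   # alice+newssite@example.com

  # If unsolicited mail later arrives for alice+shop42@example.com, it is
  # immediately clear which site disclosed (or leaked) the address.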


High-Tech Bridge's series of Cybersecurity Leaders Interviews compiles the thoughts of cybersecurity executives, thought leaders, visionaries and eminent technology experts.
