Much is said about the gatekeeper function of providers of crypto services. Pursuant to Directive 2018/843, crypto service providers in the Netherlands are required to register with De Nederlandsche Bank (DNB). In this context, DNB requires providers of crypto services to demonstrate how they ensure that no money or cryptos are made available to sanctioned (legal) persons. The Preliminary Relief Judge of the Rotterdam District Court recently ruled that this implementation of the registration obligation may be in conflict with European law.
Under the European fifth anti-money laundering directive (Directive 2018/843, hereinafter: the Directive), Member States must ensure that crypto service providers are registered. In the Netherlands, registration takes place with DNB and requires a provider of crypto services to submit data relating to compliance with regulations under the Anti-Money Laundering and Terrorist Financing Act (Wwft) or the Sanctions Act 1977 (Sw). According to DNB, this means, among other things, that providers of crypto services must verify for each individual transaction to whom the crypto address of the beneficiary of the transaction belongs.
Bitonic offers services for exchanging fiat currency for bitcoin and vice versa and operates a platform for trading bitcoin and litecoin. Bitonic’s platform only supports transactions to the customer’s own wallet and does not facilitate transactions with third parties. For Bitonic, the registration requirement nevertheless means that prior to every customer transaction, regardless of the transaction amount, it must take measures to ensure that the crypto address used actually belongs to the customer and not to another party or counterparty. According to Bitonic, these measures are not only ineffective and disproportionate, but also violate European law, as the Dutch legislator and DNB have effectively implemented the registration obligation under the Directive as a permit requirement, with extensive prior auditing.
The Preliminary Relief Judge expresses doubts as to whether DNB, in view of the aforementioned Directive and other legislation, was allowed to elaborate the registration requirement in the manner in which it has done. However, the fact that doubts exist about the legality of the stated registration requirement does not mean that the requirement must be considered clearly incorrect or unlawful. According to the Preliminary Relief Judge, this requires a more in-depth investigation, for which summary proceedings are not well suited. DNB is therefore ordered to take a decision in administrative proceedings within six weeks. If necessary, Bitonic can appeal that decision and have the registration requirement reviewed in substantive proceedings.
Crypto service providers active in the Netherlands are advised to closely monitor developments in the coming weeks and months.
Do you have questions about the legal aspects of blockchain or crypto services? Feel free to contact our team.
Jeroen van Helden, attorney at law IT, IP & Privacy
Uber keeps the legal community busy. In the UK, the Supreme Court recently confirmed that drivers working for Uber are not self-employed contractors but workers, and are therefore entitled to a minimum wage and holiday allowance. Similar proceedings are ongoing in France. Recently, the Amsterdam District Court ruled in two cases related to these foreign proceedings. The rulings are especially important because they address whether or not Uber engages in an unlawful form of automated decision-making, a subject of undeniably increasing importance on which little case law exists.
According to the UK Supreme Court, an employment relationship exists between the British drivers and Uber London. However, the owner of the Uber app is not the UK entity, but the Dutch parent company Uber B.V. In order to appreciate the implications of the aforementioned ruling, and in preparation for other proceedings, some British drivers (case 1) therefore exercised their right of access under the GDPR against Uber B.V. They wanted to know which of their personal data Uber processes, what these data are used for and to what extent profiling and automated decision-making take place.
A number of other drivers (case 2) believed that Uber had made a fully automated decision to deactivate their accounts due to suspicions of fraud, which allegedly violated the prohibition of automated individual decision-making. More specifically, these drivers were accused of falsely collecting cancellation fees from Uber by posing as both passenger and driver, or of manipulating the Uber Driver app so that more expensive rides could be identified before the driver accepted the ride (which is prohibited in order to prevent cherry-picking of rides).
The GDPR has some specific rules about automated decision-making. First of all, an organisation that wants to use automated decision-making must inform the persons concerned about this in advance. Part of this obligation is to provide meaningful information about the “logic involved” in the application. This does not require that the operation of the algorithm is explained in (technical) detail or that the algorithm is made public. However, it must be made clear in an understandable manner how the decision-making works and on the basis of which criteria a decision is reached.
Furthermore, the data subject has the right not to be subject to a decision based solely on automated processing that has legal consequences for him or her, or that otherwise significantly affects him or her. Although this provision has been formulated as a “right”, according to the European privacy regulators it actually amounts to a general ban on fully automated individual decision-making, with a few exceptions. It is therefore prohibited, for example, to terminate an employment contract solely on the basis of an automated decision. Targeted advertising based on profiles is generally permitted, as this will normally not affect the data subject to a significant extent.
Uber uses a batched matching system. This system groups the closest drivers and passengers in a batch (a group) and determines the optimal match (link) between a driver and a passenger within that group. According to Uber, it uses location, direction of travel, traffic volume, geographic factors, estimated time of arrival at the passenger’s pick-up point, and personal preferences specified by drivers. The system no longer matches a passenger with a driver if that passenger has rated the driver one star (the lowest of the five available) in the past. The driver is then matched with another passenger in the batch. According to Uber, the automated allocation of available rides has no legal consequences and the data subject is not significantly affected, so that no automated decision-making takes place. The court follows Uber in this. Although it is obvious that the system will have a certain influence on the performance of the agreement between Uber and the driver, the court is of the opinion that this influence is not so great as to have a significant effect. The court dismisses the drivers’ claim.
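Purely as an illustration of the kind of batched matching described above, the idea can be sketched in a few lines of Python. This is a hypothetical toy model based only on the court's description, not Uber's actual system: the scoring function, batch size and one-star exclusion set are all assumptions.

```python
from itertools import permutations


def match_batch(drivers, passengers, score, one_star_history):
    """Find the assignment of drivers to passengers within one batch
    that maximises the total match score, skipping any pairing where
    the passenger previously rated the driver one star.
    Brute force over all orderings; fine for small batches."""
    best, best_total = None, float("-inf")
    for perm in permutations(drivers, len(passengers)):
        pairs = list(zip(perm, passengers))
        if any((d, p) in one_star_history for d, p in pairs):
            continue  # never rematch a one-star pairing
        total = sum(score(d, p) for d, p in pairs)
        if total > best_total:
            best, best_total = pairs, total
    return best


# Toy example: positions on a line, score = negative distance.
positions = {"d1": 0, "d2": 5, "p1": 1, "p2": 6}
score = lambda d, p: -abs(positions[d] - positions[p])

# Passenger p2 once gave driver d1 one star, so that pair is excluded.
print(match_batch(["d1", "d2"], ["p1", "p2"], score, {("d1", "p2")}))
```

In this toy run the optimal assignment pairs each driver with the nearest passenger; if the one-star exclusion had ruled that pairing out instead, the system would fall back to the next-best match within the batch, just as the judgment describes.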
Uber disputes that it deactivated the drivers’ accounts based solely on automated decision-making. According to Uber, this was preceded by a thorough investigation conducted by a specialised team of Uber employees. Uber does use software to detect potential fraudulent activities. In response to a fraud signal, this team investigates the facts and circumstances based on internal protocols and its own knowledge and experience in order to confirm or rule out the existence of fraud. Deactivation of an account requires a unanimous decision of two employees of this team. On the basis of this explanation, the court rules that Uber’s decisions were made after “meaningful human intervention” and that there was therefore no unlawful form of automated decision-making. This claim of the drivers is also rejected.
Would you like to know more about the legal aspects of online platforms or automated decision-making? Feel free to contact our team.
Jeroen van Helden, attorney at law IT, IP & Privacy
An entire chapter in the General Data Protection Regulation (GDPR) is devoted to international transfers of personal data. But what exactly is an international transfer of personal data in the context of the Internet? Is that publication on the World Wide Web, sending an email abroad or data entry in a SaaS application? And what about routing and hacks?
Under the GDPR, personal data can, in principle, circulate freely within the European Union. However, the transfer of personal data to an international organisation or to a country outside the Union (a third country) is only permitted if strict requirements are met. The European legislator wanted to prevent the legal protection that the GDPR aims to guarantee from being easily circumvented by moving data processing operations abroad.
Despite the advice of the European Data Protection Supervisor (EDPS) to include a definition, one searches the GDPR in vain for a definition of an ‘international transfer of personal data’. In order to define the term, it is therefore necessary to look at case law and at the opinions and guidelines of supervisors.
Suppose you place personal data on a freely accessible website that is hosted in the European Union. Anyone in the world can then access that personal data simply by typing in the URL and clicking enter. The server hosting the website will transmit the personal data to the computer that requests the website, regardless of where that computer is located. Is the act of publication an international transfer of personal data?
The Court of Justice of the European Union (hereinafter: the European Court) answered precisely that question in 2003 in the Lindqvist case. Mrs. Lindqvist had created some web pages on her home computer for the Swedish congregation of which she was a member. On those pages she had published personal data of herself and of some of her colleagues. After a complaint about this, the Swedish public prosecutor decided to prosecute Lindqvist for violating Swedish privacy rules, including the transfer of personal data to third countries without adequate permission. However, the European Court was of the opinion that uploading personal data to a generally accessible website cannot as such be regarded as a transfer of personal data, even if that act in principle makes the data accessible worldwide.
Sending an email containing personal data to a person or organisation outside the Union appears at first sight to be a clear example of a transfer of personal data to a third country. In practice, however, this criterion is not so easy to apply. A postal address is, by its very nature, linked to a physical location somewhere in the world. A telephone number has a fixed structure, preceded by an international access code (00 in the Netherlands) and a country code (31 for the Netherlands). An email address, on the other hand, provides much less information about the whereabouts of the user or the location where the account is hosted.
An email address consists of a (self-chosen) username, the @ sign, a server or ISP name and the top-level domain. If the account has been assigned by a local ISP then you have some clue as to where the account is likely to be read and hosted. But when it comes to an account of a webmail service it is often not clear in advance where exactly you are sending the message. It is as yet unclear to what extent the sender is expected to investigate this.
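The anatomy of an email address described above can be sketched as follows. This is a simple illustration only; the sample addresses are hypothetical, and real-world address parsing (quoted local parts, internationalised domains) is considerably messier.

```python
def email_parts(address: str) -> dict:
    """Split an email address into the components described above:
    the (self-chosen) username, the server/ISP name and the
    top-level domain."""
    username, _, domain = address.partition("@")
    labels = domain.split(".")
    return {
        "username": username,
        "provider": ".".join(labels[:-1]),  # server or ISP name
        "tld": labels[-1],                  # top-level domain
    }


# A country-code TLD such as .nl at least hints at a location:
print(email_parts("jan@kpnmail.nl"))
# A generic webmail domain reveals almost nothing about where
# the account is hosted or read:
print(email_parts("jan@gmail.com"))
```

The point of the example is exactly the problem the text identifies: only the top-level domain carries any geographic signal, and for generic webmail domains even that signal is absent.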
Since the 2010s, a lot of computing power and data processing has moved from local PCs to central servers of IT service providers. Previously this was not possible, because browsers were not powerful enough and insufficient bandwidth was available for fast and reliable communication between client and server. There is little doubt that data transfers to and from the data centres operated by the likes of Microsoft, Google, Amazon and Tencent can qualify as international transfers of personal data. Due to cloud computing, the importance of the rules on international transfers of personal data has therefore increased considerably.
If a company based in the Netherlands uses a Cloud service, then every time that company adds or changes personal data in those applications or virtual environments, there is a transfer of personal data to these service providers. The easiest solution to ensure that such transfers are allowed is to agree that the service provider will only store and process the data in data centres within the European Union.
However, data storage within the European Union is not always possible or even sufficient. For example, if a group uses a central HRM system hosted in the Union, then whenever a group company from outside the Union retrieves information from such a system, that would qualify as an international transfer of personal data. A group can bring these transfers in accordance with the GDPR by, for example, using binding corporate rules.
Data over the Internet is sent in packets. A packet contains a sequence of bits and bytes structured in a specific format. An IP packet can contain at most about 65 KB of data (65,535 bytes, including headers). A longer message must therefore be cut into several packets, which are sent separately. Each of these packets travels through multiple networks, connected by gateways or routers, to its final destination. Once arrived, the packets are put in the correct order so that the original message can be delivered.
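The cutting-up and reassembly described above can be illustrated with a short sketch. This is a conceptual model only, not real IP fragmentation: actual IP packets carry headers, checksums and fragment offsets, and the chunk size here is arbitrary.

```python
def split_into_packets(message: bytes, size: int):
    """Cut a message into numbered chunks, as the Internet does
    with messages too long for a single packet."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]


def reassemble(packets):
    """Packets may arrive out of order after travelling different
    routes; sort by sequence number before joining."""
    return b"".join(chunk for _, chunk in sorted(packets))


msg = b"personal data travelling across several networks"
packets = split_into_packets(msg, 8)
packets.reverse()  # simulate out-of-order arrival via different routes
assert reassemble(packets) == msg
```

The sketch shows why routing matters for the legal question that follows: individual numbered chunks of one message can take entirely different paths and still be recombined correctly at the destination.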
It is quite possible that if you send a message over the Internet from a Dutch server to, for example, a Portuguese server, this message – or some of its packets – will take a detour through third countries. For example, some packets could be routed through one of the transatlantic cables to the United States, from there through local networks to Brazil and from Brazil via the Atlantis-2 cable to Portugal. Part of the message, including the personal data it contains, will then have passed through various non-EU countries. Was this an international transfer of personal data?
According to the EDPS, this is not the case. The EDPS believes that the term ‘international transfer’ refers to intentionally or knowingly disclosing or making personal data available to a person in a third country. Because routing within and between networks does not involve knowingly disclosing data to persons in third countries, this does not fall under the concept of an international transfer, according to the supervisor.
Imagine your company network is hacked by Russian cyber criminals who steal sensitive customer data. Of course you are concerned about the consequences for your customers and the reputation of your company. Perhaps you are similarly concerned about a possible fine of the Dutch Data Protection Authority for not taking appropriate technical and organisational security measures. Would it also be necessary to worry about a fine for the unlawful transfer of personal data to a third country, namely Russia?
If it is up to the EDPS, the latter is certainly not necessary. As with the routing of Internet traffic, the EDPS considers that hacking does not involve a deliberate transfer of personal data and therefore does not constitute a transfer within the meaning of the GDPR.
Last but not least
So far, the European legislator has been reluctant to define the term ‘international transfer of personal data’. The reason for that will be obvious: it is not easy to formulate an unambiguous definition of this term. The European Court and the EDPS have taken the first steps. It would be good if the joint privacy supervisors, united in the European Data Protection Board (EDPB), now took the next step by drawing up guidelines that further delineate this important concept in European privacy legislation.
Jeroen van Helden, attorney at law IT, IP & Privacy
This is a modified version of an article that appeared in the June/July 2020 edition of the AG Connect magazine.
It was in the air after Advocate General Saugmandsgaard Øe’s opinion last December, but now it’s official. The transfer of personal data from the European Union to organizations in the United States cannot be based on the Privacy Shield instrument, according to the highest judicial institution of the European Union. What does this mean?
The GDPR stipulates that the transfer of personal data to a third country can in principle only take place if the third country guarantees an adequate level of protection. The protection should broadly correspond to the protection afforded within the Union, the European Court ruled in 2015 in Schrems I, which declared the Safe Harbor instrument invalid.
In 2016, the European Commission and the authorities in the United States made new arrangements about the exchange of personal data, the Privacy Shield. According to the Commission, an adequate level of protection would now be in place, provided the recipient organization in the United States was certified under the Privacy Shield.
The European Court thinks otherwise.
Companies and organizations in the European Union that currently transfer personal data to Privacy Shield-certified organizations in the United States have two options: either they stop the transfers (for example, by terminating the contract or agreeing that the personal data will be stored and processed in European data centers) or they put an alternative instrument in place.
Regarding alternatives, it makes sense to use the standard contractual clauses (SCCs) approved by the European Commission. The judgment of the European Court confirms that these SCCs are valid (at least the controller-to-processor variant), although the Court also makes some observations that call for caution in the use of SCCs. It is therefore still possible to transfer personal data to organizations in the United States, but caution is advised.
If you want to know more, please contact us.
Jeroen van Helden, attorney at law IT, IP & Privacy
25 May 2019 marks the one-year anniversary of the much-discussed General Data Protection Regulation (GDPR) becoming applicable. Since the Dutch Data Protection Authority (Dutch DPA) also recently published its 2018 annual report, now is a good time to take stock. What has one year of GDPR yielded? Have high fines been imposed and, if so, for what type of violations? And what do you need to watch out for in 2019?
Annual report 2018
As shown in the annual report, 2018 was a hectic year for the DPA. Not only did the GDPR come into force, the DPA also hired many new employees and set up and implemented a new organisational structure.
In the report, the DPA indicates that in 2018 it deliberately chose to focus on promoting compliance with the privacy regulations. Their approach entailed investing heavily in providing information and advice, on the one hand, and seeking to stop any violations instead of imposing sanctions after the fact, on the other hand.
Some headline figures for 2018:
- The DPA received over 11,000 complaints and nearly 21,000 data breach notifications.
- 720 complaints and 298 data breach notifications were dealt with through an intervention, such as a letter or discussion in which the DPA explained the privacy regulations.
- The DPA completed 16 investigations and started 17 enforcement procedures, which resulted in sanctions being imposed on the likes of Uber, the Tax and Customs Administration and the Employee Insurance Agency (UWV)*, to name a few:
  - Uber was fined EUR 600,000 for failing to notify a data breach promptly.
  - The Employee Insurance Agency (UWV) was fined for failing to equip the secure access to its employer portal with multifactor authentication.
  - The National Police were ordered to pay a fine for the inadequate security of an IT system.
  - InsingerGilissen Bankiers was fined EUR 48,000 for not complying with a personal data access request.
  - The Tax and Customs Administration was prohibited from using the Citizen Service Number in the VAT number as from 1 January 2020.
A comparison of these figures with those of previous years shows an explosive increase in the number of data breach notifications since this obligation was introduced in 2016. Enforcement and sanctions (including fines), however, are increasing only slowly.
| | 2014 | 2015 | 2016 | 2017 | 2018 |
| --- | --- | --- | --- | --- | --- |
| Data breach notifications | – | – | 5,700 | 10,009 | 20,881 |
Elsewhere in Europe
What are regulators doing elsewhere in Europe? Several Member States have already seen the first fines imposed for violations of the GDPR. Examples include:
- In March 2019, the Polish regulator fined a Polish company EUR 220,000 for breach of the obligation to provide information. The company had created an extensive database of personal data that it had collected from public sources, but without informing the data subjects.
- In January 2019, the French regulator fined Google EUR 50,000,000 for lack of transparency, breach of the obligation to provide information and obtaining consent unlawfully.
- In February 2019, the Maltese regulator fined the Maltese Land Registry EUR 5,000 for failing to properly secure an online portal.
- In September 2018, the German regulator (more specifically the one from Baden-Württemberg) fined a German social network EUR 20,000. The social network had reported a data breach in which users’ passwords and e-mail addresses had been leaked. Follow-up investigations by the regulator revealed that password encryption had not been used, thus violating the obligation to take appropriate security measures.
- In September 2018, the Austrian regulator imposed a fine of EUR 5,280 on a betting shop that used a video surveillance system for which there was no legal basis and that stored the recordings for too long.
What does 2019 have in store?
In its annual report, the DPA explicitly states that in 2019 the focus will shift from providing information to enforcement. While the DPA currently still often focuses on ending any (possible) violation when dealing with a complaint, in 2019 it will initiate investigations and impose sanctions more often. The DPA’s stated areas of focus for 2019 include: i) security measures and the legal bases for processing personal data in the healthcare sector, ii) unreported data breaches and iii) trade in personal data.
Please contact Jeroen van Helden (email@example.com or 071-5815310) if you have any questions about the GDPR or related laws and regulations.
*The infringements often date from the pre-GDPR era and are therefore assessed and sanctioned according to the Personal Data Protection Act and not the GDPR.
Time is running out. If the deadline is not extended, the UK will leave the EU on 29 March 2019, with or without a deal. For many organizations it is unclear what the effects of Brexit will be on the protection of personal data processed in the UK. What are the implications for transferring personal data to the UK? That will depend on whether or not a deal is reached by the end of March.
If the British leave with a deal, then the GDPR will remain in force until the end of 2020. This means that, until then, nothing will change with regard to the transfer of personal data to the UK.
However, given the current circumstances, a no-deal scenario becomes more likely every day. It is therefore vital to start preparing for a no-deal Brexit now.
A no-deal Brexit will have a major impact on the transfer of personal data to the UK – regardless of whether the transfer is for instance to the UK branch of a multinational or a British cloud provider. In the event of a no-deal Brexit, the UK will be considered to be a ‘third country’ after 29 March 2019 and will be subject to the rules that are applicable to the transfer of personal data outside the EU.
Personal data may no longer be transferred freely to the UK; data transfer will need to be based on one of the following instruments:
- Standard or ad-hoc data protection clauses (the European Commission has prepared three sets of Standard Contractual Clauses that provide an appropriate safeguard);
- Binding Corporate Rules (these are codes of conduct that multinationals impose on themselves; these must be approved by the Dutch Data Protection Authority);
- Codes of Conduct (these are intended for self-regulation by, for example, industry associations) or Certification Mechanisms (both of which also need to be approved).
The Commission could also consider (in a so-called adequacy decision) that the level of data protection in the UK is in line with European legislation. However, in the event of a no-deal Brexit, an adequacy decision will not be available immediately and the aforementioned instruments will have to be used, at least for the time being.
Because different rules will become applicable immediately after 29 March in the event of a no-deal Brexit, it is imperative to start preparing for this situation now. According to the European Data Protection Board, you can do this by means of the following steps:
- Make an inventory, showing if and what personal data transfers are made to organisations (or branches) in the UK.
- Choose an instrument; determine which instrument is the best for your situation. For example, in the case of a multinational with a branch in the UK, creating or updating Binding Corporate Rules might be an option; whereas with data processors the Standard Contractual Clauses of the European Commission could be used.
- Make sure that whatever instrument you decide on is ready to use on 30 March 2019 (or as of the new deadline if the deadline is extended);
- Amend the privacy statement for data subjects to inform data subjects about the transfer to ‘outside the EU’.
Data transfers from the UK
A no-deal Brexit will not lead to any changes in the reverse situation, i.e. personal data transfers from the UK to an EU country. The British government has stated that data can be freely transferred from the UK to the EU, as is currently the case.
As Brexit may become a reality this month, there is no time to lose in making preparations.
If you have any questions, please contact Natascha van Duuren