The Digital Markets Act is a security nightmare

Originally published by New Europe (11 February 2022).

In their zeal to curb big tech through the Digital Markets Act, the European legislators are risking the privacy and security of all Europeans. It is time to accept the reality that measures meant to force big platforms to be more open will also force them to lower their defences and open the data of Europeans to bad actors. No amount of wishful thinking will change the fact that forced openness is in a tug of war with security. The DMA’s privacy and security provisions do not come close to taking the problem seriously and unreasonably expect the tech companies to solve a new class of risks that the DMA itself will create.

It cannot be disputed that some of the DMA’s headline ideas, like those on forced interoperability or data portability, will create new privacy and security risks. The real problem is how inadequately those risks are being addressed in the DMA. The legislators seem to think that it is sufficient for them to tell tech companies: “Do this very risky and difficult thing while making sure it does not pose privacy and security risks.” Or, in other words, “nerd harder”. This is not a responsible way to regulate and puts in jeopardy the data of Europeans and our access to services we rely on.

Excluding bad and unreliable actors is the essence of security

One of the DMA’s most discussed ideas, forced interoperability, can very easily become one of the biggest tech policy failures. The original DMA proposal was relatively reasonable compared to the radical amendments adopted by the European Parliament, especially with respect to social networks and messaging services. If adopted, those rules would open the data of Europeans to exploitation by bad actors on a scale that would dwarf the Cambridge Analytica scandal.

Refusing to exchange our data with unidentified, unvetted third parties is precisely what we should expect from digital service providers. If taken seriously, ‘guaranteeing a high level of security’ would mean that data should only be exchanged with businesses that provide at least an equivalent level of security. And here is the problem: the level of security provided by the major tech companies to their users is largely unparalleled in the commercial world and even the vast majority of government organisations could not match them.

We need to avoid a race to the bottom. Safe interoperability is possible, but it will likely mean notable friction and the exclusion of “two guys in a basement” start-ups. Will the DMA be interpreted in a way that accepts this reality, or will this concern be addressed by more hand-waving and by blaming big tech for the failings of their competitors? The answer is not hard to guess.

As with forced interoperability, the DMA would force the in-scope companies to share user data with other businesses, including data that could allow individual searches by users of search engines to be traced back. As even the European Data Protection Supervisor noted, a great deal of sensitive information about anyone can be gathered just from knowing what they searched for online.

To say, as the DMA does, that such sharing should simply be subject to anonymisation betrays a lack of understanding of how difficult it is to share user-level data in a way that cannot be de-anonymised. De-anonymisation techniques are only growing in sophistication. There can be little hope that a rule on sharing user-level data would be enforced competently, so it would be much safer for all of us if it never became law.
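The difficulty is easiest to see in a toy linkage attack. The sketch below is purely illustrative and uses hypothetical names and data; real de-anonymisation attacks exploit the same principle, joining “anonymised” records with outside data on shared quasi-identifiers, but at a far larger scale:

```python
# Toy linkage attack (illustrative only; all names and data are hypothetical).
# "Anonymised" search records are re-identified by joining them with a
# public dataset on quasi-identifiers (here: postcode and birth year).

anonymised_searches = [
    {"postcode": "1000", "birth_year": 1985, "query": "rare disease treatment"},
    {"postcode": "2000", "birth_year": 1990, "query": "divorce lawyer"},
]

public_register = [
    {"name": "Alice", "postcode": "1000", "birth_year": 1985},
    {"name": "Bob", "postcode": "2000", "birth_year": 1990},
]

def reidentify(searches, register):
    """Link each 'anonymous' record to a named person whenever the
    quasi-identifiers match exactly one entry in the public register."""
    linked = []
    for s in searches:
        matches = [p for p in register
                   if p["postcode"] == s["postcode"]
                   and p["birth_year"] == s["birth_year"]]
        if len(matches) == 1:  # a unique combination re-identifies the person
            linked.append((matches[0]["name"], s["query"]))
    return linked

print(reidentify(anonymised_searches, public_register))
```

In this toy example both “anonymous” search records are attributed to named individuals, even though no name or direct identifier was ever shared.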

Combining data is needed for cybersecurity

To be effective, cyber-defenders need information. The more information they have about the actions of attackers, the better they can protect the users in their care. For example, scanning incoming e-mails for security threats may require the integration of external security services. Looking just at the e-mail may give some cues, but it is easy for attackers to prepare bait e-mails that will not be easily identified this way.

The DMA would prohibit combining personal data from various services unless the user provides specific consent. To keep providing the current level of security under the DMA, service providers will have to start bombarding users with a new kind of consent popup. The providers will also risk massive fines for providing either too much or too little information, or for presenting consent in too positive a light. At a time when cyberattacks are becoming increasingly dangerous, making it more difficult for service providers to protect their users is not wise.

How could the DMA be made safer for users? One step in this direction has been proposed by some EU governments — namely, adding a general provision in Article 7 DMA that gatekeepers should only be required to act proportionately under the DMA, taking into account the need to protect privacy, user safety, quality and functionality of the services.

Increasing information privacy and security through the law is notoriously difficult even when that is the explicit goal of the legislation. At the very least, however, we should expect the law not to decrease the level of privacy and security. The DMA, especially in the European Parliament’s version, clearly sacrifices user privacy and security in favour of helping some businesses fight big tech. The DMA needs to focus more clearly on the interests of users.