The Final Verdict: How the CJEU’s Google Shopping Judgment of 2024 Cemented the EU’s Approach to Digital Self-Preferencing and Empowered the DMA

The CJEU’s Definitive Stance on Platform Abuse

The judgment delivered by the Court of Justice of the European Union (CJEU) on September 10, 2024 (Case C-48/22 P), represents the definitive legal conclusion to the decade-long Google Shopping antitrust dispute. The ruling upheld the European Commission's original finding of abuse of dominance and confirmed the landmark fine of €2.42 billion.

Crucially, the ruling established a new legal standard for platform behavior. It characterized active discriminatory leveraging (self-preferencing) as an independent category of abuse under Article 102 of the Treaty on the Functioning of the European Union (TFEU), clearly distinguishing it from the stringent “refusal to supply” test.

This landmark victory provides the critical judicial underpinning necessary for ex-ante enforcement of the Digital Markets Act (DMA), validating the core theories of anti-competitive harm currently applied against designated "gatekeepers". Furthermore, the judicial finality immediately transforms the corporate risk landscape, clearing the path for substantial follow-on damages claims across the EU, already estimated to exceed €12 billion.

Conclusion of the Saga: History and the New Legal Precedent

The saga began with the European Commission's June 2017 decision, which found that Google had breached EU antitrust rules (Case AT.39740) by abusing its dominant position in the online general search market. The abuse centered on illegal leveraging: Google actively promoted its own comparison shopping service (CSS) through prominent placement in its search results while simultaneously demoting rivals through algorithmic adjustments. The long appeal process, culminating in the CJEU's September 2024 verdict after the EU General Court largely upheld the finding in November 2021, demonstrates the inherent inertia of ex-post antitrust enforcement against sophisticated digital conduct and serves as the central operational justification for the EU's subsequent shift to pre-emptive regulation via the DMA. Consumer advocacy groups welcomed the Court's verdict for confirming that the harm focused on denying consumers access to unbiased online information and competitive pricing.

Defining Self-Preferencing as an Independent Abuse

The most critical shift was the CJEU's explicit rejection of Google's defense, which argued the case should have been assessed against the strict Bronner refusal-to-supply test. The Court determined the Bronner conditions were inapplicable because the case did not involve a refusal of access to the search results page. Instead, it involved access being granted but subjected to discriminatory conditions.  

By making this distinction, the Court established active discriminatory leveraging, the favoring of a firm's own products or services to the detriment of rivals, as an independent form of abuse. This principle requires only a finding that the dominant undertaking provided access to its infrastructure but subjected that access to "unfair conditions," such as algorithmic demotion or unfavorable display. Crucially, the CJEU confirmed that the Commission discharged its duty by demonstrating the conduct's capability of foreclosing rivals, rather than being required to prove the actual elimination of competition.

The Collision of Law: Antitrust Meets Ex-Ante Regulation

The final CJEU ruling arrived at a critical juncture for EU policy, providing robust judicial confirmation that the anti-competitive practices targeted by the DMA are indeed harmful and abusive under established TFEU competition law. This precedent drastically simplifies the Commission’s task under the DMA, especially concerning self-preferencing practices covered by DMA Article 6(5).  

This regulatory drive is converging significantly with the framework for Artificial Intelligence. The AI Act entered into force in August 2024, with governance rules and specific obligations for General-Purpose AI (GPAI) models becoming applicable in August 2025. The nexus between the competition ruling (requiring equal algorithmic treatment), the DMA (requiring non-discrimination), and the AI Act (requiring transparency for GPAI systems) creates a single, integrated EU mandate: the critical algorithms used by gatekeepers must be auditable, explainable, and provably neutral.

The ruling also solidifies the financial consequences. Google was ordered to cease its illegal conduct and to apply the "same processes and methods" to the positioning and display of rival CSS results as it applies to its own service. The judicial finality triggers massive liability exposure, with follow-on damages claims from price-comparison websites across at least seven European countries currently estimated to total a minimum of €12 billion. This decisive European outcome contrasts sharply with the parallel US Department of Justice (DOJ) antitrust case against Google, in which the court's September 2025 ruling took a "light touch" approach, declining to impose the harshest penalties and ordering only limited remedies.

The Broader Belgian/EU Regulatory Environment

The Google Shopping decision must be viewed alongside equally stringent and dynamic regulatory enforcement at the national and sectoral levels.

In Belgium, the Data Protection Authority (DPA) demonstrated a heightened enforcement appetite. In December 2024, the Belgian DPA imposed a substantial fine of €200,000 on a hospital following a ransomware attack, explicitly ruling that the hospital had failed in its active duty to take preventive cybersecurity measures.  

Simultaneously, the CJEU recently refined data privacy standards under the GDPR, clarifying that the pseudonymisation of data does not automatically remove its status as "personal data" in all contexts. The identifiability of data subjects must be rigorously assessed from the controller's perspective: if a third party could reasonably re-identify individuals by cross-referencing auxiliary data, the original data remains personal. This significantly increases the burden on organizations, including gatekeepers using vast internal datasets for AI model training, to conduct rigorous Data Protection Impact Assessments (DPIAs) that comprehensively consider re-identification risks.

The CJEU's final judgment in the Google Shopping case has finalized the legal standard for digital self-preferencing, validating the entire philosophy underpinning the EU’s assertive approach to regulating digital gatekeepers and confirming massive financial liability exposure.
