    GigKiln

    Article 22: challenging automated decisions

    Factual guidance · Fresh — reviewed 19 April 2026 · Sources: 6 · Next review: 18 July 2026

    What it is

    Article 22 of the UK GDPR used to give a broad right not to be subject to solely automated decisions with legal or similarly significant effects. The Data (Use and Access) Act 2025, in force from 26 June 2025, rewrote that framework. Significant automated decisions are now allowed more widely in the UK, but platforms must build in four safeguards: information about the decision, a route to make representations, a route to obtain human intervention, and a route to contest the outcome. The ICO says "human involvement" only counts as meaningful if a person carefully analyses the decision and can actually change the outcome.

    How it applies to you

    For Uber, Deliveroo, Amazon Flex and the smaller platforms, the classic Article 22 trigger is deactivation, suspension, fraud flagging, route throttling, block allocation changes or safety scoring. If the app says "your account has been deactivated" with no usable explanation, and you suspect a fraud score, rating model, identity check or automated complaint system drove the result, the amended Article 22 framework gives you four things to demand, not one right to refuse.

    Take a 22-year-old Uber driver in Glasgow deactivated on 18 February 2026 for alleged fraud. Before DUAA, the worker's opening move was "Article 22 says you cannot make solely automated decisions about me". That is no longer the law. The new opening move is "the decision has legal or similarly significant effects because it cuts off my income, please confirm whether it was solely automated or involved meaningful human review, and in either case provide the information, representation, human intervention and contest routes required by the amended framework". The worker also asks which data, rules, scores and profiling inputs were used. If Uber claims a human reviewed the case, the worker presses on what that human actually looked at, because an auto-summary rubber-stamped by a support agent is not meaningful involvement under the ICO's test.

    The ICO's 2026 messaging on automated decisions in recruitment and hiring shows the regulator is actively warning organisations that automation without proper review is not acceptable. That messaging is about employers, but it applies with equal force to platforms using fraud scoring and account risk models to remove workers. Public ICO enforcement against Uber, Deliveroo or Amazon Flex specifically is still limited, so workers should treat ICO complaints as useful pressure and evidence, not guaranteed reinstatement. The real leverage comes from combining an automated decision challenge with an Article 15 SAR and, where appropriate, ACAS Early Conciliation within three months minus one day of the deactivation.
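    The "three months minus one day" time limit is easy to miscount by hand, especially across month-end boundaries. A rough sketch of the arithmetic, using the 18 February 2026 deactivation from the example above (this is an illustration only, not legal advice — always confirm your actual deadline with ACAS, since tribunal time limits have edge cases and exceptions):

```python
from datetime import date, timedelta

def acas_deadline(deactivation: date) -> date:
    """Rough 'three months minus one day' calculation from the date of
    the act complained of. Illustrative only - verify with ACAS."""
    # Step 1: move forward three calendar months, clamping the day
    # if the target month is shorter (e.g. 30 November -> 28/29 February).
    month = deactivation.month + 3
    year = deactivation.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Find the last day of the target month via the first of the next month.
    first_of_next = date(year + (month == 12), month % 12 + 1, 1)
    last_day = (first_of_next - timedelta(days=1)).day
    plus_three = date(year, month, min(deactivation.day, last_day))
    # Step 2: subtract one day.
    return plus_three - timedelta(days=1)

# Worked example from the text: deactivated 18 February 2026.
print(acas_deadline(date(2026, 2, 18)))  # 2026-05-17
```

    So for the Glasgow driver, ACAS Early Conciliation would need to start by 17 May 2026.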

    Action steps

    • Send an automated decision challenge as soon as a significant adverse decision hits, alongside your SAR.
    • Ask directly whether the decision was solely automated, and if human, who reviewed and what they actually looked at.
    • Demand the four safeguards: information, representations, human intervention, contest route.
    • Attach a short representation explaining why the decision is wrong or incomplete, so the platform has to engage with facts.
    • If the platform gives a thin or formulaic reply, complain to the ICO and keep the ACAS clock in view.


    Last reviewed

    19 April 2026



    Sources

    • UK GDPR Article 22 automated individual decision-making
    • Data (Use and Access) Act 2025
    • ICO guidance on automated decision-making and profiling
    • ICO 2026 automated decisions in recruitment warning
    • Data Protection Act 2018 section 14
    • Equality Act 2010 protected characteristics