In April, the EU Commission proposed its first set of rules for regulating AI. Meanwhile, Twitter and Facebook increased their efforts to improve the fairness of their algorithms, and the nonprofit All Tech is Human is preparing its autumn report. More on that here, in April's tech ethics roundup.

EU Commission calls for AI regulation

In a bid to “turn Europe into the global hub for trustworthy Artificial Intelligence”, the EU Commission has proposed a new set of rules that aim to “make sure that Europeans can trust what AI has to offer”. 

The rules, outlined in its AI Regulation proposal, involve both tighter restrictions on the use of high-risk AI and outright bans on AI systems that bear ‘unacceptable risks’. AI systems with ‘limited risks’ will also be required to be more transparent about how they work.

Whilst the UK is no longer an EU member, it is worth noting that if the proposal is accepted, any AI that enters the EU market must adhere to these rules.

“The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU [...] It does not apply to private, non-professional uses.”

In its press statement, the EU Commission calls it “the first-ever legal framework on AI”.

The categories and expectations, broken down in a table.

Based on the draft so far, we have provided a summary of the types of AI and proposed regulations in the table below (or view the full-size image here). It is worth noting that the core focus of the proposals right now is on high-risk AI. While other types of AI are mentioned, examples of what they entail are still very limited.

Type of Risk: Unacceptable risk
Risk Elaboration: AI that violates fundamental human rights. These systems pose a clear threat to the safety, livelihoods and rights of people.
Risk Examples:
- Social scoring by governments
- Exploitation of vulnerabilities of children (e.g. toys using voice assistance to encourage dangerous behaviours in minors)
- Live remote biometric identification systems in public spaces
- Systems that circumvent free will
Steps before going to market: Completely banned from going to market.

Type of Risk: High risk
Risk Elaboration: AI systems that pose a high risk to the health, safety and fundamental rights of people. May impact the allocation of resources.
Risk Examples:
- Remote biometric identification systems in public spaces that are not live
- Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk
- Educational or vocational training that may determine access to education and the professional course of someone’s life
- Safety components of products (e.g. AI application in robot-assisted surgery)
- Employment, workers management and access to self-employment (e.g. CV-sorting software)
- Essential private and public services (e.g. credit scoring)
- Law enforcement that may interfere with people’s fundamental rights (e.g. evidence evaluation)
- Migration, asylum and border control management (e.g. verification of travel documents)
- Administration of justice and democratic processes
Steps before going to market: This AI needs to undergo a conformity assessment, giving it a “CE marking”, and must comply with the following requirements:
- Adequate risk assessment and mitigation systems;
- High-quality datasets feeding the system, to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.

Type of Risk: Limited risk
Risk Elaboration: Systems with specific transparency obligations and a risk of manipulation.
Risk Examples:
- Systems that interact with humans, e.g. chatbots
- Systems used to detect emotions or determine association with categories based on biometric data
- Systems that generate or manipulate content, e.g. deepfakes
Steps before going to market: Users should be able to make informed decisions. This means that “when persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance.”

Type of Risk: Minimal risk
Risk Elaboration: The vast majority of AI systems.
Risk Examples:
- AI-enabled videogames
- Spam filters
- & more
Steps before going to market: Free use of these applications; no regulation in place. Providers may choose to adhere to voluntary codes of conduct.

Image sourced from the EU Commission.
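To make the tiered structure concrete, here is a minimal sketch of how a compliance team might encode the four risk tiers and their market-entry obligations as data, e.g. for an internal checklist. The tier names and obligation summaries paraphrase the table above; this is a hypothetical illustration, not an official schema from the proposal.

```python
# Hypothetical encoding of the proposal's four risk tiers.
# Obligation text paraphrases the summary table; not an official schema.
OBLIGATIONS = {
    "unacceptable": "Banned from the EU market outright.",
    "high": "Conformity assessment and CE marking required before market entry.",
    "limited": "Transparency obligations: users must be told they face an AI system.",
    "minimal": "No new obligations; voluntary codes of conduct.",
}

def obligation_for(tier: str) -> str:
    """Look up the market-entry obligation for a given risk tier."""
    try:
        return OBLIGATIONS[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligation_for("High"))
```

A lookup like this only works, of course, once a system has been assigned a tier, which is exactly the classification question the proposal leaves fuzzy, as discussed below.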


Whilst it's only a proposal, the AI regulation as outlined above is flawed. In particular, the differences between risk categories are not clear enough to warrant legal enforcement. For example, distinguishing 'forbidden AI' from 'high-risk AI' presupposes a shared understanding of what constitutes a 'clear threat' versus a 'possible/high risk' to livelihoods and safety. But who decides? Furthermore, the report itself (while calling its criteria “solid”) acknowledges that the Commission “may expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology” - in other words, the categories may need further elaboration.

In addition, criteria for banned AI risks such as the “circumvention of free will” will need to be clarified.

However, there are some good things about the segmentation of AI risks. For example, the proposal highlights that most 'everyday' developers work on minimal-risk AI and therefore don't need to be regulated as rigorously. This helps counter generalised negativity about AI, avoiding the assumption that all AI is a possible threat to human rights.

Further analysis: The challenge with AI and liability

Another key challenge in AI regulation is the question of who is liable for faults in the AI. The regulation needs to clearly elaborate on the respective responsibilities of the developer and the company deploying AI - and on cases of open-source AI. For example, if an SME or manufacturer uses AI from a third-party developer, who is liable for harmful outcomes?

All the report states is that:

“It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system."  (p.32)

In this scenario, ‘provider’ means "a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge" (p.41).

It doesn’t go into much further detail, but this suggests that, if the proposals are approved, contracts will have to include clear licensing agreements for high-risk AI, and open-source algorithms may not be as easily distributed within high-risk AI specifically. This would echo third-party concerns that the regulation could stifle innovation.

AI Liability in Manufacturing

This liability goes a bit further in scenarios where AI is used to ensure product safety in machinery. Alongside the proposed AI regulation, the EU Commission has proposed updating its "Machinery Regulation" (formerly the Machinery Directive) to address 'new' risks created by 'emerging technologies'.

In a nutshell, the proposal stipulates that when using AI, SMEs with high-risk machinery should conduct a conformity assessment through a third party to ensure the AI can be safely integrated “into the overall machinery”. This assessment will come with a fee, though the proposal is calling for subsidised costs for SMEs.

Whilst the proposal is very specific to the manufacturing sector, it does indicate how liability could be perceived within non-digital sectors - where bringing in 'external AI' implies a responsibility to assess compatibility and integration.

Read the EU's press release here, or view the full proposal here.

“On artificial intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.” - Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age

Join the Responsible Tech Working Group

All Tech is Human is working toward improving the talent pipeline in the responsible tech industry. Every year, it launches its Guide to Responsible Tech to raise awareness of careers, job roles, and opportunities in responsible tech across the globe. 

The organisation has now launched a collaborative 'responsible tech working group' and is looking for individuals to join. It is also encouraging suggestions and contributions to the upcoming Autumn report. For more information, visit its website.

Only 6% of big global companies have adopted AI

A study by Juniper Networks found that only 6% of businesses surveyed have adopted AI across their organisation - despite 95% of respondents indicating their organisation would benefit from AI.

This was the result of a global survey of 700 C-suite executives and IT decision-makers working at large enterprises (only 13% reported revenues below $50m). The majority of respondents (399) came from North America, and 201 came from Europe.

Whilst the report doesn't necessarily apply to our North East tech ecosystem, it is interesting to see that the adoption of AI is still not commonplace even among organisations with large financial resources. Furthermore, the three key challenges to AI adoption all pointed to gaps in understanding: how to qualify and use good data, how to govern AI across the organisation, and how to access the right talent. Conversely, the two key ‘enablers’ of AI adoption were better access to quality data and AI tools. This emphasises the need for better support with acquiring representative data and nurturing more knowledgeable staff within a growing field.

Read the full report at Juniper (Thx Venturebeat)

Twitter & Facebook improve algorithms

If you're looking for examples of flawed algorithms, social media would make a good target. With no shortage of public scandals and recent anti-trust cases, social media giants have increasingly felt pressure to do better or risk losing their legitimacy. Whilst arguably overdue, it is good to see Facebook and Twitter introduce new initiatives to create fairer algorithms – and while these giants still have a long way to go, their efforts are likely to standardise initial steps in creating better AI.


Twitter introduced its Responsible Machine Learning (ML) working group in order to improve and scrutinise its algorithms.

The working group will study the effects of Twitter's algorithms over time and highlighted the following three projects as examples of focus points: a gender and racial bias analysis of its image cropping algorithm, a fairness assessment of its timeline recommendations across racial subgroups, and an analysis of content recommendations for different political ideologies.

It is worth noting that these are all topics that have recently been scrutinised in the media, which suggests these efforts are mitigation of harm already caused by algorithms rather than prevention.

What is interesting, though, is that Twitter is working on making its machine learning explainable, which means that in time the company should be able to explain exactly how its algorithms produce certain outcomes and recommendations. This could make a big difference to the ‘secrecy’ behind the algorithms of social media giants.
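Twitter has not published its explainability methods, but to illustrate what model-agnostic explainability can look like in practice, here is a minimal sketch using permutation importance on entirely hypothetical data: shuffle one feature at a time and measure how much the model's accuracy drops. The feature names and data below are invented for the example.

```python
# Illustrative sketch of one explainability technique (permutation importance).
# Hypothetical data and feature names; not Twitter's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Imagined engagement data: feature 0 drives the label, feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the accuracy drop it causes.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["followed_author", "random_noise"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Techniques like this report which inputs a model relies on, which is a first step towards the kind of outcome-level explanations Twitter is describing.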

“When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure we’re studying those changes and using them to build a better product.”

Read the official statement over at Twitter (Thx TNW).


Facebook, on the other hand, introduced its Casual Conversations initiative to improve its data inputs. To reduce biased outcomes, Facebook sought to improve the data used within its algorithms by inviting a range of individuals to label themselves on video. This enables Facebook to use self-labelled data instead of predefined data sets that might contain prejudiced labelling. Prejudiced labelling is a big issue in data ethics: it can lead to false outputs and to biased decisions made automatically on the basis of protected characteristics and incorrect assumptions.

Over 3,000 individuals were paid to participate in the project. However, it is worth noting that use of this data set by Facebook employees for evaluation purposes is not mandatory and merely encouraged. Nonetheless, Facebook is hoping to continue to build on its data sets and it is an interesting example of how organisations are seeing the value in acquiring quality data instead of using ‘random’ labels.
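One reason self-reported labels matter is that they enable disaggregated evaluation: measuring a model's accuracy separately for each self-identified group rather than only in aggregate. A minimal sketch of the idea, with invented data and group names (Facebook has not published its evaluation code):

```python
# Sketch of disaggregated evaluation using self-reported group labels.
# Hypothetical results; illustrates the idea behind Casual Conversations.
from collections import defaultdict

# (prediction_correct, self_reported_age_band) pairs from an imagined eval run.
results = [
    (True, "18-30"), (True, "18-30"), (False, "18-30"),
    (True, "31-45"), (False, "31-45"),
    (True, "46-85"), (False, "46-85"), (False, "46-85"),
]

by_group = defaultdict(list)
for correct, group in results:
    by_group[group].append(correct)

# Accuracy per group; a large gap between groups flags a fairness problem.
accuracy = {g: sum(v) / len(v) for g, v in by_group.items()}
print(accuracy)
```

An aggregate accuracy figure can look healthy while hiding a poor score for one group, which is exactly what per-group breakdowns like this are meant to surface.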

“— Casual Conversations — is the first of its kind featuring paid people who explicitly provided their age and gender as opposed to labeling this information by third parties or estimating it using models."

Via Venturebeat.


This list is by no means exhaustive and, to keep this post within a reasonable length, we have left out some stories. 

All the articles in this post have been gathered through a variety, but still limited number, of news sources including Arxiv (research papers), Next Reality, Road to VR, Tech Crunch, Tech Talks, The Next Web, Venturebeat, Virtual Reality Times, VR Focus, and Wired.

We're always looking for new stories and articles for the next edition. If you are producing an article or news story in the next few weeks that you would like us to publish, let us know by emailing