Welcome to our new monthly "Ethics in Tech" roundup series. In this series, we will outline some of the key insights, news articles and research papers we came across relating to tech and data ethics.
GitHub acknowledges tech language is biased and changes it
GitHub is one of several open-source companies acknowledging that the language it uses is deeply problematic. It is now proposing changes to replace biased terms such as ‘master’, ‘whitelisting’ and ‘blacklisting’.
While these changes are a welcome start, language will need to change uniformly across the industry in order to keep terminology consistent. Popular replacements suggested on Twitter include the terms allowlist and blocklist. Read more about this on TNW.
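As a quick illustration (our own toy example, not code from the article), the replacement terms read just as naturally in code as the old ones did:

```python
# Toy example: filtering usernames against an allowlist and a blocklist,
# using the replacement terms instead of "whitelist"/"blacklist".
# The sets and names here are made up for illustration.

ALLOWLIST = {"alice", "bob"}   # users explicitly permitted
BLOCKLIST = {"mallory"}        # users explicitly denied

def is_permitted(user: str) -> bool:
    """Deny anyone on the blocklist; otherwise require the allowlist."""
    if user in BLOCKLIST:
        return False
    return user in ALLOWLIST

print(is_permitted("alice"))    # True
print(is_permitted("mallory"))  # False
```

Since the new names describe what the lists actually do, the rename usually costs nothing but a search-and-replace.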
Reddit co-founder steps down from the board and asks for a Black replacement
Another big tech company changing up its representation is Reddit, whose co-founder Alexis Ohanian has resigned from the board and called for his seat to be filled by a Black candidate. Michael Seibel, CEO of Y Combinator, has now taken the seat. While the call was met with a lot of skepticism, as some of the community content featured on Reddit is racist, it is a step in the right direction, and Reddit hopes to revisit its content with the new appointment. Read more about this on The Verge.
YouTube's been sued again for using biased algorithms
YouTube is facing a second ongoing lawsuit over the way it uses its algorithm. This time, the tech giant has been accused of unfairly banning and restricting content on racially sensitive topics, such as videos on Black Lives Matter. YouTube has yet to respond to the claim, but the case shows that the algorithms tech companies use can unintentionally restrict access to content based on protected characteristics, resulting in unfair censorship. Read more about this on TNW.
IBM ceases its facial recognition technology
IBM has announced that it will cease development of general-purpose facial recognition technology, as it has the potential to cause more harm than good, particularly when used for surveillance. Of course, IBM isn’t the first to step back from facial recognition (Google decided against selling it in 2018), and it won’t be the last. There are still plenty of companies selling facial recognition technology, and as such IBM has called for a “national dialogue” surrounding the use of these technologies. Read more about this on TechCrunch.
Progress made on deepfake detection technology but it's still only 65% accurate
The fight against deepfakes continues. A year ago, Facebook launched the Deepfake Detection Challenge: a competition encouraging developers to build algorithms that can reliably detect deepfakes. The winner has now been selected, but the winning model is still only about 65% accurate on videos that weren't specifically provided by Facebook. However, the winning models will be made open source, inviting collaboration and further improvement of the technology. Read more about this on Wired.
New in: Podcast
Ethical Intelligence has launched its own monthly podcast series, and the first episode aired in early June. One of the topics discussed on the show was the use of biased algorithms in Africa.
The podcast highlighted that African countries often import AI systems from Western countries such as the United States. However, these systems are trained on biased datasets that reflect Western culture, and their assumptions don't necessarily transfer to African contexts. To improve fairness, the podcast suggests supporting African engineers to develop their own AI systems that are better fit for purpose and less likely to be biased.
Is diversity in the AI workforce an issue for creating fair technology?
“Facebook CTO Mike Schroepfer endorses the idea that hiring is an important part of diversity in AI and preventing bias for teams building products for users, but he can’t tell you the number of Black people who work at Facebook AI Research.”
This article by VentureBeat poses an interesting question about diversity in AI workforces. One of the key ethical issues in AI is unintended bias, which makes diversity in the development team incredibly important, particularly at companies that intend to produce AI for everyday use. Read more about it on VentureBeat.
On AI regulation: Is a council of citizens the answer?
Wired opinion piece: a council of citizens should regulate algorithms. Wired published an article suggesting that a council of citizens should be in charge of regulating algorithms: an idea derived from ancient Athens. The article coherently explains the difficulty of regulating AI effectively, noting that regulation often involves social trade-offs, which makes it important that a diverse set of voices is represented. And if we're looking for a diverse set of voices, a diverse range of citizens is a natural place to start.
“Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists.” Read more about it on Wired.
New Research (via Venturebeat)
New study found bias in ride-sharing apps affecting protected characteristics
A preprint study suggests that social bias in the algorithms of ride-sharing apps has directly led to unfair increases or decreases in fare rates. The research found that correlations between visit frequencies and particular neighbourhoods (characterised by age, income or race) directly affect fare rates. This means that, for example, neighbourhoods with young people, who are more likely to use taxis, are quoted higher prices. The researchers are still refining their analysis, and it's worth noting that they didn't have access to other relevant data, such as time of day and trip purpose.
“The coauthors report an increase in ride-hailing prices when riders were picked up or dropped off in neighborhoods with a low percentage of (1) people over the age of 40, (2) people with a high school education or less, and (3) houses priced under the median for Chicago. Separately, they found that fares tended to be higher for drop-offs in Chicago neighborhoods with high non-white populations.” Read more about the research on VentureBeat.
Thousands petition against a predictive criminality AI research paper
The problems with biased AI systems that seek to predict criminality and reoffending rates are widespread. Take COMPAS, one of the most cited examples of how AI decision-making can cause harm (for those unfamiliar: COMPAS was a system designed to predict reoffending rates and, due to bias, disproportionately flagged people of colour as likely to reoffend). Now, a paper outlining a deep neural network model to predict criminality has drawn widespread condemnation: a letter requesting that it not be published has received more than 1,000 signatures. This is the second time publication of the paper has been blocked.
This matters because it shows the strides we are making in AI ethics. The field is now actively monitored, and people are scrutinising, at least to some extent, whether research papers benefit or harm the wider tech ethics debate. Read more about this on VentureBeat.
New AI system proposed that could reverse-engineer black box apps
This one is a much more technical and jargon-y paper.
One of the biggest issues in data ethics is explainability: in complex machine learning systems (‘black box’ AI), developers are unable to explain why a particular decision was made. This is especially problematic when AI controls the distribution of resources, for example when it decides whether someone gets a loan or mortgage. That's why, where possible, developers are advised to avoid black box AI. For complex systems, however, it cannot always be avoided.
Many researchers have tried to find ways to reverse-engineer black box systems, and this paper is a great example. From my understanding, the researchers test a model that takes known input-output pairs and then produces several ‘clone programs’ in order to determine which factors most likely led to the observed outcomes. The system is reported to have a 78% success rate. You can read VentureBeat's take on it, and find the research paper, on VB's website.
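Based on that (second-hand) description, the general idea can be sketched as: query the black box with known inputs, record the outputs, and score candidate "clone" programs by how well they reproduce the observed behaviour. This is only an illustrative reconstruction of the concept, not the paper's actual method:

```python
# Illustrative sketch (not the paper's method): probe a black box with
# known inputs, then score candidate "clone" programs by how many of the
# observed input/output pairs each one reproduces.

def black_box(x):
    # Stand-in for an opaque system we can query but not inspect.
    return 3 * x + 1

# Step 1: collect input/output observations by querying the black box.
observations = [(x, black_box(x)) for x in range(-5, 6)]

# Step 2: a pool of candidate clone programs (hypotheses about the logic).
candidates = {
    "double": lambda x: 2 * x,
    "triple_plus_one": lambda x: 3 * x + 1,
    "square": lambda x: x * x,
}

# Step 3: keep the candidate that matches the most observations.
def score(fn):
    return sum(fn(x) == y for x, y in observations)

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # "triple_plus_one" reproduces every observation
```

A real system would search an enormous space of candidate programs rather than three hand-written ones, which is where the reported 78% success rate becomes a meaningful benchmark.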
Uber is investigating moral decision-making frameworks for AI agents
One of the most discussed debates in tech ethics is whether AI can ever make ethical decisions. The difficulty is that ethics is often context-bound, which is difficult to program for. Furthermore, ethical decisions are often ambiguous: what one person considers ethical, another may not.
To determine whether AI could ever be ethical, researchers often suggest applying existing ethical frameworks to AI decision-making. However, Uber researchers have found that humans are often inconsistent in their views and apply different ethical frameworks in different contexts. Machines would equally need to be able to adopt different frameworks on different occasions, and current approaches don't meet that requirement.
This is not a ground-breaking finding in itself; more interesting is that, as part of the research, Uber is now working on a “plan to test algorithms for moral uncertainty (and machine ethics in general) in more complex domains”, in order to develop new ethical decision-making frameworks for AI applications.
It's still a bit vague, so we're looking forward to keeping an eye on how the research develops! Read more on VentureBeat.
This list is by no means exhaustive and, to keep this post within a reasonable length, we have left out some stories.
All the articles in this post were gathered from a varied, but still limited, set of news sources, including arXiv (research papers), Next Reality, Road to VR, TechCrunch, Tech Talks, The Next Web, VentureBeat, Virtual Reality Times, VR Focus, and Wired.
We're always looking for new stories and articles for the next edition. If you are producing an article or news story in the next few weeks that you would like us to publish, let us know by emailing email@example.com.