The Human Union

Generative AI
is not good for us. We must act to protect against its many harms: social, economic and political.

CONTENTS


5 harms of Generative AI

Recommended legislation against Generative AI

What you can do about it

5 harms of Generative AI – Expanded

Conclusion

5 harms of Generative AI

1. Job Loss

  • Generative AI has the potential to be more productive than any human.
  • Generative AI does not need to be paid like a human.
  • Employers are inherently incentivised to replace human employees with AI.

2. Creative Degradation

  • Even if unemployment does not increase as projected, Generative AI reduces the human creativity involved in many fields.
  • Reduced creativity leads to reduced fulfillment and workplace happiness for people in these fields.
  • The flood of Generative AI content undermines faith in, and the value of, non-AI creations.
  • Generative AI content reinforces algorithmic silos.

3. Weakening Education

  • Generative AI offers an easily accessible means to cheat.
  • Work produced by Generative AI is almost impossible to reliably detect, and this problem is increasing as the technology evolves.
  • The increasing prevalence of cheating weakens the quality of education and impacts students’ development.

4. Plagiarism

  • Generative AI creations adapt the work of other people without permission.
  • This is in violation of the Copyright Act 1968.

5. Disinformation & Misinformation

  • Generative AI can create increasingly photo-realistic images and video.
  • These images are an easily accessible means to spread potentially dangerous disinformation online.
  • Intentional disinformation can then spread even more rapidly via unintentional misinformation.
  • As the technology evolves, the regular person’s ability to distinguish real and fake imagery will diminish, loosening our grip on truth.

It is our firm belief that the above reasons against Generative AI outweigh any reason for it. The capability of these programs to output human-level content is incredible and grows by the day – but this capability is precisely the problem. We may be creating a world for ourselves in which we outsource our creativity to machines, robbing ourselves of the very thing that makes us human for the sake of technological progress.

This is not to say we are against all forms of technological progress. There are many areas in which humanity could stand to benefit greatly from new technologies. But we are against this specific new technology: Generative AI.

The Australian Government has decided in the past that specific technologies are not worth the risk; the Environment Protection and Biodiversity Conservation Act 1999, for instance, prohibits the construction of nuclear power plants in Australia. We call on the Australian Government to follow this precedent and safeguard against potentially harmful technology now, in regard to Generative AI.

Recommended legislation against Generative AI

We recommend legislation under two key categories:

1. Restrict Access

  • Block internet access to ChatGPT and other Generative AI programs like it in Australia.
  • Precedent exists for the Australian Government to block internet access to sites determined harmful: the Broadcasting Services Act 1992, the Suicide Related Materials Offences Act 2006, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, and the 2018 amendment to the Copyright Act 1968.
  • Provisions could be made to allow for Generative AI use in very restricted areas, e.g. medical analysis which could not otherwise be done by a human. This is consistent with most medical technologies (drugs and other treatments), which are restricted for use in the medical field, by medical professionals.

We do recognise that the above may be seen as draconian; however, it is our stance that such restriction of Generative AI would be of the most benefit to society. Nevertheless, we also recognise that legislation of this sort is very unlikely to be passed in the near future – and so, failing the above, we recommend the following:

2. Limit Impact

  • Amend the Fair Work Act 2009 to cover replacement by Generative AI under Unfair Dismissal.
  • Introduce strong penalties against intentional disinformation using Generative AI, and empower media regulators to monitor and remove cases of disinformation online.
  • Apply strong penalties on the use of Generative AI by students, when presenting AI generated work as their own.

What you can do about it

Unless you are a politician or major corporate figure reading this, chances are you do not have the power to directly bring any of the above legislation about. However, this definitely does not mean that there is nothing you can do.

The governments who care only about boosting GDP, and the corporations who care only about their bottom line – they rely on our complacency to slowly push Generative AI further and further into our daily lives.

Whether we can stop all of the above harms of Generative AI remains to be seen – some are already coming to fruition – but what is certain is that nothing will happen if we don’t do anything about it.

You have the power to do two key things:

1. Abstain

  • Every prompt entered into a Generative AI program trains it, improves it. Furthermore, these companies rely on a continuous userbase (sometimes under a subscription model) to keep themselves afloat. The simplest step you can take against Generative AI is to not use it.
  • This can be difficult if there is professional pressure to use Generative AI. However, if you can, making the conscious decision to not use Generative AI – whether for work or leisure – means that you are not contributing to its normalisation, that you are not improving the very thing that is eroding the way we communicate, the exchange of art, our social bonds – eroding society.
  • This also means not supporting individuals and companies who produce works using Generative AI – boycotting their work can send a message that this is not what the audience desires.

2. Agitate

  • Just abstaining from Generative AI is enough – every small act adds up. However, if you want to take things further, there are ways to make your voice heard by the people making decisions about whether to restrict Generative AI.
  • It may not be glamorous, but one simple piece of activism you can undertake is letter-writing. Every state government in Australia has a website where you can find your local MP and their contact details. Writing letters to this MP expressing your concerns around Generative AI may seem useless, but in enough volume change can happen. Amnesty International has a proven track record of effecting real outcomes through such letter-writing campaigns.

5 harms of Generative AI – Expanded

1. Job Loss


McKinsey & Company (2017) estimates that automation will displace the jobs of between 400 million and 800 million people by 2030. Of the total displaced, 75 to 375 million may need to switch occupational categories entirely and learn new skills. Whether the global education system has the funds or capacity to retrain 375 million people remains to be seen, and the risk of mass unemployment remains large; “you would break the university system” says Gabe Dalporto, CEO of online course provider, Udacity, in a 2020 article by Time Magazine.

UPDATE – NOVEMBER 2023: A survey of 750 business leaders conducted by online platform Resume Builder, released in November of 2023, showed that 37% of companies using AI say the technology replaced workers in 2023. 44% said it will replace workers in 2024. The trend is clear – the threat to workers is here, and it is growing.

Some argue that Generative AI will never supplant humans because it is not truly creative, that it merely mimics existing content and does not have original thought. However, we argue that the means by which Generative AI creates is irrelevant. So long as the outcome – the content – is deemed useful, then it seems inevitable that the technology will be embraced by employers; cutting down on employees to pay, and boosting the productivity of those who remain. Entire creative teams replaced by individuals feeding prompts to AI.

2. Creative Degradation

Even if the above job loss projections are wrong, even if there is no mass unemployment, even if everyone ends up reconfigured into some different role around the presence of AI – have we not still lost something?

Creativity is about more than the initial idea and the final outcome; it is about the process in between, and this is where much of the joy in creating lies. Generative AI removes that process from the equation, reducing the role of humans to feeding Generative AI prompts and tweaking what comes out; curators of AI creation, rather than creators themselves. And that is only right now. As things advance, could there be some stage where the need for humans to feed prompts is removed altogether, where the AI itself analyses the market to produce the most in-demand creative product? Currently, Generative AI still struggles with certain tasks… [Human hands were used as an example here, however even this has become outdated in the months since first writing]. But with how rapidly Generative AI has evolved in recent years, it doesn’t seem implausible that it will overcome these hurdles and no longer require even the editing of humans.

And where does that leave us?

It is becoming not at all uncommon to see people commenting under art online with sentiments like: “Is this AI?”, and “looks like AI”, or “an AI could do better”.

Generative AI undermines faith in, and the value of, non-AI creations, and places unfair pressure on human artists to output at the same level. Artists have a choice whether to use this technology or not, but as AI art becomes more and more prevalent, it will become increasingly hard for human artists not to buckle under the pressure. People don’t only create for creation’s sake – this is a part of it, for sure – but many understandably seek the appreciation of others for their creative efforts. And so they should! Art should not exist in a vacuum; it deserves to be shared, appreciated and reciprocated. The flood of Generative AI works takes attention away from genuine human artists, and can have the demoralising effect of being equally appreciated while taking a fraction of the effort to produce.

Some people argue that the solution for AI harms lies in a Universal Basic Income (UBI), to offset the perils of mass unemployment with a guaranteed monthly allowance from the government, affording everyone a reasonable standard of living regardless of their job status. This, they argue, would pave the way for a future of humanity free from labour, able to pursue whatever creative pursuits they so desire in their now limitless free time. This solution relies on two assumptions, which we find equally shaky.

A) That the government would introduce such a system in the first place. UBI remains a fringe political concept in most countries – Australia included – and there is little sign of this changing under the current system.

B) That in a world where AI generated content is commonplace, anyone would find value in human created works, and therefore that anyone would find fulfillment in creating them. This is a reiteration of a previous point, but it still stands that the more numerous and capable Generative AI becomes, the less need people will have to turn to others for pieces of art, entertainment, music, etc.

In a world where everyone can have exactly what they desire by typing a few words into a text prompt, what incentive will there be to appreciate the creations of fellow humans – flawed as they may be? What incentive will there be to create at all? The idea of UBI certainly has benefits in itself, separate from Generative AI; however, we argue that UBI proposals are not an adequate solution to AI harms specifically.

But for many, work has never offered traditionally creative opportunities, and so any arguments around this could be seen as coming from a privileged perspective.

However, there is more at stake than the fulfillment of creators. Flipping perspectives, we must also consider what a world of AI generated content means for audiences, and how it could exacerbate an increasingly siloed online landscape.

Today, algorithms dictate the kinds of content people consume online. The content shown across most social media feeds differs from individual to individual, and is determined by what the algorithm thinks that individual wants to see in the moment. This creates a feedback-loop of sorts, where content is presented to an individual, the individual consumes said content, and is then presented with more of that same kind of content. People confined within their own personal filter bubbles, unexposed to content which may contradict their immediate pleasure. Generative AI could further silo people by generating the content itself, perfectly tailored to each individual – no shared experience – only a constant stream of pleasant sounds and visuals.
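This rich-get-richer dynamic can be illustrated with a deliberately simplified toy model – a sketch of the feedback loop described above, not any real platform’s algorithm. The categories and weights here are purely illustrative:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

CATEGORIES = ["news", "art", "sport", "memes"]
engagement = {c: 1 for c in CATEGORIES}  # start with no preference

for step in range(1000):
    # The feed recommends in proportion to past engagement...
    shown = random.choices(
        CATEGORIES, weights=[engagement[c] for c in CATEGORIES]
    )[0]
    # ...and consuming the item reinforces that category's weight.
    engagement[shown] += 1

share = max(engagement.values()) / sum(engagement.values())
print(engagement, f"top category share: {share:.0%}")
```

Even though every category starts equal, whichever one gets a few early views is recommended more, viewed more, and recommended more again – the same loop the paragraph above describes, compounding from nothing but chance.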

3. Weakening Education

Generative AI offers a free, easily accessible means for students to cheat, weakening the quality of their education and self-sabotaging their own development. In January 2023, online course provider Study.com released results of a survey of 1,000 students over the age of 18, around their knowledge and use of ChatGPT. It found that 89% of students had used ChatGPT to help with a homework assignment, 53% had it write an essay, 48% used it for an at-home test or quiz, and 22% had it write an outline for a paper.

We cannot blame students for wanting to take advantage of this new technology. It’s an easy workaround, a way to circumvent what would otherwise be hours of toil. But education is not meant to be easy. It shouldn’t be agonising either, but some level of difficulty is needed to challenge, to give yourself the opportunity to overcome and grow from the experience. The brain is like a muscle: it needs working out to become stronger, and a lack of exercise will weaken it over time. This is the risk Generative AI poses to students. If we allow students to become reliant on it, then the quality of their education will be lessened and their development affected.

To deter such cheating, there have been attempts to develop programs which can detect work produced by Generative AI, but by the admission of OpenAI (creators of ChatGPT) themselves, “it is impossible to reliably detect all AI-written text”. Originality.ai and Winston AI are two major AI detection tools on the market at the moment. Competitor analysis conducted by Originality.ai in April 2023 showed that, when tested on 7 different AI-generated sample texts, Originality.ai could only accurately detect the AI text an average of 79.71% of the time, while Winston AI could only manage 42.29%.

And this is only right now. As Generative AI evolves, how accurate will these detection programs remain? Will they be accurate enough?

4. Plagiarism

The work Generative AI produces does not come from thin air; in order to write an essay or produce an image, the AI must first be trained on immense quantities of existing content. What is produced by the Generative AI is, in effect, an amalgam, an adaptation, of all the content it has previously absorbed.

What’s worth noting is that this use of content is largely done without the permission of the people who created it in the first place. Going further, a Generative AI can be asked to produce work in the style of a specific creator, in which case it will explicitly draw on that creator’s work when generating its own – still without permission.

This plagiarism amounts to a violation of the Commonwealth Copyright Act 1968, which prohibits the adaptation of works without permission.

5. Disinformation & Misinformation

Note: The relentless pace of AI development means that any examples included in this section will most likely be outdated at the time of reading.

Every day AI gets a little bit better. Every day a sense of shared truth gets further away.

On Tuesday 4 April 2023, Donald Trump was arrested. But two weeks earlier on 21 March, images of Trump’s arrest were already circulating on social media – images generated by AI. The original post on Twitter (now X), by user Eliot Higgins, has now been viewed 6.5 million times (as of writing), and has been spread across many other social media platforms. The images are not perfect – there are imperfections in how the AI has rendered faces and hands, unusual positioning of legs, particularly for figures in the background – but of the 6.5 million people who saw this original post alone, it’s not hard to believe some were fooled, even if just for a moment. As said by the original poster Eliot Higgins himself: “Fact-checking is something that takes a lot more time than a retweet.”

A more innocuous example occurred later that week in March, when Reddit user Pablo Xavier posted an AI generated image depicting Pope Francis in a white puffer jacket from fashion brand Balenciaga. In response, model and TV personality Chrissy Teigen tweeted: “didn’t give it a second thought. no way am I surviving the future of technology.” Not everyone was fooled by the above two instances of AI generated disinformation, but if even a portion of people were, then harm was done. The threat lies not just in the bad actors who produce the disinformation, but in the people who are fooled by it, and who then go on to spread it to their peers.

Some argue that scares around Generative AI disinformation are no different from worries about photo-editing software of the past like Photoshop, which has long been a means to create seemingly photo-realistic, yet fake imagery. But editing images in Photoshop is time-intensive and takes specialist knowledge of the program, and skill, to do well. Generative AI is unprecedented because of how easily it can be used and the speed at which it can output, generating images from text prompts in a matter of seconds as opposed to hours. In a February 2023 article for The New Daily, RMIT University professor of digital communication Rob Cover was quoted as saying: “So we’re no longer looking at people needing any kind of professional skill at all, these are very much everyday people who are able to generate amazing deepfake images and video.” Just because Photoshop did not cause an informational crisis is no reason to say that Generative AI won’t. It is one more step along the technological chain, and it could prove one step too far.

What’s more, the ability of Generative AI to produce photorealistic imagery is constantly improving, and what is easily identifiable now as fake may not be in the near future; “Synthetic content is evolving at a rapid rate and the gap between authentic and fake content is becoming more difficult to decipher,” said Mounir Ibrahim of Truepic, a digital content analysis company.

Amidst all this there are calls for increased media literacy, putting the onus on the general public to distinguish fact from fiction. But Generative AI images can already fool at a glance; if the technology is allowed to reach the point where its output is indistinguishable from real photos even under closer examination, then we can hardly blame people for being fooled. For decades we have relied on photographic evidence as the high bar of truth; visuals speak to our sense of reality in a way text never can. To ask people to throw all that away, to never trust an image again unless it has been exhaustively fact-checked – it’s a hard ask. And even if we can, even if we can drill it into the brain of every child to never trust an image they see online, is that the kind of future we want to live in? The internet, once a place of sharing and communication, warped into a minefield of doubt and paranoia.

The social consequences of this kind of informational crisis are grave, the potential political consequences even more so. In an already fractured world, is it all worth the risk?

Conclusion

Our final point is not a harm in itself, but rather stresses the urgency of the situation.
We must act now, as the five harms listed above will only become worse with time. This is because Generative AI is constantly evolving. Until 2012, the computational power of AI doubled every two years. But in 2018, a report by OpenAI (creators of ChatGPT) showed that since 2012, the computational power of AI had doubled once every 3.4 months – a 300,000x increase in computational power between 2012 and 2018.
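As a rough sanity check on these figures (assuming a constant 3.4-month doubling period, as the OpenAI report describes), the arithmetic can be worked through in a few lines:

```python
import math

# OpenAI's 2018 report: compute for the largest AI training runs
# doubled roughly every 3.4 months after 2012.
doubling_period_months = 3.4

# How many doublings does a 300,000x increase require?
doublings = math.log2(300_000)               # about 18.2 doublings
months = doublings * doubling_period_months  # about 62 months

print(f"{doublings:.1f} doublings over roughly {months / 12:.1f} years")
```

Around eighteen doublings in a little over five years – consistent with the 2012–2018 window the report covers, and a pace far beyond the two-year doubling period that held before 2012.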

This exponential growth is largely thanks to the vast amounts of data available on the internet with which these AIs are trained, but this is only part of what’s driving their evolution. Generative AI programs like ChatGPT also train on user inputs, so that each time someone asks one to write an email or an essay, the program is learning from the process, improving for future responses. So the more popular Generative AI becomes, the more it’s used, the better it becomes, the more popular it becomes, the more it’s used, the better it becomes . . . so on and so forth. We could be standing at the top of a very slippery slope.

Advocates for Generative AI will often compare it to any other technological innovation of the past: the printing press, the computer, Photoshop, etc. These technologies were each disruptive for their time, they may have made some jobs obsolete, but in the end it all worked out for the better, right? Maybe. But Generative AI is none of those things. It may sit along the same line of technological progress, but it is in itself a different thing, a new thing. Generative AI’s potential benefits and its potential dangers should be judged on their own.

That previous technological progress has been to humanity’s benefit gives no reason to assume that all technological progress must be beneficial forever more. There is no law of the universe which dictates it so, no assurance that everything will turn out okay so long as we relentlessly pursue every possible technological avenue. There must be tipping points at which benefit turns to harm, points at which we must stop and reassess, and choose whether we want to go further or not.

This could be one of those points. We can either dive head-first onwards, or be cautious, consider all that we could gain, but also all that we could lose.

If nothing else, we hope you choose caution.

Join the Union

Recommended Reading

Blood in the Machine: The Origins of the Rebellion Against Big Tech – Book by Brian Merchant

  • The challenges Generative AI presents are unprecedented, however they are not without parallel.
  • Technology journalist and author, Brian Merchant, weaves the 19th Century struggle of the Luddite movement with contemporary concerns around Generative AI.
  • The specific technology and circumstances may be different, but the threat of industrialised dehumanisation remains the same.

Blood in the Machine – Blog

  • An extension of many of the ideas explored in the book – Brian Merchant continues to post regular, information-filled updates as he stands against Generative AI.

OpenAI (ChatGPT) Server Locations

  • The above only lists Stargate Site 1, which is currently under construction.
  • However, OpenAI is currently operating through hosting on other companies’ servers – such as Microsoft Azure or Amazon AWS.
  • You can see further information on the specific servers on Netify.

Microsoft Server Locations

Amazon Server Locations

Google Server Locations

Meta Server Locations

