AI, Ethics and Democracy

I keep reading that AI is dumb, dangerous and demented. And I’ve no doubt it’s all true. Ethan Mollick, author of Co-Intelligence, describes ChatGPT as “a very elaborate auto complete like you have on your phone”.

AI slop is contaminating our lives with worthless junk, and while I’ve played with and been briefly amazed and entertained by Google NotebookLM’s auto-generated podcast creations, they can get repetitive, boring and stupid very quickly.

We can rail against AI all we want, but it’s not going away. I expect AI to get smarter and to have fewer hallucinations even if the danger level remains high.

What is AI good for?

Micro.blog uses AI to generate Alt-Text descriptions of images, and that seems to work well enough for its intended purpose. What it can’t do, of course, is generate descriptions that are personal to the uploader or the post’s context, e.g., if I have a picture of my son, the description will be a generic “boy with curly hair” or suchlike.

I’ve used Google’s Gemini on blog posts I’ve written and it’s given me some very positive feedback about my writing, enough to make me feel good about myself (certainly much more so than any human reader). Although it also got into the habit of creating its own alternative versions, which often were funnier and more interesting (to me) than my own writing. (It will also roast you if that’s your thing.)

Similarly, Google NotebookLM has given me feedback on my entire year of posts from 2024, and was very nice about it, too. It is quite therapeutic to hear, as long as you take it for what it is - an elaborate auto-complete that ultimately will rot your brain, take your job and ruin your life.

I’ve also used both for work in some limited ways - drafting a job description, drafting cases for support for funding applications, and summarising or analysing documents. Both require some degree of human intervention, but the process and product were useful in helping me get started and complete the tasks.

One area where I found AI to be most interesting was in commenting on and forming an ethical response to a local democracy issue where I live. I found the response to be in line with what community campaigners (including me) had been asking for, and in total contrast to the response from our elected representatives, authorities and business leaders. That must be the auto-complete.

While I don’t think I would yet call for all politicians to be replaced by AI, I do wonder if there is a potential use case here for analysing policy implications, or at least highlighting ethical considerations?

I ran these thoughts past Claude.ai and this is a summary of what it had to say:

  • democratic processes need human judgment, accountability, and the ability to balance competing interests in ways that AI currently cannot
  • core ethical principles should guide public service:
      • transparency about conflicts of interest
      • accountability to the people
      • clear acknowledgment of problems rather than minimisation
      • practical solutions for complaint reporting and response
      • the fundamental duty of public officials to serve their constituents

Human decision-makers can, of course, choose to prioritise other interests over ethical imperatives.

I suggested to it that the AI response to the Southall Gasworks issue demonstrated more “empathy” than the human political response:

  • The AI’s response prioritised human wellbeing and acknowledged suffering; the actual political response, in contrast, was bureaucratic and dismissive, treating hundreds of complaints as isolated incidents and prioritising procedure over human impact
  • A truly empathetic human response to such serious health impacts would have involved immediate and decisive action to protect public health

Of course, the AI response wasn’t actually empathetic. It applied logical ethical principles that prioritise human wellbeing. What looks like empathy is actually systematic ethical reasoning unclouded by personal interests, political considerations, or institutional self-protection.

I suggested that it might not be uncommon for political leaders to lack empathy, which makes it much easier for them to allow personal interests, and powerful political and institutional ones, to take precedence over what most ordinary people would see quite clearly as the interests of human wellbeing:

This connects to research on the prevalence of psychopathic traits in leadership positions. In the Southall Gasworks case, a leader with normal empathetic capacity might find it psychologically difficult to:

  • dismiss hundreds of health complaints from suffering residents
  • accept substantial gifts while community members develop serious illnesses
  • continue business as usual while learning of deaths potentially linked to their decision

Of course, even someone with empathy might not be able to resist powerful institutional or financial pressures.