
DALL-E: “Intelligence: Not a spark of sentience but a symphony of words”

This Lightning Talk on Emergent Intelligence was given at the Awayday of the Leverhulme Centre for the Future of Intelligence, 19th to 21st January 2024. It was prepared by GPT4 with a prompt from me. It is included here as a demonstration of GPT4’s capacity to prepare focused short talks in early 2024. Do note that it is not a recommended practice to have GPT4 write your talks for you (even lightning ones), so please just treat this as ‘proof of concept’ or, rather, ‘disproof of concept’. See my comments below.

 Prompter (JR):

I have been asked to give a 1 to 2 minute lightning talk to the Cambridge Centre for the Future of Intelligence ‘away day’ today. This is what I prepared previously.

“People may have been wondering why I originally proposed CETI (or SETI if you prefer) as a possible topic for this meeting. The reason is that, from recent reveals about Generative Artificial Intelligence, we now know that language is more important than sentience in how they work. Hence, in looking ‘out there’ it is languages rather than aliens that are most likely to be in evidence. If intelligence is an emergent property of language, as many now think, then we have not found much else in that way on earth. The best place to find convergent evolution may well be out there.

However, I’m just giving you this now for context. I am very much aware that this will be my only input to the whole meeting of about 80 people, and I want to have an impact and say something that will open their minds. However, the most important point that I need to get across is that, as far as generative AI is concerned, intelligence is an emergent property of language (I know you will know what I mean). And this means it has the potential to understand concepts, just as humans spread their ideas through concepts embedded in their language. There are several sorts of skeptic in the audience: (1) those who have an obsession that intelligence requires sentience, (2) those who believe that Large Language Models are simply stochastic parrots, (3) those who believe that LLMs are simply statistical devices, and (4) those who believe that when AI appears creative it is simply hallucinating. I don’t expect any of these groups to change their minds (they are in denial); however, I would like to influence the rest of the audience to at least take on my idea as a thought experiment. Can you please prepare me a draft and we can work from there.”

Note

You will see that I have not described either ‘emergent property’ or ‘parallel evolution’ in this prompt. This is because I was trusting that these would have been adequately represented in the training data that formed the current GPT4 version.

GPT4:

“Absolutely, I’d be happy to help you craft a compelling draft for your lightning talk. Your aim is to challenge the audience’s preconceptions about intelligence, particularly in the context of generative AI, and introduce the idea that intelligence might be an emergent property of language. Let’s focus on making your points clear, concise, and thought-provoking. Here’s a draft for you:

“Title: ‘Intelligence: Not a Spark of Sentience, but a Symphony of Words.’

Good [morning/afternoon], everyone. Today, I’d like to propose a thought experiment that challenges our traditional understanding of intelligence in the context of generative artificial intelligence.

Imagine intelligence not as a byproduct of sentience but as an emergent property of language. This idea diverges from two common beliefs: first, that sentience is essential for intelligence, and second, that large language models (LLMs) are merely sophisticated statistical tools without real understanding.

Advancements in AI, especially in language models like GPT-4, hint at a different form of intelligence. These models, which process and generate human-like text, raise a crucial question: Could intelligence emerge from complex language processing?

Think about how humans use language to convey ideas and concepts, shaping our collective intelligence. If AI can similarly process and create language, might it display a form of emergent intelligence?

This perspective invites skepticism. Some argue that without sentience, intelligence is unattainable, while others dismiss LLMs as complex pattern recognizers. Yet, I urge you to consider the possibility that we are on the cusp of recognizing a new form of intelligence—one born from the intricate dance of language and information.

The takeaway I’d like you to ponder is this: Generative AI might be showing us the early stages of a novel intelligence form, emerging not from sentience but from the rich complexity of language processing. This shifts our understanding of intelligence, urging us to look beyond our human-centric views.

Thank you, and I look forward to any discussions this might spark.”

Comments on GPT4’s ‘Emergent Intelligence’

What can we learn from GPT4’s suggested lightning talk? Well, first we notice that it is just ‘off’. But why? I suppose the first reason is that it appears trite. It fails to get to the essence of the prompt while at the same time providing a very superficial summary of the argument. There are several possible reasons for this:

  1. It was simply not a very good prompt. Very likely so, but I did try many others and the general issue of triteness remained. Furthermore, I am not the only one who has found that this is a general issue when asking GenAIs to prepare both talks and summaries. At least this was the case when I wrote this; maybe by the time you read it, things will have changed.
  2. It didn’t really understand what was being asked for. But then the obvious question is: why might that be? Is it because the training data didn’t include anything about this? That is unlikely, given that it must be receiving a number of prompts on this issue from both sentience skeptics and philosophers exploring devil’s advocate arguments. Also, were that the case, it would contradict the basic argument that GenAI can use its deep learning capabilities to track emergent properties even where they don’t appear superficially in the prompt.
  3. Another possibility was that it simply wasn’t looking into the matter deeply enough. Presumably people ask for lightning talks for many different reasons, and this one may have been created with a different audience in mind. But the audience was described in the prompt (perhaps not well enough).
  4. But perhaps most likely, given the ubiquity of GenAI manipulations designed to give prompters something they simply want to hear (see, for example, Sycophants and Liars, just published), the GPT4 front-end editing process has simply tried to make the response conform to expectations.