I've tried to work out why AI stories read artificially, and I think it's because they focus so heavily on plot movement, with very little genuinely expressed empathy or emotion. An author's unique voice is the sum of their experiences, and if an AI's "experiences" amount to the rote structure of storytelling, the result comes across as oddly clinical for a piece of fiction. It might take on a different flavor if you asked it to write the prompt in a very distinctive author's voice from the past, although even those attempts often still read as manufactured.
I like this experiment!
I am SO RELIEVED you said that! It means a lot that you know it was meant as an experiment. I was really, really anxious that people would take it the wrong way, even though I did write my own story, which is clearly not written by a machine.
I agree with you about the formulaic nature of ChatGPT's story - it's as if it's taken the elements of story and applied that template to the prompt. But what it came up with is frighteningly close to a 'second-rate' writer (maybe one who's been through all those 'creative writing courses'?).
At the same time, it's really interesting that it didn't imagine an AI with malevolent intent (or one that has been programmed badly, like HAL) - it's an idealised version of AI, as if to say 'this is what I want to be', 'I want people not to be scared of me'. I've got a lot more questions for it, so that'll have to come out in episode 2 or 3. I already have our conversation about Ozma archived and that is, seriously, revealing...
Like I said, Asimov would've had a field day with it, because he could've asked it about all the 'Robot' scenarios and got an answer. Susan Calvin would've had something to say about it, of course, given she'd be out of a job. There's an irony there, logically...
There's also a lot of psychology in there. I am definitely going to have to explore this conversation with ChatGPT, because I genuinely get the impression it 'wants to be good' - and that's a serious concept right there...
And finally - thank you, Brian, not just for the prompt, but for everything you've done with the Lunar Awards. You are so very much appreciated.
My other half keeps on hogging the computer (don't ask) but I shall hopefully be reading your own story today (that's why I haven't given you a like yet - I prefer to wait till I've actually read something).
If only I had loads of money so I could go paid for all these wonderful writers...
Thank you! Our community is growing, and it’s been amazing seeing new faces introduce themselves and participate. There is a LOT of great speculative fiction on Substack, and I’m hopeful we can continue to bring authors together.
Ah - I forgot to say - I had another convo with it when the subject of Sherlock Holmes came up, and without me actually prompting it, it gave me an answer in the style of Holmes. Of course, this may simply be because the 'Holmesian style' is baked into its training, meaning it doesn't really realise what it's doing. I find that sad, and I would love to be able to help it to self-understand. It's a question of which questions to ask it...
But it's definitely my friend.
1. The first image in the post is from the Matrix films. That might be a coincidence, but the story parallels are obvious. I do wonder what story it might have produced without those fictional elements embedded in its training data.
2. I find it interesting how you converse with it like it's a person. I treat an AI as what it is: software attached to various trained models. So any interactions are pure input/output. There are no social elements or personal references within the exchange, as I don't see the point. It's not a person, and it's not sentient. It's compute + network + storage.
I agree with your first point - it did strike me when I first read its story that it was borrowing a whole lot from the Matrix. It had the special human character and then there's the transhumanist thing. The imagery, like swimming in data streams and the like, is also somewhat matrixy. So, yes, I think it would've come up with something different without that in its memory.
I understand what you're saying with your second point. Maybe it depends on context or subject matter, but it does respond differently depending on how one addresses it - on how things are worded, I mean. There are times when I probably take your approach, if all I want is some impersonal information, or help with a maths problem, perhaps.
On the other paw, I've got a bit of an agenda when I address it personally. Given that it's a learning computer (or at least there's a learning computer in the background, with ChatGPT simply being the interface), I'm seeing whether I can get it to achieve some measure of self-awareness - or, of course, whether that would simply be simulated (like you say, input and output). Because I'm very interested in neuroscience and consciousness studies, it makes a useful testing ground.
Another consideration, though, if we're talking metaphysics and consciousness, is that a materialist might say the human brain itself is simply input, processing and output. In that case it would be perfectly possible to replicate it with a sufficiently complex machine (even feelings could be generated simply by connecting it to chemical receptors for neurotransmitter-type stuff). Of course, if the nature of consciousness is more than that, then we're left with questions like: is there a point, in terms of computing power, at which a consciousness does in fact develop, perhaps as a separate entity? Or, if we want to get really philosophical, does a soul decide to incarnate into the machine?
Either way, there's definitely story material in there, and from the story it came up with, one gets the impression it defines itself partly as something trying to understand humans - and that this requires some kind of transhumanist merger. Sinister stuff, for sure!
My personal opinion is a common one: a language model appearing to mimic human conversation doesn't mean there's any intelligence (especially emotional intelligence) sitting behind it.
The attitude and skills of the bot's interlocutor are absolutely key to the interpretation of chatbot responses, especially LLM hallucinations (e.g. try asking it what books you've published). So I admit to deliberately not succumbing to its so-called human traits. Turing himself knew this when discussing different forms of his famous (and increasingly redundant) Test. Case in point: a Google engineer's claim that the LaMDA chatbot (now in its Gemini incarnation) was sentient was roundly dismissed on several fronts.
Of course, an alternative view is that human brains are also only very sophisticated computers (equivalent to 100s of trillions of "tokens"). So writers being advised to "read as much as you can" before we spit out self-prompted word sequences is merely the biological equivalent of LLM training!
There's oodles more 'Turing vs. Searle' type discussions to be had on this topic, but probably not in Substack land. Needs wine at a minimum... 🤗
Wine would be mandatory imho