A Twitter thread by Janelle Shane.

Struggling with crafting the first sentence of your novel?

Be comforted by the fact that AI is struggling even more.

https://aiweirdness.com/post/189170306297/how-to-begin-a-novel …

Last year I trained torch-rnn on 10,096 unique first lines of novels, all contributed by fans of AI Weirdness.

The neural net struggled to make sense for longer than a few words at a time, although it did produce some gems.

https://aiweirdness.com/post/168051907512/the-first-line-of-a-novel-by-an-improved-neural …

In just a year, neural nets have become powerful enough to produce consistently grammatical text. Finetuned on the dataset of novel first lines, GPT-2 generates readable sentences.
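(For the curious: here's a minimal sketch of what that finetuning can look like, using Max Woolf's gpt-2-simple library. The library choice, filename, and step count are illustrative assumptions on my part, not a record of the exact setup behind this thread.)

```python
# Minimal GPT-2 finetuning sketch using gpt-2-simple (https://github.com/minimaxir/gpt-2-simple).
# Assumes "first_lines.txt" holds one novel first line per line; filename and step count are illustrative.
import os
import gpt_2_simple as gpt2

model_name = "124M"  # smallest public GPT-2 model
if not os.path.isdir(os.path.join("models", model_name)):
    gpt2.download_gpt2(model_name=model_name)  # fetch the pretrained weights once

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="first_lines.txt",  # the crowd-sourced first lines, one per line
              model_name=model_name,
              steps=1000,                 # illustrative; tune to taste
              run_name="first_lines")

# Sample a few unprompted first lines from the finetuned model.
gpt2.generate(sess, run_name="first_lines", temperature=0.8, nsamples=5)
```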

They don't necessarily make SENSE, but the rhythm is spot on.

Ah, you refer to my FIRST attempt last year, which is best forgotten
https://aiweirdness.com/post/167049313837/a-neural-network-tries-writing-the-first-sentence …

Today's GPT-2 neural net gets the feel and rhythm of a story's first line. It also inserts surrealist narrative hooks, though most likely it's just trying to be mundane and making mistakes because it doesn't understand what it's saying.

Sometimes the neural net's uncomprehending surrealism ventures out of "interesting narrative hook" territory and into "failed simile" territory

Ah yes, I see you have encountered the bonus material. Fun fact: GPT-2 can’t quite believe that the sentences are all supposed to be independent, so it can be induced to follow certain themes

The disadvantage of a neural net that can string together a grammatical sentence is that its writing can now be terrible in a more human sense, rather than merely incomprehensible

The neural net is particularly prone to a brand of awfulness that seems to stem from wordy Victorian prose.

Grammatically impeccable, utterly unreadable

The neural net was originally trained on internet writing, so it doesn't quite believe that the first lines are independent.

The happy result is that I can nudge it in certain directions with a Harry Potter prompt.
(3901 more examples exported here https://github.com/janelleshane/novel-first-lines-dataset/blob/master/iteration150_temperature0p8_potter.txt … )

My Little Pony is one of the most persistent genres: prompt the neural net with "Twilight Sparkle was out of cupcakes." and it will respond with dozens more MLP story openings.

They do end up being a bit grimmer though. (4284 more examples here https://github.com/janelleshane/novel-first-lines-dataset/blob/master/iteration150_temperature0p8_ponies.txt … )
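(Again for the curious: a rough sketch of that kind of prompted sampling, continuing the gpt-2-simple example above. The run name and sample count are placeholders; temperature 0.8 comes from the export filenames.)

```python
# Prompted sampling sketch with gpt-2-simple, reusing the finetuned checkpoint from the earlier sketch.
# The prompt text is from the thread; run_name and nsamples are placeholders.
import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name="first_lines")  # reload the finetuned checkpoint

openings = gpt2.generate(sess,
                         run_name="first_lines",
                         prefix="Twilight Sparkle was out of cupcakes.",
                         temperature=0.8,      # matches the temperature0p8 export filenames
                         nsamples=10,
                         return_as_list=True)
for line in openings:
    print(line)
```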

You would think that prompting the neural net with "It is a truth universally acknowledged" would send it to new heights of Victorian wordiness, but instead it mostly gets scarier.

(2822 more examples here https://github.com/janelleshane/novel-first-lines-dataset/blob/master/iteration150_temperature0p8_victorian.txt …)

I tried leaning into the scariness by prompting with "It is a terrible, terrible idea, even if entirely accidental, to talk to one of the Ancient Ones." and oh god did it ever work

(3574 more examples here https://github.com/janelleshane/novel-first-lines-dataset/blob/master/iteration150_temperature0p8_ancient.txt …)

There are so many gems in the raw neural net output here https://github.com/janelleshane/novel-first-lines-dataset …

looking forward to seeing what others discover in there