...Be patient...the story will appear...meanwhile you can read explanations about all this...
While playing manually with the GPT-2 text generator, I noticed that after a while (about 10 loops), whatever the starting point was, the model often drifts toward discussions of sports, politics, war, and murder.
One goal of the experiment is to run this process thousands of times, generating short texts (20 sentences each), to show that these biases exist.
A secondary goal is to point out that these tools are not harmless. In this first version, nothing is filtered. The system/model can create texts and attribute them to people who never said them. This is a problem.
A third goal could be to measure what proportion of the texts are convincing, logical, original, or funny.
I also had purely technical questions: how difficult, long, and painful would it be to implement, automate, and publish such a system? Interesting.
I chose to provide a live feed rather than publish the full generated texts. Maybe I'll release the funniest ones.
The reason is that the generated content can sound real. It can involve real names and situations, but the output as a whole is always a fakery: a fakery built mathematically and statistically from existing texts.
How does it work?
- The system works on its own: outputs are not modified.
- The entry point for the next sentence is the previous one, so the GPT-2 process can run in a loop: each generated sentence becomes the prompt for the next.
- Stories are limited to 20 entries; then the context changes and a new story starts. Opening sentences are randomly picked from a selection of http://americanbookreview.org/100BestLines.asp.
- The number of readers connected to the live feed is limited. If you encounter an error, please retry later.
- You need to be patient. You do not need to refresh anything.
- The live feed will appear automatically, in red, in the section below this line, and it refreshes every 30 to 50 seconds.
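The loop described above can be sketched as follows. This is a minimal illustration, not the actual system: the GPT-2 call is stubbed out behind a hypothetical `generate_sentence` function (in practice it could be backed by, e.g., the `transformers` library), and `OPENING_LINES` stands in for the "100 Best Lines" selection.

```python
import random

# Hypothetical stand-in for the real GPT-2 call. A real version would
# feed the prompt to the model and return the generated continuation.
def generate_sentence(prompt):
    return f"(generated continuation of: {prompt[:30]}...)"

# Placeholder for opening sentences; the real selection comes from
# http://americanbookreview.org/100BestLines.asp
OPENING_LINES = [
    "Call me Ishmael.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
]

def generate_story(max_entries=20):
    """Chain generations: each output becomes the next entry point."""
    sentence = random.choice(OPENING_LINES)  # random introduction sentence
    story = [sentence]
    for _ in range(max_entries - 1):
        sentence = generate_sentence(sentence)  # previous output is the new prompt
        story.append(sentence)
    return story  # 20 entries, then the context resets and a new story starts

story = generate_story()
```

Running this thousands of times, as the experiment does, is just a matter of wrapping `generate_story` in an outer loop and pushing each entry to the live feed.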