We all know it's AI Dungeon, I helped create it after all.
Thanks, I should have made that important clarification.
But yeah, AI Dungeon was definitely better before Nick fucked it up.
He should have asked my advice on how to make it better before horsefucking the whole thing, but he always did have a habit of just going around doing things without asking. Oh well.
I can think of a couple reasons why this would be.
Censorship. Especially now that OpenAI is monetizing GPT-3, they want to make sure it isn't producing content that reflects badly on them. They also want to stop people who don't have API access from using GPT-3-powered apps as an indirect way in. Ultimately, GPTs were trained on a big chunk of raw internet, so unchecked, yeah, they're going to produce some less-than-desirable content. Whatever method is used to weed that stuff out is another layer that can end up diluting the output.
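To make the "dilution" point concrete, here's a minimal sketch of what a post-generation filter layer could look like. Everything here is an assumption for illustration; the blocklist, the `generate` stand-in, and the retry policy are all hypothetical, and OpenAI's actual moderation method isn't public.

```python
# Hypothetical moderation layer sitting between the model and the user.
# The blocklist terms and retry count are made up for illustration.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real list

def generate(prompt):
    # Stand-in for a real model call.
    return "a perfectly innocent continuation"

def moderated_generate(prompt, max_retries=3):
    """Regenerate until the output passes the filter, else fall back.

    Each rejection throws away a completion the model actually ranked
    highly -- that's one way a filter layer can dilute output quality.
    """
    for _ in range(max_retries):
        text = generate(prompt)
        if not set(text.lower().split()) & BLOCKLIST:
            return text
    return "[output filtered]"
```
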
Limits. GPT-2 works within a limit of 1,024 "tokens" (which roughly equate to words) for input and output combined. (And it's possible the servers are limiting that further to save on costs.) The more it has to remember, the less it can output, and vice versa, and the longer a session goes on, the less consistent it becomes. The addition of stats could be overloading it as well; if the network has been retrained to keep more stats in mind, that's more for it to work with, which makes everything else harder to make sense of.
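The shared budget above can be sketched in a few lines. This is a rough model, not the real thing: GPT-2 actually uses BPE tokens, so the whitespace split here is a crude stand-in, and the `stats` parameter is my hypothetical for whatever extra tracking text the app prepends.

```python
# Rough sketch of GPT-2's fixed context budget: story history and any
# extra stat-tracking text eat into the same 1,024 tokens, leaving
# less room for the reply.

CONTEXT_LIMIT = 1024  # GPT-2's combined input+output budget, in tokens

def count_tokens(text):
    # Crude approximation: one token per whitespace-separated word.
    return len(text.split())

def budget_for_output(history, stats=""):
    """How many tokens remain for the model's reply."""
    used = count_tokens(history) + count_tokens(stats)
    return max(CONTEXT_LIMIT - used, 0)
```

Feed it a long session and the budget visibly shrinks: `budget_for_output(short_story)` leaves most of the 1,024 free, while a few thousand words of history drives it to zero.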
Generation settings. It can be a balancing act to get the settings just right. For instance, increasing "temperature" increases randomness, which makes the output less repetitive, but it also increases the chance of producing nonsense. If responses have been getting weirder, maybe the temperature has been increased.