AI is Stoopid
Talk to the Donald

About me

ChatGPT and other LLMs work by selecting the next most likely word (with a sprinkling of randomness) based on what their neural networks were trained on. This app works in a somewhat similar manner, but instead of picking the next most likely word it picks the next likely character, one character at a time, also with a sprinkling of randomness.
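
In code, "most likely next thing, with a sprinkling of randomness" boils down to a weighted random draw. The probabilities below are made up purely for illustration; neither ChatGPT nor this app exposes anything this simple:

    // Draw one item from a table of { item: probability } weights.
    function sampleNext(probs) {
      let r = Math.random();
      for (const [item, p] of Object.entries(probs)) {
        if ((r -= p) <= 0) return item;
      }
      return Object.keys(probs)[0]; // guard against floating-point rounding
    }

    sampleNext({ e: 0.5, a: 0.3, o: 0.2 }); // usually "e", sometimes "a" or "o"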

The big difference here is that this app doesn't use a deep neural network.

When you ask it a question it searches through all the posts from Twitter and Truth Social from 2010 through May 27th, 2024, looking for any post that might be relevant (and probably some that are not). It then plays a game of Markov chains as described by Brian Hayes in his November 1983 article in Scientific American, and later in this article in American Scientist.
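
I won't pretend this sketch is how the search actually works (the real thing presumably leans on Rampart's database), but the idea is roughly this, with the post list, word matching, and threshold all invented for illustration:

    // Hypothetical relevance filter: keep any post that shares at least a
    // couple of words with the question. Crude on purpose; the real app
    // surely does something smarter.
    function findRelevantPosts(question, posts) {
      const qWords = new Set(question.toLowerCase().match(/[a-z']+/g) || []);
      return posts.filter(post => {
        const pWords = post.toLowerCase().match(/[a-z']+/g) || [];
        return pWords.filter(w => qWords.has(w)).length >= 2;
      });
    }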

Essentially it selects and prints a new character from among the most probable characters to follow a given string of characters it has seen in the input text. It then tacks this character onto the end of the string, removes the first character of the string, and starts over.
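
Here is a bare-bones sketch of that loop, assuming the relevant posts have been joined into one big corpus string. The function and variable names are mine, and picking uniformly from the list of observed followers is just one way to weight by frequency:

    // Character-level Markov chain generator in the spirit described above.
    // matchLen is the length of the string being matched (see the
    // incoherence-level recipe below).
    function generate(corpus, seed, matchLen, outLen) {
      let pattern = seed.slice(-matchLen);
      let out = seed;
      for (let i = 0; i < outLen; i++) {
        // Gather every character that follows the current pattern in the corpus.
        const followers = [];
        let idx = corpus.indexOf(pattern);
        while (idx !== -1 && idx + pattern.length < corpus.length) {
          followers.push(corpus[idx + pattern.length]);
          idx = corpus.indexOf(pattern, idx + 1);
        }
        if (followers.length === 0) break; // dead end: the pattern never recurs
        // Frequent followers appear more often in the list, so a uniform pick
        // is the "sprinkling of randomness", weighted by observed frequency.
        const next = followers[Math.floor(Math.random() * followers.length)];
        out += next;
        // Tack the new character onto the end, drop the first, start over.
        pattern = pattern.slice(1) + next;
      }
      return out;
    }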

The "incoherence/covfefe level" adjusts the length of the string that is sought to produce each new character. The recipe is 18-incoherence level. Thus if the level is set to 11 the string will be of length 7. If the level is 1 then the string sought for each new character will be 17 characters long. The longer the string the fewer matches and therefore options there will be to choose from for the next character. This means longer strings will result in output that is closer to the original tweet structure.

The entirety of this site was created in, and is powered by, software from Rampart.dev. Rampart is an expedient and efficient full-stack JS and RDBMS environment.