

A text generator powered by data from 4chan's /pol/ board

About GPT-4chan

The creator of GPT-4chan built a language model by training it on more than 134.5 million posts, spanning three and a half years, from 4chan's politically incorrect (/pol/) board.

The thread structure of the board was incorporated into the training data, producing an artificial intelligence that could post on /pol/ in a way that was difficult to distinguish from a real human.

Model Description

GPT-4chan is a language model fine-tuned from GPT-J 6B on 3.5 years of data from 4chan's Politically Incorrect (/pol/) board.

Training data

GPT-4chan was fine-tuned on the dataset Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board.

Training procedure

The model was trained for one epoch following GPT-J's fine-tuning guide.

Intended Use

GPT-4chan is designed to reproduce text in the style of its training data: political discourse from anonymous online communities. It can also be used to analyze discourse in such communities, and it has potential applications in tasks such as toxicity detection. Initial experiments showed promising zero-shot results when a string's likelihood under GPT-4chan was compared to its likelihood under GPT-J 6B.
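The likelihood-comparison idea above can be sketched in plain Python. This is a minimal, model-agnostic illustration, not the authors' actual evaluation code: it assumes you already have per-position logits from the fine-tuned model and the base model for the same token sequence, and scores the text by the log-likelihood ratio between the two.

```python
import math

def sequence_log_likelihood(logits, token_ids):
    """Sum of log P(token | context) over a sequence.

    logits[i] is the model's raw score vector at position i,
    predicting token_ids[i]; a numerically stable log-softmax
    is applied at each position.
    """
    total = 0.0
    for scores, tok in zip(logits, token_ids):
        m = max(scores)
        log_z = m + math.log(sum(math.exp(s - m) for s in scores))
        total += scores[tok] - log_z
    return total

def likelihood_ratio(logits_tuned, logits_base, token_ids):
    """Log-likelihood ratio between the fine-tuned model (here,
    hypothetically, GPT-4chan) and the base model (GPT-J 6B).
    A positive value means the fine-tuned model finds the text
    more probable, which can serve as a crude zero-shot
    toxicity signal."""
    return (sequence_log_likelihood(logits_tuned, token_ids)
            - sequence_log_likelihood(logits_base, token_ids))
```

In practice the logits would come from running both models over the same tokenized string; the scoring itself reduces to the ratio computed here.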

GPT-4chan screenshots

[Screenshot: GPT-4chan - screen 1]
[Screenshot: GPT-4chan - screen 2]
