Inside the Black Box
Millions of websites are used to train AI's biggest chatbots
To look inside this black box, we analyzed Google's C4 data set, a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google's T5 and Facebook's LLaMA. (OpenAI does not disclose what data sets it uses to train the models behind its popular chatbot, ChatGPT.)
The Post worked with researchers at the Allen Institute for AI on this investigation and categorized the websites using data from Similarweb, a web analytics company. About a third of the websites could not be categorized, mostly because they no longer appear on the internet. Those are not shown.
[Interactive chart: top sites in each category]
We then ranked the remaining 10 million websites by how many "tokens" from each appeared in the data set. Tokens are small bits of text, typically a word or phrase, that language models use to process raw text.
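The ranking described above can be sketched in a few lines. This is a simplified illustration, not the analysis's actual code: real LLM pipelines use subword tokenizers (such as SentencePiece), and the site names and texts below are invented for the example. A naive word-level tokenizer is enough to show how sites would be counted and ranked by token volume.

```python
import re

def count_tokens(text: str) -> int:
    """Naive word-level tokenizer: counts lowercase runs of letters.
    Real tokenizers split text into subword units, but the ranking
    logic is the same: more text yields more tokens."""
    return len(re.findall(r"[a-z']+", text.lower()))

# Hypothetical site contents, standing in for crawled web pages.
pages = {
    "example.com": "Large language models learn from web text.",
    "sample.org": "Web text, web text, and more web text.",
}

# Total tokens per site, then sites ranked by token count, descending.
totals = {site: count_tokens(text) for site, text in pages.items()}
ranked = sorted(totals, key=totals.get, reverse=True)
```

Here `ranked` lists the sites contributing the most tokens first, which is how the analysis ordered the 10 million categorized websites.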
Read more at The Washington Post.