Language supermodel: How GPT-3 is quietly ushering in the A.I. revolution (Digital Trends)

OpenAI’s GPT-2 text-generating algorithm was once considered too dangerous to release. Then it got released — and the world kept on turning.

In retrospect, the comparatively small GPT-2 language model (a puny 1.5 billion parameters) looks paltry next to its sequel, GPT-3, which boasts a massive 175 billion parameters, was trained on 45 TB of text data, and cost a reported $12 million (at least) to build.

“Our perspective, and our take back then, was to have a staged release, which was like, initially, you release the smaller model and you wait and see what happens,” Sandhini Agarwal, an A.I. policy researcher for OpenAI, told Digital Trends. “If things look good, then you release the next size of model. The reason we took that approach is because this is, honestly, [not just uncharted waters for us, but it’s also] uncharted waters for the entire world.”

Jump forward to the present day, nine months after GPT-3’s release last summer, and it’s powering upward of 300 applications while generating a massive 4.5 billion words per day. Seeded with only the first few sentences of a document, it can generate a seemingly endless continuation in the same style — even including fictitious quotes.
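To illustrate the prompt-and-continue pattern those applications rely on, here is a minimal sketch of how a developer might call GPT-3 through OpenAI’s hosted API using the openai Python client of that era; the engine name, sampling parameters, and environment variable here are illustrative assumptions, not details from the article.

```python
import os
import openai  # pip install openai

# Assumes an API key is supplied via the environment (illustrative name).
openai.api_key = os.environ["OPENAI_API_KEY"]

# Seed the model with the opening sentences of a document...
prompt = (
    "OpenAI's GPT-2 text-generating algorithm was once considered too "
    "dangerous to release. Then it got released, and the world kept on turning."
)

# ...and ask GPT-3 to continue in the same style.
response = openai.Completion.create(
    engine="davinci",   # illustrative engine name
    prompt=prompt,
    max_tokens=100,     # length of the generated continuation
    temperature=0.7,    # higher values yield more varied text
)

print(response.choices[0].text)
```

Each call returns only a bounded continuation, so applications that want “seemingly endless” text typically feed the output back in as the next prompt.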
