Everything you need to know about GPT-3

It is finally time to talk about GPT-3 in more detail.

We’ve been talking about language models here on Voice Crumb and about how this market is currently moving. OpenAI’s GPT-3 is arguably one of the most talked about language models out there and it rightfully deserves some more attention.

In this article, we’ll discuss:

  • What is GPT-3?
  • The differences between the four models that fall under the GPT-3 umbrella
  • What can GPT-3 do?
  • Is GPT-4 coming soon?

Let’s get into it.

What is GPT-3?

As mentioned in our introductory article about language models, GPT stands for Generative Pre-trained Transformer. In other words, it is an OpenAI language model able to produce human-like text. The “3” indicates that this is the third generation of the model that OpenAI has released.

The first-ever GPT came out in 2018, relying on 117 million parameters. Exactly one year later, a roughly ten-times-larger model, GPT-2, was released with 1.5 billion parameters. When GPT-3 came out in mid-2020 with 175 billion parameters, it set a completely new standard for language models, not only within OpenAI but for all players in the AI space.

Most texts written by GPT-3 are difficult to tell apart from human-written ones. So much so that The Guardian published an opinion piece written by the AI.

But what exactly does working with GPT-3 look like? In the next section, we explore the different models that fall under the GPT-3 umbrella and break down how the magic happens.

Different GPT-3s for different jobs: the models

When we talk about GPT-3, we generally think of it as a language model that in some mysterious way is able to write up really natural-sounding texts.

In truth, GPT-3 is more of an umbrella term to describe four different models that OpenAI offers: Davinci, Curie, Babbage, and Ada. Each of these has a different level of power, different expertise and comes at a different price point.

Davinci, or, as its model name goes, text-davinci-002, is the most advanced and capable of all the GPT-3 models. It’s essentially the reason OpenAI is celebrated and what experts have been most impressed with. Davinci can do everything the other models do, while needing less contextual information. The downside? It’s significantly more expensive than the others.

Ada, or text-ada-001, is the fastest and cheapest of all, but it’s considered capable of only very simple tasks. Between those two ends of the spectrum lie Curie (text-curie-001) and Babbage (text-babbage-001).

You might be wondering, “why did they create so many models?” As OpenAI itself explains, different use cases benefit from different models. So, if you’re thinking about working with GPT-3, it’s worth testing how each one performs on your projects using GPT-3’s Playground.
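Beyond the Playground, the same comparison can be scripted. The sketch below is a minimal, illustrative example, assuming the 2022-era `openai` Python package (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the helper names are our own, not part of the library.

```python
import os

# The four GPT-3 models, from most to least capable (and expensive).
MODELS = [
    "text-davinci-002",
    "text-curie-001",
    "text-babbage-001",
    "text-ada-001",
]


def compare_models(prompt, max_tokens=64):
    """Send the same prompt to every GPT-3 model and collect the answers."""
    import openai  # imported lazily so the rest of the file works without it

    results = {}
    for model in MODELS:
        response = openai.Completion.create(
            model=model, prompt=prompt, max_tokens=max_tokens
        )
        results[model] = response["choices"][0]["text"].strip()
    return results


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    for model, text in compare_models("Say hello in Italian.").items():
        print(f"{model}: {text}")
```

Running the same prompt through all four models side by side makes the quality/price trade-off tangible before you commit to one.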

Screenshot of the GPT-3 Playground on the OpenAI platform.

Keep in mind that OpenAI’s Playground is only free for a limited time and for a limited number of tokens (i.e. the chunks of text, roughly four characters each, that the AI reads and generates). Make sure you use all your free credits before they expire to find out whether GPT-3 works for you and which model suits your needs.
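If you want a back-of-the-envelope sense of how far your free credits will go, you can use the commonly cited rule of thumb of about four characters of English text per token. This is only an approximation, not the model’s real tokenizer:

```python
# Rough token estimate using the ~4-characters-per-token rule of thumb
# for English text. The exact count comes from the model's own tokenizer;
# this heuristic is only good for ballpark budgeting.
def estimate_tokens(text: str) -> int:
    return max(1, round(len(text) / 4))


print(estimate_tokens("Write a recipe from a list of ingredients."))
```

Multiply the estimate by the per-token price of the model you’re eyeing to get a ballpark cost per request.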

What can GPT-3 do?

We’ve discussed the history behind GPT-3 and what it was created for, but let’s find out what GPT-3 is actually able to do.

OpenAI offers many examples of possible applications for GPT-3. Some are more similar to things we’ve seen before, like grammar correction or translation, and others are more at the OMG-this-sounds-like-a-human end of the spectrum.

Some of the most fun and interesting examples include:

  • translating a difficult text into one that a 2nd grader would be able to understand;
  • turning a product description into ad copy;
  • finding names for a new product you have in mind;
  • creating a 3-sentence horror story based on a topic of your choice;
  • finding a color to describe the mood of something you wrote;
  • creating a recipe from a list of ingredients.
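To make the list above concrete, here is a minimal sketch of how two of these tasks might be phrased as plain-text prompts for a GPT-3 completion model. The wording is illustrative, loosely modeled on OpenAI’s public examples, not the exact prompts OpenAI uses:

```python
# Illustrative prompt builders for two of the example tasks above.
# GPT-3's completion models take plain text, so the "programming"
# is mostly in how you phrase the instruction.

def second_grader_prompt(text: str) -> str:
    """Ask the model to simplify a difficult text for a young reader."""
    return f"Summarize this for a second-grade student:\n\n{text}"


def recipe_prompt(ingredients: list[str]) -> str:
    """Ask the model to invent a recipe from a list of ingredients."""
    joined = "\n".join(f"- {item}" for item in ingredients)
    return f"Write a recipe based on these ingredients:\n\n{joined}\n\nRecipe:"


print(second_grader_prompt("Jupiter is the fifth planet from the Sun."))
print(recipe_prompt(["eggs", "flour", "spinach"]))
```

Either string would be passed as the `prompt` of a completion request; the trailing "Recipe:" in the second one nudges the model to start writing the recipe immediately.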

As these examples illustrate, the possibilities are virtually endless and some have yet to be discovered. This is also evidenced by the fact that many of the applications OpenAI includes in its Examples section were created by the community or inspired by prompts that circulated within it.

In terms of positioning, GPT-3 and its future evolutions seem to be taking on the role of enabler. More and more startups are popping up to develop GPT-3-based products that directly cater to the needs companies and individuals have today. One standout has recently raised $125M at a $1.5B valuation, and the list of existing commercial applications keeps expanding with names like Algolia, Copysmith AI, and Quickchat.

This all points to the fact that there is still much to be discovered about GPT-3, although, as we’ve discussed in our last article about the state of language models, it is not considered the best AI available on the market in 2022.

Does this mean that OpenAI has been ousted from the throne forever? Let’s find out.

Is GPT-4 coming soon?

TL;DR. We don’t know. And nobody really knows right now, except for OpenAI of course.

What we do know is that it is coming, and supposedly sooner rather than later. There have also been some hints about what the new version of GPT will be like, and the rumors are surprising, to say the least.

In a break from OpenAI’s approach so far, its CEO Sam Altman has said that building GPT-4 will not be focused on adding more parameters. In other words, it will probably be bigger than GPT-3, but not by much.

This is all because things have recently changed in the AI game. One of OpenAI’s competitors, DeepMind, has shown with its model Chinchilla that language models can be improved without relying on an ever-increasing number of parameters. Instead, it turns out, working on other features can be equally, if not more, rewarding. Hence Altman’s declared intention to shift the focus to other aspects when creating GPT-4, working on things like data, algorithms, and parameterization.

While this might not seem like much, it signals a big shift in the space, and it will be interesting to see how the change plays out and whether it can bring language models to new heights.

In the meantime, if you’re still struggling to understand what language models are all about and how this market is currently progressing, you can catch up here.

Still hungry for knowledge? Follow us on LinkedIn for weekly updates on the world of conversational AI, or check out our article about language models and what's been going on in the AI industry.
