
ChatGPT is everywhere, but here's where it all starts


OpenAI's breakthrough was a sensation, but it was built on decades of research.


ChatGPT is the talk of the tech world right now. Launched in late November 2022 as a web app by San Francisco-based OpenAI, the chatbot took off almost overnight. By some estimates, it is the fastest-growing internet service ever, reaching 100 million users in January 2023, just two months after launch. Through OpenAI's reported $10 billion deal with Microsoft, the technology is already being built into Office software and the Bing search engine.


Spurred into action by a long-time rival suddenly back in the search fight, Google is accelerating the rollout of its own chatbot, built on its large language model PaLM.

But OpenAI's breakthrough didn't come out of nowhere. ChatGPT is the most polished version yet in a line of large language models stretching back years.

But how did it get here?


1980–90s: recurrent neural networks


ChatGPT is a version of GPT-3, a large language model also developed by OpenAI. A large language model (or LLM) is a type of neural network that is trained on a large amount of text.

Neural networks are software inspired by the way neurons in animal brains signal each other.

Because text is made up of sequences of letters and words of varying lengths, language models require a type of neural network that can make sense of this kind of data. Recurrent neural networks, invented in the 1980s, can process sequences of words, but they are slow to train and tend to forget earlier words in a sequence.

In 1997, computer scientists Sepp Hochreiter and Jürgen Schmidhuber fixed this by inventing LSTM (Long Short-Term Memory) networks, recurrent neural networks with special components that allow earlier information in an input sequence to be retained for longer. These networks can handle strings of text several hundred words long, but their language skills are limited.
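For the curious, here is roughly what it looks like to feed a sentence through an LSTM in code. This is a toy sketch in Python using the PyTorch library, not anything from OpenAI; the vocabulary, the sizes, and the example sentence are made up purely for illustration.

# Toy sketch: an LSTM reads a sentence one word at a time while keeping a memory.
import torch
import torch.nn as nn
vocab = {"a": 0, "hot": 1, "dog": 2, "should": 3, "be": 4, "walked": 5}
embed = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
sentence = ["a", "hot", "dog", "should", "be", "walked"]
ids = torch.tensor([[vocab[w] for w in sentence]])  # shape: (1 sentence, 6 words)
outputs, (hidden, cell) = lstm(embed(ids))
# 'outputs' holds one vector per word; 'hidden' and 'cell' carry a running memory
# of everything read so far, which is what lets the network remember earlier words.
print(outputs.shape)  # torch.Size([1, 6, 32])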


2017: Transformers


The breakthrough behind today's generation of large language models came when a team of Google researchers invented transformers, a type of neural network that can track where each word or phrase appears in a sequence. The meaning of a word often depends on the meanings of the words that come before or after it. By tracking this contextual information, transformers can process longer strings of text and capture the meanings of words more accurately. For example, "hot dog" means very different things in the sentences "A hot dog should be given plenty of water" and "A hot dog should be eaten with mustard."
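The key ingredient inside a transformer is called attention: every word in a sentence looks at every other word and decides how much it matters. Below is a minimal sketch of that idea in Python with NumPy. It is not Google's implementation, and the word vectors are random stand-ins for the representations a real model would learn.

# Minimal sketch of scaled dot-product attention, the core idea of a transformer.
import numpy as np
words = ["a", "hot", "dog", "should", "be", "walked"]
d = 8  # size of each word vector
rng = np.random.default_rng(0)
Q = rng.normal(size=(len(words), d))  # "queries": what each word is looking for
K = rng.normal(size=(len(words), d))  # "keys": what each word offers to the others
V = rng.normal(size=(len(words), d))  # "values": the information each word carries
scores = Q @ K.T / np.sqrt(d)  # how relevant is every word to every other word
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax, row by row
contextual = weights @ V  # each word's new vector now mixes in its context
# Row i of 'weights' shows how strongly word i attends to every other word, which is
# how "dog" can end up meaning something different next to "walked" than next to "mustard".
print(np.round(weights[words.index("dog")], 2))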


2018–2019: GPT and GPT-2


OpenAI's first two major language models appeared only a few months apart. The company aims to develop general-purpose, multi-functional artificial intelligence and believes that large language models are a key step towards this goal.

GPT (short for Generative Pre-trained Transformer) was a major breakthrough, surpassing the state-of-the-art benchmarks for natural language processing at the time.

GPT combines transformers with unsupervised learning, which is a way of training machine learning models on data (in this case, lots and lots of text) that is not annotated beforehand.

This allows the software to discover patterns in the data on its own without having to be told what to look for. Many previous successes in machine learning have relied on supervised learning and annotated data, but labeling data by hand is slow work and thus limits the size of datasets available for training.
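In practice, "discovering patterns on its own" mostly comes down to one simple game: hide the next word and make the model guess it, over and over, on enormous amounts of text. The sketch below shows that objective on a toy scale in Python with PyTorch; the tiny model and the twelve-word "dataset" are placeholders, nothing like OpenAI's actual setup.

# Sketch of the unsupervised objective behind GPT: predict the next word.
# No hand-written labels are needed; the text itself provides the answers.
import torch
import torch.nn as nn
text = "the cat sat on the mat the dog sat on the rug".split()
vocab = {w: i for i, w in enumerate(sorted(set(text)))}
ids = torch.tensor([vocab[w] for w in text])
inputs, targets = ids[:-1], ids[1:]  # each word's "label" is simply the word that follows it
model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for step in range(100):  # tiny training loop on the toy text
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# A real GPT uses a transformer and hundreds of billions of words, but the game is
# the same: read some text, guess the next word, adjust the parameters, repeat.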

GPT-2, which followed, generated far more buzz. At the time, OpenAI said it was so concerned that people could use GPT-2 "to generate deceptive, biased, or offensive language" that it would not release the full model. How times change!


2020: GPT-3


GPT-2 was impressive, but OpenAI's follow-up, GPT-3, made jaws drop. Its ability to generate human-like text was a huge leap forward. GPT-3 can answer questions, summarize documents, generate stories in different styles, translate between English, French, Spanish, and Japanese, and more. Its mimicry is uncanny.

One of the most notable things about GPT-3 is that its gains came from scaling up existing techniques rather than inventing new ones. GPT-3 has 175 billion parameters (the values in a network that are adjusted during training), compared with GPT-2's 1.5 billion, and it was trained on far more data.

But training on text scraped from the internet brings new problems. GPT-3 absorbed much of the misinformation and prejudice it found online and reproduced it on demand.

As OpenAI admits: "Internet-trained models have internet-scale biases."


December 2020: Toxic text and other issues


While OpenAI was grappling with GPT-3's biases, the rest of the tech world was reckoning with how to curb the toxic tendencies of AI.

It's no secret that large language models can spew out false — even hateful — text, but researchers have found that fixing the problem isn't high on the priority list of most big tech firms.


January 2022: InstructGPT


OpenAI attempted to reduce the amount of misinformation and offensive text that GPT-3 produces by using reinforcement learning to train a version of the model on human testers' preferences (a technique called "reinforcement learning from human feedback").
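At the heart of that technique is a "reward model": shown two answers to the same prompt, it learns to give a higher score to the one the human tester preferred, and that score is then used as the reward when fine-tuning the language model. Here is a toy sketch of the preference step in Python with PyTorch; the little scoring network and the random stand-in "answers" are placeholders, not OpenAI's actual models or data.

# Toy sketch of the preference step in reinforcement learning from human feedback:
# a reward model learns to score the human-preferred answer above the rejected one.
import torch
import torch.nn as nn
reward_model = nn.Linear(16, 1)  # placeholder scorer: text features -> a single score
optimizer = torch.optim.Adam(reward_model.parameters(), lr=0.01)
chosen = torch.randn(1, 16)    # stand-in features for the answer the tester preferred
rejected = torch.randn(1, 16)  # stand-in features for the answer the tester rejected
for step in range(200):
    margin = reward_model(chosen) - reward_model(rejected)
    loss = -nn.functional.logsigmoid(margin).mean()  # push the preferred score higher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# In full RLHF, the scores from a reward model like this become the reward signal
# used to nudge the language model toward answers people actually prefer.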

The result, InstructGPT, was better at following the instructions of the people using it and produced less offensive language, less misinformation, and fewer errors overall. In short, InstructGPT is better behaved, unless it is asked not to be.


May–July 2022: OPT, BLOOM


A common criticism of large language models is that the cost of training them makes it difficult for all but the wealthiest labs to build one. This raises concerns that such powerful AI is being built by small corporate teams behind closed doors, without proper oversight and input from the wider research community.

In response, several collaborative projects have developed large language models, OPT and BLOOM among them, and released them free of charge to any researcher who wants to study and improve the technology.


December 2022: ChatGPT


Even OpenAI was amazed at how ChatGPT was received. In the company's first demo, made the day before ChatGPT went online, the chatbot was presented as an incremental update to InstructGPT.

Like that model, ChatGPT was trained using reinforcement learning on feedback from human testers, who rated its performance as a natural, accurate, and harmless conversational partner.

In practice, OpenAI trained GPT-3 to master the conversational game and invited everyone to come and play. Millions of us have been playing since then.


October 2023: GPT-4


With GPT-4, you can communicate not only through text but also through images. This means that after you upload an image, GPT-4 will respond in the chat window with natural language, code, instructions, or comments about that image. Unique, isn't it?
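For readers curious about what this looks like under the hood, asking GPT-4 about an image through OpenAI's API is roughly the sketch below, written in Python with the openai package. The model name, image address, and question are example values only, and you need your own API key.

# Rough sketch of asking GPT-4 about an image via OpenAI's Python API.
from openai import OpenAI
client = OpenAI()  # expects an OPENAI_API_KEY in your environment
response = client.chat.completions.create(
    model="gpt-4o",  # example model name; any GPT-4 model with vision support works
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)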


If you are interested, you can enroll in our ChatGPT training for adults 60+! Think of this training as a digital adventure in which you discover what ChatGPT is all about and learn how to chat with it like a professional, asking questions and getting answers.

 
 
 




TEAM - Cooperation for Thriving Elder Academy
This project is funded with support from the European Commission under the Erasmus+ program, KA210-ADU - Small-scale partnerships in adult education.
