OpenAI introduced the GPT-4 multimodal model

OpenAI has unveiled GPT-4, a large multimodal model.

According to the announcement, GPT-4 can solve “complex problems with greater accuracy due to its broader general knowledge and capabilities.”

According to the developers, the model can take on a particular role at the user's request. For example, it can be asked to act as a lawyer or a tutor; GPT-4 will then process queries in that area of knowledge more accurately.

In a demo video, OpenAI President Greg Brockman showed how the service can be taught to quickly answer tax-related questions.

“This model is so good at mental arithmetic. It has broad and flexible capabilities,” he said.

OpenAI added that, compared to GPT-3.5, the new algorithm is more reliable, more creative, and better at handling nuanced instructions.

Compared to its predecessor, GPT-4 can generate much longer texts: up to 25,000 words versus 3,000.

Word limit set in ChatGPT and GPT-4. Data: OpenAI.

A separate video notes that the model has a number of capabilities the previous version lacked, including the ability to “reason” about images uploaded by users.

GPT-4 describes what it sees in the picture. Data: OpenAI.

“GPT-4 is a large multimodal model that, although less functional than humans in many real world scenarios, demonstrates human-level performance across a variety of professional and academic tests,” the announcement reads.

According to OpenAI staffer Andrej Karpathy, image processing means the AI can “see.”

In addition, OpenAI published a technical report on GPT-4. However, the developers decided not to disclose details about the model's size, its training process, or the data used in training.

The technology is available to ChatGPT Plus subscribers with some restrictions. The company has also opened a waitlist for those wishing to use the new model's API.
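For developers granted access, GPT-4 is served through the same chat-completions interface as GPT-3.5, where a "system" message sets the role described above (lawyer, tutor, and so on). A minimal sketch using only the standard library, assuming an `OPENAI_API_KEY` environment variable and the `gpt-4` model name; the helper names are illustrative, not from the announcement:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"


def build_payload(system_role: str, question: str) -> dict:
    """Build a chat-completions request body; the system message sets the role."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_role},
            {"role": "user", "content": question},
        ],
    }


def ask(system_role: str, question: str) -> str:
    """Send the request (requires a valid OPENAI_API_KEY and API access)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(system_role, question)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A call like `ask("You are a tax advisor.", "Is this expense deductible?")` mirrors the tax-question demo: the system message pins the model to one area of knowledge before the user's query arrives.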

OpenAI said it is already working with several companies to integrate the algorithm into their applications, including Duolingo, Stripe and Khan Academy.

Duolingo app with integrated GPT-4. Data: Duolingo.

Following the announcement, Microsoft confirmed rumors that the new Bing runs on a search-optimized version of GPT-4.

Recall that OpenAI introduced ChatGPT in November 2022. Within two months, the service became the fastest-growing in history, reaching 100 million active users.

In February 2023, Microsoft released an updated Bing search engine based on ChatGPT.

In March, the number of active users of the service exceeded 100 million.
