OpenAI has released its latest AI model, GPT-4, which exhibits human-level performance on various professional and academic benchmarks.
More Creative and Collaborative
OpenAI has officially announced the release of its latest AI language model, GPT-4.
The company claims the model is “more creative and collaborative than ever before” and capable of solving difficult problems with greater accuracy than its predecessors.
It can parse both text and image inputs, though it responds via text only.
Notably, GPT-4 retains many of the same problems as earlier language models, including the tendency to make up information (or “hallucinate”).
The model also lacks knowledge of events that occurred after September 2021, according to OpenAI.
Still Flawed But Interprets Complex Inputs
Announcing GPT-4 on Twitter, OpenAI CEO Sam Altman wrote, “It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”
“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” said OpenAI on its website.
Despite these limitations, OpenAI has already partnered with several companies, including Duolingo, Stripe, and Khan Academy, to integrate GPT-4 into their products.
The latest model is available to subscribers of ChatGPT Plus, OpenAI’s $20-per-month ChatGPT subscription.
Notably, GPT-4 also powers Microsoft’s Bing chatbot and will be accessible as an API for developers to build on.
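For developers, access runs through OpenAI’s chat completions REST endpoint. The sketch below is a minimal, hypothetical illustration of how a request body for that endpoint could be assembled; the endpoint URL and the "gpt-4" model identifier follow OpenAI’s published naming, but the helper function and prompt are assumptions made up for this example, not code from OpenAI.

```python
import json

# Hypothetical sketch: assemble a JSON request body for OpenAI's chat
# completions endpoint (https://api.openai.com/v1/chat/completions).
# The "gpt-4" model name follows OpenAI's announced naming; sending the
# request would additionally require an API key in an Authorization header.
def build_gpt4_request(prompt: str) -> str:
    body = {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            # Chat-style APIs take a list of role/content messages.
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

payload = build_gpt4_request("Summarize the GPT-4 announcement.")
print(payload)
```

The payload could then be POSTed to the endpoint with any HTTP client; the response would contain the model’s text output.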
The difference between GPT-4 and its predecessor, GPT-3.5, is not easily noticeable in everyday conversation, OpenAI noted.
However, the company claims that GPT-4’s enhancements are apparent in its performance on tests and benchmarks such as the Uniform Bar Exam, LSAT, SAT Math, and SAT Evidence-Based Reading & Writing exams.
GPT-4 scored in the 88th percentile or higher on these exams.
Although GPT-4 is multimodal, it accepts only text and image inputs and emits text outputs.
Still, the model’s ability to parse text and images simultaneously allows it to interpret more complex input.