OpenAI Releases GPT-4 AI Model with Human-Level Performance


TEHRAN (Tasnim) - OpenAI has released its latest artificial intelligence model, GPT-4, which it says achieves "human-level performance" on several academic and professional benchmarks, including the US bar exam, Advanced Placement tests, and the SAT.

Accessible via the $20-a-month paid version of ChatGPT, GPT-4 is a multimodal model capable of accepting both text and images as input, which it can then parse and respond to in text, according to the Financial Times.

The company says GPT-4 has already been embedded into various applications: language-learning app Duolingo is using it to build conversational language bots; education company Khan Academy has designed an online tutor with it; and Morgan Stanley Wealth Management is testing an internal GPT-4 chatbot that retrieves and synthesizes information for its employees.

GPT-4's ability to accept images and text as input means it can generate detailed descriptions and answer questions based on the contents of a photograph. OpenAI has also teamed up with Danish start-up Be My Eyes to build a GPT-4-based virtual volunteer that can guide or help those who are blind or partially sighted.
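To illustrate what "images and text as input" means in practice, here is a minimal sketch of how such a request might be assembled, loosely following the shape of OpenAI's chat-completions API. The model identifier, question, and image URL are placeholders for illustration, not details from the article, and the sketch only builds the request payload rather than sending it.

```python
import json

def build_multimodal_request(question: str, image_url: str) -> dict:
    """Assemble a chat-style request asking the model about an image.

    The payload mixes a text part and an image part in a single user
    message, which is how multimodal input is typically expressed.
    """
    return {
        "model": "gpt-4",  # placeholder model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    "What is shown in this photograph?",
    "https://example.com/photo.jpg",  # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

Sending this payload to a vision-capable endpoint would return a text answer describing the photograph, which is the capability Be My Eyes builds on.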

OpenAI claims GPT-4 is its "most advanced system yet," offering greater reliability and a better ability to handle nuanced queries than its predecessor, GPT-3.5. However, the company acknowledges that GPT-4 is not fully reliable: it can still "hallucinate," has a limited context window, and does not learn from experience.

Microsoft recently confirmed a "multibillion-dollar investment" in OpenAI over several years, and GPT-4 will underpin Microsoft's Bing chatbot. Google has also opened up its conversational chatbot, Bard, to a limited pool of testers and will allow Google Cloud customers to access its large language model PaLM.

OpenAI has put GPT-4 through stress tests to assess risks around bias, disinformation, privacy, and cybersecurity. These tests found that GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech; reproduce various biases and worldviews; and produce compromised or vulnerable code. The company also noted that GPT-4 is not yet capable of carrying out autonomous actions without human input.

Despite its capabilities, OpenAI warns that GPT-4 should be used with caution, especially in contexts where reliability is essential. OpenAI also states that it won't reveal any details about the technical aspects of GPT-4, including the model's architecture, training data, and hardware and computing capacity used to deploy it, due to competitive and safety concerns.
