OpenAI launches GPT-4, a new version of the AI model behind ChatGPT
OpenAI has released GPT-4, the latest model behind its hugely popular artificial intelligence chatbot, ChatGPT.
The new model can respond to images, for example, by generating captions and descriptions or suggesting recipes from a photo of the ingredients.
ChatGPT launched in November 2022 and has been used by more than one million people. GPT-4 can also process roughly eight times as many words as the previous version of ChatGPT.
Although teachers have warned against relying on it, the chatbot is widely used to produce songs, poems, marketing copy, computer code, and homework help.
ChatGPT draws on a snapshot of the internet from 2021 as its knowledge base, answering questions in as natural and human-like a manner as possible. It can also imitate the writing styles of authors and songwriters.
There are concerns that it could eventually take over several tasks currently performed by people.
OpenAI said it spent six months building safety features into GPT-4 using human feedback, but cautioned that the model may still be prone to spreading false information.
GPT-4 will be available to subscribers who pay $20 a month for a premium tier of service. In a live demo it answered a challenging tax question, though there was no way to verify the accuracy of its answer.
GPT-4, like ChatGPT, is a form of "generative artificial intelligence," which uses algorithms and predictive language to create new content in response to prompts. According to OpenAI, GPT-4 has "higher-level reasoning skills" than ChatGPT; for instance, the model can find open meeting times across three people's schedules.
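To give a sense of the scheduling task OpenAI demonstrated, here is a minimal sketch of how the same problem can be solved with ordinary interval arithmetic; the schedules and working hours are invented for illustration, and this is not OpenAI's implementation — GPT-4 does this from natural-language input rather than structured data.

```python
def free_slots(busy, day_start=9, day_end=17):
    """Return free (start, end) intervals within a working day,
    given a person's busy intervals as (start_hour, end_hour) pairs."""
    slots = []
    cursor = day_start
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))
    return slots

def common_free(schedules):
    """Intersect everyone's free intervals to find shared open times."""
    common = free_slots(schedules[0])
    for busy in schedules[1:]:
        theirs = free_slots(busy)
        merged = []
        for a_start, a_end in common:
            for b_start, b_end in theirs:
                start, end = max(a_start, b_start), min(a_end, b_end)
                if start < end:
                    merged.append((start, end))
        common = merged
    return common

# Hypothetical busy hours for three people
schedules = [
    [(9, 10), (12, 13)],   # person A
    [(10, 11), (14, 15)],  # person B
    [(9, 9.5), (13, 14)],  # person C
]
print(common_free(schedules))  # → [(11, 12), (15, 17)]
```

The point of the demo is that GPT-4 performs this kind of multi-step reasoning from plain conversational text, without being handed structured interval data like the lists above.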
OpenAI also announced new collaborations with Be My Eyes, an app for people with visual impairments, and the language-learning service Duolingo to develop AI chatbots that can communicate with users in natural language.
OpenAI has cautioned that GPT-4 is still not entirely reliable and may "hallucinate," a condition in which the AI fabricates information or makes reasoning errors.