OpenAI released GPT-4, a multi-modal model that accepts both text and images as input, accompanied by a 98-page paper that is light on details about the architecture, hardware, and training. GPT-4 has a longer memory than GPT-3.5 and performs better on tests, exams, and structured-writing tasks. However, the model is slower, and it still hallucinates and at times generates outputs that are untruthful, toxic, or unhelpful. The paper acknowledges that large models can produce such outputs. GPT-4's longer memory lets it retain roughly 50 pages of content. Companies like Morgan Stanley are using GPT-4 to sort through their databases of documents, while Be My Eyes is using GPT-4's image feature to help the visually impaired navigate their world.
Here are the key facts extracted from the text:
1. GPT-4 was released.
2. Its release was accompanied by a 98-page paper.
3. GPT-4 is a multi-modal model that can accept both images and text as input.
4. It was trained in the Microsoft Azure cloud.
5. GPT-4 was tested on various tasks and performed better than GPT-3.5 on exams.
6. It can remember up to 50 pages of content, eight times more than GPT-3.5.
7. GPT-4 is slower than GPT-3.5.
8. Users have high expectations for GPT-4, but it still has limitations, including hallucinations.
9. Some companies, like Morgan Stanley, are using GPT-4 for document sorting.
10. The longer memory of GPT-4 has practical applications in handling large databases of documents.