The GPT-4V(ision) system card <link below> was published on September 25, 2023, by OpenAI. It introduces GPT-4’s capability to analyze image inputs, marking a significant advancement in multimodal large language models (LLMs).
GPT-4V, a capability as game-changing as Code Interpreter, lets users instruct GPT-4 to analyze image inputs. OpenAI frames this as a major step in AI research, extending the reach of language-only systems with novel interfaces and capabilities.
Some of its features:
- Versatile in analyzing both complex and everyday images.
- Transforms educational settings with in-depth image interpretation.
- Capable of reading deeper context in an image, such as group dynamics in a photo.
- Groundbreaking, but still evolving and prone to occasional errors.
Examples (see also the Twitter/X links below the demo video):
- Image to code:

- Object recognition:

- Financial analysis:

GPT-4V is rolling out as of September 24th and will be available in both the ChatGPT iOS app and the web interface. You need a ChatGPT Plus subscription (which provides GPT-4 access) to use it.
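
If you want to experiment programmatically rather than through the ChatGPT apps, below is a minimal sketch of sending an image prompt via the OpenAI Python SDK. This assumes a vision-capable model is exposed through the Chat Completions API; the model name `gpt-4-vision-preview` and the image URL are illustrative placeholders, since at the time of this post image input was only confirmed for the ChatGPT iOS app and web interface.

```python
# Minimal sketch: asking a vision-capable GPT-4 model about an image.
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this chart."},
                # Image is passed by URL; a placeholder URL is used here.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```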
An impressive demo below:
The GPT-4V(ision) system card:
