Microsoft reveals Phi-3-vision, a new multimodal AI small language model
Microsoft brings out a small language model that can look at pictures
Microsoft announced a new version of its small language model, Phi-3, that can look at images and tell you what's in them. Phi-3-vision is a multimodal model — meaning it can read both text and images — and is designed to run well on mobile devices. Microsoft says Phi-3-vision, now available in preview, is a 4.2 billion parameter model (parameters indicate how large a model is and how much it was able to learn from its training) that can handle general visual reasoning tasks, like answering questions about charts or...