# Image understanding
Source: <https://ai.google.dev/gemini-api/docs/image-understanding>
---
Gemini models are built to be multimodal from the ground up, unlocking a wide range of image processing and computer vision tasks including but not limited to image captioning, classification, and visual question answering without having to train specialized ML models.
**Tip:** In addition to their general multimodal capabilities, Gemini models (2.0 and newer) offer **improved accuracy** for specific use cases like object detection and segmentation, through additional training. See the Capabilities section for more details.
## Passing images to Gemini
You can provide images as input to Gemini using two methods:
* Passing inline image data: Ideal for smaller files (total request size less than 20MB, including prompts).
* Uploading images using the File API: Recommended for larger files or for reusing images across multiple requests.
### Passing inline image data
You can pass inline image data in the request to `generateContent`. You can provide image data as Base64 encoded strings or by reading local files directly (depending on the language).
The following example shows how to read an image from a local file and pass it to the `generateContent` API for processing.
```python
from google import genai
from google.genai import types

client = genai.Client()

with open('path/to/small-sample.jpg', 'rb') as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model='gemini-2.5-flash',
    contents=[
        types.Part.from_bytes(
            data=image_bytes,
            mime_type='image/jpeg',
        ),
        'Caption this image.'
    ]
)

print(response.text)
```
You can also fetch an image from a URL, convert it to bytes, and pass it to `generateContent` as shown in the following example.
```python
from google import genai
from google.genai import types
import requests

image_path = "https://goo.gle/instrument-img"
image_bytes = requests.get(image_path).content

image = types.Part.from_bytes(
    data=image_bytes, mime_type="image/jpeg"
)

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=["What is this image?", image],
)

print(response.text)
```
**Note:** Inline image data limits your total request size (text prompts, system instructions, and inline bytes) to 20MB. For larger requests, upload image files using the File API. The File API is also more efficient for scenarios that use the same image repeatedly.
### Uploading images using the File API
For large files, or to reuse the same image file across requests, use the File API. The following code uploads an image file and then uses the file in a call to `generateContent`. See the [Files API guide](/gemini-api/docs/files) for more information and examples.
```python
from google import genai

client = genai.Client()

my_file = client.files.upload(file="path/to/sample.jpg")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[my_file, "Caption this image."],
)

print(response.text)
```
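Putting the two approaches together, here is a minimal sketch of a helper that sends small files inline and falls back to the File API for larger ones. The helper name, the example path, and the 15 MB cutoff (chosen to leave headroom under the 20 MB total request limit, which also counts prompts and system instructions) are illustrative assumptions, not part of the SDK:

```python
from google import genai
from google.genai import types
import os

client = genai.Client()

def image_part_for(path: str, mime_type: str = "image/jpeg",
                   inline_limit: int = 15 * 1024 * 1024):
    """Return a content part: inline bytes for small files, a File API upload otherwise.

    The 15 MB cutoff is an illustrative assumption that leaves headroom under the
    20 MB total request limit, which also includes prompts and system instructions.
    """
    if os.path.getsize(path) <= inline_limit:
        with open(path, 'rb') as f:
            return types.Part.from_bytes(data=f.read(), mime_type=mime_type)
    return client.files.upload(file=path)

# Usage: either return value can be placed directly in `contents`.
part = image_part_for("path/to/sample.jpg")  # illustrative path
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[part, "Caption this image."],
)
print(response.text)
```

Because both inline parts and uploaded file references can be passed directly in `contents`, the calling code does not need to care which path was taken.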
## Prompting with multiple images
You can provide multiple images in a single prompt by including multiple image `Part` objects in the `contents` array. These can be a mix of inline data (local files or URLs) and File API references.
```python
from google import genai
from google.genai import types

client = genai.Client()

# Upload the first image
image1_path = "path/to/image1.jpg"
uploaded_file = client.files.upload(file=image1_path)

# Prepare the second image as inline data
image2_path = "path/to/image2.png"
with open(image2_path, 'rb') as f:
    img2_bytes = f.read()

# Create the prompt with text and multiple images
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        "What is different between these two images?",
        uploaded_file,  # Use the uploaded file reference
        types.Part.from_bytes(
            data=img2_bytes,
            mime_type='image/png'
        )
    ]
)

print(response.text)
```
## Object detection
From Gemini 2.0 onwards, models are further trained to detect objects in an image and return their bounding box coordinates. The coordinates are normalized to the range [0, 1000] relative to the image dimensions, so you need to scale them back to your original image size.
```python
from google import genai
from google.genai import types
from PIL import Image
import json

client = genai.Client()

prompt = "Detect all of the prominent items in the image. The box_2d should be [ymin, xmin, ymax, xmax] normalized to 0-1000."
image = Image.open("/path/to/image.png")

config = types.GenerateContentConfig(
    response_mime_type="application/json"
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[image, prompt],
    config=config
)

width, height = image.size
bounding_boxes = json.loads(response.text)

converted_bounding_boxes = []
for bounding_box in bounding_boxes:
    abs_y1 = int(bounding_box["box_2d"][0] / 1000 * height)
    abs_x1 = int(bounding_box["box_2d"][1] / 1000 * width)
    abs_y2 = int(bounding_box["box_2d"][2] / 1000 * height)
    abs_x2 = int(bounding_box["box_2d"][3] / 1000 * width)
    converted_bounding_boxes.append([abs_x1, abs_y1, abs_x2, abs_y2])

print("Image size: ", width, height)
print("Bounding boxes:", converted_bounding_boxes)
```
**Note:** The model also supports generating bounding boxes based on custom instructions, such as "Show bounding boxes of all green objects in this image". It also supports custom labels, for example "label the items with the allergens they can contain".
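For instance, building on the detection example above, both a custom instruction and a custom labeling scheme can be expressed entirely in the prompt. This is a minimal sketch; the prompt wording and the image path are illustrative, and the `box_2d` and `label` keys appear in the response only because the prompt asks for them:

```python
from google import genai
from google.genai import types
from PIL import Image
import json

client = genai.Client()

# Custom instruction: restrict detection to green objects and ask for allergen labels.
prompt = (
    "Show bounding boxes of all green objects in this image. "
    "Label the items with the allergens they can contain. "
    "Return a JSON list where each entry has a box_2d of [ymin, xmin, ymax, xmax] "
    "normalized to 0-1000 and a descriptive label."
)

image = Image.open("/path/to/image.png")  # illustrative path

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[image, prompt],
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)

for item in json.loads(response.text):
    print(item["label"], item["box_2d"])
```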
For more examples, check the following notebooks in the [Gemini Cookbook](https://github.com/google-gemini/cookbook):
* [2D spatial understanding notebook](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/quickstarts/Spatial_understanding.ipynb)
* [Experimental 3D pointing notebook](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/Spatial_understanding_3d.ipynb)
## Segmentation
Starting with Gemini 2.5, models not only detect items but also segment them and provide their contour masks.
The model predicts a JSON list, where each item represents a segmentation mask. Each item has a bounding box ("`box_2d`") in the format `[y0, x0, y1, x1]` with coordinates normalized between 0 and 1000, a label ("`label`") that identifies the object, and finally the segmentation mask inside the bounding box, as a base64-encoded PNG that is a probability map with values between 0 and 255. The mask needs to be resized to match the bounding box dimensions, then binarized at your confidence threshold (127 for the midpoint).
**Note:** For better results, disable [thinking](/gemini-api/docs/thinking) by setting the thinking budget to 0. See the code sample below for an example.
```python
from google import genai
from google.genai import types
from PIL import Image, ImageDraw
import io
import base64
import json
import numpy as np
import os

client = genai.Client()

def parse_json(json_output: str):
    # Parsing out the markdown fencing
    lines = json_output.splitlines()
    for i, line in enumerate(lines):
        if line == "```json":
            json_output = "\n".join(lines[i+1:])  # Remove everything before "```json"
            json_output = json_output.split("```")[0]  # Remove everything after the closing "```"
            break  # Exit the loop once "```json" is found
    return json_output

def extract_segmentation_masks(image_path: str, output_dir: str = "segmentation_outputs"):
    # Load and resize image
    im = Image.open(image_path)
    im.thumbnail([1024, 1024], Image.Resampling.LANCZOS)

    prompt = """
    Give the segmentation masks for the wooden and glass items.
    Output a JSON list of segmentation masks where each entry contains the 2D
    bounding box in the key "box_2d", the segmentation mask in key "mask", and
    the text label in the key "label". Use descriptive labels.
    """

    config = types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0)  # set thinking_budget to 0 for better results in object detection
    )

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[prompt, im],  # Pillow images can be directly passed as inputs (which will be converted by the SDK)
        config=config
    )

    # Parse JSON response
    items = json.loads(parse_json(response.text))

    # Create output directory
    os.makedirs(output_dir, exist_ok=True)

    # Process each mask
    for i, item in enumerate(items):
        # Get bounding box coordinates
        box = item["box_2d"]
        y0 = int(box[0] / 1000 * im.size[1])
        x0 = int(box[1] / 1000 * im.size[0])
        y1 = int(box[2] / 1000 * im.size[1])
        x1 = int(box[3] / 1000 * im.size[0])

        # Skip invalid boxes
        if y0 >= y1 or x0 >= x1:
            continue

        # Process mask
        png_str = item["mask"]
        if not png_str.startswith("data:image/png;base64,"):
            continue

        # Remove prefix
        png_str = png_str.removeprefix("data:image/png;base64,")
        mask_data = base64.b64decode(png_str)
        mask = Image.open(io.BytesIO(mask_data))

        # Resize mask to match bounding box
        mask = mask.resize((x1 - x0, y1 - y0), Image.Resampling.BILINEAR)

        # Convert mask to numpy array for processing
        mask_array = np.array(mask)

        # Create overlay for this mask
        overlay = Image.new('RGBA', im.size, (0, 0, 0, 0))
        overlay_draw = ImageDraw.Draw(overlay)

        # Create overlay for the mask
        color = (255, 255, 255, 200)
        for y in range(y0, y1):
            for x in range(x0, x1):
                if mask_array[y - y0, x - x0] > 128:  # Threshold for mask
                    overlay_draw.point((x, y), fill=color)

        # Save individual mask and its overlay
        mask_filename = f"{item['label']}_{i}_mask.png"
        overlay_filename = f"{item['label']}_{i}_overlay.png"
        mask.save(os.path.join(output_dir, mask_filename))

        # Create and save overlay
        composite = Image.alpha_composite(im.convert('RGBA'), overlay)
        composite.save(os.path.join(output_dir, overlay_filename))
        print(f"Saved mask and overlay for {item['label']} to {output_dir}")

# Example usage
if __name__ == "__main__":
    extract_segmentation_masks("path/to/image.png")
```
Check the [segmentation example](https://colab.research.google.com/github/google-gemini/cookbook/blob/main/quickstarts/Spatial_understanding.ipynb#scrollTo=WQJTJ8wdGOKx) in the cookbook guide for a more detailed example.
*An example segmentation output with objects and segmentation masks*
## Supported image formats
Gemini supports the following image format MIME types:
* PNG - `image/png`
* JPEG - `image/jpeg`
* WEBP - `image/webp`
* HEIC - `image/heic`
* HEIF - `image/heif`
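When passing inline bytes you have to supply the MIME type yourself. One way to avoid hard-coding it is to derive it from the file extension. The helper below is an illustrative sketch using Python's standard `mimetypes` module; the helper name and the set literal are assumptions, not part of the SDK:

```python
from google.genai import types
import mimetypes

SUPPORTED_IMAGE_MIME_TYPES = {
    "image/png", "image/jpeg", "image/webp", "image/heic", "image/heif",
}

def image_part_from_path(path: str) -> types.Part:
    """Guess the MIME type from the file extension and build an inline image part."""
    # Note: older Python versions may not map .heic/.heif, in which case
    # guess_type returns None and the file is rejected here.
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type not in SUPPORTED_IMAGE_MIME_TYPES:
        raise ValueError(f"Unsupported or unknown image type for {path}: {mime_type}")
    with open(path, "rb") as f:
        return types.Part.from_bytes(data=f.read(), mime_type=mime_type)
```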
## Capabilities
All Gemini model versions are multimodal and can be used for a wide range of image processing and computer vision tasks, including but not limited to image captioning, visual question answering, image classification, object detection, and segmentation.
Gemini can reduce the need to use specialized ML models depending on your quality and performance requirements.
Some later model versions are specifically trained to improve the accuracy of these specialized tasks in addition to their generic capabilities:
* **Gemini 2.0 models** are further trained to support enhanced object detection.
* **Gemini 2.5 models** are further trained to support enhanced segmentation in addition to object detection.
## Limitations and key technical information
### File limit
Gemini 2.5 Pro/Flash, 2.0 Flash, 1.5 Pro, and 1.5 Flash support a maximum of 3,600 image files per request.
### Token calculation
* **Gemini 1.5 Flash and Gemini 1.5 Pro**: 258 tokens if both dimensions <= 384 pixels. Larger images are tiled (minimum tile 256 px, maximum 768 px, resized to 768x768), with each tile costing 258 tokens.
* **Gemini 2.0 Flash and Gemini 2.5 Flash/Pro**: 258 tokens if both dimensions <= 384 pixels. Larger images are tiled into 768x768 pixel tiles, each costing 258 tokens (a rough estimate is sketched after this list).
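As a rough illustration of the Gemini 2.0/2.5 rule, the sketch below assumes a large image is simply divided into 768x768 crops. The actual tiling also rescales the image, so treat this as an approximation rather than the billing formula:

```python
import math

def estimate_image_tokens(width: int, height: int) -> int:
    """Rough token estimate for Gemini 2.0/2.5 image inputs.

    Assumption: images with both dimensions <= 384 px cost a flat 258 tokens;
    larger images are approximated as a grid of 768x768 tiles at 258 tokens each.
    The real tiling also rescales the image, so this is only an approximation.
    """
    if width <= 384 and height <= 384:
        return 258
    tiles = math.ceil(width / 768) * math.ceil(height / 768)
    return tiles * 258

# Example: a 1024x768 image is approximated as 2 tiles -> about 516 tokens.
print(estimate_image_tokens(1024, 768))
```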
## Tips and best practices
* Verify that images are correctly rotated.
* Use clear, non-blurry images.
* When using a single image with text, place the text prompt _after_ the image part in the `contents` array.
## What's next
This guide shows you how to upload image files and generate text outputs from image inputs. To learn more, see the following resources:
* [Files API](/gemini-api/docs/files): Learn more about uploading and managing files for use with Gemini.
* [System instructions](/gemini-api/docs/text-generation#system-instructions): System instructions let you steer the behavior of the model based on your specific needs and use cases.
* [File prompting strategies](/gemini-api/docs/files#prompt-guide): The Gemini API supports prompting with text, image, audio, and video data, also known as multimodal prompting.
* [Safety guidance](/gemini-api/docs/safety-guidance): Sometimes generative AI models produce unexpected outputs, such as outputs that are inaccurate, biased, or offensive. Post-processing and human evaluation are essential to limit the risk of harm from such outputs.
|