Interpret prediction results from video classification models
After requesting a prediction, Vertex AI returns results based on your
model's objective. Predictions from a classification model return shots and
segments in your videos that have been classified according to your own defined
labels. Each prediction is assigned a confidence score.
The confidence score communicates how strongly your model associates each
class or label with a test item. The higher the number, the higher the model's
confidence that the label should be applied to that item. You decide how high
the confidence score must be for you to accept the model's results.
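Applying your chosen threshold to returned predictions can be as simple as a filter. The sketch below is illustrative, not a Vertex AI API call: the prediction dictionaries mirror the batch output format shown later on this page, and the threshold value is an assumption you would tune for your own use case.

```python
# Illustrative sketch: keep only predictions whose confidence score meets
# your chosen threshold. The dict fields mirror the batch output format.
predictions = [
    {"displayName": "cat", "type": "segment-classification", "confidence": 0.7},
    {"displayName": "dog", "type": "shot-classification", "confidence": 0.4},
]

THRESHOLD = 0.5  # assumed cutoff; choose based on your precision/recall needs

accepted = [p for p in predictions if p["confidence"] >= THRESHOLD]
for p in accepted:
    print(p["displayName"], p["confidence"])
```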
Score threshold slider
In the Google Cloud console, Vertex AI provides a slider that's
used to adjust the confidence threshold for all classes or labels, or an
individual class or label. The slider is available on a model's detail page in
the Evaluate tab. The confidence threshold is the confidence level that
the model must have for it to assign a class or label to a test item. As you
adjust the threshold, you can see how your model's precision and recall
change. Higher thresholds typically increase precision and lower recall.
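The precision/recall trade-off behind the slider can be demonstrated with a small computation. This is an illustrative sketch using made-up scored examples, not Vertex AI data: each tuple pairs a confidence score with whether the label was actually correct.

```python
# Illustrative sketch: how precision and recall shift as the confidence
# threshold rises. Each tuple is (confidence, label_was_correct).
scored = [(0.95, True), (0.9, True), (0.8, False), (0.7, True),
          (0.6, False), (0.4, True), (0.3, False)]

def precision_recall(threshold):
    predicted = [s for s in scored if s[0] >= threshold]
    tp = sum(1 for _, correct in predicted if correct)
    total_pos = sum(1 for _, correct in scored if correct)
    precision = tp / len(predicted) if predicted else 1.0
    recall = tp / total_pos
    return precision, recall

for t in (0.5, 0.75, 0.9):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

Raising the threshold from 0.5 to 0.9 in this toy data moves precision up while recall drops, which is the behavior you observe as you drag the slider.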
Example batch prediction output
The following sample is the predicted result for a model that identifies
cats and dogs in a video. The result includes segment, shot, and one-second
interval classifications.
Note: The following JSON Lines example includes line breaks for
readability. In your JSON Lines files, line breaks are included only after
each JSON object.

```
{
  "instance": {
    "content": "gs://bucket/video.mp4",
    "mimeType": "video/mp4",
    "timeSegmentStart": "1s",
    "timeSegmentEnd": "5s"
  },
  "prediction": [{
    "id": "1",
    "displayName": "cat",
    "type": "segment-classification",
    "timeSegmentStart": "1s",
    "timeSegmentEnd": "5s",
    "confidence": 0.7
  }, {
    "id": "1",
    "displayName": "cat",
    "type": "shot-classification",
    "timeSegmentStart": "1s",
    "timeSegmentEnd": "4s",
    "confidence": 0.9
  }, {
    "id": "2",
    "displayName": "dog",
    "type": "shot-classification",
    "timeSegmentStart": "4s",
    "timeSegmentEnd": "5s",
    "confidence": 0.6
  }, {
    "id": "1",
    "displayName": "cat",
    "type": "one-sec-interval-classification",
    "timeSegmentStart": "1s",
    "timeSegmentEnd": "1s",
    "confidence": 0.95
  }, {
    "id": "1",
    "displayName": "cat",
    "type": "one-sec-interval-classification",
    "timeSegmentStart": "2s",
    "timeSegmentEnd": "2s",
    "confidence": 0.9
  }, {
    "id": "1",
    "displayName": "cat",
    "type": "one-sec-interval-classification",
    "timeSegmentStart": "3s",
    "timeSegmentEnd": "3s",
    "confidence": 0.85
  }, {
    "id": "2",
    "displayName": "dog",
    "type": "one-sec-interval-classification",
    "timeSegmentStart": "4s",
    "timeSegmentEnd": "4s",
    "confidence": 0.6
  }]
}
```

Last updated 2025-08-25 UTC.
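To work with output in this shape, you can parse each line of the results file and group the predictions by classification type. The sketch below uses Python's standard `json` module; the inline record and field names follow the sample output format, but the grouping logic is an assumption about how you might organize the results.

```python
import json

# Illustrative sketch: parse one JSON Lines record from a batch prediction
# results file and group its predictions by classification type.
line = json.dumps({
    "instance": {"content": "gs://bucket/video.mp4", "mimeType": "video/mp4"},
    "prediction": [
        {"id": "1", "displayName": "cat", "type": "segment-classification",
         "timeSegmentStart": "1s", "timeSegmentEnd": "5s", "confidence": 0.7},
        {"id": "1", "displayName": "cat", "type": "shot-classification",
         "timeSegmentStart": "1s", "timeSegmentEnd": "4s", "confidence": 0.9},
    ],
})

record = json.loads(line)
by_type = {}
for p in record["prediction"]:
    by_type.setdefault(p["type"], []).append(p)

for kind, preds in by_type.items():
    for p in preds:
        print(f'{kind}: {p["displayName"]} '
              f'[{p["timeSegmentStart"]}-{p["timeSegmentEnd"]}] '
              f'conf={p["confidence"]}')
```

In a real results file, you would apply `json.loads` to each line in turn, since every JSON object occupies its own line.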