Evaluation via available checkpoints #7

@asasarkaya

Description

Hi, thanks for the great work. I wanted to run some evaluations using the available model checkpoints, but I ran into problems while following the instructions.

I downloaded kosmos_ph_oxe-pretrain.pt and kosmos_ph_oxe-pretrain.json, and modified the eval/simpler/eval_ckpts_bridge.py file as follows:

import os

ckpt_paths = [
    (
        "~/projects/RoboVLMs/models/kosmos_ph_oxe-pretrain.pt",
        "~/projects/RoboVLMs/configs/kosmos_ph_oxe-pretrain.json",
    )
]

for ckpt, config in ckpt_paths:
    print("evaluating checkpoint {}".format(ckpt))
    os.system("bash scripts/bridge.bash {} {}".format(ckpt, config))
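(Side note, not the cause of the crash below: since `os.system` goes through the shell, the `~` in the paths happens to get expanded; if this ever switches to `subprocess`, it won't. A small sketch that expands and validates the paths up front, using the same filenames as above; the check logic itself is mine, not from the repo:)

```python
import os

ckpt_paths = [
    (
        "~/projects/RoboVLMs/models/kosmos_ph_oxe-pretrain.pt",
        "~/projects/RoboVLMs/configs/kosmos_ph_oxe-pretrain.json",
    )
]

missing = []
for ckpt, config in ckpt_paths:
    # expanduser makes the paths valid even when they bypass the shell
    ckpt, config = os.path.expanduser(ckpt), os.path.expanduser(config)
    for path in (ckpt, config):
        if not os.path.isfile(path):
            missing.append(path)

# report anything that does not exist before launching the eval script
for path in missing:
    print("missing: {}".format(path))
```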

Running scripts/bridge.bash invokes eval/simpler/main_inference.py. However, its import at line 9, from eval.simpler.model_wrapper import BaseModelInference, fails because CustomModel is not defined in eval/simpler/model_wrapper.py.
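(For reference, here is a minimal, self-contained reproduction of this failure mode; the one-line module source below is hypothetical, not the actual RoboVLMs file. A module that references an undefined class at module level raises NameError as soon as anything imports it, which matches the traceback above:)

```python
# Hypothetical module body: a registry referencing a class that was
# never defined (e.g. because an optional definition step was skipped).
module_src = 'MODEL_REGISTRY = {"custom": CustomModel}\n'

try:
    # executing the module source is equivalent to importing the module
    exec(compile(module_src, "model_wrapper.py", "exec"), {})
    error = None
except NameError as exc:
    error = str(exc)

print(error)
```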

I would appreciate any help.
Thanks
PS: It would be great to have at least one section in the README that walks through running one of the three available checkpoints. To be honest, I found the README a bit confusing overall.
