'Use this Model' code snippets for timm models in Transformers could use improvements
#1124
Comments
Also based on this dataset: https://huggingface.co/datasets/huggingface/transformers-metadata/tree/main
@coyotte508 thanks! My day-to-day doesn't involve any internal repos, so I'm not an hf-internal member and can't see the code there. Might be a good time for me to cc @pcuenca and @qubvel for visibility.
Looking at the two models: […] vs […]

These are the different configs: […] vs […]

In https://huggingface.co/datasets/huggingface/transformers-metadata/blob/main/pipeline_tags.json#L1118 we do have a matching line, and in https://huggingface.co/datasets/huggingface/transformers-metadata/blob/main/frameworks.json#L248 we do have one as well. Btw, those come from https://huggingface.co/datasets/huggingface/transformers-metadata/tree/main, which is maintained by the transformers team / @LysandreJik. So in theory we have all the info we need to provide the correct snippets.

Going back to https://huggingface.co/api/models/timm/vit_base_patch16_224.augreg2_in21k_ft_in1k, we have an empty config, whereas the equivalent Transformers model has:

```json
"config": {
    "architectures": [
        "ViTForImageClassification"
    ],
    "model_type": "vit"
},
```

So the problem is probably the empty config for the timm model. If we compare the two, the timm model lacks the architectures / model_type fields. How we generate […]. So my best guess would be to add something like […].

Keep in mind I'm way out of my depth on the feasibility/reasonableness of this ask 🙏
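To make that lookup concrete, here is a rough sketch of how the metadata files could be consulted to recover the right Auto* class for an architecture. The `PipelineTagEntry` shape and field names are hypothetical stand-ins for whatever the referenced lines actually contain, since the dataset's schema isn't documented in this thread:

```ts
// Sketch only: `PipelineTagEntry` is a hypothetical shape for the entries in
// pipeline_tags.json, not a documented schema.
interface PipelineTagEntry {
	model_class: string; // e.g. "ViTForImageClassification"
	pipeline_tag: string; // e.g. "image-classification"
	auto_class: string; // e.g. "AutoModelForImageClassification"
}

const METADATA_BASE = "https://huggingface.co/datasets/huggingface/transformers-metadata/resolve/main";

// Resolve the Auto* class for an architecture listed in a repo's config.json.
async function lookupAutoClass(architecture: string): Promise<string | undefined> {
	const res = await fetch(`${METADATA_BASE}/pipeline_tags.json`);
	const entries = (await res.json()) as PipelineTagEntry[];
	return entries.find((e) => e.model_class === architecture)?.auto_class;
}
```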
@coyotte508 is correct, they're inferred from the config indeed.
Yes, it's on the open source / transformers team side, but probably not very high-prio imo.
So it looks like the metadata does indeed have the right info. The config.jsons for timm are not Transformers configs though, so adding those fields doesn't make sense; it'd be more like inferring model_type = timm wrapper from the fact that it's a timm model, and then using the values that are there... something (fuzzily) along those lines.
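A minimal sketch of that fuzzy idea, assuming a ModelData-like shape with `library_name` and an optional `config.model_type`; the `timm_wrapper` name is an assumption about what the wrapper's model type would be called, not a confirmed value:

```ts
// Sketch, not the actual implementation: fall back to a timm wrapper model
// type when a timm repo's config.json carries no Transformers model_type.
interface ModelLike {
	library_name?: string;
	config?: { model_type?: string };
}

function inferModelType(model: ModelLike): string | undefined {
	if (model.config?.model_type) {
		return model.config.model_type;
	}
	if (model.library_name === "timm") {
		// Assumption: timm checkpoints load through the Transformers timm
		// wrapper, so report its model type instead of leaving it empty.
		return "timm_wrapper";
	}
	return undefined;
}
```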
Can you explain what a "timm model in Transformers" is, BTW? I haven't followed and I'm not clear on what it actually means.
@julien-c You can use all of the timm models as image classifiers or feature extractors with transformers, including via the AutoModel/AutoProcessor and pipeline APIs (https://huggingface.co/blog/timm-transformers). It also allows timm models to work with the HF Trainer, and you can push the models back to the Hub and they work with either timm or transformers. The Hub models remain natively in timm format (checkpoint formats, keys, etc. are timm), and the config.json remains timm, but the timm wrapper adapts the model & image processor for use in Transformers.

The reason for this issue is that the pipeline snippets for timm models (for the Transformers lib) should now be the same as for equivalent types of native Transformers models. E.g. pick a timm model on the hub, e.g. https://huggingface.co/timm/vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k. With transformers:

```python
import transformers

pipe = transformers.pipeline("image-classification", model="timm/vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k")
pipe('torch2up.jpg')
```

```
Out[8]:
[{'label': 'chromatic color, chromatic colour, spectral color, spectral colour',
  'score': 0.9763004779815674},
 {'label': 'parallel', 'score': 0.005799838807433844},
 {'label': 'circle', 'score': 0.003302227472886443},
 {'label': 'triangle, trigon, trilateral', 'score': 0.0012768494198098779},
 {'label': 'graduated cylinder', 'score': 0.0011402885429561138}]
```

or

```python
from transformers import (
    AutoModelForImageClassification,
    AutoImageProcessor,
)

image_processor = AutoImageProcessor.from_pretrained('timm/vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k')
model = AutoModelForImageClassification.from_pretrained('timm/vit_so150m_patch16_reg4_gap_256.sbb_e250_in12k').eval()
```
The relevant code is here in huggingface.js: packages/tasks/src/model-libraries-snippets.ts, lines 848 to 892 (at 11274e4).
e.g. `const info = model.transformersInfo;`. You can add something like this, maybe:

```ts
const info = model.transformersInfo;
if (info && model.library_name === "timm" && model.pipeline_tag === "image-classification") {
	info.processor = ...;
	info.auto_model = ...;
}
```

That would fix the snippets. We could also change our internal codebase to change […]. So the ~4-line change in the huggingface.js snippets ⬆ is probably the simplest way.
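For concreteness, a filled-in version of that sketch, slotting into model-libraries-snippets.ts where `model` is in scope. The class names are taken from the expected Auto* snippet quoted elsewhere in this thread; treat them as an assumption, not the confirmed fix:

```ts
const info = model.transformersInfo;
if (info && model.library_name === "timm" && model.pipeline_tag === "image-classification") {
	// Assumption: timm classification checkpoints go through the generic
	// Auto* classes, as in the AutoImageProcessor /
	// AutoModelForImageClassification snippet shown in this thread.
	info.processor = "AutoImageProcessor";
	info.auto_model = "AutoModelForImageClassification";
}
```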
There are a few issues with the code snippets currently shown for timm models with Transformers (via the wrapper model): […]
The 1st issue may be partially addressed by #1120
I don't immediately see a path to fixing 2 & 3. The snippets should be the same as for Transformers, so it's not missing snippet code; it's somehow the identification of the AutoModel / Processor classes.
For 2/3, if a timm model is tagged with the image-classification task, I'd expect […], but we currently get this: […]

See e.g. […] vs […]
Am I correct that the fields of TransformersInfo in ModelData are derived from a combination of metadata in the model repo, plus possibly the config and/or preprocessor config files? The code that populates them doesn't appear to be here, but it seems the auto_model and processor fields may not be populated for timm models in a manner that results in correct snippets.
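For reference, the `transformersInfo` field on ModelData in @huggingface/tasks has roughly the following shape (paraphrased from the public type definitions; exact field names and optionality may have drifted):

```ts
// Rough shape of the info the snippets rely on; auto_model and processor
// are exactly the fields that appear to be missing for timm repos.
export interface TransformersInfo {
	/** e.g. "AutoModelForImageClassification" */
	auto_model: string;
	/** e.g. "image-classification" */
	pipeline_tag?: string;
	/** e.g. "AutoImageProcessor" */
	processor?: string;
}
```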