Commit Graph

18 Commits

Author SHA1 Message Date
sarveshwar-s
c5478aac7b Removed unused f-string (#273) 2022-07-27 00:30:08 -07:00
Jong Wook Kim
b4ae44927b ViT-L/14@336px (#234) 2022-04-21 16:45:46 -07:00
liuxingbaoyu
3482bb6ed3 fix utf8 username on windows (#227) 2022-04-10 14:07:46 -07:00
In-Ho Yi
7ef63f265b Patch clip model for ONNX compatibility (#219)
* Patch clip model for ONNX compatibility

Changes to use INT32 for tokenization, since ONNX doesn't yet support ArgMax(INT64)
Use explicit dimension for norm

* Add compatibility fix for torch 1.7
2022-04-10 13:35:32 -07:00
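The INT32 change above can be illustrated without torch: CLIP locates each sequence's end-of-text position with an argmax over the token ids (the end-of-text token has the highest id in the vocabulary), and ONNX export lacked ArgMax over INT64 inputs, hence the switch to int32 token tensors. A pure-Python sketch of the argmax trick (the middle token ids are made up for illustration; 49406/49407 are the start/end-of-text ids):

```python
# EOT has the largest id in CLIP's vocabulary, so an argmax over a token
# row finds the end of the sequence; ONNX could not export ArgMax over
# INT64 tensors, which is why the token tensor became int32.
tokens = [49406, 320, 1125, 49407, 0, 0]  # [SOT, word, word, EOT, pad, pad]
eot_position = max(range(len(tokens)), key=tokens.__getitem__)
print(eot_position)  # → 3 (the EOT slot)
```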
Jong Wook Kim
67fc250eb6 add RN50x64 and ViT-L/14 models 2022-01-25 17:04:00 -08:00
Tan Jia Huei
573315e83f use pkg_resources for PyTorch version comparison (#176)
use `pkg_resources` from `setuptools` to parse version strings; it is required by PyTorch >= 0.4.1 anyway
2021-11-09 01:57:26 -05:00
Tan Jia Huei
1a8b4b2899 Fix PyTorch version check for nightly builds (#173) 2021-11-05 01:17:26 -04:00
Santiago Castro
2867559c5f Fix PyTorch version check (#160)
* Fix PyTorch version check

* Fix suggestion

Co-authored-by: Jong Wook Kim <jongwook@openai.com>
2021-11-04 17:32:10 -04:00
Gianluca Gippetto
c13005fd42 In Compose, replace lambda function with named function (#151)
This prevents the following error on Windows (when using
a multi-process DataLoader, for example):

AttributeError: Can't pickle local object '_transform.<locals>.<lambda>'
2021-09-23 21:42:20 -04:00
Kevin Costa
22fde59cbe Can specify root directory when loading model (#136)
* Can specify root directory when loading model

* specifying download_root instead

Co-authored-by: Jong Wook Kim <jongwook@openai.com>
2021-08-08 23:43:22 -07:00
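The `download_root` option above amounts to a simple fallback over the default cache directory; a sketch under that assumption (the helper name is made up, only the fallback pattern is the point):

```python
import os

def resolve_download_root(download_root=None):
    # an explicit download_root wins; otherwise fall back to the
    # default cache directory used when loading a model
    return download_root or os.path.expanduser("~/.cache/clip")
```

So a call like `clip.load("ViT-B/32", download_root="/some/dir")` stores the downloaded weights under the given directory instead of `~/.cache/clip`.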
Santiago Castro
ff339871f3 Use tqdm with 1024 instead of 1000 unit scale (#131) 2021-08-08 23:20:38 -07:00
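A 1024 unit scale means binary (MiB) rather than decimal (MB) sizes, so the reported download progress matches what OS file managers show on disk; tqdm exposes this via `unit_scale=True` with `unit_divisor=1024`. A small stdlib sketch of the same arithmetic (helper name is illustrative):

```python
def human_size(n_bytes, divisor=1024, units=("B", "KiB", "MiB", "GiB")):
    # dividing by 1024 rather than 1000 yields binary units (KiB, MiB),
    # matching how most operating systems report file sizes
    size = float(n_bytes)
    for unit in units:
        if size < divisor or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= divisor

print(human_size(354_000_000))  # → "337.6 MiB"
```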
Jong Wook Kim
dff9d15305 add ViT-B/16 and RN50x16 models 2021-07-19 14:57:31 -07:00
Romain Beaumont
a2737ac264 Add truncate option to tokenize (#126)
* Add truncate_text option to tokenize

This makes it possible to run tokenize on texts that are longer than the context length, without having to guess beforehand where to cut in terms of characters

* add doc, rename to just "truncate", use eot_token

Co-authored-by: Jong Wook Kim <jongwook@openai.com>
2021-07-18 20:17:40 -07:00
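The truncate behavior described above can be sketched as follows (a simplified stand-in, not CLIP's actual code; context length 77 and the end-of-text id 49407 match the real tokenizer, the function name is assumed):

```python
def pad_or_truncate(tokens, context_length=77, eot_token=49407, truncate=True):
    # Over-long sequences previously raised an error; with truncate=True
    # the tail is cut and the final slot is forced back to EOT so the
    # argmax-based EOT lookup still works on the truncated row.
    if len(tokens) > context_length:
        if not truncate:
            raise RuntimeError("Input is too long for context length")
        tokens = tokens[:context_length - 1] + [eot_token]
    return tokens + [0] * (context_length - len(tokens))
```

In the released API this surfaces as `clip.tokenize(texts, truncate=True)`.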
Jong Wook Kim
db20393f4a Using non-JIT by default; compat fix with 1.8+ 2021-07-18 18:45:21 -07:00
Jong Wook Kim
fd6c1443c2 add RN101 and RN50x4; update paper URL and model card 2021-03-04 12:30:39 -05:00
Sebastian Berns
4c0275784d Load model from path (#41)
* Load model from path

* showing download progress in "MiB"

* clip.load() now can take a file path

Co-authored-by: Jong Wook Kim <jongwook@openai.com>
2021-02-16 06:19:42 -05:00
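The dispatch this change introduces can be sketched like so (names and the available-model list are placeholders): a known model name is downloaded, an existing checkpoint path is loaded directly, and anything else is an error:

```python
import os

def resolve_model(name, available=("RN50", "ViT-B/32")):
    # known model name -> download; existing file -> load from disk
    if name in available:
        return "download"
    if os.path.isfile(name):
        return "local path"
    raise RuntimeError(f"Model {name} not found; available models = {list(available)}")
```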
Jong Wook Kim
8f6deb52a1 showing download progress in "MiB" 2021-02-15 22:00:33 -05:00
boba_and_beer
3bee28119e Make the repo installable as a package (#26) 2021-01-30 03:05:01 +09:00