sarveshwar-s
d50d76daa6
Removed another unused f-string (#276)
2022-07-27 12:42:37 -07:00
Penn
f69a9bc217
Remove inefficient computation from AttentionPool2d Module (#271)
* fix inefficient attention computation
* remove erroneous formatting
* simplified flatten
Co-authored-by: Jong Wook Kim <jongwook@nyu.edu>
2022-07-21 13:04:35 -07:00
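One plausible reading of the "simplified flatten" item above, sketched with numpy and toy shapes (the repo itself uses torch, where `x.flatten(start_dim=2)` is the tidier equivalent of the explicit reshape shown here; shapes and values are made up for illustration):

```python
import numpy as np

# AttentionPool2d-style preprocessing: a feature map of shape (N, C, H, W)
# is flattened over the spatial dims and permuted to (HW, N, C) before
# attention. Toy shapes below; real CLIP feature maps are much larger.
N, C, H, W = 1, 2, 3, 4
x = np.arange(N * C * H * W).reshape(N, C, H, W)
flat = x.reshape(N, C, H * W).transpose(2, 0, 1)  # (HW, N, C)
print(flat.shape)  # -> (12, 1, 2)
```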
ProGamerGov
b46f5ac758
Don't reuse nn.ReLU modules (#239)
2022-05-01 14:31:46 -07:00
In-Ho Yi
7ef63f265b
Patch clip model for ONNX compatibility (#219)
* Patch clip model for ONNX compatibility
Changes to use INT32 for tokenization, since ONNX doesn't yet support ArgMax(INT64)
Use explicit dimension for norm
* Add compatibility fix for torch 1.7
2022-04-10 13:35:32 -07:00
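The tokenization detail above can be illustrated with a toy batch. CLIP's `encode_text` locates each sequence's end-of-text token via argmax, relying on the end-of-text id being the largest in the vocabulary; keeping tokens in INT32 lets that argmax export where ArgMax over INT64 is unsupported (token ids below follow CLIP's BPE convention, but the batch itself is made up):

```python
import numpy as np

# Toy tokenized batch: 49406 = start-of-text, 49407 = end-of-text
# (the largest id in CLIP's vocabulary), zeros are padding.
tokens = np.array([[49406, 320, 1125, 49407, 0, 0]], dtype=np.int32)  # int32, not int64

# argmax over the sequence dimension finds the end-of-text position.
eot_positions = tokens.argmax(axis=-1)
print(eot_positions)  # -> [3]
```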
or-toledano
3b473b0e68
Reduce half of similarity muls after encoding (#140)
(cAB)^T = c B^T A^T
Saves half of the similarity multiplications in CLIP's model.py after the visual/text encoding stages
2021-08-29 05:15:03 -04:00
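The identity in the commit body can be checked numerically. In this toy sketch (random features and shapes are made up for illustration), `logits_per_text` reduces to a transpose of `logits_per_image` instead of a second matrix product:

```python
import numpy as np

# CLIP computes logits_per_image = logit_scale * image_feats @ text_feats.T
# and logits_per_text with the operands swapped; since (cAB)^T = c B^T A^T,
# the second matrix product is just the transpose of the first.
rng = np.random.default_rng(0)
image_feats = rng.standard_normal((4, 8))
text_feats = rng.standard_normal((4, 8))
logit_scale = 100.0

logits_per_image = logit_scale * image_feats @ text_feats.T
logits_per_text = logits_per_image.T  # replaces logit_scale * text_feats @ image_feats.T

assert np.allclose(logits_per_text, logit_scale * text_feats @ image_feats.T)
```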
Haofan Wang
ea41722f9f
Rename VisualTransformer -> VisionTransformer (#97)
Fixes #94
2021-07-18 20:41:49 -07:00
Jong Wook Kim
8a665a683d
fixed model loading issue (#66)
2021-03-23 03:05:17 -04:00
Jong Wook Kim
290ac5cb15
Correctly initializing the logit scale parameter
adding numpy import
2021-03-22 22:09:57 -04:00
Jong Wook Kim
43c953e231
Correctly initializing the logit scale parameter
cf. #46
2021-03-22 18:07:08 -04:00
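For context on the two commits above: the CLIP paper initializes the learnable temperature to 0.07, and the model stores it as a log-scale parameter, so the correct starting value is log(1/0.07). A minimal numpy sketch (the repo wraps this value in a `torch.nn.Parameter`; plain numpy here for illustration):

```python
import numpy as np

# The softmax temperature is stored in log space so it stays positive
# under gradient updates; exp() recovers the multiplicative scale.
logit_scale_init = np.log(1 / 0.07)
temperature_scale = np.exp(logit_scale_init)  # ~14.29 at initialization
```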
Sebastian Berns
4c0275784d
Load model from path (#41)
* Load model from path
* showing download progress in "MiB"
* clip.load() now can take a file path
Co-authored-by: Jong Wook Kim <jongwook@openai.com>
2021-02-16 06:19:42 -05:00
Jong Wook Kim
c42a8e3c9e
added parameter initialization (fixes #15)
2021-01-30 03:26:10 +09:00
boba_and_beer
3bee28119e
Make the repo installable as a package (#26)
2021-01-30 03:05:01 +09:00