* Patch clip model for ONNX compatibility
Change tokenization to use INT32, since ONNX does not yet support ArgMax on INT64 inputs
Use an explicit dimension for norm
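A minimal sketch of the idea, using NumPy in place of the actual model code: token buffers are built as INT32 rather than the default INT64, and the end-of-text position is recovered with argmax over an explicit axis. The token ids (49406/49407 standing in for start/end-of-text) and the context length of 77 are assumptions matching CLIP's usual setup, not taken from this patch.

```python
import numpy as np

# Hypothetical BPE ids; 49406/49407 stand in for start/end-of-text tokens
context_length = 77
tokens = [49406, 320, 1125, 49407]

# Allocate int32 instead of the default int64, so an exported ONNX graph's
# ArgMax node can consume the tensor directly
result = np.zeros((1, context_length), dtype=np.int32)
result[0, :len(tokens)] = tokens

# The end-of-text position is found via argmax over an explicit axis,
# relying on the eot token having the largest id in the vocabulary
eot_pos = result.argmax(axis=-1)
```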
* Add compatibility fix for torch 1.7
This prevents the following error on Windows (when using
a multi-process DataLoader, for example):
AttributeError: Can't pickle local object '_transform.<locals>.<lambda>'
* Add truncate_text option to tokenize
This makes it possible to run tokenize on texts whose tokenization exceeds the
context length, without having to guess beforehand where to cut the text in
terms of characters
* Add documentation, rename the option to just "truncate", and use eot_token to end truncated sequences
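The truncation behavior described above can be sketched as below. This is an assumed reconstruction, not the patch itself: the helper name `pad_tokens`, the default context length of 77, and the eot id 49407 are illustrative.

```python
def pad_tokens(tokens, context_length=77, eot_token=49407, truncate=False):
    """Pad a list of BPE token ids (already wrapped with start/end-of-text
    markers) out to context_length, optionally truncating long inputs."""
    if len(tokens) > context_length:
        if not truncate:
            raise RuntimeError(
                f"Input is too long for context length {context_length}")
        # Cut to the context length, but keep the sequence well-formed by
        # forcing the final position to be the end-of-text token
        tokens = tokens[:context_length]
        tokens[-1] = eot_token
    result = [0] * context_length
    result[:len(tokens)] = tokens
    return result
```

With `truncate=False` (the default) an over-long input raises, preserving the old behavior; with `truncate=True` the caller no longer needs to pre-cut the text by character count.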
Co-authored-by: Jong Wook Kim <jongwook@openai.com>