* Add truncate_text option to tokenize
This makes it possible to run tokenize on texts that are longer than the
context length, without having to guess beforehand where to cut the input
in terms of characters.
* add doc, rename to just "truncate", use eot_token
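A minimal sketch of the truncation behavior this option adds. The standalone helper below is hypothetical (the name `fit_to_context` and the default token id are assumptions; the real `tokenize` works on `SimpleTokenizer` output and returns a tensor), but it illustrates the rule: over-long inputs are cut to the context length and the last slot is overwritten with the end-of-text token, instead of raising an error.

```python
def fit_to_context(tokens, context_length=77, eot_token=49407, truncate=False):
    """Fit a token-id list into context_length slots, zero-padded on the right.

    With truncate=True, over-long inputs are cut and the final slot is set to
    eot_token; with truncate=False, an error is raised (the prior behavior).
    """
    if len(tokens) > context_length:
        if not truncate:
            raise RuntimeError(
                f"Input is too long for context length {context_length}"
            )
        tokens = tokens[:context_length]
        tokens[-1] = eot_token  # keep the sequence terminated with end-of-text
    return tokens + [0] * (context_length - len(tokens))
```

From the caller's side the option is simply `clip.tokenize(texts, truncate=True)`.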
Co-authored-by: Jong Wook Kim <jongwook@openai.com>