* llama : support StableLM 2 1.6B
* convert : fix Qwen's set_vocab wrongly naming all special tokens `[PAD{id}]`
* convert : refactor Qwen's set_vocab so StableLM 2 can reuse it (see the first sketch after this list)
* nix : add tiktoken to llama-python-extra
* convert : use presence of tokenizer.json to determine StableLM tokenizer loader

  It's a less arbitrary heuristic than the vocab size (see the second sketch after this list).
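
The `[PAD{id}]` fix boils down to including the tokenizer's special tokens when building the id-to-text map. A hedged, simplified sketch of that idea, not the exact convert script code; `vocab` and `added_vocab` stand in for the tiktoken base vocab and the special-token table that set_vocab builds earlier:

```python
def build_tokens(vocab: dict[str, int], added_vocab: dict[str, int], vocab_size: int) -> list[str]:
    # for tiktoken-style tokenizers, added_vocab is not a subset of vocab,
    # so both must be merged; building the reverse map from vocab alone is
    # what made every special-token id fall through to the pad placeholder
    reverse_vocab = {tok_id: text for text, tok_id in {**vocab, **added_vocab}.items()}
    # only ids with no token at all become "[PAD{id}]" placeholders now
    return [reverse_vocab.get(i, f"[PAD{i}]") for i in range(vocab_size)]
```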
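
A minimal sketch of the tokenizer.json heuristic from the last item; the returned loader names are illustrative, mirroring the GPT-2-style and Qwen-style vocab loaders the refactor makes available:

```python
from pathlib import Path

def pick_stablelm_vocab_loader(model_dir: Path) -> str:
    # tokenizer.json signals a regular Hugging Face BPE tokenizer
    if (model_dir / "tokenizer.json").is_file():
        return "gpt2"
    # otherwise assume a tiktoken-style vocab like Qwen's (StableLM 2 1.6B)
    return "qwen"
```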