Add is_loaded() API #53
base: main
Conversation
This pull request was exported from Phabricator. Differential Revision: D73165546

Summary: X-link: pytorch/executorch#10326

Pass in runner components and move most of the instantiation logic from `load()` to a new static API `create()`. This makes the runner components testable.

Differential Revision: D73165546
Force-pushed 2ae1d60 to b88bbc2
Force-pushed b88bbc2 to c7686a9
Force-pushed c7686a9 to 8724664
Summary: X-link: pytorch/executorch#10326

Pass in runner components and move most of the instantiation logic from `load()` to a new static API `create()`. This makes the runner components testable.

The next step is to move most of this logic into `extension/llm/runner/` so that it can be used with non-llama models. The logic for obtaining the tokenizer instance currently assumes llama, and it should not; I can change that in the next diff.

Differential Revision: D73165546
Force-pushed 8724664 to 08fe6cb
Force-pushed 08fe6cb to 01bc9c9
Force-pushed 01bc9c9 to 57eb76d
Summary: Pull Request resolved: #10326
X-link: pytorch-labs/tokenizers#53

Pass in runner components and move most of the instantiation logic from `load()` to a new static API `create()`. This makes the runner components testable.

The next step is to move most of this logic into `extension/llm/runner/` so that it can be used with non-llama models. The logic for obtaining the tokenizer instance currently assumes llama, and it should not; I can change that in the next diff.

Reviewed By: kirklandsign, iseeyuan

Differential Revision: D73165546
Summary: Add `is_loaded()` API to `tokenizer.h`.

Differential Revision: D73165546