Add is_loaded() API #53

Open · wants to merge 1 commit into main
Conversation

@larryliu0820 (Contributor) commented Apr 21, 2025

Summary:
Add is_loaded() API to tokenizer.h

Differential Revision: D73165546
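The summary above names the new API without showing it. As an illustration only, here is a minimal sketch of what an `is_loaded()` query on a tokenizer class might look like; the class shape, member names, and the simulated `load()` are assumptions for this example, not the actual tokenizer.h code:

```cpp
#include <string>

// Hypothetical sketch (not the real tokenizers code): a tokenizer that
// records whether load() succeeded and exposes that state via is_loaded().
class Tokenizer {
 public:
  // Loads vocabulary data from a file path. Here we merely simulate success
  // for non-empty paths; a real implementation would parse the model file.
  bool load(const std::string& path) {
    if (path.empty()) {
      return false;
    }
    loaded_ = true;
    return true;
  }

  // The new query API: callers can check readiness before encode()/decode().
  bool is_loaded() const {
    return loaded_;
  }

 private:
  bool loaded_ = false;
};
```

Such a query lets callers fail fast with a clear error instead of invoking encode/decode on an uninitialized tokenizer.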

@facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Apr 21, 2025
@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D73165546

facebook-github-bot pushed a commit that referenced this pull request Apr 21, 2025
Summary:
X-link: pytorch/executorch#10326


Pass in runner components, and move most of the instantiation logic from `load()` to a new static API, `create()`.

This makes the runner components testable.

Differential Revision: D73165546
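The commit message describes a dependency-injection refactor: components are passed into a static `create()` factory rather than constructed inside `load()`, so tests can inject fakes. A minimal sketch of that pattern, using hypothetical stub types that stand in for the real runner components:

```cpp
#include <memory>
#include <utility>

// Hypothetical stand-ins for the real runner dependencies.
struct TokenizerStub { bool loaded = false; };
struct ModuleStub { bool loaded = false; };

class Runner {
 public:
  // Static factory: dependencies are injected by the caller instead of
  // being instantiated inside load(), which makes mocking possible.
  static std::unique_ptr<Runner> create(
      std::unique_ptr<TokenizerStub> tokenizer,
      std::unique_ptr<ModuleStub> module) {
    return std::unique_ptr<Runner>(
        new Runner(std::move(tokenizer), std::move(module)));
  }

  // load() now only transitions component state; instantiation already
  // happened in create().
  bool load() {
    tokenizer_->loaded = true;
    module_->loaded = true;
    return true;
  }

  bool is_loaded() const { return tokenizer_->loaded && module_->loaded; }

 private:
  Runner(std::unique_ptr<TokenizerStub> tokenizer,
         std::unique_ptr<ModuleStub> module)
      : tokenizer_(std::move(tokenizer)), module_(std::move(module)) {}

  std::unique_ptr<TokenizerStub> tokenizer_;
  std::unique_ptr<ModuleStub> module_;
};
```

A test can now pass in stub components through `create()` and observe `is_loaded()` flip after `load()`, without touching any real model files.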

facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request Apr 21, 2025
larryliu0820 added a commit that referenced this pull request Apr 22, 2025
larryliu0820 added a commit to pytorch/executorch that referenced this pull request Apr 22, 2025
larryliu0820 added a commit to pytorch/executorch that referenced this pull request Apr 22, 2025

larryliu0820 added a commit that referenced this pull request Apr 22, 2025
facebook-github-bot pushed a commit that referenced this pull request May 19, 2025
Summary:
X-link: pytorch/executorch#10326


Pass in runner components, move most of the instantiation logic from `load()` to a new static API `create()`.

This adds testability to runner components.

Next step: move most of this logic into `extension/llm/runner/` so that it can be used with non-llama models.

Currently the logic for getting a tokenizer instance assumes llama; I can make it model-agnostic in the next diff.

Differential Revision: D73165546

facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request May 19, 2025
facebook-github-bot pushed a commit that referenced this pull request May 19, 2025

larryliu0820 added a commit to pytorch/executorch that referenced this pull request May 19, 2025
facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request May 19, 2025
larryliu0820 added a commit to pytorch/executorch that referenced this pull request May 19, 2025
facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request May 20, 2025

facebook-github-bot pushed a commit to pytorch/executorch that referenced this pull request May 20, 2025
Reviewed By: iseeyuan

Differential Revision: D73165546
@larryliu0820 changed the title from "Use dependency injection for runner" to "Add is_loaded() API" on May 20, 2025
@larryliu0820 larryliu0820 requested a review from jackzhxng May 20, 2025 22:06
larryliu0820 added a commit to pytorch/executorch that referenced this pull request May 20, 2025
Reviewed By: kirklandsign, iseeyuan

Differential Revision: D73165546
Labels: CLA Signed (managed by the Meta Open Source bot), fb-exported

3 participants