
Pack the LiteRT-LM Metadata using the flatbuffer-generated routines in litertln_header_schema_py_generated.py. #2112

Merged
copybara-service[bot] merged 1 commit into main from litert_lm_pr_907619596
May 6, 2026
Conversation

@copybara-service
Contributor

Pack the LiteRT-LM Metadata using the flatbuffer-generated routines in litertln_header_schema_py_generated.py.

Also, first pack the LiteRT-LM Metadata to get its actual size, then update the section offsets, and finally re-pack it with the correct offsets.

This change no longer limits the LiteRT-LM Metadata size to litertlm_core.BLOCK_SIZE, allowing us to use a more conservative value for the latter, e.g. 4096 bytes (the near-ubiquitous memory page size).
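The two-pass scheme described above can be sketched as follows. This is a minimal illustration, not the actual LiteRT-LM code: `pack_metadata`, `pack_header`, and the flat integer-offset layout are hypothetical stand-ins for the flatbuffer-generated routines in litertln_header_schema_py_generated.py. The trick relies on the packed size being independent of the offset *values* (fixed-width fields), so re-packing with the real offsets cannot change the size measured in the first pass.

```python
import struct

BLOCK_SIZE = 4096  # assumed conservative block size (memory page size)


def align_up(offset: int, alignment: int) -> int:
    """Round `offset` up to the next multiple of `alignment`."""
    return (offset + alignment - 1) // alignment * alignment


def pack_metadata(section_offsets: list[int]) -> bytes:
    """Hypothetical stand-in for the flatbuffer-generated serializer.

    Offsets are fixed-width (8-byte) integers, so the packed size
    depends only on the number of sections, not on their values.
    """
    return struct.pack(f"<{len(section_offsets)}q", *section_offsets)


def pack_header(num_sections: int) -> bytes:
    # Pass 1: pack with dummy offsets to learn the metadata's actual size.
    placeholder = pack_metadata([0] * num_sections)
    metadata_size = len(placeholder)

    # Sections start at the next block boundary after the metadata,
    # so the metadata itself is no longer capped at BLOCK_SIZE.
    first_section = align_up(metadata_size, BLOCK_SIZE)
    offsets = [first_section + i * BLOCK_SIZE for i in range(num_sections)]

    # Pass 2: re-pack with the correct offsets; the size must not change.
    packed = pack_metadata(offsets)
    assert len(packed) == metadata_size
    return packed
```

With three sections, pass 1 yields a 24-byte metadata blob, so the first section lands at offset 4096 and subsequent ones at successive block boundaries.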

@copybara-service copybara-service Bot force-pushed the litert_lm_pr_907619596 branch 5 times, most recently from 4847b1b to a3ac611, on May 6, 2026 at 13:05
… in `litertln_header_schema_py_generated.py`.
LiteRT-LM-PiperOrigin-RevId: 911307927
@copybara-service copybara-service Bot force-pushed the litert_lm_pr_907619596 branch from a3ac611 to fe88012 on May 6, 2026 at 13:48
@copybara-service copybara-service Bot merged commit fe88012 into main on May 6, 2026
@copybara-service copybara-service Bot deleted the litert_lm_pr_907619596 branch on May 6, 2026 at 13:48
