13 changes: 6 additions & 7 deletions docs/benchmarks/language/scc25_guide/scc25.md
@@ -36,6 +36,12 @@ PS: For any query regarding the contribution, feel free to raise an issue in the
If you encounter issues related to SCC, please submit them [here](https://github.com/mlcommons/inference/issues) with the **scc-25** label,
including the command used, error logs, and any additional useful information to debug the issue.

> **Note:**
> Downloading the models requires service account credentials to be supplied in the run command. These credentials will be shared with participants via email before the start of the competition. Append the following to the commands described in the sections below:
```
--use_service_account=yes --client_id=<CF-Access-Client-Id> --client_secret=<CF-Access-Client-Secret>
```
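For illustration, the flags attach to the end of whichever run command applies; the base command below is a placeholder, not an actual SCC command:
```
# Hypothetical example; substitute the actual run command from the
# sections below for the placeholder on the first line.
<base benchmark command> \
    --use_service_account=yes \
    --client_id=<CF-Access-Client-Id> \
    --client_secret=<CF-Access-Client-Secret>
```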

## Artifacts to submit to the SCC committee

You will need to submit the following files:
@@ -44,13 +50,6 @@ You will need to submit the following files:
* `mlperf_submission.md` - a description of your platform and highlights of the MLPerf benchmark execution.
* `<Team Name>` - the name under which results are pushed to the GitHub repository.


## SCC interview

You are encouraged to highlight and explain the MLPerf inference throughput obtained on your system
and to describe any improvements or extensions to this benchmark (such as adding a new hardware backend
or supporting multi-node execution) that would be useful to the community and [MLCommons](https://mlcommons.org).

## Run Commands

=== "MLCommons-Python"
2 changes: 1 addition & 1 deletion main.py
@@ -210,7 +210,7 @@ def mlperf_inference_implementation_readme(
content += f'{cur_space2}=== "{execution_env}"\n'
content += f"{cur_space3}###### {execution_env} Environment\n\n"
# ref to MLCFlow installation
content += f"{cur_space3}Please refer to the [installation page](site:inference/install/) to install MLCFlow for running the automated benchmark commands.\n\n"
content += f"{cur_space3}Please refer to the [installation page](site:install/) to install MLCFlow for running the automated benchmark commands.\n\n"
test_query_count = get_test_query_count(
    model, implementation, device.lower()
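The `site:` prefix is typically resolved against the documentation site root at build time (for example by the mkdocs-site-urls plugin, assuming this repo uses it), which is why the redundant `inference/` path segment is dropped. Below is a minimal, self-contained sketch of the string-assembly pattern the function above follows; the name `build_env_tab` is hypothetical and the logic is simplified, not the actual `main.py` implementation:
```
# Minimal sketch (hypothetical helper, not the actual main.py code) of
# how the generator builds nested tabbed markdown via indentation strings.
def build_env_tab(execution_env, indent="    "):
    cur_space2 = indent * 2
    cur_space3 = indent * 3
    content = f'{cur_space2}=== "{execution_env}"\n'
    content += f"{cur_space3}###### {execution_env} Environment\n\n"
    # The site:-prefixed link is resolved relative to the docs site root.
    content += (
        f"{cur_space3}Please refer to the "
        "[installation page](site:install/) to install MLCFlow for "
        "running the automated benchmark commands.\n\n"
    )
    return content

print(build_env_tab("Docker"))
```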