Add a MCP tool that helps review alt text in context of surrounding text #6820
Conversation
🦋 Changeset detected. Latest commit: f98c701. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
size-limit report 📦
VS Code output:

Background:

In the input:

Response:
This example shows that, on its own, the AI did not create great alt text. Using the Primer MCP, it evaluated its alt text and self-corrected.
This does what a lot of LLM-powered tools are incapable of. This came from a discussion with
This has some real potential to clear the third level. And given that this is an MCP server for Primer and all the docs and knowledge therein, possibly even the fourth. In other words: 👀
},
},
],
sampling: {temperature: 0.4},
temperature controls how "random" or "creative" the model's output is: a low temperature is more predictable/conservative, while a high temperature is more creative/varied. I chose 0.4 (almost mid-range) to allow some flexibility in how the LLM reviews alt text.
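As a sketch of where that value plugs in (the field names mirror the diff above, not necessarily the final SDK call, and `SamplingRequest` is an assumed name):

```typescript
// Sketch only: the shape of a sampling request carrying a temperature.
// Field names follow the diff above; the real MCP SDK call may differ.
interface SamplingRequest {
  messages: Array<{role: 'user'; content: {type: 'text'; text: string}}>
  sampling: {temperature: number}
}

const request: SamplingRequest = {
  messages: [{role: 'user', content: {type: 'text', text: 'Review this alt text...'}}],
  // 0.4: mildly varied phrasing, but still a fairly conservative review.
  sampling: {temperature: 0.4},
}
```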
Pull Request Overview
This PR introduces an experimental review_alt_text tool to the Primer MCP server that evaluates the quality and accessibility compliance of alt text for images. The tool uses MCP sampling to assess whether alt text is meaningful and contextually relevant based on surrounding content.
Key changes:
- Adds a new accessibility-focused tool for reviewing alt text quality
- Implements MCP sampling integration for LLM-based evaluation
- Includes proper documentation and experimental status disclaimers
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
File | Description
---|---
packages/mcp/src/server.ts | Implements the new review_alt_text tool with MCP sampling functionality
.changeset/strong-lions-tan.md | Documents the addition of the review_alt_text tool for release notes
packages/mcp/src/server.ts
Outdated
role: 'user',
content: {
type: 'text',
text: `Does this alt text: '${alt}' meet accessibility guidelines and describe the ${image} accurately in context of this surrounding text: '${surroundingText}'?\n\n`,
The prompt template directly interpolates the image parameter, which could be either a File object or a URL string. This will result in '[object File]' being displayed when a File is passed, making the prompt unclear. Consider extracting a meaningful identifier from the image parameter before interpolation.
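One way to address this (a hypothetical helper, not code from the PR; `imageLabel` and its fallback string are assumed names):

```typescript
// Hypothetical helper: derive a readable identifier from either a
// File-like object or a URL string, so the prompt template never
// interpolates '[object File]'.
function imageLabel(image: {name: string} | string): string {
  if (typeof image === 'string') return image
  return image.name || 'uploaded image'
}
```

The label could then be interpolated into the prompt in place of the raw image parameter.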
…in as image input
text: response.content.type === 'text' ? response.content.text : 'Unable to generate summary',
},
],
altTextEvaluation: response.content.type === 'text' ? response.content.text : 'Unable to generate summary',
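The fallback pattern above can be isolated into a small function (a sketch; the type and function names here are assumptions, not the PR's code):

```typescript
// Sketch of the fallback: sampling responses may carry non-text content,
// in which case the tool degrades to a fixed message instead of crashing.
type SamplingContent = {type: 'text'; text: string} | {type: 'image'; data: string}

function extractEvaluation(content: SamplingContent): string {
  return content.type === 'text' ? content.text : 'Unable to generate summary'
}
```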
@joshblack I updated the nextSteps field after your approval to include: "DO NOT run this tool repeatedly on the same image - evaluations may vary slightly with each run."
@kendallgassner this is clever! Rock on.
Thanks everyone! I am going to merge this on Monday, as I am out of office Thursday and Friday and will want to monitor the tool's performance.
This PR introduces the review_alt_text tool to Primer MCP. The tool utilizes MCP sampling to assess whether provided alt text is meaningful and contextually relevant.
Alt text is often written by users or AI in a way that is generic (e.g., alt="Image") or disconnected from the surrounding content of a web page. The review_alt_text tool is designed to evaluate alt text, advocate for improvements, and guide users toward creating more descriptive and useful suggestions.
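For illustration only (the tool itself relies on LLM evaluation via MCP sampling, not string matching), the kind of generic alt text described above can be spotted even with a trivial heuristic:

```typescript
// Toy heuristic, not part of the PR: flag alt text that is empty or a
// bare placeholder word such as "image" or "photo".
const GENERIC_ALT = new Set(['image', 'photo', 'picture', 'graphic', 'icon'])

function looksGeneric(alt: string): boolean {
  const normalized = alt.trim().toLowerCase()
  return normalized.length === 0 || GENERIC_ALT.has(normalized)
}
```

The value of the LLM-based approach is in the cases this heuristic misses: alt text that is non-generic but still disconnected from the surrounding content.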
Note:
This tool is not intended to replace or prevent users from adding alt text. Its purpose is to review and provide feedback, helping users and AI craft more meaningful alt text for accessibility and context. Ideally this could be used by products like Copilot Coding Agent or Copilot Code Review.

Additional note: I could see this one day living in a different MCP, perhaps an accessibility MCP or GitHub's MCP, to target a broader audience. This is not tied to Primer, but I think this MCP is a safe space for testing review_alt_text.
Rollout strategy
Testing & Reviewing
This can be tested by pulling down this branch and installing the Primer MCP locally in VS Code.
Merge checklist