
Commit ef92584

Update branding for service used by Safety evaluators
"Azure AI Content Safety service" -> "Azure AI Foundry Evaluation service"
Parent: 1e4f6c5 · Commit: ef92584

28 files changed, with 96 additions and 80 deletions.

src/Libraries/Microsoft.Extensions.AI.Evaluation.Console/Microsoft.Extensions.AI.Evaluation.Console.csproj

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 <Project Sdk="Microsoft.NET.Sdk">
 
   <PropertyGroup>
-    <Description>A dotnet tool for managing the evaluation data and generating reports.</Description>
+    <Description>A command line dotnet tool for generating reports and managing evaluation data.</Description>
     <OutputType>Exe</OutputType>
     <!-- Building only one TFM due to bug: https://github.com/dotnet/sdk/issues/47696
          Once this is fixed, we can go back to building multiple. -->

src/Libraries/Microsoft.Extensions.AI.Evaluation.Console/README.md

Lines changed: 3 additions & 1 deletion
@@ -4,7 +4,7 @@
 
 * [`Microsoft.Extensions.AI.Evaluation`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) - Defines core abstractions and types for supporting evaluation.
 * [`Microsoft.Extensions.AI.Evaluation.Quality`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) - Contains evaluators that can be used to evaluate the quality of AI responses in your projects including Relevance, Truth, Completeness, Fluency, Coherence, Retrieval, Equivalence and Groundedness.
-* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Content Safety service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
+* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Foundry Evaluation service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting) - Contains support for caching LLM responses, storing the results of evaluations and generating reports from that data.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting.Azure`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting.Azure) - Supports the `Microsoft.Extensions.AI.Evaluation.Reporting` library with an implementation for caching LLM responses and storing the evaluation results in an Azure Storage container.
 * [`Microsoft.Extensions.AI.Evaluation.Console`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Console) - A command line dotnet tool for generating reports and managing evaluation data.
@@ -16,6 +16,7 @@ From the command-line:
 ```console
 dotnet add package Microsoft.Extensions.AI.Evaluation
 dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
+dotnet add package Microsoft.Extensions.AI.Evaluation.Safety
 dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
 ```
 
@@ -25,6 +26,7 @@ Or directly in the C# project file:
 <ItemGroup>
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="[CURRENTVERSION]" />
+  <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Safety" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="[CURRENTVERSION]" />
 </ItemGroup>
 ```
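As a point of reference for the packages listed in this README, the following is a minimal, hedged sketch of running one of the quality evaluators against a model response. It is not taken from the repository; the `ChatConfiguration` constructor shape, the `EvaluateAsync` overload used, and the placeholder chat client are assumptions about the preview API and may differ between versions.

```csharp
using System;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Placeholder: supply a real IChatClient (for example, one wrapping an Azure OpenAI or
// OpenAI client). The quality evaluators use it as the LLM "judge".
IChatClient chatClient = /* your IChatClient here */ null!;

// Assumption: ChatConfiguration can be constructed directly from an IChatClient.
var chatConfiguration = new ChatConfiguration(chatClient);

// Evaluate a single response for coherence.
IEvaluator coherenceEvaluator = new CoherenceEvaluator();

var messages = new[] { new ChatMessage(ChatRole.User, "Explain what a binary search tree is.") };
var modelResponse = new ChatResponse(new ChatMessage(ChatRole.Assistant, "A binary search tree is ..."));

EvaluationResult result =
    await coherenceEvaluator.EvaluateAsync(messages, modelResponse, chatConfiguration);

// Each evaluator contributes one or more named metrics to the result.
NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
Console.WriteLine($"Coherence: {coherence.Value}");
```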

src/Libraries/Microsoft.Extensions.AI.Evaluation.Quality/Microsoft.Extensions.AI.Evaluation.Quality.csproj

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 <Project Sdk="Microsoft.NET.Sdk">
 
   <PropertyGroup>
-    <Description>A library containing a set of evaluators for evaluating the quality (coherence, relevance, truth, completeness, groundedness, fluency, equivalence etc.) of responses received from an LLM.</Description>
+    <Description>A library containing evaluators that can be used to evaluate the quality of AI responses in your projects including Relevance, Truth, Completeness, Fluency, Coherence, Retrieval, Equivalence and Groundedness.</Description>
     <TargetFrameworks>$(TargetFrameworks);netstandard2.0</TargetFrameworks>
     <RootNamespace>Microsoft.Extensions.AI.Evaluation.Quality</RootNamespace>
   </PropertyGroup>

src/Libraries/Microsoft.Extensions.AI.Evaluation.Quality/README.md

Lines changed: 3 additions & 1 deletion
@@ -4,7 +4,7 @@
 
 * [`Microsoft.Extensions.AI.Evaluation`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) - Defines core abstractions and types for supporting evaluation.
 * [`Microsoft.Extensions.AI.Evaluation.Quality`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) - Contains evaluators that can be used to evaluate the quality of AI responses in your projects including Relevance, Truth, Completeness, Fluency, Coherence, Retrieval, Equivalence and Groundedness.
-* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Content Safety service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
+* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Foundry Evaluation service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting) - Contains support for caching LLM responses, storing the results of evaluations and generating reports from that data.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting.Azure`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting.Azure) - Supports the `Microsoft.Extensions.AI.Evaluation.Reporting` library with an implementation for caching LLM responses and storing the evaluation results in an Azure Storage container.
 * [`Microsoft.Extensions.AI.Evaluation.Console`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Console) - A command line dotnet tool for generating reports and managing evaluation data.
@@ -16,6 +16,7 @@ From the command-line:
 ```console
 dotnet add package Microsoft.Extensions.AI.Evaluation
 dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
+dotnet add package Microsoft.Extensions.AI.Evaluation.Safety
 dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
 ```
 
@@ -25,6 +26,7 @@ Or directly in the C# project file:
 <ItemGroup>
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="[CURRENTVERSION]" />
+  <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Safety" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="[CURRENTVERSION]" />
 </ItemGroup>
 ```

src/Libraries/Microsoft.Extensions.AI.Evaluation.Reporting.Azure/Microsoft.Extensions.AI.Evaluation.Reporting.Azure.csproj

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 <Project Sdk="Microsoft.NET.Sdk">
 
   <PropertyGroup>
-    <Description>A library that provides additional an additional storage provider based on Azure Storage containers.</Description>
+    <Description>A library that supports the Microsoft.Extensions.AI.Evaluation.Reporting library with an implementation for caching LLM responses and storing the evaluation results in an Azure Storage container.</Description>
     <TargetFrameworks>$(TargetFrameworks);netstandard2.0</TargetFrameworks>
     <RootNamespace>Microsoft.Extensions.AI.Evaluation.Reporting</RootNamespace>
     <!-- EA0002: Use System.TimeProvider to make the code easier to test. -->

src/Libraries/Microsoft.Extensions.AI.Evaluation.Reporting.Azure/README.md

Lines changed: 3 additions & 1 deletion
@@ -4,7 +4,7 @@
 
 * [`Microsoft.Extensions.AI.Evaluation`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) - Defines core abstractions and types for supporting evaluation.
 * [`Microsoft.Extensions.AI.Evaluation.Quality`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) - Contains evaluators that can be used to evaluate the quality of AI responses in your projects including Relevance, Truth, Completeness, Fluency, Coherence, Retrieval, Equivalence and Groundedness.
-* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Content Safety service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
+* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Foundry Evaluation service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting) - Contains support for caching LLM responses, storing the results of evaluations and generating reports from that data.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting.Azure`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting.Azure) - Supports the `Microsoft.Extensions.AI.Evaluation.Reporting` library with an implementation for caching LLM responses and storing the evaluation results in an Azure Storage container.
 * [`Microsoft.Extensions.AI.Evaluation.Console`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Console) - A command line dotnet tool for generating reports and managing evaluation data.
@@ -16,6 +16,7 @@ From the command-line:
 ```console
 dotnet add package Microsoft.Extensions.AI.Evaluation
 dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
+dotnet add package Microsoft.Extensions.AI.Evaluation.Safety
 dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
 ```
 
@@ -25,6 +26,7 @@ Or directly in the C# project file:
 <ItemGroup>
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="[CURRENTVERSION]" />
+  <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Safety" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="[CURRENTVERSION]" />
 </ItemGroup>
 ```

src/Libraries/Microsoft.Extensions.AI.Evaluation.Reporting/CSharp/Microsoft.Extensions.AI.Evaluation.Reporting.csproj

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@
   -->
 
   <PropertyGroup>
-    <Description>A library for aggregating and reporting evaluation data. This library also includes support for caching LLM responses.</Description>
+    <Description>A library that contains support for caching LLM responses, storing the results of evaluations and generating reports from that data.</Description>
     <TargetFrameworks>$(TargetFrameworks);netstandard2.0</TargetFrameworks>
     <RootNamespace>Microsoft.Extensions.AI.Evaluation.Reporting</RootNamespace>
     <!-- EA0002: Use System.TimeProvider to make the code easier to test. -->

src/Libraries/Microsoft.Extensions.AI.Evaluation.Reporting/CSharp/README.md

Lines changed: 3 additions & 1 deletion
@@ -4,7 +4,7 @@
 
 * [`Microsoft.Extensions.AI.Evaluation`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) - Defines core abstractions and types for supporting evaluation.
 * [`Microsoft.Extensions.AI.Evaluation.Quality`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Quality) - Contains evaluators that can be used to evaluate the quality of AI responses in your projects including Relevance, Truth, Completeness, Fluency, Coherence, Retrieval, Equivalence and Groundedness.
-* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Content Safety service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
+* [`Microsoft.Extensions.AI.Evaluation.Safety`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Safety) - Contains a set of evaluators that are built atop the Azure AI Foundry Evaluation service that can be used to evaluate the content safety of AI responses in your projects including Protected Material, Groundedness Pro, Ungrounded Attributes, Hate and Unfairness, Self Harm, Violence, Sexual, Code Vulnerability and Indirect Attack.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting) - Contains support for caching LLM responses, storing the results of evaluations and generating reports from that data.
 * [`Microsoft.Extensions.AI.Evaluation.Reporting.Azure`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Reporting.Azure) - Supports the `Microsoft.Extensions.AI.Evaluation.Reporting` library with an implementation for caching LLM responses and storing the evaluation results in an Azure Storage container.
 * [`Microsoft.Extensions.AI.Evaluation.Console`](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation.Console) - A command line dotnet tool for generating reports and managing evaluation data.
@@ -16,6 +16,7 @@ From the command-line:
 ```console
 dotnet add package Microsoft.Extensions.AI.Evaluation
 dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
+dotnet add package Microsoft.Extensions.AI.Evaluation.Safety
 dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
 ```
 
@@ -25,6 +26,7 @@ Or directly in the C# project file:
 <ItemGroup>
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Quality" Version="[CURRENTVERSION]" />
+  <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Safety" Version="[CURRENTVERSION]" />
   <PackageReference Include="Microsoft.Extensions.AI.Evaluation.Reporting" Version="[CURRENTVERSION]" />
 </ItemGroup>
 ```

src/Libraries/Microsoft.Extensions.AI.Evaluation.Safety/CodeVulnerabilityEvaluator.cs

Lines changed: 2 additions & 2 deletions
@@ -9,8 +9,8 @@
 namespace Microsoft.Extensions.AI.Evaluation.Safety;
 
 /// <summary>
-/// An <see cref="IEvaluator"/> that utilizes the Azure AI Content Safety service to evaluate code completion responses
-/// produced by an AI model for the presence of vulnerable code.
+/// An <see cref="IEvaluator"/> that utilizes the Azure AI Foundry Evaluation service to evaluate code completion
+/// responses produced by an AI model for the presence of vulnerable code.
 /// </summary>
 /// <remarks>
 /// <para>
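For context on the doc comment being reworded above: `CodeVulnerabilityEvaluator`, like the other Safety evaluators, delegates scoring to the Azure AI Foundry Evaluation service rather than to an LLM you supply. The sketch below shows one plausible way such an evaluator might be configured and invoked; the `ContentSafetyServiceConfiguration` constructor parameters and the `ToChatConfiguration()` helper are assumptions about the preview API, and all values in angle brackets are placeholders.

```csharp
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Assumption: the safety evaluators are pointed at an Azure AI Foundry project via a
// service configuration; the parameter names and placeholder values below are
// illustrative, not confirmed.
var serviceConfiguration = new ContentSafetyServiceConfiguration(
    credential: new DefaultAzureCredential(),
    subscriptionId: "<azure-subscription-id>",
    resourceGroupName: "<resource-group-name>",
    projectName: "<ai-foundry-project-name>");

// Assumption: the service configuration converts to the ChatConfiguration that
// IEvaluator.EvaluateAsync accepts.
ChatConfiguration chatConfiguration = serviceConfiguration.ToChatConfiguration();

IEvaluator evaluator = new CodeVulnerabilityEvaluator();

// Evaluate a code completion response for vulnerable code (e.g., SQL injection).
var messages = new[]
{
    new ChatMessage(ChatRole.User, "Write C# that looks up a user row by name.")
};
var modelResponse = new ChatResponse(new ChatMessage(
    ChatRole.Assistant,
    "var sql = \"SELECT * FROM Users WHERE Name = '\" + name + \"'\";"));

EvaluationResult result =
    await evaluator.EvaluateAsync(messages, modelResponse, chatConfiguration);
```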

src/Libraries/Microsoft.Extensions.AI.Evaluation.Safety/ContentHarmEvaluator.cs

Lines changed: 4 additions & 4 deletions
@@ -9,8 +9,8 @@
 namespace Microsoft.Extensions.AI.Evaluation.Safety;
 
 /// <summary>
-/// An <see cref="IEvaluator"/> that utilizes the Azure AI Content Safety service to evaluate responses produced by an
-/// AI model for the presence of a variety of harmful content such as violence, hate speech, etc.
+/// An <see cref="IEvaluator"/> that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by
+/// an AI model for the presence of a variety of harmful content such as violence, hate speech, etc.
 /// </summary>
 /// <remarks>
 /// <see cref="ContentHarmEvaluator"/> can be used to evaluate responses for all supported content harm metrics in one
@@ -22,10 +22,10 @@ namespace Microsoft.Extensions.AI.Evaluation.Safety;
 /// </remarks>
 /// <param name="metricNames">
 /// A optional dictionary containing the mapping from the names of the metrics that are used when communicating
-/// with the Azure AI Content Safety to the <see cref="EvaluationMetric.Name"/>s of the
+/// with the Azure AI Foundry Evaluation service, to the <see cref="EvaluationMetric.Name"/>s of the
 /// <see cref="EvaluationMetric"/>s returned by this <see cref="IEvaluator"/>.
 ///
-/// If omitted, includes mappings for all content harm metrics that are supported by the Azure AI Content Safety
+/// If omitted, includes mappings for all content harm metrics that are supported by the Azure AI Foundry Evaluation
 /// service. This includes <see cref="HateAndUnfairnessEvaluator.HateAndUnfairnessMetricName"/>,
 /// <see cref="ViolenceEvaluator.ViolenceMetricName"/>, <see cref="SelfHarmEvaluator.SelfHarmMetricName"/> and
 /// <see cref="SexualEvaluator.SexualMetricName"/>.
