Description
Currently, if we want to do custom visualization or export of benchmark results, we have to wait until all of the benchmarks have completed before a custom Exporter can interpret the results. As an example, the only way to see results as benchmarks run is through what BDN passes to the logger (link to relevant source). On top of that, the benchmark runner in BDN does a lot of logging at various stages, and there is no way to customize those logs or use them meaningfully outside of a console/terminal.
For large projects with many benchmarks that take a long time to run, it would be useful to hook into the various stages of the benchmark run so that custom exported information is available before the whole run has completed.
For this issue, I want to suggest a new feature that hooks into the benchmark run and lets consumers subscribe to "events". An example of an event might be "RunStarted" or "RunCompleted", and each event would carry read-only data describing what happened. All of the logging BDN does today could also be moved onto this same abstraction, so that it can be replaced or extended with custom logging. Not only would this be useful for customising the log output, it could also make it possible to build tools on top of BDN that display live execution results graphically rather than textually. A rough sketch of the shape I have in mind follows below.
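To make the idea more concrete, here is a very rough sketch of the kind of listener interface I am imagining. Every name below is hypothetical; none of these types exist in BenchmarkDotNet today, and the exact event payloads would need to be designed properly.

```csharp
using System;

// Hypothetical, read-only payloads describing each event.
public sealed record RunStartedEvent(int TotalBenchmarkCount, DateTime StartedAt);
public sealed record BenchmarkCompletedEvent(string BenchmarkDisplayInfo, TimeSpan Elapsed, bool Success);
public sealed record RunCompletedEvent(int SucceededCount, int FailedCount, TimeSpan TotalElapsed);

// Hypothetical listener that consumers implement and register (e.g. via the config)
// to observe a run as it progresses.
public interface IBenchmarkEventListener
{
    void OnRunStarted(RunStartedEvent e);
    void OnBenchmarkCompleted(BenchmarkCompletedEvent e);
    void OnRunCompleted(RunCompletedEvent e);
}
```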
Diagnosers do somewhat implement this functionality with the Handle(HostSignal, DiagnoserActionParameters) method, but they are only run within the context of a single benchmark execution. If we want to subscribe to events such as a benchmark case finishing its run and its BenchmarkReport becoming available (as in the example linked above), it is hard to see how IDiagnoser could be extended to accommodate that.
I am willing to implement this, but wanted to confirm that it is a desirable feature for the project. If given the go-ahead, I can follow up with a proposal for how it could be implemented. Alternatively, if having events for everything that needs to be logged is too ambitious, I would still like to look into something narrower that lets me handle BenchmarkReport objects as they are created, as sketched below.
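As an illustration of that narrower version, here is a hedged sketch of the kind of consumer I have in mind, built on the hypothetical listener above. In the real feature, OnBenchmarkCompleted would carry the actual BenchmarkReport rather than the placeholder record used here.

```csharp
using System;
using System.IO;

// Hypothetical consumer: appends a line to a CSV file as each benchmark finishes,
// instead of waiting for the whole run to end before exporting anything.
public sealed class LiveCsvListener : IBenchmarkEventListener
{
    private readonly string path;

    public LiveCsvListener(string path) => this.path = path;

    public void OnRunStarted(RunStartedEvent e) =>
        File.WriteAllText(path, "benchmark,elapsed_ms,success" + Environment.NewLine);

    public void OnBenchmarkCompleted(BenchmarkCompletedEvent e) =>
        File.AppendAllText(path,
            $"{e.BenchmarkDisplayInfo},{e.Elapsed.TotalMilliseconds:F1},{e.Success}{Environment.NewLine}");

    public void OnRunCompleted(RunCompletedEvent e) =>
        Console.WriteLine($"Run finished: {e.SucceededCount} succeeded, {e.FailedCount} failed.");
}
```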