
Conversation

DavidGrath
Contributor

PR checklist

  • Read the contribution guidelines.
  • Pull Request title clearly describes the work in the pull request and Pull Request description provides details about how to validate the work. Missing information here may result in delayed response from the community.
  • Run the following to build the project and update samples:
    ./mvnw clean package || exit
    ./bin/generate-samples.sh ./bin/configs/*.yaml || exit
    ./bin/utils/export_docs_generators.sh || exit
    
    (For Windows users, please run the script in WSL)
    Commit all changed files.
    This is important, as CI jobs will verify all generator outputs of your HEAD commit as it would merge with master.
    These must match the expectations made by your contribution.
    You may regenerate an individual generator by passing the relevant config(s) as an argument to the script, for example ./bin/generate-samples.sh bin/configs/java*.
    IMPORTANT: Do NOT purge/delete any folders/files (e.g. tests) when regenerating the samples as manually written tests may be removed.
  • File the PR against the correct branch: master (upcoming 7.x.0 minor release - breaking changes with fallbacks), 8.0.x (breaking changes without fallbacks)
  • If your PR solves a reported issue, reference it using GitHub's linking syntax (e.g., having "fixes #123" present in the PR description)
  • If your PR is targeting a particular programming language, @mention the technical committee members, so they are more likely to review the pull request.

Fixes #20213
Concerning the test specifically: since the original issue isn't tied to one particular language, I wanted to know where the most fitting place to put it would be, and whether I should use all three generators that @lesteenman mentioned or just stick to one.

@DavidGrath
Contributor Author

@Mattias-Sehlstedt
Contributor

Hi @DavidGrath,

Could we add a test to the OpenAPINormalizerTest to more precisely illustrate how the fix affects the OpenAPI swagger-core model representation?

@DavidGrath
Contributor Author

@Mattias-Sehlstedt, thank you. I'm thinking of adding another assertion to an existing test that seems to have been made specifically for this:

    Schema schema23 = openAPI.getComponents().getSchemas().get("AnyOfNullableAdditionalPropertiesTest");
    assertEquals(((Schema) schema23.getProperties().get("str")).getAnyOf(), null);
    assertTrue(((Schema) schema23.getProperties().get("str")).getNullable());
    assertEquals(((Schema) schema23.getProperties().get("str")).getTypes(), Set.of("string"));

    AnyOfNullableAdditionalPropertiesTest:
      description: to test anyOf with additional properties
      additionalProperties: false
      properties:
        str:
          anyOf:
            - type: string
            - type: 'null'
            - type: null
            - $ref: null

Do you think this is sufficient, or should I still go ahead and make a new test or modify another existing test?
In addition, would it be better if I deleted my test in DefaultCodegenTest, or would it be okay if both tests stayed?

@Mattias-Sehlstedt
Contributor

I can start off by clarifying that I am not a maintainer of the repo, only a contributor. So I make personal comments that do not affect whether anything will get approved or not.

Do you think this is sufficient or should I still go ahead to make a new test or modify another existing test?
In addition, would it be better if I delete my test at DefaultCodegenTest or would it be okay if both tests stayed?

I believe it should be fine if you add an additional model and assertion to that test (I think I might have done so myself previously). The current tests in DefaultCodegenTest could be kept if you find them valuable for preventing regressions, but if you elect to keep them, I would argue that you should move them to their respective Codegen classes (spring and typescript-angular, since those are the generators you have tested) and redesign them slightly so they fit there.

I make this suggestion because the current tests in DefaultCodegenTest exercise behavior well beyond that of DefaultCodegen itself.

The code generation is basically:

  1. OpenAPI specification ->
  2. OpenAPI Java-representation (swagger-core) ->
  3. normalize representation ->
  4. convert representation into common code properties (this is what Codegen does) ->
  5. decorate Codegen on a language basis (see all inheritors of the class) ->
  6. use the common code properties to generate code by inserting them into mustache templates.

Your current test in DefaultCodegen does steps 4, 5 and 6, while it should preferably only do step 4.

So if you want to have tests in DefaultCodegen, you should inspect the created code properties rather than generating a file and inspecting that.
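
For illustration, a rough sketch of that style of test (not from the repo; the spec path is a placeholder and the helper/field names are as I recall them from the test module, so treat them as assumptions):

    import io.swagger.v3.oas.models.OpenAPI;
    import io.swagger.v3.oas.models.media.Schema;
    import org.openapitools.codegen.CodegenModel;
    import org.openapitools.codegen.CodegenProperty;
    import org.openapitools.codegen.DefaultCodegen;
    import org.openapitools.codegen.TestUtils;
    import org.testng.annotations.Test;

    import static org.testng.Assert.assertTrue;

    public class AdditionalPropertiesFalseSketchTest {

        @Test
        public void nullablePropertyIsExposedOnTheCodegenModel() {
            // Placeholder path: whichever 3.1 spec contains AnyOfNullableAdditionalPropertiesTest.
            OpenAPI openAPI = TestUtils.parseSpec("src/test/resources/3_1/placeholder.yaml");
            // (If the behavior under test depends on normalization, run the
            // OpenAPINormalizer on openAPI here before converting.)

            DefaultCodegen codegen = new DefaultCodegen();
            codegen.setOpenAPI(openAPI);

            Schema schema = openAPI.getComponents().getSchemas().get("AnyOfNullableAdditionalPropertiesTest");
            CodegenModel model = codegen.fromModel("AnyOfNullableAdditionalPropertiesTest", schema);

            // Assert on the created code properties instead of a generated file.
            CodegenProperty str = model.vars.stream()
                    .filter(v -> "str".equals(v.baseName))
                    .findFirst()
                    .orElseThrow(() -> new AssertionError("property 'str' not found"));
            assertTrue(str.isNullable);
        }
    }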

@Mattias-Sehlstedt
Contributor

It potentially looks like #21742 is something that could fix this? I.e., we could skip attempting the "map" processing if the property only has additionalProperties=false?

@DavidGrath
Contributor Author

@Mattias-Sehlstedt

Your current test in DefaultCodegen does steps 4, 5 and 6, while it should preferably only do step 4.

I agree. I'll move the Java test to JavaClientCodegenTest and delete the other two from DefaultCodegenTest, while keeping my changes to OpenAPINormalizerTest.

It potentially looks like #21742 is something that could fix this?

Do you mean something like this? :

if(!ModelUtils.isModelWithPropertiesOnly(schema)) {
    normalizeMapSchema(schema);
} else {
    normalizeProperties(schema.getProperties(), visitedSchemas);
}

@Mattias-Sehlstedt
Contributor

Do you mean something like this? :

if(!ModelUtils.isModelWithPropertiesOnly(schema)) {
    normalizeMapSchema(schema);
} else {
    normalizeProperties(schema.getProperties(), visitedSchemas);
}
Yes exactly.

I have not tried it out myself, but I read the code as "if there are additional properties, process it as a map, otherwise continue". That seems to work well for 3.0, where the if-case is properly false, but for 3.1 I assume the parser creates a different model for additionalProperties=false that accidentally causes it to be treated as additionalProperties=true by the normalizer.
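
To make that reading concrete, here is a minimal sketch (assumptions only, not the actual normalizer/ModelUtils code) of the kind of check that would pass for 3.1 input even when additionalProperties is false:

    import io.swagger.v3.oas.models.media.Schema;

    class AdditionalPropertiesCheckSketch {
        // Assumption: for "additionalProperties: false" the 3.0 parser stores Boolean.FALSE,
        // while the 3.1 parser stores a Schema instance for the same input.
        static boolean treatedAsMap(Schema schema) {
            Object ap = schema.getAdditionalProperties();
            return ap instanceof Schema                         // 3.1: passes even for "false"
                    || (ap instanceof Boolean && (Boolean) ap); // 3.0: correctly evaluates to false
        }
    }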

@DavidGrath
Contributor Author

@Mattias-Sehlstedt, thank you. I've subscribed to that PR so that I can make the changes once it's merged.
@wing328, @martin-mfg, please help review.

Successfully merging this pull request may close these issues.

[BUG][MULTIPLE] 3.1.0 spec, additionalProperties: false with nullable properties generates invalid models