Background: I’m writing unit tests for a Java data-intensive application. The application’s inputs are user-defined database tables that can contain all data types (numeric types, string types, date types, ...), represented by large hierarchical value classes. There are many classes that perform various transformations, and I want to test them.
The current approach builds a big instance with an AutoValue builder, runs the class under test, and then does large asserts on each of the output fields. This is repeated several times to faithfully cover the various data types that the user can use (numeric types, string types, date types, ...).
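To make the debate concrete, here is a minimal sketch of the current style. All names are illustrative stand-ins (a plain record instead of a real AutoValue class, a trivial `transform`); the point is the hand-written construction and the per-field asserts, repeated per data-type variant.

```java
import java.time.LocalDate;

public class BuilderStyleSketch {
    // Stand-in for one of the real hierarchical value classes.
    record TableRow(long id, String name, LocalDate created) {}

    // Stand-in for a transformation under test: uppercases the name.
    static TableRow transform(TableRow in) {
        return new TableRow(in.id(), in.name().toUpperCase(), in.created());
    }

    public static void main(String[] args) {
        // Hand-built input...
        TableRow input = new TableRow(42L, "alice", LocalDate.of(2024, 1, 1));
        TableRow out = transform(input);
        // ...and a field-by-field assert block, duplicated for every
        // data-type variant (numeric, string, date, ...).
        if (out.id() != 42L) throw new AssertionError("id");
        if (!out.name().equals("ALICE")) throw new AssertionError("name");
        if (!out.created().equals(LocalDate.of(2024, 1, 1))) throw new AssertionError("created");
        System.out.println("builder-style test passed");
    }
}
```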
I want to convince a colleague to switch to representing the input and output as JSON files! I’m arguing that the cons of the current approach are:
- manually writing large builder and assert code is error-prone
- it is tightly coupled to the current implementation (i.e. AutoValue)
- it’s so much easier to auto-generate a JSON file and just review it!
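For contrast, here is a sketch of the JSON (golden-file/snapshot) style I’m proposing. The names are again illustrative, and the tiny hand-rolled `toJson` stands in for a real serializer such as Jackson; a real setup would also read the expected snapshot from a resource file rather than an inline string.

```java
import java.time.LocalDate;

public class GoldenFileSketch {
    record TableRow(long id, String name, LocalDate created) {}

    static TableRow transform(TableRow in) {
        return new TableRow(in.id(), in.name().toUpperCase(), in.created());
    }

    // Stand-in for a real JSON serializer (e.g. Jackson's ObjectMapper).
    static String toJson(TableRow r) {
        return String.format("{\"id\":%d,\"name\":\"%s\",\"created\":\"%s\"}",
                r.id(), r.name(), r.created());
    }

    public static void main(String[] args) {
        TableRow input = new TableRow(42L, "alice", LocalDate.of(2024, 1, 1));
        // One reviewed snapshot replaces a wall of per-field asserts; when
        // expected behavior changes, regenerate the file and review the diff.
        String expected = """
                {"id":42,"name":"ALICE","created":"2024-01-01"}""";
        String actual = toJson(transform(input));
        if (!actual.equals(expected)) {
            throw new AssertionError("snapshot mismatch:\n" + actual);
        }
        System.out.println("golden-file test passed");
    }
}
```

The design point: the expected value is data under review, not code under maintenance, so covering another data-type variant means adding one input/output file pair instead of another builder-and-assert block.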
The colleague argues that:
- not all internal classes have JSON serde, and some contain external classes for which adding serde would be difficult
- the builder-and-assert code is type-checked by the compiler
- that’s the Java way 🙂
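One possible counter to the serde objection, sketched below with illustrative names: an external class you cannot annotate needs its adapter written exactly once, after which every test benefits, whereas builder code is rewritten per test. A real project would register such an adapter with Jackson or Gson instead of hand-rolling it.

```java
public class AdapterSketch {
    // Stand-in for an external class with no JSON support that you cannot modify.
    static final class ExternalDecimal {
        private final String value;
        ExternalDecimal(String value) { this.value = value; }
        String raw() { return value; }
    }

    // One adapter, written once, covers every test that touches the type.
    static String toJson(ExternalDecimal d) {
        return "\"" + d.raw() + "\"";
    }

    static ExternalDecimal fromJson(String json) {
        return new ExternalDecimal(json.substring(1, json.length() - 1));
    }

    public static void main(String[] args) {
        // The type round-trips through its JSON form.
        ExternalDecimal d = fromJson(toJson(new ExternalDecimal("12.50")));
        System.out.println(d.raw());
    }
}
```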
Can you help me with arguments to convince them and win this debate? Or, even better, links to reliable sources that champion the JSON-serde-for-everything approach?