
I am trying to adapt the pytest tool so that it can be used in my testing environment, which requires that precise test reports are produced and stored. The test reports are in XML format. So far I have succeeded in creating a new plugin which produces the XML I want, with one exception: I need to record passed assertions in my XML report, with the associated code if possible. I couldn't find a way to do so. The closest thing I found is to overload pytest_assertrepr_compare in the pytest plugin, but it is called only on assertion failures, not on passed assertions.
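For reference, the hook I mean looks roughly like this in a conftest.py (a minimal sketch with the standard arguments for that hook), and it is only invoked when a comparison fails:

```python
# conftest.py -- minimal sketch of the only hook I found so far.
# pytest calls this when a comparison assertion FAILS; it is never
# called for assertions that pass, which is exactly my problem.
def pytest_assertrepr_compare(config, op, left, right):
    # Returning a list of strings replaces the default failure explanation.
    return ["custom explanation:", "   {!r} {} {!r}".format(left, op, right)]
```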

Any ideas on how to do this?

Thanks for the help!

etienne

2 Answers


I think this is basically impossible without changing the assertion re-writing itself. py.test does not see what happens at the level of individual assert statements; it just executes the test function, which either returns a value (ignored) or raises an exception. In the case where it raises an exception, it then inspects the exception information in order to provide a nice failure message.

The assertion re-writing logic simply replaces the assert statement with an if not <assert_expr>: create_detailed_assertion_info. I guess in theory it is possible to extend the assertion rewriting so that it would call hooks on both passing and failure of the <assert_expr>, but that would be a new feature.
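To make the idea concrete, here is a hand-written sketch of what the rewriting effectively does with a plain assert (this is not the real generated code, just the shape of it):

```python
# Hand-written illustration, not the actual rewriter output.
def add(a, b):
    return a + b

# Original test code:  assert add(2, 2) == 4
__left = add(2, 2)
__right = 4
if not (__left == __right):
    # On failure, build a detailed message from the intermediate values.
    raise AssertionError("assert {!r} == {!r}".format(__left, __right))
# On success execution just falls through: nothing is recorded, so a
# "passed assertion" hook would have to be added to the rewriter itself.
```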


3 Comments

Thanks for the answer, and sorry it took me a while to answer. After more research I finally decided to modify the rewriting, as you suggested. So far it is working pretty well, although I'm not very comfortable with the ast concepts. But I'm confident I'll figure it out!
If you're happy working on this, please consider making it general enough for a pull request; it would be a great feature to have. I was wondering if in general it's possible to call a hook for each assertion; the hook could get the test result and explanation. Probably best to discuss further on the pytest-dev mailing list, in #pylib on IRC, or in an issue/PR on bitbucket.
I'm done developing this feature, I've created a special hook for this purpose. It makes my reports richer. I'll create a pull request as soon as I get time to do it.

Not sure if I understand your requirements exactly, but another approach is to produce a text/XML file with the expected results of your processing: the first time you run the test, you inspect the file manually to ensure it is correct and store it with the test. Further test runs then produce a similar file and compare it with the stored one, failing if they don't match (optionally producing a diff for easier diagnosis).
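A bare-bones sketch of that idea (the file name and produce_report() are made-up placeholders for your own report generation):

```python
import pathlib

def produce_report():
    # Placeholder: replace with whatever produces your XML/text output.
    return "<report><case name='foo' outcome='passed'/></report>"

def test_report_matches_recorded_reference():
    expected_file = pathlib.Path(__file__).parent / "expected_report.xml"
    actual = produce_report()
    if not expected_file.exists():
        # First run: record the output so you can inspect and commit it.
        expected_file.write_text(actual)
    assert actual == expected_file.read_text()
```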

The pytest-regtest plugin uses a similar approach by capturing the output from test functions and comparing that with former runs.
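Roughly, a test using that plugin writes its output to the regtest fixture instead of asserting on it directly (see the plugin's documentation for the exact options; if I remember correctly, --regtest-reset records the current output as the new reference):

```python
# Sketch of typical pytest-regtest usage: whatever is written to the
# `regtest` fixture is compared against the output recorded earlier.
def test_numbers(regtest):
    result = sum(range(10))
    print("sum of 0..9 =", result, file=regtest)
```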

1 Comment

Thanks for the reply! The thing is that I really want my test process to be totally automated, so it's not very compatible with the manual inspection of a text file. I decided to modify pytest's rewriting instead, so that I can have introspection at the assert level rather than only at the test case level.
