
I have inherited some code that applies pytest.mark.skipif to a few tests. Reading through the pytest docs, I know I can add conditions, check environment variables, or use more advanced pytest.mark features to control groups of tests together. Unfortunately, nothing in the docs so far seems to solve my problem.

I'm looking to simply turn off all test skipping, without modifying any source code of the tests. I just want to run pytest in a mode where it does not honor any indicators for test skipping. Does such a solution exist in pytest?

  • Maybe write @pytest.mark.skip above the test case, for pytest to skip that test case. Commented May 10, 2019 at 13:29
  • The tests already have this. I am asking about how to disable that. I'm not asking how to disable or skip the test itself. I'm asking how to turn off skipping, so that no test can be skipped at all. It is for diagnostic purposes to examine why tests that are not skipped in a separate environment are failing. Running them locally is very hard because of the skipif annotations, but this type of debugging is rare so I want the skipif annotations to remain, and there is too much code to go and modify all of the places where skipif is used. I'd also strongly prefer not to monkeypatch. Commented May 10, 2019 at 13:34
  • Doing a global find and replace in your IDE shouldn’t be terribly difficult. Replace “skipif” with some word like “temp_enable” and it should work. Just put it back when you are done. Commented May 10, 2019 at 13:38
  • What's the condition for skipif? Commented May 10, 2019 at 13:39
  • @soundstripe I'd like this to be configurable, so that in the future if this type of debugging issue happens again, I can just easily re-run with no skipping. Needing to find/replace each time should be avoided if possible. Commented May 10, 2019 at 14:17

5 Answers

11

Create a conftest.py with the following contents:

import pytest
import _pytest.skipping


def pytest_addoption(parser):
    parser.addoption(
        "--no-skips",
        action="store_true",
        default=False, help="disable skip marks")


@pytest.hookimpl(tryfirst=True)
def pytest_cmdline_preparse(config, args):
    if "--no-skips" not in args:
        return

    def no_skip(*args, **kwargs):
        return

    _pytest.skipping.skip = no_skip

Then use --no-skips on the command line to run all test cases, even those decorated with pytest.mark.skip.
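For illustration, here is a minimal test file (file and test names are hypothetical) that the option affects; a plain pytest run reports both tests as skipped, while with the conftest.py above, pytest --no-skips should execute them:

```python
import pytest


# Unconditionally skipped under a normal pytest run.
@pytest.mark.skip(reason="unconditional skip")
def test_unconditional():
    assert True


# Skipped under a normal run because the condition is true.
@pytest.mark.skipif(True, reason="condition is true")
def test_conditional():
    assert True
```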



7

Here is a short working solution based on the answer from hoefling:

Add in your conftest.py:

from typing import Any, List
from typing_extensions import Final

NO_SKIP_OPTION: Final[str] = "--no-skip"

def pytest_addoption(parser):
    parser.addoption(NO_SKIP_OPTION, action="store_true", default=False, help="also run skipped tests")

def pytest_collection_modifyitems(config, items: List[Any]):
    if config.getoption(NO_SKIP_OPTION):
        for test in items:
            test.own_markers = [
                marker for marker in test.own_markers
                if marker.name not in ("skip", "skipif")
            ]

2 Comments

With the pytest_cmdline_preparse hook deprecated, this looks like the best answer.
This only works if the test method is marked with skip, not if the test class or module is marked. For that to work as well, we need to iterate over all nodes, i.e. for node in test.listchain(), and update node.own_markers accordingly.
4

A workaround to ignore skip marks is to remove them programmatically. Create a conftest.py with the following contents:

def pytest_collection_modifyitems(items):
    for item in items:
        for node in reversed(item.listchain()):
            node.own_markers = [m for m in node.own_markers if m.name not in ('skip', 'skipif')]

However, this messes with pytest internals and can easily break on pytest updates; the proper way of ignoring skips should be defining your custom skipping mechanism, for example:

import os

import pytest


@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    mark = item.get_closest_marker(name='myskip')
    if mark:
        condition = next(iter(mark.args), True)
        reason = mark.kwargs.get('reason', 'custom skipping mechanism')
        item.add_marker(
            pytest.mark.skipif(
                not os.getenv('PYTEST_RUN_FORCE_SKIPS', False) and condition,
                reason=reason,
            ),
            append=False,
        )

Annotate the tests with @pytest.mark.myskip instead of @pytest.mark.skip, and @pytest.mark.myskip(condition, reason=...) instead of @pytest.mark.skipif(condition, reason=...):

@pytest.mark.myskip
def test_skip():
    assert True


@pytest.mark.myskip(1 == 1, reason='my skip')
def test_skipif():
    assert True

On a regular run, myskip will behave the same way as pytest.mark.skip/pytest.mark.skipif. Setting PYTEST_RUN_FORCE_SKIPS will disable it:

$ PYTEST_RUN_FORCE_SKIPS=1 pytest -v
...
test_spam.py::test_skip PASSED
test_spam.py::test_skipif PASSED
...

Of course, you shouldn't use pytest.mark.skip/pytest.mark.skipif anymore, as they won't be influenced by the PYTEST_RUN_FORCE_SKIPS env var.


3

OK, pytest's implementation does not allow for this with zero modifications; you'll need a custom marker. Add the following to your conftest.py, then change all skipif marks to custom_skipif, and run with pytest --no-skips.

import pytest
# Note: these imports rely on pytest internals and are version-dependent
# (they existed around pytest 4.x/5.x).
from _pytest.mark.evaluate import MarkEvaluator
from _pytest.skipping import check_xfail_no_run

def pytest_addoption(parser):
    parser.addoption(
        "--no-skips", action="store_true", default=False, help="disable custom_skip marks"
    )

@pytest.hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    if item.config.getoption('--no-skips'):
        return

    # Check if skip or skipif are specified as pytest marks
    item._skipped_by_mark = False
    eval_skipif = MarkEvaluator(item, "custom_skipif")
    if eval_skipif.istrue():
        item._skipped_by_mark = True
        pytest.skip(eval_skipif.getexplanation())

    for skip_info in item.iter_markers(name="custom_skip"):
        item._skipped_by_mark = True
        if "reason" in skip_info.kwargs:
            pytest.skip(skip_info.kwargs["reason"])
        elif skip_info.args:
            pytest.skip(skip_info.args[0])
        else:
            pytest.skip("unconditional skip")

    item._evalxfail = MarkEvaluator(item, "xfail")
    check_xfail_no_run(item)

The implementation is copied and modified from pytest's own skipping.py.


1

An easy workaround is to monkeypatch pytest.mark.skipif in your conftest.py:

import pytest

old_skipif = pytest.mark.skipif

def custom_skipif(*args, **kwargs):
    return old_skipif(False, reason='disabling skipif')

pytest.mark.skipif = custom_skipif
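As the comments below suggest, the patch could be gated on an environment variable so skipping is only disabled on demand. A sketch of that variant (the PYTEST_RUN_FORCE_SKIPS name is borrowed from the earlier answer and is illustrative, not a pytest built-in):

```python
import os

import pytest

# Keep a reference to the original skipif mark generator.
_original_skipif = pytest.mark.skipif


def _optional_skipif(*args, **kwargs):
    # When the (illustrative) PYTEST_RUN_FORCE_SKIPS variable is set,
    # replace every skipif condition with False so nothing is skipped.
    if os.getenv("PYTEST_RUN_FORCE_SKIPS"):
        return _original_skipif(False, reason="skipif disabled")
    return _original_skipif(*args, **kwargs)


pytest.mark.skipif = _optional_skipif
```

Without the environment variable set, skipif marks behave as usual; with it set, every skipif condition is forced to False.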

3 Comments

In this test suite, there are several different conftest.py files, and things are configured to use them in a particular way. I'm afraid global-level monkeypatching would not work; I'd likely have to add and remove this monkeypatch from a dozen or so config files each time. If, at some point in the future, I can solve this problem by actually going in and modifying the test code, I could add something like this and make it check for an environment variable, so that the monkeypatch is only applied in certain conditions. But for the problem at hand this is intractable.
Why are you talking about applying it only in certain conditions if the question asks for turning off pytest.mark.skip everywhere? @ely
Because turning off mark.skip everywhere is required for the current problem, and the layout of multiple conftest.py files in test subdirectories makes your monkeypatching solution intractable. In a case where the monkeypatch approach was tractable, I would not use it directly as you coded it; if I were free to edit code that much, I'd instead add extra conditions, like an environment-variable flag, to allow controlling it externally.
