Most test frameworks follow a "runner mentality": you write test functions and hand them to a runner that controls discovery, execution order, and flow. The moment you need something dynamic (loop a test N times based on a runtime value, skip a test based on another's result), you're hunting for plugins. We present an alternative that is, in a way, a return to basics: treat your tests as a regular program. Dependencies become if statements. Retries become while loops. Parallel execution becomes a function argument. No plugins: just code, just Python. We use TestFlows (pip3 install testflows) to illustrate the idea, but the core argument is framework-independent: test code should have the same expressive power as production code. Have you hit the ceiling of your test runner? How do you handle dynamic test flows today?
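To make the idea concrete, here is a minimal, framework-independent sketch of "dependencies become if statements, retries become while loops." It uses no TestFlows API; the test names (`check_login`, `check_dashboard`) and the `run_with_retries` helper are hypothetical examples invented for illustration:

```python
# Tests are just plain functions that return True on success.
def check_login():
    return True

def check_dashboard():
    return True

def run_with_retries(test, attempts=3):
    """Retries become a while loop: keep calling the test
    until it passes or we run out of attempts."""
    while attempts > 0:
        if test():
            return True
        attempts -= 1
    return False

results = {}

# Dependencies become if statements: only run the dashboard
# check if the login check succeeded.
results["login"] = run_with_retries(check_login)
if results["login"]:
    results["dashboard"] = check_dashboard()
else:
    results["dashboard"] = None  # skipped, dependency failed

print(results)
```

Because the flow is ordinary Python, any control structure the language offers (loops over runtime data, conditionals on earlier results, passing a worker count to a function for parallelism) is available to the test program with no plugin layer in between.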