Merged
26 commits
ebee633
Update testing.md
andyblundell Mar 10, 2025
ba55229
Update structured-code.md
andyblundell Mar 10, 2025
2cef90f
Update structured-code.md
andyblundell Mar 10, 2025
30212ef
Update everything-as-code.md
andyblundell Mar 10, 2025
a8af75f
Update everything-as-code.md
andyblundell Mar 10, 2025
d70377c
Update everything-as-code.md
andyblundell Mar 10, 2025
1604345
Update principles.md
andyblundell Mar 10, 2025
1f1e6c3
Update principles.md
andyblundell Mar 10, 2025
fc9a2e5
Update principles.md
andyblundell Mar 10, 2025
f62c246
Update review.md
andyblundell Mar 10, 2025
41a7db0
Update review.md
andyblundell Mar 10, 2025
50687f6
Update patterns/everything-as-code.md
andyblundell Mar 10, 2025
c5f2130
Update practices/testing.md
andyblundell Mar 10, 2025
36a29b0
Update insights/review.md
andyblundell Mar 10, 2025
07db966
Add and improve the 'General testing principles' section
stefaniuk Mar 11, 2025
bbf0802
Remove reference to support
andyblundell Mar 11, 2025
3fce283
Update design for testability section
andyblundell Mar 12, 2025
223282f
Typos
andyblundell Mar 12, 2025
e725a91
Update testing.md
andyblundell Mar 12, 2025
1f78675
Update testing.md
andyblundell Mar 12, 2025
74df23b
Update testing.md
andyblundell Mar 12, 2025
b4821e1
Update testing.md
andyblundell Mar 12, 2025
829269f
Update testing.md
andyblundell Mar 12, 2025
e77e5a1
Update testing.md
andyblundell Mar 12, 2025
a50ea9c
Update testing.md
andyblundell Mar 12, 2025
0f7239c
Update testing.md
andyblundell Mar 12, 2025
6 changes: 5 additions & 1 deletion insights/review.md
@@ -167,7 +167,11 @@ You may wish to score each individual component or system separately for these a
#### 9. Testing

- We have great test coverage.
- Testing is everyone's responsibility.
- Testing is everyone's responsibility and a first-class concern.
- A failing test suite in CI gets immediate attention.
- We support all team members to practice good testing, including by holding no-blame sessions to discuss any automated tests we should have added, and what we can learn from having missed them initially.
- We build code for testability.
- Tests (both the test code itself and test coverage, including any gaps) are part of our standard peer-review process.
- Repetitive tests are automated.
- Testing is considered before each work item is started and throughout its delivery.
- We use the right mix of testing techniques including automated checks and exploratory testing.
3 changes: 3 additions & 0 deletions patterns/everything-as-code.md
@@ -70,6 +70,9 @@ While effective testing is the best way to detect bugs or non-functional problem
- Is the code clear and simple?
- Is the code layout and structure consistent with agreed style and other code? (please see [enforce code formatting](enforce-code-formatting.md))
- Would it easily allow future modification to meet slightly different needs, e.g. ten times the required data size or throughput?
- Is it [built for testability](../practices/structured-code.md)?
- Are the automated tests positioned appropriately in the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html), triggered appropriately in CI builds, and do they block the build when they fail?
- Are there any missing [automated tests](../practices/testing.md), e.g. edge-cases that have not yet been considered?
- Have the non-functional requirements been considered (performance, scalability, robustness, etc)?
- Are common security issues guarded against (e.g. [OWASP Top 10](https://owasp.org/www-project-top-ten/))? Including:
- Is any new input data being treated as potentially hostile?
9 changes: 9 additions & 0 deletions practices/structured-code.md
@@ -9,6 +9,15 @@
- These notes are part of a broader set of [principles](../principles.md)
- These practices should be read in conjunction with [architect for flow](../patterns/architect-for-flow.md)

## Benefits

The benefits of well-structured, clean code are profound and widespread. Some highlights:

- Promoting *maintainability* by generally making the code easier and safer to work on
- Supporting *building for testability*, which hugely reduces the risk and effort of practicing good testing

The above are fundamental to supporting the [little and often](../patterns/little-and-often.md) delivery approach, which itself has many benefits and is at the heart of this framework.

## Details

- Good code structure is essential for maintainability.
93 changes: 82 additions & 11 deletions practices/testing.md
@@ -23,32 +23,103 @@

## General testing principles

- **Design for testability**, and [shift testing left and right](https://www.redhat.com/en/topics/devops/shift-left-vs-shift-right)

Testing is most effective when it is baked into the system design and runs across the entire lifecycle, from development to production. Teams should build systems that are inherently testable and support both early validation ("shift left") and ongoing validation in live environments ("shift right"). Key practices:

- Shift left, aka test early
- Testing starts at the design and coding phase, not after.
- Pre-commit hooks, linting, static code analysis, and unit tests run locally before code even hits a branch.
- [Test-Driven Development (TDD)](https://www.thoughtworks.com/en-gb/insights/blog/test-driven-development-best-thing-has-happened-software-design) and [Behavior-Driven Development (BDD)](https://www.thoughtworks.com/en-gb/insights/blog/applying-bdd-acceptance-criteria-user-stories) encourage writing tests before or alongside code, ensuring clarity of requirements and better design.
- Test planning is informed by risk analysis and [architectural decisions](../any-decision-record-template.md) made early on.
- Design for testability
- Build systems as small units, each of which can be tested in isolation.
- Expose clear APIs, provide injection points for test doubles (mocks/stubs), and avoid tight coupling (see the sketch after this list).
- Feature toggles and dependency injection help test components in isolation without complex setups.
- Make non-functional testing (performance, security, resilience) a first-class concern, with hooks and controls to simulate adverse conditions.
- Design for reproducibility
- Tests should be idempotent and easily repeatable in any environment (local, test, staging, production).
- Shift right, aka test in production
- Testing does not stop at deployment: continuous validation in production is essential.
- Implement real-time monitoring, synthetic checks, health probes, and user behaviour tracking.
- Use canary deployments and feature flags to support testing changes as they are deployed.
- When safe to do so, employ chaos engineering to test system resilience under real-world failure conditions.
- Instrument systems to detect anomalies, performance degradation, or unexpected behaviours automatically, to support good-quality canary deployments.
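
As a minimal sketch of these ideas (all names below are illustrative, not from the framework), dependency injection gives a unit test a seam to substitute a deterministic test double for a real external dependency:

```python
from dataclasses import dataclass


class RateProvider:
    """A narrow interface satisfied by the real implementation and by test doubles."""

    def get_rate(self, currency: str) -> float:
        raise NotImplementedError


@dataclass
class PriceCalculator:
    rates: RateProvider  # injected, so tests can supply a stub instead of an HTTP client

    def price_in(self, amount_gbp: float, currency: str) -> float:
        return round(amount_gbp * self.rates.get_rate(currency), 2)


class StubRateProvider(RateProvider):
    """Test double: deterministic, no network, safe to run in any environment."""

    def get_rate(self, currency: str) -> float:
        return {"EUR": 1.2}[currency]


def test_price_in_converts_using_injected_rate():
    calculator = PriceCalculator(rates=StubRateProvider())
    assert calculator.price_in(10.0, "EUR") == 12.0
```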

In a high-throughput environment where deploying at least once a day is the norm, adhering to the design-for-testability principle is paramount. The benefits include: 1) *faster feedback loops* – early testing catches issues when they are cheapest to fix, while testing later in the cycle ensures real-world readiness; 2) *increased confidence* – testing at all stages validates assumptions, improves system reliability, and supports safe, frequent releases; and 3) *higher quality by design* – systems built for testability are easier to maintain, scale, and evolve.

- **Quality is the whole team's responsibility**
- Education on testing and testing principles should be a priority for the whole team.
- Quality approaches should be driven as a team and implemented by everyone.
- Teams should consider running regular coaching/mentoring sessions to support colleagues who are less experienced in testing to grow their skills, for example by:
- Holding no-blame group discussions to identify edge-case tests which have so far been missed and tests positioned incorrectly in the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
- Pairing a developer (driving) with a tester (navigating), so that the driver learns the necessary testing skills.
- Testing is a shared team concern, not a tester’s job alone. Developers own testing for their code.

- **Combining business knowledge with testing knowledge yields better quality outcomes**
- Include business knowledge and critical thinking as part of assurance
- Intrinsic knowledge and mindset of the team is key to driving quality outcomes
- Include business knowledge and critical thinking as part of technical assurance.
- Intrinsic knowledge and mindset of the team is key to driving quality outcomes.

- **Testing is prioritised based on risk**
- A testing risk profile is defined and understood by the whole team, including the customer
- Risk appetite should be worked across the whole team, including customers and/or users
- Solution Assurance risks and Clinical Safety hazards must also be considered when prioritising risks
- A testing risk profile is defined and understood by the whole team, including the customer.
- Risk appetite should be agreed across the whole team, including customers and/or users.
- Assurance risks and Clinical Safety hazards must also be considered when prioritising risks.

- **Testing is context driven**
- Context should be considered when deciding on test techniques and tools to use.

- **Test data management is a first-class citizen**

Frequent deployments require reliable and consistent test environments. Data drift or stale data can undermine test confidence.

- Test data should be easy to generate, isolate and reset.
- Use factories, fixtures or synthetic data generation.
- Make sure that you can generate test data of a scale and complexity representative of the production system, so that performance and exploratory testing are realistic (see the sketch below).
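
For example, a small synthetic data factory (a hypothetical sketch; the field names are illustrative) keeps test data reproducible, easy to isolate per test, and cheap to generate at production-like scale:

```python
import random
import string


def make_patient(seed: int, **overrides) -> dict:
    """Generate a deterministic synthetic patient record.

    Seeding makes runs reproducible; overrides let each test isolate
    the one field it cares about.
    """
    rng = random.Random(seed)
    record = {
        "nhs_number": "".join(rng.choices(string.digits, k=10)),
        "age": rng.randint(0, 100),
        "postcode": "ZZ99 9ZZ",  # clearly synthetic placeholder
    }
    record.update(overrides)
    return record


# Reproducible: the same seed always yields the same record.
assert make_patient(seed=1) == make_patient(seed=1)

# Scales up for realistic performance and exploratory testing.
cohort = [make_patient(seed=i) for i in range(100_000)]
```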

- **Consistent, CLI-driven test execution across all environments**

Tests and test processes should execute consistently in every environment, ranging from local developer workstations to cloud-based CI/CD pipelines. Using a CLI-driven approach ensures standardisation, portability and reliability.

- Command-line interface (CLI) as the default test runner
- All tests (unit, integration, functional, performance) must be executable through straightforward, repeatable CLI commands.
- Ensure a single, consistent command can run the complete test suite, facilitating rapid local and remote execution, e.g. `make test` (see the sketch after this list)
- Consistent environment configuration
- Clearly defined and documented dependencies ensure that test environments are reproducible, reducing "it works on my machine" scenarios.
- Use Infrastructure as Code (IaC) or containerised test environments (e.g. Docker) to guarantee identical configurations between local machines and cloud pipelines.
- Reproducibility and portability
- Tests must behave identically when run locally and remotely. No tests should rely on hidden state, manual configuration, or proprietary local tooling.
- Standardise environment configuration through version-controlled configuration files or scripts, enabling teams to replicate exact test runs on any workstation or CI/CD environment effortlessly.
- Dependency isolation and management
- Dependencies should be explicitly declared and managed using tools appropriate to your technology stack (e.g. Python’s requirements.txt, Node’s package.json, etc.). Use these tools to ensure that specific versions are locked.
- Employ dependency management tools (e.g. virtual environments, containers, package managers) to enforce consistency.
- Environment parity between development and production
- Aim to eliminate differences between local, staging and production environments. Running tests consistently across environments ensures that deployment to production is predictable and low-risk.
- Teams regularly validate environment parity through automated checks or smoke tests.
- Clear and consistent documentation
- Standardised CLI test commands and environment setups must be clearly documented (e.g. README.md) and version-controlled.
- Onboarding documentation should guide new developers to execute the same tests consistently across their local and cloud environments.
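
One way to realise this, assuming a Python project tested with pytest (the file name and flags below are illustrative), is a single version-controlled entry point that behaves identically on a workstation and in a CI pipeline; a `make test` target would simply invoke it:

```python
#!/usr/bin/env python
"""run_tests.py: one consistent, CLI-driven entry point for the whole test suite."""
import sys

import pytest  # a declared, version-locked dependency


def main() -> int:
    # No hidden state or manual configuration: everything the run needs
    # is passed explicitly, so local and CI runs are identical.
    return int(pytest.main(["tests/", "-q", "--strict-markers"]))


if __name__ == "__main__":
    sys.exit(main())
```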

- **Validate continuously through observability**

Effective testing does not stop once software reaches production. By integrating [observability](observability.md) into testing, teams gain real-time insights and continuously validate system behaviour under real-world conditions. Observability-driven testing means using telemetry data, such as metrics, logs, traces and user analytics, to shape the test approach, validate assumptions, detect regressions early and drive continuous improvement.

Applying this principle reduces mean time to detection and recovery (improving reliability), enables teams to validate assumptions using real data rather than guesswork, and enhances the quality of future releases by continuously learning from real-world usage patterns. It also increases confidence when releasing frequently, knowing production issues can be quickly identified, understood and addressed.
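
As one concrete form of this, a scheduled synthetic check (the endpoint and thresholds below are hypothetical) can continuously assert availability and latency in production:

```python
import time
import urllib.request

HEALTH_URL = "https://service.example.nhs.uk/health"  # hypothetical endpoint


def synthetic_health_check(timeout_s: float = 2.0, max_latency_s: float = 0.5) -> None:
    """Fail loudly if the service is down or unexpectedly slow."""
    started = time.monotonic()
    with urllib.request.urlopen(HEALTH_URL, timeout=timeout_s) as response:
        assert response.status == 200, f"unexpected status {response.status}"
    latency = time.monotonic() - started
    assert latency <= max_latency_s, f"health check took {latency:.2f}s"
```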

- **Testing is assisted by automation**
- Appreciate that not everything can be automated
- Identify good candidates for automation - particular focus on high risk and repeatable areas
- Automated tests should be used to provide confidence to all stakeholders. This includes test analysts themselves who should be familiar with what the tests are doing to allow them to make decisions on what they want to test.

Test automation is critical for maintaining rapid, frequent deployments while consistently ensuring quality. It provides scalable confidence in software changes, reduces repetitive manual effort, and frees people for high-value exploratory testing. Automated testing should be seen as a core enabler of the development workflow, particularly when combined with a robust approach to design for testability.

- Appreciate that not everything can be automated; however, automated testing, supported by intentional design for testability, increases delivery speed, confidence and adaptability.
- Identify good candidates for automation, with a particular focus on high-risk and repeatable areas.
- Automated tests should be used to provide confidence to all stakeholders. This includes test analysts themselves who should be familiar with what the tests are doing to allow them to make decisions on what they want to test.
- Automated testing defines clear, technology-neutral contracts and behaviours. This provides stable reference points when migrating or re-implementing systems in new languages or platforms. Automated contract tests (e.g. consumer-driven contract tests) enable safe technology swaps, helping confirm system compatibility across evolving stacks (see the sketch after this list).
- Automated test packs should be maintained regularly to ensure they have suitable coverage, are efficient, and provide correct results.
- Consider using testing tools to enhance other test techniques.
- Eg. using record and play tools to aid exploratory UI testing
- Eg. using API testing tools to aid exploratory API testing
- Consider using testing tools to enhance other test techniques, e.g.
- using record and play tools to aid exploratory UI testing,
- using API testing tools to aid exploratory API testing.
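
To illustrate the contract-testing point above (a hedged sketch, not tied to any particular contract-testing library; the fields are invented), a consumer can pin the response shape it relies on and check any provider implementation, in any language, against it:

```python
import json

# The consumer's expectations of the provider's response, kept under
# version control so re-implementations can be verified against them.
EXPECTED_CONTRACT = {
    "id": str,
    "status": str,
    "items": list,
}


def check_contract(payload: str) -> None:
    """Verify a provider response satisfies the consumer's contract."""
    body = json.loads(payload)
    for field, expected_type in EXPECTED_CONTRACT.items():
        assert field in body, f"missing field: {field}"
        assert isinstance(body[field], expected_type), f"wrong type for {field}"


check_contract('{"id": "o-1", "status": "DISPATCHED", "items": []}')
```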

- **Testing should be continually improved**
- [Peer reviews](../patterns/everything-as-code.md#code-review) must consider tests as a first-class concern. This includes tests that are present or have been added (e.g. whether they are positioned appropriately in the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html) and triggered appropriately in CI builds) and any tests that are missing, e.g. edge-cases not yet considered

- **Testing is continuous**
- Testing is a continuous activity, not a phase of delivery
2 changes: 1 addition & 1 deletion principles.md
@@ -48,7 +48,7 @@ The following practices support the principle of building quality in.

**Pair programming**. Avoid quality issues by combining the skills and experience of two developers instead of one. Take advantage of navigator and driver roles. Also consider cross-discipline (e.g. dev-test) pairing.

**[Test automation.](practices/testing.md)** Use test-driven development: Write the tests hand in hand with the code it is testing to ensure code is easily testable and does just enough to meet the requirements.
**[Test automation](practices/testing.md) and [building for testability](practices/structured-code.md)**. Use test-driven development: write the tests hand in hand with the code they are testing to ensure the code is easily testable and does just enough to meet the requirements.

**[Protect code quality](patterns/everything-as-code.md)** to keep code easy to maintain.
