
# Guided: Implementing the Red-Green-Refactor Cycle in TDD
Red. Green. Refactor. These three words describe one of the most powerful habits in modern software development. In this Code Lab, you’ll learn how to apply them effectively using Python and Flask. Whether you’ve heard of Test-Driven Development (TDD) or have struggled to apply it in real projects, this hands-on lab gives you a clear, guided introduction to the Red-Green-Refactor cycle. You’ll write failing tests, make them pass with minimal code, and then safely refactor to improve design—all while working on a realistic feature in a Flask app. By the end, you'll not only understand the what and why of TDD, but you’ll also experience how it leads to better, cleaner code through rapid feedback and confident iteration. No theory dump—just a working app, focused tasks, and a habit you can take back to your own codebase.

## Introduction
In fast-moving teams, every new feature risks breaking yesterday's code. Test-Driven Development (TDD) is one approach to minimizing that risk. You start with a failing test, make it pass with the simplest code that works, and then refactor safely, knowing your green test suite has you covered.
In this lab, you'll practice that cycle inside a small Flask API for managing tasks. The API already lets you create tasks and has a solid test suite. Your mission is to add tagging support, working strictly through the Red-Green-Refactor cycle.
You'll begin with a runnable project that has all tests passing. From there, you'll:
- Run the test suite and see "green" initially
- Write a new failing test that describes the tagging behavior you want
- Implement just enough code to make it pass
- Refactor confidently while tests guard against regressions
- Repeat the cycle for an edge case, reinforcing the habit
By the end, you'll have experienced how tight feedback loops lead to cleaner, more reliable code, while still allowing you to confidently ship real features.
---
## Run the Application and Observe the Current State
Important: Run the following commands before proceeding with the rest of the lab.
In this step, you'll set up the Flask application, run the test suite, and observe that all tests are currently passing.
- Open the Terminal
- Activate the virtual environment:

  ```bash
  source .venv/bin/activate
  ```
- Confirm that the dependencies are all installed correctly:

  ```bash
  pip install -r requirements.txt
  ```
- Run the test suite:

  ```bash
  pytest
  ```
You should see 24 passing tests:
```
============================= 24 passed in 0.15s =============================
```
The green output shows you're starting from a solid foundation. Among other things, the tests verify that:
- Tasks are created properly
- The `tags` field is instantiated as an empty list
Feel free to have a look at the other tests to see the full range of things tested.
This is the baseline functionality. You're now ready to add new features using TDD!
---
## What is TDD?
Test-Driven Development (TDD) is a software development approach where tests are written before the code that makes them pass. TDD is not just about testing, but about driving the design and development of your code through a rapid, iterative cycle.
### The TDD Cycle
TDD is often described as a short, repeating cycle of three steps:
- Red: Write a test for a new feature or behavior. The test should fail because the feature does not exist yet.
- Green: Write the minimal amount of code necessary to make the test pass.
- Refactor: Clean up the code, improving its structure and readability, while ensuring all tests still pass.
This cycle is repeated for each new piece of functionality.
### TDD as a Design Approach
- Design by Example: In TDD, you design your code by specifying its behavior through tests. You write tests as if the API or function you want already exists, describing how it should behave.
- Incremental Development: You build your system one small piece at a time, always guided by the next failing test.
- Feedback Loop: The cycle provides immediate feedback, helping you catch mistakes early and ensuring your code does what you intend.
### Writing Tests “As If the API Already Exists”
When practicing TDD, you start by imagining how you want to use your code. You write tests that call functions, methods, or endpoints as if they already exist, specifying the expected inputs and outputs. This helps you:
- Focus on the interface and behavior, not the implementation
- Clarify requirements before writing code
- Ensure your code is testable and meets real needs
Example:

```python
def test_can_create_a_task(client):
    response = client.post('/tasks', json={'title': 'Buy milk'})
    assert response.status_code == 201
    assert response.get_json()['title'] == 'Buy milk'
```
You write this test before implementing the `/tasks` endpoint. The test describes what the API should do, and then you write just enough code to make it pass.
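To make that concrete, the simplest code that could satisfy this test might look like the sketch below. This is an illustration only; the lab's actual app is more complete, and the in-memory `tasks` list here is an assumption:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
tasks = []  # naive in-memory store, just enough to make the test pass


@app.route('/tasks', methods=['POST'])
def create_task():
    # Build a task from the request body and return it with a 201 status
    data = request.get_json()
    task = {'id': len(tasks) + 1, 'title': data['title'], 'tags': []}
    tasks.append(task)
    return jsonify(task), 201
```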
In summary:
TDD is a disciplined way to develop software by writing tests first, letting those tests guide your design, and building your code in small, safe steps. It helps you create reliable, maintainable, and well-designed software.
## Red: Write a New Failing Test
### Problem Statement: Add Tagging Capability to Tasks
Your goal is to add the ability to tag tasks with one or more labels (e.g., 'urgent', 'work', 'reading'). While we already have an update route that allows updating the entire task object, a dedicated tagging feature is valuable for several reasons:
- Granularity: Tagging is a common, focused operation that users may want to perform without changing other task details.
- Convenience: A dedicated endpoint for adding tags makes the API easier to use and understand, especially for clients that only want to modify tags.
- Safety: Updating only the tags reduces the risk of accidentally overwriting other task fields.
- Extensibility: A tagging endpoint can be extended with additional logic (e.g., validation, deduplication, audit logging) without affecting the general update logic.
Now that you have a solid foundation with all tests passing, it's time to add new functionality using TDD. You'll start by writing a test that describes the tagging behavior you want to implement.
In this step, you'll write a test that describes adding a single tag to a task. This test will fail initially (red phase), but it clearly defines the behavior you want to implement next.
The key principle here is: write the test first, then implement the feature. This ensures your code is always testable from the start and that you're building exactly what's needed.
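For example, your first test might look something like the sketch below. The `client` fixture comes from the existing suite and the `POST /tasks/{id}/tags` endpoint is the one you'll build; the exact response shape (a 200 status and the updated task as JSON) is an assumption you'll pin down as you go:

```python
def test_can_add_a_single_tag_to_a_task(client):
    # Arrange: create a task using the existing, working endpoint
    response = client.post('/tasks', json={'title': 'Buy milk'})
    task_id = response.get_json()['id']

    # Act: call the tagging endpoint that does not exist yet
    response = client.post(f'/tasks/{task_id}/tags', json={'tag': 'urgent'})

    # Assert: the updated task comes back with the new tag
    assert response.status_code == 200
    assert 'urgent' in response.get_json()['tags']
```

Run `pytest` and watch this test fail: that failure is the Red phase doing its job.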
## Green: Make the Tests Pass
Now that you have a failing test describing the behavior you want, it's time to implement the minimal code needed to make it pass. This is the Green phase of TDD.
The key principle here is: write the simplest code that makes the tests pass.
Don't worry about perfect design or edge cases yet—just get to green. You can always refactor later when you have the safety net of passing tests.
Since your tests are calling API endpoints (such as `POST /tasks/{id}/tags`), you'll need to implement both:
- The service logic in `tag_service.py` to perform the actual tagging operations
- The route in `routes.py` to handle the incoming HTTP requests
In this step, you'll add the tagging endpoint and implement the tagging functionality, focusing on making both tests pass using minimal, working code.
## Congratulations! You've Completed the Green Phase
Great work! You've successfully implemented the minimal code needed to make your tests pass—a significant milestone in the TDD cycle.
### What You've Accomplished
You now have a working, testable implementation that includes:
✅ The `add_tag` function in `tag_service.py`, which:
- Handles tasks with no existing tags
- Appends new tags to existing tag lists
- Preserves all other task fields
- Returns the updated task

✅ The tagging route in `routes.py`, which:
- Accepts `POST` requests to `/tasks/{id}/tags`
- Validates input and handles errors
- Uses the tag service to perform the actual tagging
- Returns the updated task as JSON

✅ All tests are now passing—you have a working, testable implementation!
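For reference, a minimal implementation along these lines might look like the following sketch. The `tasks` store, the import path, and the function names are assumptions; your code will reflect the lab's actual project layout:

```python
# tag_service.py (sketch)
def add_tag(task, tag_data):
    if "tags" not in task:
        task["tags"] = []  # handle tasks with no existing tags
    task["tags"].append(tag_data["tag"])  # append the new tag
    return task  # all other task fields are preserved
```

```python
# routes.py (sketch; `tasks` and the app object are stand-ins for the lab's real ones)
from flask import Flask, request, jsonify, abort

from tag_service import add_tag  # hypothetical import path

app = Flask(__name__)
tasks = {}  # stand-in for the lab's real task store, keyed by id


@app.route('/tasks/<int:task_id>/tags', methods=['POST'])
def add_tag_to_task(task_id):
    data = request.get_json()
    if not data or 'tag' not in data:
        abort(400)  # validate input: a "tag" field is required
    task = tasks.get(task_id)
    if task is None:
        abort(404)  # no task with this id
    return jsonify(add_tag(task, data)), 200
```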
### The Confidence Factor
Having passing tests gives you confidence that your code behaves exactly as intended. This is one of the core benefits of TDD:
- Safety Net: You can now refactor and improve your code without worrying about breaking existing functionality.
- Documentation: Your tests serve as living documentation of how your code should behave.
- Regression Prevention: Any future changes that break the expected behavior will be caught immediately.
## Refactor: Improve Your Code
The key principle with refactoring is to improve the code without changing its behavior. Your tests act as your guardian, ensuring that any refactoring doesn't break the existing functionality.
### What You'll Accomplish
In this step, you'll review your implementation and make improvements to:
- Code structure and organization
- Variable and function naming
- Error handling and edge cases
- Performance and efficiency
- Code readability and maintainability
## Your Task
- Review your implementation in `tag_service.py` and look for opportunities to improve:
  - Naming: Are variable and function names clear and descriptive?
  - Structure: Can you extract helper functions for better organization?
  - Readability: Is the logic easy to follow?
  - Duplication: Is there any repeated code that can be extracted?
- Make small, incremental improvements to your code.
- After each change, run `pytest -q` to ensure tests still pass.
- Continue until you're satisfied with the code quality.

### Refactoring Suggestions
Here are some potential improvements you might consider:
#### Extract Helper Functions

```python
def _ensure_tags_list_exists(task):
    """Ensure the task has a tags list initialized."""
    if "tags" not in task:
        task["tags"] = []


def _add_single_tag(task, tag):
    """Add a single tag to the task."""
    task["tags"].append(tag)


def add_tag(task, tag_data):
    """Add a tag to a task."""
    _ensure_tags_list_exists(task)
    tag_value = tag_data["tag"]
    _add_single_tag(task, tag_value)
    return task
```
#### Improve Variable Names

```python
def add_tag(task, tag_data):
    if "tags" not in task:
        task["tags"] = []
    tag_value = tag_data["tag"]  # More descriptive than just "tag"
    if isinstance(tag_value, list):
        task["tags"].extend(tag_value)
    else:
        task["tags"].append(tag_value)
    return task
```
#### Add Type Hints (Optional)

```python
from typing import Dict, Union, List, Any


def add_tag(task: Dict[str, Any], tag_data: Dict[str, Union[str, List[str]]]) -> Dict[str, Any]:
    # ... implementation
```
### Refactoring Principles
- Make one change at a time: This makes it easier to identify what broke if tests fail.
- Run tests frequently: After each small change, confirm that all tests still pass.
- Focus on readability: Code should be as self-explanatory as possible.
- Don't add features: Refactoring is about improving existing code, not adding new functionality.
### TDD Principle Applied
This demonstrates the "Refactor" phase: improving code quality while maintaining the safety net of passing tests. Refactoring is safe because you know the tests will catch any regressions.
## Repeat the Cycle for an Edge Case
Now you've completed one full Red-Green-Refactor cycle—but TDD is a continuous process. In this final step, you'll add support for multiple tags by going through the entire cycle again.
This reinforces the TDD habit and shows how the cycle scales to handle additional requirements. You'll start with a new failing test (Red), implement the minimal solution (Green), and then refactor if needed (Refactor).
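As a starting point, the new Red test might be a sketch like this one. The request shape (a list under the `tag` key) is an assumption that matches the list-handling variant shown in the refactoring suggestions above:

```python
def test_can_add_multiple_tags_at_once(client):
    # Arrange: create a fresh task
    response = client.post('/tasks', json={'title': 'Plan trip'})
    task_id = response.get_json()['id']

    # Act: send several tags in one request
    response = client.post(f'/tasks/{task_id}/tags',
                           json={'tag': ['travel', 'planning']})

    # Assert: all tags were added
    assert response.status_code == 200
    assert response.get_json()['tags'] == ['travel', 'planning']
```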
The key insight here is that TDD is iterative—each cycle builds on the previous one, and the safety net of tests grows stronger with each iteration.
## Refactor Without Changing Behavior
Your tagging code now works, but “working” isn’t the finish line in TDD. Instead, you should aim for clean, readable code.
In this step, you'll keep every test green while polishing the implementation.
## Your Task
- Open `app/tag_service.py` and read the function as if you’ve never seen it before.
- Improve the code without altering its public behavior:
  - Naming – Choose clearer variable and helper names.
  - Structure – Extract small helpers (e.g., `_ensure_tags_list(task)` or `_normalize_to_list(tag)`).
  - Documentation & Type Hints – Add docstrings or typing annotations if they add clarity.
  - Readability – Flatten nested `if` statements, split long lines, or add early returns where helpful.
- After each small change, run `pytest -q` to ensure all tests remain green.
- Stop refactoring once you're confident the code is easier to read and maintain.
## Outcome
- `pytest` still exits with `0` (all dots green).
- `app/tag_service.py` no longer contains obvious duplication or unclear logic.
Remember: No new features yet—you’re only polishing. Your passing tests are the safety net that allows you to refactor fearlessly.
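To give a feel for the target, here is one possible refactored shape using the helper names suggested above. It is a sketch under those assumptions, not the required solution:

```python
# app/tag_service.py (one possible refactored shape)
from typing import Any, Dict, List, Union


def _ensure_tags_list(task: Dict[str, Any]) -> None:
    """Initialize the tags list if the task does not have one yet."""
    task.setdefault("tags", [])


def _normalize_to_list(tag: Union[str, List[str]]) -> List[str]:
    """Wrap a single tag in a list so both input shapes are handled uniformly."""
    return tag if isinstance(tag, list) else [tag]


def add_tag(task: Dict[str, Any], tag_data: Dict[str, Union[str, List[str]]]) -> Dict[str, Any]:
    """Add one or more tags to a task and return the updated task."""
    _ensure_tags_list(task)
    task["tags"].extend(_normalize_to_list(tag_data["tag"]))
    return task
```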
## What's Next?
With all your tests passing, you now have a robust tagging system built entirely through TDD. You can continue applying this pattern to add more features such as:
- Duplicate tag handling
- Tag validation (e.g., no empty tags, length limits)
- Tag removal functionality
- Tag search and filtering
- Tag statistics
Each new feature would follow the same Red-Green-Refactor cycle, building a comprehensive test suite and a reliable codebase.
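For instance, a Red step for duplicate tag handling might start with a sketch like this; the deduplication behavior itself is an assumption for you to decide on:

```python
def test_adding_a_duplicate_tag_does_not_create_duplicates(client):
    response = client.post('/tasks', json={'title': 'Read book'})
    task_id = response.get_json()['id']

    # Add the same tag twice
    client.post(f'/tasks/{task_id}/tags', json={'tag': 'reading'})
    response = client.post(f'/tasks/{task_id}/tags', json={'tag': 'reading'})

    # The tag should appear only once
    assert response.get_json()['tags'].count('reading') == 1
```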