Note: This is a case study on using AI coding agents in tandem with Test-Driven Development (TDD) to build an e-commerce API. You can read an in-depth discussion of using AI coding agents in tandem with TDD in the link provided.
Introduction
AI coding agents are powerful tools that can help development teams ship results faster. However, without robust guidelines and well-defined best practices, they can produce code that is difficult to maintain and understand. In this case study, we will explore how projects can benefit from using AI coding agents in tandem with Test-Driven Development (henceforth TDD) and other best practices to keep the code maintainable, readable, and easy to understand.
For convenience, I will assume that you use either VSCode or Cursor as your IDE.
Project Overview
Consider the following project ticket:
# Shopping cart mvp
## User Stories
### Shopping Cart
- As a shopper, I want to add items to my cart with a specific quantity so I can collect products before purchasing
- As a shopper, I want to view my cart so I can see what items I have selected
- As a shopper, I want to change item quantities in my cart so I can adjust my order size
- As a shopper, I want to remove one or multiple items from my cart so I can eliminate unwanted products
- As a shopper, I want to checkout my cart so I can complete my purchase and get an order number
### Order Management
- As a customer, I want to track my order status so I know when to expect delivery
- As a customer, I want to return my order so I can get my money back for unwanted items
- As a customer, I want to track my return status so I know when my refund is coming
## Functional Requirements
### Cart Operations
- Add item by itemId with quantity (positive integers only)
- View cart items with their quantities
- Update quantity for existing cart items
- Delete single items or multiple items in one operation
- Checkout cart and receive order confirmation with orderId
- Handle edge cases: invalid items, zero/negative quantities
### Order Tracking
- Track order status by orderId (processing → shipped → delivered)
- Display estimated delivery dates and tracking information
- Initiate returns with reason codes
- Track return status (requested → approved → refunded)
- Prevent returns on ineligible orders
Most teams benefit from having a senior member or leader distill success criteria and agreements from the project definition. AI coding agents are no different. However, while a human developer can hold the broader context of the project, AI coding agents suffer from context dilution the further they get from the initial prompt. This shortcoming can be mitigated by persisting those success criteria and agreements into the project.
Setting Up the Project
In this case study, I will implement the project using NestJS. I will also use the following tools and development practices:
- Devcontainers to enable seamless interaction with the code on the fly.
- Docker to orchestrate the API service and the database
You can find the starting point for this project in the repository project link.
If you want to see specific details about the setup of the project, you can find them in the appendix at the end of this article.
Tests, Rules and Instructions
Providing Agreements and Best Practices
While success criteria and functional expectations can be readily expressed through tests, some business logic, technical specifications, and best practices are harder to express in tests. For example, if you want your AI coding agent to follow Domain Driven Design (DDD) principles, it would be quite cumbersome to express them within tests. Another example is when different systems have different naming conventions. Consider the case of developing in TypeScript or Java, where the case convention is camelCase, but using a Postgres database where the convention is snake_case. In these cases, it is better to use rules, instructions and restrictions to guide the AI coding agent.
Modern AI-enabled IDEs such as VSCode (Copilot) and Cursor allow you to include rules (as it is named in Cursor) or instructions (as it is named in VSCode). These rules provide a semantic framework for the AI coding agent to work with.
Note: In the project structure, you'll find rules in two locations:
- `.cursor/rules/` (for the Cursor IDE)
- `.github/instructions/` (for GitHub Copilot in VSCode)
Both contain the same content but are placed in different directories to support both IDEs.
The rules we will use are:
- `constants`: Defines where and how environment variables and constants are used in the project.
- `controller-testing`: Specifies how to write tests for controllers, ensuring they are properly isolated and cover all edge cases.
- `design-patterns`: Specifies architectural patterns and conventions for implementing features (DDD, Use Cases, etc.).
- `dev-environment`: Gives general context of the development environment, such as the use of Docker and devcontainers.
- `entity-naming`: Provides naming conventions for entities, ensuring consistency across the codebase.
- `project-info`: Provides the project ticket.
- `project-structure`: Provides a tree structure of the project, helping AI agents understand where to place new files and how to organize code.
- `test-protection`: Prohibits the modification of test files by AI agents, ensuring that tests remain stable and reliable.
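As an illustration, a rule is just a markdown file, optionally with front matter that tells the IDE when to apply it. The sketch below shows what an `entity-naming` rule *might* contain; the exact wording here is an assumption for illustration, not the repository's actual rule:

```markdown
---
description: Naming conventions for entities
alwaysApply: true
---

# Entity Naming

- TypeScript entity classes use PascalCase with an `Entity` suffix (e.g. `ItemEntity`).
- TypeScript properties use camelCase (e.g. `stockQuantity`).
- Database tables and columns use snake_case (e.g. `stock_quantity`);
  map them explicitly, e.g. `@Column({ name: 'stock_quantity' })`.
```

Concrete examples like these tend to work better than abstract statements such as "use consistent naming", since the agent can pattern-match against them.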
Translating Functional Requirements to Success Criteria and API functionality
Let's review the functional requirements for the cart:
### Cart Operations
- Add item by itemId with quantity (positive integers only)
- View cart items with their quantities
- Update quantity for existing cart items
- Delete single items or multiple items in one operation
- Checkout cart and receive order confirmation with orderId
- Handle edge cases: invalid items, zero/negative quantities
These requirements can be translated into the following success criteria:
- A user can view the cart and see all items with their respective quantities.
- A user can add an item to the cart by providing a valid itemId and a positive integer quantity.
- A user can update the quantity of an existing item in the cart.
- A user can delete a single item or multiple items from the cart in one operation.
- A user can checkout the cart and receive an order confirmation with a valid orderId.
- The system handles edge cases such as invalid items and zero/negative quantities gracefully, returning appropriate error messages.
Out of these success criteria, we can derive the following API functionality:
Note: For simplicity's sake, we are going to pass the userId either as a path parameter or in the request body. However, in a real-world scenario, it would likely be parsed from a JWT header.
| # | HTTP Method | Endpoint | Body | Description |
|---|---|---|---|---|
| 1 | GET | `/cart/:userId` | - | Retrieve the current user's cart with all items and their quantities. |
| 2 | POST | `/cart/:userId/items` | `{ itemId: string, quantity: number }` | Add an item to the cart with a specific quantity. |
| 3 | PUT | `/cart/:userId/items/:itemId` | `{ quantity: number }` | Update the quantity of an existing item in the cart. |
| 4 | DELETE | `/cart/:userId/items` | `{ itemIds: string[] }` | Delete items from the cart. |
| 5 | POST | `/cart/:userId/checkout` | - | Checkout the cart and receive an order confirmation with an orderId. |
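The request and response bodies in this table can be pinned down as TypeScript contracts before any implementation exists. The DTO names below are assumptions for illustration; the agent may well generate different ones:

```typescript
// Hypothetical DTO shapes derived from the endpoint table above.
interface AddItemDto {
  itemId: string;
  quantity: number; // positive integer, validated server-side
}

interface UpdateQuantityDto {
  quantity: number;
}

interface RemoveItemsDto {
  itemIds: string[]; // supports deleting multiple items in one operation
}

interface CheckoutResponse {
  orderId: string;
}

// Example payloads that conform to the contracts:
const addItem: AddItemDto = { itemId: 'item-1', quantity: 2 };
const removeItems: RemoveItemsDto = { itemIds: ['item-1', 'item-2'] };
const checkout: CheckoutResponse = { orderId: 'order-42' };
```

Writing the contracts down this way keeps the tests and the eventual implementation anchored to the same shapes.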
With the endpoints defined, we can now proceed to write a test suite combining the endpoints into a user flow while respecting the success criteria. Let's write the following flow:
- A user adds an item to their cart
- A user updates the quantity of an item in their cart
- A user adds two additional items to their cart
- The user removes two items from their cart
- The user checks out their cart
// tests/cart.e2e-spec.ts
// ... setup boilerplate code ...
describe('Cart Lifecycle', () => {
it('POST /cart/{userId}/items - should add a first item to the cart', async () => {
const itemToAdd = seedItems.find(i => i.stockQuantity > 5) as ItemEntity;
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/items`)
.send({itemId: itemToAdd.id, quantity: 2 });
expect(response.status).toBe(201);
});
it('GET /cart/{userId} - should retrieve the cart with one item', async () => {
const itemInCart = seedItems.find(i => i.stockQuantity > 5) as ItemEntity;
const cartResponse = await request(app.getHttpServer())
.get(`/cart/${testUser.id}`)
.expect(200);
expect(cartResponse.body.items).toHaveLength(1);
expect(cartResponse.body.items[0].id).toBe(itemInCart.id);
expect(cartResponse.body.items[0].quantity).toBe(2);
});
it('PUT /cart/{userId}/items/{itemId} - should update the quantity of an item', async () => {
const itemToUpdate = await request(app.getHttpServer()).get(`/cart/${testUser.id}`)
.send()
.then(res => res.body.items[0]);
await request(app.getHttpServer())
.put(`/cart/${testUser.id}/items/${itemToUpdate.id}`)
.send({ quantity: 3 })
.expect(200);
// Verify the update by fetching the cart again
const updatedCart = await request(app.getHttpServer()).get(`/cart/${testUser.id}`);
expect(updatedCart.body.items[0].quantity).toBe(3);
});
it('DELETE /cart/{userId}/items - should remove items from cart', async () => {
// Add two items to the cart first
const seedItem = await request(app.getHttpServer()).get(`/cart/${testUser.id}`)
.send()
.then(res => res.body.items[0]);
const item1 = seedItems[2];
const item2 = seedItems[3];
await request(app.getHttpServer()).post(`/cart/${testUser.id}/items`).send({ userId: testUser.id, itemId: item1.id, quantity: 1 });
await request(app.getHttpServer()).post(`/cart/${testUser.id}/items`).send({ userId: testUser.id, itemId: item2.id, quantity: 1 });
// Remove two items
await request(app.getHttpServer())
.delete(`/cart/${testUser.id}/items/`)
.send({
itemIds: [item1.id, item2.id]
})
.expect(200);
// Verify single item remains
const cartAfterSingleDelete = await request(app.getHttpServer()).get(`/cart/${testUser.id}`);
expect(cartAfterSingleDelete.body.items).toHaveLength(1);
expect(cartAfterSingleDelete.body.items[0].id).toBe(seedItem.id);
});
it('POST /cart/{userId}/checkout - should checkout the cart', async () => {
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/checkout`)
.expect(201);
expect(response.body).toHaveProperty('orderId');
expect(response.body.orderId).toBeDefined();
});
});
describe('Cart Edge Cases', () => {
it('should fail to add an out-of-stock item with a 400 error', async () => {
const outOfStockItem = seedItems.find(i => i.stockQuantity === 0) as ItemEntity;
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/items`)
.send({ itemId: outOfStockItem.id, quantity: 1 })
expect(response.body.message).toBe('Item is out of stock');
expect(response.body.statusCode).toBe(400);
});
it('should fail to add quantity exceeding stock with a 400 error', async () => {
const lowStockItem = seedItems.find(i => i.stockQuantity > 0 && i.stockQuantity < 5) as ItemEntity;
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/items`)
.send({ itemId: lowStockItem.id, quantity: lowStockItem.stockQuantity + 5 })
expect(response.body.message).toBe('Quantity exceeds stock');
expect(response.body.statusCode).toBe(400);
});
it('should fail to add an item with an invalid itemId with a 404 error', async () => {
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/items`)
.send({ itemId: 'invalid-uuid-for-item', quantity: 1})
expect(response.body.message).toBe('Item not found');
expect(response.body.statusCode).toBe(404);
});
it('should fail if quantity is zero or negative with a 400 error', async () => {
const item = seedItems[0];
const response = await request(app.getHttpServer())
.post(`/cart/${testUser.id}/items`)
.send({ itemId: item.id, quantity: 0 })
expect(response.body.message).toBe('Quantity must be greater than 0');
expect(response.body.statusCode).toBe(400);
});
});
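The edge-case tests above imply a small piece of pure validation logic. One possible helper that would satisfy those expectations (a sketch under assumed names, not the agent's actual implementation) could look like:

```typescript
// Hypothetical validation for the "add item" flow, mirroring the errors the
// e2e tests expect. Checks are ordered: existence first, then quantity, then stock.
interface Item {
  id: string;
  stockQuantity: number;
}

interface CartError {
  statusCode: number;
  message: string;
}

function validateAddItem(item: Item | undefined, quantity: number): CartError | null {
  if (!item) {
    return { statusCode: 404, message: 'Item not found' };
  }
  if (!Number.isInteger(quantity) || quantity <= 0) {
    return { statusCode: 400, message: 'Quantity must be greater than 0' };
  }
  if (item.stockQuantity === 0) {
    return { statusCode: 400, message: 'Item is out of stock' };
  }
  if (quantity > item.stockQuantity) {
    return { statusCode: 400, message: 'Quantity exceeds stock' };
  }
  return null; // valid request
}
```

In a NestJS implementation, logic like this would typically live in a use-case and surface as thrown `NotFoundException`/`BadRequestException`s rather than returned objects.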
Note on edge cases: As you develop and run your app, you are likely to encounter bugs which you have not anticipated. This is a good opportunity to add edge cases to your tests.
We can use the same workflow to write tests for order management; you can find those tests in the repository.
Running the agent
Prompting
Using claude-sonnet-4, I gave the AI coding agent the following prompt.
Note: I found during development that it is better to explicitly mention the rules/instructions, at least when using Cursor.
You are tasked to develop the project, following the rules. The goal is that the project passes all of the e2e tests through the command npm run test:e2e.
Start by adding the required configurations.
For every module that you develop, write a test file first for the module controller to define the required behavior, then develop it. Once you have all of the modules written and the controller tests pass, start adjusting your code to the e2e tests.
You must follow all of the rules provided to you.
It took the agent around an hour, without my intervention, to conclude that it had completed the task. However, it noted in its summary that some e2e tests still failed. Therefore, I had to give it a second prompt:
Continue developing until npm run test:e2e passes
This time, the agent was able to complete the code to the point where the e2e tests passed. However, it missed an instruction about using aliases in imports. It fixed this after the following prompt:
You violated @design-patterns.mdc specification for using @/app/ in imports. fix it
Reviewing the results
Though the e2e tests passed, the agent missed a couple of things:
- Using the wrong environment variable names for constants.
- Using relative paths in imports when explicitly instructed to use aliases.
- Concentrating logic in `service` classes instead of `use-cases`.
- Not writing tests for the cart and order module controllers.
It is worth noting that the checkout functionality is well written, even though I did not explicitly instruct the AI on the steps.
Testing the API using GUI
We can use swagger-ui to review the API endpoints that the AI agent has created. But first, we need to populate some seed data for our app. We can ask the agent to do it with the following prompt:
Create a npm script to populate the database stored at POSTGRES_DB with the data from @seed-data.json. Assume that this is a first insertion into the database
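I won't reproduce the generated script here, but the core of such a seeding script reduces to turning each collection in the JSON file into parameterized INSERT statements. A simplified, dependency-free sketch (the function name and call pattern are assumptions; a real script would hand the result to a pg client or TypeORM):

```typescript
// Hypothetical helper: builds one parameterized INSERT statement for a table
// from an array of seed rows (assumed non-empty, all rows sharing the same keys).
function buildInsert(
  table: string,
  rows: Record<string, unknown>[],
): { text: string; values: unknown[] } {
  const columns = Object.keys(rows[0]);
  const values: unknown[] = [];
  const tuples = rows.map((row) => {
    const placeholders = columns.map((col) => {
      values.push(row[col]);
      return `$${values.length}`; // $1, $2, ... in insertion order
    });
    return `(${placeholders.join(', ')})`;
  });
  const columnList = columns.map((c) => `"${c}"`).join(', ');
  return {
    text: `INSERT INTO "${table}" (${columnList}) VALUES ${tuples.join(', ')}`,
    values,
  };
}
```

Parameterized values keep the script safe even if the seed JSON later contains quotes or other special characters.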
When this is installed, you can start the app and find the swagger-ui endpoint. To find it, search the codebase for the line `SwaggerModule.setup`; its first argument is the path to the swagger-ui. Alternatively, you can specify this as a rule for the AI coding agent to follow.
Lessons Learned
What Worked Well
E2E Tests as Specifications: The comprehensive e2e test suites effectively communicated the expected behavior to the AI coding agent. The agent was able to develop complex workflows and implement the checkout functionality correctly without explicit step-by-step instructions.
Explicit Error Handling in Tests: Including specific error cases in the test suite (404 for invalid items, 400 for out-of-stock) guided the AI to implement proper validation and error responses.
Key Challenges and Solutions
- Context Dilution: The AI agent struggled to maintain all instructions when working on a large codebase.
  - Solution: Explicitly reference rules in prompts and break complex tasks into smaller, focused prompts.
- Architectural Pattern Adherence: The agent defaulted to simpler patterns (services) rather than the specified DDD/use-case pattern.
  - Solution: Create more detailed architectural examples in the rules, or consider generating architectural scaffolding before implementation.
- Missing Test Coverage: Despite instructions to write controller tests first, the agent skipped this step.
  - Solution: Make test-writing a separate, explicit prompt phase before implementation.
Iterative Prompting: Rather than expecting perfection on the first run, treating the AI interaction as an iterative process yielded better results. The follow-up prompts to fix failing tests and rule violations were quick and effective.
The solution suggested for these challenges, breaking the development task into smaller prompts, is a realistic expectation for working with AI coding agents. The difference from free-flow iterative prompting is that in the TDD approach, the AI is given a suite of tests it is expected to pass.
Best Practices Discovered
Rule Specificity Matters: Vague instructions like "follow DDD principles" were less effective than specific examples and file structure templates.
Test-First, Not Test-Only: While e2e tests are excellent for validation, having the AI write unit tests first surfaces issues earlier, gives the developer a chance to review and add requirements sooner, and results in better-structured code. While e2e tests provide a finish line for the AI coding agent, having the AI write intra-module integration tests gives the developer granular control where needed.
Environment Configuration is Critical: Time spent setting up a proper development environment (devcontainers, proper TypeScript config, test infrastructure) paid dividends in smoother AI interactions.
Validation Through Multiple Layers: Combining e2e tests, linting rules, and TypeScript strict mode created multiple safety nets that caught different types of AI mistakes.
What to Do Differently Next Time
- Phased Approach: Instead of asking the AI to complete everything at once, break the task into smaller prompts. For example:
  - Generate configuration files for the project.
  - Create the project structure and populate the modules.
  - Write the tests for the modules.
  - Implement the modules.
  - Run the tests and refactor the code until the tests pass.
  - Run the e2e tests and refactor the code until the tests pass.
- Pre-flight Checklist: Create a validation script that checks for common AI mistakes (relative imports, wrong naming conventions, etc.) as an extra step.
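As a sketch of such a pre-flight check, the core can be a pure function that flags relative imports (which the rules forbid in favor of the `@/app/` alias); a wrapper script would run it over every file under `src/` and fail the build on any hit. The function name is an assumption:

```typescript
// Hypothetical pre-flight check: find relative imports that should use the
// `@/app/` alias instead. A wrapper would read each source file with fs,
// call this function, and report the offending specifiers.
function findRelativeImports(source: string): string[] {
  const violations: string[] = [];
  // Matches `from './x'` and `from '../x'` in import/export statements.
  const importRe = /from\s+['"](\.\.?\/[^'"]+)['"]/g;
  let match: RegExpExecArray | null;
  while ((match = importRe.exec(source)) !== null) {
    violations.push(match[1]);
  }
  return violations;
}
```

A check like this is also a good candidate for an ESLint rule (e.g. `no-restricted-imports`), which the agent is then forced to satisfy just like the tests.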
The Bottom Line
AI coding agents excel at understanding intent from tests and generating working code quickly. However, they need careful guidance through rules, examples, and iterative refinement to match team standards and architectural patterns. The investment in comprehensive test suites and development environment setup is crucial for success. Think of AI agents as very fast junior developers who need clear specifications and occasional course corrections, rather than autonomous architects who will intuit your team's practices.
AI coding agents are a powerful tool for developers. Using them in tandem with TDD and other best practices can help teams ship results faster compared to iterative prompting. However, developers should expect to spend time on refining rules and instructions for the agent. Furthermore, developers should expect to have multiple iterations of prompting to complete a task but using TDD can reduce the number of those iterations.
Appendix: Setting Up The Project
Installing The Core Dependencies
The following lines add the base dependencies for our project:
npm add @nestjs/common @nestjs/core @nestjs/platform-express @nestjs/typeorm typeorm pg
npm add --save-dev @nestjs/testing @types/jest @types/supertest @types/pg jest jest-junit supertest ts-jest ts-node typescript
Let me break down why each of these matters:
- @nestjs/testing: NestJS testing utilities and wrappers around Jest.
- jest & ts-jest: Your test runner and TypeScript preprocessor – these let you write tests in TypeScript with full type safety
- supertest: Enables HTTP-level testing of your endpoints, perfect for those e2e tests that verify your entire request/response cycle
- jest-junit: Outputs test results in a format that CI/CD pipelines (and some AI tools) can parse and understand
- @nestjs/typeorm & typeorm & pg: Database ORM and PostgreSQL driver for our data layer
Defining The Docker Image And Environment
FROM node:lts-bullseye-slim
RUN apt update && apt install -y \
less \
lsof \
procps
WORKDIR /workspaces/shopping-cart
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
ENTRYPOINT ["npm", "run", "start:dev"]
The docker-compose:
#./docker-compose.yaml
services:
backend:
build:
context: ./
dockerfile: Dockerfile.dev
container_name: backend
env_file:
- ".env"
ports:
- "3000:3000"
networks:
- dev-net
database:
image: postgres:17.4-bookworm
container_name: database
env_file:
- ".env"
ports:
- "5432:5432"
volumes:
- ./docker/init-multiple-databases.sh:/docker-entrypoint-initdb.d/init-multiple-databases.sh
networks:
- dev-net
networks:
dev-net:
driver: bridge
There is a second docker compose file, `.devcontainer/docker-compose.yaml`, that is used to run the devcontainer. It is similar to the one above, but it has some additional configuration for the devcontainer.
Setting Up The Devcontainer
Devcontainers allow us to edit our environment on the fly on the one hand, while also giving us easy connection to our database of choice (Postgres) for testing. Here's our .devcontainer/devcontainer.json:
{
"name": "Shopping Cart",
"dockerComposeFile": [
"../docker-compose.yaml",
"./docker-compose.yaml"
],
"service": "backend",
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"updateRemoteUserUID": true,
"overrideCommand": true,
"postStartCommand": "lsof -i :3000 -t | xargs -r kill -9 || true",
"customizations": {
"vscode": {
"extensions": []
}
}
}
That `postStartCommand` is important here. It ensures that port 3000 is always available when the container starts.
Environment Configuration
Before we dive into the testing setup, we need to configure our environment variables. Create a `.env` file in the root of your project with the necessary database and application configuration:
# .env
# PostgreSQL Environment Variables (for Docker container)
POSTGRES_DB=shop
POSTGRES_USER=user
POSTGRES_PASSWORD=pass
POSTGRES_MULTIPLE_DATABASES=test
# NestJS Application Database Configuration
BACKEND__DATABASE_HOST=database
BACKEND__DATABASE_PORT=5432
BACKEND__DATABASE_USERNAME=user
BACKEND__DATABASE_PASSWORD=pass
BACKEND__DATABASE_NAME=shop
The `POSTGRES_MULTIPLE_DATABASES` variable is used when spinning up the database container to create a separate `test` database for our end-to-end tests. This ensures that our tests run in isolation without affecting the development database.
Database Initialization Script
To support both development and testing databases, we use a database initialization script that creates a second PostgreSQL database:
# docker/init-multiple-databases.sh
#!/bin/bash
set -e
set -u
function create_user_and_database() {
local database=$1
echo " Creating database '$database'"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE DATABASE $database;
GRANT ALL PRIVILEGES ON DATABASE $database TO "$POSTGRES_USER";
EOSQL
}
if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
create_user_and_database $db
done
echo "Multiple databases created"
fi
This script is essential for creating a separate database for testing.
Seed Data
We will need some seed data for our tests to run with: some users to interact with, some items they can shop for, and some orders for them to check and interact with. This is why we create minimal sample data in `tests/seed-data.json`. Here's a sample of our seed data structure:
{
"users": [
{
"id": 1,
"name": "Sarah Johnson"
},
{
"id": 2,
"name": "Michael Chen"
},
{
"id": 3,
"name": "Emily Rodriguez"
}
],
"items": [
{
"id": 1,
"name": "Apple iPhone 15 Pro",
"price": 999.00,
"stockQuantity": 25
},
{
"id": 2,
"name": "Samsung Galaxy S24 Ultra",
"price": 1199.99,
"stockQuantity": 18
},
{
"id": 7,
"name": "AirPods Pro (2nd Gen)",
"price": 249.00,
"stockQuantity": 0
}
],
"orders": [
{
"id": 1,
"userId": 1,
"status": "created",
"createdAt": "2024-02-01T10:00:00Z",
"items": [
{
"itemId": 1,
"quantity": 1,
"priceAtOrder": 999.00
}
],
"totalAmount": 999.00
}
]
}
Jest E2E Configuration
Before writing our e2e tests, we need to configure Jest specifically for end-to-end testing. This configuration is crucial for ensuring our tests run properly with TypeScript and our module path mapping:
// tests/jest-e2e.json
{
"moduleFileExtensions": ["js", "json", "ts"],
"rootDir": "../",
"testRegex": "tests/.*\\.e2e-spec\\.ts$",
"transform": {
"^.+\\.(t|j)s$": "ts-jest"
},
"collectCoverageFrom": ["**/*.(t|j)s"],
"coverageDirectory": "../test-results/coverage",
"testEnvironment": "node",
"moduleNameMapper": {
"^@/app/(.*)$": "<rootDir>/src/$1"
}
}
Key aspects of this configuration:
- testRegex: Only runs files matching the e2e test pattern
- moduleNameMapper: Resolves our `@/app/*` path aliases to actual source files
- testEnvironment: Uses Node.js environment for server-side testing
- rootDir: Points to project root for proper module resolution
You'll also want to add an npm script to run these tests:
// package.json (scripts section)
{
"scripts": {
"test:e2e": "jest --config tests/jest-e2e.json --detectOpenHandles --forceExit"
}
}
The `--detectOpenHandles` and `--forceExit` flags ensure proper cleanup after database operations.
E2E test boilerplate
In order to test using supertest, there is some boilerplate code that we need in order to:
- Initiate the NestJS application in a test context
- Populate the database with the seed data
- Clean up the database after the tests are done
- Apply custom validation filters for proper error handling
import 'reflect-metadata';
import { Test, TestingModule } from '@nestjs/testing';
import { INestApplication, ValidationPipe } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { DataSource } from 'typeorm';
import * as request from 'supertest';
import { ValidationExceptionFilter } from '../src/utils/validation-exception.filter';
const seedData = require('./seed-data.json');
// Core NestJS App Modules - assuming these will be created
import { CartModule } from '@/app/cart/cart.module';
import { UserModule } from '@/app/user/user.module';
import { ItemModule } from '@/app/item/item.module';
import { OrderModule } from '@/app/order/order.module';
// Entities - assuming these will be created
import { UserEntity } from '@/app/user/user.entity';
import { ItemEntity } from '@/app/item/item.entity';
import { OrderEntity } from '@/app/order/order.entity';
import { DB_HOST, DB_PASSWORD, DB_PORT, DB_USERNAME } from '@/app/constants';
describe('Cart Controller (e2e)', () => {
let app: INestApplication;
let dataSource: DataSource;
let testUser: UserEntity;
const {users: seedUsers, items: seedItems, orders: seedOrders} = seedData
beforeAll(async () => {
const moduleFixture: TestingModule = await Test.createTestingModule({
imports: [
TypeOrmModule.forRoot({
type: 'postgres',
host: DB_HOST,
port: +(DB_PORT || 5432),
username: DB_USERNAME,
password: DB_PASSWORD,
database: 'test',
autoLoadEntities: true, // Auto-loads entities from imported modules
synchronize: true, // Creates schema in the test DB
}),
OrderModule,
CartModule,
UserModule,
ItemModule,
],
}).compile();
app = moduleFixture.createNestApplication();
app.useGlobalPipes(new ValidationPipe({
whitelist: true,
forbidNonWhitelisted: true,
}));
// Apply custom validation filter for consistent error responses
app.useGlobalFilters(new ValidationExceptionFilter());
await app.init();
dataSource = moduleFixture.get<DataSource>(DataSource);
// Seed the database before tests run
const userRepository = dataSource.getRepository(UserEntity);
const itemRepository = dataSource.getRepository(ItemEntity);
const orderRepository = dataSource.getRepository(OrderEntity);
// Clear previous test data
await itemRepository.query(`TRUNCATE TABLE "items" RESTART IDENTITY CASCADE;`);
await userRepository.query(`TRUNCATE TABLE "users" RESTART IDENTITY CASCADE;`);
await orderRepository.query(`TRUNCATE TABLE "orders" RESTART IDENTITY CASCADE;`);
// Insert fresh seed data
await userRepository.insert(seedUsers);
await itemRepository.insert(seedItems);
await orderRepository.insert(seedOrders);
testUser = seedUsers[0] as UserEntity;
});
afterAll(async () => {
if (dataSource && dataSource.isInitialized) {
const entities = dataSource.entityMetadatas;
for (const entity of entities) {
const repository = dataSource.getRepository(entity.name);
await repository.query(
`TRUNCATE TABLE "${entity.tableName}" RESTART IDENTITY CASCADE;`,
);
}
}
await app.close();
});
/// ...
// Tests defined here ...
// ...
});