Learning Go Testing from K8s

Why Write Tests

Good unit testing can lead to more elegant code design, thereby improving code understandability, reusability, and maintainability. When introducing changes, there’s no need to retest the entire program—just ensure that the inputs and outputs of the modified parts remain consistent, and you can quickly verify if there are any issues with the program.

Additionally, whenever a bug occurs, we can add the bug's input as a test case. This way, we won't make the same mistake again: every time we run the tests, we can see whether new changes have reintroduced an issue we already fixed. This is a significant boost to software quality.
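
As a small illustration of that practice (parseCount is a hypothetical helper, used only to show the shape of a regression case in a table-driven test):

import (
 "strconv"
 "testing"

 "github.com/stretchr/testify/assert"
)

// parseCount is a hypothetical function used only for illustration.
func parseCount(s string) int {
 if s == "" {
  return 0
 }
 n, _ := strconv.Atoi(s)
 return n
}

func TestParseCount(t *testing.T) {
 cases := []struct {
  name  string
  input string
  want  int
 }{
  {name: "normal input", input: "3", want: 3},
  // Regression case: the input that once triggered a bug stays here permanently,
  // so every test run re-checks that the fix still holds.
  {name: "regression: empty input", input: "", want: 0},
 }
 for _, c := range cases {
  t.Run(c.name, func(t *testing.T) {
   assert.Equal(t, c.want, parseCount(c.input))
  })
 }
}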

Passing Methods as Parameters to Facilitate Mocking

In Kubernetes' graceful termination logic, the handler is declared as a function parameter instead of being invoked as a concrete implementation, so we can test only the logic of flushList without worrying about the correctness of the handler itself.

Alternatively, we can use gomonkey to patch a concrete method and mock its return value directly, achieving the same effect.

If we need to test for race conditions, we can do so by launching goroutines.

type gracefulTerminationManager struct {
 rsList graceTerminateRSList
}

func newGracefulTerminationManager() *gracefulTerminationManager {
 return &gracefulTerminationManager{
  rsList: graceTerminateRSList{
   list: make(map[string]*item),
  },
 }
}

type item struct {
 VirtualServer string
 RealServer    string
}

type graceTerminateRSList struct {
 lock sync.Mutex
 list map[string]*item
}

func (g *graceTerminateRSList) flushList(handler func(rsToDelete *item) (bool, error)) bool {
 g.lock.Lock()
 defer g.lock.Unlock()
 success := true
 for _, rs := range g.list {
  if ok, err := handler(rs); !ok || err != nil {
   success = false
  }
 }
 return success
}

func (g *graceTerminateRSList) add(rs *item) {
 g.lock.Lock()
 defer g.lock.Unlock()
 g.list[rs.RealServer] = rs
}

func (g *graceTerminateRSList) len() int {
 g.lock.Lock()
 defer g.lock.Unlock()
 return len(g.list)
}

Here we need to test flushList and add under race conditions.

func Test_raceGraceTerminateRSList_flushList(t *testing.T) {
 manager := newGracefulTerminationManager()
 go func() {
  for i := 0; i < 100; i++ {
   manager.rsList.add(&item{
    VirtualServer: "virtualServer",
    RealServer:    fmt.Sprint(i),
   })
  }
 }()

 // Wait until a certain number of elements are added before proceeding
 for manager.rsList.len() < 20 {
 }

 // Pass in the handler for mocking
 success := manager.rsList.flushList(func(rsToDelete *item) (bool, error) {
  return true, nil
 })

 assert.True(t, success)
}

By using https://github.com/agiledragon/gomonkey to mock parts of your program, you can isolate the methods under test from the impact of external calls.

If you need to stub private methods, a newer version of gomonkey (which provides ApplyPrivateMethod) lets you do that as well, allowing you to focus on the methods you actually need to test.
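
A minimal sketch of patching a method with gomonkey (the RealServerClient type and its Delete method are hypothetical names used only for illustration; item and newGracefulTerminationManager are the definitions shown above). Note that gomonkey requires inlining to be disabled, e.g. go test -gcflags=all=-l:

import (
 "reflect"
 "testing"

 "github.com/agiledragon/gomonkey/v2"
 "github.com/stretchr/testify/assert"
)

// Hypothetical collaborator whose real Delete we do not want to run in tests.
type RealServerClient struct{}

func (c *RealServerClient) Delete(rs *item) (bool, error) {
 // Imagine a real network call here; it would fail in a unit-test environment.
 return false, nil
}

func TestFlushListWithGomonkey(t *testing.T) {
 client := &RealServerClient{}

 // Patch Delete on *RealServerClient; the double's first parameter is the receiver.
 patches := gomonkey.ApplyMethod(reflect.TypeOf(client), "Delete",
  func(_ *RealServerClient, _ *item) (bool, error) {
   return true, nil
  })
 defer patches.Reset()

 manager := newGracefulTerminationManager()
 manager.rsList.add(&item{VirtualServer: "vs", RealServer: "rs"})

 // The handler now reaches the patched Delete instead of the real one.
 success := manager.rsList.flushList(func(rs *item) (bool, error) {
  return client.Delete(rs)
 })
 assert.True(t, success)
}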

If you want to do some integration testing within your test files, you may encounter a headache: you need to initialize a lot of resources first, such as databases and caches. In this case, you can add methods for initializing these resources under each module’s directory. For example:

func InitTestSuite(opts ...TestSuiteConfigOpt) {
 config := &TestSuiteConfig{}
 for _, opt := range opts {
  opt(config)
 }
 dsn := config.GetDSN()
 if err := NewOrmClient(&Config{
  Config: &gorm.Config{
   //Logger: logger.Default.LogMode(logger.Info),
  },
  SourceConfig: &SourceDBConfig{},
  Dial:         postgres.Open(dsn),
 }); err != nil {
  panic(err)
 }
}

Then, in the test files that need these resources, initialize them via the TestMain function, as sketched below.
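
A minimal sketch of that wiring (assuming InitTestSuite is the function shown above; in real code it would typically be imported from the module's package and passed whatever TestSuiteConfigOpt values the suite needs):

import (
 "os"
 "testing"
)

func TestMain(m *testing.M) {
 // Initialize shared resources (database, cache, ...) once for the whole package.
 InitTestSuite()

 // Run every test in this package and propagate the exit code.
 os.Exit(m.Run())
}

TestMain runs once per package, so every test file in that package shares the same initialized resources.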

Another benefit of this is that you can discover early whether your modules are decoupled cleanly. For instance, if you find yourself initializing many components when setting up a test suite, it’s worth reviewing whether your module design is correct or necessary.

How to Test Concurrency Issues

How to Write Tests for Concurrent Programs?

In distributed systems, the most common problems are race conditions. Many of them occur only with very low probability, but when they do, they can cause serious incidents. We therefore need to simulate concurrent race scenarios as much as possible and check the results after all operations are complete. A single run may still pass by luck, so we need to execute the test multiple times and make sure the result stays consistent, as shown in the following sample code:

var (
 counter int
)

func increment() {
 counter++
}

func TestIncrement(t *testing.T) {
 count := 100
 var wg sync.WaitGroup
 for i := 0; i < count; i++ {
  wg.Add(1)
  go func() {
   increment()
   wg.Done()
  }()
 }
 wg.Wait()
 assert.Equal(t, count, counter)
}

By launching multiple goroutines against the method, you may find that the result does not match your expectation, because counter++ is not an atomic operation. At this point, you need to review and modify your code.
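
A minimal sketch of one possible fix, shown as a standalone example: make the increment atomic with sync/atomic and wait for all goroutines before asserting. Running the test with go test -race -count=100 also makes such bugs far easier to surface:

import (
 "sync"
 "sync/atomic"
 "testing"

 "github.com/stretchr/testify/assert"
)

var counter int64

func increment() {
 atomic.AddInt64(&counter, 1)
}

func TestIncrementFixed(t *testing.T) {
 atomic.StoreInt64(&counter, 0)
 count := 100
 var wg sync.WaitGroup
 for i := 0; i < count; i++ {
  wg.Add(1)
  go func() {
   defer wg.Done()
   increment()
  }()
 }
 // Wait for every goroutine to finish before checking the result.
 wg.Wait()
 assert.Equal(t, int64(count), atomic.LoadInt64(&counter))
}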

TDD (Test-Driven Development)

In TDD, after writing each test, you write only the minimal code necessary to make it pass. Take implementing a state machine as an example. First, define your methods. For easier reading, here is a simplified implementation:

func GetOrder(orderId string) Order {
 return Order{}
}

func UpdateOrder(originalOrder, order Order) error {
 return nil
}

func UpdateOrderStateByEvent(ctx context.Context, orderId string, event Event) (err error) {
 order := GetOrder(orderId)
 stateMap, ok := orderEventStateMap[event]
 if !ok {
  return errors.New("event not exists")
 }

 if !stateMap.currentStateSet.Contains(order.OrderState) {
  return errors.New("current OrderState error")
 }

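 // Simplified for readability: a real implementation would presumably set OrderState to the event's target state here.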
 updateOrder := Order{
  OrderId:    order.OrderId,
  OrderState: order.OrderState,
 }

 err = UpdateOrder(order, updateOrder)
 if err != nil {
  return err
 }
 return nil
}

Then, test UpdateOrderStateByEvent. We must be clear that unit tests are meant to test this method in isolation; other methods can be mocked with gomonkey to ensure the repeatability of the test.

func TestOrderStateByEvent(t *testing.T) {
 type args struct {
  ctx     context.Context
  orderId string
  event   Event
 }
 tests := []struct {
  name      string
  args      args
  wantErr   error
  initStubs func() (reset func())
 }{
  {
   name: "",
   args: args{
    ctx:     context.Background(),
    orderId: "orderId1",
    event:   onHoldEvent,
   },
   wantErr: nil,
   initStubs: func() (reset func()) {
    patches := gomonkey.ApplyFunc(GetOrder, func(orderId string) Order {
     return Order{
      OrderId:    orderId,
      OrderState: delivering,
     }
    })
    return func() {
     patches.Reset()
    }
   },
  },
 }
 for _, tt := range tests {
  t.Run(tt.name, func(t *testing.T) {
   // 1. Mock the required methods
   reset := tt.initStubs()
   defer reset()
   // 2. Call the method to be tested
   err := UpdateOrderStateByEvent(tt.args.ctx, tt.args.orderId, tt.args.event)
    assert.Equal(t, tt.wantErr, err)
  })
 }
}

The concept of test-driven development was proposed as early as the 1990s. Although this example uses Go, TDD was first practiced in other languages. The developer writes a test, then writes the minimal code needed to make it pass, and keeps alternating between these two steps; when the program is complete, it is already in a testable state.

By starting with the test code, you avoid the reluctance to make major changes that sets in once the production code has already been written. It also keeps functions from growing too long, which makes future modifications and retesting much more manageable. When developing business logic, decomposing it in advance and then combining the components with glue code leads to fewer bugs than writing all the code in one go and testing it afterward.

Some may think that writing tests takes too much time, but we can use tools to improve testing efficiency, for example by letting the IDE generate test skeletons. With the advent of AI copilots, the repetitive parts of test cases no longer need to be written by hand: write one case and the logic of the method under test, and AI can help generate many edge-case examples, sometimes more thoroughly than you would think of yourself. As long as your method names are well chosen, the generated cases are highly usable. If the AI-generated test cases are not suitable, consider whether the method name itself is the problem, and keep improving your code.

Conclusion

We don’t need to write elegant code on the first try, but we should always aim to write better code, continually reflect on our work, and use tools to constantly improve ourselves. This way, the results we produce will also become more outstanding.


We are Leapcell, your top choice for hosting Go projects.

Leapcell

Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:

Multi-Language Support

  • Develop with Node.js, Python, Go, or Rust.

Deploy unlimited projects for free

  • pay only for usage — no requests, no charges.

Unbeatable Cost Efficiency

  • Pay-as-you-go with no idle charges.
  • Example: $25 supports 6.94M requests at a 60ms average response time.

Streamlined Developer Experience

  • Intuitive UI for effortless setup.
  • Fully automated CI/CD pipelines and GitOps integration.
  • Real-time metrics and logging for actionable insights.

Effortless Scalability and High Performance

  • Auto-scaling to handle high concurrency with ease.
  • Zero operational overhead — just focus on building.

Explore more in the Documentation!

Try Leapcell

Follow us on X: @LeapcellHQ


Read on our blog
