user40980
     >  What is a good strategy for generating the underlying data for the tests?

I would use a modified version of the second approach:

Generate fake data such that for each request X there is a well-defined, intentionally constructed set of results Y that will be returned.

But instead of querying the database directly, your search engine should be implemented against datasource-specific repository interfaces.

Each repository interface has one implementation that uses the database and one fake implementation that reads its results from a human-readable text file. This way your test data is less dependent on database schema changes.

  • While testing, the search engine uses the fake repository.
  • For each test there is a test-specific answer file that can be maintained with a text editor.
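A minimal sketch of this setup might look as follows. All names here (`ProductRepository`, `FakeProductRepository`, `SearchEngine`) and the answer-file format are illustrative assumptions, not part of the original answer:

```python
# Sketch: a repository interface with a fake implementation backed by a
# human-readable answer file, so the search engine can be tested without
# a database. Class names and file format are illustrative assumptions.
import tempfile
from abc import ABC, abstractmethod
from pathlib import Path


class ProductRepository(ABC):
    """Datasource-specific repository interface the search engine depends on."""

    @abstractmethod
    def find_by_keyword(self, keyword: str) -> list[str]: ...


class FakeProductRepository(ProductRepository):
    """Fake implementation reading results from an answer file with
    lines of the form 'keyword: result1, result2'."""

    def __init__(self, answer_file: Path) -> None:
        self._answers: dict[str, list[str]] = {}
        for line in answer_file.read_text().splitlines():
            if ":" in line:
                keyword, results = line.split(":", 1)
                self._answers[keyword.strip()] = [
                    r.strip() for r in results.split(",") if r.strip()
                ]

    def find_by_keyword(self, keyword: str) -> list[str]:
        return self._answers.get(keyword, [])


class SearchEngine:
    """The unit under test: depends only on the repository interface,
    never on the database directly."""

    def __init__(self, repository: ProductRepository) -> None:
        self._repository = repository

    def search(self, request: str) -> list[str]:
        return sorted(self._repository.find_by_keyword(request))


# Test: request X maps to the intentionally constructed result set Y,
# maintained in a plain text file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("shoes: red shoes, blue shoes\nhats: top hat\n")

engine = SearchEngine(FakeProductRepository(Path(f.name)))
assert engine.search("shoes") == ["blue shoes", "red shoes"]
assert engine.search("gloves") == []
```

In production, a database-backed implementation of the same interface would be injected instead; the search engine's code stays unchanged, and the test data in the answer file survives schema changes untouched.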
k3b