John Gaughan already said it, "If you need a utility function to wrap the usage of another function, that is a sign that whatever you are wrapping was poorly designed."
Indeed, ADO.NET is old and requires a lot of boilerplate, inelegant code which easily leads to mistakes (like forgetting to open a connection before running a query, or forgetting to complete a transaction when you're done with it).
You may start writing utility functions. But remember, C# is object-oriented, so you may want the more conventional approach of a library. You could create your own, but why reinvent the wheel? There are already plenty of libraries which abstract ADO.NET calls and provide a much better interface. There are many ORMs, including Entity Framework and the much more lightweight LINQ to SQL, and if an ORM is overkill for your current project, why not use something like Dapper?
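For instance, here is roughly what a simple query looks like with Dapper (a minimal sketch: it assumes the Dapper NuGet package, an existing connectionString and a Product class with matching properties):
using Dapper;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    // Dapper adds Query<T> as an extension on IDbConnection: it opens the connection if
    // it's closed, binds @Limit from the anonymous object and maps the rows to Product.
    var products = connection.Query<Product>(
        "select top (@Limit) [Id], [Name] from [Sales].[Product] order by [Sold] desc",
        new { Limit = 10 });
}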
What's wrong with utility functions, you may ask?
Nothing, except that they bring no benefit whatsoever compared to the object-oriented approach of a library, and that you lose everything OOP gives you. Let's see what that means with an example based on Orseis, a library similar to Dapper (but much, much better, because I created it; nah, just joking).
In this library, the database is accessed like this:
var numberOfProducts = 10;
var query = @"select top (@Limit) [Id], [Name]
from [Sales].[Product]
order by [Sold] desc";
var products = new SqlDataQuery(query, this.ConnectionString)
    .Caching(TimeSpan.FromMinutes(5), "Top{0}Sales", numberOfProducts) // Cache the results.
    .Profiling(this.Profiler) // Profile the execution of the query.
    .TimeoutOpening(milliseconds: 1000) // Specify the timeout for opening the connection.
    .With(new { Limit = numberOfProducts }) // Add query parameters.
    .ReadRows<Product>();
This syntax has several benefits:
1. It's much more readable and intuitive than the way it would be written using utility functions:
var numberOfProducts = 10;
var query = @"select top (@Limit) [Id], [Name]
from [Sales].[Product]
order by [Sold] desc";
using (var connection = Utility.OpenConnection(this.ConnectionString, timeoutMilliseconds: 1000))
{
    var parameters = new { Limit = numberOfProducts };
    var products = Utility.Cache(
        TimeSpan.FromMinutes(5),
        "Top{0}Sales",
        numberOfProducts,
        () => Utility.Profile(
            this.Profiler,
            () =>
            {
                using (var reader = Utility.Select(connection, query, parameters))
                {
                    return Utility.ConvertAll<Product>(reader);
                }
            }
        )
    );
}
I mean, how could a new developer possibly understand what's happening here in just a few seconds?
It's not even about naming conventions (not that I want to criticize your choice of Utility as a name for a class), but about the structure itself. It simply looks like plain old ADO.NET.
2. It's refactoring-friendly. I can very easily reuse a part of a chain during a refactoring, which is much harder to do with utility functions.
Imagine that in the previous example, I want to be able to specify profiling and timeout policy once, and reuse it everywhere. I'll also specify the connection string:
this.BaseQuery = new SqlDataQuery(this.ConnectionString)
    .Profiling(this.Profiler)
    .TimeoutOpening(milliseconds: 1000);
// Later in code:
var numberOfProducts = 10;
var query = @"select top (@Limit) [Id], [Name]
from [Sales].[Product]
order by [Sold] desc";
var products = this.BaseQuery
    .Query(query)
    .Caching(TimeSpan.FromMinutes(5), "Top{0}Sales", numberOfProducts)
    .With(new { Limit = numberOfProducts }) // Add query parameters.
    .ReadRows<Product>();
Such a refactoring is pretty straightforward: it just works. With the utility-function variant, I would have spent much more time trying to refactor the piece of code without breaking anything.
3. It's portable. Adding support for a different database, such as Oracle, is seamless. I can do it in less than five minutes. Wait, I already did, and it took five lines of code and less than a minute (the time needed to install Oracle doesn't count, right?).
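To give an idea of why the port is that cheap, here is a minimal sketch of the usual shape of such a design (the class layout is my assumption, not the actual Orseis internals, and the Oracle side assumes the ODP.NET managed driver): everything is written once against the abstract ADO.NET types, and only the connection creation is provider-specific.
using System.Data.Common;
using System.Data.SqlClient;
using Oracle.ManagedDataAccess.Client;

public abstract class DataQuery
{
    // Caching, Profiling, With, ReadRows<T>... are implemented here once,
    // against DbConnection/DbCommand, so they work for every provider.
    protected abstract DbConnection CreateConnection(string connectionString);
}

public sealed class SqlDataQuery : DataQuery
{
    protected override DbConnection CreateConnection(string connectionString)
        => new SqlConnection(connectionString);
}

public sealed class OracleDataQuery : DataQuery
{
    protected override DbConnection CreateConnection(string connectionString)
        => new OracleConnection(connectionString);
}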
This one is crucial, and it's also where the .NET designers got it wrong in .NET 1 with the File and Directory utility methods. Imagine you've created an app which spends a great deal of time working with files. You're preparing for your first release next week when you receive a call from the customer: he wants your app to work with Isolated Storage too. How do you explain to your customer that you'll need an additional two weeks to rewrite half of your code?
If .NET 1 had been designed with OOP in mind, they could have provided an abstract file system provider with interchangeable concrete providers, something similar to another project I started but haven't finished yet. Using it, I can seamlessly move from file storage to Isolated Storage to in-memory files to a native Win32 storage which supports NTFS transactions and doesn't have the stupid .NET constraint of 259 characters in a path.
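A minimal sketch of what such an abstraction could have looked like (the interface and class names are mine, not taken from any actual API):
using System.IO;
using System.IO.IsolatedStorage;

public interface IFileSystem
{
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

// The provider the customer had in mind on day one.
public sealed class PhysicalFileSystem : IFileSystem
{
    public string ReadAllText(string path) => File.ReadAllText(path);
    public void WriteAllText(string path, string contents) => File.WriteAllText(path, contents);
}

// The provider requested a week before the release; callers of IFileSystem don't change.
public sealed class IsolatedStorageFileSystem : IFileSystem
{
    public string ReadAllText(string path)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (var reader = new StreamReader(store.OpenFile(path, FileMode.Open)))
        {
            return reader.ReadToEnd();
        }
    }

    public void WriteAllText(string path, string contents)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (var writer = new StreamWriter(store.OpenFile(path, FileMode.Create)))
        {
            writer.Write(contents);
        }
    }
}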
4. It's Dependency Injection (DI) friendly too. In point 2, I extracted a part of a chain in order to reuse it. I can push this even further and combine it with DI. Now, the methods which actually do the work of querying the database don't even have to know whether I'm using Oracle or SQLite. By the way, they don't have access to the connection string; that's by design.
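As a sketch (ProductRepository is a hypothetical consumer, DataQuery is the abstract base sketched under point 3, and I'm assuming ReadRows<T> returns an IEnumerable<T>), the repository below writes its own SQL but never sees the connection string, the provider, or the profiling and timeout policy chosen at the composition root:
using System.Collections.Generic;

public sealed class ProductRepository
{
    // Configured elsewhere (provider, connection string, profiling, timeouts) and injected.
    private readonly DataQuery baseQuery;

    public ProductRepository(DataQuery baseQuery)
    {
        this.baseQuery = baseQuery;
    }

    public IEnumerable<Product> FindTopSold(int limit)
    {
        return this.baseQuery
            .Query("select top (@Limit) [Id], [Name] from [Sales].[Product] order by [Sold] desc")
            .With(new { Limit = limit })
            .ReadRows<Product>();
    }
}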
5. It's easily extensible. I had to extend Orseis dozens of times, adding caching, profiling, transactions, etc. If I struggled with anything, it was the features themselves and how to make them foolproof. For example, the collection propagation I implemented to seamlessly bind queries containing joins to collections of objects wasn't a good idea: despite all my efforts, it's still not obvious to use and can be a source of many mistakes.
But adding simpler concepts (like cache invalidation of multiple entries or reusing a connection across multiple queries) was pretty straightforward. This straightforwardness is much harder to achieve with utility functions. There, you start adding something and find that it breaks consistency or requires changes which aren't backwards compatible. You end up with so many static methods that they look more like a patchwork than a consistent class which helps developers.
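To illustrate, suppose a retry policy gets added later (Retrying here is a hypothetical method, not an actual Orseis feature): it becomes just one more step in the chain, and no existing call site or existing method has to change:
var products = this.BaseQuery
    .Query(query)
    .Retrying(attempts: 3) // The new concern slots in here; Caching, With and ReadRows are untouched.
    .Caching(TimeSpan.FromMinutes(5), "Top{0}Sales", numberOfProducts)
    .With(new { Limit = numberOfProducts })
    .ReadRows<Product>();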
Let's take your example:
Utility.SelectToReader(connection, "SELECT 1 FROM DUAL");
Later, you need to pass parameters, so it becomes:
Utility.SelectToReader(
    connection,
    "SELECT 1 FROM DUAL WHERE CATEGORY = @Category",
    new Dictionary<string, string>
    {
        { "Category", this.CategoryId },
    });
Then, you notice that queries time out too often, so you must be able to specify a timeout too:
Utility.SelectToReader(
    connection,
    "SELECT 1 FROM DUAL WHERE CATEGORY = @Category",
    new Dictionary<string, string>
    {
        { "Category", this.CategoryId },
    },
    2500);
Step by step, the method becomes unusable. Either you need to split it, or you end up with a single method with a dozen optional arguments.
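With optional named arguments, the call site eventually drifts towards something like this (the extra parameters are extrapolated for illustration; they're not taken from your code):
Utility.SelectToReader(
    connection,
    "SELECT 1 FROM DUAL WHERE CATEGORY = @Category",
    parameters: new Dictionary<string, string> { { "Category", this.CategoryId } },
    timeoutMilliseconds: 2500,
    cacheDuration: TimeSpan.FromMinutes(5),
    cacheKey: "Category" + this.CategoryId,
    profiler: this.Profiler,
    retryAttempts: 3,
    leaveConnectionOpen: false);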