I disagree with the accepted answer for many reasons.

In my experience, when I see "miscellaneous" libraries like the one in the accepted answer, they're an excuse to reinvent the wheel (or Not Invented Here (NIH) syndrome) - a far greater sin than violating Don't Repeat Yourself (DRY).

Sometimes violating DRY is a reasonable compromise: it is better than introducing tight coupling. Reuse is a secondary concern compared to good object-oriented design. A small amount of duplication (apply the Rule of Three) is easier to understand than a spaghetti code base.
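To make the Rule of Three concrete, here is a minimal C# sketch (the report classes and their header format are hypothetical, invented for illustration). Tolerate the second copy; only extract a shared helper once a third user appears and the shape of the abstraction is obvious:

    using System;

    // Two report types each format a similar header line. The duplication
    // is small and keeps the two classes decoupled from each other.
    class InvoiceReport
    {
        public string Header(DateTime when) =>
            $"INVOICE | {when:yyyy-MM-dd}";
    }

    class ShippingReport
    {
        public string Header(DateTime when) =>
            $"SHIPPING | {when:yyyy-MM-dd}";
    }

    // Only when a third report turns up (the Rule of Three) is the common
    // shape clear enough to pull out without guessing at the abstraction.
    static class ReportHeader
    {
        public static string Format(string title, DateTime when) =>
            $"{title} | {when:yyyy-MM-dd}";
    }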

The approach of numerous general-purpose libraries sets a bad example. It leads to fine-grained assemblies, and too many assemblies is bad. I recently reduced an in-house codebase from 24 libraries to 6; it improved the compile time from several minutes to ~20 seconds. Visual Studio is also slower to load and less responsive with more assemblies. Having too many libraries also leads to confusion as to where code should live; prefer fewer, simpler rules.

Why is the stuff in the .NET Framework not good enough? The Framework is pretty big; many times I've seen code that re-implements things that already exist there. Really make sure that your frameworks are filling gaps in the .NET Framework and don't just exist for aesthetic reasons (for example, "I don't like the .NET Framework here", or perhaps some premature optimization).
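As a hedged before/after sketch (StringUtil is a made-up in-house helper, not a real library): hand-rolled utilities like this duplicate what the base class library already ships:

    using System;
    using System.IO;

    // A typical in-house "utility" that reinvents the base class library.
    static class StringUtil
    {
        // string.Join(", ", parts) already does exactly this.
        public static string JoinWithComma(string[] parts)
        {
            var result = "";
            for (int i = 0; i < parts.Length; i++)
                result += (i == 0 ? "" : ", ") + parts[i];
            return result;
        }
    }

    class Demo
    {
        static void Main()
        {
            var parts = new[] { "a", "b", "c" };
            Console.WriteLine(string.Join(", ", parts));        // built-in, no extra library
            Console.WriteLine(Path.Combine("logs", "app.txt")); // built-in path handling
        }
    }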

Introducing another layer into your architecture has a significant complexity cost. Why does the layer exist? I've seen false reuse: code built on top of an in-house framework when it would have been far more efficient to implement it directly on top of the standard libraries.
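A minimal sketch of what I mean by false reuse (CompanyIo is a hypothetical name, not a real library): a layer that merely forwards to the standard library adds a dependency and a concept to learn without adding value:

    using System.IO;

    // An in-house "framework" layer that only forwards to the standard
    // library; every consumer now has to learn CompanyIo as well as File.
    static class CompanyIo
    {
        public static string ReadText(string path) => File.ReadAllText(path);
    }

    class Consumer
    {
        // Calling the standard library directly is simpler and removes
        // a whole layer from the architecture.
        public string Load(string path) => File.ReadAllText(path);
    }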

Using standardized technologies (like the .NET Framework and popular third-party/open-source libraries) has benefits that often outweigh the comparative technological gains of building it yourself. It is easier to find talent that knows these technologies, and your existing developers will invest more in learning them.

My recommendations:

  • Do not share this code.
  • Create a new library only if it has a cohesive purpose; do not follow the ball of mud anti-pattern.
  • Reuse existing third-party libraries where possible.
  • Prefer fewer assemblies, with simpler rules as to where code should live.
