It's not about testing, it's about tuning. Since you're doing a lot of I/O, any sort of "CPU profiler" is not what you want. The method I always use is this.
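
For concreteness, here is one way to take those samples if pausing in a debugger by hand is awkward: a small script that attaches gdb to the running process every couple of seconds and dumps all thread backtraces. This is only a sketch of the idea, not the method itself; it assumes Linux, gdb installed, permission to ptrace the target, and a PID passed on the command line.

```python
# Sketch: take a handful of manual stack samples of a running process.
# Assumes Linux, gdb on PATH, and ptrace permission for the target PID.
import subprocess
import sys
import time

def take_sample(pid: int) -> str:
    """Attach gdb to the process, dump all thread backtraces, detach."""
    result = subprocess.run(
        ["gdb", "-p", str(pid), "-batch", "-ex", "thread apply all bt"],
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    pid = int(sys.argv[1])
    for i in range(10):            # a handful of samples is usually enough
        print(f"--- sample {i} ---")
        print(take_sample(pid))
        time.sleep(2)              # space the samples out a bit
```

Then just read the stacks: each one tells you exactly what the program was in the middle of doing at that instant.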

Here's what I would do if I were you: Tune the program until it is as fast as possible. Then if it is not fast enough to be satisfactory, get faster hardware.

The way I would do it is take a number of samples manually. Some of them will be in the process of doing I/O. If they are mostly in I/O, then I would ask if there is any way to avoid some of that I/O. (Don't assume ahead of time that all the I/O it's doing is necessary. You may find it's doing something that could actually be avoided.) If you can avoid some of the I/O, that will speed you up accordingly.
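
As a hypothetical example of I/O that turns out to be avoidable, suppose the samples show the program re-reading and re-parsing the same lookup file on every call. Reading it once and caching the parsed result removes that I/O entirely. The file format and function names below are invented purely for illustration.

```python
# Hypothetical example of avoidable I/O: re-reading a file on every lookup
# versus reading and parsing it once, then answering lookups from memory.
from functools import lru_cache

def lookup_slow(path: str, key: str) -> str:
    # Re-reads and re-parses the whole file on every call -- this is the
    # kind of work that shows up in the samples and can often be avoided.
    with open(path) as f:
        table = dict(line.rstrip("\n").split("=", 1) for line in f if "=" in line)
    return table[key]

@lru_cache(maxsize=None)
def _load_table(path: str) -> dict:
    # Parsed once per path, cached for all later lookups.
    with open(path) as f:
        return dict(line.rstrip("\n").split("=", 1) for line in f if "=" in line)

def lookup_fast(path: str, key: str) -> str:
    # Costs no I/O at all after the first call for a given path.
    return _load_table(path)[key]
```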

Now look at the samples landing in non-I/O processing. Is it significant, say more than 10% of the samples? If so, is there any way to speed that up by avoiding some of the work?
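
A quick way to judge "is it worth it": if a fraction f of the samples land in work you could avoid, removing that work buys a speedup of roughly 1/(1 - f). The fractions below are made up, just to show the arithmetic.

```python
# If a fraction f of samples fall in avoidable work, removing that work
# gives a speedup of roughly 1 / (1 - f).  Example fractions are invented.
def expected_speedup(avoidable_fraction: float) -> float:
    return 1.0 / (1.0 - avoidable_fraction)

print(expected_speedup(0.10))   # ~1.11x -- modest on its own
print(expected_speedup(0.50))   # 2.0x  -- clearly worth chasing
```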

Each time you find something to improve, fix the program and run it all over again. You may be pleasantly surprised that, since the last fix, something new shows up to fix that you didn't see before but is now important. When you can't find anything more to fix, you can declare the program "as fast as you or probably anyone can make it".
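
This is also why iterating pays off even when each individual fix looks modest: the speedups from successive fixes multiply, and every fix makes whatever remains a larger share of the samples. A rough illustration, with invented numbers:

```python
# Speedups from successive fixes multiply.  The fractions below (share of
# the *remaining* run time each fix removes) are invented for illustration.
fractions_removed = [0.30, 0.20, 0.15]

total_speedup = 1.0
for f in fractions_removed:
    total_speedup *= 1.0 / (1.0 - f)

print(f"combined speedup: {total_speedup:.2f}x")   # about 2.10x overall
```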

Then if it's still not fast enough, your only options are a faster CPU, a solid-state drive, or whatever.
