Timeline for Execution time of C program
Current License: CC BY-SA 3.0
6 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Feb 14, 2023 at 6:06 | comment added | Peter Cordes | | See also *What's the relationship between the real CPU frequency and the clock_t in C?* |
| Feb 14, 2023 at 6:01 | comment added | Peter Cordes | | No constant value compiled into a binary can count actual core clock cycles and be portable to other systems, or even count cycles on a single system, since CPUs have been able to vary their clock frequency to save power for about two decades (one when this was written). It's just a convenient tick interval for measuring CPU time. If that's not what you want (e.g. wall-clock time), use something else such as `clock_gettime`. If you want core clock cycles, use `perf stat` for your microbenchmark. |
| Aug 28, 2019 at 12:23 | comment added | user823738 | | Why would it not be a good solution? You get some value from `clock()`; if you divide that value by `CLOCKS_PER_SEC`, you are guaranteed to get the time in seconds the CPU took. Measuring the actual clock speed is the responsibility of the `clock()` function, not yours. |
| Apr 22, 2015 at 15:29 | comment added | AntoineL | | This is not a practical problem: POSIX systems always have `CLOCKS_PER_SEC == 1000000`, but at the same time they all use 1 µs precision for their `clock()` implementation; incidentally, this has the nice property of reducing sharing problems. If you want to measure potentially very quick events, say below 1 ms, you should first worry about the accuracy (or resolution) of the `clock()` function, which is necessarily coarser than 1 µs under POSIX and is often much coarser; the usual solution is to run the test many times. The question as asked did not seem to require that, though. |
| Oct 16, 2014 at 21:00 | comment added | ozanmuyes | | Thanks for the information, but is there any better alternative yet? |
| Nov 8, 2012 at 18:35 | history: answered | Stephen | CC BY-SA 3.0 | |