
Your system probably just does what you told it to.

Your code does not contain a delay function:

// Define a delay function
void delay_ms(unsigned int milliseconds) {
  unsigned int i, j;
  for (i = 0; i < milliseconds; i++) {
    for (j = 0; j < 1275; j++) { // Approximate delay, adjust based on clock frequency
      // Delay loop
    }
  }
}

The job of your C compiler is the following:

  1. Read the C code and build a syntactic understanding of it (i.e., understand what a function is, convert a statement like i += 7 into "there's a variable i, and it gets a new value, and the new value is the result of the operation + applied to the variable i and an immediate value, 7", …)
  2. model what the so-called (by the standard) "abstract C machine" would do based on that code, i.e., build a behavioural model of your code on a well-defined theoretical computer. That model asks the following questions:
    1. "given the rules of the C language, what is the result calculated by each function?" (there's no result here)
    2. "given the memory model of the C language, and the memory locations outside the sole control of the function, what does this function change about the rest of the environment / memory after it has run completely?" (there are no side effects / changes to memory here; and just counting up a memory value, for example, can be done by a single update instead of 1275 steps, because the question is about the state after the function has run, not at every tiny step in the source code – see the sketch after this list.)
  3. implement that behaviour by generating machine code for your target computer (here: the CPU of the AT89C2051).
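
As a minimal sketch of that "single update" point (hypothetical functions for illustration, not actual compiler output): as far as the abstract C machine is concerned, the two functions below are indistinguishable once they return, so an optimizing compiler is free to compile the first one exactly like the second.

unsigned int count_slowly(unsigned int counter) {
  unsigned int j;
  for (j = 0; j < 1275; j++) {
    counter++; // 1275 separate increments in the source...
  }
  return counter;
}

unsigned int count_quickly(unsigned int counter) {
  return counter + 1275; // ...collapse into a single addition
}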

Your loop doesn't calculate anything that becomes externally visible. Hence, the effect on the abstract C machine of calling delay_ms(1000) is the same as doing nothing. So the compiler, correctly understanding your code, says "oh, this function calculates no value and has no side effects, let's just remove it". This is valid. It's the same mechanism by which your compiler knows that unsigned int A = …; A = A + A is the same as A *= 2 is the same as A <<= 1, and knows how to put that into machine code.

So, your code really doesn't contain a delay once compiled. (rich.g.william's delay loop code doesn't either: i = i also has no effect and hence "compiles away", even without compiler optimizations; this is the typical "but I can trick the compiler into doing what I want" magical thinking that luckily has little to do with how C compilers or the C language actually work.)

The side effect of "wasting cycles" is something you need to explicitly express, in your C code, as something you want preserved – there's no shortcut there. Any version of your delay loop that doesn't change anything visible on each iteration is a null operation for the abstract C machine and will get optimized away. And that is fine, because you want your C code to be correct and fast in general, and the rumor that C is "just high-level syntax for assembler" simply hasn't been true for maybe 30 or 40 years.
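
One common way to express that, as a sketch (the volatile qualifier is standard C, but the 1275 remains a guess that needs calibrating against your actual clock): accesses to a volatile object are side effects of the abstract C machine, so the compiler must perform every single one of them and can no longer collapse or delete the loop.

// Sketch: same delay function, but with a volatile loop counter, so
// every read and write of j is an observable side effect that the
// compiler has to keep.
void delay_ms(unsigned int milliseconds) {
  unsigned int i;
  volatile unsigned int j;
  for (i = 0; i < milliseconds; i++) {
    for (j = 0; j < 1275; j++) {
      // body can stay empty; the mandatory accesses to j burn cycles
    }
  }
}

How long one iteration then takes still depends on the code the compiler emits around those accesses, which is exactly why real delay routines tend to be written differently, as described below.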

In actual standard libraries for microcontrollers, you'll often find such wait loops implemented in straight inline assembler. That actually gives you the freedom to "count cycles": you don't know what your for loops get compiled to (even if they do something), and it might change depending on, e.g., how many registers the surrounding code needs. In your case, try an inline assembly NOP where you currently just say // Delay loop.
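
For example, a minimal sketch assuming SDCC (a free compiler commonly used for 8051-family parts like the AT89C2051; inline-assembly syntax is compiler-specific, so Keil C51 or IAR would spell this differently):

// Sketch assuming SDCC's __asm__ syntax. The inline assembly is
// opaque to the optimizer, so the loops around it are kept, and each
// iteration now burns at least one NOP machine cycle.
void delay_ms(unsigned int milliseconds) {
  unsigned int i, j;
  for (i = 0; i < milliseconds; i++) {
    for (j = 0; j < 1275; j++) {
      __asm__ ("nop"); // one 8051 machine cycle per NOP
    }
  }
}

The loop overhead around the NOP still depends on the compiler, so for delays that must be cycle-exact, the whole loop usually ends up in assembly, as those libraries do.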