
There are questions here and on the Electrical Engineering Stack Exchange with similar titles, but they do not quite address the part that is bugging me. I hope I can explain it clearly and that my question is not hopelessly naive.

Suppose that I have a computer program that takes two inputs: a boolean input condition and a more interesting input x. Upon receiving these inputs it does this:

if (condition == TRUE) {
    perform hideously complicated, 24-hour computation on x, resulting in one-bit output y
    return y
} else {
    perform a different, equally time-consuming but completely unrelated computation on x, resulting in one-bit output z
    return z
}

I assume (correct me if I'm wrong) that modern compilers have a way of making sure that only the computation whose answer (y or z) was actually asked for is performed, so the program spends only 24 rather than 48 hours.

HOWEVER. When I try to imagine how I would build this program from AND, OR, and NOT gates, I cannot come up with any other way than doing this:

Take x
Split it into two copies of itself
Lead one copy through a complicated circuit computing y
Lead the other copy through a different but equally complicated circuit computing z
return: (condition AND y) OR (NOT(condition) AND z)

In other words: unless I'm overlooking some brilliant idea, in the world of logic gates we are forced to always perform BOTH computations.
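
For concreteness, here is that gate construction written out bitwise in C (purely illustrative; f and g are hypothetical stand-ins for the two complicated circuits, and both of them still run):

/* Stand-ins for the two hideously complicated circuits (hypothetical). */
static unsigned f(unsigned x) { return x & 1u; }
static unsigned g(unsigned x) { return (x >> 1) & 1u; }

/* Both computations run; the condition only selects which result
   reaches the output, exactly like the gate construction above. */
unsigned select_bit(unsigned condition, unsigned x) {
    unsigned y = f(x);                      /* complicated circuit computing y */
    unsigned z = g(x);                      /* complicated circuit computing z */
    unsigned mask = condition ? ~0u : 0u;   /* replicate the condition bit */
    return (mask & y) | (~mask & z);        /* (condition AND y) OR (NOT condition AND z) */
}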

But if CPUs are built up from logic gates, how do compilers avoid always doing both computations?

  • Do you know about branching and the program counter? You are thinking too low level. You need to think on the scale of micro-ops and CPU instructions. – Commented Aug 24, 2024 at 15:50
  • No, I know nothing, but I want to learn! Could you perhaps develop your comment into an answer? – Commented Aug 24, 2024 at 15:52
  • I don't know what your background knowledge is on hardware-level computing. Your best bet is to read a computer architecture book that starts from the ground up, meaning from logic gates, to logic components like adders and counters, then CPU components. Something on digital logic like nand2tetris.org (I've never tried it). – Commented Aug 24, 2024 at 16:26
  • 'else' is NOT; 'if' is any combination of boolean functions (OR, AND, XOR, and others). – Commented Aug 24, 2024 at 18:08
  • You don't need an answer here, you need an electronics course. A typical two-year school-level course (on the level of GCSEs in the UK; google them if you're not from there) should give you enough of the basics. – Commented Aug 25, 2024 at 11:12

2 Answers


You are thinking at too low a level (the transistor/gate level); it's better to understand this at the CPU level. The CPU executes instructions, which are small commands like "add register 1 to register 2". These instructions are stored in program memory, and the CPU has lots of dedicated circuitry to decode them. Each instruction is decoded into micro-ops like "move register data into the accumulator, move it to a new register, increment the program counter".

The CPU is always loading and executing the instruction pointed to by the program counter (you can think of it as a settable counter with memory, built from clocked D flip-flops). Conditionals are handled by branch instructions, which do something like "if a special flag register is set, jump to another instruction; otherwise continue to the next instruction". Jumping is simply changing the program counter.

Here's some pseudo-assembly to show what a compiler could produce (check out godbolt.org for actual compiler output!):

cmp r1, r2  # compare r1 and r2, set flags
bne L1      # if the equal flag is false, branch to the address labelled L1; otherwise keep executing
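
To see the whole mechanism in one place, here is a toy fetch-decode-execute loop in C. The opcodes and encoding are invented for this sketch (not any real ISA); the point is that a branch only changes pc, so the instructions on the not-taken path are never even fetched:

#include <stdio.h>

/* Toy instruction set, invented purely for illustration. */
enum { OP_SET, OP_BEQ, OP_JMP, OP_PRINT, OP_HALT };

typedef struct { int op, a, b, c; } Insn;  /* opcode + up to three operands */

int main(void) {
    int reg[4] = {0};
    Insn program[] = {
        { OP_SET,   0, 5,   0 },  /* 0: r0 = 5                        */
        { OP_SET,   1, 5,   0 },  /* 1: r1 = 5                        */
        { OP_BEQ,   0, 1,   6 },  /* 2: if r0 == r1, jump to pc = 6   */
        { OP_SET,   2, 111, 0 },  /* 3: "else" path: r2 = 111         */
        { OP_PRINT, 2, 0,   0 },  /* 4:                               */
        { OP_JMP,   8, 0,   0 },  /* 5: skip over the "then" path     */
        { OP_SET,   2, 222, 0 },  /* 6: "then" path: r2 = 222         */
        { OP_PRINT, 2, 0,   0 },  /* 7:                               */
        { OP_HALT,  0, 0,   0 },  /* 8:                               */
    };

    int pc = 0;  /* the program counter */
    for (;;) {
        Insn in = program[pc++];  /* fetch, then increment by default */
        switch (in.op) {
        case OP_SET:   reg[in.a] = in.b;                       break;
        case OP_BEQ:   if (reg[in.a] == reg[in.b]) pc = in.c;  break;
        case OP_JMP:   pc = in.a;                              break;
        case OP_PRINT: printf("r%d = %d\n", in.a, reg[in.a]);  break;
        case OP_HALT:  return 0;
        }
    }
}

Running this prints only "r2 = 222": the branch at pc = 2 steers the program counter past the "else" path, so that path costs nothing.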

Honestly, there is so much to explain here that I can't do it adequately; you'll have to read a proper book on CPU architecture. The image below shows a highly simplified MIPS CPU pipeline. Modern processors are way, way more complicated than this.

[Image: diagram of a simplified MIPS CPU pipeline]


Each CPU has a register called either the "program counter" or the "instruction pointer" (different names, same thing), which holds the address of the next instruction to perform. The CPU reads the instruction from memory at this address, then does whatever it says, over and over and over again until you turn the power off. Normally the program counter is incremented after each instruction, so it points to the next instruction's address; that way, many instructions can be written in a sequence.

An "if" statement changes the program counter to a completely different address, only if the condition is true. There is an instruction that says: if condition == TRUE then set the program counter to the address of perform hideously complicated, 24 hour computation on x resulting in one-bit output y. Otherwise, the program counter is incremented to the next instruction by default, which will be perform different, equally time-consuming but completely unrelated computation on x resulting in one-bit output z. (It may also be the other way around, where the program counter is changed if the condition is false. This is up to the compiler and makes no real difference.)

Other instructions that change the program counter include, for example, goto, return, loops, and function calls.


If you want to build the same thing from combinational logic then you are correct: the easiest way is to build both circuits and then select the output.

However, in some cases it may be possible to reuse the same circuit in both paths by selecting the input. For example if one side of the if-statement calculates "i+1" and the other one calculates "j+1", then you only need one "+1" circuit, and another circuit to choose either i or j as the input. A CPU is an extreme case of this, since every instruction loops back through the same set of circuits - the same adder circuit gets used for every add instruction in the program. Such extreme reuse is possible because a CPU is not combinational.
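
Here is that contrast in C (the ?: operator just plays the role of a mux; the function names are made up for illustration):

/* Output selection: build both "+1" circuits and mux the results
   (the combinational picture from the question). */
int output_select(int cond, int i, int j) {
    int y = i + 1;            /* first "+1" circuit  */
    int z = j + 1;            /* second "+1" circuit */
    return cond ? y : z;      /* mux on the outputs  */
}

/* Input selection: one shared "+1" circuit, mux on the inputs. */
int input_select(int cond, int i, int j) {
    int operand = cond ? i : j;   /* mux chooses i or j        */
    return operand + 1;           /* single "+1" circuit reused */
}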

It may also be possible to save power by turning off the circuits that you don't need.

  • A CPU is not just combinational; it has memory, like a giant FSM. That's what gives it so much computing ability. – Commented Aug 24, 2024 at 17:46
