> But why was the compiler stuff such a difficult technical problem? It seems to me that if the explicit parallelism in EPIC was difficult for compiler vendors to implement... why put that burden on them in the first place? It's not like a good, well-understood solution to this problem didn't already exist: put that burden on Intel instead and give the compiler-writers a simpler target.

What you describe is a bit like what [Transmeta](http://en.wikipedia.org/wiki/Transmeta) tried to do with their Code Morphing Software, which dynamically translated x86 "bytecode" into Transmeta's internal (VLIW) machine code.
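
For illustration, here is a minimal C sketch of the translation-cache idea at the heart of such a dynamic translator. Everything here (the names, the cache size, the trivial "translator") is hypothetical and hugely simplified; the real Code Morphing Software also profiled hot code and re-optimized it aggressively.

```c
#include <stdint.h>
#include <stdio.h>

typedef void (*native_fn)(void);   /* a block of already-translated host code */

static void dummy_block(void) { puts("executing translated block"); }

/* Stub: a real translator would decode one guest (x86) basic block
 * here and emit host machine code for it. */
static native_fn translate_block(uint64_t guest_pc)
{
    (void)guest_pc;
    return dummy_block;
}

#define TCACHE_SIZE 4096
static struct { uint64_t pc; native_fn code; } tcache[TCACHE_SIZE];

/* Pay the (expensive) translation cost once per block, then reuse
 * the cached native code on every later execution of the same PC. */
static native_fn lookup_or_translate(uint64_t pc)
{
    size_t i = pc % TCACHE_SIZE;
    if (tcache[i].pc != pc || tcache[i].code == NULL) {
        tcache[i].pc = pc;
        tcache[i].code = translate_block(pc);   /* slow path */
    }
    return tcache[i].code;                      /* fast path */
}

int main(void)
{
    lookup_or_translate(0x401000)();   /* translated, then run */
    lookup_or_translate(0x401000)();   /* second call hits the cache */
    return 0;
}
```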

As to why Intel failed to make a good enough compiler for IA-64... my guess is that they did not have enough compiler expertise in house (they certainly had some very good compiler experts, but probably not enough of them to form a critical mass), and that their management underestimated the effort needed to make such a compiler.

AFAIK, Intel's EPIC failed because compilation for EPIC is really hard, and also because, as compiler technology slowly and gradually improved, competitors were able to improve their compilers too (e.g. for AMD64) by sharing some of that compiler know-how.
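
To make "really hard" concrete, consider the deliberately simple C loop below. With EPIC, the compiler must decide at compile time which operations to issue in parallel, but it cannot prove that `dst` and `src` never alias, and it cannot know whether the loads will hit the cache (a few cycles) or miss it (hundreds of cycles), so it has to schedule conservatively. An out-of-order AMD64 core answers both questions at run time, on every execution.

```c
/* The compiler must assume a dst[i] store may overwrite data that a
 * later src[j] load reads, so it cannot freely hoist loads above
 * stores or software-pipeline the loop at full machine width. */
void scale(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = k * src[i];
}
```

This is part of why hints like C99's `restrict` exist, but most real code never provides them.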

BTW, I wish AMD64 had been a more RISCy instruction set. It could have been something like POWERPC64 (it probably wasn't because of patent issues, Microsoft's demands at the time, etc.). The x86-64 instruction set architecture is really not a "very good" architecture for compiler writers (but it is somehow "good enough").

Also, the IA-64 architecture has some strong limitations built in. For example, its 3 instructions/word encoding was fine as long as the processor had 3 matching functional units to process them, but once Intel moved to newer IA-64 chips with more functional units, achieving instruction-level parallelism became hard again.
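
For reference, that fixed encoding looks like this: an IA-64 bundle is 128 bits wide and holds exactly three 41-bit instruction slots plus a 5-bit template (which fixes the unit type, M/I/F/B, of each slot and where the "stops" between parallel groups fall). Here is a simplified C sketch that just pulls the fields apart, not a full decoder:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } ia64_bundle;   /* 128 bits as two halves */

static void decode_bundle(ia64_bundle b)
{
    uint32_t tmpl  = (uint32_t)(b.lo & 0x1F);                          /* bits 0..4    */
    uint64_t slot0 = (b.lo >> 5) & 0x1FFFFFFFFFFULL;                   /* bits 5..45   */
    uint64_t slot1 = ((b.lo >> 46) | (b.hi << 18)) & 0x1FFFFFFFFFFULL; /* bits 46..86  */
    uint64_t slot2 = (b.hi >> 23) & 0x1FFFFFFFFFFULL;                  /* bits 87..127 */

    /* The encoding caps each bundle at three instructions, so a core
     * with more than three functional units only stays busy if the
     * compiler manages to pack enough independent bundles together. */
    printf("template=%02x slot0=%011llx slot1=%011llx slot2=%011llx\n",
           tmpl, (unsigned long long)slot0,
           (unsigned long long)slot1, (unsigned long long)slot2);
}

int main(void)
{
    ia64_bundle b = { 0x0123456789ABCDEFULL, 0xFEDCBA9876543210ULL };
    decode_bundle(b);   /* arbitrary bit pattern, just to show the layout */
    return 0;
}
```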

Perhaps RISC-V (which is an open-source ISA) will gradually succeed enough to become competitive with other processors.
