commit f9d741e32c6f1629ce70eefc68d3363fa1cfd696
author: Vladimir Marko <vmarko@google.com> | Fri Nov 20 15:08:11 2015 +0000
committer: Vladimir Marko <vmarko@google.com> | Fri Nov 20 16:18:39 2015 +0000
tree: 409005e5b1d01d2830c20421f8466125e110d6af
parent: beb709a2607a00b5df33f0235f22ccdd876cee22
Optimizing/ARM: Improve long shifts by 1.

Implement long Shl(x,1) as LSLS+ADC, Shr(x,1) as ASR+RRX and
UShr(x,1) as LSR+RRX.

Remove the simplification substituting Shl(x,1) with ADD(x,x) as it
interferes with some other optimizations instead of helping them.
And since it didn't help 64-bit architectures anyway, codegen is the
correct place for it. This is now implemented for ARM and x86, so
only mips32 can be improved.

Change-Id: Idd14f23292198b2260189e1497ca5411b21743b3
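The instruction pairs above work because the carry flag transports the bit that crosses the word boundary between the two 32-bit halves of a long. A minimal sketch (not ART code; the helper names are hypothetical) that simulates each two-instruction sequence on a `(lo, hi)` register pair:

```python
MASK32 = 0xFFFFFFFF

def shl1(lo, hi):
    """Shl(x,1): LSLS lo, lo, #1 ; ADC hi, hi, hi."""
    # LSLS shifts the low word left and sets carry to its old bit 31.
    carry = (lo >> 31) & 1
    lo = (lo << 1) & MASK32
    # ADC hi, hi, hi computes hi + hi + carry, i.e. (hi << 1) | carry-in.
    hi = (hi + hi + carry) & MASK32
    return lo, hi

def shr1(lo, hi):
    """Shr(x,1): ASRS hi, hi, #1 ; RRX lo (flag-setting ASR assumed)."""
    # Arithmetic shift of the high word; carry takes its old bit 0.
    carry = hi & 1
    sign = (hi >> 31) & 1
    hi = ((hi >> 1) | (sign << 31)) & MASK32
    # RRX shifts right by one and inserts the carry into bit 31.
    lo = ((carry << 31) | (lo >> 1)) & MASK32
    return lo, hi

def ushr1(lo, hi):
    """UShr(x,1): LSRS hi, hi, #1 ; RRX lo (flag-setting LSR assumed)."""
    carry = hi & 1
    hi = hi >> 1
    lo = ((carry << 31) | (lo >> 1)) & MASK32
    return lo, hi
```

Each sequence is two instructions with no extra temporaries, which is why it beats the generic variable-shift code path for a shift amount of 1.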