Okay, I may get downvoted for this hackneyed subject of redundant prefixes, but I will post it anyway. Consider this byte sequence:
0x66, 0x66, 0x8B, 0xC0
Intel CPU, 32-bit mode, Visual Studio disassembler output:
addr: ?? ??
addr + 1: mov ax, ax
When I single-step this byte sequence, the disassembler output turns out to match the actual CPU behavior: EIP is incremented by 1 and then by 3. From a high-level point of view, this byte sequence can be considered one instruction with a redundant operand-size override prefix; however, I think that at a lower level it can be considered two instructions.
Intel CPU, 64-bit mode, Visual Studio disassembler output:
addr: mov ax, ax
In this case RIP is incremented by 4.
Obviously the Intel manual does not cover this, so the question arises: how does the disassembler know the actual CPU behavior? I can think of two possibilities:
1) Microsoft developers had unofficial information from Intel.
2) The disassembler is written so that when it sees redundant prefixes, it runs the instruction in question on the CPU and compares the EIP/RIP increment to the computed instruction size. If they match, the instruction is displayed normally; otherwise it emits ??. But this seems over-complicated.
I know that output can vary across different disassemblers and we cannot trust it. But my argument is that single-stepping is implemented by the CPU itself, and the debugger cannot lie about the current value of the instruction pointer.