I was asked to do a write-up on this entry, to explain some of the mystery of this code:
int main(int b,char**i){long long n=B,a=I^n,r=(a/b&a)>>4,y=atoi(*++i),_=(((a^n/b)*(y>>T)|y>>S)&r)|(a^r);printf("%.8s\n",(char*)&_);}
The beauty of a one-liner is that, besides doing something interesting in one line of code, it is probably the most accessible of the obfuscated entries in that there is so little code.
This document assumes some familiarity with C and the command line; the other background topics you need are covered by the primers near the end.
Be warned: this write-up is a major spoiler for bin.c.
You are encouraged to compile it, run it, and test it yourself before reading further.
If you cannot compile it, you won't be able to follow this description as well.
If you don't have a C compiler, try this one online.
You will need to figure out how to copy the program and set the compile flags
and the program argument
(hint: compile flags are accessed through the gear icon in the top right corner).
The remainder of this document assumes you have a Posix system, C compiler, and command line.
Compile bin.c
in a Unix shell, on the command line, on a little-endian system (x86, pc):
B=6945503773712347754LL
I=5859838231191962459LL
T=0
S=7
cc -include stdio.h -include stdlib.h -DB=$B -DI=$I -DT=$T -DS=$S -o bin bin.c
Run it like this:
$ ./bin 1
00000001
$ ./bin 2
00000010
$ ./bin 3
00000011
$ ./bin 4
00000100
$ ./bin 8
00001000
$ ./bin 16
00010000
$ ./bin 32
00100000
$ ./bin 64
01000000
$ ./bin 128
10000000
$ ./bin 127
01111111
The program converts the decimal value to its binary (base 2) equivalent for a little-endian CPU. If you use the other set of constants, it converts for a big-endian CPU, using the exact same logic!
How it does this in one line of code, without a loop, is the mystery you need to solve. This mystery takes us down quite the rabbit-hole of assumed knowledge of how computers represent data. In particular, understanding this program requires you to know: binary and hexadecimal notation, how ASCII encodes the digit characters, little-endian versus big-endian memory layout, and C's bit-wise operators and shifts.
This document has a quick primer on each of these, relevant to the description of this program. If you know these, ignore the primers and continue to read.
You should now have enough information to understand the bin.c
program,
and what it does.
But: there is still the obfuscation and the central trick necessary to understand the code.
(See the primer endian-ness for a small program
mem.c
, which clearly demonstrates the memory layout of integers.)
And similar to the trick in mem.c
, the address in memory where the value is stored is
coerced (cast) into the needed form.
I'll start by explaining top-down what the code does,
and then use another tool I developed (and my first winning IOCCC entry), calc.c,
to explore this 2020 entry, bin.c.
Download: calc
Download: calc manpage
Visit: calc judges' hint
If you cannot run calc directly, you can still read along here.
The syntax of calc
is pretty intuitive (although using $ to print the
table of variables is obscure).
Concentrate on how the sub-expressions from bin.c
are used with the variables
to pick apart how the expressions work.
Build calc with this command line:
cc -Wno-parentheses -o calc calc.c
Note, this will be a rather noisy build, since I had to squeeze a lot into a little, thus not writing C in the modern style. However, the code will work on most platforms (except I think it does not work on Windows; I don't have a Windows box to play with).
Given the calc tool, you can now experiment with the bin.c
source code to unravel it.
The named registers in calc are useful here. We will use the same names in calc as in bin.c
(except calc is mono-case, and the code uses B and b -- calc treats these as the same).
In the transcripts below, the lines you type into calc come first; calc's output follows.
$ calc
b=6945503773712347754
i=5859838231191962459
t=0
s=7
# print the non-zero values (name, decimal, hex):
$
b 6945503773712347754 0x606362e22242726a
i 5859838231191962459 0x515253d31373435b
s 7 0x00000007
Using calc, you can try each expression as the code does it. It will be helpful to reformat
bin.c
first, to make each assignment on a separate line.
You can use my winning entry
tac.c
and the companion script
unob.sh
from 2018 to assist here, but a one-liner is easy enough:
split on the commas, semi-colons and braces:
int main( int b,
char** i)
{
long long n = B,
a = I ^ n,
r = (a / b & a) >> 4,
y = atoi(*++i),
_ = (((a ^ n / b) * (y >> T) | y >> S) & r) | (a ^ r);
printf("%.8s\n", (char*)&_);
}
(Notice what the vertically aligned variables spell?)
Again, using calc:
n=b
a=i^n
# print the value of a
a
3544668469065756977 0x3131313131313131
# recall the ASCII digression above, this is the string "11111111"
# very interesting result
# b, the first argument to main, is the number of arguments to this program
# as noted in the remarks, there must be only 1 argument, making the value of b == 2
# but calc does not distinguish upper/lower case, so let's set b as it should be
b=2
a/b
1772334234532878488 0x1898989898989898
a/b&a
1157442765409226768 0x1010101010101010
(a/b&a)>>4
# nb: the leading bit is NOT a 1; hex 0x10 is binary 0001 0000
# so this signed shift works correctly always
72340172838076673 0x0101010101010101
# note: this last value has a leading 0, elided by calc (added here for clarity)
# r is a mask for the low order bits in each byte
r=(a/b&a)>>4
# y is the value of interest, here, let's use 85:
y=85
# now to unwind the long expression...
a^n/b
72198606942111748 0x100804020100804
# remember this result!!! We'll call it x:
x=.
# this result is multiplied by y>>t
y>>t
85 0x00000055
# notice that t=0, so y>>0 is itself
x*y
6136881590079498580 0x552a954aa552a954
# then we or y>>S
y>>s
0 0x00000000
# so that does not change anything
x*y&r
72058693566333184 0x100010001000100
# wow -- something very interesting has happened... but
a^r
3472328296227680304 0x3030303030303030
# see where this is going? if we or these two values together....
(x*y&r)|(a^r)
3544386989794013488 0x3130313031303130
^D
# return to the shell, then use mem.c
$ ./mem 0x3130313031303130
30 31 30 31 30 31 30 31
Remember the ASCII encoding, and the LE orientation: mem.c prints the bytes in memory order, doing the re-orientation for us. Another thing about ASCII digits: to get the human string representation, just remove the leading 0x3 from each byte:
0 1 0 1 0 1 0 1
removing all the spaces:
01010101
Which we have shown to be the binary representation of 85, our input value!
Obviously, there was some magic that happened along the way. And I apologize: this gets a little mind-bending, because the little-endian-ness requires swapping bytes between high and low in your head.
The two constants B
and I
contain three numbers:
B=6945503773712347754
I=5859838231191962459
n=B
a=n^I
# a holds both the 0x30303030... for the ASCII conversion
# and the 0x01010101.... for the bitmask to keep the low bits
a
3544668469065756977 0x3131313131313131
# to get these separated:
r=a/2&a>>4
r
72340172838076673 0x101010101010101
a^r
3472328296227680304 0x3030303030303030
# the magic constant is obtained by B^I^(B/2)
a^n/2
72198606942111748 0x100804020100804
This is just a bit of obfuscation of the 3 constants needed by this code. The un-obfuscated calculation is this:
int64_t tobin(unsigned char c)
{
return 0x3030303030303030LL | (0x101010101010101LL & (0x100804020100804LL * c | c >> 7));
}
We take a long long of all ASCII zeros and OR that with (a mask of the low bit of each byte, ANDed with the computation). In other words, what the computation must do is remove the one-bits not needed for this result, or conversely, keep only those required.
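To see it in action, here is a small test harness of my own (not part of the entry) wrapping tobin(); on a little-endian machine it should echo the binary form of its argument just as bin.c does:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
/* tobin() as above; the harness around it is mine, for experimentation only */
static int64_t tobin(unsigned char c)
{
    return 0x3030303030303030LL | (0x101010101010101LL & (0x100804020100804LL * c | c >> 7));
}
int main(int ac, char **av)
{
    int64_t v = tobin(ac > 1 ? (unsigned char)atoi(av[1]) : 85);
    printf("%.8s\n", (char *)&v);   /* print the 8 ASCII bytes, as bin.c does */
    return 0;
}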
NB: In this discussion, we will be freely moving between hex and binary. And assuming familiarity with LE and BE memory mappings. If you need, look at the primers for these before continuing.
The mapping between hex and binary is simple:
0 0000 8 1000
1 0001 9 1001
2 0010 A/10 1010
3 0011 B/11 1011
4 0100 C/12 1100
5 0101 D/13 1101
6 0110 E/14 1110
7 0111 F/15 1111
It is important to recall that each nibble (one hex character) is composed of 4 binary bits as in the table above. Thus, the hex value 0x1 is really the bit pattern 0001, and the hex pattern 0x8 is bit pattern 1000.
To motivate this discussion, let's convert this input to ASCII:
1001 1101
Recall from the previous section that the required result only needs to be correct in the lowest bit of each byte, so the mask (x&r) and the ASCII-fy step | (a^r) will give the desired output.
For a little endian machine, the result must be in reverse order from memory; that is, the most significant byte in the result will represent the least significant bit in the input value.
The first question is what multiplication value will get the LSb (least significant bit)
into the MSB (most significant byte)?
We need to multiply the value 1 (0000 0001
) by something
to get the result 0x.1.. .... .... ....
--
clearly, this should be the constant value:
0x0100 0000 0000 0000 result 1
Another way to look at this is to shift the lowest bit into the desired location. Shifting left is multiplication by powers-of-2.
0x0100 0000 0000 0000 target 1
0x0000 0000 0000 0001 input 1
^^ ^^^^ ^^^^ ^^^ 13 nibbles of 0
There are 13 nibbles (groups of 4 bits) of 0 between input and target. This is 13*4 = 52 bits of shift. Plus, we need to left shift 0001 (binary) into 10000. Why 10000? Because we are shifting left, and we need to get this bit into the lowest bit of the *next* byte. This is 4 more shifts:
0000 0001 original
0000 0010 shift 1
0000 0100 shift 2
0000 1000 shift 3
0001 0000 shift 4
So this is a left shift of 52+4 = 56 bits
1<<56 = 0x0100 0000 0000 0000 result 1
The remainder of this document will follow this second line of reasoning.
The second lowest bit in the input should map to the next byte in the result.
That is, 2 (0000 0010
) will need to be multiplied by something to result in:
0x0001 0000 0000 0000 target 2
This is the bit shift:
0x0001 0000 0000 0000 <- 0x0000 0000 0000 0002
Align these vertically:
0x0001 0000 0000 0000 target 2
0x0000 0000 0000 0002 input 2
^^^^ ^^^^ ^^^
There are 11 nibbles of 0 between input and target. This is 44 bits of shift. The hex value 0x2 is binary 0010. We need to left shift 0010 into 10000, that is 3 more bits. Thus, shifting 47 places should do it:
1<<47 = 0x0000 8000 0000 0000 result 2
We OR results 1 and 2 together:
0x0100 8000 0000 0000
The value we are ultimately looking for is:
0x0100 8040 2010 0804
So this is working as desired. Using calc:
z=0x0100800000000000 z*0&r # no input bits
0 0x00000000
z*1&r # lowest input bit
72057594037927936 0x100000000000000
z*2&r # second lowest input bit
281474976710656 0x1000000000000
z*3&r # both lowest input bits
72339069014638592 0x101000000000000
z*4&r # unmapped as of yet, so no impact
0 0x00000000
And it computes correctly for the bits it maps (the lowest 2 bits, values 0-3).
The 3rd bit: we need to map 4 (0000 0100) (bit 3) into
0x0000 0100 0000 0000
Another bit shift, this time from
0x0000 0100 0000 0000 target 3
0x0000 0000 0000 0004 input bit 3
^^ ^^^^ ^^^
There are 9 nibbles, plus 2 bits (10000 <- 0100), so shift by 9*4+2: 38 bits:
1<<38 = 0x0000 0040 0000 0000 result 3
Or result 3 into the final:
0x0100 8040 0000 0000
There is a pattern here.
The lowest bit needed a 56-bit shift, then 47 bits, then 38 bits. The difference is 9 shifts between each.
What if we short-cutted all this work and used this pattern: 56,47,38,29,20,11,2?
1<<56 | 1<<47 | 1<<38 | 1<<29 | 1<<20 | 1<<11 | 1<<2
72198606942111748 0x100804020100804
Regrouping this result:
0x0100 8040 2010 0804
This is the magic constant, created by individually mapping each input bit into the lowest bit of each successive byte in the 64-bit result.
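If you would rather check this in C than in calc, a throw-away sketch like the following (mine, not part of the entry) rebuilds the constant from the shift pattern:
#include <stdio.h>
int main(void)
{
    /* shifts 56,47,38,29,20,11,2 -- each 9 less than the previous */
    unsigned long long magic = 0;
    int shift;
    for (shift = 56; shift >= 2; shift -= 9)
        magic |= 1ULL << shift;
    printf("0x%llx\n", magic);   /* expect 0x100804020100804 */
    return 0;
}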
However, this is only 7 bits; what of the most significant bit? We cannot continue the pattern because (2-9 = -7).
The result was built up from shifting the target bits into the next lower byte in the result. If we consider the value 128 (1000 0000), and where this bit must land, it is a right shift of 7 of the input value OR'd into the result:
result * input | input >> 7
Recognize this from the expression? The LE constant T is 0, S is 7.
(((a^n/b)*(y>>T)|y>>S)&r)
Simplifying this expression, knowing that y is the input, y>>0 == y, and substituting x for a^n/b:
x*y|y>>7
Can we do the same for a big-endian constant? Certainly.
Where LE machines need to map the lowest bits into the highest bytes, and the highest bits into the lowest bytes (so that when printed from memory, the LE byte order comes out correct), BE machines do not need this complexity.
We are mapping the bits of the input directly: the highest bit into the highest byte, the next highest bit into the next highest byte, and so on.
0x0100 0000 0000 0000 target 1
0x0000 0000 0000 0080 input 1
^^ ^^^^ ^^^^ ^^
As before, this is 12 nibbles (12*4=48) plus 1 bit to get 10000 <- 1000, or 49 bits:
1<<49 = 0x0002 0000 0000 0000
The next is:
0x0001 0000 0000 0000 target 2
0x0000 0000 0000 0040 input 2
^^^^ ^^^^ ^^
This is 10 nibbles (10*4=40) plus 10000 <- 0100 (2 bits) is 42 bits:
1<<42 = 0x0000 0400 0000 0000
The third is:
0x0000 0100 0000 0000 target 3
0x0000 0000 0000 0020 input 3
^^ ^^^^ ^^
This is 8 nibbles plus 3 (10000 <- 0010), or 35 bits:
1<<35 = 0x0000 0008 0000 0000
Again the shifts decrease by a constant step: 49-42=7, 42-35=7.
So, the pattern here is 49,42,35,28,21,14,7,0:
1<<49 | 1<<42 | 1<<35 | 1<<28 | 1<<21 | 1<<14 | 1<<7 | 1<<0
567382630219905 0x2040810204081
0x0002 0408 1020 4081
This seems perfect! All 8 bits mapped, test values 1,2,3,127,128,250,254 all work correctly. But 255 yields completely the wrong value!
There is a subtle numeric limit problem here.
127 * 0x2040810204081
yields the correct result, but look at the result before masking:
127 * 0x2040810204081
0x00ff ffff ffff ffff
This does not happen for the other "saturated" values (all low-order bits set):
127 0x00ff ffff ffff ffff
63 0x007e fdfb f7ef dfbf
31 0x003e 7cf9 f3e7 cf9f
15 0x001e 3c78 f1e3 c78f
7 0x000e 1c38 70e1 c387
3 0x0006 0c18 3060 c183
Note carefully that the next un-saturated value is the next bit position (3+1 = 4, 7+1 = 8, 15+1 = 16, ...). The calculation is stable from inputs 0-254. At 255, there is a cascade byte overflow:
255 0x0202 0408 1020 407f
0x0101 0101 0101 0101 AND
---------------------
0x0000 0000 0000 0001 = 00 00 00 01 = 0000 0001 = 1 !!!
The problem may be more visible if you consider 254:
254 0x01ff ffff ffff fffe
0x0101 0101 0101 0101 AND
---------------------
0x0101 0101 0101 0100 = 11 11 11 10 = 1111 1110 = 254
The problem is that while we *are* mapping the input bits into the correct bytes, at 254 we cannot add any other value without perturbing neighbor bits. Why is this? Because the bit-spacing is only 7 bits, not 9 as in LE.
To fix this, we halve the input value and double the constant. This avoids the problem at 255.
2 * 0x2040810204081 = 0x0004 0810 2040 8102
But halving the input drops the lowest bit!
The insight here is that simply OR-ing in the original input preserves the lowest bit.
0x0004 0810 2040 8102
T=1
S=0
a^n/b*(y>>T)|y>>S
For big-endian, T=1
and S=0
. If x=a^n/b
, this becomes:
x*(y>>1)|y
Thus, for y=1,2,3,4,5,128:
x*( 1>>1)|1 = 0x0000 0000 0000 0001 & r = 0x0000 0000 0000 0001
x*( 2>>1)|2 = 0x0004 0810 2040 8102 & r = 0x0000 0000 0000 0100
x*( 3>>1)|3 = 0x0004 0810 2040 8103 & r = 0x0000 0000 0000 0101
x*( 4>>1)|4 = 0x0008 1020 4081 0204 & r = 0x0000 0000 0001 0000
x*( 5>>1)|5 = 0x0008 1020 4081 0205 & r = 0x0000 0000 0001 0001
x*(128>>1)|128 = 0x0102 0408 1020 4080 & r = 0x0100 0000 0000 0000
So what of 254 and 255?
x*(254 >> 1) = 0x01ff ffff ffff fffe
x*(255 >> 1) = 0x01ff ffff ffff fffe
And by OR-ing in the value itself, the value's low bit becomes the result's low bit.
That is, for 254 (hex 0xfe), the final byte 0xfe remains itself.
And 255 is hex 0xff, so it becomes 0x01ff ffff ffff ffff, and thus no overflow.
This explains why big-endian fails at 256, whereas little-endian fails at 511. Bonus points for you if you understand all this.
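Pulling the big-endian pieces together, an un-obfuscated sketch analogous to tobin() above might look like this (the name tobin_be is mine; the entry itself only swaps the compile-time constants):
#include <stdint.h>
/* halve the input, multiply by the doubled constant, then OR the input
 * back in to preserve the lowest bit */
static int64_t tobin_be(unsigned char c)
{
    return 0x3030303030303030LL |
           (0x0101010101010101LL & (0x0004081020408102LL * (c >> 1) | c));
}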
You are invited to explore the multiplication in a bit more detail. Once you hit upon the realization of how multiplication with the one-bits arranged actually distributes the original value across all the bytes, you will have understood the central insight into this program.
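One quick way to do that exploration (a disposable sketch of mine) is to print the raw product before any masking; because the set bits of the constant are spaced 9 apart, the partial products never collide, and the low bit of each successive byte ends up holding one bit of the input:
#include <stdio.h>
int main(void)
{
    unsigned long long x = 0x100804020100804ULL;   /* the LE magic constant */
    unsigned long long y;
    for (y = 1; y <= 128; y <<= 1)
        printf("%3llu * x = 0x%016llx\n", y, x * y);
    return 0;
}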
I was asked if this concept could be done in JavaScript.
I do not think so.
The second piece of the puzzle is taking the address of the calculated result,
and printing the first 8 characters of that.
JavaScript does not do pointers and addresses and big/little endian like C does.
The math is portable, and the value would be correct in JavaScript, but
using the single printf
on the address of this value is not portable to JavaScript.
You are familiar with this, even if the schooling is long lost. But the central concept of place-value is common to all the representations, so if you do not grok binary or hex, this is likely the best place to start.
The familiar decimal numbers are represented by digits with increasing powers-of-10. The decimal number 1,010,101 is really the summation of place-values:
digits: 0 1 0 1 0 1 0 1
power10: 10^7 10^6 10^5 10^4 10^3 10^2 10^1 10^0
pow10dec: 10,000,000 1,000,000 100,000 10,000 1000 100 10 1
Recall: N^0 power is 1. This is understood as:
0*10,000,000 + 1*1,000,000 + 0*100,000 + 1*10,000 + 0*1000 + 1*100 + 0*10 + 1*1
0 + 1,000,000 + 0 + 10,000 + 0 + 100 + 0 + 1
or
one million, ten thousand, one hundred, one
We become so used to doing this, that de-composing a number is unconscious. However, this decomposition is obvious when written out. Binary and hexadecimal (hex) values are interpreted in an analogous manner.
The word binary describes the digits: one of two states, on or off, true or false, 1 or 0. Just like in the decimal system, the places have values. In base 10:
76543210 (76,543,210)
This value gets large with fewer places than binary because each digit place can take any of 10 values: 0,1,2,3,4,5,6,7,8,9. Binary only has 0 and 1, thus the same number of digits represents a much smaller decimal equivalent:
01010101
This is the hexadecimal value 0x55, the decimal value of 85. Let's pick it apart:
We conventionally number digit places from the right to the left. The lowest binary digit (bit) is value 1, which is 1*2^0. The next digits follow as numbered above, in increasing powers-of-2:
binary: 0 1 0 1 0 1 0 1
power2: 2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
decimal: 128 64 32 16 8 4 2 1
To convert from binary to decimal, just add the decimal values:
0*128 + 1*64 + 0*32 + 1*16 + 0*8 + 1*4 + 0*2 + 1*1
0 + 64 + 0 + 16 + 0 + 4 + 0 + 1 = 85
Working the other direction, we just divide the decimal value by decreasing powers-of-2:
85 / 128 = 0
85 / 64 = 1, remainder 21
21 / 32 = 0
21 / 16 = 1, remainder 5
5 / 8 = 0
5 / 4 = 1, remainder 1
1 / 2 = 0
1 / 1 = 1, remainder 0
The binary value read top to bottom is 0,1,0,1,0,1,0,1. As in decimal, group the bits together, conventionally in groups of 4 or 8:
0101 0101 or 01010101
At this point, it should be clear what bin.c
does: it converts decimal to binary equivalents.
It would be a good exercise to write the code that converts 8 bits of binary to decimal.
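If you want to compare notes, one possible (entirely unofficial) solution to that exercise is:
#include <stdio.h>
/* convert an 8-character binary string back to its decimal value by
 * accumulating place values from left to right */
int main(void)
{
    const char *bits = "01010101";
    int value = 0, i;
    for (i = 0; bits[i] == '0' || bits[i] == '1'; ++i)
        value = value * 2 + (bits[i] - '0');
    printf("%s = %d\n", bits, value);   /* prints: 01010101 = 85 */
    return 0;
}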
This is both easier and harder than binary. It is easier because there are still fewer digit places to deal with, and harder because it uses more digit symbols than the familiar decimal system. The values in hex run from 0-15, but to prevent confusion when writing them down, the values 10-15 are represented using the letters ABCDEF:
A=10 B=11 C=12 D=13 E=14 F=15
As above, hexadecimal runs in power-of-16 places. The hex value of 0x55 is:
hex: 5 5
power16: 16^1 16^0
decimal: 16 1
To convert from hex to decimal, just add the decimal values:
5*16 + 5*1
80 + 5 = 85
Another example: the hex value 0x3031:
hex: 3 0 3 1
power16: 16^3 16^2 16^1 16^0
decimal: 4096 256 16 1
To convert from hex to decimal:
3*4096 + 0*256 + 3*16 + 1*1
12288 + 0 + 48 + 1 = 12337
Finally, using the A-F values: 0xBEEF:
B=11, E=14, F=15
hex: B E E F
power16: 16^3 16^2 16^1 16^0
decimal: 4096 256 16 1
To convert from hex to decimal:
11*4096 + 14*256 + 14*16 + 15*1
45056 + 3584 + 224 + 15 = 48879
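These conversions are easy to double-check from C; the standard strtol() with base 16 does the work (a small sketch of mine, nothing to do with bin.c itself):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    printf("%ld\n", strtol("55",   NULL, 16));   /* 85 */
    printf("%ld\n", strtol("3031", NULL, 16));   /* 12337 */
    printf("%ld\n", strtol("BEEF", NULL, 16));   /* 48879 */
    return 0;
}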
The ASCII character set is a proper subset of UTF-8, and is in use by most commercial computers.
The ASCII character set is composed of 128 characters (values 0-127, 7 bits of information), and represented in an 8-bit byte in modern computers, with the high bit cleared (zero).
The ASCII digits 0,1,2,3,4,5,6,7,8,9 have the hex values:
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39
Notice the pattern? Essentially, the ASCII digits are 0x30 or'd with the digit in decimal.
So the ASCII string "01010101" is stored in memory as:
0x30 0x31 0x30 0x31 0x30 0x31 0x30 0x31
If this value is a string, this is the order of the digits on any modern computer, regardless of its memory endian-ness.
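You can see the OR trick directly with a couple of lines of C (my own illustration):
#include <stdio.h>
int main(void)
{
    int d;
    /* '0' is 0x30, and its low 4 bits are zero, so '0' | d is the ASCII digit */
    for (d = 0; d <= 9; ++d)
        printf("%d -> 0x%02x '%c'\n", d, '0' | d, '0' | d);
    return 0;
}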
For more detail: Wikipedia: ASCII
Personally, I prefer man ascii
on a Posix/Unix/Linux computer.
If this value is a number, then there exist two primary arrangements of the bytes in memory: little-endian, the most prevalent, and likely the computer (x86) you are using; and big-endian, the one-true memory representation, used as the order for the internet, and in portable data stores, and elsewhere.
Greatly simplified, CPU memory is arranged in bytes. But CPU registers are typically 64 bits (8 bytes) wide, So there are 2 primary ways to lay down the registers in memory: forwards and backwards. Assume we have the 64-bit hexadecimal value:
0x0001020304050607
Since CPU memory is addressed in bytes, little-endian (LE) would store this in memory starting at offset 0:
offset: 0 1 2 3 4 5 6 7
LE: 0x07 0x06 0x05 0x04 0x03 0x02 0x01 0x00
BE: 0x00 0x01 0x02 0x03 0x04 0x05 0x06 0x07
Notice big-endian (BE) is exactly the opposite direction, with the most significant byte in the lowest memory address, and the least significant byte in the highest memory address.
This memory arrangement is central to the operation of bin.c
. If you wish to see how memory
is stored on your computer, this small program will let you see this directly:
// mem.c
#include <stdio.h>
#include <stdlib.h>
int main(int ac, char **av)
{
union {
unsigned long long v;
unsigned char buf[8];
} u;
unsigned int i;
u.v = 0x0001020304050607ULL;
// Allow the use of any other value, if given on the command line
// Try: 0x0102 or 0xBEEF or 0x0123456789abcdef
if (ac > 1)
u.v = strtoull(av[1], 0, 0);
// Pick out each byte of the value from low-to-high addresses
for (i = 0; i < 8; ++i)
printf("%02x ", u.buf[i]);
printf("\n");
// Print out each byte in its ASCII representation:
printf("%.8s\n", u.buf);
return 0;
}
Compile and run this:
cc -o mem mem.c
./mem
./mem 0x30313233
If the program without arguments prints
07 06 05 04 03 02 01 00
your CPU is little-endian. Otherwise, it is big-endian:
00 01 02 03 04 05 06 07
Notice that big-endian is the way humans usually think of a number, and the way a program represents the value in source code. The compiler will handle the memory order needed for the CPU when it compiles the source code.
For more detail: Wikipedia: Endianness
The bit-wise logical operators in C are: OR (|), XOR (^), and AND (&). It is also possible to left- and right-shift values (there is no rotate operator). The truth tables for these 3 follow:
OR | 0 1
---+---------
0 | 0 1
1 | 1 1
That is: if any bit is set, the result is 1. Else 0.
AND| 0 1
---+---------
0 | 0 0
1 | 0 1
That is: if both bits are set, the result is 1. Else 0.
XOR| 0 1
---+---------
0 | 0 1
1 | 1 0
That is: if the bits are different, the result is 1. Else 0.
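If a concrete example helps, this tiny program (mine) applies all three operators to the byte 1010 1001 used in the shift examples below and to 0101 0101:
#include <stdio.h>
int main(void)
{
    unsigned a = 0xA9, b = 0x55;        /* 1010 1001 and 0101 0101 */
    printf("a|b = 0x%02x\n", a | b);    /* 0xfd = 1111 1101 */
    printf("a&b = 0x%02x\n", a & b);    /* 0x01 = 0000 0001 */
    printf("a^b = 0x%02x\n", a ^ b);    /* 0xfc = 1111 1100 */
    return 0;
}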
Shifting is simply that: shifting the bits left or right, inserting zero bits. Left shift is easy: add N zero bits to the end, remove the first N bits.
1010 1001
left shift 3 (n<<3):
1010 1001 000 add 3 zeros to end...
0 1001 000 remove 3 bits from front...
0100 1000 repackage the bits into canonical format
If you left-shift more places than the width of the register, the result is 0. If you left-shift 0 places, the value is unchanged.
Right shift is only slightly more complex; add N copies of the high bit to the front, remove the last N bits (signed shift):
1010 1001
right shift 3 (n>>3)
111 1010 1001 add 3 ones to the front (because the high bit is 1)...
111 1010 1 remove 3 bits from the end...
1111 0101 repackage the bits into canonical format
If you right-shift signed values more places than the width of the register, the result is all copies of the sign bit: -1 (all 1 bits) if the value is negative, 0 otherwise. If you right-shift 0 places, the value is unchanged.
If you shift unsigned values, then always add zero bits to the front:
1010 1001
unsigned right shift 3 (n>>3):
000 1010 1001 add 3 zeros...
000 1010 1 remove 3 bits from the end....
0001 0101 repackage
If you right-shift UNsigned values more places than the width of the register, the result is 0. If you right-shift an UNsigned value by the register width minus 1, the highest bit moves to the lowest bit:
1000 0000
right shift 7 (n>>7)
0000 000 1000 0000
0000 000 1
0000 0001
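In C, whether a right shift is signed or unsigned follows from the operand's type; here is a small sketch (assuming a typical compiler where signed right shift is arithmetic, which the C standard leaves implementation-defined):
#include <stdio.h>
int main(void)
{
    signed char   s = (signed char)0xA9;   /* 1010 1001, a negative value */
    unsigned char u = 0xA9;                /* 1010 1001 */
    printf("signed   >> 3 = 0x%02x\n", (unsigned char)(s >> 3));   /* 0xf5 = 1111 0101 */
    printf("unsigned >> 3 = 0x%02x\n", u >> 3);                    /* 0x15 = 0001 0101 */
    printf("unsigned >> 7 = 0x%02x\n", u >> 7);                    /* 0x01 = 0000 0001 */
    return 0;
}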