Introduction
In mathematics, inclusive OR is preferred to exclusive OR (also called XOR). For example, when you write p V q, which is read "p OR q", this is inclusive OR. The statement is true when p is true, or q is true, or both p and q are true. Inclusive OR is the default meaning of OR, and it turns out to be more useful in logic than exclusive OR.
We often see XOR used in sentences. For example, if someone says "This summer, I am going to London, or I am going to Paris", they seem to suggest that they plan to go to one or the other, but not both. Even though their statement wouldn't be considered wrong if they went to both cities, it suggests very strongly that they intend to go to one city. (Of course, in logic, it's hard to convey the notion "suggests very strongly").
You would think XOR wouldn't be that interesting an operator. It's similar to OR, so why should it be any more interesting than OR?
But it is!
Various Views of XOR
You can think of XOR in many ways. Assume that p and q are Boolean variables, and that p1, p2, ..., pn are Boolean variables as well. Let (+) be the XOR operator (written as a circle with a plus sign inside it). For now, we treat (+) as plain XOR on Boolean values, not bitwise XOR on words; later on, we'll use it as bitwise XOR.
- p (+) q is true if exactly one of p and q is true. This is the conventional definition of XOR.
- p1 (+) p2 (+) ... (+) pn is true if the number of variables with the value true is odd (and is false if that number is even). Notice this definition ignores the variables that are false. This may seem like an odd view of XOR (no pun intended). However, if you believe that XOR is associative (which it is), it's merely an extension of the first definition of XOR.
This definition makes sense if you read the next one.
- p1 (+) p2 (+) ... (+) pn is the same as adding modulo 2. If pi is true, treat it as a 1; otherwise treat it as a 0. Then, the XOR operation is equivalent to:
p1 (+) p2 (+) ... (+) pn == ( p1 + p2 + ... + pn ) % 2
Thus, XOR applied to a bunch of Boolean variables is the same as summing up all the variables' values (where true is 1 and false is 0) and taking the result mod 2. Recall that taking a number mod 2 is one way to determine whether it is even or odd. Since only the variables with value 1 contribute to the sum (0 is the identity for addition), the sum counts how many variables have value 1, and the mod 2 tells you whether that count is odd or even.
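To see this concretely, here is a small C sketch (the array contents are just made-up example values) that folds XOR over a list of 0/1 values and compares it with the sum taken mod 2:
#include <stdio.h>

int main(void)
{
    /* Treat each entry as a Boolean: 1 is true, 0 is false. */
    int p[] = { 1, 0, 1, 1, 0 };
    int n = sizeof(p) / sizeof(p[0]);

    int xor_all = 0;   /* p1 (+) p2 (+) ... (+) pn */
    int sum     = 0;   /* p1 +  p2 +  ... +  pn    */
    for (int i = 0; i < n; i++) {
        xor_all ^= p[i];
        sum     += p[i];
    }

    /* Both print 1 here: three entries are true, and 3 is odd. */
    printf("XOR of all values: %d\n", xor_all);
    printf("Sum mod 2:         %d\n", sum % 2);
    return 0;
}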
Parity Check
People often use XOR as a means of doing a parity check. A bitstring has even parity if the number of 1's in the string is even. It has odd parity if the number of 1's is odd. If you XOR the bits together, you can tell whether a bitstring has even or odd parity. This can be used to verify data sent across a network, where there's some probability a bit may be corrupted. For example, suppose you're sending N bytes across the network from a source location to a destination location.
How can you determine whether the bytes were sent correctly? One way is to use a kind of checksum, which uses XOR. Each byte can be written as b7b6...b0. For each bit position i, XOR together the bits in position i of all the bytes.
If N is 10, so you're transmitting 10 bytes, then create an 11th byte whose bit bi is the XOR of the ith bits of all 10 bytes. This 11th byte is called the checksum, and it is sent across the network along with the data.
At the destination end, where the data is being received, you can independently compute the checksum again and see whether it matches the checksum sent across. If so, then you have some confidence that no bytes were corrupted. If not, then the network has corrupted some bytes, and you may need to retransmit the data.
Clearly the system could have errors. For example, if two bits were flipped, say, bit 3 of bytes 4 and 5, then the checksum would be the same as if they hadn't been flipped, but the data would still contain errors. Nevertheless, it catches some errors (in particular, any single flipped bit).
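Here is a minimal sketch of that checksum idea in C, assuming 10 data bytes; the function name xor_checksum and the byte values are just for illustration:
#include <stdio.h>

/* XOR all the data bytes together; bit i of the result is the XOR
   of bit i of every byte, i.e. the parity of that bit position. */
unsigned char xor_checksum(const unsigned char *data, int n)
{
    unsigned char check = 0;            /* 0 is the identity for XOR */
    for (int i = 0; i < n; i++)
        check ^= data[i];
    return check;
}

int main(void)
{
    unsigned char bytes[10] = { 0x12, 0x34, 0x56, 0x78, 0x9A,
                                0xBC, 0xDE, 0xF0, 0x0F, 0x1E };

    /* Sender computes the extra checksum byte and transmits it too. */
    unsigned char sent_check = xor_checksum(bytes, 10);

    /* Receiver recomputes the checksum over the received bytes and
       compares it with the transmitted one (nothing is corrupted in
       this sketch, so the two match). */
    unsigned char recv_check = xor_checksum(bytes, 10);
    if (recv_check == sent_check)
        printf("checksum matches, data is probably intact\n");
    else
        printf("checksum mismatch, some byte was corrupted\n");
    return 0;
}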
Properties of XOR
Here are several useful properties of XOR. They apply to plain XOR and to bitwise XOR.
- x (+) 0 = x. XORing with 0 gives you back the same number. Thus, 0 is the identity for XOR.
- x (+) 1 = ¬x. XORing with 1 gives you back the negation of the bit; this comes from the truth table. For bitwise XOR, the property is slightly different: x ^ ~0 = ~x. That is, if you XOR with all 1's, the result is the bitwise negation of x.
- x (+) x = 0. XORing x with itself gives you 0. That's because x is either 0 or 1, and 0 (+) 0 = 0 and 1 (+) 1 = 0.
- XOR is associative. That is: (x (+) y) (+) z = x (+) (y (+) z).
You can verify this by using truth tables.
- XOR is commutative. That is: x (+) y = y (+) x.
You can verify this by using truth tables.
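These properties are easy to spot-check in code. Here is a small sketch using C's bitwise XOR on a few arbitrary 32-bit values (the particular values don't matter):
#include <assert.h>
#include <stdio.h>

int main(void)
{
    unsigned int x = 0xDEADBEEF, y = 0x12345678, z = 0xCAFEBABE;

    assert((x ^ 0u) == x);                    /* 0 is the identity        */
    assert((x ^ ~0u) == ~x);                  /* XOR with all 1's negates */
    assert((x ^ x) == 0u);                    /* x cancels itself         */
    assert(((x ^ y) ^ z) == (x ^ (y ^ z)));   /* associativity            */
    assert((x ^ y) == (y ^ x));               /* commutativity            */

    printf("all XOR properties hold for these values\n");
    return 0;
}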
Bitwise XOR
The properties of XOR apply to bitwise XOR. Suppose x and y are 32-bit words. Then x ^ y is basically like performing 32 XORs in parallel on an array of 32 Booleans.
Swapping without "temp"
Here's one of those brain-teasers that you can give to your CS friends. One of the classic problems any CS major should be able to solve is writing code to swap two numbers. Here's how it looks in C:
temp = x ;
x = y ;
y = temp ;
To swap, you introduce a "temp" variable. Its name doesn't have to be "temp", but it is nevertheless an additional temporary variable.
Now ask your friend to solve this without using a temp variable. This means you can ONLY use x and y. This does NOT mean that you name the variable temp2.
How can you do this? If you're thinking "perhaps I can use bitwise XOR", you're right! If you're adventuresome, you can think about how to do this on your own, but if not, here's the answer.
x = x ^ y ;
y = x ^ y ;
x = x ^ y ;
Are you convinced this works? Perhaps not. How could you be convinced this works? The key to convincing yourself is to keep track of the original values of x and y. Let A be the original value of x (that is, the value x has just before running these three lines of code). Similarly, let B be the original value of y.
We can comment each line of code to see what's happening.
// x == A, y == B
x = x ^ y ;
// x == A ^ B, y == B
y = x ^ y ;
// x == A ^ B
// y == (A ^ B) ^ B == A ^ (B ^ B) (by Assoc)
// == A ^ 0 (by z ^ z == 0 property)
// == A (by z ^ 0 == z property)
x = x ^ y ;
// x == ( A ^ B ) ^ A
// == ( A ^ A ) ^ B (by Assoc/Commutativity)
// == 0 ^ B (by z ^ z == 0 property)
// == B (by z ^ 0 == z property)
// y == A
After the second statement has executed, y == A. After the third statement, x == B.
Now it turns out you can do a similar trick using subtraction instead of XOR. Swapping with XOR is slightly safer than swapping with subtraction, because subtraction can overflow. However, overflow usually happens in a "nice" way: it just wraps around, so the swap may still work out fine.
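For comparison, here is a minimal sketch of the subtraction version, using unsigned ints so that any wraparound is well defined in C:
#include <stdio.h>

int main(void)
{
    unsigned int x = 7, y = 42;

    /* Swap without a temp, using subtraction and addition.
       With unsigned ints, overflow simply wraps around mod 2^32,
       so the swap still comes out right. */
    x = x - y;        /* x == A - B            */
    y = x + y;        /* y == (A - B) + B == A */
    x = y - x;        /* x == A - (A - B) == B */

    printf("x = %u, y = %u\n", x, y);   /* prints x = 42, y = 7 */
    return 0;
}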
Writing bitwise XOR without ^
Suppose you wanted to implement bitwise XOR, but didn't have the ^ operator. What would you do? With bitwise AND (&), bitwise OR (|), and bitwise NOT (~), you can do this:
x ^ y == (~x & y) | (x & ~y)
This is the standard definition of XOR found in logic books, applied to bitwise operations.
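As a quick sketch, here is that identity wrapped in a C function (the name my_xor is just for this example):
#include <stdio.h>

/* Bitwise XOR built only from AND, OR, and NOT, using
   x ^ y == (~x & y) | (x & ~y). */
unsigned int my_xor(unsigned int x, unsigned int y)
{
    return (~x & y) | (x & ~y);
}

int main(void)
{
    unsigned int a = 0xF0F0, b = 0x3C3C;
    printf("%X\n", my_xor(a, b));   /* prints CCCC, the same as a ^ b */
    return 0;
}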
Summary
XOR is an interesting operation. With it, you can do parity checks and swaps without temp variables.
Source: https://www.cs.umd.edu/