A program uses 4 bits to represent whole numbers. When the program adds 12 and 6, it reports the result as 0. What kind of error does this produce, and what is the binary equivalent of the sum of 12 and 6?
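
For reference, here is a minimal Python sketch of what the question describes: the true sum 12 + 6 = 18 is 10010 in binary, which needs 5 bits and therefore cannot fit in a 4-bit field (an overflow error). The sketch assumes the program simply discards any bits above the fourth; the exact value a real program reports on overflow depends on how it handles the condition, so the function name and masking behaviour here are illustrative assumptions, not the program in the question.

```python
# Sketch: 4-bit unsigned addition, assuming bits above bit 3 are simply discarded.

def add_4bit(a: int, b: int) -> int:
    """Add two values and keep only the low 4 bits (range 0..15)."""
    full_sum = a + b           # true mathematical sum (18 for 12 + 6)
    return full_sum & 0b1111   # mask off everything above the 4-bit field

if __name__ == "__main__":
    a, b = 12, 6
    print(f"{a} + {b} = {a + b} (binary {a + b:05b})")   # 18 -> 10010, needs 5 bits
    result = add_4bit(a, b)
    print(f"4-bit result: {result} (binary {result:04b})")
```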