What is a Number System and the Binary Number System?

A number system is a mathematical notation that represents numbers using symbols or digits. The most commonly used number system is the decimal system, which uses ten digits (0-9) and positional notation to represent any number.

Other examples of number systems include binary (base-2), octal (base-8), and hexadecimal (base-16). Each number system has its own set of rules and symbols for representing numbers, and they are used in various fields such as computer science, engineering, and mathematics. Understanding the properties and operations of different number systems is essential for many applications, including digital electronics and computer programming.
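
As a quick illustration (a minimal Python sketch, not part of the original text), the built-in bin(), oct(), and hex() functions show the same integer written in several of these bases:

    # Show the decimal number 27 in binary, octal, and hexadecimal.
    n = 27
    print(bin(n))   # 0b11011 (base-2)
    print(oct(n))   # 0o33    (base-8)
    print(hex(n))   # 0x1b    (base-16)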

Binary Number System

The binary number system is a positional numeral system with a base of two. It uses only two digits, 0 and 1, to represent any number. The position of each digit in a binary number determines its weight, which is a power of 2.
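
To make the positional weights concrete, here is a small Python sketch (an illustrative example, not from the original text) that expands a binary string digit by digit using powers of 2:

    def binary_to_decimal(bits):
        # Sum each digit times its positional weight (a power of 2).
        value = 0
        for position, digit in enumerate(reversed(bits)):
            value += int(digit) * (2 ** position)
        return value

    print(binary_to_decimal("11011"))  # 27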

Steps to convert a decimal number to binary (a code sketch of this procedure follows the worked example below):

    1. Divide the decimal number by 2.
    2. Write down the remainder (0 or 1).
    3. Continue dividing the quotient by 2 and writing down the remainder until the quotient becomes 0.
    4. The binary representation of the decimal number is the string of remainders, read from bottom to top.

    Example: Convert decimal number 27 to binary.

    27 ÷ 2 = 13 R 1

    13 ÷ 2 = 6 R 1

    6 ÷ 2 = 3 R 0

    3 ÷ 2 = 1 R 1

    1 ÷ 2 = 0 R 1

    Binary representation of 27 is 11011.
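
The same repeated-division procedure can be written as a short Python sketch (illustrative only, assuming a non-negative integer input):

    def decimal_to_binary(n):
        # Collect remainders of repeated division by 2, then read them bottom to top.
        if n == 0:
            return "0"
        remainders = []
        while n > 0:
            remainders.append(str(n % 2))  # remainder is 0 or 1
            n //= 2                        # continue with the quotient
        return "".join(reversed(remainders))

    print(decimal_to_binary(27))  # 11011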

Binary Codes

Binary codes are sequences of 0s and 1s used to represent characters, instructions, or data in digital electronics and communication systems. Some common binary codes include the following (a short code sketch follows the list):

    • Gray code: A binary code where two consecutive numbers differ in only one bit position.
    • BCD (Binary Coded Decimal): A binary code that represents decimal digits using four bits.
    • Excess-3 code: A binary code where each decimal digit is represented by the corresponding 4-bit BCD code plus 0011 (decimal 3).
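
As a rough illustration of two of these codes (a Python sketch, not part of the original text), the snippet below encodes a decimal number digit by digit in BCD and converts a binary value to its Gray-code equivalent:

    def to_bcd(number):
        # Encode each decimal digit as a 4-bit group (Binary Coded Decimal).
        return " ".join(format(int(d), "04b") for d in str(number))

    def to_gray(n):
        # Gray code: adjacent values differ in exactly one bit position.
        return n ^ (n >> 1)

    print(to_bcd(27))                                             # 0010 0111
    print(format(to_gray(3), "03b"), format(to_gray(4), "03b"))   # 010 110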

ASCII & Unicode

ASCII (American Standard Code for Information Interchange) is a widely used character encoding standard that assigns a unique 7-bit code to each character of the English alphabet, as well as to digits and common special characters.

Unicode is a character encoding standard that supports a much larger range of characters from different writing systems, including non-Latin scripts such as Chinese, Arabic, and Cyrillic. Unicode assigns each character a code point, and encodings such as UTF-8, UTF-16, and UTF-32 represent those code points as sequences of one or more 8-bit, 16-bit, or 32-bit units. Unicode includes all the characters in the ASCII character set as a subset.
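
A brief Python sketch (illustrative, not from the original article) shows that ASCII characters occupy a single byte under UTF-8, while characters from other scripts need multi-byte sequences:

    # ASCII characters map to single bytes; other scripts need more bytes.
    print(ord("A"))              # 65 -- the 7-bit ASCII code
    print("A".encode("utf-8"))   # b'A' -- one byte in UTF-8
    print("Я".encode("utf-8"))   # two bytes (Cyrillic)
    print("你".encode("utf-8"))   # three bytes (Chinese)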
