What is integer format?

Integer format is a data type in computer programming. Data is typed according to the kind of information being stored, the number of digits of accuracy required, and how the information will be manipulated during processing. Integers represent whole units. Whole numbers occupy less space in memory, but this space-saving feature limits the magnitude of the integer that can be stored.

Integers are whole numbers used in arithmetic, algebra, accounting, and enumeration. An integer implies there are no smaller partial units. The number 2 as an integer has a different meaning than the number 2.0. The second format indicates that there are two whole units and zero tenths of a unit, but that tenths of a unit are possible. The first number, as an integer, implies that smaller units are not taken into account.

There are two reasons for using the integer format in programming languages. First, the integer format is appropriate when counting objects that cannot be divided into smaller units. A manager writing a computer program to divide a $100 bonus among three employees would not use the integer format for the bonus variable, but might use one to store the number of employees. Second, programmers recognized that whole numbers do not require as many digits to be represented precisely, so storing them as integers saves space.
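
To make the distinction concrete, here is a minimal C sketch (the variable names are illustrative, not from the original article) showing why integer arithmetic fits the head count but not the bonus:

    #include <stdio.h>

    int main(void) {
        int employees = 3;     /* a count of whole units: integer format fits */
        double bonus = 100.00; /* dollars divide into cents: integer format does not */

        /* Integer division discards the fractional part of the result. */
        printf("integer share: %d\n", 100 / employees);      /* prints 33, loses $1 */

        /* Floating-point division keeps the fractional part. */
        printf("floating share: %.2f\n", bonus / employees); /* prints 33.33 */
        return 0;
    }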

In the early days of computing, memory space was limited and precious, and the integer format was developed to conserve memory. Since computer memory is a binary system, numbers are represented in base 2, meaning the only acceptable digits are 0 and 1. The number 10 in base 2 represents the number 2 in base 10, because the 1 in the twos column stands for that digit multiplied by 2 raised to the first power. The number 1000 in base 2 equals 8 in base 10, because the 1 in the leftmost column stands for 1 multiplied by 2 cubed.
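
A short C sketch can confirm these place values (the bit-shift notation is just one way to spell out powers of 2; it is not from the original article):

    #include <stdio.h>

    int main(void) {
        /* Each binary column stands for a power of 2, so a 1 shifted
           left by n places has the value 2 raised to the nth power. */
        int two = 1 << 1;   /* binary 10   -> 2 in base 10 */
        int eight = 1 << 3; /* binary 1000 -> 8 in base 10 */
        printf("10 in base 2 = %d in base 10\n", two);
        printf("1000 in base 2 = %d in base 10\n", eight);
        return 0;
    }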

Electrically based computers were a natural development, using the on/off states of circuits to represent binary numbers. A bit is a single on/off, true/false, or 0/1 representation of data. While various hardware configurations were explored using variations in the number of bits directly addressable by the computer, the 8-bit byte and the 2-byte word became standards for general-purpose computing. Specifying the width of the integer format, then, determines not the number of decimal places but the largest and smallest values an integer can assume.
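
As an illustration, a minimal C sketch (assuming a standard platform where a byte is 8 bits) that prints the byte width and the ranges fixed by two common integer widths:

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>

    int main(void) {
        /* CHAR_BIT is the number of bits in one byte, 8 on standard platforms. */
        printf("bits per byte: %d\n", CHAR_BIT);

        /* The width in bits, not a count of decimal places, fixes each range. */
        printf("8-bit signed:  %d to %d\n", INT8_MIN, INT8_MAX);
        printf("16-bit signed: %d to %d\n", INT16_MIN, INT16_MAX);
        return 0;
    }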

The integer formats of most languages allow one bit to be used as a sign to indicate a positive or negative integer. On a 32-bit language compiler, the C/C++ integer type, int, stores signed integer values from −2³¹ to 2³¹−1. One integer value is given up to accommodate zero, so the range is roughly +/− 2.1 billion. A 64-bit compiler, using the int64 data type, enables signed integer values from −2⁶³ to 2⁶³−1, or roughly +/− 9.2 quintillion.
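
These exact limits can be read from the standard C headers; the following sketch assumes a platform where int is 32 bits, as in the example above:

    #include <stdio.h>
    #include <limits.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        /* Signed 32-bit range: -2^31 to 2^31 - 1, about +/- 2.1 billion. */
        printf("int:     %d to %d\n", INT_MIN, INT_MAX);

        /* Signed 64-bit range: -2^63 to 2^63 - 1, about +/- 9.2 quintillion. */
        printf("int64_t: %" PRId64 " to %" PRId64 "\n", INT64_MIN, INT64_MAX);
        return 0;
    }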
