Primitive Data Types

Posted by Jared Kipe

Primitive data types are the data types you use every day in programming, and they usually correspond to the most basic building blocks of all the other objects and data types you will work with down the road.

The primitive data types come in a few flavors, and usually a couple of sizes for each as well.  The sizes depend on the language, the specific compiler, and the computer architecture you are programming for (i386, x86_64, ...).

Integer Types

Integer types are basically just counting numbers.  If you want to keep track of how many cows you have, the integer type is the one you will probably want to use.

In strongly typed languages like C or Java, you define these numbers with the data type name in front of the variable name, e.g. "int x = 5;"

In both strongly typed and weakly typed languages (like PHP and JavaScript), you can force a value into the integer type by what we call "casting", e.g. "$x = (int)$_GET['x'];"
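
Here is a minimal C sketch of both ideas (the variable names are just for illustration); note that casting a floating point value to an integer simply truncates the fractional part:

#include <stdio.h>

int main(void) {
    int x = 5;                /* declare an integer and initialize it */
    double price = 9.99;
    int whole = (int)price;   /* cast: the fractional part is thrown away */
    printf("x = %d, whole = %d\n", x, whole);   /* prints: x = 5, whole = 9 */
    return 0;
}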

name:       size:
char        8-bit
short       16-bit
int         32-bit
long long   64-bit
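
Since the exact sizes depend on your compiler, you can always check with the sizeof operator; a quick sketch (the comments show typical values on a 64-bit platform, yours may differ):

#include <stdio.h>

int main(void) {
    printf("char:      %zu bytes\n", sizeof(char));        /* 1 byte  = 8 bits  */
    printf("short:     %zu bytes\n", sizeof(short));       /* 2 bytes = 16 bits */
    printf("int:       %zu bytes\n", sizeof(int));         /* 4 bytes = 32 bits */
    printf("long long: %zu bytes\n", sizeof(long long));   /* 8 bytes = 64 bits */
    return 0;
}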

They also come in "signed" and "unsigned" flavors.  Signed versions can keep track of negative numbers, but do so by dedicating their last (highest-order) bit to keeping track of the sign.  Unsigned types use all of their bits for counting, so they can count higher.
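
A small C sketch of the difference, using the limits that come with limits.h:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* signed int spends one bit on the sign, so it tops out lower */
    printf("int:          %d to %d\n", INT_MIN, INT_MAX);
    /* unsigned int uses every bit for counting, so it counts roughly twice as high */
    printf("unsigned int: 0 to %u\n", UINT_MAX);
    return 0;
}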

Languages like C# and Java actually have other names for these differently sized integer types.  You can check out the giant table of integer types over on Wikipedia.

Floating Point Types

Floating point types are for real numbers like 1.2 or 3.14, that is, numbers that are not whole numbers.

Floating point types essentially represent every number in scientific notation, say 1.22311 x 10^3.  They split their bits between the exponent and the significant digits, plus a single bit for the sign.
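
If you are curious, you can pull those pieces apart yourself; this sketch assumes the common IEEE 754 single-precision layout (1 sign bit, 8 exponent bits, 23 bits of significant digits):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 1.22311e3f;             /* 1.22311 x 10^3 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's raw bytes */

    printf("sign:     %u\n", (unsigned)(bits >> 31));
    printf("exponent: %u\n", (unsigned)((bits >> 23) & 0xFF));
    printf("mantissa: 0x%06X\n", (unsigned)(bits & 0x7FFFFF));
    return 0;
}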

Floating point numbers do NOT represent all numbers exactly, and this causes confusion and frustration when you try to use them where perfect base-10 accuracy is needed (like banking).  The failure is equivalent to representing 1/3 as a base-10 decimal: the more accurately you want to write 1/3, the more digits you need, 0.3333333...  For example, 1/10 (0.1) is not representable perfectly as a floating point number.
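
You can see this for yourself with a couple of lines of C:

#include <stdio.h>

int main(void) {
    double a = 0.1 + 0.2;
    printf("%.17f\n", a);       /* prints 0.30000000000000004 */
    printf("%d\n", a == 0.3);   /* prints 0: the two values are NOT equal */
    return 0;
}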

Another problem with floating point types is that the bigger/longer the number gets, the less accurate it gets.  More precisely, floating point numbers have a fixed number of significant digits; once you use those up, the remaining digits will be inaccurate.
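
For example, a 32-bit float only carries about 7 significant decimal digits, so even modest whole numbers start getting rounded (a sketch, assuming IEEE 754 floats):

#include <stdio.h>

int main(void) {
    float f = 16777217.0f;   /* 2^24 + 1 has too many significant digits for a float */
    printf("%.1f\n", f);     /* prints 16777216.0 -- the last digit was lost */
    return 0;
}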

There are really only two major size flavors for floating point types.

float 32-bit (single precision)

double 64-bit (double precision)
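
As with the integer types, sizeof will tell you what your compiler actually uses:

#include <stdio.h>

int main(void) {
    printf("float:  %zu bytes\n", sizeof(float));    /* typically 4 bytes = 32 bits */
    printf("double: %zu bytes\n", sizeof(double));   /* typically 8 bytes = 64 bits */
    return 0;
}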

I personally always use doubles if I care about accuracy even a little bit.  Here is a more detailed analysis of the internals of floating point types.

Char

You may, though, have wondered what is up with the name of the smallest integer data type.  It is certainly an integer type mathematically and structurally, but it has another important usage in several languages.

Char is used to store a single alphanumeric character (and quite a few non-alphanumeric characters like ?!%$#_).

Basically, characters like 'A' or 'p' have integer equivalents, 65 and 112, as defined by various character encoding schemes, e.g. ASCII and UTF-8.
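
A quick C sketch (the values shown assume an ASCII-compatible encoding, which is virtually every modern system):

#include <stdio.h>

int main(void) {
    char c = 'A';
    printf("%c = %d\n", c, c);       /* prints: A = 65 */
    printf("%c = %d\n", 'p', 'p');   /* prints: p = 112 */
    return 0;
}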

Enum

Enum stands for enumerated type, and is essentially a way for you to give keywords to specific integer values.  Examples would be the days of the week, or the suits of playing cards.

Enums come up in C and its descendants such as C++ and Java, as well as in databases such as MySQL.

enum suit { HEART, DIAMOND, SPADE, CLUB };
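
Once defined, the names behave like integer constants (HEART is 0, DIAMOND is 1, and so on, unless you assign values yourself); a small usage sketch:

#include <stdio.h>

enum suit { HEART, DIAMOND, SPADE, CLUB };

int main(void) {
    enum suit s = SPADE;
    printf("SPADE has the value %d\n", s);   /* prints 2 */
    if (s == SPADE) {
        printf("The card is a spade.\n");
    }
    return 0;
}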

