Understanding Variable Data Types
Hello. In this post, let’s explore variable data types.
In daily life we often talk about “kinds.” Even when ordering coffee we distinguish between an iced Americano and a hot cafe latte. Different kinds mean different cups, ice or no ice, and even different ways of making the drink. In everyday life this feels like a simple choice, but for a computer, how it handles a piece of information depends entirely on what kind it is.
Imagine you are paying at a convenience store. You put down a candy and a drink, then say “Please give me a receipt.” The clerk can ring up the candy and the drink, but not the sentence “Please give me a receipt.” That is a sentence, not a price. Computers are the same. They can calculate with the number 3, but not with the string "3" in quotes. That is only a character that looks like a number, not an actual number.
Computers also have this idea of “kind.” In programming we call it a data type. A data type is the computer’s basis for distinguishing one kind of data from another, a tag that declares what nature a piece of data has.
A person can read “100” as 100 won, 100 points, or 100 degrees depending on context. A computer does not understand that kind of human context. To a computer it is only a bundle of signals made of zeros and ones. Whether those signals will be used as a number, as text, or as a true or false decision is entirely up to the programmer. That is what a data type is.
A data type is simply the rule that determines the character and role of data. Is it a number, text, a logical truth value, or a collection that holds many values? Depending on that, the way a computer processes it changes completely.
Suppose there were no data types. If every piece of data were just a “value,” the computer could not tell the difference between 1 and "1". Then arithmetic would break down: adding numbers should calculate, but adding strings should concatenate them. A data type is the minimum order that prevents this confusion.
When a computer receives data it first identifies its nature (the data type) and stores it with the proper tag. Only then can it know exactly how to handle it later. Numbers go with numbers, text with text, and true or false with true or false.
All the languages we commonly use — C, C++, Java, JavaScript, Python, Go, Rust, and so on — share a common principle beneath their different syntaxes.
“Every piece of data has a specific type.”
Among data types the first that comes to mind is the number type. In every language numbers are the most basic. Numbers can be calculated, compared, and used to determine counts for loops or lengths. Yet there are important sub-differences.
An integer is a number without a decimal point — values like 1, −2, 100, 2025. This corresponds to counting things in the real world (“three apples,” “twenty students”).
By contrast, floating-point numbers such as float or double include a decimal part (1.5, 3.14159, −7.25) and represent continuous measures like length, temperature, and weight. Because of that, most languages separate integers and floats into different types and store them differently.
This is not only a difference in appearance. Integer math is crisp and often exact and fast. Floating point uses approximations and can carry tiny errors. That’s why in finance where exactness matters we often avoid floats and convert decimals to integers (e.g., “100.25 dollars” → 10025 cents).
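A small Python sketch makes both points above concrete: floating-point math carries tiny approximation errors, while keeping money as an integer count of cents stays exact. The prices and tax rate here are made up for illustration.

```python
# Floating-point values are stored as binary approximations,
# so tiny errors can appear in the result.
subtotal = 0.1 + 0.2
print(subtotal)            # prints 0.30000000000000004, not 0.3
print(subtotal == 0.3)     # False

# Integer math is exact, so money is often stored in the smallest unit.
price_cents = 10025                   # "100.25 dollars" as an integer
tax_cents = price_cents * 8 // 100    # hypothetical 8% tax, still an integer
print(price_cents + tax_cents)        # 10827 cents, computed exactly
```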
Next are character and string. A character means a single symbol such as 'A', '1', or '가'.
A string means a sequence of characters such as "Hello", "123", or "apple juice".
With strings, "Hello" + "World" becomes "HelloWorld" — concatenation, not arithmetic addition.
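In Python this difference between concatenation and arithmetic is easy to see, because the same + symbol does different work depending on the type:

```python
# "+" concatenates strings but adds numbers.
print("Hello" + "World")   # HelloWorld
print("1" + "2")           # 12, two characters glued together
print(1 + 2)               # 3, actual arithmetic
```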
Next is boolean — the most basic unit of computer logic: true or false.
All conditionals such as if and while consist of boolean decisions.
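A minimal Python sketch of this idea, with the variable names invented for illustration: every comparison produces a boolean, and every if decision consumes one.

```python
logged_in = True    # illustrative flag
attempts = 2

# A conditional reduces to a single boolean decision.
if logged_in and attempts < 3:
    print("access granted")

# Comparisons themselves evaluate to booleans.
print(attempts < 3)         # True
print(type(attempts < 3))   # <class 'bool'>
```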
Now let’s move to composite types. Arrays or lists store multiple values at once. Inside the computer an array is a series of values laid out in a row with index numbers (0, 1, 2, …). Lists (in many languages) can grow or shrink freely, bringing flexibility.
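In Python the list plays this role. A short sketch, with the contents made up, showing indexing from 0 and a list growing freely:

```python
fruits = ["apple", "banana", "cherry"]   # values in a row, indexed 0, 1, 2
print(fruits[0])         # apple; indexes start at 0, not 1

fruits.append("mango")   # lists can grow freely
print(len(fruits))       # 4
```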
The next step is the dictionary or map. If arrays distinguish by order, dictionaries distinguish by keys (name → value). The JSON used in web development is essentially a dictionary structure.
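A quick Python illustration of the key-to-value idea, and of how closely a dictionary mirrors JSON (the sample user data is invented):

```python
import json

# A dictionary looks values up by key rather than by position.
user = {"name": "Alice", "age": 30}
print(user["name"])        # Alice

# JSON is essentially the same key -> value structure written as text.
print(json.dumps(user))    # {"name": "Alice", "age": 30}
```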
There is another interesting data type: Null (None, Nil). It means “no value exists,” which is different from being zero or empty. In programming this distinction is crucial.
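Python spells this value None, and a short sketch shows why “no value” is distinct from zero or an empty string:

```python
score = None            # no value exists yet; not the same as 0 or ""
print(score is None)    # True

print(None == 0)        # False: missing is not zero
print(None == "")       # False: missing is not empty
```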
The data types we have discussed so far exist in almost every language: integer, float, character, string, boolean, array, dictionary, and Null. These eight form a basic scaffolding for programs.
A data type also decides how the computer interprets bits.
The same 8 bits could be the number 65, or (interpreted as ASCII) the character 'A'.
Meaning changes with type — just like the word “Gift” means “present” in English but “poison” in German.
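The 65-versus-'A' example above can be checked directly in Python, where ord and chr convert between the two interpretations of the same bit pattern:

```python
# The bit pattern 01000001 can be read as a number or as a character.
bits = 0b01000001
print(bits)        # 65, read as an integer
print(chr(bits))   # A, the ASCII interpretation of the same value
print(ord("A"))    # 65, converting back again
```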
So what happens if you use the wrong type? Trying to add the string "10" and the number 10 often causes an error.
Some languages auto-convert: in JavaScript, "10" + 5 becomes "105". Convenient, but a source of subtle bugs.
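Python sits on the stricter side: rather than auto-converting like JavaScript, it raises an error and makes the programmer convert explicitly. A sketch:

```python
try:
    result = "10" + 10        # mixing string and number
except TypeError as err:
    print("error:", err)      # Python refuses to guess the intent

# The programmer converts explicitly, stating which meaning is wanted.
print(int("10") + 10)         # 20, arithmetic
print("10" + str(10))         # 1010, concatenation
```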
Data types exist in the human world too. Something that looks like a number might actually be a phone number. Computers lack this context unless we declare it. The programmer must say, “This is for display, not for arithmetic.”
Ultimately, data types are order and rules. Every piece of data belongs in a fitting container: numbers with numbers, text with text, booleans with logic. Understand types, and you understand the core of coding. Syntax varies across languages, but the computer’s way of seeing the world — the principle of data types — is constant.
Thank you for reading this post. Wishing you a happy day.
You can view the original blog post in Korean at the links below: