Understanding computer languages

One of my favourite things to do when exploring a concept is to look at the etymology of its key terms. So what is a computer? According to the Online Etymology Dictionary, the word computer comes from the Latin word computare, which means ‘to count or put together.’ By the 1640s, the word computer had entered the English language as someone who computes or calculates. By 1897, this definition had evolved to include calculating machines. It was only by 1945 that computer saw its modern use “as a programmable digital electronic device for performing mathematical or logical operations.”

Computers, as we understand them, are simply complex devices that handle arithmetic or logical calculations. Therefore, it is not surprising that much of computer programming has to do with calculations.

At the most fundamental level, computers speak the language of binary, i.e. 1s and 0s. These 1s and 0s represent the states of tiny switches in the integrated circuits of computing devices; combinations of switches turning on and off are used to program the devices to achieve a wide variety of tasks.

The purpose of computer languages is to facilitate communication between humans and computers. Understandably, hand-coding computer programs directly in binary, especially today’s complex ones, would be difficult and error-prone, which necessitated the development of more human-friendly computer languages.

Types of computer languages

There are many different types of computer languages. According to some, programming languages are a subset of computer languages, while others use the terms interchangeably.

Programming languages are languages used to create computer ‘programs’. A subset of programming languages is scripting languages, which create ‘scripts’ that are interpreted by other programs. Are scripting languages the same as programming languages? Yes and no. The short answer is that both are used to mediate human-computer communication and, as such, convey human instructions in a form that can be translated into a machine-readable format. JavaScript and Python are examples of well-known programming languages.

Another commonly used subset of computer languages is markup languages. Markup languages are typically used to define the structure of content. They serve a variety of purposes, ranging from word processing to structuring web pages. Some common examples of markup languages include HTML (hypertext markup language), which is used for web development, SVG (scalable vector graphics), which is used for vector graphics, and XML (extensible markup language), which is used for storing and exchanging structured data, including word-processing file formats.
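As a rough sketch, a minimal HTML document marks up structure (a heading, a paragraph) rather than computation; the browser decides how to render each element:

```html
<!-- Markup describes structure, not computation. -->
<!DOCTYPE html>
<html>
  <head><title>A minimal page</title></head>
  <body>
    <h1>Hello</h1>
    <p>This paragraph is <em>structured</em>, not computed.</p>
  </body>
</html>
```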

Modelling languages, such as UML (unified modeling language), are yet another type of computer language; they are used to describe the structure and behaviour of systems at a chosen level of abstraction.

In common parlance, we refer to the set of instructions written for the computer to run as the source code (or simply, code). Standalone pieces of code that can be run by devices are known as programs, while code that is translated line by line is typically known as a script.

When we compile programming languages, we translate a set of written instructions (code) into a cohesive unit that can be carried out by the computer. This translation is done by a program called the compiler. A compiler translates a program written in a high-level language into a low-level language that can be run directly by the computer. Because no further translation is needed at run time, compiled programs execute fast. However, compiled programs may not always work on different devices and may need to be recompiled for different build environments.

Scripts take a different approach to translation. Instead, they typically use a program known as an interpreter, which translates the code into a machine-readable format and executes it line by line.
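As a rough sketch of the two approaches, Python’s built-in compile() and exec() can stand in for them. This is only an analogy: a real compiler emits machine code, whereas compile() produces Python bytecode.

```python
# Sketch: "compile then run" versus "interpret line by line".
source = "x = 2 + 3\nprint(x)"

# Compiled flavour: translate the whole program first, then execute it.
program = compile(source, "<demo>", "exec")
exec(program)  # prints 5

# Interpreted flavour: translate and execute one line at a time.
namespace = {}
for line in source.splitlines():
    exec(compile(line, "<demo>", "single"), namespace)
```

Either way the result is the same; the difference is when the translation happens.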

Generations of computer languages

Imperative languages

Imperative languages are computer languages that provide explicit instructions on what to do and how to do it. They provide the exact sequence of steps that need to be executed in order to achieve the desired outcome.
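A minimal imperative sketch in Python, spelling out each step of a simple calculation:

```python
# Imperative style: state each step explicitly, in order.
numbers = [3, 1, 4, 1, 5]
total = 0                 # step 1: start the total at zero
for n in numbers:         # step 2: visit each element in sequence
    total = total + n     # step 3: accumulate into the total
print(total)              # prints 14
```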

First Generation Language (1GL): Machine code

As discussed earlier, the very first level of programming languages configured programs directly on the machines themselves. Such languages were based on binary and were machine specific, so it was not possible to use the same program on a machine with a different integrated circuit. On the other hand, as the program could be directly understood by the machine, there was no need to translate the written programs. This significantly increased the efficiency and speed of machine code, allowing for faster and more lightweight applications. However, programming such applications was cumbersome and error-prone, and the difficulty of learning and understanding these processor-dependent languages contributed to their lack of human-friendliness. As such, more human-friendly alternatives were developed.

Second Generation Language (2GL): Assembly language

Assembly language was a step up from machine code in that it used an English-like syntax, which made it comparatively easier for humans to learn and use. Code written in assembly language is translated into machine code by a program called an assembler, with each assembly instruction mapping directly to binary machine instructions. Like machine code, assembly is a device-specific language, so the same program cannot be used across different platforms. Both 1GL and 2GL are therefore known as low-level languages, mainly because they are machine dependent. However, assembly language is easier to modify and debug, as it is comparatively easier to understand.

Low-level programs such as the BIOS and parts of operating-system kernels are frequently written in assembly language.

Third Generation Language (3GL): High-level languages

High-level languages marked the start of machine-independent coding. Coding followed a more abstract approach: the programmer provides the steps, or algorithm, instead of the hardware-level configuration required to produce the desired output. An algorithm describes the exact steps the computer needs to perform to solve the problem. As the code is written using an English-like syntax, it is easier to understand and debug, and it is also simpler to write, since the programmer only specifies the control flow for the execution of the steps. While the code can be run on different machines, it must first be compiled or interpreted into the respective machine code.
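For instance, Euclid’s algorithm written in a high-level language states the steps once and runs unchanged on any machine that has a compiler or interpreter for that language (a sketch in Python):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

The programmer describes the algorithm; the translation to each machine’s instructions is handled by the language implementation.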

While several programming languages fall under multiple generations, languages such as C and Java are frequently described as 3GLs.

Declarative languages

Unlike imperative languages, declarative languages use code to describe the desired result as opposed to describing how to achieve it. They provide the computational logic without specifying the exact steps required to carry it out.
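A rough flavour of the contrast, sketched in Python: built-ins and comprehensions state the desired result, with the explicit looping left to the language.

```python
# Declarative flavour: say *what* is wanted, not *how* to loop.
numbers = [3, 1, 4, 1, 5]
total = sum(numbers)                          # "the sum of numbers"
evens = [n for n in numbers if n % 2 == 0]    # "the even elements"
print(total, evens)  # prints 14 [4]
```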

Fourth Generation Language (4GL): Domain-specific languages

There are several different names for 4GLs, such as domain-specific languages and non-procedural languages, to name a few. These languages tend to be used for very specific domains such as databases, mathematical programming, report generation, GUI development and web development. Unlike a 3GL, a 4GL uses statements that describe what the code is supposed to do as opposed to how it should be done. It requires only a few lines of code to achieve its objectives, reducing development time while also reducing the incidence of errors. However, as it depends on prewritten instructions for execution, it is limited in function and flexibility, and it also consumes more memory when being used. Common examples of 4GLs include SQL, R, HTML, Simulink (MATLAB) and SPSS.
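The 4GL style can be sketched with SQL, run here through Python’s standard-library sqlite3 module; the table and data are made up for illustration. The query states what rows are wanted, and the database engine decides how to find them.

```python
import sqlite3

# SQL as a declarative 4GL: describe the result set, not the search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, points INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("ada", 90), ("grace", 85), ("alan", 70)])
rows = conn.execute(
    "SELECT name FROM scores WHERE points >= 80 ORDER BY name"
).fetchall()
print(rows)  # prints [('ada',), ('grace',)]
conn.close()
```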

Fifth Generation Language (5GL): Problem-solving languages

The next generation of programming languages comprises problem-solving languages, which use given constraints, as opposed to an algorithm, in order to develop the solution. While this is an emerging area, it is the frontier between artificial intelligence and natural language, where the computer, as opposed to the human programmer, solves the problem. This reduces human effort, making such languages far easier to learn and use than those of other programming generations. However, as the computing decisions are made by the computer, they require more resources and tend to be more complex and expensive than other generations of computer languages.
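A toy illustration of the constraint-driven idea (not an actual 5GL, just a sketch in Python): we state the constraints and let a naive search find values satisfying them, rather than writing an algorithm for this specific puzzle.

```python
from itertools import product

# State the constraints; let a (very naive) solver do the searching.
constraints = [
    lambda x, y: x + y == 10,
    lambda x, y: x - y == 4,
]
solutions = [
    (x, y)
    for x, y in product(range(11), repeat=2)
    if all(c(x, y) for c in constraints)
]
print(solutions)  # prints [(7, 3)]
```

Real constraint solvers use far smarter search strategies, but the division of labour is the same: the human supplies constraints, the machine supplies the procedure.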

Sixth Generation Language (6GL): Visual languages and beyond

The final, albeit still emerging, type of programming language involves visual development, or ‘no-code’ languages. These languages use visual abstractions to develop computer programs.

While this is still an emerging area, common examples include Scratch, bubble.io and other visual development tools.
