When I first started my college education to become a Computer Scientist (Programmer), an ignorant acquaintance of mine told me, with some uncertainty, "Computer programming, don't they have computers write the programs now?" I thought he may have been thinking of the compiler. Alas, no. He grew more certain as he told me that computers were writing programs now, and that in ten years I wouldn't be able to find a job. I no longer know this person, and I, along with millions of other programmers, make a living writing computer programs. Why aren't computers writing these programs for us?
The most basic answer to this question is information. I will try to avoid giving a completely academic answer; however, we will need to visit a few concepts studied in Information Theory and Computability Theory. A specialized combination of these two fields of study, Algorithmic Information Theory (AIT), will provide a more precise, or at least more satisfying, answer.
What is Programming?
Unfortunately, we won't be able to get very far unless we define what we mean when we refer to programming. For simplicity, let's define programming in a way that is congruent with AIT. This will make the discussion easier to associate with the relevant theories, and simplify the definition to a level that can be easily visualized and reasoned about.
Here's a dictionary definition of programming:
The action or process of writing computer programs.
What is a Program Then?
I think that definition is actually simple enough. Let's look at a basic definition for computer program:
A computer program, or just a program, is a sequence of instructions, written to perform a specified task with a computer.
This is also simple, but not specific enough. Therefore, it's time to turn to AIT and use one of its basic constructs, one that is often used to represent a program as well. This construct is the string. Here is an excerpt from Wikipedia regarding the relationship between a string and a program in AIT:
... the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program – in some fixed but otherwise irrelevant universal programming language – that, when run, outputs the original string.
I added the extra emphasis in the text to make it more obvious that there is a relationship between these three concepts. After a long-winded and roundabout simplification, we will represent and visualize a program as a string such as this one:
1100100001100001110111101110110011111010010000100101011110010110
Or even an 8-bit string like this:
11001001
... and what does this have to do with information?
Yes, let's get back to information. AIT defines a relationship between information and a string: if the string is a self-contained representation of the information, it is a program. We have just defined our purpose for having a program, which is to reproduce the desired information encoded in the program itself.
We have established that, for this discussion, the purpose of a computer program, or just program, is to reproduce information. Also, we will represent a program like this: 11001001. So in essence, computer programmers generate strings that, when executed, will produce the information originally encoded within the program. Of course, there are plenty of tools that programmers run over their language of choice, which will compile, link, interpret, convert, and eventually generate a string that is executable by the target computer.
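The idea that a program is a compressed, self-contained representation of a string can be sketched with a toy example (Python here, purely for illustration; AIT makes no reference to any particular language, and this is not a real Kolmogorov-complexity measurement):

```python
# Compare the length of a string with the length of a program that prints it.

# A 64-character string with an obvious pattern...
patterned = "01" * 32

# ...can be reproduced by a program far shorter than the string itself:
short_program = 'print("01" * 32)'

# A random-looking string offers no such savings; the shortest known
# "program" essentially embeds the entire string:
random_like = "1100100001100001110111101110110011111010010000100101011110010110"
long_program = f'print("{random_like}")'

print(len(patterned), len(short_program))    # the patterned string compresses well
print(len(random_like), len(long_program))   # the random-looking string does not
```

The patterned string carries little information, so its program is short; the random-looking string carries as much information as its own length, so no program meaningfully shorter than the string can reproduce it.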
How do programmers know what to program?
Programmers are given a set of requirements that define the information the program needs to produce. In the real world, this information can represent projectile trajectories, financial calculations, images, interactive processing of commands; the list is potentially endless. With the known requirements, the programmers can set out to create a program that will produce the desired information in a reasonable amount of time.
I mention time because of some of the concepts that exist in the fields of study mentioned above. These concepts will help us reason about, and answer, the question I posed for this entry. The most obvious part of programming is writing code; it's the visible aspect to an outside observer.
"What's he doing?"
"Oh, he's eating cold pizza, drinking Mountain Dew, and writing code."
Again, we can think of a program as a simple string. Before the programmer can write this simple string, they have to have a concept that they are trying to implement as the program. Once they have this concept in mind, they can write the code. This is very much like expressing an idea to a person, except the concept is articulated in a language or form that is computable by the computer.
In English, at least, there are many ways to say things. Some people are verbose, others are terse, and yet others speak in innuendo. Solving a problem in a computer program can be done in many different ways as well. Sometimes the language and hardware that you are writing for will dictate how you need to solve the problem. Other times there are no strict limitations, and it is up to you to find the best way to solve the problem. The best might not always be the fastest. Sometimes it means the most portable, maintainable, or uses the least amount of memory. Much of the study of Computer Science is focused on these concepts.
A Turing machine is a hypothetical device that allows computer scientists to understand the limits of computation by a machine. When reasoned about, the Turing machine is given a fixed instruction set, infinite memory, and infinite time to run the program. Even with unlimited resources, we discover problems that are very difficult to calculate, with running times that stretch toward that infinite time limit. The only known solutions to these problems scale exponentially as the size of the problem increases.
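The moving parts of a Turing machine are just a tape, a head, a state, and a fixed rule table. Here is a minimal sketch in Python; the transition table is a hypothetical example that increments a binary number, not any canonical machine:

```python
# A minimal Turing machine simulator: a sparse dict stands in for the
# infinite tape, and `rules` maps (state, symbol) to (state, write, move).
def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# Example rule table: increment a binary number. Scan right to the end of
# the input, then move left, turning trailing 1s into 0s until the carry
# lands on a 0 (or on blank tape, extending the number by one digit).
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

Everything a general-purpose computer does reduces, in principle, to a rule table like this one; the real machine just has finite tape and a much bigger table.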
On the other hand, we can also discover problems that are quickly solvable and verifiable in polynomial time, such as AES encryption. However, if the constants chosen for these problems are large enough, the amount of time required to calculate the solution can still be impractically long.
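The gap between polynomial and exponential scaling is easy to make concrete. The steps-per-second figure below is an assumed round number for illustration, not a benchmark of any real machine:

```python
# Rough illustration of polynomial vs. exponential growth in step counts.
OPS_PER_SECOND = 10**9  # assumed: a billion steps per second

for n in (10, 30, 60):
    poly = n ** 2   # e.g. a polynomial-time algorithm
    expo = 2 ** n   # e.g. brute-force search over n-bit keys
    print(f"n={n:2d}: {poly:>5} polynomial steps, "
          f"{expo:>20} exponential steps (~{expo / OPS_PER_SECOND:.1e} s)")
```

At n = 60 the polynomial column is still trivial, while the exponential column works out to decades of compute time; this is exactly why brute-forcing a cipher's key space is hopeless even though running the cipher itself is fast.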
So we've established that programs are encoded strings, that produce information when the program is executed. We mentioned a theoretical computer called a Turing machine that is used to reason about the calculability and complexity of problems solved by programs. I told you I was going to try to avoid as much academics as possible. What about real-world computers?
Real-world computers are generally fantastic. The majority of computers we interact with are General Purpose CPUs (GPCPU), very much like the Turing machine, except without access to unlimited resources. We have quite a bit of resources on current processing hardware, but we have hit a point where individual processors are no longer getting dramatically faster. In order to continue to gain processing power, the trend is now to provide multiple CPUs and perform parallel processing.
An extreme example of parallel processing is the use of Graphics Processing Units to perform general-purpose computing (GPGPU). GPGPU processing performs up to 1664 parallel processing streams on the graphics card that I own. This is consumer-grade hardware; I don't know about the high-end chips, I can't afford them, so I don't torture myself. The challenge with this path is that you must have a problem that can be broken into pieces and solved independently, in parallel. Graphics-related problems are natural fits for this model, as are many scientific simulations.
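The kind of problem that fits this model is often called "embarrassingly parallel": each output element depends only on its own input. A CPU-side sketch in Python (the per-pixel function is invented for illustration; a real GPGPU version would run the same shape of computation across hundreds of streams):

```python
# Each output depends only on its own input, so the work can be split
# freely across cores -- or, on a GPU, across parallel streams.
from multiprocessing import Pool

def shade(pixel):
    # hypothetical per-pixel computation, independent of every other pixel
    return (pixel * 31) % 255

if __name__ == "__main__":
    pixels = list(range(100_000))
    with Pool() as pool:              # one worker per available CPU core
        shaded = pool.map(shade, pixels)
    print(shaded[:4])                 # [0, 31, 62, 93]
```

Contrast this with a computation where each step needs the previous step's result; no amount of extra cores or streams helps there, which is why not every problem benefits from this hardware trend.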
What is Artificial Intelligence (AI)? It is when intelligence is exhibited by a machine or software. Intelligence, damn! More definitions. I don't actually want to go there, mostly because a great definition for intelligence is still debated. Let's simply state that AI involves giving computers and machines enough intelligence to make decisions to perform predefined tasks.
AI is far enough along that we could command it to write computer programs. However, they would be fairly simple programs. Oh, and here's the catch: a programmer would be the person to command the AI program to write the simpler program. AI will continue to improve, so the programmer will be able to command the AI to produce even more complex programs, but still not one as complex as the AI itself.
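A program writing a simpler program is not mysterious; in its most trivial form it is just code generation. A toy sketch, where the `spec` dict is a stand-in for the requirements a programmer would hand to the generating program (all names here are invented for illustration):

```python
# A sketch of a program writing a (much simpler) program.
spec = {"name": "greet", "message": "hello"}  # hypothetical requirements

# Generate source code for a new function from the spec...
source = f'def {spec["name"]}():\n    return "{spec["message"]}"\n'

# ...then execute the generated program and call it.
namespace = {}
exec(source, namespace)
print(namespace["greet"]())  # prints "hello"
```

Note where the intelligence lives: the generator can only emit programs whose shape was anticipated in its own design, which is the trend described above.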
Do you see the trend?
"We can't solve problems by using the same kind of thinking we used when we created them."
I have a feeling there is a better quote that fits the concept I am trying to convey, but this one by Einstein still fits, and I like it. Technology is built upon other technologies that have come before it. When a limit is reached, a creative turn occurs, and progress continues forward. I understand how computers work, in the virtual sense. I myself am not capable of building one from scratch. I take that back. I had a project in college where I had to build an 8-bit accumulator by wire-wrapping; the inputs were toggle switches, and the clock was a manual button that I pushed. For me to build a computer with the processing power we have today, would be a monumental task (that I am not capable of today).
We keep improving our technologies, both physically and virtually. We continue to use known and invented technologies to build and invent new technologies. When some people reach their limits, others may pick up research and advance it a step further by approaching it from a different direction. This is similar to the theorems mentioned from AIT, regarding the amount of information encoded in a program.
This point is:
In order for a computer to write computer programs, it will need to be at least as intelligent as the program that it is going to encode.
In AIT, the string that is defined may itself be the program that will generate the desired information. In order for a computer to develop programs, it will need to be more intelligent than the program that it is trying to write, which in turn requires a program to have developed that top-level computer developer in the first place. At some point a program could use a genetic algorithm to develop this new computer that is a programmer. However, we're not there yet.
When that happens, many possibilities become available. Just imagine, a computer writing genetic algorithms. Generations of its algorithm can be adjusted at lightning speed, but hopefully it is an intelligent computer using the existing algorithms that have been mathematically proven to be the most efficient. Because if it is just let loose to try to arrive at the desired output, well, that could take forever.
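The "let loose to arrive at the desired output" scenario can be made concrete with a toy genetic algorithm; every name and parameter below is invented for illustration. Even for an 8-bit target, selection plus mutation converges quickly, where blind enumeration of longer strings would not:

```python
# A toy genetic algorithm converging on a target bit string.
import random

random.seed(42)            # fixed seed so the run is repeatable
TARGET = "11001001"

def fitness(candidate):
    # count of positions matching the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.2):
    # each bit has a `rate` chance of being rewritten at random
    return "".join(
        random.choice("01") if random.random() < rate else bit
        for bit in parent
    )

parent = "".join(random.choice("01") for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    # produce mutated offspring, keep the fittest (parent included,
    # so fitness never decreases)
    offspring = [mutate(parent) for _ in range(20)] + [parent]
    parent = max(offspring, key=fitness)
    generation += 1

print(f"reached {parent} in {generation} generations")
```

The fitness function is doing the real work here: it encodes what "the desired output" is. Without that guidance the search degenerates into random enumeration of the whole string space, which is the "could take forever" case.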
There is no drop-in replacement
There's actually another point that I want to make related to this sci-fi concept of computers writing new programs. There is no drop-in replacement for an experienced developer. There are many fields of study, and a wide range of algorithms and problems that have already been solved. These things could conceivably be added to the programming computer's database of knowledge. However, that task alone is monumental.
The same statement applies to people too
That's right. Software Engineers are not components or warm bodies that can be replaced interchangeably. Each engineer has focused on their own field of study or interests. They each have followed separate paths through their career to reach the point that they are at now. I have seen this on projects where a company manages a pool of engineers. When there is work and a need for a software engineer, one is plucked from the pool.
However, the pool may consist of network programmers, modem programmers, antenna programmers, user interface programmers, and so on. They each know their area of study very well. However, if you try to place an antenna programmer in a group that is in need of network programmers, or a UI programmer to develop modem software, you may have a problem. Or at least you will not get the value that you expect from placing this engineer in a mismatched group. Their knowledge of the topic is not great enough to effectively develop a solution that provides the desired information efficiently.
I am not sure what spurred the idea for this topic. The incident with the person who told me I was making a poor decision by becoming a software engineer happened about 15 years ago. It's fascinating to watch and be a part of the new advances in technology that are occurring with both software and hardware. Better hardware means more things become possible in software. It can be frustrating when software engineers are treated as warm bodies; but I don't expect a computer to be doing my job anytime soon.