Someone posted a link to this article about programming languages in 100 years' time to a forum I read. It's interesting, but I think the author has misunderstood the goal of programming language evolution. This essay is my response to that article.
The original essay makes the following statement:
I don’t predict the demise of object-oriented programming, by the way. Though I don’t think it has much to offer good programmers, except in certain specialized domains…
It strikes me, therefore, that the author has misunderstood Object Oriented programming, its goals and its application, in a fairly fundamental way, and that this undermines his analysis of where programming is heading. He gets bogged down in the details of data structures and syntax and never addresses the development of abstraction within languages, which is where I believe the greatest leaps forward in programming will come from.
The point is that the development of Object Oriented programming allowed us to model solutions to problems in a way that is closer to how we think about the world – which is, in fact, the direction in which all programming languages have developed over time. Consider the evolution:
- Machine code (pure hex) – Writing to the metal. No abstraction, raw manipulation of the state of the machine.
- Assembler – Still manipulating the state of the machine, but at least we have words rather than numbers, which people are more used to dealing with. We’ve abstracted away the raw binary and made it easier for humans to interact with.
- Imperative languages – Early computational problems were implementations of mathematical algorithms, which are written in an imperative fashion; that is, take x, square it, add the result to the reciprocal of the difference between y and z, and that’s your answer (there’s a small sketch of this just after the list). This maps onto a certain model of human thinking, and allows a higher level of abstraction than raw assembler. We can continue to raise the level of abstraction here, but imperative programming inevitably means we keep thinking in terms of a pure sequence of operations, and human thought is more complex and abstract than that.
- Functional languages – An expression of ‘goal-based’ computing, which matches another human thought pattern. You give the computer a set of rules (or functions), and express your goal in terms of an evaluation of these functions; everything is defined as a function of something else. This is useful for a purely problem-solving approach to computing, but isn’t especially useful for defining things like user interfaces (although theoretically, you could feed a functional system a set of rules for good UI design, give it a collection of controls and buttons to arrange, and set it going – which might be an interesting problem for all you functional nuts out there 🙂).
- Object Oriented programming – In the real world we deal with, well, objects, and manipulations of objects. We deal in abstractions, we categorise things, and so on. Classes, objects, methods, inheritance and relationships let us model the real world more closely (again, there’s a small sketch of this below). It is, in theory, a much more natural way to model large systems, and allows us to think in a much more abstract fashion.
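To make the imperative/functional contrast concrete, here is the toy calculation from the imperative bullet, written first as a sequence of steps and then as an evaluation of functions. Python is purely my choice of illustration language here; it isn't drawn from the article I'm responding to.

```python
# The toy problem from the imperative bullet: take x, square it, add the
# result to the reciprocal of the difference between y and z.

x, y, z = 3.0, 5.0, 4.5

# Imperative style: a pure sequence of operations over local state.
squared = x * x
difference = y - z
reciprocal = 1.0 / difference
answer = squared + reciprocal

# Functional style: the goal is expressed as an evaluation of functions;
# nothing is mutated, everything is a function of something else.
def square(n): return n * n
def reciprocal_of(n): return 1.0 / n

def solve(x, y, z):
    return square(x) + reciprocal_of(y - z)

assert answer == solve(x, y, z)  # same result, different thought pattern
print(answer)  # 11.0
```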
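And here is the object-oriented point in miniature: classes and inheritance let the program talk about categories of things rather than sequences of operations. The Account/SavingsAccount example is entirely invented for illustration.

```python
# An OO sketch: we categorise things (accounts), model the general case,
# and specialise it by inheritance, much as we do when thinking about
# the real world.
class Account:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class SavingsAccount(Account):
    """A more specific category of account, one that also earns interest."""
    def __init__(self, owner, balance=0.0, rate=0.03):
        super().__init__(owner, balance)
        self.rate = rate

    def add_interest(self):
        self.deposit(self.balance * self.rate)

acct = SavingsAccount("Alice", balance=100.0)
acct.add_interest()
print(acct.owner, acct.balance)  # Alice 103.0
```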
Therefore, my prediction is that programming languages will continue to evolve to more closely model the way we solve problems and interact in the real world. We’ll be doing less “talking to the computer” and more problem solving. Languages of the future will not, for example, force you to do your own memory management; that is not part of solving the problem, and should not, therefore, be part of the programmer’s job (well, not the application programmer’s job, anyway; someone will still have to write the compiler 🙂).
As for what I personally think: bytecode-based and interpreted languages are very much going to be the way forward. We’ll continue to see the evolution of languages like Perl, Ruby and Python (although personally, I think Perl needs trimming back rather than further development; Ruby is a good model for a next-generation scripting language) because they’re quick, simple to use and hugely powerful for common, practical data manipulation tasks – and they even map well to large systems like backends for websites.
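As an example of the sort of common, practical task these languages make trivial (a sketch of my own, not from the article), tallying the most frequent client addresses in a web server log takes only a few lines of Python; the file name and format here are assumed purely for illustration.

```python
# A typical quick scripting job: count the most frequent client IPs
# in a web server access log, where the IP is the first field per line.
from collections import Counter

counts = Counter()
with open("access.log") as log:
    for line in log:
        if line.strip():
            counts[line.split()[0]] += 1  # first field: client address

for address, hits in counts.most_common(10):
    print(f"{hits:6d}  {address}")
```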
We will probably also see some convergence between “scripting” languages and “application” languages – Perl, Ruby and Python are already being used for large systems, and it’s possible to target Python at .NET now. There’s no reason to treat them as a completely separate set of languages from the “serious” set (C++, C#, Java et al.).
For the actual development process for large systems, though, I think we need to look at what MS are doing with Visual C# and .NET. Developing in C# and .NET lets you work at a higher level of abstraction than most other languages: there’s an excellent set of prewritten libraries, and the language itself is highly abstract. UI coding is as simple as writing a method to deal with a button click – none of the language contortions there were with C++ and MFC, because C# was designed from the ground up for developing GUI-driven applications.
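To show the “a button click is just a method” idea without switching illustration languages, here is the same pattern sketched in Python’s standard tkinter toolkit rather than C#/.NET – my substitution, not a claim about how .NET does it.

```python
# The whole of the UI wiring: define a function for the click, attach it
# to the button. No message pumps or macro maps in sight.
import tkinter as tk

def on_click():
    label.config(text="Button clicked!")

root = tk.Tk()
label = tk.Label(root, text="Waiting...")
button = tk.Button(root, text="Click me", command=on_click)
label.pack()
button.pack()
root.mainloop()
```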
However, the thing that strikes you most about C# is how little actual code you write: most of the time you are simply calling prewritten methods on objects, or iterating over collections. There is no good reason why this has to be represented as lines of written code – perhaps the way forward is to represent these interactions between objects visually, only stepping down into something so vulgar as handwritten code when you need to do something unusual or obscure.
Ultimately, though, all of this will be driven by what people want to do with computers in the future. Computers are beginning their slow but inevitable move away from the desktop and becoming a more integrated part of our daily lives. Applications are changing to reflect that, and the way we approach development – and the languages we use – will change with them; that shift is probably what will have the biggest impact on programming in the future.
Hmmm… food for thought for the ‘specialist bit arranger’ (or programmer). Please don’t be offended by my lack of response to the UI coding issue being ‘as simple as writing a method to deal with a button click…’ LOL! 😉
Aspect-Oriented Programming. It’s the future*!