Tuesday, August 23, 2011

Can a programming language be completely visual?

Google recently abandoned their App Inventor for Android development tool. To be totally accurate they open-sourced it, but it's clear they won't be investing in it any longer. App Inventor was designed to make it possible to build apps (though certainly not every kind of app) for Android without having any background in programming. It didn't take off to the degree required for it to stay within Google's focus, so they dumped it. It might not be a popular move (especially with App Inventor users) but it's still likely the correct one.

There have been many attempts to make a programming language that is completely visual. The idea is that rather than type in lines of code, you would drag and drop icons that each represent an object or function and then chain them together to create your application's logic. It sounds great in theory and it demos very well. But does it work in reality?

If you are as old as I am, you've seen visual programming a few times before. There was a database development tool for the Mac back in the 1980's called Double Helix. Its programming language was completely visual and it was very easy to get started. However, as your functions became larger and more complex, the code became unwieldy. I remember visiting a friend who was building an application in Double Helix for his company. He had printed out some of his functions but they were so large (because icons simply consume more space than the equivalent text-based code) that he had literally covered an entire wall of his office. It was quickly becoming impractical.

There was also a series of products called VIP (Visual Interactive Programming) that attempted visual programming. And there was AppWare as well, which eventually failed. I looked at another one called Prograph which used a wiring diagram model which I'm sure made a lot of sense if you were an electrician or electrical engineer. The database development tool I used back in the 1980's through the mid-1990's, 4D, had a flowchart-based option (as opposed to the more popular text-based option) for writing methods but it wasn't very popular and I believe they eventually abandoned it.

There's an interesting Wikipedia article about visual programming languages. As I looked down the list of languages, I was surprised to see so many of them but I didn't recognize many names.

Why aren't visual programming languages more popular? Probably for the same reasons that visual instructions aren't always used in daily life. They can and do work just fine for simple things but the more complex and intricate the instructions need to be, the more difficult it becomes to describe them visually. We spend so much time reading and writing that it's really the only way we receive and communicate complex instructions. Of course there are times where visual instructions make sense but they are usually for dealing with physical things like Legos or building Ikea furniture.

I will continue to investigate future attempts at visual programming because I think it's an interesting area to research but I don't hold out much hope for a solution that can really be scaled to solve large, complex problems.


Bob Keeney said...

I'm a coach for First Lego League, which uses the MindStorms programming software. It is a visual programming environment and I find it nothing but frustrating. Even the kids hate it.

Karen said...

If you were a scientist or engineer you would DEFINITELY have heard of LabView.

It has been around many years. There are versions for Mac, Windows and Linux. It was originally released on the Mac in 1986 and it is very successful in its niche.

It is very commonly used in those fields because of all the VIs (virtual instruments) available for it. It makes creating nice UIs for sophisticated equipment control, data acquisition, and high-powered data reduction and display relatively easy.

BTW I have used RB in conjunction with it for a project.

- Karen

Steveorevo said...

LabView created Lego's Mindstorms. The visual editors found in products like Caligari's trueSpace (before M$ killed it), which generated JavaScript code, Blender, which manipulates Python, and similar interfaces like Unity3D's design environment all prove that such IDE features are not only possible, but highly sought after and ideal. They put the RAD in rapid application development. They all share the same attributes that made SmallTalk so great. If it wasn't for IBM's insane runtime fees, SmallTalk would probably dominate today. Fortunately, there are open-source projects like Squeak.org (a Smalltalk VM) that bring this technology to the masses. A practical, fully visual environment will probably follow. I would suspect a resurgence in this arena as serializable dot-syntax languages like JavaScript become the norm. Macromedia's Director (which switched from dot-syntax Lingo to dot-syntax JavaScript) was close but Adobe virtually killed it with terrible mismanagement. Adobe's Authorware (also on the chopping block) was closer still. I don't think it is so much a matter of if as of when.

Geoff Perlman said...

Someone tweeted about this blog post and asked why I had not mentioned Illumination Software Creator. In fact, I stumbled across it while searching for pictures of some of the visual programming languages I mentioned. I didn't specifically mention that one because I was focused on the mainstream products. Bryan's effort is ambitious and I applaud him, but I think it still suffers from what every other visual programming language suffers from: the more complex the logic gets, the more difficult it becomes to follow in a visual programming language. But Bryan is a smart guy and if he sticks with it, it's certainly possible he will come up with a solution.

Geoff Perlman said...

@ Steveorevo - Making design visual is the right way to go. We try to make as much of Real Studio visual as possible. You can make small, tedious logic jobs visual but I don't think you can make the language as a whole visual in any practical way.

William R. Porter said...

The solution to the scalability issue that visual programming languages typically have (ending up with an unwieldy, large chart that is hard to follow) is to group portions of the chart into smaller functional blocks. This is directly analogous to what happens in written code when a function is created or, with drag-and-drop RADs like Real Studio, when a button or object is created. No biggie, conceptually, but pulling it off so the language remains easy and intuitive is indeed the trick.
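In text-based code that grouping is ordinary function extraction. A small hypothetical Python sketch of the same move (the names and data here are illustrative, not from any of the tools discussed):

```python
# Sketch: collapsing a sprawling computation into named blocks, the
# text-language analogue of grouping chart tiles into one labeled box.

def normalize(values):
    # One "block": scale values into the 0..1 range.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def summarize(values):
    # Another "block": reduce the detail to a single number.
    return sum(values) / len(values)

# The top level now reads like a small chart of two boxes wired together.
data = [2, 4, 6, 8]
report = summarize(normalize(data))
print(report)  # 0.5
```

The top-level line stays readable no matter how much detail each block hides, which is exactly the property a visual language needs from its grouped tiles.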

Norman Palardy said...

I used Prograph for a bit and one of its biggest pitfalls, if not the biggest, was not that it was visual but that it was, by its very nature, parallel.
"Methods" essentially ran when all their inputs were available, which was very different from the fairly linear code execution most programmers knew.
You really had to work at making code serial and more like C, Pascal, etc.
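That firing rule — a node runs as soon as all of its inputs exist — is the classic dataflow execution model. A minimal sketch of such a scheduler in Python (illustrative names only, not Prograph itself):

```python
# Minimal dataflow scheduler sketch: each node fires once all of its
# input values have arrived, regardless of textual ordering.

class Node:
    def __init__(self, name, func, inputs):
        self.name = name          # label for this node's output value
        self.func = func          # operation to run when ready
        self.inputs = inputs      # names of upstream values feeding this node

def run_graph(nodes, initial):
    """Evaluate a dataflow graph. `initial` maps source names to values."""
    values = dict(initial)
    pending = list(nodes)
    while pending:
        # Fire every node whose inputs are all available.
        ready = [n for n in pending if all(i in values for i in n.inputs)]
        if not ready:
            raise RuntimeError("cycle or missing input")
        for n in ready:
            values[n.name] = n.func(*(values[i] for i in n.inputs))
            pending.remove(n)
    return values

# Example: (a + b) * (a - b); the two middle nodes have no forced order.
graph = [
    Node("sum",  lambda x, y: x + y, ["a", "b"]),
    Node("diff", lambda x, y: x - y, ["a", "b"]),
    Node("prod", lambda x, y: x * y, ["sum", "diff"]),
]
result = run_graph(graph, {"a": 5, "b": 3})
print(result["prod"])  # (5 + 3) * (5 - 3) = 16
```

Note that the "sum" and "diff" nodes have no prescribed order; the scheduler may fire them in either sequence, which is exactly the serialization problem described above.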

Daniel L. Taylor said...

Prograph was the language that got it right. It was well adapted to solving large, complex problems. I developed several applications in Prograph and was very disappointed to see it die out. There were a number of commercial and shareware apps written in it, and as I recall at least one Prograph developer did a project for Apple. It had a decent run as far as IDEs go, longer than Metrowerks CodeWarrior despite CodeWarrior's larger user base.

Prograph succeeded because it was a real, fully object-oriented language in which you could code generic algorithms, classes, and design patterns. The other visual languages I've seen were too specialized, with building blocks that were too high level. AppWare was the classic example of this.

Prograph's dataflow diagrams were not generally as compact as text, but Prograph solved this with "locals", an organizational tool whereby pieces of a function were enclosed in little, labeled boxes. I found them a valuable tool for organizing complex functions. They were also essentially self documenting. A properly written Prograph function could be extremely complex, yet another Prograph programmer could simply glance at it and understand the design. I've never seen a text language that was comparable in this respect.

While thinking of code in terms of dataflow diagrams took a little getting used to, I found that it, along with a few other language features, allowed for some very simple and elegant solutions to common programming problems. To this day I still have moments when I think about what I have to do in REALbasic or .NET to solve some problem, think about how it would have been done in Prograph, and sigh. (And I consider these my #1 and #2 RAD/high level language tools today.)

Norman mentions the parallel nature as a potential pitfall, and it did occasionally require extra effort to synchronize things. The flip side is that had Prograph survived, it would have been able to take advantage of multicore CPUs without any special effort on the part of the programmer.
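That multicore point can be sketched in a text language too: under the dataflow readiness rule, independent nodes are simply submitted to a pool together. A hypothetical Python illustration (Prograph would have done this implicitly, with no such code from the programmer):

```python
# Sketch: the dataflow readiness rule maps naturally onto a thread pool.
# Two independent "nodes" of the same inputs run concurrently, with no
# locks or explicit thread management in the program's own logic.
from concurrent.futures import ThreadPoolExecutor

def node_sum(x, y):
    return x + y

def node_diff(x, y):
    return x - y

with ThreadPoolExecutor() as pool:
    # Both nodes are ready (their inputs exist), so both are submitted at once.
    f_sum = pool.submit(node_sum, 5, 3)
    f_diff = pool.submit(node_diff, 5, 3)
    # The downstream node blocks only until its own inputs arrive.
    prod = f_sum.result() * f_diff.result()

print(prod)  # 16
```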

I'm rambling. But Prograph was a great language. The company hit a rough spot financially right at the moment they needed to complete moves to both Windows and Mac OS X. It's too bad. I do love REALbasic, but there's a lot to be learned from Prograph. Unfortunately it wasn't open sourced and the demos don't even run on OS X any more. I do have the Windows version in my XP VM. It still launches, though I never use it. Any code bases I saved from that language were moved to RB years ago.

Anonymous said...

@Daniel you can get a working Prograph clone called "Marten" from Andescotia http://andescotia.com/products/marten/ with a nice bonus of a PDF copy of Scott Steinman's "Programming in Prograph CPX" book. As I recall, Prograph died more because of business decisions to target effort in other places than due to inherent weakness in the tool (another victim of the 2000-era Internet bubble).

Luke Software Guy said...

Possible, but it can be pretty difficult, and I think you'll run into problems with customization and flexibility of the code. Just not the same mechanics involved.

vapour said...

I have for a long time believed that the main reason programming languages are languages and not visual tools comes down to two supporting causes:

1. Legacy principles of people in language design.

2. Additional barriers to portability which would come with a totally graphical environment.

Designers prefer a linguistic form because they are more comfortable with it, because most languages are derived from successful languages which preceded them (which in turn are linguistic), and simply because the verbal style seems more institutionally respectable than a graphical control interface.

In the second case, portability is hard enough as it is when one can't predict the graphical environment or user-interface API of the host system. Restricting oneself to a text-based language provides some ease of portability, as console-mode applications generally have text interfaces with simple implementations for keyboard/string IO.

Ironically this hobbles the language right from the outset: almost all modern programs, besides music applications, end up with output that is visual, yet most languages lack any native visual primitives at all owing to their textual form. As a result, most visual software is written through a variety of graphical APIs.

I believe there is plenty of scope for a purely visual language which assumes from the start you're going to need graphical results eventually and supports such things directly.

MaxMSP is a very visual language, as is vvvv. Embarcadero Delphi XE2 (which vvvv was written in) has some excellent 3D visual components.

Paul Hodara said...

If you think about it, all programming languages are 100% visual. The more granularity you need, the more icons you need. Most programmers use the 256 ASCII icons (or character set).

As soon as you decrease the number of icons you decrease the power of the programming language; therefore you will never have a programming language with a limited number of graphical icons that is as capable as a language using a full character set.

The full character set allows you to define new objects with new names. Once you start doing this in a graphical environment you lose all the benefits and you wind up with a conventional naming schema.

Paul Hodara said...
This comment has been removed by a blog administrator.
Unknown said...

Geoff, great post. Thanks, brings back fond memories. I recall Double Helix quite well; it was a tremendous innovation. We wrote complex applications and handed them to clerks with little computer knowledge who could take over the project and not only maintain it but enhance it. The problem Double Helix had was perhaps more related to the Mac's Motorola 68000 falling behind the Intel chips, and to the developer not porting it to run on Unix or, later, Windows. The levels of complexity of the assembled logic tiles can easily be simplified by packaging a group of tiles into an individual object (which, if I recall from back in the early 80's, might have been a feature? I think I may even have it on an old Mac in the garage that hasn't been fired up since the 80's).

Clive @ InfoTelesys

Unknown said...

Vapour, you make valuable comments relating to the nature of Computer training and education and the concepts of a purely graphical representation being portable.

Paul, recognize that the 256 ASCII “icons” you refer to in a 3GL are effectively arranged through levels of abstraction to represent two elements: “1's” and “0's”. One can make counter-arguments along the lines of Assembler languages.

One could even extend the concepts of the “design” debate to DNA, electrons, neurons, quarks etc. and get down to the quantum-physics level, which in many ways may be where “programming” is heading. Is the quantum-physics field graphical? The world we live in certainly is.

Some of the mathematical geniuses think in pictures. If you can find it, look for a documentary that was made on the guy they made the “Rain Man” movie about.

Back in the day we used things like punch cards and FORTRAN, and the arguments used to be about assembler versus 3GLs. 3GLs won and progressed with things like “vi”, “nroff” and “troff”; the arguments turned to “command line” versus “WYSIWYG”, and ASCII versus EBCDIC. ASCII and WYSIWYG won and graphical user interfaces replaced the command line; word processors and spreadsheets completely replaced the old syntactical technology and made anyone a programmer.

Back in the day before DNS we also had to know where computers were on the net and used things like UUCP to transfer mail; that got replaced by DNS, SMTP, POP, IMAP and HTML, and any old Joe could not only cruise the Internet with ease, they could contribute their knowledge.

Today no one uses punch cards, UUCP, FORTRAN, vi, nroff or troff, so why are programmers still stuck on 3GLs? Perhaps the problem is that computer science is taught linguistically rather than graphically, favouring linguistic over analytical skills. Are our universities and schools scaring off the real geniuses?

One cannot help noticing the "disappearance" of Google's App Inventor and Blockly and many other CASE tools, some of which I had the pleasure of working with in Silicon Valley: Rational Rose, Purify, etc. However, many of the tools have been successful, just like assembler and the old 3GLs.

Double Helix was successful, Visual Basic is still popular. As Karen and Steveorevo pointed out, LabVIEW was hugely successful in engineering circles. And there are more new ones too: Illumination, Scratch, Tersus, Marten, Blender, etc.

Blender certainly is a programming tool. Sure, it has a relatively steep learning curve; however, Blender, like many of the CAD tools, offers the opportunity to take program design from the simple 2D tiles of Double Helix to advanced 3D models. And in the virtual world we are not necessarily limited to only 3D.

Our InfoTelesys not-for-profit educational division GetiTEd is embarking on helping educators put together more effective computer training. Actually, we're completely rethinking education as a whole (along with government, banking and the courts). We cannot help noticing how “uneducated” children inherently come pre-programmed with computer skills their parents are devoid of. It's not a difference in intelligence; we suspect the adults are handicapped by their education methods and the banker-owned media.

To get a perspective of the extraordinary advances demonstrated with uneducated children, look into “The Hole in the Wall” project and OLPC's Ethiopia project.

Key to these astonishing developments and advances with the “uneducated” children, are graphical interfaces.

Today you can even relatively easily model “virtual clay” on a computer and then print the artists creation on a 3D printer. Surely we can use the same concepts to write programs.

Exciting times this New Renaissance.

Clive @ InfoTelesys