Tuesday, January 31, 2012

We Really Don't Know How To Compute!

One of my new year resolutions was to blog more. It's not working out yet, as I've been too busy the last few weeks. It's already Jan 31 and this is only my second blog entry this year.

I recently came across a fascinating presentation from Gerald Jay Sussman, co-author of the famous MIT computer science textbook 'Structure and Interpretation of Computer Programs' and co-inventor of the Scheme programming language.

He claims that we really don't know how to compute. He compares computer programs (constrained to rigid designs and difficult to adjust to new situations) to living organisms (which can be reconfigured to solve a problem in new ways), and makes a convincing argument that we need drastically different programming models to approach that level of flexibility.

He then introduces the Propagator Programming Model (work supported in part by the MIT Mind Machine project). A propagator program is built as a network connecting cells and propagators. Cells collect and accumulate information. Propagators are autonomous machines that continuously examine some cells, perform computations on the information from those cells, and add the results to other cells.

A propagator program is analogous to an electrical wiring diagram. To extend it and add a new way to approach a problem, you simply add and connect new propagators. Your cells now collect alternate results from different propagators, and you can then decide to merge redundant results, combine partial results, or even exclude contradictory results when some propagators do not work well in a new situation.
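To make that wiring-diagram picture concrete, here is a toy sketch of my own (not Sussman's actual implementation; the `Cell` and `propagator` names are just illustrative). Cells hold content and alert their watchers; a propagator watches its input cells and writes results into an output cell, raising an error if a contradictory result arrives:

```python
# Toy propagator network (hypothetical API, for illustration only).
class Cell:
    def __init__(self):
        self.content = None
        self.watchers = []  # propagators watching this cell

    def add_content(self, value):
        if value is None or value == self.content:
            return  # nothing new to learn
        if self.content is not None and self.content != value:
            raise ValueError("contradiction: %r vs %r" % (self.content, value))
        self.content = value
        for propagate in self.watchers:
            propagate()  # wake up the propagators watching this cell

def propagator(inputs, output, fn):
    """Attach an autonomous machine that recomputes when any input changes."""
    def run():
        values = [c.content for c in inputs]
        if all(v is not None for v in values):
            output.add_content(fn(*values))
    for c in inputs:
        c.watchers.append(run)
    run()

# Wire up a tiny network computing c = a + b.
a, b, c = Cell(), Cell(), Cell()
propagator([a, b], c, lambda x, y: x + y)
a.add_content(3)
b.add_content(4)
print(c.content)  # → 7
```

Extending the network is just more wiring: connecting a second propagator to cell `c` would let it accumulate an alternate result for the same answer.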

This is similar to how human beings solve problems. We try several approaches, weigh and combine their results, then wire up our brain with the approaches that work well for the next time we face a similar situation.

I couldn't help but see some relation between that propagator model and my recent interests in computer programming models.

Massively parallel programming
A propagator program is naturally parallel. Each propagator is continually watching its neighbor cells and computing new results as their values change, autonomously and in parallel with other parts of the program.

Functional programming
A propagator is like a pure function that computes results only from its inputs. A result can also be wrapped in a monad to carry information about its premises, relevance or correctness (useful, for example, to pick or combine partial results as they accumulate in a cell).
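As a rough illustration of that wrapping idea (again a toy sketch of my own, not an established API), a result could carry the set of premises it depends on, and a pure function could be "lifted" so that premises propagate alongside values:

```python
# A value tagged with the premises it rests on (hypothetical names).
from dataclasses import dataclass

@dataclass(frozen=True)
class Supported:
    value: float
    premises: frozenset  # which assumptions this result depends on

def lift(fn):
    """Lift a pure function so it also propagates premises."""
    def wrapped(*args):
        value = fn(*(a.value for a in args))
        premises = frozenset().union(*(a.premises for a in args))
        return Supported(value, premises)
    return wrapped

add = lift(lambda x, y: x + y)
r = add(Supported(2.0, frozenset({"sensor-A"})),
        Supported(3.0, frozenset({"sensor-B"})))
print(r.value, sorted(r.premises))  # 5.0 ['sensor-A', 'sensor-B']
```

A cell accumulating `Supported` values could then discard any result whose premises are known to be bad, which is exactly the kind of selective merging the propagator model calls for.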

Web Component Assembly
The wiring diagram describing a propagator program seems to map really well to an SCA (Service Component Architecture) component assembly wiring diagram. A propagator could easily be realized as a stateless Web component providing a computation service. A cell could be realized as a Web resource accumulating and storing data.

The propagator model also seems like a great candidate to represent programming expressions as networks of connected components, a subject I researched a bit last year, but which would be too long to describe here... perhaps in another blog post.

Anyway, that got me thinking about a fun weekend project. If I find the time, I'd like to do a little hacking and experiment with implementing a propagator program as an assembly of SCA components wired together.

How about defining two new cell and propagator SCA component types, perhaps with REST interfaces to allow propagator programs to live on the Web and play with data from some useful REST services out there?
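Just to get a feel for what a REST-flavored cell might look like, here is a very rough sketch (everything here is hypothetical, not an actual SCA component type or runtime): a cell as an HTTP resource where GET reads the accumulated content and PUT contributes new content.

```python
# Hypothetical sketch: a single "cell" exposed as an HTTP resource.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CELL = {"content": None}  # the cell's state

class CellHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET returns the cell's accumulated content as JSON.
        body = json.dumps(CELL).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):
        # PUT contributes new content; a real cell would merge, not overwrite.
        length = int(self.headers["Content-Length"])
        data = json.loads(self.rfile.read(length))
        CELL["content"] = data["content"]
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CellHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A propagator would PUT its results into the cell and GET its inputs:
url = "http://127.0.0.1:%d/" % server.server_port
req = urllib.request.Request(url, data=json.dumps({"content": 42}).encode(),
                             method="PUT")
urllib.request.urlopen(req)
with urllib.request.urlopen(url) as resp:
    print(json.loads(resp.read()))  # {'content': 42}
server.shutdown()
```

The propagators themselves would then be stateless services that watch a few cell URLs and PUT computed results into others, which is more or less the assembly wiring diagram drawn with Web resources.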

Wouldn't that be fun?

The postings on this site are my own and don’t necessarily represent positions, strategies or opinions of my employer IBM.