Extracts from Alan Kay’s AMA on Hacker News

Bora M. Alper
Jun 21, 2016




In “The Power of the Context” (2004) you wrote:

...In programming there is a wide-spread 1st order theory that one shouldn’t build one’s own tools, languages, and especially operating systems. This is true—an incredible amount of time and energy has gone down these ratholes. On the 2nd hand, if you can build your own tools, languages and operating systems, then you absolutely should because the leverage that can be obtained (and often the time not wasted in trying to fix other people’s not quite right tools) can be incredible.

I love this quote because it justifies a DIY attitude of experimentation and reverse engineering, etc., that generally I think we could use more of.

However, more often than not, I find the sentiment paralyzing. There’s so much that one could probably learn to build oneself, but as things become more and more complex, one has to make a rational tradeoff between spending the time and energy in the rathole, or not. I can’t spend all day rebuilding everything simply because I can.

How does one decide when to DIY, and when to use what’s already been built?

This is a tough question. (And always has been in a sense, because every era has had projects where the tool building has sunk the project into a black hole.)

It really helped at Parc to work with real geniuses like Chuck Thacker and Dan Ingalls (and quite a few more). There is a very thin boundary between making the 2nd order work vs getting wiped out by the effort.

Another perspective on this is to think about “not getting caught by dependencies” — what if there were really good independent module systems — perhaps aided by hardware — that allowed both worlds to work together (so one doesn’t get buried under “useful patches”, etc.)

One of my favorite things to watch at Parc was how well Dan Ingalls was able to bootstrap a new system out of an old one by really using what objects are good for, and especially where the new system was even much better at facilitating the next bootstrap.

I’m not a big Unix fan — it was too late on the scene for the level of ideas that it had — but if you take the cultural history it came from, there were several things they tried to do that were admirable — including really having a tiny kernel and using Unix processes for all systems building (this was a very useful version of “OOP” — you just couldn’t have small objects because of the way processes were implemented). It was quite sad to see how this pretty nice mix and match approach gradually decayed into huge loads and dependencies. Part of this was that the rather good idea of parsing non-command messages in each process — we used this in the first Smalltalk at Parc — became much too ad hoc because there was not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http — just think of what this could have been if anyone had been noticing …)

The real reason to do the 2nd order is get new things rather than incrementing on older poorer ideas.

When you were envisioning today’s computers in the 70s you seemed to have been focused mostly on the educational benefits, but it turns out that these devices are even better for entertainment, to the point where they are dangerously addictive and steal time away from education. Do you have any thoughts on interfaces that guide the brain away from its worst impulses and towards more productive uses?

We were mostly thinking of “human advancement” or, as Engelbart’s group termed it, “Human Augmentation” — this includes education along with lots of other things. I remember noting that if Moore’s Law were to go a decade beyond 1995 (Moore’s original extrapolation), then things like television and other “legal drugs” would be possible. We already had a very good sense of this before such things were possible, from noting how attractive early video games — like SpaceWar — were. This is a part of an industrial civilization being able to produce surpluses (the “industrial” part), with the “civilization” part being how well children can be helped to learn not to give in to the cravings of genetics in a world of over-plenty. This is a huge problem in a culture like the US, in which making money is rather separated from worrying about how the money is made.

Then what do you think about the concept of “gamification?” Do you think high densities of reward and variable schedules of reward can be exploited to productively focus human attention and intelligence on problems? Music itself could be thought of as an analogy here. Since music is sound structured in a way that makes it palatable (i.e. it has a high density of reward) much human attention has been focused on the physics of sound and the biomechanics of people using objects to produce sound. Games (especially ones like Minecraft) seem to suggest that there are frameworks where energy and attention can be focused on abstracted rule systems in much the same way.

I certainly don’t think of music along these lines. Or even theater. I like developed arts of all kinds, and these require learning on the part of the beholder, not just bones tossed at puppies.

I guess in the use of technology one faces a process rather similar to natural selection, in which the better the user’s ability to restrict their use to what they have to do, the more likely the survival, i.e. the user will not procrastinate and get distracted. The use of computers for entertainment is unstoppable; it’s nearly impossible to keep kids from finding and playing those games, chatting with friends on WhatsApp, and being exploited otherwise by companies that make money from that sort of exploitation, even though that’s at the cost of their psychological health and future success. People spend every single second of the day connected and distracted, and this seems irreversible. I wonder if you have any practical thoughts on how this can be remedied.

My friend Neil Postman (our best media critic for many years) advocated teaching children to be “Guerilla Warriors” in the war of thousands of entities trying to seize their brains for food. Most children — and most parents, most people — do not even realize the extent to which this is not just aggressive, but regressive…

1. What do you think about the hardware we are using as the foundation of computing today? I remember you mentioning how cool the architecture of the Burroughs B5000 was, designed to run higher-level programming languages on the metal. What should hardware vendors do to make hardware that is friendlier to higher-level programming? Would that help us depend less on VMs while still enjoying silicon-level performance?

2. What software technologies do you feel we’re missing?

If you start with “desirable process” you can eventually work your way back to the power plug in the wall. If you start with something already plugged in, you might miss a lot of truly desirable processes.

Part of working your way back to reality can often require new hardware to be made or — in the case of the days of microcode — to shape the hardware.

There are lots of things vendors could do. For example: Intel could make its first level caches large enough to make real HLL emulators (and they could look at what else would help). Right now a plug-in or available FPGA could be of great use in many areas. From another direction, one could think of much better ways to organize memory architectures, especially for multi-core chips where they are quite starved.

And so on. We’ve gone very far down the road of “not very good” matchups, and of vendors getting programmers to make their CPUs useful rather than the exact opposite approach. This is too large a subject for today’s AMA.

I’ve found the Situated Learning perspective interesting. At least I think about it when I feel grumpy about all the young kids and Node.js, and I genuinely like that they are excited about what they are doing, but it seems like they are on a mission to rediscover EVERYTHING, one technology and one long discussion at a time. But they are a community of learning, and maybe everyone (or every community) does have to do that if they are to apply creativity and take ownership over the next step. Is there a better way?

It used to be the case that people were admonished to “not re-invent the wheel”. We now live in an age that spends a lot of time “reinventing the flat tire!”

The flat tires come from the reinventors often not being in the same league as the original inventors. This is a symptom of a “pop culture” where identity and participation are much more important than progress…

What do you think of Bret Victor’s work? Or Rich Hickey?

I love Bret Victor’s work!

He is certainly one of the most interesting and best thinkers of today.

They collaborate at YCR / HARC!

YCR is not “my group” — I’m very happy to have helped set up HARC! with its very impressive group of Principal Investigators (including Bret).

Previously you’ve mentioned the “Oxbridge approach” to reading, whereby — if my recollection is correct — you take four topics and delve into them as much as possible. Could you elaborate on this approach (I’ve searched the internet, couldn’t find anything)? And do you think this structured approach has more benefits than, say, a non-structured approach of reading whatever of interest?

There are more than 23,000,000 books in the Library of Congress, and a good reader might be able to read 23,000 books in a lifetime (I know just a few people who have read more). So we are contemplating a lifetime of reading in which we might touch 1/10th of 1% of the extant books. We would hope that most of the ones we aren’t able to touch are not useful or good, etc.

So I think we have to put something more than randomness and following links to use here. (You can spend a lot of time learning about a big system like Linux without hitting many of the most important ideas in computing — so we have to heed the “Art is long and Life is short” idea.)

Part of the “Oxbridge” process is to have a “reader” (a person who helps you choose what to look at), and these people are worth their weight in gold…

On the “worse is better” divide I’ve always considered you as someone standing near the “better” (MIT) approach, but with an understanding of the pragmatics inherent in the “worse is better” (New Jersey) approach too.

What is your actual position on the “worse is better” dichotomy?

Do you believe it is real, and if so, can there be a third alternative that combines elements from both sides?

And if not, are we always doomed (due to market forces, programming as “popular culture” etc) to have sub-par tools from what can be theoretically achieved?

I don’t think “pop culture” approaches are the best way to do most things (though “every once in a while” something good does happen).

The real question is “does a hack reset ‘normal’?” For most people it tends to, and this makes it very difficult for them to think about the actual issues.

A quote I made up some years ago is “Better and Perfect are the enemies of What-Is-Actually-Needed”. The big sin so many people commit in computing is not really paying attention to “What-Is-Actually-Needed”! And not going below that.

I fear this is because “What-Is-Actually-Needed” is non-trivial to figure out. Related: “scratch your own itch”, “bikeshedding”, “yak shaving”.

Exactly — this is why people are tempted to choose an increment, and will say “at least it’s a little better” — but if the threshold isn’t actually reached, then it is the opposite of a little better, it’s an illusion.

What advice would you give to those who don’t have a HARC to call their own? what would you do to get set up/a community/funding for your adventure if you were starting out today? What advice do you have for those who are currently in an industrial/academic institution who seek the true intellectual freedom you have found? Is it just luck?!

I don’t have great advice (I found getting halfway decent funding since 1980 to be quite a chore). I was incredibly lucky to wind up quite accidentally at the U of Utah ARPA project 50 years ago this year.

Part of the deal is being really stubborn about what you want to do — for example, I’ve never tried to make money from my ideas (because then you are in a very different kind of process — and this process is not at all good for the kinds of things I try to do).

Every once in a while one runs into “large minded people” like Sam Altman and Vishal Sikka, who do have access to funding that is unfettered enough to lead to really new ideas.

* What programming language maps most closely to the way that you think?

* What concept would you reify into a popular language such that it would more closely fit that mapping?

* What one existing reified language feature do you find impacts the way you write code the most, especially even in languages where it is not available?

I think I’d ask “What programming language design would help us think a lot better than we do now (we are currently terrible!)?”

Certainly, in this day and age, the lack of safe meta-definition is pretty much shocking.

Could you give an example of what you mean by “safe meta-definition”?

“Meta is dangerous” so a safe meta-language within a language will have “fences” to protect.

(Note that “assignment” to a variable is “meta” in a functional language (and you might want to use a “roll back ‘worlds’ mechanism” (like transactions) for safety when this is needed.)

This is a parallel to various kinds of optimization (many of which violate module boundaries in some way) — there are ways to make this a lot safer (most languages don’t help much)
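The rollback “worlds” mechanism mentioned above can be made concrete with a small sketch. The following Python is purely illustrative (the class and method names are invented here, not taken from any actual Worlds implementation): assignments happen inside a child “world” and are invisible to the parent until an explicit, transaction-like commit — a “fence” around the meta-operation of assignment.

```python
# Hypothetical sketch of a "worlds"-style rollback mechanism:
# reads fall through to the parent world; writes stay local until commit.

class World:
    def __init__(self, parent=None):
        self.parent = parent
        self.bindings = {}

    def sprout(self):
        """Create a child world whose changes are isolated until commit."""
        return World(parent=self)

    def lookup(self, name):
        world = self
        while world is not None:
            if name in world.bindings:
                return world.bindings[name]
            world = world.parent
        raise NameError(name)

    def assign(self, name, value):
        # The "meta" operation, fenced inside this world only.
        self.bindings[name] = value

    def commit(self):
        """Merge this world's changes into its parent, transaction-style."""
        self.parent.bindings.update(self.bindings)


top = World()
top.assign("x", 1)

trial = top.sprout()
trial.assign("x", 99)        # speculative assignment, invisible to `top`
assert top.lookup("x") == 1  # parent is untouched
trial.commit()               # make the speculation real
assert top.lookup("x") == 99
```

Discarding `trial` without committing is the “roll back”: the parent never sees the speculative bindings.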

I’ve always felt that the meta space is too exponential or hyper to mentally represent or communicate. Perhaps we need different lenses to project the effects of the meta space on our mental model. Do you think this is why Gregor decided to move towards aspects?

I don’t think Aspects is nearly as good an idea as MOP was. But the “hyperness” of it is why the language and the development system have to be much better. E.g. Dan Ingalls put a lot of work into the Smalltalks to allow them to safely be used in their own debugging, even very deep mechanisms. Even as he was making these breakthroughs back then, we were all aware there were further levels that were yet to be explored. (A later one, done in Smalltalk was the PIE system by Goldstein and Bobrow, one of my favorite meta-systems)

What do think about the current state of language design (Swift, Rust, Go)? Anything that makes you happy/annoys you?

I think all languages today annoy me — just put me down as a grump. They seem to be at a very weak level of discourse for the 21st century. (But a few are fun when looked at from the perspectives of the past e.g. Erlang …)

As a high school teacher, I often find that discussions of technology in education diminish ‘education’ to curricular and assessment documentation and planning; however, these artifacts are only a small element of what is, fundamentally, a social process of discussion and progressive knowledge building.

If the real work and progress with my students comes from our intellectual back-and-forth (rather than static documentation of pre-existing knowledge), are there tools I can look to that have been/will be created to empower and enrich this kind of in situ interaction?

This is a tough one to try to produce “through the keyhole” of this very non-WYSIWYG poorly thought through artifact of the WWW people not understanding what either the Internet or computer media are all about.

Let me just say that it’s worth trying to understand what might be a “really good” balance between traditional oral culture learning and thinking, what literacy brings to the party, especially via mass media, and what the computer and pervasive networking should bring as real positive additions.

One way to assess what is going on now is partly a retreat from real literacy back to oral modes of communication and oral modes of thought (i.e. “texting” is really a transliteration of an oral utterance, not a literary form).

This is a disaster.

However, even autodidacts really need some oral discussions, and this is one reason to have a “school experience”.

The question is balance. Fluent readers can read many times faster than oral transmissions, and there are many more resources at hand. This means in the 21st century that most people should be doing a lot of reading — especially students (much much more reading than talking). Responsible adults, especially teachers and parents, should be making all out efforts to help this to happen.

For the last point, I’d recommend perusing Daniel Kahneman’s “Thinking, Fast and Slow”, and this will be a good basis for thinking about tradeoffs between actual interactions (whether with people or computers) and “pondering”.

I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs….

In seeking to consider what form this “really good balance” might take, can you recommend any favored resources/implementations to illustrate what “real positive additions” computers and networking can bring to the table? I’m familiar with the influence of Piaget/Papert — but I would love to gain some additional depth on the media/networking side of the conversation.

With a good programming language and interface, one — even children — can create from scratch important simulations of complex non-linear systems that can help one’s thinking about them.
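To make the claim above tangible, here is a hedged example of the kind of simulation a learner might build from scratch; the function name and parameter values are invented for illustration. It models logistic population growth, a simple non-linear system where growth feeds back on itself and levels off near a carrying capacity.

```python
# Illustrative sketch: discrete logistic growth, a simple non-linear system.
# Growth is proportional to the population, but slows as it nears capacity.

def simulate_population(p0, rate, capacity, steps):
    population = p0
    history = [population]
    for _ in range(steps):
        population += rate * population * (1 - population / capacity)
        history.append(population)
    return history


history = simulate_population(p0=10, rate=0.5, capacity=1000, steps=40)
assert history[-1] > history[0]   # the population grew...
assert history[-1] < 1000         # ...but leveled off below the capacity
```

Even this dozen-line model lets a child discover, by experiment, behavior (S-shaped growth, sensitivity to the rate) that is hard to reason about from the rule alone.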

Many mainstream programming tools seem to be moving backwards. For example, Saber-C of the 1980s allowed hot-editing without restarting processes, and offered graphical views of data structures. Similarly, the ability to experiment with collections of code before assembling them into a function was an advance.

Do you hold much hope for our development environments helping us think?

You could “hot-edit” Lisp (1.85 at BBN) in the 60s (and there were other such systems). Smalltalk at Parc in the 70s used many of these ideas, and went even further.

Development environments should help programmers think (but what if most programmers don’t want to think?)

A lot of the VPRI work involved inventing new languages (DSLs). The results were extremely impressive but there were some extremely impressive people inventing the languages. Do you think this is a practical approach for everyday programmers? You have also recommended before that there should be clear separation between meta model and model. Should there be something similar to discipline a codebase where people are inventing their own languages? Or should just e.g. OS writers invent the languages and everyone else use a lingua franca?

Tricky question. One answer would be to ask whether there is an intrinsic difference between “computer science” and (say) physics? Or are the differences just that computing is where science was in the Middle Ages?

Did you intend to compare the progress and formalization of the fields?

Yes, that was what I was driving at. Anyone could do physics in the Middle Ages — they just had to get a pointy hat. A few centuries later after Newton, one suddenly had to learn a lot of tough stuff, but it was worth it because the results more than paid for the new levels of effort.

We met at a retreat last fall, and it was a real treat for me to hear some fantastic stories/anecdotes about the last 50 years of computing (which I have only been directly involved with for about 1/10th of). Another one of my computing heroes is Seymour Cray, whom we talked about a bit, along with your time at Chippewa Falls. While a lot of HN’ers know about you talking about the Burroughs B5000, I (and I bet most others) would have had no idea that you got to work with Seymour on the CDC 6600. Do you have any particular Seymour Cray/6600 stories that you think would be of interest to the crowd?

Seymour Cray was a man of few words. I was there for three weeks before I realized he was not the janitor.

The “Chippewa OS” is too big a story for here, but it turned out that the official Control Data software team failed to come up with any software for the 6600! Hence a bunch of us from Livermore, Los Alamos, NCAR, etc. — the places that had bought the machine — were assembled in Chippewa Falls to “do something”.

Perhaps the most interesting piece of unofficial software was a multitasking OS with graphical debugger for the 6600 that had been written by Seymour Cray — to help debug the machine — in octal absolute! I had the honor of writing a de-assembler for this system so we ordinary mortals could make changes and add to it (this was an amazing tour de force given the parallel architecture and multiple processes for this machine). And it was also a good object lesson for what Cray was really good at, and what he was not so good at (there were some really great decisions on this machine, and some really poor ones — both sets quite extreme)

Do you still see an advantage of using Smalltalk (like Squeak/Pharo) as a general purpose language/tool to build software or do you think that most of its original ideas were somehow “taken” by other alternatives?

Smalltalk in the 70s was “just a great thing” for its time. The pragmatic fact of also wanting it to run in real time fast enough for dynamic media and interactions, and to have it fit within the 64-Kbyte (maybe a smidge more) Alto, rendered it not nearly as scalable into the future in many dimensions as the original ideas intended.

We have to think about why this language is even worth mentioning today (partly I think by comparison …)

I would also like to know if he uses emacs or vim, or whatever. And if he thinks the editor is relevant.


You once said that lisp is the greatest single programming language ever designed. Recently, with all the emergence of statically typed languages like Haskell and Scala, has that changed? Why do you think after being around for so long, lisp isn’t as popular as mainstream languages like Java, C or Python?

I should clarify this. I didn’t exactly mean as a language to program in, but as (a) a “building material” and (b) especially as an “artifact to think with”. Once you grok it, most issues in programming languages (including today) are much more thinkable (and criticizable).

The second question requires too long an answer for this forum.

How do you think we can improve today’s world (not just with technology)? What do you think is our species’ way forward? How, as a civilization, can we ‘get to a higher level’? Specifically, I’m interested in your views on ending poverty and suffering, not destroying the Earth, improving our political and social systems, improving education, etc. I understand that these are very broad topics without definitive answers but I’d love to hear some of your thoughts about these.

“What Fools these Mortals be!” Puck meant that we are easy to fool. In fact we like to be fooled — we pay lots of money to be fooled!

One way to look at this is that the most important learning anyone can do is to understand “Human beings as if from Mars” — meaning to get beyond our fooling ourselves and to start trying to deal with what is dangerous and counterproductive in our genetic (and hence cultural) makeups. This is quite different from what most schools think they are supposed to be about — but the great Jerome Bruner in the 60s came up with a terrific curriculum for 5th graders that was an excellent start for “real anthropology” in K-5. (Man: A Course of Study)

How do you think that object-oriented programming and distributed computing will intertwine in the not-so-far future?

I’ve been constantly surprised about how what I called “object-oriented” and “system-oriented” got neutered into Abstract Data Types, etc. (I think because people wanted to retain the old ways of programming with procedures, assignment statements, and data structures). These don’t scale well, but enormous amounts of effort have been expended to retain the old paradigms …

I recall reading an article about 10 years ago describing a PARC research project in which networked computers with antennae were placed throughout a set of rooms, and the subject carried a small transmitter with them from room to room. As the computer in each room detected the transmitter, it triggered actions in each room. I think it was called “ambient computing.”

Does this ring a bell for you? I have searched for this article recently and not been able to find it again.

Yes, this idea was originally Nicholas Negroponte’s in the 70s. The Parc version was called “Ubiquitous Computing” and was led by Mark Weiser in the 80s …

What is your opinion about the Self programming language? I’ve read the “STEPS Toward the Reinvention of Programming” PDF and this feels related, especially with the Klein interpreter.

I liked Self. “Good OOP” is still waiting for a much better notion to replace the idea of a “Class”

How would you teach programming today to a kid? Would you choose a particular medium such as a computer, a raspberry pi or even a tablet?

It’s time to do another children’s language. My answer a few years ago would have been “in part: Etoys”, “in lesser part: Scratch (done by some of the same people but too much of a subset)”.

Any conventional paradigms that you’d like to see retired? FS, Unix, signalling/messaging, etc.?

Most of them …

What is your one piece of advice to college students studying CS?

Learn a lot of other things, and at least one real science and one real engineering. This will help to calibrate the somewhat odd lore aka “computing knowledge”. I would certainly urge a number of anthropology courses (and social psychology, etc.), theater, and so forth. In the right school, I’d suggest “media theory” (of the “McLuhan”, “Innis”, “Postman” kind …)

What do you think about functional languages like Haskell, OCaml, etc.?

They need a much better idea of time (such as approaches to McCarthy’s fluents).

And then there is the issue that we need to make “systems” …

I like what a function is, and this idea should be used, but I think it is better used rather differently …
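McCarthy’s “fluents”, mentioned above, are one way to give functional programs a real notion of time: instead of mutating a variable, a value becomes a function of time, and “change” is just recording a new (time, value) fact. The sketch below is an invented illustration of that idea (the `Fluent` class and its methods are not from any real library), assuming facts are recorded in increasing time order.

```python
# Hypothetical sketch of McCarthy-style fluents: a value with an immutable
# history, queried with "what did it hold at time t?" rather than mutated.

import bisect

class Fluent:
    def __init__(self, initial):
        # Parallel lists of change-times and values; index 0 is "time zero".
        self.times = [0]
        self.values = [initial]

    def set_at(self, t, value):
        # Record a new fact (assumes t >= the last recorded time).
        self.times.append(t)
        self.values.append(value)

    def holds(self, t):
        # Find the most recent fact at or before time t.
        i = bisect.bisect_right(self.times, t) - 1
        return self.values[i]


door = Fluent("closed")
door.set_at(5, "open")
door.set_at(9, "closed")
assert door.holds(0) == "closed"
assert door.holds(6) == "open"
assert door.holds(9) == "closed"
```

Nothing is ever overwritten, so pure functions over the history stay valid — which is the property the Elm-style time-travel debugging mentioned below also relies on.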

I am not sure what the time problem is for functional programming, but I reckon the Elm language/framework solves problems with time in a very elegant way with its flavour of FRP and Signals.

In Elm, you can play back your UI interactions in a debugger as they happened and watch the variables as they would have been!

Worth looking at Bob Balzer’s EXDAMS system at RAND in the late 60s / early 70s.

Do you believe everyone should be taught, or exposed to, programming in school?

Everyone should get fluent in “real science” and many other things come along very nicely with this — including the kinds of programming that will really help most people.

How do you come up with ideas?

There’s coming up with ideas: learn to dream while you are awake, the ideas are there.

There’s coming up with a good idea: learn how to not get buried in your ideas (most are mediocre down to bad even for people who have “good idea skills”!)

I write down ideas in notebooks to get rid of them. Every once in a while one will capture a different point of view.

And, there’s the Princeton Tea joke of scientists comparing what they did for ideas. One says “I have them in the middle of the night so I have a pad by my bed”. Another says “I have them in the shower so I have a grease pencil to write them on the walls”. Einstein was listening and they asked him about his ideas. He said “I don’t know, I’ve only had two!”

(Some people are better at filtering than others …)

How important is finding the right language?

For big problems, “finding the problem” is paramount — this will often suggest representation systems that will help think and do better. What’s wanted is a language that allows you to quickly make and refine your sense of the context and discourse — i.e. to make languages as you need, using tools that will automatically provide development environments, etc., and allow practical exploration and progress.

What are your thoughts on how rapidly GUIs are evolving nowadays?

Or devolving?

Do you think Java is an Object Oriented programming language?

Object oriented to me has always been about encapsulation, sending messages, and late-binding. You tell me …
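Those three properties can be shown in a toy sketch. This is an invented illustration, not Kay’s definition made code: state is encapsulated behind a single message-receiving entry point, the handler is looked up late (at send time), and the object itself decides what to do with a message it doesn’t understand.

```python
# Toy illustration of encapsulation, message sending, and late binding.

class Account:
    def __init__(self, balance):
        self._balance = balance            # encapsulated: no outside poking

    def receive(self, message, *args):     # everything arrives as a message
        handler = getattr(self, "_" + message, None)
        if handler is None:                # late-bound: resolved at send time,
            return "does-not-understand"   # and the receiver decides the fallback
        return handler(*args)

    def _deposit(self, amount):
        self._balance += amount
        return self._balance

    def _balance_query(self):
        return self._balance


acct = Account(100)
assert acct.receive("deposit", 50) == 150
assert acct.receive("balance_query") == 150
assert acct.receive("explode") == "does-not-understand"
```

The contrast with ordinary method calls is the point: nothing outside the object touches `_balance`, and the mapping from message to behavior is the receiver’s business, decided at runtime.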

While I enjoy your thoughts on “object oriented”, “functional”, etc, I’d love to hear your thoughts about philosophy of religion and its origins (i.e. a slightly meta version of the conversation around “object oriented”, “functional”, etc).

Bob Barton once called systems programmers “High priests of a low cult” and pointed out that “computing should be in the School of Religion” (ca 1966).

Thinking is difficult in many ways, and we humans are not really well set up to do it — genetically we “learn by remembering” (rather than by understanding) and we “think by recalling” rather than actual pondering. The larger issues here have to do with various kinds of caching Kahneman’s “System 1” does for us for real-time performance and in lieu of actual thinking.

1. Spaced repetition can make the recalling — and thus the thinking and pondering — easier. It can certainly make one more consilient, given the right choice of “other things to study”, e.g. biology or social psychology, as you’ve mentioned in an earlier comment.

2. It takes quite a bit of training for a reader to detect bias in their own cognition, particularly the “cognition” that happens when they’re reading someone else’s thoughts.

What to do about System 1, though? Truly interactive research/communication documents, as described by Bret Victor, should be a great help, to my mind, but what do you think could be beyond that?

I think that the “training” of “System 1” is a key factor in allowing “System 2” to be powerful. This is beyond the scope of this AMA (or at least beyond my scope to try to put together a decent comment on this today).

There’s a recursive sense in which “training” “System 1” involves assimilating more abstractions, through practice and spaced repetition, such as deferring to the equations of motion when thinking about what happens when one throws a ball in the air. Going as far as providing useful interfaces to otherwise difficult cognitive terrain (a la Mathematica) is still part of this subproject. The process of assimilating new abstractions well enough that they become part of one’s intuition (even noisily) is a function of time and intense focus. What do you see as a way to aggregate the knowledge complex and teach further generations of humans what the EEA couldn’t, fast enough that they can solve the environmental challenges ahead? What’s HARC’s goal for going about this?

Yes, this is precisely what I meant here, and it’s a very interesting set of ideas for education. I can’t articulate a great goal yet.

With such a huge threat (global warming) to humanity on the horizon, do you maintain a sense of optimism here? Or will humanity forget how to “compute” the same way Europeans forgot how to make Roman concrete?

There are some good old sci-fi stories about your last question.

People in my position are optimists. People in my position who spend a lot of time trying to improve education are realists.

How do you seek out the people you choose to work with, now or in the past? Is it an active process, or do you find interesting people naturally glom around a nucleus of interesting work?

Interesting people, even as students, leave trails in various ways

What do you think about a “digital Sabbath,” specifically in the context of touchstones like:

Engelbart’s Augmenting Human Intellect; Edge’s annual question, “How Is the Internet Changing the Way You Think?”; Carr’s “Is Google Making Us Stupid?” … and other common criticisms of “information overload”

Candles, wine and bread aren’t technologies? Hard to take this seriously. (And I like to play music, and both music and musical instruments are technology, etc.)

A better issue is not getting sucked into “legal drugs” that have no nutritional value.

“We are already stupid” — this is why things could be much much better but aren’t. We have to start with ourselves, and a positive way to do this is to ask “what can real education really do to help humanity?”

1. Do you think the area of HCI is stagnating today?

2. What are your thoughts on programming languages that encapsulate Machine Learning within language constructs and/or generally take the recent advancements in NLP and AI and integrate them as a way to augment the programmer?

1. Yes for the most part. The exceptions are also interesting.

2. Haven’t seen anything above threshold yet.

What advice would you give language designers looking to keep the cognitive load of a new language low? What was your design process like, and would you do it that way again?

What kinds of learning do you want your prospective programmers to go through? It’s what you know fluently that determines cognitive load. In music you are asking for an answer in a range from kazoos to violins.

The main thing a language shouldn’t have is “gratuitous difficulties” (or stupidities).

That said, it’s worth thinking about the problems of introducing things in our culture today that even have the learning curves of bicycles, let alone airplanes …

Or … what is the place of tensor calculus in a real physical science?

When you’re working on a system, how do you approach the question, “Is this really useful, or am I spinning my wheels chasing a conceit?” Is the answer as simple as try it out and see what happens? Or do you have some sort of heuristic that your many years of experience has proven to be helpful?

I keep ideas on the back burners for a long time. (The nature of the ideas will determine whether this heuristic works well.)

You’ve been a long-time proponent of creating educational software (e.g. Squeak Etoys) to help teach kids how to program, and have been fairly critical of the iPad in the past. What are your thoughts on Apple’s new iPad Swift Playgrounds for teaching kids to program in Swift?

“Criticizing reasonably” takes much longer than praise — and my reactions to this question are “critical”.

If you were to design your own high school curriculum for gifted students, what would it look like?

I would work on the early grades for all students, especially gifted ones. The epistemological stance you wind up with gets set fairly early — not in stone but also hard to work with — and the early grades are where we should be putting our resources and efforts.

How many hours did you sleep per day during your most productive research years? Because I usually wonder how very productive people seem to achieve much more than others within the same 24 hours we all have.

I used to sleep about 5 hours a night well into my 50s, but then started to get respiratory infections, especially brought on by plane travel. After many years, my sleep habits were examined and I was told to get at least 8 or more hours a night. This is hard, so I usually make up with an afternoon nap. This has cured the infections but has cut down the number of active hours each day.

I’ve always been a big fan both of text-console-accessible UIs like CLIs and REPLs, as well as of GUIs. In my mind they each clearly have a different mix of strengths and weaknesses. One way a user might get a bit of the “best of both worlds” is an app or client featuring a hybrid input design where all three of these modes are available for the user to drive. Any thoughts on that?

Smalltalk and some other languages have all three — easy to do if the system is a “certain way”.

Do you mean if the underlying architecture design makes it easier?

Yes …

I’ve heard you frequently compare the OOP paradigm to microbiology and molecules. It seems like even Smalltalk-like object interactions are very different from, say, protein-protein interactions.

Not proteins, but cell to cell (this is still an interesting mechanism to contemplate and develop …)

Does it suck getting old for you? Do you have stamina to make new stuff?

It doesn’t suck “getting old” — and you only find out about stamina by trying to do things …

(We are fortunate that most of what is “new” is more like “particular ‘news’” rather than actually “new”. From the standpoint of actual categorical change, things have been very slow the last 30 years or so.)

How can I get my thinking out of the/my box?

Pay a lot of attention to realizing you (and all of us) are in “boxes”. This means what we think of as “reality” is just our “beliefs”, and that “the present” is just a particular construction. The future need have no necessary connection to this present once you realize it is just one of many possible presents it could have been. This will allow you to make much more use of the past (you can now look at things that didn’t lead to this present, but which now can be valuable).

Any memories or thoughts about Gregory Bateson?

Larger than life character, including literally!

A rather kind big bear of a guy, with lots of ideas (and maybe a bit too much baggage from the past …)

Worth reading all “the Macy people”, including Gordon Pask, Heinz von Foerster, etc.

What do you wish someone would ask you so that you could finally share your thoughts, but nobody has broached as a subject?

I actually don’t think this way — my actual interior is a kind of “hair-ball” and so questions are very helpful.

What impresses you the most about american free enterprise? What most disappoints you about it?

If there are lots of resources more or less available, then a lot of “hunting and gathering” types can do things with them, and some other types can see about what it takes to make resources rather than just consume them. The former tends to be competitive, and the latter thrives on cooperation.

The biggest problems are that the “enterprisers” very often have no sense that they are living in a system that has many ecological properties and needs to be “tended and gardened”.

Not an easy problem because we are genetically hunters and gatherers, we had to invent most of the actual sources of wealth, and these inventions were not done by the most typical human types.

Yet another thing where education with a big “E” should really make a difference (today American education itself has pretty much forgotten the “citizenship” part, which is all about systems and tending them).

What is your recommendation to someone wanting to get into the kind of research you do?

“No one owes more to his research community than I do”

I lucked into the ARPA community 50 years ago (without even knowing that it existed).

A good start is to find people and places that are doing things you think are interesting …

Well, what is thinking about then? What was the mistake the Greeks made? In this video you said thinking is not about logic and that was the mistake the Greeks made.

It’s basically confusing “math” (which they had, and, like all of us, were overwhelmed by how neat it is and how much you can do with “thinking rationally”) with “science”, which they didn’t really have. “Math” is hermetic, and Science is a negotiation between our representation systems and “what’s out there”.

I.e. it is really difficult (didn’t happen) to guess the “self-evident” principles and operations that will allow you to deduce the universe. And being super-rational without science is one of the most dangerous things humans have come up with.

Do you believe that the gap between consuming software and creating software will disappear at some point? That is, do you expect we will soon see some homoiconic software environment where the interface for using software is the same as the interface for creating it?

One of the most interesting processes — especially in engineering — is to make a model to find out what it is that you are trying to make. Sounds a little weird, but the model winds up being a great focuser of “attempts at intent”.

Now let’s contemplate just how bad most languages are at allowing model building and having optimizations being orthogonal rather than intertwined …

Do we “really” need more programming languages ?

We could use a few “good ones” (meaning ones that really are about the realities, needs and scales of the 21st century).

I get the impression from the book Dealers of Lightning that Bob Taylor played an indispensable role in creating Xerox Parc. What are the Bob Taylors of today up to, and why aren’t they doing something similar?

“Dealers of Lightning” is not the best book to read (try Mitchell Waldrop’s “The Dream Machine”).

That said, Bob Taylor cannot be praised too highly, both for Parc and for his earlier stint as one of the ARPA-IPTO directors.

Simple answer: There aren’t a lot of Bob Taylors in any decade (but there are some). Big difference between then and now is that the funders were “just right” back then, and have been “quite clueless” over the last 30-some-odd years. An interesting very recent exception is what Sam Altman is doing — this is shaping up into the biggest, most interesting, most important initiative since the 70s.

What are your thoughts on the Semantic Web? Why do you think it hasn’t succeeded yet?

Too weak a model of meaning on all counts. Not a new idea, and still they did it again. (This is not an easy problem, and machine learning won’t do the job either.)

Compiled from https://news.ycombinator.com/item?id=11939851 at 2016-06-21T10:25:50+03:00