The Future of Web Development: Coding as a Service
Matthew Biggins

Before AWS existed you could already purchase web hosting, and had been able to for years. GoDaddy was founded in 1997 and by 2005 it was so successful it could afford to advertise during the Super Bowl. The only real difference between GoDaddy and AWS was the business model. I’m not trying to argue that AWS didn’t solve lots of hard technical problems, but it seems that what they did was within the bounds of what was already known before the company started.

Similarly, content management systems like WordPress have been around for many years. In the old days you needed to rent space on a web server and a database, deploy them yourself, install the right plugins, debug the issues caused by code that wasn’t production quality… What has happened over the past however many years is that companies have refined these systems so that they can be configured easily from a user interface, the plugins all work properly together, and the code quality is much higher. All of this is again totally evolutionary. I’m not attempting to trivialise what has happened, but if you had told a programmer 15 years ago that this is what things would look like today, they would probably have said it sounded in the ballpark.

I don’t know anything about AI, but if you look at the sort of problems it is applied to, they seem to be quite closely related to pattern matching. Take a vast number of question-answer pairs, expose the AI to this training set, let it find patterns, then use the patterns it has generated from the training set to guess the answers to things it hasn’t seen before. The guys at Google have done some truly amazing stuff with image recognition: they can do things that people can’t do. However, although this looks like black magic, it still falls under the remit of pattern matching, just on a superhuman level.
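The train-then-guess loop described above can be sketched in a few lines. This is a deliberately toy illustration, not anything from the article: the "model" is a 1-nearest-neighbour lookup and the data are invented.

```python
# Sketch of the pattern-matching loop: expose a model to (input, answer)
# pairs, let it capture the pattern, then use it to guess answers for
# inputs it has never seen.

def train(pairs):
    # "Training" here is simply memorising the examples.
    return list(pairs)

def predict(model, x):
    # Guess the answer attached to the closest training input.
    closest = min(model, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Toy training set: numbers labelled by their sign.
training = [(-3, "negative"), (-1, "negative"), (2, "positive"), (5, "positive")]
model = train(training)

print(predict(model, 4))   # an unseen input -> "positive"
print(predict(model, -2))  # an unseen input -> "negative"
```

Real systems replace the memorise-and-look-up step with something far more sophisticated, but the shape of the problem — pattern in, response out — is the same.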

Again, I know nothing about AI, but it seems that AI problems need to be phrased in terms of patterns to which a computer can respond. For driverless cars the pattern is the data the car gets from its cameras and other sensors, and the response is given in terms of velocity and direction.

I can see how you could phrase a medical diagnosis and course of treatment in these terms: the observed symptoms being the pattern, and the diagnosis and treatment plan being the response. People are already researching this: a century of medicine has left a wealth of historical data for scientists to build their models with. Again, I know nothing about the practice of medicine, but I’ve been told by people who do that doctors are taught to look for a single diagnosis that explains all of the symptoms, rather than many. That is a good kind of problem to work with.

The problem I have with your assertion that we are on the verge of computers being able to write meaningful computer programs is that the normalised dataset of pattern and response that would be needed to train an AI just isn’t there. The problem space is enormous, and so is the response space. How are you supposed to encode the specification of the program that the computer is going to generate?

People use AI to generate fast approximation routines for existing algorithms, but this is again a pattern-response problem, where the training set can be generated by the original algorithm.
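This special case is worth spelling out, because it shows why it dodges the dataset problem: the original algorithm can label as many training examples as you like. A minimal sketch, where the "original algorithm" is `math.sin` and the learned approximation is just a sampled lookup table with linear interpolation (both choices are mine, purely for illustration):

```python
import math

def build_approximation(f, lo, hi, samples):
    # Generate the training set by running the original algorithm itself.
    step = (hi - lo) / (samples - 1)
    xs = [lo + i * step for i in range(samples)]
    ys = [f(x) for x in xs]

    def approx(x):
        # Linearly interpolate between the two nearest sampled points.
        i = min(int((x - lo) / step), samples - 2)
        t = (x - xs[i]) / step
        return ys[i] * (1 - t) + ys[i + 1] * t

    return approx

# "Train" a fast sine approximation from 64 samples of the real thing.
fast_sin = build_approximation(math.sin, 0.0, math.pi, 64)
print(abs(fast_sin(1.0) - math.sin(1.0)))  # small approximation error
```

The point is that the specification here is trivial: match the original function’s outputs. Generating a whole program from a human-level specification has no such ready-made oracle.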

I’m not saying it won’t happen. But what I am saying is that I think it would represent a paradigm shift on the order of discovering the double helix, and even once that happened it would take many years to move from pure research to a viable product. My guess is that it is somewhat equivalent to computers becoming sentient, and that has not happened.

If you can reference some viable research that says otherwise, it would be fascinating to read (references seem to be missing from the article). Otherwise I think this article should be tagged as science fiction, or just wishful thinking.