Reflection on “The Art of Doing Science and Engineering: Learning to Learn” The Representation of Information Part I & II

Kraft Inequality example.

Coding Theory: The Representation of Information, Parts I & II

Once you build fairly reliable equipment, it gets exponentially more expensive to get better.

In this lecture he talks about coding theory, but also about design problems and their constraints, and about finding a balance among factors like money, precision, and reliability.

This two-part series is very math- and whiteboard-heavy, so there isn't as much dialogue to write about, but if you look at the diagrams in the book or watch the videos, he covers some really interesting things I had never heard of or thought about before. Some of the math is over my head, but I try to keep up as best I can. Watching him work through bit decoding was really interesting; he gets into Kraft's inequality and explains a lot of it.
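Kraft's inequality says that a binary prefix-free code with codeword lengths l1..ln exists if and only if the sum of 2^(-li) is at most 1. A minimal Python sketch of that check (the length sets below are my own illustrative examples, not ones taken from the lecture):

```python
def kraft_sum(lengths, radix=2):
    """Return the sum of radix**(-l) over the given codeword lengths.

    Kraft's inequality: a prefix-free code with these lengths
    exists if and only if this sum is at most 1.
    """
    return sum(radix ** -l for l in lengths)

# Lengths 1, 2, 3, 3 (e.g. the code {0, 10, 110, 111}) sum to exactly 1.
print(kraft_sum([1, 2, 3, 3]))  # 1.0

# Lengths 1, 1, 2 sum to 1.25 > 1: no prefix-free code can have them.
print(kraft_sum([1, 1, 2]))     # 1.25
```

A sum strictly below 1 means the code is prefix-free but wastes some bit patterns; equality means every pattern is used.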

You think you understand what time is and what information is, but when you are asked to write a program that defines or demonstrates them, you begin to realize you didn't know what you were talking about. You cannot write the program.
We cannot prove, from within a system, that the system is consistent. Our language is not like that. (The original statement referred to mathematics, e.g. arithmetic.)
Words tend to freeze things to a state or definition. It’s much harder to change a meaning after it is spoken.

It's better to think through and write out a problem or statement of value before speaking it, because people have a hard time changing what they say or going back on their word, even when it is not correct. There is just something about saying a thing aloud that makes it harder to change later.

In the second part he goes into more detail on design patterns for encoding and decoding efficiently, to save as many bits as possible. This is very uncommon in front-end web development.
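To sketch what that kind of bit-saving design looks like, here is a tiny variable-length prefix code in Python. The code table is hypothetical, chosen so that more common symbols get shorter codewords; the prefix property is what lets the decoder work without any separators between codewords:

```python
# Hypothetical 4-symbol prefix code: common symbols get short codewords.
CODE = {"e": "0", "t": "10", "a": "110", "o": "111"}

def encode(text):
    """Concatenate codewords; no delimiters are needed."""
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    """Scan left to right; the prefix property makes each match unambiguous."""
    inverse = {v: k for k, v in CODE.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

msg = "teeato"
print(encode(msg))                  # 100011010111
print(decode(encode(msg)) == msg)   # True
```

A fixed-length code for four symbols would need 2 bits per symbol; here "e" costs only 1 bit, which is exactly the trade the lecture is about.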

How often do we load in a handful of dependency libraries, plus some images, throw in some tracking scripts, and end up with a typical bloated website? There didn't used to be the ease of access to data and the internet speeds we have now. Saving a few bits was very important, and it still is in certain roles. Of course, when considering performance you must still be conscious of the size of your minified and gzipped program, but as you can see time and time again, this is largely overlooked. Most people just throw in a Grunt or Gulp dependency to minify and zip up their program and call it good. Reusing functions and variables to save a few bits isn't something you see too often in web development. I guess the ship-fast mentality has eroded the mindset of writing efficient code.
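Gzip-style compression is easy to see in action; a quick sketch using Python's standard zlib module (the repetitive "bundle" string is made up to stand in for unminified JavaScript):

```python
import zlib

# Repetitive JavaScript-like text, standing in for an unminified bundle.
source = ("function add(a, b) { return a + b; }\n" * 200).encode()

compressed = zlib.compress(source, level=9)
print(len(source), len(compressed))  # the compressed copy is far smaller
```

Highly repetitive text compresses extremely well, which is part of why minify-then-gzip pipelines make bloat easy to ignore.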

In the second part he again brings up engineering balances: how much precision you want in your program versus how fast it runs. It's something you have to find a good medium for.
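That precision-versus-speed trade-off shows up even in a toy numeric example. Here is a sketch using the Leibniz series for pi (the choice of series is mine, not Hamming's): each additional decimal digit of precision costs roughly ten times as many terms.

```python
import math

def approx_pi(terms):
    """Leibniz series for pi: 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

for n in (10, 1_000, 100_000):
    # The error shrinks roughly like 1/n, so precision is paid for in work.
    print(n, approx_pi(n), abs(approx_pi(n) - math.pi))
```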

There is a tendency for someone to do something good early on, then spend the rest of their life working on it.

He references Einstein: as many people with personal accounts of him have noted, Einstein was very protective of his theory of relativity and never did anything else of comparable note after establishing it. He spent the rest of his life working on that one thing.

I have seen any number of people spend the rest of their life elaborating on that one idea. It’s a waste of talent.

Move on and try something else is basically what he's saying here, and it's what he states he did: after he came up with error-correcting codes, he put them aside and moved on to try new things.

Next up will probably be another two-parter, on "error correcting codes" and "information theory".