The 8 Layers of Software Engineering

Sergey Piterman
Published in Outco · 24 min read · Dec 25, 2016

Becoming a software engineer has been an interesting journey for me. I’ve learned more than I thought I ever would about computers and programming, and it has expanded my horizons in more ways than I ever thought it could.

It’s not just how much you can do with computers these days, but also how much we will soon be able to do and what it all means. And like any complex adaptive system, there are intended and unintended consequences that arise from the use and misuse of computers.

Some things, like Moore’s Law, a special case of the Law of Accelerating Returns, have been remarkably accurate in their predictions. Computers have been getting systematically cheaper and more powerful over the years. Evidence for this can be found in any number of things, from cameras and screens getting higher pixel counts, to greater bandwidth for uploads and downloads, to larger hard drives, to more realistic special effects and video game graphics.

But other things have been less foreseeable. Typically they relate to how society either embraces or rejects certain aspects of technology. Sometimes it comes from a fundamental lack of understanding, like when artificially intelligent algorithms finally beat humans at chess but never rose up and took over. Memes and viral content appear spontaneously and are difficult to orchestrate or predict. Social media bubbles, where the content people see is continuously reinforced by predictive algorithms, create echo chambers that polarize people’s views, since users continuously receive social affirmation for their opinions. Or take the paradox that people are more overworked and more depressed than ever, even though there is more technology than ever to do the work for us and keep us connected to those we love most. Or the arms race between those using technology to create a freer, safer world and the forces that would use that same technology to sow chaos and oppress others.

And on and on.

What I’ve attempted to do with this post is create a little clarity through a somewhat arbitrary hierarchy. It was inspired in part by the OSI model, which abstractly characterizes the communication functions of a computing system. What I’ve attempted to capture with this model is a broader set of layers for reasoning about software engineering and the kinds of thinking that go into building software. Each layer builds on the previous one, but is meant to be independent, because the properties that emerge at higher layers are different from those of the layers below. In other words, mastering a lower layer doesn’t guarantee mastery of a higher layer, and vice versa.

However, people’s jobs typically span layers that are close to one another (e.g., a UI/UX engineer’s job is closer to a data scientist’s than to a hardware engineer’s). This is because the layers affect each other in both directions: things at lower levels shape higher-level considerations, and vice versa.

I made this list because I think it’s important to have some kind of framework for thinking about software engineering and its ramifications: to create clarity and a rough blueprint for building applications. Hopefully this framework will let people reason about the interplay of the different layers more clearly, and possibly even avoid some negative unintended consequences.

And with that said let’s jump right into it:

0. Physics and Chemistry: Laws of the Universe

For the non-software engineers: I started my list at number 0 because that’s typically where indexing starts in most programming languages. I think R is the only one I’ve encountered that doesn’t follow this rule.

Fundamentally everything is physics, and obeys the laws of the Universe. Computers need to be built out of materials with certain properties that are conducive to doing what computers are meant to do, which in short comes down to processing information.

Understanding how electric current flows through a wire, how electrons behave at a fundamental level, and what physical limitations constrain the behavior of computers is, after all, necessary to build one.

The speed of light determines how fast a signal can travel from one side of the planet to the other, or from a rover on Mars to Earth. Quantum physics and the size of atoms set a minimum size for transistors on chips before electrons start tunneling through barriers. Thermodynamics determines how much heat a chip can withstand before melting. Solid-state physics determines how the ions in silicon will drift over time.

I won’t pretend to know more about this subject than I do, having only taken a few college-level introductory courses in physics and chemistry, and having done some personal research. But it’s enough to know that these laws are the ultimate constraints we work under, which is why being familiar with them is so important. They are the price of admission into this Universe.

It also gets really interesting at the theoretical level. There is something called Bremermann’s limit, which gives the maximum computational speed a self-contained computer can attain in this Universe. It uses Einstein’s mass-energy equivalence and the Heisenberg uncertainty principle to show how small and fast a computer can possibly get. Believe it or not, this limit has practical applications in cryptography, and potentially deeper ramifications for the field of physics itself:

That everything is information and information is everything.
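
For the curious, the limit itself is simple arithmetic: take the mass-energy of a system from E = mc^2, divide by Planck’s constant h, and you get roughly c^2 / h ≈ 1.36 × 10^50 bits per second per kilogram of computer, an upper bound on how many distinguishable state transitions per second that mass can support.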

1. Hardware: The Best Tools for the Job

Silicon is the material of choice for computer chips because it’s a semiconductor. This means that it can act both as an insulator and as a conductor under specific circumstances. If it is enriched, or doped, with certain ions, its conductivity will change. This is perfect for computers because it lets you start with a substrate that is extremely common in the Earth’s crust (about 27% by mass), relatively stable, and reliably modifiable.

The funny thing is, gallium arsenide is actually a better semiconductor in a number of ways. Transistors made from it are less likely to overheat, it’s better at emitting light, and it has higher electron mobility. But at the end of the day, because of silicon’s abundance and lower cost, the industry moved forward with silicon.

This layer is intimately tied to the previous layer, but it is more practical than theoretical. For example, fabricating microprocessors uses a process called photolithography. You can think of it like a stencil laid over the substrate that marks where the integrated circuits will lie. When you then shine UV light onto that substrate, which consists of several layers of chemicals with different properties, some of those layers break down and can be removed, leaving behind the completed circuit. But because light travels as a wave of a certain size, this process only works down to a certain scale. So being aware of how physics and materials science can be wielded is crucial here.

And silicon chips and transistors weren’t always what we used to perform computations. Before them we had magnetic tapes, vacuum tubes, punch cards, and even analog computers. The history of computing goes all the way back to the abacus. It’s a very deep subject on its own, but the key element is that whatever tool was in use needed to be able to store and process information. Whether that’s done mechanically or digitally doesn’t change the core of what the machine is doing; the advances have been in speed, efficiency and reliability. In theory, if you had enough punch card machines you could play a game of Angry Birds, but it would be much slower, consume far more energy, and be very expensive.

Looking forward, though, we can see the limitations of silicon chips themselves, and we can imagine different kinds of hardware. HP is working on a new kind of circuit element called the ‘memristor.’ DNA has been the information storage medium for living things for billions of years, and because of its small size and relative stability it could theoretically be used to perform computations. Quantum computing promises to upend modern encryption, and light-based computing would be more efficient and generate less heat than pushing electrons around.

This layer is all about putting theory into practice and building the best possible computers, taking into consideration physical, technical and financial constraints. People working on this layer are responsible for the major leaps in computing power, speed and affordability. They moved us from vacuum tubes to transistors, and they will be responsible for building whatever replaces current hardware technologies.

2. Information: Abstract Machines

George Boole was a 19th-century mathematician who laid the mathematical foundation for the information age. He created what we now call Boolean algebra, which is the basis of logic itself. And it all begins with the notion of truth and falsehood. Yes and no. 1 and 0. These notions are encapsulated in things called statements (usually abbreviated by a letter like p or q), and these statements interact with one another through logical operators: AND, OR, NOT.

So for example: True AND False is False. True OR False is True. NOT True OR True is True. And so on.

His rules are very simple, but their implications are far-reaching. Combine enough of these statements together and you can create some really crazy behavior. For one, propositional logic is the basis of mathematical proofs. You can use it to PROVE things like the Pythagorean theorem for any right triangle. Mathematical proofs are a whole other topic on their own, but they are extremely important, since they literally establish what is true and what isn’t.

You can also build anything from a calculator to a desktop computer using these rules alone. Combining them into arbitrarily complex arrangements allows for arbitrarily complex behavior. Imagine chaining a bunch of statements together:

If A, and B, and C, or D are all true, then perform action X. Even IF/THEN statements themselves can be constructed with this kind of logic.
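
As a minimal sketch, here is what those rules look like in JavaScript, where && is AND, || is OR, and ! is NOT (the variables are purely illustrative):

// The three basic operators:
console.log(true && false); // true AND false -> false
console.log(true || false); // true OR false -> true
console.log(!true || true); // NOT true OR true -> true

// Chaining statements, as in the example above:
const a = true, b = true, c = true, d = false;
if ((a && b && c) || d) {
  console.log("perform action X");
}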

And what’s nice about this is that 1 and 0 map neatly onto electronics: high voltage and low voltage, with circuits configured in particular ways.

This layer starts to become more abstract than the previous ones because in some sense it doesn’t matter what is below it for you to understand what’s going on. The rules of 0s and 1s don’t care whether they are encoded on a punchcard or a solid state drive. The only thing that changes is the speed.

0s and 1s might be the fundamental abstractions, but it’s difficult for humans to derive meaning from them. Sure, simple operations are fine, but what happens when things get more complex? Like when you want to encode something like the English alphabet. For something like a word processor, you need a way of storing the value of each distinct letter.

The solution is to have fixed-length sequences of 0s and 1s, with a one-to-one correspondence between each sequence and the character it represents. 26 lowercase letters, 26 uppercase, 10 digits. Throw in special characters, escape, enter, shift, tab… It starts to add up, but the list is finite, and once you have it you can start doing some pretty interesting things.

What I just described is ASCII, which converts each character into a binary code the computer can understand: a string of bits (0s and 1s). A byte is just a chunk of 8 bits strung together, which collectively mean one thing. So 00000001 might be ‘A’, 00000010 might be ‘B’, and so on. Individually, each bit is without meaning; they have to be considered together.
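
As a small illustration (the codes above were hypothetical; in real ASCII, ‘A’ is 65, or 01000001), JavaScript can show the actual bit patterns:

// Look up the character code of a letter...
const code = "A".charCodeAt(0);                 // 65
console.log(code.toString(2).padStart(8, "0")); // "01000001"

// ...and go the other way, from a bit pattern to a character:
console.log(String.fromCharCode(0b01000010));   // "B"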

This layer is heavily mathematical, so a lot of the early work done on computers still holds true today. Alan Turing, the so-called father of modern computing, laid the foundation of our understanding of computers that holds to this day. He imagined a tape of 0s and 1s read by a machine with a set of instructions: what we call a Turing machine. This machine could read, write, move position, and re-write 0s and 1s deterministically, and in theory it could perform the same set of operations as a supercomputer, given enough time and tape.

Of course such a machine would be wildly inefficient, but the implications are huge, since it essentially says all computers are one and the same, or can at least be represented the same way. It even suggests that human beings might just be very complex Turing machines. But that’s a topic for another time.
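
To make that concrete, here is a toy Turing machine sketch in JavaScript. The transition table, tape and state names are all made up for illustration; this particular machine just flips every bit it reads and then halts:

// Each rule maps "state,symbol" to what to write, which way to move,
// and which state to enter next.
function runTuringMachine(tape, rules, state) {
  let head = 0;
  while (state !== "halt") {
    const symbol = tape[head] === undefined ? "_" : tape[head]; // "_" = blank cell
    const rule = rules[state + "," + symbol];
    if (!rule) break;                    // no matching rule: stop
    tape[head] = rule.write;             // write
    head += rule.move === "R" ? 1 : -1;  // move
    state = rule.next;                   // change state
  }
  return tape;
}

const flipBits = {
  "start,0": { write: "1", move: "R", next: "start" },
  "start,1": { write: "0", move: "R", next: "start" },
  "start,_": { write: "_", move: "R", next: "halt" },
};

console.log(runTuringMachine(["1", "0", "1", "1"], flipBits, "start").join(""));
// -> "0100_" (the trailing blank marks where it halted)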

3. Algorithms: Solving Higher-Level Problems

A quick note about abstraction.

0s and 1s are a great way to conceptualize information and its storage. But for humans, they are hard to work with. We need something more abstract. We need to be able to work with instructions.

This process of abstraction has to continue as the behavior of computers becomes more complex. It’s the reason programming languages have been evolving and becoming more abstract over time. Specifically, this means they become more like natural language, and certain details get hidden from the programmer’s view.

For example, if I want to declare a variable called ‘x’ in Javascript and set it equal to 2, I can simply say:

var x = 2;

Now I can leave out the semicolon at the end and it will still work (though it may not follow style guidelines), and I can even leave out the ‘var’ part, though that will declare x in the global scope. For non-programmers, the takeaway is that the language is flexible; it can infer what I mean without me needing to spell everything out.

Compare that with the same thing in Java, where I have to specify what type of variable I am declaring:

int x = 2;

I have to tell it whether x is going to be a number, a string (which is like a word), a boolean (true or false), a list… And once I’ve told it the type, I can’t change it on the fly. The reason is that depending on the type of the data, certain assumptions and optimizations can be made. This is great for efficient code, but can be a pain for the programmer. Oh, and you have to remember the semicolon at all times, or else it breaks.
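
A quick illustration of the difference:

// Javascript: the type of x can change on the fly.
var x = 2;
x = "two";      // perfectly legal
console.log(x); // "two"

// Java: `int x = 2; x = "two";` would fail to compile,
// because x is locked to the type int.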

The point is that there are tradeoffs. Javascript does a lot of work under the hood, which slows it down compared to some lower-level languages, but it allows the programmer to work more quickly and efficiently. If what you are doing isn’t computationally intensive, you can probably get away with being a little wasteful. On the other hand, if you are processing a ton of information and need fine-grained control over your code, a lower-level language might be better.

Again this ties into lower levels. As the hardware improves, you can afford to be a little more wasteful for the sake of convenience.

The main theme of this layer is the basic unit of problem solving: the algorithm. Algorithms are essentially just instructions for processing an input and producing an output, if any.

Algorithms can be represented as pseudocode, something in between English and actual code. It helps programmers reason about the problem they are solving before actually setting out to solve it. It might look something like this:

If the value is even:
    divide it by 2
Otherwise:
    multiply it by 3 and add 1
Repeat

It’s difficult to conceptualize an algorithm in terms of 1s and 0s, so we do it in a kind of intermediate language: a compromise where humans and machines can both communicate and understand each other.
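
Here is a minimal sketch of that pseudocode in JavaScript (this is the classic ‘Collatz’ procedure; the starting value is just an example):

function collatz(n) {
  while (n !== 1) {
    if (n % 2 === 0) {
      n = n / 2;     // even: divide by 2
    } else {
      n = 3 * n + 1; // odd: multiply by 3 and add 1
    }
    console.log(n);
  }
}

collatz(6); // prints 3, 10, 5, 16, 8, 4, 2, 1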

Algorithms range from arbitrarily simple to mind-bogglingly complex. And they aren’t always well defined. Usually there are natural break points, so a large algorithm might make use of several smaller ones, but it doesn’t really make sense to slice an algorithm up arbitrarily. It usually has to have some kind of input and output. Just as the individual bits in a byte don’t have meaning, the individual parts of a single algorithm don’t really have meaning on their own.

Algorithms can be a lot of fun because they are abstract problem solving. And there are good ways of solving problems, which won’t eat up a ton of computing power or memory, and there are bad ways. Some problems can be solved very quickly, producing an output at the same speed regardless of input size: something like ‘return the input plus 1.’ Some things scale linearly: say you had a list of numbers and had to print each one out; the time it takes depends on how long that list is. And some problems get harder and harder the larger the input, and there’s nothing that can be done about it. If you have a list of 5 letters and want to see every possible arrangement of them, there are 120 different possibilities. For a list of 10 letters, there are 3,628,800.
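
As a rough sketch in JavaScript, here is one example of each growth rate (the function names are mine):

const plusOne = (n) => n + 1;          // constant time: same speed for any input

function printAll(list) {              // linear time: grows with the list
  for (const item of list) console.log(item);
}

function arrangements(n) {             // factorial growth: n * (n-1) * ... * 1
  return n <= 1 ? 1 : n * arrangements(n - 1);
}

console.log(arrangements(5));  // 120
console.log(arrangements(10)); // 3628800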

At this level, concepts like recursion, abstraction barriers, scope, data structures, and space and time complexity become relevant. This is typically where most programmers start their journey, and where you can write your first self-contained programs. These concepts are a topic unto themselves, and a fascinating one at that.

But you typically won’t find these algorithms on their own in the wild. Algorithms are usually embedded in a larger system. They are the building blocks of applications.

4. Architecture: Coding for Humans

What kind of tasks are computers well suited for?

A good answer would be boring or repetitive tasks that humans either can’t or don’t want to do. Hence the saying “if you find yourself doing the same thing more than twice, automate it”.

So a fundamental concept in programming is the loop: a chunk of code that runs until it is told to stop. Or it never stops at all, which can be a bad thing, as in the case of the infinite loop. And sometimes it doesn’t even have to be infinite to be a problem, like if you told your computer to print every number from 0 to a billion. Fun fact: there is no general way for a computer to know ahead of time whether an arbitrary piece of code will terminate or run forever, short of actually running it. See the halting problem.
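
For the non-programmers, a loop in JavaScript looks something like this (a tame one, followed by one you should never actually run):

// A loop that knows when to stop:
for (let i = 0; i < 10; i = i + 1) {
  console.log(i); // prints 0 through 9
}

// An infinite loop: the condition never becomes false.
// while (true) { console.log("still going..."); }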

Automation is great. It removes human error, it saves time, and it frees up human capital. But you can take the concept further: things like sorting, filtering, reducing and searching can all be done with the right algorithms and data structures.

At this level, finding the right tool for the task is important, and a lot of it depends on the real-world situation you are trying to model. If you want something like a bank teller program, you’ll want a queue, where you process the person at the front of the line and add newcomers to the back. Other situations call for something like a tree, where you can search through hierarchies. Some situations call for matrices, graphs, heaps, stacks… There’s a tool out there for most situations.
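
A minimal sketch of the bank-teller queue in JavaScript, using a plain array (a production queue might use a linked list instead, since shift() on an array is linear time):

const line = [];

line.push("first customer");  // join the back of the line
line.push("second customer");

console.log(line.shift());    // "first customer" is served first
console.log(line.shift());    // then "second customer"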

But just like driving a car, a lot of this gets abstracted away, and that’s important. Though it’s helpful to know how the engine works in case your car starts making a weird sound, you don’t need to worry about it in most driving situations.

What starts to become important is how the programmer interacts with the programs. I want to be able to just say ‘Add this user to the list’ without always having to worry about how that user gets added. Whether it’s chronologically, alphabetically, by age, location… The same goes for deleting, searching or sorting. It’s still important to know that some sorting algorithms are better than others, but once you have it set up, you should just be able to run your sort functionality.

Paradigms like imperative, functional or object oriented programming become important here. They help the programmer make sense of larger codebases, and how to structure both their code and their thinking. If I create a rectangle object, can I create squares as a special case of rectangles? Is there a way to tell a function to perform a certain task on every item in a list, but not tell it what that task will be? These are the types of questions that new programmers ask at this level.
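
Both questions have tidy answers in JavaScript; here is a sketch, with the names invented for illustration:

// A square as a special case of a rectangle:
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  area() { return this.width * this.height; }
}

class Square extends Rectangle {
  constructor(side) { super(side, side); }
}

console.log(new Square(4).area()); // 16

// A function that performs some task on every item in a list,
// without knowing ahead of time what that task is:
function applyToEach(list, task) {
  return list.map(task);
}

console.log(applyToEach([1, 2, 3], (n) => n * 2)); // [2, 4, 6]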

Open source libraries, and frameworks are at this level. They help organize applications around different architectures. MVC frameworks set up guidelines for how to organize code around what that code does. Data should be separate from the logic that manages it, which should be separate from how that data gets displayed. Service oriented architecture allows code to be reused and shared, compartmentalizing functionality so that if something breaks it won’t disrupt everything else.

There’s a lot of work around this layer because there’s a lot to be done. Performance can be improved in a number of ways, new features get built out at this level, work starts to get broken up into teams. Version control, basically a history of any saved changes to your code, goes from being a good idea to a necessity.

Here the needs of the programmer are the focus, and bad architecture can have disastrous consequences.

5. UI/UX: Building for the User

The customer is always right. Any software that is consumer facing must take into account the user’s experience. This is a tricky layer because the added abstraction means there are more forces at work now.

Now it’s no longer just software engineers at work, but designers, business people, marketers, and of course the users. Each brings something good to the table, but they also bring conflicting needs, incentives and understanding. Software engineers aren’t always the best at predicting what users will need or even enjoy. Our solutions can be ‘over-engineered.’ Clean and highly performant code doesn’t always translate into a good user experience. Sure it’s great to have a page be snappy, but that isn’t all there is to it. There’s a certain subjectivity to UX that engineers don’t always see. My personal belief about this is that it’s partly from working so closely with their own code, and partly because it requires a different kind of aptitude: empathy.

Having non-technical people involved can be great since they can add their own strengths to the product. There’s a greater degree of specialization that goes into building things at this level, where entire teams can dedicate their time to tweaking and improving the user interface through A/B testing, and market research. This, in theory, should work well with the engineers because they can then focus on building, rather than worrying about what to build.

But in practice this doesn’t always work. XKCD has a great comic on this topic, where a non-technical person asks for two seemingly similar features and is told they require wildly different development times. Non-technical people don’t always understand how easy or difficult creating or changing something can be. And they can suffer from their own biases from being too close to their own work. Good communication and understanding become important at this layer.

And then of course there’s the user. As one programmer put it, ‘The user is sometimes evil and always stupid.’ Though it doesn’t cast the user in a great light, it does serve to illustrate a point: users will tend to do unexpected things. They can be fickle, and if the product is one they enjoy, they can be very opinionated. Just think about how people reacted to the changes to Facebook’s timeline, cover photos and reactions. People can be slow to embrace change, and this is part of the reason why sites tend to change less at the UI/UX level once they are more established. Google still has the same search bar on a blank page that it had over a decade ago, even though it has made major changes under the hood.

But the benefit of users is that they can provide valuable feedback that improves the site. They can find bugs, propose new features, locate pain points. It’s important to listen to the user at this level, but to not be a slave to them either.

This layer is less of a hard science than the previous ones, though the trend began at the architecture level. There are many ways to architect an application, though some are better than others; it depends on what you are trying to accomplish, but best practices exist for a reason. Here, though, ‘best practices’ can actually limit innovation. Tinder didn’t exactly accomplish some huge engineering feat; what they innovated on was how users interacted. They streamlined the whole dating profile and how users connected with one another. Users interacted with the app in a very straightforward, no-nonsense way. It wasn’t hard to learn, it could be used casually, and it didn’t take long to set up. In essence, what you got was a random Facebook profile, a few pictures, the ability to message that person, and the ability for either party to revoke messaging privileges at any time. They just recombined old functionality in a new way.

What comes to mind are my pet hamsters. I remember giving them a wheel to run in, and them just stuffing it with cotton and using it as a place to sleep. People can be unpredictable too. Even with all the engineering principles, market research, tweaking and tinkering, there is still room for the artist. Crazy ideas, both grand and simple, can have a huge impact. Remixing old ideas into new formats can give them new life at this layer.

Because at the end of the day, there’s more than one way to skin a cat.

6. Metadata: Seeing Larger Trends

The dawn of Big Data meant that bigger trends could be understood and exploited. The best analogy I can think of is the behavior of a gas. At the level of the individual molecule, a gas behaves somewhat randomly, bouncing off other molecules and changing direction constantly. At a macroscopic level, however, its behavior is much more predictable. If you heat up a closed container of gas, the pressure builds. If the volume shrinks suddenly, the temperature rises. Hot air rises, and water vapor condenses on cold surfaces. You don’t need to know what each molecule is doing to understand what’s going on in broad strokes, even though, fundamentally, that’s what is at work.
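
The ideal gas law captures this collapse of detail in a single line: PV = nRT. A few macroscopic quantities, pressure, volume and temperature, relate to each other lawfully, even though the individual molecular collisions underneath are chaotic.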

Here it doesn’t matter what one user does, but what the collective does. What this means for site design varies. For example, Walmart has to deal with one day a year when their site receives an enormous amount of traffic. Every ‘Cyber Monday,’ people flood the site with requests, and the servers have to handle it or the company loses millions of dollars. There are both hardware and software considerations to take into account here.

Another example is the stock market. Algorithmic traders make use of tons of data. They don’t care whether my neighbor bought a particular stock; that data point simply gets folded in when they measure how hot a stock is based on the volume of shares traded on a given day.

Memes go viral by chance. There’s no way for a human to control every piece of content and how it travels through a social network. Sure, inappropriate content can be flagged and censored, but no one is engineering virality. If, on average, each person who sees a piece of content passes it along to more than one other person, it has the potential to snowball and go viral.

It’s gotten to the point where social media dictates what stories traditional media runs. I find myself wondering why weird stories like Harambe’s get covered. It’s such a unique situation, but it invites all kinds of opinions: animal rights, zoo safety, parenting, conservation, gun control. People are able to turn it into whatever they want, and the news networks eat it up. Outlandish stories reaching a national audience isn’t new, but now there’s a social network picking the stories for them.

People are also able to exploit virality through things like clickbait. A catchy title encourages more people to click on that content, drowning other content out. Sites like Buzzfeed built their business on this principle, gaming both the masses and the social media algorithms. The additional clicks drive user engagement and drive up ad revenue. In a sense there is a natural selection process favoring certain kinds of content, like lists and quizzes.

It makes sense that this would be cause for concern. I notice my own attention span shortening, and when new content is just a few swipes away, it doesn’t really hurt to take a peek. But is quantity really a substitute for quality? How much content is really that valuable, and how much of it is just noise?

This layer is also where the NSA and ad trackers live, looking for patterns in search histories, keywords and requests. It’s the domain of networked machines, and it comes with dangers like spyware and malware. Corporations have to defend against DDoS attacks, in which malware infects many users’ computers and uses them as hosts to overwhelm large servers with more requests than they can handle, taking down targets that no single machine could.

But it’s not all doom and gloom. Though this layer is like the wild west, it’s evolving very quickly, and most people aren’t even aware of it. Making sense of data is the name of the game, and hopefully it will lead to some amazing advancements. Self-driving cars, smart cities, efficient electric grids, traffic control, all types of sensors. In short, the internet of things.

Finally, artificial intelligence. Machine learning algorithms need a lot of data to learn and improve. Neural networks, loosely modeled on how our own brains work, are becoming more and more prevalent. AI is in everything these days, even if it isn’t always apparent. So far its applications have been fairly niche; it has been a great tool for solving domain-specific problems. But neural networks are more general-purpose. They make predictions from input data, so the more data we give them, the better they should become, in theory. Tie that to exponential advances in hardware and software, and these algorithms get better at more and more things.

Everything is leaving a widening trail of data. And those who know how to wield that data are the ones who will have the power.

7. Transcendent: What does it all mean?

Abstraction is a powerful tool. It’s really what allows us to manage the world cognitively. Imagine if you had to learn every detail of how a car engine works before you could drive, or understand every detail of electromagnetism before you could use a computer. Abstraction gives us the ability to wrap complex ideas up into simple, manageable bundles that make our lives easier.

This layer is one of the most difficult to talk about because it’s almost philosophical. The kinds of questions it tries to answer are things like ‘What should technology be used for?’ and ‘Who gets to make those decisions?’

This layer also has a lot of potential future impact because it shapes our thinking and creates not only the policies but the core beliefs with which we proceed. For example, the von Neumann architecture is the framework around which virtually all modern computers are designed. Boolean algebra has barely changed since the 19th century. Every computer today is, in essence, a universal Turing machine. In a less technical realm, we still hold the Constitution and Bill of Rights to be the ultimate sources of legal truth with which all other laws must fall in line. In a more spiritual realm, holy texts like the Bible, Quran or Bhagavad Gita are the moral bedrock from which billions of people derive their values and beliefs.

These concepts and ideas were devised by people way ahead of their time and many of them still hold true to this day. Don’t kill, don’t steal, all men are created equal. They are deeply ingrained ideals that we strive for and can come back to if we lose our way.

Which raises the question: what other core truths will we discover, now or in the near future, that are robust enough to withstand the test of time? Languages, frameworks and paradigms all come and go, but certain principles stay with us. So what are they?

These principles don’t just discover themselves; they require insight, divergent thinking, trial and error, testing, peer review, discussion.

Now that we have this massive power of connectivity, we can harness the minds of millions of people to work on problems. This has already been used to help train machine learning algorithms and to discover protein foldings that teams of scientists couldn’t work out themselves. But what if we used it to have massive discussions? What if there were a way to tap the best of people’s collective intellect and experience to devise the laws, standards and norms of the future? Processing massive amounts of data collectively could help solve real-world problems like global warming, resource allocation, environmental degradation, poverty, hunger and obesity. It’s doable, but it requires infrastructure to channel that collective willpower.

This layer is also designed to tackle some of the deepest and most important questions. What does it mean to be human? Where is humanity headed, if it is heading anywhere at all? What will happen if we create artificial consciousness? How will we know when we have created it? What is our destiny as a species?

Each one of those questions, and so many more fit in this layer. It borders on science fiction at times and requires a certain amount of imagination to understand the importance. Isaac Asimov was a science fiction writer, but his stories inspired many people to pursue science, and build towards a certain vision that he helped illustrate. He was an architect of thought. And his imagination propelled real science forward.

In a way this blog is biased towards this layer. There are many more topics that could fit here, but they will get posts of their own, because I think there is a lot to be said that isn’t being discussed deeply enough. It’s not just a matter of unanswered questions, but of questions that are not even being asked. And it’s important, because if we don’t know what we’re building towards, we’ll end up somewhere we don’t want to be.

As engineers, we should really be asking ourselves ‘what exactly am I building?’ A way for lazy people to get takeout more quickly? A way for the rich to get richer? A way to express more vanity? To read more nonsense, fast-food news? To sell more shit people don’t need, that won’t make them happy?

There’s a saying that if you build an app around one of the seven deadly sins, it is likely to succeed. Is that really what we want? Have we become that disillusioned with society and technology?

I could go on and on but I’ll end it on a hopeful note: Technology is neither good nor bad, it is what we make of it. And we have the potential to make almost anything we can imagine. In fact, we don’t know what the limits of technology are and to me that is one of the most exciting things I can think of. If we are willing to think and evolve as individuals, we can evolve collectively. If we can become better and help one another there’s no telling what we can achieve.

But it all begins with a choice. The choice to believe what we imagine, is in fact possible.
