BIM (Building Information Modeling) is a relatively new term. I think the term can be attributed to Autodesk's marketing of Revit.
However, the need to represent building information when designing, and later when constructing, has long been a goal among theoreticians in this area; literally from Chuck Eastman's work in the nineteen-seventies.
The phrase “building information” does not mean only the 3D geometry of the building. That is just one piece of the information needed in the model. What is also crucially needed is the non-visual information packed into each and every element of the model.
There are two challenges to this definition: first, what is an element in the model? And second, will all the elements contain the same kind of information?
Now, for many software vendors, the term “element” seems very obvious: of course, it is whatever can be “touched” and “felt”. So this wall is one such “element”, that window out there is another “element”, and so on and so forth. In a few cases, even the space that the walls enclose is regarded as an “element”.
(Understanding spaces and representing them as elements is itself a huge challenge which deserves another article. I won’t get into that here in any detail.)
The second challenge: “Will all the elements contain the same kind of information?” Of course, the answer is obvious: No.
How can a wall contain the same information that a window would need? Not only would the geometry itself be different, but the non-visual information would also be different.
Now the second challenge is quite serious.
Many practicing architects don’t know this as they are not into programming. But architects reading this should pause for a moment and reflect.
Think of this: a building contains a humongous number of elements.
If each element has to be packed into the model with a lot of its own specific non-visual information — along with the 3D geometry, of course — that means this can consume a huge amount of resources.
It is for this reason that current conventional BIM consumes a lot of memory, disk space and so on.
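As a back-of-the-envelope illustration (a minimal sketch, not any vendor's actual data model; the class name, property keys and sizes are all my own assumptions), even a toy element record shows how quickly per-element geometry and non-visual properties add up:

```python
import sys

# Hypothetical, minimal element record: every element carries its own
# geometry plus an open-ended bag of non-visual properties. All names
# here are illustrative, not taken from any real BIM software.
class Element:
    def __init__(self, kind, mesh_vertices, **properties):
        self.kind = kind                      # e.g. "wall", "window"
        self.mesh_vertices = mesh_vertices    # stand-in for the 3D geometry
        self.properties = dict(properties)    # open-ended non-visual data

# A toy building: 1,000 elements, each with a small mesh and a few properties.
building = [
    Element(
        "wall",
        [(float(i), float(j), 0.0) for j in range(50)],
        material="brick",
        fire_rating="2h",
        cost=120.0,
    )
    for i in range(1_000)
]

# A rough lower bound on memory: just the vertex tuples, ignoring the
# geometry kernel, the relations between elements, and everything else
# a real BIM model would have to store.
vertex_bytes = sum(sys.getsizeof(v) for e in building for v in e.mesh_vertices)
print(f"{len(building)} elements need megabytes for vertices alone: {vertex_bytes / 1e6:.1f} MB")
```

Even this deliberately tiny model already measures in megabytes before a single non-visual property is counted; a real building, with far more elements and far richer properties, scales that up relentlessly.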
Several vendors of current conventional BIM do meet this challenge with quite a lot of clever programming and data management inside the innards of their software. I will not get into those here, and I am sure there is some original IPR out there.
But even then, conventional BIM software caters to a whole lot of stuff, which can seriously drag down the processing power and speed of the machine.
But I have a spanner in the works here: I call a piece of architecture (aka a building) the biggest hard-disk one can ever deal with.
If the computer guys think they know what “big-data” is, they should think again. Come inside any building and I will point out and list the properties of each element which should ideally go into the model of that element. It can blow any definition of “big-data” right through the roof.
Where does this accounting of properties of elements stop? Sadly, never.
Take the example of a wooden top of a table. What would be its properties? Well, the type of wood, the type of polish, the cost, the place where it was purchased. Maybe a few more.
Would those properties be enough to be put into the model? The answer is actually no, though some may disagree initially.
Let us think of someone hurting his elbow on that table top. So now that table top also has another property, namely: the type of accidents that can happen on that table top.
Now you may think this is a contrived example — but then, who decides that?
For you it may look contrived. But for a rigorous person who wants to pack in all the information correctly, the type of accidents that element can lead to could be important. Then there could be many more properties which would be important to someone or the other. So can the list of properties of that element ever stop?
I hope I can now convince you that the brutal answer is actually: No!
A building is always storing lots of information; not only during designing but also during construction and later on during use. The model that represents that building should be in a position to keep receiving properties for each and every element in that building. This is a seriously hard problem. The computer science guys would call it computationally intractable: to model it exhaustively, you may need as many resources as there are atoms in the universe.
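To make that open-endedness concrete (again a minimal sketch; the element, its keys and its values are all hypothetical, not any real BIM schema), here is the table-top example as a property bag that must keep accepting keys nobody anticipated:

```python
# The table top from the earlier example, as an open-ended property bag.
# Every key and value here is illustrative, not from any real BIM schema.
table_top = {
    "geometry": "900 x 1800 mm slab",   # placeholder for real 3D data
    "material": "teak",
    "polish": "matte",
    "cost": 450.0,
    "purchased_from": "local supplier",
}

# Years into the building's use, a new concern appears, and with it a
# property nobody anticipated at design time:
table_top.setdefault("accident_history", []).append("elbow injury")

# Nothing rules out the next key, or the one after that; the list of
# properties, and hence the schema, is never finished.
print(sorted(table_top.keys()))
```

The point of the sketch is that no fixed schema can be drawn around such an element: the model must stay open to properties arriving at any stage of the building's life.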
Now for the second spanner in the works:
At which border would one say: I have modeled enough elements in the building? Sadly, that can never be stated. The border keeps expanding to include parts of a building which we never earlier thought of as an “element” of the building.
One clever way by which conventional BIM convinces us that enough elements are available in its software is by providing a whole lot of ready-made components.
But come on! A clever and talented craftsman can design and make an element which does not fit clearly into an earlier definition of an element.
For example, a traditional Rajasthani craftsman from India can make an exquisite hand-rail which was never modeled in any western BIM software.
Yet, conventional BIM software provides all kinds of insertable “elements” for the model. Architects who work within such boundaries may never realize that such listings can actually never be complete.
So now comes the last and final spanner in the works (for this article).
Do we architects actually deal with the same elements that we started out with when we begin designing? The answer here is again: No! What gets constructed is actually the end product emerging at the tail-end of a complex design process.
To understand this, let me allude to cooking. When we cook, say, rice, the final end product is the nice fluffy rice. So one can regard the cooked rice grain as one of the elements in that model. But that is not the element we started with. We started with hard rice grain. We added water, which is another element, and that was boiled and then drained off. It is only at the end that the “elements” of the cooked rice were seen.
On similar grounds, an architect starts off his/her designing with many elements that ought to be modeled; such as the spaces he/she was doodling on some scratch-pad. Over time, the spaces, aka the rooms, would emerge only when the architect placed built matter to delineate them. Sure enough, the spaces get boiled and drained off. What then remains behind is the built matter which needs to be represented in the BIM.
It is the individual elements of that built-matter which are actually modeled in conventional BIM software. That is why conventional BIM finds its use only towards the end of the design process. At the fluffy cooked rice stage.
Conventional BIM makes the problem easier (or thinks it makes the problem easier) by talking predominantly of the built matter seen when the design is finalized.
The real issues that an architectural project faces can start very, very early on. When that architect was sitting somewhere and idly doodling the bubble-diagrams, he/she may have thought of some spaces, which later on got merged and changed as the design process proceeded. What happened to the representation of those?
These three spanners in the works are the real hard problems that should have been solved but never have been. I can look at this from another angle:
There is surely a need for giving the architecture industry ONE coherent model for all the modeling that needs to be done.
Architecture is certifiably the oldest subject there is, even if there are some jokes about the “oldest profession”; after all, architecture has to be present even for the so-called oldest profession to be practiced. Architecture is everywhere and it is surely complex. During the designing of a piece of architecture, from its hazy stages onwards, a lot of “elements” come and go.
In today’s complex world, the final built-form has contributed not only to the misfortunes of individuals (due to budget overruns, etc.) but also to many ills in society, such as global warming, the energy crisis, ozone layer depletion and so on.
So BIM is surely needed.
Conventional BIM, historically, is extremely new. Compare the last nine years or so, since conventional BIM came into the lives of us architects, with the historical time-span of architecture itself. Has conventional BIM really respected all of that?
Its makers forgot that architecture is a deep and extremely well-thought-through subject. Possibly they did not have enough time to ask around, and instead they borrowed their knowledge of how to do such information modeling from other places.
We must not forget that a lot of the conventional BIM software we architects use today was never really written by architects. It came from mechanical engineering and other domains. As outsiders, its authors are not really aware of the “boiling and draining away of the spaces”. This is like someone who is not a cook seeing a cooked pot of rice without realizing the heat and water that went into it, which are now absent.
Let me end by saying that these are the kind of questions that gave birth to TAD (The Architect’s Desktop). I had the first DOS-based version working in 1989. By 1991, I had presented a theory to the Indian Institute of Architects on how architecture could be represented succinctly.
I have been working on this ever since, and have kept refining TAD to handle reasonably deeply all the questions (the spanners in the works) I asked here, while avoiding resource crunches in the modeling.
The real BIM will come into our lives only when such serious questions are addressed and laid to rest. I believe TAD is one such solution. But this is not really about TAD. It is about the questions all of us architects have to handle.
If we do not respect the sensitive dependence of designing on its initial conditions, and instead talk mainly of what is seen just as the design is being finalized, the horses will have all bolted from the stable.
Here’s another article expanding on this sensitive dependence on initial conditions