  1. 6

    Oh man I have opinions about this, since I worked at a somewhat old startup in this space a decade ago (which died a sad, ignoble death).

    First, you have to understand, Autodesk (via the Revit acquisition, eating Maya and other 3D editors, and so on) is this gigantic thorny ball of hate tentacles that just bludgeons and smashes anybody who isn’t them. Your goal as a business in this space is to get enough of a foothold that they buy you, full stop. Selling to customers is hard, integrating with the (baroque, inane) existing software is hard, the math is hard…everything is hard and hates you.

    Second, the AEC (architecture/engineering/construction) industry (as mentioned in the article) has wildly differing needs for building models, and those needs tend to go beyond mere filtering of entities. Like, you might imagine “I have a site, it has a building, the building has floors, those floors have rooms, those rooms have objects.” Okay, but then some bozo says “I want to see the plumbing”, and then you have a different hierarchy to work with. Some other bozo says “I want to check my LEED certification”, and then you need to ignore good chunks of everything. Some third bozo, doing space planning, says “Okay, we’re doing an open-office plan, let’s look at how to segment the big room into different zones”. The same poor underlying data model is starting to get a little strained–and to get the full power of BIM, ideally each layer of things can talk to/is related to the other layers (if you change the usage of some space in a room, you’d expect the plumbing or electrical or whatever to update, which might mean updates to the structural stuff, and so on and so forth).
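
    To make the “same elements, different hierarchies” problem concrete, here’s a toy sketch in Java (the names and the shape of the model are made up for illustration–not any real BIM schema, and not what we actually shipped): elements live in one flat pool, and each discipline’s “view” is just its own set of parent/child edges over those same elements.

    ```java
    import java.util.*;

    // Toy illustration only: one flat pool of elements, with each discipline's
    // hierarchy expressed as its own set of edges over the same elements.
    final class Element {
        final String id;
        final String kind; // "building", "floor", "room", "pipe-riser", ...
        Element(String id, String kind) { this.id = id; this.kind = kind; }
    }

    final class View {
        final String name; // "spatial", "plumbing", "zoning", ...
        private final Map<Element, List<Element>> children = new HashMap<>();
        View(String name) { this.name = name; }
        void addChild(Element parent, Element child) {
            children.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
        }
        List<Element> childrenOf(Element parent) {
            return children.getOrDefault(parent, List.of());
        }
    }

    class MultiViewDemo {
        public static void main(String[] args) {
            Element building = new Element("b1", "building");
            Element floor2 = new Element("f2", "floor");
            Element room201 = new Element("r201", "room");
            Element riser = new Element("p7", "pipe-riser");

            View spatial = new View("spatial");
            spatial.addChild(building, floor2);
            spatial.addChild(floor2, room201);

            // Same elements, different hierarchy: the plumbing view doesn't care
            // about floors, it only cares which riser serves which room.
            View plumbing = new View("plumbing");
            plumbing.addChild(building, riser);
            plumbing.addChild(riser, room201);

            System.out.println(spatial.childrenOf(floor2).get(0).id);  // r201
            System.out.println(plumbing.childrenOf(riser).get(0).id);  // r201
        }
    }
    ```

    The hard part the paragraph above is gesturing at isn’t storing this–it’s propagating a change made in one view into every other view without making a mess.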

    (AutoCAD, for a while a nontrivial thing we had to support, eschewed a data model and just said “fuck it I’m a drafting tool beep boop”. It is decades old and still in business, and we are not.)

    Next, in a somewhat similar fashion, architects (like Gehry, as a gratuitous example) often do not care about any of that stuff (they have people/engineers for that), and may only want to focus on the wacky exteriors or certain broad interiors of buildings and subcontract out the rest. So, you run into an interop situation that is made even worse by, as the article said, having the same poor files handed off between multiple versions of the software.

    Outside of the business and domain frustration, there’re the actual technical issues.

    First, these models can get detailed. Look around the room you’re in, count up all the bric-a-brac. Now, count up outlets and imagine the wiring for them. Now, count up hardware (doors, windows), and guess how many panels of drywall are in use. Now, look at each wall and (if you’re in the US) imagine that there’s a 2x4 stud (metal? wood?) every 16” and that there are horizontal bits of wood between each stud as fire breaks and reinforcement. Imagine between the studs, if you’re on an exterior wall, insulation and plastic sheeting for moisture and vapor barriers, and then siding or brick or whatever outside. Now imagine, how are all of those studs and panels connected? That’s just one room.

    One problem with this much detail is that it ranges from discrete objects (say, which chairs belong in a room) to discrete repeating objects (say, US electric code says “thou shalt have this many outlets per linear foot of wall”) to continuous forms and shapes (say, this wall is going to follow a hyperbolic curve to show off certain acoustical properties or whatever). And as the article points out, all of this is subject to the interpretation of the people actually building anything. Modeling that is…weird. Getting it to all play nicely together is…weird.
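
    Just to make the middle flavor concrete, here’s a tiny sketch of a “repeating objects” rule (the 12-foot spacing is a made-up number for illustration, not a statement of what the actual electrical code requires):

    ```java
    // Toy sketch: spread outlets evenly along a wall so that no two adjacent
    // outlets are farther apart than some maximum spacing.
    class OutletSpacing {
        static double[] outletPositionsFeet(double wallLengthFeet, double maxSpacingFeet) {
            int segments = (int) Math.ceil(wallLengthFeet / maxSpacingFeet);
            if (segments < 1) return new double[] { 0.0 };
            double[] positions = new double[segments + 1];
            double step = wallLengthFeet / segments;
            for (int i = 0; i <= segments; i++) positions[i] = i * step;
            return positions;
        }

        public static void main(String[] args) {
            // A 20 ft wall with a (made-up) 12 ft max spacing -> outlets at 0, 10, 20.
            System.out.println(java.util.Arrays.toString(outletPositionsFeet(20, 12)));
        }
    }
    ```

    The discrete and continuous flavors each want completely different machinery, and the pain is that one model has to hold all three at once.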

    Another problem, at least on the rendering end where I used to live, is that this sort of software runs head-first into the most annoying performance problems in graphics. As an example, properly rendering multiple layers of transparent possibly colored glass in real-time without raytracing is no mean feat–but for an architect, that’s totally reasonable, right? Looking out from your office across an atrium and seeing into another office isn’t some crazy far-fetched edge case.

    Similarly, an architect might say “Well, I got this room perfect. Let’s stamp out 50 of them on this floor and duplicate it a dozen times for the office floors of this building!” That seems perfectly reasonable, but now your poor renderer is just crying in the corner, rolled up in a ball, because all that shit doesn’t fit neatly into an acceleration structure at runtime.

    The last problem I’ll point out is that, honestly, this stuff is just plain hard. Like, real-people-math hard. What’s a good algorithm for tiling objects (say, lamp fixtures) over a ceiling? Okay, now, how do you clip out the parts of that for spaces jutting out into the room? Okay, but now, what if one of those spaces isn’t rectangular but instead is a curve–how are you testing against a curve? Okay, how do you do that performantly for hundreds of thousands of square feet?
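
    Here’s a rough sketch of the naive version of that problem, using the JDK’s java.awt.geom (for flavor only–this is nowhere near what a real tool, or we, actually did): build the ceiling region as “outline minus things jutting in”, with one curved wall, then keep only the grid points that land inside it.

    ```java
    import java.awt.geom.Area;
    import java.awt.geom.Path2D;
    import java.awt.geom.Rectangle2D;
    import java.util.ArrayList;
    import java.util.List;

    // Naive fixture tiling: candidate centers on a grid, filtered against the
    // ceiling region. Units are feet; all dimensions are made up.
    class CeilingTiling {
        public static void main(String[] args) {
            // Ceiling outline: a 40 x 30 room whose east wall bows outward as a curve.
            Path2D.Double outline = new Path2D.Double();
            outline.moveTo(0, 0);
            outline.lineTo(40, 0);
            outline.quadTo(48, 15, 40, 30); // the curved wall
            outline.lineTo(0, 30);
            outline.closePath();

            Area ceiling = new Area(outline);
            // A closet jutting into the room: clip it out of the fixture region.
            ceiling.subtract(new Area(new Rectangle2D.Double(0, 0, 10, 8)));

            double spacing = 8.0; // made-up fixture spacing
            List<double[]> fixtures = new ArrayList<>();
            for (double x = spacing / 2; x <= 48; x += spacing) {
                for (double y = spacing / 2; y <= 30; y += spacing) {
                    if (ceiling.contains(x, y)) fixtures.add(new double[] { x, y });
                }
            }
            System.out.println(fixtures.size() + " fixtures placed");
        }
    }
    ```

    Even this toy version hides the real questions: the curve test is doing real work under the hood, and doing millions of those queries over hundreds of thousands of square feet is where it stops being cute.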

    Are you measuring area or testing walkability? Okay, are you using triangles for that? Sweet, how are you turning your rooms into lists of triangles? Okay cool, what about rooms that aren’t rectangles? Okay, cool, what about rooms that are rectangular, but have two “holes” in the middle of each half where an elevator shaft passes (making kind of a figure-eight floorplan)? Okay, cool, you’re turning those into lists of triangles…how numerically robust is your solution? You’re not using just floats, right? …right? Okay cool you’re importing the floor plan from AutoCAD (somehow) but how are you guaranteeing the winding of the floor–if you pull in four line segments for the walls of the room, is it a room surrounded by the world or the world surrounded by the room?
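
    For the winding question specifically, the usual first line of defense is the signed (“shoelace”) area: for a simple polygon, its sign tells you whether the vertices run counter-clockwise or clockwise, and which of those you declare to mean “room” versus “everything but the room” is a convention you pick and then have to enforce on every import. A minimal sketch:

    ```java
    // Signed ("shoelace") area of a simple, non-self-intersecting polygon.
    // Positive means counter-clockwise by the usual convention, negative means
    // clockwise. Doubles are fine for this illustration; robust orientation
    // tests on real-world data want exact or adaptive-precision arithmetic.
    class Winding {
        static double signedArea(double[][] pts) {
            double sum = 0;
            for (int i = 0; i < pts.length; i++) {
                double[] a = pts[i];
                double[] b = pts[(i + 1) % pts.length];
                sum += a[0] * b[1] - b[0] * a[1];
            }
            return sum / 2.0;
        }

        public static void main(String[] args) {
            double[][] ccw = { { 0, 0 }, { 10, 0 }, { 10, 10 }, { 0, 10 } };
            double[][] cw  = { { 0, 0 }, { 0, 10 }, { 10, 10 }, { 10, 0 } };
            System.out.println(signedArea(ccw)); //  100.0 -> counter-clockwise
            System.out.println(signedArea(cw));  // -100.0 -> clockwise
        }
    }
    ```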

    (The answer for us, incidentally, was making heavy heavy use of some of the awesome computational geometry stuff Java shipped with. Even then, it did require some care and feeding. We also had a somewhat limited data model, which made life easier).
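
    For flavor, the kind of thing I mean (this is just the JDK’s java.awt.geom, not our code): the “figure-eight” floor from above is a rectangle with two shafts punched out, and Area will happily do the boolean ops and answer containment queries on the result. It does not, of course, answer any of the robustness or performance questions above for you.

    ```java
    import java.awt.geom.Area;
    import java.awt.geom.Rectangle2D;

    // A rectangular floor with two elevator shafts subtracted out.
    class FigureEightFloor {
        public static void main(String[] args) {
            Area floor = new Area(new Rectangle2D.Double(0, 0, 100, 40));
            floor.subtract(new Area(new Rectangle2D.Double(20, 15, 10, 10))); // shaft A
            floor.subtract(new Area(new Rectangle2D.Double(70, 15, 10, 10))); // shaft B

            System.out.println(floor.contains(50, 20)); // true: the strip between the shafts
            System.out.println(floor.contains(25, 20)); // false: inside shaft A
        }
    }
    ```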

    I could go on for ages about this, but basically it’s a tremendously fun field with lots of problems and value to society and basically no profit to speak of, so it’s hard to justify working on improving the tools that better civilization when the pay for you and your family is so much better writing scripts to automatically scale k8s clusters on AWS so you can better harvest customer data. C’est la vie.

    1. 6

      And for all of that, with a tiny team thinned by heavy layoffs over the years, we still managed to have:

      • Multiplayer design editing (using an EAV shared DB with locks, ::shudder::)
      • Automatic design verification for things like space flow (designing a courthouse, can you guarantee prisoners don’t use the public-facing areas to transit while awaiting trial; there’s a sketch of this after the list) or bills of material (designing a hospital, can you guarantee that your design does in fact include enough surgery rooms and equipment for same)
      • Web report generation and visualization via JWT (not the tokens, the other one)
      • Integration with Google (now Trimble, rip) SketchUp and exporting and importing building exteriors/footprints (another case of different views…we saw buildings as a collection of floors with exteriors, they saw building exteriors as not that)
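
      A check like the space-flow one can be thought of, at least in spirit, as graph reachability: rooms are nodes, doors are edges, and you ask whether the required trips are still possible once the forbidden room types are removed. The sketch below is just that shape of the problem (the names and the door graph are invented for illustration), not the actual product logic:

      ```java
      import java.util.*;

      // Sketch: can `from` reach `to` through the door graph without ever
      // entering a forbidden room? Plain DFS over an adjacency map.
      class SpaceFlowCheck {
          static boolean reachableAvoiding(Map<String, Set<String>> doors,
                                           String from, String to, Set<String> forbidden) {
              if (forbidden.contains(from) || forbidden.contains(to)) return false;
              Deque<String> stack = new ArrayDeque<>(List.of(from));
              Set<String> seen = new HashSet<>(List.of(from));
              while (!stack.isEmpty()) {
                  String room = stack.pop();
                  if (room.equals(to)) return true;
                  for (String next : doors.getOrDefault(room, Set.of())) {
                      if (!forbidden.contains(next) && seen.add(next)) stack.push(next);
                  }
              }
              return false;
          }

          public static void main(String[] args) {
              Map<String, Set<String>> doors = Map.of(
                  "holding", Set.of("secure-corridor", "lobby"),
                  "secure-corridor", Set.of("holding", "courtroom"),
                  "lobby", Set.of("holding", "courtroom"),
                  "courtroom", Set.of("secure-corridor", "lobby"));

              // Can prisoners get from holding to the courtroom without crossing the lobby?
              System.out.println(reachableAvoiding(doors, "holding", "courtroom",
                      Set.of("lobby"))); // true: the secure corridor exists
          }
      }
      ```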

      It was my first real exposure to what happens when you’re two decades ahead of the market, building cool shit, and nobody cares.

      1. 1

        Automatic design verification for things like

        Wow, that sounds like Design Rule Checks from the PCB world, but on building scale. Is that a common feature in BIM these days?

        1. 1

          Is that a common feature in BIM these days?

          I have no idea–I would imagine there’s probably some tiny software shop of five or ten people that supports a plugin for Revit that does this and prays each day Autodesk decides it’s marginally more cost effective to buy them out and merge their codebase than to throw interns at the problem.