      24 March 2022

      Q&A: In Digital Models, Architects and Gamers Find Common Ground

      Henning Larsen explores how acoustic research, video games and virtual reality might shape the future of architecture.

      What do architects and gamers have in common? When it comes to digital simulations, more than you might think. Finnur Pind, an acoustics expert and Ph.D. fellow with Henning Larsen, is exploring how sound mapping and virtual reality can create more immersive architectural renderings – and how this research crosses over into the video game industry. We sat down with Finnur to learn more.

      Your professional background is focused on acoustics research, and acoustic modeling in particular. How do you go about modeling something intangible?

      That’s the thing – people have a hard time understanding raw data that speaks to acoustic properties. Compare it to lighting, for example: if you have a 2D plan showing how light is distributed in a room, it’s fairly understandable – most people can see where there’s light and where there’s not. You can’t really do that with acoustics. You could map the decibel levels, but that doesn’t tell us much about the acoustic comfort inside a building. Acoustics is about how clear sound is, how easy it is to focus in a space, and so on. You can’t capture that as well in a drawing. If people can experience it firsthand, that explanation becomes a lot easier.

      A lot of your recent work here at Henning Larsen has been with virtual reality (VR). What potential do you see in VR, as far as acoustic research is concerned?

      I prefer to say we’re getting sound and acoustics into 3D models. Because it’s not just VR, it’s having sound become a part of architectural models – so my work is not necessarily restricted to VR. You could also sit at your computer, put on your headphones and look at a 3D model on the computer screen and hear how the building sounds. It’s about showing in an intuitive way how acoustic conditions respond to changes in a building’s materials, furnishings, and interior layout.
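      As a rough illustration of what “hearing a 3D model” can mean in practice, the sketch below convolves an anechoic (“dry”) recording with a pre-computed room impulse response, so the same voice can be replayed under different material scenarios. This is a minimal example of the general auralization technique, not Henning Larsen’s own pipeline; the file names are hypothetical, and mono WAV files with matching sample rates are assumed.

        # Minimal auralization sketch (hypothetical file names; mono WAV files assumed)
        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import fftconvolve

        def auralize(dry_path, rir_path, out_path):
            """Convolve an anechoic recording with a room impulse response (RIR)
            so the listener hears how that room would colour the same source."""
            fs_dry, dry = wavfile.read(dry_path)
            fs_rir, rir = wavfile.read(rir_path)
            assert fs_dry == fs_rir, "sample rates must match"
            wet = fftconvolve(dry.astype(float), rir.astype(float))
            wet /= np.max(np.abs(wet)) + 1e-12           # normalise to avoid clipping
            wavfile.write(out_path, fs_dry, (wet * 32767).astype(np.int16))

        # Swap the RIR to switch between interior scenarios:
        # auralize("speech_dry.wav", "rir_bare_concrete.wav", "speech_bare_concrete.wav")
        # auralize("speech_dry.wav", "rir_acoustic_panels.wav", "speech_panels.wav")

      Each impulse response would come from an acoustic simulation (or a measurement) of one design variant, so swapping the RIR is what lets a listener compare materials or layouts by ear rather than by reading numbers.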

      The unique thing about VR is how immersive it is, which gives you a better sense of the impact of different design options, geometric forms, material choices and floor plans. Any of these things can influence the soundscape, and certainly the architecture and aesthetics. Seeing these rendered in VR can make those considerations more immediate, more holistic. VR offers this to a broader audience than you might reach with, say, 80 printed pages of raw data. Right now, we’re transitioning from static 3D models to more interactive, immersive, realistic models. To do this, we’re incorporating a lot of technology from the video game industry.

      Can you say more about borrowing video game technology? I wouldn’t expect that to cross paths with architecture. What’s that relationship like?

      When you view a 3D model in architecture, everything’s usually static – there’s no life. You don’t sense motion, you don’t hear the sounds of people talking or music playing. If you really want to judge the quality of a building’s interior, do you get the right picture if you visit when it’s completely empty? Using world-building technology from video games, like animating people, adding ambient noise and more motion, can help us give more life to our renders.

      There’s some inspiration going the other direction, too. The first thing that comes to mind is that Unity, a game engine developer founded in Copenhagen, is now putting a big emphasis on the AEC [architecture, engineering and construction] market. They’re developing a sort of transfer program that translates models from architectural software like Revit or AutoCAD into the Unity engine, so you can use all the tools from the gaming world to give life to your 3D model.

      It’s super interesting to see these two worlds merging. In architecture and building design, accuracy is of course key. All dimensions must be correct, from engineering to sound design; it’s got to be as close as possible to reality. In gaming, things just have to look good and seem plausible, but game developers focus much more on interactivity and on creating a really immersive experience, which is something architects and building designers don’t use so much. We can each learn something from the other.

      Regarding acoustic considerations in 3D models, what developments or innovations are you hoping to see in the coming years?

      I would first like to say that you could certainly spend your lifetime on this, and still have mountains more to improve. Modeling sound in spaces is an amazingly complex task.

      At the moment, we want to improve interactivity. Right now, for our acoustic 3D models we’ve pre-computed different cases of interior layouts, and you can switch between the acoustic conditions of each one. Exploring changes we haven’t pre-calculated requires going back to the computer, re-running models, and making new calculations, and that takes some time. Ideally, you could change cladding materials, move a wall or shift furniture around on the fly, and experience a real-time response in the model. But it’s tricky to get both accuracy and interactivity – that requires real-time calculation, where the sound qualities instantly update depending on where a sofa is moved in a model, for example.

      In the coming years, I think machine learning technology could bring us closer to the real-time domain. You can train a machine learning network to study huge amounts of acoustic simulation data and recognize patterns. Then when you present it with a new scenario, it could potentially predict with good accuracy what the conditions will be like based on all these other points of reference.
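      To make that idea concrete, here is a minimal sketch of a surrogate model, not Henning Larsen’s actual workflow: it generates synthetic “simulation” data with the classic Sabine formula (T60 = 0.161 · V / A) and trains a regressor to predict reverberation time directly from room dimensions and a mean absorption coefficient. In a real pipeline the training data would come from detailed acoustic simulations rather than a textbook formula; all parameter ranges below are assumptions for illustration.

        # Surrogate-model sketch: synthetic data from the Sabine formula, then a
        # regressor that predicts reverberation time from room parameters.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        dims = rng.uniform(3.0, 30.0, size=(n, 3))       # width, depth, height [m]
        alpha = rng.uniform(0.05, 0.6, size=n)           # mean absorption coefficient
        volume = dims.prod(axis=1)
        surface = 2 * (dims[:, 0] * dims[:, 1] + dims[:, 0] * dims[:, 2] + dims[:, 1] * dims[:, 2])
        t60 = 0.161 * volume / (alpha * surface)         # Sabine reverberation time [s]

        X = np.column_stack([dims, alpha])
        X_train, X_test, y_train, y_test = train_test_split(X, t60, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("R^2 on unseen rooms:", round(model.score(X_test, y_test), 3))

        # Predictions are effectively instant – the property that would make
        # "move the sofa and hear the result" interactivity feasible.
        print(model.predict([[12.0, 8.0, 3.5, 0.25]]))   # a hypothetical room

      The point of the sketch is the trade Finnur describes: the expensive simulations are run offline to build the training set, and the trained model then answers new scenarios in milliseconds.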

      Do you think there could be a day when architects design primarily through VR headsets?

      That’s certainly the vision that I think quite a lot of people have, that you’re in the headset, designing, drawing, moving things around. Yeah, why not? I mean, not now, not in two years, but if we’re talking HL60, 60 years is a long time. Look where we were 60 years ago. What did they say about the moon landing? A modern iPhone has 100,000 times the processing power of the computer that brought us to the moon. Predicting the future is hard, and technology is evolving faster and faster.