Session 3A – Generative Design and Machine Learning

Tuesday 30 March, 14:00 – 15:30 // Session Chair: Roland Snooks

156 – Generative Design Method of Building Group: Based on Generative Adversarial Network and Genetic Algorithm

Tuesday 30 March, 14:00, Session 3A

Jiawei Yao, College of Architecture and Urban Planning, Tongji University
Chenyu Huang, School of Architecture and Art, North China University of Technology
Xi Peng, Department of Architecture, Tamkang University
Philip F. Yuan, College of Architecture and Urban Planning, Tongji University

From parametric form finding to digital shape generation, generative design has been a subject of continuous discussion in recent years. As an important watershed in building intelligence, generative design methods serve a dual role in the digital architectural design workflow: scheme selection and building performance optimization. This paper studies a generative design method for the layout of residential building groups. A pix2pix network, a type of generative adversarial network, is used to learn the layout patterns of residential buildings in Shanghai. The generated layouts are then given volume and optimized for sunshine hours and other performance parameters using Octopus, a genetic algorithm tool for Grasshopper. During generation, different training sample sets and Pareto-based genetic algorithm optimization are used to control building density, plot ratio, and height limits. The method can, to a certain extent, serve real application scenarios in the early stage of architectural design, offers good extensibility, and provides a basis for generative design methods for building groups.
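
As a rough, purely illustrative sketch of the Pareto-selection step described above (the pix2pix training and the Grasshopper/Octopus definition are not given in the abstract and are not reproduced here), the following Python snippet filters hypothetical layout candidates to their non-dominated set with respect to building density, plot ratio, and sunshine hours; all candidate data, objective names, and objective directions are assumptions.

```python
import random
from dataclasses import dataclass

# Hypothetical stand-in for a generated residential layout; the paper's
# pipeline evaluates pix2pix outputs in Grasshopper/Octopus instead.
@dataclass
class Candidate:
    density: float         # building density (assumed: lower is better)
    plot_ratio: float      # floor area ratio (assumed: higher is better)
    sunshine_hours: float  # average sunshine hours (assumed: higher is better)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is no worse than b in every objective and strictly
    better in at least one."""
    no_worse = (a.density <= b.density and
                a.plot_ratio >= b.plot_ratio and
                a.sunshine_hours >= b.sunshine_hours)
    strictly_better = (a.density < b.density or
                       a.plot_ratio > b.plot_ratio or
                       a.sunshine_hours > b.sunshine_hours)
    return no_worse and strictly_better

def pareto_front(population):
    """Return the non-dominated subset of the population."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

if __name__ == "__main__":
    random.seed(42)
    population = [Candidate(random.uniform(0.2, 0.5),
                            random.uniform(1.0, 3.5),
                            random.uniform(1.0, 4.0))
                  for _ in range(50)]
    for c in pareto_front(population):
        print(c)
```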

Jiawei Yao is an associate professor at the College of Architecture and Urban Planning, Tongji University. He received his Ph.D. from the University of Nottingham, UK, and is a member of the Computational Design Academic Committee of the Architectural Society of China. His main research areas are the evaluation of the building environment and urban microclimate, as well as performance-oriented generative design methods supported by artificial intelligence and other technologies.

130 – Exploring the Key Attributes of Lifestyle Hotels: A Content Analysis of User-Created Content on Instagram

Tuesday 30 March, 14:15, Session 3A

Yoojin Han, Yonsei University
Hyunsoo Lee, Yonsei University

This study investigates the key attributes of lifestyle hotels by analyzing user-created content on Instagram, an image-based social network service. In an era of uncertainty in the tourism and hospitality industry, hotels must create a competitive identity. However, despite the significant growth of the lifestyle hotel segment, the concept of a lifestyle hotel remains vague. Therefore, to explore how lifestyle hotels are defined, perceived, and interpreted, and to suggest their crucial attributes, this paper examines user-created content on Instagram. Data from 20,886 Instagram posts related to lifestyle hotels, including 2,209 locations, 43,586 hashtags, and 20,866 images, were analyzed using social network analysis and computer vision technology (Vision AI). The results demonstrate that lifestyle hotels are perceived as design-focused branded hotels that represent the urban lifestyle and combine vacation and urban activities. Furthermore, the results reflect one of the latest hospitality trends: a holiday in an urban setting in addition to the primary purpose of traveling. Finally, this research suggests broader uses of big data and deep learning for analyzing how a place is consumed in a geospatial context.
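
As a minimal, hypothetical illustration of the hashtag side of such an analysis (the Vision AI image labeling and the full 20,886-post dataset are not reproduced here), the snippet below counts hashtag frequencies and co-occurrences with the Python standard library; the sample posts are invented.

```python
from collections import Counter
from itertools import combinations

# Invented sample: one hashtag list per Instagram post. The study itself
# analyzed 43,586 hashtags drawn from 20,886 posts.
posts = [
    ["lifestylehotel", "boutiquehotel", "interiordesign", "seoul"],
    ["lifestylehotel", "staycation", "rooftopbar", "citybreak"],
    ["lifestylehotel", "interiordesign", "staycation"],
]

# Hashtag frequency: which attributes appear most often.
tag_counts = Counter(tag for tags in posts for tag in tags)

# Co-occurrence counts: edge weights for a simple hashtag network.
edge_counts = Counter()
for tags in posts:
    for a, b in combinations(sorted(set(tags)), 2):
        edge_counts[(a, b)] += 1

print(tag_counts.most_common(5))
print(edge_counts.most_common(5))
```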

Yoojin Han is a PhD student in the Department of Interior Architecture & Built Environment at Yonsei University in Seoul, South Korea. Her primary research interests are user experience, design, and branding in physical environments.

Hyunsoo Lee is a Professor in the Department of Interior Architecture & Built Environment at Yonsei University in Seoul, South Korea. He teaches classes such as residential space planning, digital design studio, digital fabrication, and design management. His research spans the interdisciplinary areas of bio-inspired architectural design, design theory, and design methods.

196 – CubiGraph5K: Organizational Graph Generation for Structured Architectural Floor Plan Dataset

Tuesday 30 March, 14:30, Session 3A

Yueheng Lu, Collov Inc.
Runjia Tian, Harvard University Graduate School of Design
Ao Li, Transsolar Inc.
Xiaoshi Wang, Harvard University Graduate School of Design
Jose Luis Garcia del Castillo Lopez, Harvard University Graduate School of Design

In this paper, a novel synthetic workflow is presented for the procedural generation of room relation graphs from structured architectural floor plan datasets. Unlike classical floor plan generation models, which are based on strong heuristics or low-level pixel operations, our method relies on parsing vectorized floor plans to generate their corresponding organizational graphs for further graph-based deep learning. This work presents the schema for the organizational graphs, describes the generation algorithms, and analyzes their time/space complexity. As a demonstration, a new dataset called CubiGraph5K is presented: a collection of graph representations generated by the proposed algorithms, using the floor plans in the popular CubiCasa5K dataset as inputs. The aim of this contribution is to provide a matching dataset that can be used to train neural networks on enhanced floor plan parsing, analysis, and generation in future research.
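
The authors' CubiCasa5K parser is not given in the abstract; as a sketch of the general idea of deriving a room-adjacency graph from vectorized room geometry, the following Python example builds a graph whose edges record the shared-wall length between hypothetical room polygons (shapely and networkx are assumed tooling here, not implied by the paper).

```python
import networkx as nx
from shapely.geometry import Polygon

# Hypothetical room polygons in plan coordinates; the paper parses
# vectorized CubiCasa5K floor plans rather than hand-written geometry.
rooms = {
    "living":  Polygon([(0, 0), (5, 0), (5, 4), (0, 4)]),
    "kitchen": Polygon([(5, 0), (8, 0), (8, 4), (5, 4)]),
    "bedroom": Polygon([(0, 4), (5, 4), (5, 8), (0, 8)]),
}

MIN_SHARED_WALL = 1.0  # ignore corner contacts shorter than this (assumed unit: m)

graph = nx.Graph()
for name, poly in rooms.items():
    graph.add_node(name, area=poly.area)

names = list(rooms)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # Length of the shared boundary between the two room polygons.
        shared = rooms[a].intersection(rooms[b]).length
        if shared >= MIN_SHARED_WALL:
            graph.add_edge(a, b, shared_wall=shared)

print(graph.edges(data=True))
```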

Yueheng Lu is a Machine Learning Engineer at Collov, a startup in Silicon Valley focusing on transforming traditional interior design through artificial intelligence. She is interested in machine learning research in the field of design intelligence and urban data science. Yueheng holds a Master of Architecture in Urban Design degree from Harvard Graduate School of Design and a professional Bachelor of Architecture degree from Illinois Institute of Technology.

Runjia Tian is a Master in Design Studies, Technology Track student at the Harvard Graduate School of Design. Trained as an architect, Runjia is a multidisciplinary advocate of architecture, computation, and engineering. He investigates the future of design through the synergetic engagement of creative computation, extended reality, multimodal media, and machine perception. His more recent research focuses on enactive co-creation between human designers and artificial intelligence. Runjia has authored or co-authored several peer-reviewed publications on architecture, urban design, and technology. He is the co-founder of AiRCAD, with research and working experience at MIT CSAIL and at Autodesk.

Ao Li is a building simulation engineer and tool developer at Transsolar Energietechnik, New York. He is also an advocate of data-driven, evidence-based design approaches and quantitative thinking for the creative industry. His work spans design, computation, optimization, data science, and machine learning. Ao received his Master's degree in Design Studies, with a concentration in Energy and Environment, from the Harvard Graduate School of Design in 2020.

Xiaoshi Wang is a third-year PhD student at the Harvard Graduate School of Design. His research concentrates on using machine learning models to connect low-complexity boundary geometry with its interior natural ventilation conditions. Xiaoshi is also a researcher at the Harvard Center for Green Buildings and Cities (CGBC). Xiaoshi holds a Master of Design degree from the Harvard GSD, a Master of Science in Advanced Architectural Design degree from Columbia University, and a Bachelor of Architecture degree from Tongji University in Shanghai, China. He previously worked as an architectural designer in New York City.

Jose Luis García del Castillo López is a Lecturer in Architectural Technology at the Harvard Graduate School of Design. He advocates for a future where programming and code are tools as natural to artists as paper and pencil. In his work, he explores creative opportunities at the intersection of design, technology, fabrication, data, and art. Jose Luis is a registered architect and holds a Doctor of Design and a Master in Design Studies in Technology from the Harvard University Graduate School of Design.

305 – GenScan: A Generative Method for Populating Parametric 3D Scan Datasets

Tuesday 30 March, 14:45, Session 3A

Mohammad Keshavarzi, University of California, Berkeley
Oladapo Afolabi, University of California, Berkeley
Luisa Caldas, University of California, Berkeley
Allen Y. Yang, University of California, Berkeley
Avideh Zakhor, University of California, Berkeley

The availability of rich 3D datasets that reflect the geometric complexity of built environments remains an ongoing challenge for 3D deep learning methodologies. To address this challenge, we introduce GenScan, a generative system that populates synthetic 3D scan datasets in a parametric fashion. The system takes an existing captured 3D scan as input and outputs alternative variations of the building layout, including walls, doors, and furniture with corresponding textures. GenScan is a fully automated system that can also be manually controlled by a user through a dedicated user interface. Our proposed system utilizes a combination of a hybrid deep neural network and a parametrizer module to extract and transform elements of a given 3D scan. GenScan takes advantage of style transfer techniques to generate new textures for the generated scenes. We believe our system will facilitate data augmentation to expand the currently limited 3D geometry datasets commonly used in 3D computer vision, generative design, and general 3D deep learning tasks.
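
GenScan's hybrid neural network and texture style transfer cannot be reproduced from the abstract; as a toy sketch of the parametric-variation idea only, the snippet below applies an anisotropic scaling to hypothetical wall segments (assumed to have been extracted from a scan) to produce an alternative layout.

```python
import numpy as np

# Hypothetical extracted wall segments (x0, y0, x1, y1) in metres;
# GenScan derives these from a captured 3D scan with a hybrid deep
# network and a parametrizer module, which are not reproduced here.
walls = np.array([
    [0.0, 0.0, 6.0, 0.0],
    [6.0, 0.0, 6.0, 4.0],
    [6.0, 4.0, 0.0, 4.0],
    [0.0, 4.0, 0.0, 0.0],
], dtype=float)

def parametric_variation(walls, sx, sy):
    """Scale the layout anisotropically about its centroid, producing an
    alternative layout variation while preserving wall connectivity."""
    pts = walls.reshape(-1, 2)
    centroid = pts.mean(axis=0)
    scaled = (pts - centroid) * np.array([sx, sy]) + centroid
    return scaled.reshape(-1, 4)

variant = parametric_variation(walls, sx=1.2, sy=0.8)
print(variant.round(2))
```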

Mohammad Keshavarzi is a Ph.D. candidate at the University of California, Berkeley. His research focuses on how generative workflows can be utilized to address contextual challenges in current AR/VR/MR interfaces, exploring topics such as scene synthesis, spatial telepresence, space layout optimization, and performance-based generative design for spatial computing applications. Keshavarzi previously worked as a Research Intern at Facebook Reality Labs, and as a Design Researcher and Generative Design Software Developer at Autodesk Research. He has a background in computational design, with an M.S. in Architecture from UC Berkeley and a B.Arch from the University of Tehran.

Dr. Oladapo Afolabi is a recent Ph.D. graduate from the Electrical Engineering and Computer Sciences Department at UC Berkeley. He received his B.S. in Electrical Engineering from the University of Virginia. He is broadly interested in problems involving computer vision and autonomous systems. He has worked in areas such as autonomous driving, energy disaggregation and 3D reconstruction. His recent focus has been on improving computer vision based 3D reconstruction and understanding for indoor scenes, making use of geometric properties of objects as well as deep generative models to build 3D models that are amenable to MR applications.

Dr. Luisa Caldas is a Professor in the Department of Architecture, a faculty scientist at Lawrence Berkeley National Laboratory, and the founder of the XR Lab, a laboratory for VR/AR/MR research at UC Berkeley. An architect by training, she received her PhD in Architecture and Building Technology from MIT. Caldas is a Fulbright Scholar and has held academic or visiting appointments at the Tokyo Institute of Technology, MIT, and the University of Lisbon, among others. Her research focuses on immersive virtual environments, generative design systems, and sustainable design. Among her XR projects are BAMPFA AR – Augmented Time and Virtual Bauer Wurster.

Dr. Allen Y. Yang is the Executive Director of the FHL Vive Center for Enhanced Reality at UC Berkeley. Previously he served as Chief Scientist of the Coleman Fung Institute for Engineering Leadership. His primary research areas include high-dimensional pattern recognition, computer vision, image processing, and applications in motion segmentation, image segmentation, face recognition, and sensor networks. Yang received his Ph.D. degree from the University of Illinois at Urbana-Champaign, along with two M.S. degrees in Electrical Engineering and Mathematics. He completed his B.Eng. degree in Computer Science at the University of Science and Technology of China.

Dr. Avideh Zakhor is a professor of EECS at the University of California, Berkeley, where she holds the Qualcomm chair. Her primary areas of research include 3D computer vision and machine learning. Dr. Zakhor's honors include SPIE Electronic Imaging Scientist of the Year (2018), IEEE Fellow (2001), the Presidential Young Investigator (PYI) Award (1990), and the Hertz Fellowship (1984-1988). Dr. Zakhor founded Indoor Reality in 2015, with products in visual and metric documentation of indoor spaces. She received her Master's and PhD degrees in EECS from MIT in 1985 and 1987, respectively, and her Bachelor's degree from Caltech in 1983.

308 – Intuitive Behavior: The Operation of Reinforcement Learning in Generative Design Processes

Tuesday 30 March, 15:00, Session 3A

Dasong Wang, Royal Melbourne Institute of Technology University (RMIT)
Roland Snooks, Royal Melbourne Institute of Technology University (RMIT)

The paper posits a novel approach for augmenting existing generative design processes to embed a greater level of design intention and create more sophisticated generative methodologies. The research presented in the paper is part of a speculative research project, Artificial Agency, that explores the operation of Machine Learning (ML) in generative design and robotic fabrication processes. By framing the inherent limitations of contemporary generative design approaches, the paper speculates on a heuristic approach that hybridizes a Reinforcement Learning based top-down evolutionary approach with bottom-up emergent generative processes. This approach is developed through a design experiment that establishes a topological field with intuitive global awareness of pavilion-scale design criteria. Theoretical strategies and technical details are demonstrated through the design experiment, addressing the translation of ML definitions into a generative design context as well as the encoding of design intentions. Critical reflections are offered on the impacts, characteristics, and challenges of further developing the approach. The paper attempts to broaden the range and impact of Artificial Intelligence applications in the architectural discipline.
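
The Artificial Agency experiments themselves cannot be reproduced from the abstract; as a toy illustration of the reinforcement learning ingredient alone, the following tabular Q-learning loop trains an agent to steer a growth point across a small design field toward a target cell (the grid, rewards, and hyperparameters are all assumptions, not the paper's setup).

```python
import numpy as np

# Toy Q-learning loop: an agent learns to move a "growth" point across a
# 5x5 design field toward a target cell. This only illustrates the RL
# ingredient; the paper couples RL with a multi-agent generative process.
GRID = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
GOAL = (4, 4)

q = np.zeros((GRID, GRID, len(ACTIONS)))      # Q-table: state x action values
alpha, gamma, eps = 0.5, 0.9, 0.2             # assumed learning hyperparameters
rng = np.random.default_rng(1)

for episode in range(500):
    state = (0, 0)
    while state != GOAL:
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(q[state]))
        dr, dc = ACTIONS[a]
        nxt = (min(max(state[0] + dr, 0), GRID - 1),
               min(max(state[1] + dc, 0), GRID - 1))
        reward = 1.0 if nxt == GOAL else -0.01  # small step penalty
        # Standard Q-learning update.
        q[state][a] += alpha * (reward + gamma * np.max(q[nxt]) - q[state][a])
        state = nxt

# Greedy policy after training: preferred action index per cell.
print(np.argmax(q, axis=2))
```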

Dasong Wang is a Ph.D. candidate at RMIT University in the School of Architecture and Urban Design, supported by an RMIT RRSS scholarship. His research focuses on the innovative role of machine learning, especially reinforcement learning, in generative architectural design and digital fabrication. He previously received an M.Arch from RMIT University, an M.A from Hochschule Wismar, and a B.Eng from Shenyang Jianzhu University, giving him a multidisciplinary academic background spanning Computational Design, Sustainable Technology, Urban Planning, and Ecological Landscape Design.

Roland Snooks is an Associate Professor at RMIT University in the School of Architecture and Urban Design. Roland's design research focuses on the development of behavioral processes of formation that draw on the logic of swarm intelligence and the operation of multi-agent algorithms. Roland has previously taught widely in the US, including at Columbia University, the University of Pennsylvania, SCI-Arc, and the Pratt Institute. He is the director of Studio Roland Snooks and a co-founder of the experimental research practice Kokkugia. He received a PhD from RMIT and a Master in Advanced Architectural Design from Columbia University.