ML Generated Form Exploration
Utilized StyleGAN2, trained on a dataset of over 1,500 images of mushroom gills and microscopic imagery.
Such explorations pose a question about the potential uses of AI- and ML-generated forms. My interest lies particularly in the ability of ML models to provide insight beyond the limited scope of human vision. That is, our vision and minds inherently inhibit us from thinking beyond our lived experience. How might ML and computer vision offer us other perspectives?
Given my further interest in decolonial and post-colonial design canons and vernaculars, I am intrigued by the ability of ML and AI to provide and promote visual languages unlike anything seen within the established Western design canon.*

*I recognize that these models were created by humans, particularly developers working within a Western setting. Still, I believe there is value in using such technologies in a design context, as they may provide perspectives that would otherwise be overlooked or unconsidered.

Latent walk between more visually accurate fungi gills produced by the trained StyleGAN2 model.

Latent walk between more abstract forms generated by the same model, trained on a dataset of over 1,500 images of fungi gills.
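A latent walk like the ones above is, at its core, an interpolation between two points in the model's latent space, with each intermediate vector rendered by the trained generator as one frame. The sketch below, assuming NumPy and StyleGAN2's default 512-dimensional latent space, shows the interpolation step only; the generator call (`G(z)`) is a hypothetical stand-in for whichever StyleGAN2 runtime holds the trained model.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.
    StyleGAN latents are roughly Gaussian, so slerp keeps the in-between
    points at a plausible norm, where plain linear lerp can drift toward
    the (atypical) center of the distribution."""
    z0_n = z0 / np.linalg.norm(z0)
    z1_n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z0 + t * z1  # vectors nearly parallel: fall back to lerp
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_walk(z_start, z_end, steps):
    """Return a list of latent vectors stepping from z_start to z_end."""
    return [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, steps)]

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # 512 = StyleGAN2's default latent dimension
z_b = rng.standard_normal(512)
frames = latent_walk(z_a, z_b, steps=24)
# In a real walk, each vector would be rendered by the trained generator,
# e.g. img = G(z) for z in frames, and the images stitched into a video.
```

Each frame of the walk then corresponds to one point along this path, which is why the forms appear to morph smoothly between gill structures.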

How might this fungi model project itself onto various human forms? How can this provide insight into potential symbiotic relationships between humans and fungi? 
Further Questions
How might models trained on other datasets, such as graphic design and/or typography, give insight into other canons of design? (i.e., can the outputs of such a trained model be used to understand graphic design from a different perspective?)
How might ML-generated art provide non-human perspectives?
What are the implications of creating art and digital forms from models that are not widely understood? 