Lost In Diffusion museumnacht

    Installation

    Design Museum Ghent commissioned an installation based on my earlier work with the collection. The installation was both an explorer for the collection and a creative tool. It also showcased the results of a livecoded visuals (hydra) workshop I taught at Mutation Festival. The installation was built around the everlasting question:

    CAN ARTIFICIAL INTELLIGENCE DESIGN A CHAIR?

    Are machines capable of understanding the intricacies of design? And does "more data" actually generate "better results"? The project Lost In Diffusion by artist/coder and engineer Kasper Jordaens is an attempt to better understand the role of algorithms and AI models in relation to a museum collection. AI has proven supportive in turning archival data into accessible, interconnected, and intelligible information. However, its potential for "generating" new data in meaningful ways is a domain largely left untapped. Given access to our collection database, containing both metadata and images of the design objects represented in the museum's collection, Kasper Jordaens (re-)trained a diffusion model* in an attempt to make the machine understand not only the intricacies of design, as defined through the (diffuse) scope of the collection, but also to generate a model that could potentially act as a co-agent, next to the designer, in the process of design.
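    The page doesn't include the training code, but the core of (re-)training a text-to-image diffusion model on collection data looks roughly like the sketch below. It is a minimal sketch, assuming image/caption pairs exported from the collection database and the Hugging Face diffusers building blocks; the base model, dataset layout, and hyperparameters are illustrative assumptions, not the installation's actual setup.

    ```python
    # Minimal sketch: fine-tuning a Stable Diffusion UNet on (image, caption)
    # pairs exported from a museum collection database. Model choice and
    # hyperparameters are assumptions, not the installation's configuration.
    import torch
    import torch.nn.functional as F
    from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
    from transformers import CLIPTextModel, CLIPTokenizer

    base = "runwayml/stable-diffusion-v1-5"
    tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
    text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
    vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
    unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
    scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

    vae.requires_grad_(False)          # only the UNet is trained here
    text_encoder.requires_grad_(False)
    optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

    def train_step(pixel_values, captions):
        """One denoising step: add noise to latents, predict it from the caption."""
        with torch.no_grad():
            # Encode images into the VAE latent space (0.18215 is SD's scale factor)
            latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
            tokens = tokenizer(captions, padding="max_length", truncation=True,
                               max_length=tokenizer.model_max_length,
                               return_tensors="pt")
            text_embeds = text_encoder(tokens.input_ids)[0]

        noise = torch.randn_like(latents)
        timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                                  (latents.shape[0],), device=latents.device)
        noisy = scheduler.add_noise(latents, noise, timesteps)

        pred = unet(noisy, timesteps, encoder_hidden_states=text_embeds).sample
        loss = F.mse_loss(pred, noise)   # standard noise-prediction objective
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()
    ```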
    Putting this model to the test, the results are shown during Museumnacht both in the form of an interactive kiosk, which allows the visitor to explore parts of the museum collection using results from the workshop at Mutation Festival, and as data input for the visual backdrop of the Algorave, a live performance where creative coding, music and graphic design intersect.

    the explorer

    control panel explained

    Below are some impressions of the machine: a purpose-built kiosk running version 0.1 of the museum data explorer, a work in progress but fully operational. The controller is a revamp of the pumpkinmaster3000 (which served in pumpkin orchestra and data intersect study), now expanded with a Raspberry Pi to run the realtime visuals.
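    How the control panel talks to the visuals isn't detailed on this page; a common pattern for this kind of setup, and only a plausible sketch here, is reading the panel's buttons via the Pi's GPIO and forwarding state changes to the visuals process as OSC messages. The pin numbers, port, and OSC address scheme below are all assumptions.

    ```python
    # Hypothetical sketch: GPIO buttons on the Raspberry Pi toggling visual
    # filters over OSC. Pins, port and address scheme are illustrative guesses.
    from signal import pause

    from gpiozero import Button
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)  # visuals process, local machine

    # Map physical buttons (BCM pin numbers) to filter toggles in the visuals.
    filters = {17: "pixelate", 27: "kaleid", 22: "colorshift"}
    state = {name: False for name in filters.values()}

    def make_handler(name):
        def toggle():
            state[name] = not state[name]
            client.send_message(f"/filter/{name}", int(state[name]))
        return toggle

    buttons = []
    for pin, name in filters.items():
        b = Button(pin)
        b.when_pressed = make_handler(name)
        buttons.append(b)  # keep references so the callbacks stay alive

    pause()  # block forever, waiting for button presses
    ```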

    images/video

    unfiltered, untransformed image showing a piece of the collection
    unfiltered image showing a piece of the collection after a first diffusion step, with the input still shown as the small image on the right; the enabled and disabled buttons are also visible on screen in this screenshot
    image showing a 3D model generated from the input, as well as the image description generated by AI (a captioning sketch follows below); the filtered output is a realtime-rendered hydra patch loaded in p5Live
    heavily filtered image showing a piece of the collection
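    The page doesn't name the model behind the AI-generated image descriptions; as a stand-in, the sketch below shows how a caption could be produced for a collection image with BLIP, an off-the-shelf captioning model. The model choice and file path are assumptions.

    ```python
    # Hypothetical sketch: generating an image description for a collection
    # piece with BLIP. The kiosk's actual captioning model is not specified.
    from PIL import Image
    from transformers import BlipForConditionalGeneration, BlipProcessor

    processor = BlipProcessor.from_pretrained(
        "Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained(
        "Salesforce/blip-image-captioning-base")

    image = Image.open("collection/chair_0001.jpg").convert("RGB")  # example path
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(out[0], skip_special_tokens=True))
    ```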

    the process

    For more insight into how I processed the data, have a look at the lost in diffusion page.
