
Creativeapplications.Net (CAN) is a community of creative practitioners working at the intersection of art, media and technology.

  • Date: 08/09/2025
  • Author: @Metacreation_Lab
  • Autolume is a no-code visual synthesizer developed by the Metacreation Lab. It enables artists to train and explore their own models with small datasets, giving them direct creative control and the ability to perform live with AI-generated imagery.

    The system covers the full workflow, from data preprocessing and model training to real-time latent space navigation and output upscaling. By making the artistic potential of generative AI accessible to non-technical users, Autolume supports a hands-on workflow that fosters creative ownership. It also integrates with the OSC (Open Sound Control) protocol for audio-reactive visuals, making it a powerful tool for live performance.
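Because Autolume listens for OSC, any tool that can emit OSC packets over UDP can drive its visuals. As a rough illustration of what such a sender involves (pure standard library; the `/autolume/z` address and port 9000 are placeholders, not Autolume's documented namespace), an OSC message is just a null-padded address string, a type-tag string, and big-endian float arguments:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args: float) -> bytes:
    # Encode an OSC message: padded address, padded type tags, float32 payload
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(args)).encode("ascii"))
    for value in args:
        msg += struct.pack(">f", value)  # big-endian 32-bit float
    return msg

# Placeholder usage: stream an audio feature (e.g. loudness) to the visuals
packet = osc_message("/autolume/z", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))  # port is an assumption
```

In practice one would reach for a library such as python-osc, but the packet layout above is what actually travels on the wire.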

    Autolume is built on Generative Adversarial Networks (GANs) and provides a controllable artistic workflow. Key features include:

    • Model Training: Train models from scratch or resume from a checkpoint, with support for square and non-square datasets. Augmentation techniques enable training with small datasets.
    • Latent Space Projection: Project an image or text embedding into the latent space for unique artistic exploration.
    • Model Mixing: Blend two trained models into a new one, combining their visual features.
    • Real-time Generation: The Autolume-live module allows real-time latent space exploration and network parameter control via OSC, enabling audio-reactive and other interactive works.
    • Super-resolution: Upscale images and videos with a dedicated module for high-resolution output.
    • Hardware/Software Details: Built on GANs, Autolume supports interactive applications through OSC integration.
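Two of the features above, latent space navigation and model mixing, boil down to linear interpolation: walking the latent space interpolates between latent vectors, and mixing blends two checkpoints parameter by parameter. A toy sketch of the idea (plain Python lists stand in for real GAN tensors; the names are illustrative, not Autolume's API):

```python
def lerp(z0, z1, t):
    # Linear interpolation: t=0 returns z0, t=1 returns z1
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

def blend_models(w0, w1, alpha):
    # Blend two checkpoints parameter-by-parameter; dicts of flat
    # weight lists stand in for real framework state dicts
    return {name: lerp(w0[name], w1[name], alpha) for name in w0}

# A latent "walk": small steps along the path between two vectors
z_start, z_end = [0.0, 2.0], [2.0, 4.0]
path = [lerp(z_start, z_end, i / 10) for i in range(11)]
```

Real StyleGAN-family tooling often prefers spherical interpolation for latent walks and per-layer blending for model mixing, but the linear version conveys the idea.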

    See Autolume in action in these projects:

    Autolume Mzton

    A generative AI artwork combining AI-driven music and visuals to reflect on dystopia and algorithmic autonomy.

    Revival

    A real-time audiovisual improvisation where artists collaborate with AI agents in sound and visuals.

    Alpha Prism

    An installation where audience portraits are projected into an AI model and morphed into a continuous video loop, reflecting on deepfakes and identity.

    Reprising Elements

    A live audiovisual performance co-created by a calligrapher, a generative AI system, and a sound artist through intricate audio-visual feedback loops.

    Dreamscape

    Artists used Autolume to create improvised, audio-reactive visual works by processing stills and video loops.

    Autolume Acedia

    Generative visuals respond to music in real time, producing abstract imagery evocative of bodies and organs.

    Project Page | LinkedIn | Instagram | Read the Paper | More
