Generative styling of editorial content

    Editorial styling has made significant progress since Snowfall, the piece that inspired Gutenberg. With these evolutions, styles can now be generated automatically from content, mood, or data input. Styling has become more sophisticated, more accessible, and truly generative.

    In this post, I will discuss two ways of creating styles: one already in use and one currently in development. Let’s examine them more closely.

    Now reactive

    Reactive styling involves responding to user input with a predetermined outcome, similar to simple prompts today. This type of styling has its roots in algorithmic art, such as the works of Katharina Brunner, which can be found on GitHub. It’s important to note that AI has been widely used in generative art for a while; in many ways, we are late to the party, not only in WordPress but as a technology discipline. As we explore, we should learn from the artists who have gone before us. That, however, is another post; today, let’s look at what generative styling is.

    Let me demonstrate how reactive generative styling might work; you may find it similar to other tools you have used before. One example is a feature that suggests photos for your article or proposes a background color based on the title. You start by clicking a button or following a prompt: a straightforward process.

    You can go quite far with reactive styling when you combine prompts, even simple ones. Imagine you have created content, in whatever format, about the top five holiday spots this year. Using prompts, you can:

    • Change the background and text to this summer’s top color combination, asked by a prompt.
    • Pick three photographs for each holiday spot you recommend and present them as a gallery, asked by a prompt.
    • Use a recommended font and readability combination, asked by a prompt.

    The common thread, though, is that you are asking. Input leading to output – reactive.
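
    To make that concrete, here is a minimal sketch of a reactive pipeline in TypeScript. Everything in it is hypothetical: the prompt shapes, the applyReactivePrompt function, the hard-coded palette. None of it is a WordPress or Gutenberg API; it only illustrates that each prompt maps to a predetermined outcome, and that prompts compose by merging their results.

```typescript
// Hypothetical shapes for illustration; none of this is a real
// WordPress or Gutenberg API.
type ReactivePrompt =
  | { kind: "seasonal-palette"; season: string }
  | { kind: "gallery"; photosPerSpot: number }
  | { kind: "readable-font" };

interface StyleResult {
  background?: string;
  text?: string;
  fontFamily?: string;
  galleries?: { spot: string; photos: string[] }[];
}

// Stand-in for a model or service call; a real tool would query an
// AI endpoint rather than a lookup table.
const SUMMER_PALETTE = { background: "#f6e7cb", text: "#2f4858" };

function applyReactivePrompt(prompt: ReactivePrompt, spots: string[]): StyleResult {
  switch (prompt.kind) {
    case "seasonal-palette":
      // One prompt, one predetermined outcome: the defining trait of reactive.
      return { ...SUMMER_PALETTE };
    case "gallery":
      return {
        galleries: spots.map((spot) => ({
          spot,
          // Placeholder photo IDs; a real tool would call an
          // image-suggestion service here.
          photos: Array.from(
            { length: prompt.photosPerSpot },
            (_, i) => `${spot}-photo-${i + 1}`
          ),
        })),
      };
    case "readable-font":
      return { fontFamily: "Georgia, serif" };
  }
}

// Combining prompts is just running them in sequence and merging results.
const spots = ["Lisbon", "Kyoto", "Reykjavik", "Cape Town", "Oaxaca"];
const style = [
  applyReactivePrompt({ kind: "seasonal-palette", season: "summer" }, spots),
  applyReactivePrompt({ kind: "gallery", photosPerSpot: 3 }, spots),
  applyReactivePrompt({ kind: "readable-font" }, spots),
].reduce<StyleResult>((acc, result) => ({ ...acc, ...result }), {});
```

    The design point worth noticing is that nothing here decides anything: every outcome is fixed in advance, and the prompt only selects which one fires. That is the ceiling of reactive.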

    Next responsive

    If reactive is about prompts, responsive takes that further and might not even visibly have a prompt. You could think of it as ‘true generative’, as it seems to respond without engineered input. The more common examples today sit outside editorial, in art installations. They, too, are typically still based on prompts, but further up the execution stack, with the programs running a long way beyond the original input.

    Artists have been exploring responsive art for a while. Jon McCormack, for example, creates life forms that would be impossible outside computers; his work is a good example of looking to art to see what has already happened.

    As the system becomes more responsive, it becomes more interesting. Instead of just giving prompts, you provide content, which the system analyses to make recommendations. This is where true AI creativity comes in: entire articles or visual arrangements can change to reflect the intensity of a dramatic story.

    Let’s look at where we could see responsive styling today. Going back to our holiday article, you’d add the content, and it would generate everything without a prompt. You might set some style boundaries, but you’d likely just click a ‘style’ and let it pick the best option. That’s still a prompt, but it is hidden, further up the source tree, with the AI taking it as a seed and growing from it. The system would recognise that:

    • The post needs photographs, and knowing that a certain number performs better, it would add that many photographs and galleries at the right points to support the content.
    • The style isn’t set, so it would come up with one that best fits the content and apply it throughout, so everything fits cohesively.

    Maybe it could do so much more, as it isn’t tied to explicit inputs.
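
    As a sketch of how that flow could look, here is a hypothetical responsive pipeline in TypeScript. The analysis is deliberately crude, with keyword matching and a word-count heuristic standing in for a model, and every name and threshold is invented for illustration.

```typescript
// A hypothetical sketch of the responsive flow: content in, style out,
// no visible prompt. A real system would hand the analysis to a model
// rather than count keywords.
interface Analysis {
  wordCount: number;
  mood: "calm" | "dramatic";
  suggestedPhotoCount: number;
}

function analyseContent(text: string): Analysis {
  const words = text.trim().split(/\s+/);
  // Invented heuristic: a few charged words mark the piece as dramatic.
  const dramatic = /storm|cliff|rush|wild/i.test(text);
  return {
    wordCount: words.length,
    mood: dramatic ? "dramatic" : "calm",
    // Invented heuristic: roughly one image slot per 250 words.
    suggestedPhotoCount: Math.max(1, Math.round(words.length / 250)),
  };
}

function generateStyle(analysis: Analysis) {
  // The 'style' the author clicked acts only as a seed; everything
  // else grows from the content analysis, with no further prompting.
  return analysis.mood === "dramatic"
    ? { background: "#1b1b2f", text: "#eaeaea", imageSlots: analysis.suggestedPhotoCount }
    : { background: "#fdfaf4", text: "#33312e", imageSlots: analysis.suggestedPhotoCount };
}

const article = "Five wild holiday spots for this year...";
const responsiveStyle = generateStyle(analyseContent(article));
```

    Notice the difference from the reactive sketch: the author never names an outcome. The content itself drives both how many images appear and what the palette becomes.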

    That is a start; just like with the artists’ works, this is a seed you grow from. To truly go beyond simple prompts and responsive styling, we need to experiment.

    Concept of intent

    Understanding intent is crucial, especially concerning tone and mood in styling. Although it’s just a tag or label at its core, the nuances become more complex when discussing style. For generative styling to work, intent has to be understood. This has been a problem in generative art, and Jon McCormack expressed it in this quote:

    “I see myself as being the artist. The computer is still very primitive — it doesn’t have the same capabilities as a human creative, but it’s capable of doing things that complement our intelligence.”

    Jon McCormack: computers as enabling

    That primitiveness isn’t changing today, and many of the wilder notions of content intention and parsing aren’t possible yet. With some learning and growing, though, they aren’t as far away as we might think. For now, we are stuck with a gap between the concept of intent and the AI tools’ comprehension of it.
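
    To show how small the ‘tag or label’ part is next to the comprehension part, here is one more hypothetical sketch. The StyleIntent shape and the confidence fallback are inventions; the point is that the mapping from a mood word to a visual treatment is exactly the step today’s tools can’t yet fill in.

```typescript
// Intent as data is easy: at its core it really is just a tag or label.
interface StyleIntent {
  tone: "playful" | "serious" | "urgent";
  mood: string;       // free text: "wistful", "sun-bleached", ...
  confidence: number; // 0-1: how sure the system is it understood
}

// Producing the label is trivial; comprehending what "wistful"
// should look like is not. This stub marks where that missing
// comprehension would sit.
function intentToStyle(intent: StyleIntent): Record<string, string> {
  if (intent.confidence < 0.5) {
    // An honest system falls back to neutral styling rather than
    // guess at nuance it doesn't understand.
    return { background: "#ffffff", text: "#222222" };
  }
  // Hypothetical mapping; real comprehension of tone and mood is
  // exactly what current tools still lack.
  return intent.tone === "urgent"
    ? { background: "#fff3f0", text: "#7a1f1f" }
    : { background: "#f7f7f2", text: "#2b2b2b" };
}
```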

    Where next?

    We have lessons to learn from generative art; it has been around for a long time. Generative styling can likewise go a long way, but we have to experiment. It starts with housekeeping, with prompt-based reactive styling, then moves to responsive.

    The pace is rapid; understanding and exploring what could be is critical, as the tools we have today are the foundations of what is possible. Some of those explorations will be wild experiments, but that is where this all gets exciting.

