Generative Art Musings

Trust me, I know that there are aspects of “AI Art” that can be questionable or problematic. In fact, one of the reasons I decided to do the Stable Diffusion seminar at Dundracon last year was because of that.
Someone suggested that they’d like to see a seminar on the topic, and I offered the opinion that any discussion of generative art should at least acknowledge some of the issues and concerns. The person who had asked for the seminar (who I am fairly certain did not attend) then took offense and made inflammatory remarks about “AI art haters”.
And so, I decided that *I* would do the seminar so I could make sure that content was discussed… make of that what you will. 🙂
So, some of this is in the seminar deck itself, but it doesn’t hurt to repeat things here. The tl;dr is that I feel that the tech is here to stay, and will become another tool available for artists. I can remember a time when people were hostile to Desktop Publishing, and programs like Photoshop. History rhymes…
Environmental Concerns: These are legitimate—especially at scale.
- Stable Diffusion is open source software, and you can (and many people do) run it locally. If you’re patient, it can run on hardware that’s five or more years old. A Windows machine with an NVIDIA RTX 20-series card and sufficient VRAM is typically enough.
- Running locally doesn’t eliminate environmental impact; it shifts it. The power is drawn at your own machine rather than in a remote data center. Local generation still isn’t “free” from an energy perspective, but it avoids the additional infrastructure overhead of large-scale cloud services.
- At scale is where environmental concerns become more pronounced. Services like Midjourney and DALL·E rely on large data centers. However, this isn’t unique to generative art; it’s true of most cloud-based services.
Training Ethics Concerns: Yes—many generative art models have been trained on large datasets that include copyrighted works, often scraped from the public web without explicit artist permission. That is a real and ongoing controversy.
- Most models are trained on extremely large datasets containing millions (not just tens of thousands) of images from many sources. The models do not store images in a retrievable way, nor do they intentionally “copy” specific works under normal use. However, concerns about consent, attribution, compensation, and stylistic mimicry remain valid.
- It’s easy to oversimplify in either direction:
  - “They’re just copying artists” isn’t technically accurate in most cases.
  - “It’s just how humans learn” also isn’t a perfect analogy.
- The reality is more complex. A deeper explanation would require unpacking how diffusion models are trained and how they represent visual patterns—probably a topic for another post.
Legal Concerns: I am not an intellectual property attorney.
- In the United States (as of current U.S. Copyright Office guidance), purely AI-generated works without meaningful human creative input are generally not eligible for copyright protection. However, works that involve substantial human authorship—selection, arrangement, editing, transformation—may qualify for protection in those human-created elements.
- This means that if you publish something that is largely machine-generated with minimal human modification, your ability to assert copyright protection may be limited. Laws and interpretations are still evolving, and this varies by jurisdiction.
- If this matters for your use case, consult an IP lawyer.
Commercial Use Concerns: Related to the above.
- If you are a publisher using generative art, you need to understand:
  - Copyright protection may be limited.
  - Distribution platforms may require disclosure of AI-generated content.
  - Some customers may object to generative assets.
- Policies vary by publisher and distributor. Always review platform rules and consider legal advice before commercial release.
Disclosure: What is or isn’t “AI art” can become murky.
My view: don’t claim machine-generated work is entirely hand-created. Be transparent. One of the broader concerns around AI systems is trust—what is real, what is synthetic, what is authored. Clarity helps.
With all that said, I still consider generative art a tool. Tools can be misused—but responsibility lies primarily with the user, within reason.
What I would recommend:
- If you can generate locally, consider doing so. It reduces reliance on large centralized platforms and gives you more control over your workflow. Stable Diffusion, as free and open-source software (FOSS), makes that possible.
- Avoid models explicitly trained to mimic a specific living artist’s style. If you admire a particular artist’s aesthetic, support them directly—Patreon, Ko-fi, or wherever they maintain a presence.
- If you plan to publish professionally, disclose generative assets and consult an IP attorney. Requirements and norms are changing quickly.
- I personally avoid photorealistic content. If you generate photorealistic images, do not create images of real identifiable individuals—living or dead—without clear legal rights to do so. Laws around publicity rights, defamation, and likeness vary by jurisdiction and can carry real consequences.
And no, I’m not that skinny. I do have a cute orange tabby though. His name is Butters.