Friday, 24 November 2023

Art experiments in Blender and Stable Diffusion

A bit all over the place at the moment, as is often the way when I finish a piece... In practice this is when I do some experiments, so it's all good... So, working on two technical things:

Autumn Beeches: Woodblock Print-like Shading in Blender

Can I create interesting print-like non-photorealistic rendering effects in Blender? I've done a bit of this for previous projects, where I use RGB nodes to 'colourise' a grey-scale render. This time I'm adding an outline using a hack where you extrude the mesh to create a 'hull' around the object, then give it a black material with flipped normals and backface culling. Backface culling hides any face whose normal points away from the viewer, and because the hull's normals are flipped, the near side of the hull gets culled and the only faces you see are the ones on the FAR side of the object... which peek out just around the outlines of the normally rendered parts... It works pretty well.

I was testing it on a branch made using geometry nodes, which was working great until it wasn't... which sadly seems to be a common issue with geometry nodes - they work great (albeit with some out-of-the-box thinking!) until they hit some weird limit or bug... In this case, after some faffing I managed to get material indexes to work (it's all one mesh, so you have to assign multiple materials and pick them by index), and it was starting to look half-decent when suddenly everything swapped to using the 'bark' material and wouldn't shift... So I need to come back to this and decide if I want to finish it, or whether to consider it a complete experiment (it did work from a materials point of view!) and then either throw in the towel or try doing it in 2D instead...
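For my own reference, here's a minimal sketch of the inverted-hull idea done with a Solidify modifier instead of a manual extrusion (the modifier can flip the normals and offset the material for you). The object and material names are just placeholders, and it assumes Eevee, where a material's backface culling setting is respected at render time:

```python
import bpy

# Assumes the object to outline is the active object.
obj = bpy.context.active_object

# Black outline material with backface culling enabled.
# With the hull's normals flipped, culling hides the near-side faces
# (their flipped normals point away from the camera), leaving only the
# far-side faces visible - and those only show up as a rim around the
# silhouette of the normally rendered object.
outline_mat = bpy.data.materials.new(name="Outline")
outline_mat.use_nodes = True
bsdf = outline_mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.0, 0.0, 0.0, 1.0)
outline_mat.use_backface_culling = True

# The outline material has to be in the object's material slots so the
# modifier's material offset can reach it.
obj.data.materials.append(outline_mat)

# Solidify acts as the 'hull': it shells the mesh outwards, flips the
# shell's normals, and assigns it the outline material slot.
mod = obj.modifiers.new(name="OutlineHull", type='SOLIDIFY')
mod.thickness = 0.02          # outline width in Blender units; tweak to taste
mod.offset = 1.0              # push the shell outwards only
mod.use_flip_normals = True
mod.material_offset = len(obj.data.materials) - 1
```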

Struck by an idea: Pose control using ControlNet for Stable Diffusion

Was struck by a silly/funny/provocative image of a queue of generic workers trudging along: one is struck by inspiration and has a look of delight on his face, and the rest are piling into the back of him, looking very pissed off... I wanted to use AI because it's mostly a bit of fun, and it might give an interesting style... but I can't get AI to produce anything remotely resembling the pose I want... So I'm going to use it as an excuse to bite the bullet and get AUTOMATIC1111 and ControlNet working, which I've been wanting to do forever... And this is a good excuse... After some Googling I have it installed in Google Colab, but Google wants money before I can use it for more than a few seconds... So I guess that's next, then get simple text2img working, and thence to pose control via ControlNet, and then THE WORLD!!!
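I haven't got this far yet, but for the record, here's roughly what pose-controlled generation looks like in code. This is a sketch using the Hugging Face diffusers library rather than AUTOMATIC1111 (the WebUI wraps the same OpenPose ControlNet model); the prompt and the pose image filename are just illustrative:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image

# OpenPose-conditioned ControlNet on top of the base Stable Diffusion 1.5 model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # helps on the small GPUs Colab hands out

# An OpenPose skeleton image of the queue, drawn or extracted beforehand
# (hypothetical filename).
pose = load_image("queue_pose.png")

image = pipe(
    "a queue of weary office workers trudging in a line, one suddenly struck by "
    "inspiration with a look of delight, the others crashing into his back",
    image=pose,
    num_inference_steps=25,
).images[0]
image.save("struck_by_an_idea.png")
```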

Someone being (literally) struck by an idea
