As discussed earlier, taking scans of the “captures” and assembling a composite configuration in Illustrator is really time consuming, and should be automated.
After a bit of googling around, I found a couple of example scripts (e.g. this script) and took a look at how they worked. Scanning through the documentation for the Illustrator DOM, I think I’ve got the hang of it. Fortunately, the Illustrator DOM is fairly easy to grok (document -> layers -> objects), and this script has some nice logic for opening a directory and iterating over all matching images in it.
After synthesising these concepts, I’ve got a proof-of-concept. It works, but it places each image in the same spot on the artboard; what I want is to have them placed in a grid. After some refactoring I get this functionality. Now time to add some nuance. Each object is currently placed in the corner of a virtual grid cell. Nice. For better aesthetics, and to replicate what I have been doing manually, each image needs to be placed in the center of a virtual cell that allows for its dimensions: each image is slightly different in size, because the substrate warps during the creation process and the scans are almost, but never quite, square. More refactoring and this is done.
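The centering step boils down to a small bit of arithmetic. Here is a sketch of it as a standalone function; the cell and image dimensions are hypothetical inputs, not the values the actual script uses:

```javascript
// Given a grid cell (col, row) of size cellWidth x cellHeight, return the
// top-left [x, y] that centers an image of imgW x imgH inside that cell.
// Offsets are half the leftover space in each direction, so images with
// slightly different dimensions all end up visually centered.
function centeredPosition(col, row, cellWidth, cellHeight, imgW, imgH) {
  var x = col * cellWidth + (cellWidth - imgW) / 2;
  var y = row * cellHeight + (cellHeight - imgH) / 2;
  return [x, y];
}
```

Note these are screen-style coordinates (y grows downwards); in Illustrator’s coordinate system the y axis points up, so the actual script would negate the y term.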
Essentially, the flow is:
- Add a new document to the application
- Get the first layer
- Add a new placed item into the layer
- Twiddle the placed item’s position so it fits into a “grid”
- Goto 3 if more files remain.
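That flow maps almost one-to-one onto the Illustrator DOM. A minimal ExtendScript sketch is below; the folder path, file filter, and grid constants are all assumptions for illustration, and it only runs inside Illustrator (File > Scripts), so treat it as a sketch rather than the actual script:

```javascript
// Sketch of the flow above. Folder path, "*.tif" filter, and grid
// geometry are hypothetical. Must run inside Illustrator.
var folder = new Folder("~/scans");
var files = folder.getFiles("*.tif");     // gather the matching captures
var doc = app.documents.add();            // step 1: new document
var layer = doc.layers[0];                // step 2: first layer
var COLS = 4, CELL_W = 200, CELL_H = 200; // implicit grid geometry
for (var i = 0; i < files.length; i++) {
  var item = layer.placedItems.add();     // step 3: new placed item
  item.file = files[i];
  var col = i % COLS;
  var row = Math.floor(i / COLS);
  // step 4: center the item within its virtual cell; Illustrator's
  // y axis points up, hence the negated row offset
  item.position = [
    col * CELL_W + (CELL_W - item.width) / 2,
    -(row * CELL_H + (CELL_H - item.height) / 2)
  ];
}                                         // the loop is step 5's "goto 3"
```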
There’s some edge-case handling, but otherwise that’s it. Now for some final touches: adding a background, adding some guide lines, and tweaking how things are added so they go into layer groups, which means their visibility can easily be turned on and off (mainly for my debugging use).
After a day, I’ve turned a rather tedious manual process into a fully automated one. When I did earlier compositions by hand, such as m13, each took about an hour. Running this script, the same can be done in under 2 minutes.
Here is the composition of m15 in progress (wet) alongside the digitally generated one produced by the script.
One thing that is apparent when comparing the “wets” with the “s0” is the attenuation of the randomness. The randomness-but-structured implicit grid was something discussed in my fifth one-on-one with Jonathan. I need to think about bringing some of this back into the script, if only to see whether this is a “quality” I want to include: perhaps increasing the implicit grid spacing and adding a random translation as each “capture” is placed… I could be exceedingly meta about this and use the strictly normal generation to “harmonize” the random translations. This is something to do for another day.