Could a machine learning model be trained to generate realistic (or just interesting) letterforms? As it turns out, yes, albeit with mixed results. A StyleGAN model was trained in RunwayML on a dataset of 2,674 Google Fonts, rendered in DrawBot as one image per glyph. The Runway-generated images were then piped via Python into GlyphsApp to build the final font.
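The dataset-preparation side of that pipeline can be sketched roughly as below. This is a minimal illustration, not the actual script: the real rendering was done in DrawBot, and the directory layout, file naming, and glyph set shown here are hypothetical stand-ins.

```python
from pathlib import Path

# Hypothetical glyph set; the real dataset covered 110 unique glyphs.
GLYPHS = "abcdefghijklmnopqrstuvwxyz"

def glyph_image_path(root: Path, font_name: str, glyph: str) -> Path:
    """Map one (font, glyph) pair to an image path, one image per glyph.
    Images are grouped into per-glyph folders, since each letterform
    was trained as its own StyleGAN run."""
    return root / glyph / f"{font_name}_{glyph}.png"

def build_dataset_index(root: Path, font_names: list[str]) -> list[Path]:
    """Enumerate every image the rendering step would need to produce."""
    return [glyph_image_path(root, name, g)
            for name in font_names
            for g in GLYPHS]
```

With the full font list this enumerates one training image per font per glyph, e.g. `dataset/b/Roboto_b.png`; a DrawBot loop would then draw each glyph into its corresponding file.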
Some characters needed more steps than others to produce recognizable forms, but generally speaking each glyph was trained for around 3,000 model steps. The /b/, for example, despite being crudely describable as a simple reflection of a /d/, was non-compliant and required three separate training attempts with an ever-increasing step count before it was finally recognizable as the letterform itself. Other times a glyph would come out almost too perfect on the first attempt, so it was very much hit and miss. This inconsistency goes some way to explaining the clusterf*ck of forms that make up this ‘alphabet’. I estimate something like 550 hours of total training time for the 110 unique glyphs. The font comes in two versions, modified and unmodified; both contain diacritics and composite forms.