Published in The Startup

Geometric Symmetry in Deep Texture Generation

Recent work in neural style transfer has been very successful at producing stunning images that look like real paintings while resembling photographs in content. However, these images tend to lack certain qualities found in some artwork, such as the geometric symmetry that pervades much of MC Escher’s work.

Existing techniques are too freeform to replicate the geometric perfection that defines the aesthetic of this art style. In this paper, we explore a new technique for generating textures with symmetry. This is not a technical paper, although the concepts can become technical for the reader who researches them; key technical terms and brief descriptions are provided where appropriate.

By modifying the technique and adding layers which enforce strict symmetry and configurable views, we can introduce hard symmetry and open up a whole host of techniques to play with. The concept is simple, and would be familiar to anybody who has ever taken a close look at a kaleidoscope or made snowflakes in grade school by folding and cutting paper.

All of the AI generated art that you see here was made by my software, MindsEye, which is fully open source and can be run on your own machine (if you have the required hardware) or on AWS. Details are given near the end of the paper.

Many of these designs may look vaguely familiar. In particular MC Escher, one of my favorite artists of all time, explored this area so well that for many of the examples I’ve generated below, I can also cite some very similar (but much better!) prior art from his work. Truly this paper would not exist without the inspiration of his work.

Rendering Process

You have no doubt heard about the impressive imagery that modern-day AI can create. I don’t want to get lost in the details of this fascinating topic, but I do want to briefly introduce two important AI-based image generation techniques. The first is Deep Dream, published by Google to enhance images in a way that seems eerily familiar to anybody who has taken hallucinogenic drugs. The second is deep style transfer, which shows an impressive ability to mimic an artist’s style while rendering nearly any given input photograph.

Original Deep Dream demonstration image

The exhibit photo for A Neural Algorithm of Artistic Style

Both of these use a technique called “Transfer Learning”, where they piggyback off of existing image processing neural networks. They almost literally perform neurosurgery, keeping only the first N layers of the network in order to yield an image processing device. This is then spliced with other components which seek to maximize or match a signal. Because neural networks are designed to be optimized, we can “learn” whatever image causes a given neuron to activate the most, for example. If you identify the “cat” neuron, you can cause your image to contain many cats by optimizing its activation.
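As a toy illustration of this "optimize the input" idea, consider maximizing a stand-in "neuron" by gradient ascent on the input while the function itself stays fixed. This is a hedged sketch in Python, not MindsEye's actual API; real systems backpropagate through a pretrained vision network instead of using a scalar stand-in.

```python
def neuron_activation(x):
    # Hypothetical neuron that responds most strongly near x = 3.0
    return -(x - 3.0) ** 2

def maximize_activation(x, steps=200, lr=0.1, eps=1e-5):
    for _ in range(steps):
        # Finite-difference gradient of the activation w.r.t. the input
        grad = (neuron_activation(x + eps) - neuron_activation(x - eps)) / (2 * eps)
        x += lr * grad  # ascend: we change the input, never the network
    return x

x = maximize_activation(0.0)
print(round(x, 2))  # converges to 3.0, the input this "neuron" likes best
```

The same loop, run over pixels with a real network's "cat" neuron as the objective, is what fills an image with cats.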

These networks are pre-trained for other purposes, and the optimization process is standard, so we can pretty much treat the technical details of those components as a black box. The important thing is what we’re adding around the black box and why. We really don’t need to worry about the details of these neural networks any more than a painter needs to understand neurobiology.

A good analogy to understand this is to think about a human artist. That artist uses their vision system in everyday life, but today they are using it to make a painting. Whatever they have currently painted on the canvas goes through their eyes, is processed by the vision system, and then is interpreted by the brain. The artist asks “is this what I want?” and then sends instructions through the spinal cord to add new brush strokes to the painting (or throw it into a fire). Where the physical act of painting is handled by the hand (via the spinal cord), the changing of an objective image is handled by the optimization algorithms of the neural network platform. Where the human artist’s existing vision system is used to interpret the painting in progress, an AI artist uses a preexisting vision network to interpret the working canvas. In both deep dream and deep style transfer, the objective is key in determining the image; similarly, the human artist’s output is determined by the conscious ideas in their own head. The Deep Dream objective attempts to maximize the outputs of certain neurons, for example the “cat” neuron to focus on cats. The Style Transfer objective attempts to match the pattern of activations, on average, to a style image.

My experimentation uses basically the same process but adds additional “optics” in front, before the neural vision layers. This is very much like the human artist viewing their painting through a kaleidoscope and designing the painting to look appropriate in that context. This article is more about the construction of that “kaleidoscope” than it is about any of these other components.

Another important detail is pre-processing and post-processing. We may want to pre-process any style images we intend to use by correcting the color, sharpening, or applying other effects that make the desired texture most prominent. The better the texture, the better the output. Once the output is ready, it is often post-processed to enhance the final color and contrast.

Geometric Symmetry

Flat Space

This project started when I was playing with AI texture generation and trying to develop textures that are capable of tiling. I wanted images suitable for a desktop or web background. The solution was to optimize not only a view of the normal tile but also the tile pre-tiled as a 2x2 square. This pushed the network not only to produce a texture, but to seek one which avoids any unsightly seam despite the periodic nature. This works well, and we can use any aspect ratio (for square, wide, or thin tiles), but we always see the same rectangular repetition, which is noticeable and not very aesthetically pleasing.
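The pre-tiling step itself is simple enough to sketch in a few lines. This is an illustrative Python toy on nested lists; the real optimizer works on image tensors, but the idea is the same: the network scores the 2x2-tiled view, so seams at the tile boundary cost it directly.

```python
def tile_2x2(canvas):
    """Pre-tile the canvas as a 2x2 grid so the optimizer 'sees' the
    tile boundaries and is pushed to make the texture wrap cleanly."""
    doubled_rows = [row + row for row in canvas]   # tile horizontally
    return doubled_rows + doubled_rows             # tile vertically

canvas = [[1, 2],
          [3, 4]]
print(tile_2x2(canvas))
# [[1, 2, 1, 2], [3, 4, 3, 4], [1, 2, 1, 2], [3, 4, 3, 4]]
```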

My next experiment was with symmetric effects, enforced by making the canvas “degenerate”, meaning it can only take on certain values. This is done by taking an objective image which is then transformed and superimposed upon itself in such a way that the “kaleidoscope” output has the resulting symmetry. In this example, if you rotate the painting 180 degrees, it looks the same. Once the painting is complete, we record what the source painting looks like through our “kaleidoscope” transformation, then discard the original source painting.
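The 180-degree case can be sketched as follows (a minimal Python illustration of the superposition idea, not the production code): average the canvas with its own rotation, and the output is symmetric by construction.

```python
def rotate_180(img):
    # Reverse row order and each row: a 180-degree rotation
    return [row[::-1] for row in img[::-1]]

def symmetrize_180(img):
    # Average the canvas with its own 180-degree rotation; the result
    # is guaranteed to look identical when rotated 180 degrees.
    rot = rotate_180(img)
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img, rot)]

img = [[0.0, 1.0],
       [0.2, 0.8]]
out = symmetrize_180(img)
assert out == rotate_180(out)  # hard symmetry holds exactly
```

However the optimizer changes the underlying canvas, the "kaleidoscope" output it is scored on always satisfies the symmetry.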

Another interesting effect is found by additionally requiring a color change, as in “if you rotate the painting by 120 degrees, it will look the same if you also change blue to green, green to red, and red to blue.” A multi-colored symmetry is produced. One could also say “Flip White and Black” or “Invert the Red channel”. All of these operations form what is known as a signed permutation group, and can be written as something like (2,3,1) or (g,b,r) to state what the tuples (1,2,3) or (r,g,b) look like after a single permutation.
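In code, applying such a color change is just channel indexing. A Python sketch, where (1, 2, 0) is the zero-indexed form of the (2,3,1) notation above and the inversion shows a "signed" entry:

```python
def permute_colors(pixel, perm=(1, 2, 0)):
    # perm (1, 2, 0) sends (r, g, b) to (g, b, r)
    return tuple(pixel[i] for i in perm)

def invert_red(pixel):
    # A "signed" entry: invert the red channel (255 - r)
    r, g, b = pixel
    return (255 - r, g, b)

print(permute_colors((255, 0, 128)))  # (0, 128, 255)
print(invert_red((255, 0, 128)))      # (0, 0, 128)
```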

This could be attempted with any parameters; however, I found that many sets of parameters simply didn’t work. The training failed. It turns out that only some combinations of requirements are achievable. For example, if you have a 2-way rotational symmetry, you need to use a color permutation that, if you apply it 2 times, results in the original color scheme. There is no way to make a painting look like you swap (r,g,b)->(g,b,r) when you rotate by 180 degrees, because another rotation of 180 degrees must look like the original, but permuting the colors twice gives you (r,g,b)->(g,b,r)->(b,r,g). The only way for this to work is if r=g=b, and the image is monochromatic, but in practice this simply does not converge. With that permutation, only after a third application do we get the original (r,g,b) back. In mathematics, this is known as a cycle of order 3.
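This compatibility rule (the color permutation must return to the identity after as many applications as the rotation count, i.e. its order must divide the rotation count) is easy to check mechanically. A Python sketch, ignoring sign flips for brevity:

```python
from math import gcd

def perm_order(perm):
    """Order of a permutation: the lcm of its cycle lengths."""
    order, seen = 1, set()
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        order = order * length // gcd(order, length)
    return order

def compatible(n_fold, perm):
    # The permutation must undo itself after n_fold applications
    return n_fold % perm_order(perm) == 0

print(compatible(2, (1, 2, 0)))  # False: 3-cycle with 180-degree symmetry
print(compatible(3, (1, 2, 0)))  # True:  3-cycle with 120-degree symmetry
```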

Additionally, it clearly mattered what aspect ratio I was using for the tile. Most rotational symmetries worked fine at 1:1, but at other aspect ratios, convergence simply failed. It turns out there are only two aspect ratios that work: 1:1 (a square) and sqrt(3):2 (a hexagon). Some other values converge for most of the painting, but leave some regions “fuzzy”.

The explanation relates to the fact that there are only a few very specific ways to achieve a regular polygonal tiling of an infinite plane. All of these regular tilings are embeddable within a rectangular unit tile with a particular aspect ratio. Each corresponds to a valid configuration of symmetries that may converge. If the aspect ratio permits a particular tiling, and the rotational symmetry encourages the appropriate polygonal pattern, a natural tiling pattern emerges. Interestingly, rotationally symmetric focal points then emerge corresponding to the neighboring polygons in that unit tile. A good survey of the regular tilings of a torus (topologically equivalent to the periodic infinite plane), with illustrations of the unit tile defining the periodicity, is given at http://www.weddslist.com/groups/genus/1/top.php

Some aspect ratio choices do mostly converge, and end up creating more complex tiling patterns such as the one below. Although an interesting result, notice the fuzziness in the diagonal areas, and the odd pattern reuse between the horizontal and vertical joining areas.

Partial Degeneracy

If you are prescribing 6-fold symmetry, you have to “fold” the canvas six times. For rotational symmetry, one way to do this is to take 6 copies of the source image, rotate each by an increment of 60°, then average the superimposed images. An interesting thing happens if you use the same angle, but only one or two superpositions instead of the full six. Mathematically, you are introducing less degeneracy; that is, you are removing less information from the potential solutions. In practice, this produces a pattern which is “almost” rotationally symmetric. If you rotate the painting by 60° now, the painting changes a little bit (and only a little bit!)
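To keep the arithmetic grid-exact, here is the same folding idea sketched with 4-fold (90°) rotations instead of 6-fold. This is hypothetical Python, not the MindsEye implementation: averaging all four rotated copies gives exact symmetry, while averaging only some of them leaves the partial degeneracy described above.

```python
def rot90(img):
    # Rotate a square grid 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def fold(img, copies):
    # Superimpose the selected rotated copies (out of four) and average
    rots = [img]
    for _ in range(3):
        rots.append(rot90(rots[-1]))
    chosen = [rots[i] for i in copies]
    n = len(img)
    return [[sum(c[r][col] for c in chosen) / len(chosen) for col in range(n)]
            for r in range(n)]

img = [[1.0, 2.0], [3.0, 4.0]]
full = fold(img, [0, 1, 2, 3])  # full degeneracy: exact 4-fold symmetry
partial = fold(img, [0, 1])     # partial fold: only "almost" symmetric
assert rot90(full) == full
assert rot90(partial) != partial
```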

This effect of limited degeneracy can also be applied via translations (as opposed to rotations). By “folding only once”, a mostly-periodic pattern is produced wherein spatial changes should be linear across any 3 cells. That is, we can produce a tile where the pattern tends to repeat itself with a specific periodicity independent of the tile width. This effect, enforced both horizontally and vertically, does an interesting job of mimicking images of writing.
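A sketch of the translational fold (illustrative Python): average the canvas with a shifted copy of itself. When the shifts used wrap around the full width, the periodicity becomes exact; using fewer shifts gives the "mostly periodic" behavior described above.

```python
def translational_fold(img, period):
    # Average the canvas with a copy of itself shifted by `period`
    # columns, biasing the texture toward that repeat distance.
    shifted = [row[period:] + row[:period] for row in img]
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(img, shifted)]

row = [[1.0, 5.0, 2.0, 7.0, 0.0, 3.0]]
out = translational_fold(row, 3)
# Here the shift is half the width, so the shifts {0, 3} wrap around
# completely and the result is exactly periodic:
assert out[0][:3] == out[0][3:]
```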

On the right, only the first of 5 folds is used, producing limited degeneracy and self-similarity at short distances. On the left, the 2nd of 5 folds was used, producing the same amount of degeneracy but with a more distributed self-similar behavior.

The ability of a regular tiling to morph from one pattern into another was also explored by Escher, notably in his “Metamorphosis” series.

The Sphere

Of course, flat images make fairly poor texture maps for 3D objects. This is why you can’t make a flat paper map of a globe without a lot of distortion somewhere. Covering an infinite plane periodically is topologically equivalent to covering a torus, and spheres are simply different animals!

If you want to use AI to produce a texture map of a sphere, you need a view layer that can project your texture map onto the sphere before the AI vision system perceives it. Also, since it is impossible to see the entire surface of a sphere from one viewpoint, we need to simultaneously optimize many different viewpoints to produce a balanced result. An experiment notebook has been prepared which does exactly this, and produces interesting textures on a sphere.
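One standard way to get a balanced set of viewpoints (an assumption on my part; the notebook's actual view selection may differ) is a Fibonacci lattice on the sphere, which spaces directions roughly evenly so every region of the texture is optimized in some view:

```python
import math

def fibonacci_sphere(n):
    # Roughly even viewpoints on the unit sphere (Fibonacci lattice)
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    points = []
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n          # spread heights over -1..1
        r = math.sqrt(1.0 - y * y)             # radius of that latitude ring
        theta = golden * i                     # spiral around the axis
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

views = fibonacci_sphere(12)
# every viewpoint is a unit direction vector
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in views)
```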

Now that we can paint a sphere, can we do something like the kaleidoscope effect, but rotating in three dimensions? It turns out we can! For a reason similar to the regular tiling of the infinite plane problem discussed above, it turns out that only some setups of this symmetry work out. These setups are known as rotational groups in mathematics. Skipping the mathematics, I have prepared 3 setups using rotational groups. These are structurally equivalent to the tetrahedron, octahedron, and icosahedron. An interesting side effect of producing a texture with rotational symmetry is that fewer views are needed, and we may be able to see the entire tile in a single view. Thus, the generation process can be much faster.

Due to its relevance to objects in the physical world, this type of texture generation seems to have the most practical purpose. Two things I’ve started to look at are texture generation for general 3D models, and using these textured spheres to produce 3D-printed sphere sculptures similar to the “Escher Orbs”, like this reproduction being sold on Amazon.

Hyperbolic Space

In geometry, spherical space is known as “curved” and paper-like surfaces are known as “flat”. Specifically, the sphere’s curvature is “positive”. This is because the surface curves in the same direction on both axes. Imagine you are standing on a giant surface. If the surface curves down in all directions, you are standing on top of a ball with positive curvature. With paper, you can only curve it in one direction; this curvature prevents any curvature in the opposite direction. This property is why we can make paper fold so neatly. Paper has “zero curvature”. You can also have “negative curvature” by having the surface curve in different directions. If you were standing on such a surface, it might curve uphill in front of and behind you but downhill to the left and right. This sort of curvature is called a “saddle” or “hyperbolic”.

Hyperbolic space is really weird, and I do not intend to dive into the mathematics of it nor do I pretend I can explain it simply. So I’ll just call it “magic” and point out a few cool things about it. First, the main reason we care about hyperbolic space here is that it permits some very unique tiling behaviours that complement the ones we’ve already explored. Symmetric tiling is what causes the interesting geometric properties of hyperbolic space to become visible.

You may remember from math class that it is the angles within a polygon that determine whether it can tile. The interior angle of an equilateral triangle, 60°, can be multiplied by 6 to form a full circle of 360 degrees. A square produces 360° by using four of its 90° vertices. Three hexagons work, but pentagons don’t. Nothing beyond hexagons works in flat space. This is all because there are 360 degrees in a circle in flat space. If we curve space, however, the number of degrees contained within a circle depends on its size! If we were driving on a giant race track in hyperbolic space, each lap would involve turning much more than 360 degrees. (Homework: Build a racing game where the track is laid out in curved space.) That means we are less constrained, and there are many more workable arrangements for tiling. Now we can tile using pentagons!
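The angle-sum argument above can be captured in a few lines. An illustrative Python sketch using {p, q} pairs, where q copies of a p-gon meet at each vertex, comparing the total flat-space angle at a vertex to 360°:

```python
def classify_tiling(p, q):
    """Classify the tiling where q p-gons meet at each vertex by
    summing q flat interior angles of a regular p-gon."""
    angle_sum = q * (p - 2) * 180.0 / p
    if angle_sum == 360.0:
        return "flat"        # e.g. squares 4 per vertex, hexagons 3 per vertex
    if angle_sum < 360.0:
        return "spherical"   # the corner closes up: a polyhedron
    return "hyperbolic"      # too much angle: needs negative curvature

print(classify_tiling(4, 4))  # flat
print(classify_tiling(3, 5))  # spherical (the icosahedron)
print(classify_tiling(5, 4))  # hyperbolic: four pentagons per vertex
```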

I’d like to point out some features of the way we are packing all of this into a circle. This is known as a Poincaré disk, and it is a well-studied mathematical representation of hyperbolic space. I don’t want to get into the details of projective geometry, but suffice it to say that this representation preserves angles but distorts distances and areas. Therefore, the repeating vertices in the image will have the same angular symmetry, although they become smaller and more compact near the edge. This gives the representation a sort of global consistency and aesthetic. Another property of this representation that adds to the aesthetic appeal is that all straight lines form circles which intersect the boundary circle perpendicularly. Since a lot of tiling patterns produce infinite straight lines, the Poincaré disk representation shows these lines as neatly arranged circles.

The above left image is made of squares, six of which meet at every vertex. This is obviously something you can’t do in normal space. We would call this tiling 4–6, because it is a four-sided polygon, with six of them meeting at each vertex. You could also have 3–20, for example, where 20 triangles meet at each of the three points of each tile, as the above right shows.

The Poincaré disk is of course not the only way to represent hyperbolic space. I’ve experimented with a number of different options, and the best-looking variant in my opinion is to simply stretch the circle into a square. Note that applying this distortion during painting is critical; although a circular result could be stretched into a square afterwards, its features would then appear distorted.

Other Setups

These three types of geometry above form an elegant set with plenty of room to experiment, but as you can imagine there is a lot of room left to explore. Nearly any tiling arrangement or image distortion and replication effect could be used. For example, here is a zooming-repetition effect.

Instead of enforcing rotational symmetry, here we rotate with each stage of magnification.

Emergent Symbols

This process, by design, generates random patterns that are interesting to a human eye. It does this out of a blind desire to make certain neurons “fire,” but the brain involved isn’t even close to complex enough to understand symbol meanings. You can encourage certain patterns to appear, such as buildings, but the best results in doing this generally need to use a network trained to recognize and classify buildings, not just distinguish buildings from cats.

However, if you generate enough random patterns and look deeply at them, similar to looking at a cloudy sky on a lazy day, you will see some interesting things. A couple of my favorites are given here.

Some other symbols emerge recurrently, though. Some are indicative of the texture being used, and others can be encouraged by symmetry constraints. In particular, when 4-, 5-, or 6-fold rotational symmetry is used without color permutations, emergent symbols include swastikas, pentagrams, and Stars of David, respectively. A good way to prevent this is to introduce a color permutation. A note of cultural sensitivity: the swastika, which is used by many cultures but is marred by the events surrounding WWII, should always be displayed such that the counterclockwise pointedness is dominant (卍). This is technically a sauwastika, not a swastika. Even still, display sparingly.

How to make your own

Around 2015, when style transfer and deep dream were published, I became very interested in this technology and wanted to build my own experiments. The project took on a life of its own, and over the next five years I built, in my spare time, a complete artificial intelligence platform in Java. Throughout this project I’ve explored and written about a number of different research concepts, but this is by far the most interesting from a visual perspective.

The easiest way to run this yourself is to use AWS. All of the “recipes” shown at http://symmetry.deepartist.org/ contain a link at the top, “To Run Again On EC2”, which contains the needed information to start a cloud-based AI rendering. There is an infinite spectrum of creations possible using this software, and if anybody does make anything interesting I would love to hear about it!

Recipe Catalog

One problem with modern AI is that it requires specialized and expensive hardware. By running on the cloud, you can pay for the hardware by the hour and also skip a lot of the system setup. All of the experiments I run are automatically published to sites as notebooks, where the instructions for re-running a given experiment are also provided.

For recipes related to the work in this paper, look at http://symmetry.deepartist.org/ to view the gallery. Each recipe type is listed separately, and the most recent example of that recipe is displayed. By following the recipe name link, you can view all previous runs of that recipe type. Following the link on any of the images takes you to the full report of that single run.

When looking through a report, the first thing you may notice is the text area containing JSON. This specifies the run parameters that were used, and at runtime it was editable. There are also a couple of links near the top. The first, entitled “To Run Again On EC2”, provides instructions with the needed command to start an EC2 instance running that recipe. (Please note that some of these recipes will not work, since they were executed on my local machine. For ease of use, I’ve included a few recipes below which I’ve tested.) The second link is debugging information about the system, and is not discussed here.

The report then continues, containing rendered images and links into sub-reports describing the process used to create them. The reader will see that this is a multiresolution process. In the first phase the painting is rendered with a small resolution. Once the small painting is finished, the canvas is enlarged and the rendering process is restarted. By performing the rendering process repeatedly over multiple scales, we allow the neural network to construct image features both large and small.
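The multiresolution loop can be sketched as follows. This is a toy 1-D Python analogue with stand-ins for the optimizer and resampler; the real system paints 2-D canvases with a neural objective, but the enlarge-then-repaint structure is the same.

```python
import random

def upscale(canvas, new_size):
    # Nearest-neighbor enlargement of a 1-D "canvas"
    old = len(canvas)
    return [canvas[i * old // new_size] for i in range(new_size)]

def paint(canvas, steps=10):
    # Stand-in for the optimizer: a simple wraparound smoothing pass
    for _ in range(steps):
        canvas = [(canvas[i - 1] + canvas[i] + canvas[(i + 1) % len(canvas)]) / 3
                  for i in range(len(canvas))]
    return canvas

canvas = paint([random.random() for _ in range(8)])  # low-resolution phase
for size in (16, 32, 64):
    canvas = paint(upscale(canvas, size))            # enlarge, then repaint
assert len(canvas) == 64
```

Painting at small sizes first lets large-scale features settle cheaply before the expensive high-resolution passes refine the details.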

By following links deeper into the subreports, the reader can even view details of the optimization process itself, revealing how the image converges over time. Here we can see the first, lowest resolution phase where the initial image emerges from random noise. We then scale the image up and repeat the process, refining finer details.

Running a Recipe on AWS

This does, however, require an AWS account with some initialization. AWS accounts are free in the sense that there is no subscription fee; you simply pay for what you use. However, it is fairly easy to spend a lot of money in machine time, so be careful! For this project I typically pay about $5 a month, but my bill was once over $1,000!

AWS accounts are not required to run this project. If you have the appropriate hardware, specifically a high-end NVIDIA graphics card and plenty of RAM for both your CPU and GPU, you can get this to work locally with some system setup.

Assuming you’re completely new to AWS, but willing to make an account today, there are some things you need to do before you get to play!

  • First, you will need to configure yourself as an admin user and set up your environment so that you can use the AWS command line tools with your account.
  • Configure a security group with port 1080 open to the public
  • Configure an IAM profile for the EC2 instance with access to any S3 buckets that you want to publish to and the VerifyEmailIdentity permission.
  • Set up an SSH key for your instance; this will be needed to SSH into it, and also to create it at all.

Once these entities are set up and you have the name values for them, you can use the script given in a recipe. This is a bash script, but bash can easily be set up in any modern operating system.

Create a new .sh file and insert the script. Next, edit it to configure the security group, IAM instance profile, SSH key name, and email address. Then simply run the script and wait a few minutes for the instance to start. Find the IP address in the AWS Management Console, and visit that host at port 1080 after a few minutes.

The first step in the application once it is running is to confirm or edit the starting parameters. These are displayed as a text box in JSON format. If you want to save your work, you will want to edit the S3 bucket property to be a bucket that you have access to. If this S3 bucket is set up to host a static website, you will get a website in the same format as the recipe gallery. Each recipe can contain a number of additional parameters. Make any desired edits, and click the submit button.

This will start the application, and after some initialization it will open one or more subreports. Follow the link into the subreports, and you will see a prompt where the application asks you to upload a style image. Choose a style image, upload it, and then refresh the page. It will then ask for another image. If you want to specify multiple images, upload them sequentially. When you are done, press cancel. Then go back to the main page and refresh. Processing will have begun and this page will display its progress whenever refreshed.

The application logic is generally very good about shutting the instance down when it is done, but since you are paying for it, you should always double-check. Because the instance shuts down at the end automatically, it is highly recommended that you set up publishing to S3.

Tested Recipes

Just replace ***SECURITY GROUP***, ***IAM PROFILE***, ***SSHKEY***, and ***EMAIL*** with your own values.

Whirlpool

aws ec2 run-instances --region us-east-1 --instance-initiated-shutdown-behavior terminate --image-id ami-00f44084952227ef0 --instance-type p2.xlarge --count 1 --security-groups '***SECURITY GROUP***' --iam-instance-profile Arn= '***IAM PROFILE***' --key-name '***SSHKEY***' --user-data " $(cat <<- EOF #!/bin/bash sudo -H -u ec2-user /bin/bash << UIS export CP=""; export EMAIL="***EMAIL***"; cd ~/; for jar in f-EH0t.jar oeH7t0.jar iJL-Pb.jar J3F7SB.jar pPr5X4.jar voMY9c.jar O7m0ey.jar YCN3tR.jar cSpzwv.jar RpCOEb.jar FNuYac.jar eDcqSl.jar C4HQIn.jar 8UOGP0.jar xtt19H.jar 1G14YH.jar 9es3XF.jar 48vRV5.jar 5h3bR6.jar pzeQN-.jar CRiDmT.jar 0JTCJX.jar uYIrM_.jar Z77u5Y.jar X6TsTs.jar 3YOsy4.jar a00s_L.jar pXz6c5.jar O4SgvD.jar V31sam.jar UMOLX5.jar _TXfJi.jar l3RN_h.jar xZL0n3.jar Oj8k9M.jar _YhFlX.jar 9t-Yau.jar baTLup.jar 9dzl-z.jar xdJiUB.jar YuOk8U.jar kutaq8.jar QNa5B1.jar 4Hokj2.jar Pwf130.jar 7G3g29.jar wjACoP.jar BEN_dU.jar NUQrWo.jar e1bjSZ.jar -eJmgW.jar fy4wUB.jar w5WtoP.jar _zHx5z.jar MIcjU_.jar BPGq9t.jar 6J3VYt.jar aF1S0g.jar 6HKE5E.jar 6E9PRl.jar 2u1fa8.jar N5gHDt.jar 3byqIm.jar bLRpy7.jar AGcHNa.jar OrqkJV.jar -tTGkl.jar r7lPxb.jar 45zjaX.jar rqYc2K.jar alIK-w.jar afqNFo.jar GMJhut.jar Ko2Hte.jar 8izKg9.jar EJfjGR.jar Vwf0Ra.jar 8mM1cU.jar QUxBO6.jar dROQuc.jar yJTzpW.jar pXGql3.jar hawbCt.jar bxr15w.jar nZykAJ.jar L1OY8r.jar 6K-JXg.jar Um0Kei.jar T0R9Td.jar Jx-25C.jar bdplSg.jar l9jVAn.jar QIfR2z.jar D0yETT.jar wRQbx9.jar tLZqts.jar 787x-K.jar peL8kP.jar 8ZVI59.jar mBrHXl.jar 6ZZOqI.jar wTLv11.jar XuFIvy.jar pMuNN6.jar bnHrjo.jar Wdf4w-.jar xT3K-m.jar DxeC-S.jar CL-vxy.jar j9VQK4.jar CLQBSL.jar -kbjMt.jar YJlxds.jar EHpifH.jar jEeVMf.jar BPujX7.jar dns2zS.jar GCOLwW.jar oQf5lQ.jar x-Ftxj.jar LyOrsV.jar 3_YOzw.jar jelh57.jar kyFLWy.jar AlBe8b.jar WfbARv.jar 4xCIr3.jar 01ltjy.jar MtpouU.jar G-kFuu.jar FLkR7U.jar g9gt1I.jar q4RYQK.jar OLbLHO.jar GqyKVU.jar qfW3YZ.jar lQR2uY.jar EB-wYY.jar 0xC6Ig.jar X5SZPT.jar 4v8-Zl.jar q1DWLy.jar ckSqtQ.jar Od5OE4.jar 
k3aZGU.jar Cd75fp.jar LeUAw6.jar yD32Lu.jar iMfwjc.jar 0Xfs_u.jar RkMYRs.jar S2mAFC.jar C4gLDS.jar f1JcA_.jar MnuHxO.jar VE5ZVR.jar TfkeL3.jar 9wOnFP.jar bClyca.jar pb-yJy.jar K6X5_x.jar v9yuH_.jar 9UqFEP.jar XlDla8.jar G89AJf.jar nH2y_H.jar W0AYwP.jar 9zEuTd.jar ljRMdY.jar UJTxfr.jar jYQE-O.jar cFwX2Y.jar uKNBE6.jar Sf901d.jar NLC2Wu.jar _IDgqr.jar G2ief4.jar mPrQWe.jar 2dTWnh.jar B9xTLu.jar bAcOV7.jar Ippim0.jar IlRtma.jar z3bW9S.jar 1i2_qH.jar 8qkh1L.jar wouHHS.jar FlF87r.jar vxX0tI.jar 9w-Q7U.jar Z45oIk.jar SiAfyj.jar QkVCiQ.jar yev-uh.jar pnNigj.jar DAZ-Yb.jar slFyWW.jar BSrZNy.jar 9QZEim.jar NPQo5o.jar VFxg8p.jar T5mocr.jar lcMRhs.jar IZdLkO.jar wI6J3m.jar ixUb09.jar ETY71Y.jar jrub1C.jar QWMip1.jar po1S0S.jar sLzt4c.jar yMrc3o.jar VQDDjF.jar n7wAL1.jar vettuO.jar wf3dUK.jar JXH-Vh.jar hAKEqe.jar Y-0SDz.jar xVJm_J.jar d4tJQE.jar Lvpyz6.jar 2RBz1r.jar 7Gr1L4.jar ULEMWE.jar I2JIzj.jar 7Inkfp.jar KlirFk.jar i7j6Lt.jar vL5_Ns.jar UU0JMd.jar qZEIAW.jar 8YoaFD.jar mlotDH.jar vUuxRe.jar k5ypeI.jar Ocjqhu.jar sWr959.jar 7OuRor.jar C5sqD5.jar 16qoWH.jar L7Hrbv.jar NT-7k2.jar T7DxJj.jar oACk12.jar cwlcUL.jar RHN28u.jar x9orXf.jar 7WQY7k.jar hsyZom.jar -Ky0UC.jar fEnNjl.jar uDwZQ6.jar DlYIoe.jar _JUO_a.jar i5kwu2.jar 7fPu29.jar FB_Xoq.jar aBm76Y.jar 6_PF5V.jar r6BCsX.jar 9Dteow.jar RWbct1.jar luFCcw.jar nIxI5Y.jar V8GNvw.jar KJB15I.jar TNVrLk.jar obfLK6.jar Z-cisn.jar qGvEiN.jar RiwJWf.jar CkblvJ.jar dlGiXs.jar j7uwVG.jar PpW_qM.jar UyIsWE.jar -w1_7J.jar 8RoAKp.jar 02i0qq.jar hybEFl.jar TtOfrv.jar keli5C.jar -4qEWQ.jar nUeNvL.jar 8F_lnk.jar K4uQoH.jar LqjYcm.jar kjTgGb.jar sLAk5j.jar HbKTAX.jar yl-4ID.jar cu_LeU.jar 8pcMuj.jar BCqwm6.jar AX7EiC.jar vXB4G1.jar s_AT99.jar g8--uE.jar OsusOe.jar SrpWaU.jar Ld8hJv.jar EudCbW.jar 5ehZVI.jar 1oXHZH.jar P2MFbV.jar FjibVP.jar 7U-SSe.jar ss88id.jar e5Tsvy.jar j44s1p.jar 1W2Ggj.jar Uz6USa.jar hxwJ2p.jar ORw1Ld.jar qJtmzz.jar fFkkQg.jar 2al7q_.jar jJhd5I.jar jUbTCj.jar H7uuLL.jar BoVsYH.jar fUNqU8.jar ysqQmE.jar G3JX2k.jar 
cwovNg.jar T9QB3u.jar 9UeJ4R.jar fRrTLe.jar gOnRy9.jar sAQVj6.jar l4rpcm.jar gqEM5x.jar U-HqwU.jar Xd4uu-.jar nuurqg.jar eV23hK.jar qbphll.jar jnZYPv.jar O9mbQu.jar xA0PIY.jar EZqsYZ.jar itjJIp.jar F1D5wz.jar NTz2or.jar IsN8Le.jar VoRcev.jar ako9s7.jar XhxkY0.jar Ny-ZJN.jar aJzzbY.jar xDhJAi.jar ZDulZk.jar MQ9YQT.jar IbpXV5.jar FKTWjX.jar 8hPHK2.jar EDleWl.jar MZxJpD.jar P3I3-1.jar UQUOWV.jar HP5OZp.jar 1QaL83.jar hj4Ywq.jar P3lwCp.jar GUOg7L.jar Fk0cKM.jar QZH8Yi.jar WYPR4u.jar g0c5E_.jar nN9lkW.jar k6zjpu.jar 38jQPy.jar iuCBfT.jar Lbmq4U.jar Vl35kt.jar UsU8bL.jar NYd0Xa.jar 2rtAul.jar m2RzmK.jar x38a_N.jar aOxpYf.jar xLEJ7a.jar cWF1cU.jar ft7dMy.jar wRvsn3.jar rSO0fK.jar aUTpvA.jar 7Oeu7r.jar VM42fS.jar sBEy6K.jar RqN1Ep.jar FWrTD7.jar Nb_zDS.jar O5SyiI.jar AyeI8I.jar ed5p6f.jar wcEww2.jar bkv_sz.jar 3FXtb-.jar lC1gse.jar r3WKBv.jar J9HZRM.jar O5gofE.jar I8lNUe.jar pCL-Y3.jar hz5ZA3.jar kERNCZ.jar iZ0rA1.jar viWbeG.jar w4kjgq.jar TXyAoe.jar Qtrz94.jar ixZc9Y.jar 5-grgh.jar OMj_95.jar mPNIe_.jar d1AdUp.jar IwRumF.jar 5lOGom.jar lWwzkw.jar 39NYcg.jar AjeEbr.jar I-jBja.jar dcy1VT.jar OkJn4B.jar UbjumV.jar KYJ46i.jar vpLrIy.jar sLM2YL.jar gP-Xqu.jar y8Ddmn.jar G_myC0.jar jIi5Q_.jar QulA1d.jar JZ4M09.jar 3TMMZe.jar S0dsWf.jar mByV44.jar djNzzp.jar SI8fq8.jar uGSIEH.jar T9JkbX.jar zlePMY.jar toffcP.jar n2_bZH.jar Elk3G-.jar AiPjaz.jar HKectW.jar -GVJrL.jar 9Ex4aM.jar fjMR70.jar 44-gTw.jar X3fVGz.jar TVwWkw.jar 91MK_J.jar JfzZM4.jar hpAuYP.jar ZfW8Ih.jar H53QWp.jar FQod1j.jar q-i-cN.jar mDZVCn.jar aU9RBm.jar HYPfFq.jar 1ypx5P.jar A3EfuH.jar W3MNl-.jar ly-Ejh.jar vmStrx.jar nQPfR1.jar ARgIYk.jar vSaZxl.jar bLrEiw.jar; do export FILE="~/lib/\\\$jar"; aws s3 cp "s3://symmetry.deepartist.org/lib/\\\$jar" \\\$FILE; export CP="\\\$CP:\\\$FILE"; done nohup java -DMAX_FILTER_ELEMENTS=268435456 -Dspark.master=local[4] -Dtendril.sessionId=76ace435-1ee6-4122-ac1b-48f185487783 -DMAX_IO_ELEMENTS=268435456 -Dtendril.bucket=symmetry.deepartist.org -DMAX_DEVICE_MEMORY=11811160064 
-Dtendril.keyspace=lib/ -Dtendril.localcp=~/lib/ -DCUDA_DEFAULT_PRECISION=Float -DMAX_TOTAL_MEMORY=11811160064 -cp \\\$CP com.simiacryptus.aws.S3TaskRunner & UIS EOF ) "

Coding Setup

If you want to use AWS but lack the hardware to run this yourself, you can still easily customize the source code and play with whatever you like. You can compile the project locally and have it deploy to AWS and run there nearly automatically. If you look at the source code in https://github.com/SimiaCryptus/examples.deepartist.org , you will see that there are traits on the executable objects. Some traits, such as LocalRunner, cause the application to run locally. If the P2_XL trait is used, the application will instead start an EC2 instance, upload itself to that host, then run the application there. (This requires that your environment is configured with appropriate AWS credentials.)

Running Locally

Of course, AWS is not required in order to use this tool. If you have the necessary hardware, you can run this locally. All that is required are the NVIDIA cuDNN, cuSPARSE, and CUDA libraries. These are all free, though you will need to sign up for an NVIDIA developer account, which is also free. Once they are installed, if your machine has the resources, the applications will run locally using the LocalRunner trait.

Further Reading

http://symmetry.deepartist.org/ — This site collects all the working copies of notebooks I’ve run in preparation for this paper.

https://github.com/SimiaCryptus/examples.deepartist.org — This is the code implementing the notebooks in both the examples gallery and the symmetry gallery.

https://github.com/SimiaCryptus/all-projects/tree/master/mindseye — These are the main code projects used to implement the AI used to generate these examples.

http://gallery.deepartist.org/ — Another site I maintain of manually curated AI-produced images

Neural Networks

OpenAI Microscope — https://microscope.openai.com/models — This is a catalog of the textures produced by maximizing the response of each neuron for a wide variety of open source networks.

Deep Dream — https://ai.googleblog.com/2015/07/deepdream-code-example-for-visualizing.html — This is the original Deep Dream publication, which ushered in this age of deep learning art.

Deep Style Transfer — https://arxiv.org/abs/1508.06576 — Another important paper, describing the style transfer technique which forms the basis for the techniques described in this paper.

Escher’s Online Gallery — https://mcescher.com/gallery/ — The official online gallery of MC Escher’s art

DeepDreamGenerator.com Gallery — https://deepdreamgenerator.com/#gallery — An online gallery of art produced by a commercial service

DeepArt.io Gallery — https://deepart.io/latest/ — Another online gallery of art produced by a commercial service

Geometry and Mathematics

Tessellation — This branch of mathematics deals with how to divide space, for example into regular polygons. It studies repeating patterns that are the basis for tiled textures.

  1. Regular tilings: http://www.weddslist.com/groups/genus/1/
  2. List of tessellations

Group Theory — This branch studies important types of symmetry and order

  1. Color Permutation Groups — Permutation group
  2. Symmetry on a Sphere — Polyhedral group
  3. Point Groups — Point groups in three dimensions

Hyperbolic geometry — The antithesis of spherical geometry, hyperbolic geometry is a very specialized but interesting subject in mathematics.

  1. Projective geometry
  2. Poincaré disk model
  3. https://en.wikipedia.org/wiki/M%C3%B6bius_transformation#Hyperbolic_space

Originally published at http://blog.simiacryptus.com on January 31, 2021.
