Category: graphics

DALL-E mini… legit or cheat?

DALL-E mini is a scaled-down implementation of DALL-E, a neural net AI that can create novel images from natural language instructions. I’ve been having fun trying it out.

I like pugs, so I have told DALL-E mini to make me a lot of pug pictures of various types. A lot of the results look at least halfway decent.

The Twitter account @weirddalle posts a lot of “good” (amusing) DALL-E results. Most of these, I think it’s safe to say, are novel creations, not something you would expect to find many (if any) examples of in a Google Image search (although, who knows, the internet is a really big, really weird place).

And then I asked DALL-E mini for “a great big pug” and the mystique unraveled for me. I could recognize a lot of familiar pug photos from Google’s image search results page. I tried to go to Google to find them, but the current results it gives me are a bit different; I’d guess that the images in the screen cap of DALL-E, below, would have been the top hits for “pug” in Google Image search several years ago.

The four in the upper right corner look especially familiar to me, as though I’ve seen those four images in that arrangement many times before (as I believe I have, from searching Google for images of pugs many times). I feel very confident that I have seen images very close to all nine of them before. Of course now, in 2022, if I search Google for images of pugs, I get different results. But I’ve been a frequent searcher of pug pictures for 20 years, and I’m pretty confident that perhaps 5 or 10 years ago, most of the above 9 images were first-page results in Google Images for “pug”.

So, this makes me wonder, is DALL-E merely a sophisticated plagiarist? Is it simply taking images from Google and running some filters on them? For a very generic, simple query, it seems like the answer might be “maybe.”

DALL-E’s source code is available on GitHub, which should make answering this question somewhat easy for someone with expertise in the programming language and in AI. But I probably don’t have much hope of understanding it myself if I try to read it. I can program a little bit, sure, but I have no experience in writing neural net code.

My guess is that DALL-E does some natural language parsing to work out the intent of the query, then tokenizes the full query to break it into parts it can use to search for images, very likely via Google Image search. Then it (randomly? algorithmically?) selects some of those images and does some kind of edge detection to break each image’s composition down into recognizable objects. We’ve been training AI to do image recognition by solving captchas for a while, although most of that seems to be aimed at creating an AI that can drive a car. But DALL-E has to recognize which elements of the Google Image search results match the tokens in the query string. Once it does, it “steals” that element out of the original Google image and combines it with recognizable objects pulled from other images matching other parts of the tokenized query, compositing them algorithmically into a new composition. As a final touch, it may apply one of several Photoshop-style filters to give the image the appearance of a photograph, painting, drawing, or what have you.

These results can be impressive, or they can be a total failure. Often they’re close to being “good”, suggesting a composition for an actually-good image that would satisfy the original query, but only if you don’t look at it too closely, because if you do, you just see a blurred mess. But perhaps if DALL-E were given more resources, more data, or more time, it might make these images cleaner and better than it does.

(Mind you, I’m not saying that using the data set that is Google Image Search results is the cheating part. Obviously, the AI needs data about the world to apply its neural net logic to. But there’s a difference between analyzing billions of images and using that analysis to come up with rules for creating new images from a natural language text query, versus simply selecting an image result that matches the query, applying a filter to disguise it as a new image, and calling it a day.)

So, when you give DALL-E very little to work with, such as a single keyword, does it just give you an entire, recognizable image from Google Image search results, with a filter applied to it?

Is it all just smoke-and-mirrors?

I guess all of AI is, to a greater or lesser degree, “just smoke and mirrors” — depending on how well you understand smoke and mirrors. But the question I’m trying to ask really is “just how simple or sophisticated are these smoke and mirrors?” If it is easy to deduce the AI’s “reasoning method” then maybe it’s too simple, and we might regard it as “phony” AI. But if, on the other hand, we can be fooled (“convinced”) that the AI did something that requires “real intelligence” to accomplish, then it is “sophisticated”.

I really enjoy playing around with DALL-E mini and seeing what it can do. It is delightful when it gives good results to a query.

For example:

DALL-E: A group of pugs having a tea party
DALL-E: “A world war one doughboy who is a capybara” didn’t quite work, but I’m giving them points for trying. I wanted an anthropomorphic capybara in a WWI uniform. Perhaps I should have asked more specifically.
DALL-E: “Pugs at the last supper”
DALL-E: Pug Da Vinci
I think DALL-E gets Da Vinci pretty well. The Puga Lisa?
I like these so much.
DALL-E doesn’t do a bad Klimt, either.
DALL-E, a beautiful stained glass window of a pug
DALL-E, a mosaic tile floor depicting a pug

I would proudly hang any of these cubist pug paintings in my house.

I think this puggy spilled the paint before he could get his portrait finished.

Pixel Art Chess Set: Communicating function through design

My five year old nephew started learning to play Chess recently, as I discovered on a visit a few weeks ago. We played two games, and I didn’t have too much trouble beating him, but for a five year old he’s not bad. He knows all the pieces, their basic moves, and their relative value.

I thought it would be fun to build a video Chess game that he could use to help learn strategy and how to see the board. So this is my latest project. I’ll be posting more about that as I work on it.

My first step was to design graphic resources. I didn’t want to spend too much time on it, just a basic “programmer art” chess set that I could use to build the program with. Of course, it didn’t end up that way, and I’ve gone down the rabbit hole designing variations on sets of minimalist pixel art chess men. It’s too fun and fascinating not to.

My first attempt was actually rather good, I thought. I went for 16×16 px representations of the classic chess pieces. I drew them very quickly, but was very pleased with my results, particularly the Knight.

I could have stopped right there, but it was so fun to make this set that I wanted to continue exploring and see if I could refine my pixeling technique further.

I decided to search the net for images of chess pieces to see what variety there was, and found a lot of examples of beautiful sets. I started to take notes and to infer design rules for chess men:

  1. Chess pieces are called “chess men” which seems antiquated and sexist, especially given that the most powerful piece in the game is the Queen.
  2. The modern standard chessmen are a design named for English chess master Howard Staunton, and have been in use only since the mid-19th century. A strength of the design is that each piece is easily distinguished from the others, making misidentification (a problem with other sets) unlikely. Previously, chessmen of different types looked less distinct from one another, and were not as standardized.
  3. In a Staunton set, the Knights are the most detailed, ornate, varied, and realistically represented pieces. 
  4. In Staunton sets, there is a standard height order: pawn, rook, knight, bishop, queen, king. (This surprised me: since Rooks are more valuable in Chess, I would have expected them to be taller than Bishops.)
  5. The pieces are differentiated by their tops. Each type of piece has a distinct, unambiguous shape.
  6. The bodies/bases of the pieces share a common design, to create unity among the pieces in the set.

I tried to apply design choices to my chess set following these insights.

A follower on Twitter offered feedback that the pieces should be different heights, so I tried that. With a 16×16 pixel tile size, I could only shorten the back row pieces by 1-3 pixels.  I also tweaked the King piece by adding a few more pixels to its top, to make it a bit more distinct from the Queen, and moved the Pawn so that it would be more centered in its square.

I do like the result:

Staunton pixel chessmen

I think my initial 16×16 Staunton set looks like it’s in ALL CAPS, while this set is more “readable” by using “mixed case” heights for the pieces.

I wanted my chess game to be focused on usability and instruction. I needed each piece to be immediately recognizable, and not to convey a bunch of extraneous information to the player that has nothing to do with play mechanics. 

My next attempt was a different take altogether: I wanted the look of each piece to suggest its rules for movement, so that the pieces communicate how to use them through their visual design alone.

I ended up being very pleased with this set as well, although I went through many more variations, particularly with the Pawn. This one also came together easily and rapidly.  When your tile size is 16×16 and you’re working in just a few colors, it’s easy to work fast.

Things I like about this set:

  1. The shape of the piece is a built-in tutorial for how the piece moves.
  2. The Pawns still have a pawn-like shape (at least the black pawns; white pawns are “upside down”).
  3. The Knight’s shape may be read as an abstraction of the horse’s head shape of the Staunton piece.

I think out of these variations, my favorites are: P9, Kn2, B3, R1, K?  I’m least certain which King I like.  I think K4 and K5 are my top two, but I also liked the idea of incorporating a crown motif into the design, to signify the King’s special property of being the King.  K1, K2 and K6 were attempts at this, but I think K1 looks too much like a Staunton Rook, unfortunately.

I wasn’t sure which of my designs to use for my final set, so  I posted my sets on Twitter and a pixel art community on Facebook. @Managore responded to my request for feedback by coming up with a set of his own design, which I quite like.

His design was retweeted and liked over 500 times, and received many compliments from his followers, many of whom are game developers. One of my favorite indie developers, @TerryCavanaugh, who made VVVVVV and Don’t Look Back, pointed out a physical chess set designed a few years ago that incorporated the same ideas.

It’s exciting to see my idea get picked up and reworked by fellow game developers who are inspired by the concepts I am exploring. So fun! Getting that validation that what I’m working on is interesting to others is very motivating. But it’s particularly good to get some attention, however modest, from developers whose work I’ve admired for years.

I’m excited about this project and look forward to working on the program. I have more design ideas that I’m looking forward to getting into soon.

Visual theory of the Z3D Engine

Editor’s note: [I originally wrote this as an Appendix to the documentation for Z3D Engine, but I think it’s interesting enough to deserve a slightly wider audience.]

I have to preface this section by saying that I have no idea what I’m talking about here, but am trying to learn. I like math, but I didn’t go to school primarily for it, and that was decades ago. I haven’t studied 3D geometry, or optics, or computer graphics in any formal sense. I’m figuring this out more or less by myself, learning as I go.

So if someone who knows more than I do wants to explain this stuff better than I can, I’d love to hear from you. You can send me an email at the Contact page, or tweet at me @csanyk, or just comment on this article.

Thanks in advance!

I’ve called Z3D Engine a “Fake 3D” engine and “2.5D” engine, because those are fairly vague terms that I don’t have to worry about being right about. Someone asked me what type of view it is, and I couldn’t tell them. That bothered me, so I started reading a bit. I still don’t really know for sure.

or·tho·graph·ic pro·jec·tion
ˌôrTHəˈɡrafik prəˈjekSHən/

  1. a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
    • a drawing or map made using orthographic projection.

I think this is sort of close to what Z3D is… maybe. What I can tell you about Z3D is this: you can see the full front side and the full top side of (most) objects. These do not foreshorten.

fore·short·en
gerund or present participle: foreshortening

  1. portray or show (an object or view) as closer than it is or as having less depth or distance, as an effect of perspective or the angle of vision.
    • “seen from the road, the mountain is greatly foreshortened”

The blue rectangle that represents the “player” in the demo is intended to show the player as a side view only, with no pixels in the sprite representing the top surface of the player. This is because I intend Z3D to be used for games drawn in a visual style similar to the top-down Legend of Zelda games. In those games, no matter which way Link is facing, you can only see pixels in his sprite that represent his side, and nothing that represents the top of him, even though you’re viewing most of the rest of the terrain in the room from a weird view where you can see both the top and side of things like blocks and chests, while for other things, like bushes, you can only see the side.

Things in Z3D do not appear to get smaller as they recede into the background, or bigger as they approach the foreground. As well, the top of any object (that has a top) is drawn 1 visual pixel “deep” (in the Y-dimension) for every pixel of distance.

This doesn’t look correct, strictly speaking; if you’re looking for “correct” visuals this engine likely isn’t for you.  But it is visually easy to understand for the player, and it is very simple.

What I’m doing in Z3D Engine is showing the top of everything (that has a top) as though you’re looking at its top from a vantage point exactly perpendicular to the top, while at the same time you’re also seeing the side of everything as though you’re looking at the side from a vantage point exactly perpendicular to the side. This is an impossible perspective in real life, but it works in 2D graphics that are trying to create a sort of “fake” 3D look, which is what Z3D does.

Imagine you’re looking at this cube:


At most, assuming the cube is opaque, you can see only three faces of the cube from any given vantage point outside the cube; the other three faces are occluded on the other side of the cube.

Cube with occluded faces

(That image above is properly called isometric, by the way. Z3D is not isometric.)

If you were looking at the cube from a vantage point where you were perpendicular to one of the faces, you could only see that one face, and it would look like a square:


(Since the faces of this cube are all nondescript, we can’t tell if we’re looking at the side or the top of the cube.)

Now, if it were possible to be at a vantage point exactly perpendicular to both the side and the top of the cube simultaneously, the cube would look like this:

Flattened Bi-perspective cube

This is weird and wrong, and yet it is easy to understand, and it turns out that it is also very easy to compute position and movement along 3 dimensional axes if you allow this wrong way of drawing. This view is (or perhaps is similar to) a method of visualization known as oblique projection.

More properly, if you were positioned at a vantage point somewhere between the two points that are perfectly perpendicular to the top and side faces, the cube would look like this:

Cube in perspective

Here, obviously, we are looking at the cube mostly from the side, but our eye is slightly above, so we can see the top of the cube as well.  But notice, since we are not viewing the top face of the cube from a perpendicular vantage point, it does not appear to be a square any longer — it foreshortens, so that the far end of the top of the cube appears narrower than the closer end.

This is perhaps obvious, because we’re used to seeing it: we see it every day, because that’s what real life looks like. But it’s because we see this every day that we take it for granted, and when we have to explicitly understand what’s going on visually with geometry, we have to unpack a lot of assumptions and intuitions that we don’t normally think about consciously.

If we were to put our eye at the exact middle point between the points that are perpendicular to the side face and the top face, the cube should look to us like this:

Cube at 45°

Notice that both the bottom of the side face and the far edge of the top face are foreshortened due to perspective.

This is how they “should” look in a “correct” 3D graphics system, but Z3D “cheats” to show both the side and top faces without doing any foreshortening, which means that it can draw an instance as it moves through any of the three dimensions using extremely simple math.

Visually moving 1 pixel left or right is always done at an hspeed of -1 or 1, regardless of whether the object is near (at a high y position) or far away (at a low y position). Likewise, moving near or far is also always done at a rate of one distance pixel per apparent visual pixel. And moving up and down in the z-dimension is also always done at a rate of 1 distance pixel per apparent visual pixel.
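The math really is about as simple as it gets. Here is a sketch in Python of how such a projection could be computed; the function name and the screen convention (y increasing downward, z lifting the sprite up the screen) are my own assumptions for illustration, not Z3D’s actual code:

```python
def z3d_project(x, y, z):
    """Map a world position (x, y, z) to screen pixels under a
    Z3D-style oblique projection (my guess at the math, not the
    engine's real code).

    x: left/right, y: near/far depth, z: height off the ground.
    Every axis maps 1:1 to pixels, so nothing ever foreshortens.
    """
    screen_x = x       # 1 world pixel left/right = 1 screen pixel
    screen_y = y - z   # depth pushes the sprite down; height lifts it up
    return screen_x, screen_y

# Moving 1 pixel deeper and 1 pixel higher cancel out on screen:
print(z3d_project(10, 20, 0))  # (10, 20)
print(z3d_project(10, 21, 1))  # (10, 20)
```

This is why tops read as “1 visual pixel deep per pixel of distance”: depth and height share the same screen axis at the same 1:1 rate.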

If we wanted to draw more convincingly realistic 3D graphics, we would need to understand what’s going on with the eye, with perspective, and with things at a distance.

Eye viewing the cube at a 45° angle

The same object viewed in Z3D’s perspective is something like this:

Eye looking at Z3D rendering

(We’ve omitted the occluded faces on the back end of the cube relative to the viewer, for simplicity.)

These two “apparent” perspectives are combined at the point where the player’s real eye is, resulting in something like this fake-3D perspective:

Z3D Flattened Orthographic bi-perspective rendering

So, in conclusion I’m not 100% sure that my terminology is correct, but I think we can call this perspective “flattened orthographic bi-perspective” or perhaps “oblique projection”.

(From this, we can begin to see how a corrected view might be possible, using trigonometry to calculate the amount of foreshortening/skew a given position in the Z3D space would need in order to appear correct for a single-POV perspective.  But this is something well beyond what I am planning to do with the engine; if you wanted this, you would be far better off creating your game with a real 3D engine.)
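As a rough illustration of the kind of correction involved, here is a toy pinhole-camera model (my own sketch, nothing to do with Z3D’s code) showing the foreshortening that a single-POV perspective introduces and that Z3D skips:

```python
# Pinhole model: a point offset y from the view axis, at depth d,
# lands at screen position focal * y / d from the screen center.
def project(y, d, focal=100.0):
    return focal * y / d

# The near and far edges of a cube top 16 units deep and 16 units
# wide, viewed from 100 units away: the far edge appears narrower.
near_half_width = project(8, 100)  # half-width of the near edge
far_half_width = project(8, 116)   # half-width of the far edge
print(near_half_width)  # 8.0
print(far_half_width)   # about 6.9 -- the foreshortening Z3D omits
```

A corrected renderer would apply this divide (or the equivalent trigonometry) to every vertex; Z3D just skips it.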

It gets weirder when you realize that for certain objects, such as the player, we’re going to draw only the side view, meaning the player will be drawn as a flat 2D representation in a fake 3D space. Yet the player’s “footprint” collision box will likely have some y-dimension depth to it.

Pixel Art: Admiral Ackbar

Here’s a quick Admiral Ackbar that I did at 64×64.


This one was a little more interesting to work on. I started out at 16×16, as I always do, and found that there weren’t a lot of details that I could put on him at that resolution. At 32×32, I was able to shift the eyes, which gave me enough room to add the mouth. When I went to 64×64, I realized that the white space suit he wears needed to be given a little bit of color to allow it to stand out against a light background. I also added some spines to the arms, and irises to the eyes, and nostrils, to give a little more detail.

32x32_Admiral_Ackbar_512 16x16_Admiral_Ackbar_512

Pixel Art: Greedo

It’s been a long time since I did any pixel art, and with the new Star Wars film being released this week, I got the itch to do some more characters from a galaxy far, far away. Again, using my quick/easy minimalist style that I’ve found works well for me.

Here’s Greedo, at 64×64, blown up to 512×512. I’ll probably be uploading some more in the next few days.



I have created a font I call min. It’s the first font I’ve ever made. I used bitfontmaker, and it was easy and fun; I want to do more of this. Minimalist pixel art is something I enjoy, and I wanted to try my hand at the glyphs of a typeface, so I created the most minimalist typeface I could that was still legible. I might have been able to go smaller, but I like this size. Most letters are 3×5 px or smaller in their original drawing.

It’s also an asset package in the GameMaker Marketplace. If it does well I’ll probably create a font pack of fonts that I develop over time, and see how that does. So if you’re a GameMaker user, please download it from there so I can get accurate analytics stats.

Update: I’ve added a second font to the collection, submin.ttf.

Download links

  1. Min Font – Free GameMaker Asset Pack
  2. min.ttf
  3. submin.ttf

Pixel art resources

I’ve been spending many hours today and yesterday playing Javel-ein and reading about pixel art. Javel-ein’s creator clued me into a 16-color palette he used to create the graphics resources for the game. Created by someone named Dawnbringer, it opened my eyes to something I hadn’t given much thought to in my pixel art dabbling to date.

In the pixel art method I’ve been developing, I have not expended much effort at all on choosing colors. My method has been to use the smallest number of colors possible. To pick a color, I simply pick “the color”: I don’t consider the context, the lighting conditions I want to simulate, or anything else. To draw the Hulk, I said to myself, “The Hulk is green, pick a green that looks like Hulk Green,” so I did that, and then I was done.

I don’t pretend at all that my pixel art method is the best. It is merely a method, and one that I’m able to work in quickly, and achieve results that I find acceptable. I deliberately do not concern myself so much with quality, but with speed. I reason that speed serves me best because with speed, I can iterate more quickly, and iterating will give me the experience that in time will yield an improvement in quality. Also, because I do everything in my games, I have to ration my time and distribute it between designing the game, programming, doing graphics, sound effects, testing, etc. So I don’t have a lot of time, and therefore speed is all the more important. But because my time goes to other things, I haven’t iterated as much as I thought I would.

My pixel art method is very primitive and newbie feeling, but I try to use that as a strength. But that’s not to say that understanding color and using it more effectively would not be very valuable. So far my approach to drawing has been more like a very young child than an artist. I take the fewest number of colors possible, use them iconically to represent the subject with what “everyone knows” is the right color — the sky is blue, grass is green, the sun is yellow, no surprises. For standalone subjects, and for the specific style I’m going for, this works. But when I integrate standalone subjects into a composition, if I want it to look cohesive, and minimalist, I need to give more thought to how I select colors.

Seeing the results in Javel-ein from what a well-chosen palette of 16 colors can make possible, I became very interested in using the palette in the color picker more. Until now, I’ve always just picked something in the RGB color wheel that looked right, but from now on I’ll be giving more thought to my palette ahead of time.

As well, I spent a few hours and created a few palettes for Paint.NET so I can select from a collection of pre-defined, general purpose palettes. I have Dawnbringer’s 16- and 32-color palettes, and several that I made for simulating the NES, Atari 2600, and Gameboy. I plan to try to use these for a while, to see how I can adapt to a constraint of a pre-defined palette of so few colors, and what that will teach me as a pixel artist.
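As an aside, building a Paint.NET palette file is simple enough to script. As far as I can tell, the format is a plain text file with one AARRGGBB hex value per line and “;” comment lines (double-check against your version of Paint.NET; the Game Boy RGB values below are also just my approximation of the classic DMG green ramp):

```python
# My approximation of the 4-shade Game Boy (DMG) green ramp,
# darkest to lightest; these exact RGB values are an assumption.
gameboy = [
    (0x0F, 0x38, 0x0F),
    (0x30, 0x62, 0x30),
    (0x8B, 0xAC, 0x0F),
    (0x9B, 0xBC, 0x0F),
]

# Paint.NET palette format (as I understand it): ';' comments,
# then one AARRGGBB hex value per line. FF = fully opaque.
palette_lines = ["; Game Boy 4-color palette"]
for r, g, b in gameboy:
    palette_lines.append(f"FF{r:02X}{g:02X}{b:02X}")

with open("gameboy.txt", "w") as f:
    f.write("\n".join(palette_lines) + "\n")
```

Dropping the resulting .txt into Paint.NET’s palettes folder should make it show up in the palette picker.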

From my reading, I know that there is a lot to creating a good palette, and I’m not yet experienced enough to do so, but I now know enough to start trying. It’s encouraging to read that even well-regarded, accomplished pixel artists struggle with selecting just the right colors for their palettes, and that it is not just me because I’m “not really an artist”. This is something I can learn, and get better with, and I just have to work at it.

In the course of learning, I need to keep a notebook for ideas and references. I figure sharing it publicly will only help to improve the quality of what I find. I don’t really have much original to say on the subject yet, so this is mostly a dump of links to interesting articles, discussion threads, and resources.

Pixel art communities

Pixel art tools

There’s a ton of them. I primarily use Paint.NET so far, but there are many others I’ve yet to try. I’m too lazy right now to put hyperlinks for all of them here, but you can find them via Google.

  • Paint.NET
  • GraphicsGale
  • Pickle
  • PyxelEdit
  • GrafX2
  • Aseprite
  • GIMP
  • ProMotion

Palette Choices

Some very interesting forum threads on the philosophy and method of building up a palette.


I’ll add more to this as my reading continues.

Product Review: Pickle 2.0

There are a few dedicated bitmap editors out there with features specific to pixel art and sprite animation. One that I like is Pickle. It features a stripped-down feature set designed to give the pixel artist just the tools they need.

A full-featured image editor like Photoshop or GIMP can just get in the way of the pixel artist, providing too many tools with too many options, most of which are poorly suited in their standard configuration to pixel art techniques. By stripping away the unnecessary, Pickle gives the user a streamlined interface that they can learn quickly and, once they’ve learned the handful of keyboard shortcuts for switching between tools and modes, use with great efficiency.

I first tried Pickle sometime in the last year, and since then a new version has come out. While I didn’t quite grok the beauty of the interface at the time, I’ve become more experienced with pixel art since then, and decided to give it another try.


Pickle is now available only under a paid license, no longer as nagware/donationware. There is a 7-day trial, but after that you have to pay to use it.

If it’s the sort of tool you’ll get a lot of use out of, it’s probably worth the price.

For serious sprite animation, I’d recommend Spriter, which is still in beta, but has been coming along nicely in recent releases. But to create the sub-sprite bitmap resources that you’ll import into Spriter, Pickle is still a good tool to consider, especially if your art style is pixel art and you don’t do a lot of work with gradients and filters and the like.

For simpler sprite animations, Pickle has an onion skin feature that shows the previous and next frames in an animation loop so that you can compare the frame you’re working on against them. I find this really speeds up the process of creating simple animations, and removes a lot of the guesswork and trial and error. While it doesn’t have as many features as the built-in Sprite Editor that comes with GameMaker, it makes up for this by providing a well-thought out interface for the tools it does give you, and providing only what is essential to producing pixel art.

Using Pickle

Couldn’t be easier, really. The manual is simple, fits on a single page of the website, and covers the entire application from start to finish.

All a pixel artist really needs is the pencil, paintbucket, eraser, and selection tools. When you’re manipulating a bitmap at that level, you really don’t need any other tools. Line and shape tools might be useful, but aren’t really necessary, particularly for smaller sprite and tile bitmaps. A text tool would be nice as well, in a more featureful app, but again, for the intended purpose of creating tiles and sprites, not needed.

Pickle shows its strengths in two areas: Tile making, and animation.

For tile making, it provides a means to shift the bitmap so you can more easily discern hard edge transitions in order to smooth them out. There are also mirroring modes which allow you to make symmetrical shapes easily, by mirroring the pixels horizontally, vertically, or both, as you draw them.
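The mirroring idea is simple to illustrate. Here is my own sketch of how a horizontal mirroring mode might work (not Pickle’s actual implementation): every pixel you place is echoed across the vertical center line of the canvas.

```python
# Horizontal mirroring mode sketch: placing a pixel also places its
# mirror image across the vertical axis of the canvas.
def plot_mirrored_h(canvas, x, y, color):
    width = len(canvas[0])
    canvas[y][x] = color
    canvas[y][width - 1 - x] = color  # mirrored column

# An 8x8 canvas of zeros; draw color 5 at (x=1, y=3).
canvas = [[0] * 8 for _ in range(8)]
plot_mirrored_h(canvas, 1, 3, 5)
print(canvas[3])  # [0, 5, 0, 0, 0, 0, 5, 0]
```

A vertical mode would mirror the row index the same way, and a “both” mode would set all four positions at once.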

For animation, I really like its “onion skin” feature, which overlays the previous and next steps in the animation as a translucent layer, which you can use to guide where you draw the current frame. This helps you make better, smoother animations in less time, because you don’t have to flip back and forth between frames for comparison or constantly preview the loop to make sure it’s right.


You can only save up to 10 color palettes, which seems arbitrary and way too small. I’d like to be able to develop a palette for each project I happen to be working on, and I should be able to browse a directory full of XML files that define custom palettes, however many I need. Pickle does come with a few built-in color palettes that are useful for game development: the GameBoy and Atari palettes are most welcome, but I hope more keep being added, NES and Commodore 64 perhaps being the most needed. In time, I’d like to see every classic console palette represented.

I also found the palette to be very tiny and hard to click on, and wished it were quite a bit larger.

Wish List

There’s a lot in this section, but don’t let that mislead you into thinking that I don’t like Pickle as-is. Rather, I’m so enthusiastic about it that I can’t stop thinking of ways that I’d improve it if I could. I forwarded these suggestions to their feedback email, so I’ll be thrilled if the developer decides to incorporate any of these ideas.

Altogether, I admire the minimalism of the Pickle interface and the easy learning curve that the constrained interface permits. I wouldn’t like it to lose that beauty by adding too many new features, but I feel that if the following ideas were incorporated into the interface, it would make for just about the perfect pixel art image editor.

Indexed color/Palette swapping. Switching to a different palette from the selection of saved palettes doesn’t change the colors in the image you’re currently editing, but that might be a desired capability. Old game consoles used indexed color and palette swaps to good effect, and an easy way to replicate this in Pickle would be awesome. An “indexed color mode” that instantly re-colors the current image whenever a new palette is selected would be a great feature.
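To illustrate what I mean (my own sketch, not an existing Pickle feature): if the image stores palette indices instead of raw colors, swapping the palette re-colors every pixel instantly.

```python
# A tiny "image" stored as palette indices rather than colors.
image_indices = [
    [0, 1, 1, 0],
    [2, 3, 3, 2],
]

# Two palettes with the same slot meanings (sky, grass, dirt, light);
# the hex values here are arbitrary examples.
day_palette = ["#87CEEB", "#228B22", "#8B4513", "#FFFF00"]
night_palette = ["#191970", "#0B3D0B", "#3B1F0B", "#C0C0C0"]

def render(indices, palette):
    """Resolve each index through the palette to get actual colors."""
    return [[palette[i] for i in row] for row in indices]

day_image = render(image_indices, day_palette)
night_image = render(image_indices, night_palette)  # same pixels, new look
print(night_image[0])  # ['#191970', '#0B3D0B', '#0B3D0B', '#191970']
```

The indices never change; only the lookup table does, which is exactly how old consoles did palette swaps.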

Easier palette building. It would also be nice if I could define a new custom palette based on an existing palette (such as “night colors from the NES palette”, for example) by dragging a color swatch (or a range of swatches) from one palette onto a new palette.

Color switching shortcut. A keyboard shortcut to cycle through the colors in the palette to enable rapid switching would be useful. There is a shortcut (x) to switch between the primary and secondary colors, and that is OK for what it is. But when I have a lot of colors in the palette, I want to be able to switch between any of them easily, maybe through CMD+arrow keys or CMD+scroll wheel or something like that. Mousing over to the palette to select a color, then back to where I want to draw, is slower.

Better color picker. I find the color picker in Pickle to be a step backward from the color picker in Paint.NET. I like being able to control the exact value of RGB or HSV or alpha and see the result as I change it; this helps me find a color I like quickly. The advanced color picker in Paint.NET is damn good: I really like being able to switch between the color wheel, the RGB/HSV sliders, and the value boxes to find and select the color I want very quickly.

The Paint.NET color picker

The color picker from Paint.NET provides a better interface for rapidly finding and selecting the color I want. I’d like to see something similar in Pickle.

Another great feature to include in a future iteration of the color picker would be a mode that makes it easy to pick color schemes based on the theory of coloring presented in this tutorial — for each color added to the palette, the color picker could auto-suggest adjacent and complementary colors from the color wheel, to enable better shading and highlighting, allowing the user to add them to the palette automatically if desired. This would be an AMAZING feature. The awesome web app Color Scheme Designer does it right.
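At its core, the kind of auto-suggestion I have in mind is just hue rotation around the color wheel. A rough Python sketch using the standard library’s colorsys module — the function name and the particular hue offsets are my own invention, not anything from Pickle or Color Scheme Designer:

```python
import colorsys

def suggest_colors(r, g, b):
    """Suggest complementary and adjacent hues for an RGB color
    (components as 0.0-1.0 floats) by rotating around the color wheel."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    # 0.5 is a 180-degree hue rotation; 1/12 is 30 degrees either way.
    shifts = {"complement": 0.5, "adjacent_cw": 1 / 12, "adjacent_ccw": -1 / 12}
    return {
        name: colorsys.hls_to_rgb((h + shift) % 1.0, l, s)
        for name, shift in shifts.items()
    }

# Pure red: the complement comes out cyan, and the adjacent
# suggestions land on orange and a magenta-leaning red.
suggestions = suggest_colors(1.0, 0.0, 0.0)
```

A real palette helper would want more than raw hue rotation (lightness and saturation tweaks for shading and highlights), but this is the kernel of it.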

Canvas and image scaling/resizing! Scaling is a must-have feature which is currently missing. My favorite pixel art technique is to rough in at very low resolution, then double the resolution of the image using Nearest Neighbor scaling so that no anti-aliasing artifacts are introduced, and then refine the details.

I can work extremely quickly this way — I’ll start out at 16×16 and double a few times until I’m at 64×64 or 128×128. Each single pixel in the 16×16 rough version ends up doing the work of an 8×8 block of pixels once the image is scaled to 128×128, saving me 64 pixels’ worth of work for every pixel I rough in with the right color.

Being able to Select All, then press CTRL+plus to double the canvas size, and CTRL+SHIFT+plus to double the canvas size while scaling up the image with Nearest Neighbor, would be great. Likewise, marquee-selecting and pressing CTRL+SHIFT+plus to scale just the selection 2x with Nearest Neighbor. (Essentially it’s like working with a large 8×8 brush tool, then switching to a 4×4, 2×2, and 1×1 brush, but it’s quick and easy for me to work this way.)
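The nearest-neighbor doubling I keep describing is simple enough to sketch in a few lines, which is part of why its absence stings. A minimal Python version, operating on plain lists of pixel values rather than any particular image format:

```python
def scale_2x_nearest(pixels):
    """Double an image's resolution with nearest-neighbor scaling:
    every source pixel becomes a 2x2 block, with no anti-aliasing."""
    out = []
    for row in pixels:
        doubled = [p for p in row for _ in (0, 1)]  # repeat each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))  # repeat the whole row vertically
    return out

rough = [
    [1, 2],
    [3, 4],
]
big = scale_2x_nearest(rough)
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Apply it three times and a 16×16 rough becomes the 128×128 canvas described above, with every original pixel now an 8×8 block ready for detailing.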

Arbitrary rotation: Currently Pickle can only rotate in 90-degree increments and mirror the image. Arbitrary rotation of the entire image, and of pixel selections, would be extremely helpful. It’s not strictly necessary — when you’re manipulating pixels one at a time, you often want more precise control anyway — but it can come in handy.
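For what it’s worth, arbitrary rotation without anti-aliasing is commonly done by inverse mapping: for each destination pixel, rotate backward to find the nearest source pixel. A hypothetical Python sketch of one plausible approach (square grids only, for simplicity; this is not how Pickle does it):

```python
import math

def rotate_nearest(pixels, degrees, fill=0):
    """Rotate a square pixel grid by an arbitrary angle using inverse
    mapping with nearest-neighbor sampling (no anti-aliasing)."""
    n = len(pixels)
    c = (n - 1) / 2.0  # rotate about the grid's center
    rad = math.radians(degrees)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    out = [[fill] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # Inverse-rotate the destination coordinate back to the source.
            sx = cos_a * (x - c) + sin_a * (y - c) + c
            sy = -sin_a * (x - c) + cos_a * (y - c) + c
            ix, iy = round(sx), round(sy)
            if 0 <= ix < n and 0 <= iy < n:
                out[y][x] = pixels[iy][ix]
    return out
```

At non-right angles the nearest-neighbor snapping produces the jagged, chunky look pixel artists expect, which is exactly why you’d want it over a smoothing rotate.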

Gearing up for Ludum Dare 28

I’m getting myself ready for LD48 #28, deciding the general approach to take with this project. I like to do this ahead of time so I can get certain design considerations out of the way, impose creative constraints, and focus on a particular goal within the scope of my project, independent of the theme.



GameMaker Studio

As usual, I’ll be making use of GameMaker Studio for my development, probably targeting only a Windows .exe build for the initial release, with a possible HTML5 build later if feasible.

I don’t have any particular aim this time around to use any specific features of GameMaker; I’ll let the theme and game concept drive those decisions this time.


Paint.NET, and Pickle

I’ve given the new Pickle 2.0 a try, and while it’s no longer free/donationware, I like it at least for its onion-skin feature, which makes animation easier. I can see a lot of potential improvement for Pickle, and to that end I’ve written up a number of feature requests and enhancements and sent them along to the developer. I’ll be really excited if any of them get picked up and implemented.

I am going to use my pixel art minimalism technique, and I also intend to use a tightly constrained color palette. I’m not sure yet how few colors I want, but a 4-color monochrome palette à la the classic Game Boy might be fun. In any case, I’ll be making an effort to use only the smallest number of colors necessary, and paying close attention to how color works in the graphics I develop. I am going to see if Paletton can help me make better palette selections, and whether I can apply what I learned from the coloring tutorial I recently came across thanks to Joe Peacock’s recommendation.



Bfxr, of course, for sound effects. Maybe also some recorded audio samples for stuff that I can’t do well in bfxr.

Famitracker (maybe)

It’s probably still ambitious for me to try to pick up Famitracker in a weekend and use it to good effect. I’ve been putting off learning it, though, and I want to have some kind of BGM in my game. Whether or not I end up using it in my LD project, I’ll be making an effort over the next few months to figure it out and put together some compositions with it.