Visual theory of the Z3D Engine

Editor’s note: [I originally wrote this as an Appendix to the documentation for Z3D Engine, but I think it’s interesting enough to deserve a slightly wider audience.]

I have to preface this section by saying that I have no idea what I’m talking about here, but I am trying to learn. I like math, but I didn’t go to school primarily for it, and that was decades ago. I haven’t studied 3D geometry, optics, or computer graphics in any formal sense. I’m figuring this out more or less on my own, learning as I go.

So if someone who knows more than I do wants to explain this stuff better than I can, I’d love to hear from you. You can send me an email at the Contact page, or tweet at me @csanyk, or just comment on this article.

Thanks in advance!

I’ve called Z3D Engine a “Fake 3D” engine and “2.5D” engine, because those are fairly vague terms that I don’t have to worry about being right about. Someone asked me what type of view it is, and I couldn’t tell them. That bothered me, so I started reading a bit. I still don’t really know for sure.

or·tho·graph·ic pro·jec·tion
/ˌôrTHəˈɡrafik prəˈjekSHən/
noun

  1. a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
    • a drawing or map made using orthographic projection.

I think this is sort of close to what Z3D is… maybe.  What I can tell you about Z3D is this:  You can see the full front side and the full top side of (most) objects.  These do not foreshorten.

fore·short·en
/fôrˈSHôrtn, fərˈSHôrtn/
verb
gerund or present participle: foreshortening

  1. portray or show (an object or view) as closer than it is or as having less depth or distance, as an effect of perspective or the angle of vision.
    • “seen from the road, the mountain is greatly foreshortened”

The blue rectangle that represents the “player” in the demo is intended to show the player as a side view only, with no pixels in the sprite representing the top surface of the player. This is because I’m intending Z3D to be used for games drawn in a visual style similar to the top-down Legend of Zelda games. In those games, no matter which way Link is facing, you can only see pixels in his sprite that represent his side, and nothing that represents the top of him. This is true even though you’re viewing most of the rest of the terrain in the room from a strange angle where you can see both the top and side of things like blocks and chests, while for other things, like bushes, you can only see the side.

Things in Z3D do not appear to get smaller as they recede into the background, or bigger as they approach the foreground.  As well, for objects that have tops, the top is drawn 1 visual pixel “deep” (in the y-dimension) for every pixel of distance.

This doesn’t look correct, strictly speaking; if you’re looking for “correct” visuals, this engine likely isn’t for you.  But it is visually easy for the player to understand, and it is very simple.

What I’m doing in Z3D Engine is showing the top of everything (that has a top) as though you’re looking at its top from a vantage point that is exactly perpendicular to the top, while at the same time you’re also seeing the side of everything as though you’re looking at the side from a vantage point that is exactly perpendicular to the side.  This is an impossible perspective in real life, but it works in 2D graphics that are trying to create a sort of “fake” 3D look, which is what Z3D does.

Imagine you’re looking at this cube:

[Image: Cube]

At most, assuming the cube is opaque, you can see only three faces of the cube from any given vantage point outside the cube; the other three faces are occluded on the other side of the cube.

[Image: Cube with occluded faces]

(That image above is properly called isometric, by the way. Z3D is not isometric.)

If you were looking at the cube from a vantage point where you were perpendicular to one of the faces, you could only see that one face, and it would look like a square:

[Image: Square]

(Since the faces of this cube are all nondescript, we can’t tell if we’re looking at the side or the top of the cube.)

Now, if it were possible to be at a vantage point that is exactly perpendicular to both the side and the top of the cube simultaneously, the cube would look like this:

[Image: Flattened bi-perspective cube]

This is weird and wrong, and yet it is easy to understand, and it turns out that it is also very easy to compute position and movement along 3 dimensional axes if you allow this wrong way of drawing.  This view is (or perhaps is similar to) a method of visualization known as an oblique projection.

More properly, if you were positioned at a vantage point somewhere between the two points that are perfectly perpendicular to the top and side faces, the cube would look like this:

[Image: Cube in perspective]

Here, obviously, we are looking at the cube mostly from the side, but our eye is slightly above, so we can see the top of the cube as well.  But notice, since we are not viewing the top face of the cube from a perpendicular vantage point, it does not appear to be a square any longer — it foreshortens, so that the far end of the top of the cube appears narrower than the closer end.

This is perhaps obvious, because we’re used to seeing it: we see it every day, because that’s what real life looks like.  But it’s because we see this every day that we take it for granted, and when we have to explicitly understand what’s going on visually with geometry, we have to unpack a lot of assumptions and intuitions that we don’t normally think about consciously.

If we were to put our eye at the exact middle point between the points that are perpendicular to the side face and the top face, the cube should look to us like this:

[Image: Cube at 45°]

Notice that both the bottom of the side face and the far edge of the top face are foreshortened due to perspective.

This is how they “should” look in a “correct” 3D graphics system, but Z3D “cheats” to show both the side and top faces without doing any foreshortening, which means that it can draw an instance as it moves through any of the three dimensions using extremely simple math.

Visually moving 1 pixel left or right is always done at a hspeed of -1 or 1, regardless of whether the object is near (at a high y position) or far away (at a low y position).  Likewise, moving near or far is also always done at a rate of one distance pixel per apparent visual pixel. And moving up and down in the z-dimension is also always done at a rate of 1 distance pixel per apparent visual pixel.
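In GML terms, the whole scheme boils down to subtracting z from y at draw time. Here’s a minimal sketch of the idea (the variables z, xdir, ydir, and zdir are placeholders of my own, not necessarily Z3D Engine’s actual internals):

/// Step Event: motion along all three axes is 1 distance pixel per screen pixel
x += xdir;   // left/right across the screen
y += ydir;   // near/far, "into" the scene
z += zdir;   // up/down, height above the floor
depth = -y;  // nearer instances draw on top of farther ones

/// Draw Event: the entire "fake 3D" projection in one call;
/// height off the floor just shifts the sprite up the screen,
/// with no scaling and no foreshortening
draw_sprite(sprite_index, image_index, x, y - z);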

If we wanted to draw more convincingly realistic 3D graphics, we would need to understand what’s going on with the eye, with perspective, and with things at a distance.

[Image: Eye viewing the cube at a 45° angle]

The same object viewed in Z3D’s perspective is something like this:

[Image: Eye looking at Z3D rendering]

(We’ve omitted the occluded faces on the back end of the cube relative to the viewer, for simplicity.)

These two “apparent” perspectives are combined at the point where the player’s real eye is, resulting in something like this fake-3D perspective:

[Image: Z3D flattened orthographic bi-perspective rendering]

So, in conclusion, I’m not 100% sure that my terminology is correct, but I think we can call this perspective “flattened orthographic bi-perspective” or perhaps “oblique projection”.

(From this, we can begin to see how a corrected view might be possible, using trigonometry to calculate the amount of foreshortening/skew a given position in the Z3D space would need in order to appear correct for a single-POV perspective.  But this is something well beyond what I am planning to do with the engine; if you wanted this, you would be far better off creating your game with a real 3D engine.)
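(A quick note on the math, in case it’s useful: a flat face viewed from an angle θ away from its perpendicular appears compressed by a factor of cos θ, i.e. apparent depth = true depth × cos θ. At the 45° vantage point pictured above, both the top and side faces would foreshorten to cos 45° ≈ 0.707 of their true depth; applying that per-face scaling is exactly the correction Z3D skips.)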

It gets weirder when you realize that for certain objects, such as the player, we’re going to draw only the side view, meaning that the player will be drawn as a flat 2D representation in a fake 3D space.  Yet the player’s “footprint” collision box will likely have some y-dimension depth to it.

GameMaker Studio Tutorial: Getting Into Shaders

Shaders have been a part of GameMaker Studio for a while now, having been introduced in 2014. Since their inclusion, I have mostly remained mystified by them, with a vague and cloudy understanding of what they are and what they can do, and I haven’t used them at all. That will [hopefully] start to change today.

As always, when I try to learn something new in programming, I find that writing up what I’ve learned helps me remember and keeps my learning organized. I like to publish my notes so that they can help others, and so that others can find errors and make suggestions for better ways to do things. I’m still very new to working with shaders, so I’m not trying to put myself out there like I’m some kind of expert, but here’s what I’ve been able to learn about using shaders with GameMaker Studio so far:

Shader basics

First, we need to understand what a shader is. A shader is a specialized program that processes graphics. Shaders are executed on the Graphics Processing Unit, or GPU, which is specialized hardware for accelerated graphics processing. Thus, shaders are very fast. As well, since they work on the GPU, using shaders will free up the CPU to do other tasks, which can further help to improve the frame rate of your games.

This sounds like enough of a reason to want to use shaders, doesn’t it? Well, it gets better. The main thing about shaders is that they can do amazing visual effects, which can make your game look better, but they can also play an active role in how the game plays. For example, you could use a shader to handle the graphical processing of a special view mode in the game, such as night vision or x-ray vision. One of my favorite gameplay mechanics centered on the use of shaders was Daniel Linssen’s Birdsong, winner of the “Overall” and “Theme” categories of the Ludum Dare 31 compo held in 2014. The theme of LD31 was “Entire Game in One Screen”, and Linssen’s approach was to create a giant one-room game crammed into a single screen (no scrolling) and, using a fish-eye lens effect done with a shader, magnify the area where the player is so that it was large enough and detailed enough to be playable.

There’s virtually no limit to what graphical effects you can come up with using shaders, other than the limits of your imagination and of course your programming and math skills. It also helps to understand how computers work with graphical concepts such as color, pixels, binary math, and so forth. Additionally, scientific knowledge in disciplines like optics can be very useful. Shaders have their own specialized programming language that they are coded in — actually there are several related languages to choose from. Because of this, shaders are considered an advanced topic in programming, and there are numerous hurdles to surmount in order to be able to write them yourself.

That said, shaders are re-usable bits of code, and so one of the first things you can do when you start getting into shaders is to simply use pre-existing shaders that have been written by other people.

Getting Started with Shaders

Before you can use shaders, you’ll want to familiarize yourself with a few concepts.

Shader references

Here are links to the relevant pages in the GMS manual on using shaders in the context of GameMaker Studio:

GMS1:

Shaders Overview

Shaders GML reference

Shader Constants

Tech Blog Shaders tutorial: 1 2 3 4

GMC Forums shader tutorial.

GMS2:

Shaders Overview

Shader Constants


Other shader resources (general)

Language References

The four shader languages that GMS supports are: GLSL ES, HLSL9, HLSL11, and GLSL. Which one you need to learn and use will depend on your target platform, but for this article we’ll focus on GLSL ES, since it supports the most target platforms (all of them, except Windows 8).

GLSL ES language specification

HLSL language specification

I haven’t gotten into the shader languages enough yet to know why you’d ever pick HLSL over GLSL, but presumably there must be some advantage to using HLSL when targeting Windows platforms, either in terms of correctness or performance. Otherwise, I would think you’d be better off just sticking with GLSL ES and staying compatible with the most targets.

Tools

Shadertoy is a wonderful website that allows you to play with shader programming, running your shaders right in your web browser. You can then share your shader creations with the community of users of the website, and in turn you can study and use shaders written by others.

Other graphical concepts in GameMaker, and how they relate to shaders

It’s not a bad idea to review and understand the entire chapter on Drawing in the GameMaker documentation. There are many concepts that you will need a working knowledge of in order to understand how to use drawing to its fullest capacity, and to get things working together smoothly.

But the manual isn’t the end of the story. Often I find that the manual doesn’t go far enough to explain how different concepts work. The sections on blend modes and alpha testing are particularly inadequate by themselves. The manual also doesn’t go very far to demonstrate or suggest how different features and functions can be connected to one another. That’s for the user to infer, and verify through experimentation. This is great if you are creative and love to experiment and discover. On the other hand, there’s so much that has already been figured out and discovered by others, and it would be nice if that was all documented in an easy to search reference somewhere.

Read the entire manual, cover to cover, if you can. Create little demo projects to test your understanding of what you’ve read, and figure out how to do things. Read it again. And refer to it whenever you need to. There’s no substitute for reading and understanding the manual. I’ll still touch briefly on the major concepts here, for summary:

Surfaces

All drawing happens somewhere, and in GameMaker that somewhere is called a surface. Behind the scenes, a surface is simply a chunk of memory that is used to store graphics data. You can think of it as a 2D grid of pixels, stored in the program’s memory, and drawn to the screen only when called for. You can also think of it as virtual “scratch paper” where you do some work “backstage” and then bring it out to use in the game when needed.

The application has an Application Surface, and by default everything is drawn here. But you can create other surfaces, which you can work on, composing a drawing off-screen, until you are ready to draw it to the screen. As you might imagine, there are countless reasons why this is useful, and endless ways to make use of surfaces.

Surfaces are relatively easy to use, but are considered an intermediate level programmer’s tool in GameMaker, for a few reasons:

  1. Surfaces consume memory, and need to be disposed of when no longer needed.
  2. Surfaces are volatile, and can be destroyed without warning, so should not be assumed to exist. For example, if the player switches focus to a different application, or if the computer enters sleep or hibernation mode, or if the game is saved and resumed, surfaces that were in existence at the time the application was last running may have been cleaned up by the operating system, and will need to be re-created if they don’t exist.
  3. All drawing must happen in one of the Draw events. If you try to use draw functions in other events, it may or may not work correctly, and this will vary from computer to computer. I once made a game where I did some set-up in the Create Event of an object, where I created a surface, drew to it, and then created a sprite from the surface, and assigned the newly created sprite to the object. It worked fine on my computer, but when other players downloaded my game to try it out, it did unexpected things and the graphics were glitched. Fortunately, I figured out what the problem was, and fixed it by moving this sprite creation into the Draw Event. Once I did this, the game ran correctly on everyone’s computer.

Drawings done to surfaces can be run through a shader, as input, and thereby be processed by the shader. In short, a surface can be the input image data for a shader, and the output of the shader will be the processed version of that surface, transformed by the shader.
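Putting the caveats above together, the typical pattern looks something like this. This is a minimal sketch, with surf, spr_player, sh_my_effect, and the 320×240 size as placeholder names and values:

/// Create Event
surf = -1; // no surface yet; we create it on demand while drawing

/// Draw Event
// Surfaces are volatile (caveat 2), so re-create if needed
if (!surface_exists(surf))
{
    surf = surface_create(320, 240);
}

// Compose off-screen...
surface_set_target(surf);
draw_clear_alpha(c_black, 0);
draw_sprite(spr_player, 0, 64, 64);
surface_reset_target();

// ...then draw the finished surface through a shader
shader_set(sh_my_effect);
draw_surface(surf, 0, 0);
shader_reset();

/// Destroy Event (Clean Up in GMS2)
// Surfaces consume memory (caveat 1), so free when done
if (surface_exists(surf)) surface_free(surf);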

Blend Modes

For a long time, long before GMS introduced shaders, GameMaker has provided blend modes. Blend modes affect what happens when GameMaker draws graphics over existing graphics that were drawn previously. Normally, when you draw something, it covers the pixels that were there before. But, by changing blend modes, you can do other things than simply replacing the previous pixels with new pixels, blending the old and the new in different ways according to the mathematical rules of whatever blend mode you had selected.

To be honest, I’m not sure what useful purpose there is for every blend mode. It would be great if there were more tutorials showing useful applications for them, especially the obscure ones that I don’t see used much, if ever.

The most commonly useful blend modes, in my experience, are bm_normal and bm_add. Normal blending is the default drawing mode, and is what you use 99% of the time in most games. Additive blending creates vivid glowing effects, and is particularly lovely when used in conjunction with particle systems to create glowing systems of overlapping particles, especially when you are drawing translucent pixels (using alpha < 1).
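For instance, an additive glow is just a matter of switching modes around a draw call. A minimal sketch (spr_glow is a placeholder sprite):

/// Draw Event
draw_set_blend_mode(bm_add);    // colors sum toward white and "glow"
draw_sprite(spr_glow, 0, x, y);
draw_set_blend_mode(bm_normal); // always restore the default mode

(In GMS2, gpu_set_blendmode() is the newer equivalent of draw_set_blend_mode().)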

Blend modes are also useful for creating clipping masks. For more info on that, there are some good tutorials already written on how to create a clipping mask using surfaces and blend modes.

Some of the first questions I had when Shaders were introduced were: What do we do with blend modes now that we have shaders? Do we still need them? Can we combine them with shaders, somehow? Or do shaders make blend modes obsolete?

Basically, as I understand it, the answer seems to be that blend modes were kind of a limited predecessor to shaders, and enabled GM users to achieve some basic drawing effects simply, without exposing GM users to all that highly technical stuff that shaders involve, that I mentioned above.

Anything you could do with blend modes can be done with shaders instead, if you wanted to. That said, if all you need is the blend mode, they’re still there, still supported like always, and you can go ahead and use them. They’re still simpler to use, so why not.

One thing to be aware of, though: every time you change blend modes in GameMaker, you create a new “batch” of drawing. The more batches, the longer GM will take to draw the game each step, so many batches can slow drawing down tremendously. This is an area where you may need to focus on optimization. And if you’re that focused on performance, then it might be worth looking into a shader-based approach instead.

Once you’ve become sufficiently comfortable with shaders, you may not have as much need for using GameMaker’s drawing blend modes.

D3D functions

I have not used GML’s d3d functions much, either, so my understanding is very limited. Basically, as I understand it, the d3d functions in GameMaker wrap Microsoft’s Direct3D drawing functions, and enable drawing with more sophistication than is possible with the basic GML draw functions such as draw_rectangle, draw_line, draw_ellipse, etc.

Despite the name, the Direct3D functions are useful for 2D drawing as well as for 3D.

This article will not cover using GML’s d3d functions, as we’re focusing on shaders. But as any graphics in your game can be used as input into a shader program, anything you draw using d3d functions can become input for a shader to process.

Particles

Particles are “cheap” (efficient) graphical effects that can be created without having to instantiate an object. They are efficient because they do not incur all the processing overhead that comes with a full-blown object. Huge numbers of particles can be generated at very little cost. These can be used for all sorts of effects, so long as the particles do not need to interact with instances in the game, such as triggering collisions. Typically, particles are used for things like dust clouds, smoke, fire, glowing “energy plasma”, haze, rain, snow, and so on to create additional atmosphere.

To use particles, you have to create a particle system. As with Surfaces, particle systems take memory, and need to be disposed of when no longer needed, in order to free up that memory. Full detail on how to set up and use particle systems is beyond the scope of this article.
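Still, just to give a flavor of what the setup involves, here is a minimal sketch; the shape, sizes, and counts are arbitrary placeholder values:

/// Create Event: build a particle system and one particle type
ps = part_system_create();
pt = part_type_create();
part_type_shape(pt, pt_shape_smoke);
part_type_size(pt, 0.1, 0.3, 0.01, 0); // start small, grow slowly
part_type_life(pt, 20, 40);            // lifespan, in steps
part_type_blend(pt, true);             // additive blending, for a soft glow

/// Step Event: emit a small burst wherever this instance is
part_particles_create(ps, x, y, pt, 5);

/// Destroy Event (Clean Up in GMS2): particle systems take memory too
part_type_destroy(pt);
part_system_destroy(ps);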

Several external utilities have been developed by GameMaker users over the years to make designing, building, previewing, and testing particle systems easier, and these are highly recommended.

In conjunction with shaders, I don’t know that there is any direct interplay between particle graphic effects and shaders, but certainly a shader may be used to further process a region of the room where particles exist, to create more sophisticated effects.

Using Shaders in GameMaker Studio

Right, now that we’ve introduced the concept of what a shader is, and reviewed the other main graphics concepts in GMS, here’s where we get to the heart of how to use shaders in GameMaker.

A shader is a pair of shader-language programs, consisting of a vertex shader and a fragment shader. Roughly speaking, the vertex shader runs first, positioning the corners (vertices) of the geometry being drawn, and then the fragment shader runs once for each pixel inside that geometry to determine its color.

Let’s say you want to use a shader program that has already been written, perhaps by someone else. All you need to do is use this code in your draw event:

shader_set(my_shader);
//draw stuff
shader_reset();

So, it’s a lot like drawing to a Surface. With surfaces, first you set which surface you want to draw to, then you draw, then you reset so that drawing resumes to the application surface. With shaders, you set the shader you want to use, draw the stuff that you want to be transformed by the shader, then reset to normal drawing.

Everything drawn between setting the shader and re-setting back to non-shader drawing will be drawn through the shader program.

Easy enough, right? Well, there’s slightly more to it than that.

Uniforms

“Uniforms” is a strange term at first, and was where shaders started to seem strange to me. This is a term that comes from the shader language itself. The GameMaker manual talks about them in a way that assumes the reader is already familiar with the concept, and doesn’t go into a lot of detail explaining it to newbies.

In essence, “uniforms” are input variables that can optionally be passed into a shader that is designed to use input values. Once you understand what a uniform is, it’s not that difficult a concept.

The gist of it is: when writing a shader program, when you declare a variable, you can declare it to be a uniform variable, which means that the variable can be accessed from outside the shader, thereby giving the program that calls the shader a way to change the shader’s behavior during execution. To do this, you can’t just refer to the uniform variable by name; you have to get the uniform variable’s handle, using shader_get_uniform(shader, "uniformName"), and then set the value of the variable using shader_set_uniform_f(handle, value). Uniforms are actually constants within the shader’s execution scope, so once a value is passed into the shader from the outside and it is set as a uniform, it cannot be changed (the value could be copied to another variable, and that variable could then be modified, though).

If you’re using a shader that has uniforms that you need to set, it’s done like this:

// Get handles to the shader's uniforms (e.g. in the Create Event)
u_color1 = shader_get_uniform(my_shader, "f_Colour1");
u_color2 = shader_get_uniform(my_shader, "f_Colour2");

// In the Draw Event: set the shader, pass values in, then draw
shader_set(my_shader);
shader_set_uniform_f(u_color1, 1, 1, 1); // r, g, b
shader_set_uniform_f(u_color2, 1, 0, 0);
//draw stuff
shader_reset();

There are actually a few uniform functions in GML: shader_set_uniform_f() for float values, shader_set_uniform_i() for integers, shader_set_uniform_matrix() for matrices, plus _array variants of each for passing whole arrays.

Conclusion

That’s about all I know about shaders, for now.

As I get more familiar with using shaders, I’ll update this with more complicated examples, such as (possibly):

  • How to use multiple shaders on the same drawing (e.g. chaining the result of one shader’s transformation into the input of the next in a series of shaders).
  • Other stuff…

How to write shaders in GLSL, if I ever do it, will be a topic for its own article (or series of articles).

Pixel Art

Over the last two years, my primary focus in becoming a game developer has been on programming. I’ve made a lot of progress with my programming in the last two years, and I’m very happy with that, but I’m starting to feel like it’s time to balance that out by leveling up in other areas.

As a “game developer” I have to be proficient in a lot of different skill sets. My greatest strength, and how I see myself primarily, is as a designer. I am a designer who can program, draw, and, to a very limited extent, do audio. No matter what I do, the more time I spend doing it, the better I get.

Lately, I’ve been feeling like getting back into graphics. I find that when it comes to 2D video games, the stuff I have always loved best has been low-res bitmap graphics, what has come to be known as “pixel art”. Pixel art is deceptively simple. It’s not easy to do well, and it requires a deep understanding of how shape and color work when the constraints are turned up almost as high as they can go.

I’ve been reading a lot of tutorials on how to do pixel art better, and I’m starting to try my hand at it. Now that I have a better understanding of what goes into good pixel art, I’m starting to feel less frustrated while working and enjoying the results more.

I’m at a point now where I feel like I won’t be embarrassing myself by sharing my work, and I really am interested in getting feedback from people who appreciate this kind of stuff, so I’ll be posting completed works and maybe some works-in-progress, along with my comments about it all.


The Early 80’s Arcade Aesthetic

My friend Sam recently asked the internet if there were any books on early arcade game aesthetics. I’m not aware of any books that particularly stand out as being focused on game graphics, so I didn’t have any titles to suggest, although there are starting to be quite a few really good books on the history of the arcade.

To help him out, I brainstormed as much as I could, and since I think this ended up being pretty valuable, I figured I’d turn it into a blog post.

Basically every design principle in the graphics of early 80’s arcade games was governed by the insane limitations of the tiny systems of the day. Memory was SUPER expensive; 16k of RAM was a LOT in the late 70s/early 80s. CPUs were 8- or 16-bit and SLOW, running at 1MHz or so. At the time there often wasn’t a dedicated video processing unit, or even dedicated video memory — everything was handled by the CPU, which often dedicated most of its processing power to simply drawing each frame of video, leaving relatively little processing power left over for handling game logic.

Here’s a list of qualities and factors that fed into creating the early 80’s aesthetic:

  • Portrait aspect ratios. Most of the old games, particularly vertical scrolling shooters, had monitors mounted in the cabinet in Portrait orientation (3:4 aspect ratio, as opposed to 4:3 ratio). Portrait gave vertical shooters more range to fire, and enabled manufacturers to build narrower cabinets, which allowed them to store, ship, and display more units in a given area.
  • Large pixels. The dot-pitch of those old screens was pretty coarse. You might have had a 15-, 17-, or 19-inch screen displaying 320×240 resolution, or even 240×160. Individual pixels were quite apparent, particularly in the late 70’s. Macro lens photos of the screen would reveal visible gaps between pixels. Early home computer monitors were capable of displaying a mere 40 or 80 characters of text, and the screens were tiny — 13″ or smaller.
  • Tiny sprites (usually 16×16 or 32×32 max)
  • Animations typically limited to 2-3 frames, though there were sometimes exceptions. Each frame of animation in a sprite cost valuable storage.
  • Bright colors and pastels. Here’s a great collection of color palettes available to home consoles and computers.
  • Grid-based graphics. Most terrain, characters, etc. were sized to fit within a standard grid size. Terrain, mazes, etc. were generally built out of repeated tiles.
  • No alpha channel. I don’t recall seeing any translucency (colors blending when two sprites overlap) in this era. Any transparency would have been all or nothing, provided by a mask. Before masking techniques became widespread, many early games had the background color drawn into the sprite, resulting in artifacts when two sprites would overlap.
  • Limited color palette. There were 2^N colors to pick from, where N <= 8, so generally 256 or fewer colors on screen. The most common color depths were 1-bit (B&W) and 8-bit (256 colors), although there were a few notable grayscale games, such as Fire Truck. 8-bit color ruled until the 16-bit revolution came to the arcade around 1986-87; the golden era (roughly 1978-1984) of the arcade was exclusively B&W and 8-bit. Oftentimes, computers of the day had a pre-defined color palette baked into the hardware, and were further limited in the number of distinct colors they could draw on the screen at any one time: out of, say, 4096 possible colors, you could only draw 16 (or 64, or 128) of them on the screen (or, in some cases, up to 4 colors in any one sprite) at any one time. If you want to emulate specific hardware, it’s a good idea to research its capabilities and narrow your color selection to match the authentic palette of the original hardware. These limitations often resulted in workarounds such as dithering (drawing two colored pixels closely together to allow the eye to blend them to a middle value). Here’s a fascinating article about the Commodore 64, describing a technique for getting “secret” colors to emerge from the C64’s limited palette by rapidly switching between two colors in the palette to synthesize a new color. It also meant that smoothing your images with anti-aliasing wasn’t possible, because there weren’t enough available colors to do proper tweening. Jaggy pixels ruled the day. Many home computer games of the era did their graphics in Text Mode, which has its own distinct look. See also: MDA, CGA, EGA, VGA.
  • Palette swapped sprites. Old computers used color palettes, or indexed color. Out of a gamut of, say, 64 or 256 or 1024 or 4096 possible colors, a sprite typically could only use a handful, say 4 or 16, of the available colors. These chosen colors were defined by a “palette”, and each color in the palette had an index value used to refer to it. A palette swap took a bitmap and re-mapped the value in each pixel to a different color from a new palette, so by swapping one palette for another, the same sprite would be drawn with new colors. Re-using and re-coloring the sprite saved on storage space. This is why Mario is red and Luigi is green, for example. It was also very common to denote different power levels of enemies using palette swaps (see the sketch at the end of this list).
  • Blinking and flashing. Rapidly flashing colors as a cheap, eye-catching form of pseudo-animation.
  • Flicker. If the processor couldn’t handle drawing all of the sprites on the screen in every screen refresh, something had to drop. So a sprite might not draw every screen update if there are too many on the screen, or too many in a horizontal scan line.
  • Abstract, iconified representations of things, and cartoony drawings, as opposed to realistic drawings.
  • Reliance on clichés, tropes, and popular idioms to help make graphics more easily recognizable, and a willingness to extend the idiom in a clever/absurd/zany fashion.
  • Fruit and keys and things are canonical bonus items.
  • Giant head/face, tiny body/limbs. They tried to fit the entire character into a 32×32 square, and most of the detail needed to go into the face/head to make the character recognizable and memorable.
  • High contrast is important for foreground/background.
  • Shigeru Miyamoto once gave an interview where he discussed why the original Donkey Kong sprite for Mario…
    [Image: Mario sprite]
    • had white skin (the background was black, so they wanted strong contrast),
    • had a mustache (it helped his nose stand out, and a mouth and chin were too complicated for the number of pixels left in the region),
    • wore red overalls/blue shirt (the overalls helped with the contrast of his swinging arms, which you otherwise wouldn’t get from a solid-colored top), and
    • wore a hat (his dark hair would have stood out less against a dark background, and presented problems with animation).
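Here is the palette-swap sketch promised above: a toy illustration of indexed color in GMS2-style GML, where the pixel data never changes and only the palette does (all names and values are invented for the example):

/// Create Event: a tiny 4x2 "sprite" stored as palette indices
sprite_data = [0, 1, 1, 0,
               2, 3, 3, 2];
pal_a = [c_red,  c_blue, c_orange, c_black]; // "Mario" palette
pal_b = [c_lime, c_blue, c_orange, c_black]; // "Luigi" palette

/// Draw Event: a swap re-maps each index through a new palette
var pal = keyboard_check(vk_space) ? pal_b : pal_a;
for (var i = 0; i < array_length(sprite_data); i++)
{
    draw_point_color(x + (i mod 4), y + (i div 4), pal[sprite_data[i]]);
}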

Don’t forget vector!

Notable vector titles of the era:

  • Asteroids was the first hugely successful arcade game that used a vector display. Note the intense glow of the UFO and missile in this image, due to the vector display over-drawing those lines many more times than the refresh rate of a raster scan CRT would have allowed.
    [Image: Asteroids]
    I’m not sure what the very first use of a vector monitor in the arcade was; maybe Lunar Lander?
  • Battlezone. When gamers of the era think about vector games, probably the first two titles they’ll think of are Asteroids and Battlezone.
    [Image: Battlezone]
  • Qix. Actually, Qix used a raster monitor, but it was primarily line-based art, so I’m including it anyway for inspiration. Plus, it gives you an idea of how a line art game would look on a low-res raster display of the period.
    [Image: Qix]
  • Tempest. Tempest was the first color vector game, and was a sensation at the time of its release.
    [Image: Tempest]
  • Space Duel is one of my all time favorite games. It featured innovative 2-player co-op/competitive play, and awesome graphics.
    [Image: Space Duel]
    Note the distinct difference between this photograph of an actual vector monitor screen and how the game looks when emulated on a modern display:
    [Image: Space Duel on an actual vector monitor]
    [Image: Space Duel emulated on a modern display]
  • Star Castle. An often overlooked classic, the arcade version of Star Castle used a color overlay over a monochrome vector CRT:
    [Image: Star Castle with color overlay]
    Later cabinets made use of a color vector CRT display, and looked much better:
    [Image: Star Castle with a color vector CRT]
  • Star Wars (a really impressive achievement in vector graphics, actually: convincing 3D, accurate wireframes of familiar star fighters from the movie, simulated fills, etc.)
    [Images: Star Wars arcade screenshots]

There might be other notables that I’m forgetting, as well, but these should have you pretty well covered.

Color vector screens were rare and expensive; most vector games were B/W or monochrome (green or amber). I believe that before proper color vector monitors became cheap enough, some vector games may have made use of cellophane overlays attached to the screen, which filtered the vector image painted on that part of the screen to make it appear colored.

When you DID have colors, they were very bright colors, almost always primary colors (RGB).

The way the vector monitors worked:

  • There are no pixels (not easy to emulate, but maybe the retina display on the new iPhone/iPad can help make this more convincing?) This meant no aliasing or scaling artifacts.
  • A->B, not scan lines. The cathode beam was drawing from A to B for each line segment, not drawing scan lines from top to bottom.
  • Bright and sharp. A vector display could spend much more time drawing each line segment, achieving far faster refresh rates than the 30Hz typical of pre-HDTV raster CRTs. Unlike a raster CRT, there was not a fixed refresh rate; the cathode beam traced over the line segments as quickly as it was able to. This resulted in a very bright, flicker-free vector line (again, not easy to emulate) compared to the brightness of a white pixel on a raster display. There was often some ghosting as the intensely bright phosphor dimmed after the object on the screen moved; this was a hardware artifact, not something programmed into the graphics routine as a special effect. Vector displays GLOWED and were sharp and gorgeous.
  • More stuff to draw means dimmer lines. The more stuff being drawn on the screen at once, the dimmer each individual line became, ultimately resulting in visible flicker if too many things were being drawn at once.
  • Even brighter vertices. Where line segments intersected, or at vertices, the beams additively excited the phosphors resulting in an even brighter point at the corner in relation to the brightness of the rest of the line segment. We’re talking REALLY excited phosphors!
  • Geometric shapes and polygons, not curves. Curves would have required far too much computing time to calculate precisely. Curves were always approximated with line segments. Linear functions are way faster than polynomial and trig functions, and the processors of the day didn’t even have dedicated floating point units (FPUs).
  • (Usually?) a single line thickness for all graphics. I can’t think of any vector games where the line thickness varied, but it’s possible there may have been some. Typically the lines were quite thin, like pencil lines.
  • No fills. Everything is a wireframe — maybe a simulated fill by drawing in a bunch of lines in a pattern. Fancier 3D games would occlude line segments that were “behind” the surface of some other object, but a lot of them just let you have a kind of x-ray vision effect where you could see through the wireframe.
  • Black background. You can have any background color you want, as long as it’s black.
  • Favorable to 3D. These properties made 3D games much easier to draw in vector than for raster graphic displays of the time. So a lot of the early 3D experiments were done with vector displays, most notably Battlezone.

Further reading