csanyk.com

video games, programming, the internet, and stuff

Tag: GameMaker Studio

GameMaker Tutorial: Audio speedup with sync

In many games, a music speedup is a great way to signal to the player that they need to hurry up and get things done.

It’d be great if you could simply set a new tempo with a simple GML function, and have the current background music adjust on the fly. Something like audio_sound_set_speed(sound, speed) would be lovely. But it’s not as simple as that in GameMaker, as I found out recently.

Here’s how I implemented a speedup for my GMLTetris project:

First, I created two music tracks, one at normal speed, and one at double speed, and added them to the game project as sound assets.

Everything else is just programming:

if <condition> {stop slow_music; start fast_music;}

This is easy enough; the trickiest part is probably getting the condition for switching tracks right, but depending on the game, that condition could be very simple, too. The only real complication is that if you're checking the condition repeatedly, as you normally would every Step, you only want to trigger the changeover once. To do that, set up a variable that tracks whether the switch has already happened, and check it as well, since the condition that triggers the changeover may continue to remain true on successive steps.

if <condition> && !music_switched {stop slow_music; start fast_music; music_switched = true;}

The speedup in a game like Super Mario Bros., where the music speeds up when the level timer hits 100, is a typical example of this technique. If you only need to do a single, one-way switch, this is all you need.
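In concrete GML, that one-way switch might look something like the following. This is just a minimal sketch, assuming two sound assets named slow_music and fast_music (matching the pseudocode above); the priority of 10 and the looping flag are example values.

///Create Event:
music_switched = false;
audio_play_sound(slow_music, 10, true);

///Step Event:
if <condition> && !music_switched
{
  audio_stop_sound(slow_music);
  audio_play_sound(fast_music, 10, true);
  music_switched = true;
}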

If your game needs to switch back and forth between slow and fast music, your conditional needs to be more sophisticated.

if <condition>
{
   if !<fast_music_already_playing>
   {stop slow_music; start fast_music;}
}
else
{
   if !<slow_music_already_playing>
   {stop fast_music; start slow_music;}
}

Here, because the game can switch multiple times, when the condition check happens, we can’t get away with a music_switched variable that changes one time. What we need to do is check to see if the music we need to switch to is already playing, and if not, stop the current music and switch to the other music.

One thing to keep in mind: this basic technique will start the fast music from the beginning of the track. This might be what you want, but it would also be good if you could start the fast music at the position where the slow music left off, for a seamless transition.

GML has the functions to do this: audio_sound_get_track_position() and audio_sound_set_track_position(). But in order to make use of them, we need to do a bit more work.

First, since the GML functions return the absolute time position within the track, and since the two tracks play at different speeds, we need to adjust the position proportionally when we switch tracks, so that the new track starts at the same relative (percentage-wise) position. This is actually easy, as long as we know the tempo change, which we do. Since the fast track is double speed, we can calculate the equivalent position in the other track by multiplying or dividing by 2.

Slow to fast: position *= 0.5;

Fast to slow: position *= 2;

Where I ran into trouble was, I needed to be able to switch both ways. It seemed like it should be simple — just check whether the desired track is already playing, and if not, get the position of the current track, adjust it proportionately, start the desired track, set the position. Easy, right?

Let’s look at it in pseudocode first:

if <condition to switch to fast music>
{
   if audio_is_playing(slow_music)
   {get position; stop slow music; start fast music; set position;}
}
else
{
   if audio_is_playing(fast_music)
   {get position; stop fast music; start slow music; set position;}
}

This was when I discovered that audio_play_sound() returns a handle identifying the specific instance of the sound that is playing. That handle is what you need in order to set the track position of the playing sound. You can't just set the position for the sound_index; you have to set it for the handle of the currently playing instance of the sound. If you set the track position for the sound_index, then any time that sound resource is played in the future, it will start from that position.

///Create Event:
bgm = audio_play_sound(slow_music, 10, true);

///Step Event:
if <condition>
{
  if audio_is_playing(slow_music)
  {
    var pos = audio_sound_get_track_position(bgm);
    audio_stop_sound(bgm);
    bgm = audio_play_sound(fast_music, 10, true);
    audio_sound_set_track_position(bgm, pos * 0.5);
  }
}
else
{
  if audio_is_playing(fast_music)
  {
    var pos = audio_sound_get_track_position(bgm);
    audio_stop_sound(bgm);
    bgm = audio_play_sound(slow_music, 10, true);
    audio_sound_set_track_position(bgm, pos * 2);
  }
}

I also discovered that detecting which track is playing with audio_is_playing() does not work for this purpose. I still don't have a clear understanding of what was happening in my code, but some debugging showed that my track position calculations were being distorted by being run multiple times. This didn't make sense to me, because the song should no longer be playing after the first step in which the switch condition is met. My theory is that since the audio is played in a separate process from the main program, there is some asynchronous messaging going on between the two processes, and as a result audio_is_playing() can still return true for a step after the message to stop the track has been sent.

Trying to set the track position multiple times in quick succession produced weird and unexpected results: the tracks switched, but the position was not set correctly.

So I had to come up with a surer method of knowing which music is playing.

Debugging was tricky, since I couldn't tell just from listening where the audio position was. To aid debugging, I drew the position of the playing track to the screen, and then I was able to see that the position was not being set as expected. Somehow, switching from slow to fast would drop the track back from about 10 seconds to 2 seconds, and then switching from fast to slow would jump from 2 seconds to 38 seconds.

I couldn’t figure out why that was happening, so I tried using show_debug_message() and watched the output console, and saw that the track position would update 2 or 3 times when the tracks switched; I was expecting it to only update once.

This is what clued me in to what I believe was happening due to the two processes communicating asynchronously.

The solution I used in the end was simple: instead of checking which track was currently playing and switching from slow to fast (or vice versa) based on the currently playing audio asset, I added a new variable, condition_previous, and compared the current condition to it, making the switch happen only when the two didn't match. This is true for only one step, when the condition changes from false to true or vice versa, so the track position is set exactly once, when the bgm switches tracks and the new track is synced up to where the old track left off.

//("switch" is a reserved word in GML, so the flag is named condition / condition_previous)
condition_previous = condition;

condition = <condition to check for switching to the fast music>;

if condition && !condition_previous
{
  var pos = audio_sound_get_track_position(bgm);
  audio_stop_sound(bgm);
  bgm = audio_play_sound(fast_music, 10, true);
  audio_sound_set_track_position(bgm, pos * 0.5);
}
else
{
  if !condition && condition_previous
  {
    var pos = audio_sound_get_track_position(bgm);
    audio_stop_sound(bgm);
    bgm = audio_play_sound(slow_music, 10, true);
    audio_sound_set_track_position(bgm, pos * 2);
  }
}

This works because it guarantees that the condition checks are accurate: they do not depend on checking the status of audio playing in another thread.

GML Tetris, a GameMaker Demo project

My latest asset for the GameMaker Marketplace is a Tetris demo. Fully featured and configurable, it requires only sound files to be added to complete the project.

It's meticulously researched, beautifully coded, fully documented, and rigorously tested, and represents approximately 150 developer hours of work, for only $4.99. It's playable as-is, right out of the box. The code is easy to understand and easy to configure with simple changes, and modding is encouraged.

Visual theory of the Z3D Engine

Editor’s note: [I originally wrote this as an Appendix to the documentation for Z3D Engine, but I think it’s interesting enough to deserve a slightly wider audience.]

I have to preface this section by saying that I have no idea what I’m talking about here, but am trying to learn.  I like math, but I didn’t go to school primarily for it, and that was decades ago. I haven’t studied 3D geometry, or optics, or computer graphics in any formal sense. I’m figuring this out more or less by myself, learning as I teach myself.

So if someone who knows more than I do wants to explain this stuff better than I can, I'd love to hear from you. You can send me an email at the Contact page, or tweet at me @csanyk, or just comment on this article.

Thanks in advance!

I’ve called Z3D Engine a “Fake 3D” engine and “2.5D” engine, because those are fairly vague terms that I don’t have to worry about being right about. Someone asked me what type of view it is, and I couldn’t tell them. That bothered me, so I started reading a bit. I still don’t really know for sure.

or·tho·graph·ic pro·jec·tion
ˌôrTHəˈɡrafik prəˈjekSHən/
noun

  1. a method of projection in which an object is depicted or a surface mapped using parallel lines to project its shape onto a plane.
    • a drawing or map made using orthographic projection.

I think this is sort of close to what Z3D is… maybe. What I can tell you about Z3D is this: you can see the full front side and the full top side of (most) objects. These do not foreshorten.

fore·short·en
fôrˈSHôrtn,fərˈSHôrtn/
verb
gerund or present participle: foreshortening

  1. portray or show (an object or view) as closer than it is or as having less depth or distance, as an effect of perspective or the angle of vision.
    • “seen from the road, the mountain is greatly foreshortened”

The blue rectangle that represents the “player” in the demo is intended to show the player as a side view only, with no pixels in the sprite representing the top surface of the player. This is because I intend Z3D to be used for games drawn in a visual style similar to the top-down Legend of Zelda games. In those games, no matter which way Link is facing, you can only see pixels in his sprite that represent his side, and nothing that represents the top of him, even though you're viewing most of the rest of the terrain in the room from this weird view where you can see both the top and side of things like blocks and chests, while for other things, like bushes, you can only see the side.

Things in Z3D do not appear to get smaller as they recede into the background, or bigger as they come closer to the foreground. As well, for objects that have tops, the top is drawn 1 visual pixel "deep" (in the y-dimension) for every pixel of distance.

This doesn’t look correct, strictly speaking; if you’re looking for “correct” visuals this engine likely isn’t for you.  But it is visually easy to understand for the player, and it is very simple.

What I'm doing in Z3D Engine is showing the top of everything (that has a top) as though you're looking at its top from a vantage point that is exactly perpendicular to the top, while at the same time you're also seeing the side of everything as though you're looking at the side from a vantage point that is exactly perpendicular to the side. This is an impossible perspective in real life, but it works in 2D graphics that are trying to create a sort of "fake" 3D look, which is what Z3D does.

Imagine you’re looking at this cube:

Cube

At most, assuming the cube is opaque, you can see only three faces of the cube from any given vantage point outside the cube; the other three faces are occluded on the other side of the cube.

Cube with occluded faces

(That image above is properly called Isometric, by the way. Z3D is not isometric).

If you were looking at the cube from a vantage point where you were perpendicular to one of the faces, you could only see that one face, and it would look like a square:

Square

(Since the faces of this cube are all nondescript, we can’t tell if we’re looking at the side or the top of the cube.)

Now, if it were possible to be at a vantage point that is exactly perpendicular to both the side and the top of the cube simultaneously, the cube would look like this:

Flattened Bi-perspective cube

This is weird and wrong, and yet it is easy to understand, and it turns out that it is also very easy to compute position and movement along the three dimensional axes if you allow this wrong way of drawing. This view is (or perhaps is similar to) a method of visualization known as an oblique projection.

More properly, if you were positioned at a vantage point somewhere between the two points that are perfectly perpendicular to the top and side faces, the cube would look like this:

Cube in perspective

Here, obviously, we are looking at the cube mostly from the side, but our eye is slightly above, so we can see the top of the cube as well.  But notice, since we are not viewing the top face of the cube from a perpendicular vantage point, it does not appear to be a square any longer — it foreshortens, so that the far end of the top of the cube appears narrower than the closer end.

This is perhaps obvious, because we're used to seeing it; we see it every day, because that's what real life looks like. But it's because we see this every day that we take it for granted, and when we have to explicitly understand what's going on visually with geometry, we have to unpack a lot of assumptions and intuitions that we don't normally think about consciously.

If we were to put our eye at the exact middle point between the points that are perpendicular to the side face and the top face, the cube should look to us like this:

Cube at 45°

Notice that both the bottom of the side face and the far edge of the top face are foreshortened due to perspective.

This is how they “should” look in a “correct” 3D graphics system, but Z3D “cheats” to show both the side and top faces without doing any foreshortening, which means that it can draw an instance as it moves through any of the three dimensions using extremely simple math.

Visually, moving 1 pixel left or right is always done at an hspeed of -1 or 1, regardless of whether the object is near (at a high y position) or far away (at a low y position). Likewise, moving nearer or farther is always done at a rate of one distance pixel per apparent visual pixel, and moving up and down in the z-dimension is also done at a rate of one distance pixel per apparent visual pixel.
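I won't reproduce Z3D's actual drawing code here, but the core trick can be sketched in a couple of lines of GML. This is only an illustration under my own assumptions: x and y are the instance's position on the floor plane, and z is a user-defined variable holding the instance's altitude.

///Draw Event (sketch, not Z3D's actual code):
//the instance is simply drawn z pixels higher on screen than its y position
draw_sprite(sprite_index, image_index, x, y - z);

Moving along any of the three axes then changes the drawn position by exactly one pixel per unit of distance, which is what keeps the math so simple.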

If we wanted to draw more convincingly realistic 3D graphics, we would need to understand what's going on with the eye, with perspective, and with things at a distance.

Eye viewing the cube at a 45° angle

The same object viewed in Z3D’s perspective is something like this:

Eye looking at Z3D rendering

(We’ve omitted the occluded faces on the back end of the cube relative to the viewer, for simplicity.)

These two “apparent” perspectives are combined at the point where the player’s real eye is, resulting in something like this fake-3D perspective:

Z3D Flattened Orthographic bi-perspective rendering

So, in conclusion, I'm not 100% sure that my terminology is correct, but I think we can call this perspective "flattened orthographic bi-perspective" or perhaps "oblique projection".

(From this, we can begin to see how a corrected view might be possible, using trigonometry to calculate the amount of foreshortening/skew a given position in the Z3D space would need in order to appear correct for a single-POV perspective.  But this is something well beyond what I am planning to do with the engine; if you wanted this, you would be far better off creating your game with a real 3D engine.)

It gets weirder when you realize that for certain objects, such as the player, we're going to draw only the side view, meaning that the player will be drawn as a flat 2D representation in a fake 3D space. Yet the player's "footprint" collision box will likely have some y-dimension height to it.

z3d Engine for GameMaker Studio

z3d is a fake-3d engine designed for simplicity, efficiency, performance, and ease of use. Full documentation + demo included.

In a 2D GameMaker room, x and y coordinates are used for positions in the 2D space. 3D requires a third variable for the third dimension, z. In the z3d engine, x and y are used to represent the "floor" plane, drawn in a forced perspective that gives the viewer a full view of both one side and the top of objects, while z is used for altitude.

GameMaker Marketplace

Full Documentation

GameMaker Studio Tutorial: Getting Into Shaders

Shaders have been a part of GameMaker Studio for a while now, having been introduced in 2014. Since their inclusion, I have mostly remained mystified by them, with a vague and cloudy understanding of what they are and what they can do, and I haven't used them at all. That will [hopefully] start to change today.

As always, when I try to learn something new in programming, I find that writing up what I've learned helps me remember it and keeps my learning organized. I like to publish my notes so that they can help others, and so that others can find errors and make suggestions for better ways to do things. I'm still very new to working with shaders, so I'm not trying to put myself out there like I'm some kind of expert, but here's what I've been able to learn about using shaders with GameMaker Studio so far:

Shader basics

First, we need to understand what a shader is. A shader is a specialized program that processes graphics. Shaders are executed on the Graphics Processing Unit, or GPU, which is specialized hardware for accelerated graphics processing. Thus, shaders are very fast. As well, since they work on the GPU, using shaders will free up the CPU to do other tasks, which can further help to improve the frame rate of your games.

This sounds like enough of a reason to want to use shaders, doesn't it? Well, it gets better. The main thing about shaders is that they can do amazing visual effects, which can make your game look better, but they can also play an active role in how the game plays. For example, you could use a shader to handle the graphical processing of a special view mode in the game, such as night vision or x-ray vision. One of my favorite shader-based gameplay mechanics was in Daniel Linssen's Birdsong, winner of the "Overall" and "Theme" categories of the Ludum Dare 31 compo held in 2014. The theme of LD31 was "Entire Game in One Screen", and Linssen's approach was to create a giant one-room game crammed into a single screen (no scrolling), and, using a fish-eye lens effect done with a shader, magnify the area where the player is so that it was large enough and detailed enough to be playable.

There’s virtually no limit to what graphical effects you can come up with using shaders, other than the limits of your imagination and of course your programming and math skills. It also helps to understand how computers work with graphical concepts such as color, pixels, binary math, and so forth. Additionally, scientific knowledge in disciplines like optics can be very useful. Shaders have their own specialized programming language that they are coded in — actually there are several related languages to choose from. Because of this, shaders are considered an advanced topic in programming, and there are numerous hurdles to surmount in order to be able to write them yourself.

That said, shaders are re-usable bits of code, and so one of the first things you can do when you start getting into shaders is to simply use pre-existing shaders that have been written by other people.

Getting Started with Shaders

Before you can use shaders, you’ll want to familiarize yourself with a few concepts.

Shader references

Here are links to the relevant pages in the GMS manual on using shaders in the context of GameMaker Studio:

GMS1:

Shaders Overview

Shaders GML reference

Shader Constants

Tech Blog Shaders tutorial: 1 2 3 4

GMC Forums shader tutorial.

GMS2:

Shaders Overview

Shader Constants

 

Other shader resources (general)

Language References

The four shader languages that GMS supports are: GLSL ES, HLSL9, HLSL11, and GLSL. Which one you need to learn and use will depend on your target platform, but for this article we’ll focus on GLSL ES, since it supports the most target platforms (all of them, except Windows 8).

GLSL ES language specification

HLSL language specification

I haven’t gotten into the shader languages enough yet to know why you’d ever pick HLSL over GLSL, but presumably there must be some advantage to using HLSL when targeting Windows platforms, either in terms of correctness or performance. Otherwise, I would think you’d be better off just sticking with GLSL ES and be compatible with the most targets.

Tools

Shadertoy is a wonderful website that allows you to play with shader programming, running your shaders in your web browser. You can then share your shader creations with the community of users of the website, and in turn you can demonstrate and use shaders written by others.

Other graphical concepts in GameMaker, and how they relate to shaders

It’s not a bad idea to review and understand the entire chapter on Drawing in the GameMaker documentation. There are many concepts that you will need a working knowledge of in order to understand how to use drawing to its fullest capacity, and to get things working together smoothly.

But the manual isn’t the end of the story. Often I find that the manual doesn’t go far enough to explain how different concepts work. The sections on blend modes and alpha testing are particularly inadequate by themselves. The manual also doesn’t go very far to demonstrate or suggest how different features and functions can be connected to one another. That’s for the user to infer, and verify through experimentation. This is great if you are creative and love to experiment and discover. On the other hand, there’s so much that has already been figured out and discovered by others, and it would be nice if that was all documented in an easy to search reference somewhere.

Read the entire manual, cover to cover, if you can. Create little demo projects to test your understanding of what you’ve read, and figure out how to do things. Read it again. And refer to it whenever you need to. There’s no substitute for reading and understanding the manual. I’ll still touch briefly on the major concepts here, for summary:

Surfaces

All drawing happens somewhere, and in GameMaker that somewhere is called a surface. Behind the scenes, a surface is simply a chunk of memory that is used to store graphics data. You can think of it as a 2D grid of pixels, stored in the program’s memory, and drawn to the screen only when called for. You can also think of it as virtual “scratch paper” where you do some work “backstage” and then bring it out to use in the game when needed.

The application has an Application Surface, and by default everything is drawn here. But you can create other surfaces, which you can work on, composing a drawing off-screen, until you are ready to draw it to the screen. As you might imagine, there are countless reasons why this is useful, and endless ways to make use of surfaces.

Surfaces are relatively easy to use, but are considered an intermediate-level programmer's tool in GameMaker, for a few reasons:

  1. Surfaces consume memory, and need to be disposed of when no longer needed.
  2. Surfaces are volatile, and can be destroyed without warning, so should not be assumed to exist. For example, if the player switches focus to a different application, or if the computer enters sleep or hibernation mode, or if the game is saved and resumed, surfaces that were in existence at the time the application was last running may have been cleaned up by the operating system, and will need to be re-created if they don’t exist.
  3. All drawing must happen in one of the Draw events. If you try to use draw functions in other events, it may or may not work correctly, and this will vary from computer to computer. I once made a game where I did some set-up in the Create Event of an object, where I created a surface, drew to it, and then created a sprite from the surface, and assigned the newly created sprite to the object. It worked fine on my computer, but when other players downloaded my game to try it out, it did unexpected things and the graphics were glitched. Fortunately, I figured out what the problem was, and fixed it by moving this sprite creation into the Draw Event. Once I did this, the game ran correctly on everyone’s computer.

Drawings done to surfaces can be run through a shader, as input, and thereby be processed by the shader. In short, a surface can be the input image data for a shader, and the output of the shader will be the processed version of that surface, transformed by the shader.
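For example, you might compose a scene onto a surface off-screen and then draw that surface through a shader. The following is only a sketch using hypothetical names: surf, spr_scene, and sh_effect are not real assets from the manual or any project, and the surface size is arbitrary.

///Create Event:
surf = -1; //will hold the surface id

///Draw Event:
//re-create the surface if it doesn't exist (surfaces are volatile)
if !surface_exists(surf)
{
  surf = surface_create(640, 480);
}

//compose the scene off-screen
surface_set_target(surf);
draw_clear_alpha(c_black, 0);
draw_sprite(spr_scene, 0, 0, 0);
surface_reset_target();

//draw the composed surface to the screen, processed through the shader
shader_set(sh_effect);
draw_surface(surf, 0, 0);
shader_reset();

//remember to call surface_free(surf) when the surface is no longer needed
//(for example, in the Clean Up or Destroy event)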

Blend Modes

For a long time, long before GMS introduced shaders, GameMaker has provided blend modes. Blend modes affect what happens when GameMaker draws graphics over existing graphics that were drawn previously. Normally, when you draw something, it covers the pixels that were there before. But, by changing blend modes, you can do other things than simply replacing the previous pixels with new pixels, blending the old and the new in different ways according to the mathematical rules of whatever blend mode you had selected.

To be honest, I’m not sure what useful purpose there is for every blend mode. It would be great if there were more tutorials showing useful applications for them, especially the obscure ones that I don’t see used much, if ever.

The most commonly useful blend modes, in my experience, are bm_normal and bm_add. Normal blending is the default drawing mode, and is what you use 99% of the time in most games. Additive blending creates vivid glowing effects, and is particularly lovely when used in conjunction with particle systems to create glowing systems of overlapping particles, especially when you are drawing translucent pixels (using alpha < 1).
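For example, switching to additive blending around a few overlapping draws makes them brighten where they overlap. This is a sketch; spr_glow is a hypothetical sprite name, and in GMS2 the equivalent call is gpu_set_blendmode.

///Draw Event:
draw_set_blend_mode(bm_add);        //additive blending: overlapping pixels add together and "glow"
draw_sprite(spr_glow, 0, x, y);
draw_sprite(spr_glow, 0, x + 8, y); //brighter where the two sprites overlap
draw_set_blend_mode(bm_normal);     //always reset to normal when done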

Blend modes are also useful for creating clipping masks. For more info on that, there are some good tutorials already written on how to create a clipping mask using surfaces and blend modes.

Some of the first questions I had when Shaders were introduced were: What do we do with blend modes now that we have shaders? Do we still need them? Can we combine them with shaders, somehow? Or do shaders make blend modes obsolete?

Basically, as I understand it, the answer seems to be that blend modes were kind of a limited predecessor to shaders, and enabled GM users to achieve some basic drawing effects simply, without exposing GM users to all that highly technical stuff that shaders involve, that I mentioned above.

Anything you could do with blend modes, can be done with shaders instead, if you wanted to. That said, if all you need is the blend mode, they’re still there, still supported like always, and you can go ahead and use them. They’re still simpler to use, so why not.

One thing to be aware of, though, when using blend modes: every time you change the blend mode in GameMaker, you create a new "batch" of drawing. The more batches, the longer GM will take to draw the game each step, so many batches can slow drawing down tremendously. This is an area where you may need to focus on optimization, and if you're that focused on performance, then it might be worth looking into a shader-based approach instead.

Once you’ve become sufficiently comfortable with shaders, you may not have as much need for using GameMaker’s drawing blend modes.

D3D functions

I have not used GML's d3d functions much either, so my understanding of them is very limited. Basically, as I understand it, the d3d functions in GameMaker wrap Microsoft's Direct3D drawing functions, and enable drawing with more sophistication than is possible with the basic GML draw functions such as draw_rectangle, draw_line, draw_ellipse, etc.

Despite the name, the Direct3D functions are useful for 2D drawing as well as for 3D.

This article will not cover using GML’s d3d functions, as we’re focusing on shaders. But as any graphics in your game can be used as input into a shader program, anything you draw using d3d functions can become input for a shader to process.

Particles

Particles are “cheap” (efficient) graphical effects that can be created without having to instantiate an object. They are efficient because they do not incur all the processing overhead that comes with a full-blown object. Huge numbers of particles can be generated at very little cost. These can be used for all sorts of effects, so long as the particles do not need to interact with instances in the game, such as triggering collisions. Typically, particles are used for things like dust clouds, smoke, fire, glowing “energy plasma”, haze, rain, snow, and so on to create additional atmosphere.

To use particles, you have to create a particle system. As with Surfaces, particle systems take memory, and need to be disposed of when no longer needed, in order to free up that memory. Full detail on how to set up and use particle systems is beyond the scope of this article.
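Here is roughly what that lifecycle looks like in GML; this is only a sketch, and the particle settings are arbitrary example values.

///Create Event:
ps = part_system_create();           //the particle system
pt = part_type_create();             //a particle type to emit
part_type_shape(pt, pt_shape_flare);
part_type_size(pt, 0.1, 0.3, 0, 0);
part_type_color1(pt, c_orange);
part_type_life(pt, 20, 40);          //lifetime in steps
part_type_speed(pt, 1, 3, 0, 0);
part_type_direction(pt, 0, 360, 0, 0);

///Step Event:
//burst a few particles at the instance's position
part_particles_create(ps, x, y, pt, 5);

///Clean Up / Destroy Event:
//free the memory used by the particle type and system
part_type_destroy(pt);
part_system_destroy(ps);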

Several external utilities have been developed by GameMaker users over the years to make designing, building, previewing, and testing particle systems easier, and these are highly recommended.

In conjunction with shaders, I don’t know that there is any direct interplay between particle graphic effects and shaders, but certainly a shader may be used to further process a region of the room where particles exist, to create more sophisticated effects.

Using Shaders in GameMaker Studio

Right, now that we’ve introduced the concept of what a shader is, and reviewed the other main graphics concepts in GMS, here’s where we get to the heart of how to use shaders in GameMaker.

A shader is a pair of shader-language programs, consisting of a vertex shader and a fragment shader. Vertex shaders operate on the vertices (the corner points) of the geometry being drawn, while fragment shaders operate on the pixels that fill in the area between those vertices.

Let’s say you want to use a shader program that has already been written, perhaps by someone else. All you need to do is use this code in your draw event:

shader_set(my_shader);
 //draw stuff
shader_reset();

So, it’s a lot like drawing to a Surface. With surfaces, first you set which surface you want to draw to, then you draw, then you reset so that drawing resumes to the application surface. With shaders, you set the shader you want to use, draw the stuff that you want to be transformed by the shader, then reset to normal drawing.

Everything drawn between setting the shader and re-setting back to non-shader drawing will be drawn through the shader program.

Easy enough, right? Well, there’s slightly more to it than that.

Uniforms

“Uniforms” is a strange term at first, and was where shaders started to seem strange to me. This is a term that comes from the shader language itself. The GameMaker manual talks about them in a way that assumes the reader is already familiar with the concept, and doesn’t go into a lot of detail explaining it to newbies.

In essence, “uniforms” are input variables that can optionally be passed into a shader that is designed to use input values. Once you understand what a uniform is, it’s not that difficult a concept. You can read more about them at these pages:

The gist of it is: when writing a shader program, when you declare a variable, you can declare it to be a uniform variable, which means that the variable can be set from outside the shader, giving the program that calls the shader a way to influence the shader's behavior. To do this from GML, you can't just refer to the uniform variable by name; you have to get a handle to it using shader_get_uniform(shader, "nameOfUniformVariable"), and then set its value using shader_set_uniform_f(handle, value). Uniforms are actually constants within the shader's execution scope, so once a value is passed into the shader and set as a uniform, it cannot be changed there (the value could be copied to another variable inside the shader, and that variable could then be modified, though.)

If you’re using a shader that has uniforms that you need to set, it’s done like this:

u_color1 = shader_get_uniform(my_shader, "f_Colour1");
u_color2 = shader_get_uniform(my_shader, "f_Colour2");

shader_set(my_shader);
shader_set_uniform_f(u_color1, 1, 1, 1);
shader_set_uniform_f(u_color2, 1, 0, 0);
//draw stuff
shader_reset();

There are actually a few uniform functions in GML:

  • shader_get_uniform()
  • shader_set_uniform_f()
  • shader_set_uniform_f_array()
  • shader_set_uniform_i()
  • shader_set_uniform_i_array()
  • shader_set_uniform_matrix()
  • shader_set_uniform_matrix_array()

Conclusion

That’s about all I know about shaders, for now.

As I get more familiar with using shaders, I’ll update this with more complicated examples, such as (possibly):

  • How to use multiple shaders on the same drawing (e.g. chaining the results of one shader's transformation of some drawing into the input for a series of shaders).
  • Other stuff…

How to write shaders in GLSL, if I ever do it, will be a topic for its own article (or series of articles).

YoYoGames: GameMaker Studio 1.4 sunset announced for 2018

Today, YoYoGames announced that GameMaker Studio 1.4 support will sunset in 2018.

Over the next 14 months, until 31 July 2018, we will continue to mend major issues and support platform updates for Studio 1.4. After the 14-month period, Studio 1.4 will still function normally however, we cannot guarantee it will remain compatible with every platform’s future updates.

While this day has been expected since the release of the 2.0 beta back in November 2016, it has come quite a bit sooner than expected. YoYoGames' support for the 8.x version of GameMaker continued for years after GameMaker Studio 1.0 was released, only ceasing very recently.

It's not entirely bad for the company to focus on development and support for the latest version of its product, but it's unfortunate for users who, for whatever reason, are not able to move to the new version so quickly. This also makes it all the more important that GMS2 be as good as it can be. Hopefully, focusing exclusively on the latest version will enable YYG to develop it even faster, adding new features, fixing bugs, and addressing UX issues with the brand new IDE.

Writing files in GameMaker Studio

The levels in TARJECTORIES are procedurally generated, using a lot of calls to the Random Number Generator. The output of the RNG is repeatable for a given seed, so the only variability from one run to another is the number of calls made to the RNG. Since the number of calls to the RNG in TARJECTORIES happens to be deterministic, the game generates the same random, procedurally generated levels each time it is played. Fortunately, it turned out that those randomly generated levels aren't too bad to play.

The RNG sequence will vary if you die and play a new game without quitting/relaunching the game, but if you do quit/relaunch it will be the same sequence of levels every time.
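To illustrate what I mean by repeatable: if you pin the RNG to a known seed, the same sequence of calls produces the same values on every run. This is just a sketch; the seed value is arbitrary.

//setting the seed makes the sequence of RNG results deterministic
random_set_seed(12345);
show_debug_message(irandom(99)); //same value every run
show_debug_message(irandom(99)); //same value every run
//calling randomize() instead would pick an unpredictable seed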

I became worried that the game was too sensitive to the RNG, and wanted to capture the level data so that I could re-generate any of the levels in the procedurally generated sequence if I wanted to, even if the game ends up getting modified in such a way that the number of RNG calls changes, which would otherwise screw up the level generation.

To that end, I just wrote up a little script that writes the level data to a .json file, and played through the game, going through two cycles of the 18 procedurally generated levels, so I could get the original sequence plus the level sequence from GE Mode. This was pretty straightforward, except the part about where GameMaker writes the file.
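The writing itself only takes a few calls, along these lines (a sketch; "levels.json" and level_map are hypothetical names, with level_map standing in for a ds_map of level parameters):

var fileID = file_text_open_write(working_directory + "levels.json");
file_text_write_string(fileID, json_encode(level_map)); //json_encode() serializes a ds_map to a JSON string
file_text_close(fileID);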

It’s complicated and not easy to understand. GameMaker runs sandboxed, so you can’t just write files anywhere you want. There’s documentation that explains this in the manual, but even after reading it several times, it’s still not entirely clear where you should look to find a file.

My intent in this post isn’t to write a tutorial that guides you to a complete understanding of where GMS writes its files, but I will explain how I found the location.

GML has a built-in variable, working_directory, which is a path on the filesystem where the game.exe is running. The confusing thing about working_directory is that it can return two different values. In certain contexts, the working directory is literally the location where the game.exe is located. You can read files from here, so if you have any “included files” in your project that you need to read from, this is where they’ll be. The thing is, this location is read-only (to GameMaker) so you can’t have your application write any data here, not even to update existing included files. You have a separate location where GameMaker (and all well-behaved Windows applications) can write files, in %LocalAppData%.

This is where it gets weird for me, and I still don’t entirely understand what’s going on.

I run the project out of GMS, and to help me find the file I’m writing, I draw the working_directory to the screen, and I get this:

C:\Users\XXXXX\AppData\Local\gm_ttt_20698\gm_ttt_1803\

So I go there, and I do not see the file that I wrote!

Eventually, I found the file I was looking for in C:\Users\XXXXX\AppData\Local\[project_name].

I was able to find it because I went to %LocalAppData%, sorted by date last modified, and looked for the most recently updated directory until I found it.

What's the first directory, then? That's where GMS builds your project when you run it through the IDE! GMS also uses %LocalAppData% to store the temporary build that it creates when you click Run. This is the read-only working_directory location, where the game.exe resides. The game.exe then creates its own directory in %LocalAppData%, named whatever you named your project (e.g., [project_name]), and this is the directory that GMS will allow itself to write to. And of course you can also read files from here, too.

The confusing bit is that when you call working_directory in a context like draw_text(x, y, working_directory), it returns the read-only path, but when you call it in a context like fileID = file_text_open_write(working_directory + "file.txt"); file_text_write_string(fileID, "text");, it returns the writeable path.

The behavior of working_directory is what's confusing. It would be a good idea, I think, if YYG were to create a GML variable called working_directory_writeable, which would always return the writeable path, along with a companion variable called working_directory_readonly. They could leave the behavior of working_directory as it is, to avoid breaking anything already written: it can return the read-only directory or the writeable directory, just as it does currently. But if you needed to be sure you had the path to the writeable directory, you could use working_directory_writeable.

So, moral of the story: if your GameMaker Studio project wrote a file to working_directory, you will find it in C:\Users\XXXXX\AppData\Local\[project_name] — not the location given by working_directory if you draw it to the screen.

iMprOVE_WRAP 2.1 released

iMprOVE_WRAP 2.1 has been released. Get it at GameMaker Marketplace or itch.io.

Full Documentation.

Release Notes:

1.0 Initial release
1.0.1 Updated iw_draw_self_wrap() to use image_blend rather than c_white for the color argument.
2.0.0 Added new functions:

  • iw_draw_sprite_wrap(): an iMprOVE_WRAP version of draw_sprite()
  • iw_draw_sprite_ext_wrap(): an iMprOVE_WRAP version of draw_sprite_ext()

Improvements:

  • Boundary drawing now occurs at wrap corners as well.
  • Phantom collision checking also occurs at wrap corners.
  • iw_collision_wrap() and iw_collision_wrap_map() functions now incorporate do_wrap_h and do_wrap_v arguments, and only perform collision checks where they are needed. They still return a value for all locations, but where no check is needed, they return noone.
2.0.1 Improvements:

  • iMprOVE_WRAP demo resources have been placed in folders to keep them tidy when importing the asset into a project.
  • oIMprOVE_WRAP_demo sprite has been updated to allow for more precise positioning. Sprite is semi-transparent, with a yellow pixel at the origin.
  • oIMprOVE_WRAP_demo object now draws guide lines indicating the height and width of the wrap range. This is useful in confirming that clone drawing and wrapping are occurring where they should.
  • iMprOVE_WRAP demo dashboard text has been updated to be a bit more clear
2.1 New functions:

  • iw_distance_to_object(): returns the shortest distance to the target object from the wrapping object, taking into account all directions available.
  • iw_distance_to_point(): returns the shortest distance to the target point from the wrapping object, taking into account all directions available.

New demo room for the iw_distance_to_object() and iw_distance_to_point() functions

 

GameMaker Studio 2 nearing release

In the last couple of days, YoYoGames have released some teasers that seem to signal the imminent release of GameMaker: Studio 2.0. This long-awaited release will overhaul the GM:S GUI, which YoYo have been rewriting in modern C++, and usher in a new era for GameMaker. Beyond that, little is known, as YoYo have been pretty secretive about their plans for the future of GameMaker since being acquired by PlayTech in 2015.

My greatest hope is that GM:S2 will have builds for Mac OS X and Linux. Out of all the software I use today, GameMaker is the last product that runs only on Windows, and I am eager to move to Linux full-time.

It remains to be seen what the release will bring.

Recent purchasers of GameMaker who picked it up through the Humble Bundle have been speculating about what GM:S2 will cost. Obviously, a major release isn't going to be free. It's typical practice for software companies to sell upgrades to existing users at a substantial discount, so I'm expecting no less.

If YYG do extend discounted upgrade pricing to Humble buyers, most of whom paid around $15, they’ll still be getting an incredible value.

YoYoGames: “No roadmap for GM:S. Our Hands Are Tied!”

For a long time, YoYoGames used to publish a roadmap, showing their plan for the future of GameMaker: Studio. Interested parties could look and see what new features were in the works.

Since PlayTech took them over, they’ve taken this information offline.

In a recent Forum conversation, YoYoGames employees Shaun Spalding and Mike Dailly explained that while they wish they could communicate the future of the product, their hands are tied, and when they can talk about things like upcoming release dates and new features, they will.

This is very disappointing to serious GameMaker Studio users. A roadmap is an important document for developers. Software development is all about maintainability. In order to write software that is maintainable, it’s important to know how the tools you are using will be changing over time. Knowing the future plans of the tools can help developers avoid wasting time using features that will be deprecated and removed in the future, or avoid wasting time writing their own implementation of a feature that is planned in an upcoming version. A roadmap also prevents the repeated asking of the same questions, “when is [X] coming out?” or “I suggest you implement [already planned feature].” A roadmap is part of the conversation that happens between a software developer and the users, and not having one harms both the company and its customers.

Most software engineering projects intended to be consumed by other developers have a roadmap. Other game engine developers such as Unity3D and Godot Engine have public roadmaps.

It is my hope that PlayTech will change their policies surrounding information about their products, and allow their employees to engage in open conversation about them. In the meantime, concerned GameMaker users should speak out and make their voices heard.
