On BitCoin

Like a lot of people, I started hearing a lot about BitCoin recently. I didn’t pay too much attention at first, but after I kept hearing it mentioned, I got interested and decided to look into it.

I became intrigued. BitCoin is an ambitious project to rethink currency and provide a decentralized means of exchange.

Sadly (or perhaps luckily, if widespread establishment might have been disastrous), before it could get much more established, a high-profile digital theft of $500,000 in equivalent value seems to have crashed the currency’s valuation.

I say sadly because I always like things that are new and experimental and attempt to re-think the status quo.

Digital currency transactions presently depend upon banks and other institutions, which impose a lot of costs and a lot of control on the way money moves between two parties. This intermediary has to be trusted, and it would be nice if you didn’t have to trust an intermediary in order to conduct a transaction between two private parties. In the real world, this is not impossible; a cash transaction for a good or service is extremely commonplace. The physical realities of common exchanges tend to enforce honesty and punish criminality, but there are always risks. Still, the risks, the mitigation methods, and the rewards are all tangible and easy for people to understand, and this enables people to engage with each other and conduct business.

After validation of identity, the two biggest problems with transactions conducted in virtual spaces are privacy and anonymity. These are very old-school values that are still easily realized through physical media such as cash, gold, or some barter commodity. Once quantities change hands and the parties part company, there’s nothing to trace what took place unless someone documents the transaction. Laws require this to be done, but informally people ignore them all the time, and for minor, informal transactions it’s almost unthinkable to do the sort of bookkeeping and reporting that larger-scale transactions and legitimate, long-term businesses require.

Politically speaking, if what you do with your money can be observed, monitored, or traced by third parties, it threatens your ability to conduct business freely, particularly the free exercise of your political power as expressed through your financial means. Thus, there’s a great deal of interest in any way to conduct transactions anonymously and privately.

Of course, these shields are highly desirable to the criminal elements of society as well, which makes BitCoin inherently controversial. But as is often said, one man’s terrorist is another man’s freedom fighter. Obviously, it’s a totalitarian’s wet dream to have a completely visible economy in which all holdings and transactions are visible, traceable, and verifiable. To that end, a technology like BitCoin that attempts to do without the sort of centralization that enables totalitarianism would appear to be a good thing. BitCoin is good where the government is corrupt or oppressive, and no worse than cash where crime is a problem.

Like any currency, and especially any new currency, it is highly dependent upon the faith of the people who use it that the currency has value.

As a purely digital currency, this is a particularly challenging proposition. The two biggest threats to users of a decentralized currency are counterfeiting and theft. BitCoin employs some sophisticated cryptography to address counterfeiting, since there is no centralized authority to determine whether a note is legitimate. Users are more or less on their own to prevent theft (centralized banks are insured and otherwise mitigate this risk for you as a part of the cost of the services they provide; the options for a private individual to safeguard their belongings exist, but are of a decidedly different character).
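The anti-counterfeiting idea is worth a sketch. This is not BitCoin’s actual protocol (which layers public-key signatures and proof-of-work on top of this, among other things), just a minimal toy illustration of one core trick it relies on: chaining cryptographic hashes so that any edit to the transaction history is detectable without appealing to a central authority.

```python
import hashlib
import json

def record_hash(tx, prev_hash):
    """Deterministic SHA-256 over a transaction plus its predecessor's hash."""
    payload = json.dumps({"tx": tx, "prev": prev_hash}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(transactions):
    """Link each record to the hash of the record before it."""
    chain, prev = [], "0" * 64  # all-zero placeholder for the first link
    for tx in transactions:
        h = record_hash(tx, prev)
        chain.append({"tx": tx, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every link; altering any past record breaks all later hashes."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["tx"], prev):
            return False
        prev = rec["hash"]
    return True

ledger = build_chain(["Alice pays Bob 5 BTC", "Bob pays Carol 2 BTC"])
assert verify_chain(ledger)              # untampered history checks out
ledger[0]["tx"] = "Alice pays Mallory 500 BTC"
assert not verify_chain(ledger)          # the forgery is immediately detectable
```

Anyone holding a copy of the chain can verify it independently, which is what lets the "is this note legitimate?" question be answered without a central bank.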

The victim in the high-profile loss incident that led to the collapse of confidence in BitCoin lost the money to theft. If what I heard about the case is accurate, the theft was not a sophisticated cyber-attack, but depended upon physical access to the file that held the rightful owner’s keys, which in essence proved ownership of the particular BitCoins in question. The file essentially acts like a bearer bond, in that it is not tied in any way to an individual’s identity (as is required in order to preserve anonymity).

In other words, the situation is no different from someone breaking into the person’s house and stealing $500,000 in cash from their mattress.

So, then, it would seem that BitCoin’s infrastructure might still be essentially sound from a security standpoint. Then again, it might not be. But the physical theft of a bearer bond in no way invalidates the concept or value of bearer bonds.

Whether BitCoin’s infrastructure ultimately is secure or not depends on a lot of very sophisticated math and computer programming. Conventional wisdom seems to be that any security can be defeated. Perfect, invulnerable security is a childish power-trip fantasy suitable for comic book fiction. In the real world, security can never be perfect, but it does need to be “good enough”.

The question, then, is: can a decentralized digital currency ever have “good enough” security? The general consensus in the wake of the collapse of BitCoin’s value seems to be “we doubt it.” And this is perhaps correct, given that a technology like BitCoin depends to a great degree on the trust and faith of its users, and is thus vulnerable to crises of confidence as well as actual breaches of security.

In a world where there is no possibility of perfect security, but where “good enough” security is attainable for most clients, certain entities may nevertheless have too many enemies, or be simply too attractive a target not to draw the interest of so-called Advanced Persistent Threats, which, given enough time, will eventually breach them. Perhaps the ultimate mitigation for this sort of thing is not technological, but legal or even military in nature.

So what could we do for BitCoin? The solution, it seems to me, is to recognize the risk and use BitCoin the way one uses cash. Just as one would not keep $500,000 stuffed in a mattress, one should not hold such sums of BitCoin in such an insecure manner; the holding should be converted to a more securable currency. BitCoin can still be useful for what it is uniquely valuable at: secure, private, anonymous digital transactions.

The way to do it is this:

Hold your money in normal bank accounts and other traditional holdings, like you normally would. When you want to conduct a secure, private, anonymous transaction, convert some of your holdings into BitCoin. Then conduct the transaction as quickly as you are able. The recipient of the BitCoins should then convert them to a traditional holding that they are comfortable with.

The problem with this is that the conversions are traceable, and the proximity of the conversions can be used to determine who did business with whom. But there are probably methods of dealing with that, such as breaking up the conversions, spreading them across many different types of holdings, and spreading them over enough time that traceability becomes difficult or impossible at the point of conversion.
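Purely as an illustration of the breaking-up idea (the function name and parameters here are my own invention, not any real BitCoin tooling), a large conversion could be split into uneven tranches that are then spread across holdings and over time:

```python
import random

def split_into_tranches(total_cents, n, jitter=0.5):
    """Split an amount into n uneven pieces. Amounts are integer cents,
    so the pieces always sum back to the exact total."""
    weights = [1 + random.uniform(-jitter, jitter) for _ in range(n)]
    scale = total_cents / sum(weights)
    tranches = [int(w * scale) for w in weights]
    tranches[-1] += total_cents - sum(tranches)  # absorb the rounding remainder
    return tranches

random.seed(1)
parts = split_into_tranches(50_000_000, 8)  # $500,000, in cents
assert sum(parts) == 50_000_000 and len(parts) == 8
```

Whether uneven splitting actually defeats traffic analysis of the conversion points is a much harder question; this only shows the mechanical part.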

Follow the Leader: Firefox 5 and the State of the Browser Wars

Mozilla released Firefox 5 yesterday. I upgraded on one of my systems already, but haven’t done so on all of my systems due to some Extensions that are lagging behind in compatibility. These days I mostly use Chrome as my default browser, so I’m less apt to notice what might have changed between FF4 and FF5, and looking at the change list it doesn’t look like a huge release, which is another way of saying that Firefox is mature and can be expected to undergo minor refinements rather than major upheavals — this should be a good thing. FF4 seemed like a pretty good quality release. I’ve been a Firefox user since the early 0.x releases, and have been more or less satisfied with it, whatever its present state was at the time, since about 0.9.3. And before that I used the full Mozilla suite, IE4-6 for a few dark years when it actually was the best browser available on Windows, and before that Netscape 4. I actively shunned and ridiculed WebTV ;-). And I’d been a Netscape user since 1.1N came out in ’94. So, yeah. I knows my web browsers.

These are pretty exciting times for the WWW. HTML5 and CSS3 continue slowly becoming viable for production use, and have enabled new possibilities for web developers.

Browsers have matured and become rather good, and between Mozilla, Chrome, Opera, Safari, and IE, it appears that there’s actually a healthy amount of competition going on to produce the best web browser, and pretty much all of the available choices are at least decent.

It seems like a good time to survey and assess the “state of the browser”. So I did that. This is going to be more off the cuff than diligently researched, but here are a few thoughts:

After some reflection, I’ve concluded that we seem to have pretty good quality in all major browsers, but perhaps less competition than the number of players in the market might seem to indicate.

Hmm, “Pretty good quality”: What do I mean by this, exactly? It’s hard to say what you expect from a web browser, and a few times we’ve seen innovations that have redefined good enough, but at the moment I feel that browsers are mature and good enough, for the most part: They’re fast, featureful, stable. Chrome and Firefox at least both have robust extensibility, with ecosystems of developers supporting some really clever (and useful) stuff that in large part I couldn’t imagine using the modern WWW without.

Security is a major area where things could still be better, but the challenges there are difficult to wrap one’s head around. It seems that for the foreseeable future, being smart, savvy, and paranoid are necessary to have a reasonable degree of security when it comes to using a web browser — and even then it’s far from guaranteed.

There has been some progress in terms of detecting cross site scripting attacks, phishing sites, improperly signed certificates, locking scripts, and the like. Still, it seems wrong to expect a web browser to ever be “secure”, any more than it would make sense to expect any inanimate object to protect you. It’s a tool, and you use it, and how you use it will determine what sort of risks you expose yourself to. The tool can be designed in such a way as to reduce certain types of risks, but the problem domain is too broad and open to ever expect anyone but a qualified expert to have a hope of having anything resembling a complete understanding of the threat picture.

That’s a can of worms for another blog post, not something I can really tackle today. Let’s accept for now the thesis that browser quality is “decent” or even “pretty good”. The WWW is almost 20 years old, so anything less would be surprising.

In terms of competition, we have a bit less of it than the number of players makes it seem.

Microsoft only develops IE for Windows now, making it a non-competitor on all other platforms. Yet, because its installed userbase is so large, IE is still influential on the design of web sites (primarily in that IE forces web developers to test for older versions of IE’s quirks and bugs). By now, we’re very nearly done with this; one would hope the long tail of IE6 is flattening as thin as it can until corporations can finally migrate from Windows XP. Even MS is solidly on board with complying with W3C recommendations for how web content gets rendered. It seems that their marketshare is held almost exclusively due to IE being the default browser for the dominant OS, particularly in corporate environments where the desktop is locked down and the user has no choice, or among the hordes of personal computer owners who treat the machine like an appliance that they don’t understand, maintain, or upgrade. I suspect that the majority of IE users use it because they have no choice, or because they don’t understand their computer well enough or have the curiosity to learn how to install software, not because there are people out there who genuinely love IE and prefer it over other browsers. I’m willing to be wrong on this, so if you’re out there using IE and love it, and prefer it over other browsers, be sure to drop me a comment. I’d love to hear from you.

Apple is in much the same position with Safari on Mac OS X as MS is with IE on Windows. Apple does make Safari for Windows, but other than web developers who want to test with it, I know of no one who uses it. Safari, like IE, is essentially a non-competitor on every platform but its own.

This leaves us with Opera, Mozilla, and Chrome.

Opera has been free for years now, though closed-source, and has great quality, yet adoption still is very low, to the point where its userbase is basically negligible. There are proud Opera fanboys out there, and probably will be as long as Opera sticks around. But they don’t seem like they’ll ever be a major player, even as the major players always seem to rip off features that they pioneered. They do have some inroads on embedded and mobile platforms (I use Opera on my Nokia smartphone rather than the built-in browser, and on my Wii). But I really have to wonder why Opera still exists at this point. It’s mysterious that they haven’t folded.

The Mozilla Foundation is so dependent on funding from Google that Firefox vs. Chrome might as well be Google vs. Google. One wonders how long that’s likely to continue. I guess as long as Google wants to erode the entrenched IE marketshare and appear not to be a drop-in replacement for monopoly, it will continue to support Mozilla and, in turn, Firefox. Mozilla does do more than just Firefox, though, so that’s something to keep in mind. A financially healthy, vibrant Mozilla is good for the market as a whole.

Moreover, both Chrome and Firefox are open source projects. This makes either project more or less freely able to borrow not just ideas, but (potentially, from a legal standpoint at least) actual source code, from each other.

It’s a bit difficult to describe to a proverbial four-year-old how Mozilla and Chrome are competing with each other. If anything, they compete for funding and developer resources (particularly from Google). Outwardly, Firefox appears to have lost the leadership position within the market; despite still having the larger user base, it is no longer driving the market to innovate. Firefox has largely ceded that to Google (and even when Mozilla was given credit for it, much of what they “innovated” was already present in Opera, merely popularized and re-implemented as open source by Mozilla). And with each release since Chrome was launched, Firefox continues to converge in its design to look and act more and more like Chrome.

It’s difficult to say how competing browsers ought to differentiate themselves from each other, anyway. The open standards that the WWW is built upon more or less demand that all browsers not differentiate themselves from each other too much, lest someone accuse them of attempting to hijack standards or create a proprietary Internet. Beyond that, market forces pretty much dictate that if you keep your differentiating feature to yourself, no web developers will make use of it because only the users of your browser will be able to make use of those features, leaving out the vast majority of internet users as a whole.

Accelerating Innovation

After releasing Firefox 4, Mozilla changed its development process to accommodate the accelerated type of release schedule that quickly led to Google becoming recognized as the driver and innovator in the browser market. Firefox 5 is the first such release under the new process.

This change has met with a certain amount of controversy. I’ve read a lot of opinion on this on various forums frequented by geeks who care about these things.

Cynical geeks think that it’s marketing-driven, with the version number being used to connote quality or maturity, so that commercials can say “our version number is higher than the competitor’s, therefore our product must be that much better”. Cynics posited that since Chrome’s initial release put it so many versions behind IE/FF/Opera, Google was in a position of needing to “make up excuses” to rev the major version number until it “caught up” with the big boys.

While this is something that we have seen in software at times, I don’t think that’s what’s going on this time. We’re not seeing competitors skipping version numbers (like Netscape Navigator skipping 5 in order to achieve “version parity” with IE6) or marketing-driven changes to the way a product promotes its version (a la Windows 3.1 -> 95 -> 98 -> 2000 -> XP -> Vista -> 7).

Some geeks, I’ll call them versioning “purists,” believe that version numbers should “have integrity”, “be meaningful”, or “stand for something”. These are the kind of geeks who like the software projects where the major number stays at 0 for a decade, even though the application has been in widespread use and in a fairly mature state since 0.3 and has a double-digit minor number. The major release number denotes some state of maturity, and has criteria which must be satisfied in order for that number to go up, and if it ever should go up for the wrong reasons, it’s an unmitigated disaster, a triumph of marketing over engineering, or a symptom that the developers don’t know what they’re doing since they “don’t understand proper versioning”.

From this camp, we have the argument that in order to rev the major number so frequently, necessarily this must mean that the developers are delivering less with each rev, which thus necessarily dilutes the “meaningfulness” of the major version number, or somehow conveys misleading information. So much less is delivered with each release that the major number no longer conveys what they believe it ought to (typically, major code base architecture, or backward compatibility boundary, or something of that order). These people have a point, if the major number indeed is used to signify such things. However, they would be completely happy with the present state of affairs if only there were a major number ahead of the number that’s changing so frequently. In fact, you’ll hear them make snarky comments that “Firefox 5 is really 4.1”, and so on. Just pretend there’s an imaginary leading super-major version number, which never changes, guys. It’ll be OK.

Firefox’s accelerated dev cycle is in direct response to Chrome’s. Chrome’s rapid pace had nothing to do with achieving version parity. In fact, when Chrome launched in pre-1.0 beta, in terms of technology at least, it was actually ahead of the field in many ways. Beyond that, Chrome hardly advertises its version number at all. It updates itself about as silently as it possibly can without actually being deceptive. And Google’s marketing of Chrome doesn’t emphasize the version number, either. It’s the Chrome brand, not the version. Moreover, they don’t need to emphasize the version, because upgrading isn’t really a choice the user has to make in order to keep up to date.

Google’s development process has emphasized frequent, less disruptive change over less frequent, more disruptive. It’s a very smart approach, and it smells of Agile. Users benefit because they get better code sooner. Developers benefit because they get feedback on the product they released sooner, meaning they can fix problems and make improvements sooner.

The biggest problem that Mozilla users will have with this is that Extension developers are going to have to adjust to the rapid pace. Firefox has a built-in check which tests an Extension to see if it is designed to work with the version of Firefox that is loading it. This is a simple, dumb version-number check, nothing more. So when the version number bumps and the underlying architecture hasn’t changed in a way that impacts the working of the Extension, the Extension is disabled anyway because its declared version range is disqualified, not because of a genuine technical incompatibility. Often the developer just ups the version number that the check will allow, and that’s all that is needed. A more robust checking system that actually flags technical incompatibilities might help alleviate this tedium. But if and when the underlying architecture does change, Extension developers will have to become accustomed to responding quickly, or run the risk of becoming irrelevant due to obsolescence. Either that, or Firefox users will resist upgrading until their favorite Extensions are supported. Neither situation is good for Mozilla.
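To see why this is so tedious, here is roughly what a dumb version-range check amounts to. This is a simplified sketch, not Mozilla’s actual code; the real check reads minVersion/maxVersion declarations from the Extension’s manifest and uses a richer version format (with wildcards like "5.0.*").

```python
def parse_version(v):
    """Very naive: '4.0.1' -> (4, 0, 1). Firefox's real format is richer."""
    return tuple(int(part) for part in v.split("."))

def is_compatible(browser_version, min_version, max_version):
    """Disable the extension whenever the browser falls outside the declared
    range, regardless of whether anything technical actually changed."""
    return (parse_version(min_version)
            <= parse_version(browser_version)
            <= parse_version(max_version))

# An extension declared for Firefox 3.6 through 4.0.1:
assert is_compatible("4.0.1", "3.6", "4.0.1")
# ...gets disabled on Firefox 5.0 until its developer bumps the declared max:
assert not is_compatible("5.0", "3.6", "4.0.1")
```

Nothing in that check ever inspects what APIs the Extension actually uses, which is exactly why a harmless version bump knocks working Extensions out.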

Somehow, Chrome doesn’t seem to have this problem. Chrome has a large ecology of Extensions, comparable to that of Firefox. Indeed, many popular Firefox Extensions are ported to work with Chrome. Yet I can’t recall ever getting warned or alerted that any of my Chrome extensions are no longer compatible because Chrome updated itself. It seems like another win for Chrome, and more that Firefox could learn from them.

I have to give a lot of credit to Google for spurring the innovation that has fueled browser development in the last couple of years. The pace of innovation that we saw when Mozilla and Opera were the leaders just wasn’t as fast, or as influential. With the introduction of Chrome, and the rapid release schedule that Google has successfully kept up, the entire market seems to have been invigorated. Mozilla has had to change its practices in order to keep up, both in terms of speeding up its release cycle and in adopting some of the features that made Chrome a leader and innovator, such as per-tab process isolation and drastically improved JavaScript performance. Actually, it feels to me that most of the recent innovation in web browsers has been due to the leadership of Chrome, with everyone else following the leader rather than coming up with their own innovations.

In order to be truly competitive, the market needs more than just the absence of monopoly. A market with one innovator and many also-rans isn’t as robustly healthy as a market with multiple innovators. So, really, the amount of competition isn’t so great, and yet we see that the pace of innovation seems to be picking up. Also, it’s strange to be calling this a market, since no one at this point is actually selling anything. I’d really like to see some new, fresh ideas coming out of Mozilla, Opera, and even Microsoft and Apple. As long as Google keeps having great ideas coupled with great execution, and openness, perhaps such a robust market for browsers is not essential, but it would still be great to see.

Intellectual property value of social networking referrals

One thing I have noticed over my years of using the social web (fb, twitter, livejournal) is that people instinctively place a value on linking to things in a way that I find odd. There’s a type of “intellectual property” that people conventionally recognize as a matter of natural course. I don’t know how else to describe it than that.

In real value terms this sort of intellectual property is very low-value, but in social etiquette terms, the value is more substantial. The phenomenon is one of credit, but it’s not credit for authorship; rather, it is credit for finding and sharing. If you find something cool and blog about it, and you’re the first one in your little social group to do so, you get some kind of credit for being on top of things, being cool enough to know where to look, lucky enough to be in the right place at the right time, or whatever. It’s not much more than that, but somehow if you post the same link and are not the first in your social group to do so, and don’t acknowledge the coolness of the person who you saw posted it first, it can ruffle feathers, as though people think you’re trying to be the cool, original one and are stealing other people’s “cool points” by not acknowledging where you got your cool link from.

It’s funny though since posting a link is an act of evaluation (“I judge this content to be worthy of your time, so I’m sharing it.”) rather than an act of creativity (if you want to be really cool, go author some original content and see how many people you can get to link to that.)

What I take from this is two things:

  1. Having good enough taste in something to make a recommendation which one of your friends will pass along to others is an important, valuable thing in itself. Having this sort of taste implies that you are cool.
  2. Getting there first is important, OR perhaps acknowledging who was cool enough to turn you on to something that you found cool is important.

One of the things about Facebook that I like a lot is that they get this, and implement it in such a way that it basically works automatically. You can click “Share” and it just handles crediting who you got it from in a behind the scenes sort of way that forces you to follow the etiquette convention automatically, thereby avoiding being a leech or douchebag. On the other hand, in Livejournal, this is a somewhat useful way to discern who among your friends is a douchebag, since if they don’t think to credit someone for showing them something that you’ve already seen before, you know they’re not with it, or at least aren’t following their friends-list all that closely.


Another interesting thing about this is that, depending, sometimes people will just post a link to something without any comment, while other times people will post and add their thoughts to it as an annotation. Sometimes no comment is needed, or is implied by the context of how you know your Friend and what they are about and why they would be posting that link. Other times, people will post their thoughts and sometimes write something reasonably lengthy and thoughtful on the subject that they are linking to. This tends to happen much more on Livejournal than on Facebook or Twitter, which are geared toward more structured, but forcibly brief, content. I think that Livejournal tends to encourage more expressive posts because people tend to use pseudonyms and write with somewhat more anonymity than they have on Facebook, where most people use their real name. I do like the way that Facebook’s conversations of comments seem to flow very nicely once a topic hits someone’s wall. It’s also interesting to see how different groups of friends will come to the same original linked content and have different or similar conversations about it.

I think it would be fascinating to be able to visualize through some sort of graphic how sub-circles of an individual’s friends might converge through common interest in some topic. In my own Facebook experience, it has been interesting to see people I know from elementary and high school mixing with people I knew from college and afterward, and from various workplaces, and so on. I think it would be really interesting to see this sort of interaction on a very large scale, sort of a Zuckerberg’s-eye view of what’s going on in various social circles that occupy Facebook. I can mentally picture colored bubbles occupying various regions of space, and mixing at the edges, colors blending like wet paint.

I also think it’s interesting how the constraints and style of the different social sites shape behavior and the characteristics of the groups who use them. Facebook users in my experience have tended to be more sedate, drier, and thoughtful, though not always. Substantial numbers of my friends seem to be comfortable goofing and making fools of themselves, or being outspoken to the point that they run the risk of offending people of a differing political polarity. Twitter seems to be a land of important headlines mixed with one-liner witticisms and the occasional bit of Zen. Livejournal seems to be more private, insular, and diary-ish. I almost said “diaretic” but that sounds a lot like another word which, actually, might be even more appropriate, if disgusting. Discussting? Heh.

OK, I’m clearly blogging like I’ve been up for too long, and I have. But I hope to revisit and put more thought into these matters and see if something materializes out of it that is worthy of linking to and discussing. This could end up being someone’s Social Media studies PhD thesis :P

Packt Press announces Game Maker 8 Cookbook

http://www.packtpub.com/game-maker-8-cookbook/book

I am contributing to this book as a technical reviewer, and it’s my first time getting credit in a publication, so I’m kind of excited and happy about that.

[Update 1/11/2013: After several months of delays, I have heard from the publisher that this book is about to be canceled. However, I am now working on reviewing a book on GameMaker: Studio from the same publisher, which looks like it’ll be a much better book.]

Unearthed Treasures of Childhood

I’m cleaning out my home office today… well, let’s say this week.

For most of my life, I have been a packrat, and a fairly disorganized one at that. I have always been very sentimental about things, and would keep reminders of events, and basically never threw out anything that I had worked on when I was a kid. Most of that stuff got kept for a really long time, far longer than it was useful, if it even ever was useful.

But after a certain point, keeping everything gets to be a burden. A few years ago, I started to recognize that this was a problem and that I needed to do something about it. I have a very strong instinct to preserve and archive things that are important to me, either because they express some idea that I find compelling, or because they help to define my personality or could be used to explain myself in some way that would be persuasive or understandable to others.

Anyhow, in my office I have piles and piles of old papers, drawings and things I wrote for fun in childhood, school assignments, newspaper clippings, comic strips, receipts, artifacts from memorable experiences, and outright junk. I’m working on going through it all and purging as much of it as I can stand to part with. Mostly it’s not hard, it’s just a lot of sifting and it’s time consuming.

It’s a bit embarrassing for me to admit this. If you’ve seen Hoarders, that could be me. I say “could be” because that’s easier than admitting that it is me. I’m definitely not that bad, and definitely not as bad as I used to be, but it’s taken me some time. I just had a realization and asked myself what I was doing, and from there it’s just been a matter of prioritizing when I can go through the accumulation of decades and get rid of things. Most of the time I am busy or obligated and haven’t gotten to a lot of it. But I have free time this week so it’s happening.

Anyhow, going through my piles of junk, I came across my original concept drawing for the Boobie Teeth game! I thought that it had been lost to the ages. But here it is. I’m blown away. Finding it here is surreal. I have no idea how it came here from my mom’s attic, where I know it had stayed for many years. But I must have retrieved it at some point, put it aside, and forgotten completely about it.

It’s 3 pages of wide-format fanfold line printer terminal paper. Someone in my family would bring lots of it home and give it to us for scrap paper, which we could draw on the unused side of. The printing on the used side includes a 1982 date, which helps establish the timing; the drawing was likely done some time around 1982-83. This means that I must have been 7 or 8 when I drew up the concept, not six.

The game concept is also quite different from how I reconstructed it from memory: Boobie Teeth is a space fish. Rather than eating other fish, Boobie Teeth primarily feeds on these alien creatures called Boobies, which resemble the ghostmonsters from Pac Man, only with legs. Actually, I recognize them as being “stolen” from a game I liked at the time, Fast Eddie for Atari 2600:

 

Fast Eddie for Atari 2600, the inspiration for “boobies” in my original concept.

My game concept also featured some birds, called Boobie Helpers, who tried to help the Boobies avoid becoming Boobie Teeth’s prey. Also, there were two other types of fish in the game: An “older” species of the same genus as Boobie Teeth, which only eats plants, but which Boobie Teeth can eat for bonus points, and a “new” species which preys on Boobie Teeth. One of the more interesting aspects of the original concept was that the Boobie Teeth species was facing extinction. Only 12 fish remained. And when you died in the game, your skeleton would sink down into the sea bed, and become fossilized in the sediment.

I’m not sure how much of these unearthed revelations will end up making it into the finished game. They don’t entirely fit with the version of the game that I reconstructed from memory. I like the death->fossil concept.

As soon as I can scan the drawing, I’ll be posting it here for you to see. For now, here’s a photo I took with my digital camera:

 

Original concept drawing for Boobie Teeth

Original concept description:

You are one of the 12 remaining space fish called Booby Teeth. These little space creatures called boobies are trying to eat you. There are booby helpers that look like birds. You have to eat the boobies from behind. There are 11 other fish that are in reserve. A new specie of Booby Teeth called the Booby Crusher. It is much bigger and it is trying to eat you because it doesn’t know any better. He is 15 mph faster than you. There is one last kind of booby teeth. It only eats plant life. You are trying to eat it like the Booby Crusher is trying to eat you. It swimsflies across every time you have played for 30 seconds. It is 1000 points the first time it flies by and it will stay there until the score on it is 0, then it will disappear. Every time it flies by its score gets lower. The gill on it is the 1 in 1,000. After that it disappears. It is 900, 800, and so on until it gets to 100. The score keeps splitting in half. If you clear the whole board of Boobies, Booby Helpers, and Booby Teeth that eats plant life, you have to find the fossil form. The more times you lose a remaining fish you have to find more fossil forms. When you find it, you have to pull it up with your mouth, then you will get a rougher landscape. The game starts all over again. (Later there are mountains and volcanoes.)

Using the controller: Pushing the joystick up makes the Booby Teeth go up. Pushing the joystick down makes the Booby Teeth go down. Pushing the stick to the right makes the Booby Teeth go to the right. Pushing the stick to the left makes the Booby Teeth go to the left. The fire button makes the Booby Teeth bite.

Scoring:
Common Boobies: 50 pts.
Tallest & second tallest Boobies: 100 pts.
Strongest Booby: 150 pts.
Shortest Booby: 125 pts.
Flying Booby: 250 pts.
Booby Helpers:
Baby: 300 pts.
Mother: 2,000 pts.
Fastest Booby: 350 pts.

Goal

Like just about everyone, I don’t have nearly enough time to do everything I need to do in my life, let alone what I want to do. I need to figure out what to do about that. I have some vague ideas, but nothing fully formed and ready to go. That’s pretty frustrating. Like, taunting yourself that you’re losing at life frustrating.

But I do have a goal: I want to move from being a 9-5 guy into being a 24-7 guy. I don’t want to simply work 40+ hours a week so I can afford to keep living and maybe (ha!) retire someday. I want to take that time I spend working and put it into activities that are meaningful to me and happen to generate income. I’d rather spend all my time doing that than spend most of my productive hours working on someone else’s projects and try to cram my own stuff into whatever’s left over when The Man is done with me.

To summarize, I want to achieve a convergence of work, income, fun, and values. I got a good taste of it this month when I worked on my game project and delivered my talk at Notacon. I didn’t make any money doing it, but had I figured out some way of doing so, I would already have the life I want. As I see it, I’m pretty far from having that life right now. And, unless I’m smart about how I go about it, the more I work for The Man, the further from it I will get.

I don’t need to maximize my income potential; I’d rather not compromise doing what I love in order to make the most income I possibly could. I do need to make enough money to live on. I don’t necessarily need all that much to live, and could deal with making less, and working longer to attain it, as long as I truly love what I’m doing. Before getting too far along in this dream, I have to figure out how to make any money doing what I love, try that out, and get it to work.

What are the things a person needs to do to accomplish this? Why don’t they teach you this in college or high school, when it would be super useful?

Looking for advice here. This blog does have a comment feature.

“How I (FINALLY) Made My First Video Game” Notacon 8

The talk went very well. Several people I didn’t know came up to me afterward and said that it was inspiring, which means a lot to me. I tend to discount compliments from friends, which perhaps I shouldn’t, but I always throw out data that might be subject to bias.

I am posting my presentation slides here, for anyone who would like to download and read my notes. The talk was videotaped (do camcorders still use tape? No, I think we need a new word), and will be available on the Notacon.org archive once they have time to process everything. I’ll post a link when they do; might be a few weeks or months.

How I (FINALLY) Made My First Video Game – Notacon 8 2011-04-15

Edit: The videos that were embedded in the original powerpoint appear in this version of it as single frame placeholders. For the actual videos, I have them up on YouTube:

Development Stages

Builds 0.20 – 0.22 montage

Boobie Teeth 0.22

Boobie Teeth 0.22 is now available for download at the Releases page.

I figured out how to fix the bugs with the Feeder Fish, so they’re back in the game. The problem was rather complicated, and had to do with both Game Maker’s inheritance model AND the weird way that Game Maker can change an instance of one type of object into an object of another type. When Feeder Fish feed on each other, they eventually grow large enough to change into Fish. For reasons I still don’t understand, calling the built-in function instance_change() results in a weird condition where the instance’s variables are no longer accessible to the application.

In my AI targeting code, this could cause a changed instance to no longer exist insofar as the Game Maker engine was concerned, resulting in an unhandled “unknown variable” exception being thrown. To work around the problem, I had to write my own code to handle the instance change: it creates a new Fish, assigns it properties equivalent to the Feeder Fish instance it used to be, and then destroys the original Feeder Fish.

Further complicating matters was the fact that I had initialization happening in the Create event for the Fish object, which normally gives a new fish random values for its properties. In this case, however, the Create event initialization was blowing out the Feeder Fish’s properties which I’d just assigned to the new Fish. So to fix that problem, I had to remove the call to my initialization function from the Fish object, and am now calling it from outside after the Fish instance is created. The Fish Factory uses randomized values, while a matured Feeder Fish passes its original values. This works nicely, and I am happy with the result. The game is more fun with Feeder Fish acting as an attractive food source to the full-grown AI fish in the game, so I’m glad I was able to get this in before the demo at Notacon.
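Since I can’t show Game Maker’s engine internals here, here’s a rough Python model of the two fixes described above — the hand-rolled instance change and the externalized initialization. All names, properties, and values are hypothetical, just to illustrate the structure:

```python
import random

class Fish:
    """Full-grown fish. Note: no randomization in __init__ --
    initialization was moved out of the Create event so a matured
    Feeder Fish's properties don't get blown away on creation."""
    def __init__(self):
        self.size = None
        self.speed = None

def init_fish(fish, size=None, speed=None):
    """Called AFTER creation. The Fish Factory passes nothing
    (randomized values); a matured Feeder Fish passes its own."""
    fish.size = size if size is not None else random.uniform(1.0, 3.0)
    fish.speed = speed if speed is not None else random.uniform(2.0, 5.0)
    return fish

class FeederFish:
    def __init__(self, size=0.5, speed=4.0):
        self.size = size
        self.speed = speed

def mature(feeder, instances):
    """Hand-rolled replacement for instance_change(): create the new
    Fish, copy over the old instance's properties, then destroy the
    original, so AI targeting never sees a half-converted instance."""
    fish = init_fish(Fish(), size=feeder.size, speed=feeder.speed)
    instances.remove(feeder)   # "destroy" the original Feeder Fish
    instances.append(fish)
    return fish
```

The key design point is the ordering: the new Fish is fully created and initialized before the old instance is removed, so nothing ever references a half-converted object.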

I also added a function to prevent fish from getting stuck on the bottom of the sea. Now if a fish swims and hits bottom, it changes course. I have seen on occasion a fish being spawned too close to the bottom and getting stuck, and will have to fix that in a different manner.

I have been trying to work some safety checks into my level_spawn function to prevent the player from being unfairly killed and prevent fish from being spawned stuck in the sea floor. So far, though, this has proved tricky. I can’t simply check for collisions, because the fish’s sprite gets scaled up immediately after it is spawned, and in the first tick of the game when the fish is newly spawned, its sprite is still normal size, so an anti-collision check could still leave it too close when it spawns. As well, I can’t merely check for collisions with the Player, because I want to ensure that the Player has enough room to swim away from a nearby threatening fish.

I had some experimental code that should calculate a safe clearance distance around the player and pick coordinates outside of this safe zone to place newly-spawned fish, but when I tried running it, an odd bug arose wherein the fish count function stopped working properly — I’m guessing fish end up outside the room where they can’t be seen or eaten, but I can’t see how that’s possible given how I wrote the code that checks for this. At any rate, level transitions will be a future development goal that I will work out in more detail anyway.
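The safe-zone spawn idea can be sketched in a few lines of Python. This is an illustrative guess at the approach, not the actual level_spawn code (which is in Game Maker); the function name and parameters are made up:

```python
import math
import random

def safe_spawn_point(player_x, player_y, room_w, room_h,
                     clearance, max_tries=100):
    """Pick spawn coordinates outside a clearance radius around the
    player, clamped to the room bounds so fish can't end up outside
    the room where they can't be seen or eaten."""
    for _ in range(max_tries):
        x = random.uniform(0, room_w)
        y = random.uniform(0, room_h)
        if math.hypot(x - player_x, y - player_y) >= clearance:
            return (x, y)
    return None  # no safe spot found; caller must handle this case
```

Clamping the candidate coordinates to the room bounds up front is one way to rule out the “fish outside the room” failure mode, and the clearance check accounts for the room the player needs to swim away, rather than just an overlap test against the player’s (not-yet-scaled) sprite.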

The game demo level is already pretty fun to play, so if you have not yet tried the game out, now is a great time to get acquainted.

Boobie Teeth 0.21

Boobie Teeth 0.21 is now up on Releases. This is most likely the last update that I will be releasing before the talk I’m giving at Notacon 8, so if you’re reading this consider it an advance preview of what you’ll see if you happen to be attending.

This release has numerous improvements over 0.20. I went through most of the script code that I wrote and reformatted it, and in some cases did some refactoring. As a player you won’t notice this of course, but it will enable me to work faster and release more frequently as I continue developing the game. I greatly simplified the collision detection, which has yielded some modest performance improvements. As a result of some of the changes I made, I had to temporarily remove Feeder Fish from the game, but I’ll be bringing them back in as soon as I can rework them properly.

Since playtesting this version, I am really moved by some of the emergent behaviors that I’ve observed. Granted, sometimes the fish do stupid things like ram themselves into the ground or continue chasing a fish they targeted that has since grown too large for them to eat. But I’ve also seen fish leaping out of the water, and feeding behavior that almost looks like schooling (though it isn’t). The thing I like the most, though, is a change I made to the fish’s “normal” movement behavior. Instead of moving at a fixed speed, it changes through a range of speeds every few ticks. This gives it a movement that is more characteristic of fish swimming, with a pulsating faster/slower speed to it. I find it emotionally compelling when I see a fish moving like this while being chased closely by another fish right behind it — it looks like the fish is struggling to swim as fast as it can, falling behind, going faster… sometimes they get away, sometimes the chasing fish is too fast for them.
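The pulsating-speed behavior is simple to model. Here’s a minimal Python sketch of the idea — re-rolling the speed within a range every few ticks instead of moving at a fixed rate. The class name, range, and period are all illustrative, not the game’s actual values:

```python
import random

class SwimSpeed:
    """Every `period` ticks, pick a new speed within a range around
    the base speed, producing the pulsing faster/slower motion of a
    swimming fish instead of a constant glide."""
    def __init__(self, base, lo=0.6, hi=1.4, period=8):
        self.base, self.lo, self.hi, self.period = base, lo, hi, period
        self.current = base
        self.ticks = 0

    def step(self):
        # Re-roll the speed once at the start of each period.
        if self.ticks % self.period == 0:
            self.current = self.base * random.uniform(self.lo, self.hi)
        self.ticks += 1
        return self.current
```

Holding each rolled speed for several ticks (rather than re-rolling every tick) is what makes the motion read as bursts of effort and coasting, rather than jitter.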

I have a lot more planned for the AI in the game, but for now it’s at a point where it is basically playable and you can get a decent feel for what the game is meant to play like. This is much improved over earlier builds where “dumb” fish simply moved horizontally at a fixed speed.

Three eras of searching the world wide web

A little late to the game and perhaps obvious, I know, but I was just musing and it occurred to me that there are perhaps three distinct eras in the way people have found information on the world wide web:

The Yahoo era: A cadre of net geeks personally indexed and recommended stuff for everyone to look at when you told them what you were looking for.
The Google era: A massive cluster of robots scoured the internet and figured out what web sites looked like they were pretty good and matched them up with what you told them you were looking for.
The Facebook era: Your friends find something cool/funny/useful/outrageous and post something about it, leading you to do the same.

Ok, so yes, that’s pretty obvious to anyone who’s been on the web and paying attention from 1994-onward or earlier. Predicting what the next era will be is of course the billion dollar question.

The obvious thing that comes to mind is that things will just remain this way forever, which is of course false, and just a failure of imagination.

The next most obvious guess at what the future will bring is to combine the stuff that happened in the previous eras in some novel way. The Facebook era is kind of like that — instead of a hand-picked WWW index managed by the geeks at Yahoo!, we have a feed (rather than an index) of links which our social contacts (rather than a bunch of strangers working for Yahoo!) provide for us to check out.

So, perhaps just doing a mashup of the Facebook and Google eras would point to what the next breakthrough in search might look like. Let’s try that:

Mash 1: Our social contacts create a cluster of robots who index the WWW and come up with a custom-tailored PageRank algorithm tied to what turns our crank.

Hmm, intriguing, but unlikely. Most of our social contacts probably don’t know enough about technology to do that.

Mash 2: The behavior of our social contacts is monitored by robots who analyze the information that can be datamined out of all that activity, and use it to beat our friends to the punch. Especially for marketing purposes.

Much more likely! What we’re doing on social networking sites is already closely watched and analyzed by hordes of robots. All it would take is for someone to come up with the idea and implement it.

And it’s a good enough idea that I bet there are already people working on this right now. In fact, there definitely are if you consider social media advertisers. But I’m also thinking about more general purpose informational search.

In fact, after I congratulate myself on what a clever prognosticator I am and hit Publish, I bet within 15 minutes someone will post a comment with a link to a company that’s doing exactly this.

I mean, of course I could save myself the embarrassment and google around and see if I could find that myself, but it’s so last-era.

I want to see whether the Facebook era will bring the information to me with less effort expended. It may or may not be faster than the Google era, but faster isn’t always the most important thing — sometimes there’s a tremendous amount of value in getting information from a friend that could easily have been looked up through a simple query to Google.

5… 4… 3… 2…