Boobie Teeth 0.23!

At long last, I’ve gotten back into developing Boobie Teeth. I’ve released 0.23, available as always on the Releases page.

In the two weeks leading up to Notacon 8, I worked on Boobie Teeth roughly 16-18 hours a day, making great progress, but I had just about burned out by the time I got 0.22 ready for the presentation. I really like pushing on a project like this, where I’m interested and motivated, but I probably can’t sustain more than 12-14 hours of focused effort a day for very long. I had intended to take the next week or two off from the project and focus on some other things: housecleaning, getting ready to start a new job in May. Instead, this summer has been full of misfortunes, and dealing with various crises kept me from advancing the project. But things have settled down enough that I can now resume working on it.

This release was actually mostly done between 4/16 and 4/17. I had a few ideas from playing 0.22 that I wanted to put in quickly during Notacon weekend, and I got them done fast. There are only a few changes in this release, fewer than I had probably intended when I started 0.23. There were a number of things on my list that I might have thrown in had I kept working on it, but so much time has passed that I’m no longer sure what they were.

I briefly considered ditching what I’d done for 0.23 and starting over from 0.22, but in the end I decided to keep what I’d accomplished, put it out, and start 0.24 with fresh direction. The additions I did make to the game seem to be complete, so rather than drop them and add them back later, it makes more sense to keep them.

I think that in 0.24, I’m going to focus on a few performance optimizations so the game will scale up further before dropping frames. My demo level is about at that limit right now, which won’t do for a few things I want to try as the game advances. I really need to be able to put more objects on the screen at once without things slowing down.

Cleveland GiveCamp 2011

This past weekend (7/29-7/31) I participated in my first Cleveland GiveCamp event! This was the second year of Cleveland’s GiveCamp, and I had heard about it too late to participate last year. Once again, it conflicted with the PyOhio conference, which I attended in 2010 and hope to attend again, so hopefully event planners will try to avoid conflicts in the future. Not that it’s always possible.

I was slightly apprehensive about how it would all work out, this being my first time. It didn’t seem like it would be possible: in just 72 hours, I was supposed to join a team of developers I’d likely never met before and collaborate on a small IT project on behalf of an Ohio nonprofit organization with a worthy cause. The projects were evaluated ahead of time so that the GiveCamp organizers could select candidates that could be helped within the constraints of the event’s format. It was a well-planned weekend, and we were able to be productive. In fact, all 22 projects were successfully completed this year, up from 14 last year. In total, over $500,000 in professional services were donated. Burke Lakefront Airport and LeanDog generously hosted the event, which was funded by a raft of corporate sponsors. Volunteers camped out and worked all weekend, and in exchange we got fed, thanked, and made to feel appreciated. That seemed pretty fair.

Most of the projects were to create web sites. There were a few that were mobile applications or databases. Of the web sites, most if not all seemed to use WordPress. I’ve been using WordPress for my own site for more than a year now, and have liked it. I like it even more now! WordPress is so powerful, flexible, and easy to use. I may gush about that later.

Team 14

In our opening meeting, we were assigned to teams. My team had a project leader and a couple of developers, and I served as the team’s designer. When we sat down to go over the project, it took me a few minutes to realize that eyes were on me to tell the developers what to do. I had expected a little more direction from the project leader, but after dinner on Friday we sat down to meet with our customer contact, Lisa, went around and briefly introduced ourselves, and were turned loose.

Lisa told us about her organization, Adaptive Sports Programming of Ohio, and we took a quick tour of their old website. I spent about five minutes clicking links and skimming, and almost before I knew it I had come up with a bullet list of the top 5 or 6 categories of information, and proposed that we try to re-organize the site’s content along those lines. I noticed that much of the site’s content was geared toward linking to other resources. This is an older approach to web strategy, and it seemed to me a bit outdated, reminiscent of the early Geocities-era world wide web, when the manually maintained index of Yahoo! ruled. Maintaining all that content was a lot of work, checking for dead links and whatnot, and, I reasoned, it was no longer effective, since most people now find content they’re interested in through Google or through social networks. I proposed dropping as much of this “resource” content as the customer felt comfortable losing, and putting the priority on the other categories, most importantly: donations, how to get involved, and events that the organization was directly involved with.

To my surprise, and to everyone’s credit, there wasn’t any resistance to this idea. I was a bit worried that Lisa might feel like I was about to destroy “her baby”, or would simply disagree about the importance or purpose of various parts of the old site, but she was ready to put her trust in us right away, and the rest of the group seemed to like my proposal as well.

Throughout the weekend, we (and I emphasize “we”) made decisions together quickly and expediently, without egos butting heads or any of the other typical bad human behaviors that I’ve seen wreck teams and projects over the years. Someone would have an idea or see a problem, get someone else’s opinion on it, and propose a course of action; it would either be adopted readily, or someone else’s additional thoughts would be folded into the original proposal, and we went with it. I never felt like I was pushing on anyone else on the team or stepping on anyone’s toes. It was great. The two or three wrong turns we made along the way didn’t cause us any real harm, and when we changed course, both abandoning the old idea and adopting its successor went very smoothly.

One concession we did have to make was that we weren’t able to migrate all of the content from the old site over to the new one. By the end of the weekend we realized we would not be able to get 100% of the new content up by the time we finished. Too many decisions needed to be made about the content, and those were best left in the hands of the site owner, so that’s where we left them. We ended up building a good, but mostly empty, new “house” for her to move into.

In retrospect, our project might have accomplished more if we’d had someone do a content inventory and figure out what to do with it all. My initial idea, that we could simply re-organize the content and adjust emphasis according to the priority of the site’s missions, was a good thought, but as we got deeper into the old site it became apparent that simply copying and pasting was not going to be effective. The old site had a lot of content, not all of which was going to come over, and the volunteers on the team couldn’t make decisions about significant amounts of it without the site owner.

Late Friday I had the thought that once we got the new site up and running on WordPress, we might be able to train Lisa on managing content in WordPress and have her get familiar with it by bringing the content over herself once we got her started. This didn’t end up happening either. There was too much else to do, Lisa needed to be involved in other decisions, and her availability for the weekend wasn’t as total as the rest of the team’s. Much of the content needed to be re-thought, not just re-organized. We didn’t have a dedicated copywriter on the team, and even if we had, we might not have known what content to re-work and what to ignore without direct involvement from the site owner.

I think we can consider what we ended up with a successful IT project. We produced a redesigned web site, built on open source, which promises to be easier to manage in the future, and handed it off in “move-in ready” condition. It’s certainly an improvement over the infrastructure that they had in place previously.

The Takeaway

Keep it simple, less is more

There was a good bit less coding on this project than I would have guessed going into the weekend. In fact, I gauge our success in no small part by how much we did not need to code to get the project done. Any code we wrote was code that someone would have to maintain, and the way GiveCamp projects work, there is no provision for support once the weekend is over. We didn’t want to leave Lisa stuck with a project that would need ongoing code maintenance and no one to do it for her. Accordingly, a lot of the things we could have considered building for the project, we dismissed in favor of finding off-the-shelf solutions that we could mash up to create what we needed.

This was definitely the right way to go. The amount of customization we needed to do in order to get the site working and looking the way we needed was not zero, but we did keep it minimal. What customizations we did make were documented and handed off to the site owner. It was all, as far as I know, CSS, PHP, or JavaScript, all of which are interpreted and thus can be modified without recompiling. We spent the bulk of our time looking for plug-ins and figuring out how to configure them, testing things out, and fixing this or that with the least amount of customization needed. This was a good application of the “what’s the simplest thing that could possibly work” principle.
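To give a sense of what “minimal” means here, this is a sketch of the standard WordPress child-theme pattern, the kind of thing we leaned on; the theme and handle names are hypothetical, not our actual code:

```php
<?php
// functions.php of a hypothetical child theme ("aspo-child").
// Load the parent theme's stylesheet first, then a small file of
// local overrides, so upstream theme updates don't clobber our tweaks.
function aspo_child_enqueue_styles() {
    wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );
    wp_enqueue_style(
        'aspo-overrides',
        get_stylesheet_directory_uri() . '/overrides.css',
        array( 'parent-style' ) // make sure overrides load after the parent styles
    );
}
add_action( 'wp_enqueue_scripts', 'aspo_child_enqueue_styles' );
```

A dozen lines like these, plus a small overrides.css, was roughly the scale of customization we aimed for: easy to document, easy to hand off.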

Whence TDD?

We did not elect to take a TDD approach, and it’s really not apparent how we could have had we wanted to. With so much existing code provided for us in the form of the WordPress application, the themes, the plugins, embedded YouTube videos, and so on, there really wasn’t much of anything to write a test case against. I’d be interested in hearing from someone more experienced in WordPress and TDD about how to take a TDD approach to building a WordPress site. If you want to use TDD, do you need to start from the ground up, or at least find a stack that lends itself to the TDD approach? When you’re dealing with a platform built out of interpreted code, where the final rendering of HTML, CSS, and JavaScript is up to the client, what value do your test cases have?
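For what it’s worth, WordPress core itself is developed against a PHPUnit-based test harness, so testing site-level behavior is at least conceivable. Purely as a hypothetical sketch (I haven’t tried this on a GiveCamp-style build, and the scenario is invented), a test might look something like this:

```php
<?php
// Hypothetical sketch using WordPress's PHPUnit harness (WP_UnitTestCase).
// The scenario is invented: verify that a post filed under "Events"
// actually shows up in that category after a content re-organization.
class CategoryReorgTest extends WP_UnitTestCase {

    public function test_new_post_lands_in_events_category() {
        // The test factory creates fixture data in a throwaway database.
        $cat_id  = $this->factory->category->create( array( 'name' => 'Events' ) );
        $post_id = $this->factory->post->create( array(
            'post_title'    => 'Adaptive Ski Day',
            'post_category' => array( $cat_id ),
        ) );

        // Assert the post really is filed under Events.
        $this->assertTrue( in_category( 'Events', $post_id ) );
    }
}
```

Whether writing tests like this would pay for itself in a 72-hour build of mostly off-the-shelf parts is exactly the question I’m asking.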

WordPress still doesn’t guarantee success or quality

I love WordPress for how easy it is to use. It has sufficiently abstracted the web stack that you can do a tremendous amount with it, without having to write one line of code. It is well maintained, and yields web sites that are easy to maintain. That said, it still does not guarantee optimum results. It takes skill and knowledge to deliver that.

At the end of the weekend, the 22 projects that the GiveCamp volunteers built were presented for all to see. While all 22 projects were completed, and about 18 of them were web sites, most if not all built on WordPress, I did observe the limitations of what can be accomplished in a single weekend. Without singling any project out, I could see some design issues in the final products that were completed by Sunday afternoon. I could only observe the most obvious design problems: bad typography, disproportionate layout and whitespace. And granted, typography on the web is still in a pretty horrible state, which CSS3 will hopefully address. I don’t mean to come off as negative or nitpicky. Still, there were a few sites that had obvious problems that probably could have been fixed with a few minutes (at most, hours) of CSS tweaking. Even our own site was not 100% perfect, and I’m confident there is no such thing as a weekend web site that can’t be improved with a little more labor.

Not every team may have had the eye or the skill or the time, and I’m certain that every group delivered a product that was better than what it was replacing (which in some cases might have been nothing at all) and was appreciated by the recipients of our giving. But I hope that future GiveCamps will have more designer resources to go around. And copywriters! In fact, I’m tempted to say that the WordPress ecosystem is so successful at solving the developer side of the problem that future GiveCamps might well be better off with more copywriters and editors than developers.

Success on Monday, Sadness Tuesday?

Given how happy and successful everyone seemed to be at the end of the weekend, and how… well, Dilbertesque the real world seems to be, I had to wonder whether all the success I was seeing around me all weekend wasn’t somehow an illusion. Why do real companies have such a hard time with IT projects? I’ve never had such an easy, happy time working on a real-world project as I did this weekend. And I can’t really chalk it up to having the right people. No complaints about our team, but we were just thrown together and had never worked together before. We weren’t an established, well-oiled machine.

My biggest worry about the effectiveness of an event like GiveCamp is that, no matter how much you can accomplish in one weekend, ultimately the long-term success of these projects is in the hands of the organizations we gave them to. We can give them a great start, but without proper maintenance and attention, any website will quickly languish and become irrelevant. And without a web-savvy content maintainer, that maintenance ain’t gonna happen. No matter how easy and intuitive a developer thinks they have made something, there are always non-technical people whose purpose in life seems to be to prove the cynical adage about nature building a bigger “idiot” whenever an engineer thinks they’ve idiot-proofed something.

Too many times, I’ve seen proud development teams hand a product off to users who manage to break it within seconds in some novel and heretofore unforeseen manner, so I can’t expect that this somehow won’t happen here without good reason. I think one such good reason is that WordPress is good, mature, and healthy. But not every project was done in WordPress. At least two were done in C# with MS Access backends, and one was done in iOS. I’m really curious about the long-term viability of the Access apps.

I was talking with event organizer extraordinaire Mark W. Schumann about this, and he’d had some thoughts along these lines as well. We’re both really curious to measure the long-term success of these projects. Because, let’s face it, as good as it makes us feel to donate a weekend and do good deeds, we want our efforts to still be solving the problems they were designed to solve six months or a year from now, not just to be appreciated at hand-off. It’ll be interesting to see where things stand a year out. I’d really like to see some quantified measurements next year.

The Type of Fortune That Never Misses

I’ve been having a rough time of it lately, at least judging by what I’m normally accustomed to: a very convenient, easy, underappreciated life.

Just so you know, I’m typing this post one-handed. I’m doing about 30 wpm now, which is pretty good, but I’m frustrated that it’s not the 90 wpm that I normally do. Hahaha, no, that’s not why… I had a bicycle accident about three weeks ago, in which I broke my left humerus, just below the shoulder joint.

This was to have been my comeback ride. For about a month prior, I hadn’t been able to ride my bicycle due to pain from an inflamed sciatic nerve, aggravated by an overly ambitious ride that had left me with lower back pain.

I normally don’t write about personal stuff in this blog, as the intent for this web site is to be a professional portfolio and a blog about software development, IT, and other things related to whatever it is I’m making my career out to be. But, as I realized while putting together my presentation for Notacon 8, I feel most complete when I do not separate my professional life from my personal life. I’m not very happy when I compartmentalize my life, working 9-5 just to earn a paycheck, leaving work behind when I’m done with my day, and having the balance of the day left to pursue my personal interests. I want a unified life, where nearly everything I do is geared toward achieving personal goals, and my professional goals align and merge with my personal goals to a great degree.

So this might not be the most technical of blog entries, but rest assured it ties in, as all aspects of my life tie in somehow to the work that I do and the experience and qualifications that enable me to do it.

I have been disappointed by the setbacks that have hampered me for the last two months. I had been enjoying bicycling with my friend Rachel, who had gotten me interested in hitting the road a few times a week, and I had been feeling great as a result of the exercise. But it seems like every time I make an effort to get into shape, something happens that prevents me from continuing, and I lose the progress I’d gained until I can get my life in order enough to start again. This has happened countless times in my adult life. And then, when I tried to make my comeback after patiently waiting for the inflammation to subside enough to be active again, misfortune struck again.

I got pretty depressed about the prospect of losing my whole summer to being sidelined with this stupid broken arm, but that lasted only a few days. The experience of mending is still ongoing, but so far it has been very unlike what I might have expected. Although it is not something I would have wished for, I am the sort of person who delights in surprises, particularly when they have something to teach me. And I have come to realize that there are things worth observing and reporting coming out of this experience.

So here’s what I’ve learned:

Broken bones don’t always hurt… also, pain has evolved to provide a GREAT User Experience.

When I fell off the bike, I didn’t hear or feel the bone break. I wasn’t quite stunned by the impact with the road, but it definitely staggered me, the way a body slam does. I knew almost immediately that there was something wrong with the arm, and didn’t want to try to use it. But the injury was numb initially. Not a tingly numbness, just a not-feeling-much-of-anything-at-all numbness.

Before this happened to me, I would have expected any broken bone to be constantly, acutely painful. I was surprised that in my case it wasn’t. Perhaps it would have been for a worse injury than I sustained. I recognize that the type of fracture I had was perhaps one of the best possible ways to break a bone, if you had to break one: no displacement, no compound fracture, a clean break. It only started to feel painful about 30-40 minutes after the injury.

This gave me adequate time to do the things I needed to do in order to survive: pick myself up, get out of the street, and assess my condition, for example. Even when I did start to feel it, it wasn’t too bad unless I got jostled or bumped.

The pain was very useful in this way. It communicated to me what I needed to know, namely that I had an injury, that it was fairly serious, and then it largely got out of the way and let me take care of myself. If I did anything that might have made the trauma worse, the pain increased sharply, giving me immediate feedback that what I had done, or what had happened, was not something that I should allow to continue for any length of time.

From a User Experience perspective, I find this very instructive.

A few months ago, I watched a video of a talk entitled “Stealing From God!”, which in retrospect might have put me in the right frame of mind to pick up this lesson. The talk is worth watching in full, but in summary it’s about bio-mimesis as an inspiration for the design of technology.

When it comes to systems I might design in the future, I’ll definitely be taking this into account. Very often, users complain about a shitty user experience by calling it “painful”. It’s a metaphor, but it usually means tedious, repetitive, or time-wasting. Normally, we designers who care about User Experience just want to eliminate any pain so that users never have to experience it. I still think that’s a good goal. But I now think pain has something to tell us, that what it tells us is important, and that a user experience designer can learn from it when s/he encounters it.

My advice is this: don’t simply seek to avoid any and all pain. When you first experience “pain”, explore it. Find out what it is telling you, and if you’re the designer, ask yourself whether that’s the right message, and what else, or how else, you could communicate more effectively. Delve into it, listen to it, appreciate it in its subtlety, learn from it. Once you understand the pain fully, then you can work effectively to avoid it altogether, or to make the “pain” as beneficial an experience as possible for the user.

It’s still best to avoid painful experiences if at all possible in the final design. But where “pain” is actually useful, it may be possible to incorporate it into the design in such a way that a user actually appreciates it. The pain experience that I had through breaking my arm was so much more useful than my typical negative experiences with computer software errors… in a strange way I find myself almost welcoming the experience with the arm — not that I would rather break my arm than experience a stupidly designed software error condition! But I can appreciate the beauty of the user experience that the bone fracture pain has given me much more than I can appreciate certain repetitive nags and modal dialogs with inadequate information and poor choices that I often see when using a computer.

Broken bones don’t need a cast to heal… and other unexpected things experience can teach you.

Common, seemingly familiar events can still surprise when they happen to you. Due to the location of the break, the ER couldn’t put me in a cast. Before this happened, I thought I pretty well knew “the deal” when you break something. Walking into the ER, I assumed I’d be walking out with my torso half encased like a plaster or fiberglass cyborg, stuck in an immobilized position, and that I’d emerge from my chrysalis months later, atrophied and unable to move through my usual full range of motion. In my mind, broken bone == cast. Experience has taught me otherwise.

It turns out my experience will be nothing like that. They put me in a sling. Three weeks after the injury, my doctor tells me I’m doing well and the sling is now for comfort, not to keep the arm immobilized. I can already use my arm for light tasks. My effective strength is greatly diminished and my range of motion greatly restricted, but other than an ugly bruise, to look at me you wouldn’t think I had anything wrong with me, least of all a broken bone.

As I’ve been healing, I noticed early on that if I ate a lot of protein, particularly red meat, the pain at the fracture site lessened. Often this effect was greater than that of the pain management drugs the ER prescribed for me. I was surprised, but I understood immediately that it makes a great deal of sense for the body to be wired this way. My body needs more, and different, nutrients to mend a fractured bone than it normally needs, and the reward of reduced pain for eating the (apparently) right stuff is a great example of positive feedback. Because of the reduced pain, I learned immediately that I needed to eat more of certain foods. And the pain would return if I hadn’t eaten recently enough. Again, more lessons to learn from the pain user experience. If I hadn’t been paying attention to the pain, I might have just taken more pills to dull it. The pills, as it happens, dull my appetite, and would have created a vicious cycle, keeping me from eating what I need in order to heal quickly and properly.

Instead, I’m amazed that three weeks after the injury I’m able to do so much, and am ready to begin physical therapy. I hear that it’s going to be painful. I’m acutely interested to learn what new messages I will be able to read in the pain to come.

People should try out being disabled. Especially user interface designers. But really, everyone.

To ensure that I eat well during this crucial healing period, and to keep myself from going stir crazy while I’m stuck at home without much human interaction (my doctor told me I shouldn’t drive, so I’ve been working remotely), I sent out an open invitation to about sixty of my friends, asking them each to pick a day to come over for dinner, which they would have to provide, and they’d have to do the dishes afterward, too.

Surprisingly, a number of my friends were happy to take me up on this invitation. Tonight, my friend Jennifer came over with Chinese food. Jennifer works with accessibility issues for disabled computer users, and is rather obsessed (I guess the polite term is passionate) about it. Perhaps because she’d come straight from work, and perhaps also because she knows I’m an IT guy who is into design issues, especially where they intersect with user experience, we ended up talking a lot about accessibility.

Jennifer clearly knows a ton about accessibility, way more than I do. I get what she says without trouble, but she’s way more familiar with the problem domain than I am. As a designer, I think it’s important to account for accessibility when designing the user experience.

Developer is a generic word — usually it means “programmer” but it also incorporates “designer.” And of course, developers are not always — in fact not often — good at both programming and good at design.

It’s hard enough for your average programmer-developer to come up with a good UI that provides a high-quality experience, because the average programmer-developer doesn’t know much about design or usability. Developers who are decent at usability can design an OK user interface, but they often forget about accessibility issues.

Actually, forget might be too strong a word. I sympathize with developers, so allow me to soften that a bit. They’re often resource-constrained and unable to devote time to accessibility. And because they seldom get to work on accessibility, they tend not to “get” it.

For that reason, I think it’s a good idea for developers to “try out” various disabilities.

I’ve learned a great deal from not being able to use my off-arm at all for two weeks. I found I can do almost everything that I normally do, but I have to take a different approach, and it definitely slows me down. I have to think and plan actions, breaking them down into multiple steps that a one-armed person can accomplish. Not having the off-arm to assist my strong arm slows me down a lot more than you might think; two arms working together realize a real synergy. I can carry a lot more with two arms together than with each arm doing its own thing, and the off-arm assists the strong arm in innumerable ways. If I needed to accommodate a one-armed person in some future project, I’d at least have some idea what approach to take.

If I were to head a User Experience group, I think a great team-building exercise would be to have each member try out being disabled while at work. Each person in the group would randomly select a disability, and then they would have to live with it at least while at work; I’d encourage them to continue living it at home, too. And not just for a few hours or a day… say, for a week or two. To really get an appreciation, you’d need to get to where you almost forget what it was like not to have the disability, but that would take a long time; a compromise “tryout time” will do well enough. During this time, they would carefully observe their experiences, noting where they found difficulty or obstacles and what approaches they took to deal with them, including asking for help. And they would be tasked with identifying changes to their work environment that would make their disability easier to manage, and with implementing solutions. Needless to say, they would also have to do all this for whatever projects they happened to be working on while “disabled”.

After going through that disability, the employee would put it down and take on another. We’d have blindfolds, noise-cancelling ear protection, wheelchairs, arm and hand restraints, maybe special glasses to simulate colorblindness, whatever we could come up with. On a regular basis, employees would be encouraged to talk about their experiences as a “disabled” person and share what they are going through, what they are feeling, and especially how they are dealing with everything.

After the employee had gone through each disability that we could simulate, they would no longer be required to try them out, but could still revisit a disability if they wanted to go back and look for more insights. I have a feeling the best user experience guys would do so regularly.

This isn’t just a good idea for people who design the things that we use — it’s a great exercise for everyone.

For one, it makes you appreciate the things you can do but take for granted.

Second, it prepares you for the possibility of living with a disability at some point later in life, as so commonly happens to so many.

Lastly, it would help foster sympathy and caring for people with disabilities. So often disabled people aren’t thought about at all, and when we’re confronted with the need to accommodate them, too often people resent it, and end up resenting the person as well. This is terrible, but avoidable through the right experiences, learning, and appreciation.

In a way, I’m almost glad that I broke my arm. I still wish I hadn’t, but I’m grateful for what I’ve learned from it.

Notacon 8 Presentation video now online!

Notacon blog announcement

YouTube

Boobie Teeth alpha downloads

On BitCoin

Like a lot of people, I started hearing a lot about BitCoin recently. I didn’t pay much attention at first, but after I kept hearing it mentioned, I got interested and decided to check into it.

I became intrigued. BitCoin is an ambitious project to rethink currency and provide a decentralized means of exchange.

Sadly (or perhaps luckily, if widespread establishment might have been disastrous), before it could get much more established, a high-profile digital theft of $500,000 in equivalent value seems to have resulted in a crash of the currency’s valuation.

I say sadly because I always like things that are new and experimental and attempt to re-think the status quo.

Digital currency transactions presently depend upon banks and other institutions, which impose a lot of costs and a lot of control on the way money moves between two parties. This intermediary has to be trusted, and it would be nice if you didn’t have to trust an intermediary in order to conduct a transaction between two private parties. In the physical world, this is not impossible; a transaction of cash for a good or service is extremely commonplace. The physical realities of common transactions tend to enforce honesty and punish criminality, but there are always risks. Still, the risks, the mitigation methods, and the rewards are all tangible and easy for people to understand, and this enables people to engage with each other and conduct business.

After validation of identity, the two biggest problems with transactions conducted in virtual spaces are privacy and anonymity. These are very old-school values that are still easily realized through physical media such as cash, gold, or some barter commodity. Once quantities change hands and people part company, there’s nothing to trace what took place unless someone documents the transaction. Laws require this to be done, but informally people ignore them all the time, and for minor transactions it’s almost unthinkable to do the sort of bookkeeping and reporting that larger-scale transactions and legitimate, long-term businesses require to function.

Politically speaking, if what you do with your money can be observed, monitored, or traced by third parties, it threatens your ability to conduct business freely, particularly the free exercise of your political power as expressed through your financial means. Thus, there’s a great deal of interest in any way to conduct transactions anonymously and privately.

Of course, these shields are highly desirable to the criminal elements of society as well, which makes BitCoin inherently controversial. But as is often said, one man’s terrorist is another man’s freedom fighter. Obviously, it’s a totalitarian’s wet dream to have a completely visible economy in which all holdings and transactions are visible, traceable, and verifiable. To that end, a technology like BitCoin, which attempts to do without the sort of centralization that enables totalitarianism, would appear to be a good thing. BitCoin is good where the government is corrupt or oppressive, and no worse than cash where crime is a problem.

Like any currency, and especially any new currency, it is highly dependent upon the faith of the people who use it that the currency has value.

As a purely digital currency, this is a particularly challenging proposition. The two biggest threats to users of a decentralized currency are counterfeiting and theft. BitCoin employs some sophisticated cryptography to address counterfeiting, necessary when there is no centralized authority to determine whether a note is legitimate. Users are more or less on their own to prevent theft (centralized banks are insured and otherwise mitigate this risk for you as part of the cost of the services they provide; the options for a private individual to safeguard their belongings exist, but are of a decidedly different character).

The high-profile loss that led to the collapse of confidence in BitCoin was a theft. If what I heard about the case is accurate, the theft was not a sophisticated cyber-attack, but depended upon physical access to the file that held the rightful owner’s keys, which in essence proved that they owned the particular BitCoins in question. The file essentially acts like a bearer bond, in that it is not tied in any way to an individual’s identity (as is required in order to preserve anonymity).
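To illustrate that bearer-bond quality, here’s a toy sketch in PHP’s OpenSSL bindings. This is not BitCoin’s actual protocol (which uses ECDSA signatures over a chain of transactions); it just shows the principle that whoever holds the key file can produce a valid “spend”, and that verification says nothing about who the holder is:

```php
<?php
// Toy illustration, NOT the actual BitCoin protocol: ownership is
// simply possession of a signing key. The "wallet file" is, in
// essence, just this key pair.
$key     = openssl_pkey_new( array( 'private_key_bits' => 2048 ) );
$details = openssl_pkey_get_details( $key );
$pubkey  = $details['key']; // public half, known to everyone

// "Spending" = signing a transaction message with the private key.
$transaction = 'transfer 100 coins to address X';
openssl_sign( $transaction, $signature, $key, OPENSSL_ALGO_SHA256 );

// Anyone can verify the spend against the public key, but nothing
// here identifies the holder. Steal the key file, and you can sign
// just as validly as the rightful owner.
$valid = openssl_verify( $transaction, $signature, $pubkey, OPENSSL_ALGO_SHA256 );
echo ( 1 === $valid ) ? "valid spend\n" : "invalid\n";
```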

In other words, the situation is no different from someone breaking into the person’s house and stealing $500,000 in cash from their mattress.

So, then, it would seem that BitCoin’s infrastructure might still be essentially sound from a security standpoint. Or it might not be. But the physical theft of a bearer bond in no way invalidates the concept or value of bearer bonds.

Whether BitCoin’s infrastructure ultimately is secure or not depends on a lot of very sophisticated math and computer programming. Conventional wisdom seems to be that any security can ultimately be defeated. Perfect, invulnerable security is a childish power-trip fantasy suitable for comic book fiction. In the real world, security can never be perfect, but it does need to be “good enough”.

The question, then, is whether a decentralized digital currency can ever have “good enough” security. The general consensus in the wake of the collapse of BitCoin’s value seems to be “we doubt it.” And this is perhaps correct, given that a technology like BitCoin depends to a great degree on the trust and faith of its users, and is thus vulnerable to crises of confidence as well as actual breaches of security.

In a world where there is no possibility of perfect security, but where “good enough” security is attainable for most, certain entities may nevertheless have too many enemies, or simply be too attractive a target, not to draw the interest of so-called Advanced Persistent Threats, which, given enough time, will eventually breach them. Perhaps the ultimate mitigation for this sort of thing is not technological, but legal or even military in nature.

So what could we do for BitCoin? The solution seems to me to be to recognize the risk and use BitCoin like one uses cash. Just as one would not keep $500,000 stuffed in a mattress, one should not hold such sums of BitCoin in such an insecure manner; the holding should be converted to a more securable form. BitCoin can still be useful for what it is actually good at: secure, private, anonymous digital transactions.

The way to do it is this:

Hold your money in normal bank accounts and other traditional holdings, like you normally would. When you want to conduct a secure, private, anonymous transaction, convert some of your holdings into BitCoin. Then conduct the transaction as quickly as you are able. The recipient of the BitCoins should then convert them to a traditional holding that they are comfortable with.

The problem with this is that the conversions are traceable, and the proximity of the conversions can be used to determine who did business with whom. But there are probably methods of dealing with that, such as breaking up the conversions, spreading them across many different types of holdings, and spreading them over enough time that traceability becomes difficult or impossible at the point of conversion.
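As a toy sketch of that last idea (the numbers and approach are entirely hypothetical, and real traffic analysis is harder to defeat than this suggests): split a conversion into randomized tranches scattered over a window of time, so neither the amounts nor the timing line up neatly with the original transaction.

```php
<?php
// Hypothetical toy sketch: break one large conversion into randomized
// tranches spread across a time window. Illustrative only; this is
// not a real anonymity guarantee.
function schedule_tranches( $total_cents, $min_parts, $max_parts, $window_days ) {
    $parts    = rand( $min_parts, $max_parts );
    $tranches = array();
    $left     = $total_cents;

    for ( $i = $parts; $i > 1; $i-- ) {
        // Random slice, leaving at least one cent for each remaining part.
        $slice      = rand( 1, $left - ( $i - 1 ) );
        $tranches[] = array( 'cents' => $slice, 'day' => rand( 0, $window_days ) );
        $left      -= $slice;
    }
    $tranches[] = array( 'cents' => $left, 'day' => rand( 0, $window_days ) );
    return $tranches;
}

// e.g. convert $500,000 in 5-12 random pieces over ~90 days
print_r( schedule_tranches( 50000000, 5, 12, 90 ) );
```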

Follow the Leader: Firefox 5 and the State of the Browser Wars

Mozilla released Firefox 5 yesterday. I upgraded one of my systems already, but haven’t done so on all of them due to some Extensions that are lagging behind in compatibility. These days I mostly use Chrome as my default browser, so I’m less apt to notice what might have changed between FF4 and FF5, and looking at the change list it doesn’t look like a huge release, which is another way of saying that Firefox is mature and can be expected to undergo minor refinements rather than major upheavals; this should be a good thing. FF4 seemed like a pretty good quality release. I’ve been a Firefox user since the early 0.x releases, and have been more or less satisfied with it, whatever its present state at the time, since about 0.9.3. Before that I used the full Mozilla suite, IE4-6 for a few dark years when it actually was the best browser available on Windows, and before that Netscape 4. I actively shunned and ridiculed WebTV ;-). And I’d been a Netscape user since 1.1N came out in ’95. So, yeah. I knows my web browsers.

These are pretty exciting times for the WWW. HTML5 and CSS3 continue slowly becoming viable for production use, and have enabled new possibilities for web developers.

Browsers have matured and become rather good, and between Mozilla, Chrome, Opera, Safari, and IE, it appears that there’s actually a healthy amount of competition going on to produce the best web browser, and pretty much all of the available choices are at least decent.

It seems like a good time to survey and assess the “state of the browser”. So I did that. This is going to be more off the cuff than diligently researched, but here are a few thoughts:

After some reflection, I’ve concluded that we seem to have pretty good quality in all major browsers, but perhaps less competition than the number of players in the market might seem to indicate.

Hmm, “pretty good quality”: what do I mean by this, exactly? It’s hard to say what you should expect from a web browser, and a few times we’ve seen innovations that redefined “good enough,” but at the moment I feel that browsers are mature and good enough, for the most part: they’re fast, featureful, and stable. Chrome and Firefox, at least, both have robust extensibility, with ecosystems of developers supporting some really clever (and useful) stuff that in large part I couldn’t imagine using the modern WWW without.

Security is a major area where things could still be better, but the challenges there are difficult to wrap one’s head around. It seems that for the foreseeable future, being smart, savvy, and paranoid is necessary to have a reasonable degree of security when using a web browser, and even then it’s far from guaranteed.

There has been some progress in terms of detecting cross site scripting attacks, phishing sites, improperly signed certificates, locking scripts, and the like. Still, it seems wrong to expect a web browser to ever be “secure”, any more than it would make sense to expect any inanimate object to protect you. It’s a tool, and you use it, and how you use it will determine what sort of risks you expose yourself to. The tool can be designed in such a way as to reduce certain types of risks, but the problem domain is too broad and open to ever expect anyone but a qualified expert to have a hope of having anything resembling a complete understanding of the threat picture.

That’s a can of worms for another blog post, not something I can really tackle today. Let’s accept for now the thesis that browser quality is “decent” or even “pretty good”. The WWW is almost 20 years old, so anything less should be surprising.

In terms of competition, we have a bit less than the number of players makes it seem.

Microsoft only develops IE for Windows now, making it a non-competitor on all other platforms. Yet, because its installed userbase is so large, IE is still influential on the design of web sites (primarily in that it forces web developers to test for older versions of IE’s quirks and bugs). By now, we’re very nearly done with this; one would hope the long tail of IE6 is flattening as thin as it can as corporations finally migrate from Windows XP. Even MS is solidly on board with complying with W3C recommendations for how web content gets rendered. It seems that their marketshare is held almost exclusively because IE is the default browser for the dominant OS, particularly in corporate environments where the desktop is locked down and the user has no choice, or among the hordes of personal computer owners who treat their computer like an appliance that they don’t understand, maintain, or upgrade. I suspect that the majority of IE users use it because they have no choice, or because they don’t understand their computer well enough, or lack the curiosity to learn how to install software, not because there are people out there who genuinely love IE and prefer it over other browsers. I’m willing to be wrong on this, so if you’re out there using IE and loving it, and prefer it over other browsers, be sure to drop me a comment. I’d love to hear from you.

Apple is in much the same position with Safari on Mac OS X as MS is with IE on Windows. Apple does make Safari for Windows, but other than web developers who want to test with it, I know of no one who uses it there. Safari is essentially in the same boat as IE: the default on its native platform, and a non-competitor on every other.

This leaves us with Opera, Mozilla, and Chrome.

Opera has been free for years now, though closed-source, and has great quality, yet its adoption is still very low, to the point where its userbase is basically negligible. There are proud Opera fanboys out there, and probably will be as long as Opera sticks around, but it doesn’t seem like it will ever be a major player, even as the major players always seem to rip off the features it pioneered. Opera does have some inroads on embedded and mobile platforms (I use Opera on my Nokia smartphone rather than the built-in browser, and on my Wii). But I really have to wonder why Opera still exists at this point. It’s mysterious that they haven’t folded.

The Mozilla Foundation is so dependent on funding from Google that Firefox vs. Chrome might as well be Google vs. Google. One wonders how long that’s likely to continue. I guess as long as Google wants to erode the entrenched IE marketshare without appearing to be a drop-in replacement for the old monopoly, it will continue to support Mozilla and, in turn, Firefox. Mozilla does more than just Firefox, though, and that’s something to keep in mind: a financially healthy, vibrant Mozilla is good for the market as a whole.

Moreover, both Chrome and Firefox are open source projects. This makes either project more or less freely able to borrow not just ideas, but (potentially, from a legal standpoint at least) actual source code, from each other.

It’s a bit difficult to describe to the proverbial four-year-old how Mozilla and Chrome are competing with each other. If anything, they compete for funding and developer resources (particularly from Google). Outwardly, Firefox appears to have lost the leadership position within the market: despite still having the larger user base, it is no longer driving the market to innovate. Firefox has largely ceded that to Google (and even when Mozilla was given credit for it, much of what it “innovated” was already present in Opera, and was merely popularized and re-implemented as open source). And with each release since Chrome launched, Firefox continues to converge in its design, looking and acting more and more like Chrome.

It’s difficult to say how competing browsers ought to differentiate themselves from each other, anyway. The open standards that the WWW is built upon more or less demand that all browsers not differentiate themselves from each other too much, lest someone accuse them of attempting to hijack standards or create a proprietary Internet. Beyond that, market forces pretty much dictate that if you keep your differentiating feature to yourself, no web developers will make use of it because only the users of your browser will be able to make use of those features, leaving out the vast majority of internet users as a whole.

Accelerating Innovation

After releasing Firefox 4, Mozilla changed its development process to accommodate the accelerated type of release schedule that quickly led to Google becoming recognized as the driver and innovator in the browser market. Firefox 5 is the first such release under the new process.

This change has met with a certain amount of controversy. I’ve read a lot of opinion on this on various forums frequented by geeks who care about these things.

Cynical geeks think it’s marketing-driven, with the version number being used to connote quality or maturity, so that commercials can say “our version number is higher than the competitor’s, therefore our product must be that much better”. Cynics posited that Chrome’s initial release put it so many versions behind IE/FF/Opera that Google needed to “make up excuses” to rev the major version number until it “caught up” with the big boys.

While this is something that we have seen in software at times, I don’t think that’s what’s going on this time. We’re not seeing competitors skipping version numbers (like Netscape Navigator skipping 5 in order to achieve “version parity” with IE6) or marketing-driven changes to the way a product promotes its version (a la Windows 3.1 -> 95 -> 98 -> 2000 -> XP -> Vista -> 7).

Some geeks, I’ll call them versioning “purists,” believe that version numbers should “have integrity”, “be meaningful”, or “stand for something”. These are the kind of geeks who like the software projects where the major number stays at 0 for a decade, even though the application has been in widespread use and in a fairly mature state since 0.3 and has a double-digit minor number. The major release number denotes some state of maturity, and has criteria which must be satisfied in order for that number to go up, and if it ever should go up for the wrong reasons, it’s an unmitigated disaster, a triumph of marketing over engineering, or a symptom that the developers don’t know what they’re doing since they “don’t understand proper versioning”.

From this camp comes the argument that revving the major number so frequently necessarily means the developers are delivering less with each rev, which dilutes the “meaningfulness” of the major version number, or somehow conveys misleading information. So much less is delivered with each release that the major number no longer conveys what they believe it ought to (typically, major code base architecture, or a backward compatibility boundary, or something of that order). These people have a point, if the major number is indeed used to signify such things. However, they would be completely happy with the present state of affairs if only there were another major number ahead of the number that’s changing so frequently. In fact, you’ll hear them make snarky comments that “Firefox 5 is really 4.1”, and so on. Just pretend there’s an imaginary leading super-major version number, which never changes, guys. It’ll be OK.

Firefox’s accelerated dev cycle is in direct response to Chrome’s. Chrome’s rapid pace had nothing to do with achieving version parity; in fact, when Chrome launched in pre-1.0 beta, it was actually ahead of the field in many ways, in terms of technology at least. Beyond that, Chrome hardly advertises its version number at all. It updates itself about as silently as it possibly can without actually being deceptive, and Google’s marketing of Chrome doesn’t emphasize the version number either. It’s the Chrome brand, not the version. Moreover, they don’t need to emphasize the version, because upgrading isn’t really a choice the user has to make in order to keep up to date.

Google’s development process has emphasized frequent, less disruptive change over less frequent, more disruptive change. It’s a very smart approach, and it smells of Agile. Users benefit because they get better code sooner. Developers benefit because they get feedback on the product sooner, meaning they can fix problems and make improvements sooner.

The biggest problem that Mozilla users will have with this is that extension developers are going to have to adjust to the rapid pace. Firefox has a built-in check which tests an Extension to see if it is designed to work with the version of Firefox that is loading it. This is a simple, dumb version number check, nothing more. So when the version number bumps and the underlying architecture hasn’t changed in a way that affects the Extension, the Extension still gets disabled because its declared version range is disqualified, not because of a genuine technical incompatibility. Often the developer ups the version number that the check will allow, and that’s all that is needed. A more robust checking system that actually flags technical incompatibilities might help alleviate this tedium. But if and when the underlying architecture does change, Extension developers will have to become accustomed to responding quickly, or run the risk of becoming irrelevant through obsolescence. Either that, or Firefox users will resist upgrading until their favorite Extensions are supported. Neither situation is good for Mozilla.
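For the curious, that check lives in the extension’s install manifest, and “upping the version number” is often just a one-line edit to the maxVersion field. An illustrative install.rdf fragment (the extension id here is made up; the GUID is Firefox’s application id):

```xml
<!-- install.rdf fragment from a hypothetical extension -->
<Description about="urn:mozilla:install-manifest">
  <em:id>example-extension@example.org</em:id>
  <em:targetApplication>
    <Description>
      <!-- Firefox's application GUID -->
      <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
      <em:minVersion>4.0</em:minVersion>
      <!-- often the only line that needs to change for a new release -->
      <em:maxVersion>5.*</em:maxVersion>
    </Description>
  </em:targetApplication>
</Description>
```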

Somehow, Chrome doesn’t seem to have this problem. Chrome has a large ecosystem of extensions, comparable to that of Firefox; indeed, many popular Firefox Extensions have been ported to Chrome. Yet I can’t recall ever being warned that any of my Chrome extensions were no longer compatible because Chrome updated itself. It seems like another win for Chrome, and more that Firefox could learn from it.

I have to give a lot of credit to Google for spurring the innovation that has fueled browser development in the last couple of years. The pace of innovation that we saw when Mozilla and Opera were the leaders just wasn’t as fast, or as influential. With the introduction of Chrome, and the rapid release schedule that Google has successfully kept up, the entire market seems to have been invigorated. Mozilla has had to change its practices in order to keep up, both by speeding up its release cycle and by adopting some of the features that made Chrome a leader and innovator, such as isolating each tab in its own process and drastically improving JavaScript performance. Actually, it feels to me that most of the recent innovation in web browsers has been due to the leadership of Chrome, with everyone else following the leader rather than coming up with their own innovations.

In order to be truly competitive, the market needs more than just the absence of monopoly. A market with one innovator and many also-rans isn’t as robustly healthy as a market with multiple innovators. So, really, the amount of competition isn’t so great, and yet we see that the pace of innovation seems to be picking up. Also, it’s strange to be calling this a market, since no one at this point is actually selling anything. I’d really like to see some new, fresh ideas coming out of Mozilla, Opera, and even Microsoft and Apple. As long as Google keeps having great ideas coupled with great execution, and openness, perhaps such a robust market for browsers is not essential, but it would still be great to see.

Intellectual property value of social networking referrals

One thing I have noticed over my years of using the social web (Facebook, Twitter, Livejournal) is that human culture instinctively places a value on linking to things in a way that I find odd. There’s a type of “intellectual property” that people conventionally recognize as a matter of natural course. I don’t know how else to describe it than that.

In real value terms, this sort of intellectual property is very low value, but in social etiquette terms, the value is more substantial. The phenomenon is one of credit, but it’s not credit for authorship; rather, it is credit for finding and sharing. If you find something cool and blog about it, and you’re the first one in your little social group to do so, you get some kind of credit for being on top of things, being cool enough to know where to look, lucky enough to be in the right place at the right time, or whatever. It’s not much more than that, but somehow if you post the same link and are not the first in your social group to do so, and don’t acknowledge the coolness of the person who posted it first, it can ruffle feathers, as though people think you’re trying to be the cool, original one and are stealing other people’s “cool points” by not acknowledging where you got your cool link from.

It’s funny, though, since posting a link is an act of evaluation (“I judge this content to be worthy of your time, so I’m sharing it”) rather than an act of creativity (if you want to be really cool, go author some original content and see how many people you can get to link to that).

I take two things from this:

  1. Having good enough taste in something to make a recommendation which one of your friends will pass along to others is an important, valuable thing in itself. Having this sort of taste implies that you are cool.
  2. Getting there first is important; or perhaps acknowledging who was cool enough to turn you on to something you found cool is important.

One of the things I like a lot about Facebook is that they get this, and implement it in such a way that it basically works automatically. You can click “Share” and it handles crediting who you got it from in a behind-the-scenes sort of way that makes you follow the etiquette convention automatically, thereby keeping you from being a leech or douchebag. On Livejournal, on the other hand, this is a somewhat useful way to discern who among your friends is a douchebag, since if they don’t think to credit someone for showing them something that you’ve already seen before, you know they’re not with it, or at least aren’t following their friends-list all that closely.

Another interesting thing is that sometimes people will just post a link to something without any comment, while other times they will add their thoughts as an annotation. Sometimes no comment is needed, or one is implied by the context of how you know your Friend, what they are about, and why they would be posting that link. Other times, people will post their thoughts, sometimes writing something reasonably lengthy and thoughtful on the subject they are linking to. This tends to happen much more on Livejournal than on Facebook or Twitter, which are geared toward more structured, but forcibly brief, content. I think Livejournal tends to encourage more expressive posts because people tend to use pseudonyms and write with somewhat more anonymity than they have on Facebook, where most people use their real name. I do like the way Facebook’s comment conversations seem to flow very nicely once a topic hits someone’s wall. It’s also interesting to see how different groups of friends will come to the same linked content and have different or similar conversations about it.

I think it would be fascinating to visualize graphically how sub-circles of an individual’s friends converge through common interest in some topic. In my own Facebook experience, it has been interesting to see people I know from elementary and high school mixing with people I knew from college and afterward, and from various workplaces, and so on. I think it would be really interesting to see this sort of interaction on a very large scale, sort of a Zuckerberg’s-eye view of what’s going on in the various social circles that occupy Facebook. I can mentally picture colored bubbles occupying various regions of space, mixing at the edges, colors blending like wet paint. A rough sketch of what I mean follows.
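Just to make the bubble picture concrete, here’s a sketch of the kind of thing I’m imagining, done in Python with the networkx library on a made-up friend graph. The names, the edges, and the choice of a greedy-modularity community detector are all invented for illustration; I obviously have no idea what Facebook’s real graph or tooling looks like.

```python
# Rough sketch of the "colored bubbles" idea on a made-up friend graph.
# The people, the edges, and the algorithm choice are all hypothetical.
import matplotlib.pyplot as plt
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Pretend friend graph: an edge means two people know each other.
G = nx.Graph()
G.add_edges_from([
    ("al", "bea"), ("bea", "cy"), ("al", "cy"),    # high school circle
    ("dee", "ed"), ("ed", "flo"), ("dee", "flo"),  # college circle
    ("cy", "dee"),                                 # where the circles touch
])

# Detect the sub-circles (communities) by modularity.
communities = greedy_modularity_communities(G)

# Color each person by the community they landed in.
community_of = {person: i
                for i, group in enumerate(communities)
                for person in group}
colors = [community_of[person] for person in G.nodes()]

nx.draw_networkx(G, node_color=colors, cmap=plt.cm.Set2)
plt.axis("off")
plt.show()
```

On a real friends-list the picture would be far messier, but the bridging nodes (the “cy”–“dee” edge above) are exactly where the wet-paint blending would happen.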

I also think it’s interesting how the constraints and style of the different social sites shape behavior and the character of the groups who use them. Facebook users, in my experience, have tended to be more sedate, drier, and thoughtful, though not always; substantial numbers of my friends seem comfortable goofing around and making fools of themselves, or being outspoken to the point of risking offending people of a differing political polarity. Twitter seems to be a land of important headlines mixed with one-liner witticisms and the occasional bit of Zen. Livejournal seems more private, insular, and diary-ish. I almost said “diaretic,” but that sounds a lot like another word which, actually, might be even more appropriate, if disgusting. Discussting? Heh.

OK, I’m clearly blogging like I’ve been up for too long, and I have. But I hope to revisit these matters, put more thought into them, and see if something materializes that’s worthy of linking to and discussing. This could end up being someone’s Social Media Studies PhD thesis :P

Packt Publishing announces Game Maker 8 Cookbook

http://www.packtpub.com/game-maker-8-cookbook/book

I am contributing to this book as a technical reviewer, and it’s my first time getting credit in a publication, so I’m kind of excited and happy about that.

[Update 1/11/2013: After several months of delays, I have heard from the publisher that this book is about to be canceled. However, I am now working on reviewing a book on GameMaker: Studio from the same publisher, which looks like it’ll be a much better book.]

Unearthed Treasures of Childhood

I’m cleaning out my home office today… well, let’s say this week.

For most of my life, I have been a packrat, and a fairly disorganized one at that. I have always been very sentimental about things; I would keep reminders of events, and basically never threw out anything I had worked on as a kid. Most of that stuff got kept for a really long time, far longer than it was useful, if it ever was useful at all.

But after a certain point, keeping everything becomes a burden. A few years ago, I started to recognize that this was a problem and that I needed to do something about it. I have a very strong instinct to preserve and archive things that are important to me, either because they express some idea I find compelling, or because they help define my personality and could be used to explain myself in a way that would be persuasive or understandable to others.

Anyhow, in my office I have piles and piles of old papers, drawings and things I wrote for fun in childhood, school assignments, newspaper clippings, comic strips, receipts, artifacts from memorable experiences, and outright junk. I’m working on going through it all and purging as much of it as I can stand to part with. Mostly it’s not hard; it’s just a lot of sifting, and it’s time-consuming.

It’s a bit embarrassing for me to admit this. If you’ve seen Hoarders, that could be me. I say “could be” because that’s easier than admitting that it is me. I’m definitely not that bad, and definitely not as bad as I used to be, but getting here has taken some time. I had a realization one day and asked myself what I was doing; since then it’s just been a matter of prioritizing when I can go through the accumulation of decades and get rid of things. Most of the time I’m busy or obligated and haven’t gotten to a lot of it. But I have free time this week, so it’s happening.

Anyhow, going through my piles of junk, I came across my original concept drawing for the Boobie Teeth game! I thought it had been lost to the ages, but here it is. I’m blown away; finding it is surreal. I have no idea how it got here from my mom’s attic, where I know it stayed for many years. I must have retrieved it at some point, put it aside, and forgotten completely about it.

It’s 3 pages of wide-format fanfold line-printer paper. Someone in my family would bring lots of it home and give it to us as scrap paper we could draw on the unused side of. The printing on the used side is dated 1982, which helps establish when I drew it: likely some time around 1982-83. This means I must have been 7 or 8 when I drew up the concept, not six.

The game concept is also quite different from how I reconstructed it from memory: Boobie Teeth is a space fish. Rather than eating other fish, Boobie Teeth primarily feeds on alien creatures called Boobies, which resemble the ghost monsters from Pac-Man, only with legs. Actually, I recognize them as being “stolen” from a game I liked at the time, Fast Eddie for the Atari 2600:

Fast Eddie for Atari 2600, the inspiration for “boobies” in my original concept.

My game concept also featured birds, called Boobie Helpers, who tried to help the Boobies avoid becoming Boobie Teeth’s prey. There were also two other types of fish in the game: an “older” species of the same genus as Boobie Teeth, which only eats plants but which Boobie Teeth can eat for bonus points, and a “new” species which preys on Boobie Teeth. One of the more interesting aspects of the original concept was that the Boobie Teeth species was facing extinction; only 12 fish remained. And when you died in the game, your skeleton would sink down into the sea bed and become fossilized in the sediment.

I’m not sure how many of these unearthed revelations will end up making it into the finished game; they don’t entirely fit with the version I reconstructed from memory. I do like the death->fossil concept, though.

As soon as I can scan the drawing, I’ll post it here for you to see. For now, here’s a photo I took with my digital camera:

Original concept drawing for Boobie Teeth

Original concept description:

You are one of the 12 remaining space fish called Booby Teeth. These little space creatures called boobies are trying to eat you. There are booby helpers that look like birds. You have to eat the boobies from behind. There are 11 other fish that are in reserve. A new specie of Booby Teeth called the Booby Crusher. It is much bigger and it is trying to eat you because it doesn’t know any better. He is 15 mph faster than you. There is one last kind of booby teeth. It only eats plant life. You are trying to eat it like the Booby Crusher is trying to eat you. It swims/flies across every time you have played for 30 seconds. It is 1000 points the first time it flies by and it will stay there until the score on it is 0, then it will disappear. Every time it flies by its score gets lower. The gill on it is the 1 in 1,000. After that it disappears. It is 900, 800, and so on until it gets to 100. The score keeps splitting in half. If you clear the whole board of Boobies, Booby Helpers, and Booby Teeth that eats plant life, you have to find the fossil form. The more times you lose a remaining fish you have to find more fossil forms. When you find it, you have to pull it up with your mouth, then you will get a rougher landscape. The game starts all over again. (Later there are mountains and volcanoes.)

Using the controller: Pushing the joystick up makes the Booby Teeth go up. Pushing the joystick down makes the Booby Teeth go down. Pushing the stick to the right makes the Booby Teeth go to the right. Pushing the stick to the left makes the Booby Teeth go to the left. The fire button makes the Booby Teeth bite.

Scoring:
Common Boobies: 50 pts.
Tallest & second tallest Boobies: 100 pts.
Strongest Booby: 150 pts.
Shortest Booby: 125 pts.
Flying Booby: 250 pts.
Booby Helpers:
Baby: 300 pts.
Mother: 2,000 pts.
Fastest Booby: 350 pts.

Goal

Like just about everyone, I don’t have nearly enough time to do everything I need to do in my life, let alone everything I want to do. I need to figure out what to do about that. I have some vague ideas, but nothing fully formed and ready to go. That’s pretty frustrating. Like, taunting-yourself-that-you’re-losing-at-life frustrating.

But I do have a goal: I want to go from being a 9-to-5 guy to being a 24/7 guy. I don’t want to simply work 40+ hours a week so I can afford to keep living and maybe (ha!) retire someday. I want to take the time I spend working and put it into activities that are meaningful to me and happen to generate income. I’d rather spend all my time doing that than spend most of my productive hours working on someone else’s projects and trying to cram my own stuff into whatever’s left over when The Man is done with me.

To summarize, I want to achieve a convergence of work, income, fun, and values. I got a good taste of it this month when I worked on my game project and delivered my talk at Notacon. I didn’t make any money doing it, but had I figured out some way to, I would already have the life I want. As I see it, I’m pretty far from that life right now, and unless I’m smart about how I go about it, the more I work for The Man, the further from it I will get.

I don’t need to maximize my income potential; I’d rather not compromise doing what I love in order to make the most money I possibly could. I do need to make enough to live on, but I don’t necessarily need all that much, and I could deal with making less and working longer to attain it, as long as I truly love what I’m doing. Before getting too far along in this dream, I have to figure out how to make any money doing what I love, try that out, and get it working.

What are the things a person needs to do to accomplish this? Why don’t they teach you this in college or high school, when it would be super useful?

Looking for advice here. This blog does have a comment feature.