Thoughts on the Transhuman revolution

I’ve been reading a lot of near-future science fiction and speculative nonfiction lately, and as a result I’ve been contemplating the idea of transhumanism and what it means for us as a species and a culture.  Transhumanism is decently defined by Wikipedia, and has been explored in fiction by Charles Stross, Cory Doctorow, and others.  It has been discussed extensively in the non-fiction sphere as well: Ray Kurzweil is probably the most well-known thinker on the topic.  However, while Kurzweil focuses on the possibilities of AI consciousness and the emergence of the singularity, this article is more interested in transhumanism itself.

Defining Transhumanism


For semantic clarity, I’m going to define what I mean by transhuman, because my definition and connotations may differ from yours.

A ‘transhuman’ is someone who augments reality with technology at a constant and unconscious (or nearly unconscious) level. The key concept here is that transhumans use technology to augment reality. This helps avoid the temptation to define any tool-user as a transhuman; a primitive man with a spear is more capable at hunting than a primitive man with his bare hands.  A person driving a car is more mobile than a person walking.  A person who watches a movie while browsing IMDB on their iPhone knows more about the movie than someone watching it passively (though the passive viewer may well be enjoying the movie more).  By our definition, the iPhone user comes close to transhumanism. We might call her a proto-transhuman. However, there is still significant effort involved; she must look away from the movie and focus on her iPhone to search IMDB.

So, where are we now?  Some people (early-adopting geeks, for example) already consider quick access to information to be something like an extra limb; as one of those affected with this feeling, I can vouch for it.  Are we transhuman or not?  Again, we’re on the way there, but we haven’t yet achieved the fluidity of control and automation needed.

As for where we’re going next, let’s begin by discussing how we got where we are.

The Evolution of Information Access


For the majority of human history, access to information has been difficult.  Even after the invention of the printing press, one had to have either a personal library or access to a public library.  Information could be obtained, but not in a timely manner; poring over books was the purview of academia.  And even academics could only access this information when they were actually at their libraries.

As a result, the human brain has been the only way to store a significant amount of information for quick retrieval.  As a storage device, the brain is not that great; storing something permanently requires multiple writes (our recall of a fact is better the more times we have heard it).  It can be finicky at retrieving information; everyone who has ever had something ‘on the tip of their tongue’ can attest to that.

The next revolution in storage was electronic storage; in other words, computers.  Of course, early computers couldn’t hold a ton of information, but more importantly, that information was still stuck in one location.  To look up a fact on a computer, you had to physically travel to the computer with the information on it (or to a terminal connected to that computer; typically, these needed to be pretty close to the mainframe).

Enter the Internet


And here we come to the revolution; the Internet allows us to access virtually any piece of information from any computer in the world.  And with the information searching miracle we call Google, we can usually find that information very quickly.  Of course, the original problem with the Internet was that you still had to get to a computer to use it.  The solution?

Smaller computers.

Laptops, to be precise.  Laptop computers allowed us to connect to the Internet, and its massive store of information, anywhere we could find a phone line.  With the emergence of wireless networking, all we need is a wireless signal.  However, laptops are big, and cumbersome to use in a hurry; if I’m in the midst of a conversation and need to recall a fact, it takes several minutes to get my laptop powered on and connected to the Internet.  The solution?

Smaller computers.

Cellular telephones have evolved from foot-long bricks that required external power to pocket-sized devices with capacitive touchscreen interfaces.  These phones can also connect to the Internet, play music and games, function as e-book readers, scan bar-codes for real-time price comparison, and perform myriad other tasks.  They can access any information from any location that has cellular service.  This is the first real step toward ubiquitous information access.  However, these devices can still be somewhat cumbersome: the phone must be fetched from a pocket, then interacted with for some time to find the information you want.  In the middle of a conversation, that delay is awkward.  The solution?

Smarter computers.

The current state of the art


Immediate mastery of a wide array of information was once a symbol of the elite.  Now, anyone who can type reasonably quickly can have an online, text-based conversation and match the knowledge of anyone else on many topics (this can be tough for very advanced or specialized topics, obviously). I suspect that this trend will continue until anyone can retrieve any fact instantaneously.

The implications this has for culture are immense. Once memorization of fact is no longer a measure of the intellectual elite, intelligence will be judged along other axes; the ability to synthesize existing content (analysis) and the ability to create new content (art). The stigma that exists against artists will disappear, and we will be left with a culture in which artists are not only lauded as worthwhile members of society, but financially supported by society.

Portrait of a Transhuman


Let’s look at what transhumans might look like at a point in the near future.

He wears sunglasses with transparent OLED overlays and a bluetooth radio that communicates with his personal Mobile Device (the successor to the smart phone). The overlay provides a Heads-Up Display; in it, he sees that he has 3 unread emails, 4 new RSS items, and an instant message from his wife. A pinhole camera in the glasses tracks his eye movements and responds to them; he keeps his gaze on the IM for a moment and it expands. His wife is asking him to pick a restaurant for dinner. A second pinhole camera looks outward from the glasses, feeding data about his surroundings to the Mobile Device. He looks at a restaurant down the block, and a moment later his HUD provides a menu, operating hours, and reviews. He pulls out his Mobile Device and types a quick reply to his wife.

All of the technology I just described already exists; it just needs to be made small enough, responsive enough, and accurate enough. Protocols and standards need to be developed, and our access to information needs to be made a public commodity. Once this is achieved, we will have the future. What we’ll do with it, I have no idea.

Twitter from the command line

I’ve recently started playing with twitter. A nice way to use it from the command line (using curl) was suggested here. I have taken that and improved slightly on it.



Here is the result:

#!/bin/sh
echo -n "twitter> "
read text

while [ ${#text} -gt 140 ]; do
    echo
    echo "Message too long; used ${#text}/140 characters."
    echo
    echo -n "twitter> "
    read text
done

echo
echo "Message is ${#text}/140 characters.  Press enter to post, or Ctrl+C to cancel."
read

curl --basic --user "username:password" --data-ascii "status=$(echo "$text" | tr ' ' '+')" "http://twitter.com/statuses/update.json" > /dev/null 2>&1



To use the script, copy all of that into a file somewhere in your path, then make the file executable (e.g., chmod 755 /usr/local/bin/twitter).  Now you can type ’twitter’, type in your tweet, and you’re done!
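One caveat: the tr trick in the script only converts spaces to plus signs, so a tweet containing an ampersand, percent sign, or literal plus would still corrupt the POST body.  As a rough sketch (the character set handled here is an assumption, not an exhaustive list), a small sed-based encoder could cover the common cases:

```shell
# percent-encode the characters most likely to break the POST body.
# order matters: '%' is encoded first so the bytes we introduce aren't
# re-encoded, and '+' is encoded before spaces become '+' signs.
urlencode() {
    printf '%s' "$1" | sed -e 's/%/%25/g' \
                           -e 's/&/%26/g' \
                           -e 's/+/%2B/g' \
                           -e 's/ /+/g'
}

urlencode 'fish & chips + tea'   # fish+%26+chips+%2B+tea
```

If your version of curl is new enough, its --data-urlencode option can do this encoding for you instead.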

I even set up fluxbox so that mod4+t launches a terminal with the script running.  To do that, I added this to ~/.fluxbox/keys:



Mod4 t :Exec xterm -e "twitter"



If you’re not familiar with ‘mod4’, it is the Windows key on most PC keyboards.

I’ll eventually get around to writing a slightly more full-featured twitter updater in C or C++.  Until then, enjoy this script!

My new project - netjatafl

I’ve been pretty busy the last month working on netjatafl. Netjatafl will eventually be a networked client for playing various board and/or card games. It was originally created for hnefatafl and other tafl games. However, I have designed it to be extensible; I’m working on adding mancala games, and it looks like my design makes it pretty easy to add a new game. (I’ve added most of the logic for mancala to the client and server in just a couple hours of work). I intend to add shogi, xiangqi, chess, and possibly even go at some point in the future.

The netjatafl server (taflserv) operates on a simple, completely open protocol; it will eventually support authenticated logins and statistics tracking. Anyone could write a netjatafl client for any platform, if they wished. My clients will all be in C++, because this lets me reuse the ’libboardgame’ library, which contains the game logic used by the server. I will also build in a “capabilities” system at some point, so the client and server can both advertise which games they support.
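The wire format isn’t spelled out in this post, so purely as a hypothetical illustration (the message names and layout here are invented, not taken from taflserv), a capabilities handshake might look something like this:

```
client: CAPABILITIES hnefatafl tablut mancala
server: CAPABILITIES hnefatafl mancala shogi
        (each side then offers only the games both support:
         hnefatafl and mancala)
```

The appeal of a plain-text exchange like this is that a client in any language needs only a socket and a line parser to participate.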

The whole thing is theoretically usable in its current state; the client is an ncurses-based text UI that is pretty cumbersome, but can be used. As far as I know, it only works in Linux. Anyone who wants to cross-compile it for Windows and send me a patch with everything you had to add, feel free! I will eventually add a proper GUI, probably gtk+-based.

Like the sound of this project? Feel free to check out the code, compile it, and let me know what you think!


Etymology notes: netjatafl is Old Norse for “net-table”; i.e. a networked table you can gather around to play games. ’taflbordh’ is ON for ’tafl board’ (tafl can also refer to tafl games in general), which sounds a little redundant, but it made a nice name for a client. And ’taflserv’ is just ’tafl server’… ‘serv’ was meant to be short for ‘server’, but I later noticed that it’s also a French word meaning ‘it serves’. I find this somewhat appropriate.

How to fix PulseAudio in Fedora in 2 easy steps!

  1. su -c "yum -y remove alsa-plugins-pulseaudio"

  2. su -c "reboot"

The Case of the Odd NetworkManager Behavior

I recently purchased an Eee PC 1000HE.  This is a very nice machine, and aside from one weird bug, Linux support is great.  However, I’ve run into a very annoying problem with Fedora 10, and at the root of that problem is gnome-keyring-manager.


Misconfiguration Most Foul


We begin our tale with NetworkManager.  Since I connect to several wireless networks and a VPN, NetworkManager is a very useful thing to have working.  Its initial setup was great; I loaded nm-applet in my fluxbox startup, it prompted me for a default keyring password, and we were off.

However, on my next boot I was not prompted for my keyring password; I had to enter my WEP key manually.  After some exploration, I learned that gnome-keyring-daemon needs to be running.  The paradox is that it WAS running.

A Red Herring


I found some rather old advice that suggested I run gnome-keyring-daemon manually from ~/.fluxbox/startup, but this didn’t work; gnome-keyring-daemon starts automatically in Fedora 10, thanks to pam_gnome_keyring.so.  I now had two copies of the daemon running, neither of which worked.

What I eventually discovered was this: if I kill the automatically-started gnome-keyring-daemon (or remove auto_start from the pam_gnome_keyring options in /etc/pam.d/kdm), then start it manually with different options, it works every time.  So, instead of:

gnome-keyring-daemon -d --login

which is the automatically provided command, I ran:

gnome-keyring-daemon -f -c keyring

from my fluxbox startup file.  This worked, but turned out to be unnecessary.

An Answer


My next discovery:  If I disable the daemon’s automatic starting (once again by taking the auto_start option out of /etc/pam.d/kdm) and remove my custom invocation from the startup file, it still starts automatically, but with different options than the auto_start version!  In fact, it starts with options that work.

It turns out that nm-applet and gnome-screensaver both automatically start gnome-keyring-daemon if it isn’t running.  Since nm-applet runs first, it starts up the daemon, and passes it a completely different set of options than the pam-invoked version.  Thanks for the consistency, gnome!

A Problem


Starting gnome-keyring-daemon manually or allowing nm-applet to start it still poses a problem: the daemon doesn’t die when I log out!  This means that, as I log in and out several times, useless instances of the daemon end up sitting around doing nothing.  Since the apps that talk to the daemon use $GNOME_KEYRING_SOCKET to do so, everything keeps working; but it’s cruft I’d rather not have.

Elementary


After following this circuitous path, I finally stumbled into the answer: it’s a known bug.  It is actually caused by the lack of a proper $DISPLAY getting set for gnome-keyring-daemon; it isn’t related to the passed-in options at all.

At this point, I’m forced to fall back on a hack.  I’ve added the following to my ~/.fluxbox/startup, above the gnome-related apps:

killall gnome-keyring-daemon

I’ve also removed the auto_start option from /etc/pam.d/kdm.  Unfortunately, not launching the daemon with pam means that I can’t take advantage of the single sign-on feature provided by pam_gnome_keyring.  But until the bug is fixed, I guess this will have to be good enough.

(As for why I don’t use gdm, see this post)

Update: a command explained



If you look at the --help output for gnome-keyring-daemon (or, if you’ve applied my hack below, gnome-keyring-daemon-bin), you’ll see this output:

Usage:
  gnome-keyring-daemon [OPTION…] - The Gnome Keyring Daemon

Help Options:
  -?, --help                           Show help options

Application Options:
  -f, --foreground                     Run in the foreground
  -d, --daemonize                      Run as a daemon
  -l, --login                          Use login password from stdin
  -c, --components=ssh,keyring,pkcs11  The components to run


Anyone acquainted with Linux will understand the first two options, -f and -d, pretty intuitively. You’ll note in my post above that my ‘working’ option set included -f; this is because -f prints to standard out, allowing us to capture the GNOME_KEYRING_SOCKET and GNOME_KEYRING_PID variables that the daemon spits out. However, when run with -d, these variables seem to get set correctly anyway. Further, the -c option I used in my quest seems superfluous; the daemon defaults to using the keyring component. I wanted to explain this since it wasn’t clear in the original post exactly why I bounced between options. At the time, I was grasping at straws, and mistook a correlation (the different command-line options in use) for causation (the daemon that started automatically, with the different options, failed to work correctly).
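For anyone curious what “capturing” those variables looks like in practice, here is a sketch.  The daemon’s output is faked with a hard-coded string (the socket path and pid below are made up for illustration) so the pattern is visible without actually running gnome-keyring-daemon; in real use you would eval the daemon’s own stdout, e.g. eval "$(gnome-keyring-daemon -d -c keyring)":

```shell
# stand-in for the daemon's stdout; real runs print lines in this shape,
# but this particular socket path and pid are invented for the example
daemon_output='GNOME_KEYRING_SOCKET=/tmp/keyring-abc123/socket
GNOME_KEYRING_PID=12345'

# evaluating that output sets the variables in the current shell,
# where child processes (nm-applet, etc.) can then inherit them
eval "$daemon_output"
export GNOME_KEYRING_SOCKET GNOME_KEYRING_PID

echo "$GNOME_KEYRING_SOCKET"   # /tmp/keyring-abc123/socket
```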

The option that had me baffled, though, was --login. The information in the help output is cryptic, but I finally worked out its purpose; it allows single sign-on. pam_gnome_keyring passes your login password to gnome-keyring-daemon, which uses it to unlock a special keyring called the login keyring. This keyring can then be used to store the passwords to your other keyrings, so that when you log in, everything unlocks automatically. Your system login doubles as your keyring authentication.

Further Update: Eureka! (or: building a better hack)


Based on a comment in the bugzilla entry for this problem, I have crafted a better (if more system-intrusive) hack. I simply perform the following:

mv /usr/bin/gnome-keyring-daemon /usr/bin/gnome-keyring-daemon-bin
cat > /usr/bin/gnome-keyring-daemon << 'EOF'
#!/bin/sh
DISPLAY=":0.0" exec /usr/bin/gnome-keyring-daemon-bin "$@"
EOF
chmod 755 /usr/bin/gnome-keyring-daemon


This hack creates a wrapper script that sets the $DISPLAY variable before running the keyring daemon. Until this kdm bug is worked out, this hack performs beautifully.

It is pitch black. You are likely to be flamed by a fanboy.

I feel the need to comment about this (and, subsequently, this and this).

First, a summary, for those who get a case of the tl;dr’s.  A woman bought a laptop to use for her coursework at a local college.  She accidentally bought a Dell laptop with Ubuntu on it.  When she realized her ISP’s setup disk wouldn’t work, she tried to get Dell to swap the laptop for one with Windows.  The Dell representative apparently convinced her to keep the one she had.

She claims that this problem, combined with a lack of Microsoft Office, forced her to withdraw from classes.  The local news ran the linked article; it is worth noting that the bottom portion (the part where the news agency contacted the college and Verizon, and everything got cleaned up) did not appear in the initial article.


Needless to say, the Linux community (and the Ubuntu community in particular) exploded.  The article hit digg, slashdot, and reddit.  The angry letters and phone calls started pouring in to the news station (though they got tons of traffic, naturally).  More significantly, the woman in question was harassed on facebook.

This story shows mistakes from every party involved.  The Dell representative should have helped her switch to a machine she was more comfortable with.  The woman herself should have taken initiative, called Verizon and asked what she could do to get her connection working.  Alternately, what’s wrong with using another computer (say, at a local library) until you can get the laptop issue sorted out?  Dropping all your classes for the semester is overly drastic and melodramatic.

The worst perpetrators of stupidity here, though, are the Linux community members who not only lambasted and ridiculed this woman publicly on forums and blogs, but also attacked her personally on her Facebook account.  This is childish, pointless, and it paints the entire Linux community as anti-social assholes.

Unlike most groups, the Linux community IS Linux.  If a Star Wars fan blogs about how everyone who doesn’t know the difference between a Sith and a Dark Jedi is an idiot, the Star Wars franchise is not going to be damaged; there is a clear disparity between the creators (Lucasfilm et al) and the consumers (fans).  On the other hand, if a Linux fanboy blogs that everyone should know the intricacies of iptables configuration before being allowed on the Internet, this will color peoples’ perception of Linux.

Why does this happen?  Because Linux is Free, open to the world.  Anyone can add to it.  The community and the product are intricately intertwined.

This is a false perception, though; in reality, the rabid fanboys who would harass a woman on Facebook are a completely different set of people than the assholes that argue fine technical points on LKML (I’m using ‘asshole’ here in its rare application as a compliment).  However, the impression that an outsider has looking in is that Linux is some wild, anarchistic (or maybe communist) creation.  This stems from the growing cultural knowledge that Linux was created by and for the people that use it.  This is not quite true.  Linux was created by and for developers and technology enthusiasts, true.  However, not every vocal member of the community actually contributes to Linux itself; only a fairly small subset of users are actively involved in improving the software.

I don’t mean to devalue the role of the community in development.  Community contributors are important, welcome, and numerous.  Bug submitters and other “active users” are vital to the strength of the open development model.  However, the active users aren’t even the people that we see evident in this article.  What we see here are fanboys:

fanboy (n): Someone who is so obsessed with some subject or thing that they are blind to its faults and harass and deride anyone whose opinion differs.

These are precisely the people that Linux does not need.  The community would be doing itself a favor by creating public distance from this subset of itself.  We need more rational, clear-headed people speaking out about the benefits of Linux.  Fanboys ranting and harassing people will get us nowhere.

I am aware that I haven’t offered any advice on how to make the fanboys go away, and that’s because I don’t have any.  I don’t know how to do it, or if it is even possible.  This is just a statement of a problem that I see; anyone with ideas, please share them.

5 things I hate about Fedora 10

Every release of Fedora feels like a step in the wrong direction.  I don’t say this lightly - I use Fedora at work and at home; it is my primary operating system.  I have staunchly supported it in the face of critical Ubuntu fans for a while now.

First, a little background.  I switched to Fedora from a mixture of gentoo and slackware around the time I started my current job, since it was far easier to keep track of one package management toolset, and several things about gentoo’s packaging system had started to irk me.  The current release of Fedora at the time was 7.  I have been using it since, usually upgrading to new releases (via a clean install) about a month after they release.

My needs are simple, but apparently elusive to Fedora.  I use fluxbox as my window manager.  I prefer to perform all of my system configuration from the command line.  My graphical application use is minimal (firefox, games, pidgin).

Let’s explore the problems I’ve noticed creeping in, starting with the release of Fedora 8.  My solution/workaround for each problem is included, if I have one.  For what it is worth, I realize that some of these could be the result of 3rd-party packages (such as Nvidia’s proprietary drivers).  However, if any of these are the result of user error, then the solution should rightly be easy to find by searching documentation, which I have done extensively in every case.


1. Pulseaudio


Pulseaudio… I hate the word

This one heads the list because it’s the problem I’ve had to deal with most recently.  I have been lucky in that pulseaudio plays nicely with the sound cards on all 3 of my Fedora machines (others have been less fortunate).  However, I was stuck with audio far quieter than what I had grown used to in gentoo.

Solution: I finally discovered that pulseaudio has its own volume settings, independent of the ALSA-level audio device.  You can adjust the hardware volume levels with either of these commands:
alsamixer -Dhw:0
alsamixer -c 0

It would be nice if this were clearly documented somewhere.  There are some vague hints on this page, which is what pointed me in the right direction.

Thankfully, pulseaudio is no longer quite so painful when dealing with apps that only talk to ALSA.  I noticed some popping in certain applications, though (Neverwinter Nights, for one).  pasuspender seems to work around this, but the fact that this is necessary is kludgy.

2. GDM


The thousand injuries of GDM I had borne as best I could; but when he ventured upon insult, I vowed revenge…

GDM in Fedora has been upgraded to the latest upstream from the gnome team.  The problem with this version of GDM is that it removes almost all of its configuration options.  They have crippled it thus intentionally, and while they claim the removed options were “obsoleted due to redesign”, it seems that some of the options were dropped to prevent users from doing stupid things.

This Lowest Common Denominator approach is fine for a default configuration, but it should always be possible to change the default behavior.  Removing the ability to customize it entirely is not only against the spirit of open source software and Linux, it is insulting to the users.  It feels as if the team responsible for GDM thinks they know better than I do when it comes to configuring my machine.

In my case, the default behavior that troubles me is the fact that GDM passes the +accessx option to X.  Gnome includes a daemon that can override the accessx behavior (namely, enabling sticky keys if you hold shift down too long).  KDE includes a similar tool.  Fluxbox, however, has none - it assumes (justly) that you can turn off the accessx option at the X11 level if you don’t want it.  The new GDM denies you this ability, however.

Solution: Switched to KDM, which doesn’t seem to enable +accessx by default.  I tried XDM first, but it has SELinux errors and fails to launch fluxbox.  Also, KDM looks much nicer.  Alternately, I could have booted into runlevel 3 and then used startx, but I’ve become a fan of the graphical login prompt.

3. Upstart


The name says it all

upstart is the new init system in Fedora, a replacement for the aging sysVinit.  In theory, upstart is great - it gives you much more granular control over what processes should run at each runlevel, and may eventually replace /etc/init.d entirely.  In practice, however, it has a rather annoying problem: sometimes it fails to respawn the ttys when in runlevel 5.  This problem doesn’t seem to be present in runlevel 3, for whatever reason.

Solution: no real solution at present, but you can work around it with initctl start ttyX

4. rsyslog


Hey… Listen!

The traditional syslogd has been replaced with rsyslog, a much more powerful/configurable syslog daemon.  However, it seems to dump all kernel output to the console.  The default configuration doesn’t include any statements that should be logging to the console, so it could be caused by something else.  Either way, the problem is present.

You can test this from any fedora machine: it seems to happen on every F10 box I can find.  Just press Ctrl+Alt+F2, then plug in a USB flash drive.  This is annoying on its own, but is especially frustrating when combined with #5, below.

Solution: none

5. PCI-Express device errors


Or How I Learned to Stop Worrying and Love X.org

On my PCI-Express video card, I receive constant error messages, both in messages and on the console (see #4, above).  These happen whenever the screen is cleared or switched to.  In other words, Ctrl+Alt+FX will generate one of these, sometimes two.  Running ’less’ generates the errors.  So does the ‘clear’ command.  emacs and vi both trigger the error.  Each instance of the error takes up about 25% of the screen’s real estate.  This makes operating on the command line extremely difficult.

Solution: None yet.  I suspect this may be related to the Nvidia drivers; in that case, a future update may fix these errors.  I’ll give Fedora the benefit of the doubt where I can.

An aside on Education

I first encountered Clay Burell on his blog Beyond School, where he had started a series of Unsucky English Lectures.  These posts were brilliant, engaging, and poignant, and I followed them to their tragically early conclusion. (Clay, if you’re reading this, pick those back up, man!)  It turns out that Beyond School was actually a blog about revolutionizing education.  I just happened in while he was doing a special series.  I kept following his blog, though.

At any rate, Mr. Burell now has a new blog at education.change.org.  In particular, one recent post impressed me, and I wanted to increase its distribution, at least by the tiny amount that people actually view this blog :P

Why Schoolwork Doesn’t Have to Suck

There are some important ideas here.  The concept that our technology could (should, must) become the medium through which we engage in learning is as groundbreaking as it is obvious.  Enjoy.

.com is the new .org

No, not an angry rant about proper gTLD usage.  Instead, this is more of a Public Service Announcement: silenceisdefeat, my favorite provider of life-long free shell accounts, has had their domain name taken hostage.  silenceisdefeat.org now redirects to an ebay auction for the domain name.  As a result, they can now be found at:

http://silenceisdefeat.com

I have updated my previous link to their site (in this article) to reflect the change as well.

Self-indulgent musings on total knowledge strategy games

Total knowledge games are games in which all players involved have equal knowledge of the current state of the game, and the only factor that influences the game’s future state is the actions of the players.  Chess, Go, and tafl are three such games that I play periodically.

Recently, I pondered a fairly simple question: which of these games is the most complex?  All of them are complex enough that new players have room to become stronger over time.  Skill in these games has been traditionally praised as a virtue by each game’s culture of origin.  So, which game provides the greatest depth as a topic of study?

Before I consider the differences in the level of complexity of these games, let’s look at how a few basic elements of the games compare.  This will give us a fuller understanding of the factors that contribute to the games’ complexity.

Symmetry


Chess and Go have in common that they are symmetric games - both players have the same resources at their disposal, and seek the same goal.  In chess, the pieces for each player are arranged symmetrically at the start of the game, and each player tries to capture the other player’s king.  In Go, the board begins empty of pieces, and capturing territory is the goal for both players.  In both of these games, neither player has a handicap; evenly matched opponents will have an equal chance of winning.

Tafl, on the other hand, is asymmetric.  One player, the defender, controls a king and his bodyguards, and tries to flee to one of the corners of the board.  His pieces begin the game arranged in the board’s center.  The attacker, on the other hand, has his pieces along the four sides of the board.  He also outnumbers the defender 2-1.  Tafl also favors the defender; if two equally skilled players play each other, the defender is nearly guaranteed victory.

Board Size


Tafl and chess are played on fairly small boards - 8x8 for chess, and anywhere from 7x7 to 13x13 for tafl.  The most common tafl board sizes appear to be 9x9 (Tablut) and 11x11 (Hnefatafl).  I will be contemplating a hnefatafl board here, because that is the size on which I most commonly play.

Go, on the other hand, is played on a 19x19 board. This means that, in general, far more moves are possible at any given time in Go.

Spaces vs Intersections


While we’re talking about boards, I will pause briefly to discuss spaces and intersections.  In Go, your pieces are played on the intersections of the lines.  In tafl and chess, your pieces reside in the spaces, or squares, between the lines.  This fundamentally makes no difference at all.  You could make a grid of 8x8 intersections instead of 8x8 spaces, and play chess on it.  It would feel unnatural, perhaps, but only because you would be accustomed to the other convention.  Likewise, you could play Go on a board of 19x19 spaces.  In fact, some variants of tafl were played on a grid of intersections, such as Alea Evangelii, a tafl game played on a 19x19 board (you could, in other words, use a modern Go board to play Alea Evangelii).

Capturing


Go requires a player to surround an opponent’s stones and ‘cut him off’ from all open spaces.  Capturing, however, is not the point of the game, only a strategic element.  This is also fundamentally true of tafl and chess; the ultimate goal is to surround the king; the capturing move is not strictly necessary.  Tafl’s capture rules are less straightforward than chess; you must ‘flank’ an opponent’s piece (place your pieces on opposite, orthogonal sides of the opposing piece) to capture it.  The king must be surrounded on all four sides (in most variants).

Construction vs Destruction


In chess and tafl, players begin the game with all of their pieces in place; pieces can be captured, but new pieces will never be added to the board.  In a sense, they are destructive games; the forms which are in play at the beginning can change and be eliminated, but nothing new ever appears in play.

By contrast, Go is a constructive game.  The board begins completely barren; players add pieces until the board is full of pieces surrounding empty territories.  Pieces can be captured, but the overall trend during play is toward a fuller board.

Complexity


So, how do these games compare to each other in terms of strategic complexity?  Go has a lot going for it in terms of complexity.  First, it is played on a large board, meaning there will always be more moves to consider.  In addition, the constructive nature of the game means it is legal to play in nearly any open space at any time.  This means that the number of possible moves in Go will always be much greater than the other two games.

Additionally, the strategic elements within Go are extremely intricate.  Opening moves can impact the later game dramatically, and individual ‘battles’ (sequences of moves on a small section of the board) have countless patterns and scenarios that players must be comfortable with.  Capturing an opponent’s stones isn’t always a good idea; often, nothing prevents your opponent from immediately capturing even more of your stones (and thus gaining territory) in return.

By stark contrast, chess and tafl have a fairly small number of legal moves.  For example, in chess, there are only 20 possible moves on the first play.  The average number of possible moves from a given chess position is somewhere around 32.  Tafl provides more possibilities than chess, even with fewer pieces; since all tafl pieces can move any number of spaces orthogonally, the attacker (who plays first) has 116 possible opening moves.  The defender’s first move has 120 possibilities.  Go, by comparison, has 361 possible opening moves.
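The Go and chess figures are easy to sanity-check with a little shell arithmetic (the tafl counts depend on the exact starting layout, so they aren’t reproduced here):

```shell
# Go: the first stone can be placed on any of the 19x19 empty intersections
go_moves=$((19 * 19))

# chess: each of the 8 pawns can advance one or two squares,
# and each of the 2 knights has 2 legal jumps from the back rank
chess_moves=$((8 * 2 + 2 * 2))

echo "$go_moves $chess_moves"   # 361 20
```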

In terms of capture rules, it is not clear to me whether tafl’s capture mechanics, which are more involved than those of chess, make the game simpler or more complex than chess’ straightforward captures.  In chess, the capture rules require you to keep track of more information, since each piece has a more complex influence on holding territory.  Go, however, is the clear winner here as well, as capturing can be extremely intricate - often when trying to capture a group, you may limit your own liberties and end up being captured yourself.  A significant portion of the game’s strategy involves creating arrangements of stones that cannot be captured.

Ultimately, my observations and subjective experience suggest that Go is the most complex of these games.  It has an amazing number of possible permutations, and a very simple ruleset that nevertheless lends itself to an immense number of factors that must be considered.

Between chess and tafl, the numbers seem to favor tafl.  The asymmetry,  larger board, and larger number of possible moves seem to make it more sophisticated.  However, as long as the game is skewed in favor of the defender, the complexity may mean very little in the end.  Mostly from subjective experience, I would estimate that tafl is the more numerically complex game, but this experience may be skewed by the fact that so many of the possible moves in chess have been so well mapped. The complexity of tafl also depends heavily on the specific tafl game and board size. Even subjectively, I can’t come to any real conclusion here.