Paranoid Security: Establishing a Connection the Hard Way

Recently, I was describing the personal setup I use to connect to my home machine over on watchingback (a group that has unfortunately gone silent).  This setup combines port-knocking (with one-time sequences), disk encryption, and passphrase-protected RSA keys.  Here’s a basic rundown of how it works from an end-user perspective (i.e., once everything is set up):

First, the user (me) inserts a USB flash drive with an encrypted partition.  He mounts up the encrypted disk on a local machine (I’ll call this machine the ‘client’ throughout this article), providing the necessary password, and runs a script called ‘callhome’.  He is prompted for his passphrase, and then gets a terminal session on his home machine (we’ll call this one the ‘server’).

Read on for details about this setup, and how to do it.



Warning: what follows is madness. It is overkill taken to an extreme.  I am describing a way you can take a very, very simple procedure (connecting remotely to a system), and make it exceedingly complicated, all for the benefit of a little added security.  Whether or not this security is worthwhile to you is, of course, your business.  In an age where our governments and fellow citizens are increasingly keen on collecting everything from our shopping and reading habits to our credit card numbers, I personally feel that caution is worth the effort.

It is madness.  I’m not convinced it isn’t justified madness.

This tutorial assumes you are running Linux, and that you are comfortable with the command-line interface and with networked computing in general (of course, you’re reading this on the Internet, so that’s a good start).  All of my examples will be Fedora-centric.  If you don’t use Fedora, you’ll need to figure out what the commands are for your distro.

So, how is this complex setup I describe different from just typing “ssh user@server”?  Well, first, the callhome script executes a portknocking sequence.  Until this sequence is done, ssh is closed on the server.  After the sequence, ssh is opened only for the IP address of the client, and only for a small time window.  The ssh connection must happen during this window.  The script initiates the ssh connection, which helps keep this secure.  In addition, each portknocking sequence is valid only once - the USB drive contains a list of all valid sequences, and the script is set up to only use each one once.

Next, ssh on our server is set up to only allow connections with public keys.  This means that even if an attacker knew the correct portknocking sequence, he would not be able to login with a password - he must have the private RSA key.  The private key is on our USB flash drive, which is encrypted.  The key itself is further encrypted with its own passphrase, so you still enter a password to connect home, the work to verify it is simply done on the local machine.  The passphrase is never sent across the Internet, even in an encrypted/hashed form.

There are some other nice features, including a ‘panic’ portknocking sequence that will shut down the portknocking server itself, locking down the remote server completely.  This panic script is stored on a machine to which I have a shell account.  If the USB flash drive is ever lost/stolen, I can get to any machine with an ssh client, log in to the shell account, and kill the knock server.  New connections to the server then become impossible.

This setup is useful for more than just a terminal connection home.  You can forward X through it and run graphical apps from home (this is typically going to be very slow, however).  You can forward any ports you like, so that you can route web traffic through this ssh tunnel and prevent people on your network from watching where you go on the web.  Anything you can do with a normal ssh connection can be done here.  Later I’ll demonstrate some examples that I use. So, that’s the setup.

Now I will outline exactly how to do it, one step at a time.  You might want to grab a snack and use the bathroom - this is going to be a long trip.

Part 1: Dynamic DNS


Before you can call home to your server, it helps to have a name to call it by.  However, you can’t use a traditional hostname if your machine is on a broadband network because your IP address may periodically change.  Dynamic DNS (or DynDNS) was created to solve this problem.  A daemon runs on your server that periodically checks the IP address of the server and sends it to a DynDNS server.  This DynDNS server then updates a DNS record whenever your IP address changes. I use DynDNS.com.  It’s free and easy.  Just choose a hostname for your machine, then install and configure the ddclient software.  You can get instructions on configuring ddclient for DynDNS.com here.
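As a sketch of what the result looks like (every value here is a placeholder; follow the linked instructions for your actual settings), /etc/ddclient.conf ends up along these lines:

```
# /etc/ddclient.conf -- minimal sketch for DynDNS.com; all values are placeholders
daemon=300                  # re-check the IP address every 5 minutes
protocol=dyndns2
use=web                     # discover the public IP via a web service
server=members.dyndns.org
login=your_dyndns_login
password=your_dyndns_password
your.hostname.dyndns.org
```

Once ddclient is running, ‘ssh your.hostname.dyndns.org’ will reach your server no matter how often your ISP shuffles your address.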

Part 2: Configuring SSH


On the server, find your sshd configuration file (on Fedora, this is at /etc/ssh/sshd_config) and ensure the following options are set to these values:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile     .ssh/authorized_keys
PasswordAuthentication no
ChallengeResponseAuthentication no

Now, restart your ssh daemon:
service sshd restart

Now, try to ssh into your machine (you can just do ‘ssh user@localhost’).  You’ll get denied immediately, without even seeing a password prompt.  This is what we want. Next, we create the ssh key that we will use.  Run:
ssh-keygen -t rsa -b 4096

When prompted, specify a path other than the default.  Your home directory is a good choice - we will be moving id_rsa to the USB flash drive later.  Also, make sure you specify a good passphrase - if the USB flash drive is compromised, the strength of this passphrase will buy you time to lock down the server. Now you have 2 files in your home directory, id_rsa and id_rsa.pub.  id_rsa is your encrypted, private RSA key.  id_rsa.pub is the public key that matches this private key.  Copy the contents of id_rsa.pub into ~/.ssh/authorized_keys.  This step will allow the private key to connect to the server as this user.
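For reference, the whole keygen-and-authorize step can be scripted.  This is only a sketch (the passphrase is a placeholder you should replace, and I’m assuming the key lands in your home directory as described above):

```shell
#!/bin/sh
# Sketch of Part 2's key setup; the passphrase below is a placeholder.
set -e
ssh-keygen -t rsa -b 4096 -N 'placeholder-passphrase' -f "$HOME/id_rsa" -q
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"                           # sshd ignores keys in lax directories
cat "$HOME/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"           # sshd checks this file's mode too
```

Note the chmod lines: sshd silently refuses keys if ~/.ssh or authorized_keys is writable by anyone else.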

Part 3: portknocking


There’s still one significant security concern: unknown vulnerabilities.  OpenSSH is a complex program, and almost certainly still contains a vulnerability or two that haven’t been discovered.  To avoid getting hit with the latest exploit, we can hide the presence of ssh from the outside world completely.  This is the beauty of portknocking. The premise of portknocking is that the ssh port is firewalled off unless a specific sequence of ports is first hit, in order.  This doesn’t add a lot of security by itself; an attacker can simply sniff the portknock sequence, then repeat it to open the same port.  Normally, portknocking will only deter attackers who don’t know you have ssh open.

However, the portknocking server we are going to use supports one-time sequences.  With this configuration, the correct knock sequence changes after each knock.  The server has a list of sequences to use, and we will also keep this list with us on the USB flash drive. Before we begin configuring portknocking, make sure you have firewalled off port 22.  There are two possible network setups we will consider:

  • You have a router between the server and the Internet.  This router passes ssh traffic to your server, and the router acts as the firewall that blocks ssh access.

  • The server is connected directly to the Internet.  Local firewall rules on the machine are blocking ssh access.


In the first instance, you need to be able to install a portknocking server on the router; additionally, the firewall rules needed will be more complicated, and will vary based on how your router is configured.  My example here assumes the second case: that the server itself is listening to the knocks (i.e. it is directly connected to the Internet).  The first case is discussed in Appendix C.  Install knockd.  Once installed, you’ll need to configure /etc/knockd.conf.  For now, I’ll present a basic configuration (we’ll add some more stuff to this later):
[options]
logfile = /var/log/knockd.log


[ssh]
one_time_sequences = /etc/knockd/ssh
seq_timeout = 10
tcpflags = syn
start_command = /usr/sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
cmd_timeout = 5
stop_command = /usr/sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

In /etc/knockd/ssh, you need to have sequences of numbers to use as one-time sequences.  Each entry in the list should be formatted like this:
 1,2,3,4,5

There is a space at the beginning of the line; this is helpful because knockd will comment out each line as it uses it by placing a ‘#’ at the beginning of the line.  The numbers you generate should ideally be between 1024 and 65535; I generate my numbers with a script similar to the following:
#!/usr/bin/perl
$num_keys = 50;
@data = `d20diceroller --nototals "5d65535[reroll< 1024][repeat $num_keys]"`;


foreach (@data)
{
next if (/:/);

s/ $//;
s/ /,/g;
s/^/ /;

print;
}

This script uses a program I created, d20diceroller, to generate its random numbers.  That tool is part of the d20tools package, and can be found at its sourceforge page.  The subversion repository is currently recommended. Now that you have the one-time sequences, you must start the knock daemon.  You’ll most likely want to add this to an init script (such as /etc/rc.local):
knockd -i eth0 &

‘eth0’ here should be replaced with whatever the name of your Internet-facing network interface is. Now, portknocking is configured and running.  We only need to configure the USB flash drive, and we’re done with the basics.
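One aside before moving on: if you’d rather not install d20tools, a rough equivalent of the sequence generator using only GNU coreutils might look like this (a sketch; the filename is arbitrary, and I’ve only verified it against the format described above):

```shell
#!/bin/sh
# Sketch: emit 50 one-time sequences of five random ports (1024-65535 each),
# comma-separated, with the leading space that knockd's comment-out behavior expects.
for i in $(seq 1 50); do
    printf ' %s\n' "$(shuf -i 1024-65535 -n 5 | paste -sd, -)"
done > sequences.txt
# then, as root: cp sequences.txt /etc/knockd/ssh
```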

Part 4: USB Flash drive setup


First, you need a partition on the flash drive that will be dedicated as the encrypted partition.  This can be one small partition, or it can be the entire disk.  I set aside the last 10 MB of the disk, myself.  Use fdisk, parted, or another partitioning tool to get the disk to your liking (remember, re-partitioning can erase everything you have on the drive.  Be careful). Once the disk is partitioned, you must create a secure volume, then create a filesystem on that volume.  As root, run the command:
cryptsetup create secure /dev/sda1

Where ‘/dev/sda1’ is the device name of the partition that should be the encrypted partition.  Enter your desired passphrase when prompted.  This should be a different passphrase than you used with the ssh key, ideally.  Now, you should have a device file named /dev/mapper/secure.  This is the encrypted pseudo-device Linux has created to represent your partition.  Create a filesystem on it.  I recommend a DOS filesystem because of its portability (an ext3 filesystem will retain the UID/GID and permissions for each file, which can get really confusing when moving from system to system and using users with different UIDs):
mkdosfs -F 16 /dev/mapper/secure

Now mount /dev/mapper/secure.  On it, create a directory called .ssh.  Copy the id_rsa file you created earlier into this directory, and create a file called ‘config’ that looks like this:



Host your.server.address home
User your_user_name
UserKnownHostsFile .ssh/known_hosts
HostName your.server.address
Port 22
IdentityFile .ssh/id_rsa
Compression yes

Also, take a copy of the sequences you created earlier (in /etc/knockd/ssh) and copy them to a file called ‘sequences’ in this .ssh directory.  You need to modify this sequences file so that the commas are converted to spaces.  You can do that with this command:
perl -p -i -e 's/,/ /g' sequences

Now, create a script in the root of the encrypted partition with these contents:
#!/bin/sh
SERVER_NAME=your.server.name
WD=$(dirname "$0")
cd "$WD"
chmod 600 "$WD/.ssh/id_rsa" 2> /dev/null
sequence=$(head -n 1 "$WD/.ssh/sequences")
[ -z "$sequence" ] && echo "Error: No more knock sequences" && exit 1
for i in $sequence;
do nc -z $SERVER_NAME $i; done
sleep 1
ssh -F "$WD/.ssh/config" home && sed -i '1d' "$WD/.ssh/sequences"

This script will execute the next portknocking sequence in the list, then automatically ssh into the server.  It uses the config file in our local .ssh directory, so the username and key file are already specified. Now, to unmount the USB flash drive’s encrypted partition, you can execute these commands:
umount /dev/mapper/secure
cryptsetup remove secure

That’s it!  Now, all you need to do to use the system is set up the partition as an encrypted volume, mount the encrypted filesystem, run the ‘callhome’ script, and enter your ssh passphrase.  Extra-secure connection home, for the truly paranoid.  The only upkeep required is to periodically generate a new list of sequences when you run low.  This system is a bit more complicated than just using an ssh command, but I discuss how to automate the connection procedure on systems you use a lot in Appendix B.

But wait, there’s more!  What happens when The Bad Guys steal our USB flash drive and start frantically trying to decrypt it?  Enter the panic knock.

Part 5: Disaster recovery


A scenario, if you will.  You’re sitting at your desk, your uber-secure connection home humming along, letting you chat on IRC without your boss being any wiser.  You lock your X session and get up to grab some coffee.  You get back to your desk, and glance over at your workstation, expecting to see protruding from the front your faithful USB flash drive, fast friend these many years, steadfast companion against the dangers of revealing your personal life’s details to those who would kill for it.  But it’s gone.  Someone has taken it. A quick survey of your fellow workers (and by “survey” we mean “threaten them with violence so they know you’re serious”) reveals that they don’t have it.  No one saw anyone come near your desk.

There is only one explanation: Identity Theft Ninja.  Trained in the secluded mountains of Japan from birth, these versatile agents of stealth can smell a USB flash drive that allows a connection to someone’s home server from a league distant.  You never had a chance. Hope is not lost, however!  Because the drive is encrypted, and the ssh key is further encrypted, you have an advantage.  The Identity Theft Ninja have powerful computers for cracking encryption schemes, but it will still take time.

Basically, when you notice your USB drive is missing, you can execute your panic script. The panic script should live on a shell server; something you can get to from any machine.  I recommend silenceisdefeat.  On the shell server, you simply have a script called ‘panic’.  It can look like the following:
#!/bin/sh
SERVER_NAME=your.server.name
sequence="1 2 3 4 5"


for i in $sequence;
do nc -z $SERVER_NAME $i;
done

Most shell servers will not give you execute privileges, but because this is a script, you can simply type ‘sh panic’ to execute it. On the knock server, we have a special action to perform when someone executes that particular sequence.  Add this to /etc/knockd.conf:
[shutdown]
sequence = 1,2,3,4,5
seq_timeout = 10
tcpflags = syn
command = killall knockd

By the way, do not actually use the sequence 1 2 3 4 5.  Use a random sequence, but include a number that will never appear in your normal portknocking sequences.  The ‘out of phase’ number guarantees you never accidentally shut down the server, and keeping the sequence random guarantees that a portscan or other malicious attack won’t lock you out of your server.  It would be a good idea to change this sequence every time you use it, as well, to prevent an attacker from repeating the sequence to frustrate you.
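As an illustration, one way to mint a fresh panic sequence is four random high ports plus an out-of-range marker (500 here is an arbitrary choice, since the normal sequences never go below 1024):

```shell
#!/bin/sh
# Print a knockd 'sequence =' line: four random high ports plus port 500,
# which can never occur in the normal 1024-65535 one-time sequences.
printf 'sequence = %s,500\n' "$(shuf -i 1024-65535 -n 4 | paste -sd, -)"
```

Paste the output into the [shutdown] section of /etc/knockd.conf and update the panic script to match.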

Appendix A: Beyond SSH - forwarding other traffic


You can take advantage of the power of SSH to create an extremely secure tunnel for almost any data; you aren’t limited to running commands on your remote machine. Perhaps you want to browse the web through the encrypted tunnel, so other users on the network can’t see that you’re really shopping on newegg instead of getting work done.  In that case, you could add this to your .ssh/config file:
DynamicForward 8137

This creates a SOCKS proxy that you can route traffic through.  Simply configure your web browser to use a proxy at localhost, port 8137.  If you want to tunnel certain sites through the proxy but not others (and you use Firefox), check out FoxyProxy. Check the command ‘man ssh_config’ for more options you can put in the .ssh/config file.
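Similarly, individual ports can be forwarded with LocalForward lines.  For example (addresses and ports here are made up), to reach a web server on your home LAN through the tunnel, add this under the Host entry in your .ssh/config:

```
LocalForward 8080 192.168.1.10:80
```

With that in place, browsing to localhost:8080 on the client fetches pages from 192.168.1.10 on the home network, all inside the encrypted tunnel.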

Appendix B: Mounting your encrypted volume made easy


You need root access to create and mount an encrypted volume.  If you use the same few computers all the time (and you have root access on them), you can simplify your life.  First, use the ‘visudo’ command and add a line to the end of the sudoers file like this:
your_user ALL=(root) NOPASSWD: /sbin/cryptsetup

This will allow you, as a normal user, to execute ‘cryptsetup’, which lets you create and remove encrypted volumes.  Next, add a line like this to /etc/fstab:
/dev/mapper/secure /mnt/secure auto noauto,user,umask=077 0 0

This will allow users to mount /dev/mapper/secure once it is created.  The umask guarantees other users on the system can’t see our files, which would compromise the ssh key.  Don’t worry, we can still prevent another user on the system from hijacking our mount; that comes next. Now, create two files in /usr/local/bin, called ‘secureon’ and ‘secureoff’.  In secureon, put:
#!/bin/sh
sudo cryptsetup create secure /dev/sda1 && \
    mount /mnt/secure

sda1, of course, is whatever the device name of the encrypted partition is.  You can use udev or hal to ensure this is always a consistent name.  secureoff looks like this:
#!/bin/sh
umount /mnt/secure && \
    sudo cryptsetup remove secure

Make both of these scripts executable (‘chmod 755 /usr/local/bin/secureo*’).  Now you can simply run ‘secureon’ to create and mount the secure volume (you’ll be prompted for the encryption passphrase), and ‘secureoff’ when you’re finished.

Appendix C: Behind a router


The last case we will consider is the complex but extremely common situation where you have one device acting as a router.  This changes the iptables rules we need to use with the knockd server.

First, you need to have a router that  you can install Linux software on.  In other words, your router must be running Linux.  If you have a computer acting as your router, this is probably no problem for you.  If you have a consumer broadband router, this may be more difficult.  You can get Linux firmware for certain models of broadband router, however.  Several broadband router distributions exist; I use OpenWRT; it is easy to install new software with OpenWRT, and knockd is available for it.

The exact rules you will need are going to depend on your particular iptables setup, but to forward a port you will need two rules:  one in the filter table’s FORWARD chain and one in the nat table’s PREROUTING chain.  The approach that I recommend is to add the rule in the FORWARD chain permanently, and use knockd to add and remove the PREROUTING rule.  This simplifies the knockd configuration, and allows you to use the FORWARD chain as a handy reference for what forwards are possible.

For example, let’s say you have a machine at 10.10.9.18, and the knock daemon will open SSH to this machine.  First, you want to add this firewall rule permanently:
iptables -A FORWARD -p tcp --dport 22 -d 10.10.9.18 -j ACCEPT

Put that in your router’s iptables configuration.  If your router is running Fedora, put this line (minus ‘iptables’) in /etc/sysconfig/iptables.

If you’re using OpenWRT, I would suggest using the forwarding_wan chain instead of the FORWARD chain.  Also, on OpenWRT you can put this line in /etc/firewall.user.

The start_command and stop_command lines in /etc/knockd.conf will add and remove the PREROUTING rule, like so:
start_command = /usr/sbin/iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.18:22
stop_command = /usr/sbin/iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.18:22

For OpenWRT, use the prerouting_wan chain instead of the  PREROUTING chain.

One great thing you can do with a router is use different knock sequences to forward SSH to different servers.  If you have several machines on your network, you can simply add additional sections to knockd.conf (and additional rules in the FORWARD chain).  As long as they use different knock sequences, you can overload port 22 to forward to whichever machine you need.
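For instance, a sketch of two such sections (the addresses, sequence filenames, and the second machine are all placeholders):

```
[ssh-desktop]
one_time_sequences = /etc/knockd/desktop
seq_timeout = 10
tcpflags = syn
start_command = /usr/sbin/iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.18:22
cmd_timeout = 5
stop_command = /usr/sbin/iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.18:22

[ssh-laptop]
one_time_sequences = /etc/knockd/laptop
seq_timeout = 10
tcpflags = syn
start_command = /usr/sbin/iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.25:22
cmd_timeout = 5
stop_command = /usr/sbin/iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 22 -j DNAT --to 10.10.9.25:22
```

Each machine gets its own sequences file on the USB drive, and the callhome script just needs to know which list to consume.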

A New Hope

I once had a blog on livejournal, titled slashsplat.  This blog didn’t see very many posts, because I had to log out of my personal journal to log in to it.  So I decided that a blog hosted somewhere other than livejournal would be a good idea.

That’s the purpose of this site.  It will be somewhat more general; the goal of this blog is to discuss geek culture and everything that may mean to me:  programming, technology, gaming (video and table-top), and whatever else springs to mind.  However, each post will try to be an entity separate from myself; personal matters that I feel like ranting about will not appear here.  I have a personal journal for that, after all.

Anyone who feels like reading this little piece of the Internet is welcome to come along for the ride.  If it’s just an exercise in self-indulgence, then so be it.

(Below this post, you may notice I’ve included all of the posts from slashsplat as well.  Those few posts span a lot of changes in my life, so the tone will vary as you venture farther back.)

Nintendo and the Homebrew Arms Race

When I purchase a piece of hardware, it is mine to do with as I wish.  This is a long-held understanding.  If I buy a piece of clothing, I can have it altered.  If I buy a car, I can change the tires.  If I buy a television, I can kill myself trying to screw with its insides.

It might void the warranty, it might put my life at risk or potentially damage the thing I’ve purchased, but it is my right as a consumer.

Nintendo takes a different view on the issue.  Owners of the Wii have long been able to employ a simple buffer overflow exploit in Twilight Princess to run custom code.  This exploit, called the Twilight Hack, allows a user to install, among other things, an application called the Homebrew Channel, which looks like any other Wii channel and lets you run other custom code without using the Twilight Hack again.  It’s the gaming console equivalent of installing a new stereo in your car.

Since the hack was made public, Nintendo has been trying to thwart it.  They have, to date, released three firmware updates that included code targeted to stop the Twilight Hack.  The most recent update succeeded at stopping it completely - it appears to detect the hacked save files and delete them, both on boot and whenever you insert an SD card.

So, all of this is standard fare.  Whenever a console launches, homebrewers will make it run custom code.  The console manufacturer will release an update to prevent this.  The homebrewers will work around it.  This process will continue in an escalating cycle.

However, Nintendo has delivered a low blow here.  Along with the System Menu 3.4 update, they changed their terms of service.


We may without notifying you, download updates, patches, upgrades and similar software to your Wii Console and may disable unauthorized or illegal software placed on your Wii Console…

Now, that’s pretty cold - deleting our custom software?  Come on Nintendo, all I want to do is play videos on my Wii!  Also, the first time a fully automated background firmware update breaks something, the angry calls are going to pour like rain.  Power outage in the middle of a night-time firmware update?  Too bad!  But it gets worse…
If we detect unauthorized software, services, or devices, your access to the Wii Network Service may be disabled and/or the Wii Console or games may be unplayable.

Okay, at this point I feel it is crucial to point out a couple of things.  First, these quotes come from two documents, the Wii Network Service Privacy Policy and the Wii Network Service EULA.  Both of these documents are required, not to use the Wii in general, but to use the Wiiconnect24 services (the Shop channel, Nintendo channel, and Nintendo’s other online content channels).  So, to use their network, you agree that they may disable your system completely.  This means two things:

1. You can perfectly legally run hacked code on a Wii that does not use Wiiconnect24.

2. You grant Nintendo the right to break the law (destruction of private property) if you choose to use the Wiiconnect24 service.

Now, according to a lawyer I know, a contract cannot override criminal law, even if signed in full knowledge as opposed to clicked-through (the enforceability of click-through EULAs is still up for debate in the US).  So this clause is, by necessity, unenforceable.

So why is it there?  Nintendo has a juggernaut legal team, famed for its ruthlessness.  They can bankrupt any individual consumer with the legal proceedings necessary to challenge them, and it is unlikely that this will raise enough stink to get a class-action suit started.

I used to have some respect for Nintendo.

Linux on the Desktop - a partial solution

Lately, I’ve read a number of “Windows user tried Linux for a week and hated it, and this is why” articles. Then, while holding back the urge to scream during a Windows XP install, it hit me: we’re holding a double standard, here.

In the last year, whenever someone talks about “whether Linux is ready for the desktop”, the complaints that always crop up revolve around the fact that a user can’t throw in a Linux install CD, click next a few times, and have a fully functional desktop environment in half an hour. Several things plague these proverbial users: the lack of mp3 support is probably the most problematic now, as is the lack of 3d graphics support. The complaints further, er… complain, that the user has to know what she is doing to enable/install all of these components.

What most people overlook, though, is that installing Windows is no cakewalk, either. Windows ships with almost no real video or audio hardware support - everything must be downloaded from 3rd party websites, and more importantly, the user has to know what vendor website to go to, and how to navigate the vendor’s site (with some vendors, that can be a real pain!).

So now, let’s be fair. I’m taking a Windows XP install, out of the box, and comparing it side-by-side with an Ubuntu Linux install. Okay, here goes.

Ubuntu Linux

No mp3 support

As a user, I have to install several non-free packages, which means changing my available repositories and running a few commands (or using the graphical tool). If I prefer the less-questionably-legal route, I would purchase the Fluendo plugins (€28 for their entire set, with perpetual updates, as of this writing. Still about 1/4 the price of Windows’ most basic version), and follow their instructions to install them.

Of course, I also have to know about these options. A quick google search (“MP3s in Ubuntu”) and a forum gives me the answer, in step-by-step format.
No 3d graphics acceleration

This is even easier. All we need is to install the nvidia-glx or xorg-driver-fglrx packages, depending on the card. They’re also in the restricted repository, but we’ve already enabled it previously. If we hadn’t, the google search “3d graphics in Ubuntu” gives us the correct answer immediately.
No flash player

Another quick google search turns up the answer, as always with step-by-step instructions.

And, that’s it. Everything else I need to do to be productive is already provided by Ubuntu: web browser, office suite, multimedia software. Note: I never had to restart Ubuntu during this whole process.
Windows XP

No audio

First, I have to figure out the name of my audio chip, which Windows doesn’t tell me. All Windows will say is “Unknown Multimedia device”. By booting Linux and running lspci, I discover it’s a C-Media chip, and go to their website. I have to give them the exact chip model number, and they give me a driver to download. I have to restart Windows.
No 3d graphics acceleration

Again, the video controller is just called an “Unknown display adapter”. Foreknowledge tells me I have an Nvidia Geforce 6600 GT. I go to Nvidia’s website (much easier to use than C-Media was), and get the driver. I have to restart Windows.
No flash player

Well, this one installs automatically. Doesn’t even need a restart! 1/3 isn’t bad, I suppose.
The Conclusion

What’s the point of this exercise? Am I trying to say Windows is teh sux0r? No, that’s not my message today. I could extoll the myriad problems with Windows that make Linux a better option (spyware, viruses, openness and all the benefits thereof, etc), but that’s not the point.

The point is this: when it comes to installation, Linux and Windows are roughly equivalent in complexity. Linux has its installation issues; so does Windows. They tend to break roughly even, in my experience, although Linux has a much more readily available support structure in the form of community forums. But both OSes require a lot of user knowledge in order to get up and running. They assume you already know how to do things. What they really assume, underneath, is that a technical person is doing the install.

The Solution

Most Windows users never install their OS; some technician installs it, either OEM at a factory, or at the local computer shop, or the in-law programmer who gets drafted for technical work (ahem…). Linux users have seldom known this luxury; instead, whenever someone talks about Linux, they assume that the end user is doing the install.

The solution is to treat Linux installation the way we treat Windows installation. Someone who Knows What They Are Doing (tm) sets up the OS and delivers it to the end user. One practical advantage for the Linux community is that all the time spent on fancy installers could be channeled elsewhere (not to say we don’t like our hardware auto-detection, et al. But a curses-based menu is just fine, thanks). Make Linux installation work like OS installation always has before: technical users install their own OS, everyone else leaves it to the techs.

At least don’t hold us to a double standard.

Then They Fight You

Microsoft threatens to sue the entire FOSS community

Where have I seen this kind of threat before? Hmm… SCO, anyone? Is MS really desperate enough for that? SCO only sued IBM because they were losing money in copious amounts, flirting with bankruptcy. Vista seems to be the straw that’s breaking Microsoft’s back.

2^8

09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0

That is all.

Decentralizing Second Life

So, I’ve been thinking about Second Life, and it occurred to me that it’s being done entirely the wrong way. Don’t get me wrong; I enjoy SL, and have no qualms with the experience itself. It’s the underlying scheme it’s built on that bothers me: one company controlling all the servers, one company responsible for keeping everything running smoothly. It seems to me that all technologies built on that model eventually fail on the Internet, while distributed technologies (Web, email, usenet) thrive.

To that end, I’ve been thinking about how Second Life could be successfully decentralized, without adversely affecting the experience that everyone has come to know and love. I’ve identified key elements of the user experience that would be difficult to decentralize, and possible ways to handle them. First, though, we’ll talk about the basics; how could decentralization even work.

First, LL releases the code for the Second Life server. Now, anyone who wants to can host a Second Life sim/sims of their own on a server. A central repository would keep track of the existing sims, in a vaguely similar fashion to DNS (see The Grid, below). This would allow Second Life to grow without bound, with sims run by a multitude of companies and even home users.

So, how do we keep that Second Life experience without the centralized monolith of Linden Labs?

Economy
First and most importantly, the Second Life economy must be preserved. The economy has become the most crucial element of the experience: the ability to use real money, diluted down to a virtual quantum, to purchase other users’ custom-created content. This breaks down into two sub-problems:

a) Managing the money. The most likely way to do this would be to set up a “bank”, wherein a single host (or several different hosts) manages all of the banking transactions. I’m thinking of basically a system like PayPal, where you buy L$ (“Linden Dollars”, Second Life’s currency) from the bank, or sell L$ back to the bank for real currency. Each SL server would use this central bank system to check a user’s account balance and make withdrawals/deposits, with proper confirmation on the part of the user, naturally. A public/private key system to ensure the user actually sent the confirmation could prevent abuse here, so no worries on that score. The SL bank could even be controlled by Linden Labs, as this would be a lot easier to handle than the entire grid, and would still give them the opportunity to have a strong stake in their creation.
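To make the public/private key idea concrete, here’s a minimal sketch of how a sim could verify that a withdrawal confirmation really came from the user. This uses a textbook toy RSA keypair with absurdly small, insecure numbers purely for illustration; the confirmation string and account name are hypothetical, and real code would use a vetted cryptography library.

```python
# Toy RSA signing/verification sketch (NOT secure; tiny demo key sizes).
import hashlib

# Tiny demonstration keypair.
p, q = 61, 53
n = p * q              # public modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent (known to every sim)
d = pow(e, -1, phi)    # private exponent (known only to the user)

def sign(message: str) -> int:
    # Hash the confirmation, reduce it into the keyspace, sign with d.
    h = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(h, d, n)

def verify(message: str, signature: int) -> bool:
    # The sim/bank checks the signature using only the public key (e, n).
    h = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(signature, e, n) == h

confirmation = "withdraw 250 L$ from account alice"
sig = sign(confirmation)
print(verify(confirmation, sig))             # True
print(verify(confirmation, (sig + 1) % n))   # False: forged signature
```

The point is simply that the bank and the sims never need the private key: anyone can check a confirmation, but only the user can produce one.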

b) Protecting Intellectual Property. This is a tricky problem, and the single hardest element of decentralizing SL. Since a huge portion of the money in SL is traded for users’ creations, there must be a way to prevent them from being stolen. Under a decentralized scheme, when a user rezzes an object on a sim, all the data for that object (textures, sounds, scripts) would necessarily be available to the owner of that sim. The most obvious solution I can find is to keep the object data elsewhere, and have a rezzed object be a pointer to that data. The advantage is that compiled scripts, raw texture data, and sound files stay on a secure server independent of their rezzed location. But where is this mystical server? I see two options here: either the data is on another sim, perhaps the user’s “home sim” (see User Accounts, below), or the data is in a central “asset server” (essentially the way SL works right now). Under the former approach, the client would have to make tons of connections to different servers to get all the data. Under the latter, the asset server would have to be extremely load-tolerant and robust, and all the data would be stored by the same group of people, whose ethical integrity the SL user base would have to trust implicitly. Since both of these flaws already exist in the current Second Life system, however, they are acceptable for the hypothetical exercise we’re attempting here. Also, under either system the sim owner’s own creations could be stored on-sim for lower lag.

One other solution would be to create some DRM scheme that encrypts this data until it reaches the client. Of course, in all of these cases the client could be modified to steal the data. However, here we again reach the fact that these flaws are already inherent in SL, and there’s no easy way around them.
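The “rezzed object as pointer” idea above can be sketched in a few lines. Everything here is hypothetical (the field names, the asset-server hostname, and the in-memory stand-in for the asset store); the point is only that the sim records a reference, while the raw asset bytes are dereferenced by the client from somewhere the sim owner can’t read.

```python
# Sketch: a rezzed object is a pointer; asset data lives off-sim.
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetPointer:
    asset_server: str   # where the real data lives (hypothetical host)
    asset_id: str       # opaque handle, meaningless to the sim owner

# Stand-in for the central asset server's key/value store.
ASSET_STORE = {"tex-1234": b"<raw texture bytes>"}

def rez(pointer: AssetPointer) -> dict:
    """What a sim records when an object is rezzed: just the pointer."""
    return {"position": (128, 128, 25), "asset": pointer}

def client_fetch(pointer: AssetPointer) -> bytes:
    """The client, not the sim, dereferences the pointer."""
    # A real client would open a network connection to pointer.asset_server.
    return ASSET_STORE[pointer.asset_id]

obj = rez(AssetPointer("assets.example.net", "tex-1234"))
print(client_fetch(obj["asset"]))   # b'<raw texture bytes>'
```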

The Grid
The ability to bring up a map and scroll around, or teleport instantly to another part of the world, is an exciting part of SL, and another crucial part of the SL experience. Fortunately, the Internet already has a great system we can build on - DNS and hyperlinking. We simply define two kinds of link: “landmarks” and “neighbors”. Each sim can have four neighbors, and neighbors must mutually agree to be neighbors (for a neighboring to work between sims A and B, A would have to set B as a neighbor and vice versa). The neighboring agreements would be stored in a central server system, modelled on DNS. A few recursive calls to this system and each sim can cache a portion of the overall grid map. Want a private island? Simply don’t neighbor your sim with any others. This creates user-level “peering agreements” that could produce a more logical terrain (snowy areas linked together, etc.) even if the landscape does shift from time to time.
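The mutual-agreement rule is the heart of the neighboring scheme, and it’s simple enough to sketch. The sim names and the in-memory registry below are made up; in practice the declarations would live in the DNS-like central server.

```python
# Sketch: a neighbor link is live only when BOTH sims have declared it.
# Each sim declares up to four desired neighbors.
DECLARED = {
    "snowpeak": {"icefield"},     # snowpeak wants icefield as a neighbor
    "icefield": {"snowpeak"},     # ...and icefield agrees: link is live
    "lonely-isle": set(),         # a private island: declares no one
    "hopeful": {"lonely-isle"},   # one-sided declaration: no link
}

def are_neighbors(a: str, b: str) -> bool:
    """True only when the peering agreement is mutual."""
    return b in DECLARED.get(a, set()) and a in DECLARED.get(b, set())

print(are_neighbors("snowpeak", "icefield"))    # True
print(are_neighbors("hopeful", "lonely-isle"))  # False
```

A private island falls out of the rule for free: a sim that declares nobody can never satisfy the mutual check, no matter who declares it.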

The other kind of link would work just like landmarks in the current SL system. Pretty self-explanatory, except this system would make “click to teleport” objects a necessity, finally.

If a user searches for a sim on the map, the client can grab that sim’s cache of neighbors, and display more of the grid. The client could be configured to keep any amount of that information cached locally, for a more immersive experience.
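That client-side caching amounts to a bounded breadth-first crawl of the neighbor registry, starting from the searched-for sim. The sim names and the size bound below are invented for illustration.

```python
# Sketch: a client expands its local map cache by walking neighbor
# links outward, stopping once it has cached as much as configured.
from collections import deque

NEIGHBORS = {
    "alpha": ["beta", "gamma"],
    "beta": ["alpha", "delta"],
    "gamma": ["alpha"],
    "delta": ["beta"],
}

def cache_grid(start: str, max_sims: int) -> set:
    """Breadth-first walk from `start`, bounded by the cache size."""
    seen, queue = {start}, deque([start])
    while queue and len(seen) < max_sims:
        for n in NEIGHBORS.get(queue.popleft(), []):
            if n not in seen and len(seen) < max_sims:
                seen.add(n)
                queue.append(n)
    return seen

print(sorted(cache_grid("alpha", 3)))   # ['alpha', 'beta', 'gamma']
```

Raising `max_sims` just lets the walk run further out, which is the “keep any amount of that information cached locally” knob.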

User Accounts
There are two ways to handle user accounts: a centralized account server, or a sim-based account system. Under a centralized server, all accounts would be handled by, say, LL. This simplifies the system greatly, and aids in managing the asset server. With “home sims”, you’d have a system similar to Jabber, where user accounts are essentially user@home_sim. I believe the centralized system will work best, given that the asset server system seems to be the most logical way to do things.

Instant Messages
Well, LL is currently planning to re-implement the IM system in Jabber, so we’re pretty much covered there :P

So, in summary, we have a system that uses a centralized server for accounts and user-created assets, as well as a DNS-like neighboring system to create the world map, but grids are controlled by individuals, and hosted by companies just like web servers are now.

Technophobia

I have recently realized why there are so many computer-illiterate people running around. It’s not that people are simply stupid - that’s a grossly judgemental answer that many of my fellow geeks unfortunately arrive at. That’s not it at all, because computer illiteracy reaches into technical fields. I know several computer science professors who simply can’t use any technology introduced in the last five years.

So, what causes this, if not simply “they’re dumb”? Fear. Technology is mysterious; most people, when confronted with something unfamiliar, are uncomfortable. It feels like some delicate piece of magic; if they touch it too hard, it might shatter.

The consequence of this fear is that, once gripped by it, people start assuming they can’t learn anything about computers; it’s too arcane. So, when presented with technical terms or ideas, they stumble over them. If the technophobe stopped to think about the idea they are grappling with, they’d probably figure it out pretty quickly. But their mind won’t do that; computers are “too complicated” for anyone like them to figure out.

An example: USB flash drives. Even most technophobes know what floppy disks are, but when you tell them this is similar, except it connects to that rectangular plug on the side of their computer, they give a blank stare. They can’t comprehend it because it’s new.

A better example: If presented with two products that very clearly do the same thing, but are made by different companies, the technophobe will invariably ask “what’s the difference between these two?” If you showed them a Dirt Devil and a Hoover, they would have no such problem, but computers are mysterious, afforded a special class of untouchability.

So, to all you technophobes out there: Stop being afraid of the computer. I promise it won’t bite. Engage your mind and really listen when computer jargon floats by. Make intuitive leaps; even if they’re wrong, they’ll eventually point you in the right direction.

Programming: The theory

One of my biggest problems with the IT community, both in amateur programmers and prospective employers, is the following question: “So, what programming languages do you know?” This implies that learning a language is an extremely difficult task, and collecting languages like trophies is somehow a worthy pursuit.

A programming language is a tool. A skilled craftsman isn’t good at her trade because she knows how to use a given set of tools; anyone can learn that. Rather, true skill comes from knowing how to apply the tools. The fundamental concepts behind programming are the skills on which we should be focusing.

This applies to academia as well. The language you use to teach students, especially the first language they encounter, is important. I’m not about to advocate “teaching languages” like Pascal, though. I think it’s important to choose a real-world language, with all the pitfalls and caveats of a real-world language, as a student’s first language. At the same time, it should be a language with the features available to demonstrate all the fundamental concepts in programming. A language that doesn’t support recursion would be a Bad Choice, for example.

So, when someone (a peer or a hopeful programmer-to-be) asks me “what languages do you know?”, I won’t respond “Well, I know C, C++, Java, perl, php, xhtml/xml/css (if you count those), lisp, prolog, LotusScript, Javascript, LSL…” etc. Instead, I’ll say “I’ve used a number of languages, but the key thing is that I know how to learn any language.” When an employer asks, I suppose I’ll have to say “Well, I know @languages…”. Then, though, I might add “…but I consider the fundamental concepts behind programming languages to be more important, because mastering those means I can learn to get around in any language given a week or two of study.”

In summary: Learning a programming language is trivial, once you know the fundamental concepts of programming.