ZFS xattr tuning on Linux

It took me a while to figure out why my Linux ZFS disk was so slow, but there's an easy fix.

I moved disks over from OpenSolaris b134 to ZFSOnLinux directly. The new NAS had just awful performance over Samba and rsync, especially with large folders. I did a bunch of tracing and watched the xattr requests for POSIX ACLs eat tons of time.

After reading up on it, what I found out is this: ZFS on Linux stores xattrs in a hidden folder, as regular files! This is very slow, requiring multiple seeks per xattr, and it doesn't appear to cache very well either.

The fix is to store the xattr data in the inodes instead. And given the performance impact, I can't tell why this isn't the default:
zfs set xattr=sa (pool)
If you use ZfsOnLinux, you should probably go do this now. It's that big a difference.
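
For example (a quick sketch; "tank" is a stand-in for your pool name, and the setting only applies to xattrs written after the change, so existing files keep the old layout until they're rewritten):

zfs get xattr tank          # "on" or "dir" is the slow directory-based default
sudo zfs set xattr=sa tank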

(Also, some bugs were reported early on with "xattr=sa" but they appear to be fixed as of 0.6.2.)

Performance is quite amazing after this change, and I recommend it. I hope it's the default someday.

Also, I installed Samba 4, and this new version can store xattr data in a "tdb" (Samba's "trivial database"). I didn't need to do this after the above fix.

Too many Chargers

Quick roster of things that charge:

  • I have 5 laptops and only 2 of them are charged at all.
  • I have two Kindles that have been out of batteries for 6 months.
  • We share chargers in the kitchen because we have too many iOS devices.
  • I got a Fitbit and an Up bracelet, and I've charged them each twice, ever. I'm also confused about when to take them off for the 8 hours they need to charge: if I want both sleep and fitness tracking, I can't ever charge!

I hope this iteration of smart watches, devices, etc. has some clever charging solutions too, because I am out of plugs and time.

Charging all these new devices is really a first-order problem, and whoever solves it will win the market. It shouldn't be an afterthought, or these things will sit in drawers like all the old devices do.

Spying is Easier on the Non-Stupid Network

Many writers have attributed the early success of the Internet to its being a "stupid" network. You could build all the smarts you wanted on a single computer, a "smart" node, and then you'd have a new application. If you had to re-program all the Internet routers for each new protocol, not much would get done, but on the Internet this didn't need to happen.

Recently, the major IM providers stopped doing point-to-point communications. Instead, they started logging all your traffic to a server, apparently to allow you to use multiple devices. So if you get up from your desktop PC in the middle of a conversation and pick up your cell phone a half hour later, you can continue where you left off, because the conversation, and the stuff your friend typed while you weren't looking at a screen, now lives on a server somewhere. Similarly, Dropbox is so useful because your data is on a server somewhere, too.

Ten years ago, many of the IM clients allowed you to make a connection directly to another person and chat, without that traffic going to a server at all. For the most part this doesn't happen anymore.

Perfect Forward Secrecy

What's interesting about this change is that there are really quite good ways to encrypt point-to-point communications (with "perfect forward secrecy," achieved using ephemeral Diffie-Hellman or its elliptic-curve equivalent), ensuring nobody is snooping in the middle. It's implicit in these methods that you throw away the actual key you used after the communication is done, and that's really important.

Even if you capture all the encrypted traffic from one of these conversations, it's impossible to recover a key later that lets you see what was communicated. You have to actually go get the computer out of someone's bedroom to see what was said! Not so much when there's a server in the middle, even if the data on that server is encrypted. Of course everyone tells you they encrypt your cloud storage, but it's not the same as a secure communication channel between two smart nodes. These are totally different grades of encryption.

As soon as you decide you must persist a key to get at your data, you also have a key that can be subpoenaed or recovered, so a packet capture of the transmission can later be decoded. If two smart clients agree on a key that is discarded after the information is transferred, no such recovery is possible.

And this fundamentally is the risk the cloud poses to secure communications. If someone's logging encrypted communications and has a legal framework to recover persisted keys, the only way around this is to make point-to-point communications a lot more prevalent.

You could make a Snapchat that was entirely and profoundly secure from snooping. You can't very easily make a Facebook that is.

Linux ZFS move (OpenSolaris all gone)

I moved my big ZFS array from a working OpenSolaris system to ZFS on Linux!

Previously I was running Linux in a VirtualBox VM hosted by OpenSolaris, mounting ZFS over NFS, which turns out to be very slow. (VirtualBox seems kind of slow at network I/O. KVM is the more modern way to do this, and I suspect it's a better way.)

I would guess that the Linux port of ZFS is somewhat slower, but it's totally great to have all my stuff on Linux again, and for certain, the apps I'm running on Linux are quite a bit faster now.

This new machine is the gen8 microserver, which in practice is quite a bit faster than my old 3GHz Xeon. I'm booting it from an ext4 SSD software RAID-1 (md), and then using the same LSI card that came with my Sans Digital TR8x for the external ZFS array.

Other than thinking "if it works don't fix it", the only thing that held me back from this move was the idea that NFSv4 ACLs (i.e. NTFS ACLs you copied from Windows to ZFS) would somehow make for a trainwreck when I moved away from the built-in Solaris CIFS service to Samba. I felt "locked in" somehow by setting up all those ACL bits. But no such catastrophe has occurred - a few chown's and an occasional setfacl and it's fine. The ACLs are inherited in ZFS (and there's a Samba option for this too), so most of it just continues to work.

Samba is probably faster too, because it supports SMB2 and the built-in CIFS service does not. Also, if the ACL issue bugs you, you can even compile Samba with support for ZFS-style ACLs; I didn't even need to do this.

A Solaris-like auto-snapshot service is available with one git checkout: https://github.com/zfsonlinux/zfs-auto-snapshot. Everything else I was running is tons easier to configure on Linux anyway.

Maintaining one server OS is going to save me a lot of time, and for that alone, it was worth the move.

The Few can now Watch the Many: 1984 vs 2014 and Exponential Growth in Surveillance Tech

My friends know lots of things about topics like Machine Learning and Big Data. They can detect phrases in speech, objects and faces in billions of pictures, or sort trillions of numbers in a few minutes.

And not surprisingly, these same people could do so much less just 10 years ago. Machine Learning and Big Data have been on such an amazing curve that these whole fields have been reinvented very recently. The technology is reaching a totally new scale and range of capabilities.

Think for a minute about that Orwell book, 1984. It's suddenly a bestseller again.

Half of us, Spies?


What would it have taken to build the surveillance state in our 1984? Well, you would have had to hire half the population just to spy on the other half. Without technology to help, you would need massive quantities of human attention to do surveillance. It would be an immense cost and massive operation. That's one of several reasons it never happened.

But today, 30 years later, it's conceivable to filter all voice communication through very smart algorithms, to run face detection at airports, to log every license plate that drives by many thousands of locations. Easy stuff now.

If the NSA is paying $600 million to Amazon or $860 million to build a new datacenter, those numbers are pretty small, even tiny, compared to hiring just 10,000 people to do the same work manually.

Let me say that again. Thirty years ago, it would take 100 million people to keep track of what everyone was saying and doing. And this year, it's 10,000 times cheaper. (I even looked at the license-plate reading cameras and thought of buying one for my house. It's under $1000 for a complete system.)

So what if, ten years from now, you could just buy today's NSA cloud computing infrastructure for a million dollars?

Ultimately, it doesn't matter how much you cut the NSA budget. Technology is getting cheaper really fast. The algorithms are getting better, while the storage is getting lots cheaper.

And I've started to think that this exponential growth is the story that isn't getting written. We know from the tech boom that technology centralizes control. A few people can write code that makes billions of dollars in just a few years. That's unprecedented in history, kind of like this "new" surveillance issue.

Even knowing that, there are still two kinds of people in the world: the 0.1% who are watching this exponential growth in machine technologies and understand the implications of the scale and centralization, and everyone else, who thinks maybe the NSA/Snowden thing is just some questionable policy and nothing new in the world other than perhaps an abuse of power.

People are completely and utterly unprepared for this particular exponential growth curve. Surveillance is going to be millions of times cheaper than it ever was before.

But most of the world's population has no way to reason about this, or to understand it. There is no precedent for this in history. 

Microserver gen8: a good (great?) home server

[EDIT: OpenIndiana-discuss concludes that the Broadcom network chip is not supported on Solaris, but there is some interest. Also, to use all 4 disk bays for storage, you may have to boot from the on-motherboard microSD slot or the optical drive bay. I'm using Linux and am very happy with it.]

HP made an amazing new Microserver, the gen8.

Intel. Haswell. Fanless (the CPU at least.) And it's as fast as my old 3GHz one.

Two NICs.

ECC RAM.

It's tiny and holds 4 disks (plus a tiny space for a 5th on top.)
I just turned off the built-in soft RAID.

This is a really good start for a home server. Recommend it.

A couple notes to get ZfsOnLinux working on CentOS 6.4

I've been trying out ZfsOnLinux, which seems pretty stable according to a couple of friends.

In addition to the usual stuff, I had to make two symlinks to get DKMS to work on CentOS 6.4.

First, the post-install script DKMS uses points to /usr/lib/ and not /usr/lib64/ (this should probably get fixed some other way):

ln -s /usr/lib64/dkms /usr/lib/dkms

Also, the installed kernel source directory uses the fully-qualified version string, while the build looks for the abbreviated one:

ln -s /usr/src/kernels/2.6.32-358.14.1.el6.x86_64 /usr/src/kernels/2.6.32-358.el6.x86_64

There is probably some reason for all this, or it will be fixed in a week. But anyway, it might save you time this week. It compiles the kernel module for me now.
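
If you want a quick sanity check that the module actually built and loads (nothing ZFS-specific here, just ordinary DKMS and module commands):

dkms status                 # should list spl and zfs as installed for your kernel
sudo modprobe zfs
lsmod | grep zfs            # confirm it's loaded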

Toddler Usability, Nouns vs. Verbs

Watching our 2-year-old navigate the iPad has been amazing. She has a huge capacity to figure it out, play her favorite songs, run her favorite apps, etc.

Of course she does a lot better when there are pictures. An icon, no matter how small, gives her tons more benefit than just a text label.

But it's about the Nouns

But the biggest concept I've learned is the idea of nouns vs. verbs.

When language starts developing, you see thousands of nouns appear, before verbs really show up at all.

And even for adults, the best interfaces are made of simple nouns. Apps are nouns. One of the big splits between an early "technology" for "professionals" and "consumer apps" seems to happen at the point a UI gets noun-ized. A photo, a song, a file.

It's not a "workflow" or a "write down these steps." It's, that app, or, that icon. It's when you become a noun that your software takes a leap in usability.

In real life, our most important tools do things, so software designers try to emulate that. But the best-loved tools are also nouns. And those tools, those nouns, can only do a few things. You start a sentence with a noun, and it limits which verbs apply. So your brain is just faster this way.

It's what toddlers do, and it's the same with people bigger than two years old.

Your users aren't so smart that they want a workflow or a wizard or an action.

Instead, you should try to start with nouns.

On Crossfading Properly

Since Adobe Revel just replicated the thing Google+ did, and they're both wrong, I thought I would say this:

If you have two images over a black background and you want to crossfade between them, please DO NOT fade out the first one as you fade in the second.

(Exception: you can fade the first one out, quickly, at the very end of your animation, if the aspect ratios are different.)

The reason is this: in the middle of your animation, if you do it the Google+/Adobe way, you will have:
 - image A at 50%
 - image B at 50%
=============
Total opacity: 75%
(B at 50% over A at 50% covers only 0.5 + 0.5 * (1 - 0.5) = 75% of the black background)

And you didn't really mean to have 25% of the background showing through there, did you?

Crossfading between two images this way will leave you with a 25% darker image in the middle of your very fast crossfade, and that's very distracting and looks bad.

You don't want intensity to change in a crossfade, unless it's very slow and fades to black on purpose.

Windows 8

I bought a Lenovo Yoga to try out Windows 8. The good news is that screens are getting somewhat better on Windows laptops. The Yoga even has IPS, so it has contrast and the colors don't look insane when you look at it from a slight angle.

Many other touchscreen laptops have screens that bounce around when you touch them, which is a bad thing for a touchscreen to do. The Yoga is pretty solid.

I really disliked everything about the new OS, but now I find that I'm adjusting to most of it, and touch on the desktop is the unexpected HUGE benefit.

Overall the "new" parts of the OS feel very rough. In a way, Windows 8 gives me the ability to goof off and to work on the same machine (which feels more like working than using an iPad.) It's not going to be my main machine, though, because it doesn't deliver on work or play as well as the dedicated machines do.

What's Surprisingly Good

IE on a touchscreen! Because two-finger zoom on a desktop browser is just incredible. Of course, I don't use IE because Chrome is actually faster for everything other than zooming, and Chrome doesn't zoom yet. In my opinion, touch without zoom is not nearly as much fun.

Scrolling with your fingers in regular apps, when it works. Sketchup is pretty fun with touch especially.

What's Not Great 

Touch almost works on the desktop, but you really can't click anything that's of a normal size. I can't close a tab in Chrome, can't close a Window, and I can't actually hit most buttons in a dialog box. This may be the accuracy of this model's touch screen, but it's very frustrating - I can use remote desktop on an iPad more easily than this, probably because it allows zooming when I really want to click a button.

Because Microsoft didn't spend enough effort fixing the desktop, a majority of the UI is inherited from Windows 7 (like the entire Control Panel and Explorer), which is made up of tiny close-together links, so nothing at all is usable with a touchscreen. There is apparently no gesture to zoom in to click on a thing you really want to, so you're absolutely stuck using the trackpad to click on small objects, which is almost everything.

The ugly desktop: the window chrome is huge and the visual design is completely lacking. It's like looking at a PowerPoint wireframe of a UI, and there's less customization than ever before. For instance, there is a narrow range of color settings that makes the taskbar look decent at all, and while they give you a couple of color choices by default, it's very hard to see at a glance what's going on: which shade of gray is the active window, and so on. This set of choices makes the desktop UI tiring to use.

The two-headed hydra of "Modern" vs the traditional OS: the launcher has never previously needed full-screen real estate on the desktop, and it pretty obviously doesn't deserve it. This UI is very half-baked, and it misses on some basic usability. Microsoft could have built a Quicksilver-style overlay view in the desktop mode that allows launching the 4 most popular Modern apps, while keeping the benefits of the desktop UI.

In case this point hasn't been articulated by every single reviewer of the OS so far, I'll add my two cents. On the desktop I always have tons of state in my head, and I usually launch an app to add to it, not to replace it. When I haven't used my PC all day and I go to look something up, that is the time I want a modal UI that asks which task I meant to do, so I don't get distracted admiring all the old things I was reading yesterday. But other than this single case, I want the PC to be full of overlapping windows and tons of complexity.

The quality of the built-in Modern apps is mostly poor. The finance app and a few others are pretty nice, but most of them are too minimal, and all of them are too slow. On pretty much the fastest laptop I can buy, most of these apps take >5 seconds to launch. Scrolling is often chunky, not even animated. The email app is flaky. I'm not used to the gestures for sharing content from them, so I won't use them very much. I have existing ways to share content on the desktop, and I don't want new ones.

There are extra rough edges, like Modern apps switching in to tell me they need to update while I'm trying to do work on the desktop.

But overall, I'm surprised that I really like touch in a desktop environment. I almost bought a machine with no touchscreen, but touch is more fun than you'd expect.

I wish Microsoft had spent more time making touch work on the desktop, rather than all the effort they put everywhere else. Touch on the desktop is actually a killer feature (not a flashy one) and it deserves more of their effort in the future. Adding multiple ways to do the same tasks (but in a more limited way) is just not useful.

If Microsoft spent all their effort in the next year making touch magical and seamlessly integrating the best of their new apps a la carte with the Windows desktop, they would have a product people would love, at least a little bit.

iTunes, by spamassassin

I found a copy of the iTunes Terms and Conditions and Privacy Policy in my spam folder.

Funny.

Here's what Spamassassin has to say about it:


X-Spam-Report: 
*  2.2 INVESTMENT_ADVICE BODY: Message mentions investment advice
*  2.4 TVD_PH_BODY_ACCOUNTS_PRE TVD_PH_BODY_ACCOUNTS_PRE
*  2.1 ADVANCE_FEE_4_NEW Appears to be advance fee fraud (Nigerian 419)
*  3.5 ADVANCE_FEE_3_NEW Appears to be advance fee fraud (Nigerian 419)
*  0.0 FORM_FRAUD_5 Fill a form and many fraud phrases


How Daylight Savings Time made me Tired for Two Weeks

Along with everyone else, we stopped observing Daylight Savings Time a couple weeks ago. Lorna and I adjusted all our clocks, at least the ones that didn't adjust automatically.

Of course that switch makes everyone tired for a few days. But it's been worse this year, and I didn't understand why until recently.

It has something to do with our toddler.

Because in that way that babies are, the baby didn't notice. She just doesn't read most words or notice what time it is.

Now, a couple weeks after the "switch," I realized that I'm living in a house with a person who basically ignores what time the clock says. Baby Girl is going to bed and waking up at the same times as before, that is, the same solar times. We are going to bed an hour later, and then waking up when she does.

So what's happening is: we're missing about an hour of sleep every night. It took me a long time to figure that out, but that's why I've been tired for two weeks.

It seems so obvious that a person who can't read the clock behaves differently than one who can. We must be really attached to going to bed at a certain time, and not when we're tired, right?

The temperature

Around the same time, my nest updated, and it seems to regard the numbers we set with a little more "flexibility" than before. So 72 doesn't seem to be 72 anymore. It's 71 or 73, or whatever. I can't really explain, just that I used to dial in particular magic numbers, and now they seem to overshoot sometimes. 

And in nest's defense, I do think nest deserves to use numbers, because it can't adjust the temperature instantly. You need a way to say what you mean for the future.

Photo editing

And back in the old days when we were designing Picasa, we took the numbers off of a lot of the sliders, so you would look at what you were doing to the picture, rather than the number. This was really uncommon back then. In photo editing, you can actually make an argument that the numbers matter. What if you want to do the same thing to 5 photos, but you don't want to copy & paste the whole set of effects? Photoshop jocks really do use the numbers, and they like to see them, and then copy and paste and tell people about them.

But we took them off anyway, and we tried to make the sliders work right instead. So I guess what I'm about to say is a thing I've been thinking for a while.

That Analog Show

Mostly, I've been thinking about the next f.lux UI, and how much to show the color temperature number ("5000K!") and how important it is, or isn't. Maybe it isn't. I made it really big in one sketch and then I took it off completely in another.

And you wind up realizing that we use numbers when we need to coordinate. Like, "I'll meet you at 2PM." Or to make things reproducible ("Set the Photoshop levels to 0, 220 with a gamma of 1.2.") Or even when there's a delay, like "Make the temperature 72 sometime soon. I'm too cold." Or maybe, if you have to explain something to someone over email or on the phone.

But if you don't have a reason like this, you are...

In love with numbers

One big part of simplifying our interfaces is to make them more analog, more relative, and more human. It doesn't mean that we should give up the text that lets us coordinate and communicate.

But maybe we're ignoring our own best judgment, like when to go to bed, because we have this clock that says, "It's only 10:30!" And in these cases, it might be advisable to hide the numbers temporarily, unless you need them for some important reason. Like if you have an appointment you have to drive to, or if you want to talk about the details of your Photoshop filter.

In a way, wall clock time is just a technology. It's a pretty good one, suitable for coordinating activities and knowing whether a store is open without driving to it. But with modern devices, you could pretty much replace the entire technology of time with realtime communications. It's a neat exercise to think through the implications of removing these numbers. Does it make it harder or easier? How much reinvention do you need to replace those numbers with something meaningful and intuitive?

So it's a challenge sometimes. But what if we did the experiment to dial numbers back a level in our interfaces, and then we figured out how to make things work without them? It's actually important not to lose the functionality that the numbers give, but to replace it intelligently and in an ambient way.

Maybe that's a challenge for you, too. 

A Day with Philips Hue

I got a Philips Hue on the first day it was available at the Apple Store and had f.lux running 3 hours later. Here are my notes.

Some notes about Hue in general:
  1. Color is great, whites look good
  2. Bulbs are efficient (8.5W)
  3. Brightness is good, useful as a regular bulb.
Negatives:
  1. The controller rate-limits updates from the software. So if you wire up a slider to send messages at 10fps, it won't work. You can send 2-3 messages per second, and then it does some interpolation on its own, which might not give the "look" you want.
  2. Powering down a bulb (like with a lightswitch) and restarting loses all the bulb's state! It depends on a client to restore it. The controller bridge really should do this automatically.
  3. Range isn't so great -- Zigbee loses to my DECT phones and baby monitor and wireless network. So you really need a "mesh" of nodes, not two bulbs 50 feet away.
  4. Minor: Bulbs have a front-facing distribution (put out less light to the side and back than the L-Prize bulb.)
Here are some notes about the interface to the Hue controller (I didn't look into the Zigbee side, just sniffed the iOS client.)

Discovery

Hue uses SSDP (like Roku and Sonos do). So, discovery is pretty standard. You can also walk your class-C network and look for a webserver responding to /description.xml, which is cheating (but it works.)
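
The cheating version looks something like this (a rough sketch; 192.168.1.x is a placeholder for your subnet, and the exact string to grep for in description.xml may vary):

for i in $(seq 1 254); do
  curl -s -m 1 http://192.168.1.$i/description.xml | grep -qi "hue" && echo "bridge at 192.168.1.$i"
done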

Authentication

Hue relies on client-generated secrets. Each client generates a secret (likely one of the device unique identifiers), and then uses that to talk to the hub.

So you'll see an http request like this:

GET /api/{secret}/lights HTTP/1.1

And the bridge will respond "unauthorized user." The client will then start polling to make a user:

POST /api
{"username": "{secret}", "devicetype":"{name}"}

And Hue will respond (in JSON):
"link button not pressed"

Once the button is pressed:
"success"

Basic Controller

The current iOS client appears to poll /api/{secret} once per second, which gives the whole dump of settings. Most of these seem writable.

To change a light you do a PUT to /api/{secret}/lights/#/state (not POST).

(where # is a number you found from the main lights feed. On the default kit you'll have lights 1-3 available.)

Contents are JSON! For single brightness and color settings (not groups or scenes) you might see something like this:

{"bri": 128, "ct": 200, "on": true}

Brightness appears to go up to 255; color temperature runs from 500 down to 154 (these look like mireds, i.e. 1,000,000/K), which corresponds roughly to 2100K to 5800K and is nonlinear in Kelvin.

Or you can send xy coordinates to control the bulb in color (use your handy sRGB to XYZ matrix):

{"bri": 255, "xy":[0.4, 0.4], "on": true}

You must rate limit your client in the range of 2-3 updates per second, or the hub will send you a 503 and throw away your changes. It appears to be "worth it" to stay in this range, or the rate limits get a little more severe.

Advanced Controller [TBD]

The software has the ability to define groups and scenes. Have not investigated yet.


Just in case you want to avoid updating an iOS app...

If you're using a jailbroken phone and you want to avoid updating an iOS app for any reason:

1. Log in via ssh and copy the sandboxed version of the app to /Applications.
2. Uninstall the sandboxed version.
3. Reboot/Respring to make Springboard notice the version in /Applications.
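
In shell terms it's roughly this (a sketch only; the UUID, app name, and paths are placeholders, and the sandboxed-app location varies by iOS version):

ssh root@my-iphone
cp -r "/var/mobile/Applications/SOME-UUID/MyApp.app" /Applications/
# remove the sandboxed copy (or uninstall it from the home screen), fix ownership if needed, then refresh SpringBoard:
uicache
killall SpringBoard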

It's all about the Audience

It was 2004, and we were in a meeting at Google debating if Picasaweb would have comments in the first version. (There was actually a contingent saying that maybe this wasn't needed.)

And I remember this moment like yesterday. Lorna, our PM (who I married two years later, so I'm not biased at all) piped up to say, "You know, Flickr gives people who look at your pictures something fun to do. It's all about the audience." Blink.

This view won several years later, and it's one of the biggest network effects online today. You don't make photo sharing sites that make it easy to share photos--it doesn't matter. You make photo sharing sites that give you an audience for sharing. That's Facebook, and it's the reason it is so very hard to compete with Facebook right now. Best audience, and the rest doesn't matter.

I keep telling people if Facebook made it nightmarishly hard to post a photo, 10x harder than today, it actually wouldn't matter much. People would figure out how and keep doing it, because the audience is there, and they like being there. They look at me like I'm insane for a few seconds, and then it usually clicks.

Moral of the story: when you "focus on the user", it's important to know who the user actually is.

iPhone vs. S3

Just a little spreadsheet for your day.

8GB iPhone: $99
16GB iPhone: $199 (+$12.50/GB)
32GB iPhone: $299 (+$6.25/GB)
64GB iPhone: $399 (+$3.12/GB)

Dropbox: $2/GB/year
Amazon S3: $1.25/GB/year

Cloud storage looks cheap when you compare it to storage in your pocket.

Sync is the new lock-in

I tried to run Firefox, instead of Chrome, yesterday.

But it felt so strange, because Firefox had none of my bookmarks, bookmarklets, or passwords, and I just couldn't remember what I was going to do with it.

And in that moment I realized how much I depend on Chrome's sync feature. Chrome owns my browsing because of its fantastic ability to sync all this stuff between all my computers, PC, Mac, everywhere.

Dropbox contains tons of my important files, because it does the same thing.

And I've stopped buying music on Amazon because iCloud is so good at getting music to all my devices now.

Syncing is the bridge between the cloud and the PC and mobile devices. It saves dozens of steps, keeping everything merged intelligently, and even combining metadata automatically.

But of the three examples, only Dropbox provides a way to get to my data from anywhere, through a regular web browser, and with a good API that works with other services.

Google's Data Liberation page (which has a fantastic mission) has PC-era directions for moving your bookmarks to a new browser. You're supposed to sit at your computer and push an export button! And the instructions provided for viewing your bookmarks in Google Docs haven't worked for over a year. Google has moved Chrome bookmarks somewhere else now, and while users ask for a solution, none is forthcoming.

This "online viewing" should be a much bigger priority for Google. What if I mostly want to use Chrome, but I want to own my data and occasionally use it somewhere else?

Google has a separate Delicious-like "bookmarks" service that appears to be completely different from Chrome's bookmarks. And while I really like social bookmarking sites too, I use them for a different purpose. In the browser, I use lots of bookmarklets and folders, and I really care about what order my commonly accessed sites appear in.

The New Lock-In

Of course, if you think about it, while you're sitting on the couch with your iPad, there is no way to get the Chrome bookmarks exported from your Windows machine to show up in Mobile Safari. Part of this is Apple's fault, of course.

On the iPad, I can get to my Dropbox, but I can't get to my bookmarks.

And I think there's at least a partial solution in Dropbox: if you care about your users, every proprietary sync feature should have a live webpage, that you can access from anywhere. (And you should work hard at cross-platform support, too.)

Otherwise, like Apple and Google do today, you have created a new form of lock-in. And it's almost worse than the old kind.

In the old way, files were mostly idle on disk, and data was sort of stuck in a hard-to-parse format on a server.

But it is worse with data that changes all the time, because "exporting" isn't good enough. You can't export a moving target.

If you want to access your recent Chrome Bookmarks, you really need to write your own sync utility, or at least periodically export from Chrome.

I want Google to do the right thing, and I want to use the sync feature in Chrome, which is amazing. But maybe I will need to use Xmarks instead. This feels complicated, hard to set up, and probably more prone to breaking than Google's solution, so I'm trying not to do it. But it may be the only solution available today.

Data that changes frequently is the hardest to "liberate", but it is also data that is very important to users. It is critical to get this right. We should insist that these amazingly-convenient sync features don't become the next form of vendor lock-in.

Google, at least give me a webpage with my Chrome bookmarks on it?

Always running, or at least running fast

There's something very compelling about writing a little script and putting it online. Because even if someone comes by once every three days and tries it out, there it is! Online always. Software like this is alive in a way that a binary on a PC is not.

It's been fun shipping our f.lux app to the desktop, because it's always on, and even though it does just a small thing, doing that thing without using a lot of resources, while running all the time, is sort of neat.

Running all the time hasn't been true of the "app" on the desktop or device. Since we've historically tried to save RAM, launch times have remained inconvenient and excessive. Excel now takes 30 seconds. Visual Studio can take a minute. Photoshop, forever. Sure, we have these annoying tray icon apps with their inconvenient UIs, but the PC is getting slower for doing real work.

Dropbox is always on, and instantly useful. Gmail and Facebook are very fast, because they're already "loaded" into a server somewhere.

Excel XP from ten years ago launches in a half second. This year, Excel takes 30 seconds.

I'm not sure if Apple's quest, with OSX Lion, to ignore whether or not apps have launched yet is the right approach. Apple now hides the little "light" under each icon that says if an app is running, and it tries to restore some application state on reboot. This is good, but it's still too slow.

Xcode now takes forever to launch, and so does iTunes. Launch times are getting worse, in a world where that really makes no sense. People's tolerance for 30-second launches is lower, since they're used to mobile OSes, which are fast because the apps are simpler. And people seem to be more distractible: when iTunes takes 30 seconds to launch, most people will have forgotten the song they wanted to hear and will have checked their Facebook messages instead.

You can't ignore the engineering around making launch times small, and complexity makes for sloppiness. A launch that takes a full second is noticeable, and probably most apps should strive for 500ms, at the most. 30 seconds is completely absurd and negligent.

Superfetch, Readyboost, etc., all aren't good enough. Important stuff needs to be loaded into RAM. How do we launch in one seek, rather than 10,000? SSD can't fix everything.

This problem, making apps launch fast, is 100 times as important as making Windows boot fast, and it's 100 times as complex. Hope the next versions of the OSes get to fix this, or application writers start paying attention to it.

Apps should launch instantly, and more real services (not launcher icons) should be loaded automatically, so the launch isn't there at all.

The perception of productivity on the desktop will wane quickly if this particular bloaty trend isn't fixed.

More thoughts on Google+ (after using it for a few days)

My Google+ experience is in that exciting time when new people are coming in every day. And the extended Google network is also very compelling. Most of the people on Google+ today were invited by Googlers, so it's mostly smart people saying interesting things. In some ways, I'm amazed by all the people I don't know.

Reshare and Publicity

Many of my earlier comments about discovery/serendipity have been dodged by one thing I didn't anticipate. A huge portion of the content in Google+ is public, like Twitter. This means that there is a public sphere and a very private one. Nice way for Google to be in the game. For me, it's becoming hard to find the private content now (and this is not necessarily good).

Re-share also has some significant benefits. I'm not a fan of the re-share presentation (it should be more compact and integrated), but I get the value of it. To take a public conversation into a small group and talk about it makes a ton of sense. Already, some of the huge public threads have become incredibly unwieldy. I don't care about 500 public comments, but more about what my friends have to say.

The Plaxo Curve

When Plaxo first crawled our Outlook address books back in 2002 and encouraged everyone to invite all their contacts in one go, it was novel, because nothing had ever spread so fast. I recall it took AOL's Instant Messenger nearly 12 years to grow to 90 people per buddy list. But after Plaxo, it took this next generation of social networks not long at all. (My Google+ account has this many people, already.) Since then, it's become necessary for each new service to plumb your email contacts and IM contacts and make friend-of-a-friend suggestions, in order to grow quickly enough.

If you think of it that way, the invite policy of social software results directly in a growth curve. Tagged.com took it too far, spamming friends without even asking your permission. Google's prior effort, Buzz, mostly confused users by not giving enough control. Facebook takes it right up to the line and stays on the side of user-controlled politeness, with a great Friend Finder that can login to a bunch of services.

But especially given the limited number of people on the service, Google's "Plaxo number" is very high indeed. People I don't know are adding me, sort of like Twitter. And I already have >100 people in my circles, despite being on the service for only a few days. Google isn't logging into as many services as Facebook, but they're making the contacts they have count for a lot. Never has adding people been quite so fun.

This bodes well for growth, and with Google's sorting tools, that growth is manageable too.

Invites

Based on promises that you would be able to share any entry by email, I uploaded a video and shared it with 20 people who don't have Google+ accounts yet. They all got "permission denied" errors and emailed me about them. So it goes with a field test, I guess.

Integration and Speed

The integration with google.com navigation is incredibly compelling. Every search result tells me when Google+ has new stuff for me. I look at that top bar on Google more times per day than my email. Seriously, this is almost as good for Google as building Google+ directly into the browser.

A related bit: Google+ is incredibly fast to load, so I'm never afraid to pop up the notification window or click through. For me, Facebook's mobile app in particular has been taking >10 seconds per pageview, which makes me hesitate before clicking through from an email on my phone.

Limited sharing a good match for photos

My usage of Picasaweb has been declining, but I expect Google+ to bring it back. Sharing by email was laborious, and the previous "groups" interface was daunting. It was becoming too easy to post a few photos on Facebook instead.

That said, I believe the Circles UI doesn't scale as well as it could.

I have too many people in the "friends" bucket, just like Facebook. I started adding "Family" to "Friends" also, halfway through, so I could just share with one group most of the time. Understanding which users are in multiple groups is often confusing. Knowing who you're sharing with could be better, but today it's not so clear: you have to be really precise about managing people in order for the experience not to devolve into something exactly like Facebook.

API

There's mention of an upcoming API. I can't wait.

Circles and Serendipity (Google+ thoughts)

After a few hours using it, I think Google has done a decent job with Google+. It's a gigantic improvement over Buzz.

I also believe they have over-thought the edge case "I'm in charge of my privacy", and perhaps killed some of the serendipity that is the real magic behind social software.

Let's start with some stories.


I'm a new parent

We recently had a baby, and we talk (a bit) about being new parents on Twitter and post pictures on Facebook. But what's amazing is the half-dozen people we both used to work with who've become "parent friends" due to this change in our lives, because of social media, because of the messiness of it, and because of the serendipity of it.

I wonder if in the Google+ world, we would have ever discovered these connections. Using the "Circles" UI to its fullest, I'd have a "parent friends" circle, and I'd not show any evidence that I've become a parent to my nerd friends. Why would they necessarily care? I don't really know for sure that they care, so I probably wouldn't pester them. And that's the fundamental problem with Google+.

And, Once I Broke my Foot

I broke my foot two years ago, and through a Twitter post that wound up in Friendfeed, learned that a former colleague had broken his arm at the same time. It became extra-interesting to post updates because I found out someone was going through a similar set of challenges at the same time.

Is it really like the real world?

The idea that "circles" represent the world exactly like real life seems to be failing me. In real life, people notice I have a baby carrier and had a foot brace, but online, these things don't happen. 

Also, in the real world, I meet new people sort of by accident, and this doesn't happen as much online. The best approximation is a person who makes an interesting comment, and Google+ does this as well as anything else online.

But in some ways, that's why the messy, serendipitous social media "oversharing" works at all. Because it approximates context, not because it excludes it.

Much like eBay and Craigslist created markets that barely existed offline, I think the most magical experiences online are the unexpected and accidental. If you put a thing online that all your real-life friends think is weird, you'll find a million people online who actually agree with you. And if you had to name them all first, you couldn't do it.

I can't remember ever experiencing this sort of serendipity over email. But most certainly it happens every day on Twitter, and occasionally it happens on Facebook.

In some ways, Google+ is trying to do a better, more social email, almost half Facebook and half an awesome address book, and I grant there's a place for that. But some aspect of it feels like work, not play. Perfect for a company that's a bit secretive but has a rich internal culture, but not so applicable to real life.

And so I worry that the lack of magic will hurt it in the end.


UI thoughts

My overwhelming feeling is that the UI that you use everyday in Google+ has tons of friction.

For instance, maybe I'll spend 5 minutes ever in my life adding friends to circles, and it's insanely cool and fun. Fantastic. Magic.

But then, there's this UI you use every day, the one that creates a "To line" email-style interface for making a post. I don't like it at all. It's cumbersome and a lot of work.

This part of the interface could be amazing, and instead, it's more boring than email. It feels transient, and it's on the wrong part of the page. Yes, if I click the "Friends" container before making my post it sort of auto-fills for me (okay!) but it doesn't show who I've left out, who I'm including, or anything like that.

But basically, it's too opaque. You get a little blue tile that says "Friends" on it, and that's all you know. Once you close the popup it doesn't say how many friends you have, or show their faces or who's online, who's been online in the last week, or anything. It's anti-social.

And it makes me think a lot before posting, which means I'm going to post a lot less. "Is that person in Family AND Friends or just Family? Maybe I'll just email it."

If they fix nothing else, let it be this interface.


What's the filter?

In some ways the brilliance of Facebook is that they've spent a great deal more effort on what the receiver wants to see, rather than what the sender wants to share.

And in so many ways, this focus is a much better approximation to real life than the "need to know" classified access-control-list email addressbook complexity of making a person limit their sharing to a certain circle of friends.

My Facebook friends are mostly the people I believe aren't insane. They might not want to see baby pictures all the time, but I don't mind if they do. 

And I'd rather the software take care of that... if I put "Baby Girl" in the title, some people won't want that stuff ranked very high, so it can be harder to find. If Facebook doesn't do this already, I imagine their approach leads naturally to classifying content (not just people) and showing information to the right people. (Bubble filter indeed, but why not.)

I know there's an important "user story" about a 19-year-old who goes to a party and doesn't want her parents to see the pictures. The trend we've seen is that people facing the un-coolness of their social network ("Mom's on Facebook!") will gravitate to using a different site for this purpose. No single site has been able to "fix" this problem, no matter how good its groups UI. Facebook has a fantastic Groups system that nobody uses, perhaps because they don't make it very prominent.

In some ways it will be neat to see if Google+ manages to catch this endless churn and contain it. But I think to do it you have to make cool and identifiable "spaces" for people/circles to interact. (Even Facebook Groups have their own photo galleries.) If there's a circle for your friends, you should be able to brand it, like the new cool site you moved to because your mom joined Facebook. I think a subtle tab highlighted on the left isn't enough.

In trying to contain the "mom problem", again, the network loses. Google+ has a "Share" (reshare?) feature, but this again has a ton more friction, compared with Facebook's version: "Michael commented on Lorna's photo..." with friend-of-a-friend visibility. 

In some ways, the seamless way that FriendFeed would remix streams of friends based on "Like" clicks, and Twitter with Retweet, simply had no historical precedent in online communication. Email forwarding is hard work, and so is the Google+ Share button.

Serendipitous or intentional? Facebook's is magic. Google's is more like email forwarding.

Profiles fit great with Google search, but activity streams not so much

For years, Buzz and Picasaweb kept asking to make my profile public. And I kept deleting it. Here's why.

When you search for a thing on Google.com, you expect relevance. And if you're the one being searched for, you want to look good, or not show up.

If you search for my name, you want to find some relevant things about me. Putting "Mike once liked this dorky video on YouTube" (with a kind of authority) at the top of that is somehow out of context. And it's worse if I did that 4 months ago, and I just haven't done anything since.

In my Google Profile, social interactions I've had that happen to be public take up 80% of the screen (even if it's empty or out of date!), and links that I've made to important stuff in my life take up the small bit in the corner. 

That sort of "information" makes sense if you visit a twitter.com page, but at a google.com URL, it is not a good representation of a person. You kind of expect "best post of all time" and you get "Mike was in a bad mood when he made a public post 2 months ago."

I'm sure Google has wrestled with this dozens of ways, but I think there's a conflict in there.

Social+

Perhaps Facebook's most audacious and brilliant move is in trying to make the receiver the filter of content, while keeping loose track of potential friends. The filter problem is very unsolved, but it's very exciting as well. If you can show me what I want to see in a single space (not in multiple tabs), and if you can make it relevant based on the content and the context, I think that's a brilliant user experience. 

I think the Google+ emphasis on sender-driven sharing replicates email too closely, down to an awkward UI for sharing (which feels like email forwarding). It makes sense to a person who's in a certain habit of using email all day, but it also misses a huge number of interactions that magically, serendipitously happen online via Likes, Retweets, and simpler interactions online.

Google's first effort is impressive, and I think I will use it more than Buzz. It actually nearly solves the "huge niche" of private collaboration, and maybe people will use it for work more than play.

But in the initial version, I think it fails in not being cooler or more magical in any notable way, when compared to the thought leaders in the space, Facebook and Twitter. Google has tons of magic, but it's just not coming through. Lessons learned from the Buzz launch, I'm sure: an abundance of caution. Don't use the technology to scare people!

But when you allow and encourage serendipity, you get magic. If you worry too much about privacy, you capture a ton of niches that care about these things, but you don't become the new way people communicate online.

You just...become a better version of email.