802.11n is not even close to "Ethernet without wires". In some cases it takes 20x longer to do things than 100Base-T does. It's not obvious from the specs why that should be true. But from what I've observed, the gap has a lot more to do with latency than with throughput.
I'm sitting 12 feet from my WRT610N router, with an Intel 802.11n chipset in my laptop and one wall in between, and this is pretty much a state-of-the-art combo.
I can stream data at about 6MB/sec over this connection.
But my "average" ping to my router is nearly 10ms, about the same as my DSL ping to my ISP. For things like big HTTP streams, 802.11n does really nicely: it can stream video and so on.
Network filesystems were invented 20 years ago, when throughput and latency were balanced differently. The most data you can request at a time over these systems is maybe 64k, and that's not so much when latency is high.
This means that if you issue 64k reads in a tight loop, you can maybe use half of a typical 802.11n line. But just barely, and it depends on noise and on whether your computer is doing anything else.
It's not nearly as good as an HTTP stream.
And still, the 64k-at-a-time approach sounds almost reasonable, until you realize that the libraries everyone uses to read files (stdio, jpeglib, libpng) have embedded default buffer sizes of 4-8k! That's when it gets really bad.
Yeah. Oh it's slow.
If you make software, you really have to change those 4-8k defaults, or even implement readahead, if you don't want to be entirely gummed up by a normal wireless network. If you're only reading 8k at a time, you're running at less than 25% of your network's capacity, maybe only 10%.
No way around it. You are just slow.
People will notice that, and you'll be slow compared to the people who know this and code around it. They'll be 4x faster easily, and maybe more.
My advice: pretend you're reading streams 64k at a time. Don't seek a lot. Don't use a database that reads in 4k blocks.
I guess we should also think about when these 20-year-old filesystems are going to get updated. In the long term, we need network filesystems that can deal with latency in a smarter way. WebDAV tried to do that with HTTP, but with an XML fetish that doesn't look efficient over the wire. In general, a smarter approach to block sizes (intelligent read-ahead) and hints based on recent usage would help a lot. If I've just opened a dozen 30MB .CR2 files and read their full contents, software that adapts to that case would be very nice, rather than running at 10% of the network's speed because of a 20-year-old protocol.
The batchy, async "sync" protocols are all very proprietary right now, and they don't degrade gracefully into something NFS- or Samba-like. There's a big split, both in openness and between "sometimes I need async speed" and "sometimes I need synchronous operation".
The cost of not updating this piece of the technology stack is that today's 4x throughput gap between "synchronous reading" and "streaming" keeps widening, until there are no common, high-performance network filesystems left, and we all use custom or async protocols for high-performance situations.
We could do that, but I think there's a middle ground: some read-ahead, plus some batching, to avoid these issues. And if applications can usefully say "be totally async", then we get protocols like the ones in today's sync solutions anyway.
Finally, wireless standards could put some focus on latency as well as throughput. It's nice marketing to stream 3 HD video streams, but it's also nice when that doesn't come at the cost of increased latency for other common operations.
It's important to keep throughput and latency in balance. Already the big improvements made in the 802.11n standard show a trend towards putting them out of whack. Let's see if software or hardware makes the next step towards improving that.