Modern Yoga

Yoga in LA

Robin is not a real yoga teacher yet, but the picture is so compelling that he felt the need to write a fake bio. Robin spent most of the first 22 years of his life studying every day and jogging every other day. Finally, when he graduated from college, his body couldn’t put up with it any longer. Back pain and knee problems led him to physical therapy, and he feared that the rest of his life would be filled with boring-as-heck rubber-band exercises. Then a yoga teacher offered to give classes once a week in Robin’s apartment building, which was too convenient to pass up.  This teacher turned out to be the most caring, knowledgeable, inspiring, and hilarious person one could ever hope to meet. She helped Robin understand which muscles were overcompensating for which others, and she showed him how to practice yoga to improve core strength, flexibility, and alignment. There was also a bunch of other crazy stuff like handstands, lion roars, and words like “supdebodikanasana” and “artichandrasana,” which overall kept things interesting.

Ever since that fateful day, Robin has been studying less and doing yoga more.  He believes deeply in living a balanced life — his yoga practice allows him to pound his knees during runs and crush his hips all day in front of the computer. Over the years, he has taken yoga classes from dozens of instructors in Boston, Seattle, and elsewhere. There was the teacher who reminded everyone to “mmmm-breathhhhe.”  The classes where you sweat so much you should have just used a towel as your mat. Classes like in the movies with beautiful 20-somethings checking each other out, and classes with plenty of rounder shapes. Because Robin came to yoga to heal his battered body, he was always especially mindful of the ways in which yoga poses can be designed to strengthen what is weak and loosen what is strong. Yoga has roots going back millennia, but very few people in ancient India spent all day sitting at a desk. Robin applies a healthy skepticism to the traditions and picks out (or makes up) the practices that are relevant specifically to the modern ways in which humans injure themselves for fun.

iPhone plus keyboard

I hope the next iPhone revision will take a cue from the new iPad and also support Bluetooth keyboards. When I travel, I often just want to write – email, journal, blog, whatever. For this longer-form writing I don’t need a screen any bigger than the iPhone, but I do need a physical keyboard because typing on the on-screen one is so painfully slow. (I’ve gotten faster over time on the iPhone, but still nowhere near the 80 words per minute that I type on my laptop.)

Back in the day (not that long ago, really) I used to travel with a Handspring Visor (a relative of the Palm Pilot) and a physical keyboard that folded up into an amazingly small package about the size of an iPhone (but an inch thick). The combination took up very little space in my backpack and weighed a total of about half a pound. For many years, in fact, the only time I ever wrote in my journal was on airplanes. I’d just whip out the PDA and keyboard and start typing away.

I don’t think that fold-up keyboard manufacturer ever sold enough to be profitable. Maybe now that so many people have smartphones, it would be a different story. On the other hand, maybe Apple has little motivation to support external keyboards for iPhone, because then we’d have less reason to buy an iPad…

I found a similar blog post after Googling a bit.

A vision of educational technology

Ken brought to my attention a good article that at first glance falls under the category of Apple Tablet speculation, but is really just a very clear vision of the educational technology that teachers and students want. Ken found out about the article because the author emailed him to point out that many of his requirements could be met by tablet versions of existing Omni software.

Will the Apple Tablet Support or Hinder Users’ Cognitive Fitness?

Learning involves a lot of thinking, writing, drawing and communicating. Learning involves anticipating what the author will say, setting learning objectives, detecting knowledge gaps, writing comments on the document, drawing diagrams. Unfortunately, today’s computers do not make this an easy task.

One item of personal interest is:

And why not allow users to directly add dictionary entries to their self-testing database, so that they never have to look up the same word twice?

In fact, I implemented a web-based version of this about ten years ago…


Update (August 2010): Inkling, just released, is the first iPad app I’ve seen that begins to incorporate some of these ideas, such as writing comments on the text.


Career Stereotypes

When I was in college, a poll sent to faculty members found that only something like 3 out of 200 professors were Republican.

The first explanation that came to mind, of course, was that it’s because you have to be reasonably intelligent to be a professor.

A gentler explanation is discussed by a recent NY Times article (“Professor Is a Label That Leans to the Left”). It reviews sociological research arguing that because professors are seen as liberal, it’s a career choice that appeals to liberals — thus perpetuating the imbalance.

To Mr. Gross, accusations by conservatives of bias and student brainwashing are self-defeating. “The irony is that the more conservatives complain about academia’s liberalism,” he said, “the more likely it’s going to remain a bastion of liberalism.”

Another reason I thought this was interesting is that I’ve been arguing along these same lines for a long time as applied to the dearth of women in computer science. That is, computer science is seen as a community of nerdy boys, so the people attracted to the field tend to be more nerdy boys, plus those who are attracted to nerdy boys.

But I hadn’t quite made it to the logical conclusion that the more we emphasize the lack of women, the more we perpetuate the problem. No wonder the trend has been so difficult to reverse.

Illustrations on the Web

I’ve been ranting again about how hard it is to deal with images in web applications. I’ve touched on this topic before. It came up again because I wrote a blog post with five or six illustrations inline. It was easy enough to draft the blog post in the Mac desktop environment: I just copy/pasted or dragged the illustrations directly into my rich-text document.

When it came time to move this onto the web, however, there was no simple approach I could find. All methods involved individually uploading the illustration images and re-linking to them from within the text of the blog post. Sure, I can do that. But it’s sufficiently annoying to discourage me from using images in blog posts when at all possible.

If we are going to start meeting the needs of the half of the population who are primarily visual thinkers, we are going to have to start creating software that makes the web much more friendly towards end-user graphics.

(Does this software already exist? Let me know.)

A story of blog consolidation

For a while now I’ve been wanting to consolidate all of my blog entries into a blog system that runs in my own web space.  That way, I can customize the blog as much as I want, I can be confident that I’ll never have to pay extra for it (beyond what I already pay for web hosting), and I can give it a nice url like robinstewart.com/blog/.

The only problem was that the process of transferring my old blog entries into the new blog system turned out to be waaaay more time-consuming than I had hoped.  That is the moral of this story.  But if you’re interested in all the nerdy details, read on.

I started writing occasional blog entries back in 2006. I originally used a blog provided through Williams College.  Then at some point I switched to a blog hosted by Blogger (but I left the old entries in the old blog).  Now that it has become easy to install WordPress on my own website, I did so.  But to really make the transition, I had to move all of the blog entries from the two old systems into the new system.  How hard could that be?

Well, the Williams blog was hosted in an extremely old, “multi-user” WordPress installation.  I made an attempt to upgrade that system to the latest version so that it would gain the “export” feature.  But after reading a lot of documentation, I decided that the process would be long, tedious, and fraught with peril (both because it is a “multi-user” version and because it would have to be upgraded through a series of new releases, one by one).  Instead, I ended up writing a PHP script that pulled my blog post information (title, text, date, author, etc) from the underlying MySQL database and exported it in the XML format that newer versions of WordPress can import.  After a few iterations of this script, I was able to successfully import the resulting file into my new WordPress system.  (Thankfully, the date/time format has not changed since the old WordPress version.)
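
Roughly what that export script did, sketched in Python rather than PHP. The row fields and the stripped-down WXR-style tags here are illustrative assumptions, not the real WordPress schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical rows as they might come back from the old posts table;
# the field names are illustrative, not the actual WordPress columns.
rows = [
    {"title": "First post", "content": "Hello!", "date": "2006-03-01 12:00:00"},
    {"title": "Second post", "content": "More news.", "date": "2006-04-15 09:30:00"},
]

def build_export(posts):
    """Build a simplified WXR-like XML document from post dicts."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    for post in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = post["title"]
        ET.SubElement(item, "content").text = post["content"]
        ET.SubElement(item, "post_date").text = post["date"]
    return ET.tostring(rss, encoding="unicode")

xml_out = build_export(rows)
print(xml_out)
```

The real script also had to carry over authors, categories, and comment data, which is where most of the iterations went.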

The next step was to import my newer blog entries from Blogger.  The new WordPress has an option to do this import directly.  You provide your login information and it goes and automatically fetches all of the blog entries.  That import went smoothly, but the HTML underlying the blog entries I had created in Blogger was full of extra <div>s, so that the entries didn’t render properly alongside normal, clean WordPress-generated blog entries.  I considered doing some CSS hacking to make the Blogger entries look ok, but after some experimenting to no avail I decided it would be a lot better to have clean HTML anyway.

To achieve that, I ended up exporting from my new WordPress system all of the blog entries that I had by now imported.  I opened the resulting XML export file in a text editor, and performed some judicious find-and-replace-all operations to get rid of those extraneous <div> tags (while keeping the important ones).  Then, I deleted all of the entries from WordPress and re-imported my edited XML file.  Unfortunately, this re-import didn’t quite work (the importer web page just hung indefinitely).  But I was eventually able to work around the problem by splitting up the XML file into about five different files, and importing them separately.
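
The find-and-replace cleanup could equally be scripted. Here is a hedged Python sketch of the idea; the particular wrapper <div>s being stripped are a guess at what Blogger emitted, not its actual markup:

```python
import re

# A sample entry as Blogger might have exported it; the wrapper divs
# shown here are illustrative guesses, not Blogger's real output.
dirty = '<div class="post-body"><div>Hello, <b>world</b>.</div><div></div></div>'

def strip_wrapper_divs(html):
    """Remove bare and wrapper <div> tags while keeping inner markup."""
    return re.sub(r'<div class="post-body">|<div>|</div>', "", html)

clean = strip_wrapper_divs(dirty)
print(clean)  # Hello, <b>world</b>.
```

The tricky part, as in the manual edit, is being judicious: the pattern has to match only the extraneous tags and leave any <div>s that actually carry layout.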

Finally, all of my old blog posts were in my new system, and in a way that looked fine without CSS hacks.  Whew!  It may have been faster to retype them all by hand.

Let My People Design Great Products

I finally read Let My People Go Surfing because Bill Buxton told me to. Actually, he told everyone at CHI 2008 during his keynote address. I actually wrote it down and found the reference later — personal information management for the win!

In his keynote, I remember Bill saying something like, “it’s a pity about the book’s title.” The author, Yvon Chouinard, is the founder of Patagonia (the outdoor clothing company). Let My People Go Surfing is the name of Patagonia’s employee benefits package, which is not all that exciting.

The reason Bill mentioned the book to an auditorium of human-computer interaction professionals is because of its incredibly compelling chapter on product design. Chouinard’s goal is to “make the best products.” But realizing how abstract and useless that phrase is by itself, he delves much deeper and lays out an insightful “philosophy” of product design standards. Below I’ve selected the ones that also apply to software, in my view.

  • Is it functional? (“Who needs it?”)
  • Is it multifunctional? (versatility. “Do I really need a new outfit to do yoga?”)
  • Is it durable? (“The poor can’t afford to buy cheap goods.”)
  • Is it as simple as possible?
  • Is the product line simple? (“The best restaurants in the world have set menus, and the best ski slopes have already decided which skis are best for your skill level.” “The best-performing firms make a narrow range of products very well.” “Fewer parts mean less to go wrong; quality comes built in.”)
  • Is it an innovation or an invention? (“It may take thirty years to come up with an invention, but within a few years or months there can be a thousand innovations spawned from that original idea.”)
  • Does it have added value? (“We treat customers with respect.”)
  • Is it art? (“An illustrator becomes an artist when he or she can convey the same emotion with fewer brushstrokes.”)
  • Are we designing for our core customer? (the customers who “define the state of the art”)
  • Have we done our homework? (read: usability research. “You can minimize risk by doing your research and, most of all, by testing. Testing… needs to be included in every part of [the industrial design process].” “Measure twice, cut once.”)
  • Is it timely? (“To stay ahead of the competition, our ideas have to come from as close to the source as possible.”)
  • Does it cause any unnecessary harm? (ongoing environmental assessment)

It’s always useful to read a well-informed list like this one and try to apply its insights to my own product design work.

Simple defense against phishing

I was just explaining to my roommate about phishing scams and why many online banking websites now show you a personal picture when you log in. And I was reminded that the main usability problem at the heart of phishing scams is the URL naming scheme. It’s just unnecessarily complicated to figure out.

What do I mean? The centerpiece of a phishing scheme is a URL at the top of the page such as:

http://www.bankofamerica.com.online.b04k.li/login.html

And the only way to know that it’s a phishing site is to conscientiously look at the last part of the first part of the URL, which is the part that has all the period separators and comes before the first slash, except after the two slashes at the very beginning. Sheesh! Although web nerds have gotten used to this, it does not even remotely resemble an intuitive user experience. People see the “bankofamerica.com” portion out of the corner of their eye and assume all is well.
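
To make that “last part of the first part” rule concrete, here is a small Python sketch that pulls out the portion a reader actually needs to check. Note that the last-two-labels heuristic is a simplification: real browsers consult the Public Suffix List to handle endings like .co.uk.

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Return the last two labels of the hostname -- the part that
    identifies who actually controls the site. (Naive heuristic:
    real browsers use the Public Suffix List for cases like .co.uk.)"""
    host = urlparse(url).hostname
    return ".".join(host.split(".")[-2:])

print(registered_domain("http://www.bankofamerica.com/login.html"))
# bankofamerica.com
print(registered_domain("http://www.bankofamerica.com.online.b04k.li/login.html"))
# b04k.li
```

Everything to the left of that portion is under the phisher’s control and can say whatever they like.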

If URLs simply worked from left to right, the real Bank of America would be: http://com.bankofamerica.www/login.html and the phishing scam would be: http://li.b04k.online.bankofamerica.www/login.html
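
The hypothetical left-to-right scheme amounts to nothing more than reversing the hostname labels, as this purely illustrative Python function shows:

```python
from urllib.parse import urlparse

def left_to_right(url):
    """Rewrite a URL so the hostname reads most-significant-label
    first, as in the hypothetical scheme described above."""
    parts = urlparse(url)
    host = ".".join(reversed(parts.hostname.split(".")))
    return f"{parts.scheme}://{host}{parts.path}"

print(left_to_right("http://www.bankofamerica.com/login.html"))
# http://com.bankofamerica.www/login.html
```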

Then at least we could tell everyone to just look at the leftmost thing (after the unchanging http://) and make sure it is familiar.

Of course, this is not really an option anymore because there’s way too much infrastructure in place using the existing naming scheme. But why don’t web browsers at least highlight the important part of the URL for you? For the real site, the browser could render “bankofamerica.com” in bold red; then the scam, with “b04k.li” highlighted instead, would at least have a chance of catching your eye. And I could tell my grandma, “just look at the bold red portion before you enter your password.”

Does anyone know why the major browsers don’t already do this?

Update: Dave just pointed out that Internet Explorer 8 has indeed publicly announced a similar domain highlighting feature:

Domain Highlighting lets you more easily interpret web addresses (URLs) to help you avoid deceptive and phishing sites that attempt to trick you with misleading addresses. It does this by highlighting the domain name in the address bar in black, with the remainder of the URL string in gray, making for easier identification of the sites [sic] true identity.


Update 2:
Google Chrome does something similar — it colors the “https” green if the site comes with a valid security certificate. It also makes the domain name darker than the stuff after the “/”, but it doesn’t do anything to distinguish the top-level domain pieces. So it is still open to phishing attacks like “www.bankofamerica.com.online.b04k.li”. Hopefully, phishing sites wouldn’t be able to get a green “https”, but the lack of a green prefix seems a lot less noticeable than the clear presence of a suspect top-level domain.

Where do domain names come from?

My uncle recently asked me a variant of this question, and I learned a few new things after doing some Wikipedia research. Here is my attempt to explain it using language everyone can understand.

Part of what makes the internet work at all is that it is designed to be distributed — there is as little hierarchical control as possible. The big idea is to let anyone connect to anyone without going through some commander at the top. If everyone had to go through the top, then it would become a huge bottleneck.

A “web host” usually means any company that hosts web pages. This just means that they own computers that are connected to the Internet. Of course, your everyday desktop computer is also connected to the internet, but for a variety of technical and financial reasons it usually makes more sense to go through a “web hosting” company if you want a web site that is going to be available 24/7 to anyone in the world. But the point is, anyone can connect a computer to the internet, and thus anyone can be a web host — there are no qualifications. And that is part of why the Internet works at all.

However, the story is different for getting domain names. For domain names, some hierarchy is unavoidable, because you need some central way to determine who controls which names and which websites they point to. You want to be sure that “amazon.com” always goes to amazon and not “buy-stolen-belts-for-cheap.com”. In other words, you need to direct people to the right internet-attached computer. (There is also some hierarchical control needed for various other technical pieces of the Internet.)

According to the Wikipedia articles, the US Department of Commerce is theoretically in charge of overseeing those aspects of the Internet that need some hierarchical control. However, they outsource the entire job to a non-profit corporation called ICANN – the Internet Corporation for Assigned Names and Numbers, which for historical reasons is based at the University of Southern California. ICANN has the authority to (1) make certain policy decisions, and (2) outsource the management of sets of domain names — like those ending in “.com”, “.org”, or “.net” — to various other companies. For example, a company called VeriSign is in charge of handling all “.com” domain names, because they won that contract from ICANN. (Part of the contract specifies certain rules, such as limits on the fees they can charge.) But VeriSign, in turn, only handles the actual repository of domain names, and outsources the job of actually dealing with customers to still other companies! But those other companies have to be “accredited” according to certain standards set by ICANN.
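
That delegation chain can be pictured as a walk down a tree, from the most general authority toward the specific name. Here is a toy Python sketch of the idea; the names and delegations in it are illustrative, not real registry data:

```python
# A toy model of the hierarchy described above: the root delegates
# ".com" to a registry (VeriSign), which tracks who controls each
# ".com" name. All entries here are made up for illustration.
root = {
    "com": {  # delegated to the ".com" registry
        "robinstewart": "registered via DreamHost",
        "amazon": "registered via (some registrar)",
    },
}

def who_controls(domain):
    """Walk the tree from the rightmost label inward."""
    node = root
    for label in reversed(domain.split(".")):
        node = node[label]
    return node

print(who_controls("robinstewart.com"))  # registered via DreamHost
```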

For example, I have control of the domain name “robinstewart.com”. I used a company called DreamHost to process that registration and collect payment of $9.95 per year. Part of that money (about $3) goes to DreamHost, for dealing with me, the customer. Part of it (about $6.50) goes to VeriSign, for keeping track of all the “.com” domain names and making sure there are no conflicts. And a very small part of it (20 cents) goes to ICANN, to continue to make policies and track down anyone abusing the system.

So there is a large hierarchy of organizations who all basically operate under the authority of ICANN, which in turn has some sort of mandate from the US government. And there are some international governance boards and treaties, but for various political reasons (i.e. the system is working, so why change it) the whole thing remains US-based.

And there you have it.