tumbledry

Die Work

One slip, and I gave myself four extra hours of lab work. Here’s how.

At the School of Dentistry, we are short on cash. (Our dean is also leaving, but that’s a story for another day.) So, the students (us) do a lot of intricate lab work in order to make the stuff our patients need (crowns, bridges, etc.). That way, we don’t have to pay a professional lab to do as much of it for us. This does the following:

  1. Saves the school money.
  2. Teaches students (us) what good lab work looks like.

These are good things. However, all of this lab work also:

  1. Drives me insane.

We use a polyvinyl siloxane (sure you can use polyether, if you want to really bum your patient out) to take a really accurate three-dimensional negative of a patient’s teeth. We then use a special stone (Type IV die stone) to make a positive version of the patient’s teeth in stone.

We then go through a series of steps to produce something that looks like this:

die_work

THEN we send it to the lab. They take the time to create an entire fake tooth in wax, cast it, possibly veneer it with porcelain, and polish it.

They are supremely good at this. Incidentally, they’re supremely good at the steps before, too (the pouring of the cast, the model work to produce that picture above). BUT, we students, who will never do lab work on this scale again, have to try our hand at this stuff. Pretty discouraging, when you make the mistake I did.

If you want the crown to work in the patient’s mouth without adjustments, the whole process has to be completed to tolerances of about 10-100 microns. That’s where my problems started.

I was doing a final trim on a piece not unlike the blue one you see above. There’s a border on it, called the margin, that represents the division between the edge of the crown and the rest of the tooth. I’ve never messed this part up before, but this time I nicked it when trimming. My slip obliterated a piece of margin 0.7mm wide and about 0.2mm deep. I only showed it to a few people, but the consensus was unanimous: a hole like that is about the size of the Grand Canyon when you’re doing die work. When hundreds of microns matter, a half millimeter error might as well be a mile.

And so I redid it all.


Dusk Bike Ride

Tonight’s perfect weather let us take a bike ride to and from the Out On A Limb performance. During it, I discovered something quite interesting. See, I’m usually biking during the day when the wind is up and my forward progress depends on fighting against it. For this bike ride, however, it was dusk. For safety, we clipped a taillight to Mykala’s… tail. Coasting through the evening, the only thing impeding my progress was the occasional bug running into my face.

Leaning back in the saddle, dropping my hands to my sides, I could ignore the mild hum of the two wheels vibrating through the carbon and aluminum beneath me and just imagine I was jogging. Effortlessly. At 20mph. With a headlight. The streetlights dropped away just as quickly as they rose up next to me.

The illusion was convincing.

Screw Hashbangs: Building the Ultimate Infinite Scroll

I’m just a student in a field unrelated to computer science, but I’ve been coding for years as a hobby. So, when I saw the current state of infinite scroll, I thought perhaps I could do something to improve it. I’d like to share what I came up with.

(Demo: The impatient can just try it out by scrolling+navigating away+using the back button on the front page of my site. Works in current versions of Safari, Chrome, Firefox.)

Hashbangs Lie
Anybody who has even casually coded JavaScript over the past two years can tell you the story: Google proposed using the hashbang (#!) to enable stateful, crawlable AJAX pages. This proposal caught on. But at its root, the hashbang is a messy solution to a difficult problem: JavaScript is capable of creating an entire application, but browsers are still gaining the ability to update the URLs in their address bars to represent application state. (IE’s progress in this regard is abysmal and inexcusable.)

Unfortunately, hashbangs are the dirt on a clean countertop. We’ve spent years trying to clean up our URLs and make them semantic by managing mod_rewrite, readable slugs, and date formatting. Now, we take a step backward, forcing the average internet user to learn another obscure set of symbols that make URLs harder for them to parse. Hashbangs are pollution.

But the real problem is the lying. Originally, a URL pointed to a resource and the deal between user and browser was this: you asked for something, and it was delivered to you. When that URL had a fragment identifier (e.g. url/#frag_id) at the end, there was still the tacit promise that the fragment existed on the page. But when your URL ends with a hashbang, the terms of the deal between user and browser change dramatically. Indulge me in an analogy.

Hashbangs for Lunch
Let’s say you are at a restaurant. We’ll call it cafe.com. Glancing through the entrées, you decide you’d like a burger, with some ketchup on the side (a truly great burger needs no ketchup… humor me here). “Coming right up,” says your server, who leaves to retrieve your order.

burgers

Case 1
cafe.com/entrees/burger/ketchup/
If the server accepted your order with a URL sans fragment identifier, then you get everything at once, your ketchup is on the side of your plate, and you can immediately begin enjoying your delicious burger.

Case 2
cafe.com/entrees/burger#ketchup
If the server accepted your order with a URL PLUS fragment identifier, your burger arrives, but the ketchup isn’t on your plate. “Where’s my ketchup?” you ask. The waiter helpfully points to the bottle of ketchup already on the table, hiding behind the napkins. Everyone still gets what they want.

Case 3
cafe.com/#!/entrees/burger/ketchup
The system breaks down with the hashbang-style order. Best case scenario, here’s what happens: your burger arrives without ketchup. “Oh, that’ll be here in a bit” the waiter says, “I’ll make a separate trip to bring it over.” Here, the hashbang was guilty of, essentially, a lie of omission.

But here’s the worst case scenario for #3: you walk into a completely empty room. The waiter awkwardly walks up to you (remember, the room’s empty, you can’t even sit down yet) and takes your order. After staring at the empty room for a while, furniture begins to show up, and the restaurant takes shape. You see the waiter come back with a Weber grill.

“What the heck is going on?”

“We’re assembling it all for you, right here!” the waiter replies. A multitude of trips bring over a cook, the raw ingredients, and the whole darn thing is constructed in front of you. Your reaction is less than favorable: “But I just wanted a burger! Whatever happened to using a kitchen?! All the other restaurants just bring it to me all at once!”

The ketchup is applied for you, with a superfluous and sloppy auto-scrolling mechanism.

I call this “lying” because you don’t exactly get what you requested. You get factories and special features and all this extra stuff.

Read on to see how I applied this lesson to infinite scroll.

What’s OK with ∞ Scroll
An infinitely scrollable page is conceited because it presumes to know what you want. “Oh, you’ve reached the bottom of the page?” it asks… “well, why don’t I show you more?” This facilitates casual browsing and is especially well-suited to image galleries.

The fat footer does not suit my purposes, so I am perfectly FINE with the page assuming you’d like more, making the end of the page some weird variation of one of Zeno’s paradoxes. If you don’t feel comfortable with this or your projects don’t require such behavior, then infinite scrolling isn’t for you. That’s fine. Thanks for reading.

But you might want to hear the real problem…

What’s NOT OK with ∞ Scroll
Paul Irish’s thorough and utilitarian infinite-scroll.com starts to get at the main problem of tacking on content to the end of your page:

There is no permalink to a given state of the page.

Well, yes. If you only implement infinite scroll halfway. What I wish Mr. Irish would’ve said: FOR GOD’S SAKE, DON’T BREAK THE BACK BUTTON.

Solution
I’ve written an improved infinite scroll (which you can try out on the front page of my site) that does the following:

  1. Preserves the back button.
  2. Does NOT use the hashbang, no matter how happy Twitter engineers are with it.
  3. Gives up on infinitely scrolling if (1) is impossible.
  4. Progressively enhances: when (3) occurs, the user follows a good ol’ hyperlink for more content.
    (Remember, “If your web application fails in browsers with scripting disabled, Jakob Nielsen’s dog will come to your house and shit on your carpet.”)

I will outline my thought process here, but remember I’m just doing this for a hobby, so should you want to use this, you’ll probably want to recode it yourself. Let’s get started!

URL Design
I needed to represent my page-browsing states in a way that was easy to parse with regular expressions yet meaningful to the user. My schema:

load second page:
/page/2/
load 2 pages of results, including page 2, from beginning:
/pages/2/
the same as previous, except page 1 is explicit:
/pages/1-2/
load 2 pages of results, including page 3, excluding page 1:
/pages/2-3/

Here’s a helpful illustration:

pagination

I’ll annotate that image as we put this together.

Debouncing
I’m sure you’ve noticed that scroll-activated & mouseover JavaScript aren’t always robust. Quickly scroll to the bottom, then back to the top or mouseover-mouseout very fast, and suddenly JS fails to recognize your true intent. Things get stuck on or off… it’s a disaster.

In these situations, you should always debounce:

Debouncing ensures that exactly one signal is sent for an event that may be happening several times — or even several hundreds of times over an extended period. As long as the events are occurring fast enough to happen at least once in every detection period, the signal will not be sent!

I use the jQuery doTimeout plugin. Now that we can reliably detect scrolling to the bottom of the page, we begin the fun stuff.
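In case a standalone version helps, here’s a sketch of the debounce idea in plain JavaScript (an illustration of the concept, not the plugin’s actual code):

```javascript
// A minimal debounce: the wrapped function fires once, `wait` ms after the
// LAST call in a burst. Rapid scroll events collapse into one signal.
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var args = arguments, self = this;
    if (timer) clearTimeout(timer);   // a new event resets the clock
    timer = setTimeout(function () {
      timer = null;
      fn.apply(self, args);
    }, wait);
  };
}

// Usage sketch (browser): only check our scroll position once the user
// has paused for 200ms.
// $(window).scroll(debounce(checkIfNearBottom, 200));
```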

Adding a Page
jQuery makes it simple (fun?) to extract the URL for the next page from a link I’ve placed at the bottom of the current page. Using the example image from above, let’s say we’re on page/1/. We scroll to the bottom, our debounced script detects this, and AJAX fires to load page/2. The page/2 URL is contained within the “↓ More” link at the bottom of the page. The AJAX result is appended to the page and the “More” link hidden.
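Boiled down, that step looks roughly like this (the function names are my own, hypothetical ones; in the real script these pieces are jQuery calls like $.get and a lookup of the “More” link’s href, injected here so the flow is easy to follow and test):

```javascript
// Sketch of the "add a page" step. The browser-specific pieces (finding the
// "More" link, the AJAX call, the DOM append) are passed in as functions.
function addNextPage(getMoreHref, fetchPage, appendHtml, hideMoreLink) {
  var href = getMoreHref();         // e.g. $('#more a').attr('href')
  if (!href) return false;          // no "More" link: nothing left to load
  fetchPage(href, function (html) { // e.g. $.get(href, callback)
    appendHtml(html);               // tack the next page onto this one
    hideMoreLink();                 // the appended content brings its own link
  });
  return true;
}
```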

Done!

Not if you want your user to enjoy using your page. Two problems:

  1. Your user is marooned on a giant, growing page. They see something cool and copy the current URL. Their friend clicks the URL and has NO IDEA what is going on.
  2. Your user clicks a link that interests them. “Cool!” they say, “I’ll just click the back button and keep browsing where I left off!”
    *Clicks back button*.
    And now they’re stuck at the very first page, having no idea how to get to where they were.

Both cases result in confusion — you’ve abandoned your user in the name of a cool way to add new content! No no no no. Repeat after me: “I must update the address bar to reflect the new state of the page.”

Updating the Address Bar
This is where you and I may differ — I will ONLY use history.pushState/replaceState to update the address bar. I don’t care how many libraries have been written to ease the process of falling back to hashbang-preservation of application state. I refuse to use hashbangs (Cf. the intro to this post!), and I wrote my infinite scroll script to take absolutely no action if history.pushState is not supported. (A nice detection of history support, borrowed from the excellent Modernizr: return !!(window.history && history.pushState);).

Now, I have the luxury of using JavaScript simply as a progressive enhancement; thus, I prefer to revoke functionality instead of spending days of my life working around old browsers. Again, I understand you may not have this luxury, and must use hashbangs for your project. Godspeed.
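That one-line feature test can be wrapped into a small function (a trivial restatement of the Modernizr-style check quoted above; `win` stands in for the global window object so it can be exercised outside a browser):

```javascript
// Bail out of infinite scroll entirely unless the browser can rewrite
// the address bar: both history and history.pushState must exist.
function supportsHistory(win) {
  return !!(win.history && win.history.pushState);
}

// In the browser:
// if (!supportsHistory(window)) { /* leave the "More" link alone */ }
```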

For those of you still left, put on your regex pants, here we go. I wrote a potent little regular expression to parse URLs of the schema described above. Here’s the unescaped version:

/pages?/([0-9]+)-?[0-9]*/?$

This regular expression is run against up to two strings:

  1. BROWSER URL (window.location.href), two variations:
    /page/xx/
    or this:
    /pages/xx-yy/
  2. The “↓ More” link at the bottom of the page:
    /category/page/zz/

Using TWO possible sources for page information makes the script robust, always able to determine how to update the URL. It works like this:

  1. Load tumbledry.org/
  2. Scroll. Detect the “↓ More” link and concatenate its contents to the current page. Parse the “↓ More” link using the regex above, and update the URL to tumbledry.org/pages/1-2/
  3. Scroll. Detect the “↓ More” link and concatenate its contents to the current page. Parse the current address bar link using the regex above, and update the URL to tumbledry.org/pages/1-3/
  4. Repeat (3).
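Those steps can be sketched in a few lines. (Note: I’ve added a second capture group to the regex here so both page numbers come back from one match; the example URLs are hypothetical.)

```javascript
// Parse either URL form (/page/xx/ or /pages/xx-yy/) into first/last pages.
var pageRe = /\/pages?\/([0-9]+)-?([0-9]*)\/?$/;

function parsePages(url) {
  var m = url.match(pageRe);
  if (!m) return null;                            // e.g. the bare front page
  var first = parseInt(m[1], 10);
  var last = m[2] ? parseInt(m[2], 10) : first;   // /page/2/ → pages 2..2
  return { first: first, last: last };
}

// Build the next address-bar URL: prefer the state already in the address
// bar; fall back to the "More" link when the address bar has no page info.
function updatedUrl(addressBarUrl, moreHref) {
  var state = parsePages(addressBarUrl);
  var next = parsePages(moreHref);
  var first = state ? state.first : 1;
  var last = (state ? state.last : (next ? next.first - 1 : 1)) + 1;
  return '/pages/' + first + '-' + last + '/';
}

// updatedUrl('http://tumbledry.org/', '/page/2/')           → '/pages/1-2/'
// updatedUrl('http://tumbledry.org/pages/1-2/', '/page/3/') → '/pages/1-3/'
```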

Now that we are updating the URL to represent the state in an honest way, the user gets to navigate without jarring changes to their page location. I’ll explain below.

Handling the Back Button for Inter-Page Navigation
The user can freely click links and hit back to the infinitely scrolling page without worrying about losing their place:

backButtonInterPage

The first set of arrows should be self-explanatory — just your usual browsing. The second set of arrows illustrates how an intelligent infinite scroll can update the address bar URL. When the green area is tacked on to the blue, the script updates the URL to the red area: (/pages/1-2/); this reflects the state of the page. Now, when the user hits the back button, they end up exactly where they were on the page. Why? Because the URL they are navigating back to (illustrated in red) loads all the content they were viewing (illustrated in blue and green).

Handling the Back Button for Intra-Page Navigation
Note: I originally thought that this behavior would be helpful rather than hurtful. Judging from the comments, I’ve learned otherwise. Since then, I’ve disabled this type of back button handling, and reinstated traditional back button behavior.

There’s one other exciting application of this address bar URL-updating we’ve been doing. Using window.onpopstate to detect a back-button press, the infinitely scrollable page can move backwards through the history stack, and remove nodes of content accordingly.

backButtonIntraPage

An interesting wrinkle: I use about a 200ms delay between the user hitting the bottom of the page and the AJAX firing to retrieve the next page. So, when the user hits back and the nodes are removed, they end up at the bottom of the previous page, with only about 200ms to scroll up out of the target zone before another page loads. To fix this, in the window.onpopstate function I temporarily increase the delay to 3000ms, giving the user time to scroll up and out of the target zone.
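The delay-switching itself is nothing fancy; something along these lines (the names are mine, and the real handler also removes the appended content nodes):

```javascript
// After a popstate, stretch the next-page trigger delay from 200ms to
// 3000ms so the user has time to scroll out of the bottom "target zone".
var NORMAL_DELAY = 200;
var POPSTATE_DELAY = 3000;

function nextFireDelay(justPopped) {
  return justPopped ? POPSTATE_DELAY : NORMAL_DELAY;
}

// Browser wiring sketch:
// var justPopped = false;
// window.onpopstate = function () {
//   justPopped = true;   // removeLastPageNodes() would also run here
// };
```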

By adding and refining this popstate detection, the user gets a pretty darn good experience.

Known Issues

  1. Because /page/1/ has some of the same information as /pages/1-2/, my content management system ends up caching some duplicate information. This could be mitigated on the back-end by having the system intelligently assemble /pages/1-2/ from already cached items, but given the small scope of my site, it’s not necessary.
  2. If I navigate away from /pages/1-2/ and then come back, my saved value of the number of nodes previously added is lost. So, when window.onpopstate fires, my code must make an educated guess about the number of nodes removed. This could be fixed by using local storage to save an array of nodes added and removed.
  3. The entire purpose of updating the URL to /pages/1-2/ is to put the user precisely at their previous position on the page. This system breaks down if the user pops open some inline comment threads, navigates away, and then returns. I did not see a semantic way of saving this comment thread open/close information in the URL, so I didn’t bother creating a system to save this particular state. Again, some local storage could be used to save which comment threads were open, and pop them open again when the user returns to the page.

Enhancements
By caching the nodes added and removed, intra-page back button use followed by scrolling could be done without any extraneous HTTP requests.

Conclusions
I stubbornly insist that my JavaScript will only give you infinite scroll if your browser supports the event handlers and objects required to give the best experience.

That way, I am able to ensure that infinite scroll is all gain and no loss.

If for some reason you really want to read my source, you can do so.

Comments welcome.


Vacation Weather

Guess which day my vacation ends.

weather

Go on, guess.

Sugar

The May 16 New Yorker has John Seabrook’s latest article, on Pepsi’s mission to make their offerings healthier, which takes some time to enumerate dietary recommendations that have been lobbied away by the food industry. Regarding sugar:

Among the recommendations were calls to limit added sugar to ten per cent of the total calorie intake…

Assuming the recommended daily allowance of 2000 calories, that’s 200 calories of added sugar per day. At 4 calories per gram of sugar, that’s 50 grams of added sugar per day. How much is that? Well, a 12 ounce can of soda has 39g of sugar.

Interestingly enough, that 50 grams is on the high end. “Sugar Shock”, in the latest issue of Experience Life magazine, adds this to sugar intake guidelines:

In 2009, the American Heart Association advised adults to eat no more than 6 teaspoons of added sugars a day for women and 9 teaspoons for men (recommendations are based on average weight for women and men).

White sugar weighs about 4.2 grams per teaspoon, putting the American Heart Association recommendations between 25 and 38 grams per day. So, a 12 ounce can of soda exceeds your daily sugar intake.
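For the curious, the arithmetic works out like this (a quick sketch, using the 4.2g-per-teaspoon figure above):

```javascript
// 10% of a 2000-calorie diet, converted to grams of sugar.
var dailyCalories = 2000;
var sugarCalories = dailyCalories * 0.10;   // 200 kcal
var sugarGrams = sugarCalories / 4;         // 4 kcal per gram → 50 g

// The American Heart Association teaspoon guidelines, in grams.
var gramsPerTeaspoon = 4.2;
var ahaWomenGrams = 6 * gramsPerTeaspoon;   // ≈ 25 g
var ahaMenGrams = 9 * gramsPerTeaspoon;     // ≈ 38 g

var sodaGrams = 39;                         // one 12 oz can of soda
// sodaGrams exceeds both AHA numbers: one can blows the daily budget.
```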

Tumbledry 1.0

And here we are. A brand new design for a… well it was going to be “for a new year”, but that didn’t work out so well. Anyhow, I’ll be squashing software bugs and refining things over the next few days.


Derek Miller

I last wrote about Derek Miller almost 4 years ago, remarking on his engaging writing and his courage in facing stage 4 cancer. Yesterday, he died. Suddenly, that is his last post, put online by his family, in stark black and white.

It turns out that no one can imagine what’s really coming in our lives. We can plan, and do what we enjoy, but we can’t expect our plans to work out. Some of them might, while most probably won’t.

After reading the last sentence of his post (I won’t reproduce it here; I’ll let you read it yourself), I couldn’t help but cry.

Spring Snow

A few days ago, they had some record cold up in northern Minnesota. In late April — coldest day on record — something like 28°F for the high. This morning, I was taking out the trash (had to get the place looking presentable for our plumber visit), and it was SNOWING! On May 2nd!

Now, about that plumber. A few days ago, the tub stopped draining. It didn’t just drain slowly. No no no. We’d turn on the sink, and then water would start coming UP from the tub drain. Water that looked like it was half soil and half water. Mykala was concerned that we might have tree roots growing through the pipes. Thankfully, our landlord Mary Alice (who is always so extremely thoughtful and responsive, it makes me feel that we’ve won the rental lottery) got the plumber to stop by. Turns out there was a clogged drum trap (which, I guess, is typical for old tubs). The plumber recommended not messing with it, unless Mary Alice was interested in re-plumbing the entire house.

What adventures await Mykala and me when we jump into our first home. I envision me standing ankle deep in water in the basement screaming “I WAS JUST TRYING TO CHANGE THE WATER SOFTENER FLOW” and Mykala laughing at the ridiculousness of it all. She’s my instant perspective.

Quality Time

I just read “Electronic Devices Redefine Quality Family Time” at the New York Times. Mykala sent it to me.

When people are sitting in their living room, physically close to one another but absorbed with the goings-on on different screens, the “parallel planes of existence” idea doesn’t seem so bad… kind of like reading the paper and occasionally looking up to discuss what article you are reading — that is, they’re not entirely parallel planes… they do intersect.

But when people can’t make it through a movie without seeking out multiple dopamine hits from their digital devices… that really starts to worry me. I mean, movies are, classically, the most passive way to absorb a long-form story or narrative. If we can’t even stay with one flickering screen long enough to get a deeper understanding of a single topic, what can we do? Everyone becomes so ADD-addled that they demand their information in not just sound bites, but HuffPo-esque snarky little media snacks?

Here’s the thing I like about newspapers: they have a beginning and an end. Let’s say you’ve had a long week, and the Sunday Times arrives that morning. “I’m going to read what interests me from every section”, you say. And you do. You catch the article about Moby’s remodel of his house in Hollywood, then you read the life-changing “Is Sugar Toxic” in the magazine section. Sports: you find out just how badly the Twins are doing. The point is, you are skipping around, just like you can online. BUT, there’s an end. “Well, that’s the end of the paper for today.” And you go about your life. And while you do that, your brain considers the implications of sugar in our life, of whether you’d like your house to look anything like Moby’s. You take a BREAK and get some time to absorb that which you read.

There is no end to the New York Times online. You’ll always be wondering what the newest article is, even at the expense of finishing the one you are currently reading. You are pulled ever forward, never having time to crystallize your thoughts about any one issue. Books, newspapers, TV with limited channels — they allow this crystallization. With the internet, you have to enforce that time to make sure it is there — that’s difficult.

Easter Eggs

We dyed Easter eggs a week late this year. Mykala hard-boiled them, and we had to dye them before they went bad. We’re sort of slow to get things like that done. BUT, even though she got back from work at 11pm, I have to give my wife credit: she was just as enthusiastic about dyeing eggs as I was. A note to my future self: dyeing brown eggs does NOT turn out well. The final color of the blue-dyed brown egg was… well, not very appetizing. It looked sort of like a dirty football.
