Tuesday, May 15, 2007

The Brave New World: Usability Challenges of Web 2.0 / Jared Spool

Saturday, March 24, 2007

Jared Spool, of User Interface Engineering, reported on research in progress. His company's long-term mission is to improve quality of life by eliminating the frustrations of technology.

He describes Web 2.0 as…

  • designing for the total user experience
  • combining users and content
  • moving beyond the traditional interface
  • shrinking development/design teams (teams can do more with less staff by building on the work of others)

A recent 37Signals survey asked users what Web 2.0 meant to them. The answer: AJAX, interactive, Rails.

Design tends to focus on three stages, or mutations:

  • Talking horse stage, where people are building stuff with a technology focus.
  • Adding features stage. Ex: Amazon adding features like blogging.
  • Designing for experience stage; Web 2.0. Described as a backlash against too many features. Ex: Craigslist, where individual user experience trumps classic views of design

Components of Web 2.0
1. APIs
2. RSS feeds
3. Folksonomies/tagging
4. Social networks

Web 2.0 is not user-created content. That has always been there. Web 2.0 is leveraging that user-created content.

Ex: Flickr

  • Incidentally, Flickr is only #5 in popularity for photo sharing; Photobucket is the top site.
  • Flickr uses a personalized homepage. Most users never see the generic, non-user home page after signing up.
  • Flickr has a programming interface (i.e., APIs) for building new tools.
  • The geotagging and printing tools were adopted after someone outside Flickr did the development using the APIs.

1. APIs have their challenges. They make everyone a designer.
They can also create a seamless experience with code from multiple sources.

Overheard in New York

  • Example of a mashup between Twitter and Google Maps
  • Took Twitter streams and mapped them with Google Maps
  • The result is random overheard conversations from the streets of New York. Here's an example from today (4/26/07):

Old lady: This is a full sandwich. I said half sandwich.
Waiter: What's the big deal? I won't charge you for the whole thing -- just eat half.
Old lady: No, no, you don't understand -- I am claustrophobic.

Yahoo pipes

  • Yahoo provides the tools to create mashups with RSS feeds.
  • Then, users offer those mashups to the public.
  • Ex: UST Campus/Library/Community Flickr Associations, which finds Flickr images based on a University of St. Thomas news source, the same university's news blog, and a local news blog.
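The Pipes idea above can be sketched in a few lines of Python: pull the items out of several RSS 2.0 documents and merge them into one date-sorted stream. The feeds here are invented sample data, and the merge logic is only an illustration of the concept, not Yahoo Pipes' actual mechanics.

```python
# Sketch of a Pipes-style RSS mashup. The feed XML is made-up sample data.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FEED_A = """<rss version="2.0"><channel><title>Campus News</title>
<item><title>Library hours extended</title><pubDate>Mon, 19 Mar 2007 09:00:00 GMT</pubDate></item>
</channel></rss>"""

FEED_B = """<rss version="2.0"><channel><title>Community Blog</title>
<item><title>Photo walk downtown</title><pubDate>Tue, 20 Mar 2007 12:00:00 GMT</pubDate></item>
</channel></rss>"""

def parse_items(feed_xml):
    """Yield (source, title, pubDate) tuples from one RSS document."""
    root = ET.fromstring(feed_xml)
    source = root.findtext("channel/title")
    for item in root.iter("item"):
        yield (source, item.findtext("title"), item.findtext("pubDate"))

def merge_feeds(*feeds):
    """Combine items from all feeds into one newest-first list."""
    combined = []
    for feed in feeds:
        combined.extend(parse_items(feed))
    combined.sort(key=lambda t: parsedate_to_datetime(t[2]), reverse=True)
    return combined

merged = merge_feeds(FEED_A, FEED_B)
```

The output interleaves both sources by publication date, which is essentially what a Pipes user wires together graphically.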

2. RSS feeds
Challenges include

  • Explaining them to users: when do items come and go? Does a feed refresh for corrections? How do users subscribe?
  • How do users deal with the number of feeds they track? (Google reader caps at 100 posts before it starts deleting)
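The Google Reader cap mentioned above can be sketched as a simple data structure: a bounded feed that keeps only the newest N posts and silently drops older ones. The class name and the cap of 100 are illustrative assumptions based on the note.

```python
# Sketch of a reader that caps each feed at its newest N posts,
# dropping older ones, as the note about Google Reader describes.
from collections import deque

class BoundedFeed:
    def __init__(self, cap=100):
        self.posts = deque(maxlen=cap)  # oldest posts fall off automatically

    def add(self, post):
        self.posts.append(post)

feed = BoundedFeed(cap=100)
for i in range(150):
    feed.add(f"post-{i}")
# Only post-50 through post-149 survive; the first 50 were silently dropped.
```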

3. Folksonomies
Challenges include

  • How tagging is being used. For instance, are tags for “me” and “hi_you_what’s_up?” useful to anyone else?
  • Difficulty in figuring out logic behind someone else’s tag
  • Are you tagging for yourself or for others?
  • Do we monitor? Who? How?
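One rough way to probe whether tags like "me" are useful to anyone else is to count how many distinct users apply each tag: a tag used by only one person is probably personal. The data, threshold, and variable names below are all invented for illustration.

```python
# Sketch: separate shared tags from likely-personal ones by counting
# distinct users per tag. All data here is made up.
from collections import defaultdict

taggings = [  # (user, item, tag)
    ("alice", "photo1", "sunset"),
    ("bob",   "photo2", "sunset"),
    ("alice", "photo1", "me"),
    ("carol", "photo3", "hi_you_whats_up"),
]

users_per_tag = defaultdict(set)
for user, _item, tag in taggings:
    users_per_tag[tag].add(user)

shared = {t for t, users in users_per_tag.items() if len(users) > 1}
personal = {t for t, users in users_per_tag.items() if len(users) == 1}
```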

4. Social networking
Ex: Netflix

  • Ratings from friends are separate from everyone else
  • Compare your ratings with your friends
  • Games based on ratings

Challenges arise when more than one person is involved:

  • How do we prevent systems from being “gamed” (scoring/ranking people)
  • How do we encourage good behavior to propagate, and not anti-social behavior?

Challenge of the "long tail" of the Zipf curve

  • only a handful of CDs become really popular;
  • some become moderately popular;
  • many appeal to only a few people
  • retailers can get more total sales from those unique, long-tail titles (each sells only a couple of copies, but there are many of them)
  • 98% of Microsoft.com users are using 2% of the content; what do you do for the rest of the content?
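The long-tail point above can be made concrete with a toy Zipf model: if sales per title fall off as 1/rank, a few titles dominate individually, yet the many rarely-bought titles still add up to a large share of total sales. The numbers are invented purely to illustrate the shape of the curve.

```python
# Toy Zipf curve: sales for the title at rank r are proportional to 1/r.
# All figures are invented for illustration.
def zipf_sales(n_titles, scale=1000):
    return [scale // r for r in range(1, n_titles + 1)]

sales = zipf_sales(1000)
total = sum(sales)
head = sum(sales[:20])   # the 20 "hit" titles
tail = sum(sales[20:])   # the other 980 long-tail titles

tail_share = tail / total  # the long tail ends up near half of all sales
```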

New IA Challenges with Web 2.0

  • Most of IA has been dealing with known authors and static content.
  • New problem: dynamic content (same known authors)
  • Even newer problem: dynamic content; unknown authors

    Ex: LinkedIn.com
  • tracks contacts
  • can input resume content and output a new resume, but users don't tend to put content in correctly.

Possible Application for L&ET:

  • What applications in L&ET would lend themselves to APIs?

Best Practices for Form Design / Luke Wroblewski

Saturday, March 24, 2007

Forms are used online for
  • shopping
  • access (i.e. registration)
  • data input (same as physical)

And they are how users talk to companies online.

No one likes filling in forms, so let's minimize the pain.

  • Smart defaults, in line validation, clear path to completion can all help.
  • The context matters, including how frequently the person completes the form and how familiar the data is to the user.
  • Be consistent, with a single voice for errors, help, and success messages.

Ways to analyze performance include:

  • Usability testing; track errors
  • Customer support
  • Site tracking

Layout Best Practices

Top aligned labels (i.e. label is directly above the box that the user fills in)

  • good for familiar data
  • minimize time needed to complete
  • user sees label and form at same time
  • allows for flexible designing (space to accommodate different languages, for instance)
  • need more vertical space
  • contrast is important

Right aligned labels (i.e. label is to the left of the box, aligned to a right margin)

  • close association between label and box
  • harder to scan, which can suit unfamiliar forms
  • still faster completion times than top-aligned
  • fit more in vertical space

Left aligned (i.e., label is to the left of the box, aligned to the left margin; boxes are left-aligned, too)

  • best for unfamiliar data
  • easier to scan
  • takes longer to fill out
  • user has no problem associating labels and fields
  • forces user to pause/analyze

Required form fields

  • An asterisk is the convention for required fields
  • Some forms label required fields even when all fields are required; this doesn't make sense
  • Consider getting rid of optional fields (increases completion rates)
  • For forms that are mostly required, flag the optional fields instead
  • Make it easy to see required fields at a glance
  • Associate indicators (i.e. required) with labels

Field lengths

  • Expresses expectation—make smaller if only a few characters
  • Random sizes increase visual noise; keep sizes consistent if there is no expected length

Content Grouping Best Practices

  • Important for longer forms (i.e. tax forms)
  • User should be able to scan info at a high level
  • Boundaries (borders, white space) introduce new visual elements
  • Use minimal amount of visual information


Actions Best Practices

  • Avoid secondary actions
  • Minimize reset button if not important—visual representation of the actions should match importance
  • Consider the use of buttons versus links, and the visual weight of the icon

Help/Tips Best Practices

Used for:

  • unfamiliar data
  • to justify the reason for requesting the data

Can overwhelm a form if overused—dynamic solutions could help

  • Inline exposure (pops up info when you click on field) (ex: Intuit snap tax)
  • User-activated inline exposure (i.e., a small ? next to the field)—shows up below or nearby
  • Help visible and adjacent to data request

Interaction Best Practices

  • Make the path to completion clear
  • Remove unneeded fields
  • Allow for flexible data input (i.e. allow different ways to insert dates)
  • Use smart defaults
  • Direct line path to completion (literally, can be graphically illustrated)
  • Offer chance to save for longer forms
  • Use proper HTML (the tabindex attribute) to allow users to tab through a form.
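The flexible-data-input point above can be sketched server-side: accept several common ways of writing the same date rather than forcing one format. The list of accepted formats is an illustrative assumption, not a recommendation from the talk.

```python
# Sketch: accept a date in any of several common formats.
# The FORMATS list is an illustrative assumption.
from datetime import datetime

FORMATS = ["%m/%d/%Y", "%m-%d-%Y", "%B %d, %Y", "%Y-%m-%d"]

def parse_date(text):
    """Return a date for the first format that matches, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            pass
    return None
```

With this approach, "03/24/2007", "March 24, 2007", and "2007-03-24" all land on the same value instead of triggering a format error.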

Consider progressive disclosure for complex forms

  • Hide advanced options behind a link (display content below the more common needs)
  • Gradual engagement by offering subchoices
  • Dialog to progress through steps—gradually engage (give them a vested interest)
  • Give them a reason to be excited—show why the user should want first, then give the form (or move them through)
  • Use metrics for what users include in a form (i.e. less than 20% adoption of field, then it goes to an advanced option panel)
  • Progressive disclosure is most effective when user-initiated
  • Maintain clear relationship between selection and result

Feedback Best Practices

Inline validation (error messages appear as the user fills out the form instead of after submit)

  • Ex: password strength indicator
  • Show right away if username is available

Give options as user starts to type
Length limitations (show how many of available characters)
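Two of the inline-feedback ideas above, a password strength indicator and a remaining-characters counter, can be sketched as small pure functions. The scoring rules and the 140-character limit are invented for illustration and are not a security recommendation.

```python
# Sketch of two inline-validation helpers. Scoring rules are invented.
import string

def password_strength(pw):
    """Return 'weak', 'fair', or 'strong' from simple length/variety rules."""
    classes = sum([
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    if len(pw) >= 12 and classes >= 3:
        return "strong"
    if len(pw) >= 8 and classes >= 2:
        return "fair"
    return "weak"

def chars_remaining(text, limit=140):
    """How many characters the user may still type."""
    return max(limit - len(text), 0)
```

In a real form these would run on each keystroke and update an indicator next to the field, rather than waiting for submit.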

Indicating errors

  • Clear indicator
  • Give a way to resolve the error—a top-level message
  • Duplicate the language at the error spot—a secondary indicator

Indicate progress

  • Disable submit button (after user clicks) and replace with animation to indicate progress
  • "Doing something…" messages

Indicate success

  • Top level message—shows in context of form or…
  • Popup success message (in context)
  • Animation (highlight item) to indicate that it’s done

Don’t change inputs (don’t clear after submit)
Warn of likely unknown/difficult information needs before the user even gets to the form.

Possible Application for L&ET:

  • We should review our forms against these recommendations.

Mentions two online form-builders:

Thursday, April 26, 2007

Data Driven Design: Using Web Analytics to Improve Information Architecture / Andrea Wiggins

Saturday, March 24, 2007

Web analytics: think WebTrends, although there are other tools out there.

Web analytics can be used to:
  • quantify user experience audits
  • identify key performance indicators
  • compare over time with annual audits

Limitations:
Most tools are not designed to capture Rich Internet Applications (RIA); user may stay on one "page" while interacting with content.

Spiders ruin user data

  • block them out with robots.txt, and keep them from polluting the logs
  • can also identify spiders by looking at speed of visits from single user
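The second tactic above can be sketched directly: group log hits by IP and flag any "user" whose requests arrive faster than a human could click. The timestamps, the one-second threshold, and the use of IP as a user proxy are all illustrative assumptions.

```python
# Sketch: flag IPs whose median gap between requests is implausibly
# fast for a human. All data and thresholds are made up.
hits = [  # (ip, unix_timestamp)
    ("10.0.0.1", 100.0), ("10.0.0.1", 100.2), ("10.0.0.1", 100.4),
    ("10.0.0.2", 100.0), ("10.0.0.2", 130.0), ("10.0.0.2", 190.0),
]

def likely_spiders(hits, min_gap=1.0):
    """Return IPs whose median inter-request gap is under min_gap seconds."""
    by_ip = {}
    for ip, ts in hits:
        by_ip.setdefault(ip, []).append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if gaps and sorted(gaps)[len(gaps) // 2] < min_gap:
            flagged.add(ip)
    return flagged
```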

Types of Data:

Ratio of new to returning visitors

  • think about context
  • track over time and track with cross-channel marketing
  • consider the effect of timeouts

Median visit length

  • is closer to reality than an average visit length
  • can indicate depth and breadth of visit—are they digging deep or are they hopping around?
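Why median beats mean here: a few marathon sessions (or forgotten open tabs) drag the average up, while the median stays near the typical visit. The visit lengths below, in seconds, are invented sample data.

```python
# Sketch: one outlier session skews the mean but barely moves the median.
from statistics import mean, median

visit_lengths = [30, 45, 60, 60, 75, 90, 120, 3600]  # one left-open tab

avg = mean(visit_lengths)     # dragged up by the 3600-second outlier
med = median(visit_lengths)   # stays near the typical visit
```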

Click-through rates for clickable graphics require additional programming (we've done this for our Featured Connections).

Response time--be sure to check at peak load time

Server errors

  • Monitor 500 server errors, which indicate the server itself had the problem
  • Try to identify how the user got to 404 errors
  • Combined, hits to server errors should be < 0.5%
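The "< 0.5%" check above amounts to a one-line computation over the log's status codes. The helper name and the sample codes are invented for illustration.

```python
# Sketch: share of hits that returned 404 or 500 errors. Data is made up.
def error_rate(status_codes):
    """Fraction of hits that were 404s or 500s."""
    errors = sum(1 for code in status_codes if code in (404, 500))
    return errors / len(status_codes)

codes = [200] * 996 + [404, 404, 500, 200]
rate = error_rate(codes)  # 3 errors out of 1000 hits, well under 0.5%
```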

Action items:

  • Look at those dang 404 errors that show up in WebTrends more closely. Can we track where they are coming from?
  • Look at Crazy Egg analytics tool
  • Look at “leakage points”—where did users bail out of the website. Do they make sense?

Links for More Info:

Sunday, April 15, 2007

Using Search Analytics to Diagnose What’s Ailing Your IA / Rich Wiggins and Louis Rosenfeld

Saturday, March 24, 2007

Wiggins’ emphasis was best bets within search results. Rosenfeld spoke more generally about identifying problems from search logs.

Practicalities:

  • Zipf curve (long tail/short head) applies to search log—many users have unique needs
  • Look at top searches, and then dip down into the unique ones. Don’t treat all the searches as equal. Could look at top 50% of all searches, for instance.
  • Consider seasonality (by season, day, even hour). Some needs are higher by season. Could promote that content accordingly.
  • Capture search logs to a SQL database to process; relevant fields can be dumped into Excel and evaluated.
  • Use IP address plus time stamp to surmise a single user.
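The "top 50% of all searches" idea above can be sketched by sorting queries by frequency and taking the short head that accounts for half the query volume. The query strings and counts are invented sample data.

```python
# Sketch: find the head queries that cover half of all search volume.
# Query data is invented for illustration.
from collections import Counter

queries = (["hours"] * 40 + ["parking"] * 25 + ["ill"] * 15 +
           ["thesis binding"] * 2 + ["obscure topic"] * 1)

counts = Counter(queries)
total = sum(counts.values())

head, covered = [], 0
for query, n in counts.most_common():
    if covered >= total / 2:
        break
    head.append(query)
    covered += n
# 'head' is now the short list of queries worth hand-tuning (e.g. Best Bets).
```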

Ways to Use Search Logs:

  • Look at most common unique queries; are there patterns?
  • Test common queries to see what results look like.
  • Look for null results.
  • Look for too large results.
  • Can grow content to satisfy searches (ex: Netflix did this in response to “yoga” searches)
  • Look at improving search entry, results, and/or algorithm
  • Combine with field study (ex: L.L.Bean saw users starting with catalog, then taking SKU to web—answered why users were searching for SKU)
  • Fixing a trend seen in long tail could help many.
  • Look for time variations; respond by positioning Best Bets or guides seasonally.
  • Add tools for results page (i.e. options for broadening/narrowing)—this moves advanced search options from search page to results page
  • Treat best bets as not the final answer; still monkey with relevance ranking. (ex: rank company names higher if that's what users search for)
  • Consider a best bets index rather than/in addition to a site index. See the MSU A-Z index as an example of using common queries as a best bets index. A site index is difficult to build. Do you make it comprehensive? Selective?
  • Look for the page the user is searching from to identify failure points.
  • Look at top pages found through search; how can these be easier to find in navigation?
  • When cleaning up site, start with what people want—rather than complete evaluation of content.
  • Look at “tone” in search (technical or popular; specificity; acronyms; plural) to help create labels.
  • Cluster queries to see parent/child relationships; look for possible metadata fields and contents for those.
  • Sample the long tail (tends to be more research oriented)
  • Compare spikes (proper names, companies) and compare with editorial content; identify future stories. (ex: Financial Times has done this)
Links for More Info:

Wednesday, April 11, 2007

The Lost Art of Productively Losing Control / Joshua Prince-Ramus

Saturday, March 24, 2007

This was the opening keynote.


Architect Prince-Ramus spoke about a handful of projects he worked on that, despite fairly radical end results, were designed for use rather than pure aesthetics. The expectation of the talk was that information architects would find parallels between information architecture and building architecture.

Project 1: Seattle Public Library
Responded to the challenge of creating a space that would fulfill the varied expectations for a modern public library. Among the recognized conflicts is the growth of new technologies on top of the existing, long-lived book technology. In parallel are increased expectations for an urban public library as community space. (Both progressions were illustrated with graphs.) In response, the architect attempted to designate a percentage of the building to the “stable spaces” (book stacks, staff spaces, etc.) and not allow the social spaces to encroach. In other words, all roles of the library, and all formats, deserve a space.

One of the highlights of this building is that the book stacks are in one continuous spiral—much like a parking garage—within the building. This design provides a logical arrangement while also creating serendipity (users flow from one section to the next).

  • There is no down escalator, in fact, to encourage the user to browse down the stacks.
  • Call numbers are embedded in the floor at the end of each range.
  • Elevator buttons which normally indicate floor are instead labeled with call numbers.

Possible Application to L&ET:

  • Can we make the same argument about stable versus unstable spaces in our environment? Our spaces seem to be in flux, and our “stable” spaces (staffing, in particular) have encroached on social spaces, rather than the other way around.
  • How do we label our elevator buttons in the stacks? In public areas?
  • Embedded call numbers the way they did it don't make sense for us (theirs looked somewhat permanent). I wonder about carpet squares with exchangeable call numbers or an electronic display. Would the gain in navigation be worth the cost?

Project 2: CalTech Pasadena Information Sciences building
One of the early steps in this project was to chart out the different functions of the building spaces with colors—first, on a floor map and then stacked up into a bar graph. The end result looked much like a disk defrag graphic, and the conclusions may be somewhat similar.

In this building, as across the entire campus, the spaces were heavily intertwined. His point was that with everything all mixed up, users are less likely to visit that building because they can’t figure it out. Instead, they stay in a building that they’ve already learned, even though that building may be just as fragmented.

To clean it all up, the architect worked with the client to create clusters (research, factory, undergrads centers, Olympus were his labels). Like things were treated in like ways. For instance, staff office spaces ended up in a ring around the facility.

Possible Application to L&ET:

  • He promoted fishbowl conference rooms which spark conversation, even from outside a team, and promote innovation. The poster sessions offered a similar idea from AOL—not fishbowl, though.
  • The idea of defragging a space makes sense to me. Put like functions together and it seems like you’d get more efficiency.
  • I think we’ve been thinking too inside the box about staff work spaces. Should we start looking at vertical space like we did with the double-decker carrels?
  • An interesting observation from the architect: many architects (and the information architects I asked, too) work in large spaces where collaboration is easier. In fact, conversations from co-workers are seen less as a distraction and more as a way of staying involved. He found that academics viewed collaborative space more as private offices with doors that could be opened.
  • Are we too committed to the traditional work day? Do we have staff who would prefer to work different schedules and share spaces?

Links for more info: