Book Review: HTML5 For Web Designers (A Book Apart)

HTML5 For Web Designers by Jeremy Keith
My rating: 4 of 5 stars

I had pre-ordered this book and received it yesterday – it took me just over an hour (the duration of my commute into NYC) to zip through it. Based on this, my quick review.

The book is a slim 86 pages. Given the amount of detail in the HTML5 spec, this may seem lightweight. In fact, the author spends the first two (of only six) chapters discussing the history and process behind the creation of the spec — which was unsettling. BUT… once you get to Chapter 3 (Rich Media) through Chapter 6 (Web Forms 2.0, Semantics, and Using HTML5 Today), you immediately derive a benefit from the brevity.

I view this book as an HTML5 buffet. You get a quick taste of all the different flavors and features that make the spec so compelling to web designers — but with sufficient tools and pointers for those who want a longer ‘dinner’ on the aspects of primary interest.

The key takeaways for me:

  • HTML5 favors practice over theory and, as the author puts it, “paves the cowpaths” rather than trying to forge a new road that will require a new learning curve from web designers.
  • Transparency tops lock-in. This should make rich media content easier to search, index and manipulate by not only making semantics visible but making every interaction with that content observable to the application.
  • Adoption is relatively risk-free. While browser support is not yet ubiquitous, the author explains a few ways in which designers can evolve their web applications while still playing nice with browsers that have yet to catch up.
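One such graceful-degradation approach (a minimal sketch of the general pattern, not a verbatim example from the book; the file paths are placeholders) is to nest fallback content inside the new rich-media elements, which older browsers simply ignore:

```html
<!-- HTML5 video with fallback for browsers that have yet to catch up.
     File paths below are placeholders. -->
<video controls>
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.ogv" type="video/ogg">
  <!-- Browsers that don't understand <video> skip the tags above
       and render whatever markup appears here instead -->
  <a href="clip.mp4">Download the video</a>
</video>
```

A browser that supports HTML5 plays the video natively; everyone else still gets a working link, so nothing breaks either way.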

Summary: Loved the buffet. Ready for a week’s worth of dinners.

View all my reviews >>

(Liveblogging) Big Query and Prediction APIs (#io2010)

Notes from the session:

Google infrastructure takes care of data storage and query scaling — and exposes Google’s deep analytics capability to users to leverage in custom applications.

  1. Benefits – scalability, security, sharing, easy integration with GAE, Google Spreadsheets
  2. 3 steps: Upload (data to Google Storage), Process (import to tables to train a model), Act (run queries and make predictions)
  3. BigQuery and Prediction APIs bridge user data and user apps directly in the cloud
  4. Security: SSL for securing interactions, user owns data (respects user ACLs)
  5. Many use cases: interactive tools, spam and trend detection, web dashboards, network optimization
  6. Deep dive: consider the use case of monitoring a large network of machines to detect network issues or threats.
  7. The M-Lab “open platform for advanced network research” (http://www.measurementlab.net) — we’ve imported their data into BigQuery so they can analyze their data with our tools. Doing a demo with 60 billion rows of data.
  8. BigQuery interface uses simple SQL syntax — demo showed a query that filtered that data, normalized results and returned them in a table, all within seconds (very responsive in real time)
  9. Key capabilities of BigQuery: Scalable (billions of rows), Simple (queries in SQL exposed via web API), Fast.
  10. No need to worry about indices, sharding data or defining keys — BigQuery import takes care of it all
  11. No need to provision machines or resources – queries executed via simple API
  12. Writing queries: a compact subset of SQL is supported. Common functions (math, string, time) are supported for grouping or ordering. Also added statistical approximations (allowing a tradeoff of accuracy for speed) — e.g., TOP, COUNT DISTINCT
  13. API: standard RESTful interface.
    GET /bigquery/v1/tables/{table name}
    GET /bigquery/v1/query?q={query}
    Returns a JSON response

  14. Security and privacy: supports common Google auth (ClientLogin, OAuth, AuthSub). HTTPS support (protects data, credentials); uses Google Storage for Developers to manage access
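To make the RESTful interface concrete, here is a hypothetical sketch of building requests against the two endpoints noted above. Only the /bigquery/v1/… paths come from the session; the host name, table name, and sample query are assumptions for illustration.

```python
# Sketch of constructing BigQuery v1 REST calls. The BASE host is an
# assumption; only the path shapes come from the session notes.
from urllib.parse import quote

BASE = "https://www.googleapis.com/bigquery/v1"  # assumed host

def table_url(table_name: str) -> str:
    """Build the GET URL for table metadata: /tables/{table name}."""
    return f"{BASE}/tables/{quote(table_name)}"

def query_url(sql: str) -> str:
    """Build the GET URL that runs a query: /query?q={query}."""
    return f"{BASE}/query?q={quote(sql)}"

# Example: a filtered aggregate over a hypothetical measurements table,
# using COUNT from the supported SQL subset.
url = query_url("SELECT COUNT(*) FROM measurements WHERE rtt > 100")
# An authenticated GET to `url` would return the result set as JSON.
```

No client library, sharding logic, or index management appears anywhere in this flow — which is the point the session kept making.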

F8, Chirp and the increasing importance of the semantic web

As far as tech conferences and announcements go, April 2010 has certainly been an interesting month, to put it mildly.

First came Chirp, the first “official” Twitter Developer Conference (Apr 14-15, San Francisco) — the keynotes were streamed live and recordings can still be viewed. Aside from some interesting stats, the notable announcements were support for “Points of Interest” (going from machine-friendly location coordinates to user-friendly ‘places’ with semantic utility), “User Streams” (supporting real-time push updates to clients without rate limiting), “@anywhere” (a platform for integrating Twitter data directly into a website’s content pages) and “Annotations” (the ability to attach structured metadata to a tweet).

Of these, the idea of annotations has captured the most buzz — not only because it now allows for additional context to be attached without impacting the 140-character limit, but also because it can be used by end-applications to add or derive richer semantic value from otherwise terse content. A number of ideas have already been proposed for the use of such annotations — and the list will only grow longer once the feature is released for public and developer consumption. (Anticipated availability = end of 2nd Quarter 2010)

And today, Facebook upped the ante by unveiling its new features and capabilities at the Facebook F8 Developer Conference. As with Chirp, the keynotes were streamed live and archived — but so were all the sessions and backstage conversations. I strongly recommend that readers take time to check out at least Mark Zuckerberg’s Keynote and the “New Tools” session. While F8 had many announcements, the game-changer that debuted today was their “Open Graph” vision which allows any arbitrary web page to be integrated seamlessly into a user’s social graph.

In essence, with Open Graph, any website can be represented as a discrete object in the graph, and can be “connected to” and “interacted with” by the user — and be reasoned upon or exploited by Facebook applications. The key to Open Graph is the Open Graph Protocol specification which describes structured data (<meta> tags) that should be added to a website (page source) in order to enable the Facebook platform to import it as a graph object. Required tags include title, type (similar to ‘category’), url (translates into unique id for object) and image (to represent object in graph). In particular, the list of specified types underscores the huge impact this structured data can have — by simply implementing the Open Graph protocol, pages can not only get personalized (to reflect the Facebook user who interacted with them) but are now automatically indexed and surfaced in any related user collection or query against that type (category).
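Concretely, the required properties translate into a handful of <meta> tags in a page’s <head>. The property names below are from the Open Graph Protocol; the content values are placeholders:

```html
<head>
  <title>Example Article</title>
  <!-- Required Open Graph properties; content values are placeholders -->
  <meta property="og:title" content="Example Article" />
  <meta property="og:type" content="article" />
  <meta property="og:url" content="http://example.com/article" />
  <meta property="og:image" content="http://example.com/thumbnail.jpg" />
</head>
```

That is the entire barrier to entry — four tags, and an arbitrary page becomes a typed, addressable object in the graph.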

If the Open Graph Protocol allows the outside world to be imported seamlessly into the Facebook ecosystem, then the new “Social Plugins” feature allows Facebook data to be exported into the outside world for personalized web-views. With social plugins, a Facebook user can browse the web and have every page “framed” with comments or content from his social graph as pertains to that page. There’s a lot more to be learnt about this of course — and as the days pass, we should begin to see the rollout of experiences that incorporate these features in new ways.

However, if we step back from both Chirp and F8, two similarities emerge that are both representative of the strong paradigm shift that is the social semantic web:

1. From destination to decentralization. Both Twitter and Facebook started off as destination sites (all value locked into the portal and exposed to applications) but are moving towards decentralized usage with Twitter @anywhere and FB social plugins.

2. Structured data emergence.  Twitter with its annotations feature, and Facebook with its OpenGraph protocol, are both opening up their platforms to embrace richer metadata. With the extensive reach of both platforms (in users and applications), this could be the tipping point for more ubiquitous support for the semantic web.

Update
Facebook “Like” buttons using the OpenGraph protocol are already beginning to proliferate. In the interests of try-then-talk, here is a link to the Facebook page that describes how to add a Like (or Recommend) button to any site, or integrate other FB social plugins. A look at the options used to auto-generate the HTML (or FBML) code snippet for the button hints at the real potential for expansion of this idea to all other kinds of social interactions from the hosting webpage.

The Evolution of Social Search

I recently had the pleasure of attending my first New York Semantic Web Meetup (held Mar 25, 2010) not just as a participant but also as a presenter. Thanks primarily to organizer Marco Neumann’s efforts and enthusiasm, this session actually included three talks — with focus on User Interfaces for the Semantic Web, Social Search Space and the Factual API — and drew a packed house of attendees ranging from tech bloggers and diverse technologists to marketing and start-up folks.

I was surprised and more than a little pleased to see the level of interest that social search generated across the board at this meeting. My intuition is that we have all, at some point or another, conducted a query that reflected social search behaviors without ever being aware of it: posting a question to an open forum, asking your followers on Twitter for an opinion, or sharing your photos/tips/reviews on sites like Facebook, Foursquare and Amazon. In some sense, every one of us has been either a producer or a consumer of social data that came up as a “relevant” result to a search query.

There is definitely a lot of interesting research and practice in this space and my talk was perhaps just the tip of the iceberg, serving more as a starting point for further exploration. My slides are available on SlideShare (link here) and Daniel Tunkelang also referenced the talk (post here) on his excellent blog www.thenoisychannel.com.

The slides were designed to be a backdrop for interactive discussion and may not necessarily provide all the context (and navigational links) in this format. To help bridge that gap, I have made some of my notes from those slides available. These pages also explicitly call out the hyperlinks for any referenced sites or recommended reading — hope you find them useful.

The Evolution Of Social Search (Handout) — a PDF version of the notes with slides.

As always, comments and feedback are most welcome. :-)

Top 10 Signs you may be a @foursquare addict..

Update:
Apparently there are folks out there who are waay more addicted than I am — a must-watch talk from Dennis Crowley of Foursquare at Where2.0 on March 30 2010
http://en.oreilly.com/where2010/public/content/livestream

With a tip of the hat to Dave Letterman — his humor got me through grad school. Long may he reign ..

#10.
When you hear the word Blender .. your first thought has nothing to do with smoothies.

#9.
You ask your spouse to stay parked for just a bit longer in random malls so you can find a data signal and check in.

#8.
He doesn’t argue. Heck, he doesn’t even flinch at the giddy exclamations of ‘I’m the Mayor!! I’m the Mayor!’

#7.
He actually takes your phone and does checkins for you just so you’ll finish doing the real tasks you came to do at the store.

#6.
You realize that there was an Early Adopter badge at SXSW 2009 … and spend sleepless nights thinking of the ‘one that got away’

#5.
Your flight gets delayed .. and you rejoice inwardly at the thought that, given the large crowd gathered over time, you might JUST GET THAT SWARM BADGE. And then you realize other flyers have real lives and that you’re the dork. And then you spend a sleepless flight … see #6.

#4.
You convince your friends that you .. a lifelong vegetarian .. have absolutely rediscovered your love for that wonderful packed steakhouse atmosphere. (Moral: If at first you don’t succeed .. try try again. Now where’s dat dere swarm badge)

#3.
You realize “10” is a really small number… how do I <3 thee? Let me count the ways.. yadda yadda yadda

#2.
You see a Burger King ad (it’s all about the crown people ..) .. and you feel warm and fuzzy all over.

#1.
You land in a new city and as soon as the plane touches down, you hit check in. Ten minutes later you realize you still need to call home and let them know you arrived safely. … and before you can say a word your spouse says ‘hey there were 2 others who checked in at the same time.. are they on your flight? ‘ And you realize what a great guy he is. :-)

I would say more but I’m hopped on all the caffeine it took me to earn my Barista badge.

Next Post topic:
Founding Foursquares Anonymous. To paraphrase a famous line: you can check-in any time you like but you can never leave. And yes, there’s probably going to be a badge for that.

Typing this out on my Android WordPress client. Nice interface.

From Privacy Preservation to Privacy Pragmatism

I actually started this post a long time ago, but the real world intervened and I never got to publish it. Looking back over the past weeks, it probably makes more sense now than ever — so here goes.

When you work with mobile and social applications as I have, “privacy preservation” is a term that invariably rears its ugly head. Location-based services? Oh, users won’t like others knowing where they are — it’s stalking. Health alerts? Oh, this is sensitive data — we’ll never clear HIPAA. Peer-to-peer ad hoc networking? Oh no — I don’t want some stranger nearby to see my personal photos or know what music I listen to. And so, as technologists, we pare the features down and overload the configuration settings till the user either feels underwhelmed by the utility or overwhelmed by the maintenance.

That said, we now seem to be in a stage where the notion of privacy is becoming fuzzier. Services like Twitter, Gowalla and Foursquare are promoting “voluntary disclosure” of information by users — to perfect strangers. Privacy is typically a simple cutoff switch — be public and share your data, or be private and manually oversee who you share data with and when. And if the statistics are to be believed, Foursquare is catching on and Twitter is going strong with an annual growth rate of over a thousand percent. And services like this are becoming the underpinnings of a new slew of social presence, sentiment mining and analytics applications that openly seek to share, slice and dice the data — exposing hidden traits and increasing the visibility of personal data through contextual or domain-specific interfaces.

And as expected, with popularity came paranoia. We’ve all seen the buzz created by PleaseRobMe — a site that rebrands location updates as an indication that the user is not home. Of course, this conveniently forgets that (a) one person checking in elsewhere doesn’t mean the house is empty, (b) it’s common practice for the working population to be outside the home during work hours — so is every working man a candidate now? and (c) since robbery requires physical proximity, it would be so much easier to just watch for people to leave.

But it’s (IMO) flashy sites like this that give privacy a bad name and mask the more important issues. Personally, I find it more interesting that companies mine social data (including blogs like this one) to profile users discreetly and track their interests. And translating limited social interaction data into a concrete user identity may not be too difficult either, as this hack shows. A recent report indicated that phone carriers could potentially determine your exact location (and intent) simply from cell traces and the patterns of activity they indicate.

So, what can we learn from all this? I think there are two key insights here.

  1. Don’t underestimate the user. Users will disclose information voluntarily if they see a value to that disclosure. And disclosed information is better than inferred profiles. As Dennis Crowley of Foursquare puts it, “The data set that people want you to have about them is better than things that are collected passively about them.”
  2. Replace preservation with pragmatism. As Scott McNealy famously said, “Privacy is dead. Get over it.” With sufficient effort and computing power, it will always be possible to find some relevant information about any person in any context. No one device or application can ever assert complete control over the information dissemination ecosystem. Perhaps a new way to think about these things is to proactively make users aware of the potential penalties associated with different data-sharing actions, always assuming that information will go public. Put users in control of disseminating the data rather than in fear of an involuntary or out-of-context disclosure.

In the digital world, just as in the physical one, we should always hope for the best but be prepared for the worst — and let the market decide the policies through their actions (or lack thereof) in using the related applications or services.

Android Ahoy!

This is a test post from my MotoCliq Android phone. May this herald the dawn of a new and renewed commitment to blogging .. in more than 140 characters that is..
