January 16, 2009

Streaming Video Playback Speed Controls - Two Innovative Methods

One of the coolest playback features for online video, especially academic video, is a player that can speed up (or slow down) the playback of a streaming video.  Way back in the early 2000s there was a tool called Enounce that acted as a plugin to RealPlayer or Windows Media Player and added a control slider to the player.  Everything from half-speed to 5x playback, with no pitch change on the audio.  It was very effective for watching lectures or news content - for much material, you can absorb it much faster than it's spoken.  Turns out that Enounce is still available, works pretty well, and they've announced a version called MySpeed that supports embedded Flash video.

End-users can buy and install Enounce and use it on their systems.  It's a native Windows-only application and must be installed individually on each system.

OK, that's great, but I want this as a feature of my website - I want all my Flash videos to appear with a speed control for all users.  Until now, I'd been unable to find any way to do this - no one I've spoken with seems to know how to write code for the Flash Player that will permit a speed control.  I'm told it's currently not possible.

Then I came upon Bloggingheads.tv.  Bloggingheads.tv includes a Flash-based player (derived from the JW Media Player 3.2)  that has a "1.4x" button that bumps up the playback speed -- perfectly intelligible, but much quicker playback for taking in a long talk in a jiffy.  They did the impossible!

I had to know how they did it, so I did some poking around.  Turns out they didn't do the impossible; they did an end-run around it.  The playlist that their Flash player reads for each video program references two media files.  Here's the relevant snippet from the XSPF-format playlist:

<location>
    rtmp://mirror-image.bloggingheads.tv/bloggingheads/flash
</location>
<identifier>bhtv-2009-01-13-pb-jg-100x.flv</identifier>
<meta rel="alternate">
   rtmp://mirror-image.bloggingheads.tv/bloggingheads/flash/bhtv-2009-01-13-pb-jg-140x.flv
</meta>

So, they created an alternate encoding of each video, with the 1.4x timeline baked right in.  The player needed some modification to play this, but only so that the time, the duration, and the location bar all showed appropriately scaled values as the video played.  After all, a 30-minute video encoded to play at 1.4x is actually only a 21-minute file, but the timeline still needs to present it as the 30 minutes of the original content.
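The timeline arithmetic behind this is simple: every position in the fast file maps to a position on the original program timeline via the speed factor.  Here's a rough sketch of that mapping in JavaScript -- not the actual Bloggingheads/JW Player code, just the idea, with made-up names:

    // Speed factor baked into the alternate encoding.
    var SPEED = 1.4;

    // Map a position in the fast (1.4x) file onto the original program timeline.
    function displayTime(fileSeconds) {
        return fileSeconds * SPEED;
    }

    // Map a click on the (original-scale) location bar back to a seek offset
    // in the fast file.
    function seekOffset(displaySeconds) {
        return displaySeconds / SPEED;
    }

    displayTime(1286);   // a ~21-minute file displays as about 1800s, i.e. 30:00
    seekOffset(900);     // clicking 15:00 on the bar seeks to about 643s in the file

The same arithmetic handles the speed switch itself: note the displayed time when the user clicks the "1.4x" button, convert it with the appropriate factor, and seek the other stream to that offset.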

When you switch from one speed to the other while playing, the player rebuffers and seeks to the same spot in the alternate stream, so there's just a momentary pause as playback jumps from one stream to the other.

It's a great workaround - although for my purposes (user-generated content, thousands of contributors) I'd still prefer a player-based way to do it, so it can apply equally to video from all sources without requiring added backend processing.  Still...this is the only solution I've ever seen to this issue that a) works for Flash video, and b) doesn't require an additional plugin.

December 13, 2007

Facebook and Academic Institutions - Content or Context?

In the world of enterprise and educational IT, the question I keep hearing asked about Facebook is, "will this supplant our intranet/course platform/LMS/[insert your enterprise application platform of choice]?"  Students want to know if they can get their course content in Facebook.  Administrators want to know if allowing students to use Facebook for anything academic will drive users away from their school portals or course environments.  Part of the confusion is an as-yet immature understanding of what Facebook, and the custom applications you can develop for it, are really best at.

Harvard Business School Professor Andy McAfee writes, in Facebook on the Intranet? No -- Facebook AS the Intranet, about Serena Software's employing Facebook as its intranet portal.   Bill Ives goes into more detail:

One of the major flaws of existing intranets, even when they work to find stuff, is the lack of social context. It is difficult to find anything about people. Serena wanted to promote a greater connection between people. Facebook, which is both free and a great example of web 2.0, seemed to be the right answer. They established a private Facebook group for Serena employees and they built a few simple custom Facebook apps to better enable intranet functions. Now they provide links through Facebook to documents stored securely behind the firewall.

Facebook is really good at one thing - providing a social graph that connects users to each other.  Developing a Facebook application makes the most sense when you're trying to intersect a social graph of your own (such as the enrollment in a course, the list of students with the same concentration, or those in the same study group).  When developing an application to do exactly this for students, it became clear to me that Facebook's value was not in being the container through which large bits of course content, school administrative information, or academic discussions would be delivered.  We already have excellent applications for all of that, and they provide a level of access control, administrative options, and a cultural "fit" that is useful and durable. 

What Facebook does do, however, is let us publish snippets or updates to students sourced from these university systems, and drive traffic back to them for the "full story".  It lets us give a student a page within Facebook with their course schedule, links to the course sites, lists of their Facebook friends (and other participating users) who are in their courses, and various ways to message between these groups.  Facebook's friend network is social graph A; the various university roles and identities of students are graphs B/C/D/etc.  Facebook provides the means to intersect and display them in creative and student-focused ways.  It's about context for your content, not really about delivering your content.

As Serena found out in its implementation, Facebook's API, which allows iframed applications to run inside its framework, means that you can develop secure programs that combine a user's Facebook identity with their institutional identity, all without exposing any of your data or your users' institutional login credentials to Facebook.  I suspect that as more institutions explore this realm, a common understanding will emerge that Facebook and social-graph platforms like it are not a threat to or a replacement for the portal, LMS, or CMS, but a complement to them.

December 03, 2007

Video Transcript Browsing Interface

CNN has presented a unique and powerful UI for viewing and navigating the video of one of the recent Presidential debates.  Aside from doing a great job presenting the transcript alongside the video (with appropriate click-to-play-from-here functionality), along with a table of contents by topic, CNN has created a "map" of the debate, allowing a user to single out a moment, a particular speaker, or the results of a search by spoken word in a brilliant, graphical display.

What's also interesting is the implementation:  a client-side Flash applet handles the whole thing by reading a single XML file that contains the entire contents of the debate in text form.  
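I don't know CNN's actual XML schema, but the core idea is easy to sketch: each transcript segment carries a start time, a speaker, and the spoken text, and the UI simply maps between those segments and the video clock.  Something like this, with a hypothetical data structure and player API:

    // Hypothetical transcript data -- not CNN's real format.
    var transcript = [
        { start: 0,   speaker: "Moderator",   text: "Welcome to tonight's debate..." },
        { start: 42,  speaker: "Candidate A", text: "Thank you. On health care..." },
        { start: 118, speaker: "Candidate B", text: "Let me turn to the economy..." }
    ];

    // Click-to-play-from-here: jump the player to the clicked segment.
    // Assumes the player object exposes seek() and play().
    function playFrom(segment, player) {
        player.seek(segment.start);
        player.play();
    }

    // Word search: every segment mentioning the term becomes a highlighted
    // point on the debate "map".
    function findMentions(term) {
        var hits = [];
        for (var i = 0; i < transcript.length; i++) {
            if (transcript[i].text.toLowerCase().indexOf(term.toLowerCase()) != -1) {
                hits.push(transcript[i]);
            }
        }
        return hits;
    }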

It's one of the finest examples of this kind of thing that I've seen.  I'd love to know if anyone has thoughts about other situations in which this kind of interface could really add value.  The cost would be an issue - transcripts are expensive, as is massaging a transcript into the descriptive XML required for this tool.  Automation using tools like the Virage VideoLogger and Pictron's Audio Gateway can identify speakers and generate text from speech - the accuracy would certainly be far less than what CNN has done here, but for some purposes, would it be "good enough"?

April 13, 2007

Image, Audio & Video Search - Reading Content and Context

In his article, Improving Image Search, Harvard's Michael Hemment writes about a research project at UC San Diego that uses human-generated sample data to train an engine that analyses images to extract searchable metadata. 

Supervised Multiclass Labeling (SML) automatically analyses the content of images, compares it to various “learned” objects and classes, and then assigns searchable labels or keywords to the images. SML can also be used to identify content and generate keywords for different parts of the same image.

This is an interesting area, and I'm reminded of several related efforts -- all aimed at extracting useful metadata from binary media objects:
  • The Music Genome Project and its Pandora site, which use human-generated metadata to describe the music, with fields very similar in concept to the data in VIA or the seed data used in SML. 
  • Using OCR tools to identify and index text that appears in an image. Google's OCRopus project is an open-source way to do this, although commercial products like Pictron do it for images and video. 
  • Speech-recognition on audio/video content is similarly a way to try to index the otherwise opaque contents of a binary media file. What's odd is how little use this has gotten in the real world, even though the technology has been around for quite some years.

I read somewhere on the web recently (I can't recall the source) the apt observation that hugely popular video sites like YouTube are built on making video findable using very primitive metadata combined with the all-important context: Who else likes this? What else has this person created/bookmarked/shared? What comments and tags have users applied? All of these have turned out to be far more useful than a full transcript or speech-recognition search.
One burning question for me is: why is searching inside a PDF massively useful, while searching inside a video just doesn't quite hit the mark?  What's holding video and image search back?  Is it the quality of the metadata we extract and index?  Does video simply have lower information density (in its transcript) than a written article (have you ever read the transcript of a half-hour program, only to realize you could read or skim it in less than 3 minutes)?  Or do people simply use these kinds of assets differently than they do text-based documents, so different rules and benefits apply when searching?


April 06, 2007

Online Video and Web 2.0 - What's missing?

Dan Rayburn points out in his Business of Online Video blog that streaming video isn't a Web 2.0 technology.  But while Dan's point is that streaming video has been around way too long to be considered part of the Web 2.0 "fad", I think the relationship between video and Web 2.0 is more complicated than that.  

The key ingredient of "Web 2.0" technologies that makes them worthy of that label is that they have open APIs and are freeform platforms that allow user behavior to define and create value.  Harvard Business School professor Andrew McAfee says it well...

...the use of technology platforms that are initially freeform (meaning that they don't specify up front roles, identities, workflows, or interdependencies) and eventually emergent (meaning that they come over time to contain patterns and structure that can be exploited by their members).  Email is a channel, not a platform; groupware is not freeform and typically not emergent; and knowledge management systems were essentially the opposite of freeform --  they presupposed the structure of the knowledge they were meant to capture. 

...so, to build a Web 2.0 service, Andy says, 
  • Build platforms, not channels
  • Make sure they're initially freeform
  • Build in mechanisms for emergence.  These mechanisms include links, tags, powerful search...
...and, I'd add, simple APIs for combining and syndicating content from one site to another.  Sites like YouTube are on the edge of Web 2.0 because of the ease with which users can publish their content not just to YouTube, but to other sites.  Web 2.0 facilitates video mashups: videos can be embedded across sites, search results can be published as RSS, users can "mash-up" collections of video with photos from Flickr and maps from Google or Yahoo.

But, Dan's right - video isn't really Web 2.0 enough, yet.  As Microsoft's Jon Udell points out

The kinds of standard affordances that we take for granted on the textual web — select, copy, reorganize, link, paste — are missing in action on the audio-visual web. The lack of such affordances in our current crop of (mostly) proprietary media players suggests that open source and open standards can help move things along. But nobody in the open world or in the proprietary world has really figured out what those affordances need to be in the first place.

Standard ways to search within video, associate a video timeline with other media, and deep-link into video content simply don't exist.  RealPlayer and Windows Media always did offer a way to deep link using start parameters in the .ram or .asx file URLs, but the endless variety of custom Flash video players (since there isn't really an official, usable "standard" one) means that even that simple method is no longer available on most sites.  And as for search -- while web search engines routinely crawl into a Word document or a PDF file, video content search hasn't caught on, even though the technology, from (the defunct) Virage, Streamsage (now part of Comcast), Pictron, Podzinger, and others, has been around for years.
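For what it's worth, the old metafile trick looked roughly like this (from memory, so treat the exact syntax as approximate).  A .ram file could carry a start modifier right on the stream URL - e.g. rtsp://media.example.edu/lecture.rm?start=00:12:30 - and an .asx file could do the same with a STARTTIME element:

    <ASX VERSION="3.0">
      <ENTRY>
        <REF HREF="mms://media.example.edu/lecture.wmv" />
        <STARTTIME VALUE="00:12:30.0" />
      </ENTRY>
    </ASX>

There's no equivalent you can count on for the average custom Flash player, which is exactly the problem.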

So, Online Video 2.0 is yet to be born - while video is a part of the Web 2.0 ecosystem that generates value from unscripted user behavior on freeform platforms, it's not yet ready to BE one of those freeform platforms.

January 04, 2007

User-Generated Media - Challenges & Solutions for Business and Academia

Social networking and user-generated content (UGC) sites present unique technical challenges, which lead to unique business challenges.  While unexpected growth is a potential problem for any online site, it is both the holy grail and (in the spirit of "be careful what you wish for") a ticking time bomb for social networking sites. 

A new whitepaper from Akamai (also available free from streamingmedia.com) goes into some depth about the special factors that affect social networking sites.  Some highlights:
  • User-generated content sites are the fastest-growing category of web site (by unique visitors) on the Net, showing, in some cases, triple digit year-over-year growth. Of the ten fastest growing web brands, five are UGC sites (for example, Flickr and Wikipedia). 
  • Social networking/UGC sites have, by definition, unpredictable storage and bandwidth needs, making technical infrastructure (and therefore, budget and capital expense) planning a crap shoot.  Outsourced capacity on-demand is an important option to consider before you're faced with site-crippling runaway success. 
  • Success is tied closely to having a fast innovation cycle -- try stuff out, see how it works for your users.  Continually sense-and-respond to user needs to find that sweet spot of simplicity, functionality, and sustainability that makes your site sticky and social.  One way to do this is to minimize the time and effort you put into infrastructure build-out and put it into more creative endeavors. 
  • If you're an ad-driven site, performance is directly tied to revenue, as faster loading pages keep eyeballs on the site, lead to more page views per user, and therefore register more ad impressions.  When Friendster moved to Akamai's delivery network in March 2006, they saw an immediate 33% decrease in page load times, and a threefold uptick in page views.
Even for an educational institution, outsourcing certain infrastructure is appealing.  With service-oriented Web APIs, it can be easier now to work with a vendor/partner than it is to build it myself.  If I want to put up a quick video recording/encoding/sharing service for my users, I can:
  • Build it myself - not always a bad idea, and definitely a quick-and-dirty solution for a pilot or proof-of-concept, provided I have the staff and the time to move it from P-O-C to production-ready if the need arises.  
  • Acquire and deploy an inexpensive product.  I was surprised to find YouTube clones like Clip'Share and Altrasoft VideoShare for a few hundred bucks or less.  Again - good for a proof-of-concept.  May or may not offer enough for coping with real success.
  • Use a Web Service API like that from Video Egg or JumpCut to handle all the media operations, while you focus just on your website.  These services handle media input (in the case of Video Egg, from webcam and cell phone, as well as file upload), transcoding, online editing, and delivery.  They can provide a platform for rapid development of your own custom solutions, as well as a scalable foundation in case your service takes off.  
I'm generally a big fan of institutions building their media solutions in-house, but the combination of the unpredictable needs of user-generated media, the ease and excellence of some of the vendor service-based APIs, and the need to be able to innovate quickly without up-front investment in big infrastructure creates some interesting possibilities.  

The Akamai white paper, Successful Social Networking and User-Generated-Content Applications: What You Need to Know, (which, by the way, I wrote) addresses some other challenges of social and UGC sites -- how edge-caching works with dynamic content, how to control costs when growth is unpredictable, options for exercising editorial control over UGC sites, and some examples of how social networking is being used by businesses to build revenue and create new opportunities.  

November 15, 2006

Is Learning Online Like Watching Football on TV?

The challenge of effective eLearning is finding ways to leverage the medium that simply can't be equaled in solely traditional teaching environments.  Can students learn better from online instruction than from in-person instruction?  One example pointed out to me was the wildly different experience of listening to a string quartet play live in a real space as compared to listening to the radio or even a CD on a good audio system.  Does the presence, energy, acoustic power, and ambiance of that live performance extend through the electronic realm?  Sort of, but it's just not the same.  Would you willingly deny anyone the option of the in-person experience without good reason?

To me, the alternate example is the professional football game.  Sure, sitting at the top section of a stadium with 85,000 of your closest friends is a social experience with an energy that's hard to beat; but for actually watching a game, nothing beats a TV (even a small one) with instant replay, close-ups of the action, and that bright yellow line that marks the yardage for a first down.  

Which led me to this:  The challenge for eLearning and distance education is to identify the "yellow lines" of the medium -- those things that represent something inherently valuable but simply not possible in the traditional-teaching realm.  Maybe eLearning's real advantage will remain rooted not in the fact that it competes with in-person teaching, but in the fact that it allows learning where in-person teaching is not possible or practical.  But I think there are also some "yellow-line" capabilities waiting to be explored, even where educational technology supports (rather than supplants) in-person learning.  

One example of a genuinely new and interesting capability is the digital pen note-taking integration done by Tegrity in their classroom capture system.  I've long been a user of Logitech's digital pen.  The pen allows you to write on special notebook paper, and captures everything you write to your computer as a perfect digital image of the page you wrote.  You can print pages, share them via email, as well as add text and drawing to the page in the computer, making pages indexable and searchable.  What Tegrity has done is to tie the note-taking with the digital pen to the timeline of the video/slides (marketing demo video) captured during a live lecture.  Students who took notes during the class can, at their own PCs, bring up their notes on-screen alongside the lecture video. The lecture video, the instructor's notes, and the student's notes all become part of a synchronized presentation.  Notes can go from being a one-shot chance to get the main points down (sometimes at the expense of really listening) to being a guide to review and further exploration.  I don't know if it will transform teaching and learning, but it struck me as an example of a stunningly clever and useful application of technology to do something that was previously quite impossible.

There's a lot of activity in researching the effect of these technologies.  One interesting study is Lonnie Harvel's dissertation Using Student-Generated Notes as an Interface to a Digital Repository (pdf).  Harvel explores the surprisingly low use of digital repositories in education by experimenting with ways to deeply integrate lectures, student notes, and external resources. 
  

October 30, 2006

Simulations and Games for Learning - the Federation of American Scientists gets involved

In his Learning Technology blog, Harvard Business School Publishing's Denis Saulnier recently published an informative overview of educational simulations and games.  Having worked with Denis at Harvard Business School's Educational Technologies and Multimedia Development (ETMM) group on over a dozen simulations (a few are profiled here), I know the amazing pedagogical power of a well-designed simulation to evoke tangible, experiential learning among students.  I also know the more-art-than-science nature of effective simulation design - it's hard to define which facets will make a game an effective learning experience, but you know 'em when you see 'em.  

Anyway, shortly after reading Denis' thorough summary of learning simulations (and a great outline of Clark Aldrich’s Learning By Doing: A Comprehensive Guide to Simulations, Computer Games, and Pedagogy in e-Learning and Other Educational Experiences), I came across the Federation of American Scientists report from their recent Summit on Educational Games (2006).  

The FAS is concerned with American competitiveness in science and engineering.  FAS points out that:

The success of complex video games demonstrates games can teach higher-order thinking skills such as strategic thinking, interpretative analysis, problem solving, plan formulation and execution, and adaptation to rapid change. These are the skills U.S. employers increasingly seek in workers and new workforce entrants. These are the skills more Americans must have to compete with lower cost knowledge workers in other nations.  
 
The report notes that game designers have instinctively implemented many of the features of "optimal learning environments": clear learning goals, broad (and reinforcing) experiences, continuous adjustment of the challenge based on performance, encouragement of inquiry, time on task, motivation, personalization, and others. 

The summit's major findings include:
  • Educational games require players to master skills that employers want, with the potential to impact practical skills training, training individuals for high-performance situations that require complex decision-making, reinforcing seldom-used skills, teaching how experts approach problems, and team-building. 
  • Designing games for learning is different from designing games for entertainment. 
  • Research is needed to develop a sound understanding of which features of games are important for learning and why, and how to best design educational games to deliver positive learning outcomes.
  • High development costs in an uncertain market make developing complex high-production learning games too risky for video game and educational materials industries.
  • Educational institutions aren't set up to take advantage of educational technology in general, and games in particular. 
  • Large-scale evaluations of the effectiveness of educational games are needed to encourage development and adoption of gaming technology. 
The report goes on to detail the roles of government, the gaming industry, and educational institutions in filling in the knowledge gaps and figuring out how to make the clear benefits of learning simulations more widely available.  It also addresses how to scale up and reduce the cost of design, production, deployment, and assessment of games.  The full report is about 50 pages long, but it is well-written, to-the-point, and a highly recommended read for anyone interested in educational games and simulations. 

October 11, 2006

Video Editing Online - A Keyframe Extraction Script

Video processing and editing online is becoming a more common occurrence as video sharing and hosting sites are finally catching on.  YouTube, of course, is the one getting all the press this week with Google's $1.6B acquisition.  But other sites have begun offering some very impressive video editing capabilities on the Web.  EyeSpot and JumpCut (owned by Yahoo!) both offer simple, but capable, video editing, including some combination of cuts, remixes, transitions, effects, and audio tracks.  

What's new about this is that it's all done over the Web.  Tools like Apple's iMovie or Final Cut Pro, Adobe Premiere, Avid Liquid or Pinnacle Studio and others are more powerful, sure.  But being able to do this on the Web from any computer at any time, with no software to buy or install, is very cool.

That got me to wondering about the engines behind these sites -- is it all custom code or are there vendors developing and separately selling parts of these solutions?  My initial digging around didn't answer that question, but it led me to one rather simple, but very interesting video manipulation tool called VideoScript.  

Available on Windows and MacOS, VideoScript is a free tool that lets you write simple Basic-like code that manipulates, analyzes, assembles, and edits video.  Record time-lapse movies, detect motion in video frames, subtract backgrounds, extract keyframes, blend and composite frames...it's all here and it's surprisingly simple to do.  It's not entirely bug-free - I found that my own first script, which extracts keyframes from a QuickTime movie (by diff'ing frames and extracting as a JPEG any frame that differs more than 25% from its predecessor) and writes an HTML page to view them, tended to hang the program upon completion.  But it's a neat tool and sheds some light on how folks who aren't Google or Yahoo can do some Web-based video manipulation of their own.

My first VideoScript program, its source code, and its output are here in the extended entry:

Extracting keyframes from movie using VideoScript

Link to original movie on Harvard University's Video Archives



This is a movie assembled from just these keyframes

And here is the relevant part of the source code:
set frame_count to (length of m) - 1;
set new_movie to movie;
set keyIndex to 0;
set keyFrame to m[0];

set n to 0;
set i to 1;
set myHTML to "<html><head><title>Video Keyframe Extraction with VideoScript</title></head><body><br><br>";
repeat frame_count times increment i
begin
  set currentFrame to m[i];
  set diff to Math.Difference(currentFrame, keyFrame);
  if (diff > 0.25) then
  begin
    set n to n+1;
    append keyFrame to new_movie;
    set file "frame"+n+".jpg" to keyFrame;
    set myHTML to myHTML + "<img src='frame"+n+".jpg' style='border: 4px grey outset; width: 90px; height: 70px;'>\n";
    set keyIndex to i;
    set keyFrame to m[i];
  end
end
set myHTML to myHTML + "</body></html>";
set file "keyframes.html" to myHTML as text;
set file "keyframes_only.mov" to new_movie;


August 01, 2006

How do you measure the value of Instructional Technology, or any technology investment?

People try to bean count while investing in "enabling technology".  They attempt to put into financial terms the value of a content management system, of adopting streaming video, or of an intranet portal -- all the while looking for a bottom line to justify the cost.  Vendors publish whitepapers full of hard business-case numbers meant to convince IT shops to invest in a certain platform or technology.

"Switch to Flash video and you'll save $350k a year!"  It's possible.  Although when you look at the details, the ROI numbers seem more like estimates built on assumptions bolstered by guesses.  Perhaps, "Use Flash video and you'll build customer loyalty due to the excellent user experience (which can help lead to increased sales or market share)" is more realistic, if less quantified.  

This is the topic of Harvard Business School professor Andy McAfee's latest blog post, The Case Against the Business Case.  Andy notes that using hard numbers to justify IT investment is natural -- numbers are the terms that business people traditionally use to measure cost and value.  But he points out that the chain between cause and effect of IT innovation can be long, complicated, and nearly impossible to quantify.

I’ve probably seen hundreds of business cases that identify the benefits of adopting one piece of IT or another, assign a dollar value to those benefits, then ascribe that entire amount to the technology alone when calculating its ROI.  The first two steps of this process are at best estimates, and at worst pure speculation.  The final step gives no credit and assigns no value to contemporaneous individual- and organization-level changes.

Some leaders instinctively have a sense of what kind of investment is going to lead to these intangible benefits.  They seem to naturally turn an organization towards the kinds of IT investments and organizational structure that's capable of capitalizing on IT innovation.  Yet, these leaders often have an uphill battle convincing the rest of their organization to follow when an investment does not have the hard numbers to provide assurance.  Andy notes:

One half of the ‘classic’ business case— the costs— can be assessed in advance with pretty high precision.  We know by now what the main elements of an ERP, BI,  Web enablement, systems integration, etc. effort are, and what their cost drivers are.  And we also know the capabilities that different types of IT deliver if they’re adopted successfully—if the human and organizational capital are well-aligned with the information capital.

It's this last part that interests me.  Certainly, the benefits of some IT innovations can be measured directly.  A recent Boston Consulting Group study on innovation reports that  corporate spending on "innovation" is up, even while companies are not satisfied with the results of prior innovation spending.  Among the companies studied, the most popular metrics for measuring innovation were time to market, new-product sales, and return on investment.  For product development organizations, these might be good measures.  

But what about service- or knowledge-based organizations?  How do justifiable decisions get made involving technology investments?  In my experience, the most sustainable technology innovation comes from organizations with visionary leadership and a culture that provides creative staff the freedom to explore, experiment, and sometimes fail.  As Andy alludes, it's about both the nature of the innovation and the ability of the organization to capitalize on it.  But even with that, some paths lead to real business benefit and others do not.  Perhaps you can't measure which are which, but many feel that they can tell the difference when they see it.  How do you tell the difference?  

July 30, 2006

It's the little innovations that count

I stayed at a Doubletree hotel this weekend (Portland, Maine), where a tiny technology tidbit made my day.   As we were getting settled in the room, my wife asked "how can we listen to my iPod?"  I'd just got done responding, "Oops.  I brought no speakers, so we can't," when I noticed the funny-looking little clock-radio on the nightstand.  

It's truly an odd-looking device, tall, narrow, and awkward, but it had buttons on the top for AM, FM, and MP3.  MP3?  Sure enough - there was a captive cable hanging off the clock that had a 1/8" mini stereo plug that fits right into the iPod earphone jack.  

Seems like a little thing?  Well, it is.  But it was the little touch that took us from saying "decent hotel" to saying "outstanding!"  We imagined a corporate meeting somewhere in the past where someone on the team cleverly thought up this MP3 input idea, and someone else rolled his eyes and said, "you know, all they need is just a clock-radio."  It was a nice reminder -- it's often the little, simple technology innovations that make a small but cumulative impact on the bottom line.  

Could Doubletree ever measure the ROI of putting this little MP3 dongle on their room radios?  No way.  But is it the thing that'll make me look first at their brands for my next stay?  Yup.  


June 13, 2006

Podcasting and MPEG4 video -- the PSP problem

My prior post on the travails of podcasting, MPEG4, and supporting multiple devices detailed the differences between MPEG4 as it's supported by three of the most popular portable digital media devices: the iPod, the Creative Zen Vision, and the Sony PSP.  After further exploration, I've learned a few new things.

MPEG4 Woes
The MPEG4 format supported by the PSP is structurally the same as the iPod format: MPEG4 H.264 (AVC) w/ AAC audio.  For some reason that escapes me, Sony employs a customized header within the file that makes it look different -- and incompatible.  To get an iPod-format MPEG4 to play on a PSP, you have to either convert the file using software like the free PSP Video 9 or Sony's own PSP Media Manager; or use a utility that is supposed to flip the header bits: AtomChanger.  I didn't have any luck making AtomChanger work, but truthfully, I didn't spend a lot of time working at it.  
  
Podcasting
For podcasts, PSP Media Manager software is excellent and makes it easy for the user.  Although Sony sells it separately, it should be included with the device, in my opinion.  It handles RSS subscriptions, automatically does any file format conversions necessary for the PSP, also manages photos, music and games on the device, and makes the process seamless for the user.  Alternatively, PSP Video 9 combined with Videora provides a no-cost, although less seamless, solution for podcasting and file conversions.

For those of us producing podcast content for these devices, I think the best answer is still to encode for the iPod.  Audio, of course, should be MP3 - then you support everyone.  For video, software like PSP Video 9 and PSP Media Manager means that PSP users can use the same media that iPod users can.  But if you're looking to deliver video content directly online to PSP Web surfers (see the next paragraph), you'll need to provide MPEG4 files in the PSP format.

PSP for Browsing the Web

The Sony PSP is a fine wireless Web device in its own right.  It took me about ten minutes to get it up and running on my home WiFi network (802.11g), complete with WEP authentication.  The internal Web browser is adequate, although the way you "type" text  (a URL, for example) on the device is clunky.  You can surf the Web, and download audio and video files directly to the device.

February 06, 2006

User-Driven Innovation in Television - the creative ecosystem around SageTV

Want to slip TV programs over to your iPod (or other portable media viewer) automatically?  Read on...

PC Magazine last month published a feature called TV Transformed - Watch Anytime, Anywhere, on Any Device.  It's a great piece on the options now available for digital distribution and consumption of TV and video content.  One solution they didn't cover in their article is called  SageTV.  In the process of getting ready to buy a new home computer for the family, I'd done some research on Windows Media Center Edition and found the presence of DRM restrictions on recorded content to be unnecessary and unacceptable.  

I ended up deciding on SageTV, bundled with the Hauppauge PVR-350 video tuner card.  SageTV is like TiVo, but runs on your computer.  It's got all the usual Personal Video Recorder (PVR) features, like an interactive program guide, recording of individual shows or whole seasons, recording things it thinks you might like, and pause/instant replay of live TV.  The hardware includes a remote control and audio/video outputs that let you use your computer like a TV and your TV like a computer - but that's just the tip of the iceberg. 

The delightful thing about SageTV is that it's architected to be a platform for user innovation.  It comes with a set of  published APIs for everything from controlling it via command-line scripts to full Java and native C/C# APIs for customizing the system or writing your own applications.  A wide range of tools and utilities have sprung up around SageTV as users leverage the power an open platform gives them. The development community has a wiki and busy discussion forums where users and developers share ideas, code, and tips.

For example, Geoff Gerhardt at the InveterateDIY Blog has created Sage-To-iPod, a terrific utility that will automatically take your chosen selection of recorded TV programming, convert it to MPEG4/H.264 and sync it to your iPod.  Now you can go to bed early and still get The Daily Show on your iPod in time for the morning commute on the train.  There are other examples: the UI tweaks on the Ruel.net PC-TV page,  or these custom modules to tie in imdb.com movie lookup, RSS feeds, or control SageTV via a web interface.  And of course, anything you record can be burned to DVD.  

I know there are other options - including the open source MythTV.  On the scale of effort required to get up and running, MythTV requires more of an investment in time than many people are willing to make.  The sweet spot for me is that SageTV combines the ease of a commercial product with the open interfaces and invitation to tinker that make good software great.

January 31, 2006

Contextual Search API from Yahoo - Keyword Extraction for free

I've been playing with some of Yahoo's search APIs lately.  In particular, I was intrigued by the Content Analysis service, which takes a block of text, along with an optional "helper phrase" to help point to the context of the subject matter, and extracts keywords from it.  I'm always on the lookout for technologies that can help categorize or 'gist' content.  For example, the speech-to-text data extracted via voice recognition from podcasts, videos, and lectures doesn't make a transcript good enough to read, but it's usually good enough to search.  Is keyword extraction a useful tool for getting the topics from a blob of text?  Try it and see!  

The folks at the BBC certainly found the Content Analysis service useful for doing research into the connections and relations among public figures and politicians.  Using this service to extract people's names from public documents, the team was able to create "six degrees of separation"-type graphs of "who-knows-whom" (or at least "who-is-associated-with-whom") very quickly and at low cost.  

It took some time to figure out the code for this and get it all to work, but here's an example of it in action.  Here I used the text from my recent post - Digital Asset Management - Some Advice, but you can paste your own in to try it out.  When you click Run Query, the data is submitted to Yahoo's Content Analysis service via a PHP proxy on my website (to get around cross-domain scripting security restrictions in the browser), and the results pop up under the form, AJAX-style.  This query uses Yahoo's JSON output - JSON being a simple and lightweight format for data exchange.  
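Stripped of the page plumbing, the proxy version boils down to something like the sketch below.  This isn't the exact code running on this page: extract_proxy.php is a stand-in for the PHP proxy, and the parameter and result names (context, query, output, ResultSet.Result) are the ones I recall from Yahoo's documentation for the term-extraction call, so treat them as assumptions.

    // Simplified sketch of the proxy-based call.  (IE6 would need the ActiveX
    // flavor of XMLHttpRequest; omitted here for brevity.)
    function extractKeywords(text, helperPhrase, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "extract_proxy.php", true);   // hypothetical PHP proxy
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {
                var data = eval("(" + xhr.responseText + ")");  // pre-JSON.parse days
                callback(data.ResultSet.Result);                // array of extracted terms
            }
        };
        xhr.send("context=" + encodeURIComponent(text) +
                 "&query=" + encodeURIComponent(helperPhrase) +
                 "&output=json");
    }

    // Usage: pop the extracted terms up under the form, AJAX-style.
    extractKeywords("Digital asset management systems live or die by their metadata...",
                    "digital asset management",
                    function (terms) { alert(terms.join(", ")); });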

[Interactive demo: the form-based proxy version (works in any browser) - enter a helper phrase and the text to process, then click Run Query.]


There's another technique for making these AJAX calls that does not require a proxy - it employs SCRIPT tags dynamically added to the page (inserting DOM elements) with "SRC=" attributes that call the Yahoo API.  The inexplicable problem I found is that this version works in Firefox/Netscape but not in IE.  I'm unable to figure out why, since other sites using the very same code work fine.  The SCRIPT element is written to the DOM with a SRC URL which - if I copy and paste it directly into the browser - works.  But when I write the SCRIPT element to the page, IE never makes the HTTP call to retrieve it.  Unfortunately, IE's developer and debugging tools are so poor that it's difficult to find out what's going on.  If anyone has a suggestion, please share it with me.  
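For comparison, the SCRIPT-tag version works roughly like this.  The service URL below is a placeholder, but the pattern - dynamically appending a SCRIPT element whose SRC returns JSON wrapped in a callback - is the same one my page uses:

    // Bare-bones dynamic SCRIPT tag technique.  URL and parameters are placeholders.
    function jsonScriptCall(url, callbackName) {
        var s = document.createElement("script");
        s.type = "text/javascript";
        s.src = url + "&callback=" + callbackName;   // service wraps its JSON in a call
        document.getElementsByTagName("head")[0].appendChild(s);
    }

    // Global function the returned script invokes with the results.
    function showTerms(data) {
        alert(data.ResultSet.Result.join(", "));
    }

    jsonScriptCall("http://api.example.com/termExtraction?output=json" +
                   "&context=" + encodeURIComponent("text to analyze goes here"),
                   "showTerms");

Note that the entire block of text rides on the SRC URL here - which, as it turns out, is exactly what ran into IE's length limit (see the update below).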

Update - thanks to colleague Jeff Griffith at HBS, who discovered that the reason IE is not working is that the block of text submitted in the form was too long and violated a character limit that IE apparently imposes on SCRIPT SRC attributes.  Shortening the text solved the problem.


[Interactive demo: the SCRIPT tag version (seems not to work in IE, although it should) - same helper phrase and text-to-process fields.]


January 26, 2006

Resizing images in the page - a cool technology tip

Last month I wrote about Cucusoft's iPod Video Converter for converting DVDs to iPod video, and included a screenshot of the app in action.  As usual with screenshots, I had to choose between having the image appear full-size and crystal clear, or having it reduced to fit better on the page but with visible and objectionable artifacts from the resizing.  Looking around for a CSS/JavaScript solution, I came upon John Berry's elegant approach on his Agile Partners Weblog - a movable slider to resize the image on the fly.  Drag the slider to see it work!

This script uses the Prototype and script.aculo.us JavaScript libraries.  Scriptaculous builds upon Prototype and adds some amazing DHTML effects, including the slider control used in this demo.  You can view source on this page to see the complete code that does this, but in a nutshell, you:
  • download and include the Prototype and Scriptaculous JavaScript libraries in your page:

    <script src="scripts/prototype.js" type="text/javascript" language="javascript"></script>
    <script src="scripts/scriptaculous.js" type="text/javascript" language="javascript"></script>

  • add the HTML for the slider and the image file. Note that you may resize a collection of images by adding additional <DIV class='scale-image'> elements within the enclosing <div>.

    <div style="border: 0px solid #ddd; width: 695px; overflow: auto; float:left;">
      <div class="scale-image" style="width: 695px; padding: 10px; float: left;">
        <img src="http://www.emediacommunications.biz/files/cucusoftdvdtoipod.jpg" width="100%"/>
      </div>
    </div>

  • and the HTML for the slider itself
    <div id="track1" style="border:1px solid #BBCCDD; width: 200px; background-image: url('files/scaler_slider_track.gif'); background-repeat: repeat-x; background-position: center left; height:18px; margin: 4px 0 0 10px;">
      <div id="handle1" style="width: 18px; height: 18px;">
      <img src="files/scaler_slider_gray.gif"/>
      </div><a href="#" style="font-size:small; float:right;" onClick="setSlider(1); return false;">[view full-size]</a>
    </div>

  • finally, include the custom resize_slider.js script in your page. It must go after the slider HTML.

    <script type="text/javascript" src="resize_slider.js" ></script>

Here's another great example of this script in action.
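For the curious, the heart of such a script is roughly the following.  This is a sketch of the approach, not John Berry's actual resize_slider.js; it assumes Prototype and script.aculo.us are already loaded and uses the ids and class names from the markup above.

    // Collect the resizable containers (the img inside each one is set to
    // width:100%, so resizing the container resizes the image).
    var scaled = document.getElementsByClassName("scale-image");
    var maxWidth = 695;

    new Control.Slider("handle1", "track1", {
        range: $R(100, maxWidth),      // slider value == container width in pixels
        sliderValue: maxWidth,
        onSlide: function (width) {
            for (var i = 0; i < scaled.length; i++) {
                scaled[i].style.width = Math.round(width) + "px";
            }
        }
    });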

January 06, 2006

Instructional Technology Innovation for Business Education

Business Week has done a quick rundown this week of how B-Schools Promote Better Learning Through Technology.  They surveyed 27 top B-Schools about how technology is affecting teaching and learning at each school.  Topics range from Wikis to Blogs to Podcasts, as well as some interesting technology hybrids (such as audio-annotated Excel spreadsheet tutorials) and classroom technology.  HBS isn't among those schools profiled, but some of our work in these areas is detailed on the HBS IT Website.

One point raised by the article is a most important fact about education - business education is fundamentally a social process. 

Will these technologies eventually make face-to-face classroom meetings obsolete? Not a chance, say B-school faculty members. Instead, implementing these new technologies is a way for them to free up time in the classroom for activities like business games, simulations, debates, and discussions.

In his recent book, In the Bubble: Designing in a Complex World, John Thackara echoes this theme, and talks about HBS' approach to Instructional Technology as a means to enhance the interpersonal experience of learning rather than to replace it.  Here's a snippet.

Simulations, databases, statistical and industry analyses, are intensively used learning 'objects' among Harvard's MBA students and researchers. Online cases, audiovisual material, and computer-based exercises are useful extras, and "online is a microcosm of the new working environment graduates will encounter when they leave". "The goal", says Bouthillier, "is the emergence of Harvard Business School as an integrated enterprise that organises and connects information, and people, in a dynamic and continuous way".

Business schools like Harvard's are working hard to add value to – not substitute – a central function of universities: connectivity among a community of scholars and peers. Their approach uses the internet to bring people together – not the opposite, as with pure distance education. Learning at all levels, as John Seely Brown has observed, “relies ultimately on personal interaction and, in particular, on a range of implicit and peripheral forms of communication that technology is still very far from being able to handle”.

December 20, 2005

A Wiki, turbocharged

I've been working on solutions for a learning exercise that will team up students from a dozen universities around the world.  The teams will collaborate to produce a business plan, but they need to work asynchronously due to the global time issues.  I've been looking at Wikis as one part of the solution - a common workspace where teams can group-edit a set of documents they're preparing.  

But Wikis have disadvantages - they're too geeky to use, and very limited in the kinds of documents you can build.  If you're used to Word, Excel, or even a decent HTML builder like Nvu, the best wiki may still cramp your style.  But then I came upon Writely, by way of Dave Lee at the Learning Circuits blog and Harold Jarche's blog.

Writely is a lot of things - you can start with a Word or OpenOffice document, edit it online as HTML, upload it to your blog, or export it as Word, OpenOffice, or RTF.  It tracks versions, revision history, and diffs, all by user and date.  From the point of view of my immediate need - a collaborative workspace for globally distributed students - it's a wiki with rich formatting and tons of input and output options.

Officially a Beta, Writely may not be useful for my immediate purpose.  As with many corporate or institutional uses, I'd need some kind of custom branding, an easier way to manage accounts and access, and more peace of mind than that provided by a "beta" site.  For this project, we'll continue to look for a wiki that isn't awful...but something like a fully-baked Writely will be the standard to meet. 

December 15, 2005

Managing Video Content  - "Like Netflix, Only Better?"

The Videotools Video Content Management System, which my team developed at Harvard Business School, is a first-place winner of the 9th Annual Process Innovation Award by Kinetic Information.  Videotools was one of six winners in the Innovative Solutions Category (Recognizing Superior Solutions for their Creativity and Effectiveness).  Specifically, they look for process improvement -- those applications that best exemplify how technology can be used for business benefit.  

A recent Campus Technology article on Digital Libraries by Matt Villano profiled Videotools, introducing it as "Like Netflix, Only Better."  It's flattering, even if that's a bit of a stretch!  But, Videotools does make an impact on the institution, by providing three services:
  • Managing and automating the encoding, metadata extraction and collection, and publishing of digital video in various (and multiple) formats and bitrates.
  • Managing permissions, roles and collections, and providing users with a video and media portal where they can search, organize, and share video content.
  • Providing delivery management that gives each video clip a unique URL and applies rulesets to seamlessly determine a user's permission to view the video, detect their network location and preferred format/bitrate/size, and generate a metafile (.ram, .asx, etc.) that gets the right video to the user quickly.  (For example, the same URL that opens a 1.5Mbps RealVideo at full-screen when accessed from a classroom may provide 300kbps Real SureStream or 250kbps Flash video via HTTP when accessed from home.  A rough sketch of this rule logic follows below.)
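To make that last point concrete, here's a very rough sketch of the rule logic.  This is hypothetical code, not the actual Videotools implementation - the addresses, bitrates, and function names are all placeholders:

    // Stand-in permission check.
    function userMayView(user, clip) {
        return clip.viewers.indexOf(user.id) != -1;
    }

    // Resolve one stable clip URL into the right stream for this user.
    function resolveClip(clip, user) {
        if (!userMayView(user, clip)) {
            return { error: "not authorized" };
        }
        // Network rule: campus/classroom addresses get the full-rate stream.
        var onCampus = user.ipAddress.indexOf("128.103.") == 0;   // placeholder test
        if (onCampus) {
            return { metafile: clip.id + ".ram",
                     stream:   "rtsp://media.example.edu/" + clip.id + "_1500k.rm" };
        }
        // Off-campus: lower bitrate, in the user's preferred format, over HTTP.
        if (user.format == "flash") {
            return { metafile: clip.id + ".xml",
                     stream:   "http://media.example.edu/" + clip.id + "_250k.flv" };
        }
        return { metafile: clip.id + ".ram",
                 stream:   "http://media.example.edu/" + clip.id + "_300k.rm" };
    }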
More information about how we designed and built Videotools, along with our  philosophy of how to think about these kinds of projects, can be found in:

November 28, 2005

Enterprise Blogs as Content Management

This week's Gilbane Conference on Content  Management will feature a number of sessions related to Blogs, Wikis and RSS as tools for collaboration, knowledge management, and publishing applications in corporate environments.  

Coincidentally, this week's HBS Working Knowledge from Harvard Business School includes a terrific article: Does Your Company Belong in the Blogosphere?  According to HBS, corporate blogging is catching on, providing a low cost way to:
  • Influence the public "conversation" about your company
  • Enhance brand visibility and credibility
  • Achieve customer intimacy
HBS makes a few good points about how to keep a corporate weblog effective. From the article: 
  1. Have a distinct focus and goal.  Companies need to think about the objectives of their blog.  "You need to set expectations very carefully as to what a corporate blog is going to be about. People will expect you to discuss everything about your company, but you need to stay on topic as explained and introduced," says Michael Wiley, the director of new media at GM.

  2. Feature an authentic voice. "Don't let the PR department write your blog. Bloggers will sniff it out, and when they do, you will lose all credibility," says consultant Debbie Weil, creator of the BlogWrite for CEOs blog.

  3. Be open to comment. If you don't want to hear from your customers and critics in a public environment, don't blog.
Finally, HBS WK closes with a great final word:

Advises Pete Blackshaw of Intelliseek, a marketing intelligence firm: "If your legal department requires three weeks' review time before you turn around a posting for your blog, you are not a good candidate for blogging."

These points will be a great complement to the discussions happening in Boston this week at Gilbane.

November 27, 2005

How to discourage innovation: measure everything

(title borrowed from Idea Festival) Of all the quotes that I came across in my exploration that started at Rod Boothby's Rigid Process can Kill Innovation post on his Innovation Creators blog,  my favorite is this: "process is an embedded reaction to prior stupidity."  (From Ross Mayfield's essay: The End of Process).  Mayfield says:

Because of constant change in our environment, processes are outdated immediately after they are designed. The 90s business process re-engineering model intended to introduce change, but was driven by experts which simply delivered another set of frozen processes.

The discussion is about innovation, and running an innovative organization.  Boothby addresses the balance between necessary process and empowering standards.

I think it is important to note that a structured environment for supporting innovation, with some process for sharing information and ideas, is fine - but those standards are standards of interaction - they are not standards of thought and not standards for what innovative solutions are built.

He goes on to reference work by Harvard Business School's Michael Tushman and Wharton's Mary Benner showing how process management programs discourage innovation.  

Process management can drag organizations down and dampen innovation. "In the appropriate setting, process management activities can help companies improve efficiency, but the risk is that you misapply these programs, in particular in areas where people are supposed to be innovative," notes Benner. "Brand new technologies to produce products that don't exist are difficult to measure. This kind of innovation may be crowded out when you focus too much on processes you can measure."

As someone who runs an innovative software development organization, I can attest to the challenge of maintaining balance.  You need enough process to keep the business running, but overall, the innovation comes from highly  talented, informed people working in a relatively process-free environment.  A former boss and mentor recently showed me the body of work her small, innovative team is doing at her new job.  The services and architecture being deployed online are dramatically impacting the entire business of a major institution with over 20,000 employees.  Her comment says it all:

The reason we can do this is because we minimize process.

The other lesson from that success is about loose coupling between enterprise applications. I'll be talking about that in my talk at the Gilbane Conference on Content Management in Boston this week. More to come on that...

November 08, 2005

Make Magazine - Freedom to Innovate

When I was a kid, I was a tinkerer.  Wiring my bedroom with speakers and lights, trips to Radio Shack for pilot lights and toggle switches; later a Ham Radio License (KA1CTX) and building shortwave transmitters, VHF transceivers, and assorted other electronic gizmos, and refitting my '73 Grand Am with a Holley 4-barrel and headers.  I recall, during my high school years (serious geek, before the word 'geek' was coined), having to explain to a girlfriend what 'soldering' meant.

The spirit of making it yourself seemed long dead...until I noticed the new Make magazine.  I happened upon a broadcast of the excellent On Point on Boston's WBUR, featuring Dale Dougherty, who happened to be my first editor on a professional writing gig (WebTechniques Magazine, 1998, media linked here), talking about his new project.  In this era of record companies telling you how you can listen and movie studios telling you how and where you can watch...the spirit of tinkering and making it yourself is still alive in Make Magazine.  Make is the journal of innovation, the literary embodiment of the spirit of do-it-yourself, modify-if-you-will, freedom-to-tinker that made America great.

November 07, 2005

Skunkworks

How do companies and educational institutions create environments that foster the creativity that's the key to competitive advantage?  Here's an excellent reference, by way of John Thackara at Doors of Perception, a Netherlands-based design research institute: 

My in-tray is groaning under the weight of books, pamphlets and reports on all things Creative. Creativity is one of those Good Things (like Community) that is being rendered tedious by too much analysis by economists and policy makers. A welcome exception is this online report of a workshop on design principles for tools to support creative thinking. It's by some wise US researchers - among them, Ben Shneiderman, Mitch Resnick and Ted Selker. http://www.cs.umd.edu/hcil/CST/report.html
