Monday, December 27, 2010

2010-12-27: Google Summer Internship, Zürich Switzerland

"Hello Hany! ... We are glad to inform you that you have been accepted into this year's summer internship program at Google Zürich GmbH!" Call me a geek, but these were the best words I had ever heard! I now work for Google, well, in one way or another!

After struggling with visa issues I finally got my Swiss Schengen visa and work permit. The Swiss are very strict and precise; they thought I was two people, one named Hany Khalil and the other Hany SalahEldeen! Well, I don't blame them (FYI, in Egypt we don't have the concept of a family name; your name is a concatenation of your ancestors' names: my name, then my father's, then his father's, etc.). All my life I have been called Hany SalahEldeen, but for some reason the American embassy in Cairo decided that my grandfather's name, Khalil, suited me better.

"Ich spreche kein Deutsch!" or "I don't speak German" was the sentence I kept repeating to myself on the plane to Zürich; you never know when it might come in handy! I was brushing up my old French as well, which turned out to be useless once I arrived in Zürich and realized that French is the main language in Geneva, not Zürich. But I didn't care ... I was in Google! ... I am a Googler! ... I even got an email address with my first name!

On the 6th of July I landed in Geneva, then took the train to Zürich Hauptbahnhof (which means "main station"; try to keep up with the German words, or should I say Swiss-German words?). The Swiss really fascinate me; they know the real concept of time (well, they do have the best clocks in the world). If you want to call something really punctual or accurate you say it's Swiss, or clockwork, which also implies ... Swiss. Dragging my bag from the station, still barely able to walk after my leg surgery, I reached the tram stop. When they say it will arrive at 6:43, they actually mean it. I arrived at the ETH student residence where I had sublet a room for the next three months, settled my stuff, and fell asleep.

At 9 am the next morning I was in the Google Zürich lobby. I met the other interns, and after an introductory session we were taken on a tour through all three huge buildings (I kept losing my way for the first three days, but maps were everywhere). I met fellow interns there who later became great friends. The first two weeks were scheduled as the training phase, including sessions and tutorials. I've got to say, when you get access to all this food, candy, games and entertainment (foosball tables, ping-pong tables, Xbox, PS3, Rock Band, pool, musical instruments, they've even got a massage and meditation room!), you get really distracted at the beginning, but that faded in the following weeks. And I loved the idea: if you spoil your employees and make them happy, they will feel ownership of the company and commitment to it, and thus they will produce amazing work. That was the motto.

My host and manager was very excited and eager to start, and so was I. I was the first intern to work under his supervision. He was a mentor, always there to help and give good advice, giving me room to work, create and think outside the box, and above all he was a good friend. That's mostly the theme among all the employees there: lightweight and informal, but respectful of course. Later that week I had a stand-up coffee meeting with a guy who, I later learned, invented the automated language detection in Google Translate! I was working in the MENA (Middle East and North Africa) team on a project allied with the Google Translate team. I wish I could describe my project, but the NDA (Non-Disclosure Agreement) I signed with Google prevents me, as it is a cool new project, and by the end of the three months I had successfully built a huge portion of it. When it is released I will let you know!

Transparency and trust: that's what I kept thinking of while I was working. You have access to all the resources and individuals, all available to help you proceed with your project. You can mail anyone and say "hey, I want to ask you something!" and he or she will answer immediately. If you are stuck with a certain program or library you can ask; there are experts in it on the mailing list. Maybe you can even find the guy who actually invented it and wrote the whole thing! (As was the case with Vim; you can find Sergey Brin, Larry Page and Vint Cerf on the mailing lists too!) The development process is totally different at Google. Yes, it is agile, and stand-up meetings are more common than coffee in Italy, but there are other considerations. You want to meet deadlines and race to be innovative, but you also have to produce code that is extremely scalable, dependable, thoroughly tested, readable, and follows the style conventions. Handover to another engineer shouldn't take a long time. I had to throw away most of what I knew in C++ and adapt to the new framework of libraries, Bigtables, MapReduces, etc. If you need a functionality, someone has probably written it before, so go directly to Code Search and access the code base.

TGIF (Thank Google It's Friday!) gatherings are the best weekly gatherings ever! You meet people from different teams in a social setting, relax, laugh, have fun, and there's even karaoke, which was a bad idea for me to participate in! Every Friday night the other interns and I used to go discover the city and dine in a new place serving a new cuisine, ranging from Swiss cheese fondue to flaming duck pad thai. It was delicious and enlightening!

I have been to several parts of Switzerland and learned a little German; one of my friends at Google even taught me the blues harp (a.k.a. harmonica), and we used to practice three times a week. I travelled back to Spain to see friends, went water skiing on the lake in Zürich, and was scheduled to skydive over the Alps, but it was cancelled due to bad weather. I was pissed!

Walking through the city was a pleasure in itself. Enjoying a cup of coffee on one of the winding streets was amazing. Reading a book by the lake was quality time. The only bad thing about Zürich is its prices! ... I saw a suit in a shop and kept looking for its price tag because I thought the numbers on the tag in front of it were a serial number, not the price!

The student residence I lived in was amazing. Imagine living in a place with 100 students from more than 35 countries. We laughed together, watched the World Cup together and cheered for all the teams! We cooked, watched movies and partied together too. It was friendly, brotherly and definitely educational. I met people there who definitely left a mark on my life.

In conclusion, it was an amazing, educational, life-changing summer. Working for the best company, living in an amazing city and meeting great people: what more can one ask for?!

Monday, December 6, 2010

2010-12-06: Memento Wins the 2010 Digital Preservation Award

The Memento Project won the 2010 Digital Preservation Award in London on December 1, 2010. The DPA is sponsored by the Digital Preservation Coalition, and the Memento Project is sponsored by the Library of Congress (see also: LC's project page).

Details about the DPA are provided in several press releases, including ones from the DPC, ODU, LANL and LC. DPC has also posted a short video of an interview with Herbert. And for posterity, the original tweet from William Kilbride announcing the winner (more information from the award ceremony will be announced on #dpa2010).

Thanks to the DPC, the DPA judges, the Library of Congress, and everyone on the Memento team!


Thursday, December 2, 2010

2010-12-02: NASA IPCC Data System Workshop

I attended the NASA Intergovernmental Panel on Climate Change (IPCC) Data System Workshop in Greenbelt, Maryland, November 9-10. The IPCC is an international committee overseeing the assessment of global climate change.

The purpose of this workshop was to discuss the technical plan to prepare, incorporate and share IPCC-relevant NASA satellite observational datasets to support the Coupled Model Intercomparison Project Phase 5 (CMIP5). CMIP is a standard protocol and framework for evaluating climate model simulations of past climate (hindcasts) and predictions/simulations of future climate change. CMIP5 is the 5th such evaluation and is being organized and led by the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at Lawrence Livermore National Laboratory. All of this activity will contribute to the IPCC 5th Assessment Report (IPCC AR5) and beyond. In prior assessments, NASA observational datasets were used very little, if at all. NASA HQ has recognized the richness and importance of NASA datasets and encouraged the satellite project teams to get involved and collaborate with PCMDI on CMIP5.

An interesting overview talk on the Earth System Grid (ESG) was presented. ESG is a distributed computational environment of grid services to support next-generation climate modeling research. More technical details of ESG can be found in the paper by Bernholdt et al. (2005). Technical talks from JPL, GSFC, NCAR, NOAA, and ORNL discussed each group's progress in supporting CMIP5. While most groups are a year or more into the effort, we (at LaRC) are newbies. Our group presented an overview of relevant CERES datasets and a new tool for ordering and retrieving CERES data. The biggest hurdle and open question is how to make satellite observations look like model output, which is critical for intercomparison. There was lots of talk on CF-compliant NetCDF formats, technical notes and metadata for each dataset, and the selection of relevant observation datasets to include in CMIP5. A couple of groups have gateways into the ESG, while most have data nodes. With a tight deadline of April 1, 2011, we agreed to let ORNL host our CERES dataset on their ESG data node, and to set up a data node at Langley in the near future.


Monday, November 15, 2010

2010-11-15: Memento Presentation at UNC; Memento ID

I recently had a chance to return to the School of Information and Library Science, UNC Chapel Hill, where I had a most enjoyable post-doc during the academic year 2000-2001. Jane Greenberg was nice enough to invite me to speak about Memento in her INLS 520 "Organization of Information" class on Tuesday, November 9th as well as give an invited lecture about Memento to the UNC Scholarly Communications Working Group on Wednesday, November 10th.

When I first went to UNC I had the office next to Jane's, and she was just an assistant professor; now she's a full professor and director of the Metadata Research Center. I enjoyed catching up with her and my many other friends and colleagues at SILS.

My slides are available online; they are mostly a combination of slides I've posted before, but with some updates to the HTTP headers. Although the changes are very slight, the recently submitted (11/12/10) Memento Internet Draft takes precedence over all of our previously published papers and slides. For those who don't know, IETF Internet Drafts are the first step in the process of issuing an RFC (cf. "I'm Just a Bill...").


Friday, November 5, 2010

2010-11-05: Memento-Datetime is not Last-Modified

One of the key contributions of the Memento Framework is the HTTP response header "Memento-Datetime" (previously called "Content-Datetime" in our earlier publications & slides). Memento-Datetime is the sticky, intended datetime* for the representation returned when a URI is dereferenced. The presence of the Memento-Datetime HTTP response header is how the client realizes it has reached a Memento.

Rather than formally explain what we mean by "sticky, intended datetime", it is easier to explain how it is neither the value in the HTTP response header Last-Modified, nor is it the creation date of the resource (which has no corresponding HTTP header, for reasons that will become clear). For the examples below, we'll define the following abbreviations:
  • CD (Creation-Datetime) = the datetime the resource was created
  • MD (Memento-Datetime) = the datetime the representation was observed on the web
  • LM (Last-Modified) = the datetime the resource last changed state
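As a small sketch of how a client might act on these definitions, consider the following (the helper function is hypothetical, and the header values are the example datetimes used throughout this post):

```python
# Hypothetical sketch: a Memento client recognizes a Memento by the
# presence of the Memento-Datetime response header, independent of
# Last-Modified. Header values are the example datetimes from this post.

def is_memento(headers):
    """True if the response headers identify the resource as a Memento."""
    return "Memento-Datetime" in headers

archived = {
    "Memento-Datetime": "Wed, 05 Mar 2008 20:16:49 GMT",
    "Last-Modified": "Fri, 05 Nov 2010 23:25:19 GMT",
}
live = {"Last-Modified": "Fri, 05 Nov 2010 23:25:19 GMT"}

print(is_memento(archived))  # True
print(is_memento(live))      # False
```

Note that the test is purely the presence of Memento-Datetime; as the cases below show, Last-Modified alone tells the client nothing about whether it has reached a Memento.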
Case 1: CD == MD == LM

We'll begin with a case in which all three datetime values could be the same. Consider the case of this index page at*/

The index page has a link to a single Memento. For simplicity, we'll assume created this index page and the Memento it references at the moment of the crawl, thus the various datetimes of the Memento would all be equal:

Creation-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Memento-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Last-Modified: Wed, 05 Mar 2008 20:16:49 GMT

Case 2: CD == MD < LM

If we click on the Memento (, we see that it has a disclaimer banner ("You are viewing an archived web page...") that many archives employ to inform the reader that they are looking at a Memento and not the original resource. Although there are many techniques for inserting such a banner, the Archive-It example directly modifies the original HTML to insert it (as well as to handle URI rewriting, etc.).

Now pretend the wording of the banner needs to be changed (for example, to address a new legal requirement). The CD and MD of the Memento are unchanged, but the LM must reflect when the wording of the banner changed:

Creation-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Memento-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Last-Modified: Fri, 05 Nov 2010 23:25:19 GMT

Both your lawyer and your HTTP cache consider this an important change, so you have to update LM. But it is also clear that the essence of's March 2008 observation of the Memento is unchanged by the rewording of the archive banner, so MD is not updated. And certainly the CD is unchanged by this modification.

Case 3: MD < CD <= LM

Now pretend you are making a new web archive, and you are populating it by crawling other web archives such as (simulated with the king of browsers in the image to the left). You are effectively copying:


The presence of the Memento-Datetime header from indicates that the resource is an encapsulation of the state of another resource, at the MD datetime value. The link between the Memento and the original resource is indicated with an HTTP Link response header:

Link: <>; rel="original"

Thus, MD is sticky in that the new Memento at retains the MD value it observed from However, the CD and LM values reflect the datetime relative to

Creation-Datetime: Fri, 05 Nov 2010 23:25:19 GMT
Memento-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Last-Modified: Fri, 05 Nov 2010 23:25:19 GMT

The MD and LM datetimes can also vary for the Memento as described in Case 2. (In the unlikely case that the intent of was to create an archive of how resources were archived, the MD could be reset to 05 Nov 2010 and the Link header would point to the resource as the original resource instead of the resource; however, this is not the point of this discussion.)

Case 4: CD < MD <= LM

This scenario is probably less common, but you could imagine situations in which CD is the earliest datetime value. This might happen when the resource was created with something akin to fork() & exec() semantics: the resource was technically created at a certain datetime, but it did not acquire its own state until a later datetime, reflected in the MD & LM values.

For example, a transactional archive might record as CD the first datetime in which a resource returns a 200 response, but might choose to delay archiving Mementos until the resource's state is something other than "Welcome to Apache". In this scenario, you could have:

Creation-Datetime: Wed, 05 Mar 2008 20:16:49 GMT
Memento-Datetime: Fri, 05 Nov 2010 23:25:19 GMT
Last-Modified: Fri, 05 Nov 2010 23:25:19 GMT

The MD and LM datetimes could also vary as described in Case 2.

Creation Datetime Is Often Unavailable

To illustrate the differences between the various datetime concepts, the above examples have discussed Creation Datetime as if it is a commonly available value. However, this is most often not the case -- in fact, there is no defined HTTP response header that corresponds to Creation Datetime. This is due to the historical limitation of Unix inodes (i.e., metadata for files), which track three notions of time: atime (access time of the file), mtime (modification time of the file), and ctime (modification time of the inode). Modern content management systems might keep track of Creation Datetime, but it is not formally defined at the HTTP level.


The above examples should illustrate how the three notions of datetime, although obviously related, have slightly different semantics. It should be clear that a Memento's Memento-Datetime is not just the Creation-Datetime or Last-Modified inherited from the original resource for which it is a Memento. Rather than overload an existing HTTP response header (such as Last-Modified), we have introduced the Memento-Datetime (née Content-Datetime) response header. Additional information about Memento headers, Link rel types, and HTTP interactions can be found at

-- Michael

* Datetime = neologism of "date" & "time": the former is often understood to have a granularity of days, and the latter a granularity of seconds.

Thursday, October 21, 2010

2010-10-21: RRAC Presentation

On Tuesday, I gave a presentation introducing some of the research we are doing in our WSDL group to the Records and Archivists (RRAC) national meeting. This group is made up of archivists at Federally Funded Research and Development Centers (like MITRE and Aerospace) and university archivists.

I used slides from several of Dr. Nelson's and Martin Klein's presentations (credits given in the last slide).

I also gave the same presentation to the Agile Development department (of which Carlton is a member) on Tuesday. Both groups received the research well and had very interesting ideas and comments. The RRAC folks (who were from non-technical backgrounds) questioned the projected lifespan and availability of archives like the Internet Archive (IA). We also discussed the possibility of the Twitter virus being stored in the IA (a possibility I have yet to investigate). The other interesting topic of discussion was how to use robots.txt files.

I thought the presentation went well, and I can provide more information on the other, less interesting questions offline.

--Justin F. Brunelle

Monday, October 11, 2010

2010-10-11: A Blast from the past: My road to Ws-Dl!

Hello everyone, I am Hany SalahEldeen, a first-year PhD student. I am honored to be a new member of the WS-DL group at Old Dominion University, supervised by Dr. Michael Nelson.

I have been in the group for a couple of months now, so I thought I should introduce myself and give some background on my career before WS-DL, because I believe that if you don't know where you've been, you'll never know where you're going.

I received my BSc in Computer Systems Engineering from Alexandria University, Egypt, in 2008. My graduation project, "VOID: The web-based integrated development environment", won first prize in the university's graduation projects competition for 2008. For the last two years of my degree I worked at a software company back home called eSpace Technologies, where I developed systems using Ruby on Rails and was one of the developers of NeverBlock (an open source project enabling easy development of non-blocking concurrent code), along with fellow student and friend Mostafa Aly, who is also in the WS-DL group.

I then started my master's program at Universitat Autònoma de Barcelona, Spain. I worked at the CVC (Computer Vision Center) in the colour group under the supervision of Robert Benavente, Maria Vanrell and Joost van de Weijer, and in July 2009 I defended my thesis, "Colour Naming Using Context-Based Learning through a Perceptual Model". One paper was published and a second is still in development. In a nutshell, we created a parametric model of the Lab color space based on psychophysical experiments, using real-life images in the machine learning process to reach a model closer to human perception of color in context. I also participated in the CVC team competing in the PASCAL VOC2009 image classification world challenge in Kyoto, Japan, and won two gold medals.

In September 2009 I started an internship at the Cairo Microsoft Innovation Center (CMIC), working on recommendation systems based on social networks with Nayer Wanas, and wrote a paper which is under review. I also performed a study that CMIC's director, Tarek Alabady, presented to the minister of communication and information technology, Tarek Kamel, in December 2009.

In January 2010 I arrived in Norfolk and started my first semester at ODU. Later that same month I was invited by Google to attend the 2010 Google Grad CS Forum: an all-expenses-paid trip from Norfolk to San Francisco, including a two-day stay at the downtown Hilton. Who can say no?! Hanah Kim, the University Programs Specialist, contacted me with the details and the agenda.

On the 21st, I joined 82 other fellow PhD students from all over the States at the opening reception hosted by Alfred Spector, Google's VP of Research and Special Initiatives. He discussed several topics with us and answered all our questions. Surrounded by all these brilliant minds, I was so proud to represent Old Dominion University at this prestigious event. Early the next day a shuttle took us to the Googleplex. There, Marissa Mayer, VP of Search Products & User Experience, welcomed us and, along with Research Scientist Kevin McCurley, gave an amazing keynote and answered all our questions regarding research at Google, publishing, and research in industry in general. After that we were taken on a tour around the humongous Googleplex campus. The tour took more than an hour, and yet we just skimmed some of the public areas (some areas are restricted to outsiders and guests).

After lunch, some of the students in the final years of their PhDs were selected to give presentations about their work, which was definitely enlightening. After that we had two tech talks. The first was by Hector Gonzalez, Research Scientist, in which he briefly described new techniques in extremely large-scale data collaboration and integration. The second was by T.V. Raman, Research Scientist, which I think was the most amazing talk I had attended in a long time. T.V. is blind, but he leads one of the biggest accessibility teams at Google and specializes in auditory user interfaces and structured electronic documents.

Finally there were round tables with scientists from different fields who gladly answered our questions. Andrea Frome, who leads one of the Google Maps teams specializing in Street View, described her work to us and answered all our questions. Later that day we had dinner at an amazing Italian restaurant in the heart of San Francisco named Palio d'Asti. The next day I flew back to Norfolk.

That was a quick snapshot of the highlights of my career before WS-DL, which I joined in February of 2010. I hope this post wasn't too long!

For more details check out my Blog and Website.


2010-10-11: ArchiveFacebook Version 1.2 is released

Celebrating a year since the very first release of ArchiveFacebook, the development team is releasing version 1.2. Over the last couple of months we have received feedback from users asking for enhancements and reporting issues. We also received lots of compliments and thumbs up! This feedback was channeled and analyzed to give us an idea of how to enhance the user experience.

We released version 1.2 three days ago with lots of bug fixes and new features, among them the expansion of stories and of comments on posts. Several users suggested that it would be useful to be able to archive all the posts and comments on a certain activity (status update, event attendance, photo, etc.). V1.2 now supports this for any activity stream within your Facebook profile.

The new version seems to be highly anticipated, to the extent that the number of downloads within the first three days, even before the release was announced, reached 2000 according to Mozilla:

Try out the new version and let us know what you think. Development is a triangle, and feedback is one of its edges!

Please join the ArchiveFacebook group to post issues and stay tuned with the latest updates and future releases:


Monday, October 4, 2010

2010-10-04: WAC Kickoff Meeting; LC Storage Architectures Meeting, DPC Award Shortlist

On September 24, I attended the kickoff meeting at Stanford for the Web Archiving Cooperative (WAC) Project, a joint NSF project (~$2.8M) between Stanford, Old Dominion and Harding. A summary of the meeting will be published at a later date, but it was attended by several members of our Advisory Board (from memory: Chris Borgman (UCLA), Trisha Cruse (CDL), Rick Furuta (TAMU), Alon Halevy (Google), Carl Lagoze (Cornell), Raghu Ramakrishnan (Yahoo), Herbert Van de Sompel (LANL)) and several members and friends of the Stanford Infolab.

I gave two presentations, the first was a quick review of the state of web preservation (with the obligatory heavy emphasis on Memento), and the second was some of my ruminations about future things that we should (or should not) explore in the context of WAC.

That night I caught a redeye back to Norfolk so I could be in DC the following Monday for the Library of Congress Designing Storage Architectures for Preservation Collections Meeting. While I believe this is their fourth such meeting, it is the first one I attended and while (because?) I did not present or speak, I learned a great deal. The meeting featured a good mix of academicians and storage industry leaders discussing very large scale storage architectures -- scales that we don't typically approach in our research at ODU. The majority of the presentations were limited to 5 minutes each, so a good breadth of topics was covered and perusing the slides will be worth your time.

Finally, Memento has been named one of five finalists for the Digital Preservation Coalition's 2010 Digital Preservation Award. It is an honor to be a finalist amongst the other projects (see the DPC press release for descriptions of all the projects). Both the Library of Congress and ODU have also issued press releases. The final announcement will come in December -- here's hoping Memento can bring home the prize.


Saturday, August 28, 2010

2010-08-28: A Lookup for Nicknames and Diminutive Names

I created a simple lookup file that contains United States given names (first names) and their associated nicknames or diminutive names, for example "gregory" -> "greg" or "geoffrey" -> "geoff". The file can be downloaded and contributed to from here.

This lookup was started from a source used for genealogy purposes. It was a good starting point, but because of its genealogical purpose there are some pretty old names in there. There was also a significant effort to make it machine readable, i.e., separating names with commas and removing human-readable conventions such as "rickie(y)", which would be expanded into two different names, "rickie" and "ricky".

This is a large list with about 700 entries. Any help cleaning this list up and adding to it is greatly appreciated; think of it as a wiki where you can contribute or change it as needed. CSV was the easiest format to use. Maybe I'll release this in XML or something later, or maybe a kind soul who uses this list will contribute another format they've converted it into?
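A minimal sketch of consuming the file, assuming each CSV row lists a given name followed by its nicknames (the two rows below are just the examples from above, standing in for the downloaded file):

```python
# Sketch: loading the nickname CSV into a dict of name -> set of
# nicknames. Assumes each row is "givenname,nick1,nick2,...", as in
# the examples above; a real script would open the downloaded file.

import csv
import io

data = io.StringIO("gregory,greg\ngeoffrey,geoff\n")

nicknames = {}
for row in csv.reader(data):
    if row:
        nicknames[row[0]] = set(row[1:])

print(sorted(nicknames["geoffrey"]))  # ['geoff']
```

With the dict in hand, checking whether two first names might refer to the same person is a simple set-membership test.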

I was rather surprised that I couldn't find anything like this on the web. The best I could find was the pdNickname database, which costs $500. So I created my own and released it as open source so that others could benefit from my work.


Wednesday, August 18, 2010

2010-08-18: Fall 2010 Classes

There will be two WS-DL classes offered for Fall 2010. CS 418/518 "Web Programming" will be taught by Martin Klein; it will be similar in format and content to prior offerings, especially with respect to the focus on LAMP. This class involves significant programming, developing a single project throughout the semester. It is a good complement to CS 495/595 "Web Server Development", which was last taught by Martin in Spring 2010. 2010-08-30 edit: The class page for CS 418/518 is now available.

I will teach CS 895 "Time on the Web", a new class that will explore the issues of Web resources evolving through time and how we interact with them. Aside from the canonical background readings, we will focus on current and recent projects such as our own Memento & Synchronicity, as well as OAC, Zoetrope, The Re:Search Engine, ADAPT, Past Web Browser, and other projects and papers to be determined. This class will be heavily oriented toward research and will require the students to explore and investigate topics on their own, develop prototypes, and present the results to the rest of the class.

I'll update this entry when class pages are available.

2010-08-30 edit: CS 895 will begin on September 8th (not Sept 1), 4:20-7:00 PM, Rm. 3316.
2010-09-08 edit: the CS 895 class page is now available.


Tuesday, July 27, 2010

2010-07-27: NDIIPP Partners Meeting, IETF 78

On July 20-22, I was at the NDIIPP Partners Meeting in Arlington VA, along with Martin Klein and Michele Weigle. The Library of Congress has not yet uploaded a public summary of the meeting, but there were a number of interesting additions to previous NDIIPP Partners Meetings (edit: the meeting slides are now available). First, there were keynotes from both the Librarian of Congress, James Billington, as well as the Archivist of the United States, David Ferriero. There was also a ceremony to commemorate the charter members (which includes ODU CS) of the National Digital Stewardship Alliance (NDSA). I don't think the NDSA has a canonical web site yet, so the iPRES 2009 paper by Anderson, Gallinger & Potter is probably the best available description (edit: LC has announced a NDSA web site).

There was a theme of exploring the questions about "why we should care about digital preservation". The Library of Congress debuted this video, now available on their YouTube channel:

And I presented two sets of slides about Memento. One was in a breakout session and focused on some of the details of HTTP transactions as well as TimeBundles & TimeMaps, but the first presentation was in a plenary session in which I closed with an example of why digital preservation is important. For a recent conference submission, a reviewer had asked:

Is (sic) there any statistics to show that many or a good number of Web users would like to get obsolete data or resources?

The answer we presented was that replaying the experience, as visualized through web resources, can be more compelling than a summary; the presentation closes with an example about Hurricane Katrina.

The breakout session slides are available too:

The day after NDIIPP, I was headed to Maastricht, The Netherlands to attend IETF 78 with Herbert Van de Sompel. We are working on an RFC for Memento, the first step of which is writing an Internet Draft which we hope to submit to the IETF soon. We learned a great deal about the ID/RFC process and met with several people who will help guide us through the process. Thanks to Mark Nottingham, we were even able to pitch Memento at the httpbis working group (see the agenda). Initial feedback was cautiously positive, but we were told by several people "I look forward to reading the Internet Draft".

And just for Johan's amusement, I had Herbert take the picture of me in Maastricht, next to giant french fries...


Saturday, July 17, 2010

2010-07-17: Microsoft Research Faculty Summit 2010

On July 12-13, 2010 I was at the Microsoft Research Faculty Summit 2010 in Redmond, WA. The agenda was exciting, and this was one of the few conferences where I've had real difficulty choosing which of the parallel sessions to attend.

The first keynote was about Kinect for Xbox 360. The demos were very impressive and I had no idea that motion capture was ready for the home market. Check out the trailer at the MS site.

The next session I attended was about the "Bing Dialog Model". I must confess that I'm unconvinced on how different Bing is from Google. Here's a side-by-side comparison of each search engine on the query "Michael Nelson":

They seem nearly identical to me: the tri-panel layout (controls on left, content in center, ads on right), the link layout/colors (blue title, black summary, green URI), interspersed images, tabs at the top, etc. The extended summary Bing gives you when you mouse over a link region is nice, and some of the details are different, but this is basically the same interface. They did demonstrate some summarized content (e.g., current and comparative status in sports box scores), but I can't help but feel that this is propagating anonymous (i.e., not bookmarkable) Web resources. The "Los Links" commercials are funny (episode 1 & episode 2), but I can't help feeling they are addressing a problem I don't think I have. I also realize that given my knowledge of the tools, my search behavior and strategies have likely adapted (cf. Sapir-Whorf).

Next up was me during a lunchtime brown bag seminar chaired by Lee Dirks where I gave a well-attended talk about Memento:

I was a bit worried about how much detail to include, but I think I found the right level. There were a lot of good questions and discussion afterward, including a discussion with Alex Wade's summer intern, Kevin Lane, that extended through the next two sessions. The next session had a nice presentation from Tara McPherson about the journal "Vectors" and its follow-on "Scalar". The closing keynote was from Turing Award winner Chuck Thacker about FPGAs. Not being an architecture person, I found much of the talk went over my head, but you don't pass up opportunities to catch a lecture from someone like Chuck.

The second day opened with a prestigious panel about transforming CS research. Rather than summarize it, I recommend you read it. Next was DemoFest, where you could play with Kinect. Over lunch I discussed possible Memento & Zoetrope integration with Eytan Adar. The next session featured an exciting presentation from Walter Alvarez about "Big History" and the ChronoZoom interface -- very exciting stuff. That session ended with a presentation about WorldWide Telescope, which was impressive but suffered for being scheduled after Walter. The closing keynote was about the making of Avatar (some of the concepts can be found in videos from Sequence Mag and Wired).

More notes can be found under the hash tag "#facsumm".

This was an excellent conference and I was very happy to have the opportunity to speak about Memento. Thanks to Alex and Lee for the invitation.


Thursday, July 15, 2010

2010-07-15: AMS Cloud Physics and Atmospheric Radiation 2010

I presented a poster at the 2010 13th Conference on Cloud Physics / 13th Conference on Atmospheric Radiation in Portland, Oregon, June 28 - July 2. This was my first atmospheric science meeting in the 2 years since taking leave from NASA to pursue full-time graduate studies at Old Dominion University. It was good to be back and catch up on the old and new atmospheric science research being conducted by my colleagues and others.

This conference takes place every 4 years. There were approximately 300 scientists from around the world in attendance, of whom 60-70 were from NASA Langley. This was one of our important conferences to showcase our latest cloud and radiation results and products. The Clouds and the Earth's Radiant Energy System (CERES) group was well represented; it seemed like everyone at Langley who works on CERES was there. I saw many familiar faces and met several new CERES folks.

My paper was entitled Alternative Method for Data Fusion of NASA CERES and A-TRAIN datasets: An Evaluation of Triplestore. This paper examines a different method for placing geospatial scientific data into a Triplestore and ultimately directly on the World Wide Web, where it can be indexed, discovered, and addressed. The mapping (or fusion) of geospatial data to a geo-location and the Triplestore queries were presented at a high level.

I was disappointed that I did not plan a day trip to Mount St. Helens. My friend, who went with his family, said it was spectacular. I did, however, manage a late afternoon trip to Mt. Hood. The 1.5 hour drive was very scenic. I took this picture with my iPhone. It was my first trip to Oregon and it won't be my last.

-- Louis

Tuesday, July 6, 2010

2010-07-06: Travel Report for Hypertext and JCDL 2010

As mentioned earlier I had two papers accepted at HT and JCDL. In June it was time to travel to the conferences and represent the Old Dominion University colors.
HT 2010 took place in Toronto, Canada from June 13th-16th and was hosted by the University of Toronto. The acceptance rate of 37% was slightly higher than last year but the number of registered attendees seemed comparable.
I was glad to be able to give the very first presentation since it secured probably the largest audience of the entire conference. My slides are available through Slideshare.
The paper itself titled "Is This a Good Title" can be obtained through the ACM Digital Library and its content was covered in my earlier post.
My personal highlight of the conference was the keynote by Andrew Dillon. He argued that research on Hypertext today is shaped too much by the Internet and its (inter-)linked nature. Representing his iSchool, he made a point of supporting cross-discipline research and suggested calling for papers in a topic-independent track at HT 2011.
My fellow student Chuck also presented his paper and shared a detailed report of the conference in an earlier post.

Random facts:
- HT 2011 will be in Eindhoven, The Netherlands
- a cruise is not a good idea for a conference dinner
- the CN Tower is impressive and dining 1,151ft above the city is a great experience
- it feels good to now have two papers published at a conference where even Tim Berners-Lee's paper got rejected (if you don't know what I am talking about, read this book)

Just one day after coming back from Toronto I left for Brisbane, Australia. JCDL 2010 took place from June 21st-25th in Surfers Paradise at the Gold Coast south of Brisbane. It was held in conjunction with ICADL, which meant you had to choose between three parallel sessions. Presentations early in a conference are usually good, and the audience volume typically diminishes the longer the conference lasts. Hence I was again glad to see my presentation scheduled in the second session overall.
I mentioned the paper (available here) in a previous blog and my slides are also available via Slideshare.
My two personal favorites were again keynote speeches. The first was given by Katy Börner from Indiana University. Listing her titles and appointments would take too long (see her web site), but amongst others she is the founder of the Cyberinfrastructure for Network Science Center at IU. She gave an insight into a few of her projects related to data analysis and visualization. It is amazing how information access changes through visualization techniques.
David Rosenthal is the chief scientist of the LOCKSS project. In his keynote he argued for the need for new models and approaches to preservation, focused on the dynamic and service aspects of the Internet. In his opinion the "old copy-and-re-publish model" works for collecting and preserving static content but not for the web of services. Publication and preservation need to be re-defined to keep up with the transition towards services. The entire keynote is available here.

Random facts:
- JCDL 2011 will be in Ottawa, Canada
- the Australian Outback Spectacular is neither spectacular nor at all suitable for a conference dinner
- US Airways has a hard time distinguishing between the airports in Sydney (SYD) and Syracuse, NY (SYR) and may send your luggage to either one
- the Gold Coast winter is very pleasant and "Surfers Paradise" is not exaggerated

Monday, July 5, 2010

2010-07-05: Foo Camp 2010

I attended the 2010 Foo Camp in Sebastopol CA, June 25-27. For those who are unfamiliar, Foo Camp is an invitation-only "unconference" -- which is basically a conference that consists entirely of birds-of-a-feather sessions as well as the impromptu hallway and dinner conversations that make conferences useful.

There were approximately 250 people there and by my estimation they were mostly young (25-35) entrepreneurs (current and former). There were a smattering of others as well: artists, writers, professors, VCs, etc. The best way I can describe Foo Camp is a combination of Burning Man (culture of participation), SIGGRAPH (culture of demonstration), and a country club (culture of capitalism). Geeks aren't really known for being extroverted, but the format of Foo Camp pretty much requires meeting new people and interaction with people outside of your existing circle of colleagues. I was surprised at how approachable most people were.

Formulating the schedule consists of a big scrum on the opening night to place stickers on the schedule board. On Saturday the contents were transcribed to the wiki, but the actual schedule changed frequently. Sessions covered many topics and varied greatly in the number of attendees. The highlights of Friday and Saturday were the Ignite presentations: 5 minutes, 20 slides on auto-advance (15 sec each). The videos from Foo Camp haven't been uploaded yet to the YouTube channel, but hopefully soon.

Getting an invitation to Foo Camp is a pretty big deal, and you're reminded several times throughout the weekend that a return invitation depends on the quality of your contribution to Foo Camp. I was there to talk about Memento, which was the impetus for the Foo Camp invitation. Given the entrepreneurial focus of Foo Camp, the interest in archiving and preservation was obviously not as a strong as it is in most of the conferences and workshops I attend. Still, I made a few really good contacts that I will detail in the future.

I had a great time, but it was very exhausting. Johan, who has been to a couple of SciFoos, told me that few people camp. That may be true at SciFoo, but there were many campers at Foo Camp. Sebastopol is a pretty small town and the hotels were booked, so I drove in from Santa Rosa each day. I took a few photos, but there are a handful of other photos and blog posts that better capture the spirit: Laughing Squid, Dean Putney, Scott Berkun. The hash tag was "foo10".

Very special thanks to Tim O'Reilly, Sara Winge and the rest of the folks at O'Reilly for the invitation and event. Three days sounded like a lot at first, but it was over in a flash.


Monday, June 21, 2010

2010-06-23: Hypertext 2010; We laughed, we cried, we danced on air.

Hypertext 2010 13 - 16 June 2010 has come and gone, but the memories linger.

Martin Klein and I presented our respective papers. He will be detailing his experience and his paper Is This a Good Title? at Hypertext 2010 when he returns from JCDL 2010. My paper Analysis of Graphs for Digital Preservation Suitability and its associated PowerPoint presentation are available. The paper and the presentation were given at Hypertext 2010 in Toronto, Ontario, Canada. A complete Hypertext program is available here.
Day zero, 12 June

Mary (my wife) and I got to Toronto late Saturday. We were four and a half hours late out of Norfolk because of weather problems in Chicago. Fortunately Mary made alternative reservations out of Dulles to Toronto as soon as we thought we were going to miss our connection. Pays to pay attention and to have alternative plans.

Martin (the sly dog) chose to travel 13 June on a direct flight from Richmond, VA to Toronto.

Day one, 13 June

I attended the Modelling Social Media workshop hosted by Alvin Chin (Nokia Research Center, Beijing). The ways that social networks can be modelled and then networked together engendered all sorts of "Big Brother" feelings.

Day two, 14 June

Andrew Dillon gave an interesting talk as the opening keynote speaker, stressing how difficult it is for professionals whose interests are not confined to a single, well-defined discipline to find a venue to promote their ideas. He promoted things (institutions) like iUniversities that foster ideas that cross boundaries. Andy challenged the Hypertext organizing committee to find a way to encourage "cross cultural ideas." He also said that if you have a passion for a topic or idea, regardless of how it is characterized, you should follow that passion and somehow things will work out. All in all, interesting ideas, but I'm not sure how they could be implemented. His presentation was the topic of lots of conversations.

Martin (the lucky dog that he is) was the first presenter after Andy. I never asked, but I bet he was glad to get his presentation out of the way so that he could sit back and enjoy the show. Martin is off to JCDL right now and I'm sure will have lots to "talk" about when he gets finished down under.

Ryen White's talk about how people use parallel browsing (having multiple tabs open simultaneously) was interesting to me mostly because of the way that data was collected. When a user upgrades their browser, they often permit their browsing activities to be sent back to the "mother ship" so that the developer can improve their product. If you think about it, do you want them to know where and how you browse?? Kind of an interesting question. Not sure that I do.

The presentation about using tags as a source for thematic data to create a narrative resulted in an interesting collection of photographs that at first glance may not have seemed related. The fact that the Shakespearean quote used in the paper ("Freeze, freeze thou bitter sky" from As You Like It) and the imagery resulting from the tags were a bit of a mismatch (to me the quote speaks to being forgotten, not the weather) points out how relying on just the tags can lead you astray.

Ricardo Kawase's "The Impact of Bookmarks and Annotations on Refinding Information" looks at the question of how to refind information that you once found. It is an interesting question because not only does each of the techniques presented (tagging and spreadcrumbs) require that you "know" what you are looking for, but the user is not the same person (in the sense that time has passed, new experiences have been acquired, contexts have changed, etc.) that annotated the data in the first place. Interesting question: how do you refind something when you aren't the person that found it in the first place??

Sihem Amer-Yahia and her "Automatic Construction of Travel Itineraries using Social Breadcrumbs" looked at constructing a travel itinerary based on tags that others had put on pictures in a particular area. That way a travel itinerary could be constructed that took you to the same places that most other people had been when they were there before you. In a sense you could repeat what they had done and therefore you must have been there because you saw the same things as everyone else. Interesting concept, in many ways there are a certain number of "must see" things you have to see or otherwise you haven't really been there. It seems that the hand editing of some of the data would make the technique hard to scale.

Kaipeng Liu's "Speak the Same Language with Your Friends: Augmenting Tag Recommenders with Social Relations" presentation showed that combining the tag sets from various socially related taggers can help to "normalize" the tag set across the users. This normalized tag set can then recommend additional tags based on the tags that the tagger has recently used. These ideas could also be used to help someone trying to search for additional or related works. Kind'a neat.

Huizhi Liu's "Connecting Users and Items with Weighted Tags for Personalized Item Recommendations" builds on the idea of having a "normalized" set of tags and applies a mathematical bent to it. All this is to counter the problem that tags are user dependent and that the same tag can be used by two different taggers in at least two different ways. This "tagging noise" can make it very difficult to find the correct match based on tags.
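Liu's actual model is more involved, but the core matching step -- comparing users and items through weighted tag vectors -- reduces to cosine similarity over sparse vectors. A minimal sketch; the tag names and weights below are invented for illustration:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse tag-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical weighted tag profiles: tag -> weight (e.g., usage frequency).
alice = {"python": 3.0, "web": 1.0, "archiving": 2.0}
bob   = {"python": 2.0, "web": 2.0}
carol = {"knitting": 4.0}

print(cosine(alice, bob))    # high: overlapping tags with similar weights
print(cosine(alice, carol))  # 0.0: no shared tags
```

Normalizing by vector length is what dampens the "tagging noise": a user who applies one tag obsessively doesn't automatically dominate every match.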

"Topic-based Personalized Recommendation for Collaborative Tagging System" looks at reducing the noise in freeform tagging systems by applying a modified Latent Dirichlet Allocation (LDA) approach. LDA is used to identify unknown groups in data sets. By applying LDA techniques, to tags and taggers, better tags can be recommended.

Iñaki Paz's "Providing Resilient XPaths for External Adaptation Engines" has an interesting application of simulated annealing (SA) to derive an XPath specification for extracting data from a page that changes over time. The premise is that it is easy to tailor an XPath for someone else's static page, but given that the page will probably evolve, how do you derive an XPath that can be expected to be reasonably robust as the page changes and evolves? Nice to see SA used to extract data from a web page.
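Paz's system derives real XPath expressions; the annealing idea itself can be sketched on a toy version in which a "selector" is just a set of predicates and its robustness is how many page snapshots it survives. Everything here (the predicate names, the snapshot data, the cooling schedule) is invented for illustration:

```python
import math
import random

# Hypothetical snapshots of the same page over time: for each snapshot,
# the set of predicates that still correctly identify the target node.
snapshots = [
    {"id", "class", "position"},
    {"id", "class"},        # node moved: the position predicate broke
    {"class", "position"},  # id renamed
    {"id", "class"},
    {"class"},              # only the class survived a redesign
]
predicates = ["id", "class", "position"]

def robustness(selector):
    """Score: number of snapshots where every chosen predicate still holds."""
    if not selector:
        return 0  # an empty selector matches nothing useful
    return sum(selector <= snap for snap in snapshots)

def anneal(steps=200, temp=2.0, cooling=0.97, seed=42):
    rng = random.Random(seed)
    current = set(predicates)  # start with the most specific selector
    score = robustness(current)
    best, best_score = set(current), score
    for _ in range(steps):
        neighbor = set(current)
        neighbor.symmetric_difference_update({rng.choice(predicates)})
        ns = robustness(neighbor)
        # Accept improvements always; accept regressions with shrinking odds.
        if ns >= score or rng.random() < math.exp((ns - score) / temp):
            current, score = neighbor, ns
            if score > best_score:
                best, best_score = set(current), score
        temp *= cooling
    return best, best_score

print(anneal())
```

With this toy data the single-predicate selector `{"class"}` survives all five snapshots, so the search should drift away from the brittle fully-specific selector toward it.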

Vinicius F. C. Ramos's "The Influence of Adaptation on Hypertext Structures and Navigation" was about how students use an adaptive hypertext (AH) e-learning application. A fair amount of discussion ensued after the presentation as to why the students went off track and followed links that didn't "go anywhere." Was it because they didn't understand that they shouldn't have gone there, or was it a reflection of their attempting to be thorough, or was it something else??

Alexandra I Cristea's "The Next Generation Authoring Adaptive Hypermedia: Using and Evaluating the MOT3.0 and PEAL Tools" dealt with the problem of authoring and evaluating AH programs and lesson plans.

Evgeny Knutov's "Provenance Meets Adaptive Hypermedia" explored and offered a model of how to address the question of provenance in an AH environment by bringing together several existing models. Their overarching model was then used to answer the provenance questions of: Where?, When?, Who?, Why?, Which? and How? In the digital preservation arena, these types of provenance questions and data have to be preserved as well.

Day three, 15 June

Haowei Hsieh's "Assisting Two-Way Mapping Generation in Hypermedia Workspace," looked at presenting hypermedia in a X-Y spatial manner to reduce the complexity that became apparent from feedback by previous AH creators. Interesting ideas, I'm not sure how to apply them to what I am doing, but it does open my mind to the fact that while I may think that my way is best, others may have a different opinion.

My personal favorite paper and presentation was: Analysis of Graphs for Digital Preservation Suitability. The mechanics of the presentation went very well. Prior to the presentation, I watched as others struggled with the microphone on the podium. It had a pick-up range of about 4 feet; beyond that, the speaker had to really project for people in the back of the auditorium to hear. I have a hard time being constrained to a 4-foot radius circle, so the first thing I did was walk away from the microphone, use my "outdoor voice," and get feedback from the back of the room on whether I was understandable. Then I found a way to engage the audience by pointing out how Calum's (the conference photographer) work would only be available for a few years and that I was proposing something different. Rather than relying on institutions to preserve digital data, have the data preserve itself. By using a laser pointer, moving around in front of the screen (if you look at the HT-2010 home page, you can see the bottom portion of the screen; the podium is just off image to the right), focusing some attention onto Calum, and then playing a video that showed how these digital preservation graphs were going to be attacked and then repaired, everyone's attention seemed to be on the performance and not so much on what was happening on their respective screens. The first video was a lot of work to put together (figuring out which image format to create to feed the video-creation software to feed YouTube to be usable on a large screen, etc.), but after that it was almost mechanical. I haven't found a command line video creation tool yet, but the GUI-based one isn't too bad. There was a lot of power in the video; while it was playing, everyone was staring at the screen. The presentation lasted right at 20 minutes and then it was question and answer time.
Mark Bernstein asked several questions about how the Web Objects from the paper would communicate and whether I had thought about how they could live in their entirety inside a URI. Alvin Chin asked how they could propagate across different types of social networks. Nathan (whose last name I have misplaced), a student from the University of Cambridge, asked many questions about the energy costs of sending a message across the diameter of the Unsupervised Small World graph. When I presented a version of this paper at JCDL 2009, initially I knew there were two people there that understood what I was talking about; at the end there were three. At this conference, I started with one (me) and, based on the conversations that I had with Mark, Alvin, Nathan and Jamie Blustein, I ended up with at least five. And so at the end, we laughed.

Hypertext Final - Analysis of Graphs for Digital Preservation Suitability

BTW: One of the things that I've noticed and haven't figured a way around is that SlideShare seems to not support PPT animations (for instance on slides 5 and 8 of the presentation) and some clicks to external pages (for instance on slide 20). Inside PowerPoint, clicking on the graph on slide 20 will take you to the YouTube video shown here:

The movie starts off with a baseline graph that alternately gets attacked and repairs itself. Nodes that are isolated from the graph are shown in red, while those that are still connected to at least one other node are shown in cyan. This game goes on for 10 turns just to show how a graph can be evaluated to quantify how long it will last given that some nodes become isolated and then try to regain membership into the larger graph. These are some of the ideas behind the paper.
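The attack/repair game in the video can be sketched as a small simulation. The graph shape, attack size, and repair policy below are invented stand-ins for the ones in the paper; the point is just counting isolated ("red") nodes after each attack:

```python
import random

def simulate(n=20, k=3, turns=10, attack=2, seed=7):
    """Toy attack/repair game on an undirected graph.

    Each turn, `attack` random nodes lose all their edges (isolated, "red");
    the isolated count is recorded, then each isolated node tries to rejoin
    by linking to up to `k` still-connected ("cyan") nodes. Returns the
    per-turn isolated counts.
    """
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    for i in range(n):                      # baseline graph: ring plus chords
        for j in (i + 1, i + 2):
            edges[i].add(j % n)
            edges[j % n].add(i)
    history = []
    for _ in range(turns):
        for v in rng.sample(range(n), attack):   # attack: isolate nodes
            for u in list(edges[v]):
                edges[u].discard(v)
            edges[v].clear()
        isolated = [v for v in edges if not edges[v]]
        history.append(len(isolated))
        connected = [v for v in edges if edges[v]]
        for v in isolated:                       # repair: rejoin the graph
            for u in rng.sample(connected, min(k, len(connected))):
                edges[v].add(u)
                edges[u].add(v)
    return history

print(simulate())
```

Averaging such histories over many random seeds is one way to quantify how long a graph configuration can be expected to last under a given attack rate.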

Heiko Haller's presentation of "iMapping - A Zooming User Interface Approach for Personal and Semantic Knowledge Management" was very interesting because of the tool that he used. I have had very little experience with Personal Knowledge Management tools, so I can't comment on the efficacy of iMapping, but the things that iMapping can do in terms of a presentation are very different and innovative when compared to PowerPoint and EndNote. iMapping still has some rough edges, but the ideas, the interface and the way that data is presented are really, really neat. Heiko's paper won the Ted Nelson Newcomer Award for best newcomer paper.

Tsuyoshi Murata's presentation "Modularity for Heterogeneous Networks" tackles the problem of detecting communities of users, tags, and URIs/URLs using graph theoretical approaches. He started with the "simple" bipartite problem, expanded it to tripartite, and implied that the approach could be extended to n-partite. My eyes started to cross after the third nested summation of various logs.

Dan Corlette presented data as part of his paper "Link Prediction Applied to an Open Large-Scale Online Social Network" that looked at the links that LiveJournal users created between and amongst themselves. Dan and company then made predictions as to how many links a user would form as a function of the length of time they were members of LiveJournal. Their predictions were "reasonable" for new users, but less accurate for old users. I wonder if old users (those who have established themselves, vice chronologically old) had focused on what they wanted and that was enough links for them. Kind of an interesting human issue question.

Said Kashoob's presentation on "Community-Based Ranking of the Social Web" compared different "community finding" techniques for identifying communities in social networks. Frankly, I got lost after the third integral. His group claims to have a technique that works better than the "normal" ones, but I'm not qualified to speak to it.

Danielle H. Lee presented "Social Networks and Interest Similarity: The Case of CiteULike," which looked at how social networks among CiteULike members tend to use the same tags and to reinforce the tags that members of their group use. As the connections between different groups become more and more distant, the similarity lessens. Is this a case of "birds of a feather flock together?"

Christian Körner tackled the problem of what motivates people to tag in "Of Categorizers and Describers: An Evaluation of Quantitative Measures for Tagging Motivation." The presentation generated a lot of questions and introspection among the attendees, who tried to categorize themselves as either someone who describes resources or someone who categorizes resources. Christian provided an interesting insight into what motivates people to tag and why.

Jacek Gwizdka's "Of Kings, Traffic Signs and Flowers: Exploring Navigation of Tagged Documents" took the problem of how to represent the "goodness" of a set of tags by the use of a "heat map" of the tags. That was after looking at hypertext links with Kings, traffic signs and flowers. His paper and the presentation have interesting diagrams.

Jeff Huang's presentation "Conversational Tagging in Twitter" took the first look at the use of tags in Twitter: how long some last (only a couple of days in some cases), how many people use them (from a few to viral numbers), and how some have meaning to only a few. Jeff claimed that this was the first time anyone had looked at Twitter tags and that the analysis showed some interesting and unexpected things. Jeff's group worked on a data set of Twitter tags acquired prior to the LoC getting a copy of all public tweets. He said that he is interested in applying the same analysis to the much larger LoC data set.

Marek Lipczak's "The impact of resource title on tags in collaborative tagging systems" looked at tagging by members of a collaborative group. His team's results point to members of the group tagging resources more in line with their personal interests than those of the group. (Definitely a darker side of group participation.)

Day four, June 16

Daniel Gonçalves' "A Narrative-Based Alternative to Tagging" looked at placing tags into a narrative about a series of images and then measuring how the tags were reused, how long they conveyed information and how well an outsider could see the connections between the images. As a demonstration of the effectiveness of the approach, Daniel's presentation took 14 minutes with 90 slides and very few words on the screen. It is an interesting idea and approach, I'm not sure how I'll use it but it is something to remember that there are alternatives.

F. Allan Hansen's "UrbanWeb: a Platform for Mobile Context-aware Social Computing" reported on an on-going experiment at Aarhus University and in the city of Aarhus where tags were placed "in the wild" of the real world and people went in search of them. People used their smart mobile phones as a way to access a database based on their location, snaps of barcodes and other tagging information that was present. The combination of these data helps to create a richer and more interesting experience for the human, and to encourage humans in the same location with similar interests to connect. Neat application.

James Goulding presented the Douglas Engelbart Best Paper Award paper, "Hyperorders and Transclusion: Understanding Dimensional Hypertext." He addressed some of the limitations of the RDF representation of data and relationships between data, and then moved the discussion into an arena where the number of different values that a database tuple can have can be viewed as its dimensions. Based on the intersection of these dimensioned data, the data can be viewed as hyperlinks and hence hyperconnected. The transclusions come in when the data can be used in different contexts. After a while, my head started to hurt from trying to follow the different types of connections.

Mark Bernstein's paper, presentation and panel discussion called "Criticism" was a real delight to hear, see and follow. Mark's premise is that we (as a community) have done lots and lots of work with hypertext, but is it good work and is hypertext a good tool?? His insights about how we do things, and how our tools and approaches taint what we see and how we see it, rang true again and again. During his presentation he used the phrase "web of scholarship" that, I think, speaks to the heart of the matter. Too often we get bound up in the way that we (and only we) do things and fail to see how there are influences outside of our immediate sphere that also influence us. I think that this is absolutely true and that we have to raise our heads up from time to time and see what the rest of the world is doing and drink in the bigger picture.

The panel discussion "Panel Session: Past Visions of Hypertext and Their Influence on Us Today" by all the authors was an interesting reflection on some of the major ideas that got us to where we are today. It put Vannevar Bush's seminal paper "As We May Think" into a cultural context (where he got his ideas, why he was published, where he was published, etc.) and then took those ideas (not only his, but also ideas from people like him) and looked at how they have and have not come to be. It was a real pleasure to hear a panel of experts in their and our collective field talk about their views on someone that affected us all. A real pleasure.

Irene Greif's closing keynote address "The Social Life of Hypertext" brought things back to the real world. For most of the presentations, things had been very ethereal, existing in the abstract, with only limited connectivity to things of this world. Irene was able to bring things back: back to how having lots of real data can give new insights, how Twitter backscatter can reveal things that may have passed unnoticed, and in general how all the things that we had talked about for the entire conference have a place in the real world.

And so, HT2010 closed.
Martin, Mary and I had a celebratory dinner at the 360 restaurant in the CN tower. At over 1,150 feet, it provides a full view of the Toronto area, including the area where the riverboat dinner cruise took us the night before.

After two circuits around the tower, Martin and I went down to the observation deck.

They have all sorts of pictures and statistics about the tower (height, weight, fastest climb from the bottom to the observation level, etc.). A section of the floor is glass, so you can see all the way down the side of the tower to the ground far, far below. Martin and I danced on the air that evening.

The hotel lost electrical power for most of the evening.

Day five, June 17

The hotel power came on early in the morning. Early enough that the water had a chance to heat for showers and for us to pack. As we went to the elevator, it went out again. Fortunately we weren't ready 90 seconds earlier, otherwise we would have been in the elevator when the power went out.

Down the stairs we went, six floors, 12 flights, clunking suitcases all the way. Our carriage awaited us at the bottom and whisked us away to the airport. Getting through Customs seemed to take forever. As we snaked our way to the front of the line, those that had signed up for NEXUS sailed by us all. Something to keep in mind: if you plan on going back and forth across the US-Canada border, NEXUS may save you a lot of time and effort.

We cried (almost) trying to get to Toronto. We laughed when our presentations were done. We danced on air (with a few inches of hardened glass beneath our feet) at the end.

Now that Hypertext 2010 has come and gone, it is time to get back on the paper and conference treadmill again.

More as events warrant,

-- Chuck Cartledge