2025-11-12: 36th ACM Conference on Hypertext and Social Media (HT 2025) Trip Report

 


The 36th ACM Conference on Hypertext and Social Media (HT 2025) was held from September 15–18 as a hybrid conference. The in-person event took place in Chicago, USA, hosted by the Illinois Institute of Technology, while virtual attendees joined via Zoom. The conference provides a research platform focused on hypertext technologies, applications, link structures, digital humanities, and social media. This year's theme was “The World as Hypertext,” which aimed to understand technology, creativity, society, and scholarship through the lens of hypertext.


The conference featured workshops, tutorials, and blue sky ideas along with paper presentations and keynote sessions. This trip report reflects the experiences of Dominik Soós, Rochana R. Obadage, and Tarannum Zaki of the Web Science & Digital Libraries (WS-DL) research group at Old Dominion University (ODU); Dominik and Tarannum attended and presented at the conference in person.


 

Conference Venue: Hermann Hall Conference Center, Illinois Institute of Technology, Chicago


Day 1: Sep 15, 2025  


The first day of the conference began with the Web Archives & Digital Library (WADL) workshop, followed by the opening remarks. It continued with the first keynote and concluded with the paper session titled “Hypercreation & Systems.”


Workshop: Web Archives & Digital Library (WADL)


Organized by Mat Kelly and Brenda Reyes Ayala, the workshop brought together researchers, practitioners, and archivists to discuss practical and theoretical issues across the full life cycle of born-digital resources. The presenters addressed key aspects of the digital resource life cycle, including capture, storage, preservation, analysis, and access, while also examining how web archives and digital libraries can interoperate across processes such as creation, crawling, indexing, and long-term preservation.

As the first presenter, Lesley Frew from the WS-DL group at ODU presented “Coming Back Differently: A Case Study of Near Death Experiences of Webpages.” She discussed Centers for Disease Control and Prevention (CDC) webpages that temporarily disappeared from the live web and then returned with banners announcing changes. Lesley showed how web archives captured those gaps and how subsequent updates often occurred without public notice, making the visible last-modified date unreliable. She framed the observed sequence as stages of a near-death experience for webpages and proposed that the case study could guide larger investigations into similar patterns.

Rochana Obadage from WS-DL and LAMP-SYS at ODU presented “Toward Robust URL Extraction for Open Science: A Study of arXiv File Formats and Temporal Trends.” Rochana compared URL extraction results across four arXiv file formats (Text, LaTeX, XML, and HTML) in a pilot dataset and showed that structured formats such as HTML and XML produced more accurate and complete extractions than plain text. He further explained that relying on a single file format risks missing links to datasets and software, and that combining formats improves coverage while also increasing false positives. Their temporal analysis showed a steady rise in web-based references in scholarly communication from 1992 to 2024. He also mentioned that they are working on scaling human annotations of arXiv PDFs to a larger sample and expanding the study to other repositories such as S2ORC and PubMed.
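To make the format contrast concrete, here is a minimal sketch (our own illustration, not the paper's pipeline) of extracting URLs from plain text with a regular expression versus reading them out of HTML anchors; the example URLs are invented. The regex tends to absorb trailing punctuation, which is one source of the false positives noted above.

```python
import re
from html.parser import HTMLParser

URL_RE = re.compile(r"""https?://[^\s<>"')\]]+""")

def urls_from_text(text: str) -> list[str]:
    """Regex-based extraction; brittle around punctuation and line wraps."""
    return URL_RE.findall(text)

class AnchorExtractor(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.urls: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.urls.append(value)

def urls_from_html(html: str) -> list[str]:
    parser = AnchorExtractor()
    parser.feed(html)
    return parser.urls

text = "See our data (https://example.org/data, mirrored at https://example.org/mirror)."
html = '<p>See our <a href="https://example.org/data">data</a>.</p>'
print(urls_from_text(text))  # the first URL keeps a trailing comma
print(urls_from_html(html))  # ['https://example.org/data']
```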

Next, Jonathan Schler from the Holon Institute of Technology presented “Medieval Citation Networks as Digital Hyperlinks: Transformer Based Authorship Attribution in Historical Text Collections.” Schler treated medieval citation networks as hyperlink-like structures and used a transformer-based pipeline to recover authorial signals from those networks. His team used a BERT-CRF pipeline to extract references and then applied a citation fingerprint method. The pipeline resolved a longstanding attribution question by identifying “Rabbi Shem Tov ibn Gaon” as the likely author of disputed commentary sections with about 95.9% similarity confidence, and achieved roughly F1 ≈ 0.90 on medieval Hebrew reference extraction. He emphasized that the method scales to other citation-rich corpora and offers new tools for authorship analysis in digital cultural heritage.
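As a toy illustration of the “citation fingerprint” idea (our simplified reading; the talk's actual pipeline extracts references with BERT-CRF and uses its own similarity measure), one can represent each text by the relative frequency of the sources it cites and compare candidates with cosine similarity. The citation lists below are invented.

```python
from collections import Counter
from math import sqrt

def fingerprint(cited_sources: list[str]) -> dict[str, float]:
    """Relative frequency of each cited source."""
    counts = Counter(cited_sources)
    total = sum(counts.values())
    return {src: n / total for src, n in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented citation lists for a disputed text and two candidate authors.
disputed    = ["Rashi", "Ramban", "Rashi", "Ramban", "Rif"]
candidate_a = ["Rashi", "Ramban", "Rashi", "Rif"]
candidate_b = ["Rambam", "Rosh", "Rif", "Rosh"]

for name, cites in [("candidate_a", candidate_a), ("candidate_b", candidate_b)]:
    print(name, round(cosine(fingerprint(disputed), fingerprint(cites)), 3))
```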

As the final presenter, Sawood Alam from the Internet Archive (and WS-DL alumnus) presented “Lost, but Preserved: A Web Archiving Perspective on the Ephemeral Web.” Alam reviewed link rot research and showed that many web resources disappear over time while web archives such as the Wayback Machine recover a large portion of those losses. He highlighted rescue efforts, including the “Turn All References Blue” project that has repaired over 23 million broken links across wikis with the help of the InternetArchiveBot and the Wayback Machine. Dr. Alam also discussed the limits that slow or block preservation, including resource constraints, pages that rely heavily on JavaScript, bot blocking, login walls, paywalls, parts of the deep web, and late discovery. He highlighted that they are striving for broader ingestion from feeds such as MediaCloud, GDELT, and Wikimedia EventStreams, and noted participation in IndexNow for faster link discovery soon after page creation or update.

After the paper presentations, Jingyuan Zhu from the University of Michigan gave the invited talk “Detecting and Diagnosing Errors in Replaying Archived Web Pages.” Jingyuan presented an approach that goes beyond screenshot comparison because screenshots often flag harmless differences as failures. Their method compares layout trees and JavaScript writes to detect real fidelity violations. It also surfaces errors that the browser does not report and helps prioritize which replay problems merit investigation. The talk focused on both reliably detecting replay failures and tracing their causes so developers and archivists can fix the rewrite bugs that break replay fidelity.  
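The advantage of structural comparison over pixel comparison is easy to see in miniature. The sketch below (our own illustration, not the authors' system) diffs two simplified layout trees: a benign rendering difference leaves the tree intact, while a resource that fails to replay shows up as a missing subtree or a collapsed box.

```python
def diff_layout(original: dict, replay: dict, path: str = "root") -> list[str]:
    """Recursively compare layout nodes of the form
    {"tag": str, "box": (width, height), "children": [...]}."""
    problems = []
    if original["tag"] != replay["tag"]:
        problems.append(f"{path}: tag {original['tag']} != {replay['tag']}")
    if original["box"] != replay["box"]:
        problems.append(f"{path}: box {original['box']} != {replay['box']}")
    o_kids, r_kids = original.get("children", []), replay.get("children", [])
    if len(o_kids) != len(r_kids):
        problems.append(f"{path}: {len(o_kids)} children vs {len(r_kids)}")
    for i, (o, r) in enumerate(zip(o_kids, r_kids)):
        problems.extend(diff_layout(o, r, f"{path}/{o['tag']}[{i}]"))
    return problems

live     = {"tag": "body", "box": (1280, 900),
            "children": [{"tag": "img", "box": (600, 400), "children": []}]}
archived = {"tag": "body", "box": (1280, 500), "children": []}  # image never replayed
print(diff_layout(live, archived))
```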

At the end of the invited talk, the workshop featured three lightning talks. Dene Grigar, director and founder of “The NEXT” virtual museum and library, spoke about preserving participatory, interactive, and experiential web art. She discussed the challenges of maintaining works like M.D. Coverley's “Endless Suburbs” from 1999, which originally used Java applets that no longer work on modern browsers. She mentioned that their team preserves around 3,000 digital works, using video recordings to document works that cannot be fully saved.


The second speaker discussed ClaimsKG, a repository developed at GESIS that collects verified claims from fact-checking websites and presents them as a knowledge graph in RDF format. The system extracts metadata from claims found on social media, including entities, sources, and truthfulness ratings. Then, Carson Gross, the creator of htmx, spoke about his new project called fixi.js, a minimalist implementation of generalized hypermedia controls in just 89 lines of code. He designed it as a smaller, more ownable alternative to htmx that developers can easily modify. 


The session wrapped up with a discussion about the future of the WADL workshop. The organizers asked attendees for feedback since they had branched out from their usual host conference JCDL to try Hypertext this year. An attendee suggested that the workshop's direction naturally follows where submitters plan to attend, so hosting at various relevant conferences makes sense for attracting appropriate submissions from both digital library and web archiving communities. 

WADL previous trip reports: 2023, 2022, 2020, 2019, 2018, 2017, 2016, 2015.

Opening Remarks

After the lunch break, the conference officially opened with remarks from the organizers. The general chair, Yong Zheng, and department chair, Gurram Gopal, from the Illinois Institute of Technology welcomed everyone to the conference. The program chair, Charlie Hargood from Bournemouth University, discussed the theme and tracks of HT 2025: Hyper-Systems, Hyper-Creation, Hyper-Society, and Hyper-Scholarship. A total of 24 research papers were accepted, with an acceptance rate of 32.5% for long papers. Submissions came from 15 countries, with the majority from the USA.

Keynote #1: From Hypertext to Hyper-AI: Introducing Human-Centered AI Agents


Michelle Zhou from Juji, Inc. delivered the first keynote, “From Hypertext to Hyper-AI: Introducing Human-Centered AI Agents.” She described how human-centered AI agents can be thought of as a dynamic ecosystem of hypertext links where every hyperlink leads to a live AI agent. She demonstrated, through real-life examples, how AI can augment human capabilities to optimize limited resources on tasks that are challenging for humans, such as an AI learning assistant for education, an AI care assistant for healthcare, and an AI career assistant for HR. She also discussed the importance of incorporating personal intelligence and interaction intelligence along with the language capabilities of generative AI.

Session #1: Hypercreation & Systems


Dene Grigar from Washington State University chaired the first paper session titled “Hypercreation & Systems.” 

The session began with Sarah Abowitz from Tufts University presenting their long paper “Student Use of Commentaries with Inline Reference Resolution.” The authors surveyed students in an undergraduate Ancient Greek class about their hypertext experience. They wanted to understand how students interact with ‘inline reference resolution,’ that is, commentaries whose references are linked or resolved directly within the text interface. They found that, although such hyperlink-based navigation can help reduce cognitive load, different hypertext design strategies might be required for different phases of education, from intermediate to advanced.

Next up, Alexander Petros from Montana State University presented their long paper “The Missing Mechanic: Behavioral Affordances as the Limiting Factor in Generalizing HTML Controls.” The authors analyzed ‘Triptych,’ a set of three proposals that extend HTML attributes to support more generalized hypertext controls. Their analysis revealed that Triptych-enhanced HTML currently lacks the expressive power of fully generalized hypermedia controls because it cannot further customize event triggers. They introduced the notion of behavioral affordances and suggested that these limitations of Triptych might be filled by standards-compatible semantics. They also presented a hypothetical mechanism for customizing event triggers that conforms to existing HTML patterns and syntax.

Next, Salim Hafid from the University of Montpellier presented their long paper “Disambiguation of Implicit Scientific References on X.” The authors introduced a novel task for disambiguating implicit scientific references in posts on Twitter/X, where the goal is to link such implicit references to the correct underlying scientific publication. Many social media posts refer to scientific work (articles, findings) without explicitly citing it (no DOI, no full title), yet readers may still infer which work is being referenced. The work is important because inaccurate citations can compromise the accuracy of scientific findings, and the phenomenon may further fuel uninformed and polarized online debate on crucial topics like pandemics and climate change.

The session ended with two short papers. Krzysztof Ziembik from Jagiellonian University presented the first, titled “Forth: The Best Programming Language for the End of the World? A Case Study on A.D. 2044 (1991) - Roland Pantoła’s Interactive Fiction.” The authors discussed how the choice of Forth as the implementation language in the 1991 interactive fiction game A.D. 2044 can illuminate broader design, cultural, and technical implications for programming languages and creative computing. Subasish Das from Texas State University presented the other, titled “HyperSumm-RL: A Dialogue Summarization Framework for Modeling Leadership Perception in Social Robots.” The author introduced HyperSumm-RL, a framework designed to produce summarized representations of long dialogues in order to predict how people judge a robot’s leadership behavior.

Day 2: Sep 16, 2025


The second day opened with a tutorial session. The highlight for many came in the afternoon with the second keynote, followed by the paper sessions titled “AI for News” and “Social Media.”

Tutorial: Meta Content Library as a Research Tool


Yair Rubinstein, a representative of Meta’s research partnerships team, led a tutorial session on “Meta Content Library as a Research Tool.” Meta Content Library provides comprehensive access to public content, such as posts, videos, albums, and photos, shared on Meta platforms (Facebook, Instagram, and Threads). Users can query the library through both the UI and the API to gather metadata about social media content, such as view counts and reactions. He gave a hands-on demonstration of how users can perform in-depth analyses using the functionality available in both the UI and the API. One use case examined popular trends during the 2024 U.S. presidential election; another examined the posting frequency of different football club channels. The tutorial also provided an overview of how individuals and research teams can gain access to these tools.

Keynote #2: Human-AI Collaboration in Adaptive Information Access: From Adaptive Hypermedia to Recommender Systems


Peter Brusilovsky from the University of Pittsburgh delivered the second keynote, about how people and AI systems can share control in information access. He contrasted navigation led by humans, where systems simply highlight useful options, with AI-led sequencing, where systems suggest an order that users can change at any time. He also described simple ways to keep people in the loop, such as making user models visible and editable, letting users adjust how results are ranked for the task at hand, and keeping explanations short so that decisions are easy to understand and correct. He pointed to the early web tutors ELM-ART and NavEx as examples of this approach in practice; they used adaptive link annotation and link sorting to guide learners through the material. The practical message was to combine human judgement with AI assistance, to allow basic controls like profile and ranking preferences, and to aim for recommendations that are transparent and easy to override when they miss the mark.
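As a tiny sketch of what adaptive link annotation can look like (our own simplification in the spirit of ELM-ART and NavEx, not their actual code), a system can color links from an editable user model: grey for material already learned, green for ready, red for missing prerequisites. The topics and prerequisites below are invented.

```python
# The user model stays visible and editable, keeping the human in the loop.
user_model = {"known": {"variables", "loops"}}
prerequisites = {
    "recursion": {"functions"},
    "functions": {"variables"},
    "loops": {"variables"},
}

def annotate(topic: str) -> str:
    """Traffic-light link annotation driven by the user model."""
    if topic in user_model["known"]:
        return "grey (already learned)"
    missing = prerequisites.get(topic, set()) - user_model["known"]
    return "green (ready)" if not missing else f"red (needs {', '.join(sorted(missing))})"

for topic in ["loops", "functions", "recursion"]:
    print(topic, "->", annotate(topic))
```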

Session #2: AI for News


The program chair, Charlie Hargood from Bournemouth University, led the second paper session, on AI for news. The session paired a long paper on literary hypertext with a short paper on detecting AI-generated science news.


The first paper presentation was given by Mariusz Pisarski from UITM. The paper asked two main research questions: what authors can learn from Google AI to stay discoverable, and whether AI can fairly assess intentionally non-linear hypertext. He pointed out that current AI evaluators can miss the value of hypertext and have difficulty reading it. The discussion raised concerns about remediated censorship and the risk that ranking norms may push creative work toward more linear writing.

Dominik Soós from WS-DL and LAMP-SYS at ODU presented “Can LLMs Beat Humans on Discerning Human-written and LLM-generated Science News?” The motivation is that 36% of Americans consume science news at least a few times a week, so authorship and reliability matter. The goal of the work is to see whether evaluators, both human and AI, can correctly distinguish human-written from AI-generated content, which in turn reveals the effectiveness of AI in content generation and the ability of both humans and AI to discern the origin of content. In their results, they showed that with structured prompting, LLMs can match or even outperform humans at telling apart human- and AI-written science news. They emphasized that this capability is not limited to commercial models, as open-weight models can rival commercial systems, making it broadly accessible. The broader implication is that LLMs may not only generate content but also help safeguard the quality of science news.
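The paper's actual prompts may differ, but a hypothetical “structured” prompt for this task might walk the model through concrete checks before it commits to a label, rather than asking for a bare guess:

```python
def build_prompt(article: str) -> str:
    """Assemble a structured classification prompt (illustrative only)."""
    return (
        "You are judging whether a science news article was written by a "
        "human journalist or generated by an LLM.\n"
        "Step 1: Note hedging language, citation habits, and concrete details.\n"
        "Step 2: Note repetitive phrasing or generic transitions.\n"
        "Step 3: Answer with exactly one label: HUMAN or LLM.\n\n"
        f"Article:\n{article}\n\nLabel:"
    )

print(build_prompt("Scientists report a newly confirmed exoplanet..."))
```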

Session #3: Social Media


Taimoor Khan from GESIS - Leibniz Institute for the Social Sciences chaired the third paper session titled “Social Media.” 

The session started with Ruijie Xi from Meta presenting their long paper titled “Moral Sparks in Social Media Narratives.” The authors explored how people decide what is right or wrong in stories shared on social media, especially in posts and comments like those in the AITA subreddit on Reddit. They studied 24,676 posts and 175,988 comments from the subreddit and found that people often focus on certain sentences in a story; the authors termed these key narrative excerpts “moral sparks.” This study of moral reasoning in an online context helps us understand the importance of content moderation on social media.

Next, Jahangir Alam from the University of Texas at El Paso presented their long paper titled “Comprehensive Privacy Risk Assessment in Social Networks Using User Attributes, Social Graphs, and Text Analysis.” The authors presented a framework named Comprehensive Privacy Risk Scoring (CPRS) that combines three dimensions of data, user attributes, social graph structure, and user-generated content, to assess how much personal information is at risk. They validated the framework on two real-world datasets: the SNAP Facebook Ego Network and the Koo microblogging dataset. They found that the highest-risk attributes include email, date of birth, and mobile number. They also performed a user study with 100 participants, 85% of whom rated the framework's dashboard as clear and actionable.
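As a back-of-the-envelope sketch of the scoring idea (the paper's actual formula, weights, and normalization may differ), the three dimensions can each be normalized to [0, 1] and combined with a weighted sum; all weights and per-attribute sensitivities below are assumptions for illustration.

```python
WEIGHTS = {"attributes": 0.4, "graph": 0.3, "content": 0.3}  # assumed weights

# Assumed per-attribute sensitivity scores, for illustration only.
ATTRIBUTE_RISK = {"email": 0.9, "date_of_birth": 0.8, "mobile": 0.9, "city": 0.3}

def attribute_score(disclosed: set[str]) -> float:
    """Average sensitivity of the attributes a user has disclosed."""
    if not disclosed:
        return 0.0
    return sum(ATTRIBUTE_RISK.get(a, 0.1) for a in disclosed) / len(disclosed)

def privacy_risk(disclosed: set[str], graph_exposure: float, content_leakage: float) -> float:
    """graph_exposure and content_leakage are assumed pre-normalized to [0, 1]."""
    return (WEIGHTS["attributes"] * attribute_score(disclosed)
            + WEIGHTS["graph"] * graph_exposure
            + WEIGHTS["content"] * content_leakage)

print(round(privacy_risk({"email", "date_of_birth"},
                         graph_exposure=0.6, content_leakage=0.2), 3))
```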

Tarannum Zaki of the WS-DL group at ODU presented their long paper titled “Web Archives for Verifying Attribution in Twitter Screenshots.” Sharing screenshots on social media platforms is now common. The authors pointed out legitimate reasons why people share screenshots, such as enabling cross-platform sharing or providing evidence of deleted posts. They also discussed how people can create fake tweets and share such screenshots on social media platforms. They demonstrated different ways the live web and the archived web can be used to verify the attribution of screenshot content, focusing in particular on web archives for attributing deleted posts, since those can no longer be found on the live web. They introduced an automated process for verifying the attribution of a screenshot using the Wayback Machine.
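The lookup step of such a process can be performed against the Wayback Machine's public CDX API. Below is a minimal sketch (our own, not the paper's full pipeline, which involves more than listing captures) that retrieves mementos of a tweet URL, using a well-known early tweet as the example.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def find_mementos(tweet_url: str, limit: int = 5) -> list[dict]:
    """List successful Wayback Machine captures of a URL via the CDX API."""
    params = urlencode({
        "url": tweet_url,
        "output": "json",
        "filter": "statuscode:200",
        "limit": limit,
    })
    with urlopen(f"https://web.archive.org/cdx/search/cdx?{params}") as resp:
        rows = json.load(resp)
    if not rows:
        return []
    header, *data = rows  # the first row holds the field names
    return [dict(zip(header, row)) for row in data]

for memento in find_mementos("twitter.com/jack/status/20"):
    print(memento["timestamp"], memento["original"])
```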

The session ended with two short paper presentations. Keishi Tajima from Kyoto University presented “The Analysis of Echo Chamber Formation by Friend Recommendation Evidence.” The authors explored how friend recommendation systems influence who becomes friends with whom, how connections are made, and how that drives social network structure toward homogeneity rather than diversity. They extracted a follow graph from Twitter/X and repeatedly added edges using a recommendation algorithm to observe the resulting bias in the graph. Taimoor Khan, the session chair himself, presented “Characterization of Tweet Deletion Patterns in the Context of COVID-19 Discourse and Polarization.” The authors investigated how tweets about the COVID-19 pandemic are deleted on Twitter/X and how deletion behavior correlates with factors such as sentiment, polarization, and discourse dynamics. They focused on discovering patterns in which tweets are removed, by whom, and under what conditions in the context of polarized COVID-19 discussions. Analyzing two datasets, general discourse (TweetsKB) and COVID-19-related discourse (TweetsCOV19), they found that negative and less polarized messages are more likely to be deleted.
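A toy version of the echo-chamber simulation (not the authors' algorithm or data) makes the mechanism visible: start from a random follow graph, repeatedly add a friend-of-a-friend edge as a stand-in recommendation, and watch the average clustering coefficient, one simple proxy for local homogeneity, rise.

```python
import random
import networkx as nx

random.seed(7)
G = nx.gnm_random_graph(200, 600)  # stand-in for a real follow graph

def recommend_fof(graph: nx.Graph):
    """Pick a random user and recommend one friend-of-a-friend."""
    u = random.choice(list(graph.nodes))
    candidates = {w for v in graph[u] for w in graph[v]} - set(graph[u]) - {u}
    return (u, random.choice(sorted(candidates))) if candidates else None

print("clustering before:", round(nx.average_clustering(G), 3))
for _ in range(400):
    edge = recommend_fof(G)
    if edge:
        G.add_edge(*edge)
print("clustering after:", round(nx.average_clustering(G), 3))
```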

Day 3: Sep 17, 2025


The third day of HT 2025 opened with the Narrative & Hypertext Workshop, organized by Charlie Hargood and David Millard, before moving into the final keynote. The day ended with the paper session titled “Information.” We covered the keynote and the paper session here.

 

Keynote #3: Communicating Science in the Digital Age


Dr. Ágnes Horvát from Northwestern University explored how digital platforms and AI are reshaping how science is written, shared, and judged. She raised questions about whether online sharing boosts citation counts, who actually benefits if it does, how pressure to self-promote interacts with field culture and status, how false claims persist even after retraction, and how LLMs are changing the production of scientific text. Empirical studies grounded these questions: one found that word-frequency trends in PubMed abstracts from 2012 to 2024 reveal recent increases in terms such as “delves,” “crucial,” and “potential,” which can largely be attributed to the rise of LLM-assisted writing. She also highlighted that researchers often use social media to disseminate their work, journalists mine these channels to gauge public opinion, and a majority of the US public now relies on online platforms for science news. Visibility across scholarly platforms also differs with performance and affiliation rank, with clear gender gaps in self-promotion. The takeaway was to build better platforms, make AI use in writing transparent, and broaden participation so researchers from all backgrounds have a fair chance to be seen.

Session #4: Information


Jamie Blustein from Dalhousie University chaired the paper session titled “Information.” 

There were two papers presented in this session. Mark Bernstein from Eastgate Systems presented their long paper titled “Back to the Information City.” In the early days of hypertext, information spaces were imagined as cities: webpages were buildings, links were roads, and topics were neighbourhoods. The authors re-examined the “Information City” metaphor for hypertext visualization in light of modern systems and argued for new and updated metaphors that fit the scale and richness of today’s digital information spaces.

Ashfaq Ali Shafin from Florida International University presented a short paper titled “Toxicity in State Sponsored Information Operations.” The authors studied how governments or state-backed groups use toxic language, such as insults, harassment, or hate speech, as part of their online propaganda campaigns on social media. They examined 56 million posts on Twitter/X from over 42 thousand accounts linked to 18 distinct geopolitical entities. They found that only 1.53% of the posts were toxic; however, those toxic posts received massive engagement and appeared to be strategically deployed in specific geopolitical contexts.

Day 4: Sep 18, 2025


The last day of the conference started with two paper sessions, “Hypersociety” and “Gen AI & Games.” Next came the TWEB and Blue Sky session, followed by the closing and award ceremony. We covered the paper session “Gen AI & Games” and the rest here.


Session #6: Gen AI & Games


Dongwon Lee from Penn State University chaired the paper session titled “Gen AI & Games.”
 

The session started with Zhiyi Chen from the University of Southern California presenting their long paper titled “Synthetic Politics: Prevalence, Spreaders, and Emotional Reception of AI-Generated Political Images on X.” The authors discussed how AI-generated political images circulate on Twitter/X, who spreads them, how prevalent they are, and how people react to them. They analyzed a dataset of 2.5 million images related to the 2024 U.S. presidential election. They found that around 12% of the shared images were AI-generated and that around 10% of users were responsible for spreading 80% of them; these spreaders were also more likely to be bots. This study helps us understand how generative AI influences the online socio-political environment and the importance of platform governance.

Next, Charlie Hargood from Bournemouth University presented their long paper titled “LUTE: A Hypertextual Mixed Reality Game Engine.” The authors introduced a hypertext-driven engine, LUTE (LoGaCulture Unity Toolkit Engine), for creating mixed reality and locative games. Instead of designing a game as a linear sequence of levels, LUTE treats the game as a network of hypertext nodes and links, where each node can contain declarative ‘orders’ defining game content, mechanics, and interactions. This structure governs game flow and branching so that designers and developers can build rich locative mixed reality games without reinventing navigation and flow each time.

Navid Ayoobi from the University of Houston presented their long paper titled “ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination.” Large language models are widely used to generate text for various purposes (e.g., example essays, product reviews), which raises content moderation concerns for which AI-text detectors have emerged as a countermeasure. The authors examined the robustness of AI-text detectors and proposed back-translation as a novel technique for improving the resilience of such systems. They evaluated the method on nine AI detectors and found that the true positive rate declined by 1.85% after back-translation.
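Schematically, a back-translation probe looks like the sketch below; the translator and detector here are stand-in stubs (identity translation and a toy keyword detector), not real APIs, since the point is only the shape of the evaluation loop.

```python
def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for a real MT call; a real probe would round-trip, e.g., en->fr->en."""
    return text  # identity placeholder

def detector_score(text: str) -> float:
    """Stand-in for an AI-text detector returning P(AI-generated)."""
    generic = ("furthermore", "in conclusion", "delve", "crucial")
    hits = sum(word in text.lower() for word in generic)
    return min(1.0, 0.2 + 0.2 * hits)

def back_translation_probe(text: str, pivot: str = "fr", threshold: float = 0.5) -> dict:
    """Compare detector verdicts before and after a round-trip translation."""
    round_tripped = translate(translate(text, "en", pivot), pivot, "en")
    before, after = detector_score(text), detector_score(round_tripped)
    return {"flagged_before": before >= threshold,
            "flagged_after": after >= threshold,
            "score_drop": before - after}  # the robustness gap exposed by the probe

print(back_translation_probe("Furthermore, it is crucial to delve into the results."))
```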

The session ended with two short papers. Syed Nazmus Sakib from the University of Idaho presented “WriteAssist: A Personalized Generative AI System for Autonomous Authoring of Scholarly Literature Reviews.” The authors introduced an intelligent writing system that helps users automatically author scholarly literature reviews tailored to an individual user's preferences and research topic, an attempt to reduce researchers' cognitive load and improve academic research productivity. Jiaqing Yuan from Amazon presented “Reasoner Outperforms: Generative Stance Detection with Rationalization for Social Media.” The authors proposed a generative stance detection approach that simultaneously predicts the stance of social media content and generates an explanation for that prediction, which would contribute to maintaining transparency on online platforms.

TWEB & Blue Sky Session


The last day of the conference ended with the TWEB and Blue Sky session, chaired by Jahangir Alam from the University of Texas at El Paso. It featured one TWEB presentation and four Blue Sky presentations.

Shawn M. Jones from Google (and WS-DL alumnus) presented their TWEB paper “Summarizing Web Archive Corpora via Social Media Storytelling by Automatically Selecting and Visualizing Exemplars.” The authors introduced the five-process “Dark and Stormy Archives (DSA)” model for helping users understand large web archive collections by automatically selecting representative documents (exemplars) and turning them into a “story” format inspired by social cards, which consist of metadata fields, the document’s title, a brief description, and a striking image. The five processes of the DSA model are: selecting exemplars, generating story metadata, generating document metadata, visualizing the story, and distributing the story. In the presentation, Shawn focused only on the first process, selecting exemplars. He discussed different types of algorithmic primitives that can be used to select exemplars from collections, such as sample, cluster, score, filter, and order, and the importance of selecting exemplars for generating and sharing stories that summarize collections.
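As a toy composition of those primitives (our own illustration; the DSA implementation is the paper's), one might filter thin pages, score by length, sample from the top tier, and order the survivors chronologically. The document records below are invented.

```python
import random

random.seed(0)
docs = [
    {"url": f"https://archive.example/page{i}",
     "words": random.randint(50, 900),
     "timestamp": 2010 + i % 12}
    for i in range(100)
]

def select_exemplars(collection: list[dict], k: int = 5) -> list[dict]:
    filtered = [d for d in collection if d["words"] >= 200]            # filter thin pages
    scored = sorted(filtered, key=lambda d: d["words"], reverse=True)  # score by length
    top_tier = scored[:25]
    sampled = random.sample(top_tier, k=min(k, len(top_tier)))         # sample top tier
    return sorted(sampled, key=lambda d: d["timestamp"])               # order by time

for doc in select_exemplars(docs):
    print(doc["timestamp"], doc["url"])
```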

Mark Bernstein from Eastgate Systems started the Blue Sky session with his presentation “What is it like to be augmented?” He discussed how human augmentation via technology affects how humans perceive the world, their own identity, and their relation to other humans and machines. He highlighted that if augmentation changes human experience in fundamental ways, then designers and developers must consider not only what augmentation enables, but also what it means for embodiment.

Next, Peter Nuernberg from Hof University, Germany presented their extended abstract “It Really is Structure, all the Way Down.” The authors proposed an “all-structure” view of the world and used it to revisit earlier research ideas in a new light, laying out research questions inspired by research threads that were active in prior iterations of the hypertext community but have since slipped from focus. They explored implications across multiple domains to show how adopting this structure-first perspective can reshape how we think about foundations, existence, identity, and the objects of the world.


Behnam Rahdari from Stanford University presented their extended abstract “From Links to Dialogue: Hypertext Challenges and Opportunities in Conversational Navigation.” The authors argued that as hypertext systems evolve, we should shift from static link-based navigation toward interactive, dialogue-style interactions, which would guide users through information spaces in a more natural, adaptive, and dynamic way. They also discussed challenges and possible research opportunities for hypertext research in the era of conversational AI.

The last presenter of the session was Sophia Liu from the University of California, Berkeley, with their extended abstract “Agency Among Agents: Designing with Hypertextual Friction in the Algorithmic Web.” The authors argued that “friction” (i.e., decision points, intentionality) should be viewed not as a usability flaw but as a design value. They performed a comparative analysis of real-world interfaces (Wikipedia vs. Instagram Explore, and Are.na vs. GenAI image tools) to examine how different systems structure user experience, navigation, and authorship. They introduced a stance, “Hypertextual Friction,” that treats friction, traceability, and structure as actionable interface values.

Other


Two other workshops were “Human Factors in Hypertext (HUMAN)” and “Interdisciplinary Applications of Narrative Hypertext (NHT).” The HUMAN '25 workshop was organized by Jessica Rubart from OWL University of Applied Sciences and Arts, Germany and Claus Atzenbeck from Hof University, Germany. The workshop focused on the user-centric view of hypertext, including user interfaces and interaction, hypertext application domains, and human-centered AI.

NHT '25 was organized by Charlie Hargood from Bournemouth University and David Millard from the University of Southampton. The workshop focused on discussing and exploring interdisciplinary applications of narrative hypertext across diverse domains: games, e-literature, mixed reality, social media narratives, location-based storytelling, etc.

Mark Anderson from the University of Southampton chaired the paper session titled “Hypersociety” (session #5), in which three long papers and two short papers were presented.



There was also a session for presenting posters, demos, and exhibits.


Closing Remarks and Award Ceremony


The conference ended with acknowledgements to the sponsors and a vote of thanks to all the participants and organizers, followed by an award ceremony. The program chairs delivered their remarks and discussed future collaborative opportunities with the Hypertext conference. They also announced the possible venue of HT 2026, which may be held in London, UK.


The awards for ‘Best Paper’ and ‘Best Student Paper’ were also announced.


Wrap-up


This was the first in-person conference for both Dominik and me (Tarannum). It was a great experience to meet researchers from around the world, and we also got to meet WS-DL alumnus Mat Kelly, one of the organizers of the WADL workshop. We are thrilled to have had this wonderful opportunity to present our research before a live audience and to learn about the diverse research happening in the field of hypertext and social media; we will surely carry the knowledge and skills gained from this experience forward in our academic careers. We are grateful for the travel support from ODU SEES, ODU CSGS, and ACM SIGWEB that allowed us to attend this conference.

This was my (Tarannum's) first visit to Chicago, Illinois. Chicago is a city full of beautiful attractions and scenic spots, and apart from attending the conference, it was a wonderful experience for me to visit some of its landmarks.



Previous trip reports for Hypertext by WS-DL members: 2024, 2022, 2018, 2010, 2009.


— Rochana R. Obadage (@rochanaro), Dominik Soós (@DomSoos), Tarannum Zaki (@tarannum_zaki)
