2025-12-17: Paper Summary: "Towards Enhancing Low Vision Usability of Data Charts on Smartphones"


The IEEE Visualization Conference (VIS) is the premier forum for research in visualization and visual analytics. It brings together a global community of scholars, practitioners, and designers committed to advancing visual methods for exploring, analyzing, and communicating data. The 35th IEEE VIS Conference (VIS 2024) was held in St. Pete Beach, Florida, USA, from October 13 to 18, 2024. In this blog post, I highlight my work “Towards Enhancing Low Vision Usability of Data Charts on Smartphones,” which addresses the accessibility challenges low-vision users face when interacting with data charts on mobile devices and introduces GraphLite, a mobile assistive technology that transforms static charts into customizable interactive visualizations.



Motivation

Data charts are a powerful way to communicate trends, comparisons, and summaries, and they are now commonly found across news articles, financial tools, health dashboards, and social media platforms. For people with low vision, interacting with these charts on smartphones remains a significant challenge. While screen magnifiers allow users to enlarge content, doing so often hides important visual context, such as axis labels or relationships between data points. As a result, users are forced to pan repeatedly, remember individual values, and mentally piece together information, which can be both frustrating and cognitively demanding. Most existing accessibility solutions focus on blind screen reader users and do not address the specific needs of low-vision magnifier users. To bridge this gap, we introduce GraphLite, a mobile assistive technology that converts static charts into interactive and customizable visualizations tailored for low-vision users.



Figure 1 Prakash et al.: (A) The user selects a visual theme within GraphLite. (B) The user chooses specific data elements to focus on. (C) The appearance of selected data is customized through font and color adjustments.


Background and Related Work

Research on chart accessibility has primarily focused on blind users, offering solutions such as textual descriptions (e.g., Alt-Text generation), sonification (e.g., audio-based data exploration), and table conversions (e.g., replacing charts with structured tables). These approaches are designed for non-visual interaction and do not support the needs of users who depend on residual vision and prefer visual formats.

For low-vision users, who typically rely on screen magnifiers, the challenges are different. Magnification can obscure spatial relationships, disconnect data points from axis labels, and require constant panning, which increases cognitive effort. Prior studies have proposed general techniques like space-preserving magnifiers (e.g., SteeringWheel) and responsive visualization frameworks for small screens (e.g., MobileVisFixer, Dupo). While these techniques reduce layout disruption, they are not optimized for analytical tasks like comparing distant values or identifying trends in visual data.

These gaps highlight the need for solutions that not only preserve visual context during magnification but also support selective focus, customization, and seamless navigation across chart elements. Our work builds on this insight by designing an approach that directly targets these unmet needs in mobile chart interaction for low-vision users.

Uncovering Usability Issues with Chart Interaction

To better understand the challenges low-vision users face when interacting with data charts on smartphones, we conducted a semi-structured interview study with 14 participants who regularly use screen magnifiers. Participants had a variety of eye conditions and reported different levels of visual acuity. The interviews explored participants’ everyday encounters with charts, typical usage contexts, and the specific difficulties they experience when using magnification to interpret chart content. All sessions were conducted remotely and recorded with consent. We applied open and axial coding to extract recurring themes that directly shaped our design decisions.

Participants described three core challenges. First, they found it difficult to associate data points with axis labels when zoomed in, often losing context due to the limited field of view. Second, making comparisons between distant values in a chart was described as mentally exhausting, requiring repeated panning and memorization. Lastly, many resorted to memorizing individual values and mentally reconstructing comparisons, especially when dealing with larger datasets. These insights revealed a clear need for features that minimize panning, support selective data access, and preserve visual relationships between chart elements. This feedback directly informed the design principles behind our solution.

GraphLite Design

Key Features

GraphLite introduces three core features designed to address the chart interaction challenges identified in our user study: selective data viewing, personalized visual styling, and simplified gesture-based navigation.

Selective Data Viewing

Participants expressed difficulty in comparing distant data points within magnified views. To support targeted analysis, GraphLite allows users to selectively choose specific bars or line segments for focused viewing. These selections are grouped into distinct views, enabling users to compare relevant data without the need to pan across the entire chart. Users can switch between these views using simple swipe gestures, reducing cognitive load and helping retain context during visual comparisons.

Personalized Visual Styling

To accommodate varying visual preferences and improve clarity, GraphLite provides customization controls for color, contrast, font size, and background themes. Users can tailor the appearance of charts to match their visual comfort, which was particularly important for those who found high contrast or dark themes more readable. Styling adjustments are applied directly to the interactive chart interface, allowing for more legible and less fatiguing interaction.

Simplified Gesture-Based Navigation

Instead of relying on complex multi-finger gestures often required by default magnifiers, GraphLite supports one-finger tap, swipe, and long-press interactions for common tasks such as selecting data, switching views, or opening configuration menus. This design reduces the need for precision input and lowers the physical effort involved in exploring chart content on small screens.

GraphLite Architecture

The design of GraphLite was shaped by insights from our interview study and prior research on low-vision interaction. The system architecture supports automatic chart detection, data extraction, and personalized rendering within a mobile browser interface. Below, we outline the key components of the system.

Design Considerations and Requirements

GraphLite was designed to support selective focus, reduce cognitive load, and preserve spatial context during magnified interaction. Our goal was to enable users to control both the content and appearance of charts. To support these needs, we focused on three design principles: enabling selective data views to minimize unnecessary information, allowing visual customization to improve clarity, and offering simplified gestures to reduce physical effort. These principles guided both the system’s architecture and interface behavior.

Overview

When a user opens a webpage in the GraphLite browser, the system identifies and processes chart images on the page. A trained classifier recognizes whether an image is a data chart and determines its type, such as bar or line. Once detected, the chart is transformed into a structured, interactive version that replaces the original static image. The user can tap a chart to activate GraphLite’s proxy interface. From here, they can select specific data points, customize appearance settings, and create multiple views that group relevant values together. Navigation between views is performed using simple swipe gestures. This structure lets users focus on comparing small subsets of data without needing to pan across the entire chart.
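The detect-then-replace flow described above can be sketched in a few lines. This is an illustrative outline only, assuming hypothetical classifier and extractor callables; the names and data shapes here are not GraphLite's actual API.

```python
# Minimal sketch of the chart-processing flow: classify each page image,
# and for detected charts, extract structured data for an interactive proxy.
# All names and structures here are illustrative, not GraphLite's real code.

def process_page_images(images, classifier, extractor):
    """Classify each image; build proxy data for detected charts."""
    proxies = []
    for img in images:
        chart_type = classifier(img)          # e.g. "bar", "line", or None
        if chart_type is None:
            continue                          # not a data chart; leave as-is
        data = extractor(img, chart_type)     # structured values + labels
        proxies.append({"type": chart_type, "data": data})
    return proxies

# Toy stand-ins for the trained classifier and the extraction pipeline:
fake_classifier = lambda img: img.get("kind")
fake_extractor = lambda img, t: img.get("values")

page = [{"kind": None}, {"kind": "bar", "values": [3, 5, 2]}]
print(process_page_images(page, fake_classifier, fake_extractor))
# → [{'type': 'bar', 'data': [3, 5, 2]}]
```

In the real system the classifier is a trained model and the extractor runs server-side, but the control flow, skipping non-chart images and replacing charts with structured proxies, follows this shape.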

Video 1 Prakash et al.: Demonstration video illustrating GraphLite's proxy interface for transforming static chart images into interactive, accessible visualizations.

Figure 2 Prakash et al.: System architecture of GraphLite showing chart input, type identification, data extraction, interface rendering, and user interaction flow.

Chart Data Extraction

GraphLite uses ChartOCR, a hybrid deep learning model, to extract data from chart images. For bar charts, the system detects the corners of each bar and calculates height values. For line charts, it clusters key pivot points and reconstructs the full line segments. OCR is performed using AWS-Rekognition to extract axis labels, titles, and legends. The final output is a structured JSON representation containing all chart elements, which is used to render the interactive proxy.
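The bar-chart case, mapping detected corner pixels to data values and emitting JSON, can be illustrated as follows. This is a simplified approximation of the ChartOCR step under the assumption of a linear y-axis calibrated by two known tick positions; the function and field names are ours, not the paper's.

```python
import json

# Sketch: convert detected bar corners (pixel coordinates) into data values,
# assuming a linear y-axis calibrated by two known ticks. Illustrative only.

def pixels_to_value(y_px, y0_px, y0_val, y1_px, y1_val):
    """Linearly map a pixel y-coordinate onto the data axis."""
    scale = (y1_val - y0_val) / (y1_px - y0_px)
    return y0_val + (y_px - y0_px) * scale

def bars_to_json(bars, labels, y0_px, y0_val, y1_px, y1_val):
    """bars: list of (top_y, bottom_y) pixel pairs, one per detected bar."""
    values = [round(pixels_to_value(top, y0_px, y0_val, y1_px, y1_val), 2)
              for top, _ in bars]
    return json.dumps({"type": "bar",
                       "data": [{"label": l, "value": v}
                                for l, v in zip(labels, values)]})

# Axis: value 0 at pixel y=400, value 100 at pixel y=0 (screen y grows down).
print(bars_to_json([(300, 400), (100, 400)], ["Q1", "Q2"], 400, 0, 0, 100))
# → {"type": "bar", "data": [{"label": "Q1", "value": 25.0},
#                            {"label": "Q2", "value": 75.0}]}
```

The structured JSON at the end is the key artifact: once chart content exists as labeled values rather than pixels, the proxy interface can re-render it at any size, theme, or subset.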

Proxy Interface Design

Figure 3 Prakash et al.: (1) Magnifying a portion of the chart. (2) Panning downward to view labels. (3) Panning upward to follow the data bar. (4) Panning horizontally to explore chart values.

The proxy interface is built for screen magnifier users and avoids reliance on multi-finger gestures. Users interact with charts using tap, swipe, and long-press actions. A swipe up gesture opens the theme picker, allowing users to adjust color, font, and contrast settings. A long press on the chart opens a list of x-axis labels as checkboxes, enabling selection of specific data points to include in a focused view. Users can create multiple such views and move between them with left or right swipes. To reduce panning, GraphLite applies space compaction, optimizing layout spacing while preserving interpretability.
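The selective-view idea behind the checkbox interaction can be sketched as a simple filter-and-compact step: keep only the user-checked labels and re-lay out the surviving bars in consecutive slots so distant values sit side by side. The layout fields here are hypothetical, chosen just to show the principle.

```python
# Sketch of selective data viewing with space compaction: keep only the
# user-checked labels and assign compacted x-positions so no panning is
# needed to compare them. Field names are illustrative.

def make_focused_view(chart, selected_labels, slot_width=60):
    view = []
    slot = 0
    for point in chart:
        if point["label"] in selected_labels:
            view.append({"label": point["label"],
                         "value": point["value"],
                         "x": slot * slot_width})  # compacted position
            slot += 1
    return view

chart = [{"label": m, "value": v}
         for m, v in [("Jan", 4), ("Feb", 7), ("Mar", 3), ("Dec", 9)]]
print(make_focused_view(chart, {"Jan", "Dec"}))
# → [{'label': 'Jan', 'value': 4, 'x': 0},
#    {'label': 'Dec', 'value': 9, 'x': 60}]
```

Each focused view is just such a filtered subset; swiping left or right swaps which subset is rendered, which is why comparisons no longer require traversing the full chart width under magnification.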

Video 2 Prakash et al.: Real-time demonstration of GraphLite’s end-to-end pipeline, including chart detection, type classification, data extraction, proxy interface generation, and interactive user navigation.

Implementation Details

GraphLite is implemented as an Android mobile browser using the Flutter framework. The app extracts page content and sends it to a back-end server for chart processing. Image classification is handled by a custom-trained Inception-V3 model, and chart data extraction is performed using the ChartOCR pipeline. The server communicates with the app using JSON-based APIs. For rendering charts in the proxy interface, we use the Syncfusion Flutter charting library, which supports full customization and interaction. All components are modular, allowing future upgrades to chart models or OCR engines without rewriting the interface logic.

Evaluation

To assess the effectiveness of GraphLite, we conducted a comprehensive user study comparing it against three baseline methods: the default screen magnifier, a table-based chart conversion tool, and a space compaction-only interface. The study aimed to evaluate GraphLite’s impact on task performance, usability, workload, and user satisfaction when interacting with data charts on smartphones.

Table 1 Prakash et al.: Definitions of all abbreviations and placeholders used throughout the system and study.

Participants and Study Design

We recruited 26 low-vision participants who regularly use screen magnifiers on mobile devices. The participant pool included individuals with varying eye conditions and levels of visual acuity. The study followed a within-subjects design, where each participant completed chart-based tasks across five conditions: screen magnifier (SM), tabular representation (TBL), space compaction only (SC), space compaction with customization (SCC), and the full GraphLite system (SCCF). Tasks included pairwise comparisons, selective filtering, trend prediction, and trend comparison, using both bar and line charts.

Procedure: Data Collection and Analysis

Participants performed 12 tasks across the five study conditions. Tasks were designed to simulate realistic data interaction scenarios, such as comparing stock values or filtering sales figures. We measured task completion time, task success rate, and accuracy. After each condition, participants completed the System Usability Scale (SUS) and NASA-TLX questionnaires, followed by a semi-structured interview to gather qualitative feedback. Quantitative data was analyzed using non-parametric statistical tests due to the non-normal distribution of some measures. Interview responses were coded thematically to identify common experiences and suggestions.
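For readers unfamiliar with the SUS instrument used here, the standard scoring works as follows: ten items are rated 1 to 5, odd items contribute (rating − 1), even items contribute (5 − rating), and the total is scaled by 2.5 onto a 0-100 range. A minimal implementation of that standard formula (not code from the study):

```python
# Standard System Usability Scale (SUS) scoring: ten items rated 1-5;
# odd-numbered items contribute (rating - 1), even-numbered items
# contribute (5 - rating); the sum is scaled by 2.5 to a 0-100 range.

def sus_score(ratings):
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd item)
                for i, r in enumerate(ratings))
    return total * 2.5

# A strongly positive response (5 on odd items, 1 on even items):
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

The per-condition SUS averages reported in this study are means of such 0-100 scores across participants.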

Figure 4 Prakash et al.: Task completion times in seconds across all conditions and tasks. For the MBF task, only TBL and SCCF results are shown, as participants could not complete the task in the SM, SC, and SCC conditions within the allotted time.

Task Performance

GraphLite’s full system (SCCF) consistently led to faster completion times across all tasks. For example, participants completed pairwise comparison tasks nearly twice as fast with SCCF compared to the default magnifier. In more complex filtering and trend analysis tasks, SCCF outperformed both the tabular baseline and partial GraphLite conditions, especially when multiple data points had to be evaluated. Task success rates with SCCF reached 100 percent, compared to 61.5 percent under the default magnifier.

Usability and Workload Scores

SCCF received the highest SUS scores (average = 85.8), indicating strong perceived usability. By contrast, the screen magnifier condition received the lowest average score (51.3). NASA-TLX scores showed a significant reduction in cognitive load with SCCF (average = 22.7), compared to high workload ratings in the SM condition (average = 70.1). The TBL condition showed moderate performance, with usability and workload scores improving for simpler tasks but degrading as task complexity increased.

Error Patterns and Observations

In the screen magnifier condition, participants often lost track of axis labels or misestimated bar heights due to limited field of view. Errors were also common in the SC and SCC conditions, especially when users attempted to manually compare distant data points. In the TBL condition, errors were typically due to mental overload while scanning long tables. The SCCF condition showed the fewest errors, aided by selective data views and tooltip support.

Qualitative Feedback

Participants described GraphLite as significantly easier to use compared to other methods. Many appreciated the ability to focus on a subset of data without panning, and the option to adjust contrast and font size. Users noted that they felt more in control when they could select and compare relevant values directly. Several participants mentioned that it was the first time they could analyze trends visually without assistance. Suggestions for future improvement included adding vertical compaction, auto-focus gestures, and quick-access tooltips.

Discussion

This study surfaces a core gap in how accessibility tools support low-vision users. Many systems rely on magnification as a catch-all solution, assuming that increased size alone will make content usable. Our findings suggest that this approach is inadequate. Magnification often introduces new problems such as excessive panning, broken spatial relationships, and cognitive fatigue. 

These are not minor inconveniences but real obstacles to effective data comprehension. A more meaningful approach centers on structural adaptability. Instead of simply scaling visuals, interfaces need to adapt the organization of information to match user goals. GraphLite’s success points to the value of giving users control over which parts of a chart to focus on and how that content is presented. This form of selective interaction allowed participants to reduce visual effort while improving accuracy.

Simplifying input is not just about convenience. Many participants struggled with precision-based gestures in baseline conditions. A single-tap or swipe model made chart navigation more manageable, especially for tasks that involved comparing multiple values or scanning for trends. This highlights the importance of minimizing fine motor demands in mobile accessibility design.

Finally, the study underscores how underserved this population remains. While screen reader users benefit from a mature ecosystem of accessibility tools, low-vision users often have to improvise with general-purpose features. There is a clear need for purpose-built solutions that recognize the distinct ways low-vision users interact with visual content, particularly on mobile platforms.

Limitations

  • GraphLite currently supports only bar and line charts, which limits its applicability to other common formats like pie, scatter, or stacked charts.
  • The system assumes clean and well-aligned input images. Charts with low resolution, visual clutter, or unusual layouts may lead to extraction errors.
  • Our study was conducted on a controlled device, not on users’ personal smartphones, which may affect the generalizability of usage behavior.
  • While GraphLite supports visual customization, it does not currently integrate audio feedback, screen reader compatibility, or other multimodal access features.

Future Work

One promising direction is to enhance data comprehension through details-on-demand (DoD) interactions. These techniques allow users to request specific information as needed, keeping the interface clean while still providing access to deeper insights. Future implementations could explore how selection-based and zoom-based DoD methods can reduce the need for constant panning and support a more efficient workflow for low-vision users. A controlled study that simulates various DoD techniques could help identify which specific interactions are most beneficial for tasks like trend recognition or value comparison, and how they can be optimized for mobile magnifier users.

Another avenue for future research is predictive magnification, which involves automatically guiding the user’s focus to salient regions of a chart. Participants in our study expressed a desire for such behavior, especially to minimize the number of manual gestures. Building on prior work in content saliency and gaze tracking, future systems could use predictive models to pan toward important chart elements like axis labels or data peaks. This could reduce navigation effort and cognitive load during analysis. Integrating predictive magnification with user-driven control mechanisms may lead to a more fluid and supportive chart navigation experience for low-vision users.

Conclusion

GraphLite demonstrates how mobile-first, visualization-specific accessibility tools can improve data comprehension for low-vision users. By transforming static charts into interactive, customizable, and compact interfaces, the system supports faster, more accurate, and less effortful interaction. User studies confirm significant improvements over screen magnifiers and tabular alternatives, validating the importance of selective focus and visual personalization. While current limitations include chart type support and generalizability to real-world images, GraphLite lays the groundwork for future research in predictive navigation and adaptive visualization design. It represents a step toward more inclusive visual interfaces, where accessibility is embedded as a foundational element rather than an add-on.

References

Prakash, Y., Khan, P. A., Nayak, A. K., Jayarathna, S., Lee, H. N., & Ashok, V. (2024). Towards Enhancing Low Vision Usability of Data Charts on Smartphones. IEEE Transactions on Visualization and Computer Graphics. https://doi.org/10.1109/TVCG.2024.3456348

Code for GraphLite is available on GitHub: https://github.com/accessodu/GraphLite.git

- AKSHAY KOLGAR NAYAK (@AkshayKNayak7) 

