
Proposal: GlyphBars: Networked Bargrams for Multi-Document Visualization

The basic idea is to use parallel bargrams for multi-modal (text and image) visual analysis.


Visualization hierarchy:

The visualization consists of parallel bargrams. The bar of each bargram is composed of rectangular units (similar to a treemap visualization), where each unit represents a paper; the units can also show images or figures from the documents in the bar's cluster.

Above each bar are glyphs that encode a further dimension associated with the bar's items (paper titles), such as author names or year. The glyphs form a beeswarm unit visualization, with each glyph representing a paper.
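The grouping described above can be sketched as a small helper that partitions papers into bars by one categorical dimension. This is only an illustrative sketch; the function name, the dictionary-based paper records, and the example data are all hypothetical, not from the proposal:

```python
from collections import defaultdict

def group_into_bars(papers, dimension):
    """Group papers into bargram bars by a categorical dimension (e.g. year).
    Each bar is a list of paper IDs; each ID would be drawn both as a square
    unit inside the bar and as a glyph in the beeswarm above it."""
    bars = defaultdict(list)
    for paper in papers:
        bars[paper[dimension]].append(paper["id"])
    return dict(bars)

# Hypothetical example data.
papers = [
    {"id": "p1", "year": 2019},
    {"id": "p2", "year": 2020},
    {"id": "p3", "year": 2020},
]
group_into_bars(papers, "year")  # {2019: ["p1"], 2020: ["p2", "p3"]}
```

The same helper could be called once per dimension (year, venue, author) to produce each of the parallel bargrams.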



A scatterplot or beeswarm unit visualization sits at the top of each bar.


We use SNAP (the Stanford Network Analysis Platform) to perform the citation network analysis. We then map the resulting graph onto the glyphs/items as "invisible links" in the visualization.
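The proposal names SNAP for this step; as a minimal, dependency-free sketch of the same idea (the paper IDs and edge list below are hypothetical, and a plain adjacency map stands in for a SNAP graph), the citation graph that drives the invisible links can be built as:

```python
from collections import defaultdict

def build_citation_graph(citations):
    """Build an undirected adjacency map from (citing, cited) paper pairs.
    Stand-in for a SNAP graph; the neighbor sets become the 'invisible links'
    between glyphs/units in the visualization."""
    adj = defaultdict(set)
    for citing, cited in citations:
        adj[citing].add(cited)
        adj[cited].add(citing)  # treat citation links as undirected for highlighting
    return adj

# Hypothetical citation pairs.
citations = [("paperA", "paperB"), ("paperA", "paperC"), ("paperD", "paperB")]
graph = build_citation_graph(citations)
graph["paperB"]  # {"paperA", "paperD"}
```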

Since there are generally more authors than papers (unless many papers share the same authors), authors are represented by a dot in the beeswarm area of the bargram and by a square unit in the bar area. Selecting one item/paper/unit highlights the other interconnected units, yielding an "invisible link" as in EZChooser.
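The select-and-highlight interaction described above can be sketched as a pure function over such an adjacency map (the function name and example data are illustrative, not from the proposal):

```python
def highlight_on_select(adj, selected):
    """Return the set of units to highlight when one paper is selected:
    the selection itself plus every directly linked paper
    (the 'invisible links')."""
    return {selected} | adj.get(selected, set())

# Hypothetical adjacency map from the citation analysis step.
adj = {
    "paperA": {"paperB", "paperC"},
    "paperB": {"paperA"},
    "paperC": {"paperA"},
}
highlight_on_select(adj, "paperA")  # {"paperA", "paperB", "paperC"}
```

A selection with no links simply highlights itself, so the interaction degrades gracefully for isolated papers.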

As with other interactive visualizations such as SIRIUS [], the bargrams can move in response to selection.

Building on EZChooser's work on invisible links, we add such links both to the glyphs and to the units of the bar visualizations.

Parallel bargrams are used by EZChooser, BarcodeTrees, and LocusTree.

