(2023-08-23) Obenauer LN 038: Semantic Zoom

Alexander Obenauer on LN 038: Semantic zoom. Earlier constructions of this environment had two or three view definitions per item (small, large, and full screen). The latest, as seen in this demo, allows view components within one item view definition to swap out depending on the space available. This provides the physicality and fluidity that this latest experiment required.
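One way to picture this: instead of separate small/large/full-screen view definitions, a single view definition lists component variants per slot, and the richest variant that fits the available space is chosen. This is a minimal hypothetical sketch, not Obenauer's actual implementation; all names and shapes here are assumptions.

```typescript
// Sketch of "semantic zoom": one view definition whose components
// swap out depending on the space available, rather than separate
// small/large/full-screen view definitions. (Hypothetical model.)

type Size = { width: number; height: number };

// A component variant declares the minimum space it needs.
interface ComponentVariant {
  name: string;
  minWidth: number;
  render: () => string; // stand-in for real rendering
}

// A slot in the view definition lists variants from richest to simplest.
interface Slot {
  variants: ComponentVariant[];
}

// For each slot, pick the richest variant that fits the available space.
function resolveView(slots: Slot[], size: Size): string[] {
  return slots.map((slot) => {
    const fit = slot.variants.find((v) => v.minWidth <= size.width);
    return fit ? fit.render() : ""; // hide the slot if nothing fits
  });
}

// Example: a run item's map degrades from full map → thumbnail → label.
const mapSlot: Slot = {
  variants: [
    { name: "full-map", minWidth: 400, render: () => "[interactive map]" },
    { name: "thumbnail", minWidth: 120, render: () => "[map thumbnail]" },
    { name: "label", minWidth: 0, render: () => "6.2 mi route" },
  ],
};

resolveView([mapSlot], { width: 500, height: 300 }); // → ["[interactive map]"]
resolveView([mapSlot], { width: 150, height: 100 }); // → ["[map thumbnail]"]
```

The key design point is that the decision lives in the view definition itself, so one item view can remain consistent as it grows or shrinks.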

Let’s take this thinking further.

This “undulant interface” was made by John Underkoffler. The heresy implicit within is the premise that the user, not the system, gets to define what is most important at any given moment: where to place the jeweler’s loupes for more detail, and where to show only a simple overview, within one consistent interface.

This style of interaction could show up in many parts of an itemized personal computing environment: when moving in and out of sets, single items, or attributes and references within items.

A summary of Colin's recent run shown in a table on the left, a map on the right, and a timeline on the bottom.

Colin is left needing more: what if there were a way to overlay that I drank half a bottle of BodyArmor when I started the run, and that at mile 6 I briefly stopped to drink some water and eat an energy gel?

Two weeks ago, it was much more humid than it was on Sunday. How can I connect weather data, such as temperature, cloud cover, and humidity, to each run?
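In an itemized environment, one plausible answer is that weather readings and runs are both items, and the connection is just a reference resolved by time. This is a sketch under that assumption; the `Item` shape and `weatherForRun` helper are hypothetical, not Obenauer's data model.

```typescript
// Hypothetical itemized data model: runs and weather readings are both
// items with timestamps, and a run's weather is the reading closest in
// time to the run's start.

interface Item {
  id: string;
  timestamp: number; // e.g. epoch milliseconds
  attributes: Record<string, unknown>; // humidity, temperature, pace…
}

// Find the weather item closest in time to the run's start.
function weatherForRun(run: Item, weather: Item[]): Item | undefined {
  let best: Item | undefined;
  let bestDelta = Infinity;
  for (const w of weather) {
    const delta = Math.abs(w.timestamp - run.timestamp);
    if (delta < bestDelta) {
      bestDelta = delta;
      best = w;
    }
  }
  return best;
}

// Usage: attach the nearest reading's attributes to Sunday's run.
const run: Item = { id: "run-sun", timestamp: 1000, attributes: {} };
const readings: Item[] = [
  { id: "w-1", timestamp: 400, attributes: { humidity: 80 } },
  { id: "w-2", timestamp: 900, attributes: { humidity: 55 } },
];
weatherForRun(run, readings); // → the reading with id "w-2"
```

Because the link is computed rather than baked into the run item, the same mechanism works for any time-stamped data the user cares about, not just weather.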

This is what we explored in the last lab note: everyone has unique needs and context, yet that which makes our lives more unique makes today’s rigid software interfaces more frustrating to use.

If we have the latest run in our gestural itemized environment, we can look at more data by magnifying the item (a zooming UI).

Instead of selecting one view to switch to, as we first explored in LN 006, we could drag views into the space to have multiple open at once.
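The shift from "switch views" to "open several at once" can be modeled as a space that holds any number of simultaneously open views of the same underlying item. A minimal sketch, assuming a hypothetical `Space` container and view kinds drawn from the run example above:

```typescript
// Hypothetical model: a space holds multiple open views of one item,
// rather than a single active view that gets switched.

interface View {
  kind: "table" | "map" | "timeline";
  itemId: string;
}

class Space {
  private views: View[] = [];

  // Dragging a view into the space opens it alongside the others.
  open(view: View): void {
    this.views.push(view);
  }

  // Dragging a view out (or dismissing it) closes just that view.
  close(kind: View["kind"], itemId: string): void {
    this.views = this.views.filter(
      (v) => !(v.kind === kind && v.itemId === itemId)
    );
  }

  // All views currently open for a given item.
  openViews(itemId: string): View["kind"][] {
    return this.views
      .filter((v) => v.itemId === itemId)
      .map((v) => v.kind);
  }
}

// Usage: Colin's run open as a table, a map, and a timeline at once.
const space = new Space();
space.open({ kind: "table", itemId: "run-sun" });
space.open({ kind: "map", itemId: "run-sun" });
space.open({ kind: "timeline", itemId: "run-sun" });
space.openViews("run-sun"); // → ["table", "map", "timeline"]
```

The point of the sketch is that no view is privileged: each is just another presence of the item in the space, which is what makes the table/map/timeline arrangement from the run summary possible.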

