“Internet of Display” …Are you Viewing Your Information Through a Straw?

Most of you have probably heard the term Internet of Things (IoT), which refers to the fact that millions, and soon probably billions, of devices will be connected to information via the Internet.  Recently, Andrew (Drew) Jamison at Scalable Displays has been chirping about what he calls the “Internet of Display” (IoD).  Since reading his article introducing the concept, I have been having spirited debates with a number of people about it – trying to decide if the term has merit and, if so, how to describe it concisely.  In this article, I will lay out the concept in more detail as I understand it, and I invite you to chime in with your comments and input.

One of the trends behind IoT and IoD is that functionality and data that used to reside on PCs, workstations or company servers are moving to the cloud.  The result is that the conventional display/workstation paradigm is changing: a simple “dumb” display may soon be all that an end user needs to do complex tasks.

For example, this means that a CAD designer can interact with and render designs in the cloud, with just images delivered to the display.  A digital signage media player can migrate to the cloud, delivering the content playlist in real time.  A control room can use management software resident in the Internet to aggregate multiple sources of data and video and deliver images to the display solution.  360-degree computer-generated or live video content can reside in the cloud, streaming to VR headsets or mobile devices.

What is common to these and many other applications is that there are huge data sets that the user is accessing.  Let’s call these content pixels.  For example, the data behind a game or a CAD model is three dimensional with great detail – hundreds of millions of content pixels.  360-degree video data has a lot of content pixels as well.  Any broadcast of an event featuring multiple cameras has a huge number of content pixels.

But what we often see is only a subset of the content pixels.  Let’s call these display pixels.  Returning to our examples above, the CAD or game designer sees one view (or two views if stereo) that is limited by the pixel count of the display.  To see more of the game or model, interface devices allow navigation.  With 360-degree content in a VR headset, you only see a portion of the environment at one time with different views presented as you move your head.  In broadcast, all those cameras are switched to provide one stream whose view is limited by the screen size and pixel count of your display.

Display pixels are almost always far fewer than content pixels.  And, just as importantly, the window into the content-pixel world is typically focused on a single user.  To paraphrase David Park’s comment on LinkedIn, when the display pixels are much fewer than the content pixels, it is like looking at the world through a straw – you only see a tiny portion at a time.

There are solutions that enable the display pixels to rival the content pixels.  A control room with a whole wall of displays, a massive digital sign, simulators with curved screens, five- or six-sided CAVEs, or planetariums with domed screens are some examples.  And notice that these displays are now focused on group and not single-person use.

Figure 1: The way to view rich data sets?

For the most part, these are expensive systems, but with content and processing moving to the cloud, it seems likely that wide field-of-view, multi-megapixel display solutions will become more commonplace.  That means desktops with 2-3 monitors or giant curved screens will be common.  Meeting rooms will be equipped with 2-3 (or more) blended projectors or wall-sized LCD or LED screens.  Theaters are even embracing the trend with the Barco Escape 3-screen format.

I think it is also likely that small personal domed or toroidal display systems will become much more popular as non-headset VR viewing devices.  And why not extend this to theme parks or special venues, with multiple people in 360-degree “immersion” rooms for entertainment purposes like being at a concert or sporting event?

The TV wall of the future is doable today, and cost-effectively, with blended short-throw projectors.  It does not seem so far-fetched to imagine mobile devices from family members throwing up all kinds of content while other sources serve up TV shows, multiple sporting events, data, video chats and more.

The content pixels have already expanded well beyond the display pixels so it seems logical that end users will understand this and demand more display pixels.  And interest in 180- and 360-degree video is exploding with the VR craze.  Cameras to capture the content just got a big boost too with the announcement of GoPro and Google teaming up in this area.  The content is and will be there, but will the displays?

So how many display pixels is enough, and under what circumstances?  Scalable says that in a conference room, where the typical viewing distance is 12 feet, a benchmark of 1 megapixel per person has historically been used in many of their installations.  I think that in many situations the number should be far higher – depending on the application, your distance from the display and the field of view.  Since we want to get away from the “view the world through a straw” mentality, I suggest that a 90-degree field of view (FOV) is a good benchmark for viewing rich multimedia.

Figure 2 shows a simple rule for calculating the needed display pixels per person for such wide FOV displays.

Figure 2: Megapixels per Person vs. Viewing Distance for 90-degree Field of View
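The exact formula behind the chart isn’t spelled out, but the core idea can be sketched in a few lines: pick a field of view and an angular resolution (pixels per degree), and multiply out the pixel count.  This is a rough sketch, not Scalable’s rule – the 50-degree vertical FOV and the pixels-per-degree targets below are illustrative assumptions (60 pixels per degree corresponds to the often-quoted 1-arcminute acuity limit of the eye):

```python
def megapixels_for_fov(h_fov_deg, v_fov_deg, pixels_per_degree):
    """Display pixels needed to cover a field of view at a given
    angular resolution (pixels per degree)."""
    h_px = h_fov_deg * pixels_per_degree
    v_px = v_fov_deg * pixels_per_degree
    return h_px * v_px / 1e6

# ~60 px/degree corresponds to the 1-arcminute acuity limit of the eye.
EYE_LIMIT_PPD = 60

# A 90 x 50 degree viewing window (vertical FOV is an assumption):
print(megapixels_for_fov(90, 50, EYE_LIMIT_PPD))  # 16.2 MP at the eye limit
print(megapixels_for_fov(90, 50, 20))             # 1.8 MP at a modest 20 px/degree
```

Note that the eye-limiting figure is far above the chart’s practical per-person benchmarks, which sit well below this ceiling.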

To see if this is about right, let’s consider some examples.  On my desktop, for instance, I have two 1920×1080 monitors (2 megapixels).  They are side by side, creating about a 90-degree field of view from where I sit (about 30 inches away).  The chart suggests I should have about 3.5 megapixels as a single user, which indeed I would like to have.

In a single-user VR headset with a virtual distance of 1 to 5 feet, you want 2-4 megapixels of resolution for a 90-degree field of view, according to Figure 2.  Around 2 megapixels in about this FOV is what the Oculus Crescent Bay headset delivers today, and it could use a few more pixels.  Want to allow 4 people to view 180-degree content at 7 feet using blended projectors, without having to wear a headset?  You need about 8 megapixels.
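To see why a headset “could use a few more pixels,” compare its angular resolution to the eye’s roughly 60-pixels-per-degree limit.  A quick sketch – the 1080-pixel and 90-degree per-eye figures are approximations for illustration, not official Oculus specifications:

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Angular resolution: horizontal pixels spread across the horizontal FOV."""
    return h_pixels / h_fov_deg

# Assumed per-eye figures for a Crescent Bay-class headset:
# ~1080 horizontal pixels over a ~90-degree FOV.
print(pixels_per_degree(1080, 90))  # 12.0 px/degree, well under the ~60 eye limit
```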

Let’s consider a meeting or small huddle room.  It is not uncommon for these rooms to have a single projector or flat panel display for 2-4 people to use in the meeting or collaboration.  But such a display solution creates a very narrow field of view for each participant.  That may be fine for sharing a PowerPoint document, but for multiple pieces of data or a big data set, such a display is inadequate.

Using the chart, four people viewing data on a 90-degree-FOV display at 13 feet requires 4 megapixels.  A single flat panel can achieve the megapixels, but not the wide FOV.  Blended projectors or an LED or LCD wall are needed.

What needs to be added to the discussion is the reality that when a display fills more than 90 degrees, participants don’t all look at the same data on the large screen.  The display needs to support sufficient pixels to accommodate the visual acuity of all participants.

If there is the expectation that participants will come close to the display to look at finer details, you need to decide how close when determining the megapixel requirement.  If participants are 3 feet away, for example, you need about 15 megapixels for 4 people.
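That closest-viewer requirement can be sketched as a pixel-pitch calculation: the wall’s pixel pitch should match what the nearest participant can resolve.  The wall dimensions and the 3-arcminute-per-pixel “practical” acuity target below are illustrative assumptions, not the chart’s benchmark:

```python
import math

def wall_megapixels(width_ft, height_ft, closest_viewer_ft, acuity_arcmin):
    """Megapixels a wall display needs so its pixel pitch matches what
    the closest viewer can resolve at the given acuity (arcmin/pixel)."""
    # Smallest feature the closest viewer can resolve at this distance:
    pitch_ft = closest_viewer_ft * math.tan(math.radians(acuity_arcmin / 60.0))
    h_px = width_ft / pitch_ft
    v_px = height_ft / pitch_ft
    return h_px * v_px / 1e6

# Assumed 12 x 7 ft wall, closest viewer 3 ft away, practical
# 3-arcmin/pixel target (the eye limit is ~1 arcmin):
print(round(wall_megapixels(12, 7, 3, 3.0), 1))  # prints 12.3
```

With these assumed numbers the result lands in the same ballpark as the roughly 15 megapixels quoted above; tightening the acuity target toward 1 arcminute pushes the requirement roughly nine times higher.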

The market for these wide field of view display solutions that allow rich access to content on the Internet or from other sources is exploding.  The market drivers are:

  • Access to massive data sets from anywhere in the world
  • Wide field of view
  • Single to multiple person viewing
  • Eye limiting resolution based on interaction distance
  • Display pixels matched to content pixels

What is your “FOV” on the “Internet of Display”?  Let’s take a “straw” poll on which term best captures the scope of this discussion:

  • Internet of Display
  • Immersive Content Viewing
  • Field of Vision Viewing
  • Human Vision Displays
  • Directview VR

Please post your votes or comments below.

  • Chris Chinnock

    I like field of vision viewing

  • Sam Warburton

    Human Vision Displays or Human-Scale Display

  • Guy Van Wijmeersch

    Chris, great synthesis of what is indeed happening with large rich displays. The interesting part will be virtual and augmented reality showing a continuous 3D-enriched environment, giving more context to the information we absorbed before, whether a simple PowerPoint or complex Excel sheets. These will be good for TED talks but not for collaborative meetings or control rooms, for example.

  • Guy Van Wijmeersch

    Btw, the straw association is not 100% correct, as the amount of overview you get also has to do with the distance you are looking from. Even in the real world, standing on the earth, you have a limited view of the millions of things that can be seen; it is only when I lift myself up to 10,000 feet or more that I see the shape of the earth. Of course you will miss details, and that will be the challenge in the virtual world also. Of course, in the virtual world it is much more convenient and faster to go from 0 to 10,000 feet than in the real world 🙂

  • Geoff Walker

    I think Chris’ vision of the requirements for immersive displays is quite good, but I think the belief that these pixels are going to come from the cloud is unreasonable in the next 10 years in the USA. We’re lagging behind many other countries in the bandwidth that’s available to the average home, and many businesses are worse. I have more Mbps Down at home (~50) than I do at work (~35), and neither one of them is consistent enough to support the level of reliability that Chris envisions. Any serious CAD work today requires a high-end graphics card to support just a single FHD monitor. I just don’t see this moving to the cloud. Storage, sure, but not rotating a 100-component model at a usable frame rate.

    For me, the least objectionable name is “immersive content viewing”. The term “Internet” should never be in the name, and the actual human FOV is much wider in both directions, so “Human” shouldn’t be there either.

    • John

      Geoff: try using Onshape, I believe you will be surprised

  • Chad Byler

    Interesting article. I’d vote for Immersive Content Viewing. As mentioned, the content pixels are greater than the display pixels, so you are now within, or immersed in, the content. Bringing a human FOV into the naming would require all displays past a certain wide viewable angle to be immersive; when one person walks up to a screen and another stands back in a conference room, one may be immersed and one may not. In a similar narrowing of the subject area, calling it Internet of Display would require that you get content through the Internet, whereas many of these systems are closed loop. The military flight simulation people are certainly visually immersed in a much larger set of content pixels, but they certainly aren’t running that through the cloud for anyone to have a chance to hack into, even if they could ever get the data transfer rates high enough for their real-time simulations to work from an off-site server/cloud.

  • bigdisplay

    Interestingly, this article from yesterday addresses some of the same issues. I recommend reading it to gain an application perspective. http://www.forbes.com/sites/valleyvoices/2015/06/08/no-more-head-games-why-the-future-of-virtual-reality-is-not-the-oculus/

  • Chris Chinnock

    Here is another example of people talking about the need to “see” big data, but not describing how to do that.