TV resolution upgrades are on a seven‐year cycle, from standard definition (SD) to high definition (HD), full HD (FHD), and ultra‐high definition (UHD).
Fig. 1: TV resolution upgrade cycle. (Source: IHS Markit and Samsung Display)
Historically, higher‐resolution TVs have been the first part of the Field of Dreams scenario: if TV makers build them, then customers, content, distribution, and everything else in the ecosystem will come. The 8K transition will likely follow a similar path. In the 4K transition currently underway, the rapid expansion of 8.5‐generation (G) fabrication plants (fabs) optimized for 50‐inch (8‐up) and 55‐inch (6‐up) panels fueled price reductions through competition and added capacity. The same phenomenon is happening now with 10.5G fabs optimized for 65‐inch (8‐up) and 75‐inch (6‐up) panels.
According to the market research firm Display Supply Chain Consultants (DSCC), five panel suppliers (BOE, China Star, HKC, LG Display, and Sharp) plan to build seven 10.5G fabs over the next few years, largely driven by Chinese central and regional government subsidies. By the end of 2023, DSCC expects capacity to reach 1.737 million substrates per quarter. Most of this capacity (1.53 million substrates) will be for LCDs using an amorphous silicon (a‐Si) backplane, with the rest (207,000 substrates) going to oxide‐based OLED production.
To put this in perspective, fab capacity in 2022 could produce 154 million 55‐inch TVs and 37 million 65‐inch TVs, compared to the 2018 capacity of 149 million 55‐inch TVs and just 12 million 65‐inch TVs. Because of their size, TVs dominate the industry's area capacity. All of the area‐capacity growth is happening in 10.5G fabs, which favor the larger TV sizes in the sweet spot of 8K (Fig. 2).
Today’s 8K TVs are quite expensive, but all this new capacity will drive down prices. For example, 65‐inch TV panel prices have fallen 50 percent in the last 24 months, and DSCC expects them to fall at a 20‐percent‐per‐year rate as this new capacity comes online. As a result, 8K TVs will become more affordable every quarter.
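To see how quickly a 20‐percent annual decline compounds, here is a minimal sketch; the $1,000 starting price and the function name are hypothetical illustrations, not figures from DSCC:

```python
def projected_price(current_price, years, annual_decline=0.20):
    """Project a panel price assuming a constant annual percentage decline."""
    return current_price * (1 - annual_decline) ** years

# A hypothetical $1,000 panel falls to roughly $512 after three years
# of DSCC's projected 20-percent-per-year declines.
print(projected_price(1000, 3))
```

Note that a steady 20‐percent annual decline compounds to a 36‐percent drop over two years, which is consistent in spirit with, though smaller than, the 50‐percent fall seen for 65‐inch panels over the last 24 months.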
Is 8K Necessary?
Clearly, many wonder whether we really need 8K displays. That's a valid question, because the most common method for determining what resolution humans can perceive, simple acuity, tells only part of the story. Simple acuity is generally measured with a Snellen eye chart, which determines the ability to see distinct black‐on‐white horizontal and vertical high‐contrast elemental blocks. Those who claim you can't see the difference between a 4K and an 8K image at normal TV viewing distances often cite this method. However, human vision is far more complex than a simple acuity measurement, and 8K images engage other aspects of it.
Florian Friedrich, the CEO of FF Pictures, a German company that specializes in services for image technologies, provided a breakdown at the 8K Display Summit to show why 8K displays are necessary. His analysis first assumes that the ideal viewing distance is 2.5 picture heights (picture height H = 0.49 times the screen diagonal for a 16:9 display), which affords a 45‐degree field of view (FOV). We can create this 45‐degree FOV by viewing a 65‐inch TV at 6.6 feet, a 75‐inch TV at 7.7 feet, or an 85‐inch TV at 8.7 feet. The commonly cited limit of simple visual acuity is 1/60th of a degree, or 1 arc minute, based on black‐and‐white contrast measurements, which are already a limited measure of human vision. The following equation gives the required horizontal resolution under this measure of human vision and the 45‐degree FOV assumption; essentially, it asserts that you need only 2,700 horizontal pixels to reach the limit of human vision.
Required horizontal display resolution = FOV × 60 pixels per degree = 45 × 60 = 2,700 pixels
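In code, the simple‐acuity version of the calculation is a one‐liner; the function name is illustrative:

```python
def required_h_pixels(fov_deg, acuity_arcmin=1.0):
    """Horizontal pixels needed if each pixel should subtend one arc minute
    (the classic 20/20 simple-acuity limit) across the field of view."""
    return fov_deg * 60 / acuity_arcmin

print(required_h_pixels(45))  # 2700.0 for the 45-degree FOV assumption
```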
However, Friedrich says this calculation is wrong. For one thing, simply counting the necessary horizontal pixels misses an important aspect of image quality: The Nyquist sampling theorem (Fig. 3) states that a signal must be sampled at twice the highest frequency contained in the signal, which means you need twice as many pixels to accurately reproduce it. Second, we should consider visual acuity not only in the horizontal but also the diagonal direction, which adds at least a factor of the square root of two (1.41) for square‐pixel displays. The result is what Friedrich calls "Florian's Geeky Resolution Demand," or FGRD. It indicates a horizontal pixel resolution of 7,512 for a 44.4‐degree FOV, quite close to the 8K ultra‐high‐definition resolution of 7,680×4,320. If viewers have better‐than‐normal visual acuity, or if the chroma subsampling (4:2:0) of compressed video delivery is accounted for as well, the resolution demand only increases.
The FGRD formula can be written as:

Required horizontal resolution = (FOV × 60 × DiaF) / (VAc × Ni)

where:
FOV = field of view in degrees (35°…55°)
VAc = visual acuity in arc minutes (0.4…1)
Ni = Nyquist factor (0.5)
DiaF = diagonal factor (≥1.41)

So, for a normal TV viewing distance of 2.5H (a 44.4‐degree FOV) and normal vision (VAc = 1), this works out to (44.4 × 60 × 1.41) / (1 × 0.5) ≈ 7,512 horizontal pixels. An online calculator for individual resolution and viewing‐distance demands is available on the company's website (FF.de). Recall that 20/20 vision corresponds to the ability to see two separate lines within 1/60th of a degree. Another way to express this is the number of cycles per degree (CPD) we can resolve in a one‐degree FOV: 30 CPD corresponds to 20/20 vision.
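Friedrich's FGRD formula can be sketched in a few lines of Python; the function and parameter names are my own, chosen to mirror his variables:

```python
def fgrd(fov_deg, vac_arcmin=1.0, nyquist=0.5, diag_factor=1.41):
    """Florian's Geeky Resolution Demand: required horizontal pixels,
    applying the Nyquist factor and the diagonal (square-pixel) factor
    to the simple-acuity pixel count."""
    return fov_deg * 60 * diag_factor / (vac_arcmin * nyquist)

# A 44.4-degree FOV with normal (1 arc minute) vision demands about
# 7,512 horizontal pixels, close to 8K's 7,680.
print(round(fgrd(44.4)))
```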
The Japanese broadcaster NHK has conducted research to determine the CPD at which the viewer thinks a displayed image looks like the real object (Fig. 4). Their findings suggest that this occurs at around 150 CPD, a clear indication that there's more to vision than simple visual acuity. If simple visual acuity doesn't fully describe our ability to "see" resolution, then what other mechanisms are involved? There appear to be two other factors at play: Vernier acuity and the brain. Vernier acuity, or hyperacuity, refers to the ability to discern slight misalignments between lines, an ability that's impossible according to simple‐acuity descriptions of human vision. Hyperacuity means we can perceive fine details even at fairly long viewing distances. A classic way to prove this is to show two line pairs. The first pair has two perfectly parallel black lines on a light background. The second pair can be misaligned by just a single pixel, which many people spot, even from some distance away.
Because modern displays are pixelated in a square grid pattern, a line that isn't parallel to the grid will show stair‐stepping. While our normal visual acuity may not detect this, our Vernier acuity can. As a result, if 4K and 8K images are displayed on 4K and 8K displays of the same size and viewed at the same distance, all other factors being equal, the 8K image will look sharper and crisper, even at 8 to 10 feet. That's because the pixel spacing on the 8K TV is half that of the 4K TV, so there is less stair‐stepping; the 8K display renders a smoother line.
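A quick way to quantify this is to compare the angular size of one pixel on same‐size 4K and 8K panels. This sketch assumes a 65‐inch 16:9 screen viewed at about nine feet; the function name and the small‐angle approximation are mine, not from the article:

```python
import math

def pixel_pitch_arcmin(diagonal_in, h_pixels, distance_in):
    """Angular size of one pixel, in arc minutes, for a 16:9 display
    (small-angle approximation: angle ~= pitch / distance)."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)  # 16:9 screen width
    pitch_in = width_in / h_pixels
    return math.degrees(pitch_in / distance_in) * 60

# 65-inch panels viewed at 9 feet (108 inches)
p4k = pixel_pitch_arcmin(65, 3840, 108)  # about 0.47 arc minutes
p8k = pixel_pitch_arcmin(65, 7680, 108)  # half that, about 0.23
```

Both pitches sit below the 1‐arc‐minute simple‐acuity limit, yet Vernier acuity, which operates down to a few arc seconds, can still pick up the coarser 4K stair‐steps.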
At the 8K Display Summit, Phil Holland, a director and cinematographer, called the reduction of such artifacts "eliminating the digital footprint in the image," while Friedrich referred to 8K images as "quieter with less artifact flashing." Meanwhile, YungKyung Park, Ph.D., an associate professor at Ewha Womans University in Seoul, Korea, introduced the concept of hyperrealness, which she defined as drawing "a sense beyond 'real' by extramural expression that goes beyond the limits of vision."
Park then described an experiment her team performed with 120 test subjects in their 20s to 40s. Each participant sat in a dark room about nine feet from side‐by‐side 4K and 8K TVs that were nearly matched in measured visual performance, aside from resolution. Park's team presented a series of still and video images and asked the subjects to rate them from a perceptual point of view (brightness, contrast, vividness, and resolution) as well as a cognitive point of view (3D‐ness, heaviness, temperatureness, spaciousness, beautifulness, fatigue, and image quality). The results were quite interesting. For example, some 8K images showed around a 30 percent improvement in both perceptual and cognitive factors compared to their 4K versions; others showed improvement, but not as much. Park noted that images containing faces or objects placed in the frame's center scored better. In other words, the improvement is image‐dependent. Park also said that the improved perceptual factors of 8K led directly to the participants' improved cognitive response, which in turn led to the image's improved realness, that is, hyperrealness. She summarized by noting that three factors are at play: optical illusion, object interpretation, and different senses (Fig. 5).
Park’s work suggests that 8K images are now engaging other senses that can be difficult, if not impossible, to measure but are real nonetheless. Such results are now motivating researchers to undertake additional studies, which will be very helpful in convincing 8K skeptics to at least have an open mind on the benefits of higher resolution displays.
Hopefully the aforementioned analysis, comments, and experiments demonstrate that improved methods are crucial to determine the need for and impact of 8K images. As higher‐order vision‐brain factors come into play, they create an increased sense of realness for 8K images—but this increase can be image‐dependent. Still, there’s no reason to think that 8K is the limit of this trend. Ahem, 16K anyone?
Chris Chinnock is the president of Insight Media, a display‐focused consultancy. He is also the executive director of the 8K Association, an organization dedicated to educating the industry on 8K and developing certified 8K products. He can be reached at firstname.lastname@example.org.
This article first appeared in Information Display Magazine