
Look Out, HDMI – Here Comes Super MHL!

Yesterday, the MHL Consortium announced its newest flavor of display interface – Super MHL (or, more accurately, superMHL). MHL, which stands for Mobile High-Definition Link, was originally developed to enable display connections over Micro USB ports on phones and tablets. You’ll find it most often on mobile devices and televisions from Samsung and Sony. It also pops up on LG products, and there’s even an MHL port on the Pioneer AV receiver I bought a couple of months ago.

There have been some clever demos of MHL applications at past CES events. One was to build a “dumb” laptop (no CPU or video card) – just keyboard, touchpad, and display – and use MHL to dock a smartphone into it to make everything work. Another demo in the Silicon Image booth featured smartphones being used as video game controllers with the video playing back on the controller screen.

Yet another demo showed a Sony Xperia phone being used as a remote control with a Samsung TV to select inputs, play video, and launch Internet applications. It’s easy to do this stuff when you can multiplex video and serial data through the same connector, which in MHL version 3.0 can even carry Ultra HD video at 30 fps with 8-bit color.

Note the emphasis on the “mobile” part. In the world of transition-minimized differential signaling (TMDS), MHL is one of a few players, the others being HDMI (the dominant digital display interface), its predecessor DVI (still going strong although the standard isn’t updated anymore), and Micro and Mini HDMI, niche connectors on smartphones and cameras.

The advent of Ultra HD, 4K, and even higher display resolutions (like the “5K” format the new widescreen workstation monitors use) has created a problem: our display interfaces need to get faster. A LOT faster!

But HDMI 2.0, announced in September 2013, isn’t fast enough. I get into frequent debates with people about why it isn’t, so let me clarify my position: HDMI 2.0 has a maximum overall clock (data) rate of 18 gigabits per second (18 Gb/s). Only 80% of that can be used to carry display signals; the rest is overhead from the 8b/10b encoding.

So that limits HDMI 2.0 to supporting 3840×2160 pixels (4400×2250 pixels with blanking) in an RGB signal format @ 60 Hz refresh. That’s a hard and fast speed limit. For anyone using a computer workstation or media player with RGB output, it’s a serious obstacle: How will people who buy the new HP/Dell 27-inch workstation monitors connect them? Their working resolution is 5120×2880 pixels, and at 60 Hz, that’s just too fast for HDMI 2.0.
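To put numbers behind that limit, here’s a quick back-of-the-envelope check. This is my own arithmetic, using the blanking total quoted above for UHD and a rough estimate of my own for the 5K timing:

```python
# Rough link-budget check for HDMI 2.0 (my arithmetic; real CVT/CEA
# timings vary slightly, but the conclusion doesn't change).

def payload_gbps(h_total, v_total, refresh_hz, bits_per_pixel):
    """Bit rate needed to carry a video timing, blanking included."""
    return h_total * v_total * refresh_hz * bits_per_pixel / 1e9

usable = 18.0 * 0.8                           # 8b/10b leaves 14.4 Gb/s for video

uhd_60 = payload_gbps(4400, 2250, 60, 24)     # 3840x2160/60, 8-bit RGB
five_k = payload_gbps(5500, 2950, 60, 24)     # 5120x2880/60, blanking estimated

print(f"HDMI 2.0 usable: {usable:.1f} Gb/s")
print(f"UHD @ 60 Hz:     {uhd_60:.2f} Gb/s (just squeaks in)")
print(f"5K @ 60 Hz:      {five_k:.2f} Gb/s (not even close)")
```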

It looked like DisplayPort 1.2 would finally ascend to the top of the podium, since its existing speed of 21.6 Gb/s (17.28 Gb/s usable) was already faster than HDMI 2.0. And now, DisplayPort 1.3 has been announced, with a top speed of 32 Gb/s (about 26 Gb/s usable) and the adoption of Display Stream compression. Indeed, more computer manufacturers are providing DP connections on laptops: Lenovo seems to have moved completely to this format, and Apple has been supporting DP for some time now.

8K is here! (Okay, maybe that’s a few years away…)

With all of that in mind, I will admit I was completely blind-sided by superMHL at this year’s International CES. Instead of a 5-pin Micro USB connector, superMHL specifies a 32-pin, full-size connector that’s symmetrical, i.e., reversible (the next big thing in connectivity, a la USB Type-C). It supports Display Stream Compression, it’s compatible with USB Type-C (although not with all six lanes), and it has a maximum data rate of 36 Gb/s across six lanes of data. (According to the MHL Consortium, that’s fast enough to transport an 8K (7680×4320) image with 120 Hz refresh and 4:2:0 color.)
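Out of curiosity, I ran the same kind of arithmetic on that 8K claim. These are my numbers, not the consortium’s; I’m counting active pixels only and assuming 8-bit 4:2:0 at an average of 12 bits per pixel, which suggests Display Stream Compression is doing part of the heavy lifting:

```python
# Active pixels only, ignoring blanking (my estimate, not the MHL Consortium's math).
raw = 7680 * 4320 * 120 * 12 / 1e9     # 8-bit 4:2:0 averages 12 bits per pixel
print(f"8K/120 in 4:2:0: ~{raw:.0f} Gb/s uncompressed")   # ~48 Gb/s
# That exceeds the 36 Gb/s link, so even a modest Display Stream
# Compression ratio (roughly 1.5:1) is needed to make the claim work.
```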

The MHL Consortium’s announcement yesterday featured Silicon Image’s new Sil9779 port processor, which can also handle HDMI 2.0 signals. Here are the key specs from the superMHL press release:

  • 8K 60fps video resolution, as outlined in the superMHL specification
  • New, reversible 32-pin superMHL connector
  • USB Type-C with MHL Alt Mode
  • High Dynamic Range (HDR), Deep Color, BT.2020
  • Object audio – Dolby Atmos®, DTS:X, 3D audio, audio-only mode
  • High bit-rate audio extraction
  • HDCP 2.2 premium content protection

 

Here’s the 32-pin superMHL reversible connector.

Whew! That’s quite a jump up from MHL. Some might say that superMHL is simply MHL on steroids, but no matter how you look at it, superMHL is now a serious contender for the next generation of display connectivity. In the press briefing, a representative of the MHL Consortium waxed enthusiastic about the approach of 8K broadcasting (it’s already been operating for two years in Japan) and how we would see a migration to 8K TVs and displays in the near future.

As Larry David says, “Curb your enthusiasm!” Supporting 8K would be nice, but we’ve barely started the transition to UHDTV. And right now, selling 8K TV to the average consumer is like trying to peddle a Ferrari to someone who lives on a dirt road.

Where superMHL will find its niche is in supporting the higher bit rates that high dynamic range (HDR), wide color gamuts (BT.2020), and higher frame rates (60/96/100/120 Hz) require. All will shortly become important parts of the next-generation (UHD) television system. DisplayPort is already there with version 1.3, and you’ll even find DP 1.2 connections on selected models of Ultra HDTVs so that gamers can connect laptops and desktops at Ultra HD resolutions with 60 Hz refresh.

Now, the elephant in the room: How does the emergence of superMHL affect HDMI? Even though version 2.0 is over a year and a half old, you don’t see many HDMI 2.0 jacks on Ultra HDTVs. Casual inspections at Best Buy, HH Gregg, and other outlets show that the typical HDMI 2.0 port count is just one (1), even as we approach April of 2015.

In the superMHL presentation, the concept of a TV with multiple HDMI 2.0 inputs and one superMHL input was outlined. This would, in effect, be the next step up from where we are now, with the typical Ultra HDTV having one HDMI 2.0 input and three HDMI 1.4 inputs.

But if Silicon Image’s new Sil9779 port processor can handle both formats, why bother with HDMI 2.0 in the first place, especially given its speed limitations? Wouldn’t it make more sense to future-proof all inputs and go with superMHL across the board? (Of course, the cost of adopting superMHL could weigh heavily on that decision.)

In the commercial AV and broadcast worlds, it would definitely make sense to jump to superMHL in the interests of future-proofing installations. Given the limited rollout of HDMI 2.0 to date, maybe supporting both HDMI 1.4 for legacy devices and superMHL is the smarter approach. (Note that superMHL and HDMI 2.0 both support HDCP 2.2, which is the next level in encryption and is NOT compatible with older versions of HDCP.)

Summing up: the race for faster interface speeds just got a lot more interesting with the addition of superMHL to the lineup. I can imagine that manufacturers of AV matrix switchers and distribution amplifiers are feeling another migraine headache coming on…

EDITOR’S NOTE: Last week, it was announced that Silicon Image has been acquired by Lattice Semiconductor of Hillsboro, Oregon, “a leading provider of programmable connectivity solutions,” according to the press release. The acquisition price was about $600M, and the deal leaves Lattice in control of HDMI, MHL, superMHL, and SiBEAM (WiHD) patents and IP. More information can be found on the Lattice Web site at http://www.latticesemi.com/.

EE LLC: A Million PCs and Nobody Knows Its Name

There is a company you’ve probably never heard of whose display technology sits inside a million notebook PCs. Entertainment Experience LLC has developed a total color management system whose mathematical and vision models are embedded in multidimensional look-up tables (LUTs). Current customers include Dell and Quanta, the world’s largest notebook PC ODM.

This week, I spoke separately with CEO John Parkinson and the technology’s inventor and developer Jim Sullivan.

When the image captured by a typical professional video camera is re-mapped to Rec.709, at least 70% of the color information is thrown away, Sullivan said. One example: grass green and laser green map to the same RGB values. All the energy that goes into maintaining color fidelity is applied only to that reduced gamut and does not address the loss of perceived fidelity that was inflicted early in the process. Part of what EE LLC’s software product, eeColor, does is compensate for that loss in perceived fidelity by intentionally breaking away from “hardware fidelity.” Parkinson noted that the software recomputes the values for each pixel in the frame individually, in real time.

Most natural colors occupy only the central portion of the display’s color space. eeColor expands this portion to more completely fill the color space of the display. Done blindly, this would distort “memory colors,” such as skin and sky. Instead, eeColor identifies the ranges of color coordinates that contain these memory colors and preserves the color values of those pixels as the overall gamut is expanded. EE LLC calls these patches of preserved color coordinates “filters.”
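To make the idea a bit more concrete, here’s a toy sketch of that kind of mapping. To be clear, this is not EE LLC’s code or algorithm, just my own illustration of how a per-pixel transform can expand chroma while leaving a protected “memory color” band alone. The skin-tone hue range and boost factor are invented for the example, and a real product would bake the function into a 3D LUT rather than evaluate it per pixel.

```python
import colorsys

# Toy illustration only (not EE LLC's code): expand chroma for most colors
# while a protected "memory color" band (skin tones here) passes through
# untouched. The hue band and boost factor are invented for the example.

SKIN_HUE_BAND = (0.02, 0.11)    # hypothetical hue range for skin (0..1 scale)
CHROMA_BOOST = 1.25             # hypothetical gamut-expansion factor

def ee_style_map(r, g, b):
    """Map one pixel (0..1 floats); a product would bake this into a 3D LUT."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if SKIN_HUE_BAND[0] <= h <= SKIN_HUE_BAND[1]:
        return (r, g, b)                    # "filter": memory color preserved
    s = min(1.0, s * CHROMA_BOOST)          # everything else pushed outward
    return colorsys.hsv_to_rgb(h, s, v)

# A real implementation evaluates the mapping once per LUT node
# (e.g. 17x17x17) and interpolates between nodes for every pixel.
print(ee_style_map(0.80, 0.60, 0.50))   # skin-ish tone: returned unchanged
print(ee_style_map(0.20, 0.70, 0.30))   # grass green: chroma boosted
```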

eeColor incorporates models of skin tone developed at the Rochester Institute of Technology and uses them to maintain skin tone under brightness changes and color remapping. Different skin tones, from African to Scandinavian, are mostly a matter of brightness rather than color shift. But there are cultural preferences: Asians seem to like displays to show skin tones a bit more blue than do Americans and Europeans. This is addressed with a slide bar in the UI.

Image after eeColor processing.

Venice. The original image.

Sullivan said that engineering the filters was the toughest part of the job. There are times, when the color space is remapped, that the skin-tone vector (for example) must move in the opposite direction from the vector for the overall scene. Strange things can happen at the boundaries between these filters and the overall display color space.

At EE LLC, they use “colorfulness” to describe the color content of a frame: the volume occupied by the frame’s colors within the 3D IPT color space. Percent of NTSC, which the hardware people have been trained to use, “is useless.”

They use the IPT color space because it preserves hue and brightness under gamut mapping. Moving the color portion (PT) of the vector does not change the perceived brightness (I). CIELAB doesn’t do this, nor does RGB, Sullivan said.
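A minimal illustration of why that property matters (the RGB-to-IPT conversion itself is omitted here, and the 1.2 boost factor is just an example value of my own):

```python
import math

# Scaling the chroma plane (P, T) of an IPT color leaves the lightness
# channel I untouched by construction. (Conversion into IPT is omitted;
# the boost factor is just an example value.)

def boost_chroma(i, p, t, factor=1.2):
    return i, p * factor, t * factor

i, p, t = 0.62, 0.10, -0.05            # a color already expressed in IPT
i2, p2, t2 = boost_chroma(i, p, t)
print(i2 == i)                                  # True: lightness preserved
print(math.hypot(p2, t2) / math.hypot(p, t))    # chroma scaled by exactly 1.2
```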

The vision model used by eeColor allows the viewer to observe Rec.709 imagery in a relatively bright home or office environment and see the colors as they would appear in a darkened cinema. The adaptive color boost works with the behavior of the human visual system.

eeColor’s “management produces greater brightness with the same power, while keeping color quality,” Sullivan said. It is therefore very attractive for battery-operated devices.

The transformations performed by eeColor affect color only. For a complete solution, eeColor needed to handle sharpness and contrast as well. For this, they teamed up with Razzor Technologies, an RIT spin-off. Razzor’s approach also uses LUTs, so it was possible to combine the two sets of technologies into a single software product. Razzor’s sharpening technology avoids the white fringe that convolution filters impose on sharpened edges.

The company’s approach allows every input color to be mapped to an output color based on sound visual models, which has interesting applications. Among these is the ability to adjust for unit-to-unit variations in color rendition, and also to allow product manufacturers to compensate for the color differences between panels from different manufacturers.

The company has also worked with LG Display to minimize use of the blue phosphor and thus retard blue-phosphor aging.

These functions can be implemented in hardware, but Sullivan says EE LLC’s licensing fee per unit is less than the $3.00 unit cost (in volume) of a popular graphics co-processor. And with current personal devices having processing power to spare, EE LLC believes that software is the way to go. (The folks at Pixelworks may disagree.)

eeColor is currently available for the Microsoft and Android OS’s running on popular chips.

Has Sony Finally Seen The Light?

Sony’s ongoing financial woes have been well-documented by this writer over the past few years. Gone are the days when the Tokyo-based electronics giant could invent and own all parts of a media format, like the Walkman and Betacam.

It’s exceedingly difficult to make any money selling hardware to consumers these days, as fellow CE giants Panasonic, Toshiba, and Hitachi have all found out. And one of the biggest money losers is the Bravia television business, thanks to cutthroat competition from Samsung and LG, and now from Chinese brands like Hisense and TCL.

Sony’s late entry into the LCD television marketplace a little over 10 years ago didn’t help. Back then, the company had OEM deals for LCD and plasma TVs with Pioneer and the aforementioned LG, along with a joint venture with Samsung (S-LCD) to manufacture LCD panels. But even with the Sony brand and decent market share, profits were nowhere to be found.

As losses piled up in the television unit, more red ink started flowing from Sony’s VAIO computer operations (since sold off to Lenovo). And in a real head-scratcher, Sony bought out its share of a mobile phone joint venture from Ericsson, only to see that miscalculation produce even more financial misery than the TV group ever did.

Now, CEO Kazuo Hirai has made it official: Sony will no longer chase higher sales in smartphones, where its Xperia models just can’t compete with Samsung and Apple. And Sony won’t get any traction in the world’s largest mobile phone market, China, where home-grown brands like Huawei play a dominant role.

Are the days of “Make Believe” over at Sony?

Significantly, Hirai also said that he would not rule out an exit strategy for both smartphones and televisions. (Sony’s TV operations were recently spun off as a separate operating unit so their losses can be clearly separated from the rest of the company.) Sony is on track to post a $1.5B loss for the current fiscal year, which ends March 31, continuing a string of down years. Layoffs have continued company-wide, and about 1,200 more employees will be let go from the mobile division this year.

Despite the gloomy news, Sony’s ace in the hole is a burgeoning entertainment division. Sony Pictures, Sony Pictures Television, Sony Music, and PlayStation – taken together – are profitable operations. More than one institutional investor has called for Sony to exit the hardware business altogether and concentrate on content and software, which is where the money is nowadays.

But Sony has such a strong and rich legacy in consumer electronics that they can’t bring themselves to let go of the past, even after posting year upon year of record losses attributable to that same CE hardware. It’s gotten so bad that the company even announced last year that they would not pay a stock dividend for the first time in 50+ years. (Boy, did THAT news wake everyone up!)

In a recent Reuters story, Hirai stated that Sony would target a return on equity of more than 10% by 2018, aiming for an operating profit of $4.2 billion for fiscal 2017. That would be quite a turnaround, given Sony’s performance over the past five years. And it won’t be possible unless the company kisses the TV and phone businesses goodbye, once and for all.

Did Sony learn the lesson of Panasonic, who bit the bullet and shut down their plasma TV manufacturing business cold turkey in 2013, returning to profitability last year? (Panasonic is on track to make about a $2 billion profit for FY 2014.)

Panasonic also shut down other underperforming business units and shifted its focus to commercial products, and it would not surprise me to see the company walk away from consumer TVs altogether in the next year or so, given how small its market share has become.

What about Sony’s Japanese competitors? Hitachi read the tea leaves several years ago and gave up on TVs altogether, while Toshiba is retrenching to the Japanese market. Sharp continues to struggle in the television business as its once-dominant 21% worldwide market share in TV shipments (2006) has dwindled to about 3% and a $250 million loss is staring them in the face for FY 2014.

It seems like everyone but Sony figured out the way back to profitability several years ago. Now, has Sony wised up? Have they finally seen the light?

Time will tell…

Pixelworks Improves Mobile Display Quality While Reducing System Cost

We have just succeeded in wrapping our heads around the fact that quantum-dot enhancement can significantly increase the color gamut and color saturation of LCDs with little — and eventually no — increase in system cost.

Now, with its “Iris” mobile display co-processor, Pixelworks is giving us another example of improved display performance with, in this case, reduced system cost.

In the Pixelworks suite at CES, Graham Loveridge, Senior Vice President of Strategic Marketing and Business Development, said Iris is the world’s first mobile display co-processor. Many of Iris’s functions have been performed by television video processing chips and cores for years. But incorporating those functions and others in a chip that takes up sufficiently little space and consumes sufficiently little power for a mobile device is new.

Pixelworks calls the display performance that results from Iris processing “True Clarity.”

One of the more obvious things Iris does is up-convert mobile-display video from 15 or 30 frames per second (fps) to 60 fps. In side-by-side demonstrations in the suite, this produced motion images with far less judder, much smoother scrolling, and much less motion blur. This shouldn’t be a surprise, since we’ve seen the same evolution in large-screen television: Iris uses the same motion estimation and motion compensation (MEMC) approach that TV video processors have relied on for years. Loveridge said that Iris is unique in that it does MEMC without producing a halo around moving objects.
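For readers who haven’t run into MEMC before, here’s a deliberately simplified sketch of the idea: block-matching motion estimation between two frames, followed by synthesis of a half-way frame (turning 30 fps into 60 fps). This is a textbook-style illustration of my own, not Pixelworks’ algorithm; real silicon adds sub-pixel vectors, occlusion handling, and the halo suppression Loveridge described.

```python
import numpy as np

# Simplified MEMC sketch (not Pixelworks' algorithm): find a motion vector
# for each block between two frames, then build an in-between frame by
# fetching each block from half-way along its motion trajectory.

BLOCK, SEARCH = 16, 8    # block size and search radius, in pixels

def motion_vector(prev, curr, by, bx):
    """Best (dy, dx) match in prev for the block of curr at (by, bx)."""
    block = curr[by:by+BLOCK, bx:bx+BLOCK].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= prev.shape[0] - BLOCK and 0 <= x <= prev.shape[1] - BLOCK:
                sad = np.abs(block - prev[y:y+BLOCK, x:x+BLOCK].astype(np.int32)).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

def interpolate(prev, curr):
    """Synthesize the frame half-way in time between prev and curr."""
    mid = np.zeros_like(curr)
    for by in range(0, curr.shape[0] - BLOCK + 1, BLOCK):
        for bx in range(0, curr.shape[1] - BLOCK + 1, BLOCK):
            dy, dx = motion_vector(prev, curr, by, bx)
            # fetch the block from half-way along its motion path
            mid[by:by+BLOCK, bx:bx+BLOCK] = prev[by + dy // 2 : by + dy // 2 + BLOCK,
                                                 bx + dx // 2 : bx + dx // 2 + BLOCK]
    return mid

# Usage: for consecutive grayscale frames f0, f1 (2-D numpy arrays whose
# dimensions are multiples of BLOCK), the doubled sequence becomes
# f0, interpolate(f0, f1), f1, interpolate(f1, f2), f2, ...
```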

Pixelworks also claims enhanced colors and wider gamut through the use of a 3D look-up table, better contrast, better high-ambient visibility, and custom color tuning. The color tuning, Loveridge said, can be used to make sure that all displays in a production run look the same. But more than that, the OEM can buy displays from different manufacturers and tune them so they all look the same.

What is surprising is that all of this can be done with a power reduction of roughly 25%. Some of the savings come from the Iris chip off-loading functions from the GPU and CPU and performing them more efficiently.

Because the Iris chip permits savings elsewhere in the display electronics, it can save $6 on panel cost, said Loveridge. The power savings permit a smaller battery, which can save another $2. Depending on order size, the Iris chip can be had for less than $6. So, said Loveridge, “if a manufacturer is savvy he can improve system performance and simultaneously lower cost.”

TV-quality LCD cells are increasingly common in mobile devices. With the addition of TV-quality video processing, it will be even more appealing for viewers to do more of their entertainment viewing on mobile devices.

Ultra HD: Live From the 2015 HPA Tech Retreat


As I write this, it is the morning of Day 2 at the annual Hollywood Post Alliance Tech Retreat. This annual conference brings together the top minds across a wide range of disciplines in the media production business. Cameras, lenses, codecs, displays, file formats and exchanges, content protection, archiving – they’re all here, as are representatives from the major studios, TV networks, software companies, colleges, universities, government agencies, and standards organizations.

Many of the sessions over the past few days have focused on next-generation television – specifically, capturing, editing, and finishing 4K images. Hand-in-hand with these additional pixels comes high dynamic range (HDR), which was prominently featured at CES in January. There’s also a new, wider color space (ITU BT.2020) to deal with, along with higher frame rates (how does 120 Hz grab you?).

I present a review of the Consumer Electronics Show every year at HPA (which now stands for the Hollywood Professional Alliance), and try my best to cram as much as I can in half an hour. Obviously, HDR was a big part of my presentation. And the overemphasis on HDR at CES provided a nice contrast to the presentations at HPA – at CES, it’s all about marketing hype, while at HPA, it’s all about engineering and making things work.

It was a full house for this year’s Tech Retreat – as usual!

The average Joe may not understand much about “4K TV” or Ultra HD, but there is definitely more than meets the eye. At CES, an announcement was made about the new UHD Alliance, a partnership of TV manufacturers (Samsung, Sony, Panasonic), Hollywood studios (Disney, Warner Brothers, Fox), and other interested parties that include Netflix, Dolby, DirecTV, and Technicolor.

All well and good, but you need to understand that the primary function of this Alliance is to promote the sale of Ultra HD televisions. And right now, television sales haven’t been as strong as they were five years ago. (The introduction of Ultra HD did boost sales a bit in 2014, which may have provided the impetus for the UHD Alliance.)

So here are a few of the problems with transitioning to Ultra HD. First off, not all of the pieces are in place for implementing add-ons like HDR, wider color gamuts, deeper color, and higher frame rates. It’s nice to talk about these features in conjunction with Ultra HD, but the mastering and delivery standards for HDR 4K movies and TV programs haven’t even been finalized yet.

Color is a particularly tricky issue, as LCD TVs with LED backlights render colors differently than LCD TVs equipped with quantum dot backlights. And OLED TVs require their own look-up tables as they are emissive displays, not transmissive. As far as frame rates go, consumer TVs generally can’t handle anything faster than 60 Hz and in fact prefer incoming signals to match up to one of four harmonically-related clock rates.

Next, there is a new version of copy protection coming to your television in the near future. It’s known as HDCP 2.2, and it will ride along on an HDMI 2.0 connector. It is not backward-compatible with earlier versions of HDCP, and you may be surprised to learn that early models of 4K TVs don’t support HDCP 2.2 yet. So there is a real compatibility problem lurking in the shadows if you are an early adopter.

You may be wondering where 4K Blu-ray content will come from. The first Ultra HD BD player was shown at CES, and you can expect those to show up late in the 4th quarter of this year. Suffice it to say that they will be running HDCP 2.2 on their HDMI outputs! Media players will also have to adopt version 2.2 if they are to access movies and other protected content.

Getting back to HDMI: Although version 2.0 was announced in September 2013, it’s still pretty scarce on Ultra HDTVs. Most current-model sets I’ve seen have one or two HDMI 2.0 inputs, and as I just mentioned, many of those don’t support HDCP 2.2 yet. HDMI 2.0 is also speed-challenged: with a maximum data rate of 18 Gb/s, it can’t support signals beyond 3840×2160p/60 with 8-bit RGB color.

Because of that, some UHDTV manufacturers are quietly adding DisplayPort 1.2 inputs to their products. Some of these interfaces are intended for connections to proprietary media players, but others are available for connections to set-top boxes, computers, and laptops. DP 1.2 can support 3840×2160p/60 with 10-bit RGB color, since it has a higher usable data rate.
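Again, a rough calculation (mine, assuming a reduced-blanking timing of roughly 4000×2222 total pixels) shows why 10-bit color tips the balance toward DisplayPort:

```python
# 3840x2160 @ 60 Hz with 10-bit RGB (30 bits/pixel), reduced blanking assumed
needed = 4000 * 2222 * 60 * 30 / 1e9
print(f"4K/60, 10-bit RGB: ~{needed:.1f} Gb/s")          # ~16 Gb/s
print("HDMI 2.0 usable: 14.4 Gb/s (no)   DP 1.2 usable: 17.28 Gb/s (yes)")
```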

In summary, it’s all well and good that UHDTV is here, and initial sales are encouraging. But the plane isn’t finished yet, even though some of us want to fly it. The HPA presentations I’ve heard and seen the past two days clearly point out all of the back room details that have to be addressed before the media production, editing, mastering, and delivery ecosystem for UHDTV is ready to roll…