And sure, it's in Google's self-interest: they get to bring these technologies to YouTube without paying anyone else. But it benefits everybody, so this is really fantastic news if it's something that takes off.
I’ll welcome anyone that wants to enter the space that thinks they can do better, but Dolby is good at what they do, and Google often has massive commitment issues for new projects (although notably, not usually when it comes to codecs). I suspect what will happen is YouTube will develop their own HDR and audio codecs, and it’ll just be used on YouTube and almost nowhere else. That’ll be enough to drive client support, but it’ll be one more HDR format in addition to HDR10+ and Dolby Vision, and it’ll be one more set of audio codecs in addition to like the half dozen to a dozen they already decode, and ultimately this will be to increase the quality of YouTube while minimizing their licensing costs. That’s fine.
Like, if I stream Netflix to a TV supporting Dolby Vision, what format is the video being streamed in, and is the TV manufacturer just paying Dolby for the right to correctly decode this HDR info?
Since the format of this metadata is proprietary, the demuxers, decoders and displays need to understand it and properly apply it when rendering. That's the part that needs to be implemented and paid for.
But that's really not all of the technology - Dolby Vision isn't just the metadata format, it's also a definition of how the videos are mastered and under which limitations (e.g. DV allows video to be stored in a 12-bit per pixel format, allows mastering with up to 10,000 nits of brightness for white pixels, and defines a wider color range so better, brighter colors can be displayed by a panel capable of doing that).
https://www.elecard.com/page/article_hdr is actually a pretty good article that overviews this topic (although you do need a basic understanding of how digital video encoding works).
To see HDR content at its full dynamic range, you'll need an HDR-capable device or display. Viewers watching on non-HDR devices/displays will see an SDR¹ video derived from the HDR source.
¹ "Standard dynamic range" or "smooshed dynamic range"
Not exactly - you need an HDR mastering display to see HDR content at full dynamic range. There are essentially no high-volume consumer-level devices capable of displaying non-windowed HDR content at full brightness, with the possible exception of Apple's XDR lineup (MBP, iPad, iPhone, Pro Display).
Everything else relies on tone mapping, even the latest 2022 OLED & QDOLED TVs.
The entire point of Dolby Vision is that it should give you the same image no matter what display you show it on, if the device is capable of displaying it. It’s an absolute format of light and color values, rather than a format based on RGB values.
"Do better" is tricky to define. By some metrics, Ambisonics, a decades-old, license free technology, "does better" than Atmos does. But by others, it does worse. Which metrics are important?
Google has both with YouTube, Chromecasts and Android phones and TVs. They’re one of the few power players that can unilaterally change up their codec and metadata suite, but only as far as YouTube goes.
So “do better” means getting enough content behind a tech stack and still delivering a satisfying experience to the customer. If they can meet or exceed what Dolby delivers, I think that would be great! Even if they only match Dolby, that’s still pretty good.
Libre and gratis ambisonics encoding and decoding software has existed for more than a decade. It is the domain of enthusiasts rather than regular consumers.
If support for playing Ambisonics was added to (say) vlc, and easy-to-use conversion tools from Atmos to Ambisonics also existed, that'd probably go a long way to increasing its adoption.
It is stated in the article they're backing HDR10+.
Not sure about the audio.
Is the older standard Dolby Digital (and Dolby Digital Plus) 5.1 surround sound still pretty damn good? Yep - and it should be free. They have 20 years of newer, superior stuff to make money from!
Even a ton of modern motherboards have TOSLINK, and lots of new equipment as well, so it's a worthwhile way to get baseline-quality 5.1 audio that still holds up very, very well for home systems today. DD+ with a good receiver and large speakers will blow away most of the cheaper Atmos systems.
You could also theoretically do this with most computers, or a laptop with the right hardware and a little software digital encoding, but the issue is that the way DRM has been implemented usually means the browser isn't going to even have access to 5.1 in the first place, or your device (like any Nvidia graphics card, for example, even if the HDMI output can carry high-quality audio) is going to just get 2.1 audio.
Dolby is more than a standard for spitting six channels of raw audio out of speakers. That's just the end product; it's not just the standard itself.
It is, for example, the hardware that is in cinemas and home theaters (encoders, decoders, Dolby RMU...), the certification processes that Dolby runs in cinemas and recording and mixing studios, the mixing technicians who work with all that and deliver the final mixes to the Netflix/HBO/whatever specifications, vendors, integration partners, speaker manufacturers...
There are also plugins that work in DAWs like Pro Tools, the ecosystem (Dolby Atmos Renderer, Dolby Atmos Production Suite), just to scratch the surface.
Publishing a standard is one thing; the ecosystem around that standard is another. It is interesting that there are new standards, but given Google's history of a short attention span, I have my doubts that this will materialize into anything more than an internal asset for Google.
Yes, I can also imagine they have specific requirements on the file format like quickly skipping to timestamps, highly variable bitrate, handling text and graphics well etc. I imagine their requirements to be so general that it'll benefit anyone, especially those that do streaming.
In either case "just one more standard" (or relevant xkcd) is an unavoidable obstacle for every new standard, and does not mean the project will fail. I have lots of critique against Google but this is one thing they are positioned to do well, and have a decent track record. And given how the competition operates, is frankly refreshing.
Though I'm quite disappointed with ATSC 3.0 which appears to have given in to them and used their proprietary audio codec which no one supports yet. I'm extremely skeptical that it provides a tangible benefit over more widely supported formats. Yay, regulatory capture.
Standards bodies are made up of their member companies, who negotiate (oftentimes quite hard!) with one another over what IP ends up in the standards. They're non-governmental trade organizations (though I suppose with ISO it gets semi-governmental).
Disclosure: I work at Dolby but do not have anything to do with standards. I dabbled in standards bodies at a previous startup but am really not qualified to weigh in on any of this, besides the stray pedantry around language.
Fwiw, Dolby does bring something compared to PCM: metadata to dynamically adjust the dynamic range on the final device, allowing a wider range on a proper home cinema setup and a smaller range in a noisy environment.
Interestingly enough, stereo was under patent in the 1930s -- so you did need a license then.
The best results come from ripping voiceovers out of the center channel... but busting through encryption and figuring out the right proprietary codec to open the audio is a pain.
But things like 5.1 audio or HDR or spatial audio aren't that much more than adding a bunch of extra channels/bits to a stream, defining relative power levels and the curve the signal strength follows, and oh, there's some positional metadata.
The heavy lifting is done by compression algorithms which deserve to be patented because they do genuinely non-obvious stuff. Just like the way Dolby got digital audio onto a filmstrip was similarly clever.
But stuff like 5.1 surround sound... it's just channels, man. In the digital world, it seems like it should be awfully easy to design an open standard.
Or... we could have governments begin funding universities like they did in the past, and the research would be available for all?
Seriously, we have to re-think patents. The amount of money all that rent-seeking crap is costing societies each year is absurd, and not just in payments to trolls, but also stifled progress - think of stuff like e-Ink that's barely affordable.
You think patents are why e-Ink is 'barely affordable'? Could you elaborate on what data you used to form that conclusion? A simple question: let's say the price of a panel versus the cost of the raw materials to make it? What margin do you think they're making? Do you know?
> And E Ink, the company, has such a patent moat that it has acted as a monopoly, which Behzadi says has kept prices perhaps too high. But E Ink lost a big patent fight in 2015, and the market could expand soon.
The fact that almost everyone wants e-Ink technology, but e-Ink prices are still really high, leads me to believe that either there are still patents at play that prevent competition from rising up, or that the competition hasn't managed to catch up for some other reason. It might also be the case that all of this is simply due to the aftershock of COVID supply chain disruptions; in any case, I haven't found a better explanation yet.
I disagree. It leads me to believe the underlying technology, electrophoretics, isn't capable of achieving the volume scaling and update speeds that would make it hit mass-market pricing. Also, the links you provided do not substantiate the main thesis being asserted, i.e. "there are still patents at play that prevent competition from rising up". Further, the quotation you provided was from Behzadi, a Kickstarter guy who failed to deliver the product he promised and then proceeded to blame everyone else except himself.
There's a simple question you can ask to prove this to yourself. Ask everyone who is making this claim which specific patent is blocking them and how exactly it blocks their idea. You'll instantly realize the people making these claims are not actual display engineers with knowledge of electrophoretics. Typically, at best, they're bullshitters trying to sound clever or, at worst, like Behzadi, scammers who are trying to hide having overpromised and then misspent other people's money.
Sure, because that adds a ton of new complexity!
Going from 2 to 3+ in a digital format does not add complexity.
But the baseline of "okay, compressed audio isn't very demanding, throw 3x as much bandwidth and processing at it" does not add meaningful complexity.
The overall point remains: multichannel open container formats exist, and open audio codecs exist. An open standard for 5.1 surround sound, for example, seems like a relatively straightforward combination of the two. I'm not saying you can do it overnight, but compared to other open-source efforts, it's tiny.
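As a rough sketch of that "combination of the two" (assuming ffmpeg built with libopus is on your PATH; the file names are hypothetical), the open pieces already compose today:

```python
# Minimal sketch: put a 6-channel (5.1) PCM mix into an open codec and an open container.
# Assumes ffmpeg with libopus is installed; "master_5_1.wav" is a made-up input file.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "master_5_1.wav",   # 6-channel WAV (L, R, C, LFE, Ls, Rs)
        "-c:a", "libopus",        # royalty-free codec; handles multichannel layouts
        "-b:a", "448k",           # total bitrate shared across the six channels
        "surround.mka",           # Matroska audio: an open multichannel container
    ],
    check=True,
)
```

What's missing isn't the plumbing, it's the mastering, metadata, and certification story that other comments in this thread describe.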
Hmm, it's not:
In terms of petroleum infrastructure, it's like slightly adjusting the ratios at the refinery.
What adds complexity is determining how many channels to use and what to put through them.
That is the important part now.
Na. Just kidding I'm sure deltarholamda is fucking smart!
We don't see a lot of real innovation from small companies because of proprietary audio and video formats.
In Schiit's case they do most things via RCA audio or USB/PCM + a DAC.
In other cases, there are lots of wired headphones/IEMs via a 1/4" or 3.5mm audio jack, but basically none via Lightning.
But no one is using it.
It does provide significant value worthy of patent protection.
The main thing they did was develop a nonlinear color space designed so that each “bit” of information provides equal value. This way no bits are wasted, making compression more efficient and have fewer artefacts.
The color space is also set up so that the “lightness” channel is accurate and brightness can be rescaled without introducing color shifts.
They also came up with a way of encoding the HDR brightness range efficiently so that about 12 bits worth of data fits into 10 bits.
The format also allows a mode where there is an 8-bit SDR base stream with a 2-bit HDR extension stream. This allows the same file to be decoded as either HDR or SDR by devices with minimal overhead.
Last but not least they work with device manufacturers to make optimal mapping tables that squeeze the enormous HDR range into whatever the device can physically show. This is hard because it has to be done in real time to compensate for maximum brightness limits for different sized patches and to compensate for brightness falloff due to overheating. Early model HDR TVs had to have FPGAs in them to do this fast enough!
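For the curious, the nonlinear curve described above is the PQ transfer function, which is published (SMPTE ST 2084 / BT.2100). A rough Python transcription of the public formula, not Dolby's implementation:

```python
# Rough transcription of the PQ (SMPTE ST 2084 / BT.2100) transfer function,
# the nonlinear curve described above. Published math only, not Dolby's code.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map absolute luminance (0..10,000 nits) to a PQ signal value in 0..1."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_decode(signal: float) -> float:
    """Map a PQ signal value in 0..1 back to absolute luminance in nits."""
    p = signal ** (1 / M2)
    y = (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
    return 10000.0 * y

# Perceptually even spacing: each 10-bit code step covers a similar *visible*
# brightness difference, so code values aren't wasted the way a linear curve wastes them.
for nits in (0.1, 1, 100, 1000, 10000):
    print(nits, round(pq_encode(nits), 4))
```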
In practice it's an entire ecosystem of certifications, professional calibration of panels, efficient encoding formats, etc...
To reproduce the end effect of Dolby Vision you'd have to have a team of people liaising with television manufacturers, and makers of production software like DaVinci Resolve.
It's not a trivial task that can be done through open source. It's real work that costs money.
We can all hope and wish for open standards, but it's a bit like trying to come up with an open architecture for a railway bridge. It'll still take real work to customise it to any particular valley, the local geography, and the specific requirements. Dolby Vision is similar. The mapping from the full signal range to each specific panel is a complicated thing that requires quite a bit of work to determine.
Dolby Vision is effectively 12-bit while using only 10 bits for encoding the actual signal. HDR10 is effectively... 10-bit. To achieve the same 12 bits of dynamic range they'd have to come up with an HDR12 format or something.
You can think of 8-bit SDR brightness as something like 0 nits to 255 nits of brightness. This is technically wrong because it's a nonlinear curve, but ignore that for a minute. Increasing this to 10 bits like with HDR10 gives you 0..1,023 nits with the same "steps". Going further to 12 bits lets you take this to 4,095 nits while continuing to preserve the same level of gradation.
The hiccup with this is that some displays have 600 nits of peak brightness and some have 2,500 nits. There are rare displays that can go to 4,000, and prototype displays that go to 10K nits.
HDR10 only goes to 1,000 nits.
Dolby Vision goes to 4,000 nits.
Does this matter now on some cheap LCD TV that only goes to 600 nits? Probably not.
Does it matter on an OLED panel that only goes to 1,000 nits? Maybe, because Dolby Vision has more true-to-life mapping from the maximum range to the display panel range.
Will it matter more with next-generation panels capable of 3,000+ nits? Almost certainly.
Then again, HDR10+ has dynamic metadata, which compensates a lot for its lower bit depth. Additionally, most smart televisions smooth the "steps" in smooth areas, largely eliminating the artefacts caused by HDR10.
At the end of the day, they're both significantly better than SDR, but Dolby Vision is a touch better, especially on high-end panels.
 - https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2100-2-...
 - https://www.rtings.com/tv/learn/hdr10-vs-dolby-vision
 - https://avdisco.com/t/demystifying-dolby-vision-profile-leve...
 - https://forum.blu-ray.com/showpost.php?p=16546620&postcount=...
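A back-of-envelope version of the bit-depth argument above, keeping the same simplified linear model (real formats use the PQ curve, so actual step sizes vary with brightness):

```python
# Spread a fixed 0..4000-nit range over 8, 10 and 12 bits and see how coarse
# the steps get. Simplified linear model only, as in the comment above.
PEAK_NITS = 4000
for bits in (8, 10, 12):
    steps = 2**bits - 1
    print(f"{bits}-bit: {steps} steps, ~{PEAK_NITS / steps:.2f} nits per step")
# 8-bit:  255 steps,  ~15.69 nits per step -> visible banding
# 10-bit: 1023 steps, ~3.91 nits per step
# 12-bit: 4095 steps, ~0.98 nits per step  -> roughly SDR-era gradation again
```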
When I watch See and compare the DV version vs. the plain HDR version on my LG C1, the difference is big.
I have an (older) LG OLED, and haven't seen anything in DV that I didn't think would be just as good in HDR10, although I haven't compared the same content in both DV and HDR10 directly.
This is opposed to HDR which has reference monitors and should look exactly the same on different, calibrated systems.
HDR10 and all the other stuff came along much much later
Aren't both formats just some metadata on top of the same video file? Netflix etc. serve both, depending on what your device supports.
Pretty sure that both blurays and "4k TV" channels use HDR10(+) as well.
Not like the viewer has to know or care about it (from what I can tell, they're basically identical anyway).
Probably most important for good uptake is a fancy name. HDR10+ just doesn't sound snazzy enough.
--But it goes to 11.
The post production sound department for a film will create and mix the sound to a specific format (usually 5.1) and depending on how it will be distributed, Dolby (or other) is used for encoding the distribution.
Dolby has been popular because it solves for cases when someone only has stereo or when a theatre doesn’t have a specific capability (Atmos) by mixing down or up the film’s delivery format.
It is preferred that audio sources are uncompressed PCM in the WAV format. Most audio in a film is actually mono, but is mixed to a surround format with various effects and processing applied to achieve a desired effect.
When the mixing process is complete, it is then rendered to a delivery format, which is usually a lossless format and/or a proprietary format.
It is worth mentioning that the sound mix for cinema distribution is not the same for digital distribution.
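As a toy illustration of the "mono source mixed into a surround format" step described above (just constant-power panning of one stem into two of the six channels; a real mix involves many stems, effects, and a DAW):

```python
# Toy sketch: pan a mono stem between two channels of a 5.1 bed using a
# constant-power pan law. Illustrative only, not a real mixing workflow.
import numpy as np

def pan_constant_power(mono: np.ndarray, position: float) -> np.ndarray:
    """position: 0.0 = hard left, 1.0 = hard right."""
    theta = position * np.pi / 2
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=-1)

sr = 48_000
t = np.arange(sr) / sr
stem = 0.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz stand-in for a mono source
bed_lr = pan_constant_power(stem, 0.6)        # shape (48000, 2): the L/R pair of the 5.1 bed
```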
> Google has a lot of influence on hardware manufacturers
You know, we could make it so that living does not require a "business model".
Correction: greedy assholes rentseeking proprietary schemes. There is nothing in capitalism that says you must obtain your money through unethical means.
It is actually fine for trailblazers to charge large amounts for new tech. But open source gift economies can eventually break their stranglehold.
Unless they use the power of government to enforce their rentseeking, which can be especially egregious with “intellectual property”.
(Yes it is possible to be a libertarian who criticizes capitalism as using government force.)
Considered unethical by whom? I wouldn't classify my disagreeing with those who consider those two concepts unethical as a "lack of understanding".
In fact, there are several different explanations of why these are unethical. One (Marx's) comes from the labor theory of value (which I don't believe in, as I think it is self-contradictory, but even classical liberals did believe in it; that's why they hated rent seeking).
Every ethical argument is based on some ethical principles (like axioms). The question is, do you disagree with the argument (the derivation of the conclusion) or with the principles?
I'm tired of this sophistry where people claim "this is not a real capitalism". Why cling to the term then? Capitalism has always been a designation of the actual, real system, not some liberal utopia with no rent seeking. (The actual reason is that you can, somehow hypocritically, acknowledge the flaws of the system, while claiming TINA, anyway.)
There is. Ethics constitute a voluntary constraint on behavior. Businesses with no ethics are less constrained, and therefore can outcompete businesses so encumbered.
A company with an online video and advertising monopoly wanting to use that monopoly to destroy competitors isn't "amazing", it's "business as usual".
Apple's stance is especially interesting because it's unclear to me what they gain by pushing license fee encumbered formats.
When de facto standards develop enough momentum to have customer value on their own or as part of other de jure standards, Apple will support them at the OS or app level. Examples include MP3, VP9, Opus, VST3, etc.
They joined quite late but are there.
It's clear to me! I have an iPhone 13 Pro, and I tell you: the Dolby and HEIC formats are a key reason I use the Apple ecosystem instead of Android. The pictures and video I take have a huge dynamic range, accurate true-to-life colour, and a surprisingly small encoded size. The 4K Dolby Vision HDR video the iPhone takes looks like it has been professionally graded and rarely needs any further touching up. To reproduce it with any other device would require a significant setup of something like DaVinci Resolve and a RAW video workflow.
I don't know if everyone but me is colorblind or what, but it's night & day to me. The Apple + Dolby Vision videos are mindblowingly good, whereas everything I've seen taken by or displayed on an Android device is always incorrect in some way. Blown highlights, oversaturated, or whatever.
Google has little clue about colour, HDR, standards, quality, or anything at all along those lines that photographers or videographers care about. They're still releasing SDKs and entire operating systems with the baked-in assumption that images are always 8-bit sRGB SDR. Then they don't bother color-managing that at all, leading to the inevitable end result of either desaturated or garish color depending on the display panel used.
PS: Microsoft used to be the best of the bunch back in the Vista days, and is now regressing to be the worst of the bunch. In part, this is driven by the need to be "cross platform" and compatible with Google's Android. Only Windows and Mac use 16-bit linear-light blending for the desktop window manager, whereas Android uses 8-bit, unmanaged, God-only-knows-what-color-space.
Possibly. In most blind tests, for shots straight out of the camera, people prefer the Google Pixel photos to iPhone photos. Professionals complain that google over-amps the HDR, but consumers prefer that. Apple may have better codecs, but Google has better software post-processing.
iPhone still wins in video though.
People tend to favor shots that are punchy and high contrast, as far as I can tell from the many blind tests various youtubers have performed over the years.
From my viewpoint, iPhones have almost always had better color processing straight out of the camera as it is much closer to true to life than Samsung or Google's. Google's image processing in other areas seems to be better. Low light continues to not be that great on iPhone, which is a big use case while the Pixel excels at it. Denoising isn't great. The iPhone can also tend to lose some fine details that are present in Google's photos.
Overall, I think Apple's photos are more accurate and serve better as a baseline to edit photos, but Pixel wins out in many common scenarios that the iPhone just doesn't do as well along with having a more pleasing image straight out of the camera (because most people want to look better than real life, not like real life).
"True toMEMORY of life" is what's more important, and Pixel photos absolutely feel like they more match what I feel like I see.
Maybe it’s a matter of getting used to.
Fun experiment that will have you filing radars until the end of time: find a red notification bubble on an iPhone home screen. Now, slightly pull down the screen (as if to open the search screen) slowly. Note how the blurred bubble suddenly got darker instead of retaining the same visual brightness only redder.
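That behaviour is consistent with the blur being computed on gamma-encoded values rather than in linear light (I can't confirm what Apple's compositor actually does). A toy check of why gamma-space averaging comes out darker:

```python
# Toy check of the blur-darkening effect described above: averaging two pixels
# in gamma-encoded sRGB vs. in linear light. Numbers are illustrative only.
def srgb_to_linear(c: float) -> float:
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

bright, dark_bg = 1.0, 0.1   # a bright channel next to a dark background, in sRGB

# Gamma-space average (what naive 8-bit blending does):
naive = (bright + dark_bg) / 2                    # 0.55

# Linear-light average (what a 16-bit linear compositor does):
correct = linear_to_srgb((srgb_to_linear(bright) + srgb_to_linear(dark_bg)) / 2)

print(naive, round(correct, 3))   # ~0.55 vs ~0.74: the gamma-space blur is visibly darker
```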
Keying/Grading video is a significantly bigger chore than color correcting a still.
You don’t need to shoot in RAW to get good results. I’m curious what your complaints are?
Even with the 48 MP sensor I'm getting subpar results. Apple might have better hardware but their computational photography hasn't quite caught up yet.
Here's a sample gallery (not mine) https://imgur.io/a/vG6Yskx
I was expecting a major upgrade going from the Pixel 5 to the 14 Pro. It's actually a slight downgrade.
This is my point: Apple is far above the lowest common denominator.
Whereas the Pixels just make better web-ready photos from the get-go, not because their hardware is better, but because their algorithms are. That seems a lot more useful to me because you can actually capture good photos nearly every time, with zero effort, and send them to anyone.
If you have a specialized professional workflow, I entirely believe that you can -- with effort -- generate a better end product. But that's not what most people use their phones for. "Dynamic range" doesn't mean anything to the average person, and if they use HDR it's to create that 2000s-like super amplified gimmicky lighting effect. There's not really even a way to get real HDR to properly/intercompatibly show on most consumer computing devices anyway... same with gamut and color spaces.
IMO (only) making it look good on the average device should be a more important goal than making a workflow that only professionals with specialized knowledge and software can utilize.
> special hardware
Any iPhone, or any Mac device. The other vendors refusing to support HEIC and/or HDR image formats is their prerogative. Keep in mind that the Apple formats are just H.265 still frames, not some bizarre proprietary standard!
> special workflow
I press the shutter button and send the result via iMessage. Is that some esoteric professional workflow?
Again, “web ready” means precisely: the lowest common denominator supported by browsers as far back as NetScape Navigator 4.
Apple decided not to limit themselves to 1990s era standards.
Google is content to remain hobbled.
If I edit a photo and I send it to someone with an iPhone I know the colors will look exactly like how I intended it to look. You can’t say the same about Androids because the screens often aren’t even trying to be color accurate.
I’m never sure if some of the photo file types are an extension of Apple’s “walled garden” approach or if other companies just make lazy products. But the rarity of consumer displays that aren’t tuned for maximum blue-light emission is maddening and other companies really need to step up their game in this area.
I'm sure LG and all the other manufacturers know what they're doing.
Consumers must prefer the garish vivid mode, aka "torch mode". Implying that they shouldn't seems to disregard their preferences.
The person you responded to seems to work with media or at least knows some things about video formats and editing. To them, true, raw colors and representations are what they want. You on the other hand may like the slightly unrealistic, but appealing look of the Pixel phones, which tend to look oversaturated to professionals, but better to consumers.
Similar to how audiophiles like headphones which sound flat and boring to regular people, yet audiophiles think regular people's headphones have way too much bass and treble.
Different products and needs for different use-cases, I guess.
IMO the Pixel phones aren't more saturated than the iPhones (and certainly less than default Samsungs), they just have better dynamic range and color balance. Here's a sample gallery from a sibling post (not my gallery): https://imgur.io/a/vG6Yskx
The pool picture with the red roof and teal/blue water, for example... the Pixel has a less saturated roof but better color correction for the pool. It's way sharper in the tree photo. In the last two photos with the yellow car by the fence, the iPhone completely blew out the sky but the Pixel captured it fine.
That's just one gallery, but that's been my experience the overwhelming majority of the time. The Pixels are unique among Android cameras in their ability to produce natural-looking photos at very high quality, very quickly, at a very reasonable price point. Other manufacturers ramped up hardware specs or added thirty lenses, but Google's approach is to tackle it all computationally, stacking and analyzing multiple exposures and using algorithms to produce a single output (all automatically). The results are really quite amazing -- to my eyes.
At the end of the day digital photography is always a series of judgment calls -- by the sensors, by the firmware, by the software, by the format, by the compressor, by the photographer, doing signals processing on detected light in various ways. Apple optimized the beginning and end stages of that, whereas Google focused on the middle.
It might be possible to get better output out of the iPhones if you run it through special software and workflows, but out of the box the Pixels generate better outputs, and I think that's the way most users (on either side) are going to use them...
I'm not exactly a professional, but I've done enough photography that I twitch slightly whenever I see color balance that hasn't been set correctly to neutral grey.
It's like a typesetter that can't handle bad kerning: https://xkcd.com/1015/
You're maybe not a professional per se, but if I read "color balance that hasn't been set correctly to neutral grey", I have no clue what that's supposed to mean. (I know a bit more now because I fell into a color balance rabbit-hole after reading your comment).
I genuinely can't tell if this is humor or not. If it is, bravo.
Specifically, Leonardo Chiariglione got fired and the MPEG working group was cut up into a bunch of pieces.
This is not exactly true, they are a "founding member" of AOM: https://aomedia.org/membership/members/
> Apple's stance is especially interesting because it's unclear to me what they gain by pushing license fee encumbered formats.
My guess is cheaper hardware. AV1 is simply behind HEVC in terms of hardware (ie, ASIC encoder/decoders) support.
Doesn't mean that they are there to adopt those standards rather than to be informed on how to best compete with them.
> My guess is cheaper hardware. AV1 is simply behind HEVC in terms of hardware (ie, ASIC encoder/decoders) support.
This might be a reason for anyone else but Apple makes their own hardware.
It's in iOS 16 and in the Ventura beta.
And almost nobody supports JPEG XL; is it much better than AVIF?
Both Chrome and Firefox have implemented JXL support (to some extent; FF's implementation did not work for me when I tried it), but it is still hidden behind flags for now.
Now I think that is mostly gone in Apple.
The real problem isn't the hardware manufacturers but the content producers. Dolby engages in blatant anticompetitive behavior that basically requires hardware manufacturers to support their codecs and make it impossible to innovate on the actual media formats in a way that might compete. For example: paying for content to be released in atmos or giving away the tools to author it for free.
The reason is simple: VLC/OSS developers have been implementing Dolby technologies without paying a dime or using proprietary blobs. How dare they!
Would you say the same about a search engine company that gives its browser away for free and pays its competitor in mobile a reported $18 billion a year to be the default search engine for its platform?
I would much rather tie my horse to Dolby than a company that has the attention span of a toddler.
Open standards are good for the general public, as is allowing re-implementations of APIs. Taking a look at Google's anticompetitive use of search combined with ads would be absolutely fantastic too, but I'm not going to gate other actions on it unless there's some semblance of a chance that the connections between the two actions are anything other than theoretical.
I thought the patent ran out.
And in those cases you've listed, you're left with strictly more options than if it's not an open standard.
Additionally, being an open standard, you can probably rely on ffmpeg supporting it. This allows you to transcode into something that your proprietary encoder will support for ingest if it comes to that.
This is also ignoring the first sentence: your whole supposition is based on a scenario where you, as the content producer, for some reason encoded in a format that your customers don't have, and don't have the masters either. Which is basically absurd for anything that would need dynamic HDR or spatial audio.
Because selling a product below cost is fundamentally unsustainable, there is no logical reason to sell a product for less than cost besides doing so temporarily with the hopes of being able to later recoup the loss with higher, above cost prices. This is anticompetitive because an inferior product can win out if it is backed by bigger pockets that can afford to stay unprofitable longer than the company making the superior product.
This is basic economics, not really something that needs to be thought out and debated from scratch by HN over and over every time it comes up, so it really would be helpful if everyone who is thinking about commenting on economic issues like this tries at some point to spend a couple of hours reading an AP Microeconomics text. If a high school kid or college kid can do it in a semester, an intelligent adult can cover the high points in a weekend.
> Because selling a product below cost is fundamentally unsustainable, there is no logical reason to sell a product for less than cost besides doing so temporarily with the hopes of being able to later recoup the loss with higher, above cost prices.
There are plenty of other reasons. The one applicable here is "commoditize your complement". Zero-cost-to-consumer codecs mean more eyeballs on YouTube videos, which means more ad revenue for Google. That thought process doesn't lead to later ramping up consumer costs. And if it's truly an open standard, how are they going to increase costs when anyone can simply release a free implementation?
Absolutely. Go look at how Carnegie won the steel market by starving his competition.
Now all platforms are bundled with browsers and plenty of other software.
> Now all platforms are bundled with browsers and plenty of other software.
This is an interesting question. You could take it as Microsoft's argument before the DOJ being correct, that browsers become an inextricable part of an OS. Whether or not they would have been had they not included it, it seems like we can say in hindsight, of course it would have. But surely Microsoft's decision to do so influenced the way the market went.
If the intent is to drive others out of the market, it could be right?
Or if the same company gave its mobile operating system away for free to undercut a rival and then as soon as it became ubiquitous, started making much of it closed source and forcing companies to bundle its closed source software?
Although honestly, it's always a nuanced thing.
But typically the idea is to use money and undercutting to force out competition, then when the competition dies quality goes to crap.
 And within Profile 7, there is the difference between the MEL (Minimum Enhancement Layer) which just adds HDR data, versus the FEL (Full Enhancement Layer) which actually adapts a 10-bit core video stream into a 12-bit one for FEL compatible players. Not all Profile 7 implementations can handle FEL, but can handle MEL. So even the profiles themselves have fragmentation. FEL and MEL are, within Profile 7, actually HEVC video streams that are 1920x1080 that the player reads simultaneously with the 4K content. So a FEL/MEL player is actually processing 2 HEVC streams simultaneously, so it's not a huge surprise why it isn't used for streaming DV.
 Profile 8 comes in 3 different versions, Profiles 8.1 through 8.4. 8.3 is not used. Profile 8.1 is backwards compatible with an HDR10 stream, Profile 8.2 a SDR stream, and Profile 8.4 an HLG stream. Big surprise that iPhone uses 8.4 because HLG can be seamlessly converted into SDR or some other HDR formats when necessary.
This seems like one corporation flexing on another rather than great sense of mission; it's not like Google doesn't have IP of its own that it prefers to keep locked up. I suspect that this signals a strategic desire to move into the A/V production space, where customers have big demands for storage and computing resources.
ATSC 3.0 is a government standard for how public airwaves should be used. It strikes me as wrong that the government has basically mandated Dolby licensing for hardware manufacturers and software libraries.
There is an argument for patent protection as innovation motivator, but lockup periods are more likely to lead to runaway market dominance due to preferential attachment. Where there's a monospony (like government as owner of spectrum) that's probably going to lead to negative outcomes.
Thanks for widening my perspective on that issue.
This is because ATSC3.0 requires Plex to secure licensing for AC-4 audio in order to process the sound, which they legally haven’t been able to do yet.
Unfortunately, my reception is significantly better via ATSC3.0, yet I’m left waiting and unable to utilize it.
Not that it changes anything. Proprietary formats have no place in such government specifications. But possibly a relevant ffmpeg build could help you do what you wanted to.
I'm not sure if ac-4 is mandatory, but it seems like it is? Kind of a big pain indeed.
C'mon, this is market capitalism 101
What's weird to me is how google is selling this as a win for the public, when the marginal costs added by Dolby are so low. Even in the audio production space, Dolby stuff is a little expensive for an individual (surround sound plugins costing hundreds of dollars) but it's not a big overhead for a recording studio. Their product is quality and consistency at industrial prices and imho they deliver on this.
There isn't an underground of frustrated audio engineers dreaming of how theatrical sound could be so much better if it weren't for big D. Spatial audio rebels build quadrophonic sound systems for raves, but you didn't hear it from me.
The world at large has settled on Dolby Vision and Atmos and it will be very difficult to change this. Not only from the consumer end but especially on the pro audio/video end.
Google would need first to offer plugins for DAWs, video software, etc, to work with these formats before there's enough content that manufacturers and streamers consider it.
Hollywood movies primarily standardized on Dolby Vision, but the entire HDR ecosystem very much did not. Sony cameras for example primarily only shoot in HLG, even for their cinema cameras.
Similarly, games regularly opt for HDR10/HDR10+ for their HDR output instead of Dolby Vision. Why? Because it's cheaper, and dynamic metadata is largely pointless in an environment where the lighting content of the next dozen frames isn't known.
No, pretty much the entire video/streaming industry did. Apple, Netflix, Disney, HBO, etc, either stream in DV or HDR10 (non plus).
Physical Blu-ray is slowly dying (I own a bunch of those), so streaming is really where most of the HDR video content lives.
> Similarly games regularly opt for HDR10/HDR10+ for their HDR output instead of Dolby Vision
Fair point, but consumers keep complaining the PS5 doesn't have DV which is an indicator of what people want. DV is actually a big selling point for the Xbox Series X.
On PC, I don't know. I've been playing HDR in consoles for years but support on Windows has been pretty bad until recently. My impression is HDR is so much more popular on consoles vs PC. Same with Atmos and surround.
(And even if you have a home theatre system, Windows games will still prefer outputting 5.1 / 7.1 PCM and mixing 3D effects by themselves).
I'd also be interested to hear where those Dolby Vision complaints for PS5 are coming from, I haven't heard anyone really say that despite HDR being debated quite a lot :)
You need to buy a license if you want things Atmos-ified (so HRTF) for your stereo headphones. It's basically worthless.
You don't need to buy anything if your media player can decode and downmix Atmos to surround (like Windows Movies & TV).
Home Theatre systems just get Atmos passed through to them if compatible, so they can then decode and downmix the positional audio according to your configuration.
> Windows games will still prefer outputting 5.1 / 7.1 PCM and mixing 3D effects by themselves).
I wish. If games have surround at all, it's usually only analog 5.1/7.1. You need Dolby Digital Live for a digital surround output in most cases (and that can be a PITA to arrange, e.g. patched Realtek drivers). DDL basically provides a fake analog output for software and then sends a compressed digital signal to your decoder.
> Fair point, but consumers keep complaining the PS5 doesn't have DV which is an indicator of what people want. DV is actually a big selling point for the Xbox Series X.
They want it because there are realistically two options right now: HDR10 (not HDR10+) and Dolby Vision, with DV being superior in every aspect from a viewer's perspective. I didn't even know about HDR10+ until today. In other words, what people actually want is HDR dynamic metadata, because it looks a lot better than static metadata.
Since, like you said, every streaming service is either DV or static HDR10, it means people say that they want DV.
It unfortunately does not. That's what a Dolby Vision mastering display requires, but to get the DV logo on your display all you really have to do is pay Dolby money and use their tonemapper. Unlike VESA, they don't actually have a display certification system at all.
Well, then PC HDR is doomed.
There is also an issue with HDR calibration on PC for some reason. I have no issues on a console connected to the TV, but the same game on the same TV running on a PC gets all weird looking.
So HDR10+ is dynamic metadata on top of HDR10's static metadata on top of BT.2020 PQ, which is what makes it "HDR" in the first place. That's easy.
Dolby Vision is then a profile clusterfuck. Sometimes it's just dynamic metadata on top of HDR10. Sometimes it's dynamic metadata on top of HLG. Sometimes it is its own thing entirely.
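To make the static-vs-dynamic distinction concrete, here's an illustrative (not bitstream-accurate) sketch of the kind of information each carries; the field names follow common HDR10 terminology (ST 2086 mastering display volume, MaxCLL/MaxFALL), but the structures themselves are made up:

```python
# Illustrative only: the *kind* of information carried as static vs. dynamic
# HDR metadata. These dicts are not a real bitstream structure.
static_metadata = {          # HDR10: one block for the whole title
    "mastering_display": {
        "primaries": "BT.2020",
        "max_luminance_nits": 1000,
        "min_luminance_nits": 0.005,
    },
    "max_cll_nits": 800,     # brightest single pixel anywhere in the title
    "max_fall_nits": 400,    # brightest average frame anywhere in the title
}

dynamic_metadata = [         # HDR10+ / Dolby Vision: per scene (or per frame)
    {"scene": 1, "max_nits": 120,  "tone_curve_hint": "lift shadows"},
    {"scene": 2, "max_nits": 2400, "tone_curve_hint": "protect highlights"},
]
# With only the static block, the TV has to pick one tone map that works for
# both the dim scene and the 2,400-nit scene; dynamic metadata lets it adapt.
```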
Do they have researchers working on new audio and video formats?
Or is it now all just a self-perpetuating machine for generating licensing revenue, based on existing patents?
Sorry for the ignorant question but I'm clueless about their ongoing contributions to the industry.
So they do actually contribute some worthwhile stuff. And some of it is open standards (like PQ & ICtCp).
They also absolutely troll licenses though. Dolby Vision being a perfect example, check out the "profiles" section of https://en.wikipedia.org/wiki/Dolby_Vision and you'll see some truly dumb Dolby Vision profiles that exist obviously just to slap a Dolby Vision license & branding on an otherwise boring, generic video format. Dolby Vision 9 is a perfect example, it's not even HDR at all. It's literally the same stuff we've all been watching for a decade+, but with marketing wank shoved onto it.
Short answer is yes. Search for "Dolby" under "Author Affiliation" in the AES paper search and you can see the research they publish. (The papers themselves are unfortunately behind the AES paywall, but usually the authors will send you the paper if you ask nicely.)
Imagine Android TV or Netflix disabling Atmos because of licensing, then using the new format, which needs a new receiver.
I spent 2k on mine for Atmos with 4 ceiling speakers...
Also still plenty of Blu-Rays around using VC-1 which is garbage compared to current codecs. Perhaps all new ones have migrated to H.264 - at least all UHD ones I have seen have.
Occasionally you can see an odd DTS/DTS-MA now and then, but not a lot.
The very opposite of cutting edge technology (with the exception of tiny niches like VR).
Can anyone understand how it's in anyone's best interest to investigate / potentially stop an open source standard / royalty-free format that has buy-in from tons of big orgs?
The Commission has information that AOM and its members may be imposing licensing terms (mandatory royalty-free cross licensing) on innovators that were not a part of AOM at the time of the creation of the AV1 technical specification, but whose patents are deemed essential to (its) technical specifications
And then it is Facebook and Oracle that can do no right.
Because "using my monopolistic profits in one area to destroy your business in another area" is textbook anticompetitive behaviour.
I have never understood how or why it is that expensive proprietary codecs keep taking over. Maybe there is more value add somewhere, but it's very unclear, especially under the gloss of (usually deeply non-technical) marketing fluff.
When I did audio production in the film industry, Dolby stuff was a post-production expense, but not a very big one. Their license fees aren't staggeringly expensive, and the quality and reliability of the playback system was its own argument - if the Dolby 5.1 sounds right in one theater it's going to sound right in another, and that's a big deal because bad sound can really kill a movie, even if the audience can't articulate why (most people don't think too much about sound).
Digidesign (the manufacturers of Pro Tools, later acquired by Avid) are a far more aggressive company that has maintained a virtual lock on its market with a combination of very expensive hardware and moat-building strategies.
Breaking those proprietary relationships with open source has always been a losing battle - look at the HEVC vs. VP9 vs. AV1 battles, or AptX vs. AAC vs. Opus.
The media industry is a surprisingly tight-knit and very conservative club that doesn't adopt outsiders easily.
There's nothing stopping open source digital codecs from ruling, but they need people working for them.
Personally, I'd rather pay dollars than data.
Sure, there are definitely things Google has done that have benefited the public.
> Let's not forget how Chrome liberated the web from Microsoft's won't-fix attitude whilst IE remained the dominant browser.
But I am unconvinced that Chrome is one of them. Firefox was doing fine displacing IE on its own, and the main reason Chrome managed to pull ahead so fast was the huge marketing drive, including ads on the home page of the world's #1 website. Meanwhile, Chrome-driven extensions have made the web incredibly complicated, to the extent that new browsers are almost impossible, while its dominance holds back any real effort to put users in control of their web browsing experience, since few websites care to support anything other than Chrome.
Also, ironically, even Google Chat didn't seem to support webp images until recently. I appreciate the idea of open standards, but compatibility matters way more to the end user.
It felt very Google-y: lots of attention to the cool CS problems, less so the boring ones like tool support and sensible defaults for non-experts.
I mean, they could make a better open video codec, give me AV2 any day. But why not push the pre-existing standards as "premium offerings"?
And btw [Opinion Incoming!] I believe Opus is as good as lossy audio formats will ever get. I'd love to be proven wrong...
That's supposed to be a "full sphere" surround sound format (developed ~50 years ago), but hasn't been picked up widely:
Though you don't actually need any of the fancy new stuff being worked on to use Ambisonics - you can already use Opus with Ambisonics today in MP4.
For example, lets say I have some-atmos-movie.mkv on my local computer, that I'd like to play back through my 7.1 speakers (attached to computer).
In my head, I'm thinking:
1. Needs to be converted to Ambisonics format
2. Player software (eg vlc) needs to understand the resulting format, and send an appropriate bitstream to each output device
Guessing it's not that simple?
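Step 2, at least for first-order Ambisonics, really is mostly a matrix multiply; the hard parts are the Atmos-to-Ambisonics conversion in step 1 (the Atmos side is proprietary) and doing it all in real time inside the player. A toy decode sketch, ignoring normalization conventions (FuMa vs. SN3D/ACN), shelf filters, and proper decoder design:

```python
# Toy sketch: decode first-order Ambisonics (B-format W/X/Y/Z) to a speaker
# layout with a naive projection decoder. Illustrative only; a real decoder
# (or a hypothetical VLC plugin) would do much more.
import numpy as np

def decode_matrix(speaker_angles_deg):
    """speaker_angles_deg: list of (azimuth, elevation) pairs, one per speaker."""
    rows = []
    for az, el in np.radians(speaker_angles_deg):
        rows.append([
            np.sqrt(0.5),              # W (omni) weight
            np.cos(az) * np.cos(el),   # X (front/back)
            np.sin(az) * np.cos(el),   # Y (left/right)
            np.sin(el),                # Z (up/down)
        ])
    return np.array(rows)

# Hypothetical 7.0 layout (azimuths in degrees, elevation 0); LFE handled separately.
layout = [(30, 0), (-30, 0), (0, 0), (90, 0), (-90, 0), (150, 0), (-150, 0)]
D = decode_matrix(layout)               # shape (7, 4)

b_format = np.zeros((4, 48_000))        # one second of placeholder W/X/Y/Z samples
speaker_feeds = D @ b_format            # shape (7, 48000): one feed per speaker
```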
What are some of the risks of media formats being centralized by a mega corp like Google who works with nation states?
Can we truly expect something free… or can we expect all of the content we create to be steganographically watermarked in surveillance states that appear to be fully cracking down on encryption?
My immediate reaction to reading these few words is - "another tool for the Google graveyard"
While I see both sides, I don't agree with this strategy. In fact, I think the public should be more aware of just how damaging Google/YouTube is to the streaming ecosystem and if you really stretch this argument, the planet.
It is true - HEVC's original licensing structure was a nightmare, but it seems to have been resolved and we now have hardware decoders in nearly all modern consumer devices.
This is also becoming true of Dolby's formats. Maybe I am biased or not as informed as I could be, but they did the R&D, worked with some of the brightest (pun intended) in the industry, and created a production-to-distribution pipeline. Of course there are fees, but vendors are on board and content creators know how to work with these standards.
Now here comes one of the largest companies in the world. HEVC? Nope - they don't want to pay anyone any fees, so instead they're going to develop the VP9 codec. Should they use HLS or DASH? Nope, they are going to spin DASH off into their own proprietary HTTP deliverable and only deliver AVC HLS for compatibility reasons. Apple customers complain, and after years Apple caves and supports VP9 as a software decoder starting with iOS 14. This means millions of users eat significant battery cycles just to watch anything, including HDR video.
Then we get to Chrome. HEVC? Nope. Dolby? Nope. HLS? Nope. The most popular browser in the world doesn't support any of the broadcast standards. It's their way or fallback to SDR and the less efficient AVC codec.
So now anyone else in the streaming industry trying to deliver the best streaming experience has to encode/transcode everything three times. AVC for compatibility (and spec) reasons, HEVC for set-top boxes and iOS, and VP9 for Google's ecosystem. If it wasn't for CMAF the world would also have to store all of this twice.
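As a sketch of what "encode everything three times" looks like in practice (assuming ffmpeg with libx264/libx265/libvpx-vp9; the file names and quality settings are illustrative, and real pipelines also produce a whole ladder of resolutions and bitrates per codec):

```python
# Sketch of the triple-encode burden described above. Illustrative settings only.
import subprocess

RENDITIONS = {
    "compat_avc.mp4":  ["-c:v", "libx264",    "-crf", "22"],                 # broad compatibility / HLS
    "stb_hevc.mp4":    ["-c:v", "libx265",    "-crf", "24"],                 # set-top boxes, iOS
    "chrome_vp9.webm": ["-c:v", "libvpx-vp9", "-crf", "31", "-b:v", "0"],    # Chrome / YouTube-style
}

for out_name, codec_args in RENDITIONS.items():
    # "master.mov" is a hypothetical mezzanine file.
    subprocess.run(["ffmpeg", "-i", "master.mov", *codec_args, out_name], check=True)
```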
In the end, to save YouTube licensing and bandwidth costs, the rest of the industry has to consume 2-3x more compute to generate video and hundreds of millions of devices now consume an order of magnitude more power to software decode VP9.
If and when Project Caviar becomes reality, it'll be another fragmented HDR deliverable. Dolby isn't going away and Chrome won't support it, so the rest of the industry will have to add even more compute and storage to accommodate. In the name of 'open' and saving manufacturers a couple dollars, the rest of the industry is now fragmented and consumers are hurt the most.
YouTube weirdly admitted this fragmentation is becoming a problem. They can't keep up with compute and had to create custom hardware to solve it. Of course, these chips are not available to anyone else and give them a competitive edge:
You're literally commenting here on an article where the Dolby CEO gleefully explains how he made a profit by using streaming services to make users pay for their own patents and royalties. And we didn't even get to the DRM which lies deeply integrated into every part of those formats. Or the insane complexity of HEVC and Dolby Vision profiles, which somehow doesn't bother you at all.
So, AVC, HEVC, Dolby anything, DTS anything, burn the rentseekers to the ground. I'm sorry if you need to transcode an additional video format for that.
Now that Google is in the phone business, they have to somehow support HEVC on their phones.
Making hardware devices like Android TV cheaper helps adoption of Google's platforms and services.
Nobody ever passes the savings on to customers.
The royalty is probably a couple of cents at the most per device. It makes more sense for them to just pocket the millions in savings and show extra profit to their shareholders. It would be dumb to give up millions to drop the price a few cents, which won't even be noticed by consumers anyways.
Is Google evil because it wants the royalty money? How can it be so low that it doesn't matter? :P
Maybe it would be easier for them to tie in to their speech recognition/translation if they controlled the whole stack?
Does anyone have any experience with Dolby and could shed some light?
We lost a lot of people.
They've been repeatedly bricked (features rolled back, support changed, can't set up complete groups, etc) by Google in the last few years, to the point where I don't even think I have them connected right now.
I don't trust consumer products from Google at all.
Month M + 4: Google shows off $GOOGLE_THING and announces $PARTNER devices
Month M + 9: $PARTNER releases first devices with $GOOGLE_THING support (also supports $INDUSTRY_STANDARD, of course)
Month M + 18: Google disappointed with lack of adoption of $GOOGLE_THING announces first-party products with $GOOGLE_THING support
Month M + 24: Google's internal team working on first-party $GOOGLE_THING products dissolved
Month M + 36: $PARTNER announces future products will no longer support $GOOGLE_THING due to lack of demand
Month M + 48: Google removes all mentions of $GOOGLE_THING from their websites, docs, etc.
And uncertainty ... how long will such efforts last before Google loses interest or is forced to abandon them?