Fun coverage of everything that goes into such a fascinating, compelling topic that so few people have any real idea about (computer × audio pipelines). I'd like to run through the major players as I see them right now: Sound Open Firmware, and then the ever-present, widely adopted Cadence Tensilica Xtensa HiFi IP that's on a good number of systems-on-chip. Then gaze forward a little.
Based on the who's who of contributors/users (Intel, MediaTek, Realtek, and AMD, although it seems AMD dropped the hardware for it after Zen 3?), my impression is that Sound Open Firmware is the 900 lb gorilla in this area. There's a ton of work & capabilities poured in here. No one wants to own a whole audio firmware stack alone, and differentiating yourself there is extremely hard, so cooperation makes sense. So: SOF, started in 2018 & quickly picked up by the Linux Foundation. With the 2.4 release (January 2023) they switched to Zephyr as the base embedded OS, which seems like an awesome win for offloading development effort & perhaps for benefiting from subsystems like Bluetooth support. https://archive.fosdem.org/2024/schedule/event/fosdem-2024-2... https://thesofproject.github.io/latest/introduction/index.ht... https://www.phoronix.com/search/Sound+Open+Firmware
SOF is a software consolidation effort, and it has done great. The somewhat wilder side is the hardware. Not my field, but again my impression is that, at this point, Tensilica's Xtensa IP (Tensilica was acquired by Cadence in 2013) is used by basically everyone, often as an incredibly feature-rich integrated DSP running the HiFi 1/4/5/5s extensions. Xtensa is famously used in Espressif's ESP series (which now also ships RISC-V designs), but Tensilica cores (mostly "LX"?) are everywhere, almost always explicitly for audio (or vision, in the case of HoloLens) processing. With nice chunky DSP instructions, now including 32-bit float, parallelizing/auto-vectorizing nicely with Cadence's proprietary XCC compiler, this stuff is ubiquitous. There are HiFi 4 and HiFi 5 (and now 5s) cores in a wild number of chips: almost all modern Intel laptops/desktops, tiny little Cortex-A7 chips, tiny little TWS earbud systems. Qualcomm has some Xtensa too, but I'm not sure where; for example, there's a great DEF CON talk on hacking ath10k WiFi, which is Xtensa-based!
https://media.defcon.org/DEF%20CON%2031/DEF%20CON%2031%20pre... https://www.cnx-software.com/2022/02/13/allwinner-t113-s3-du... https://semiwiki.com/forum/threads/wuqi-microelectronics-sel...
Cadence seemingly has the audio DSP market locked down right now.
Thoughts on the frontier/future: I'm interested to learn what AMD did after walking away from TrueAudio hardware (a HiFi block on their designs). There was work where they tried to pursue GPGPU efforts, and maybe on APUs the latency wouldn't be an issue, but I sort of expect it's mostly software now, and I'd love to know more about what that software is.
SOF has spent years wanting to get more open. But that XCC compiler seems like a moat that's holding so far, and I'm not sure who else is even trying to bring DSP-like instructions or cores to this market. SOF has been trying to run on ARM or x86 or RISC-V for a while, but without vectorization and nice low-power parallel hardware it's hard to hope for too much. I keep hoping RISC-V's RVV vector extension might do the thing, but how much success folks have had getting LLVM to auto-vectorize for it is very unclear to me.
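For concreteness (my own illustration, not from SOF): the workloads at stake here are mostly tight multiply-accumulate loops. A FIR filter is the canonical shape of kernel that XCC auto-vectorizes for HiFi cores, and that LLVM would need to turn into good RVV code for RISC-V to compete. Sketched in plain Python just to show the loop structure:

```python
# The canonical DSP workload: a FIR filter, i.e. convolution of the
# input with a vector of taps. The inner loop is the run of
# multiply-accumulates that a vectorizing compiler has to target.
def fir(taps, x):
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j in range(len(taps)):   # the MAC loop a vectorizer unrolls
            if i - j >= 0:
                acc += taps[j] * x[i - j]
        out.append(acc)
    return out

# Two-tap averager applied to a step edge: the edge gets smoothed.
print(fir([0.5, 0.5], [1.0, 1.0, 0.0, 0.0]))  # [0.5, 1.0, 0.5, 0.0]
```

In C this is the doubly nested loop the compiler has to prove independent; the HiFi ISAs win by retiring several of those MACs per cycle, which is exactly what RVV code generation would need to match.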
MomsAVoxell 5 hours ago
Disclaimer: I work for a manufacturer of headphones and microphones. If you’re into pro audio, you have undoubtedly heard of or used our products, which are in widespread use throughout the industry. I spend my days in the lab with the DSP engineers responsible for noise cancellation and other algorithms some of you may even be using right now. I myself work on plugins for post-processing features that support our microphones, on stage and in the studio, used by amateurs and pros alike.
I found a lot wanting in this article. The emphasis on a lack of tooling seems ill-conceived. The #1 DSP tool in use in this industry is... Matlab.
Yet, it’s not even mentioned.
Here’s how it goes: Matlab is used to model and test everything. Algorithms are developed, refined, tested and evaluated in Matlab, which has ample capabilities to do realtime audio with high performance. Once the algorithm has been demonstrated and proven, it is ported from Matlab to C, and then core parts are implemented in assembly.
This does not take years. It takes months and in some cases, with a great DSP team, even just weeks. Matlab has plenty of tools for doing this properly, from visual tools to optimization functions. There are very, very few professional DSP engineers who don’t work with Matlab in this way.
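To make that Matlab-to-C step concrete (my own toy example, not the commenter's): the kind of kernel that gets prototyped first is often a biquad section, essentially Matlab's filter(b, a, x) in miniature. Sketched in Python here as a stand-in for the Matlab prototype; the port to C is then nearly line-for-line:

```python
# Direct Form II transposed biquad -- the classic kernel prototyped in
# Matlab (filter(b, a, x)) and then ported to C and, finally, assembly.
# Coefficients are normalized so that a0 == 1.
def biquad(b0, b1, b2, a1, a2, x):
    y_out = []
    z1 = z2 = 0.0            # filter state (delay elements)
    for xn in x:
        yn = b0 * xn + z1
        z1 = b1 * xn - a1 * yn + z2
        z2 = b2 * xn - a2 * yn
        y_out.append(yn)
    return y_out

# Identity filter: b0 = 1, everything else 0 -> output equals input.
print(biquad(1.0, 0.0, 0.0, 0.0, 0.0, [0.5, -0.25]))  # [0.5, -0.25]

# Smoothing filter with DC gain (b0+b1)/(1+a1) == 0.4/0.4 == 1:
# fed a constant 1.0, the output settles to 1.0.
print(round(biquad(0.2, 0.2, 0.0, -0.6, 0.0, [1.0] * 200)[-1], 6))  # 1.0
```

The DC-gain check mirrors the validation step Matlab makes easy before any assembly is written: the gain at DC is (b0 + b1 + b2) / (1 + a1 + a2), so the expected steady-state value is known in advance.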
So this article really felt a bit more like marketing copy designed to sell their product... and as I got to the bottom of the article and saw the mention of the JS event loop and the browser, I felt the article was being intentionally disingenuous about the real issues. (The real issue with embedded DSP development is that hardware vendors are terrible compiler developers; this has had the side effect of turning great DSP engineers into great assembly programmers. You absolutely must use assembly; but you must also absolutely use Matlab.)
And then, there is JUCE. JUCE is rapidly becoming THE tool to use for high performance audio - it dominates the desktop plugin market - but, as it matures, it has become more and more embeddable. There are devs out there who have made millions with JUCE plugins and are now porting their JUCE code to hardware and embedded platforms with a great deal of ease.
If you want to do embedded DSP properly, start with Matlab, get your ideas implemented, port to C++, integrate into JUCE, and then port your JUCE code to embedded, if your BOM budget supports a decent processor - or, port your processBlock() to assembly, if it doesn’t. This is the future of DSP platform development. Meanwhile, Matlab->C->Assembly is a very well refined workflow, and it is quite productive. One of the best reasons to keep those assembly language programming chops sharp.
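As a language-agnostic sketch of the processBlock() contract being described (my illustration; the names are made up, not JUCE’s actual API): process one buffer in place, keep state across calls, allocate nothing inside the callback. That contract is what makes the C++-to-embedded port tractable:

```python
# Sketch of the processBlock() contract that survives the port from a
# desktop plugin to an embedded target: mutate one fixed-size buffer in
# place, carry state across calls, allocate nothing in the callback.
class GainSmoother:
    def __init__(self, target_gain, coeff=0.1):
        self.target = target_gain
        self.coeff = coeff    # one-pole smoothing to avoid zipper noise
        self.gain = 0.0       # state persists between blocks

    def process_block(self, buf):
        for i in range(len(buf)):
            # move the current gain a little toward the target per sample
            self.gain += self.coeff * (self.target - self.gain)
            buf[i] *= self.gain

g = GainSmoother(target_gain=1.0)
block = [1.0] * 8
g.process_block(block)       # gain ramps up smoothly within the block
print(block[0] < block[-1])  # True: no click at the block boundary
```

On an embedded target this same structure hangs off the DSP's frame interrupt; only the inner loop changes when the processBlock() core gets rewritten in assembly.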
flogoe 5 hours ago
Very interesting. I work for a large (for the industry, at least) company that builds audio effect plugins and virtual instruments. Most of my colleagues and I have not touched Matlab since we left uni, and when talking to devs from other companies I got the same impression.
For prototyping and algorithm development we mainly use Python, sometimes domain-specific languages like Faust or Cmajor, but most of the time we go with C++ right away.
MomsAVoxell 3 hours ago
That’s also a perfectly reasonable way to do things if you have the right kind of competency with the compiler, but we have found that Matlab enables very rapid prototyping and development, and most important of all: validation and testing of the results.
Of course, there are valid reasons why audio effects plugin/virtual instrument developers and pro audio hardware manufacturers take different approaches. We are targeting DSPs designed very specifically for audio-related tasks; you (I assume, correct me if I’m wrong) are targeting the vast array of platforms the audio plugin/virtual instrument market demands be supported. There is somewhat of a gap between these worlds: we have dedicated DSPs that only run our hand-crafted, carefully designed and tested code, while you have to target a vast array of different systems (differing DAWs, different plugin formats, different CPU architectures) and therefore take a different approach to solving similar problems. It’s no surprise that you don’t have exposure to Matlab in that context; it’s more important, I would wager, for your devs to know the differences between compilers and plugin architectures.
That being said, our DSP engineers definitely write C++ code too; it’s just that for the embedded DSP use cases it’s not as productive, nor as necessary. C++-based audio algorithms enable a great deal of mobility in terms of platform ports; using Matlab to refine DSP algorithms prior to final implementation in DSP assembly (and C) is feasible for us because we control, in its entirety, the nature of our hardware platform.
There are no VST vs. AU vs. AAX requirements in our specs - but there are many of these kinds of thorny issues in yours, I’d wager.