Eric Anholt (anholt) wrote,

new job!

Yesterday was my first day working at Broadcom. I've taken on a new role as an open source developer there. I'm going to be working on building an MIT-licensed Mesa and kernel DRM driver for the 2708 (aka the 2835), the chip that's in the Raspberry Pi.

It's going to be a long process. What I have to work with to start is basically sample code. Talking to the engineers who wrote the code drops we've seen released from Broadcom so far, they're happy to tell me about the clever things they did (their IR is pretty cool for the subset of their architecture they chose to target, and it makes instruction scheduling and register allocation *really* easy), but I've had universal encouragement so far to throw it all away and start over.

So far, I'm just beginning. I'm still working on getting a useful development environment set up and building my first bits of stub DRM code. There are a lot of open questions still as to how we'll manage the transition from having most of the graphics hardware communication managed by the VPU to having it run on the ARM (since the VPU code is a firmware blob currently, we have to be careful to figure out when it will stomp on various bits of hardware as I incrementally take over things that used to be its job).
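For a sense of what those first bits of stub DRM code might look like, here's a minimal sketch of a platform driver that registers a DRM device without mapping or programming any hardware yet. Everything here (the vc4_stub naming, the module metadata) is hypothetical; the real driver will look quite different by the time it does anything useful.

/*
 * Minimal stub DRM platform driver sketch: registers a DRM device and
 * nothing else.  All names here are made up for illustration.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <drm/drmP.h>

static const struct file_operations vc4_stub_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,
	.poll		= drm_poll,
	.read		= drm_read,
};

static struct drm_driver vc4_stub_driver = {
	.driver_features	= DRIVER_GEM,
	.fops			= &vc4_stub_fops,
	.name			= "vc4-stub",
	.desc			= "Broadcom VC4 stub driver",
	.date			= "20140617",
	.major			= 0,
	.minor			= 0,
};

static int vc4_stub_probe(struct platform_device *pdev)
{
	struct drm_device *drm;
	int ret;

	/* Allocate and register a DRM device with no hardware behind it. */
	drm = drm_dev_alloc(&vc4_stub_driver, &pdev->dev);
	if (!drm)
		return -ENOMEM;

	ret = drm_dev_register(drm, 0);
	if (ret) {
		drm_dev_unref(drm);
		return ret;
	}

	platform_set_drvdata(pdev, drm);
	return 0;
}

static int vc4_stub_remove(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	drm_dev_unregister(drm);
	drm_dev_unref(drm);
	return 0;
}

static struct platform_driver vc4_stub_platform_driver = {
	.probe	= vc4_stub_probe,
	.remove	= vc4_stub_remove,
	.driver	= {
		.name = "vc4-stub",
	},
};
module_platform_driver(vc4_stub_platform_driver);

MODULE_DESCRIPTION("Stub DRM driver sketch");
MODULE_LICENSE("GPL");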

I'll have repos up as soon as I have some code that does anything.
15 comments
This is great news!

Anonymous

June 17 2014, 12:40:35 UTC

Nice, I wish you a ton of luck in your new job!!!

Anonymous

June 17 2014, 14:18:21 UTC

Are you going to use the Gallium 3D framework for the new Mesa driver?

Anonymous

June 17 2014, 16:58:28 UTC

The approach I'd suggest is:

* Strip down the binary firmware to the bare minimum required to:

- Load the kernel/bootloader on the ARM core and boot it.

- Stick around enough to accept requests from the ARM core to load firmware onto the VideoCore.

* Create a new SW stack based on the mainline kernel (+U-Boot?).

* Manage VC SW loading/communication entirely from the ARM core.

* Write a document that guarantees a stable ABI between VC and ARM core for the initial VC firmware loading. The current firmware protocol documentation doesn't really commit to being a stable ABI that mainline can rely on.

* Write a kernel driver to talk to the VC firmware in a way that's acceptable upstream (the existing mailbox property message format is sketched after this list).

* When ready to switch the Pi Foundation's images to the new system, replace the firmware and kernel all at once (at least in the initial image generation process; upgrades to existing installs might need some care, or just fingers-crossed that everything upgrades OK at once).

That way, you get to:

* Avoid having both the VC firmware and ARM driver think they own the VC.

* Avoid any compatibility transition code.
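For reference, the channel the ARM already uses to talk to the firmware on the Pi is the mailbox "property" interface; its message layout, as documented in the raspberrypi/firmware wiki, looks roughly like the sketch below. The tag shown is only an example; the tags a sanctioned firmware-loading ABI would need are exactly the part that would have to be written down and guaranteed.

/*
 * Sketch of the Raspberry Pi mailbox "property" message layout (channel 8),
 * per the raspberrypi/firmware wiki.  The GET_FIRMWARE_REVISION tag is only
 * an example; a stable firmware-loading ABI would need new, documented tags.
 */
#include <stdint.h>

#define RPI_FIRMWARE_STATUS_REQUEST		0x00000000
#define RPI_FIRMWARE_GET_FIRMWARE_REVISION	0x00000001	/* example tag */

struct rpi_firmware_property_msg {
	uint32_t buf_size;	/* total size of this buffer, in bytes */
	uint32_t code;		/* request (0) / response status code */

	/* One tag follows; real messages may concatenate several. */
	uint32_t tag;		/* which property is being queried or set */
	uint32_t val_buf_size;	/* size of the value buffer, in bytes */
	uint32_t val_len;	/* request: 0; response: length | (1u << 31) */
	uint32_t value;		/* value buffer (a single u32 here) */

	uint32_t end_tag;	/* 0x00000000 terminates the tag list */
} __attribute__((aligned(16)));	/* the buffer address is passed in the upper
				 * bits of the mailbox word, so it must be at
				 * least 16-byte aligned */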
That's really outside of the scope of what I'm doing. I've got many months of work ahead of me just to build a graphics driver that doesn't use the VPU. Delaying starting that to rebuild how boot works would be a major delay and reduce the chance of succeeding at all. Better, in my opinion, to build the graphics driver now and let smart people who already know how to build firmware and bootloader stuff handle changing that some day when they decide to make the switch to my driver.
Which city are you working in?
I'm going to be working remotely from Portland, but I'm currently visiting them in Cambridge (UK).

Anonymous

July 22 2014, 17:10:40 UTC

Oh, bother! Wish I'd known, I'd have suggested pubbage. Erm, semi-unrelatedly, are you coming to DebConf?

-a wildly anonymous Cambridge resident
Yeah, I'll be at DebConf for sure. I don't know when I'll be back in Cambridge, though.

Anonymous

June 18 2014, 15:47:51 UTC

It would be nice if your driver could run on other Broadcom CPUs, for example the BCM28145/28155, which also have a VideoCore 4.
As far as I can tell, that should just be a matter of adding the arch-specific/devicetree bits to what I'm building to say where the VC4 lives in address space on your device. (There are some minor revs to the hw, so some devices have more bugs than others, but from what I've seen the number of errata is small and reasonably specified).
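To illustrate what "arch-specific/devicetree bits" means in practice: the driver only needs to be told where the hardware's register window lives on a given SoC, which the device tree node describes. A rough sketch follows, with a made-up compatible string and node layout purely for illustration.

/*
 * Sketch of the devicetree plumbing: the driver maps whatever register
 * range the DT node hands it, so supporting another VC4-bearing SoC is
 * mostly a matter of adding a node like the (made-up) one below.
 *
 *	v3d: v3d@7ec00000 {
 *		compatible = "brcm,vc4-v3d";	// hypothetical binding
 *		reg = <0x7ec00000 0x1000>;
 *	};
 */
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static const struct of_device_id vc4_v3d_dt_match[] = {
	{ .compatible = "brcm,vc4-v3d" },	/* hypothetical */
	{}
};

static int vc4_v3d_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *regs;

	/* Map the register range described by the device tree node. */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(regs))
		return PTR_ERR(regs);

	/* ...identify the hardware revision, check errata, etc... */
	return 0;
}

static struct platform_driver vc4_v3d_driver = {
	.probe	= vc4_v3d_probe,
	.driver	= {
		.name		= "vc4-v3d",
		.of_match_table	= vc4_v3d_dt_match,
	},
};
module_platform_driver(vc4_v3d_driver);

MODULE_LICENSE("GPL");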

Anonymous

June 19 2014, 06:29:37 UTC

It sounds like you are going to shift work to an already overworked ARM core?
Let's say that I'm skeptical that packaging up a GLX-like protocol stream is actually cheaper than just sending off hardware batchbuffers. We also know we've still got a lot of useful things we can do in Mesa core to reduce overhead. But even then, Mesa's seen a lot more optimization effort already than a small team at any one hardware vendor can do. If I can't beat the performance of the previous stack, I'll be pretty disappointed.

However, I'm currently building a Gallium driver, and looking at how Gallium manages GL state, I'm pretty sure I'm going to have to throw out Gallium to get performance. The state tracker is using this constant state object infrastructure, which looks like an even-more-expensive version of what I ripped out of i965 a few years back for a major performance win.
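For context, the "constant state object" (CSO) pattern in question looks roughly like this from a Gallium driver's point of view: state is baked into an object once at creation time and then bound by pointer later. This is only a sketch against Gallium's pipe_context hooks; the vc4_* names are hypothetical.

/*
 * Rough sketch of Gallium's "constant state object" (CSO) pattern: the state
 * tracker creates an immutable object once, then binds it by pointer.  The
 * vc4_* names are hypothetical; only the pipe_context hooks are Gallium's.
 */
#include "pipe/p_context.h"
#include "pipe/p_state.h"
#include "util/u_memory.h"

static void *
vc4_create_blend_state(struct pipe_context *pctx,
                       const struct pipe_blend_state *cso)
{
	/* Create: copy (and potentially pre-bake) the state exactly once. */
	struct pipe_blend_state *so = CALLOC_STRUCT(pipe_blend_state);

	if (so)
		*so = *cso;
	return so;
}

static void
vc4_bind_blend_state(struct pipe_context *pctx, void *hwcso)
{
	/* Bind: remember the pointer and mark the state dirty; the actual
	 * hardware packets get emitted at draw time.  (The driver-private
	 * context struct is omitted from this sketch.) */
}

static void
vc4_delete_blend_state(struct pipe_context *pctx, void *hwcso)
{
	FREE(hwcso);
}

void
vc4_blend_state_init(struct pipe_context *pctx)
{
	pctx->create_blend_state = vc4_create_blend_state;
	pctx->bind_blend_state = vc4_bind_blend_state;
	pctx->delete_blend_state = vc4_delete_blend_state;
}

(As I read the complaint, the overhead is largely the translation and hashing the GL state tracker does to produce these objects on state changes, compared to just checking a dirty flag at draw time.)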

Anonymous

June 21 2014, 14:49:09 UTC

Will you do a full OpenGL 2.1 implementation?*


*The VideoCore IV QPU supports OpenGL ES 3.0, OpenCL 1.2, and OpenGL 2.1 in hardware, but they are not implemented.

A fun link, an LLVM backend for the VideoCore QPU:

https://github.com/simonjhall/llvm_qpu

and its assembler:
https://github.com/simonjhall/qpu-assembler
OpenGL renderer string: Gallium 0.4 on VC4
OpenGL version string: 2.1 Mesa 10.3.0-devel (git-09deb47)

but really that's a lie. It doesn't do 3D texturing, and I'm not currently planning on actually doing it. GL 2.1 only requires a 3D texture size of 16x16x16, so I could stuff the data into a 2D texture and do all the mipmap filtering by hand in the shaders, but I don't really see myself ever being bothered to do so.
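For what it's worth, the workaround described there amounts to laying the Z slices of the 3D texture out side by side in a 2D texture and doing the addressing (and any filtering across slices) by hand in the shader. A trivial sketch of the coordinate math, assuming a simple row-of-slices layout:

/*
 * Toy sketch of packing a WxHxD 3D texture into a (W*D)xH 2D texture as a
 * row of depth slices.  Purely illustrative; a shader would do the same
 * math on normalized coordinates and blend adjacent slices by hand to get
 * linear filtering in Z.
 */
struct texel_coord {
	unsigned x, y;
};

static struct texel_coord
map_3d_to_2d(unsigned x, unsigned y, unsigned z, unsigned width)
{
	/* Slice z starts at x offset z * width. */
	struct texel_coord t = { .x = z * width + x, .y = y };

	return t;
}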