This week in vc4 (2016-09-19): firmware KMS, apitrace testing

While stuck on an airplane, I put together a repository of apitraces with confirmed-good images for driver output.  Combined with the piglit patches I pushed, this gives me regression testing of actual apps on vc4 (particularly relevant given that I'm working on optimizing one of those apps!)

The flight was to visit the Raspberry Pi Foundation, with the goal of getting something usable for their distro to switch to the open 3D stack.  There's still a giant pile of KMS work to do (HDMI audio, DSI power management, SDTV support, etc.), and waiting for all of that to be regression-free would take a long time.  The question is: what could we do that would get us 3D, even if KMS isn't ready?

So, I put together a quick branch to expose the firmware's display stack as the KMS display pipeline. It's a filthy hack, and loses us a lot of the important new features that the open stack was going to bring (changing video modes in X, vblank timestamping, power management), but it gets us much closer to the featureset of the previous stack.  Hopefully they'll be switching to it as the default in new installs soon.

While debugging here, Simon found that on his HDMI display the color ramps didn't quite match between the closed and open drivers.  After a bit of worrying about gamma ramp behavior, I figured out that the monitor was actually using a CEA mode that requires limited-range RGB input.  A patch is now on the list.
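
For reference, "limited range" (sometimes called "studio swing") means the usual 0-255 full-swing RGB gets compressed so that black is 16 and white is 235.  Here's a minimal sketch of the scaling involved, assuming a simple per-channel conversion (the real fix programs this into the display hardware rather than doing per-pixel math on the CPU):

    #include <stdint.h>

    /* Full-range to limited-range RGB for CEA modes: 0-255 -> 16-235. */
    static uint8_t full_to_limited(uint8_t c)
    {
        return 16 + (c * 219 + 127) / 255; /* scale and round */
    }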

This week in vc4 (2016-09-13): glmark2 performance work

I spent last week working on the glmark2 performance issues.  I now have a NIR patch out for the pathological conditionals test (it's now faster than on the old driver), and a branch for job shuffling (+17% and +27% on the two desktop tests).

Here's the basic idea of job shuffling:

We're a tiled renderer, and tiled renderers get their wins from having a clear at the start of the frame (indicating we don't need to load any previous contents into the tile buffer).  When the frame is done, we flush each tile out to memory.  If you do your clear, start rendering some primitives, and then switch to some other FBO (because you're rendering to a texture that you plan to texture from in your next draw to the main FBO), we have to flush out all of those tiles, render to the new FBO and flush its tiles, and then when you come back to the main FBO we have to reload your old cleared-and-a-few-draws tiles.

Job shuffling deals with this by separating the single GL command stream into separate jobs per FBO.  When you switch to your temporary FBO, we don't flush the old job, we just set it aside.  To make this work we have to add tracking for which buffers have jobs writing into them (so that if you try to read those from another job, we can go flush the job that wrote it), and which buffers have jobs reading from them (so that if you try to write to them, they can get flushed so that they don't get incorrectly updated contents).
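
In code terms, the bookkeeping looks roughly like this.  It's a simplified sketch with made-up names (fixed-size read list and all), not the driver's actual structures:

    struct vc4_job;

    /* Per-resource tracking for job shuffling (illustrative names). */
    struct vc4_resource {
        struct vc4_job *pending_write;    /* job writing this buffer, if any */
        struct vc4_job *pending_reads[8]; /* jobs reading this buffer */
        int num_pending_reads;
    };

    void vc4_flush_job(struct vc4_job *job); /* submits the job's bin/render lists */

    /* Called when a job is about to read from a resource. */
    static void job_add_read(struct vc4_job *job, struct vc4_resource *rsc)
    {
        /* If another job is still writing this buffer, flush it so we
         * read its finished contents.
         */
        if (rsc->pending_write && rsc->pending_write != job)
            vc4_flush_job(rsc->pending_write);

        if (rsc->num_pending_reads < 8)
            rsc->pending_reads[rsc->num_pending_reads++] = job;
    }

    /* Called when a job is about to write to a resource. */
    static void job_add_write(struct vc4_job *job, struct vc4_resource *rsc)
    {
        /* Flush any readers so they don't pick up incorrectly updated
         * contents, and any previous writer so writes stay ordered.
         */
        for (int i = 0; i < rsc->num_pending_reads; i++)
            if (rsc->pending_reads[i] != job)
                vc4_flush_job(rsc->pending_reads[i]);
        rsc->num_pending_reads = 0;

        if (rsc->pending_write && rsc->pending_write != job)
            vc4_flush_job(rsc->pending_write);
        rsc->pending_write = job;
    }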

This felt like it should have been harder than it was, and there's a spot where I'm using a really bad data structure I had lying around, but that data structure has been bad news since the driver was imported and hasn't shown up high in any profiles yet.  The other tests don't seem to have any problem with the possible increased CPU overhead.

The shuffling branch also unearthed a few bugs related to clearing and blitting in the multisample tests.  Some of the piglit cases involved are fixed, but some will be reporting new piglit "regressions" because the tests are now closer to working correctly (sigh, reftests).

I also started writing documentation on updating the system's X and Mesa stack on Raspbian for one of the Foundation's developers.  It's not polished, and if I were rewriting it I would use modular's build.sh instead of some of what I did there.  But it's there for reference.

This week in vc4 (2016-09-06): glmark2, X testing, kernel maintaining

Last week I was tasked with making performance comparisons between vc4 and the closed driver possible.  I decided to take glmark2 and port it to dispmanx, and submitted a pull request upstream.  It already worked on X11 on the vc4 driver, and I fixed the drm backend to work as well (though the drm backend has performance problems specific to the glmark2 code).

Looking at glmark2, vc4 has a few bugs.  Terrain has some rendering bugs.  The driver on master took a big performance hit on one of the conditionals tests since the loops support was added, because NIR isn't aggressive enough in flattening if statements.  Some of the tests require that we shuffle rendering jobs to avoid extra frame store/loads.  Finally, we need to use the multithreaded fragment shader mode to hide texture fetching latency on a bunch of tests.  Despite the bugs, results looked good.

(Note: I won't be posting comparisons here.  Comparisons will be left up to the reader on their particular hardware and software stacks).

I'm expecting to get to do some glamor work on vc4 again soon, so I spent some of the time while I was waiting for Raspberry Pi builds working on the X Server's testing infrastructure.  I've previously talked about Travis CI, but for it to be really useful it needs to run our integration tests.  I fixed up the piglit XTS test wrapper to not spuriously fail, made the X Test suite spuriously fail less, and worked with Adam Jackson at Red Hat to fix build issues in XTS.  Finally, I wrote scripts that will, if you have an XTS tree, a piglit tree, and a built Xvfb, actually run the XTS rendering tests at xserver make check time.

Next steps for xserver testing are to test glamor in a similar fashion, and to integrate this testing into travis-ci and land the travis-ci branch.

Finally, I submitted pull requests to the upstream kernel.  4.8 got some fixes for VC4 3D (merged by Dave), and 4.9 got interlaced vblank timing patches from Mario Kleiner (not yet merged by Dave) and Raspberry Pi Zero support (merged by Florian).

This week in vc4 (2016-08-29): derivatives, GPU hangs, testing, kernel maintaining

I spent a day or so last week cleaning up @jonasarrow's demo patch for derivatives on vc4.  It had been hanging around on the github issue waiting for a rework due to feedback, and I decided to finally just go and do it.  It unfortunately involved totally rewriting their patches (which I dislike doing, it's always more awesome to have the original submitter get credit), but we now have dFdx()/dFdy() on Mesa master.

I also landed a fix for GPU hangs with 16 vertex attributes (4 or more vec4s, aka glsl-routing in piglit).  I'd been debugging this one for a while, and finally came up with an idea ("what if this FIFO here is a bad idea to use and we should be synchronous with this external unit?"); it worked, and a hardware developer confirmed that the fix was correct.  This one got a huge explanation comment.  I also fixed discards inside of if/loop statements -- generally discards get lowered out of ifs, but if one is in a non-unrolled loop we were executing the discard without checking whether the channel was still active in the loop.

Thanks to review from Rhys, I landed Mesa's Travis build fixes.  Rhys then used Travis to test out a couple of fixes to i915 and r600.  This is pretty cool, but it just makes me really want to get piglit into Travis so that we can get some actual integration testing in this process.

I got xserver's Travis to the point of running the unit tests, and one of them crashes on CI but not locally.  That's interesting.

The last GPU hang I have in piglit is in glsl-vs-loops.  This week I figured out what's going on, and I hope I'll be able to write about a fix next week.

Finally, I landed Stefan Wahren's Raspberry Pi Zero devicetree for upstream.  If nothing goes wrong, the Zero should be supported in 4.9.

vc4 status update for 2016-08-22: camera, NIR, testing

Last week I finally plugged in the camera module I got a while ago to go take a look at what vc4 needs for displaying camera output.

The surprising answer was "nothing."  vc4 could successfully import RGB dmabufs and display them as planes, even though I had been expecting to need fixes on that front.

However, the bcm2835 v4l camera driver needs a lot of work.  First of all, it doesn't use the proper contiguous memory support in v4l (vb2-dma-contig), and instead asks the firmware to copy from the firmware's contiguous memory into vmalloced kernel memory.  This wastes memory and memory bandwidth, and doesn't give us dma-buf support.

Even more, MMAL (the v4l equivalent that the firmware exposes for driving the hardware) wants to output planar buffers with specific padding.  However, instead of using the multi-plane format support in v4l to expose buffers with that padding, the bcm2835 driver asks the firmware to do another copy from the firmware's planar layout into the old no-padding V4L planar format.

As a user of the V4L API, you're also in trouble because none of these formats carry any priority information that I can see: the camera driver says it's equally happy to give you RGB or planar, even though RGB costs an extra copy.  I think properly done today, the camera driver would expose multi-plane planar YUV, and give you a mem2mem adapter that could use MMAL calls to turn the planar YUV into RGB.
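
For reference, here's roughly what asking for that multi-plane planar YUV looks like from the client side, using the standard V4L2 multi-planar API (this is generic V4L2 usage, not something the bcm2835 driver supports today):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Request non-contiguous planar YUV420 from a capture device, so
     * each plane can carry its own stride/padding as MMAL wants.
     */
    static int request_planar_yuv(int fd, unsigned width, unsigned height)
    {
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        fmt.fmt.pix_mp.width = width;
        fmt.fmt.pix_mp.height = height;
        fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_YUV420M;

        return ioctl(fd, VIDIOC_S_FMT, &fmt);
    }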

For now, I've updated the bug report with links to the demo code and instructions.

I also spent a little bit of time last week finishing off the series to use st/nir in vc4.  I managed to get to no regressions, and landed it today.  It doesn't eliminate TGSI, but it does mean TGSI is gone from the normal GLSL path.

Finally, I got inspired to do some work on testing.  I've been doing some free-time work on Servo, Mozilla's Rust-based web browser, and their development environment has been a delight as a new developer.  All patch submissions, from core developers or from newbies, go through github pull requests.  When you generate a PR, Travis builds and runs the unit tests on the PR.  Then a core developer reviews the code by adding an "r" comment in the PR or provides feedback.  Once it's reviewed, a bot picks up the pull request, tries merging it to master, then runs the full integration test suite on it.  If the test suite passes, the bot merges it to master; otherwise the bot writes a comment with a link to the build/test logs.

Compare this to Mesa's development process.  You make a patch.  You file it in the issue tracker and it gets utterly ignored.  You complain, and someone tells you you got the process wrong, so you join the mailing list and send your patch (and then get a flood of email until you unsubscribe).  It gets mangled by your email client, and you get told to use git-send-email, so you screw around with that for a while before you get an email that will actually show up in people's inboxes.  Then someone reviews it (hopefully) before it scrolls off the end of their inbox, and then it doesn't get committed anyway because your name was familiar enough that the reviewer thought maybe you had commit access.  Or they do land your patch, and it turns out you hadn't run the integration tests, and then people complain at you for not testing.

So, as a first step toward making a process like Mozilla's possible, I put some time into fixing up Travis on Mesa, and building Travis support for the X Server.  If I can get Travis to run piglit and ensure that expected-pass tests don't regress, that at least gives us a documentable path for new developers in these two projects to put their code up on github and get automated testing of the branches they're proposing on the mailing lists.

vc4 status update for 2016-08-15: DSI panel, Raspbian updates, and docs

Last week I mostly worked on getting the upstream work I and others have done into downstream Raspbian (most of that time unfortunately in setting up another Raspbian development environment, after yet another SD card failed).

However, the most exciting thing for most users is that with the merge of the rpi-4.4.y-dsi-stub-squash branch, the DSI display should now come up by default with the open source driver.  This is unfortunately not a fully upstreamable DSI driver, because the closed-source firmware is getting in the way of Linux by stealing our interrupts and then talking to the hardware behind our backs.  To work around the firmware, I never talk to the DSI hardware, and we just replace the HVS display plane configuration on the DSI's output pipe.  This means your display backlight is always on and the DSI link is always running, but better that than no display.

I also transferred the wiki I had made for VC4 over to github.  In doing so, I was pleasantly surprised at how much documentation I wanted to write once I got off of the awful wiki software at freedesktop.  You can find more information on VC4 at my mesa and linux trees.

(Side note, wikis on github are interesting.  When you make your fork, you inherit the wiki of whoever you fork from, and you can do PRs back to their wiki similarly to how you would for the main repo.  So my linux tree has Raspberry Pi's wiki too, and I'm wondering if I want to move all of my wiki over to their tree.  I'm not sure.)

Is there anything that people think should be documented for the vc4 project that isn't there?

vc4 status update for 2016-08-08: cutting memory usage

Last week's project for vc4 was to take a look at memory usage.  Eben had expressed concern that the new driver stack would use more memory than the closed stack, and so I figured I would spend a little while working on that.

I first pulled out valgrind's massif tool on piglit's glsl-algebraic-add-add-1.shader_test.  This works as a minimum "how much memory does it take to render *anything* with this driver?" test.  We were consuming 1605KB of heap at the peak, and there were some obvious fixes to be made.

First, the gallium state_tracker was allocating 659KB of space at context creation so that it could bytecode-interpret TGSI if needed for glRasterPos() and glRenderMode(GL_FEEDBACK).  Given that nobody should ever use those features (and luckily they rarely do), I delayed the allocation of the somewhat misleadingly-named "draw" context until the fallbacks are needed.
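
The pattern is plain lazy allocation.  A sketch, with names approximating the state tracker's (don't take them as exact):

    /* Defer creating the TGSI-interpreting "draw" context until a
     * glRasterPos()/GL_FEEDBACK fallback actually needs it, instead of
     * paying the ~659KB at context creation.
     */
    static struct draw_context *
    st_get_draw_context(struct st_context *st)
    {
        if (!st->draw)
            st->draw = draw_create(st->pipe);
        return st->draw;
    }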

Second, Mesa was allocating the memory for the GL 1.x matrix stacks up front at context creation.  We advertise 32 matrices for modelview/projection, 10 per texture unit (32 of those), and 4 for programs.  I instead implemented a typical doubling array reallocation scheme for storing the matrices, so that only the top matrix per stack is allocated at context creation.  This saved 63KB of dirty memory per context.
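
Here's the shape of that scheme as a self-contained sketch (Mesa's actual matrix-stack code differs in the details):

    #include <stdlib.h>
    #include <string.h>

    struct matrix_stack {
        float (*stack)[16]; /* growable array of 4x4 matrices */
        unsigned depth;     /* index of the current top matrix */
        unsigned size;      /* entries currently allocated */
        unsigned max_depth; /* depth advertised to the GL app */
    };

    static int matrix_stack_push(struct matrix_stack *s)
    {
        if (s->depth + 1 >= s->max_depth)
            return -1; /* GL_STACK_OVERFLOW */

        if (s->depth + 1 >= s->size) {
            /* Double the storage on demand instead of preallocating
             * max_depth entries at context creation.
             */
            float (*grown)[16] = realloc(s->stack,
                                         s->size * 2 * sizeof(*s->stack));
            if (!grown)
                return -1;
            s->stack = grown;
            s->size *= 2;
        }

        /* glPushMatrix() duplicates the current top matrix. */
        memcpy(s->stack[s->depth + 1], s->stack[s->depth], sizeof(*s->stack));
        s->depth++;
        return 0;
    }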

722KB for these two fixes may not seem like a whole lot of memory to readers on fancy desktop hardware with 8GB of RAM, but the Raspberry Pi has only 1GB of RAM, and when you exhaust that you're swapping to an SD card.  You should also expect a desktop to have several GL contexts created: the X Server uses one to do its rendering, you have a GL-based compositor with its own context, and your web browser and LibreOffice may each have one or more.  Additionally, trying to complete our piglit testsuite on the Raspberry Pi is currently taking me 6.5 hours (when it even succeeds and doesn't see piglit's python runner get shot by the OOM killer), so I could use any help I can get in reducing context initialization time.

However, malloc()-based memory isn't all that's involved.  The GPU buffer objects that get allocated don't get counted by massif in my analysis above.  To try to approximately fix this, I added in valgrind macro calls to mark the mmap()ed space in a buffer object as being a malloc-like operation until the point that the BO is freed.  This doesn't get at allocations for things like the window-system renderbuffers or the kernel's overflow BO (valgrind requires that you have a pointer involved to report it to massif), but it does help.
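
The client requests for this are Valgrind's MALLOCLIKE/FREELIKE macros.  Roughly (with an illustrative BO type, not vc4's actual one):

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <valgrind/valgrind.h>

    struct bo { void *map; size_t size; }; /* illustrative */

    static void bo_map(struct bo *bo, int fd, off_t offset)
    {
        bo->map = mmap(NULL, bo->size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, offset);
        /* Tell massif to account for this mapping as if it were a malloc(). */
        VALGRIND_MALLOCLIKE_BLOCK(bo->map, bo->size, 0 /* rzB */, 0 /* is_zeroed */);
    }

    static void bo_unmap(struct bo *bo)
    {
        VALGRIND_FREELIKE_BLOCK(bo->map, 0 /* rzB */);
        munmap(bo->map, bo->size);
    }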

Once I had massif reporting more, I noticed that glmark2 -b terrain was allocating a *lot* of memory for shader BOs.  Going through them, an obvious problem was that we were generating a lot of shaders for glGenerateMipmap().  A few weeks ago I improved performance on the benchmark by fixing glGenerateMipmap()'s fallback blits, which we were doing because vc4 doesn't support the GL_TEXTURE_BASE_LEVEL that the gallium aux code uses.  I had fixed the fallback by making the shader do an explicit-LOD lookup of the base level if GL_TEXTURE_BASE_LEVEL==GL_TEXTURE_MAX_LEVEL.  However, in the process I made the shader depend on that base level, so we would compile a new shader variant per level of the texture.  The fix was to make the base level a uniform value that's uploaded per draw call, and with that change I dropped 572 shader variants from my shader-db results.
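
The general pattern, sketched with made-up struct names: anything in the compile key costs one shader variant per distinct value, while a uniform costs only a small per-draw upload.

    /* Before: base level baked into the compile key, so
     * glGenerateMipmap() compiled a fresh variant per level. */
    struct tex_shader_key {
        unsigned wrap_s, wrap_t;
        unsigned base_level; /* any change forces a recompile */
    };

    /* After: the explicit-LOD fetch reads the level from a uniform, so
     * one compiled variant serves every level. */
    struct tex_draw_uniforms {
        float base_level; /* uploaded with the rest of the per-draw state */
    };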

Reducing extra shaders was fun, so I set off on another project I had thought of before.  VC4's vertex shader to fragment shader IO system is a bit unusual in that it's just a FIFO of floats (effectively), with none of these silly "vec4"s that permeate GLSL.  Since I can take my inputs in any order, and more flexibility in the FS means avoiding register allocation failures sometimes, I have the FS compiler tell the VS what order it would like its inputs in.  However, the list of all the inputs in their arbitrary orders would be expensive to hash at draw time, so I had just been using the identity of the compiled fragment shader variant in the VS and CS's key to decide when to recompile it in case output order changed.  The trick was that, while the set of all possible orders is huge, the number that any particular application will use is quite small.  I take the FS's input order array, keep it in a set, and use the pointer to the data in the set as the key.  This cut 712 shaders from shader-db.
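
The interning itself is tiny.  A sketch using Mesa's util set (assuming the set was created with hash/equals callbacks over the array contents; copy_fs_inputs() is a hypothetical helper):

    #include "util/set.h"

    /* Return a canonical pointer for this input-order array: hash the
     * array once at FS compile time, then compare VS/CS keys by this
     * stable pointer at draw time.
     */
    static const struct fs_inputs *
    intern_fs_inputs(struct set *fs_inputs_set, const struct fs_inputs *inputs)
    {
        struct set_entry *entry = _mesa_set_search(fs_inputs_set, inputs);

        if (!entry)
            entry = _mesa_set_add(fs_inputs_set, copy_fs_inputs(inputs));

        return entry->key;
    }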

Also, embarrassingly: remember when I mentioned tracking the FS in the CS's key above?  Coordinate shaders don't output anything to the fragment shader.  Like the name says, they just generate coordinates, which get consumed by the binner.  So, by removing the FS from the CS key, I trivially cut 754 shaders from shader-db.  Between the two changes, piglit's gl-1.0-blend-func test now passes instead of OOMing, so we get test coverage on blending.

Relatedly, while working on fixing a kernel oops recently, I had noticed that we were still reallocating the overflow BO on every draw call.  This was old debug code from when I was first figuring out how overflow worked.  Since each client can have up to 5 outstanding jobs (limited by Mesa) and each job was allocating a 256KB BO, we could be saving a megabyte or so per client, assuming they weren't using much of their overflow (likely true for the X Server).  The solution, now that I understand the overflow system better, was just to not reallocate and let the new job fill out the previous overflow area.
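
The kernel-side fix is about as simple as it sounds.  A sketch with illustrative names, not the actual vc4 kernel code:

    /* Allocate the binner overflow BO once; later jobs keep filling the
     * unused remainder instead of getting a fresh 256KB BO per draw. */
    static int vc4_ensure_overflow_bo(struct vc4_dev *vc4)
    {
        if (vc4->overflow_bo)
            return 0; /* let the new job fill out the previous area */

        vc4->overflow_bo = vc4_bo_alloc(vc4, 256 * 1024);
        return vc4->overflow_bo ? 0 : -ENOMEM;
    }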

Other projects for the week that I won't expand on here: debugging a GPU hang in piglit's glsl-routing (generated fixes for the vc4-gpu-tools parser, tried writing a GFXH30 workaround patch, still not fixed) and working on supporting direct GLSL IR to NIR translation (lots of cleanups, a couple of fixes, patches on the Mesa list).

VC4/RPi3 status update

It's been a busy month.  I spent most of it working on the Raspberry Pi 3 support so I could have a working branch for upstream day 1.  That involved cleaning up the SDHOST driver for submission, cleaning up pinctrl DT, writing an I2C GPIO expander driver, debugging the I2C controller, fixing HDMI hotplug handling, debugging EMMC (not quite done!), scraping together some wireless firmware, and a bunch of work trying to get BT working on the UART.  I'm happy to say that on day 1 I published a branch that worked the same as an RPi2, and by the end of the day I had wireless working.  Some of the patches are now out for review, and I'll be working on cleaning up the rest in the near future.

For VC4, my big push recently has been to support some sort of panel.  Panels are really big with Raspberry Pi users, and it's the primary complaint I hear about the open driver.  The official 7" DSI touchscreen seems like the most promising device to support, since it doesn't hog all your GPIOs (letting me use my serial console) and it's quite popular.

Unfortunately, DSI isn't going well.  The DSI0 peripheral is responding to me, but while I can read from DSI1, it won't respond to any writes.  DSI1 is, unfortunately, the one that the RPi exposes on its DSI connector.  (Note: this debugging is with the panel up and running from the firmware's boot configuration.)  Debug code is at drm-vc4-dsi-boot.

So, since DSI1's not cooperating, I switched tasks.  I had also picked up a DPI panel using the Adafruit Kippah, and a little SPI-driven panel.  I hadn't started with DPI because hogging all the GPIOs makes kernel debugging a mostly black-box experience.  The upside is that DPI is crazy simple -- set the GPIO muxes to output from DPI, set one register in DPI, and use the same pixelvalve setup as before.  I was surprised when, 2 days in, I got display output.  Here it is, running HDMI and DPI at the same time:

[photo: the DPI panel and an HDMI monitor running simultaneously]

Expect patches soon on a mailing list near you.  Until then, the code is at drm-vc4-dpi-boot.

vc4 status update 2015-12-14

Big news for the VC4 project today:

commit 21de54b3c4d08d2b20e80876c6def0b421dfec2e
Merge: 870a171 2146136
Author: Dave Airlie
Date: Tue Dec 15 10:43:27 2015 +1000

Merge tag 'drm-vc4-next-2015-12-11' of http://github.com/anholt/linux into drm-next


This is the last step for getting the VC4 driver upstream: Dave's pulled my code for inclusion in kernel 4.5 (probably to be released around mid-March).  The ABI is now stable, so I'm working on getting that ABI usage into the Mesa 11.1 release.  Hopefully I'll land that in the next couple of days.

As far as using it out of the box, we're not there yet.  I've been getting my code included in some builds for the Raspberry Pi Foundation developers.  They've been working on switching to kernel 4.2, and their tree has VC4 support up to the previous ABI.  Once the Mesa 11.1 merge happens, I'll ask them to pull the new kernel ABI and rebuild userspace using Mesa 11.1-rc4.  Hopefully this then appears in the next Raspbian Jessie build they produce.  Until that release happens, there are instructions for the development environment on the DRI wiki, and I'd recommend trying out the continuous integration builds linked from there.

The Raspberry Pi folks aren't ready to swap everyone over to the vc4 driver quite yet, though.  They want to make sure we don't regress functionality, obviously, and there are some big chunks of work left to do: HDMI audio support, video overlays, and integration of the vc4 driver with the camera and video decode support come to mind.  And then there's the matter of making sure that 3D performance doesn't suffer.  That's a bit hard to test, since only a few apps work with the existing GLES2 support, while the vc4 driver gives GLX, EGL-on-X11, EGL-on-gbm, most of GL2.1, and all of GLES2, but doesn't support the EGL-on-Dispmanx interface that the previous driver used.  So, until then, they're going to have a devicetree overlay that determines whether the firmware sets itself up for Linux using the vc4 driver or the closed driver.

Part of what's taken so long to get to this point has been trying to get my dependencies merged to the kernel.  To turn on V3D, I need to turn on power, which means a power domain driver.  Turning on the power required talking to the firmware, which required resurrecting an old patchset for talking to the firmware, which got bikeshedded harder than I've ever had happen to my code before.  Programming video modes required a clock driver.  Every step of the way is misery to get the code merged, and I would give up a lot to never work on the Linux kernel again.

Until then, though, I've become a Raspberry Pi kernel maintainer, so that I can ack other people's patches and help shepherd them into the kernel.  Hopefully for 4.5 I can get the aux clock driver bikeshedding dealt with and resubmit it, at which point people can use UART1 and SPI1/2.  I have a third rework to do of my power domain driver so that, if we're lucky, we can get it merged and actually turn on the 3D core (and manage power of many other devices, too!).  Martin Sperl is doing a major rewrite of the SPI driver (an area I know basically nothing about), and his recent patch split may deal with the subsystem maintainer's concerns.  I want to pull in feedback and merge Lubomir's thermal driver.  There's also a cpufreq driver (for actually doing the overclocking you can set with config.txt) from Lubomir, whose feedback I expect to be harder to deal with.

So, while I've been quiet on the blogging front, there's been a lot going on for vc4, and it's in pretty good shape now.  Hopefully more folks can give it a try as it becomes more upstreamed and accessible.

VC4 driver status update

I've just spent another week hanging out with my Broadcom and Raspberry Pi teammates, and it's unblocked a lot of my work.

Notably, I learned some unstated rules about how loading and storing from the tilebuffer work, which has significantly improved stability on the Pi (as opposed to simulation, which only had assertions covering about half of these rules).

I got an intro on the debug process for GPU hangs, which ultimately just looks like "run it through simpenrose (the simulator) directly. If that doesn't catch the problem, you capture a .CLIF file of all the buffers involved and feed it into RTL simulation, at which point you can confirm for yourself that yes, it's hanging, and then you hand it to somebody who understands the RTL and they tell you what the deal is." There's also the opportunity to use JTAG to look at the GPU's perspective of memory, which might be useful for some classes of problems. I've started on .CLIF generation (currently simulation-environment-only), but I've got some bugs in my generated files because I'm using packets that the .CLIF generator wasn't prepared for.

I got an overview of the cache hierarchy, which pointed out that I wasn't flushing the ARM dcache to get my writes out into system L2 (more like an L3) so that the GPU could see it. This should also improve stability, since before we were only getting lucky that the GPU would actually see our command stream.

Most importantly, I ended up fixing a mistake in my attempt at reset using the mailbox commands, and now I've got working reset. Testing cycles for GPU hangs have dropped from about 5 minutes to 2-30 seconds. Between working reset and improved stability from loads/stores, we're at the point that X is almost stable. I can now run piglit on actual hardware! (it takes hours, though)

On the X front, the modesetting driver is now merged to the X Server with glamor-based X rendering acceleration. It also happens to support DRI3 buffer passing, but not Present's pageflipping/vblank synchronization. I've submitted a patch series for DRI2 support with vblank synchronization (again, no pageflipping), which will get us more complete GLX extension support, including things like GLX_INTEL_swap_event that gnome-shell really wants.

In other news, I've been talking to a developer at Raspberry Pi who's building the KMS support. Combined with the discussions with keithp and ajax last week about compositing inside the X Server, I think we've got a pretty solid plan for what we want our display stack to look like, so that we can get GL swaps and video presentation into HVS planes, and avoid copies on our very bandwidth-limited hardware. Baby steps first, though -- he's still working on putting giant piles of clock management code into the kernel module so we can even turn on the GPU and displays on our own without using the firmware blob.

Testing status:
- 93.8% passrate on piglit on simulation
- 86.3% passrate on piglit gpu.py on Raspberry Pi

All those opcodes I mentioned in the previous post are now completed -- sadly, I didn't get people up to speed fast enough for them to contribute while those projects were the biggest things holding back the passrate.  I've started a page at http://dri.freedesktop.org/wiki/VC4/ for documenting the setup process and status.

And now, next steps. Now that I've got GPU reset, a high priority is switching to interrupt-based render job tracking and putting an actual command queue in the kernel so we can have multiple GPU jobs queued up by userland at the same time (the VC4 sadly has no ringbuffer like other GPUs have). Then I need to clean up user <-> kernel ABI so that I can start pushing my linux code upstream, and probably work on building userspace BO caching.