At work I’ve been iterating on the proof-of-concept depth of field work I’d done. One thing I wanted was easier control over authoring and visualizing a bokeh pattern.
I decided to write a quick web page that generates a bokeh pattern from a couple of user-supplied inputs and outputs both a visualization and the HLSL code for it. If anyone is interested in using it, the link is here:
Recently I was having trouble reproducing a bug the perf team was running into; it involved a specific camera position in our workload. I decided the simplest, most time-saving approach (for our purposes and theirs) was to have them copy the camera data to the clipboard and email it to me – then I could reproduce the position exactly.
The MSDN example code I found for copying data to the clipboard worked fine, but it was a little overblown for the simple case I needed. I’ve boiled it down to just a few lines, and I wanted to post it here in case anyone else wants to add this simple functionality to their application:
if ( !OpenClipboard(hWnd) )
    return;
EmptyClipboard();

const char *text = "Your clipboard text (or data)";
size_t text_len = strlen(text);

// Allocate a global memory object for the text.
HGLOBAL hglbCopy = GlobalAlloc(GMEM_MOVEABLE, (text_len + 1) * sizeof(char));

// Lock the handle and copy the text to the buffer.
char *lptstrCopy = (char *) GlobalLock( hglbCopy );
memcpy( lptstrCopy, text, text_len * sizeof(char) );
lptstrCopy[text_len] = '\0';
GlobalUnlock( hglbCopy );

// Place the handle on the clipboard.
SetClipboardData( CF_TEXT, hglbCopy );
CloseClipboard();
I’ve been doing some Vulkan work recently and I needed to lock the GPU frequency. Nvidia has posted some code to do it – but, strangely enough, it was just on a web page, not on GitHub. So I’ve added it to a repo and included a compiled executable. Just run it as a separate process and it’ll lock your GPU frequency for any workloads you’re profiling.
After some digging, reading the RFC, etc., I decided to write my own as a test – it’s a very simple test – but it works and establishes a connection. I posted it on GitHub just in case someone else might be curious…it could save that person a couple of hours of header, SHA, and endian-swapping headaches.
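The endian-swapping headaches mentioned above are typical of network protocols that put multi-byte fields on the wire in big-endian order. As a small illustration (my own helpers, not code from the post), here is a portable way to read and write big-endian fields in C that works regardless of the host’s endianness:

```c
#include <stdint.h>

/* Write a 16-bit value in big-endian (network) byte order. */
static void put_be16(uint8_t *p, uint16_t v)
{
    p[0] = (uint8_t)(v >> 8);
    p[1] = (uint8_t)(v & 0xff);
}

/* Read a 16-bit big-endian value back out. */
static uint16_t get_be16(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

/* Same idea for the 64-bit lengths some protocols use. */
static void put_be64(uint8_t *p, uint64_t v)
{
    for (int i = 0; i < 8; ++i)
        p[i] = (uint8_t)(v >> (56 - 8 * i));
}

static uint64_t get_be64(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; ++i)
        v = (v << 8) | p[i];
    return v;
}
```

Shifting bytes explicitly like this sidesteps the usual htons/htonl portability questions entirely.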
My colleague and I wrote a blog post on Dynamic Resolution Rendering with DX12; specifically, it’s an implementation that uses DX12 placed resources for render-target resizing instead of scaling the viewport.
I wanted to take a stab at a depth of field technique – overall I’m happy with the results, but I do have some areas I’d like to improve. This is a simple, straightforward technique which runs at ~0.4 ms on a 1080 Ti at 1080p resolution. You can read about it here.
Hemisphere sampling and cosine-weighted hemisphere sampling are once again in the news because of the buzz around ray tracing. This post describes what they are and walks through the algorithms step by step so you understand them rather than just copying and pasting code (which I’m sure none of us ever do!).
I’ve decided to take the plunge and try to build my own gaming console. The 6502 is special to me, so that’s the heart of it – but I’m using some ATmegas for more modern stuff: Serial I/O, Internet?, Video out, etc.
Over the past two years, in my spare time, I’ve been helping to push out an indie game. Mostly I’ve focused on framework mechanics: checkpoint system, save-game system, gameplay-triggering system, etc. You can find it on early release here: