In March 1968, Communications of the ACM published a letter to the editor from the legendary Edsger Dijkstra entitled "Go To Statement Considered Harmful." Dijkstra opens the letter by questioning whether "go to" statements should even be present in higher-level programming languages. Later, he writes: "The go to [sic] statement as it stands is just too primitive; it is too much an invitation to make a mess of one's program."
With this memorable salvo, early in a decades-long discussion of best practices in software engineering, Dijkstra relegated the goto statement to the basement – and rightly so, with a caveat we'll discuss in a moment. Dijkstra's criticisms of goto are well-founded. Others have extended these criticisms to break and continue statements, pointing out that they, too, are goto statements.
Despite this controversy, which is reinforced in computer science academia, I still write code that uses goto in a very structured, specific manner for error handling.
My first introduction to this coding pattern was more than 30 years ago, at my first job, where I was working on industrial machine vision software for the Apple Macintosh. A contractor who was amazing at writing UI code for the Mac had to write a lot of code that allocated resources and cleanly handled failures of those allocations. The challenge with such code is that if a failure occurs, the resources that have been allocated up to the point of the failure must be freed to avoid leaking them. He wrote code that would goto a label at the end of the function, where he'd clean up if a failure had occurred.
At the time, I challenged him on the cleanliness of the code, on the same ideological basis as "Go To Statement Considered Harmful." He asked me to counterpropose a cleaner structure; I tried and failed, and that was the end of the discussion. (Rick, if you're out there, know that you made an impression on a young programmer that day! -Ed.)
Let's look at the kind of function where this style of error handling might be warranted. We'll start with a pattern that I call "cascading ifs," where you continue allocating resources as long as the allocations succeed:
// allocate two buffers, return true on success
bool
allocateTwoBuffers(
    void **bufferA, size_t nA,
    void **bufferB, size_t nB )
{
    *bufferA = malloc( nA );
    if ( *bufferA ) {
        *bufferB = malloc( nB );
        if ( *bufferB ) {
            return true;
        }
        free( *bufferA );
    }
    return false;
}
The flow of this function is reasonably easy to follow. It’s not hard to find sample code that follows the same pattern.
HRESULT MainWindow::CreateGraphicsResources()
{
    HRESULT hr = S_OK;
    if (pRenderTarget == NULL)
    {
        RECT rc;
        GetClientRect(m_hwnd, &rc);

        D2D1_SIZE_U size = D2D1::SizeU(rc.right, rc.bottom);

        hr = pFactory->CreateHwndRenderTarget(
            D2D1::RenderTargetProperties(),
            D2D1::HwndRenderTargetProperties(m_hwnd, size),
            &pRenderTarget);

        if (SUCCEEDED(hr))
        {
            const D2D1_COLOR_F color = D2D1::ColorF(1.0f, 1.0f, 0);
            hr = pRenderTarget->CreateSolidColorBrush(color, &pBrush);

            if (SUCCEEDED(hr))
            {
                CalculateLayout();
            }
        }
    }
    return hr;
}
These functions have something in common: they allocate resources on behalf of the caller and do not release them before returning. Other functions allocate ephemeral resources and release them before returning; in such cases, the release must appear on both code paths (success and failure).
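To make the ephemeral case concrete, here is a small, contrived sketch (duplicateReversed is an invented example, not code from any project mentioned here). The scratch buffer is released on both the success path and the failure path; only the output buffer survives the call:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// Contrived sketch: the scratch buffer is ephemeral, so it must be
// freed on BOTH paths; only the output buffer is handed to the caller.
bool
duplicateReversed( const unsigned char *src, size_t n,
                   unsigned char **dst )
{
    unsigned char *scratch = malloc( n );      // ephemeral resource
    if ( scratch ) {
        for ( size_t i = 0; i < n; i++ )
            scratch[i] = src[n - 1 - i];
        unsigned char *out = malloc( n );      // resource for the caller
        if ( out ) {
            memcpy( out, scratch, n );
            free( scratch );                   // released on success...
            *dst = out;
            return true;
        }
        free( scratch );                       // ...and on failure
    }
    return false;
}

Note how free( scratch ) has to be written once per code path; consolidating that kind of duplicated cleanup is exactly what the goto-based structure below is good at.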
Setting the ephemeral case aside, let's consider what happens if we need to allocate a third resource:
// allocate three buffers, return true on success
bool
allocateThreeBuffers(
    void **bufferA, size_t nA,
    void **bufferB, size_t nB,
    void **bufferC, size_t nC )
{
    *bufferA = malloc( nA );
    if ( *bufferA ) {
        *bufferB = malloc( nB );
        if ( *bufferB ) {
            *bufferC = malloc( nC );
            if ( *bufferC ) {
                return true;
            }
            free( *bufferB );
        }
        free( *bufferA );
    }
    return false;
}
The indentation and error handling are both disrupted by the new allocation.
Now let's look at a goto-based formulation of the same function:
// allocate two buffers, return true on success
bool
allocateTwoBuffers(
    void **bufferA, size_t nA,
    void **bufferB, size_t nB )
{
    void *pA = NULL;
    void *pB = NULL;

    pA = malloc( nA );
    if ( ! pA ) goto Error;
    pB = malloc( nB );
    if ( ! pB ) goto Error;

    *bufferA = pA;
    *bufferB = pB;
    return true;
Error:
    free( pB );
    free( pA );
    return false;
}
Now, this function has a few attributes that I look for: it does not pass back the allocated resources until all of them have been secured; it initializes pA and pB to guaranteed-invalid values (NULL in this case), so the cleanup code runs correctly no matter where the failure occurred; and it is cognizant of, and exploits, the fact that free(NULL) is valid and defined to be a no-op. The use of goto is carefully structured and brings the function very close to the ideal of having a single return point. In this case (a function allocating resources on behalf of its caller), I think trying to unify the success and failure code paths would only make the code more complicated.
Now let’s see what happens when we decide to allocate a third buffer:
// allocate three buffers, return true on success
bool
allocateThreeBuffers(
    void **bufferA, size_t nA,
    void **bufferB, size_t nB,
    void **bufferC, size_t nC )
{
    void *pA = NULL;
    void *pB = NULL;
    void *pC = NULL;

    pA = malloc( nA );
    if ( ! pA ) goto Error;
    pB = malloc( nB );
    if ( ! pB ) goto Error;
    pC = malloc( nC );
    if ( ! pC ) goto Error;

    *bufferA = pA;
    *bufferB = pB;
    *bufferC = pC;
    return true;
Error:
    free( pC );
    free( pB );
    free( pA );
    return false;
}
Minimal disruption. That’s what I am looking for. We’ll never be able to write bug-free code, but we should be able to write code where the bugs are easy to fix, with minimal changes and a minimal regression risk.
As an aside, I should say that memory, CUDA streams, and CUDA events are not the only kinds of resources acquired by code that benefits from this idiom. Opening a file, or acquiring a mutex or other thread synchronization primitive, also creates a condition where the function must clean up before returning, by closing the file or releasing the mutex.
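To make that concrete, here is a hypothetical sketch (appendRecord and its parameters are invented for illustration) that applies the same goto-based cleanup to a file and a mutex. A locked mutex has no guaranteed-invalid value the way a NULL pointer does, so a boolean tracks whether the lock was acquired:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

// Hypothetical sketch: goto-based cleanup applied to a file and a mutex.
bool
appendRecord( pthread_mutex_t *lock, const char *path,
              const void *record, size_t n )
{
    bool locked = false;
    FILE *fp = NULL;

    if ( pthread_mutex_lock( lock ) != 0 ) goto Error;
    locked = true;

    fp = fopen( path, "ab" );
    if ( ! fp ) goto Error;

    if ( fwrite( record, 1, n, fp ) != n ) goto Error;

    fclose( fp );
    pthread_mutex_unlock( lock );
    return true;
Error:
    if ( fp ) fclose( fp );                  // close only if the open succeeded
    if ( locked ) pthread_mutex_unlock( lock ); // unlock only if we hold the lock
    return false;
}

Here both resources are ephemeral, so the cleanup runs on the success path as well as the failure path; the goto keeps the failure-path cleanup in one place.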
The goto-based paradigm described here is used in the Linux kernel, though those usages tend to be more bespoke, with a cleanup label per resource allocated. There is a section of the kernel.org Web site that explains this coding style.
At NVIDIA, we also used this idiom in writing CUDA's driver API, in part because we were using C: in 2005, we didn't think a C++-based code base would be portable to all the toolchains and operating systems we wanted to target. But as the code base developed, I was surprised at how little I missed C++. It would surprise me a bit if they haven't refactored the code base to pick up some choice C++ features, but I'm guessing that work was done with intention and restraint.
I plan to do another video explaining the error handling idioms used in The CUDA Handbook, which build on the material presented here. In the meantime, I feel obliged to mention that this design pattern, though suitable in many contexts, may not be the best fit for every application. Exception handling may be a better option for some. I tend to write low-level, user mode code that traffics in a lot of resource allocation, sometimes allocating resources ephemerally and sometimes allocating them on behalf of the caller. But this idiom can be used in combination with more modern language constructs. In C++, I've even had occasion to use lambdas to reuse goto-based resource allocation code, looping over the needed resources, then cleaning up and propagating the failure if any allocation fails.
So although your mileage may vary, it's worth knowing that in 2025, there are still people who are partial to goto-based error handling. Instead of "goto statement considered harmful," we might say the goto statement can be considered "occasionally useful."