Stockpile Stewardship: (Almost) 20 Years On – A Recap

Oct 21, 2011

 

 
By Eli Jacobs
 
Yesterday the American Association for the Advancement of Science hosted an event about the Stockpile Stewardship Program (SSP). Victor Reis, the former head of the SSP, gave the keynote presentation on the history of the program, and Daryl Kimball, Everet Beckner, and David Overskei participated in a panel discussion of Reis’s presentation. The discussion proved extremely interesting and occasionally contentious. It left little doubt that Stockpile Stewardship will prove crucial to the continued success of the American nuclear deterrent.
 
Victor Reis’s presentation highlighted two major themes: the SSP’s historical emergence from the Clinton administration’s pursuit of the Comprehensive Test Ban Treaty (CTBT), and the program’s subsequent success. After the United States’ last nuclear test in 1992, the vitality of the nuclear labs became crucial to the success of the deterrent. Confidence in the SSP model of “validated simulation” was instrumental in Clinton’s decision to pursue a total ban on nuclear testing in CTBT negotiations rather than a ban with, for example, a 1 kiloton limit. A great deal of politics and personal persuasion went into producing this conclusion – including the organization of a StratCom conference and a JASON report on the issue.
 
Reis identified three factors that have been crucial to the SSP’s success. First, the government owned the problem and established a quantitative approach that was not beholden to private industry. Second, all the stakeholders bought into the program – the President, the labs, and private industry, which achieved substantial spin-off benefits from the computing work done to design experiments. Third, funding was consistently strong, largely because of the support of Senator Domenici.
 
Finally, Reis gave a brief history of the origin of the National Nuclear Security Administration (NNSA) – the organization in charge of Stockpile Stewardship. After the Cox Report (which accused China of significant nuclear espionage) was released to the public in the late 90s, the NNSA was created in response to security concerns as a semi-autonomous agency, separating the nuclear weapons labs from the rest of the Department of Energy.
 
Daryl Kimball commented that the SSP needs to focus on key priorities in order to retain effectiveness and keep political support. The program is a historical success (we have more knowledge now than we did during the era of testing) and it currently has plenty of resources, but its tasks will come under increasing scrutiny given budgetary pressures. Reevaluating major construction projects and maintaining design discipline – sticking with old capabilities rather than developing new ones in Life Extension Programs (LEPs) – will be critical to preserving political support for core SSP functions.
 
Everet Beckner identified two dangerous trends in the SSP: the increasingly active regulatory role of the Defense Nuclear Facilities Safety Board (DNFSB) and the growth of contracting within the program, in which the government pays more to encourage competition and in return requires contractors to take greater responsibility for their work. The consequence is increased risk aversion – among the labs, to changing procedures in ways that might run afoul of the DNFSB, and among private industry, to creating new technologies that might not work. This drives significant cost increases, as the NNSA tends to manage risk by throwing money at problems.
 
David Overskei emphasized the need for the nuclear weapons complex to remain adaptable and flexible in order to respond to future threats. Although the scientific knowledge cultivated by the SSP is strong, other elements of the complex, such as manufacturing, are extremely backward-looking. This approach of recreating the past is also evident in NNSA’s planned new facilities, which are heavy infrastructure that cannot easily be re-tasked. Instead, the SSP should focus on demonstrating the effectiveness of all weapons components, which includes designing and building new systems. We need to be prepared to respond to the nuclear equivalent of an improvised explosive device (IED) – a new nuclear threat that catches us off guard.
 
Two lines of audience questions were particularly interesting. The first concerned what constitutes a new weapon. Overskei proposed the analogy of an old gun with new bullets; if that is not a new weapon, it is unclear why an old delivery vehicle fitted with a new warhead is. Beckner pointed out that even basic replication uses today’s electronics, which results in substantial improvements to components such as guidance systems. Finally, Kimball pointed out that modifications that deviate too significantly from the initial design may produce uncertainty about the effectiveness of the device.
 
The second question was about the complicated role of the DNFSB. Both Overskei and Beckner argued that the DNFSB is too oppositional and regulatory. It makes decisions without any written guidelines and does not offer proactive advice about how to correct problems – a deviation from past practice that has terrified the labs. Kimball, on the other hand, argued that a watchdog role is important to guard against problems such as poor environmental management.
 
I had two lingering questions after the event. The first concerned the discrepancy between Kimball’s emphasis on “design discipline” and Overskei’s call for the pursuit of new weapons capabilities. There’s undoubtedly a political angle here – does a focus on core competencies make funding the SSP more attractive to Congress? I’m interested, however, in the scientific question: does developing new capabilities that we cannot test reduce confidence in our nuclear weapons? Kimball thinks so; he argues that “confidence in the reliability of U.S. nuclear stockpile could erode if warhead designs are changed to those not validated by past nuclear testing.”
 
This argument seems specious to me. The SSP’s validated simulation is designed to promote confidence in the U.S. nuclear arsenal in the absence of testing. If the labs are confident that this process works, it’s unclear why the formal absence of a test should meaningfully affect their certainty that new warheads would work. Indeed, this position may be more rhetorical than scientific; given the broad consensus that the purpose of nuclear weapons is to never be used, arguments such as these risk needlessly introducing dangerous uncertainty into ally and adversary threat assessments. It’s one thing to suggest that new weapons research may seem superfluous and jeopardize NNSA funding; it’s entirely another to argue that the confidence of labs in new warhead designs is misplaced just because these weapons have not been tested.
 
Further, experiments aimed at refining and improving current weapons capability may be essential for the labs to retain the ability to perform core tasks of replication. As Dr. Reis wrote in a letter to then-DOE Secretary Hazel O’Leary, working to ensure the efficacy of our nuclear weapons without the benefit of tests is a “technical challenge worthy of our best minds.” If the labs’ exclusive task is replicating old weapons – consistent with Kimball’s calls for “design discipline” – insufficiently stimulated scientists and engineers may decide to take their talents elsewhere. Without the appeal of new, experimental research, the base of human capital necessary to perform even the most basic tasks may erode.
 
My second question concerns the “nuclear IEDs” that Overskei thinks we may need to respond to in the future. Namely, what form might these future threats take? It is evident that potential adversaries are turning to hardened, deeply buried facilities and mobile missiles to ensure a nuclear second-strike capability. An inability to respond to these threats would reduce U.S. freedom of action, and new weapons designs – such as earth-penetrating warheads – would be helpful in combating them. However, the necessary modifications should be possible without the creation of new warheads. If the goal is to exercise all parts of the nuclear enterprise, the proper (and more legal) response is not new warheads but a greater emphasis within the SSP on engineering issues such as warhead assembly and delivery vehicles.
 
In any case, Stockpile Stewardship plays a crucial role in safeguarding our nuclear future. Questions about its organizational structure, ambitions, and priorities will dictate the direction of our nuclear weapons complex.
 
Eli Jacobs is a research intern for the Project on Nuclear Issues. The views expressed above are his own and do not necessarily reflect those of the Center for Strategic and International Studies or the Project on Nuclear Issues.

 

"Design change discipline"

Eli:

Thanks for the post ... you say you are skeptical about the risks to warhead reliability that may occur due to the accumulation of small changes in the warhead life-extension programs. To clarify, the NNSA's LEP plans call for increasingly intrusive and relatively more expensive modifications designed to improve the safety, security and reliability of the warhead designs.

While some enhancements may be warranted, there is a risk.

For years, stockpile managers and designers preached design-change “discipline,” noting that an accumulation of unnecessary design and materials modifications could undermine confidence in reliably predicting the performance of the weapons without nuclear testing.

The NNSA’s own budget plan acknowledges that:

"As the stockpile continues to change due to aging and through the inclusion of modernization features for the enhanced safety and security, the validity of the calibrated simulations decreases, raising the uncertainty and need for predictive capability. Increased computational capability and confidence in the validity of comprehensive science-based theoretical and numerical models will allow assessments of weapons performance in situations that were not directly tested."

The NNSA and the Congress need to review the current program of LEPs to ensure that the growing enthusiasm for, and costs associated with, extensively modifying all warheads do not get out of hand.

Marginal improvements in weapons surety and safety should not come at the expense of long-term weapon reliability.

"Predictive capability"

Hey Daryl,

Thanks so much for your comment! I'm honestly a bit shocked that you discovered my post (I typically try to inform the people I write about, but I didn't this time - probably because it was Friday - so my apologies for that) and appalled that it took so long to get your comment approved. We're working on fixing the delay.

As far as your comment goes, I am no nuclear scientist and can't pass judgment on the scientific claims that underlie our disagreement. But I think I understand the logic of your position: minor design modifications introduce small uncertainties in the function of a particular warhead component; compounded over time and across a number of changes, these uncertainties could ultimately amount to fairly significant doubt about the viability of our nuclear arsenal.

Your quote, though, indicates a requirement for "predictive capability" in the absence of testing. It goes on to conclude that "computational capability and confidence in the validity of comprehensive science-based theoretical and numerical models will allow assessments of weapons performance in situations that were not directly tested." In other words, our computers and simulations will soon allow us to generate such predictive capability, significantly reducing the uncertainty introduced by each successive modification.

In brief, what makes you think that these new processes - in which the labs have full confidence - will be ineffective at eliminating uncertainty?