The obvious question raised by forecasts is, “How accurate are the results?”
TechCast has been using this method for 20 years on a variety of projects, and analyses of these results show that the variation among forecasts averages +3/-1 years at 10 years out. Some forecasts vary more widely because they are controversial, while others show little variance because they are well established. We have also recorded the “arrivals” of several forecasts, all roughly within this likely error band of +3/-1 years. These results are especially compelling given that the expert panel changed over this period, as did technology prospects and other general conditions.
The field of “Knowledge Management” (KM) offers a useful perspective for understanding the rationale underlying this methodology. From a KM view, the TechCast approach can be understood as a “learning system” conducted by a “community of practice” to “continually improve” results. This process of gathering background information, organizing it into a coherent analysis, surveying experts, and using the results to improve the system allows the experts to continually learn and thereby approach a “best possible forecast” based on a “scientific consensus.”
Some contend that methods relying on expert judgment are subjective, whereas quantitative methods are more precise. Yet quantitative methods also involve large amounts of uncertainty because of the underlying assumptions that must be made. The TechCast approach subsumes quantitative forecasts into the analyses provided to experts, and then allows their considered judgment to resolve the uncertainty that remains.
This consensus can be in error, of course. But it represents a synthesis of the best available background information and authoritative knowledge to produce the best possible answer to a tough question. Individual experts may have their own biases, naturally, but these are usually normally distributed, washing out in the aggregate results.
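The statistical intuition behind biases “washing out” can be illustrated with a minimal simulation. The sketch below is not TechCast’s actual procedure; it simply assumes a hypothetical panel whose individual forecast errors are drawn from a normal distribution around a hypothetical true arrival year, and shows that the panel mean lands much closer to the truth than any typical individual.

```python
import random
import statistics

random.seed(42)

TRUE_YEAR = 2030   # hypothetical "true" arrival year of some technology
N_EXPERTS = 100    # hypothetical panel size
BIAS_SD = 3.0      # assume individual biases are Normal(0, 3 years)

# Each expert's forecast is the true year plus a normally
# distributed personal bias.
forecasts = [random.gauss(TRUE_YEAR, BIAS_SD) for _ in range(N_EXPERTS)]

group_mean = statistics.mean(forecasts)
individual_spread = statistics.stdev(forecasts)

# The error of the group mean shrinks roughly as BIAS_SD / sqrt(N_EXPERTS),
# so the aggregate is far more accurate than a typical individual forecast.
print(f"group mean forecast:    {group_mean:.2f}")
print(f"individual spread (SD): {individual_spread:.2f}")
print(f"aggregate error:        {abs(group_mean - TRUE_YEAR):.2f}")
```

Under these assumptions, a panel of 100 cuts the expected error of the aggregate to about a tenth of an individual expert’s spread; if biases are instead systematic (shared by the whole panel), they do not cancel, which is why the background analyses matter.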
Overall, if the present uncertainty is defined as 100%, we have found through experience that this process reduces uncertainty to about 20-30%. Some think of the outcome as “good enough to get a decision-maker into the right ballpark.” As noted above, results can become even more accurate by using expert comments to improve the background analyses and by tracking forecasts over time.