The word vaporware refers to software that is announced and hyped but never actually delivered as a working product. There has always been and will always be vaporware, and I am glad we have a word for it.
There is a similar but distinct phenomenon, especially common in the AI community: building systems that may work robustly but that are relevant only insofar as they showcase purportedly "general" algorithms. Some such algorithms are truly general and work out of the box on a variety of new problems; others fail catastrophically on the next problem they are tried on unless they are extensively tweaked.
I have coined a new word for this latter case: vaporithms. A vaporithm is an algorithm that may work extremely well on some problems, and may even be a core component of real, functioning software systems, but that is nonetheless vastly less general than claimed.
I encourage everyone in the AI community to ask themselves once in a while: would my algorithm really work on somebody else's similar problem, or is it finicky and extensively tuned for the small subset of problems I have considered so far?