I'm not sure it's a good idea to separate out sample rate and bit depth. Typically, there are two things that go on -- I call them oversampling and upsampling. Oversampling is what has been going on for years: they increase the sample rate, put zeroes in for the soon-to-be-interpolated sample slots that don't exist yet, then run it through a digital filter and let that hash it all out, usually at the DAC level. Upsampling (a word some people use to describe oversampling, but which I consider fundamentally different) is raising both the sample rate and the bit depth at the same time (e.g. from 16/44.1 to 24/96) and interpolating the values in between. There are myriad ways to do this, and the higher-end companies (e.g. Wadia, Pacific Microsonics, et al.) have proprietary algorithms that are considered by some to be better than others.
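To make the distinction concrete, here's a rough Python sketch of the two operations as I've described them. The zero-stuffing function is the standard first step of oversampling; the upsampling function uses plain linear interpolation plus a 16-to-24-bit scale factor purely as a stand-in -- the proprietary algorithms from the likes of Wadia and Pacific Microsonics do something far more sophisticated, and the function names here are my own invention.

```python
def zero_stuff(samples, factor):
    """Oversampling, step one: keep each original sample and insert
    (factor - 1) zeroes after it. A digital filter deals with the
    zeroes afterwards."""
    out = []
    for s in samples:
        out.append(s)
        out.extend([0] * (factor - 1))
    return out


def upsample_linear(samples, factor, scale=256):
    """A crude stand-in for upsampling: raise the sample rate by
    interpolating explicit in-between values, and raise the bit
    depth by scaling (256 = 2**8 takes 16-bit values into the
    24-bit range)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for i in range(factor):
            v = a + (b - a) * i / factor   # linear interpolation
            out.append(round(v * scale))
    out.append(samples[-1] * scale)        # keep the final sample
    return out
```

Run the two on the same input and the difference is obvious: zero-stuffing leaves literal zeroes in the new slots, while the upsampler writes interpolated (and rescaled) values into them directly.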
If these "people", when they talk about the "chip", are referring to the DAC, then not only are the in-betweeny bits not random, they're zero (which strikes me as counterintuitive)... that is, until they're run through a digital filter, which does the interpolation indirectly (typically, the in-chip DSP algorithm is just a smoothing algorithm, not an explicit interpolation algorithm). But that's not upsampling (in the sense that I use it), that's oversampling.
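Here's a small sketch of that point: the stuffed samples really are zero until the filter runs, and the filter fills them in as a side effect of smoothing. I'm using a three-tap triangular kernel because it's the simplest filter that happens to reduce to linear interpolation on a 2x zero-stuffed signal -- real DAC filters are far longer and better behaved, so treat this as an illustration, not a model of any actual chip.

```python
# A 2x zero-stuffed signal: original samples 10, 20, 30,
# with literal zeroes in the newly created slots.
stuffed = [10, 0, 20, 0, 30, 0]

# Three-tap triangular lowpass. For a 2x stuffed signal,
# smoothing with this kernel is linear interpolation in disguise.
taps = [0.5, 1.0, 0.5]

def fir_filter(signal, taps):
    """Plain FIR convolution, zero-padded at the edges so the
    output has the same length as the input."""
    half = len(taps) // 2
    padded = [0] * half + signal + [0] * half
    return [sum(padded[n + k] * taps[k] for k in range(len(taps)))
            for n in range(len(signal))]

smoothed = fir_filter(stuffed, taps)
# The zero slots come out as the midpoints 15 and 25 -- the filter
# "interpolated" even though it was only ever smoothing.
```

So the in-betweeny bits start life as zeroes and only look interpolated after the smoothing filter has been over them, which is exactly why I call this oversampling rather than upsampling.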