Thanks, Jasem and Eric. More than likely I am doing something wrong, but I end up with a lower-resolution image (half the size in each dimension) when I record on my OSC ZWO ASI1600MCPro in 16 bit (a file size of ~32.8 MB) than when I record in 24-bit RGB (~49.2 MB). These numbers obviously arise from the number of pixels on the sensor (16 MP) multiplied by the number of bytes required to encode each of them (2 bytes for 16 bit, 3 bytes for 24 bit).

My understanding of why this is the case is the Bayer pattern on the sensor, which splits the color information over a 2x2 group of pixels; essentially, the color matrix results in a 2x2 binning of the information. In RGB output, this information is computationally interpolated: the value of every individual pixel (whether it has an R, G, or B filter on top) on all three channels is "guessed" from the intensities of its nearest neighbors and the known spectral curves of each filter. The information in the resulting output file is therefore effectively tripled, i.e. a 16-bit image (~65,000 ADU) from a 16 MP sensor would swell from ~32.8 MB to 3 x 32.8 MB, i.e. ~98 MB, if the color information were encoded in 16 bits instead of 8 bits.
As Jasem writes, I did not think the RGB conversion algorithm would support this. I had intuitively reasoned that, because the interpolation is by definition not exact but an approximation, calculating an interpolated value to 1 part in ~65,000 (16 bit) instead of settling for 1 in 256 (8 bit) was overkill. But the other day in our astrophotography club I was "ridiculed" for this. One of the members showed his images, obtained with an ASI294-MCPro, which has an ~11.7 MP sensor. He reportedly gets a ~70 MB FITS file, which by the same calculation I laid out above for the larger sensor of the ASI1600-MCPro works out to 11.7 M x 2 bytes (16 bit) x 3 (one each for R, G, and B) = 70.2 MB. Unfortunately, he did not remember which capture software he was using. However, since he correctly stated the file size that would result from a 16-bit capture with an OSC, I suppose it is possible?
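For what it's worth, the file-size arithmetic for both cameras can be sanity-checked in a few lines of Python (I am using the approximate advertised megapixel counts and decimal megabytes, and ignoring FITS headers and compression; `file_size_mb` is just a helper name I made up):

```python
def file_size_mb(megapixels, bytes_per_sample, channels=1):
    """Approximate uncompressed image data size in (decimal) MB."""
    return megapixels * 1e6 * bytes_per_sample * channels / 1e6

# ASI1600MC Pro, ~16.4 MP Bayered sensor (4656 x 3520):
print(file_size_mb(16.4, 2))      # 16-bit Bayer mosaic: ~32.8 MB
print(file_size_mb(16.4, 1, 3))   # 8 bits x 3 channels (24-bit RGB): ~49.2 MB
print(file_size_mb(16.4, 2, 3))   # 16 bits x 3 channels: ~98 MB

# ASI294MC Pro, ~11.7 MP sensor:
print(file_size_mb(11.7, 2, 3))   # 16-bit RGB: ~70.2 MB, matching his file
```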
PS: I am aware that all the information the OSC sensor can capture is contained within the ~32.8 MB 16-bit FITS file, and any software should be able to do the same interpolation later on that 16-bit file. However, when I use Deep Sky Stacker, I end up with an effectively 2x-binned image (~2300 x 1700 pixels instead of ~4600 x 3500 pixels). I have not tried this in PixInsight yet, but for practical reasons it would be great if I could feed a 16-bit color image directly into DSS (effectively what happens when I take the picture with my DSLR in RAW - I am not losing resolution).
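To illustrate why debayering does not have to cost resolution, here is a toy NumPy sketch of the two approaches, assuming an RGGB pattern and even image dimensions (the function names are mine; this is not what DSS actually does internally, and real software uses smarter interpolation such as bilinear, VNG, or AHD). The "superpixel" variant is the kind of debayer that would produce the half-size image I am seeing:

```python
import numpy as np

def demosaic_superpixel(raw):
    """'Superpixel' debayer: each 2x2 RGGB cell becomes ONE RGB pixel,
    halving the resolution in each dimension."""
    r  = raw[0::2, 0::2]                      # red samples
    g1 = raw[0::2, 1::2]                      # first green sample per cell
    g2 = raw[1::2, 0::2]                      # second green sample per cell
    b  = raw[1::2, 1::2]                      # blue samples
    g  = (g1.astype(np.uint32) + g2) // 2     # average the two greens
    return np.dstack([r, g.astype(raw.dtype), b])

def demosaic_nearest(raw):
    """Toy full-resolution demosaic: replicate each cell's R, G, B values
    across the whole 2x2 cell instead of shrinking the image."""
    half = demosaic_superpixel(raw)
    return np.repeat(np.repeat(half, 2, axis=0), 2, axis=1)

# An ASI1600-sized synthetic frame (3520 rows x 4656 columns, 16 bit):
raw = np.random.randint(0, 65536, (3520, 4656), dtype=np.uint16)
print(demosaic_superpixel(raw).shape)  # (1760, 2328, 3): 2x binned
print(demosaic_nearest(raw).shape)     # (3520, 4656, 3): full resolution
```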