Astrophotography Image Processing

Warren Keller’s Tutorials, which are excellent:

A compendium of lots of tutorials and information for PixInsight:

Madratter’s PixInsight Tutorial:

Harry’s Astro Shed:

Rogelio Bernal Andreo’s tutorials:

Light Vortex Astronomy:

The PixInsight team’s own Tutorials:

Tutorials on the PixInsight Forum:

The Astro Imaging Channel (a mix of image processing and other tutorials related to astrophotography):

Adam Block’s Video Tutorials on PixInsight:


Stellarvue Telescopes and Accessories

Oceanside Photo and Telescope (OPT) sells all sorts of equipment related to the astronomy hobby.

Scope Stuff sells lots of accessories.  It is based near Austin, TX, but has no storefront.

Astronomics, similar to OPT, sells all kinds of equipment related to astronomy.

Astro-Physics makes high-end telescopes and mounts.


Other Astrophotographers’ Pages

Jerry Gardner is an astrophotographer friend from Cloudy Nights.  He’s created some fantastic images and has done some of the comparison shots for Sky-Watcher that you may have seen in Sky and Telescope and Astronomy Magazine.

11 thoughts on “Links”

  1. Hi David

    I hope you don’t mind if I write to you separately about a post of yours on CN: http://www.cloudynig…stem/?p=6646438

    The procedure to add a small offset to a master flat you describe in it has helped me solve a nagging issue I’ve had with my red (and sometimes Ha) filter with my CCD images – so firstly, thank you!!!

    If you’ll allow me, I’d like to understand the expression better, since, although it seems to solve my issue for one filter, applying it to other filters (which seem to calibrate fine using the standard routines) results in under-calibration of those frames.

    I have posted in the PxI forum about this:…hp?topic=9515.0 with a useful suggestion from JKMorse that unfortunately didn’t work for me.

    Many thanks


  2. Hi Roberto,

    To understand the expression I posted in that thread you need to understand what we are trying to do during calibration. To do that, let’s see what each image we capture is actually composed of:
    light frame = object signal * flat signal + bias signal + dark light signal
    flat frame = flat signal + bias signal + dark flat signal
    dark light frame = bias signal + dark light signal
    dark flat frame = bias signal + dark flat signal
    bias frame = bias signal

    Technically, each frame also contains a noise component from each signal, but I’ve left that out for simplicity. You can see that the light frame you capture contains the signal from the bias and dark current, and the object signal is modulated by the flat signal. We use multiplication between the object and flat signals because light is attenuated as it passes through the optical system. By object signal I’m referring to any light falling on the primary optical surface. That may include other unwanted signals, like light pollution, which we have to deal with outside of calibration. I refer to the bias and dark components as signals because they are fixed patterns. We generally associate them with noise, partly because they have a strong noise component, but also because they are unwanted signals.

    Ultimately, calibration is about taking the frames we can acquire and trying to get the object signal. So let’s re-write each of the equations for the signal components:
    object signal = (light frame – bias signal – dark light signal) / flat signal
    flat signal = flat frame – bias signal – dark flat signal
    dark light signal = dark light frame – bias signal
    dark flat signal = dark flat frame – bias signal
    bias signal = bias frame

    If we substitute those into the object signal equation we get:
    object signal = (light frame – bias frame – (dark light frame – bias frame)) / (flat frame – bias frame – (dark flat frame – bias frame))
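    As a quick sanity check (my own illustration, not anything from PixInsight itself), the substituted equation can be verified with a few synthetic numbers; all values here are made up:

    ```python
    # Synthetic "true" signals (arbitrary units, purely illustrative)
    object_signal = 0.5
    flat_signal = 0.8
    bias = 0.05
    dark_light = 0.02
    dark_flat = 0.01

    # Frames as the camera would record them, per the composition equations above
    light_frame = object_signal * flat_signal + bias + dark_light
    flat_frame = flat_signal + bias + dark_flat
    dark_light_frame = bias + dark_light
    dark_flat_frame = bias + dark_flat
    bias_frame = bias

    # The fully substituted calibration equation
    recovered = (light_frame - bias_frame - (dark_light_frame - bias_frame)) / \
                (flat_frame - bias_frame - (dark_flat_frame - bias_frame))
    print(recovered)  # ~0.5, the object signal
    ```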

    You can make some assumptions based on the characteristics of most sensors. For example, the dark signal will be very small for short exposures so you can simplify the expression. Let’s also assume that we have already done the calibration of the flat frame to get the flat signal. We are left with:
    object signal = (light frame – bias frame – (dark light frame – bias frame)) / flat signal

    Here’s the expression I used in that post, re-written in the same terms I’m using here (I used flat signal because I’m assuming the master flat has already been bias or dark subtracted):
    ((light frame – bias frame) – k*(dark light frame – bias frame)) * (mean(flat signal) + offset) / max(0.000002, flat signal + offset)

    You can probably already see the similarities between these two expressions. The max function is just a way to protect against divide-by-zero conditions, and ‘k’ is a dark scaling value allowing you to scale the darks for temperature or exposure length differences. The ‘(mean(flat signal) + offset)’ part of the expression is simply there to get the result back into a brightness range similar to what it was prior to calibration. Let’s drop those and see what the expression looks like:
    (light frame – bias frame – (dark light frame – bias frame)) / (flat signal + offset)
    You can see how close that is to our simplified object signal equation above:
    (light frame – bias frame – (dark light frame – bias frame)) / flat signal

    The only difference is the ‘offset’ variable. If the flat frames we have are not correctly calibrating our images then one of the potential culprits is that the bias or dark flat signal was not correct. For example, say we used bias frames only to calibrate the flats, then there may be an offset from the dark signal that is not accounted for. By adding a small offset back in we have approximated that missing signal in order to achieve a more accurate calibration.
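    To make the role of the offset concrete, here is a small NumPy sketch (my own illustration, not the post’s actual PixelMath) in which the master flat was miscalibrated so that the bias was removed twice; adding the missing signal back as an offset restores an exact calibration. The brightness-rescaling factor ‘(mean(flat signal) + offset)’ from the full expression is dropped for clarity:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "truth" (arbitrary units; all values illustrative)
    object_signal = rng.uniform(0.2, 0.8, (4, 4))
    flat_signal = rng.uniform(0.7, 1.0, (4, 4))  # vignetting-style attenuation
    bias = 0.05
    dark_flat = 0.01

    # Frames as the camera records them (short light exposure, so dark ~ 0)
    light_frame = object_signal * flat_signal + bias
    flat_frame = flat_signal + bias + dark_flat
    dark_flat_frame = bias + dark_flat

    # A miscalibrated master flat: a bias frame AND a dark flat frame (which
    # itself still contains the bias) were both subtracted, so the bias was
    # removed twice and the master flat is too small by 'bias'.
    master_flat = flat_frame - dark_flat_frame - bias  # = flat_signal - bias

    # Straight calibration leaves a residual flat pattern...
    plain = (light_frame - bias) / np.maximum(2e-6, master_flat)
    # ...while adding the missing signal back as an offset fixes it.
    offset = bias
    fixed = (light_frame - bias) / np.maximum(2e-6, master_flat + offset)

    print(np.allclose(fixed, object_signal))  # True: flat pattern fully removed
    print(np.allclose(plain, object_signal))  # False: residual pattern remains
    ```

    In practice the correct offset isn’t known in advance like this; it is found by trial, as in the forum thread.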

    Does this explanation help?


  3. David
    Your explanation is as clear as it gets and very helpful. From the formulae above, I believe my integrated red flat is at fault. That would be the only way to explain why this over-correction occurs in other filters.
    Thank you again!

  4. In your PixInsight image calibration notes (which are very well written and very helpful, thank you) can you please elaborate on the Superbias tool itself? Specifically, is there any reason to deviate from the defaults (Orientation: Columns, Multiscale layers: 7, all options on) with either DSLR or CCD equipment? I generally take 200 RAW frames (recently using the ASI1600MC-C) and still run a Superbias on the integrated result, using the PixInsight defaults; I think my results are OK, though some suggest that with 200 frames the Superbias may not be necessary. Also, do you save the integrated result masters as 16-, 32-, or 64-bit? (I use the 32-bit default.)

  5. Hey Steve,

    The Superbias tool isn’t really designed for CMOS sensors, although with a little bit of PixelMath it can be made to work (see this post on the PI forum). This is because CMOS sensors have both row and column patterns so you have to do the math to combine the horizontal and vertical components.

    I have stopped using Superbias, though. Generally I am able to take more than enough frames that the noise from the bias signal is insignificant relative to the signal in the light frame. I’d say 200 is more than enough not to worry about a Superbias. If you just had 10 to 25 frames I might consider using Superbias.
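    For anyone curious what combining the two orientations looks like, here is a rough NumPy sketch of the idea (my own illustration of the row-plus-column trick, not the actual forum PixelMath): model the column and row patterns separately, then add them and subtract the global level so the pedestal isn’t counted twice:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic master bias: a global pedestal plus separate column and row
    # fixed patterns, plus residual noise. Names and values are illustrative.
    h, w = 64, 64
    pedestal = 100.0
    col_pattern = rng.normal(0.0, 2.0, w)  # per-column fixed pattern
    row_pattern = rng.normal(0.0, 2.0, h)  # per-row fixed pattern
    true_pattern = pedestal + col_pattern[None, :] + row_pattern[:, None]
    master_bias = true_pattern + rng.normal(0.0, 0.2, (h, w))  # residual noise

    # Column-only model (roughly what a columns-oriented Superbias captures):
    cols_model = np.tile(np.mean(master_bias, axis=0), (h, 1))
    # Row-only model:
    rows_model = np.tile(np.mean(master_bias, axis=1)[:, None], (1, w))
    # Combine: add both and subtract the global mean so the pedestal
    # isn't counted twice.
    combined = cols_model + rows_model - np.mean(master_bias)

    # The combined model tracks the true pattern far better than either alone
    print(np.abs(combined - true_pattern).max())
    print(np.abs(cols_model - true_pattern).max())  # misses the row pattern
    ```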


  6. Hello David,

    I wanted to thank you for the very helpful tutorials you have on this site! I started AP about 2.5 years ago and then took a 20-month sabbatical from it while moving from the light-polluted skies of 6 million people to the pristine dark skies of southwest NM. I was finally able to capture some photons and had forgotten what little I had learned about processing with PI. Your basic tutorials helped turn a disaster into a fairly decent photograph. I’m looking forward to going through all of them. Thanks again!

    I don’t know if you ever found a remote dark site opportunity or not but if you’re interested in a possibility contact me at the email address I provided.

    Best regards,

  7. Hi Keith,

    I’m glad the tutorials have helped.

    I haven’t pursued a remote dark site mostly because my time for astronomy has been reduced by other commitments. I still get a little time to image and participate in the community but it is only a tiny fraction of what it used to be. Hopefully, that will change in the future and I’ll be able to participate more and might consider a remote site.


  8. Hi David,

    I ran across a thread on PixInsight from early November 2016. There you posted a link to your “dnaGeneratorPSF” script. In your comments you cautioned that it might not work with current and future PI versions (probably W10 updates also).

    I am running W10 (latest updates within the last month) and PixInsight version (latest).

    I downloaded the script and executed it in PI and applied it to an unstretched image. It successfully generated a PSF and presented the field of stars it had chosen. The PSF was used in the DECON process and everything appeared to work properly.

    I’m assuming that your script is intended to replace my having to manually select several stars and proceed to grade the selection in order to build a PSF. I think this is true and it does appear to work well. It also appears to select a lot more stars than I take time to do.

    Anyway, correct me if I’m wrong on any of the above points. The script saves me a lot of mouse clicks.


  9. Hi Mark,

    That is exactly the intent of the script. I use it quite frequently with deconvolution. Like you said, it saves you a lot of mouse clicks compared to the manual DynamicPSF method. I also had a secondary goal, which was to generate a more accurate star mask; however, I was never able to get good recognition of the faint stars without including other structures, so that part never really panned out.


  10. Hi Dave,

    I’d like to thank you for your tutorials. They have been of a great help as I have been learning to navigate Pixinsight.

    I would like to point out, though, that I think you made the process in “Star Halo Removal with PixInsight” a bit more complicated than it needs to be.

    I followed what you were doing by splitting the picture into R, G, and B components, but instead of trying to measure the “circle” of the halos (which often turn out to be elliptical) I used the luminance layer to create a mask that blocked out everything except the halo. Instead of transferring the measured lightness to a specific circle, I just created a new image with that lightness value across the entire image.

    With the mask in place on the individual channels, I set PixelMath to replace the target image and subtracted the newly created lightness-difference image; the mask prevents it from affecting other areas with the replace-target setting engaged.

    I followed up with CloneStamp to clean up any remaining artifacts, just as you did.

    It’s just a tweak on what you did, and in all honesty without seeing how you managed this in the first place I wouldn’t have had any idea where to start. Again, thank you very much for the tutorial.
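    The commenter’s mask-and-subtract variant can be sketched in a few lines of NumPy (purely illustrative values; in practice the lightness difference is measured from the image and the mask comes from the luminance layer):

    ```python
    import numpy as np

    # A synthetic 100x100 color channel with a ring-shaped halo around a star
    h, w = 100, 100
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - 50, xx - 50)

    channel = np.full((h, w), 0.30)   # flat background (e.g. the red channel)
    halo = (r > 15) & (r < 25)        # halo region; elliptical in real images
    channel[halo] += 0.10             # the unwanted halo brightening

    mask = halo.astype(float)         # 1 inside the halo, 0 elsewhere
    excess = np.full((h, w), 0.10)    # constant "lightness difference" image

    # PixelMath-style masked replace: (target - excess), applied only where
    # the mask is 1; everywhere else the original pixel passes through.
    corrected = channel * (1 - mask) + (channel - excess) * mask

    print(np.allclose(corrected, 0.30))  # True: halo removed, rest untouched
    ```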
