Satellites carry sensors that respond to light power.
Different sensors respond to different bands of light: red, green, blue, infrared, and so on.
LandSat: carries two different "cameras."
AVHRR (Advanced Very High Resolution Radiometer)
The spatial resolution of a sensor is the size of the smallest detectable feature.
LandSat: images have more detail, but the data sets are huge, and coverage repeats only every 18 days.
AVHRR: images can have daily coverage and are more reasonable for large projects, but can't detect small features.
If your sensor has a coarse spatial resolution, you run the risk of each pixel mixing so many different surface types that the measurement is hard to interpret!
A single band's measurements can be displayed as a grayscale image, where 0 is black and 255 is white.
To display several bands at once, we create "color composite" images.
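Both display modes can be sketched in a few lines; the 2x2 band arrays below are made-up values standing in for real measurements:

```python
import numpy as np

# Hypothetical band measurements (2x2 pixels), already scaled to 0-255.
red   = np.array([[ 30, 200], [ 90,  10]], dtype=np.uint8)
green = np.array([[ 40, 180], [200,  20]], dtype=np.uint8)
blue  = np.array([[ 50,  60], [ 80, 220]], dtype=np.uint8)

# One band on its own is a grayscale image: 0 = black, 255 = white.
grayscale = red

# A "color composite" assigns one band to each display channel (R, G, B).
composite = np.dstack([red, green, blue])   # shape (rows, cols, 3)
print(composite.shape)  # (2, 2, 3)
```

A "false color" composite works the same way: any three bands (e.g. NIR in the red channel) can be stacked, not just visible red/green/blue.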
Satellite images can be used to monitor environmental changes in a number of ways.
Healthy vegetation tends to absorb visible red light (AVHRR Ch 1) but to reflect near-infrared light (AVHRR Ch 2). This leads to NDVI:
**N**ormalized **D**ifference in **V**egetation **I**ndex
Definition. NDVI of a pixel = (NIR - Red) / (NIR + Red).
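The definition applies pixelwise, so it vectorizes directly over whole images. A minimal sketch with made-up reflectance values for the two AVHRR channels:

```python
import numpy as np

# Hypothetical AVHRR-style reflectances: Ch 1 ~ visible red, Ch 2 ~ NIR.
red = np.array([[0.08, 0.30], [0.05, 0.25]])   # healthy vegetation absorbs red
nir = np.array([[0.50, 0.32], [0.45, 0.26]])   # and reflects NIR strongly

# NDVI = (NIR - Red) / (NIR + Red); values always fall in [-1, 1].
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

High NDVI (the pixels where NIR far exceeds red) flags dense healthy vegetation; values near zero correspond to the spectrally "flat" pixels.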
This has been done globally at a 1 km scale with AVHRR data, and at a 30 m scale in the US with LandSat TM data. There are two ways to classify pixels: supervised and unsupervised.
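The supervised approach can be sketched as a minimum-distance-to-means classifier. Everything below is synthetic illustration: the band values, class names, and training samples are made up, not real data. (An unsupervised method such as k-means would instead cluster the pixels without any labeled training set.)

```python
import numpy as np

# Each pixel is a vector of band values (here: [red, nir]). All synthetic.
pixels = np.array([[0.30, 0.32],
                   [0.05, 0.45],
                   [0.28, 0.30],
                   [0.08, 0.50]])

# Supervised: prior knowledge supplies labeled training pixels per class,
# from which we compute each class's mean spectral signature.
training = {"soil":       np.array([[0.29, 0.31], [0.31, 0.33]]),
            "vegetation": np.array([[0.06, 0.48], [0.07, 0.44]])}
centroids = {name: samples.mean(axis=0) for name, samples in training.items()}

def classify(pixel):
    # Assign the class whose mean signature is nearest in spectral space.
    return min(centroids, key=lambda c: np.linalg.norm(pixel - centroids[c]))

labels = [classify(p) for p in pixels]
print(labels)  # ['soil', 'vegetation', 'soil', 'vegetation']
```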
Idea: use prior knowledge to help the computer recognize certain spectral signatures.
Harder than you'd think!
There are too many cloud types to reliably recognize every "cloud spectral signature." Besides, we can't run LCC on every pixel ever beamed down; we need fast run-time cloud-detection techniques.
We tend to use quick "threshold tests" on the channel values: if a pixel fails, say, 3 out of 5 tests, we say it's a cloud.
Cloud edges are hard to detect, so CLAVR (Clouds from AVHRR) is often run on 2x2 pixel blocks rather than single pixels.
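The voting scheme can be sketched as below. The five tests and their thresholds here are invented for illustration; they are not CLAVR's actual tests, and the 2x2 block values are synthetic.

```python
import numpy as np

def looks_cloudy(ch1, ch2, ch4_temp, needed=3):
    """Count how many cloud-indicating threshold tests fire for one sample.
    These particular tests and cutoffs are illustrative, not CLAVR's."""
    fired = [
        ch1 > 0.44,            # bright in the visible
        ch2 > 0.40,            # bright in the NIR
        ch4_temp < 265.0,      # cold thermal brightness temperature (K)
        ch2 / ch1 < 1.1,       # NIR/visible ratio near 1, unlike vegetation
        abs(ch1 - ch2) < 0.1,  # spectrally "flat" across Ch 1 / Ch 2
    ]
    return sum(fired) >= needed

# Cloud edges are hard, so decide per 2x2 block: average the block first.
ch1  = np.array([[0.50, 0.52], [0.49, 0.55]])   # one 2x2 block, all synthetic
ch2  = np.array([[0.48, 0.50], [0.47, 0.53]])
temp = np.array([[255., 252.], [258., 250.]])
print(looks_cloudy(ch1.mean(), ch2.mean(), temp.mean()))  # True
```

A vegetation-like sample (dark in the visible, bright in NIR, warm) trips only one of the five tests, so it falls below the 3-of-5 cutoff and is kept as clear.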