CS 5043: HW7: Semantic Labeling
Assignment notes:
- Deadline: Tuesday, April 25th @11:59pm.
- Hand-in procedure: submit a zip file to Gradescope
- Do not submit MSWord documents.
Data Set
The Chesapeake Watershed data set is derived from satellite imagery
over all of the US states that are part of the Chesapeake Bay
watershed system. We are using the patches part of the data
set. Each patch is a 256 x 256 image with 26 channels, in which each
pixel corresponds to a 1m x 1m area of space. Some of these
channels are visible light channels (RGB), while others encode surface
reflectivity at different frequencies. In addition, each pixel is
labeled as being one of:
- 0 = no class
- 1 = water
- 2 = tree canopy / forest
- 3 = low vegetation / field
- 4 = barren land
- 5 = impervious (other)
- 6 = impervious (road)
Here is an example of the RGB image of one patch and the corresponding pixel labels:
Data Organization
All of the data are located on the supercomputer in:
/home/fagg/datasets/radiant_earth/pa. Within this directory, there are both
train and valid directories. Each of these contain
directories F0 ... F9 (folds 0 to 9). Each training fold is composed
of 5000 patches. Because of the size of the folds, we have provided
code that produces a TF Dataset that dynamically loads the data as you
need it. We will draw our training and validation sets from the train
directory, and our testing set from the valid directory.
Local testing: the file chesapeake_small.zip
contains the data for folds 0 and 9 (it is 6GB compressed).
Data Access
chesapeake_loader.py is provided. The key function call is:
ds_train, ds_valid, ds_test, num_classes = create_datasets(
    base_dir='/home/fagg/datasets/radiant_earth/pa',
    fold=0,
    train_filt='*[012345678]',
    cache_dir=None,
    repeat_train=False,
    shuffle_train=None,
    batch_size=8,
    prefetch=2,
    num_parallel_calls=4)
where:
- ds_train, ds_valid, ds_test are TF Dataset objects that load and manage
your data
- num_classes is the number of classes that you are predicting
- base_dir is the main directory for the dataset
- fold is the fold to load (0 ... 9)
- train_filt is a filename filter (glob-style pattern) that specifies which
file numbers to include.
- '*0' will load all numbers ending with zero (500 examples).
- '*[01234]' will load all numbers ending with 0,1,2,3 or 4.
- '*' will load all 5000 examples.
- '*[012345678]' is the largest training set you should use
- cache_dir is the cache directory, if any ('' caches to RAM; use LSCRATCH on the supercomputer)
- repeat_train indicates whether to repeat the training set indefinitely
- shuffle_train is the size of the shuffle buffer for the training set
- batch_size is the size of the batches produced by your
dataset
- prefetch is the number of batches that will be buffered
- num_parallel_calls is the number of threads to use to
create the Dataset
The returned Datasets will generate batches of input/output tuples of the
specified batch size (a short usage sketch follows the list below):
- Inputs: floats: batch_size x 256 x 256 x 26
- Outputs: int8: batch_size x 256 x 256
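Here is a short usage sketch (assuming chesapeake_loader.py is importable from your working directory) that builds the Datasets with a small debugging filter and checks the shape of one training batch:

# Usage sketch: build the Datasets and inspect one batch.
# Assumes chesapeake_loader.py is importable from the working directory.
from chesapeake_loader import create_datasets

ds_train, ds_valid, ds_test, num_classes = create_datasets(
    base_dir='/home/fagg/datasets/radiant_earth/pa',
    fold=0,
    train_filt='*0',          # small training subset for debugging
    cache_dir=None,
    repeat_train=False,
    shuffle_train=None,
    batch_size=8,
    prefetch=2,
    num_parallel_calls=4)

# Each element is an (inputs, outputs) tuple
for ins, outs in ds_train.take(1):
    print(ins.shape, ins.dtype)    # (8, 256, 256, 26), float
    print(outs.shape, outs.dtype)  # (8, 256, 256), int8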
The Problem
Create an image-to-image translator that does semantic labeling of the
images.
Details:
- Your network output should be shape (examples, rows, cols,
number of classes), where the sum of all class outputs for a single pixel
is 1 (i.e., we are using a softmax across the last dimension of
your output).
- Use sparse_categorical_crossentropy as your loss function. This loss
properly translates between your one-output-per-class-per-pixel predictions
and the targets, which contain a single class label for each pixel (a
minimal model sketch follows this list).
- Use sparse_categorical_accuracy as an evaluation metric. Because of the
class imbalance, a model that always predicts the majority class will have
an accuracy of ~0.65.
- Try using a sequential-style model, as well as a full U-net model (with skip connections).
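As a rough starting point, here is a minimal sketch (not the required architecture; the layer sizes, activations, and optimizer are assumptions) of a tiny U-net-style model with a single skip connection and a per-pixel softmax output, compiled with the loss and metric above:

# Minimal sketch only: one down/up level with a single skip connection.
# Layer sizes, activations, and optimizer are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_tiny_unet(num_classes, input_shape=(256, 256, 26)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: one convolutional block, then downsample
    c1 = layers.Conv2D(32, 3, padding='same', activation='elu')(inputs)
    p1 = layers.MaxPooling2D(2)(c1)                  # 128 x 128

    # Bottleneck
    b = layers.Conv2D(64, 3, padding='same', activation='elu')(p1)

    # Decoder: upsample and concatenate the skip connection from c1
    u1 = layers.UpSampling2D(2)(b)                   # back to 256 x 256
    u1 = layers.Concatenate()([u1, c1])
    c2 = layers.Conv2D(32, 3, padding='same', activation='elu')(u1)

    # One output per class per pixel; softmax across the class dimension
    outputs = layers.Conv2D(num_classes, 1, activation='softmax')(c2)

    model = Model(inputs, outputs)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    return model

A sequential-style model can follow the same pattern without the pooling/upsampling and concatenation steps.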
Deep Learning Experiment
For each of the two model types (using what you think are the
best-performing hyper-parameters for each), perform 5 different experiments:
- Use '*[012345678]' for training (train partition). Note: when debugging, just use '*0'
The five different experiments will use folds F5 ... F9. There is no overlap between the folds.
Reporting
- Figures 1a,b: model architectures from plot_model() (one figure per model type)
- Figure 2: Validation accuracy as a function of training epoch.
Show 10 curves.
- Figures 3...7: for each model, evaluate using the test data set
and generate a confusion matrix (one confusion matrix per
rotation; a sketch appears after this list)
- Figure 8: scatter plot of test accuracy.
- Figures 9a,b: for both models, show three interesting examples (one per row).
Each row includes: Satellite image (channels 0,1,2); true
labels; predicted labels.
plt.imshow can be useful here, but make sure that the label-to-color
mapping is the same across all of the label images (a plotting sketch
appears after this list)
- Reflection
- How do the training times compare between the two model types?
- Describe the relative performance of the two model types.
- Describe any qualitative differences between the outputs of the two model types.
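For the confusion matrices (Figures 3-7), one possible approach (a sketch only; it assumes a trained Keras model named model and the ds_test / num_classes values returned by create_datasets()) is to accumulate tf.math.confusion_matrix() over the test batches:

# Sketch: accumulate a confusion matrix over every test batch.
# Assumes `model`, `ds_test`, and `num_classes` already exist.
import numpy as np
import tensorflow as tf

cm = np.zeros((num_classes, num_classes), dtype=np.int64)
for ins, outs in ds_test:
    preds = np.argmax(model.predict(ins, verbose=0), axis=-1)   # (batch, 256, 256)
    cm += tf.math.confusion_matrix(
        tf.reshape(tf.cast(outs, tf.int64), [-1]),   # true labels, flattened
        preds.reshape(-1),                           # predicted labels, flattened
        num_classes=num_classes).numpy()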
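For Figures 9a,b, a simple way to keep the label-to-color mapping consistent (again a sketch, assuming ins, outs, and preds come from one test batch as above, and that channels 0-2 are already scaled for display) is to fix vmin/vmax when calling plt.imshow:

# Sketch: one example row -- satellite RGB, true labels, predicted labels.
# Fixing vmin/vmax makes the label-to-color mapping identical in both label images.
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].imshow(ins[0, :, :, :3])     # assumes channels 0-2 are scaled to [0, 1]
axes[0].set_title('RGB')
axes[1].imshow(outs[0], vmin=0, vmax=num_classes - 1, cmap='tab10')
axes[1].set_title('True labels')
axes[2].imshow(preds[0], vmin=0, vmax=num_classes - 1, cmap='tab10')
axes[2].set_title('Predicted labels')
fig.savefig('example_row.png')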
What to Hand In
- Your python code (.py) and any notebook files (.ipynb)
- Figures 1-9
- Your reflection
Grades
- 20 pts: Clean, general code for model building (including in-code documentation)
- 5 pts: Figure 1a,b
- 10 pts: Figure 2
- 5 pts each: Figures 3-7
- 10 pts: Figure 8
- 10 pts: Figures 9a,b
- 10 pts: Reflection
- 10 pts: Reasonable test set performance for all rotations
Hints
- Start small. Get the architecture working before throwing lots
of data at it.
- Write generic code.
- Start early. Expect the learning process for these models to
be relatively long.
andrewhfagg -- gmail.com
Last modified: Thu Apr 13 23:39:16 2023