AWS Rekognition Improve / FineTune
After further reading, it seems we have two options that are feasible and easy to implement:
- Improve accuracy by training a Rekognition Custom Labels model on our own dataset
- Use the AWS Model Feedback solution
https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/improving-model.html
cost
- We can use the dataset provisioned by the VA, with the keywords as labels, to train an AWS Rekognition Custom Labels model. With this approach we pay for the Custom Labels API, which has two parts: training and inference.
Training is a one-time cost incurred up front; inference is what we pay for processing images with the custom model.
However, every time we add new keywords we have to retrain the model, so it is better to add keywords in batches. (A rough sketch of where each cost shows up in the API follows the pricing link below.)
https://aws.amazon.com/rekognition/pricing/
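To make the two billable parts concrete, here is a minimal boto3 sketch, not a definitive implementation: the project name, version name, buckets, and object keys are placeholders, and it assumes the project's train/test datasets already exist (see the dataset-import sketch further down). Per the pricing page above, training is billed per training hour, and inference is billed for the hours the model is kept running.

```python
# Sketch of the two billable phases of Rekognition Custom Labels.
# All project/version names, buckets, and keys below are placeholders.
import boto3

rekognition = boto3.client("rekognition")

# --- Training: one-time cost per model version, billed per training hour ---
project = rekognition.create_project(ProjectName="va-keyword-labels")  # hypothetical name
version = rekognition.create_project_version(
    ProjectArn=project["ProjectArn"],
    VersionName="v1",
    OutputConfig={"S3Bucket": "our-training-bucket", "S3KeyPrefix": "output/"},
)

# Training runs asynchronously; block until the version finishes training.
rekognition.get_waiter("project_version_training_completed").wait(
    ProjectArn=project["ProjectArn"], VersionNames=["v1"]
)

# --- Inference: billed for the time the model is kept running ---
model_arn = version["ProjectVersionArn"]
rekognition.start_project_version(ProjectVersionArn=model_arn, MinInferenceUnits=1)
rekognition.get_waiter("project_version_running").wait(
    ProjectArn=project["ProjectArn"], VersionNames=["v1"]
)

# Detect our custom labels (the keywords) in a single image.
result = rekognition.detect_custom_labels(
    ProjectVersionArn=model_arn,
    Image={"S3Object": {"Bucket": "our-image-bucket", "Name": "samples/example.jpg"}},
    MinConfidence=70,
)
for label in result["CustomLabels"]:
    print(label["Name"], label["Confidence"])

# Stop the model when idle so we stop paying inference hours.
rekognition.stop_project_version(ProjectVersionArn=model_arn)
```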
- AWS also has a code example that uses SageMaker Ground Truth to create a labeled dataset directly for AWS Rekognition Custom Labels to use.
Example: using internal employees for human labeling
A manufacturing company uses ML to classify images of their products. To train their model, they label 40,000 images with product names. Using the built-in workflow for image classification, their employees label all 40,000 images.
Because the company used internal employees, the price for the 40,000 human-labeled images is the same $0.08 per image.
Total Cost = 40,000 human-labeled images x $0.08 per image = $3,200
This labeling cost comes on top of the AWS Rekognition Custom Labels model cost above.
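If we go the SageMaker Ground Truth route, the output manifest from the labeling job can be attached to the Custom Labels project as its training dataset. A minimal sketch, assuming the manifest already exists in S3 (the project ARN, bucket, and key are placeholders):

```python
# Sketch: attach a SageMaker Ground Truth output manifest as the training
# dataset of a Rekognition Custom Labels project. All names are placeholders.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.create_dataset(
    ProjectArn="arn:aws:rekognition:us-east-1:123456789012:project/va-keyword-labels/1",  # hypothetical
    DatasetType="TRAIN",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {
                "Bucket": "our-labeling-bucket",
                "Name": "ground-truth-job/manifests/output/output.manifest",
            }
        }
    },
)
print(response["DatasetArn"])
```

A TEST dataset can be attached the same way with DatasetType="TEST", or Custom Labels can split the training dataset for us.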
expected result
- Need to check the F1 score / confusion matrix after training; for labels that are not in our keyword set we would no longer get results, since the custom model only returns the labels it was trained on (see the sketch below for pulling the evaluation metrics).
- There is no guarantee that the labels are exactly what we want as keywords, but the accuracy should be higher, and the labeled dataset can be reused for the Rekognition Custom Labels model later.
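For the F1 / evaluation check in the first point, DescribeProjectVersions returns an EvaluationResult for each trained model version; a minimal sketch (the project ARN and version name are placeholders):

```python
# Sketch: read the overall F1 score and the S3 location of the evaluation
# summary (per-label precision/recall) for a trained model version.
import boto3

rekognition = boto3.client("rekognition")

versions = rekognition.describe_project_versions(
    ProjectArn="arn:aws:rekognition:us-east-1:123456789012:project/va-keyword-labels/1",  # hypothetical
    VersionNames=["v1"],
)

for version in versions["ProjectVersionDescriptions"]:
    evaluation = version.get("EvaluationResult", {})
    print("F1 score:", evaluation.get("F1Score"))
    # Detailed per-label metrics live in the summary file in S3.
    summary = evaluation.get("Summary", {}).get("S3Object", {})
    print("Summary file:", summary.get("Bucket"), summary.get("Name"))
```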