Programming a Robotic Cashier
In my last Medium post, I described how robots have massive potential in retail automation. This post is more of a tutorial explaining my previous work. It is intended to be educational, partly about programming in general and partly about the Anki Vector SDK in particular. So let's get rolling. For reference, the complete program is available on my GitHub repository.
There are five main steps.
1. Connect to the Anki Vector and set it up. In this example, we open a connection to Vector, asking it to enable detection of faces and custom objects. The SDK exposes these as options because each additional feature consumes valuable resources on the robot.
with anki_vector.Robot(args.serial,
                       enable_face_detection=True,
                       enable_custom_object_detection=True) as robot:
    connected_cube = robot.world.connected_light_cube
    robot.behavior.set_head_angle(degrees(35.0))
    robot.behavior.set_lift_height(0.0)
2. Subscribe to events.
on_object_appeared = functools.partial(handle_object_appeared, robot)
robot.events.subscribe(on_object_appeared, anki_vector.events.Events.object_appeared)
In this snippet, we ask Vector to invoke the handler handle_object_appeared() whenever the robot sees a new object. Event-based triggers are one of the most powerful features the Vector SDK supports.
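The handler itself is an ordinary Python function. A minimal sketch might look like the following; the body here is illustrative (my own assumption), not the code from the repository, which does the actual billing work:

```python
# Illustrative handler sketch: robot is bound via functools.partial, and the
# SDK delivers the event describing what Vector observed.
def handle_object_appeared(robot, event_type, event):
    # event.obj describes the observed object, including its custom marker type
    message = f"Vector saw an object: {event.obj}"
    print(message)
    return message
```

The same subscribe pattern works for the other events used later, such as face observations and cube taps.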
3. Define markers. In an ideal world, the robot would read a bar code or QR code and identify the item. In practice, however, I found it very hard to read a bar code or QR code accurately with Vector, mainly because the robot depends on ambient light to take a picture; it does not have a flash. To circumvent this issue, I chose to define markers that Vector understands and affix these markers to the items that customers would buy.
cube_obj = robot.world.define_custom_cube(
    custom_object_type=CustomObjectTypes.CustomType00,
    marker=CustomObjectMarkers.Circles2,
    size_mm=44.0,
    marker_width_mm=50.0,
    marker_height_mm=50.0,
    is_unique=True)
On the left is how this marker looks in real life. Now we need to print this marker at 5 cm x 5 cm (as defined in the code above) and affix it to the item we want Vector to scan (in the demo, I affixed the marker to a bag of pretzels).
4. Implement what happens when events are encountered. This routine contains the crux of the logic in the code. For the purpose of this demo, we handle the following events.
on_robot_observed_face(): Whenever Vector observes a face, it greets the person and offers to play the role of a cashier. Potentially, this can be tied in with facial recognition, so that Vector, upon recognizing the person, customizes the greeting.
on_object_appeared(): Vector recognizes objects when it sees the custom marker shown above. Here we introduce the concept of billing: since we associate the marker with pretzels in our case, we add the cost of the pretzels (predefined in a JSON file) to the customer's bill.
lightcube_tapped(): The light cube is the equivalent of a computer keyboard for Vector. In our case, we use the light cube to let the user signal that they are finished scanning items and are ready to pay for the purchase.
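Stripped of the SDK plumbing, the billing logic behind these handlers can be sketched roughly as follows. The Bill class, the price values, and the method names are my own illustrative assumptions, not the repository's code:

```python
# Illustrative price list; in the demo the prices come from a JSON file.
PRICES = {"pretzels": 2.50}

class Bill:
    """Running total for the current customer (sketch, not the real code)."""
    def __init__(self):
        self.total = 0.0
        self.done = False

    def add_item(self, name):
        # Rough equivalent of on_object_appeared(): a known marker was seen,
        # so add the item's price to the running total.
        self.total += PRICES.get(name, 0.0)
        return self.total

    def finish(self):
        # Rough equivalent of lightcube_tapped(): the customer is done
        # scanning and ready to pay.
        self.done = True
        return self.total
```

With something like this in place, on_object_appeared() would call bill.add_item("pretzels") and lightcube_tapped() would call bill.finish() before handing the total to the payment step below.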
5. Process payments: In this part, Vector talks to a payment hub and processes the payment. For this demo, I implemented a Bitcoin-based mechanism to process mobile payments. The sequence of actions is as follows.
Generate a QR code: in this step, we generate a QR code corresponding to the billing amount in Bitcoin.

imageUrl = ('https://chart.googleapis.com/chart?chs=184x96&cht=qr'
            '&chl=bitcoin:%s&amount=%s') % (bitcoinHash, bitcoinFloat)
with open(imageFName, 'wb') as f:
    f.write(urllib.request.urlopen(imageUrl).read())
Display the QR code on Vector's screen. At this point, we expect the user to scan the QR code with their mobile client and submit payment. Here is the code snippet to display the image on Vector's screen.
image_data = image_file.getdata()
pixel_bytes = convert_pixels_to_screen_data(image_data, image_file.width, image_file.height)
robot.screen.set_screen_to_color(anki_vector.color.Color(rgb=[255, 128, 0]), duration_sec=1.0)
robot.screen.set_screen_with_image_data(pixel_bytes, 60, interrupt_running=True)
Wait for the payment to be processed: in this step, we wait for payment confirmation. Once the confirmation is received, we thank the user and tell them that the transaction is complete. The code snippet is as follows.
def checkPayment(bitCoins):
    # Query Coinbase for pending notifications
    notifications = coinbaseClient.get_notifications()
    for notification in notifications.data:
        # Check if there is a new-payment notification for the same amount
        if notification.get("type") == "wallet:addresses:new-payment":
            # Process the new payment
            additionalData = notification.get("additional_data")
            if additionalData:
                amount = additionalData.get("amount")
                if amount:
                    bitCoinAmount = amount.get("amount")
                    if bitCoinAmount == bitCoins:
                        print("---Payment received and verified---")
                        return True
    return False
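checkPayment() makes a single pass over the pending notifications, so the main flow has to poll it until the payment arrives or a timeout expires. Here is a hedged sketch of such a loop; the helper name, interval, and timeout are my own choices, not code from the repository:

```python
import time

def wait_for_payment(check, interval_sec=5.0, timeout_sec=300.0):
    """Poll check() until it returns True or timeout_sec elapses (sketch)."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval_sec)
    return False
```

Vector could then call something like wait_for_payment(lambda: checkPayment(bitCoins)) and, on success, thank the customer aloud with robot.behavior.say_text().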
That concludes the implementation of Vector as a cashier. If you want to see how all of this works in practice, the following video has a demonstration.
Thanks for reading, and please do clap if you liked this story.
If you want to learn more about Vector, or learn AI with Vector, I have a new course at: https://robotics.thinkific.com