Elephant Robotics

Types of Robotic Arms in AI Kits: A Comparative Analysis

Choosing the Right Robotic Arm for Your AI Kit

In the rapidly advancing field of artificial intelligence and robotics, AI Vision Kits are revolutionizing how machines interact with their environment. The AI Kit works seamlessly with three distinct robotic arms: the myPalletizer 260, myCobot 280, and mechArm 270. Let's dive in and discover how they differ to help you make an informed choice.

What is the Robotics AI Kit?

The AI Kit is an entry-level artificial intelligence kit that integrates vision, positioning, grasping, and automatic sorting modules. Based on the Linux system and built-in ROS (Robot Operating System) with a one-to-one simulation model, the AI Kit supports the control of the robotic arm through software development, allowing for a quick introduction to the basics of artificial intelligence.

Benefits of Using the AI Kit

Currently, the AI Kit can perform color recognition, image recognition, automatic positioning, and sorting. This kit is very helpful for users who are new to robotic arms and machine vision, as it allows you to quickly understand how artificial intelligence projects are built and learn more about how machine vision works with robotic arms.

Compatible Robotic Arms for the AI Kit

myCobot 280 (6 axis robot arm)

myCobot 280 is the smallest and lightest 6-axis collaborative robotic arm (Cobot structure) in the world. The myCobot 280 has a weight of 850 g, a payload of 250 g, and an effective working radius of 280 mm. It is small but powerful and can be used with various end effectors to adapt to different application scenarios. It also supports software development on multiple platforms to meet diverse needs, such as scientific research and education, smart home applications, and preliminary business R&D.

mechArm 270 (6 axis robot arm)

mechArm 270 is a small 6-axis robotic arm with a centrosymmetric structure (like an industrial robot's). The mechArm 270 weighs 1 kg, has a payload of 250 g, and a working radius of 270 mm. Ideal for makers, designers, and anyone who loves to create!

myPalletizer 260 (4 axis robot arm)

myPalletizer 260 is a lightweight 4-axis robotic arm. Its space-saving, fin-inspired design breaks with the traditional link-type educational four-axis arm and lets the whole unit fit into a backpack. It weighs 960 g, has a 250 g payload, and a working radius of 260 mm. With its rich expansion interfaces, it is ideal for makers and educators.

How Does the AI Kit Work with Robotic Arms?

Taking the color recognition and intelligent sorting function as an example, we can look at the vision processing module and the computation module. Now, let's watch the video to see how the AI Kit works with these three robotic arms.

Use OpenCV Vision Processing Module

OpenCV (Open Source Computer Vision) is an open-source computer vision library used to develop computer vision applications. OpenCV includes a large number of functions and algorithms for image processing, video analysis, deep learning based object detection and recognition, and more.

We use OpenCV to process images. The video from the camera is processed to obtain information from the video such as color, image, and the planar coordinates (x, y) in the video. The obtained information is then passed to the processor for further processing.

Below is a part of the code used for image processing (color recognition):


# detect cube color
def color_detect(self, img):
    # pixel center of the detected object; stays (0, 0) if nothing is found
    x = y = 0
    # Gaussian blur to suppress noise before thresholding
    gs_img = cv2.GaussianBlur(img, (3, 3), 0)
    # convert the image from BGR to the HSV color space
    hsv = cv2.cvtColor(gs_img, cv2.COLOR_BGR2HSV)
    for mycolor, item in self.HSV.items():
        # keep only the pixels whose HSV values fall inside this color's range
        mask = cv2.inRange(hsv, np.array(item[0]), np.array(item[1]))
        # erosion removes edge roughness from the mask
        erosion = cv2.erode(mask, np.ones((1, 1), np.uint8), iterations=2)
        # dilation restores the area lost by erosion
        dilation = cv2.dilate(erosion, np.ones((1, 1), np.uint8), iterations=2)
        # find the contours of the remaining blobs (external contours only)
        contours, hierarchy = cv2.findContours(
            dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) > 0:
            # filter out boxes that are too small or too large (misdetections)
            boxes = [
                box
                for box in [cv2.boundingRect(c) for c in contours]
                if min(img.shape[0], img.shape[1]) / 10
                < min(box[2], box[3])
                < min(img.shape[0], img.shape[1])
            ]
            if boxes:
                # take the largest contour that meets the size requirement
                c = max(contours, key=cv2.contourArea)
                # bounding rectangle of the detected object
                x, y, w, h = cv2.boundingRect(c)
                # mark the target by drawing a rectangle around it
                cv2.rectangle(img, (x, y), (x + w, y + h), (153, 153, 0), 2)
                # center of the rectangle, in pixel coordinates
                x, y = (x * 2 + w) / 2, (y * 2 + h) / 2
                # record which color was detected
                if mycolor == "red":
                    self.color = 0
                elif mycolor == "green":
                    self.color = 1
                elif mycolor in ("cyan", "blue"):
                    self.color = 2
                else:
                    self.color = 3
    if abs(x) + abs(y) > 0:
        return x, y
    else:
        return None

Merely obtaining image information is not sufficient; we must process the acquired data and pass it to the robotic arm for command execution. This is where the computation module comes into play.

Use Numerical Python Computation Module

NumPy (Numerical Python) is an open-source Python library mainly used for mathematical calculations. NumPy provides many functions and algorithms for scientific calculations, including matrix operations, linear algebra, random number generation, Fourier transform, and more.

We need to process the coordinates on the image and convert them to real coordinates, a specialized term called eye-to-hand. We use Python and the NumPy computation library to calculate our coordinates and send them to the robotic arm to perform sorting.
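The mapping itself can be as simple as a scale-and-offset: a calibration pass measures the pixel distance between two markers whose real-world separation is known, which yields a millimeters-per-pixel ratio. A minimal sketch of this idea (the function name, marker distance, and numbers are illustrative assumptions, not the kit's actual implementation):

```python
def pixels_to_arm_mm(px, py, center_x, center_y, pixel_span, real_span_mm=130.0):
    """Map image pixel coordinates to arm-frame millimeters.

    center_x, center_y: image position of the workspace center
    pixel_span:         measured pixel distance between two calibration markers
    real_span_mm:       known real-world distance between those markers (assumed)
    """
    ratio = real_span_mm / pixel_span  # millimeters per pixel
    return (px - center_x) * ratio, (py - center_y) * ratio

# Example: a cube detected 80 px right of and 40 px below the workspace center,
# with markers 260 px apart on screen and 130 mm apart in reality (0.5 mm/px)
real_x, real_y = pixels_to_arm_mm(400, 340, 320, 300, 260)
```

Here `real_x` comes out to 40.0 mm and `real_y` to 20.0 mm, the cube's offset from the workspace center in the arm's frame.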

Here is part of the code for the computation:


while cv2.waitKey(1) < 0:
    # read a frame from the camera
    _, frame = cap.read()
    # crop and transform the frame for detection
    frame = detect.transform_frame(frame)
    if _init_ > 0:
        _init_ -= 1
        continue
    # calculate the camera clipping parameters over the first 20 frames
    if init_num < 20:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            init_num += 1
            continue
    elif init_num == 20:
        detect.set_cut_params(
            detect.sum_x1 / 20.0,
            detect.sum_y1 / 20.0,
            detect.sum_x2 / 20.0,
            detect.sum_y2 / 20.0,
        )
        detect.sum_x1 = detect.sum_x2 = detect.sum_y1 = detect.sum_y2 = 0
        init_num += 1
        continue
    # calculate the parameters of the coords between the cube and mycobot
    if nparams < 10:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            nparams += 1
            continue
    elif nparams == 10:
        nparams += 1
        # set the parameters for calculating the real coord between cube and mycobot
        detect.set_params(
            (detect.sum_x1 + detect.sum_x2) / 20.0,
            (detect.sum_y1 + detect.sum_y2) / 20.0,
            abs(detect.sum_x1 - detect.sum_x2) / 10.0 +
            abs(detect.sum_y1 - detect.sum_y2) / 10.0
        )
        print("ok")
        continue
    # run color detection on the frame
    detect_result = detect.color_detect(frame)
    if detect_result is None:
        cv2.imshow("figure", frame)
        continue
    else:
        x, y = detect_result
        # calculate the real coord between the cube and mycobot
        real_x, real_y = detect.get_position(x, y)
        if num == 20:
            # average the 20 accumulated samples, publish the marker, move the arm
            detect.pub_marker(real_sx / 20.0 / 1000.0, real_sy / 20.0 / 1000.0)
            detect.decide_move(real_sx / 20.0, real_sy / 20.0, detect.color)
            num = real_sx = real_sy = 0
        else:
            num += 1
            real_sy += real_y
            real_sx += real_x
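The 20-sample averaging at the end of this loop can be shown in isolation. A minimal NumPy sketch with synthetic detections (the numbers are illustrative, not from the kit):

```python
import numpy as np

# 20 noisy (x, y) detections of the same cube (synthetic data)
rng = np.random.default_rng(0)
samples = np.array([[100.0, 50.0]] * 20) + rng.normal(0.0, 0.5, size=(20, 2))

# accumulate-and-divide, as the loop does with real_sx / real_sy over 20 frames
mean_x, mean_y = samples.sum(axis=0) / 20.0
```

Averaging the accumulated coordinates before calling `decide_move` suppresses per-frame detection jitter, so the arm moves to a stable target rather than chasing noise.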

The AI Kit project is open source and can be found on GitHub.

Key Differences Between the 3 Robotic Arms

After comparing the videos, content, and program code for the three robotic arms, it appears that they share the same framework and only require minor data modifications to operate effectively. There are two main differences between these three robotic arms:

4 DOF vs. 6 DOF Robotic Arms

As observed in the video, both the 4-axis and 6-axis robotic arms exhibit sufficient range of motion to operate effectively within the AI Kit's work area. However, they differ in setup complexity. The 4-axis myPalletizer 260 features a streamlined design with only 4 moving joints, enabling a faster start-up. In contrast, the myCobot 280 and mechArm 270 use 6 joints, two more than the myPalletizer 260, resulting in more calculations in the program and a longer start time (in small-scale scenarios).

Centrosymmetric vs. Cobot Structure

Industrial robots predominantly utilize a centrosymmetric structure. This design, exemplified by the mechArm 270 with its 2, 3, and 4-axis joints, offers inherent stability and smooth motion due to bilateral support. Conversely, the Cobot structure prioritizes a larger working radius and enhanced movement flexibility by eliminating the central support column. However, this flexibility may introduce minor deviations in movement precision compared to the centrosymmetric design, as the arm relies solely on motor control for stability.

Making Your Choice

Selecting the most suitable robotic arm from the 3 included in the AI Kit depends on the intended application. Key factors to consider include the arm's working radius, operational environment, and load capacity.

For those seeking to explore robotic arm technology, any of the three models can serve as a valuable learning tool. Here's a brief overview of each arm's design philosophy:

  • myPalletizer 260: Inspired by palletizing robots, it excels in palletizing and handling goods on pallets.
  • mechArm 270: Its design prioritizes operational stability with a specialized structure.
  • myCobot 280: Reflecting a recent trend in collaborative robots, it boasts human-safe interaction and capabilities mimicking human strength and precision.
