Top 10 Matrox Imaging Library X Alternatives

  • RoboDK RoboDK
  • Open Robotics GAZEBO
  • MVTec MERLIC
  • neurala Vision AI Software
  • SOLOMON vision JustPick
  • SOLOMON vision Solmotion
  • Matrox Imaging Matrox Design Assistant X
  • Liebherr Group Robot vision technology packages - LHRobotics.Vision
  • Hikrobot Vision Master
  • EasyODM.tech EasyODM Machine Vision Software
  • Delfoi SPOT
1
RoboDK RoboDK
4.9/5 (4)
Offline programming | Simulation | Developer tools | Vision

RoboDK is a comprehensive robotic simulation and programming software that offers a user-friendly experience with five easy steps to simulate and program robots. It provides access to an extensive library of over 500 industrial robot arms from various manufacturers like ABB, Fanuc, KUKA, and Universal Robots. Moreover, it supports external axes such as turntables and linear rails, allowing users to model and synchronize additional axes effortlessly.

The software facilitates precise tool definition by loading 3D models of tools and converting them to robot tools by drag-and-drop actions. Users can calibrate robot tools accurately using RoboDK. Additionally, it enables the loading of 3D models of parts and placing them in a reference frame for a quick proof of concept. The simulation capabilities enable users to create robot paths using an intuitive interface, integrating with CAD/CAM software and accessing plug-ins for various design software.

RoboDK's standout feature lies in its ability to generate robot programs offline with just two clicks, supporting over 40 robot manufacturers with more than 70 post-processors. It eliminates the need for programming experience, making it accessible to a broader user base. Furthermore, it automatically generates error-free paths, avoids singularities, axis limits, and collisions, ensuring efficient and collision-free robot programming. The software's capability to split long programs enables easy loading into the robot controller, streamlining the programming process for industrial automation tasks.
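Beyond the GUI workflow, RoboDK also exposes a scripting API. The snippet below is a minimal, illustrative sketch using the RoboDK Python API (robodk package, v5+ layout); it assumes a running RoboDK instance with an open station containing at least one robot, and the frame/target names and coordinates are placeholders.

```python
# Minimal sketch of driving a RoboDK station through its Python API.
# Assumes RoboDK is running with an open station that contains a robot;
# item names and coordinates are illustrative only.
from robodk.robolink import Robolink, ITEM_TYPE_ROBOT
from robodk.robomath import transl

RDK = Robolink()                          # connect to the running RoboDK instance
robot = RDK.Item('', ITEM_TYPE_ROBOT)     # first robot found in the open station

# Create a reference frame and two targets relative to it (placeholder values;
# adjust to poses reachable by your robot)
frame = RDK.AddFrame('PartFrame')
robot.setPoseFrame(frame)

home = RDK.AddTarget('Home', frame)       # created at the robot's current pose
pick = RDK.AddTarget('Pick', frame)
pick.setPose(transl(250, 0, 50))          # 250 mm in X, 50 mm in Z

# Simulate the motion; switching RoboDK to program-generation mode would
# instead emit vendor-specific robot code through the selected post-processor.
robot.MoveJ(home)
robot.MoveL(pick)
```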

2
Open Robotics GAZEBO
4/5 (8)
Simulation | Developer tools | Vision

Gazebo, developed by Open Robotics, is a collection of open-source software libraries designed to simplify the development of high-performance applications. While its primary audience includes robot developers, designers, and educators, Gazebo is versatile and can be adapted to various use cases. The modular design of Gazebo allows users to choose specific libraries tailored to their application's needs, promoting flexibility and avoiding unnecessary dependencies.

One of Gazebo's standout features is its trust and reliability, ensured through a curation and maintenance process led by Open Robotics in collaboration with a community of developers. Each library within Gazebo serves a specific purpose, reducing code clutter and promoting consistency between libraries. The development and maintenance process adheres to a rigorous protocol, including multiple reviews, code checkers, and continuous integration, ensuring the software's stability and robustness.

Gazebo offers an array of powerful features, making it an excellent option for industrial automation. It includes built-in robot models like PR2 and Pioneer2 DX, enabling users to start simulations quickly. The TCP/IP transport allows simulations to run on remote servers, providing flexibility and scalability. The 3D graphics environment offers advanced and realistic rendering, enhancing the simulation experience. Additionally, Gazebo supports custom plugin development for its API, facilitating integration with other software systems. The dynamic simulation capabilities, along with various sensors and camera modules, make it a comprehensive tool for testing and validating robotic systems in a virtual environment. Furthermore, Gazebo's compatibility with cloud simulations allows users to run simulations on platforms like AWS, leveraging the power of cloud computing for their automation needs.
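As a small illustration of the headless/remote workflow mentioned above, the sketch below launches a Gazebo server without a GUI from Python. It assumes a recent Gazebo release with the gz CLI on the PATH and uses the bundled shapes.sdf example world; flags and world names may differ between versions.

```python
# Sketch: launch a headless Gazebo server (e.g., on a remote or cloud machine).
# Assumes the `gz` CLI from a recent Gazebo release is installed; the flags
# (-s server only, -r run on start, -v verbosity) and the shapes.sdf example
# world may vary between releases.
import subprocess

proc = subprocess.Popen(["gz", "sim", "-s", "-r", "-v", "1", "shapes.sdf"])
try:
    proc.wait(timeout=30)       # let the simulation run for a while
except subprocess.TimeoutExpired:
    proc.terminate()            # stop the server when done
```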

3
MVTec MERLIC
No reviews yet.
Vision

A NEW GENERATION OF MACHINE VISION SOFTWARE


MVTec MERLIC is an all-in-one software product for quickly building machine vision applications without any need for programming.

It is based on MVTec's extensive machine vision expertise and combines reliable, fast performance with ease of use. An image-centered user interface and intuitive interaction concepts like easyTouch provide an efficient workflow, which leads to time and cost savings.

MERLIC provides powerful tools to design and build complete machine vision applications with a graphical user interface, integrated PLC communication, and image acquisition based on industry standards. All standard machine vision tools such as calibration, measuring, counting, checking, reading, position determination, as well as 3D vision with height images are included in MVTec MERLIC.


MERLIC 5

MERLIC 5 introduces a new licensing model for the greatest possible flexibility. It allows customers to choose the package – and price – that exactly fits the scope of their application. Depending on the required number of image sources and features (“add-ons”) for the application, the packages Small, Medium, Large, and X-Large, as well as a free trial version, are available. This new "package" concept replaces the previous "editions" model.


With the release of MERLIC 5, MVTec’s state-of-the-art deep learning technologies are finding their way into the all-in-one machine vision software MERLIC. Easier than ever before, users can now harness the power of deep learning for their vision applications.

MERLIC 5 includes Anomaly Detection and Classification deep learning technologies:

The "Detect Anomalies" tool allows to perform all training and processing steps to detect anomalies directly in MERLIC.
The "Classify Image" tool enables using classifiers (e.g., trained with MVTec's Deep Learning Tool) to easily classify objects in MERLIC.

4
neurala Vision AI Software
No reviews yet.
Vision


Improve your visual quality inspection process

Neurala’s Vision Inspection Automation (VIA) software helps manufacturers improve their vision inspection and quality control processes to increase productivity, providing flexibility to scale to meet fluctuating demand. Easy to set up and integrate to existing hardware, Neurala VIA software reduces product defects while increasing inspection rates and preventing production downtime – all without requiring previous AI expertise.



How Neurala’s Vision AI software works



Understanding what your Vision AI is “seeing”

Neurala’s Explainability feature highlights the area of an image causing a vision AI model to make a specific decision about a defect. With this detailed understanding of the workings of the vision AI model, manufacturers can build better performing AI models that continuously improve processes and production efficiencies.





Multiple inspection points on complex products
Neurala's Multi-ROI (region of interest) feature helps ensure that all components of particular interest are in the right place and in the right orientation, without any defects. Multi-ROI can run from a single inspection camera, which dramatically reduces the cost per inspection, without slowing down the run time.
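As a generic illustration of the multi-ROI idea (not Neurala's software or API), the sketch below crops several regions of interest from a single camera frame so each can be checked independently; the file name and ROI coordinates are placeholders.

```python
# Generic multi-ROI illustration (not Neurala's API): crop several regions
# from one camera frame so each can be inspected by its own check.
import cv2

frame = cv2.imread("frame.png")          # placeholder for a camera grab
assert frame is not None, "replace frame.png with a real image"

# (x, y, width, height) per inspection point -- placeholder coordinates
rois = {
    "connector": (100, 80, 120, 60),
    "label":     (300, 40, 200, 90),
    "screw":     (520, 220, 80, 80),
}

for name, (x, y, w, h) in rois.items():
    crop = frame[y:y + h, x:x + w]
    # hand `crop` to whatever inspection model/check applies to this region
    print(name, crop.shape)
```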





Innovation that’s fast, easy and affordable



Neurala VIA - Beyond machine vision
Defects that are easy for a human to see can be difficult for machine vision to detect. Adding vision AI dramatically increases your ability to detect challenging defect types such as surface anomalies and product variability. Neurala VIA helps improve inspection accuracy and the percentage of products being inspected. You can build data models in minutes, then easily modify and redeploy for changes on the production line.




5
SOLOMON vision JustPick
No reviews yet.
Vision


As e-commerce expands at phenomenal rates, handling shipment is proving to be a herculean task. Effective supply chains that process more and deliver in less time are critical to managing consumer expectations and capitalizing on this upward trend.


JustPick Key Advantages

AI-powered vision systems help robots ‘see’ so they can perform tasks for order fulfillment much more cost-efficiently. With JustPick, robots no longer need to be trained to recognize objects, allowing them to sort and handle large quantities of unknown SKUs on the fly.

Process large, random inventories without programming

Automated sorting to increase throughput

Reduced overhead costs & uninterrupted operations

Compatible with over 20 major robot brands


JustPick Key Features
Created for the e-commerce picking process, JustPick’s custom-built functions help robots identify and pick packages of different shapes and sizes autonomously.

‘Unknown’ picking
JustPick eliminates the need for robots to ‘know’ what objects are in order to pick, allowing large inventories to be processed without having to learn each SKU one by one.


Intelligent gripper

Upon locating an item, JustPick automatically configures the optimum gripping approach, and if needed, adjusts the number of active suction cups to ensure a secure grasp.

Easy integration
JustPick is compatible with over 20 robot brands, numerous PLCs, and 3D cameras including ToF, structured light, and stereo vision.

Faster setup
No programming is required to operate JustPick. Our user-friendly UI allows users to drag-and-drop preset function blocks to create their unique workflow.


6
SOLOMON vision Solmotion
No reviews yet.
Vision

Vision-guided robot solution combines 3D-vision with machine learning

Increases manufacturing flexibility and efficiency

Solmotion is a system that automatically recognizes the product’s location and makes corrections to the robot path accordingly. The system reduces the need for fixtures and precise positioning in the manufacturing process, and can quickly identify the product’s features and changes. This helps the robot to react to any variations in the environment just as if it had eyes and a brain. The use of AI allows robots to break through the limitations of the past, providing users with high flexibility, even when dealing with previously unknown objects.

Solomon’s AI technology combines 2D and 3D vision algorithms, alternating them in different contexts. Through the use of neural networks, the robot is trained not only to see (Vision) but also to think (AI), and move (Control). This innovative technology was honored in the Gold Category at the Vision Systems Design Innovators Awards. In addition to providing a diverse and flexible vision system, Solmotion supports more than twenty world-known robot brands. This greatly reduces the time and cost of integrating or switching different robots, giving customers the ability to rapidly automate their production lines or quickly move them to a different location. This makes Solmotion a one-stop solution that provides system integrators and end-users with a full range of smart vision tools.

Through a modular and intelligent architecture, Solmotion can quickly identify product changes and make path adjustments in real-time, regardless of any modifications made to the production line. This results in a more flexible manufacturing process while improving the production environment to become a zero-mold, zero-inventory smart production factory.


Solmotion Key Advantages:

Cuts mechanical tooling costs
Saves the costs of fixture and storage space
Reduces changeover time
Decreases the accumulated tolerance caused by part placement


Solmotion Key Features:
CAD/CAM software support (offline programming)

Graphical User Interface, easy for editing program logic

Automatic object recognition, and corresponding path loading

Automatic/manual point cloud data editing
User-friendly path editing/creation/modification

Project Management/Robot Program Backup

Support for more than 20 world-known robot and PLC brands

ROS automatic obstacle avoidance function



Solmotion Key Functions

AI Deep Learning tool

Neural networks can be used to train the AI to identify object features and defects on the surface of items. Compared to traditional rule-based AOI, AI inspection covers a wider range of application scenarios, is smarter, and does not require deep technical knowledge. Together with the Solmotion vision-guided robot technology, a camera mounted on a robot can perform just like human eyes and inspect every detail on the surface of the objects.

Applications:
Paint defect and welding inspection, mold repair, metal defect inspection, and food sorting.

3D vision positioning system

The objects can be placed randomly without the need for precision fixtures or a positioning mechanism. Through visual recognition of partial features, the AI locates the position of the parts in space, generating their displacement and rotation coordinates in real-time, which are then fed back to the robot for direct processing. To achieve flexible production, the system also uses a path-loading function based on object feature recognition. The software can also generate the robot path through offline programming, making it a suitable solution for high-mix, low-volume mixed-production scenarios.

Applications:
Various Robot Machine Tending Applications.
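As a generic sketch of the underlying technique (not Solomon's implementation), the example below recovers an object's displacement and rotation by registering a scanned point cloud against a reference model with ICP, using the open-source Open3D library; file names and thresholds are placeholders.

```python
# Generic illustration (not Solomon's implementation): recover an object's
# displacement and rotation by registering a scanned point cloud against a
# reference model with ICP. File names and thresholds are placeholders.
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("reference_part.ply")   # model in its nominal pose
scan = o3d.io.read_point_cloud("scene_scan.ply")            # 3D camera capture

result = o3d.pipelines.registration.registration_icp(
    scan, reference,
    max_correspondence_distance=5.0,                         # mm, depends on sensor noise
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

T = result.transformation            # 4x4 transform: scan -> reference
rotation = T[:3, :3]
translation = T[:3, 3]
print("translation (mm):", translation)
print("rotation matrix:\n", rotation)
# Re-expressing T in the robot base frame would give the offsets to send
# to the robot controller.
```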

Robotic path planning auto-generation

There is no need to manually set the robot path: Solomon's AI learns the edge and automatically generates the path plan. The processing angle can be adjusted to "vertical" or "specified" according to the situation. Surface-filling path generation and corner path optimization functions are also available. Supporting more than 20 robot and PLC brands, the solution is suitable for products whose path teaching is time-consuming, high-mix or low-volume, and highly variable.

Applications:
Cutting, Gluing, Edge trimming, Painting.
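The following is a generic sketch of deriving a path from a detected edge (not Solomon's path planner): it extracts the object contour with OpenCV and downsamples it into waypoints; the image name and thresholds are placeholders.

```python
# Generic sketch (not Solomon's planner): derive a tool path from a detected
# object edge by extracting its contour and downsampling it into waypoints.
import cv2

img = cv2.imread("part_top_view.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
assert img is not None, "replace part_top_view.png with a real image"

edges = cv2.Canny(img, 50, 150)                                # edge map
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    outline = max(contours, key=cv2.contourArea)               # largest outline
    # Keep every Nth contour point as a waypoint; a real system would also map
    # pixels to robot coordinates and add approach/retract moves.
    waypoints = [tuple(pt[0]) for pt in outline[::10]]
    print(f"{len(waypoints)} waypoints, first few: {waypoints[:5]}")
else:
    print("no contour found")
```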

3D Matching Defect Inspection

The software will perform a comparison between the generated 3D point cloud data of the object and the standard CAD in real-time, generating a report according to the pre-set difference threshold. The report will contain the differences in height, width, and volume data. This data can also be used to automatically generate the robot path. This solution is suitable for object matching and deformation compensation applications.

Applications:
Inspection, Trimming, Repairing, Milling, and 3D Printing.
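As a generic illustration of comparing a scanned point cloud against a reference CAD model with a preset deviation threshold (not Solomon's implementation), the sketch below uses the open-source Open3D library; file names and the threshold are placeholders.

```python
# Generic illustration (not Solomon's implementation): compare a scanned
# point cloud against a reference CAD mesh and flag points that deviate more
# than a preset threshold. File names and the threshold are placeholders.
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("scanned_part.ply")
cad = o3d.io.read_triangle_mesh("reference_part.stl")
cad_points = cad.sample_points_uniformly(number_of_points=200_000)

# Distance from each scanned point to the nearest reference point
distances = np.asarray(scan.compute_point_cloud_distance(cad_points))

threshold_mm = 0.5                       # pre-set difference threshold
deviating = distances > threshold_mm
print(f"max deviation: {distances.max():.3f} mm")
print(f"{deviating.sum()} of {len(distances)} points exceed {threshold_mm} mm")
```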

7
Matrox Imaging Matrox Design Assistant X
No reviews yet.
Vision

Matrox Design Assistant® X is an integrated development environment (IDE) for Microsoft® Windows® where vision applications are created by constructing an intuitive flowchart instead of writing traditional program code. In addition to building a flowchart, the IDE enables users to design a graphical web-based operator interface for the application.

Matrox Design Assistant X can operate independently of hardware, allowing users to choose any computer with CoaXPress®, GigE Vision®, or USB3 Vision® cameras and get the processing power needed. Image capture from CoaXPress cameras requires a Matrox Rapixo CXP frame grabber. Matrox Design Assistant X works with multiple cameras, all within the same project or one per project, running concurrently and independently from one another, platform permitting. This field-proven software is also a perfect match for a Matrox Imaging vision controller or smart camera. Matrox Design Assistant X includes classification steps that categorize image content using deep learning.

This flowchart-based vision software offers the freedom to choose the ideal platform for any vision project and speeds up application development.


Matrox Design Assistant X at a glance

  • Solve machine vision applications efficiently by constructing flowcharts instead of writing program code
  • Choose the best platform for the job within a hardware-independent environment that supports Matrox Imaging smart cameras and vision controllers and third-party PCs with CoaXPress, GigE Vision, or USB3 Vision cameras
  • Tackle machine vision applications with utmost confidence using field-proven tools for analyzing, classifying, locating, measuring, reading, and verifying
  • Leverage deep learning for visual inspection through image classification and segmentation tools
  • Use a single program for creating both the application logic and operator interface
  • Work with multiple cameras, all within the same project or one per project, running concurrently and independently from one another, platform permitting
  • Interface to Matrox AltiZ and third-party 3D sensors to visualize, process, and analyze depth maps and point clouds
  • Rely on a common underlying vision library for the same results with a Matrox Imaging smart camera, vision system, or third-party computer
  • Maximize productivity with instant feedback on image analysis and processing operations
  • Receive immediate, pertinent assistance through an integrated contextual guide
  • Communicate actions and results to other automation and enterprise equipment via discrete Matrox I/Os, RS-232, and Ethernet (TCP/IP, CC-Link IE Field Basic, EtherNet/IP™, Modbus®, OPC UA, and PROFINET®), as well as native robot interfaces
  • Test communication with a programmable logic controller (PLC) using the built-in PLC interface emulator
  • Maintain control and independence through the ability to create custom flowchart steps
  • Increase productivity and reduce development costs with Matrox Vision Academy online and on-premises training
  • Protect against inappropriate changes with the Project Change Validator tool


Application design

Flowchart and operator interface design are done within the Matrox Design Assistant X IDE hosted on a computer running 64-bit Windows. A flowchart is put together using a step-by-step approach, where each step is taken from an existing toolbox and is configured interactively. Inputs for a subsequent step—which can be images, 3D data, or alphanumeric results—are easily linked to the outputs of a previous step. Decision-making is performed using a flow-control step, where the logical expression is described interactively. Results from analysis and processing steps are immediately displayed to permit the quick tuning of parameters. A contextual guide provides assistance for every step in the flowchart. Flowchart legibility is maintained by grouping steps into sub-flowcharts. A recipes facility enables a group of analysis and processing steps to have different configurations for neatly handling variations of objects or features of interest within the same flowchart.

In addition to flowchart design, Matrox Design Assistant X enables the creation of a custom, web-based operator interface to the application through an integrated HTML visual editor. Users alter an existing template using a choice of annotations (graphics and text), inputs (edit boxes, control buttons, and image markers), and outputs (original or derived results, status indicators, and charts). A filmstrip view is also available to keep track of and navigate to previously analyzed images. The operator interface can be further customized using a third-party HTML editor.









Custom flowchart steps

Users have the ability to extend the capabilities of Matrox Design Assistant X by way of the included Custom Step software development kit (SDK). The SDK, in combination with Microsoft Visual Studio® 2019 or 2022, enables the creation of custom flowchart steps using the C# programming language. These steps can implement proprietary analysis and processing, as well as proprietary communication protocols. The SDK comes with numerous project samples to accelerate development.




Application deployment

Once development is complete, the project—with flowchart(s) and operator interface(s)—is deployed either locally or remotely. Local deployment is to the same computer or Matrox Imaging vision controller as was used for development. Remote deployment is to a different computer, including Matrox Imaging vision controllers, or a Matrox Imaging smart camera.



Project templates for quicker start-up

Matrox Design Assistant X includes a series of project templates and video tutorials to help new developers get up and running quickly.

These templates serve as either functional applications or application frameworks intended as a foundation for a target application. Templates also permit dynamic modifications, allowing users to tweak functionality at runtime and immediately see the outcome of any adjustments. The project templates address typical application areas, with examples for:



Barcode and 2D code reading
Measurement
Presence/absence
Recipes
Robot guidance (pick-and-place)
Dot-matrix text reading (SureDotOCR®)
Color checking










Customizable developer interface

The Matrox Design Assistant X user interface can be tailored by each developer. The workspace can be rearranged, even across multiple monitors, to suit individual preferences and further enhance productivity.




Project Change Validator
Project Change Validator is a utility employing a client-server architecture for ensuring that changes made to a deployed project are not detrimental to the functioning of that project. It provides the ability to record reference images, along with the associated inspection settings and results, for a given project.

This archived reference data is then used to validate changes made to the project. Changes are validated by running the modified project with the reference data and comparing the project's operation against this data. Validation is performed by the server—typically running on a separate computer—which is reachable over a network.

The Matrox Design Assistant X management portal provides access to the validation data and results. Validation requests are made on demand from the management portal, an automation controller, or an HMI panel.

Project Change Validator (view from management portal)


PLC interface emulation

While developing a project in Matrox Design Assistant X, the PLC interface emulator is used to test communication when a physical PLC is not connected. Values can be changed and viewed dynamically to test the communication between the project and the PLC. The PLC interface emulator supports the CC-Link IE Field Basic, EtherNet/IP, Modbus over TCP/IP, and PROFINET communication protocols; these can be activated and controlled from the management portal.
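Outside of Design Assistant itself, the same kind of communication check can be scripted; the sketch below is a minimal Modbus/TCP write-and-read using the third-party pymodbus package, with a placeholder IP address and register number.

```python
# Generic Modbus/TCP test (not part of Design Assistant): write and read a
# holding register using the third-party pymodbus package. The IP address,
# port, and register numbers are placeholders.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)
if client.connect():
    client.write_register(address=0, value=1234)              # set a test value
    response = client.read_holding_registers(address=0, count=1)
    if not response.isError():
        print("register 0 =", response.registers[0])
    client.close()
```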




Connect to devices and networks

Matrox Design Assistant X can capture images from any CoaXPress, GigE Vision, or USB3 Vision compliant camera. Image capture from CoaXPress cameras happens with the use of a Matrox Rapixo CXP frame grabber. For GigE Vision cameras, the exact capture time can be obtained from IEEE 1588 timestamps.

The software can communicate over Ethernet networks using the TCP/IP as well as the CC-Link IE Field Basic, EtherNet/IP, Modbus over TCP/IP, and PROFINET protocols, enabling interaction with programmable logic/automation controllers. Its QuickComm facility provides ready-to-go communication with these controllers. Matrox Design Assistant X supports OPC UA communication for interaction with manufacturing systems and direct communication with select robot controllers for 2D vision-guided robotic applications. Supported robot-controller makes and models currently include the ABB IRC5; DENSO RC8; Epson RC420+ and RC520+; Fanuc LRMate200iC and LRMate200iD; KUKA KR C2; and Stäubli CS8, CS8C HP, and CS9 controllers.
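For comparison, a minimal OPC UA read from a generic Python client (the third-party python-opcua package) looks like the sketch below; the endpoint URL and node id are placeholders, and this is not Design Assistant's internal mechanism.

```python
# Generic OPC UA read (not Design Assistant's internal mechanism): connect to
# an OPC UA server and read one node using the third-party python-opcua
# package. The endpoint URL and node id are placeholders.
from opcua import Client

client = Client("opc.tcp://192.168.0.20:4840")
client.connect()
try:
    node = client.get_node("ns=2;i=2")      # hypothetical node id
    print("value:", node.get_value())
finally:
    client.disconnect()
```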

Matrox Design Assistant X can be configured to interact with automation devices through a computer’s COM ports. Matrox Design Assistant X can also directly interact with the I/Os built into a Matrox Imaging vision controller, smart camera, and I/O card as well as the I/O available on a GigE Vision or USB3 Vision camera.

8
Liebherr Group Robot vision technology packages - LHRobotics.Vision
No reviews yet.
Bin picking | Vision

Liebherr is making its expertise in the field of industrial robot vision applications available to a wide user group with the LHRobotics.Vision technology packages.


The technology packages consist of a projector-based vision camera system for optical data collection and software for object identification and selection, collision-free withdrawal of parts, and robot path planning to the stacking point.


Software
Basic license:
Suitable for customers who only need to roughly set the gripped workpiece down; path planning is not necessarily required for this. Because it does not include a robot model or obstacles, it is also suitable for customers who place less value on collision checking outside the bin, for example in cases where the gripper is never able to fully enter the bin.


Pro license:
The professional license offers unrestricted use of the LHRobotics.Vision software. This license is particularly suitable for customers who place value on full collision checking of the path from removal to possible stacking.


Smart bin-picking software

Design a complex application without any programming knowledge at all. LHRobotics.Vision makes it possible. The intuitive graphical user interface enables quick entry of all necessary information using simple steps, starting with:

- Attaching workpieces

- Configuring transport containers

- Models can be created directly in the software for simple geometries, or existing CAD data can be imported


Step by step to the right grip

To realistically represent the withdrawal of parts, including all axis movements, in the software, highly detailed grippers are created – even bendable grippers or a 7th or 8th axis can be represented. The interfering contour takes the current status of the gripper – open or closed – into account. Again, existing CAD data can be accessed.


When the workpiece and gripper meet, the task is to define suitable picking positions. This can be done by visually positioning the gripper or by precisely defining the coordinates. Degrees of freedom can be tested, and even realistic removal cycles can be simulated with the LHRobotics.Vision Sim plugin. This enables the gripper to be optimized within the software itself.


Integrated simulation possibilities with LHRobotics.Vision Sim

If changes are necessary, for example when modified or completely new parts are to be gripped, it is useful to test in advance whether the setup works. The optional LHRobotics.Vision Sim package includes this possibility: feasibility can be tested offline, i.e. without intervening in the running cell, so the entire process can be validated in advance before it is put into operation in the cell.


The environment in view

Once parts and grippers have been created, the periphery follows. The robot used is selected from the library to check the work area. Any obstacles can be taken into account:

- Input of obstacles present in the robot’s work area

- Hand-eye calibration of the system between sensor and robot (illustrated in the sketch below)

- Definition of framework conditions for collision-free withdrawal of parts

- Path planning
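The hand-eye calibration step listed above can be illustrated in general terms (this is not Liebherr's procedure): given matched robot poses and camera observations of a calibration target, OpenCV can estimate the camera-to-gripper transform. The pose data below is randomly generated purely as a placeholder; a real calibration uses measured poses.

```python
# Generic hand-eye calibration illustration (not Liebherr's procedure):
# estimate the camera-to-gripper transform from matched pose pairs.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def random_pose():
    """Placeholder pose: random rotation (via Rodrigues) + translation in mm."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, size=(3, 1)))
    t = rng.uniform(-100, 100, size=(3, 1))
    return R, t

# In a real calibration these come from the robot controller and from the
# camera observing a calibration target at the same robot poses.
poses = [(*random_pose(), *random_pose()) for _ in range(15)]
R_gripper2base = [p[0] for p in poses]
t_gripper2base = [p[1] for p in poses]
R_target2cam = [p[2] for p in poses]
t_target2cam = [p[3] for p in poses]

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam
)
print("camera-to-gripper rotation:\n", R_cam2gripper)
print("camera-to-gripper translation:\n", t_cam2gripper)
```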

9
Hikrobot Vision Master
No reviews yet.
Vision

Vision Master (VM), self-developed by HIKROBOT, is machine vision software committed to providing customers with algorithm tools to quickly build vision applications and solve visual inspection problems. It can be used in various applications such as visual positioning, size measurement, defect detection, and information recognition.


Multiple development modes

Graphical interface development

A graphical software interface with intuitive, easy-to-understand function modules for quickly building vision solutions


SDK secondary development mode

Customized product development can be completed using the control and data acquisition interfaces provided by the VM.


Operator design mode

Package the operator into a unique visual tool and integrate it into the user-defined inspection process.

Integrated Vision Master platform with 1000+ image processing operators
The VM provides 1000+ completely self-developed operators and several interactive development tools, supports a variety of image acquisition equipment, and can meet vision requirements for positioning, measurement, identification, and detection.

Positioning
Measurement
Identification
Detection


Efficient positioning and matching tools overcome the differences caused by sample translation, rotation, scaling, and illumination, and quickly and accurately find the position of geometric objects such as circles, lines, spots, edges, and vertices. They provide location and presence information that can be used for robot guidance and by other vision tools.
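As a generic illustration of this kind of geometric positioning (not VM's actual operators), the sketch below locates circular features in an image with OpenCV and reports their centers; the image name and Hough parameters are placeholders.

```python
# Generic positioning illustration (not VM's operators): locate circular
# features in an image and report their centers and radii.
import cv2
import numpy as np

img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
assert img is not None, "replace workpiece.png with a real image"

blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
    param1=120, param2=40, minRadius=10, maxRadius=80,
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}), radius {r}px")   # feed into guidance/other tools
else:
    print("no circles found")
```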

The identification tools provide continuous, accurate, and high-speed reading of the ID information required for component tracking: a deep-learning-based OCR algorithm adapts to complex backgrounds and to low-contrast and deformed characters, while 1D and 2D code recognition algorithms can read codes of multiple formats at different positions, angles, and illumination levels and effectively overcome the impact of image distortion.

The detection tools accurately identify defects in the surface, shape, and contour of the workpiece: deep-learning-based detection finds small surface scratches and spots while overcoming interference from the workpiece's surface texture, color, and noise, and shape and contour defect detection overcomes interference from burrs, color, and noise. A reliable standard-part comparison tool can locate small differences between workpieces.


High-performance deep learning algorithm

The VM algorithm platform is equipped with high-performance deep learning algorithms. Optimized across a large number of cases, these algorithms adapt well to common inspection products. The deep learning modules include classification, target detection, character positioning and recognition, and segmentation. Through the built-in graphical data annotation interface, the complete process from collection and training to detection can be completed within the VM algorithm platform.

  • Predict the location of various defects in the image and present it in the form of a heat map
  • Text positioning and character recognition, used to predict the text position and the text value in the image, respectively
  • Determine the type of objects in the image
  • Determine the category of an object appearing in the image and predict its location
  • Search a specified image dataset for images with characteristics similar to the input image
  • Determine the object category and estimate its contour position
  • Detect abnormal regions of objects in images based on normal samples, presented as heat maps


Graphical interface & easy-to-use interaction

The VM provides a fully graphical interactive interface, with intuitive and easy-to-understand function icons, simple and easy-to-use interactive logic and drag-and-drop operation to quickly build a visual solution.


Complete external resource management: control of cameras, I/Os, light sources, and more
The VM integrates the SDKs of industrial cameras, smart cameras, vision controllers, and other devices, and embeds efficient and stable device occupation and control logic, providing good compatibility with external device resources and a complete management mechanism.

10
EasyODM.tech EasyODM Machine Vision Software
No reviews yet.
Vision
EasyODM software utilizes AI and computer vision to achieve 99% accuracy and 27x faster inspections compared to traditional methods, resulting in significant cost savings.