A machine vision system for on-line fruit colour classification*

Filiberto Pla, José M. Sanchiz, José S. Sánchez, Nicolas Ugolini, Miguel Diaz
University Jaume I, Computer Vision Group, 12080 Castelló, Spain
Maxfrut, S.L., Avda de los Deportes s/n, 46600 Alzira, Spain

* Work partially funded by contract No. 8I079 and CICYT project No. 1FD97-0977-C02-02A from the Spanish Ministerio de Educación y Cultura.

Abstract

The work presented in this paper is part of a system developed for fruit sorting. The machine vision unit is part of a distributed control system in which several machine vision modules can be integrated with a control module and a user interface unit. The machine vision unit is an embedded module based on a Pentium III processor with an image acquisition card and no user interface. Each machine vision module can process two lines at a time. The system performs fruit size and colour sorting in RGB and IR at 15 fruits/second using the aforementioned hardware.

Keywords: Fruit inspection, Colour, On-line classification, Real time.

1. Introduction

The fruit and vegetable market is becoming highly selective, requiring suppliers to distribute their goods according to high standards of quality and presentation. In recent years, a number of fruit sorting and grading systems have appeared to fulfil the needs of the fruit processing industry. Present sorting systems tend to combine an electronic weighing system with a vision-based sorting and grading unit that also measures size, together with a friendly user interface that enables definition of classification parameters, reconfiguration of the outputs and maintenance of production statistics.

Some commercially available systems are approaching this objective, but prices are becoming almost prohibitive for the small and medium companies that try to remain competitive. Most of the systems found on the market are based on special architectures, for instance DSP-based processor boards, hardware implementations of special-purpose algorithms, VME architectures, etc.

This is the case for many Spanish fruit packing companies, which are usually small, while agricultural products are price-sensitive and face a highly competitive market such as that of the European Union. The work presented in this paper is the result of a project partially funded by an agricultural machinery company, Maxfrut S.L., with the participation of an electronics design and manufacturing company, Dismuntel SAL, the Digital Signal Processing Group at the University of Valencia and the Computer Vision Group at the University Jaume I in Spain.

Previous work by the same team [1] was directed at integrating existing control and weight systems, but it was limited by the capabilities of that system, while trying to reduce costs by using special-purpose image acquisition devices [2] designed for the project. The idea was therefore to build a new system that integrates all parts of a fruit sorter (mechanics, control, weight and vision) in a flexible way. From the very beginning, the criterion was that the system should be conceived as an open platform, ready to evolve and to incorporate, without major changes, new customer requirements or upgrades of any of its modules, thereby avoiding the obsolescence of its design or components.

Knowledge in the computer vision field has progressed significantly in recent years, and hardware improves very fast, providing powerful electronics and low-cost architectures thanks to its standardisation and widespread use. Therefore, one of the objectives of the project was to develop a system using standard hardware whenever possible, which is the basis of a low-cost architecture, while meeting the requirements of the system, mainly in speed and accuracy of the measurements.

The result of this work is a system able to control up to 10 lanes (although not limited to that number), classifying fruits according to their weight, size and colour and distributing them to different outputs at a maximum speed of approximately 15 fruits per second. The speed limitation of the system, in the case of the vision module, is imposed by the constraints of the standard image acquisition devices used.

2. System overview

As pointed out in the previous section, the complete system is a flexible modular system consisting of:

1) a central control unit,
2) a user interface and storage unit,
3) a set of weight modules,
4) a set of vision modules,
5) a set of output control units.

The central control unit manages all the information about devices and sensors. It also manages the encoder and generates synchronisation signals to the weight, vision and output modules.

It controls the speed of the conveyor belt and gathers information from all the other modules during the fruit sorting process, such as weight measurements, colour and size estimations, error messages, configuration messages, etc.

The control unit is linked to all other modules through a CAN (Controller Area Network) bus, which allows real-time communication for control purposes. All real-time information is sent across this bus, such as synchronisation signals, classification results and control orders to sensors and devices. CAN interfaces for PC-based and embedded systems have been developed within the project by the electronics company. The control unit is also connected via LAN (Local Area Network) to the user interface module and the vision modules. All messages that do not have to meet real-time requirements are sent through the LAN using an Ethernet protocol.

3. Machine vision module

Each machine vision module is in charge of visually inspecting the fruits, estimating their size and classifying them according to their colour properties. Each vision module is able to inspect two lines at the same time; it is an embedded system based on a PC architecture without user interface, controlled through the CAN and LAN by the central control unit and the user interface module.

The vision module is composed of a PC motherboard with a Pentium III processor and a commercial image acquisition card. It is configured as an embedded system with no display or input devices, running software previously stored on a flash memory card. The application software of the vision module runs under the DOS operating system, using a DOS extender to work in protected mode.

The colour and IR cameras are also commercial cameras, providing non-interlaced video output and an asynchronous reset facility. The cameras also use progressive scan to avoid image blurring due to the high-speed movement of the objects. Processing results are sent through the CAN bus to the control unit. While the vision system is in the on-line state, it also listens to the LAN and can receive messages from the user interface and the control unit.

3.2. Image analysis process

Image acquisition

Fruits on the line are singulated and rotated by transport rollers (Figure 1). When a new fruit enters the illumination chamber, a synchronisation signal from the control unit through the CAN bus warns the vision module to send a trigger signal to the camera and an acquisition request to the frame grabber.

The illumination chamber has been designed to provide diffuse illumination over the fruit surfaces, with the aim of avoiding highlights and specular reflections. This is achieved by illuminating the fruits indirectly, with the light beams reflected off the chamber walls, which are coated with matte white. The walls of the illumination chamber have a semi-circular shape in order to direct light onto the fruit surfaces from as many directions as possible, simulating diffuse illumination (Figure 1). The lamps cover the visible and near-IR range of the spectrum, and the cameras are fitted with the corresponding filters to acquire RGB and IR images. For the IR channel, the camera is fitted with a high-pass filter at 700 nm.

Figure 1. View of the acquisition chamber of the machine vision module.

Fruit location

Because of the synchronised image acquisition process, in every image the fruits are located at approximately the same place with respect to the image coordinates, but they still have to be singulated in the image, since they may touch each other and have different sizes. To identify and singulate every fruit, an algorithm based on projections is used. After the image is segmented, a projection histogram is calculated along the abscissa axis, that is, along the movement direction. Since most fruits are approximately round, their projections usually show a modal shape, and the projection of all fruits in the line appears as the intersection of several modal shapes with their corresponding maxima and minima. The algorithm looks for minima in the projection histogram, which correspond to the fruit limits along the abscissa axis.
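As an illustration of the projection-based fruit location just described, the following minimal sketch builds the projection histogram of a segmented binary mask along the movement direction and takes sufficiently deep local minima as fruit boundaries. It is written in Python with NumPy purely for readability and is not the authors' implementation; the function name, the min_width guard and the valley_ratio threshold are illustrative assumptions.

    import numpy as np

    def singulate_fruits(mask, min_width=20, valley_ratio=0.5):
        """Split a segmented fruit mask into per-fruit column intervals.

        mask         : 2-D bool/uint8 array, non-zero on fruit pixels.
        min_width    : minimum accepted fruit width in pixels (illustrative).
        valley_ratio : a local minimum splits two touching fruits only if it
                       falls below this fraction of the run's peak (illustrative).
        Returns a list of (start_col, end_col) intervals along the movement axis.
        """
        # Projection histogram along the abscissa (movement) axis:
        # number of fruit pixels in every image column.
        profile = (np.asarray(mask) > 0).sum(axis=0)

        # Runs of non-empty columns: isolated fruits or groups of touching fruits.
        nonzero = (profile > 0).astype(np.int8)
        starts = np.flatnonzero(np.diff(np.r_[0, nonzero]) == 1)
        ends = np.flatnonzero(np.diff(np.r_[nonzero, 0]) == -1)

        fruits = []
        for s, e in zip(starts, ends):
            # Inside each run, cut at interior local minima that are deep enough
            # with respect to the run's peak (boundaries between touching fruits).
            cuts = [s]
            peak = profile[s:e + 1].max()
            for x in range(s + 1, e):
                if (profile[x] <= profile[x - 1] and profile[x] <= profile[x + 1]
                        and profile[x] < valley_ratio * peak):
                    cuts.append(x)
            cuts.append(e)
            for a, b in zip(cuts[:-1], cuts[1:]):
                if b - a >= min_width:
                    fruits.append((int(a), int(b)))
        return fruits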

Colour processing

Image analysis begins with colour segmentation by means of a LUT (Look-Up Table). The colour LUT for image segmentation is built prior to the image processing step, using the colour map defined by the user. We have chosen the RGB representation, mainly because the camera provides images in this representation, although the data are further processed and transformed into a format that simplifies their interpretation.

Although the illumination is controlled, changes in the illumination level at different points of the fruit surface arise from the geometry of the light reaching the imaging device [3]. One of the objectives of the segmentation step is to avoid the problems caused by highlights on the fruit surface. To discard the information introduced by highlights, we would have to use either a colour representation that is independent of the illuminant [4] or another representation that allows them to be identified and characterised. We adopted a scheme based on characterising the highlights using a spherical-coordinate representation of the RGB space [5], assuming the dichromatic reflection model [6]. The user can either define clusters in this chromatic space or run a clustering algorithm on a sample image, providing the number of colour clusters; in the latter case the clustering is done in the RGB space using a C-means algorithm [7].
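To make the colour segmentation step concrete, the sketch below shows one possible way to build a colour LUT from cluster centres defined on the angular (chromatic) part of a spherical-coordinate representation of RGB, and then to label an image with a single table lookup per pixel. The 5-bit quantisation per channel, the nearest-centre assignment on the two angles and all names are assumptions made for illustration; the paper does not specify these details.

    import numpy as np

    def rgb_to_spherical(rgb):
        """Map RGB vectors to spherical coordinates (magnitude, polar angle, azimuth).

        Under the dichromatic reflection model, pixels of one body colour spread
        mainly along the magnitude axis (shading and highlights), so clustering on
        the two angles gives a chromatic description less sensitive to them.
        """
        rgb = np.asarray(rgb, dtype=np.float64)
        r = np.linalg.norm(rgb, axis=-1)
        safe = np.maximum(r, 1e-9)
        theta = np.arccos(np.clip(rgb[..., 2] / safe, -1.0, 1.0))  # polar angle
        phi = np.arctan2(rgb[..., 1], rgb[..., 0])                 # azimuth
        return r, theta, phi

    def build_lut(centres, bits=5):
        """Pre-compute a LUT assigning every quantised RGB value to its nearest label.

        centres : (n_labels, 2) array of (theta, phi) cluster centres, e.g. defined
                  by the user or obtained from a C-means run on a sample image.
        bits    : quantisation per channel; 5 bits gives a 32 x 32 x 32 table.
        """
        levels = 1 << bits
        step = 256 // levels
        grid = (np.indices((levels, levels, levels)).reshape(3, -1).T * step
                + step // 2)                      # representative RGB of each cell
        _, theta, phi = rgb_to_spherical(grid)
        feats = np.stack([theta, phi], axis=1)
        dist = np.linalg.norm(feats[:, None, :] - np.asarray(centres)[None, :, :], axis=2)
        return np.argmin(dist, axis=1).astype(np.uint8).reshape(levels, levels, levels)

    def segment(image, lut, bits=5):
        """Label an RGB image (uint8, H x W x 3) with one lookup per pixel."""
        q = (image >> (8 - bits)).astype(np.intp)
        return lut[q[..., 0], q[..., 1], q[..., 2]]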

Size estimation

Size parameters for fruits are usually given in terms of their maximum, minimum or average diameters. The maximum and minimum diameters of a fruit are approximated by the maximum diameters projected on the principal axes of the fruit image. Size in pixel units is converted to millimetres through the calibration factors worked out in the calibration step during system configuration.

3.3. Size and colour classification

Once fruits are singulated and located in the image, two types of information are used to classify them: their size and their colour. The area of each colour label defined by the user is calculated on the fruit surface, and the ratio of every colour label with respect to the total area of the fruit is also calculated. The information for each fruit is stored and, after processing up to four views of every fruit, a decision about its class is worked out and sent to the control unit via the CAN interface.

In order to discard surface areas of the fruit that may have been seen twice or more, repeated views of the same surface patch are estimated by approximating the rotation undergone by each fruit. The repeated surface area is calculated by modelling the fruit as a sphere, taking its radius as the transversal radius measured in the fruit image. Knowing the translation undergone by the conveyor from image to image, the angle rotated by the fruit with respect to the previous view can be estimated.

To classify each fruit into the classes defined by the user, classification rules are applied which are derived from an approximation of the rules provided by a binary decision tree. The decision tree uses the colour ratios defined by the user as feature vectors. Decision trees are generated from a learning process using Murthy's approach [8]. The tree rules are simplified as logical ANDs of rules for each colour ratio c_i of the form r_i < c_i < R_i, where r_i and R_i are the lower and upper bounding constants derived from the tree learning process.
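Such simplified rules can be evaluated as a conjunction of interval tests on the colour ratios. The sketch below is a hypothetical illustration of that evaluation; the class names, colour labels and bounds are invented for the example and are not taken from the system.

    def classify_fruit(colour_ratios, class_rules):
        """Assign a fruit to the first colour class whose interval rules all hold.

        colour_ratios : dict mapping colour label -> ratio of the fruit surface
                        covered by that label (0..1), accumulated over the views.
        class_rules   : dict mapping class name -> list of (label, lower, upper)
                        constraints, a simplified form of the decision-tree rules.
        Returns the matching class name, or None as an illustrative fallback.
        """
        for cls, rules in class_rules.items():
            if all(lo < colour_ratios.get(label, 0.0) < hi
                   for label, lo, hi in rules):
                return cls
        return None

    # Hypothetical example with two classes over the ratios of two colour labels.
    rules = {
        "first_quality": [("orange", 0.80, 1.01), ("green", -0.01, 0.10)],
        "second_quality": [("orange", 0.40, 0.80)],
    }
    print(classify_fruit({"orange": 0.90, "green": 0.05}, rules))  # first_quality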

Figure 2. Colour map editor of the graphical user interface.

3.4. User interface

A graphical user interface with an icon-based direct-manipulation style (Figure 2) allows all the options of the system to be handled, such as the initial set-up, monitoring of statistics and configuration of the classification parameters. The simplicity and usability of the interface make it easy enough to be used by non-technical operators. The parts of the user interface concerning the vision module are:

Colour map editor (Figure 2). It allows the colour labels, and the clusters assigned in the colour space to each colour label, to be defined.

Colour class editor. To define the colour classification rules.

Colour calibration. In order to ensure that the colour measurements of all vision modules in the system are the same for the same objects under the same conditions, the cameras of each vision module are calibrated and a set of colour parameters is calculated to correct the colour measures of each camera. These parameters correspond to a linear model of the colour camera measurements.

Camera and size calibration. To calculate the ratio between pixels in the image and millimetres, the calibration uses a calibration grid and a calibration object. The calibration grid also helps to set the image plane parallel to the object plane.

4. Performance of the system

The maximum number of colour labels is at present fixed to eight, and the maximum number of colour classes is 12, which covers most types of fruits and vegetables on the Spanish market. The system performance has been compared with human criteria, and no significant disagreement has been found between human and machine decisions in colour classification. Noteworthy is the case of fruits that can be assigned to two different classes: human decisions often vary, whereas the machine vision system rarely changes its decision.

Concerning the computation time required for the standard classification, and using a PC-based motherboard with a Pentium II at 450 MHz, the system can process up to 15 fruits/second per line, inspecting two lines at a time. Image processing speed is limited by the image acquisition card and cameras used, due to video signal standard specifications. To increase the image processing rate, non-standard colour cameras or digital cameras with a high frame rate should be used, but the present mechanical specifications of rolling chains and transport lines are not designed to support much higher speeds.

The machine vision module has been tested with satisfactory results in several facilities in Spain grading tomatoes, apples, pears, oranges, peaches, etc. Previous versions of the system [2] have also been working, with satisfactory results, for long periods of time. Size calculation accuracy depends on the camera set-up. Typical camera set-ups (6 mm focal length and 70 cm distance from the camera to the transport lines) in the fruit sorting system developed can provide an error of approximately 1 mm.
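As a worked illustration of the size measurement described in Section 3, where the maximum and minimum diameters are approximated along the principal axes of the fruit image and converted to millimetres with the calibration factor, the following sketch estimates both diameters from a binary fruit mask. The function name, the use of NumPy and the example values are assumptions made for illustration only.

    import numpy as np

    def fruit_diameters_mm(mask, mm_per_pixel):
        """Estimate maximum/minimum fruit diameters from a binary mask (illustrative).

        mask         : 2-D bool array, True on the pixels of one singulated fruit.
        mm_per_pixel : scale factor worked out in the size-calibration step.
        Returns (max_diameter_mm, min_diameter_mm).
        """
        ys, xs = np.nonzero(mask)
        pts = np.stack([xs, ys], axis=1).astype(np.float64)
        pts -= pts.mean(axis=0)

        # Principal axes of the fruit region from the covariance of its pixels.
        cov = np.cov(pts, rowvar=False)
        _, vecs = np.linalg.eigh(cov)        # columns: minor axis, major axis

        # Extent of the region projected onto each principal axis, in pixels.
        proj = pts @ vecs
        extents = proj.max(axis=0) - proj.min(axis=0)

        d_min, d_max = np.sort(extents) * mm_per_pixel
        return d_max, d_min

    # Hypothetical usage: a circular fruit of radius 40 px imaged at 0.5 mm/pixel.
    yy, xx = np.mgrid[0:120, 0:120]
    disc = (xx - 60) ** 2 + (yy - 60) ** 2 <= 40 ** 2
    print(fruit_diameters_mm(disc, 0.5))     # both diameters close to 40 mm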

5. Conclusions and future trends

We have presented a fruit grading machine vision system for colour and size classification which is being commercialised in Spain. The vision system is part of a modular fruit grading system that integrates mechanics, a control unit, a user interface, weight cells and output control units, all linked by a real-time CAN-based network and a LAN for non-real-time communications. The system can process up to 15 fruits per second and sort them according to their weight, size and colour.

The vision module uses a low-cost architecture, consisting of a PC-based embedded system with a commercial image acquisition card, which makes the cost of the system truly competitive with respect to existing systems on the market. The modularity and distributed nature of the approach make the system easy to upgrade in the future, although at present it covers most of the Spanish fruit-market requirements of small and medium fruit packing plants. The processing speed achieved is considered sufficient for the existing mechanics of the transport lines and present packing-house facilities. Future work is directed at adding other fruit inspection capabilities, such as the detection of specific features on the fruit surface, which require more specific image processing techniques, in order to raise quality standards.

References

[1] J. Calpe, D. Gallego, J.M. Mateos, Video Image-Grabber Board for PC. VI Spanish Symposium on Pattern Recognition and Image Analysis, Córdoba, April 1995.
[2] J. Calpe, F. Pla, J. Monfort, P. Diaz and J.C. Boada, Robust Low Cost Vision System for Fruit Grading. 8th Mediterranean Electrotechnical Conference, Bari, pp. 1710-1713, 1996.
[3] G.J. Klinker, S.A. Shafer, T. Kanade, The Measurement of Highlights in Color Images. Int. J. Computer Vision, (2):7-32, 1988.
[4] R. Gershon, The Use of Color in Computational Vision. Ph.D. Thesis, Department of Computer Science, University of Toronto, 1987.
[5] F. Pla, F. Juste, F. Ferri, M. Vicens, Colour Segmentation Based on a Light Reflection Model to Locate Citrus Fruits for Robotic Harvesting. Computers and Electronics in Agriculture, (9):53-70, 1993.
[6] S.A. Shafer, Using Color to Separate Reflection Components. Color Research and Application, 10(4):210-218, 1985.
[7] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.
[8] S.K. Murthy, S. Kasif and S. Salzberg, A System for Induction of Oblique Decision Trees. Journal of Artificial Intelligence Research, (2):1-32, 1994.