I gave a talk on Robo-Cropping at the Agri-Tech East Pollinator event last Tuesday (see slides below), and there were some really interesting contrasts at the conference. Before we went, one of my colleagues, Niall, said, “You’ll like this conference – they’ll be talking about using lasers to kill weeds”. I was definitely looking forward to it!
One of the things that was clear from the start of the conference was that the other speakers, and quite a number of the audience, were from an academic background, whereas we’re definitely at the “development” end of “Research and Development”, so it was good to get a different viewpoint. We occupy different niches in different markets, but in agritech our work often starts at the “Proof of Concept” phase. In terms of NASA’s technology readiness levels, the academics generally work at TRL 1 and TRL 2, whereas we take projects from TRL 3 to TRL 7. So when the universities stop and move on to the next challenge (because, from their point of view, the interesting problem has been solved), we step in and accelerate the development to market.
First up was Professor Simon Blackmore, who gave a brief run-down of all the work on robotics in agriculture that’s going on at the National Centre for Precision Farming at Harper Adams University. Some of the themes were quite close to the work we’re doing – on machine vision and harvesting, for example – and I really liked the “back to the fundamental physics” approach of looking at where all the energy is used on a farm. Perhaps surprisingly, it’s all in the ploughing, so anything you can do to reduce this – like microtillage, permanent planting positions, or just using much lighter machinery – really impacts the carbon footprint and energy bills. And yes, he even mentioned laser weeding!
Next, Dr Andre Rosendo from Cambridge showed us some slides on soft robotics, including a video of a Baxter robot trying to harvest lettuces in a field. Although we did eventually see it succeeding, there were a lot of mashed and trampled lettuces in the video – it reminded me a lot of the first few months of work with our fruit-picking robot. It also struck me as a classic case of “we’ve got a robot – what can we do with it?” rather than starting with the problem (harvesting lettuces with less manual effort) and working out the best technology for solving it. But the soft robotics side of his presentation was fascinating, and got me thinking about the links between soft robotics and collaborative robots, or cobots.
The final talk was from Professor Tom Duckett about the automated harvesting of broccoli – or, more precisely, the automated detection of broccoli heads. Tom’s talk was particularly memorable for his bravery in using two separate laptops for the presentation, and for a live demonstration of ROS talking to a Kinect – which promptly crashed, a situation we were all too familiar with in our robot lab a few months ago, before we worked some of the kinks out of our latest demo. I liked the machine vision part of his talk, especially the fact that they were using real data gathered from fields in Spain and the UK. Our work on machine vision, especially the project that involved Deep Convolutional Neural Networks, has shown us the importance of gathering as much real-world data as you can, especially if you’ve got diverse sites of interest.
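One practical consequence of having data from diverse sites is that you want to hold out entire sites when evaluating a model, so test accuracy reflects genuine cross-site generalisation rather than memorised field conditions. Here’s a minimal sketch of that idea – the records, labels, and site names are all hypothetical, not from any of the projects mentioned above:

```python
def split_by_site(samples, holdout_sites):
    """Split image records so that entire sites are held out for testing.

    samples: list of (image_path, label, site) tuples.
    holdout_sites: set of site names reserved for evaluation.
    """
    train = [s for s in samples if s[2] not in holdout_sites]
    test = [s for s in samples if s[2] in holdout_sites]
    return train, test

# Hypothetical records: broccoli-head images from two sites.
data = [
    ("img_001.jpg", "broccoli", "uk_field"),
    ("img_002.jpg", "not_broccoli", "uk_field"),
    ("img_003.jpg", "broccoli", "spain_field"),
    ("img_004.jpg", "broccoli", "spain_field"),
]

# Hold out the Spanish site entirely: it never appears in training,
# so test performance measures how well the model transfers to an
# unseen location, not how well it memorised one field.
train, test = split_by_site(data, {"spain_field"})
```

A random per-image split would leak site-specific lighting, soil, and camera conditions into both halves; splitting by site is the simplest way to catch a model that only works in the field it was trained on.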
It’s clear that the academic research presented by the other speakers and the product development work we’ve done, while different, are two pieces of the same puzzle. The clear desire from farmers for innovative solutions is a third piece, but what seemed to be missing was a disruptive company with the clarity of vision and deep enough pockets to invest the money to get the benefit of that innovation. It’s not so much “the Dyson of Agritech” – as one of the delegates suggested – that’s needed, but “the Netflix of Agritech”. It’s not engineering resource that’s missing (Cambridge Consultants can provide that!) but the willingness to be disruptive. We’re going to the World Agri-Tech Investment Summit next month – I wonder if we’ll find them there?