Computers today can recognise objects in photographs. This ability underpins many familiar applications, such as Facebook tagging, Google Image Search, Google Goggles, and automated passport checking at UK borders. Yet a significant restriction remains: computers can recognise objects only in photographs. Their ability to recognise objects in drawings and paintings - in artwork of any kind - is strictly limited. If this limitation can be overcome, many more applications will become possible.

One is a new way to search the web for images, in which a drawing (say) is dragged from the desktop into a search bar and both paintings and photographs are returned to the user (at present a user gets back only the same sort of image as was dragged into the search bar). Another is the automated production of catalogues for taxonomy, which matters to scientists faced with tens of thousands of microscopic creatures; species catalogues are currently hand-drawn, so automation would be a significant advance for them.

The output of the programme would also allow ordinary photographs to be converted into icons. This is less dry than it sounds: it could help the visually impaired gain access to photographic content. If photographs and drawings can be linked in the way this project has in mind, then objects in photographs could be turned into icons rendered by a set of raised pins. There would be a symbol for a car, say, not unlike one a child might draw - and in fact this is very close to the icons blind artists draw. This would allow the visually impaired to read photographs in newspapers or textbooks, and to share the holiday snaps of family and friends.

This proposal is about building the basic technology that underpins these applications, and quite possibly others too. Key to it is lifting the barrier that computers face today: allowing them to recognise objects no matter how they are depicted.