“The greater danger lies in setting our aim too low, and achieving our mark.” Michelangelo di Lodovico Buonarroti Simoni (1475–1564), paraphrased
Current AI news has us thinking about intelligent software: products so intelligent that they can understand what we are saying, grasp context, and even predict what we might need before we tell them. Products breaking ground in this space include Siri, Echo, Google Now, and IBM Watson. Everyone is expecting fully self-driving cars to go mainstream shortly. Some of us even want our computers to pick the perfect holiday presents.
So, what if your company doesn’t have a big team of AI experts? I’d suggest you stay in the race. In the early 2000s at Ingenuity Systems, my team and I designed a system to enable machine-assisted, crowd-sourced knowledge extraction from published articles in genomics. At that time, natural language understanding was not accurate enough to automate extraction outright, but it was good enough to help a distributed team quickly and efficiently create structured data. Using this approach we built a large, high-quality knowledge base, the foundation of Ingenuity’s product portfolio, and the major driver of value for the business. “[T]he foundation of Ingenuity’s product portfolio is the Ingenuity Knowledge Base, which together with software applications, allow researchers to interpret large amounts of biological data in order to guide scientific experiments and medical treatment decisions.” Since then, automated knowledge base construction has come a long way, so this sort of approach is more accessible than ever.
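The machine-assisted workflow above can be sketched as a simple human-in-the-loop pipeline: the machine proposes candidate facts with a confidence score, and a curator confirms or rejects each one before it enters the structured knowledge base. This is a minimal illustration; the extractor, the facts, and all names here are hypothetical stand-ins, not the actual Ingenuity system.

```python
# A minimal human-in-the-loop curation sketch (all names hypothetical).
# The machine proposes candidate facts; only reviewer-confirmed facts
# enter the structured knowledge base.

def propose_facts(sentence):
    """Stand-in for an NLP extractor: returns (subject, relation, object,
    confidence) guesses. A real extractor would actually parse the text."""
    if " activates " in sentence:
        subj, _, obj = sentence.rstrip(".").partition(" activates ")
        return [(subj, "activates", obj, 0.8)]
    return []

def curate(sentences, review):
    """Run extraction, then apply the reviewer's judgment to each proposal."""
    knowledge_base = []
    for s in sentences:
        for fact in propose_facts(s):
            verdict = review(fact)  # human accepts, corrects, or rejects
            if verdict is not None:
                knowledge_base.append(verdict)
    return knowledge_base

sentences = ["TP53 activates CDKN1A."]
# A reviewer who accepts proposals above a confidence threshold.
kb = curate(sentences, review=lambda f: f[:3] if f[3] >= 0.5 else None)
print(kb)  # -> [('TP53', 'activates', 'CDKN1A')]
```

The point of the design is that the machine only needs to be good enough to make reviewing faster than authoring from scratch; the human correction step is what guarantees the quality of the resulting structured data.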
So why bring semantics into your product? In 2008 I helped my team at Microsoft demonstrate that grounding the user experience in familiar domain concepts can radically increase the usability of a complex task. (Our case study was online self-support for malfunctioning personal computers.) We published an article with all the details, including our testing methodology, here: Ontology Models for Interaction Design.
In that project we created an application whose core was a knowledge graph similar to the one described by the team at LinkedIn (although much smaller). In our case study, the graph linked concepts familiar to our users with concepts describing all of the ways in which the personal computer or computer game could malfunction. The interface allowed faceted browsing over the graph of concepts. It assisted users in traversing that graph, starting with their observations of the problem, and leading them to the code or knowledge base articles that could solve the problem. We used machine-assisted tagging to build both the graph itself and the association of nodes to solutions. Here again, human correction of the machine-predicted concepts and associations allowed us to quickly construct high-quality structured data.
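The faceted-browsing idea can be sketched with a toy version of such a graph: user-facing symptom concepts link to underlying fault concepts, which link to solution articles, and each symptom the user selects narrows the candidate set by intersection. The symptoms, faults, and article names below are all invented for illustration, not taken from the actual case study.

```python
# Hypothetical miniature of a support knowledge graph. Edges link
# user-facing symptom concepts to fault concepts, and fault concepts
# to the knowledge-base articles that resolve them.
symptom_to_fault = {
    "game crashes on launch": {"outdated video driver", "corrupt game files"},
    "screen flickers": {"outdated video driver", "loose display cable"},
    "slow frame rate": {"outdated video driver", "background processes"},
}
fault_to_article = {
    "outdated video driver": "KB-1001: Update your graphics driver",
    "corrupt game files": "KB-1002: Verify and repair game files",
    "loose display cable": "KB-1003: Check display connections",
    "background processes": "KB-1004: Close resource-heavy apps",
}

def faceted_search(observations):
    """Intersect the fault sets reachable from each selected symptom,
    then map the surviving faults to candidate solution articles."""
    fault_sets = [symptom_to_fault[o] for o in observations]
    candidates = set.intersection(*fault_sets) if fault_sets else set()
    return sorted(fault_to_article[f] for f in candidates)

# Selecting two symptoms narrows the results to their common cause.
print(faceted_search(["game crashes on launch", "screen flickers"]))
# -> ['KB-1001: Update your graphics driver']
```

Each facet the user adds is another traversal constraint on the graph, which is why the interaction feels like being led from an observation toward a solution rather than searching a flat list.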
A little bit of AI is better than none — and you can always build on it later.