Programming computers can be extremely frustrating -- even for the best of coders. The problem is that computers have languages unto themselves, each with a strict syntax: a precise textual notation in which letters, numbers, and symbols combine to form a working function or program. Any of those characters written in the wrong sequence can be catastrophic, and can lead to whole portions of source code needing to be rewritten.
In an effort to simplify the way in which we program computers, researchers from MIT's Computer Science and Artificial Intelligence Laboratory are developing a way to code using what they call ordinary language.
The researchers, led by Regina Barzilay, designed their algorithm to generate code from natural language using semantic parsing, which analyses a string of text to recover its meaning. In other words, their program reads a natural-language description (as close to written English as possible) and converts it into source code. However, it does so on a limited scale.
Their algorithm analyses input from limited sources, such as spreadsheets, web forms, or databases, extracts descriptive words and phrases, and builds code on those foundations. The natural-language system begins writing syntax with minimal information about how that syntax should be written, drawing on the software's parser. It looks for familiar words and phrases associated with particular data types, then uses that information to build a probable data structure that corresponds to source code seen for similar data structures -- much as a spellchecker suggests the most likely word.
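To make the idea concrete, here is a toy sketch of the general technique: matching familiar phrases in a natural-language input description and turning them into read operations for a parser. This is a hand-written illustration under assumed phrase patterns, not the MIT system itself, which *learns* such mappings from data rather than relying on fixed rules.

```python
import re

def phrases_to_ops(spec):
    """Toy mapper: recognise two hard-coded phrase patterns in an
    input description and emit a list of read operations."""
    ops = []
    for sentence in re.split(r"\.\s*", spec):
        s = sentence.lower()
        if not s:
            continue
        m = re.search(r"contains an integer (\w+)", s)
        if m:                       # e.g. "contains an integer n"
            ops.append(("int", m.group(1)))
            continue
        m = re.search(r"(\w+) lines each contain two integers", s)
        if m:                       # e.g. "n lines each contain two integers"
            ops.append(("pairs", m.group(1)))
    return ops

def run_parser(ops, text):
    """Execute the generated read operations against real input."""
    tokens = text.split()
    pos = 0
    values = {}
    for kind, name in ops:
        if kind == "int":           # read a single integer, bind it to name
            values[name] = int(tokens[pos])
            pos += 1
        elif kind == "pairs":       # read a list of pairs, count bound earlier
            count = values[name]
            values["pairs"] = []
            for _ in range(count):
                values["pairs"].append((int(tokens[pos]), int(tokens[pos + 1])))
                pos += 2
    return values

spec = ("The first line contains an integer n. "
        "The next n lines each contain two integers.")
ops = phrases_to_ops(spec)
data = run_parser(ops, "2\n3 4\n5 6")
# data == {"n": 2, "pairs": [(3, 4), (5, 6)]}
```

The real system replaces the hard-coded patterns above with a learned model that scores many candidate parsers and keeps the ones that fit the observed data.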
The researchers tested their system on more than 100 natural-language descriptions of different input formats drawn from past ACM International Collegiate Programming Contests -- a team-based programming competition headquartered at Baylor University. After testing several formats, they found the system could generate the correct syntax more than 70 percent of the time.
The software resembles a simple AI in that it tries different parsers until it finds the correct syntax, building on those data sets and becoming better at recognizing the specified formats. The team found it took only about ten minutes on an ordinary laptop to generate the correct parsers for the natural-language test sets -- quite remarkable given that the software had not previously been exposed to the test source code.
As it stands right now, the algorithm is still in its infancy. However, the team hopes to refine the software so that everyday users with little knowledge of programming can program their computers. They also hope the software will let experienced programmers focus on the primary functions of their code by bypassing routine syntax. It's a promising start for software that may one day understand human language so well that it can interpret slang in order to code.