First, the usual disclaimer: there is no such thing as actual artificial intelligence.
A recent research paper has shown that programmers who use AI tools to help them code are actually hurting the quality of their code:
“We found that participants with access to an AI assistant often produced more security vulnerabilities than those without access, with particularly significant results for string encryption and SQL injection,” the authors state in their paper. “Surprisingly, we also found that participants provided access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant.”
AI assistants help developers produce code that’s insecure • The Register
(Also, Register: NYU eggheads, really? Bite me.)
This makes perfect sense. So-called artificial intelligence tools are built by ingesting an enormous amount of code written by human beings, learning the correlations in it (what usually comes after X), and outputting new text based on those correlations. GitHub’s Copilot, for example, trawls the publicly available GitHub repositories for its samples.
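Since “what usually comes after X” is doing a lot of work in that sentence, here is a toy sketch of the idea in Python. It is vastly simplified, obviously; the real tools use large neural networks rather than bigram counters, but the principle of correlation without comprehension is the same:

```python
from collections import Counter, defaultdict

def train(tokens: list[str]) -> dict:
    # Count which token usually follows each token in the corpus.
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def suggest(follows: dict, token: str) -> str:
    # Emit the most common follower: pure correlation, with no
    # understanding of what any of the tokens actually mean.
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else ""

corpus = "query = build ( user_input ) ; db . execute ( query ) ;".split()
model = train(corpus)
print(suggest(model, "db"))  # prints "." because that is what follows "db"
```

A model like this will reproduce whatever patterns dominate its training data, good or bad.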
The problem, then, should be obvious. Since the tools need an enormous amount of data to produce reasonable results, they tend not to be too picky about their inputs. And as anyone who has ever pulled a code sample from GitHub or other open repositories has noticed, the quality is not always the best. It is entirely unsurprising that security, one of the trickier areas to code correctly, is an area that suffers when you rely on the public mind.
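To make that concrete with the paper’s own headline example, SQL injection, here is a minimal sketch (mine, not the researchers’), in Python with the standard sqlite3 module. The first function is the pattern that litters public repositories; the second is the boring, parameterized version that should replace it:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The pattern an assistant trained on public code will happily
    # reproduce: building SQL by string interpolation. A username of
    # "x' OR '1'='1" turns the WHERE clause into a tautology.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # The correct version: a placeholder lets the driver bind the
    # value, so attacker input is treated as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", "a@example.com"), (2, "bob", "b@example.com")])

print(find_user_insecure(conn, "x' OR '1'='1"))       # leaks every row
print(find_user_parameterized(conn, "x' OR '1'='1"))  # [], as it should be
```

The two functions look almost identical at a glance, which is precisely the trouble: nothing about the insecure one announces itself as insecure.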
Nor is it surprising that programmers don’t realize this. First, as I said, security programming is tricky. There are lots of ways to break a program, almost none of them obvious. And very often, your language either doesn’t help you or actively works against making your code secure. More importantly, these tools sell themselves as a way to help programmers, to remove a lot of the grunt work, the boilerplate code. The hype is intense. Go to Copilot’s home page, and you are inundated with quotes like this:
Spend less time creating boilerplate and repetitive code patterns, and more time on what matters: building great software. Write a comment describing the logic you want and GitHub Copilot will immediately suggest code to implement the solution.
GitHub Copilot · Your AI pair programmer · GitHub
I couldn’t even find a disclaimer about the quality of the code. I am sure one is there somewhere, but the message they clearly want you to take away is that buying access to Copilot will make you a better, faster, smarter programmer.
Except it won’t, not really. It might be a decent learning tool, if used correctly. You could imagine a world where Copilot is a fancy index to a large instruction book: instead of flipping to the back and looking up concurrency, you ask Copilot for concurrency examples and play with the ones it provides. And every larger-scale coding shop eventually automates the production of boilerplate code. But every shop that does so, every good shop anyway, secures and optimizes that code within an inch of its life. Copilot isn’t sold to be used in those ways. It is sold as a magic bullet, as artificial intelligence making you smarter.
It doesn’t, obviously. There are no magic bullets. There are ways in which these correlation engines can help; I just outlined a couple. But they have to be used properly. And the hype machine is dedicated to the proposition that AI is actually intelligent, that it can replace thought rather than augment it. It cannot, and pretending that it can leads to situations where its use makes us functionally dumber than we were before.
And in this case, it means your credit cards, medical history, and online identity are less secure than they were before its introduction.