
The Implications of AI's Newfound Ability to Code for the IT Industry
Although software is said to be "eating the world," the effort is severely hampered by a shortage of development talent and the growing complexity of software development. Jobs for software developers are growing at a rate far outstripping the supply of qualified applicants, and even those already employed as programmers spend the vast majority of their time not on creating new features but on maintenance tasks such as testing, security patching, code review, and bug fixing. Together, these two trends heighten the need to raise developer productivity, and recent advances in AI-driven natural language processing (NLP) are beginning to deliver it. The newest generation of state-of-the-art NLP models, generative pre-trained transformers (GPTs), can translate between many languages, including from natural-language text to code, thanks to their size, underlying architecture, training data, and training regime. Incorporating this potent capability into developer tools is already proving invaluable, both in improving developer efficiency and in lowering the technical barriers to software development for a wider audience.
Software Struggles to Eat the World
The proliferation of software across a wide variety of use cases and business applications over the past decade and a half has been accompanied by a parallel increase in the number of jobs that require coding expertise. A 2017 Code.org survey suggested there were 500,000 unfilled programming positions in the United States alone, due in large part to the fact that only about 43,000 people graduate with a computer science degree each year, a figure that falls far short of demand. Furthermore, it typically takes three to five years for a new graduate to become qualified for these positions; in that time, the number of openings is projected to grow by another 28% (U.S. Bureau of Labor Statistics). Many businesses' efforts to develop significant software are therefore hampered by a lack of qualified workers.
Even after a business has assembled a team of developers, obstacles remain. The quality, security, and timeliness requirements for software development keep growing in complexity, yet developers spend only 30%-40% of their time on actual feature development or on improving existing code (Newstack). The rest goes to writing tests, implementing fixes, and addressing security concerns, and senior developers must also devote time to reviewing the work of junior colleagues and mentoring them. Together, these factors weigh on a software project's schedule and budget. They also present an opportunity: pairing AI with human programmers to address some of the issues that have long plagued the field.
AI + Developers = Pair Programming Dream Team
AI-Driven Coding Tools
Inspired by the successes of deep learning-based NLP, a number of AI-based tools have appeared in the past few years with the goal of enhancing developers' efficiency and output. In particular, the models behind these tools can parse code to locate errors and defects, eliminating some of the more time-consuming steps typically associated with a code review. Recently released tools such as CodeGuru and DeepCode have been able to find vulnerabilities that are difficult for humans to identify, and found issues in 50% of the pull requests they examined (AI-News).
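These commercial tools are proprietary, but the flavor of automated code review is easy to illustrate. The minimal sketch below (not CodeGuru's or DeepCode's actual implementation) uses Python's built-in ast module to flag a classic defect, the mutable default argument, the kind of subtle issue an automated reviewer can surface in a pull request before a human ever reads it.

    import ast

    # Code under "review". Mutable default arguments are shared across
    # calls, so state leaks between invocations -- a classic Python bug.
    SOURCE = '''
    def append_item(item, bucket=[]):
        bucket.append(item)
        return bucket
    '''

    def find_mutable_defaults(source: str) -> list[str]:
        """Flag function definitions whose defaults are mutable literals."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                for default in node.args.defaults:
                    if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                        findings.append(
                            f"line {node.lineno}: '{node.name}' uses a "
                            "mutable default argument"
                        )
        return findings

    for finding in find_mutable_defaults(SOURCE):
        print(finding)
    # -> line 2: 'append_item' uses a mutable default argument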
Cutting-edge NLP methods have also been shown to increase development efficiency and quality by auto-completing code, monitoring code output for mistakes, and even auto-generating unit tests. Codex, the OpenAI model recently integrated into GitHub's Copilot, does this with remarkable precision, even generating code from natural-language input (Figure 1). This ability comes from its training material, which included both code and snippets of natural language. According to an early study conducted by GitHub, pairing a developer with the AI can cut in half the time needed to complete a routine job like creating an HTTP server. Because the underlying model can auto-complete entire sections of code from a single comment, it also greatly lowers the barrier to entry for novice and non-technical programmers.
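To make that workflow concrete, here is a hedged illustration of the HTTP-server task GitHub measured: the developer writes only the leading comment, and a Codex-style assistant proposes the rest. The completion shown is representative of what such a tool might produce (using only Python's standard library), not a captured Copilot output.

    # Serve the files in the current directory over HTTP on port 8000.
    # (Only the comment above would be human-written; everything below is
    # the kind of completion a Codex-style assistant might propose.)
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    def run(port: int = 8000) -> None:
        server = HTTPServer(("", port), SimpleHTTPRequestHandler)
        print(f"Serving on http://localhost:{port}")
        server.serve_forever()  # handle requests until interrupted

    if __name__ == "__main__":
        run()

Boilerplate of exactly this sort is where GitHub's study found the assistant saving roughly half the developer's time.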
Coding Made More Accessible
Thanks to the impressive translation capabilities of this new generation of NLP models, nearly anyone can now use natural language to generate the snippets of code they need. The output can target many languages, including general-purpose languages such as Python and Java as well as query languages such as SQL used to retrieve information from relational and nonrelational databases. Codex and similar tools are useful when the desired software is not too complex: building basic websites, deriving Excel formulas, and translating user intent into database queries are all areas where the technique has proven helpful. Even so, the developers of these models acknowledge that they are not always spot on; the generated code is usually mostly accurate but needs tweaking by a human programmer. For a novice, such a model can make a human coding instructor more efficient, with the instructor stepping in only when the AI runs into trouble; for teams, that can mean less time spent supervising junior developers and a boost in their output.
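As an illustration of translating user intent into a database query, a request like "show the five most recent orders for customer 42" might come back as the SELECT statement below. The SQL is a hypothetical model output, and the orders schema and sample rows are assumptions made up for the example.

    import sqlite3

    # Natural-language request: "show the five most recent orders for
    # customer 42". The SELECT below is the kind of query a text-to-code
    # model might generate; the schema here is a made-up example.
    GENERATED_SQL = """
    SELECT id, total, created_at
    FROM orders
    WHERE customer_id = ?
    ORDER BY created_at DESC
    LIMIT 5;
    """

    # Build a tiny in-memory database so the example is self-contained.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders "
        "(id INTEGER, customer_id INTEGER, total REAL, created_at TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?, ?)",
        [(1, 42, 19.99, "2022-01-03"),
         (2, 42, 5.50, "2022-02-14"),
         (3, 7, 99.00, "2022-02-15")],
    )

    for row in conn.execute(GENERATED_SQL, (42,)):
        print(row)
    # -> (2, 5.5, '2022-02-14')
    # -> (1, 19.99, '2022-01-03')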
From Human Language to Computer Language
So how do most of these tools actually work? The workhorse behind them is the GPT, and most modern versions are based on GPT-3. OpenAI pioneered this design and trained it on an enormous corpus of online text, including prose, digital books, tweets, open-source repositories, comments, and articles. The model's primary purpose was to make language generation more realistic; Codex was ultimately developed as a byproduct.
Several key differences separate the GPT generations from earlier deep learning-based NLP models: the number of parameters, the state-of-the-art underlying architecture, the quantity of training data, and the manner in which the models were trained. Like their predecessors, these models are neural networks, so the complexity of the relationships they can capture in data grows with the number of parameters; larger models pick up on more nuanced patterns than smaller ones. They were also trained in a self-supervised, multitask fashion. Most neural networks are designed to carry out a single function and require labeled data to learn it; AlphaGo is a good illustration, as it excels at Go but cannot play chess. Learning that requires annotated data is called supervised learning. Rather than relying on labeled data, GPT-3 was trained simply to predict the next word in a sequence, an objective that turns out to underpin many tasks, including translation, text generation, and question answering.
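A toy sketch makes the self-supervised objective concrete: the training targets are simply the input text shifted by one token, so raw text labels itself. The count-based bigram "model" below is a deliberate oversimplification of what a GPT learns, but the way the targets are constructed is the same.

    from collections import Counter, defaultdict

    # Self-supervised next-word prediction: the targets are just the
    # input sequence shifted by one position, so no annotation is needed.
    text = "to be or not to be that is the question".split()
    pairs = list(zip(text[:-1], text[1:]))  # (context, next-word) pairs
    print(pairs[:3])  # -> [('to', 'be'), ('be', 'or'), ('or', 'not')]

    # A toy bigram "language model": count which word follows which.
    counts = defaultdict(Counter)
    for context, nxt in pairs:
        counts[context][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the most frequent next word seen in training."""
        return counts[word].most_common(1)[0][0]

    print(predict_next("to"))  # -> 'be' (it followed 'to' twice)

Every (context, next-word) pair is a free training example harvested from raw text, which is why these models can scale to corpora that no labeling effort could ever cover.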
On top of that, transformer models now outperform the long short-term memory and recurrent neural network models that were the previous NLP standards. A transformer processes a sentence as a whole rather than token by token, uses attention to track how strongly each pair of words relates, and encodes each token's position in the sentence (Figure 2). This makes it possible to build larger models with greater learning capacity without the parallelization and iteration difficulties of the past, and it fixes the older architectures' tendency to lose track of connections between distant chunks of text. OpenAI's API makes GPT-3 accessible to developers, further expanding the reach of products that use artificial intelligence to assist with programming.
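The sketch below shows both ingredients in miniature with NumPy: a sinusoidal positional encoding that stamps each token with its place in the sequence, and scaled dot-product attention that scores every token against every other token in one matrix operation. It is a didactic single-head sketch of the transformer mechanism, not a trained, multi-head model.

    import numpy as np

    def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
        """Sinusoidal encoding: gives each position a unique signature."""
        pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
        i = np.arange(d_model)[None, :]              # (1, d_model)
        angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
        enc = np.zeros((seq_len, d_model))
        enc[:, 0::2] = np.sin(angles[:, 0::2])       # even dimensions
        enc[:, 1::2] = np.cos(angles[:, 1::2])       # odd dimensions
        return enc

    def attention(q, k, v):
        """Scaled dot-product attention over the whole sequence at once."""
        d_k = q.shape[-1]
        scores = q @ k.T / np.sqrt(d_k)              # all-pairs relevance
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ v

    seq_len, d_model = 4, 8
    rng = np.random.default_rng(0)
    # Stand-in token embeddings, stamped with their positions.
    x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
    out = attention(x, x, x)  # self-attention: each token attends to all tokens
    print(out.shape)          # -> (4, 8)

Because the attention step is a single matrix multiplication over the entire sequence, it parallelizes far better than an LSTM's token-by-token recurrence, which is precisely the advantage described above.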
Conclusion
Artificial intelligence (AI) is stepping in to help developers raise their productivity so they can keep building the enormous amount of software that today's applications demand. The demand for programmers clearly far exceeds the supply, so businesses are increasingly wise to adopt solutions that boost the productivity and output quality of the programmers they already employ. Thanks to recent dramatic advances in AI-based NLP models, above all the especially powerful GPT-3, the dream of an AI-powered pair programmer for human developers is becoming a reality. Embedded in developers' daily tools, such models can greatly reduce the time spent on mundane but necessary activities like writing tests, while raising code quality through automated reviews and auto-generated snippets. The text-to-code capabilities available today are useful to everyone, from inexperienced developers to non-technical users. Software by itself might not manage to eat the world, but AI can help it along.