Despite their impressive capabilities, experts caution that AI tools like ChatGPT are not as intelligent as they may initially appear. Careful usage is essential.
Unless you have been living under a rock, you have almost certainly heard of ChatGPT, a conversational AI chatbot developed by OpenAI in San Francisco, California. Trained on an extensive corpus of text, ChatGPT astounds users with its uncannily human-like responses to questions on a wide range of subjects. Because it is conversational, users can refine its answers through follow-up prompts: even when an initial response seems off-kilter, the exchange often converges on accurate results, including working software code.
Researchers harness ChatGPT for tasks such as code debugging, code annotation, cross-language software translation, and routine operations like data visualization. A March preprint indicated that the program successfully tackled 76% of 184 tasks in an introductory bioinformatics course after a single attempt, and 97% within seven attempts, demonstrating its potential usefulness.
For researchers who find coding uncomfortable or lack the resources to employ full-time programmers, chatbots like ChatGPT can be empowering tools, democratizing access to coding expertise.
However, despite their apparent intellectual capabilities, chatbots are not truly intelligent. They have been likened to stochastic parrots, mindlessly echoing what they have learned before. Amy Ko, a computer scientist from the University of Washington, Seattle, aptly compares ChatGPT’s limitations to those of a former Jeopardy contestant desperately trying to catch up with current pop culture, devoid of consciousness, agency, morality, embodiment, or emotional inner life. It is worth noting that ChatGPT’s training data only extend until 2021.
In essence, ChatGPT and similar tools based on large language models (LLMs), including Microsoft Bing and GitHub Copilot, are indeed powerful programming aids, but their use must be approached with caution. Here are six recommended strategies for doing so effectively.
Select Your Applications Wisely
Chatbots excel at small, specific programming tasks such as data loading, basic manipulations, website creation, and visualization. However, they fall short in the realm of software engineering, which involves broader considerations like test frameworks, maintainable code writing, and understanding the trade-offs inherent in building complex systems. Recognizing these limitations is crucial.
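To make that concrete, here is the sort of small, self-contained task a chatbot typically handles well: loading tabular data and computing a simple per-group summary. The CSV columns and values below are invented for illustration, not taken from any real dataset.

```python
# The kind of small, well-specified task a chatbot handles well:
# parse a CSV of measurements and compute a per-group mean.
import csv
import io
from collections import defaultdict
from statistics import mean

def group_means(csv_text, group_col, value_col):
    """Return the mean of value_col for each distinct value of group_col."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row[group_col]].append(float(row[value_col]))
    return {g: mean(vals) for g, vals in groups.items()}

# Hypothetical data, for illustration only.
data = """sample,expression
control,1.0
control,2.0
treated,4.0
treated,6.0
"""
print(group_means(data, "sample", "expression"))
# {'control': 1.5, 'treated': 5.0}
```

A task like this has a crisp specification and an easily checked answer, which is exactly why chatbots tend to get it right; the broader engineering concerns above (testing, maintainability, trade-offs) have no such crisp specification.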
Trust, But Verify
Chatbots may provide responses that sound knowledgeable, but they can still be prone to errors. They might misunderstand questions or offer incorrect answers. While glaring mistakes become evident when code fails to run, subtle errors can go unnoticed, particularly single-line bugs that are easy to fix but challenging to identify. It is essential to read and test the generated code carefully to ensure its accuracy and functionality, especially in “edge cases” or situations that push the boundaries of expected behavior.
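As a sketch of what that verification looks like in practice, suppose a chatbot is asked for a function that computes the GC content of a DNA sequence. The function below is a hypothetical example written for this article, not actual chatbot output; the point is the assertions, which exercise exactly the edge cases (empty input, lowercase bases) where a plausible-looking answer quietly breaks.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (case-insensitive)."""
    if not seq:          # edge case: empty input (a naive version divides by zero)
        return 0.0
    seq = seq.upper()    # edge case: lowercase input (a naive version counts nothing)
    return (seq.count("G") + seq.count("C")) / len(seq)

# Don't just eyeball generated code; exercise its edge cases.
assert gc_content("GCGC") == 1.0
assert gc_content("atat") == 0.0   # lowercase would trip a case-sensitive version
assert gc_content("") == 0.0       # empty input would otherwise divide by zero
assert abs(gc_content("ACGT") - 0.5) < 1e-9
```

A handful of assertions like these takes a minute to write and catches precisely the single-line bugs that are easy to fix but hard to spot by reading alone.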
The code produced by chatbots reflects the quality of their training data. Unfortunately, the quality of code found online, which serves as the training material for these chatbots, is often subpar in terms of efficiency and robustness. As a result, the generated code may not perform well on large datasets and can even introduce security vulnerabilities. Vigilance is necessary, particularly when dealing with critical applications or sensitive data.
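One classic vulnerability that circulates widely in online code, and can therefore surface in generated code, is building a database query by pasting user input directly into the SQL string. The sketch below uses Python's standard sqlite3 module and an invented table to show the injection risk and the parameterized alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE genes (name TEXT, expression REAL)")
conn.execute("INSERT INTO genes VALUES ('BRCA1', 2.5)")

user_input = "BRCA1' OR '1'='1"  # a malicious "gene name"

# Fragile pattern common in online code (and thus in training data):
#   query = f"SELECT * FROM genes WHERE name = '{user_input}'"
# The injected OR clause would make that query return every row.

# Robust version: a placeholder lets the driver escape the value safely.
rows = conn.execute(
    "SELECT * FROM genes WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] because the malicious string matches no real gene name
```

The parameterized form is no longer to write than the fragile one, which is why it is worth explicitly asking a chatbot for it, and checking that you got it.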
Embrace an Iterative Approach
Using chatbots for coding is not a one-and-done experience; it should be treated as an ongoing conversation. Users write a prompt, receive code, scrutinize the response skeptically, and ask for revisions or further detail. Gangqing (Michael) Hu, a bioinformatics expert, leveraged this iterative workflow to develop OPTIMAL, a method for optimizing chatbot prompts. Effective communication and continuous refinement are key to achieving desirable results.
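The shape of that loop, prompt, run, feed the error back, re-prompt, can be sketched in code. Everything below is a toy illustration: `ask_chatbot` is a deterministic stand-in with two canned answers (a buggy first attempt, then a fix), where a real session would call a chatbot API or paste into a chat window.

```python
# Canned "chatbot" answers for illustration: the first attempt is buggy,
# the second (after seeing the error) is corrected.
CANNED_ANSWERS = iter([
    "result = sum([1, 2, 3]) / 0",   # buggy first attempt
    "result = sum([1, 2, 3]) / 3",   # fixed after error feedback
])

def ask_chatbot(prompt):
    # Stand-in for a real chatbot call; ignores the prompt and replays answers.
    return next(CANNED_ANSWERS)

def iterate(task, max_attempts=5):
    """Prompt, run the returned code, and re-prompt with the error until it works."""
    prompt = task
    for attempt in range(1, max_attempts + 1):
        code = ask_chatbot(prompt)
        namespace = {}
        try:
            exec(code, namespace)
            return attempt, namespace["result"]
        except Exception as err:
            # Feed the failure back, as a human user would in the chat.
            prompt = f"{task}\nYour code raised: {err!r}. Please fix it."
    raise RuntimeError(f"no working code after {max_attempts} attempts")

attempts, value = iterate("compute the mean of [1, 2, 3]")
print(attempts, value)
```

The human stays in the loop at the "scrutinize skeptically" step; this sketch only automates the mechanical part of the back-and-forth.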
Anthropomorphize the Chatbot
While recognizing that chatbots lack human characteristics, it can be beneficial to interact with them as if they were human. Adopting an analogy of a diligent, yet inexperienced summer intern can guide the approach. Clearly articulate your problems, break them down into manageable pieces, and specify the tools or libraries you prefer. Such instructions help the chatbot navigate the vast landscape of possible responses and improve the chances of receiving relevant and accurate solutions.
Embrace Change and Evolving Capabilities
Language models like ChatGPT are constantly evolving and growing more powerful. Prompt lengths are increasing, enabling more nuanced responses, and new tools continue to emerge. One such example is Code Interpreter, a plugin that transforms ChatGPT into a digital data analyst, allowing users to upload datasets, ask questions about the data, and obtain downloadable results. Remaining adaptable and staying updated with the latest advancements in language models offer researchers exciting possibilities.
In conclusion, ChatGPT and similar AI-powered chatbots hold remarkable potential as programming aids. They can assist with tasks such as code debugging, annotation, translation between programming languages, and routine operations, offering convenience and efficiency to researchers and programmers alike. But their use must be approached with caution.