Meetup Summary: AI-Powered Coding Tools: A Side-by-Side Look

September 26th, 2024

In September 2024, the NOVA Software Architecture Roundtable Meetup group gathered at the headquarters of Solution Street for a meetup on several AI-powered coding tools. There were four presenters, each summarizing a coding tool and discussing aspects such as functionality, security and privacy concerns, cost and IDE integration.

Below are brief summaries of each presentation. Enjoy!


Overview – Jeff Schuman

Jeff Schuman, Managing Director at Solution Street, welcomed everyone to the meetup. He explained that the night would consist of a discussion and comparison of four popular tools: ChatGPT, Amazon Q Developer, Claude.ai and GitHub Copilot. The goal was to explore the strengths of each tool without declaring an overall winner, since the best tool depends on individual use cases. He also introduced a comparison chart describing the various aspects of each tool; the finished chart was shown at the end of the meetup and made available to all attendees.

ChatGPT – Vikas Grover

In his presentation on using ChatGPT as a coding tool, Vikas Grover highlighted how AI is transforming the development landscape by offering productivity enhancements, especially at major companies like Amazon and IBM. He explained that AI tools such as ChatGPT can assist developers by generating code, writing test cases, and providing explanations, all of which streamline the coding process and reduce the need for manual intervention. Vikas demonstrated ChatGPT’s integration into VS Code, showing how developers can use it to generate API endpoints or test cases with ease. He emphasized that while ChatGPT can handle various tasks quickly, developers must be cautious about the code’s quality, advising them to review and understand the AI-generated code before deploying it into production to avoid functional issues.

Vikas also explored ChatGPT’s flexibility in handling different coding tasks, noting that it allows users to configure various models and customize settings, such as adjusting the temperature to get more precise or more creative answers. He praised ChatGPT’s ability to assist with repetitive tasks, like generating unit tests and optimizing code, which can significantly boost developer efficiency. However, he stressed the importance of clear and well-structured prompts to maximize the tool’s effectiveness, particularly for more complex coding projects. Vikas also touched on ChatGPT’s compliance with GDPR and CCPA, noting that it does not store session data, and discussed its ability to work across multiple files, enabling developers to compare code and ensure consistency across projects.
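For readers who want to experiment with the temperature setting directly, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt are illustrative and not taken from the demo:

```python
# Minimal sketch: asking a chat model for a unit test, with a low
# temperature for more deterministic output (model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # any available chat model
    temperature=0.2,       # lower = more precise, higher = more creative
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a pytest unit test for a function "
                                    "add(a, b) that returns the sum of two numbers."},
    ],
)

print(response.choices[0].message.content)
```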

Claude.ai – Matthew Rigdon

Matt Rigdon’s presentation on Claude.ai focused on demonstrating its capabilities as a code assist tool through a series of practical, real-world coding scenarios. He began by showing how the tool can generate and style a login page using React and Tailwind CSS. By providing simple instructions and an image, Matt was able to iteratively improve the design, adding components like a password field without needing to provide extensive details. He highlighted Claude’s ability to maintain consistency in design and code quality across iterations, making it useful for developers who want to accelerate design tasks while ensuring cohesion.

One key feature Matt emphasized was the ability to store project knowledge, allowing Claude to retain information between chats. This capability helps when working on larger projects, as developers can add code to a project’s context and reference it in future sessions, promoting continuity and collaboration. He demonstrated this with an example where Claude added previously generated code to a new chat and successfully recalled it for further use, illustrating its powerful project management capabilities.

Matt also explored a more complex use case by asking Claude to generate a class and validation logic for an API based on business requirements provided in a CSV file. Claude produced a validation class using the FluentValidation library, following the precise field constraints and types specified in the data. Additionally, he walked through unit test generation using xUnit, stressing the importance of providing explicit instructions to ensure thorough and accurate test coverage. Matt guided Claude to further refine the unit tests by consolidating them with xUnit’s InlineData feature, resulting in cleaner and more efficient test cases.
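Matt’s demo used C# with xUnit, but the consolidation idea translates to other stacks as well; as a rough analogue, here is a short Python sketch in which pytest’s parametrize plays the role of xUnit’s InlineData (the validation rule is hypothetical):

```python
# Analogous sketch in Python: pytest's parametrize plays the same role as
# xUnit's [InlineData], collapsing several near-identical tests into one.
import pytest


def is_valid_quantity(value: int) -> bool:
    """Hypothetical validation rule: quantity must be between 1 and 100."""
    return 1 <= value <= 100


@pytest.mark.parametrize(
    "quantity, expected",
    [
        (0, False),    # below the lower bound
        (1, True),     # lower bound
        (50, True),    # typical value
        (100, True),   # upper bound
        (101, False),  # above the upper bound
    ],
)
def test_is_valid_quantity(quantity, expected):
    assert is_valid_quantity(quantity) is expected
```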

In conclusion, Matt highlighted Claude.ai’s strengths in accelerating coding tasks, iterating on designs, and ensuring code consistency. While the AI-generated code might not always be immediately production-ready, Claude can greatly speed up development processes, assist with learning, and improve code quality through iterative refinement.

Amazon Q Developer – Steve Radich

Steve Radich’s presentation on Amazon Q Developer explored its origins and capabilities as a tool designed to improve developer productivity. Originally branded as CodeWhisperer, it has been rebranded under Amazon’s broader Q suite of AI tools. Steve emphasized that while tools like Amazon Q can boost productivity by up to 40%, they are still prone to errors, often behaving like an “arrogant college intern” that requires careful oversight. He noted that these tools are not replacing jobs but are making developers more efficient.

Steve demonstrated how Amazon Q integrates with AWS services and development environments like VS Code, showing how it can help generate and explain code or provide insights from AWS console queries. He walked through the process of creating a simple Flask app using Amazon Q, showing how the tool can sometimes make mistakes but eventually provide the correct solution. Additionally, he highlighted the use of Dev Containers for isolated development environments and the tool’s ability to offer step-by-step suggestions in response to plain English requirements.
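For a sense of scale, the kind of Flask app such a plain-English prompt produces looks roughly like the sketch below; the route and response are illustrative rather than taken from the demo:

```python
# A minimal Flask app of the sort a plain-English prompt might scaffold
# (the route and payload are illustrative, not from the demo itself).
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    """Simple health-check endpoint returning JSON."""
    return jsonify(status="ok")


if __name__ == "__main__":
    app.run(debug=True, port=5000)
```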

A standout feature of Amazon Q Developer is its ability to handle code transformations—such as upgrading projects from Java 8 to Java 17—which significantly reduces manual effort for developers. While useful, Steve pointed out that these tools still require human intervention to ensure correctness, particularly when dealing with complex code or large projects. Overall, Amazon Q Developer offers valuable support for developers, but its limitations mean that developers must stay vigilant when using it.

GitHub Copilot – Ryan Gehl

Ryan Gehl’s presentation on GitHub Copilot focused on its advanced features for code assistance, particularly highlighting code completion and Copilot Chat. With a background in solving Big Data and AI problems, Ryan showcased how GitHub Copilot enhances productivity by offering intelligent code suggestions. He demonstrated the tool’s capabilities in comment completion and mapping line completion, noting how it anticipates developer needs and provides contextually relevant suggestions. Despite its advanced features, Ryan acknowledged that the tool sometimes needs refinement to align perfectly with user intent.

Ryan illustrated how GitHub Copilot improves code writing through real-time code completion. He shared examples where the tool initially misinterpreted the intended functionality but adjusted its suggestions based on updated comments, showing its iterative learning process. Additionally, he explored the chat interface, which allows users to ask questions about their code and receive explanations or modifications, enhancing understanding and flexibility in development. This chat feature also helps generate commit messages, demonstrating the tool’s versatility.
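As a hypothetical illustration of that comment-driven flow, the developer types only the comment and the function signature, and Copilot proposes a body along these lines (the function and data shape are made up for illustration):

```python
# Return only the orders whose total exceeds the given threshold,
# sorted from largest to smallest total.
def filter_large_orders(orders: list[dict], threshold: float) -> list[dict]:
    large = [o for o in orders if o.get("total", 0) > threshold]
    return sorted(large, key=lambda o: o["total"], reverse=True)
```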

The discussion included a critical look at the tool’s impact on code quality and developer roles. Ryan and the audience debated the effectiveness of AI-generated code and its implications for junior developers, suggesting that while Copilot can boost efficiency, it still requires human oversight to ensure code quality. Ryan concluded by touching on the use of AI in searching for similar code across projects, highlighting both its potential and the ongoing need for careful human review in code development and deployment.

Wrap Up – Jeff Schuman

Jeff wrapped up the meetup with a summary of the evening’s events and key takeaways. He announced that the comparison chart and a video of the event would be made available on the Meetup website, encouraging attendees to review them at their convenience. Jeff emphasized the importance of exploring various AI code assist tools, noting that many offer free versions, so users should try different ones to find what best suits their needs. He suggested considering personal use cases, such as learning new technologies or getting quick help on a project, and pointed out that understanding licensing, security, and privacy details is crucial for those working with clients.

Jeff highlighted the significance of code reviews in working with AI tools, stating that developing strong code review skills will help in evaluating AI-generated code. He mentioned that while tools like GitHub Copilot can be beneficial, their output should still be scrutinized and reviewed carefully. Jeff encouraged attendees to experiment with multiple tools simultaneously to find the most effective ones for their specific needs and to continue the discussion about AI tools in the Meetup comments section.

In closing, Jeff thanked the presenters—Steve, Ryan, Matt, and Vikas—for their voluntary contributions and the attendees for their participation. He expressed appreciation for the engaging discussions and feedback and encouraged everyone to continue exploring and sharing their experiences with AI code assistance tools.

The presenters worked together to create a handy comparison chart of the code assist tools discussed during the meetup. Here is a link to the chart: Solution Street AI Code Assist Tools Comparison Chart

The full meetup can be viewed using this link.