Sunday, August 3, 2025


AI Code Assistant Tooling: Initial Perspectives from a Developer

I’m writing this article to share my first impressions of using AI code assistant tools during software development.

First, I should mention that I’ve been involved in software development for over 30 years. I’ve worked on mainframes using COBOL/JCL and IMS databases, all the way to the latest Spring Boot frameworks in Java and Kotlin. I’ve programmed in Python, PHP, Perl, C, C++, and Bash. I’ve practiced waterfall, agile, TDD, and BDD. Over the years, I’ve used various databases, IDEs, and text editors like vi and emacs.

Based on this experience, I feel I can speak credibly about what software development entails—and how AI tooling is beginning to shape it.


1. AI Doesn’t Replace Developers

First and foremost, I don’t see AI tools replacing developers. Quite the opposite—AI needs to be driven by a skilled developer to produce the right results. In my experimentation with Claude Code, I encountered many scenarios where domain expertise was critical. Without my background in various frameworks, databases, and design patterns, the responses would have been inadequate—or even counterproductive.


2. You Need the Skills to Drive AI Tools

AI tools don’t inherently know what tech stack you’re using. That’s something the developer must specify in the prompts. This already requires experience and judgment.

Software that goes to production must meet specific standards for maintainability. Therefore, the developer—not the LLM—should dictate the programming language, frameworks, databases, design patterns, and build tools. Teams must define a focused set of technologies, and then ensure the AI generates code accordingly. Again, this reflects the essential role of the developer.
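
For illustration, a stack-setting prompt might look something like this (the stack named here is only an example, not a recommendation):

"Create a user REST microservice using Java 17, Spring Boot 3, and Gradle. Use Spring Data JPA with an H2 in-memory database for development, follow a controller / service / repository layering, and do not introduce any frameworks beyond these."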


3. Functional Requirements Must Be Documented

LLMs are trained primarily on public data, like GitHub repositories. But most company-specific business logic is not public. AI cannot infer these internal processes unless they are documented and provided as part of the prompt.

Ironically, this brings us back to something similar to the waterfall model, where upfront documentation was emphasized. In contrast to the agile mantra "the documentation is in the code," we now need clear, detailed specifications so the AI can generate meaningful code.


4. Be Explicit and Precise in Prompts

Precision matters. In one case, I forgot to specify that the project should use an H2 database for development. As a result, the generated application failed at runtime when trying to connect to a non-existent database server.

This is where developer troubleshooting skills come in. I had to recognize the issue and prompt Claude to include H2 as a development database.
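
To make that fix concrete, here is a minimal sketch of a dev-only embedded H2 configuration for a Spring Boot project. It is illustrative, not the exact code Claude generated; the class and database names are mine, and it assumes the com.h2database:h2 dependency is on the classpath:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

// Active only under the "dev" profile; production keeps its real database.
@Configuration
@Profile("dev")
public class DevDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // In-memory H2 database: no external database server required.
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .setName("devdb")
                .build();
    }
}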


5. Unexpected Compilation Errors

In another case, Claude told me a task was complete and successful—but one of the classes had a compilation error due to a missing type or unresolved symbol.

Once again, it took a developer’s eye to identify and fix the issue. I had to provide Claude with a prompt specifying the file and the line number needing correction.


6. Unexpected Runtime Errors

In one instance, Claude said the application was running and the Swagger UI was accessible. It wasn’t. I received a “site can’t be reached” error.

In another case, Claude indicated a public endpoint was available, but I was getting HTTP 403 (forbidden). I had to guide Claude to modify the endpoint’s access permissions.
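
In Spring Security terms, that second fix usually comes down to one line in the filter chain. A minimal sketch, assuming Spring Security 6 and a hypothetical /api/public/** path:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth
                // The endpoint that was returning HTTP 403: open it up.
                .requestMatchers("/api/public/**").permitAll()
                // Everything else still requires authentication.
                .anyRequest().authenticated());
        return http.build();
    }
}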

These are real-world examples of how developer involvement is still essential.


7. Some Tasks Are Better Done Manually

I’ve also learned that it’s sometimes faster to just do certain tasks manually than to prompt the AI. For example, running a Gradle command or editing a README.md file can often be done in seconds—while Claude might take minutes or generate incomplete results.

Knowing when to intervene manually is a skill that comes from experience with build tools, running local databases, and reviewing running processes.


8. The Good: Well-Structured Code

On the positive side, I was impressed with the code generated for a user microservice. Claude applied good software design principles: separating controllers, services, DTOs, repositories, and security layers.

Having designed and built many microservices myself, I saw a strong resemblance between the structure I would have created and what Claude generated.
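
To make that resemblance concrete, here is a hypothetical skeleton of the layering described above. The names and endpoint are mine, not Claude's actual output:

import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// DTO: the shape the API exposes, decoupled from the persistence entity.
record UserDto(Long id, String email) {}

// Controller: handles HTTP concerns only and delegates to the service layer.
@RestController
@RequestMapping("/api/users")
class UserController {
    private final UserService service;

    UserController(UserService service) {
        this.service = service;
    }

    @GetMapping("/{id}")
    UserDto get(@PathVariable Long id) {
        return service.findById(id);
    }
}

// Service: business rules live here; a repository would handle persistence.
@Service
class UserService {
    UserDto findById(Long id) {
        // ...look the user up via a UserRepository and map entity -> DTO...
        return new UserDto(id, "user@example.com");
    }
}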


9. The Tests Were Lacking

That said, the tests generated were always passing—which is actually a red flag. Upon review, I saw they didn’t cover edge cases. This suggests that while the test scaffolding may be helpful, it can’t be relied upon for thorough coverage.

More work is needed in this area, and developers must continue to play a key role in writing meaningful test cases.
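
For example, the generated tests asserted only the happy path. The edge cases a developer would add look more like this hypothetical JUnit 5 sketch, where UserService.register is an illustrative method rather than real generated code:

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class UserServiceEdgeCaseTest {

    private final UserService service = new UserService();

    @Test
    void rejectsNullEmail() {
        // A null input: exactly the kind of case the generated tests skipped.
        assertThrows(IllegalArgumentException.class, () -> service.register(null));
    }

    @Test
    void rejectsMalformedEmail() {
        assertThrows(IllegalArgumentException.class, () -> service.register("not-an-email"));
    }
}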


10. LLMs Struggle with Legacy Systems

Having worked in mainframe environments with COBOL, JCL, Mark-IV, and IMS, I seriously doubt AI tools will be of much help there. Much of that codebase isn’t publicly available online, and LLMs are not trained on it. So, for now, don’t expect AI to assist much with legacy systems like those.


Conclusion

LLMs can accelerate code generation, but skilled developers are still essential. Crafting good prompts is crucial to getting useful results.

At the end of the day, computers are simple machines, responding to 1s and 0s through layers of logic gates. Everything above that—frameworks, languages, interfaces—is built by humans to abstract complexity.

LLMs are just another abstraction layer. They won’t replace the need for deep knowledge, critical thinking, and engineering judgment. And there's still a great deal of institutional and domain-specific knowledge that LLMs will never have access to.

In short, AI tools are helpful copilots, but we’re still in the driver’s seat.


Thursday, June 19, 2025

Microservices and the Single Responsibility Principle

The Single Responsibility Principle (SRP) is the first software design principle in the well-known SOLID acronym introduced by Bob Martin. SRP is a guiding principle for designing classes and their corresponding functions so that a class has only one reason to change. In other words, the functionality of a class should be implemented to satisfy a single actor, as defined in a use case UML diagram.
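
A small hypothetical Java illustration of the idea: a class serving two actors has two reasons to change, so SRP says to split it:

// Violates SRP: pay calculation answers to finance, report formatting to HR.
class EmployeeService {
    double calculatePay() { return 0.0; }       // changes when finance rules change
    String formatTimesheet() { return ""; }     // changes when HR reporting changes
}

// SRP-compliant split: each class now has exactly one reason to change.
class PayCalculator {
    double calculatePay() { return 0.0; }
}

class TimesheetFormatter {
    String formatTimesheet() { return ""; }
}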

Similarly, there is another principle called the Common Closure Principle (CCP), which states that classes that tend to change together, for the same reasons, should be grouped within the same component or package. This principle is clearly reflected in the Java API packages, as well as in other well-structured programming language APIs.

From an architectural point of view, there is also the concept of a Bounded Context, borrowed from Eric Evans’ book Domain-Driven Design (DDD).

Both CCP and Bounded Contexts can be seen as higher-level design principles that stem from SRP. Microservices, too, should follow the same guiding principles. That is, a microservice should focus on a narrow subset of a given business domain. In fact, when designing a microservices architecture, we should approach the system from a domain-driven design perspective: first delineating the different bounded contexts, and then designing the microservices accordingly.

Therefore, SRP is a principle that applies not only to class-level design but also to the design of microservices architectures.

Saturday, July 27, 2024

Microservices and Standards: Request / Response

Standards are essential for implementing a Microservice Architecture. In this article, I will focus on the structure of requests and responses involved in such an architecture.

It's important to remember that a microservice does not expose its API to the outside world. These microservice APIs are intended to be consumed within the context of the overall application. In fact, a microservice architecture is a design pattern where the implementation of an application's sub-domains is distributed across isolated, independent services. Since we are still dealing with a single, overarching domain, it’s important to ensure that certain data attributes in both request and response types are used consistently across the system.

Typically, a request enters the application’s domain through an API gateway. This entry point functions similarly to the API interface in a monolithic system. Ideally, we aim to define a single API that, through the use of attributes, can handle various functionalities within the system.

The API request is then passed to an application orchestration layer, which is responsible for applying rules, performing validations, and determining the best route (i.e., the appropriate microservice) before forwarding the request. In some cases, the orchestration layer may enrich the request with additional attributes of its own.

Once the request reaches a given microservice, it is processed. This may result in another request being generated—either to another microservice or to an external entity or system. Responses are then generated by the called services and eventually returned to the API gateway, which sends a final response back to the original client.

In many cases, the initial microservice may forward the request to another microservice, forming a chain of services involved in the processing pipeline—second, third, fourth, and so on. This process continues until a final response is produced, traveling back through the same route—first to the orchestration layer and finally to the API gateway. In complex applications, this can involve several, even dozens, of microservices before the final response exits the system.

To accurately correlate the various requests and responses, it is crucial that they all share a unique transaction ID. Additionally, it’s important to identify the originating client that initiated the request. Therefore, having a standardized base structure for both requests and responses across all microservices is essential for maintaining consistent and reliable processing throughout the system.
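
As a sketch of what such a standardized base structure might look like (the field names are illustrative, not a prescribed standard, and each class would live in its own file):

import java.time.Instant;

// Base attributes shared by every request crossing the system. The
// transactionId correlates all hops of one transaction; the clientId
// identifies the originating caller.
abstract class BaseRequest {
    private String transactionId;  // same value across the whole service chain
    private String clientId;       // the client that initiated the request
    private Instant createdAt;     // when the request entered the system
    // getters and setters omitted for brevity
}

abstract class BaseResponse {
    private String transactionId;  // echoed back so callers can correlate
    private String statusCode;     // standardized outcome code
    private Instant respondedAt;
    // getters and setters omitted for brevity
}

Every concrete request and response type in every microservice then extends these bases, so the transaction ID travels with the message through the entire chain.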

Transitioning to a Microservices Architecture - Part 2

Microservices and the Development Organization

by Rubens Gomes

Microservice architecture is based on the principles of modular systems that align with the domain-driven design (DDD) paradigm. That is, the architecture is divided into sub-domains, each with specialized responsibilities, and delineated from other modules by what is known as a bounded context. To develop deep expertise and effectively address the concerns of these sub-domains, it is ideal to have development teams composed of specialists in each particular domain. These teams become the owners of the microservices within their respective sub-domains. This type of organizational structure is key to successfully implementing a microservices architecture.

As stated in Conway's Law:

“Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.”

The implications of Conway's Law are fundamental to the successful implementation of a microservices architecture. Because a microservice is a module with responsibility within a specific domain—and thus has a clear bounded context—assigning dedicated teams to work on specific sub-domains tends to naturally facilitate the development and maintenance of microservices.

In essence, what I’m emphasizing is that to achieve the best results from a Microservice Architecture, an organization must be structured around sub-domain expertise. Microservice architecture goes hand in hand with Conway’s Law. By structuring organizations around sub-domains, you naturally encourage the creation of applications that reflect the structure and purpose of the corresponding microservices. Communication within these expert teams becomes highly cohesive, focused, and aligned with the specific requirements of their sub-domain. This, in turn, supports the development of a clean and efficient Microservice Architecture.

In fact, one way to guide an organization toward a Microservice Architecture is to apply the Reverse Conway Maneuver (also known as Inverse Conway’s Law). This approach suggests that by organizing teams around sub-domain expertise, the system architecture will begin to reflect that structure—resulting in the natural emergence of a Microservice Architecture.


Tuesday, July 9, 2024

Why I Like the Microservices Architecture Style

By Rubens Gomes

In order to explain some of the reasons why I prefer the microservices architecture style, I’ve written this comparison between monoliths and microservices, based on over 30 years of practical, real-life experience in software development across both small and large companies.


Monolith Development Environment Setup

Setting up a development environment is one of the most important steps a developer must take when starting a new job. It involves not only configuring tools (e.g., IDEs, text editors), network access, databases, and source control, but—most importantly—creating an environment that facilitates the development and maintenance of the application.

This is, right off the bat, one of the issues I have with monolithic systems: monoliths are usually very complex to set up. I’ve had real-life experiences where it took over a week just to get a development environment ready for a single monolith application.

Monoliths are often tied to commercially licensed platforms such as J2EE application servers, include numerous libraries, have complicated builds involving multiple components, and use large, complex databases. Getting tests running and learning the entire business domain adds further complexity. Everything in a monolith is orders of magnitude more complicated than in a microservices setup.

In one job, it took me over two weeks to get my environment working for one of the department’s main monolith applications. The system used a J2EE backend server implemented on IBM WebSphere, which required a local installation and extensive configuration across multiple components: databases, system interfaces, and shared libraries. The build process itself was very complex, with different modules relying on various property configuration files.


Monoliths: Long Meetings and Long Releases

I remember, as a senior architect at a large enterprise, attending weekly meetings with project managers, QA leads, and department managers to review features being implemented for upcoming releases. The meeting room wall was covered with lists showing the lifecycle status of each feature.

As release dates approached, we had to coordinate what was ready, what was still in QA, and when we could potentially deploy. Everything took longer—development, testing, cross-team communication, orchestrating different projects, and aligning multiple parallel feature tracks. We also had to coordinate with the Operations team to schedule deployments.

This is, in my opinion, one of the greatest drawbacks of monoliths: how long it takes to get something into production. A single release could take months due to the number of steps and teams involved.

Often, we had to split features into different branches, each moving in parallel. While development was ongoing, production issues would arise, triggering the need for hotfix branches. To make things even more complicated, we had to manage operational logistics: when to deploy, who would deploy, at what time, and how to handle rollbacks if needed.


Microservice Development Environment Setup

Microservices, on the other hand, typically have smaller databases, fewer libraries, simpler business sub-domains, fewer tests, and significantly fewer lines of code. I’ve seen real-life cases where a contractor joined a team in the morning and by the afternoon had their development environment set up and was already coding on one or more microservices.

In my own experience, once we migrated from a monolith to microservices, everything became so much easier. Setting up a development environment in Eclipse or IntelliJ IDEA took only minutes. All I had to do was clone the microservice project, import it into the IDE as a Maven project, and run a build via the CLI or a DevOps platform like Microsoft Azure DevOps. From the IDE, I could begin coding in no time.


Why Is It Easier to Work with Microservices?

Microservices are small, focused applications that are designed to do one thing—and do it well. You no longer have to juggle all the technologies, components, databases, and libraries that a monolith typically involves.

The build process becomes much simpler and faster since it doesn’t depend on various cross-cutting component libraries. Because microservices are focused on a specific sub-domain, the learning curve is much shorter. Our brains are better equipped to focus on one smaller, well-defined problem at a time—as opposed to having to understand a large, all-encompassing monolith.

Getting things up and running, writing tests, implementing features, delivering updates, fixing bugs—everything is so much faster and easier with microservices. You can stay focused on a specific business area and ensure features are delivered to production quickly. Testing is faster, and builds and deployments are simpler.

I’ve seen many cases where a pull request was approved in the morning and the code was in production the same day. Troubleshooting is also easier with microservices since logs can be scoped to individual applications, making it much simpler to trace issues.

Transitioning to a Microservices Architecture – Part 1

By Rubens Gomes

I had the opportunity to serve as a technical lead during the implementation of the microservices architecture for the American Airlines Ticketing department from 2016 to 2023. The Ticketing department is responsible for the booking and payment processing of over 700,000 airline tickets daily. The company’s IT transformation began around 2017, and the ticketing team was among the first to have microservices running in production.

Transitioning to a microservice architecture involves significant changes—not only to the technical architecture and continuous integration/delivery pipelines, but also to the organizational culture and team dynamics. In a microservice architecture, teams become more independent and self-organized, each responsible for a specific part of the business domain. These teams develop deep expertise in their respective sub-domains and take full ownership of development, deployment, and production support.

In addition to these structural and organizational changes, a successful transition to microservices requires several critical foundational components. In future parts, I’ll elaborate on these elements and explain why they are essential for effectively implementing a microservices architecture.

Stay tuned for the next edition of "Transitioning to a Microservices Architecture – Part 2". I have much more to share on this topic, as I had the unique opportunity to witness—and help shape—a real-world microservices implementation from the ground up in a very large enterprise.

Wednesday, October 26, 2022

Installing LENS // The Kubernetes IDE

  • Download and install the latest version of LENS // The Kubernetes IDE.
  • Ensure the cluster "kubeconfig" configuration settings have been previously downloaded following the steps from Installing IBM Cloud CLI and Kubectl.
  • All the cluster configurations should be stored in the KUBECONFIG file (e.g., C:\Users\rubens\.kube\config).

Running LENS

  • Prior to running LENS, ensure you have configured the "Kube" cluster configurations, as LENS will use those configs and the embedded certificates to authenticate itself to the different clusters.
  • If authorization fails, you may have to update the IBM Cloud (Bluemix) certificate files (e.g., C:/Users/rubens/.bluemix/plugins/container-service/clusters/...) with the latest tokens.  To do that, you need to download all the cluster configurations again.
  • LENS uses the kubeconfig file pointed to by the KUBECONFIG environment variable.  Make sure you have the latest kubeconfig files/tokens there (e.g., C:\Users\rubens\.kube\config); if not, you may run into authorization errors.

Lens Authorization Errors

If you run into authorization errors when attempting to connect to a cluster within LENS, follow these steps:

  1. Ensure all previous steps have been followed and that you are running the latest versions of the IBM Cloud CLI, the Kubernetes plugin, and the IBM Cloud Kubernetes Service plugin.
  2. Ensure you have the KUBECONFIG environment variable correctly configured in your environment.  See previous notes.
  3. Ensure that you are logged in to IBM Cloud before running LENS.
  4. Ensure you have the latest configuration/tokens in your KUBECONFIG file.  You can rerun the previous steps to download the cluster configuration.  If you are on Linux, you may consider using a script similar to the one below:

$ cat $HOME/bin/kubetoken.sh
#!/bin/sh -ahu
##
##       author : Rubens Gomes <Rubens.S.Gomes@gmail.com>
##
## written date : October 26, 2022
##
##      purpose : This script is used to update the IBM Cloud Cluster Configuration
##                tokens and settings in the KUBECONFIG ($HOME/.kube/config) file.
##

# local path to IBM Cloud tools
IBMCLOUD="ADD PATH TO IBM CLOUD CLI TOOLS"

# define a clean UNIX binary PATH
PATH=
PATH=${PATH}:/bin
PATH=${PATH}:/sbin
PATH=${PATH}:/usr/bin
PATH=${PATH}:/usr/sbin
PATH=${PATH}:/usr/local/bin
PATH=${PATH}:${IBMCLOUD}/bin
export PATH

# define a clean UNIX LD_LIBRARY_PATH
LD_LIBRARY_PATH=
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib
export LD_LIBRARY_PATH

# log into IBM Cloud using Single Sign-On
ibmcloud login --sso

if [ ${?} -ne 0 ]
then
 echo "Failed to login to IBM Cloud" 1>&2
 exit 1
fi

echo "-------------------------------------------------------------------"
echo "Downloading Cluster Configuration..."
ibmcloud ks cluster config --cluster "<cluster ID>"

if [ ${?} -ne 0 ]
then
 echo "Failed to download IBM Cloud CLUSTER Configuration" 1>&2
 exit 2
fi

echo "Done"
exit 0