
Steven Read for Read the Architecture

Originally published at jacquiread.com

Using large language models in software architecture

It's been maybe 6 months since I started using corporate-approved large language models, and I thought it would be worth writing a summary of how they have affected my work.

The good:

πŸ‘ LLMs have enabled me to change how I approach some low-level software analysis tasks. I have been able to reduce the keyboard and mouse overhead of extracting requirements from existing systems, eg from SQL statements or DAOs, by using one-shot prompting. A high quality review and refactor is faster than doing the whole thing from scratch
πŸ‘ Creating API-level examples such as SQL statements or CSS layouts has worked well
πŸ‘ Image generation makes it easier to create compelling analogies for stakeholders

The bad:

πŸ‘Ž LLMs suffer from the same common misunderstandings and limitations as the general internet - for example, recommending performance when the situation actually requires responsiveness, or not knowing when high availability or fault tolerance is required.
πŸ‘Ž Sources/references matter. They are non-negotiable when you are working on important decisions and enterprise strategy.
πŸ‘Ž The more complex your context, the less helpful the AI becomes. It's fine for throwing together a quick proof of concept, but right now it won't help you uncover years of unintended complexity in poorly documented architectures.
πŸ‘Ž The last place I want the scaffolding of an application to come from is an LLM. Reference architectures provide much more value to your landscape.
πŸ‘Ž Nascent concepts aren't well supported. This is expected, but given that API-level tasks seem to be a strong point, it matters if you think an LLM will help you learn a NEW language - as opposed to a new-to-you language!

The ugly:

πŸ‘Ί Architecture is as much about why you're doing something as it is about what to do. LLMs score very badly on the why.
πŸ‘Ί The amount of low-quality material on the internet is increasing due to AI, meaning we are all having to resort to heuristics to find good-quality reference material.
πŸ‘Ί The internet and its communities are at risk when corporations offering AI services hide the collaborative environments that created the state of the art in our industry. Scraping laws and terms of use are presently inadequate for balancing the trade-offs. We will see more paywalls, such as Medium and Substack, if these trends continue.

What observations do you have from applying LLMs in your workplace?

©️ Read the Architecture Ltd.
