
Critical Decision Points to Maintain an Agile Architecture

April 16, 2019

Written by

David Purdy, Aaron Stevenson, Marc Gajdosik, and Jeremy Brody


Developers make hundreds of micro and macro decisions over the course of any software project. Some seem relatively innocuous, yet have an outsized effect downstream. Several Cantina engineers got together and reviewed the key inflection points we pay special attention to, after learning some hard lessons.

Stakeholder Requirements

Your first priority as an architect or system designer will almost always be to involve all necessary stakeholders to discover requirements as soon as possible.

Nothing derails a project faster than finding out that something you’ve spent months putting together needs substantial refactoring to support an essential requirement. Frustratingly, these requirements can be the hardest to surface during the initial phases of a project. This concern is typically associated with seemingly last-minute audit and security requirements, but remember the rule of context: the niche you live and work in is likely just as foreign to others as theirs would be to you. Starting these conversations early is crucial to the long-term success and timeliness of first deployments. The more sensitive the data involved, the larger the organization, and the more visible your project, the more important it is to get ahead of these potential issues. When your requirements are opaque and there is no buy-in from key stakeholders, treat it like any other blocker: be proactive, and raise red flags early.

Data Read/Write Balance

A critical first decision when architecting a scalable web application is knowing the balance of reads and writes your datastores will need to support. The answer ends up underpinning many aspects of the application as it is initially developed. The wrong technologies or architectures may work just fine for a while, but can break down under massive scale or load. The simplest early question you can ask is whether the application is expected to be write-heavy or read-heavy:

  • Write-heavy applications suffer from consistency issues as they scale, so careful attention should be paid early to the model’s write completion, data distribution, and batching procedures. Write-heavy applications like Twitter often embrace a stream model of writing, with messaging as the backbone that supports scalable write semantics. Concurrency patterns like publish/subscribe (pub/sub), and databases built for distributed writes like Google Spanner, can help; see the sketch after this list. (Also remember: logs and databases are duals of one another.)
  • Read-heavy applications are perhaps the easiest to scale as they grow. Many out-of-the-box technologies facilitate this model, including scores of caching mechanisms, high-performance key-value stores, and replicated read-only nodes for popular storage systems like MongoDB.
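
To make the stream-of-writes idea concrete, here is a minimal in-process sketch of a pub/sub write path in TypeScript. The `Topic` class, `WriteEvent` shape, and `flushBatch` helper are all illustrative assumptions; a production system would use a durable broker (Kafka, for example) rather than in-memory fan-out.

```typescript
// Minimal in-process sketch of a pub/sub write path: producers publish
// write events to a topic; a subscriber batches them and flushes to the
// datastore. All names here (WriteEvent, Topic, flushBatch) are illustrative.

type WriteEvent = { key: string; value: string; at: number };

class Topic<T> {
  private subscribers: Array<(event: T) => void> = [];

  subscribe(fn: (event: T) => void): void {
    this.subscribers.push(fn);
  }

  publish(event: T): void {
    // Fan out to every subscriber; a real broker would persist the
    // event in a log before acknowledging the producer.
    for (const fn of this.subscribers) fn(event);
  }
}

const writes = new Topic<WriteEvent>();
const batch: WriteEvent[] = [];
const BATCH_SIZE = 100;

writes.subscribe((event) => {
  batch.push(event);
  if (batch.length >= BATCH_SIZE) {
    flushBatch(batch.splice(0, batch.length));
  }
});

function flushBatch(events: WriteEvent[]): void {
  // Stand-in for one bulk write to the underlying store.
  // (A real system would also flush on a timer for stragglers.)
  console.log(`flushing ${events.length} writes as a single batch`);
}

// Producers just publish and move on; they never block on the datastore.
for (let i = 0; i < 250; i++) {
  writes.publish({ key: `user:${i}`, value: "hello", at: Date.now() });
}
```

The point of the pattern is that producers are decoupled from storage: write pressure lands on the log first, and the store absorbs it in batches at its own pace.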

Data Consistency

No one can escape the CAP theorem, but the choice only comes into play when a system fault or network failure partitions the system (the P in the triad). In that case, application designers and architects have to ask the messy and difficult question: which can this product better tolerate, a failure of consistency or a failure of availability?

  • Consistency - If it is acceptable for your application to serve different states from different nodes at the same time, then consistency isn’t paramount. This presents when displaying a social media feed to two different users, each getting slightly different results.
  • Availability - If always producing a result for a request is the hard requirement, the data storage solutions of the NoSQL movement, or implementing BASE semantics, are viable choices. If, however, it is important to provide the same, correct answer to all simultaneous requests (think financial transactions, banks, stock exchanges), then consistency is paramount and it is better to return no result than the wrong one. This can produce lethargic responses or locking states, but that is often worth the cost of a correct response. In that modality, traditional relational database systems with ACID semantics are often the necessary initial choice; a small quorum sketch follows this list.
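
One way to see the trade concretely is with quorum reads and writes over N replicas: requiring W acknowledgments per write and R replicas per read guarantees a read sees the latest write when R + W > N, while smaller quorums favor availability. The sketch below is a single-process illustration with illustrative names (`Replica`, `Versioned`), not a real replication protocol.

```typescript
// Tunable consistency sketch: with N replicas, W write acks, and R read
// replicas, R + W > N means read and write quorums must overlap, so a
// read is guaranteed to see the latest version.

type Versioned = { value: string; version: number };

class Replica {
  private data = new Map<string, Versioned>();
  get(key: string): Versioned | undefined { return this.data.get(key); }
  put(key: string, v: Versioned): void { this.data.set(key, v); }
}

const N = 3;
const replicas = Array.from({ length: N }, () => new Replica());

function write(key: string, value: string, version: number, W: number): void {
  // Acknowledge after W replicas accept; the rest catch up asynchronously.
  replicas.slice(0, W).forEach((r) => r.put(key, { value, version }));
}

function read(key: string, R: number): Versioned | undefined {
  // Query R replicas and keep the highest version seen. These readers
  // overlap the W writers above only when R + W > N.
  return replicas
    .slice(N - R)
    .map((r) => r.get(key))
    .filter((v): v is Versioned => v !== undefined)
    .reduce<Versioned | undefined>(
      (best, v) => (!best || v.version > best.version ? v : best),
      undefined
    );
}

write("balance", "100", 1, 2);   // W = 2
console.log(read("balance", 2)); // R = 2, R + W > N: sees version 1
console.log(read("balance", 1)); // R = 1, R + W = N: may miss the write
```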

Domain/Data Models

When considering what the domain of a given system should be (see domain-driven design), start from the system’s context: information that passes through the system but is not inherent to that context can be set aside as a document rather than made part of the domain model.

For example, take a system that returns weather data to a user, but which gets that data from another, third-party service. A first-pass solution might persist the data coming back from the third party as-is, replicating its domain as its own. This forces the architecture to manage a more complicated domain with many relationships it will likely never use, but will have to maintain over time. It also obscures an opportunity to improve data access performance and simplicity: a smaller relational database combined with a document store. Problems like this can be solved with paradigms such as ports and adapters, sketched below. By fully understanding the domain of the system, and excluding information not inherent to that context, the architect can make storage technology decisions that vastly improve the performance and maintainability of the design.
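
Here is a minimal TypeScript sketch of that split, assuming a hypothetical third-party payload shape (`ProviderPayload`) and injected `fetchPayload`/`saveRawDocument` helpers. The port (`WeatherPort`) is what the domain depends on; the adapter maps the provider’s rich payload down to the few fields the domain actually needs, while stashing the raw response as a document.

```typescript
// Ports-and-adapters sketch for the weather example. The domain keeps
// only the fields it uses; the raw third-party payload is set aside as
// an opaque document. Provider field names are assumptions.

// Domain model: only what the system's context actually needs.
interface Forecast {
  city: string;
  temperatureC: number;
  retrievedAt: Date;
}

// Port: the interface the domain depends on.
interface WeatherPort {
  currentForecast(city: string): Promise<Forecast>;
}

// Shape of a hypothetical third-party response, far richer than the domain.
interface ProviderPayload {
  location: { name: string; lat: number; lon: number };
  observation: { temp_c: number; humidity: number; pressure_mb: number };
  [extra: string]: unknown;
}

// Adapter: translates the provider's domain into ours and stashes the raw
// payload in a document store instead of forcing it into relational tables.
class ThirdPartyWeatherAdapter implements WeatherPort {
  constructor(
    private fetchPayload: (city: string) => Promise<ProviderPayload>,
    private saveRawDocument: (doc: ProviderPayload) => Promise<void>
  ) {}

  async currentForecast(city: string): Promise<Forecast> {
    const payload = await this.fetchPayload(city);
    await this.saveRawDocument(payload); // full response kept as a document
    return {
      city: payload.location.name,
      temperatureC: payload.observation.temp_c,
      retrievedAt: new Date(),
    };
  }
}
```

The relational side now stores only the three-field `Forecast`, and swapping weather providers later means writing a new adapter, not reworking the domain.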

Variability of Load

An API that receives massive load spikes, such as Facebook during a disaster or public event, will have drastically different requirements than one that receives expected, hourly metrics from air quality sensors.

The Facebook example also requires significant focus on concurrency techniques during development and on automatic scaling paradigms during deployment. If project constraints dictate deployment to an environment without support for auto-scaling, serious attention should be paid to whether those constraints are antithetical to the project’s goals.

Leveraging architectures that can meter requests effectively (queues, hot tables, etc.) can alleviate some of these problems; a minimal queue sketch follows.
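
As a rough illustration of metering, this sketch enqueues a burst of incoming jobs and drains them with a small, fixed worker pool so that downstream pressure stays bounded. The `Job` shape, the 50ms handling cost, and the concurrency limit are illustrative assumptions.

```typescript
// Metering a burst through a queue: work is enqueued immediately, while a
// fixed number of workers drain it at a sustainable rate.

type Job = { id: number };

const queue: Job[] = [];
const CONCURRENCY = 4;

async function handle(job: Job): Promise<void> {
  // Stand-in for the real request handler hitting downstream systems.
  await new Promise((resolve) => setTimeout(resolve, 50));
  console.log(`handled job ${job.id}`);
}

async function worker(): Promise<void> {
  while (queue.length > 0) {
    const job = queue.shift();
    if (job) await handle(job); // at most CONCURRENCY jobs in flight
  }
}

// A spike of 100 requests arrives at once...
for (let i = 0; i < 100; i++) queue.push({ id: i });

// ...but downstream load is capped by the size of the worker pool.
Promise.all(Array.from({ length: CONCURRENCY }, () => worker()));
```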

If data consistency is not the paramount concern in your system (as mentioned earlier), a replicated data system can help keep the pressure off singular points of entry when requests are load-balanced or routed intelligently, as sketched below.
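
A minimal sketch of that routing idea: send writes to the primary and round-robin reads across replicas. The `DataNode` interface and the naive SELECT detection are assumptions for illustration; a real router would also account for replication lag and node health.

```typescript
// Route reads across replicas and writes to a single primary, so no one
// node becomes the entry point for all traffic.

interface DataNode {
  name: string;
  query(sql: string): Promise<unknown>;
}

class ReadWriteRouter {
  private next = 0;

  constructor(private primary: DataNode, private replicas: DataNode[]) {}

  route(sql: string): DataNode {
    const isRead = /^\s*select/i.test(sql);
    if (!isRead || this.replicas.length === 0) return this.primary;
    // Round-robin across replicas; smarter routing could weigh load or lag.
    const replica = this.replicas[this.next % this.replicas.length];
    this.next++;
    return replica;
  }
}

// Usage: reads fan out across replicas, writes always hit the primary.
const fake = (name: string): DataNode => ({
  name,
  query: async (sql) => console.log(`${name} ran: ${sql}`),
});
const router = new ReadWriteRouter(fake("primary"), [fake("r1"), fake("r2")]);
router.route("SELECT * FROM feed").query("SELECT * FROM feed");
router.route("INSERT INTO posts VALUES (1)").query("INSERT INTO posts VALUES (1)");
```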

While serverless computing provides significant benefits such as built-in scaling, it demands more purely functional, stateless code structures, as sketched below. For greenfield implementations this might be fine, but migrating legacy systems can prove a challenge.
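
For a sense of what “stateless” means in practice, here is a sketch of a serverless-style HTTP handler. The event and response shapes loosely follow the pattern of AWS Lambda HTTP handlers, but the field names here are simplified assumptions; the key property is that nothing survives in process memory between invocations.

```typescript
// Stateless serverless-style handler: everything it needs arrives in the
// event or comes from external services; no in-process state is carried
// between invocations. Event/response shapes are simplified assumptions.

interface HttpEvent {
  path: string;
  body: string | null;
}

interface HttpResponse {
  statusCode: number;
  body: string;
}

// A pure, stateless handler: same event in, same response out. State that
// must survive between invocations belongs in a database or cache, not here.
export async function handler(event: HttpEvent): Promise<HttpResponse> {
  const payload = event.body ? JSON.parse(event.body) : {};
  return {
    statusCode: 200,
    body: JSON.stringify({ path: event.path, received: payload }),
  };
}
```

Code shaped this way is what lets the platform scale by simply running more copies of the function; any hidden dependence on local state breaks that guarantee.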

Takeaways

Swapping structural components or architectures down the road is often a very expensive proposition. Sometimes, seemingly small considerations, like decoupling a domain, provide a more significant benefit than first appears. Taking a moment to weigh these concerns before making technical decisions pays dividends in mitigated future cost.
