DDX Insights

Key considerations when planning your cloud strategy

Written by Frank Nankivell | 21 February 2024

At DDX we work with many different organisations seeking to write or rewrite their cloud strategy. As we advise companies on the best way forward, we keep coming back to a few key considerations for developing a cloud strategy.

Build resilience

Whether you commit to a single public cloud offering or a multi-cloud approach, it is fundamental that your data and systems are resilient to change and disruption. This means going further than simply following a set of ‘well-architected’ guidelines: it means ensuring that contingency measures are readily available when disaster strikes. We advise companies not to base all their decisions on cloud, technology or service providers being ‘too big to fail’, as this in itself leads to bad design patterns. Data storage is a key component of resilience, and it's crucial that backups are available across more than one tenancy. This is not just to counter the fairly unlikely event of a systemic failure at a public cloud provider, but also to mitigate vectors such as ransomware and other data-destructive attacks.
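As a minimal sketch of what ‘backups in more than one tenancy’ can look like, the snippet below copies a backup object from AWS S3 into Google Cloud Storage. It assumes the boto3 and google-cloud-storage libraries are installed and authenticated; the bucket names and object key are hypothetical placeholders.

```python
# Minimal sketch: hold a second copy of each backup in an independent
# tenancy, so one provider (or one compromised account) is never the
# only place the backup exists. Bucket names are placeholders.
import boto3                      # AWS SDK for Python
from google.cloud import storage  # Google Cloud Storage client


def replicate_backup(key: str) -> None:
    # Read the backup object out of the primary tenancy (AWS S3)...
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket="example-primary-backups", Key=key)["Body"].read()

    # ...and write a second copy into an independent tenancy (GCS).
    # For large backups you would stream rather than buffer in memory.
    bucket = storage.Client().bucket("example-offsite-backups")
    bucket.blob(key).upload_from_string(body)


replicate_backup("db/2024-02-21.dump")
```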

Resilience should also be designed across systems, ensuring that business-critical units are modular enough to be redeployed to alternative providers should a critical incident occur. This may seem like over-designing, but having escape options for crucial systems is pivotal.
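As a rough illustration (ours, not a prescribed pattern from any provider) of what ‘modular enough’ can mean, the sketch below hides object storage behind a small interface, so moving a workload between providers becomes a configuration change rather than a rewrite. All class and method names are illustrative.

```python
# Sketch of a provider-agnostic storage seam: business code depends only
# on the Protocol, never on a provider SDK, which keeps the escape route
# open if a critical incident forces a move.
from typing import Protocol


class BlobStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3Store:
    def __init__(self, bucket: str) -> None:
        import boto3
        self._s3, self._bucket = boto3.client("s3"), bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class GCSStore:
    def __init__(self, bucket: str) -> None:
        from google.cloud import storage
        self._bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self._bucket.blob(key).upload_from_string(data)

    def get(self, key: str) -> bytes:
        return self._bucket.blob(key).download_as_bytes()


def archive_invoice(store: BlobStore, invoice_id: str, pdf: bytes) -> None:
    # Business logic sees only the interface, never a provider SDK.
    store.put(f"invoices/{invoice_id}.pdf", pdf)
```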

Know and understand the law(s)

What data you can store, where it can be processed, and what can be shared with whom are crucial questions for any cloud strategy. In larger organisations it is important that CTOs and legal/data teams are closely aligned and speak regularly about the changing legal landscape. Companies now face significant fines for collecting more data than permitted or for storing information in regions without customers' consent. For your strategy, this can mean thinking about how data lakes and data meshes are developed and what ‘hot’ data can be stored in which parts of the world. For companies working in less-regulated countries, it may also mean teams should anticipate updates to legal requirements in the near term.
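One way to keep residency rules enforceable, rather than tribal knowledge, is to encode them in the platform itself. The sketch below is a hypothetical example: the data classes, regions and policy map all stand in for whatever your legal and data teams actually agree.

```python
# Sketch: make data-residency rules executable. The policy map below is
# a hypothetical placeholder; real rules come from your legal and data
# teams and change as the regulatory landscape does.
ALLOWED_REGIONS = {
    "eu_customer_pii": {"eu-west-1", "europe-west2"},
    "telemetry":       {"eu-west-1", "us-east-1", "ap-southeast-2"},
}


def check_residency(data_class: str, region: str) -> None:
    allowed = ALLOWED_REGIONS.get(data_class, set())
    if region not in allowed:
        raise PermissionError(
            f"{data_class!r} may not be stored in {region!r}; "
            f"allowed regions: {sorted(allowed)}"
        )


check_residency("eu_customer_pii", "europe-west2")  # passes silently
check_residency("eu_customer_pii", "us-east-1")     # raises PermissionError
```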

Don’t lift and shift - optimise

At DDX we are huge fans of serverless workloads. Utilising Cloud Functions (GCP) or Lambda (AWS) can significantly reduce costs and improve your systems' performance. However, if you simply ‘lift and shift’ your existing systems from on-prem servers onto like-for-like public cloud servers, you are more likely to increase your costs without gaining any of that performance.
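For readers new to serverless, the sketch below shows roughly how small such a workload can be: a minimal AWS Lambda handler in Python, billed only for the milliseconds it actually runs. It assumes the common API Gateway proxy event shape; your trigger may differ.

```python
# Minimal AWS Lambda handler: no server to provision or patch, and no
# cost while idle. Assumes an API Gateway proxy event as input.
import json


def handler(event, context):
    # queryStringParameters can be absent or None in proxy events.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```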

It's important that your strategy assesses each component and system independently and ensures you leverage the optimal infrastructure for every service. Containerised workloads also allow an increasing degree of optimisation, whilst enabling more interoperability between providers. Whatever your systems are, make sure your strategy gives you the time to properly assess the options and optimise; don't over-simplify.
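To make ‘independently assessed’ concrete, here is a back-of-envelope cost comparison for a single low-traffic service. Every price and traffic figure below is an assumed placeholder, not a quote from any provider; substitute your own numbers per component.

```python
# Back-of-envelope comparison: always-on VM versus a serverless function
# for one low-traffic service. All figures are hypothetical placeholders.
VM_MONTHLY = 70.00          # always-on instance, $/month (assumed)
PER_INVOCATION = 0.0000002  # $/request (assumed)
GB_SECOND = 0.0000167       # $/GB-second of compute (assumed)

requests_per_month = 500_000
avg_duration_s = 0.3
memory_gb = 0.5

serverless_monthly = requests_per_month * (
    PER_INVOCATION + avg_duration_s * memory_gb * GB_SECOND
)

print(f"VM:         ${VM_MONTHLY:.2f}/month")
print(f"Serverless: ${serverless_monthly:.2f}/month")  # ~$1.35 at these rates
# At this (assumed) traffic level serverless is far cheaper; under
# sustained heavy load an always-on server can win. Assess per component.
```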

Be iterative in your rollout plan

Strategies that rely on everything being done yesterday are likely to fail and cause huge internal challenges. For most companies, deciding to change the infrastructure of even a small number of services is challenging, and in the short term will mean increased costs and development time. It's therefore important that your strategy allows for incremental change: piecemeal updates and amendments to systems over time. This ensures there are always ‘wins’ available to present to board-level stakeholders who may or may not be sold on your strategy, whilst avoiding ‘big bangs’ and long nights of QA before systems can be switched over.

In the short term this could increase costs, but it also allows engineers to benchmark each step and ensures you can pivot at any point along your cloud strategy's journey.
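A benchmark need not be elaborate to be useful. The sketch below records a simple latency baseline, using only the Python standard library, that can be re-run after each incremental change; the endpoint URL is a placeholder.

```python
# Sketch: capture a latency baseline before a migration so the same
# numbers can be compared after each increment. URL is a placeholder.
import time
import urllib.request


def latency_samples(url: str, n: int = 50) -> list[float]:
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        samples.append(time.perf_counter() - start)
    return sorted(samples)


samples = latency_samples("https://service.example.internal/health")
print(f"p50: {samples[len(samples) // 2] * 1000:.1f} ms")
print(f"p95: {samples[int(len(samples) * 0.95)] * 1000:.1f} ms")
```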

Grow your team's knowledge

Not understanding a new technology you are using carries risks. It's therefore important that you staff the organisation with a few core team members who have a good fundamental knowledge of the new infrastructure and have been through significant change before. This can often mean re-staffing an organisation temporarily, or working with other companies that have already made the journey.

It's also pivotal that existing team members can up-skill on the new technology before they are expected to use it fully. Luckily, all the major public cloud providers offer comprehensive certifications and training, so your company can upskill quickly. It's important, though, that you create the space to learn and develop internally first instead of pushing staff straight into the work.