Sizing is a topic that solutions managers typically leave until the end, after decisions about the application have been settled. But many variables can impact the final size requirement. Across our customer base we have seen sizing and the number of environments determined by predicted data volumes, the types of environments that need to be supported and the budget available.
Technical architects spend time debating which environments are right for their business, and of course this is no easy decision. Often the business changes its mind, data volumes increase (often with little or no advance warning), data sources vary, different teams need access, and with all this performance issues creep in.
Production – this one is a must, so it is easy to say yes to. It’s perhaps the easiest of the estimates, as long as the solutions team is able to predict the volume of data.
Undersizing is a common problem here, for several reasons. The most common is that the solution has been far more successful than anticipated and has attracted more users and/or data sources. The second is that the procurement team has persuaded the solutions managers they can make do with fewer resources. Finally, we sometimes see incorrect assumptions used in the sizing itself.
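To make the sizing assumptions explicit (and easy to challenge before procurement gets involved), a back-of-envelope calculation can help. The sketch below is purely illustrative – the function name, the growth model (simple compound annual growth) and every figure in it are our own assumptions, not a vendor sizing formula:

```python
# Back-of-envelope production storage sizing (illustrative sketch only).
# All parameters are hypothetical assumptions for discussion, not guidance.

def estimate_storage_gb(daily_volume_gb, retention_days, annual_growth_rate,
                        years, overhead_factor=1.3):
    """Estimate online storage needed at the end of the planning horizon.

    daily_volume_gb    -- current daily ingest (assumed figure)
    retention_days     -- how long data is kept online
    annual_growth_rate -- e.g. 0.25 for 25% year-on-year growth
    years              -- planning horizon in years
    overhead_factor    -- headroom for indexes, staging and temp space
    """
    # Compound the daily ingest forward to the end of the horizon,
    # then multiply by retention and an overhead allowance.
    grown_daily = daily_volume_gb * (1 + annual_growth_rate) ** years
    return grown_daily * retention_days * overhead_factor

# Example: 10 GB/day today, 90-day retention, 25% annual growth, 2-year horizon
print(round(estimate_storage_gb(10, 90, 0.25, 2)))  # ~1828 GB
```

Even a rough model like this surfaces the conversation that matters: a 25% growth assumption roughly doubles the requirement over three years, which is exactly the kind of number that should be agreed with the business up front rather than discovered late.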
Test – what will be tested, when, and how frequently are the key questions here.
Development – Increasingly we are finding customers trying to minimize the number of environments. However, for data quality a development server is key to have in place. Don’t let the development server be an afterthought.
The most effective way we have found to determine optimum sizing is an in-depth workshop with an experienced architect. Such a workshop typically requires a lead time of two-to-four weeks to set up, as experts review requirements and work on proposed options. Beware the fallacy of budget constraints – companies may try to save money by reducing environments, but this can end up costing more in the long run.
Sometimes, though, solutions managers discover very late in the implementation cycle that they need to revisit their sizing or number of environments. If resizing has to take place, additional budget secured and the installation reworked, this can have a real impact on time, resources and cost – and, more importantly, on the time it takes to start gaining business benefit.
In such situations, we have found three workarounds:
- Spend time with the vendor’s architect team to understand what is required now and moving forwards – get advice
- Understand the business’s expectations and requirements for the next 24 months so it’s a robust, scalable solution
- Get back on track as soon as possible so the business can realize value from improved data quality/analytics/access to Hadoop etc.
There is also an interesting article by David Loshin on virtualised environments.
Communication is king throughout the sizing exercise – between the technical teams of both vendor and customer, and between the business and IT teams, about exact usage, data volumes and, critically, growth plans for the coming 12-to-18 months.
Across all organizations we see these debates happening – our advice is to get this topic out on the table as early as possible. It is critical to the success of any project: it will make the deployment smoother and reduce the time to adoption, and therefore the time to value (which is what really counts).
We would love to hear from you – does any of the above resonate with you? Have you had good/bad experiences? Share your thoughts with firstname.lastname@example.org
Follow me @hermon100 for more hot topics and tips from Caroline at the Coalface!