Article by: James Baldassarra (VP, Data Sciences) and Liz McLaughlin (Data Scientist)

When we think about traditional business intelligence or analytics solutions, there are well-trodden methods for data extraction, manipulation and storage that have matured over several decades. But today, thanks to the ever-increasing popularity of cloud infrastructures, the long-accepted challenges and limitations of this approach are finally beginning to fall away.

Cloud migration is the most commonly discussed topic across our customer base and arguably the number one priority on many of our customers’ technology roadmaps. Driven by the desire to find a more efficient home for existing enterprise architectures and applications or the need to adopt new cloud-first software, moving to the cloud is now very much the direction of travel. In addition, the move to the cloud has the added benefit of allowing clients to do more with the data they generate and store.

The challenges of traditional analytics

Traditionally, creating an analytics solution involved embarking on lengthy investigations to identify the data needed to satisfy the analytical needs of the business at the time of project inception. This approach included finding on-premise homes for the data points and creating rigid extract, transform and load (ETL) processes that consume data from often only one data source and then transform it into a structure better suited to analytics. From this point, consumable content (dashboards) is created for downstream business users.
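
To make the rigidity concrete, here is a minimal sketch of a single-source ETL step of the kind described above. All table and column names (timecards, fact_fees, and so on) are hypothetical and purely illustrative; the point is that the columns and transformations are fixed at design time, so any new requirement means reworking the pipeline itself.

```python
import sqlite3

# Hypothetical single-source ETL step; schema and names are illustrative only.
def etl_timecards(source_conn, warehouse_conn):
    # Extract: pull only the columns identified at project inception
    rows = source_conn.execute(
        "SELECT matter_id, hours, rate FROM timecards"
    ).fetchall()

    # Transform: reshape into the analytics-friendly structure *before* loading
    transformed = [(m, h, r, h * r) for (m, h, r) in rows]

    # Load: write the pre-shaped result into the reporting table
    warehouse_conn.executemany(
        "INSERT INTO fact_fees (matter_id, hours, rate, amount)"
        " VALUES (?, ?, ?, ?)",
        transformed,
    )
    warehouse_conn.commit()
```

Because the transform happens before the load, any field not selected in the extract step is simply gone as far as downstream analytics are concerned.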

The issue with this method is that it doesn’t easily accommodate the changing needs of the business as it becomes more analytically mature. Inevitably, requests to include data from more sources—other than any identified during the project’s initial phase—become insistent. It is at this stage that the traditional approach becomes less efficient.

To meet these demands, those traditional BI architectures must be adjusted to incorporate new data points. This can be a convoluted process. Firstly, relationships between data sources must be established and developed where necessary. Secondly, ETL processes need to be modified to incorporate stages that enable new data points to be loaded. And finally, additional on-premise storage and processing power have to be provisioned to ensure the growing analytical platform can perform as expected for the business.

Changing trends in data consumption

While all of this is achievable—and something Wilson Allen has completed successfully on many projects—several forces in the market are driving customers to seek more efficient alternatives. These trends can be summarised as follows:

  • Analytical maturity in professional firms is increasing rapidly. The need for more data to be available in a consumable format for business users puts pressure on those traditional on-premise BI solutions
  • Cloud-based business systems are being adopted throughout the industry, and stakeholders want access to a broader range of data over and above traditional financial information
  • Analytical requirements are ever-changing. They evolve alongside the business, so the platform that underpins the analytics strategy needs to be more easily extensible and responsive
  • The shift from on-premise servers to cloud data warehouses is sparking a shift from ETL to ELT (extract, load, and then transform). Taking advantage of the power of cloud tech enables data to be extracted from a range of different sources, making it readily available to use as needs and initiatives evolve.
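
The ETL-to-ELT shift in the last point can be sketched as follows. This is an illustrative example only, reusing the same hypothetical timecard names: the source rows are landed in the warehouse untouched, and the transformation is deferred and expressed in SQL inside the warehouse, where it can be changed without rebuilding the extraction pipeline.

```python
import sqlite3

# Illustrative ELT sketch; all table and column names are hypothetical.
def elt_timecards(source_conn, warehouse_conn):
    # Extract + Load: copy every source column as-is into a raw landing table
    rows = source_conn.execute("SELECT * FROM timecards").fetchall()
    warehouse_conn.executemany(
        "INSERT INTO raw_timecards VALUES (?, ?, ?)", rows
    )

    # Transform: derive the curated table later, inside the warehouse,
    # so a new requirement means new SQL rather than a new pipeline
    warehouse_conn.execute(
        """
        INSERT INTO fact_fees
        SELECT matter_id, hours, rate, hours * rate AS amount
        FROM raw_timecards
        """
    )
    warehouse_conn.commit()
```

Because the raw table retains every column, later initiatives can build new curated views over data that was loaded long before anyone asked for it.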

Leveraging the cloud-based paradigm

So, what advantages are delivered by cloud-native analytics? Our Data Sciences team has developed a new lake and cloud-based solution—the Wilson Data Cloud—that's capable of ingesting data from many different business systems (both on-premise and cloud). By leveraging Microsoft best practice guidelines and applying our deep analytical and domain expertise, we have created a highly scalable storage solution that can transform a wide range of data into consumable information. This approach has the following advantages:

  • Less time is spent deciding what data is needed from each source. Cloud storage enables all data from each source to be transferred into the platform in a cost-effective way
  • Greater scalability. There’s no need to request additional storage and processing power from on-premise infrastructure teams—these can be dialled up/down via the centrally managed Azure portal. And, customers only pay for what they use
  • Industry-leading visualization tools (Power BI, Qlik, Tableau) connect seamlessly, enabling users to create their own content.

Enabling deeper learning

The Wilson Data Cloud leverages and connects the mainstream business systems that are commonly used by professional firms, and futureproofs their analytical infrastructure needs. But more excitingly, it also paves the way for more powerful analytical insights. Using the Wilson Data Cloud, data scientists can develop machine learning models that reveal insights that were extremely difficult to discover in the past.

This is achievable because the cloud-based storage behind the Data Cloud removes pain points that previously resulted in roadblocks and bottlenecks. In addition, it provides an environment that allows data scientists to utilize all types of organizational data—structured, unstructured, raw, and transformed.

A recent project we undertook to define client health across a broad range of features provides an example of what can be achieved. Using traditional BI technology and approaches, extensive effort was needed to access, transform, and link the disparate data before it could be fed into our machine learning pipelines. Any alterations to the set of fields defining a client were time-consuming and laborious, requiring extensive revisions to engineering scripts. But migrating the effort to the new Data Cloud solution alleviated all these pain points. With data stored, linked, and refreshed in a central repository, data wrangling speed was significantly improved, allowing more time to be spent on machine learning development and training.

Analytical flexibility built-in

Cloud-based storage solutions, such as the Wilson Data Cloud, can be scaled quickly and provide an ideal environment for combining disparate types of data, against which the full power of machine learning can be applied.

Historical approaches to data wrangling are blind to future needs, limiting the potential insights that data scientists can uncover. By ingesting data sources in their entirety, in both raw and curated formats, cloud-based solutions improve both the effort-to-outcome ratio and the range of data that can be aggregated, dissected, and evaluated.

Analytical consultants can select features from curated endpoints or engineer additional features as required, expediting data model refinement and significantly improving speed to value.
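
As a rough sketch of what selecting and engineering features might look like in practice (the feature names, the client-row structure, and the `build_feature_vector` helper are all hypothetical, not part of any product API):

```python
# Hypothetical sketch: each client_row is one already-linked, already-refreshed
# record from a curated endpoint in a central repository.
CURATED_FEATURES = ["realization_rate", "days_to_pay", "matter_count"]

def build_feature_vector(client_row):
    # Select curated features directly from the endpoint
    features = {name: client_row[name] for name in CURATED_FEATURES}

    # Engineer an additional feature on the spot, without touching
    # any upstream extraction pipeline
    features["fees_per_matter"] = (
        client_row["total_fees"] / client_row["matter_count"]
        if client_row["matter_count"] else 0.0
    )
    return features
```

Adding or removing a field here changes one list or one line of derivation logic, rather than a chain of engineering scripts.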

The Wilson Data Cloud enables segmentation analysis of both curated and raw data points, allowing professional services firms to build a holistic view of the health of their clients. It ushers in a new era of flexible, advanced analytical capability and, combined with machine learning, will help lift business intelligence information to new levels of dependability.

The post How the move to the cloud creates faster time-to-value analytics and more powerful business insights appeared first on Wilson Allen.