Scaling From Legacy

By: Josh Hull

Adjacent Technologies has completed a successful metamorphosis from an on-premises-only provider of Enterprise Content Management (ECM) solutions into a data and process steward for hosted ECM platforms, a transition made possible largely by the cloud. We are not a company to act first on a new technology; history shows that can be risky business. Rather, when our area of focus began to show remarkable improvements and efficiencies from virtualization and elasticity, we struck while the proverbial iron was hot and migrated existing customers from terrestrial datacenters to off-premises infrastructure-as-a-service providers. This is my perspective on cloud versus secure cloud, and where we may see the next steps in solutions for our customers.

If we are not able to evaluate a legacy process, improve it so that the majority of iterations successfully follow the path of least resistance, and then manually tend only to the exceptions that precipitate out, we are destined to carry forward improper design, defunct solutions, and error caused by human variance. This applies to both process and framework. Analyzing the infrastructure of an ECM solution and determining its potential for scale, redundancy, failover, and availability is a uniquely constrained endeavor, given the complexity of document-level security, records retention policies, concurrent users evaluating multiple versions of the same content, and the assumed immediacy of enterprise-class solutions.

Through direct interaction with a large state agency, Adjacent determined that there is a significant difference between a cloud offering provided as-is and secure cloud. In an as-is cloud offering, you are responsible for securing the infrastructure, platform, and solution as they pertain to your instance of the hosted application. This is fine if you are competent at security and can roll your own solution for your enterprise customers. But what of your virtual neighbor? Cloud is only advantageous in a multi-tenant solution: the server instance you interact with resides next to other as-is tenants on shared hardware and appliances, diffusing the cost of each instance. Without verifying the propensity for cross-tenant attacks, are you certain that your neighbor’s instance is equally secure?

Secure cloud alleviates the concern over cross-tenant threat vectors while enabling non-internet-facing systems to do their work unmolested. With access to the solution limited solely to browser-based interaction, end customers can be confident in their work and processes without needing to be concerned with security, performance, reliability, or availability. Limiting access to databases, process engines, and LDAP or other user-management engines increases solution stability and security. This design has proven to be a fantastic route for our customers to engage in their custom processes quickly, efficiently, and confidently.

What’s next for cloud? At the enterprise level, we create data constantly, yet we are poor at determining its future value: will this document or record ever be reviewed again? How frequently? By whom? For what purpose? As we create content, do we define it such that we can easily set its value? A record or image that will never be referenced again in its lifecycle, in theory, should not be stored. This scares a lot of people, so we tend to store everything. But what if you could confidently predict the likelihood that an item of data will be recalled? Data with a low likelihood of review could immediately be routed to low-cost storage media, while content with a higher probability of recall would remain on top-tier storage volumes. By Pareto’s principle, we could write four-fifths of our data once, to cold storage, and happily consume the remainder for a fraction of the original cost. It is this concept that excites us, and it has us eagerly anticipating the near future of content in the cloud.
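To make the idea concrete, here is a minimal sketch of probability-based storage tiering. Everything in it is an illustrative assumption, not a description of any shipping product: the `predict_recall` heuristic, the metadata fields it inspects, and the 0.2 threshold (chosen so that, in the Pareto spirit, roughly the cold four-fifths of content lands on cheap storage) are all hypothetical.

```python
def predict_recall(doc_metadata: dict) -> float:
    """Hypothetical model: estimate the probability a document will
    ever be retrieved again, from simple metadata signals."""
    score = 0.5  # neutral prior
    if doc_metadata.get("record_type") == "audit_log":
        score -= 0.4  # audit logs are rarely re-read once written
    if doc_metadata.get("active_case"):
        score += 0.4  # content tied to live cases is consulted often
    return min(max(score, 0.0), 1.0)  # clamp to a valid probability


def choose_tier(doc_metadata: dict, threshold: float = 0.2) -> str:
    """Route low-recall content to cold storage; keep the rest hot."""
    return "hot" if predict_recall(doc_metadata) >= threshold else "cold"


print(choose_tier({"record_type": "audit_log"}))  # -> cold
print(choose_tier({"active_case": True}))         # -> hot
```

In practice the heuristic would be replaced by a model trained on actual retrieval logs, but the routing decision stays this simple: a single probability compared against a cost-driven threshold.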
