Insurers now face what digital giants like Amazon and Netflix faced when they moved to operate exclusively in the digital marketplace: transactions increasingly shifting to digital channels, and operations reshaped by an unprecedented wave of automation. Let’s explore the lessons these companies learned as they confronted the same challenges insurers face in adapting to the digital marketplace.
One critical change for Amazon and Netflix was making a fundamental shift in the way their core systems and architecture were developed: they evolved – out of necessity – to migrate to a more flexible and responsive architecture by incorporating microservices. The factors that led to this shift sound strikingly similar to those affecting the insurance world. Here are five key factors for insurance companies to consider when planning their future technology directions, with examples of how Amazon and Netflix addressed similar issues.
Availability is a fundamental need when designing a digital user experience. Streamlining customer journeys depends on having technology and data at the point of the transaction. Netflix’s big availability issue was with its video library – which is a key selling point of the service. From a customer perspective, being able to watch thousands of movies and other content is less attractive if the customer can’t access the catalog any time. Netflix brought microservices to bear on this challenge, isolating the library functionality and running it independently from the rest of the user experience. This provided the capability to continually and frequently upgrade the catalog. For insurers, intermittent outages – especially on nights and weekends, when consumers and small business owners shop for insurance and digital agents are still working – are equally unacceptable.
Scalability and availability go hand in hand. When the volume of transactions goes up, processing power must be able to scale up, too. Monolithic tech stacks struggle here, especially because the points of failure can be so small – as insurers well know! Amazon’s shopping cart functionality had plenty of capacity for regular traffic but was challenged when required to scale up for the incredible volume of purchases on Black Friday and Cyber Monday. Inventory control is critical because you have to understand what products are in shoppers’ carts, what inventory can still be offered and when to cut off the sale of a specific item. Amazon decoupled the cart functionality from its monolithic tech stack and deployed a microservice that ran alongside the rest of their tech environment. The shopping cart microservice had the much simpler task of checking the inventory and maintaining customers’ carts. It could access additional processing power as the volume went up without relying on the same servers running the rest of the Amazon architectural stack. And because the shopping cart service was decoupled from the main system software, it could be continually updated and enhanced.
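The cart decoupling described above can be sketched in miniature. The snippet below is a hypothetical illustration, not Amazon's actual design; the `CartService` class, its inventory map and its method names are all invented for the example. The point is the narrow responsibility: the service owns only cart state and an inventory check, nothing else from the wider stack.

```python
class OutOfStock(Exception):
    """Raised when the sale of an item must be cut off."""
    pass

class CartService:
    """Illustrative cart microservice core: owns cart state and an
    inventory check, independent of the rest of the platform."""

    def __init__(self, inventory):
        self._inventory = dict(inventory)   # item id -> units still available
        self._carts = {}                    # customer id -> {item id: quantity}

    def add_item(self, customer_id, item_id, qty=1):
        available = self._inventory.get(item_id, 0)
        if available < qty:
            # Inventory exhausted: cut off the sale of this item.
            raise OutOfStock(item_id)
        self._inventory[item_id] = available - qty
        cart = self._carts.setdefault(customer_id, {})
        cart[item_id] = cart.get(item_id, 0) + qty
        return dict(cart)

# Usage: the last unit goes into one customer's cart; the next request fails.
service = CartService({"SKU-1": 1})
cart = service.add_item("customer-42", "SKU-1")
```

Because this logic is this small and self-contained, it can be scaled and deployed independently of everything else, which is the essence of the pattern.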
Speed is critical for scalability, and microservices have a lot to offer here. Both Netflix’s library and Amazon’s shopping cart experience change rapidly, with requests coming from thousands of users at a time across different front ends. Digital giants are known for providing a responsive user experience that is highly scalable without the need for serial data processing. Using microservices to support multi-threaded requests has given both companies an edge. For insurers, supporting an increasingly complex maze of distribution outlets requires rating capabilities that can consistently deliver sub-second responses. The ability to decouple this from core processes while dynamically scaling based on the needs of the front end is critical, regardless of line of business.
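As a rough illustration of the multi-threaded rating pattern described above, the sketch below fans a batch of quote requests out across a thread pool instead of processing them serially. The `rate_quote` function and its rating factors are invented for the example; a real rating engine would be far richer:

```python
from concurrent.futures import ThreadPoolExecutor

def rate_quote(request):
    """Stateless rating call: a pure function of its inputs, so many
    requests can be served in parallel. The formula is illustrative."""
    base = 500.0
    factor = 1.25 if request["driver_age"] < 25 else 1.0
    return round(base * factor * request["vehicle_factor"], 2)

requests = [
    {"driver_age": 22, "vehicle_factor": 1.1},
    {"driver_age": 40, "vehicle_factor": 0.9},
]

# Requests from different front ends are processed concurrently,
# not one after another.
with ThreadPoolExecutor(max_workers=8) as pool:
    premiums = list(pool.map(rate_quote, requests))
```

Because the rating function holds no shared mutable state, adding workers (or service instances) scales throughput without coordination.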
Maintainability and upgradability are significant areas of consideration for all insurers, based on the current state of their technology environments. As we look to the policy, billing and claims systems or the front-end user experiences, etc., insurers need the ability to increase the speed of software upgrades to be a more continuous, less disruptive and therefore higher-value undertaking. As we look at the dynamically changing user experiences needed in today’s digital world, the ability to upgrade these components and reuse discrete services at a greater frequency than back-end functionality is becoming a critical capability.
This is where microservices really shine. Each isolated process supports a small, discrete function. Therefore, it is easier to focus on a very specific capability with an update. There is flexibility gained in adapting to new integration points and integrating new services. The magnitude of the testing effort decreases significantly.
These are game changers for insurers that have been struggling with a monolithic architecture where everything affects everything else. And microservices give insurers the ability to ease pain points in their current technology environments and add capabilities without going through a full rip and replace.
We have much to learn from other industries’ successes and failures within the digital marketplace. But, let’s not reinvent the wheel. Let’s look at the lessons learned by the leaders in other markets and apply the knowledge they have gained.
Insurers, I have some good news and some bad news. Insurers have made tremendous progress in core modernization, purchasing and implementing new core systems and beginning to adapt their businesses to take full advantage of modern core systems’ capabilities. This is genuine cause for celebration – insurers that have made or are making these efforts are advancing their companies and our industry in general.
As insurers were engaged in these core modernization efforts, though, the personal lines market and technology itself have kept moving forward. A core system that was top of the line in 2013 may be showing its age unless it has been continually upgraded to serve the capabilities needed in 2018 and beyond. This may not be the most welcome news for those still thinking about core systems with an average lifespan of 10 years or more, but this is our new reality.
This is especially true for personal lines insurers, which are typically the first to catch the core modernization wave. They have also tended to be the leaders in adopting new computing capabilities. It was true for mainframe systems, client-server architectures and web-based applications. Now, heading into a new era of computing, it is true with trends like microservices and serverless computing coming to the fore. This is not technology for technology’s sake: insurers need to be able to handle an ever-increasing number of transactions, including multi-threaded calls, and that volume continues to grow at an extraordinary rate.
Further pressure comes from the insurtech startups active in the personal lines market. The original, widely known insurtech startups in P&C insurance were focused on personal lines. As the insurtech movement has matured, startups’ focus has widened to commercial lines and workers’ comp as well as crossing product lines. However, startups have been active in personal lines the longest, and those insurtechs that have thrived have gained market experience and are beginning to focus on organizational maturity. That means incumbent personal lines insurers face insurtech counterparts that tend to be more robust and mature than the commercial lines insurtechs I discussed in my earlier blog.
A key characteristic of insurtechs is that they are digitally native companies. That means they are natively fluent with enormous quantities of data and digital interactions, and their technology is geared toward both.
Core systems that can be described as “digitally native” have an edge in the digital market going into the future. Even though digital has been a crucial focus area for years, the insurance industry is still learning what a truly digital business entails – and what technology is needed to support it. Insurtech startups have given the insurance industry new examples of how to operate in the digital world.
Few core systems are built with these digitally native characteristics, but the core systems marketplace is beginning to adapt. Continued evolution toward open APIs and new data sources will provide insurers with the opportunity to interoperate with new distribution channels and directly with the customer.
Whether you are a large insurer that is trying to support new digital brands and new product models (on-demand, telematics and others dependent on high amounts of data) or a small regional insurer trying to power consumer service portals, the key question is data availability and digital connectivity with the consumer and agent.
So, for insurers asking themselves: “Do we really need to think about modernizing our modern core systems?” the better question may be this: “Are your modern core systems digitally native?”
In Part 2 of this blog series, we shared how a microservices architecture is applicable for the insurance industry and how it can play a big role in insurance transformation. This is especially true because the insurance industry is moving to a platform economy, with heavy emphasis on the interoperability of capabilities across a diverse ecosystem of partners. In this segment, we will share our views on best practices for adopting a microservices architecture to build new applications and transform existing ones.
Now that we have made a sufficient case exploring microservice architecture’s abilities to bring speed, scale and agility to IT operations, we should contemplate how we can best think about microservices. How can we transform existing monoliths into a microservices architecture? Although the approach for designing microservices may vary by organization, there are best practices and guidelines that can assist teams in the midst of making these decisions.
How many microservices are too many?
Going “too micro” is one of the biggest risks for organizations that are still new to microservices architectures. If a “thesis-driven” approach is adopted, there will be a tendency to build many smaller services. “Why not?” you may ask. “After all, once we buy into the approach, shouldn’t we just go ‘all in’?”
We encourage insurers to be careful and test the waters. We would caution against starting out with too many small services, due to the increased complexity of mixed architectures, the steep curve of upfront design, the significant changes in development processes and a lack of DevOps preparedness. We suggest a “use-case-driven” approach: focus on urgent problems, where the business needs rapid change to overcome system-inhibiting issues, and break the monolith module into the microservices that serve current needs, not necessarily finer microservices based on assumptions about future needs. Remember, if we can break the monolith into microservices, we can always make those microservices more granular later as needed, instead of incurring the complexity of too many microservices without any assurance of future benefits.
What are the constraints (lines of code, language, etc.) for designing better microservices?
There are a lot of myths about the number of lines of code, the programming languages and the permissible frameworks (just to name a few) for designing better microservices. One argument holds that if we do not set fixed constraints on the number of lines of code per microservice, the service will eventually grow into a monolith. Although this is a valid concern, an arbitrary size limit on lines of code will create too many services and introduce cost and complexity. If microservices are good, would “nanoservices” be even better? Of course not. We must ensure that the costs of building and managing a microservice are less than the benefit it provides; hence, the size of a microservice should be determined by its business benefit rather than its lines of code.
Another advantage of a microservices architecture is the interoperability between microservices, regardless of underlying programming language and data structure. There is no one framework, programming language or database that is better-suited than another for building microservices. The choice of technology should be made based on underlying business benefits that a particular technology provides for accomplishing the purpose of microservices. Preparing for this kind of flexible framework will give insurers vital agility moving forward.
How do microservices affect development processes?
A microservices architecture promotes small, incremental changes that can be deployed to production with confidence. Small changes can be deployed quickly and tested. Using a microservices architecture naturally leads to DevOps. The goal is to have better deployment quality, faster release frequency and improved process visibility. The increased frequency and pace of releases mean you can innovate and improve the product faster.
Putting a DevOps pipeline with continuous integration and continuous deployment (CI/CD) into practice requires a great deal of automation. This requires developers to treat infrastructure as code and policy as code, shifting the operational concerns about managing infrastructure needs and compliance from production to development.
It is also very important to implement real-time, continuous monitoring, alerting and assessment of the infrastructure and application. This will ensure that the rapid pace of the deployment remains reliable and promotes consistent, positive customer experiences.
To validate that we are on the right path, it is important to capture some metrics on the project. Some of the key performance indicators (KPIs) we like to look at are:
MTTR – The mean time to respond, measured from the time a defect is discovered until the correction is deployed in production.
Number of deploys to production – These are small, incremental changes being introduced into production through continuous deployment.
Deployment success rate – Only 0.001% of AWS deployments cause outages! When done properly, we should see a very high successful deployment ratio.
Time to first commit – This is the time it takes for a new person joining the team to release code to production. A shorter time indicates well-designed microservices that do not carry the steep learning curve of a monolith.
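As a minimal sketch of how the first KPI above might be computed, the snippet below averages the gap between defect discovery and fix deployment over a set of incidents. The function name and the sample timestamps are illustrative only:

```python
from datetime import datetime

def mean_time_to_restore(incidents):
    """MTTR in hours over a list of (discovered, fix_deployed)
    timestamp pairs. Incident data is illustrative."""
    total_seconds = sum(
        (fixed - found).total_seconds() for found, fixed in incidents
    )
    return total_seconds / len(incidents) / 3600

# Two sample incidents: one took 4 hours to correct, one took 2.
incidents = [
    (datetime(2018, 6, 1, 9, 0), datetime(2018, 6, 1, 13, 0)),
    (datetime(2018, 6, 3, 10, 0), datetime(2018, 6, 3, 12, 0)),
]
mttr = mean_time_to_restore(incidents)  # 3.0 hours
```

Tracking this number per microservice, rather than per monolith release, is what makes the KPI actionable.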
Principles for Identifying Microservices and Examples
More important than the size of the microservice is the internal cohesion it must have, and its independence from other services. For that, we need to inspect the data and processing associated with the services.
A microservice must own its domain data and logic, which leads to a domain-driven design pattern. However, a complex domain model can often be better represented as multiple interconnected smaller models. For example, consider an insurance model composed of multiple smaller models, where the party model can be used as a claim party and also as an insured (among various others). In such a multi-model scenario, it is important to first establish a boundary around each model, called a bounded context, that closely governs the logic associated with that model. Defining a microservice per bounded context is a good start, because the two concepts are closely aligned.
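The claim-party/insured split described above can be sketched with two small models, one per bounded context, each owning only the attributes its context cares about. The class and field names below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Insured:
    """Party as seen by the policy/underwriting bounded context."""
    party_id: str
    name: str
    risk_class: str

@dataclass
class ClaimParty:
    """The same real-world party as seen by the claims bounded context."""
    party_id: str
    name: str
    role: str  # e.g. "claimant", "witness"

# One real-world person, two context-specific representations that
# share an identifier but evolve independently.
insured = Insured("P-100", "Jane Doe", "preferred")
claimant = ClaimParty("P-100", "Jane Doe", "claimant")
```

Because each context owns its own model, the claims service can add claim-specific fields without touching (or retesting) the underwriting service.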
Along with bounded contexts, aggregates used by the domain model that are loosely coupled and driven by business requirements are also good candidates for microservices, as long as they exhibit the main design tenets of microservices; for example, a service for managing vehicles as an aggregate of a policy object.
While most microservices can be easily identified by following the domain model analysis, there are a number of cases where the business processing itself is stateless and does not result in a modification of the data model itself, for example, identifying the risk locations within the projected path of a hurricane. Such stateless business processes, which follow the single responsibility principle, are great candidates for microservices.
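The hurricane example above can be sketched as a stateless, single-responsibility function: it reads inputs, modifies nothing, and returns a result. The latitude-band check below is a deliberate oversimplification of real geospatial analysis, and all names are illustrative:

```python
def locations_in_path(locations, path_min_lat, path_max_lat):
    """Stateless check: which insured risk locations fall inside a
    (simplified) latitude band of a hurricane's projected path.
    A pure function of its inputs -- an ideal microservice candidate."""
    return [
        loc_id
        for loc_id, lat in locations
        if path_min_lat <= lat <= path_max_lat
    ]

# Sample risk locations as (id, latitude) pairs and a projected band.
risks = [("LOC-1", 25.7), ("LOC-2", 30.4), ("LOC-3", 27.9)]
at_risk = locations_in_path(risks, 25.0, 28.0)
```

Since the service holds no state of its own, any number of instances can run in parallel behind a load balancer with no coordination.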
If these principles are applied correctly, loosely coupled and independently deployable services will follow the single responsibility model without causing chattiness across the microservices. They can be versioned to allow client upgrades, provide fallback defaults and be developed by small teams.
Co-Existing With Legacy Architecture
Microservices provide an ideal tool for refactoring a legacy architecture by applying the strangler pattern. This gives new life to legacy applications by gradually moving the business functions that will benefit the most into microservices. Applying this pattern requires a façade that can intercept calls to the legacy application. A modern digital front end, which can offer a better UX and provides connectivity to a variety of back ends by leveraging enterprise integration patterns (EIP), can be used as the strangler façade to connect to existing legacy applications.
Over time, those services can be rebuilt directly on a microservices architecture, eliminating calls to the legacy application. This approach is best suited to large legacy applications. For smaller systems that are not very complex, the insurer may be better off rewriting the application.
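A minimal sketch of the strangler façade idea, assuming a toy legacy handler and a single migrated FNOL function; every name here is hypothetical. The façade intercepts each call and routes it to a new microservice when one has taken over that function, otherwise to the legacy application:

```python
def legacy_app(operation, payload):
    """Stand-in for the monolithic legacy application."""
    return f"legacy handled {operation}"

def fnol_microservice(payload):
    """Stand-in for a newly extracted first-notice-of-loss service."""
    return "fnol microservice handled fnol"

class StranglerFacade:
    """Intercepts calls and routes migrated functions to new services,
    leaving everything else on the legacy path."""

    def __init__(self):
        self._migrated = {}  # operation name -> new service handler

    def migrate(self, operation, handler):
        self._migrated[operation] = handler

    def handle(self, operation, payload=None):
        handler = self._migrated.get(operation)
        if handler is not None:
            return handler(payload)
        return legacy_app(operation, payload)

facade = StranglerFacade()
facade.migrate("fnol", fnol_microservice)
```

As more functions migrate, more routes flip from the legacy branch to microservices until the monolith can finally be retired.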
How do we make organizational changes to adopt microservices-driven development?
Adopting microservices-driven development requires a change in organizational culture and mindset. The DevOps practice shifts siloed operations responsibilities to the development organization. With the successful introduction of microservices best practices, it is not uncommon for developers to handle both development and operations. Even when the two teams remain separate, they have to communicate frequently, increase efficiencies and improve the quality of the services they provide to customers. The quality assurance, performance testing and security teams also need to be tightly integrated with the DevOps teams by automating their tasks in the continuous delivery process.
Organizations need to cultivate a culture of sharing responsibility, ownership and complete accountability in microservices teams. These teams need to have a complete view of the microservice from a functional, security and deployment infrastructure perspective, regardless of their stated roles. They take full ownership for their services, often beyond the scope of their roles or titles, by thinking about the end customer’s needs and how they can contribute to solving those needs. Embedding the operational skills within the delivery teams is important to reduce potential friction between the development and operations team.
It is important to facilitate increased communication and collaboration across all the teams. This could include the use of instant messaging apps, issue management systems and wikis. This also helps other teams like sales and marketing, thus allowing the complete enterprise to align effectively toward project goals.
As we have seen in these three blogs, a microservices architecture is an excellent solution for legacy transformation. It solves a number of problems and paves the path to a scalable, resilient system that can continue to evolve over time without becoming obsolete. It allows rapid innovation with a positive customer experience. A successful implementation of a microservices architecture does, however, require:
A shift in organization culture, moving infrastructure operations to development teams while increasing compliance and security
Creation of a shared view of the system and promoting collaboration
Automation, to facilitate continuous integration and deployment
Continuous monitoring, alerting and assessment
A platform that can allow you to gradually move your existing monolith to microservices and also natively support domain-driven design
This article was written by Sachin Dhamane and Manish Shah.
In Part 1 of this blog series, we shared how a microservices architecture can bring value to a constantly changing business environment. In this segment, we will share our views on the benefits of a microservices architecture for the insurance industry.

The traditional insurance value chain, and consequently the customer experience, has long operated across a number of functional silos. Each of these silos has unique characteristics and is organized around its own KPIs. For example, underwriting/risk management is focused on risk assessment, underwriting quality and the booking of premium. Claims is focused on claims management, fraud detection, leakage reduction and processing turnaround time. Typically, each of these functions operates with limited organizational integration.
This organizational design is several decades old and inspired by Henry Ford’s assembly line innovation that optimizes processing costs through specialization and automation. This design has served insurance well during the industrial and information ages, leading to specialized functional IT systems targeting the business needs of the silo functions from underwriting to policy management, billing, claims and more. The core system components are integrated as a suite. The components and suite serve well-defined functions that have had less dynamic integration needs.
This organizational model and IT system landscape has been effective for decades. But as we enter the digital age, the painful and expensive business and IT modernization projects over the last decade, coupled with portals and complex integrations to these core systems to improve agent and customer experience, do not align with new market needs. Today’s insurance landscape demands agility to adapt with ease, innovation to reimagine the possibilities and speed to capture the opportunities. The digital age demands so much more to stay relevant and competitive.
Customer Experience is the Differentiator
Customer experience is front and center in differentiating insurers in the digital age. It is a key factor in driving higher customer acquisition and retention, which, in turn, drives growth. Customer experience is much more than offering a better user interface. It is about a customer journey that creates a unique, compelling and engaging experience that makes it “easy to do business” with insurers. Customer journeys must cut through functional silos, which are currently optimized for internal operational efficiency. As customer journeys change, however, these silos now contribute to the degradation of the customer experience. To design and refine customer journeys in today’s digital age, insurers will need to collect siloed capabilities into a new virtual capability designed to optimize the customer journey. This new virtual capability will require hyper-integration and micro-granularity of system capabilities to achieve the desired result.
As we highlight in our Future Trends 2018: Catalyzing the Shift to Digital Insurance 2.0 report, the insurance value chain is rapidly shifting to adapt to new business models, innovative products, expanded distribution channels, new competition with entrants from outside the industry, elevated customer expectations and emerging technologies.
Digital transformation is redefining the value chains and each component. New products such as on-demand products, connected products and micro-insurance are reshaping business assumptions and fundamentals. We are seeing innovative product design that uses new sources of data, new risk assumptions, micro-segmentation, expanded services, new customer engagement approaches and new channels to reach customers. These designs leverage new technologies such as artificial intelligence (AI), cognitive, analytics and microservices. The result is the disruption of the insurance value chain. With the value chain disrupted, the underlying systems must be disrupted, too.
Rise of Ecosystems and the Platform Economy
As we enter the digital age, the blurring of traditional industry boundaries is seeing the rise of ecosystems and the platform economy. Companies like Apple, Alibaba, Google, Amazon and Facebook are at the forefront of this shift. They are using an ecosystem with connected services from different parties to create a seamless customer experience.
An ecosystem is the DNA of the platform economy, enabling a business model to exchange and share value among its partners and customers. To meaningfully participate in the platform economy, insurers must embrace ecosystems and be prepared to partner with competitors, other industries and innovative technology-based service providers. ZhongAn, an online Chinese insurer, generated 70% of its 2017 car insurance premium in one month (January 2018) by using AI and big data with the ecosystem, including carmakers, dealers, after-sales service providers and lenders that created market reach and a loyal customer base. The ecosystem approach eliminates traditional industry or organizational boundaries in designing products and creating a new customer journey. However, it necessitates the need for a flexible and granular system composed of different services running on different technology platforms that can easily integrate with any ecosystem.
A New System Paradigm for the Digital Age
A common theme is emerging that highlights the need for a new set of capabilities to support this paradigm shift. To succeed, let alone survive, insurers will need to respond to value chain disruption, elevated customer expectations and the rise of ecosystems and the platform economy. That means using granular APIs and microservices, built around the single responsibility principle, to assemble on-demand business solutions from loosely coupled microservices with find-n-bind capabilities that can leverage any ecosystem.
A microservices architecture enables the building of new capabilities to meet these needs. The graphic below contrasts the anatomy of a traditional “pre-digital age” monolith insurance app with a “digital age” microservices-based insurance app.
Today’s monolith insurance systems, although partially accessible through APIs, are built as one large deployable unit. This architecture does not adapt easily to a rapid pace of change, because every change touches a single large codebase and a specific, localized API. A separate API layer exposed over the single monolith codebase makes it difficult to integrate with ecosystem partners and extremely complex to orchestrate services across various systems or apps.
In contrast, a microservices architecture decomposes a large unit into fine-grained, single-purpose, self-contained and independently deployable business services that enable rapid change and open the possibility of multiple deployments per day instead of waiting for a periodic release cycle. Using microservices across various apps, insurers can orchestrate a composite user interface that delivers a tailor-made customer journey and can be enhanced quickly based on customer feedback. The graphic below shows how a microservices architecture can assist in the design of a unique customer experience using a product offering and ecosystem. Multiple customer journeys can be assembled by orchestrating functional microservices and ecosystem services available outside the insurer enterprise.
The times are changing. And it is exciting! The ability to leverage a powerful microservices architecture to build a new foundation for the digital age of insurance is game-changing. It will enable new business models, new products, refined customer experiences and timely responses to new business needs (in hours and days instead of months and years), and it will help insurers remain relevant and competitive. While microservices are exciting and will accelerate the industry’s ability to innovate, they are not the Holy Grail. Smaller, focused services have many advantages but also create complexity in the orchestration of those services. Employing best practices in sizing microservices and their data models is critical. Most importantly, a gradual transition to microservices, rather than a big-bang approach, will help insurers build a platform that can withstand the test of time and constant change, enabling them to participate in the digital age and platform economy with agility, innovation and speed.
In Part 3, we look forward to covering our views on best practices in introducing and scaling microservices within the world of the monolith IT system environment. We encourage you to read our thought leadership, Cloud Business Platform: The Path to Digital Insurance 2.0, to gain a deeper insight on these topics. Please share your views on this exciting topic in the comments section. We would enjoy hearing your perspective.
This article was written by Manish Shah and Sachin Dhamane.
Whether you are part of building a modern digital enterprise platform for mid-sized to large insurance companies or part of a startup that distinguishes itself through innovative technologies, you are likely to be hearing about microservices.
Microservices architecture has increasingly become popular and often associated with benefits such as scale, speed of change, ease of integration, fault tolerance and ability to adapt to changing business demands and models. Commitment from digital giants such as Amazon, Netflix, PayPal, eBay, Twitter and Uber, which built and scaled their platforms based on microservices architecture, has galvanized adoption across many industries.
Source: Google Trends
A crucial question is, “How will microservices help insurers design open platforms for building sustainable competitive advantage?”
This four-part blog series will share our views based on our experience in building a modern digital platform using microservices. This first blog will provide a general primer about microservices. The second will share our view on the applicability and strategic potential of microservices for insurance. The third will illustrate best practices and applied principles of designing a microservices-based platform. The final blog will share how our innovative Majesco Digital1st platform will help insurers simplify and accelerate the development of microservices apps.
Let’s start with the basic question, “What are microservices?” You can find the answer through a simple Google search, but let’s explain it in simple terms. Think of a microservice as a micro application that enables a specific, granular business function like payments, policy issuance, policy documents, first notice of loss (FNOL), etc. The micro application can be independently deployed and can communicate with other micro applications serving other business functions through a well-defined interface. This approach is in stark contrast to “monolith applications,” such as policy management systems, billing systems and claims systems, which work as an aggregation of multiple business functions tightly woven together and must be deployed as one large, monolithic unit.
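To make the contrast concrete, here is a toy sketch of one such "micro application": an FNOL service that owns its own data and exposes a single well-defined interface. The class name, request shape and ID scheme are all invented for the example; a real service would sit behind HTTP with persistence, authentication and validation:

```python
import json

class FNOLService:
    """Toy micro application for first notice of loss: it owns its
    own data and exposes one well-defined (JSON in, JSON out) interface."""

    def __init__(self):
        self._notices = []  # the service owns its data store

    def handle(self, request_json):
        """Single entry point: accept a JSON request, record the
        notice, return a JSON acknowledgement."""
        request = json.loads(request_json)
        notice_id = f"FNOL-{len(self._notices) + 1}"
        self._notices.append({"id": notice_id, **request})
        return json.dumps({"id": notice_id, "status": "received"})

service = FNOLService()
reply = service.handle('{"policy": "POL-123", "loss_type": "hail"}')
```

Other micro applications (payments, documents, billing) would talk to this one only through that interface, never through its internal data.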
An architectural pattern called self-contained service (SCS) is often discussed alongside microservices but does not provide the full benefits of microservices. The SCS pattern recommends grouping cohesive services together as one self-contained, individually deployable unit. Because the individual services within that unit are no longer independently deployable on their own, they cannot be considered microservices. While this approach is better than a monolithic application, it amounts to building multiple small monoliths!
So why does anyone advocate the microservices approach? Simply put, it addresses the issues of monolith architectures that inhibit digital models. Even after functional decomposition and the use of several deployment artifacts, monolith architectures remain a single codebase that must be managed as a single deployment unit.
In contrast, a microservices architecture has the following advantages when done well:
Velocity and Agility – Maintaining and evolving monolith applications is expensive and slow because changes ripple into other functions and services as inadvertent side effects. Dealing with those side effects requires additional work: impact analysis, elaborate and expensive testing, and batching changes into large, infrequent releases to optimize testing effort. In contrast, a microservice is a low-impact, single-responsibility business function that performs its own tasks, manages its own data and communicates with other microservices through a well-defined interface. This allows you to make and deploy changes reliably, incrementally and more quickly than a monolith architecture permits.
Scale – Microservices enable fine-grained monitoring that can anticipate seasonal or unique business demands on each business function. Because each microservice runs in its own process, it can easily be scaled with elastic containers, which scale up and down efficiently. In comparison, a monolith architecture runs multiple business functions under a single process, making it harder to direct resources to the specific business functions that need them.
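The scaling decision itself can be surprisingly simple once a function runs in its own process. The following sketch shows a hypothetical per-service autoscaling rule (the function name, capacity figures and replica bounds are illustrative, not drawn from any real platform): replicas are added or removed for one business function only, instead of scaling an entire monolith to relieve one hot spot.

```python
import math

def desired_replicas(current_load, capacity_per_replica,
                     min_replicas=1, max_replicas=20):
    """Hypothetical autoscaling rule for a single microservice.

    current_load         -- e.g., in-flight requests for this one function
    capacity_per_replica -- requests one instance can handle comfortably
    Returns the replica count, clamped to sensible bounds.
    """
    needed = math.ceil(current_load / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))
```

A shopping-cart-style service facing a Cyber Monday spike would scale from one replica to its ceiling and back down, while every other service in the platform keeps its normal footprint.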
Decentralized Governance and Teams – The separated code bases of microservices allow different parts of an organization to build business functions, rather than relying on one large centralized team. Each team can manage its own microservices with full DevOps (development and operations) responsibility and accountability. This gives insurers the freedom to choose the technology best suited to each business function.
Self-Contained and Sustainable – With monolithic applications, introducing a new business capability that requires upgrading external dependencies (OS, shared libraries, etc.) means the entire application must be tested. In contrast, microservices are self-contained from the OS down to the actual implementation code. This lets each microservice be upgraded separately and individually, based on business and operational needs, without affecting unrelated application functions. It keeps the application stack relevant and avoids the risk of running applications on an obsolete technology stack.
Hypothesis-Driven Development – The advantages outlined above lead to a completely different way of thinking about software development. The focus and conversation shift from managing projects and defect backlogs to pursuing new opportunities, experimenting and observing application usage. Experimental software changes can be built and deployed more quickly, in small increments, into production. When errors happen, they can be fixed in minutes or hours rather than days or months. For major problems, the incremental change can be rolled back quickly and easily without significant loss of functionality or downtime.
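One common mechanism behind this style of experimentation is a feature flag with a canary rollout. The sketch below is a hypothetical, minimal version (the class and its percentages are illustrative, not a specific product's API): an experimental change is exposed to a small slice of traffic, and "rolling back" is just setting the percentage to zero, with no redeployment.

```python
import random

class CanaryFlag:
    """Hypothetical feature flag for hypothesis-driven rollout: expose an
    experimental change to a percentage of traffic, observe, then widen
    the rollout or roll it back instantly."""

    def __init__(self, rollout_pct=0):
        self.rollout_pct = rollout_pct  # 0..100

    def enabled(self, rng=random.random):
        """Decide per request whether this request sees the new behavior."""
        return rng() * 100 < self.rollout_pct

    def rollback(self):
        """Kill the experiment without redeploying anything."""
        self.rollout_pct = 0
```

If the experiment misbehaves, `rollback()` takes effect on the very next request, which is what makes "fixed in minutes" realistic.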
As with all innovation, there is a flip side to the coin. Not all organizations are ready to adopt a microservices architecture immediately. In particular, if a company cannot build a well-designed monolith, building a microservices platform will be much harder. Microservices architecture is inherently complex to develop and operate, but the rewards are worth the hurdles: microservices give the organization far greater efficiency and capabilities focused on the future.
Fundamentally, microservices require organizational change, not just adoption of a technology pattern. Organizations must rethink end-to-end DevOps in terms of small business functions, distributed teams, decentralized governance and continuous delivery. In addition, the organization must embrace multiple technologies suited to a business platform rather than a single technology platform — a significant change for organizations schooled in traditional software development processes.
Even success stories like Amazon and Netflix did not start with a microservices architecture; rather, they evolved over time as they matured. If you are a startup building an MVP (minimum viable product), it may not be advisable to delay market launch for the large up-front effort of establishing microservices. However, startups should recognize that at some point they will have to invest in migrating to microservices to support scalability and changing business models.
Operating a platform made of hundreds or thousands of microservices, while supporting scalability and growing business demands, does create tremendous complexity for deployment, auto-scaling, monitoring, logging and many other DevOps concerns. Diagrams of the microservices deployments at Amazon and Netflix (images by AppCentrica) show the complexity of managing a reliable business operation with millions of ongoing deployments within an ecosystem of microservices often written in different languages and backed by different databases. Companies like Amazon and Netflix deal with this complexity through a high degree of automation and significant investment in shared, automated infrastructure that builds resiliency.
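A tiny sketch illustrates the kind of automation this implies. The registry and health-check function below are hypothetical stand-ins (no real platform exposes exactly this interface): instead of a person watching dashboards for thousands of services, an automated sweep continuously asks each service whether it is healthy and surfaces only the ones that need attention.

```python
def health_sweep(services, is_healthy):
    """Hypothetical automated health sweep over a service registry.

    services   -- iterable of service names registered on the platform
    is_healthy -- callable(name) -> bool, e.g., an HTTP health-check probe
    Returns the services that need attention (restart, page, roll back).
    """
    return [name for name in services if not is_healthy(name)]
```

In practice a platform runs sweeps like this every few seconds and wires the result directly into automated remediation — restarting containers or shifting traffic — which is what makes operating thousands of services tractable.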
Despite the complexity in managing microservices, separation of responsibilities across microservices offers organizations significant benefits in today’s platform economy. We outline these in our thought leadership report, Cloud Business Platform: The Path to Digital Insurance 2.0. The constant pivoting of business priorities requires a continuous and high degree of system changes that enable new strategies. Microservices can bring great value to agility, velocity, availability, scalability and accountability across both technical and business organizational dimensions.
We believe that every organization should exercise patient urgency, which author and futurist Chunka Mui describes as “the combination of foresight to prepare for a big idea, willingness to wait for the right market conditions and agility to act straight away when conditions ripen.”
We look forward to covering our views on the role of microservices in insurance in Part 2. Please share your views on this exciting topic in the comments section. We would enjoy hearing your perspective.
This article was written by Manish Shah and Sachin Dhamane.