
Underwriting Lessons From the PGA

One of the amazing things about where we are in the arc of data changing our lives is that analytic models are pervasive. They are changing our professional lives, for sure, but I was also reminded recently that models can be used in all areas of our lives. Why? Because, golf! As I watched the PGA Tour Championship, I thought about how analytic models had recently helped me cash in on predictive golf data.

For the British Open golf tournament in July, the golf club where I play ran a Pick 5 pool. The field was divided into the Top 5 players and A, B, C and D groups of players. You pick one player from each group, and the handful of people whose five players perform best win some credits in the pro shop.

I could have simply made my picks based on research, gut feel for the players and a little knowledge of the game. Instead, in a surprise to nobody, I opted to pick using a big data approach. CBS Sports created a simulation of all the golfers in the field playing the course for the event 10,000 times. They used the current statistics for each player, mapped how those statistics would help or hurt the player on the specific course for the event and then ranked the projected scores for the golfers. I made my picks based on their results.
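
CBS Sports hasn't published the details of its model, but the general simulate-and-rank idea is easy to sketch. Below is a minimal, hypothetical Python version; the player names, scoring averages and course-fit adjustments are invented for illustration only:

```python
import random

# Hypothetical per-player stats: (baseline scoring average, course-fit adjustment).
# A negative adjustment means the player's strengths suit this particular course.
PLAYERS = {
    "Player A": (69.8, -0.4),
    "Player B": (70.1, +0.3),
    "Player C": (70.5, -0.7),
    "Player D": (71.2, +0.1),
}

def simulate_tournament(players, rounds=4, sims=10_000, seed=1):
    """Average 72-hole score for each player across many simulated tournaments."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in players}
    for _ in range(sims):
        for name, (avg, course_adj) in players.items():
            score = sum(rng.gauss(avg + course_adj, 2.5) for _ in range(rounds))
            totals[name] += score
    return {name: total / sims for name, total in totals.items()}

projections = simulate_tournament(PLAYERS)
for name, score in sorted(projections.items(), key=lambda kv: kv[1]):
    print(f"{name}: projected {score:.1f}")
```

Picking the golfer with the lowest projected score in each group is, in effect, the pool strategy I borrowed.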

I won the pool for the British Open using this approach. The golfers that the CBS Sports model projected as the lowest scorers in each group made up the best entry of the roughly 150 submitted by my club mates.

Where is the win in insurance data?

My experience has a corollary in insurance. There is money to be made (and saved) in insurance data modeling by understanding where underwriting is heading with the power of analytics. While we look at what is changing in underwriting, we'll also look at its impact on insurance profitability, examining three areas in particular:

  • Improving the pool of risk
  • Deeper analysis and new data sources that will drive product innovation
  • Artificial Intelligence (AI) and predictive analytics

Improving the pool of risk

Let’s start with the basics and define the pool. Our pool contains insureds (the breadth of the pool) and their data (the depth of the pool). It would be nice, as underwriters, to pick only pools of winners, but criteria that strict would give us pools that are too small to generate premiums, and underwriters would frequently “lose” because their best picks would disappoint them.

This is the first lesson from the golf simulation's success: I didn't use it to pick the winner of the tournament. I used the model to pick a portfolio of golfers who should have performed better than the others in their group. In fact, the eventual tournament winner wasn't among my five picks.

As with putting together a baseball team, picking stocks for a mutual fund or any pursuit where the performance of a group matters, the need to build a healthy pool of risk is a "no-brainer." Actually doing it, however, is more difficult than simply looking at a few key factors. It requires expert data analysis (some of it automated). It requires excellent visibility into the pool of risk. And it requires continual monitoring and tweaking (possibly with some assistance from AI and cognitive computing).

See also: The Next Step in Underwriting  

The basic idea, in summary, is that we need complete knowledge of the full pool and better visibility into the life of the individual applicant. Underwriters are trying to create a balanced portfolio. They don't need to pick a perfect risk, but they do need to know who is positioned to outperform their peers. By figuring out how to identify those above-expectation performers, they can skew their portfolio risk lower and outperform the odds and the market.
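
As a rough illustration of that idea, here is a toy sketch (the applicants and predicted loss ratios are hypothetical) that keeps the risks projected to beat the average of their own peer group, rather than hunting for a single "perfect" pick:

```python
from statistics import mean

# Hypothetical applicants: (name, peer_group, predicted loss ratio from your model).
applicants = [
    ("Alice", "B", 0.58), ("Bob",  "B", 0.71), ("Cara", "B", 0.62),
    ("Dan",   "C", 0.66), ("Erin", "C", 0.79), ("Finn", "C", 0.70),
]

# Average predicted loss ratio per peer group.
group_values = {}
for _, group, lr in applicants:
    group_values.setdefault(group, []).append(lr)
group_avg = {g: mean(vals) for g, vals in group_values.items()}

# Keep risks projected to outperform their own peer group, not "perfect" risks.
preferred = [(n, g, lr) for n, g, lr in applicants if lr < group_avg[g]]
print(preferred)   # Alice and Cara beat group B's average; Dan and Finn beat group C's
```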

Deeper analysis, new data sources and “smarter” pools will prepare insurers for product innovation.

The second lesson from the golf simulation was this: Every piece of data that is available should be made available to the decision process. In Majesco's recent report, Winning in a New Age of Insurance: Insurance Moneyball, we look at how outdated analytic techniques can hide strategic opportunities. The risk to insurers is that up-and-comers will evaluate and price risk with more sources of data and more relevant data.

Traditional underwriting characteristics will give you "A," "B" and "C" risks (as well as those you'll reject), but they won't let you see within a peer group to find where there's value in writing business. Traditional underwriting also assumes that an applicant's factors don't change once they have entered the pool. And it treats everyone in the pool equally (same premiums, same terms), with the same expected outcomes.

But what if pools were built with the ability to tap into more granular data and to adapt forecasts based on current conditions and possible trends? Like looking at a golfer's ability to play on a wet course, what if we could see how a number of new factors, both personal and global, would affect outcomes? For example, what if commercial insurers could see how small changes in investor sentiment early in a cycle drive expensive, D&O-covered class action lawsuits three years (two renewals) later?

Look at life insurance. When your company initially accepted Ron as an applicant, it placed him into the A pool. At the time, you only collected MIB data, credit data and some personal data. Since then, you've started giving small discounts to that same pool when given access to wearable and social media data, and you have started collecting Rx reports. In running some simulations, you realize that a combination of factors from these new sources, such as Amazon purchase data or wearables, can give you a much better picture of possible outcomes.

What if you set out to improve predictive analytics within the pool by re-analyzing the pool under newer criteria? Perhaps you offer to give wearables at a discount to insureds or free health check-ups to at-risk members of the pool. It could be any kind of data, but the key is continuous pool analysis.
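
To make that concrete, here is a deliberately simplified sketch of re-scoring a pool once newer signals are available. The fields and weights are purely illustrative, not an actuarial model:

```python
# Hypothetical insureds: traditional factors plus newer signals (wearable activity,
# Rx adherence). Weights below are illustrative only.
pool = [
    {"id": "R-1001", "age": 52, "smoker": 0, "daily_steps": 9200, "rx_adherence": 0.95},
    {"id": "R-1002", "age": 47, "smoker": 1, "daily_steps": 3100, "rx_adherence": 0.60},
]

def traditional_score(p):
    """Risk score using only the factors collected at issue."""
    return 0.02 * p["age"] + 0.8 * p["smoker"]

def enriched_score(p):
    """Re-score the same insured once the newer data streams are attached."""
    score = traditional_score(p)
    score -= 0.00005 * p["daily_steps"]   # more activity, lower assumed risk
    score -= 0.3 * p["rx_adherence"]      # medication adherence lowers assumed risk
    return score

for person in pool:
    print(person["id"], round(traditional_score(person), 3),
          "->", round(enriched_score(person), 3))
```

Running this kind of re-scoring continuously, rather than once at issue, is what turns a static pool into one that adapts as conditions change.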

Preparation’s bonus: Product agility and on-demand underwriting

Every bit of work that goes into analyzing new data sources has a doubly valuable incentive: preparation for next-generation product development.

Once we have our data sources in place and our analytics models prepared, we can grasp the real value in each source, adding some redundancy and fluidity to the process. So, if a data source goes away, is temporarily unavailable or becomes tainted (imagine more Experian breaches), it can be removed without consequence.
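
One way to build that redundancy is to give every feature a list of providers and let the pipeline degrade gracefully when one disappears. This is only a sketch; the function names are hypothetical placeholders for real source integrations:

```python
def fetch_credit_data(applicant_id):
    """Stand-in for an external bureau call; may fail or be unavailable."""
    raise ConnectionError("bureau unavailable")

def fetch_internal_history(applicant_id):
    """Stand-in for an internal policy/claims history lookup."""
    return {"prior_claims": 1}

# Each feature lists its preferred source and fallbacks, so a tainted or
# unavailable source can be dropped without stalling the whole pipeline.
SOURCES = {
    "credit": [fetch_credit_data],
    "history": [fetch_internal_history],
}

def build_features(applicant_id):
    features = {}
    for name, providers in SOURCES.items():
        for provider in providers:
            try:
                features[name] = provider(applicant_id)
                break
            except Exception:
                continue           # try the next provider, if any
        else:
            features[name] = None  # degrade gracefully; the model handles missing data
    return features

print(build_features("A-77"))  # {'credit': None, 'history': {'prior_claims': 1}}
```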

This new thinking will help insurers prepare for on-demand products that will need not just on-demand underwriting but on-demand rating and pricing. In our thought leadership report, Future Trends 2017: The Shift Gains Momentum, we showed how the sharing economy is giving rise to new product needs and new business models that use real-time, on-demand data to create innovative products that don't fit under the constraints of current underwriting practices. P&C insurers, for example, are experimenting with products that can be turned on and off for different coverages … like auto insurance for people who drive for Uber or Lyft. And this is just the start of the on-demand world, where insurance is available when and where it is needed and priced based on location, duration and circumstances of need.

If an insurer has removed the rigidity of its data collection and added real depth to data alternatives, it will be able to approach these markets with greater ease. At Majesco, we help insurers employ data and analytic strategies that will provide agility in the use of data streams.

Real-time underwriting will become instant/continuous underwriting. Analytics will be used more to prevent claims than to predict them.

Which brings us to the role of artificial intelligence in underwriting.

See also: Data Opportunities in Underwriting  

AI and predictive analytics

Simulations have been in use for decades, but, with artificial intelligence and cognitive computing, simulations and learning systems will become underwriting's greatest asset. Underwriters who have seen hundreds and thousands of applications can pick out outlying factors that have an impact on claims experience. This is good, and certainly it should continue, but perhaps a better approach to picking the winners would be to run applications through simulations first. Let cognitive computing pick out the outlying factors, and allow predictive analytics to weigh applications and opportunities for protection. (For more information on how AI will affect insurance, be sure to read Majesco's Future Trends 2017: The Shift Gains Momentum).

Machine learning will improve actuarial models, bringing even more consistency to underwriting and greater automation potential to higher and higher policy values. And it will also allow for “creativity” and rapid testing of new products. Can we adapt a factor and re-run the simulation? Can we dial up or dial down the importance of a factor? Majesco is currently working with IBM to integrate AI/cognitive into the next generation of underwriting and data analysis.
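
As a toy example of "dialing" a factor and re-running the simulation, the sketch below varies the weight placed on a hypothetical telematics signal and compares the simulated portfolio loss ratio. The loss model is a stand-in, not a real actuarial formula:

```python
import random

def simulate_loss_ratio(weight_on_telematics, sims=5_000, seed=7):
    """Toy simulation: expected portfolio loss ratio under a given factor weight."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        telematics_signal = rng.gauss(0.0, 1.0)   # higher = safer driving behavior
        base_loss = rng.gauss(0.65, 0.05)
        # The more weight the selection rule puts on the telematics signal,
        # the more of the low-signal (riskier) applicants it screens out.
        total += base_loss + 0.02 * max(0.0, -telematics_signal) * (1 - weight_on_telematics)
    return total / sims

for w in (0.0, 0.5, 1.0):          # dial the factor down, to the middle, and up
    print(f"weight={w:.1f} -> simulated loss ratio {simulate_loss_ratio(w):.4f}")
```

The point is less the arithmetic than the workflow: change one assumption, re-run, compare, and let the results guide product and underwriting design.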

Perfection is unattainable. But if we aim for the best process we can produce, we can certainly use new sources of data and new methods of analysis to improve our game and take home a higher share of the winnings.

How do I know this? Well, the golf club ran a pool for the PGA Championship the month after the British Open. I didn't win that pool. Out of more than 200 entries, I came in second. Cha-ching!

Producing Data’s Motion Pictures

Reality is tough to capture. It keeps moving. But somehow we’re growing faster and better at capturing it.

Consider visual reality. In 200 years, we've moved from illustrations and paintings, through still photography and into motion pictures. We then created technologies to transport those motion pictures across space to the places we wanted them. We're now looking at 4K televisions and talking to family with FaceTime or Skype on displays with resolution equal to or greater than that of our eyes.

Data’s reality is no different. Back in the late 1980s, I did work for a paint manufacturer, trying to monitor the real-time operating conditions in one of its paint plants. We connected some PCs to the plant’s programmable logic controllers and then asked the controllers every 30 seconds, “How are things going? What are you working on?” The controllers spit out lots of data on operating conditions. We charted, we graphed (in real time!), and the plant operators had new insights on how things were going with paint production.

We were augmenting the physical instrumentation of the plant with virtual instrumentation.

Instrumentation — Data’s Virtual Reality  

So how is your insurance company instrumented? Are things running a little hot? Do you find yourself running short on any raw materials? How full is the pipeline? When do you find that out? Is it tucked into a spreadsheet a few weeks after the end of the month? Could you make more money if you found out in five minutes instead of five weeks?

Are “modern” insurers still living on static pictures of data’s reality?

Insurance leaders are creating real-time instrumentation for their companies, allowing them to open and close capacity on everything from granular geographies to wind risk, and to monitor premium production compared with last week, last month, last quarter, last year, as of today or any day.

To better instrument our companies, we need to think about: acquisition and transportation; accuracy; presentation timing and type; automation and cognitive capabilities; and actions and reactions. When you finish this post, I think you'll agree with me that instrumentation should carry a high priority in insurance's digital agenda.

See also: How Virtual Reality Reimagines Data  

Acquisition and Transportation of Data

How do we monitor the data in a flow of information in constant motion, not just the discrete sets that are static and in place? First, our goal is NOT to be another weigh station in a step-by-step process. We need to be tapped into the flow without impeding it. To do this, we set up measurement devices that let us peek into the flow and draw off the information we need, then shuttle it to where we need it. This is not unlike the earliest "vampire" network connectors, which fed on Ethernet cables rather than sitting within the circuit like a socket.

There are any number of tools that one can use for real-time streaming and visualization, but the key to having any of them working properly is the setup of the data acquisition. A vampire approach will allow for real-time monitoring, as opposed to relying on continual requests and responses from data sources.
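
A minimal sketch of that tap idea, using nothing more than Python's standard library: the monitor receives a copy of each event, but the operational flow never waits on it. In a real deployment the queues would be a streaming platform, but the principle is the same:

```python
import queue
import threading
import time

main_flow = queue.Queue()      # the operational stream we must not impede
monitor_tap = queue.Queue()    # a copy of events for real-time instrumentation

def producer():
    """Stand-in for the operational process emitting events."""
    for i in range(5):
        event = {"policy": f"P-{i}", "premium": 100 + i}
        main_flow.put(event)
        # Tap: push a copy for monitoring, but never block the main flow on it.
        try:
            monitor_tap.put_nowait(dict(event))
        except queue.Full:
            pass               # losing a monitoring sample is acceptable; production flow is not
        time.sleep(0.01)

def monitor():
    """The 'vampire' side: observe the copies as they arrive."""
    while True:
        event = monitor_tap.get()
        print("observed:", event)
        monitor_tap.task_done()

threading.Thread(target=monitor, daemon=True).start()
producer()
time.sleep(0.1)                # give the monitor a moment to drain before exit
```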

Accuracy of Data

One of the challenges in looking at continuous data is that spurious results may throw off the averages, so we need to be careful about outlier events. When looking at real-time data, it is far more likely that outliers will appear.

For example, as I was driving the other day, one of the "Your speed is…" signs I passed registered 110 mph. (I've driven 110 mph before, but not this day.) It quickly corrected itself to 55 mph. Data "in flight" like that needs the right sampling periodicity to make sure that it captures the 55s, not the spurious 110s. And the system obviously needs to be trained on what to notice and what to ignore. Automated removal of outliers helps keep the data pure. Keeping a concrete set of rules regarding data's use will be very important in allowing people to trust the data when it is presented.
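
A small sketch of that kind of guardrail: a rolling-median filter that drops readings far outside the recent baseline. The window size and threshold are illustrative and would need tuning for a real feed:

```python
from collections import deque
from statistics import median

class SpikeFilter:
    """Ignore readings that deviate wildly from the recent rolling median."""
    def __init__(self, window=5, max_ratio=1.5):
        self.recent = deque(maxlen=window)
        self.max_ratio = max_ratio

    def accept(self, value):
        if len(self.recent) >= 3:
            baseline = median(self.recent)
            if value > baseline * self.max_ratio or value < baseline / self.max_ratio:
                return False              # treat as spurious; do not pollute the averages
        self.recent.append(value)
        return True

f = SpikeFilter()
readings = [54, 55, 56, 110, 55, 54]          # the 110 mph blip gets dropped
print([r for r in readings if f.accept(r)])   # [54, 55, 56, 55, 54]
```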

Presentation Timing and Type

In 2007 and 2008, Starbucks opened stores as part of an undisciplined growth strategy. Eighteen months later, many of them were shuttered in a massive restructuring. In 2011 and 2012, Starbucks was adding stores again, but this time based on GIS traffic-flow data and demographics. Real-time reporting had become a more valued part of the business structure. Former Starbucks CEO Howard Schultz reportedly received store performance numbers as frequently as four times each day.

How often an insurer needs data and how it wishes to have information presented is a matter of need and preference, but it can clearly be tied to business strategy. One client we worked with realized that continual data visualization in public locations, such as lobbies and meeting areas, helped the whole community see how important data was to the decision process. Others may wish to keep their data tucked out of sight but still available via tablet or cell phone.

Depending on its reactive capability, an insurer may want feedback every day, every hour or every few minutes. Whether you choose to use dashboards, standard reports or e-mailed updates will also depend on your role and your need to know.

Automation and Cognitive Use

One of the drawbacks to data visuals of any kind is that they are subject to perspective. Trends and movements can be hard to spot over time. Anyone familiar with Excel line graphs will understand what I’m talking about.

A line graph of such data can look fairly flat even when it shows a 5% move from start to finish. Identifying a movement of that size is important; a simple automated drift check, like the sketch below, can catch it.
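
This is a minimal sketch of that check, assuming a plain start-to-finish drift threshold; a production version would look at slope, seasonality and noise as well:

```python
def flag_drift(series, threshold=0.03):
    """Alert when the series has moved more than `threshold` from start to finish."""
    change = (series[-1] - series[0]) / series[0]
    return change, abs(change) >= threshold

# Looks flat on a chart, but the book has quietly grown 5% over the period.
weekly_premium = [100.0, 100.4, 101.1, 100.8, 102.0, 103.2, 104.1, 105.0]
change, alert = flag_drift(weekly_premium)
print(f"{change:+.1%} drift, alert={alert}")   # +5.0% drift, alert=True
```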

Here is where automation in data's motion pictures plays an important role. If the system can "learn" what good performance looks like, then it can also improve its ability to communicate vital information in a timely manner. I was just on a call where we discussed facial recognition in insurance; teams are working to identify faces and the emotions on them. If we have tools that can tell when someone is unhappy, surely we can use similar tools to recognize a hidden pattern in our data. Data's flow won't just represent current trends; it will also reveal oft-hidden patterns. What we think we know from our common snapshot approach to data may be overturned when cognitive capabilities start to bring new insights to our eyes. Once again, data's motion pictures aren't just for our own amusement; they greatly enhance our strategies and decisions.

Actions and Reactions

If I run a chemical plant, I’m deeply concerned with monitoring real-time flow. Every action I take to tune that plant has a reaction. As insurers, we should also be concerned with real-time flow, capturing our understanding of reality.

But there is also a historical component to data’s adjustments. In the chemical plant, if I change the mixture of a certain compound based on my data and the new mixture works, then I need to capture that moment in time as well. It is equally important for insurers to capture the timing of their corrective actions to make sure that we can see the relationship between action and reaction.

See also: Your Data Strategies: #Same or #Goals?  

Overlaying notes to explain that "we reduced available capacity in less profitable zip codes in June" should show some point of inflection in our results. Having that as part of our reporting is critical to creating the positive action-and-reaction cycle that we want to reinforce.
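
A minimal sketch of that overlay, with invented numbers: log the corrective action by month and compare results before and after it, so the annotation and the inflection point travel together in the reporting:

```python
from statistics import mean

# Monthly loss ratio for a set of zip codes, plus a log of corrective actions.
monthly_loss_ratio = {1: 0.74, 2: 0.75, 3: 0.76, 4: 0.73, 5: 0.75,
                      6: 0.74, 7: 0.70, 8: 0.68, 9: 0.66}
actions = {6: "reduced available capacity in less profitable zip codes"}

for month, note in actions.items():
    before = mean(v for m, v in monthly_loss_ratio.items() if m < month)
    after = mean(v for m, v in monthly_loss_ratio.items() if m >= month)
    print(f"Month {month}: {note}")
    print(f"  loss ratio before: {before:.3f}, after: {after:.3f}")
```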

We have an embarrassment of riches when it comes to data, and we are only going to get richer in the coming years. By instrumenting our organizations and realizing that we need some new tools and techniques to turn that information into actions that create the right reactions in our organizations, we can improve our results every day, week and month — not just when we close the books.

Don’t Believe Your Own Fake News!

According to Gallup’s long-running Honesty and Ethics in Professions survey, trust in journalists over the last 40 years has seen a steady decline and is now at an all-time low. Part of the reason is the wide variety of sources available to journalists and the speed with which people are clamoring for news. Back when there were only three primary networks and a limited number of major newspapers, seasoned reporters seemed to keep a tighter rein on journalism’s criteria and standards.

Insurance executives are suffering from many of the same issues when trying to rely on their data and analytics. They may frequently ask themselves, “Where am I getting my news about my business?” and “Can I trust what I’m being told?” Data within the organization can be coming from anywhere inside or outside the company. Analytics can be practiced by those who may be reaching across departmental boundaries. Methods may contain errors. Reporting can be suspect. Decisions may be hastily made based on “fake news.”

No industry is immune. Google Flu Trends (2008-2013) was supposed to use a geographic picture of search terms loosely related to the flu to predict outbreaks better than the Centers for Disease Control and Prevention (CDC). Somehow, though, the algorithms consistently overrated correlations and over-predicted outbreaks. After several years of poor results, teams from Northeastern University, the University of Houston and Harvard concluded that one of Google's primary issues was opaque methodology, making it "dangerous to rely on."

See also: Innovation Won’t Work Without This  

Here are four actions that insurers can take to close data and analytic gaps and create an environment where news reflects reality and can be trusted.

Watermarks

One simple recommendation is to watermark views of data as certified. Certified sources, certified views and certified analyses could carry a mark that would only be allowed if a series of steps had been taken to maintain source and process purity. This Good Housekeeping Seal of Approval will provide your organization’s information consumers with the confidence that they are looking at real news. Of course, the important part in this process is not the mark itself, but developing the methods for certifying.
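
One lightweight way to implement the mark, sketched below with hypothetical step names: refuse to stamp a view as certified unless the required steps are on record, and hash the content so later tampering is detectable. A real certification process would carry far more governance detail:

```python
import hashlib
import json
from datetime import datetime, timezone

def certify(view_name, rows, source, steps):
    """Attach a 'certified' watermark only after the required steps are recorded."""
    required = {"source_vetted", "quality_checked", "lineage_recorded"}
    if not required.issubset(steps):
        raise ValueError(f"cannot certify; missing steps: {required - set(steps)}")
    payload = json.dumps(rows, sort_keys=True).encode()
    return {
        "view": view_name,
        "source": source,
        "certified_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(payload).hexdigest(),  # detects later tampering
        "watermark": "CERTIFIED",
    }

mark = certify("monthly_premium_by_state",
               rows=[{"state": "OH", "premium": 1_250_000}],
               source="policy_admin_extract",
               steps={"source_vetted", "quality_checked", "lineage_recorded"})
print(mark["watermark"], mark["content_hash"][:12])
```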

Attribution

Attributing even ad hoc analysis back to its data source also allows other team members to trust that the source is vetted and that the information presented is verifiable. In any research project, it is common to add data citations, just as one would add a footnote in an article or paper.

Attributions add one other important layer of security to data and analytics: historical reference. If a team member leaves or is assigned to another project, someone attempting to duplicate the analysis a year from now will know where to look for an updated data set. It is also likely that the results of decisions made on the data won't be seen for many months or years. If those results are less than optimal, teams may wish to examine the documented data sources and analytic processes.
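
A data citation can be as simple as a small structured record kept alongside the analysis. The fields and values below are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DataCitation:
    """A footnote-style record of where an analysis got its data."""
    dataset: str
    owner: str
    retrieved: date
    refresh_path: str            # where a successor can find the updated set
    notes: str = ""

citation = DataCitation(
    dataset="rx_history_extract_q3",
    owner="data_engineering",
    retrieved=date(2017, 9, 30),
    refresh_path="warehouse.pharmacy.rx_history",
    notes="Used in lapse-propensity analysis; re-pull before re-running.",
)
print(citation)
```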

Governance

Focusing the organization on the benefits of good data hygiene and creating a culture of data quality will improve both the quality of your data and the level of trust in your information. Governance is the core of safe data usability. Poor practices and fake news arise most easily in a loosely governed data organization.

The concepts of governance should be communicated throughout the organization so that those who have been practicing data analytics without oversight can “come in from out of the cold” and allow their practices to be verified. But governance teams should always act less like data police and more like best practice facilitators. The goal is to enable the organization to make the best decisions in a timely manner, not to promote rigidity at the cost of opportunity.

See also: Are You Still Selling Newspapers?  

Constant Listening

Finally, when data teams constantly have their ear to the ground and are continuously aligning the information that is available with the needs of the consumers of that information, then best practices will happen naturally. This awareness not only ensures that fake news is kept to a minimum but also ensures that new, less reliable reports and views are not cropping up with the excuse that necessity is the mother of invention.

It also means that data teams will have their eyes open to new sources with which to assist the business. When data teams and business users are frequently helping each other to attain the best results, a crucial bond is formed where everyone is unified behind the visualization of timely, transparent, usable insights. Data stewards will have confidence that their news is real. Business users will have confidence to act upon it.

How Smart Can Insurance Get?

For insurers and technology partners, this is a fun question to ponder: How smart can insurance get? Perhaps an even broader question might be: “What is smart insurance?” What does it look like to apply analytics-based decisions to the process — from underwriting through claims? More importantly, what does it look like to apply penetrating data knowledge to individual people and individual risks?

I think these answers may lie in a closer look at our human relationships and how closely they parallel what insurers are trying to do with a wide and growing array of risks. As the insurance industry shifts its concerns and adds digital connectivity and mature data analytics to its portfolio, it may come to look, sound and act much more like your mother.

Once you're done thinking about that picture, let's consider it for a moment. Insurance technology is striving to become cognitive and connected. The cognitive part will be forecasting problematic issues and preventing claims events. It will be asking who your friends are and wondering where you hang out. It will seem like it cares about you, and in some ways it will. The connected part will be deriving relevant insights from everywhere.

See also: 4 Steps to Ease Data Migration  

“Smart insurance” will be insurance that knows its insureds well, and the insurers that survive and thrive will be INCREDIBLY smart, powerful and successful.

The only way insurance will become smart, however, is through data. Data is the gatekeeper to future insurer success. The long-term competitive advantages in data will go to those who collect it across long periods. Data is like calculus or learning a foreign language: It is a building-block discipline that requires hands-on learning and manipulation to grow its usefulness.

Insurers that are dealing with data well today are going to have a long-term advantage.

To illustrate my point, I’d like to look at three aspects of data that we will probably be thinking about for however long insurance is in existence. These three are: patterns, volume and experience. All three play into an insurer’s data capabilities.

Analytics is all about patterns or lack of patterns — finding the signal in the noise.

In one of my previous blogs, I discussed my affinity for Pandora. Just as my Mom could tell you my first word, Pandora can tell me the first song that I ever listened to on its service. With every song I listen to, it learns more about me. We’ve grown close. It knows what I like and what I don’t, so it is able to identify the signal data and tune out the noise data. How does it do this? It takes my personal data and cross-references it with its 100 million other users to find patterns.

As amazing as that is, pattern analysis in insurance has far greater implications and far more exciting applications. With it, we’ll be able to home in on signal indicators within the data and tune out the noise, identifying what’s unnecessary. This will result in an insurer’s ability to make “on the fly” decisions based on patterns that have been learned through cognitive systems, such as IBM’s Watson. Recently, IBM and Majesco announced a partnership (you can read the press release here) to bring cognitive capabilities to cloud insurance offerings and insurance capabilities into the cognitive sphere. Data gives a cognitive learning system the food it needs to accomplish well-rounded learning and growth. The more relevant data it can consume, the better it can find patterns and separate good risks from bad.

Data volume is a crucial aspect of the long-term data advantage.

While some companies worry that they have too much data to structure, organize and store effectively, many simply don’t have enough. They are either letting data streams sift through their fingers like sand, or they are not seeking the relevant data streams that will empower their risk selections.

When they think of data, they may be thinking about the three or four traditional data sources that normally point to good risks or bad. In underwriting, for example (a common point for data scoring), insurers may pull from only a few common sources for information on applicant history.

Yet, the future of data decisions may look more like Mom than we know, weighing the big picture and all of the little details. There may come a point where insurance companies shy away from questionable risks on a sort of "data-formulated hunch," based not on any one large factor but on a hundred tiny hints: applicants with similar profiles previously turned out to be bad risks for no apparent reason. Maybe we'll call it insurance intuition. But insurance intuition will only be possible with large volumes of long-term histories, combined with relevant real-time data streams. The difference between insurance decisions yesterday and those of the future will look like the difference between a handwritten description and a hologram. Insurers are beginning to crave the transparency that data can provide.

To prepare, insurers need a well-planned and well-structured data organization. They need definitive data knowledge across the enterprise, knowing where they are generating data and which data streams are currently being used. How is the data structured for usability? How is the organization archiving the data for later use? Then insurers need an understanding of what new data streams may exist outside the organization that will add value to their analytics. All of these considerations require insurers to continuously build their volume of usable data.

Experience unlocks data’s long-term value.

Insurance is about experiences. The more experiences that insurers can record and analyze, the better they will be positioned to accept risks. But the future of experiences and modeling likely outcomes is so much more than that. For an excellent example, let’s look at Google’s work with autonomous vehicles.

Google can't just place a car on the road and let it drive. It needs the system to learn about hazards, driver behavior and traffic patterns, and to sense the unexpected. It needs millions of hours of experiential data, far more than it can acquire with daily driving. What Google has done is to use real data as the seed for simulations. These simulations model thousands of possible outcomes to any given situation, "teaching" and rewriting the software to adapt without road time. In this way, the Google car is gaining experiences without experience.
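
In miniature, and with entirely invented numbers, the seed-and-perturb idea looks like the sketch below: take one recorded scenario, generate thousands of synthetic variations of it, and see how often the outcome model flags a claim:

```python
import random

# One recorded "real" scenario (e.g., a near-miss claim event), used as a seed.
seed_scenario = {"speed_mph": 42.0, "following_distance_ft": 33.0, "braking_g": 0.31}

def perturb(scenario, rng):
    """Create a synthetic variation of a recorded scenario."""
    return {k: v * rng.uniform(0.85, 1.15) for k, v in scenario.items()}

def leads_to_claim(s):
    """Toy outcome rule standing in for a learned model."""
    return s["following_distance_ft"] < 30 and s["braking_g"] > 0.35

rng = random.Random(42)
simulated = [perturb(seed_scenario, rng) for _ in range(10_000)]
claim_rate = sum(leads_to_claim(s) for s in simulated) / len(simulated)
print(f"simulated claim rate around this scenario: {claim_rate:.2%}")
```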

See also: 3 Types of Data for Personalization  

Think of what insurers could do with similar simulations. Using experiences to build new experiences and model thousands of different outcomes to the same event will make insurers better equipped to predict, prevent and protect their policyholders over the long term. As insureds approach a likely claims scenario, data’s cognitive déjà vu will kick in and avert a claims event. For insurance to grow smarter, it needs to reframe what it means to model scenarios based on experience.

Experience of a different kind is also a key factor in data’s long-term value. Insurers simply need time to grow their data mastery. Analytics requires testing and validation. Experience, as well as tools, approach and data sources, is what will allow insurers to mine the best analytics from the data they own.

There is no time like now.

Now is related to the future. It’s the future’s history. If you would like to build an effective data organization or plan your company’s vital data strategy, there is no time like now.

Your Data Strategies: #Same or #Goals?

Goldilocks entered the house of the three bears. The first bowl she saw was full of the standard, no-frills porridge. She took a picture with her smart phone and posted it to Instagram, with the caption #same. Then she came to Papa Bear’s bowl. It was filled with organic, locally grown lettuce and kale, locally sourced quinoa, farm-fresh goat cheese and foraged mushrooms. The dressing base was olive oil, pressed and filtered from Tuscan olives. It was presented in a Williams Sonoma bowl on a farm table background. She posted a photo with the caption #goals. By the time Goldilocks went to bed, she had 147 likes. The End.

Enter the era of the exceptional, where all that seems to matter is what is new, different and better. When Twitter came out, it didn’t take me long to pick up how to use hashtags. But then hashtags took on a life of their own and spawned a new language of twisted usage. Now we have #same — usually representing what is not exciting, new or distinctive. And we have #goals — something we could aim for (think Beyoncé’s hair or Bradley Cooper’s abs).

See also: Data and Analytics in P&C Insurance  

Despite their trendy, poppy, teenage feel, #same and #goals are actually excellent portable concepts. When it comes to your IT and data strategies, are they #same or are they #goals? What do your business goals look like? Are you possibly mistaking #same for #goals? Let’s consider our alternatives.

Are our strategies aspirational enough?

If you are involved in insurance technology — whether that is in infrastructure, core insurance systems, digital, innovation or data and analytics — you are perpetually looking forward. Insurance organizations are grappling daily with their future-focused strategies. One common theme we encounter relates to goals and strategies. Many organizations think they are moving forward, but they may just be doing the work that needs to be done to remain operational. #Same. When thinking through the portfolio of projects and looking at the overall strategy, it is common to wonder, “Isn’t this just another version of what we did three months ago, even three years ago?” Is the organization looking at business, markets, products and channels and asking, “Are we ready to make a difference in this market?” No one wants the bowl of lukewarm, plain porridge — especially customers.

Are we aiming one bowl too far?

On the flip side, our goals do need to remain rooted in reality. It's almost as common for optimistic teams to look at a really great strategy employed by Amazon, only to be reminded that their company isn't Amazon and doesn't need to be Amazon. It just needs to consider using Amazon-like capabilities that can enable its insurance strategy.

Data lakes can be a compelling component in modern insurance business processing architectures. But setting a goal to launch a 250-node cloud-based Hadoop cluster and declaring you’ll be entirely out of the business of running your own servers is not a strategy that’s right for everyone.

If the organization is pushed too far on risk or on reality, it creates organizational dissonance. It’s tough to recover from that. Leaders and teams may pull back and hesitate to try again. Our #goals shouldn’t become a #fail.

Finding the “just right” bowl.

Effective strategies are certainly based in reality, but do they stretch the organization to consider the future and how the strategies will help it to grow? When the balance is reached and the “just right” bowl full of aspirations is chosen, there is no better feeling. Our experience is that well-aligned organizational objectives married to positive stretch goals infuse insurers with energy.

This example of bowls, goals, balance and alignment is especially apropos to data and analytics organizations. It is easy for data teams to lay new visuals on last year's reports and spin through cycles improving processing throughput. To avoid the #same tag, these teams also need to evaluate all the emerging sources of third-party aggregated data and big data scalable technologies. With one foot in reality and one stretching toward new questions and new solutions, data analysts will remain engaged in providing ever-improving value.

See also: How to Capture Data Using Social Media  

Even if an organization could be technically advanced and organizationally perfect, it would still want to reach for something new, because change is constant. Reaching unleashes the power of your teams. Reaching challenges individuals to think at the top of their capacity and to tap into their creative sides. The excitement and motivation that improves productivity will also foster a culture of excellence and pride.

We are then left to the analysis of our individual circumstances. If you could snap a photo of your organization’s three-year plans, would you caption it #same or #goals? Inventing your own scale of aspiration, how many of your goals will stretch the organization and how many will just keep the lights on?