Tag Archives: risk analysis

AI’s Huge Potential for Underwriting

For decades, the insurance industry has led the world in predictive analysis and risk assessment. And today, with the treasure trove of big data available from historical processes, IoT and social media, insurance companies have the opportunity to take this discipline to a whole new level of accuracy, consistency and customer experience.

The actuarial models that were once driven solely by large databases can now be fueled with tremendous quantities of unstructured data from social media, online research and news, weather and traffic reports, real-time securities feeds and other valuable information sources as well as by “tribal knowledge” such as internal reports, policies and regulations, presentations, emails, memos and evaluations. In fact, it is estimated that 90% of global data has been created in the past two years, and 80% of that data is unstructured.

A large portion of this data now comes from the Internet of Things — computers, smart phones and wearables, GPS-enabled devices, transportation telematics, sensors, energy controls and medical devices. Even with the advancement of big data analytics, integrating all this structured and unstructured data would be a monumental task with traditional database management tools. Even if we could somehow blend this data, would we then need thousands of canned reports, or a highly trained data analytics expert in every operating department to make use of it? The answer to this dilemma may be as close as our smartphones.

Apps that Unleash the Power

As consumers, we are no strangers to the union of structured and unstructured datasets. A commuter, for example, once relied on Google Maps alone to get from his office to his home. With the advent of apps like Waze, he can not only get directions and arrival times based on mileage and speed data but also combine this intelligence with feeds from social media and crowd-sourced opinions on traffic. Significant advances in the power of in-memory processing, machine learning, artificial intelligence and natural language processing have the potential to blend millions of data points from operational systems, tribal knowledge and the Internet of Things — using apps no more complicated than Google Maps.

Using apps that harness the power of artificial intelligence and machine learning can provide far superior predictive analysis simply by typing in a question, such as: What are the chances of a terrorist act in Omaha during the month of December? Where is the most likely place a power blackout will occur in August? How many passenger train accidents will occur in the Northeast corridor over the next six months? What will be the effect on my fixed income portfolio if the Federal Reserve raises short-term interest rates by 0.25 percentage point?

Using a gamified interface, these apps can run techniques such as Monte Carlo simulation simply by moving and overlaying graphical objects on your computer screen or tablet. As an example, you could estimate the likely dollar damages to policyholders from an impending hurricane simply by moving symbols for wind, rain and time duration over a map image. Here are some typical applications for AI app technology in insurance:
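As a rough illustration, the kind of drag-and-drop hurricane damage estimate described above boils down to a Monte Carlo simulation underneath. Everything in this sketch (the triangular distributions and the damage formula) is a hypothetical placeholder, not an actual underwriting model:

```python
import random

def simulate_hurricane_damage(n_trials=100_000, seed=42):
    """Monte Carlo sketch of aggregate policyholder damage from a hurricane.

    The distributions and the damage formula are hypothetical placeholders;
    a real model would be calibrated against historical claims data.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        wind_mph = rng.triangular(60, 160, 100)   # sustained wind (low, high, mode)
        rain_in = rng.triangular(2, 30, 10)       # total rainfall
        duration_h = rng.triangular(6, 72, 24)    # hours over the region
        # Hypothetical damage function ($M): grows with the square of wind speed.
        damage = 0.001 * wind_mph**2 * (1 + 0.02 * rain_in) * (duration_h / 24)
        totals.append(damage)
    totals.sort()
    return {"mean": sum(totals) / n_trials, "p80": totals[int(0.80 * n_trials)]}

result = simulate_hurricane_damage()
print(f"Mean damage: ${result['mean']:.1f}M, P80: ${result['p80']:.1f}M")
```

Dragging the wind, rain or duration symbols would simply re-parameterize the distributions and rerun the simulation.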

Catastrophe Risk and Damage Analysis

Incorporate historical weather patterns, news, research reports and social media into calculations of risk from potential catastrophes to price coverage or determine prudent levels of reinsurance.

Targeted Risk Analysis (Single view of customers)

With the wealth of individual information available on people and organizations, it is now possible to apply AI and machine learning principles to provide risk profiles targeted down to an individual. For example, a Facebook profile of a mountain climbing enthusiast would indicate a propensity for risk taking that might warrant a different profile than that of a golfer. Machine learning agents can now parse LinkedIn profiles, Facebook posts, tweets and blogs to provide the underwriter with a targeted set of metrics to accurately assess the risk index of an individual.
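To make the idea concrete, here is a deliberately crude sketch of scoring risk from profile text. The terms and weights are invented for illustration; a production system would use trained NLP models, not a hand-written keyword list:

```python
# Purely illustrative: a real underwriting system would use trained NLP
# models on profile text, not this keyword list with invented weights.
HIGH_RISK_TERMS = {"mountain climbing": 3, "skydiving": 4, "base jumping": 5}
LOW_RISK_TERMS = {"golf": -1, "gardening": -1, "chess": -2}

def profile_risk_score(profile_text: str) -> int:
    """Crude risk index from free-text profile data (hypothetical weights)."""
    text = profile_text.lower()
    return sum(weight
               for term, weight in {**HIGH_RISK_TERMS, **LOW_RISK_TERMS}.items()
               if term in text)

print(profile_risk_score("Avid mountain climbing enthusiast"))  # 3
print(profile_risk_score("Weekend golf and gardening"))         # -2
```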


Each individual assessor has his own predilections in assessing risk. By some estimates, insurance companies could lose hundreds of millions of dollars either through inaccurate risk profiling or through customers lost to overpricing. AI apps provide the mechanics to capture “tribal knowledge,” thereby providing a uniform assessment metric across the entire underwriting process.

Claims Processing

By unifying unstructured data across historical claims, it is possible to establish ground rules (or quantitative metrics) across fuzzy baselines that were previously not possible. Claims notes from customer service representatives that would previously fall through the cracks are now caught, processed and flagged for better claims expediting and improved customer satisfaction. By incorporating personnel records when a major casualty event occurs, such as a severe storm or flood, you can now dispatch the most experienced claims personnel to areas with the highest-value property.

Fraud Control

Integrate social media into the claims review process. For example, it would be very suspicious if someone who had just filed a workers’ compensation claim for a severe back injury was bragging on Facebook about his performance in a weekend rugby match.

A Powerful Value Proposition

The value proposition of artificial intelligence apps for better insurance industry underwriting and risk management is too big to ignore. Apps have been transformational in the way we intelligently manage our lives, and App Orchid predicts they will be just as transformational in the way insurance companies manage their operations.

It’s Time to Revise ISO 31000

With the recent release of a new British standard BS 65000 on organizational resilience and the announcement by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) of a review of its 2004 enterprise risk management (ERM) framework, I believe that business is moving ahead of ISO 31000 as a necessary response to the evolving business environment and accelerating rate of technical change. Therefore, there is a strong case for taking a fresh look at ISO 31000.

As I’ve stated many times, the pace of business changes and evolution of management systems is accelerating in the 21st century. So, too, has the role of risk management. The ground is continuing to move under our feet. Long a supporter of Martin Davies’ causal approach to risk management, I feel the albatross of risk heat maps and 20th century occupational health and safety (OHS) perceptions of risk are causing business to bypass risk management.

Has Risk Management Been Lost in Operational Risk?

In a recent article titled “Ten steps to corporate risk analysis,” David Vos refers to the need for quantitative risk analysis (QRA) and says “only about one quarter of corporate strategic planning departments truly use simulation analysis (the most useful means of evaluating risks), and only a third quantify their risks at all.” This left me dumbfounded, for if risk is the effect of uncertainty on objectives, how can any system claim to be managing risk without quantifying it? It leads me to ask, outside banking and insurance, how many people are really “managing” risk as opposed to recording it?

Could it be arrogance, where we have elevated ourselves to the “opportunity and decision making” levels of business, causing us to lose sight of our primary role in the business landscape?

Is the Legal Department Taking Over Risk?

In a recent article, I criticized plan, do, check, act (PDCA) as an outdated, serial approach to continuous improvement, proposing instead realization, optimization and innovations as an interactive real-time approach using mathematical predictive analytics. It seems the usually lagging legal fraternity is advocating a similar approach “that may be used by the legal department for risk management purposes. These innovative uses of available technology can increase the return on investment in the technology and provide an added incentive to move forward with new approaches to risk management.” Is the legal department to become the vanguard for ERM? With legal’s relationship to corporate governance, that is not beyond the realm of possibilities!

Although I am most likely preaching to the converted, we need to change the purpose of risk management from being administrative to being an active, valuable tool. This mandates, at a minimum, a reasonable level of understanding of statistical and analytic mathematics and the realization that an Excel spreadsheet cannot be proactive. As ISO 31000 is the only tool we have to wage this war, and 2009 was a lifetime ago in terms of business practice (basically, before the end of the Great Financial Crisis), I believe it requires a major overhaul or risks becoming irrelevant.

Finally, risking the wrath of the ever-swelling ranks of generalist operational risk consultants out there: However altruistic the original decision for ISO 31000 not to be certifiable, there is a need to introduce a method of certification to engender value and consistency in the reputation of ISO 31000.

My Suggestions for a Revised ISO 31000

As a starting point, I would suggest:

  • Strengthen requirements on risk culture and risk appetite
  • Mandate the use of quantitative risk analysis (QRA)
  • Mandate the use of causal analysis and monitoring
  • Take an active approach to risk management
  • Incorporate BS65000 and resilience as part of ISO 31000
  • Introduce certification to protect the ISO 31000 brand

What Risk Reports Won’t Tell You

Usually, the first questions the project director asks are:

  1. “What are the top 10 risks by cost P80?”
  2. “What is the P80 of cost risk?”
  3. “How does the total compare with the cost contingency?”

These seem like fundamental, simple questions for a project director, but they actually display a complete failure to understand the nature of risk or risk over time.

In this short paper, I want to summarize just what useful information monthly risk reports can provide to project managers.

1. Quantitative Risk Analysis

Monte Carlo simulation is the core of quantitative risk analysis (QRA) and is used to combine risk distribution assessments for probability and consequence.

Risk is historically defined as the product of probability and consequence (De Moivre 1711). But multiplying two distributions together is no casual mathematical exercise. On a mega-project, there can easily be a thousand-plus risks. The sum of all the products of the individual risks is a distribution for the total risk.

Risk has two components:

i. Probability of occurrence, the subjective belief that the risk will occur. This is a binary distribution because it has two states — i.e., it happens or it doesn’t — and is called a Bernoulli distribution.

ii. A consequence measured in terms of cost, delay or performance deterioration. This is also a distribution. In project risk, three-point triangular or PERT distributions are commonly used.
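These two components can be sketched in a minimal Monte Carlo simulation, assuming a small hypothetical risk register:

```python
import random

def simulate_total_risk(risks, n_trials=50_000, seed=1):
    """Combine a Bernoulli occurrence with a triangular consequence per risk,
    then sum across the register to build the total-risk distribution."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for prob, (low, mode, high) in risks:
            if rng.random() < prob:                       # Bernoulli occurrence
                total += rng.triangular(low, high, mode)  # consequence if it occurs
        totals.append(total)
    return sorted(totals)

# Hypothetical register: (probability, (low, mode, high)), consequences in $M.
register = [(0.30, (1, 4, 10)), (0.10, (5, 20, 60)), (0.70, (0.5, 2, 5))]
totals = simulate_total_risk(register)
p80 = totals[int(0.80 * len(totals))]
print(f"Total cost-risk P80: ${p80:.2f}M")
```

On a mega-project the register would hold a thousand-plus entries rather than three, but the mechanics of summing the sampled products are the same.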

With the understanding that risk is composed of two probability distributions, one can see that describing risk magnitude in the “project management way,” by a single value (the P80 of cost) doesn’t make any sense at all.

The usual way to show a risk distribution, for either an individual risk or the total risk, is with a Pareto graph, which combines a probability density function (pdf) and a cumulative distribution function (cdf). These are also known as a histogram with an S-curve.

Figure 1. A Pareto Graph

2. What Are the Top 10 Risks?

It is common for the project director to request the top 10 risks in monthly risk reports for both cost risk and schedule (delay) risk. These are usually ranked in descending order of P80.

What is P80? It is the 80th percentile of the distribution — 80% of the data points lie to the left of it and 20% to the right.

The interpretation of this is that one can be 80% sure that the cost or delay will be at that value or less and, conversely, that one can be 20% sure that the cost/delay will be greater.

Some companies use the P90, which suggests they are more risk averse. Some use the P75, which is the upper quartile. Some use the P68.2, which corresponds to one standard deviation — the statistical metric for uncertainty. And some companies use the P50, which is the same as tossing a coin.
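As a quick illustration (using an invented triangular cost distribution), these percentiles can all be read off a sorted set of simulated outcomes:

```python
import random

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

rng = random.Random(0)
# Simulated cost outcomes ($M); the triangular distribution is illustrative.
samples = [rng.triangular(10, 100, 30) for _ in range(100_000)]

for p in (50, 68.2, 75, 80, 90):
    print(f"P{p}: {percentile(samples, p):.1f}")
```

The higher the chosen percentile, the more conservative the reported number; for a right-skewed distribution like this one, the P90 sits well above the P50.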

It is not possible to use Pareto graphs to identify the top risks. This is best done using one or more of the following graph types:

  1. Box and whisker graph
  2. Tornado diagram
  3. Density strip

All three methods work well in visually presenting the risks in order of magnitude, although the tornado chart is rather a “black box” method that may give different results from the other two graphs.

Figure 2 Box & Whisker Graph

Figure 3 Tornado Diagram

Figure 4 Impact Density Strips

It is important to understand that the P80 value does not tell one which is the biggest risk; the P80 is a single point on the cdf that simply means one can be 80% sure the risk will cost $X or less or, conversely, 20% sure that it will cost more than $X!

Do you get the message there about uncertainty?

To truly explain this important point, I have plotted 10 risks, all with approximately the same P80 = 54.2, in the iso-contour graph below. Each of the risks has a different consequence and different probability.

Figure 5 Iso-Contour Chart of 10 Risks With P80=54.2

Using the box & whisker plot and impact density strip, it should be immediately apparent, even to the untrained eye, that the risks are very different in terms of uncertainty and consequence. The challenge is to determine which is the biggest.

Figure 6 Density Strip of the 10 Risks



Figure 7 Box & Whisker Plot of the 10 Risks

We can see that risk 5 is actually quite certain, whereas risk 2 is very uncertain, and yet they both have the same P80. Here we need to understand how to deal with a risk and its certainty.

It should now be clear that ranking and prioritizing risks on the basis of P80 alone is neither correct nor particularly meaningful, as all evidence of the probability distribution and impact distribution is missing. The three graphical solutions — box plot, density strip and tornado diagram — make it easier for managers to prioritize the risks visually by relating directly to both consequence and uncertainty.
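To illustrate the point numerically, here is a sketch of two hypothetical risks whose parameters have been tuned so that their P80s nearly coincide while their uncertainties differ sharply:

```python
import random
import statistics

def simulate_risk(p_occur, low, mode, high, n=100_000, seed=7):
    """Sample one risk: Bernoulli occurrence times a triangular consequence."""
    rng = random.Random(seed)
    return [rng.triangular(low, high, mode) if rng.random() < p_occur else 0.0
            for _ in range(n)]

def p80(samples):
    ordered = sorted(samples)
    return ordered[int(0.80 * len(ordered))]

# Hypothetical risks: A is near-certain and narrow; B is unlikely but wide.
risk_a = simulate_risk(0.95, 45, 55, 65)
risk_b = simulate_risk(0.30, 0, 100, 102.7)

print(f"Risk A: P80={p80(risk_a):.1f}, sd={statistics.stdev(risk_a):.1f}")
print(f"Risk B: P80={p80(risk_b):.1f}, sd={statistics.stdev(risk_b):.1f}")
```

Both risks report a P80 of about 58.5, yet B’s standard deviation is well over twice A’s — exactly the spread that a single P80 figure hides.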

3. What Is the Significance of the Total P80 Cost?

Almost invariably, the first number that appears in the monthly risk report will be the P80 total for all cost risks. You might wonder why the P80 instead of the P90, the P50 or the standard deviation (P68.2).

To project directors, the P80 is a magic number that can be shared with colleagues, the directors and the client. Why the P80 became the popular percentile is unknown, but there is obviously a relationship between risk appetite and the percentile chosen — the more risk averse the organization, the higher the preferred P value.

— Contingency as a percentage of baseline cost

The project planning process will involve detailed cost estimates by quantity surveyors and cost engineers. These estimates will become the baseline cost of the project, covering materials, labor and inflation. The risk manager will endeavor to get the cost team to do a risk review and build a range of uncertainties around the costs. During the design stage, this will usually be a +/-25% ballpark figure, with the range narrowing as design and time progress.

The formula used for determining cost based contingency is usually:

P80 of cost estimate – base cost = contingency

Often, the cost team includes project risks in its calculations based on personal experience; these additions are usually undocumented and inflate the base cost. You do not want this to happen.

The planning team will, at the outset, establish some percentage of the total cost as a contingency. On a recent mega-project valued at $2.3 billion, the contingency was 7% of the total forecast cost. How this contingency was determined was undocumented, but it was presumably based on some experiential rule of thumb of the planning team. Curiously, this figure was shown in the management reports as a P80, presumably in an endeavor to give credibility to the contingency figure.
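The contingency formula above can be sketched as follows, with the cost uncertainty modeled as a purely illustrative triangular factor around the base cost:

```python
import random

def cost_contingency(base_cost, uncertainty_pct, n=100_000, seed=3):
    """Contingency = P80 of the risked cost estimate minus the base cost.

    The estimate is modeled as base_cost times a triangular uncertainty
    factor (e.g. +/-25% at design stage); purely illustrative.
    """
    rng = random.Random(seed)
    u = uncertainty_pct / 100
    outcomes = sorted(base_cost * rng.triangular(1 - u, 1 + u, 1.0)
                      for _ in range(n))
    return outcomes[int(0.80 * n)] - base_cost

print(f"Contingency on a $2,300M base at +/-25%: ${cost_contingency(2300, 25):.0f}M")
```

On a $2,300M base with a +/-25% design-stage range, this yields a contingency of roughly $210M, i.e. about 9% of the base — a figure that, unlike a rule-of-thumb percentage, is documented and reproducible.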

— Contingency as a function of risk assessment

The risk management process is a journey over the duration of the project. It starts at the design phase, progresses through manufacturing, then on to construction and finally to commissioning. Although these are broadly distinct phases, there will be many overlapping time periods.

The time of greatest risk will be during the design phase, when everything is pretty much unknown to all the project team. The uncertainties will be legion, from planning permission to technology, contracts to quality control, civil engineering works to change management.

The risk should appear as a series of waves, growing rapidly during the design phase and then decreasing until approaching zero as the problems are solved. After all, you wouldn’t begin a project with huge quantities of unresolved risk.

The graph below gives an idea of the risk profile over the course of the project:


Figure 8 Risk Over Time

As each phase progresses, the risk will ebb and flow, progressively decreasing as the project concludes successfully.

The risk total for the month will have meaning only in the context of the previous month’s risk total, the phase of the project and the forecast for the future risk over the course of the project.


Figure 9 A Box & Whisker Plot of the First 10 Monthly Total Risk Values

It can be seen from Figure 9 that the risk progressively increases until month 9, after which it appears to start declining. Risk will follow the phases described in Figure 8 and can be graphed for each individual phase or as a global overview.

It should be apparent that the P80 doesn’t help the project director understand the current or future risk on the project, the nature of the uncertainty or the risk over time.

A simple enhancement in Excel combining Figures 8 and 9 is given in Figure 10 so that deviations from forecast are clearly visible and comparable.


Figure 10 Current Monthly Risk Total Vs. Forecast P50 & P90.

The range of uncertainty in the current situation and the forecast are clearly displayed. Alternative measures of uncertainty can be used — e.g., mean +/- 1 standard deviation.

In Figure 10, there is a noticeable discrepancy between the current total risk and the forecast. It is essential to understand and report on the source — for example, possibilities such as these:

  1. Fewer risks have been identified than expected
  2. The quantification of risks is too optimistic, i.e., lower cost
  3. The handling plans are assessed as more effective
  4. The forecast risk is higher than what is actually being experienced during the design stage
  5. The design phase is running behind schedule
  6. Improved estimation skills are required, so a calibration training course needs to be put in place

It is important for the project director to understand exactly what is being measured in this concept of total risk.

Useful reference: How to Manage Project Opportunity and Risk: Why Uncertainty Management Can be a Much Better Approach Than Risk Management, by Stephen Ward and Chris Chapman.

The Right Way to Enumerate Risks

In my experience, there are a number of traps that organizations fall into when they are identifying the risks they face. The traps make it very difficult to manage the risks.

#1 – The Broad Statement

Some organizations fall into the trap of capturing “risks” that are broad statements as opposed to events or incidents. Examples include:

• Reputation damage
• Compliance failure
• Fraud
• Environmental damage

These terms tell us nothing and cannot be managed – even at a strategic level. Knowing that you might face, say, reputation damage doesn’t help you understand what might hurt your reputation or how you prevent those incidents from happening.

#2 – Causes as Risk

The most common issue I see with risk registers is that many organizations fall into the trap of capturing “risks” that are actually causes as opposed to events/incidents.

Wording that indicates a cause as opposed to a risk includes:

• Lack of … (trained staff; funding; policy direction; maintenance; planning; communication).

• Ineffective … (staff training; internal audit; policy implementation; contract management; communication).

• Insufficient … (time allocated for planning; resources applied).

• Inefficient … (use of resources; procedures).

• Inadequate … (training; procedures).

• Failure to … (disclose conflicts; follow procedures; understand requirements).

• Poor … (project management; inventory management; procurement practices).

• Excessive … (reporting requirements; administration; oversight).

• Inaccurate … (records; recording of outcomes).

These “risks” also tell us very little and, once again, cannot be managed. Knowing that you might face a lack of training, for instance, doesn’t tell you what incidents might occur as a result or help you prevent them.

#3 – Consequences as Risk

Another trap that organizations fall into when identifying risk is capturing “risks” that are actually consequences as opposed to events or incidents. Examples include:

• Project does not meet schedule

• Department does not meet its stated objectives

• Overspending

Once again, these cannot be managed. Having a project miss its schedule is the result of a series of problems, but understanding the potential result doesn’t help you prevent it.

So, if these are the traps that organizations fall into, then what should our list of risks look like? The answer is simple – they need to be events.

I look at it this way – when something goes wrong, like a plane crash, a train derailment, a food poisoning outbreak or a major fraud, it is always an event. After the event, there is analysis to determine what happened, why it happened, what could have stopped it from happening and what can be done to try to keep it from happening in the future. Risk management is no different – we are just trying to anticipate and stop the incident before it happens.

The table below shows the similarities between risk management and post-event analysis:


To that end, risk analysis can be viewed as post-event analysis conducted before the event occurs.

The rule of thumb I use is this: If a post-event analysis could not be conducted on a risk in your register were it to happen, then it is not a risk!

If you apply this approach to your list of risk events, you will:

• Reduce the number of risks in your risk register considerably; and (more importantly)

• Make it a lot easier to manage those risks.

Try it with your risk register and see what results you get.

A Risk Is a Risk

Commonly, people talk of different types of risk: strategic risk, operational risk, security risk, safety risk, project risk, etc. Segregating these risks and managing them separately can actually diminish your risk-management efforts.

What you need to understand about risk and risk management is that a risk is a risk is a risk — the only thing that differs is the context within which you manage that risk.

All risks are events, and each has a range of consequences that need to be identified and analyzed to gain a full understanding. For example:

Suppose a group identifying hazard risks works in isolation from the risk-management team (a common occurrence). Such a group tends to look at possible consequences in one dimension only: the harm that may be caused. Decisions on how to handle the risk will be made based on this assessment. What hasn’t been done, however, is to assess the consequence against all of the organizational impact areas in your consequence matrix. As a result, the assessment of that risk may not be correct; for instance, there may be significant consequences in terms of compliance that don’t show up as an issue in terms of safety.

If you only look at risk in one dimension, you may make a decision that creates a downstream risk that is worse than the event you’re trying to prevent. For instance, you may mitigate a safety-related risk but create an even greater security risk.

The moral of the story: Managing risk in silos will diminish risk management within your organization.

In about 80% of cases, you can’t do anything about the consequences of the event; what you are trying to do is stop the event from happening in the first place.

OCR Nails Hospice For $50K In First HIPAA Breach Settlement Involving Small Data Breach

Properly encrypt and protect electronic protected health information (ePHI) on laptops and in other media!

That’s the clear message of the Department of Health and Human Services (HHS) Office of Civil Rights (OCR) in its announcement of its first settlement under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security Rule involving a breach of ePHI of fewer than 500 individuals by a HIPAA-covered entity, Hospice of North Idaho (HONI).

The settlement shows that the Office of Civil Rights stands ready to penalize healthcare providers, health plans, healthcare clearinghouses and their business associates (covered entities) when their failure to properly secure and protect ePHI on laptops or in other systems results in a breach, even when the breach affects fewer than 500 individuals.

HIPAA Security & Breach Notification For ePHI
Under the originally enacted requirements of HIPAA, covered entities and their business associates are required to restrict the use, access and disclosure of protected health information and to establish and administer various other policies and safeguards in relation to protected health information. Additionally, the Security Rule requires specific encryption and other safeguards when covered entities collect, create, use, access, retain or disclose ePHI.

The Health Information Technology for Economic and Clinical Health (HITECH) Act amended HIPAA to, among other things, tighten certain HIPAA requirements, extend its provisions to apply directly to business associates as well as covered entities, and impose specific breach notification requirements. The HITECH Act Breach Notification Rule requires covered entities to report an impermissible use or disclosure of protected health information, or a “breach,” affecting 500 individuals or more (Large Breach) to the Secretary of HHS and the media within 60 days after the discovery of the breach. Smaller breaches affecting fewer than 500 individuals (Small Breach) must be reported to the Secretary on an annual basis.

Since the Breach Notification Rule took effect, the Office of Civil Rights’ announced policy has been to investigate all Large Breaches, and such investigations have resulted in settlements or other corrective action in relation to various Large Breaches. Until now, however, the Office of Civil Rights has not made public any resolution agreements requiring settlement payments involving Small Breaches.

Hospice Of North Idaho Settlement
On January 2, 2013, the Office of Civil Rights announced that Hospice of North Idaho will pay the Office of Civil Rights $50,000 to settle potential HIPAA violations that occurred in connection with the theft of an unencrypted laptop computer containing ePHI. The Hospice of North Idaho settlement is the first settlement involving a breach of ePHI affecting fewer than 500 individuals. Read the full HONI Resolution Agreement here.

The Office of Civil Rights opened an investigation after Hospice of North Idaho reported to the Department of Health and Human Services that an unencrypted laptop computer containing ePHI of 441 patients had been stolen in June 2010. Hospice of North Idaho team members regularly use laptops containing ePHI in their field work.

Over the course of the investigation, the Office of Civil Rights discovered that Hospice of North Idaho had not conducted a risk analysis to safeguard ePHI, nor did it have in place policies or procedures to address mobile device security, as required by the HIPAA Security Rule. Since the June 2010 theft, Hospice of North Idaho has taken extensive additional steps to improve its HIPAA Privacy and Security compliance program.

Enforcement Actions Highlight Growing HIPAA Exposures For Covered Entities
While the Hospice of North Idaho settlement marks the first settlement on a small breach, this is not the first time the Office of Civil Rights has sought sanctions against a covered entity for data breaches involving the loss or theft of unencrypted data on a laptop, storage device or other computer device. In fact, the Office of Civil Rights’ first resolution agreement — reached before the enactment of the HIPAA Breach Notification Rules — stemmed from such a breach (see Providence To Pay $100,000 & Implement Other Safeguards).

Breaches resulting from the loss or theft of unencrypted ePHI on mobile or other computer devices or systems have been a common basis of investigation and sanctions since that time, particularly since the Breach Notification rules took effect. See, e.g., OCR Hits Alaska Medicaid For $1.7M+ For HIPAA Security Breach. Coupled with statements by the Office of Civil Rights about its intolerance of such failures, the Hospice of North Idaho and other settlements provide a strong warning to covered entities to properly encrypt ePHI on mobile and other devices.

Furthermore, the Hospice of North Idaho settlement adds to the growing evidence that health care providers, health plans, health care clearinghouses and their business associates need to carefully and appropriately manage their HIPAA encryption and other Privacy and Security responsibilities. See OCR Audit Program Kickoff Further Heats HIPAA Privacy Risks; $1.5 Million HIPAA Settlement Reached To Resolve 1st OCR Enforcement Action Prompted By HITECH Act Breach Report; and HIPAA Heats Up: HITECH Act Changes Take Effect & OCR Begins Posting Names, Other Details Of Unsecured PHI Breach Reports On Website. Covered entities are urged to heed these warnings by strengthening their HIPAA compliance and adopting other suitable safeguards to minimize HIPAA exposures.

Office of Civil Rights Director Leon Rodriguez, in OCR’s announcement of the Hospice of North Idaho settlement, reiterated the Office of Civil Rights’ expectation that covered entities will properly encrypt ePHI on mobile or other devices. “This action sends a strong message to the health care industry that, regardless of size, covered entities must take action and will be held accountable for safeguarding their patients’ health information,” said Rodriguez. “Encryption is an easy method for making lost information unusable, unreadable and undecipherable.”

In the face of rising enforcement and fines, the Office of Civil Rights’ initiation of HIPAA audits and other recent developments, covered entities and their business associates should tighten privacy policies, breach and other monitoring, training and other practices to reduce potential HIPAA exposures in light of recently tightened requirements and new enforcement risks.

In response to these expanding exposures, all covered entities and their business associates should review critically and carefully the adequacy of their current HIPAA Privacy and Security compliance policies, monitoring, training, breach notification and other practices taking into consideration the Office of Civil Rights’ investigation and enforcement actions, emerging litigation and other enforcement data, their own and reports of other security and privacy breaches and near misses, and other developments to determine if additional steps are necessary or advisable.

New Office Of Civil Rights HIPAA Mobile Device Educational Tool
While the Office of Civil Rights’ enforcement of HIPAA has significantly increased, compliance and enforcement of the encryption and other Security Rule requirements of HIPAA are a special focus of the Office of Civil Rights.

To further promote compliance with the Breach Notification Rule as it relates to ePHI on mobile devices, the Office of Civil Rights and the HHS Office of the National Coordinator for Health Information Technology (ONC) recently kicked off a new educational initiative, Mobile Devices: Know the RISKS. Take the STEPS. PROTECT and SECURE Health Information. The program offers health care providers and organizations practical tips on ways to protect their patients’ health information when using mobile devices such as laptops, tablets, and smartphones. For more information, see here.

For more information on HIPAA compliance and risk management tips, see here.