
How Tech Created a New Industrial Model

With a connected device for every acre of inhabitable land, we are starting to remake design, manufacturing, sales. Really, everything.

With little fanfare, something amazing happened: Wherever you go, you are close to an unimaginable amount of computing power. Tech writers use the line “this changes everything” too much, so let’s just say that it’s hard to say what this won’t change.

It happened fast. According to Cisco Systems, in 2016 there were 16.3 billion connections to the internet around the globe. That number, a near doubling in just four years, works out to 650 connections for every square mile of Earth’s inhabitable land, or roughly one every acre, everywhere. Cisco figures the connections will grow another 60% by 2020.
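
As a rough back-of-the-envelope check of those figures (the ~25 million square miles of inhabitable land below is an assumption implied by the article's own numbers, not a Cisco statistic):

```python
# Back-of-the-envelope check of the connection-density figures above.
# The inhabitable land area is an assumption implied by the article's numbers.
connections_2016 = 16.3e9          # global internet connections, per Cisco
inhabitable_sq_miles = 25e6        # assumed inhabitable land area
acres_per_sq_mile = 640

per_sq_mile = connections_2016 / inhabitable_sq_miles
per_acre = per_sq_mile / acres_per_sq_mile

print(f"{per_sq_mile:.0f} connections per square mile")   # ~650
print(f"{per_acre:.1f} connections per acre")             # ~1.0

# Projected 60% growth by 2020:
print(f"{connections_2016 * 1.6 / 1e9:.1f} billion connections by 2020")  # ~26
```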

Instead of touching a relatively simple standalone computer, a connected smartphone, laptop, car or sensor in some way touches a big cloud computing system. These include Amazon Web Services, Microsoft Azure and my employer, Google (which I joined from the New York Times earlier this year to write about cloud computing).

Over the decade since they started coming online, these big public clouds have moved from selling storage, network and computing at commodity prices to also offering higher-value applications. They host artificial intelligence software for companies that could never build their own and enable large-scale software development and management systems, such as Docker and Kubernetes. From anywhere, it’s also possible to reach and maintain the software on millions of devices at once.

For consumers, the new model isn’t too visible. They see an app update or a real-time map that shows traffic congestion based on reports from other phones. They might see a change in the way a thermostat heats a house, or a new layout on an auto dashboard. The new model doesn’t upend life.

For companies, though, there is an entirely new information loop, gathering and analyzing data and deploying its learning at increasing scale and sophistication.

Sometimes the information flows in one direction, from a sensor in the Internet of Things. More often, there is an interactive exchange: Connected devices at the edge of the system send information upstream, where it is merged in clouds with more data and analyzed. The results may be used for over-the-air software upgrades that substantially change the edge device. The process repeats, with businesses adjusting based on insights.
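
The loop described above can be sketched in a few lines. This is a purely illustrative sketch of the pattern; the device class, the "model" (a simple running average) and the update payload are hypothetical placeholders, not any particular vendor's API:

```python
# Illustrative sketch of the edge-to-cloud feedback loop described above.
# Everything here is a hypothetical placeholder, not a real cloud API.
import random

class Device:
    """A connected edge device with a tunable threshold parameter."""
    def __init__(self):
        self.threshold = 50.0
    def read_sensor(self):
        return random.gauss(60, 10)           # telemetry sent upstream
    def apply_update(self, new_threshold):    # over-the-air update
        self.threshold = new_threshold

def cloud_analyze(history):
    """Cloud-side analysis: a running average standing in for real ML."""
    return sum(history) / len(history)

devices = [Device() for _ in range(5)]
history = []
for cycle in range(3):
    history += [d.read_sensor() for d in devices]   # edge -> cloud
    new_threshold = cloud_analyze(history)          # merge and analyze in the cloud
    for d in devices:
        d.apply_update(new_threshold)               # cloud -> edge update
    print(f"cycle {cycle}: threshold updated to {new_threshold:.1f}")
    # ...the business adjusts based on the new insights, and the loop repeats
```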

See also: ‘Core in the Cloud’ Reaches Tipping Point  

This cloud-based loop amounts to a new industrial model, according to Andrew McAfee, a professor at M.I.T. and, with Erik Brynjolfsson, the coauthor of “Machine, Platform, Crowd,” a new book on the rise of artificial intelligence. AI is an increasingly important part of the analysis. Seeing the dynamic as simply more computers in the world, McAfee says, is making the same kind of mistake that industrialists made with the first electric motors.

“They thought an electric engine was more efficient but basically like a steam engine,” he says. “Then they put smaller engines around and created conveyor belts, overhead cranes — they rethought what a factory was about, what the new routines were. Eventually, it didn’t matter what other strengths you had, you couldn’t compete if you didn’t figure that out.”

The new model is already changing how new companies operate. Startups like Snap, Spotify or Uber create business models that assume high levels of connectivity, data ingestion and analysis — a combination of tools at hand from a single source, rather than discrete functions. They assume their product will change rapidly in look, feel and function, based on new data.

The same dynamic is happening in industrial businesses that previously didn’t need lots of software.

Take Carbon, a Redwood City, CA maker of industrial 3D printers. More than 100 of its cloud-connected products are with customers, making resin-based items for sneakers, helmets and cloud computing parts, among other things.

Rather than sell its machines outright, Carbon offers them by subscription. That way, it can observe what all of its machines are doing under different uses, derive conclusions from all of them on a continuous basis and upgrade the printers with monthly software downloads. A screen in the company’s front lobby shows the total consumption of resins, collected on AWS, which forms the basis for Carbon’s collective learning.

“The same way Google gets information to make searches better, we get millions of data points a day from what our machines are doing,” says Joe DeSimone, Carbon’s founder and CEO. “We can see what one industry does with the machine and share that with another.”

One recent improvement involved changing the mix of oxygen in a Carbon printer’s manufacturing chamber, which improved drying time by 20%. Building sneakers for Adidas, Carbon was able to design and manufacture 50 prototype shoes in less time than it had previously taken to produce half a dozen test models. It now manufactures novel designs that were previously only theoretical.

The cloud-based business dynamic raises a number of novel questions. If using a product is now also a form of programming a producer’s system, should a company’s avid data contributions be rewarded?

For Wall Street, which is the more interesting number: the revenue from sales of a product, or how much data is the company deriving from the product a month later?

Which matters more to a company, a data point about someone’s location, or its context with things like time and surroundings? Which is better: more data everywhere, or high-quality and reliable information on just a few things?

Moreover, products are now designed to create not just a type of experience but a type of data-gathering interaction. A Tesla’s door handles emerge as you approach the car carrying a key. An iPhone or a Pixel phone comes out of its box fully charged. Google’s search page is a box awaiting your query. In every case, the object is yearning for immediate interaction, welcoming its owner so it can begin to gather data and personalize itself. “Design for interaction” may become a new specialization.

 The cloud-based industrial model puts information-seeking responsive software closer to the center of general business processes. In this regard, the tradition of creating workflows is likely to change again.

See also: Strategist’s Guide to Artificial Intelligence  

A traditional organizational chart resembled a factory, assembling tasks into higher functions. Twenty-five years ago, client-server networks enabled easier information sharing, eliminating layers of middle management and encouraging open-plan offices. As naming data domains and rapidly interacting with new insights move to the center of corporate life, new management theories will doubtless arise as well.

“Clouds already interpenetrate everything,” says Tim O’Reilly, a noted technology publisher and author. “We’ll take for granted computation all around us, and our things talking with us. There is a coming generation of the workforce that is going to learn how we apply it.”

How to Think About the Rise of the Machines

The first machine age, the Industrial Revolution, saw the automation of physical work. We live in the second machine age, where there is increasing augmentation and automation of manual and cognitive work.

This second machine age has seen the rise of artificial intelligence (AI), or “intelligence” that is not the result of human cogitation. It is now ubiquitous in many commercial products, from search engines to virtual assistants. AI is the result of exponential growth in computing power, memory capacity, cloud computing, distributed and parallel processing, open-source solutions and global connectivity of both people and machines. The massive volume and speed at which structured and unstructured data (e.g., text, audio, video, sensor readings) is being generated have made it a necessity to process that data quickly and to generate meaningful, actionable insights from it.

Demystifying Artificial Intelligence

The term “artificial intelligence” is often misused. To avoid any confusion over what AI means, it’s worth clarifying its scope and definition.

  • AI and Machine Learning—Machine learning is just one area or sub-field of AI. It is the science and engineering of making machines “learn.” That said, intelligent machines need to do more than just learn—they need to plan, act, understand and reason.
  • Machine Learning and Deep Learning—”Machine learning” and “deep learning” are often used interchangeably. Deep learning is actually a type of machine learning that uses multi-layered neural networks to learn. There are other approaches to machine learning, including Bayesian learning, evolutionary learning and symbolic learning.
  • AI and Cognitive Computing—Cognitive computing does not have a clear definition. It can be viewed as a subset of AI that focuses on simulating human thought process based on how the brain works. It is also viewed as a “category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition.” Cognitive computing is a subset of AI, not an independent area of study.
  • AI and Data Science—Data science refers to the interdisciplinary field that incorporates statistics, mathematics, computer science and business analysis to collect, organize and analyze large amounts of data to generate actionable insights. The types of data (e.g., text, audio, video) and the analytic techniques (e.g., decision trees, neural networks) that both data science and AI use are very similar.

Differences, if any, may be found in the purpose. Data science aims to generate actionable insights for businesses, irrespective of any claims about simulating human intelligence, while AI may also pursue the simulation of human intelligence itself.
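
The distinction between deep learning and other machine learning approaches can be made concrete with a toy example. The sketch below, which uses scikit-learn purely for illustration, trains a Bayesian learner and a small multi-layered neural network on the same synthetic data; both are "machine learning," but only the latter is the kind of model deep learning scales up:

```python
# Toy illustration: two machine learning approaches on the same problem.
# GaussianNB is a Bayesian learner; MLPClassifier is a small multi-layered
# neural network of the kind deep learning scales up.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("Bayesian learning (GaussianNB)", GaussianNB()),
    ("Neural network (MLPClassifier)",
     MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
]:
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```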

Self-Driving Cars

When the U.S. Defense Advanced Research Projects Agency (DARPA) ran its 2004 Grand Challenge for automated vehicles, no car was able to complete the 150-mile challenge. In fact, the most successful entrant covered only 7.32 miles. The next year, five vehicles completed the course. Now, every major car manufacturer plans to have a self-driving car on the road within five to 10 years, and the Google Car has clocked more than 1.3 million autonomous miles.

See Also: What You Must Know About Machine Learning

AI techniques—especially machine learning and image processing— help create a real-time view of what happens around an autonomous vehicle and help it learn and act from past experience. Amazingly, most of these technologies didn’t even exist 10 years ago.


Emerging risk identification through man-machine learning

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” —Pedro Domingos, author of The Master Algorithm

Emerging Risks & New Product Innovation

Identifying emerging risks (e.g., cyber, climate, nanotechnology), analyzing observable trends, determining if there is an appropriate insurance market for these risks and developing new coverage products in response historically have been creative human endeavors. However, collecting, organizing, cleansing, synthesizing and even generating insights from large volumes of structured and unstructured data are now typically machine learning tasks. In the medium term,  combining human and machine insights offers insurers complementary, value-generating capabilities.

Man-Machine Learning

Artificial general intelligence (AGI) that can perform any task a human can is still a long way off. In the meantime, combining human creativity with mechanical analysis and synthesis of large volumes of data—in other words, man-machine learning (MML)—can yield immediate results.

For example, in MML, the machine learning component sifts through daily news from a variety of sources to identify trends and potentially significant signals. The human-learning component provides reinforcement and feedback to the ML component, which then refines its sources and weights to offer broader and deeper content. Using this type of MML, risk experts can identify emerging risks and monitor their significance and growth. MML can further help insurers identify potential customers, understand key features, tailor offers and incorporate feedback to refine product introduction.
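
A minimal sketch of that human-in-the-loop weighting might look like the following; the sources, keywords and feedback values are invented for illustration and do not represent a production MML system:

```python
# Minimal man-machine learning (MML) sketch: the machine scores news sources
# for emerging-risk signals; a human expert's feedback adjusts source weights.
# Sources, keywords and feedback values are hypothetical.
risk_keywords = {"cyber", "nanotechnology", "climate"}
source_weights = {"news_feed_a": 1.0, "news_feed_b": 1.0, "blog_c": 1.0}

def machine_score(article_words, source):
    """ML component: crude keyword-overlap score, weighted by source."""
    overlap = len(risk_keywords & set(article_words))
    return overlap * source_weights[source]

def human_feedback(source, useful):
    """Human component: reinforce or dampen a source after expert review."""
    source_weights[source] *= 1.2 if useful else 0.8

# One iteration of the loop: score, review, adjust.
score = machine_score({"new", "cyber", "breach", "insurer"}, "news_feed_a")
print("signal score:", score)
human_feedback("news_feed_a", useful=True)   # expert confirms the signal was relevant
print("updated weight:", round(source_weights["news_feed_a"], 2))
```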

Computers That “See”

In 2009, Fei-Fei Li and other AI scientists at the Stanford AI Laboratory created ImageNet, a database of more than 15 million labeled digital images, and, the following year, launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The ILSVRC awards substantial prizes to the best object detection and object localization algorithms.

The competition has made major contributions to the development of “deep learning” systems, multilayered neural networks that can recognize human faces with more than 97% accuracy, as well as recognize arbitrary images and even moving videos. Deep learning systems can now process real-time video, interpret it and provide a natural language description.
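
For a small-scale flavor of what those multilayered networks do, the sketch below trains a modest neural network on the handwritten-digit images bundled with scikit-learn. Production deep learning systems are vastly larger, but the principle of learning to recognize images from labeled examples is the same:

```python
# Small-scale illustration of image recognition with a multi-layered network.
# The 8x8 digit images bundled with scikit-learn stand in for large datasets
# such as ImageNet; the principle (learn from labeled images) is the same.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                       # 1,797 labeled 8x8 images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"digit recognition accuracy: {net.score(X_test, y_test):.2%}")
```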

Artificial Intelligence: Implications for Insurers

AI’s initial impact relates primarily to improving efficiencies and automating existing customer-facing, underwriting and claims processes. Over time, its impact will be more profound; it will identify, assess and underwrite emerging risks and identify new revenue sources.

  • Improving Efficiencies—AI is already improving efficiencies in customer interaction and conversion ratios, reducing quote-to-bind and FNOL-to-claim resolution times and increasing speed to market for products. These efficiencies are the result of AI techniques speeding up decision-making (e.g., automating underwriting, auto-adjudicating claims, automating financial advice, etc.).
  • Improving Effectiveness—Because of the increasing sophistication of its decision-making capabilities, AI will soon improve the targeting of prospects and their conversion to customers, refine risk assessment and risk-based pricing, enhance claims adjustment and more. Over time, as AI systems learn from their interactions with the environment and with their human masters, they are likely to become more effective than humans at these tasks and to replace them. Advisers, underwriters, call center representatives and claims adjusters will likely be most at risk.
  • Improving Risk Selection and Assessment—AI’s most profound impact could well result from its ability to identify trends and emerging risks and assess risks for individuals, corporations and lines of business.

Its ability to help carriers develop new sources of revenue from risk- and non-risk-based information will also be significant.

See Also: How Machine Learning Changes the Game

Starting the Journey

Most organizations already have a big data and analytics or data science group. (We have addressed elsewhere how organizations can create and manage these groups.) The following are specific steps for incorporating AI techniques within a broader data science group:

  1. Start from business decisions—Catalogue the key strategic decisions that affect the business and the related metrics that need improvement (e.g., better customer targeting to increase conversion ratios, reducing claims processing time to improve satisfaction, etc.).
  2. Identify appropriate AI areas—Solving any particular business problem will very likely involve more than one AI area. Ensure that you map all appropriate AI areas (e.g., NLP, machine learning, image analytics) to the problem you want to address.
  3. Think big, start small—AI’s potential to influence decision making is huge, but companies will need to build the right data, techniques, skills and executive decision-making to exploit it. Have an evolutionary path toward more advanced capabilities. AI’s full power will become available when the AI platform continuously learns from both the environment and people (what we call the “dynamic insights platform”).
  4. Build training data sets—Create your own proprietary data sets for training your algorithms and measuring their accuracy. For example, create a proprietary database of “crash images” and benchmark the accuracy of your existing algorithms against it. You should consistently aim to improve the accuracy of the algorithms against comparable human decisions.
  5. Pilot with parallel runs—Build a pilot of your AI solution using existing vendor solutions or open-source tools. Conduct parallel runs of the AI solution alongside human decision makers, then compare and iteratively improve the performance and accuracy of the AI solution (see the sketch after this list).
  6. Scale and manage change—Once the AI solution has proven itself, scale it with the appropriate software/hardware architecture and institute a broad change management program to change the internal decision-making mindset.
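
As an illustration of steps 4 and 5, the sketch below benchmarks a hypothetical model's decisions against human decisions on the same cases; the "claims" data, the model choice and the human labels are invented purely for illustration:

```python
# Parallel-run sketch: compare a model's decisions with human decisions on the
# same cases. The claims data and the human labels are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical historical claims with a human decision label (0 = deny, 1 = pay).
X, human_decisions = make_classification(n_samples=1000, n_features=12, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, human_decisions, random_state=1)

ai_model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
ai_decisions = ai_model.predict(X_test)

# Step 5: run the AI in parallel and measure agreement with human decision makers.
print(f"agreement with human decisions: {accuracy_score(y_test, ai_decisions):.1%}")
```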

How Machine Learning Changes the Game

Insurance executives can be excused for having ignored the potential of machine learning until today. Truth be told, the idea almost seems like something out of a 1980s sci-fi movie: Computers learn from mankind’s mistakes and adapt to become smarter, more efficient and more predictable than their human creators.

But this is no Isaac Asimov yarn; machine learning is a reality. And many organizations around the world are already taking full advantage of their machines to create new business models, reduce risk, dramatically improve efficiency and drive new competitive advantages. The big question is why insurers have been so slow to start collaborating with the machines.

Smart machines

Essentially, machine learning refers to a set of algorithms that use historical data to predict outcomes. Most of us use machine learning processes every day. Spam filters, for example, use historical data to decide whether emails should be delivered or quarantined. Banks use machine learning algorithms to monitor for fraud or irregular activity on credit cards. Netflix uses machine learning to serve recommendations to users based on their viewing history.
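
The spam-filter example can be sketched in a few lines. The tiny training set below is invented purely for illustration; real filters learn from millions of labeled messages:

```python
# Toy spam filter: learn from labeled historical emails, then predict new ones.
# The example messages and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "cheap loans click here",
    "meeting agenda for tuesday", "quarterly claims report attached",
]
labels = ["spam", "spam", "ham", "ham"]   # historical outcomes

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(emails, labels)

print(filter_model.predict(["free prize inside"]))       # likely 'spam'
print(filter_model.predict(["agenda for the meeting"]))   # likely 'ham'
```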

In fact, organizations and academics have been working away at defining, designing and improving machine learning models and approaches for decades. The concept was originally floated back in the 1950s, but – with no access to digitized historical data and few commercial applications immediately evident – the development of machine learning was largely left to academics and technology geeks. For decades, few business leaders gave the idea much thought.

Machine learning brings with it a whole new vocabulary: terms such as “feature engineering,” “dimensionality reduction” and “supervised and unsupervised learning,” to name just a few. As with all new movements, an organization must be able to bridge the two worlds of data science and business to generate value.

Driven by data

Much has changed. Today, machine learning has become a hot topic in many business sectors, fueled, in large part, by the increasing availability of data and low-cost, scalable, cloud computing. For the past decade or so, businesses and organizations have been feverishly digitizing their data and records – building mountains of historical data on customers, transactions, products and channels. And now they are setting their minds toward putting it to good use.

The emergence of big data has also done much to propel machine learning up the business agenda. Indeed, the availability of masses of unstructured data – everything from weather readings through to social media posts – has not only provided new data for organizations to comb through, it has also allowed businesses to start asking different questions from different data sets to achieve differentiated insights.

The continuing drive for operational efficiency and improved cost management has also catalyzed renewed interest in machine learning. Organizations of all stripes are looking for opportunities to be more productive, more innovative and more efficient than their competitors. Many now wonder whether machine learning can do for information-intensive industries what automation did for manual-intensive ones.


A new playing field

For the insurance sector, we see machine learning as a game-changer. The reality is that most insurance organizations today are focused on three main objectives: improving compliance, improving cost structures and improving competitiveness. It is not difficult to envision how machine learning will form (at least part of) the answer to all three.

Improving compliance: Today’s machine learning algorithms, techniques and technologies can be used on much more than just hard data like facts and figures. They can also be used to  analyze information in pictures, videos and voice conversations. Insurers could, for example, use machine learning algorithms to better monitor and understand interactions between customers and sales agents to improve their controls over the mis-selling of products.

Improving cost structures: With a significant portion of an insurer’s cost structure devoted to human resources, any shift toward automation should deliver significant cost savings. Our experience working with insurers suggests that – by using machines instead of humans – insurers could cut their claims processing time from a number of months to a matter of minutes. What is more, machine learning is often more accurate than humans, meaning that insurers could also reduce the number of denials that end up being appealed and ultimately paid out.

Improving competitiveness: While reduced cost structures and improved efficiency can certainly lead to competitive advantage, there are many other ways that machine learning can give insurers the competitive edge. Many insurance customers, for example, may be willing to pay a premium for a product that guarantees frictionless claim payout without the hassle of having to make a call to the claims team. Others may find that they can enhance customer loyalty by simplifying re-enrollment processes and client on-boarding processes to just a handful of questions.

Overcoming cultural differences

It is surprising, therefore, that insurers are only now recognizing the value of machine learning. Insurance organizations are founded on data, and most have already digitized existing records. Insurance is also a resource-intensive business; legions of claims processors, adjustors and assessors are required to pore over the thousands – sometimes millions – of claims submitted in the course of a year. One would therefore expect the insurance sector to be leading the charge toward machine learning. But it is not.

One of the biggest reasons insurers have been slow to adopt machine learning clearly comes down to culture. Generally speaking, the insurance sector is not widely viewed as an “early adopter” of technologies and approaches, preferring instead to wait until technologies have matured through adoption in other sectors. However, with everyone from governments to bankers now using machine learning algorithms, this barrier is quickly falling away.

The risk-averse culture of most insurers also dampens the organization’s willingness to experiment and – if necessary – fail in its quest to uncover new approaches. The challenge is that machine learning is all about experimentation and learning from failure; sometimes organizations need to test dozens of algorithms before they find the most suitable one for their purposes. Until “controlled failure” is no longer seen as a career-limiting move, insurance organizations will shy away from testing new approaches.

Insurance organizations also suffer from a cultural challenge common in information-intensive sectors: data hoarding. Indeed, until recently, common wisdom within the business world suggested that those who held the information also held the power. Today, many organizations are starting to realize that it is actually those who share the information who have the most power. As a result, many organizations are now keenly focused on moving toward a “data-driven” culture that rewards information sharing and collaboration and discourages hoarding.

Starting small and growing up

The first thing insurers should realize is that this is not an arms race. The winners will probably not be the organizations with the most data, nor will they likely be the ones that spent the most money on technology. Rather, they will be the ones that took a measured and scientific approach to building their machine learning capabilities and capacities and – over time – found new ways to incorporate machine learning into ever-more aspects of their business.

Insurers may want to embrace the idea of starting small. Our experience and research suggest that – given the cultural and risk challenges facing the insurance sector – insurers will want to start by developing a “proof of concept” model that can safely be tested and adapted in a risk-free environment. Not only will this allow the organization time to improve and test its algorithms, it will also help the designers to better understand exactly what data is required to generate the desired outcome.

More importantly, perhaps, starting with pilots and “proof of concepts” will also provide management and staff with the time they need to get comfortable with the idea of sharing their work with machines. It will take executive-level support and sponsorship as well as keen focus on key change management requirements.

Take the next steps

Recognizing that machines excel at routine tasks and that algorithms learn over time, insurers will want to focus their early “proof of concept” efforts on those processes or assessments that are widely understood and add low value. The more decisions the machine makes and the more data it analyzes, the more prepared it will be to take on more complex tasks and decisions.

Only once the proof of concept has been thoroughly tested and potential applications are understood should business leaders start to think about developing the business case for industrialization (which, to succeed in the long term, must include appropriate frameworks for the governance, monitoring and management of the system).

While this may – on the surface – seem like just another IT implementation plan, the reality is that machine learning should be championed not by IT but rather by the business itself. It is the business that must decide how and where machines will deliver the most value, and it is the business that owns the data and processes that machines will take over. Ultimately, the business must also be the one that champions machine learning.

All hail, machines!

At KPMG, we have worked with a number of insurers to develop their “proof of concept” machine learning strategies over the past year, and we can say with absolute certainty that the Battle of Machines in the insurance sector has already started. The only other certainty is that those that remain on the sidelines will likely suffer the most as their competitors find new ways to harness machines to drive increasing levels of efficiency and value.

The bottom line is that the machines have arrived. Insurance executives should be welcoming them with open arms.

New Way to Spot Loss in Workers’ Comp

You’ve heard it before: “It’s not the tip of the iceberg that costs you so much; it’s what you can’t see. It’s what’s below the water level that costs you real money.” We hear that the total cost to a company of a workers’ comp loss is six to 10 times the direct value of that loss. But risk managers have neither the right tools to understand and measure the loss nor the right tools to improve productivity and capture the cash flow that comes from preventing that loss.

During my initial journey into lean sigma consulting, a seasoned Japanese colleague shared an important concept. While this principle was developed to improve the quality and efficiency of output in manufacturing, it has many other applications, including in improving safety and reducing workers’ comp costs. Understanding and applying the rule has improved the profitability of many companies.

Dr. Genichi Taguchi, a Japanese engineer, theorized (and ultimately demonstrated mathematically) that loss within any process or system grows quadratically, not linearly, as output moves away from the ideal customer specification or target value.

An example of Taguchi’s Loss Curve is shown below:

[Figure: Taguchi’s Loss Curve, a U-shaped curve in which loss rises as output deviates from the target value; the lower and upper tolerance limits are labeled LTL and UTL.]

Another way to look at it is this: Anything delivered just outside the target (at or beyond the tolerance limits labeled LTL and UTL in the diagram above) creates an opportunity for outsized financial improvement as we move back toward the center of the U-shaped curve. And the farther away from the target we are, the greater the opportunity.
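
Taguchi's loss function is usually written L(y) = k(y − T)², where T is the target value and k is a cost constant. A quick numeric sketch (the value of k and the deviations are invented for illustration) shows why small deviations compound so quickly:

```python
# Taguchi loss function: L(y) = k * (y - T)^2.
# The cost constant k and the example values are invented for illustration.
def taguchi_loss(y, target, k=4.0):
    return k * (y - target) ** 2

target = 46.0   # a nominal spec or target value
for deviation in (1, 2, 4, 8):
    print(f"deviation {deviation}: loss = {taguchi_loss(target + deviation, target):.0f}")
# Doubling the deviation quadruples the loss -- far worse than linear growth.
```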

I explain Taguchi’s principle using an example from a kaizen event that dramatically improved machine setup times within a CNC shop.

For years, our client assumed it took 46 minutes to set up and change over machinery. After all, for 10 years, it did take 46 minutes. But our kaizen team was hired to challenge this thinking.

If the CEO and his team were right, setups couldn’t be completed any faster. But if setup times could be improved, loss had been occurring beneath the water line, which meant the iceberg was growing and no one knew.

Machine setup time is loss because no value is produced during the setup process. And setup times can represent 35% of the total labor burden, so there’s a lot at stake. While employers can compute labor and overhead costs easily, when their assumptions about setup times are wrong, they’re losing big money. But they rarely know it, or how much.

Here’s our client’s story:

Our client used people and machinery to produce aircraft parts. Machines were not dedicated to product families or cycle times. In other words, the client could build a Mack Truck or Toyota Corolla on the same machinery. And because setup times were slow, the client built large batches of products. When defects struck, they struck in large quantities, and, financially, it was too late to find causes. The costs were already sunk.

Our client borrowed capital to purchase nine machines, leased the space to house them and paid for electricity, water and cutting fluids as well. Each machine had its affiliated tools and dies, and mechanics to service it. In other words, when you own nine machines, you need the gear, people and money required to operate and maintain nine machines. And all of this cost was based on 46-minute setups.

Think about that for a moment.

If the client didn’t need nine machines, it wouldn’t have had to spend all of that money, year after year! And a wrong assumption about setup times could be producing loss that never appeared on any income statement. What would show up would be the known labor, materials, machinery and overhead costs. What wouldn’t show up would be the spending that would have been unnecessary if the team could complete a setup in less than 46 minutes.

After videotaping, collaborating and measuring cycle times on the existing operations and processes, it was evident: The team had ideas that would challenge the 46-minute setups.

After some 5S housekeeping, the team produced a 23-minute setup. One more day of tweaking, and the team got it down to 16. By the last day, the team was consistently producing 10-minute results.

Now let’s talk about the impact.

In the improved state, the client could indeed produce parts faster. It also needed far less capital, insurance, labor, gear, electricity, fluids, tooling and floor space. And because our client’s customers would now get parts faster, the company would get paid faster.
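
To see why the setup-time reduction matters financially, consider a rough, purely hypothetical calculation; the number of setups per day and the labor rate below are assumptions for illustration, not the client's actual figures:

```python
# Rough illustration of the capacity freed by faster setups.
# Setups per day and the labor rate are hypothetical assumptions.
machines = 9
setups_per_machine_per_day = 6
old_setup_min, new_setup_min = 46, 10
labor_rate_per_hour = 60.0          # fully loaded cost, assumed

minutes_saved_per_day = machines * setups_per_machine_per_day * (old_setup_min - new_setup_min)
hours_saved_per_day = minutes_saved_per_day / 60
print(f"{hours_saved_per_day:.1f} machine-hours freed per day")               # ~32.4
print(f"~${hours_saved_per_day * labor_rate_per_hour:,.0f} per day in labor alone")
print(f"setup time reduced by {(1 - new_setup_min / old_setup_min):.0%}")     # ~78%
```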

While banks may not like these facts, clients and employees do. Employees can do their jobs more efficiently, and the company makes more money while borrowing less.

Here’s an explanation of the 5S tool the team used to make its setup times faster. This tool, when used properly, not only improves operating efficiency but also removes or reduces safety hazards involving tripping, standing, walking, reaching, handling, lifting and searching for lost items.

In addition, the kaizen event itself creates an opportunity for employees to improve their own job conditions and use their curiosity and creativity to solve production-related problems. The event also creates a more engaged employee, one less likely to file future work comp and employment-related claims.

The 5S Process consists of five steps.

  1. Sort the work area out.
  2. Straighten the work area out, putting everything in the right place.
  3. Clean the entire area, scrub floors, create aisle ways with yellow tape, wash walls, paint, etc.
  4. Create standardized, written work processes.
  5. Sustain the process.

Using tools like 5S, I continue to improve my thinking on identifying and managing workers’ comp risks. But during each kaizen event, I also gain perspective on why stakeholders rarely change their ways. What I’ve learned is this: Clients typically need one of two conditions met for good change to occur.

  1. They need to have something to motivate them––which often means facing a crisis.
  2. They need to physically see and experience things to believe them.

If you’re like me, you probably need proof, too. Here it is: A reduction in setup times from over two and a half hours to just over ten minutes.

What the Lean Assessment Does

The lean assessment helps find improvement opportunities. That’s because assessments study and measure cycle times, customer demand, and value-adding and non-value-adding activities. The assessment helps everyone, including the executive team, see how people are physically required to do their work and understand why they are required to do it the way they do.

In the week-long assessment process, we’re no longer studying just the costs of safety; we’re studying all of the potential causes that drive productivity and loss away from the nominal value. Safety is not necessarily why we are measuring outcomes. Safety is the beneficiary of learning how and why the company adds value, and precisely where it creates loss.

That is the power of good change. And good change comes from the power of lean.

“The best approach is to dig out and eliminate problems where they are assumed not to exist.” – Shigeo Shingo