Tag Archives: MIT

How Tech Created a New Industrial Model

With a connected device for every acre of inhabitable land, we are starting to remake design, manufacturing, sales. Really, everything.

With little fanfare, something amazing happened: Wherever you go, you are close to an unimaginable amount of computing power. Tech writers use the line “this changes everything” too much, so let’s just say that it’s hard to say what this won’t change.

It happened fast. According to Cisco Systems, in 2016 there were 16.3 billion connections to the internet around the globe. That number, a near doubling in just four years, works out to 650 connections for every square mile of Earth’s inhabitable land, or roughly one per acre, everywhere. Cisco figures connections will grow another 60% by 2020.
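
A quick back-of-the-envelope check of those figures (a sketch; the implied land area follows from Cisco's numbers above rather than being reported separately):

    # Rough arithmetic behind the figures above (illustrative only).
    connections_2016 = 16.3e9        # Cisco's 2016 count of internet connections
    per_square_mile = 650            # connections per square mile of inhabitable land
    acres_per_square_mile = 640

    implied_land_sq_mi = connections_2016 / per_square_mile         # ~25 million sq mi
    connections_per_acre = per_square_mile / acres_per_square_mile  # ~1 per acre
    connections_2020 = connections_2016 * 1.6                       # another 60% by 2020

    print(f"implied inhabitable land: {implied_land_sq_mi / 1e6:.1f} million sq mi")
    print(f"connections per acre:     {connections_per_acre:.2f}")
    print(f"projected 2020 total:     {connections_2020 / 1e9:.1f} billion")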

Touch a connected smartphone, laptop, car or sensor, and you are in some way touching not a relatively simple computer but a big cloud computing system. These include Amazon Web Services, Microsoft Azure and my employer, Google (which I joined from the New York Times earlier this year to write about cloud computing).

Over the decade since they started coming online, these big public clouds have moved from selling storage, network and computing at commodity prices to also offering higher-value applications. They host artificial intelligence software for companies that could never build their own and enable large-scale software development and management systems, such as Docker and Kubernetes. From anywhere, it’s also possible to reach and maintain the software on millions of devices at once.

For consumers, the new model isn’t too visible. They see an app update or a real-time map that shows traffic congestion based on reports from other phones. They might see a change in the way a thermostat heats a house, or a new layout on an auto dashboard. The new model doesn’t upend life.

For companies, though, there is an entirely new information loop: gathering and analyzing data, then deploying what is learned, at increasing scale and sophistication.

Sometimes the information flows in one direction, from a sensor in the Internet of Things. More often, there is an interactive exchange: Connected devices at the edge of the system send information upstream, where it is merged in clouds with more data and analyzed. The results may be used for over-the-air software upgrades that substantially change the edge device. The process repeats, with businesses adjusting based on insights.
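
As a rough sketch of that loop (the device fields and the toy "analysis" step are hypothetical placeholders, not any vendor's actual API), the cycle looks something like this:

    # Toy model of the edge-to-cloud feedback loop: devices report telemetry,
    # the cloud merges and analyzes the fleet's data, and the result is pushed
    # back down as an over-the-air settings update. All names are placeholders.
    from statistics import mean

    devices = [{"id": i, "reading": 20.0 + i, "setting": 1.0} for i in range(3)]

    for cycle in range(2):
        # 1. Edge devices send information upstream.
        telemetry = [d["reading"] for d in devices]

        # 2. The cloud merges it with the rest of the fleet's data and analyzes it.
        fleet_average = mean(telemetry)
        new_setting = round(fleet_average / 20.0, 2)  # stand-in for real analysis

        # 3. The insight comes back as an over-the-air update to every device.
        for d in devices:
            d["setting"] = new_setting

        # 4. The business reviews the result, and the process repeats.
        print(f"cycle {cycle}: fleet average {fleet_average:.1f} -> pushed setting {new_setting}")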

See also: ‘Core in the Cloud’ Reaches Tipping Point  

This cloud-based loop amounts to a new industrial model, according to Andrew McAfee, a professor at MIT and, with Erik Brynjolfsson, the coauthor of “Machine, Platform, Crowd,” a new book on the rise of artificial intelligence. AI is an increasingly important part of the analysis. Seeing the dynamic as simply more computers in the world, McAfee says, is making the same kind of mistake that industrialists made with the first electric motors.

“They thought an electric engine was more efficient but basically like a steam engine,” he says. “Then they put smaller engines around and created conveyor belts, overhead cranes — they rethought what a factory was about, what the new routines were. Eventually, it didn’t matter what other strengths you had, you couldn’t compete if you didn’t figure that out.”

The new model is already changing how new companies operate. Startups like Snap, Spotify or Uber create business models that assume high levels of connectivity, data ingestion and analysis — a combination of tools at hand from a single source, rather than discrete functions. They assume their product will change rapidly in look, feel and function, based on new data.

The same dynamic is happening in industrial businesses that previously didn’t need lots of software.

Take Carbon, a Redwood City, CA maker of industrial 3D printers. More than 100 of its cloud-connected products are with customers, making resin-based items for sneakers, helmets and cloud computing parts, among other things.

Rather than sell machines, Carbon offers them on a subscription basis. That way, it can observe what all of its machines are doing under different uses, derive conclusions from all of them on a continuous basis and upgrade the printers with monthly software downloads. A screen in the company’s front lobby shows total resin consumption, as collected on AWS, the basis for Carbon’s collective learning.

“The same way Google gets information to make searches better, we get millions of data points a day from what our machines are doing,” says Joe DeSimone, Carbon’s founder and CEO. “We can see what one industry does with the machine and share that with another.”

One recent improvement involved changing the mix of oxygen in a Carbon printer’s manufacturing chamber, which improved drying time by 20%. Building sneakers for Adidas, Carbon was able to design and manufacture 50 prototype shoes in less time than it used to take to produce half a dozen test models. It now manufactures designs that were previously only theoretical.

The cloud-based business dynamic raises a number of novel questions. If using a product is now also a form of programming a producer’s system, should a customer’s avid data contributions be rewarded?

For Wall Street, which is the more interesting number: the revenue from sales of a product, or the amount of data the company is deriving from the product a month later?

Which matters more to a company, a data point about someone’s location, or its context with things like time and surroundings? Which is better: more data everywhere, or high-quality and reliable information on just a few things?

Moreover, products are now designed to create not just a type of experience but a type of data-gathering interaction. A Tesla’s door handles emerge as you approach it carrying a key. An iPhone or a Pixel phone comes out of its box fully charged. Google’s search page is a box awaiting your query. In every case, the object is welcoming its owner to interact with it immediately, so it can begin to gather data and personalize itself. “Design for interaction” may become a new specialization.

The cloud-based industrial model puts information-seeking, responsive software closer to the center of general business processes. In this regard, traditional ways of creating workflows are likely to change again.

See also: Strategist’s Guide to Artificial Intelligence  

A traditional organizational chart resembled a factory, assembling tasks into higher functions. Twenty-five years ago, client-server networks enabled easier information sharing, eliminating layers of middle management and encouraging open-plan offices. As naming data domains and rapidly interacting with new insights move to the center of corporate life, new management theories will doubtless arise as well.

“Clouds already interpenetrate everything,” says Tim O’Reilly, a noted technology publisher and author. “We’ll take for granted computation all around us, and our things talking with us. There is a coming generation of the workforce that is going to learn how we apply it.”

Complexity Theory Offers Insights (Part 1)

In the first of this series of four segments, we will look at the current state of the risk markets and the insurance industry; the emerging peer-to-peer (P2P) segment of the risk markets; how blockchain technology is enabling a new taxonomy in the risk markets; and what changes may occur as a result of these new technologies and methods.

The purpose of this series grows out of the open source movement in the software industry. Key to the open source philosophy is the transparent and voluntary collaboration of all interested parties. While this work has been kept fairly close to the vest for the past few years, I have taken meetings with two Fortune 500 insurance companies’ strategy and venture teams, both of which asked for a proof of concept — as well as with a handful of other large international insurance companies and one of the big four accounting firms.

At the other end of the spectrum, I have also spoken with other founders of P2P insurance startups around the world, and I have participated in the communities surrounding blockchain technology. I feel that this handful of folks has already enjoyed early access to these concepts, and my motivation with this series is to achieve a more level playing field for all parties interested in the future of the risk markets.

There are links at the bottom of this article to join the conversation via a LinkedIn group and get access to the whole series.

To begin, let’s take a look at the current state of risk markets. It is important to distinguish between drivers of economic systems and the impact they have on business models in the industrial age vs. in the information age.

See also: Should We Take This Risk?  

Hardware and technology were key drivers throughout the industrial age, which saw a growing batch of new technologies — from cars and planes, to computers and smartphones, to industrial robots.

Industrial-age business models were almost always “extractionary” in nature. The business model engages with some market, and it profits by keeping some portion of the market’s value.

Extracting value from the market

The strategies of the industrial age were:

  • Standardization — interchangeable parts
  • Centralization — big factories, vertical integration, economies of scale
  • Consolidation — an indication that an industry is about to experience a phase change

In the information age, business models almost always embody some creation of “network effect.” When the business model engages with a market, the individual actors all benefit as more actors engage with the business model. Value creation is usually tied to the network’s graph and grows exponentially as the network’s density grows.
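
One common way to formalize that claim (an illustration in the spirit of Metcalfe's law, not a formula from this article) is to note that the value of a network tends to scale with the number of possible connections among its n actors:

    \[ V \;\propto\; \frac{n(n-1)}{2} \]

Doubling the number of engaged actors therefore roughly quadruples the possible connections, which is the compounding growth the author has in mind.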

Creating value for the market, not extracting value from the market

The strategies and efficiency-drivers in the information age are:

  • Cheap connections — enabling multiple paths through the network’s graph
  • Low transaction cost — in terms of time, effort and money
  • Lateral scaling — not vertical structures, which will be flattened out (“top down” increases network fragility)
  • Increase in node diversity — and in the ways each node can connect

All of these drivers lead to increasing network density and flow. Things are moving away from large, brittle centralized organizational structures and toward “distributed,” P2P, “crowd” or “sharing economy” types of organizational structures.

Moving away from command-and-control organizational structures is almost impossible for organizations that profit from efficiency gains derived from a centralized effort. It is this attribute of their business models that opens the door for startups and new business models to bring improvements to the market — challenging incumbent economic and business models.

The information age is all about networks (not technology), and building graphs that create positive network effects.

The conceptual framework best suited to understanding networks and the networked world we now live in is complexity science. The study of complex adaptive systems grew out of roots in the 1940s and has proliferated since the 1990s, with the explosion of computer networks and social networks.

When looking at complex systems, we start by looking at the system’s graph. To get an idea of what a graph is, let’s look at a few examples of “graph companies.”

  • Facebook built the “social graph” of acquaintances; it did not create acquaintances.
  • LinkedIn built the “professional graph” of coworkers and colleagues; it did not create coworkers and colleagues.
  • Google built the “link graph” for topics searched; it did not create backlinks for the topics searched.

Notice that, in each of these cases, the company built and documented the connections between the things or nodes in the network and did not create the things or nodes themselves. Those already existed.

To start looking at the risk markets, we must first understand what is being connected or transferred between the nodes (a.k.a. the users). It should be of little surprise that, in the risk markets, it is risk that is being transferred between nodes, like a user transferring risk to an insurance company. In terms of risk graphing, there are currently two dominant graphs. A third is emerging.

Let’s take a look at the graphs that make up the risk markets and the insurance industry.

  1. Insurance — the centralized “hub and spoke” graph.
  2. Reinsurance — the decentralized graph connecting risk hubs.
  3. P2P Coverage — the distributed graph, which will be formalized. (This is the one that obviously does not exist formally yet; informally, though, you see people calling parents and friends and using GoFundMe, their church, their office or other community organizations to spread risk out laterally.)

In today’s risk markets, insurance companies act as centralized hubs where risk is transferred to and carried through time.

The reinsurance industry graph is enabling second-degree connections between insurance companies, creating a decentralized graph. In the current industry’s combined graph structure or stack, only these two graphs formally exist.

While an insurance company’s ledgers remain a hub where risk is transferred to and carried through time, reinsurance enables those risk hubs to network together, achieving a higher degree of overall system resilience.

See also: Are Portfolios Taking Too Much Risk?  

The P2P distributed graph currently exists via informal social methods.

Stack all three graphs, and you can observe how total risk is addressed across all three graph types. Each has its strengths and weaknesses, which lead each to occupy its proper place within the risk markets.

Because insurance, as a financial service, gets more expensive per $1,000 of coverage as coverage approaches the first dollar of loss, there is a boundary below which insurance’s weaknesses outweigh its strengths.

My expectation is that much of the risk currently being carried on the hub-and-spoke insurance graph will accrue to the P2P distributed graph because of improved capital efficiency on small losses, via a trend toward higher deductibles. This, in turn, may lead centralized insurance to challenge some of the risk currently carried on the decentralized reinsurance graph.

The proportion of total risk — or “market share” — that each graph carries will shift in this phase change.

When people say insurance is dropping the ball, they are expressing that there is a misunderstanding or poor expectation-setting about how much of total risk the first two graphs should be absorbing. Users are unhappy that they end up resorting to informal P2P methods to fully cover risk.

To increase the resilience of society’s risk management systems and fill the gaps left by the insurance and reinsurance graphs, we need the third risk distribution graph: a distributed P2P system.

Society needs a distributed system that enables the transfer of risk laterally from individual to individual via formalized methods. This P2P service must be able to carry uninsurable risk exposures, such as deductibles, or niche risk exposures that insurance is not well-suited to cover.

Much of this activity already occurs today and, in fact, has been occurring since the dawn of civilization. KarmaCoverage.com is designed to formalize these informal methods and enable end users to benefit from financial leverage created by the system’s network effect on their savings.

When observing a system through the complexity paradigm, another key measure is the system’s balance of resilience vs. efficiency. Resilience and efficiency sit at opposite ends of a spectrum. A system that is 100% resilient will exhibit an excess of redundancy and wasted resources, while a system that is 100% efficient will exhibit an extreme brittleness that leaves it prone to collapse.

When we look at the real world and natural ecosystems as examples, we find that systems tend to self-organize toward a balance of roughly 67% resilience and 33% efficiency.

Industrial-age ideas have driven economics as a field of study to over-optimize for efficiency, but economics has, in recent years, begun to challenge this notion as the field expands into behavioral economics, game theory and complexity economics — all of which shift the focus away from solely optimizing for efficiency and toward optimizing for more sustainable and resilient systems. In the risk markets, optimizing for resilience should have obvious benefits.

Now, let’s take a look at how this applies practically to the risk markets, by looking at those three industry graphs.

Centralized network structures are highly efficient. This is why a user can pay only $1,000 per year for home insurance and, when her home burns down, get several hundred thousand dollars to rebuild. From the user’s point of view, the amount of leverage she was able to achieve via the insurance policy was highly efficient. However, like yin and yang, centralized systems have an inherent weakness — if the central node in the network (the insurance company) is removed, the entire system will collapse. It is this high risk of system collapse that necessitates so much regulation.
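
To put a rough number on that leverage (the $300,000 rebuild cost is an illustrative assumption, not a figure from the article):

    \[ \text{leverage} \approx \frac{\$300{,}000 \text{ rebuild}}{\$1{,}000 \text{ annual premium}} = 300\times \]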

In the risk markets, we can observe two continuing efforts to reduce the risk of an insurance system collapse: a high degree of regulation and the existence of reinsurance markets. The reinsurance markets function as a decentralized graph in the risk markets, and their core purpose is to connect the centralized insurance companies in a manner that ensures their inherent brittleness does not materialize into a “too big to fail” type of event.

Reinsurance achieves this increase in resilience by insuring insurance companies on a global scale. If a hurricane or tsunami hits a few regional carriers of risk, those carriers can turn to their reinsurance for coverage on the catastrophic loss. Reinsurance companies are functionally transferring the risk of that region’s catastrophic loss event to insurance carriers in other regions of the globe. By stacking the two systems’ graphs (insurance and reinsurance), the risk markets have improved overall system resilience while still retaining a desired amount of efficiency.

Observations of nature reveal what appears to be a natural progression of networks that grow in density of connections. Therefore, it makes sense that the reinsurance industry came into existence after the insurance industry, boosting the risk markets’ overall density of connections. Along the same line of thought, we would expect to see the risk markets continue to increase in the density of connections from centralized to decentralized and further toward distributed. A distributed network in the risk markets will materialize as some form of financial P2P, “crowd” or “sharing economy” coverage service.

A network’s density is defined by the number of connections between the nodes. More connections between nodes mean the network has a higher density. For example, a distributed network has a higher density of connections than a centralized network. However, a higher density of connections requires more intense management efforts. There is a limit to how much complexity a centralized management team can successfully organize and control.
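
As a minimal illustration of that definition (a sketch, not from the article; the node count is arbitrary), compare the density of a hub-and-spoke graph with that of a fully distributed one:

    # Density of an undirected graph: actual edges divided by possible edges.
    def density(edges: int, nodes: int) -> float:
        possible = nodes * (nodes - 1) / 2
        return edges / possible

    n = 100  # hypothetical number of participants

    # Hub and spoke (centralized insurance): each node connects only to the hub.
    hub_and_spoke = density(edges=n - 1, nodes=n)

    # Fully distributed P2P: every node can connect to every other node.
    fully_distributed = density(edges=n * (n - 1) // 2, nodes=n)

    print(f"hub-and-spoke density:     {hub_and_spoke:.3f}")      # ~0.020
    print(f"fully distributed density: {fully_distributed:.3f}")  # 1.000

Reinsurance, which links the hubs to one another, sits between these two extremes.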

See also: 5 Steps to Profitable Risk Taking  

When a network’s connections outgrow centralized management’s capacity to control, the network will begin to self-organize or exhibit distributed managerial methods. Through this self-organization, a new graph structure of the network’s connections will begin to emerge. As this process unfolds, an entirely new macro system structure will emerge that shows little resemblance to the system’s prior state, much like a new species through evolution.

What emerges is a macro phase change (aka “disruption”) that does not necessitate any new resource inputs, only a reorganization of the resources. For example, the macro state of water can go through a phase change and become ice. The micro parts that make up water and ice are the same. The macro state, however, has undergone a phase change, and the nature of the connections between the micro parts will have been reorganized.

In his book “Why Information Grows: The Evolution of Order from Atoms to Economies,” MIT’s Cesar Hidalgo explains that, as time marches forward, the amount of information we carry with us increases. That information ultimately requires a higher density of connections as it grows. This can be understood at the level of an individual who grows wiser with experiences over time. However, as the saying goes, “The more you know, the more you know you don’t know.”

In the history of human systems, we have observed the need for families to create a tribe, tribes to create a society and societies to organize firms that achieve cross-society economic work. We are now at the point of needing these firms to create a network of firms that can handle increased complexity and coordination.

It is this network of firms that will be achieved via distributed methods because no individual firm will ever agree to let another single firm be the centralized controller of the whole network — nor could a single firm do so.

In the next segment of this series, we will look more closely at the distributed graph that will become formalized, creating a P2P system in the risk markets.

I have started a LinkedIn group for discussion on blockchain, complexity and P2P insurance. Feel free to join here: https://www.linkedin.com/groups/8478617

If you are interested in exploring working with KarmaCoverage, please feel free to reach out to me.

How to Build ‘Cities of the Future’

Our cities are built brick by brick, often using construction practices that have evolved little in the last century and with little regard for proper planning and sustainable development.

Yet innovations and new technologies have produced progressive means of constructing the built environment to ensure that urban infrastructure, once in place, can make a valuable contribution to the workings of a city for centuries to come, withstanding many changes in use and function. Good urban infrastructure needs to anticipate change, be built to adapt and be resilient.

The Global Agenda Council on the Future of Cities has detailed 10 of the most important urban innovations that will shape the future of our cities. At the heart of these innovations is an understanding that the cities of the future need to be flexible and adaptive on a day-to-day level – doing more with less space and resources – and, in the long term, able to adapt to the powerful mega-trends placing heavy pressures on the urban environment. The three key trends that will shape the agenda of cities for years to come are: demographic shifts; a changing environment and resource scarcity; and technology and business model disruption.

Demographic shifts

The UN reports that the global population will rise to 9.6 billion by 2050. Nearly all of this population growth will occur in cities – it is estimated that 66% of the global population will live in urban areas by 2050. Most of these cities are located in the global South and, at present, lack the capacity and resources to ensure that growth is sustainable.

Unchecked urban population growth can lead to vast unsustainable urban sprawl or the creation of dense slums. Cities will need to accommodate more people without increasing their urban footprint: increasing density without decreasing quality of life. This can be achieved with reprogrammable living space, such as MIT’s reprogrammable apartments, or by building structures with multiple uses in mind, ensuring that they can be used for different purposes at different times of the day or week, such as reusing office space or schools for social or leisure activities during the evenings or at the weekend.

In the developed world, years of declining birth rates and longer life expectancy are leading to a rapidly aging population, with its own set of challenges. The effects of this demographic shift are already being felt in countries including Japan, Italy, Germany and Norway, with pressure being put on cities to rethink the provision of urban infrastructure, embrace universal design and reuse and repurpose buildings and infrastructure that is becoming obsolete.

See also: Moving Closer to the ‘Smart City’  

This trend is also increasing the demand for health and social services and the provision of housing that will meet the needs of people during their 100-year life. Tokyo is at the forefront of this trend; an estimated 200 schools per year are closing, and the city is repurposing them as adult education centers, senior homes and places of leisure and exercise for the elderly. Cities in other advanced economies need to prepare for this eventuality.

Changing environment and resource scarcity

The world’s climate over the next century is likely to shift dramatically. Increased occurrences of extreme weather events, desertification and rising sea levels all threaten the world’s cities. Fifteen of the world’s 20 largest cities are located in coastal zones threatened by sea-level rise and storm surges. To prepare for these challenges, cities need to be resilient, building coping mechanisms into their urban fabric. If well-designed, infrastructure that protects against high-impact climate events can also be flexible, serving a valuable purpose for the entirety of its life. Projects such as New York’s Dry Line, or Roskilde’s flood defense skatepark combine resilient infrastructure with a space for community leisure activities.

The urban planner Patrick Abercrombie, who created London’s post-World War II master plan, reserved its hinterland as a “green belt” aimed at preserving the countryside while also providing nourishment to the city. Today, the city’s green belt is global, and water and resource scarcity in any region can easily disrupt the delicate balance between a city and its worldwide network of production.

The advent of urban farming will help to alleviate this risk. Urban farms are largely hydroponic – feeding water and nutrients directly to the roots – and closed-loop, meaning they use as much as 90% less water. They can be placed anywhere and stacked vertically, making them as much as 100 times more productive per hectare. By 2050, the world’s population will demand 70% more food than is consumed today; urban farms will help cities to feed their growing populations, creating a vertical green belt, adding flexibility into the food system with guaranteed yields and low-risk supply chains.

Cities consume vast amounts of resources, from the materials of which they are constructed to the products and packaging their citizens demand. Cities cannot continue to follow a take/make/waste pattern, filling landfills and depleting finite resources; they need to move toward a more circular economy. Systems of reuse and recycling need to be in place to deal smartly with waste, and building materials themselves need to be designed for reuse. The European Union program Buildings as Material Banks creates reusable buildings that store and record the value of their constituent materials over their lifetime. Others use upcycled materials such as shipping containers to provide low-cost, flexible housing to students and young professionals.

Technology and business model disruption

Cities are economic engines. According to McKinsey, 600 cities are responsible for 60% of global GDP. The healthy economy of a city sustains its population through salaries and entrepreneurial activity. However, all economic activity is subject to disruption; shifts in business models can create opportunities, but cities from Detroit to Liverpool have seen the possible negative effects of industrial change.

In the fourth industrial revolution, we are likely to see the biggest industrial shifts in a generation, changing the way we work and live in the urban environment. Innovations such as 3D printing, artificial intelligence and next-generation robotics will shift models of work and production in ways that are impossible to predict. Cities and businesses need to be adaptive. Google, a company at the forefront of this change, anticipates that its business model could shift dramatically. The company’s new Mountain View, CA, headquarters is adapted for this: a series of giant domes under which any number of structures, fit for any purpose, can be quickly assembled, making it completely reprogrammable for any eventual use. Cities need to take a similar approach to construction.

See also: Can Insurance Become Utility, Like Electricity?  

The sharing economy can be defined as the distribution and sharing of excess goods and services between individuals, largely enabled by modern technology. This new model is having a deep impact on the urban environment. Many consumers are moving away from ownership and toward access, renting access to mobility, entertainment or space.

Companies of the sharing economy naturally add a layer of flexibility into the city. Airbnb, for example, allows people to rent out their apartments when they are out of town, easily increasing a city’s capacity to accommodate influxes of visitors as demand increases. As the sharing economy develops, similar companies will enable cities to turbocharge their efficiency, ensuring that no excess capacity is wasted.

Humanity faces the mammoth task of adding more than two billion people to the urban population before 2050, the equivalent of creating a city the size of London every month for the next two decades. To house, feed and employ these people, cities will have to do more with less. They have to be smarter, greener and more efficient. They will have to innovate.

Secret to Finding Top Technology Talent

In the pundit scramble following the 2016 presidential election, I heard a commentator say the electorate could be divided into two distinct groups: those who have benefited from the technical revolution and those who have not. While we know the outcome of a national election is based on a variety of complex factors, history provides plenty of examples of workers left behind in the wake of technical advances. Clerks and scribes lost their niches after the invention of the printing press; textile mills displaced weavers; and steel manufacturing plants that once employed thousands can now be run with a few hundred managing the automated processes.

Today, few jobs do not involve technology at some level. We’ve innovated nearly everything, and we continue to update with the speed of the next idea. For technology companies, in particular, the search for talented staff is becoming increasingly difficult. It’s not breaking news that there’s a shortage of workers with software and programming skills in the U.S. In September 2015, Fortune reported that there are 1.42 tech jobs for each worker. The shortage is exacerbated by the fact that skilled tech job seekers are reportedly 3.6 times more interested in working in tech hubs like San Francisco, San Jose, Austin and Seattle — not reassuring for those of us in other geographies.

What’s alarming is the overwhelming lack of creativity in the face of this talent shortage. At a recent national technology and innovation conference, I listened to a panel focused on recruiting and retaining top female technical talent. I found it surprising that panel members were not enthusiastic about the benefits of encouraging career changers to enter the technical field. The panelists’ view was very traditional: go to a top-tier program like MIT or Stanford and get a technical degree. That view may work for Amazon and Google, but it is not particularly realistic for a small company recruiting tech talent in the Midwest.

See also: All Insurers Must Become Insurtechs  

Career changers are a valued resource pool

In NIPR’s Java shop in Kansas City — where we build and support software for the insurance industry — we are pursuing a different (and, I would argue, more innovative) recruitment path. We compete with large national telecom and tech companies for architects, developers and automated testing experts, and we have found that career changers have provided us with some of our top talent. We have hired former welders, medical researchers, math teachers and Marines. They now build software, run our scrum teams and provide production support for our customers.

It would be an overstatement to suggest that this successful talent recruitment approach was a result of leadership’s foresight and brilliance. Rather, we benefited from strong employee recruitment and retention programs and a bit of luck. Through a vigorous internship program, many of today’s technical staff members started out in entry-level positions in customer support areas. All — and this is where the luck comes in — were logical thinkers, problem solvers and hard workers who were open to trying new things.

A common theme among our career changers is that their path to a technology career was less intentional than you might expect. Few saw the career path ahead. Rather, as one of our team members said, “I had no idea what I wanted to do, so I was willing to take a low-level job to get in the door.”

Demystifying technology jobs and “growing” technical talent

Software development and technology jobs are shrouded in an aura of mystery, often appearing inaccessible and likely preventing smart, capable people from ever considering technical careers. NIPR — with luck and good recruitment — has demystified tech jobs for career changers. We know from experience that great technical talent can be grown. One of our talented software developers has a unique perspective that gives clarity and a “common sense” dimension to his technology work. As a former union welder and Walmart employee, his seems an unlikely path to a tech career. But that’s not how he sees it. “I’ve always had the same job, I’ve just used different tools to get the work done. Whether moving freight at Walmart, moving steam through pipes as a welder or pushing code into production, I am doing the same thing. I have to keep things moving, find the road blocks and fix them. Whether data, freight or steam, my job is to keep it moving seamlessly to the customer.”

NIPR’s willingness to take a risk on career-changers has also netted staff members who typically have well-established work habits, understand collaborative work environments and, as one of our managers put it, come to the job “more fully formed.” These are people who are adaptable to change — a constant in a technical environment. An added benefit: Career-changers bring a level of excitement to their role, adding energy and value across the company.

See also: 3 Major Areas of Opportunity  

With an appreciation of the rewards that career-changers bring to our workplace, we are now trying to turn luck into strategy. We are looking for smart, creative, hard-working, engaged thinkers who are willing to learn.

NIPR’s strong tuition reimbursement program and a recently added student loan contribution plan help build skills and improve retention. The next step is a stronger in-house training program and a mentoring program that will help provide an even more supportive environment for career changers.

One of NIPR’s team leaders summed it up nicely: “Support from others is key to everyone’s success. We see it as part of the job to transfer knowledge and help others succeed. That’s a win for all of us.”

4 Ways to Manage Remote Workers

Very few entrepreneurs can go from idea to success without a team of people supporting their projects. Besides hiring people in-house for human resources, marketing, production and service jobs, they may need to hire virtual employees and contractors to fill these positions. Either way, they need to foster a team-based environment that creates a feeling of accountability and responsibility around the shared goal of success.

Creating a team atmosphere can be very difficult with virtual staff, whether employees or contractors. It’s a growing issue because technology opens up so many options for people around the world to work together as a team. According to Gallup, as many as 37% of workers were telecommuting as of 2015.

But there’s more to teamwork than simply working together on the same project. Teamwork involves a sense of camaraderie, support, respect and cohesiveness that can’t always be manufactured simply by the process of a shared project.

See also: The Keys to Forming Effective Teams  

Remote teams are not at a disadvantage in terms of overall performance. A study conducted by MIT found that teams of dispersed, remote workers often outperformed teams of workers in the same location. In part, this is due to the increased productivity that employees and contractors enjoy while working on their own, within their own ideal environment.

But to truly harness that productivity, entrepreneurs with dispersed teams must learn to effectively manage those teams and create a sense of teamwork within them. This can be done by:

  1. Having at least one face-to-face or screen-to-screen meeting. Even virtual face-to-face communication, such as through sites like Skype, helps build relationships and foster trust within the team. People like human contact.
  2. Encouraging collaboration. There is a difference between true collaboration and simply working together. Collaboration allows the team to get excited over a shared goal and inspiration, rather than simply doing their part to achieve an end to a project. Schedule occasional brainstorming chats or conference calls to foster a collaborative environment.
  3. Being clear about expectations, guidelines and standards. One of the best ways to undermine a team is to give every member a different set of rules to play by. Assume that your team members are going to talk and share information outside of scheduled meetings. Keep all their expectations, guidelines and standards uniform so there is no jealousy, competitiveness or implied favoritism.
  4. Giving the team a place to collectively debrief. Create a “virtual water cooler” so that remote employees and contractors can talk, exchange ideas and have an informal place to bond outside of meetings, Harvard Business Review suggests. You can do this by setting up a private group on a social networking platform or by using a program that has group chats or forums.

In a world where more and more employees are working remotely, it is important to take extra steps like these to create a team environment among people who don’t work in the same location. The result can be a sense of community and loyalty that cannot be quantified. Feeling like you’re part of the team leads to lower employee turnover, greater job satisfaction and higher productivity and creativity.

See also: How to Pick Your Insight Team  

So why not schedule that weekly team call? And allow the same technology that enables us to work apart to bring us together.