
Does College Matter Any More?

In the technology future we are headed into, the half-life of a career will be about five years, because entire industries will rapidly be reinvented. Education counts more than ever. A bachelor’s degree is now the equivalent of high school, and technology skills are as fundamental as reading and writing. Given this, my greatest frustration is that Silicon Valley is regressing by encouraging children to skip college and play the start-up lottery. That approach glorifies college dropouts who start companies, even though the vast majority will fail and permanently wreck their careers. Billionaire Peter Thiel, who cofounded PayPal and Palantir, goes so far as to give elite students $100,000 to drop out of college.

Sadly, I am on the losing side of this debate. My first defeat was in a globally telecast Intelligence Squared debate on whether too many kids go to college. With Northwestern University President Emeritus Henry Bienen by my side, I debated Peter Thiel and conservative icon Charles Murray. We lost, with 40% of the well-educated Chicago audience voting against the need for college and 39% agreeing with us. Needless to say, I was shocked.

I lost again over the weekend, in a segment on CBS Sunday Morning, the most watched morning news show in the U.S. CBS hyped the college dropouts without showcasing the dozens of failures and the lives that have been ruined. CBS took the Thiel Foundation at its word that its fellows have started world-changing companies, created 1,000 jobs and raised $330 million in venture capital. These are gross exaggerations; even the start-ups that CBS featured are all more of the same silly apps, and there are literally thousands more like these.

Here is what I said on the show:

“It breaks my heart when some of the most promising students don’t fulfill their potential because they’re chasing rainbows.

“It’s like what happens in Hollywood: You have tens of thousands of young people flocking to Hollywood thinking that they’re gonna become a Brad Pitt or an Angelina Jolie; they don’t.

“They don’t become billionaires. There haven’t been many Mark Zuckerbergs after Mark Zuckerberg achieved success.”

I added that there is little evidence the Thiel dropouts are doing much that isn’t already being done in Silicon Valley. “Everyone does the same thing: It’s social media, it’s photo-sharing apps. Today it’s the sharing economy. It’s ‘Me, too,’ ‘More of the same.'”

You can see the full article published by CBS here, and you can view the segment here.

What Is and What Isn’t a Blockchain?

I Block, Therefore I Chain?

What is, and what isn’t, a “blockchain”? The Bitcoin cryptocurrency uses a data structure that I have often described as one of a class of “mutual distributed ledgers.” Let me set out the terms as I understand them:

  • ledger – a record of transactions;
  • distributed – divided among several or many, in multiple locations;
  • mutual – shared in common, or owned by a community;
  • mutual distributed ledger (MDL) – a record of transactions shared in common and stored in multiple locations;
  • mutual distributed ledger technology – a technology that provides an immutable record of transactions shared in common and stored in multiple locations.

Interestingly, the 2008 Satoshi Nakamoto paper that preceded the January 2009 launch of the Bitcoin protocol does not use the term “blockchain” or “block chain.” It does refer to “blocks.” It does refer to “chains.” It does refer to “blocks” being “chained” and also to a “proof-of-work chain.” The paper’s conclusion echoes an MDL: “we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.” [Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, bitcoin.org (2008)]

I have been unable to find the person who coined the term “block chain” or “blockchain.” [Contributions welcome!] The term “blockchain” only makes it into Google Trends in March 2012, more than three years after the launch of the Bitcoin protocol.

[Figure: Google Trends interest over time for “blockchain”]

And the tide may be turning. In July 2015, the States of Jersey issued a consultation document on regulation of virtual currencies that referred to “distributed ledger technology.” In January 2016, the U.K. Government Office for Science fixed on “distributed ledger technology,” as have the Financial Conduct Authority and the Bank of England. Etymological evolution is not over.

Ledger Challenge

Wuz we first? Back in 1995, our firm, Z/Yen, faced a technical problem. We were building a highly secure case management system that would be used in the field by case officers on personal computers. Case officers would enter confidential details on the development and progress of their work. We needed to run a large concurrent database over numerous machines. We could not count on case officers out on the road dialing in or using Internet connections. Given the highly sensitive nature of the cases, security was paramount, and we couldn’t even trust the case officers overly much, so a full audit trail was required.

We took advantage of our clients’ “four eyes” policy. Case officers worked on all cases together with someone else, and not on all cases with the same person. Case officers had to jointly agree on a final version of a case file. We could count on them (mostly) running into enough other case officers over a reasonable period and using their encounters to transmit data on all cases. So we built a decentralized system where every computer had a copy of everything, but encrypted so that case officers could view only their own work, oblivious to the many other records on their machines. When case officers met each other, their machines would “openly” swap their joint files over a cable or floppy disk but “confidentially” swap everyone else’s encrypted files behind the scenes, too. Even back at headquarters, four servers treated each other as peers rather than deferring to a master central database. If a case officer failed to “bump into” enough people, then he or she would be called and asked to dial in, meet someone or drop by headquarters to synchronize. This was, in practice, rarely required.

We called these decentralized chains “data stacks.” We encrypted all of the files on the machines, permitting case officers to share keys only for their shared cases. We encrypted a hash of every record within each subsequent record, a process we called “sleeving.” We wound up with a highly successful system that had a continuous chain of sequentially encrypted records across multiple machines treating each other as peers. We had some problems with synchronizing a concurrent database, but they were surmounted.
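
That “sleeving” chain is easy to sketch. Here is a minimal illustration in Python; the field and function names are mine, and where Z/Yen encrypted the embedded hashes, this sketch uses plain SHA-256 hashing for brevity:

```python
import hashlib
import json

def sleeve(records):
    """Chain records: each record embeds the hash of the one before it."""
    chained = []
    prev_hash = "0" * 64  # sentinel for the first record
    for payload in records:
        record = {"payload": payload, "prev_hash": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chained.append(record)
    return chained

def verify(chained):
    """Re-derive the chain; a tampered record breaks every later link."""
    prev_hash = "0" * 64
    for record in chained:
        if record["prev_hash"] != prev_hash:
            return False
        prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
    return True

ledger = sleeve(["case 1 opened", "case 1 updated", "case 1 closed"])
assert verify(ledger)
ledger[1]["payload"] = "case 1 falsified"  # tamper with history...
assert not verify(ledger)                  # ...and every later link breaks
```

Because each record’s hash depends on its predecessor’s, altering any historical record invalidates everything after it, which is what makes such an audit trail tamper-evident.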

Around the time of our work, there were other attempts to build similarly secure distributed transaction databases, e.g. Ian Grigg’s Ricardo for payments, and Stanford University’s LOCKSS and CLOCKSS for academic archiving. Some people might point out that we probably weren’t truly peer-to-peer, reserving that accolade for Gnutella in 2000. Whatever. We may have been bright, perhaps even first, but we were not alone.

Good or Bad Databases?

In a strict sense, MDLs are bad databases. They wastefully store information about every single alteration or addition and never delete.

In another sense, MDLs are great databases. In a world of connectivity and cheap storage, it can be a good engineering choice to record everything “forever.” MDLs make great central databases, logically central but physically distributed. This means that they eliminate a lot of messaging. Rather than sending you a file to edit, which you edit, sending back a copy to me, then sending a further copy on to someone else for more processing, all of us can access a central copy with a full audit trail of all changes. The more people involved in the messaging, the more mutual the participation, the more efficient this approach becomes.
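
The point about eliminating messaging can be made concrete with a toy append-only store. This is a sketch of the idea, not any particular product, and all the names are mine:

```python
import datetime

class AuditedStore:
    """Append-only: every alteration is a new entry; nothing is deleted."""

    def __init__(self):
        self._entries = []  # the full history, kept "forever"

    def record(self, who, key, value):
        self._entries.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "who": who,
            "key": key,
            "value": value,
        })

    def current(self, key):
        """The latest entry wins; earlier ones remain as the audit trail."""
        for entry in reversed(self._entries):
            if entry["key"] == key:
                return entry["value"]
        return None

    def history(self, key):
        return [e for e in self._entries if e["key"] == key]

# Instead of emailing copies around, everyone edits the one logical copy:
store = AuditedStore()
store.record("alice", "contract.doc", "draft 1")
store.record("bob", "contract.doc", "draft 2")
assert store.current("contract.doc") == "draft 2"
assert len(store.history("contract.doc")) == 2  # full audit trail survives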

Trillions of Choices

Perhaps the most significant announcement of 2015 came in January, from IBM and Samsung. They announced their intention to work together on mutual distributed ledgers (aka blockchain technology) for the Internet of Things. ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) is a jointly developed system for distributed networks of devices.

In summer 2015, a North American energy insurer raised an interesting problem with us. It was looking at insuring U.S. energy companies that were about to offer reduced electricity rates to customers who let them turn appliances, for example a freezer, on and off. Now, freezers in America can hold substantial and valuable quantities of foodstuffs, often several thousand dollars’ worth. Obviously, the insurer was worried about correctly pricing a policy for the electricity firm in case there was some enormous cyber-attack or network disturbance.

Imagine coming home to find your freezer off and several thousands of dollars of thawed mush inside. You ring your home and contents insurer, which notes that you have one of those new-fangled electricity contracts: The fault probably lies with the electricity company; go claim from them. You ring the electricity company. In a fit of customer service, the company denies having anything to do with turning off your freezer; if anything, it was probably the freezer manufacturer that is at fault. The freezer manufacturer knows for a fact that there is nothing wrong except that you and the electricity company must have installed things improperly. Of course, the other parties think, you may not be all you seem to be. Perhaps you unplugged the freezer to vacuum your house and forgot to reconnect things. Perhaps you were a bit tight on funds and thought you could turn your frozen food into “liquid assets.”

I believe IBM and Samsung foresee, correctly, 10 billion people with hundreds of ledgers each, a trillion distributed ledgers. My freezer-electricity-control-ledger, my entertainment system, home security system, heating-and-cooling systems, telephone, autonomous automobile, local area network, etc. In the future, machines will make decisions and send buy-and-sell signals to each other that have large financial consequences. Somewhat coyly, we pointed out to our North American insurer that it should perhaps be telling the electricity company which freezers to shut off first, starting with the ones with low-value contents.

A trillion or so ledgers will not all run through a single ledger. The idea behind cryptocurrencies is “permissionless” participation: any of the billions of people on the planet can participate. Another way of looking at this is that all of the billions of people on the planet are “permissioned” to participate in the Bitcoin protocol for payments. The problem is that they will not be continuous participants. They will dip in and out.

Some obvious implementation choices are:

  • public vs. private – is reading the ledger open to all or just to defined members of a limited community?
  • permissioned vs. permissionless – are only people with permission allowed to add transactions, or can anyone attempt to add a transaction?
  • true peer-to-peer vs. merely decentralized – are all nodes equal and performing the same tasks, or do some nodes have more power and additional tasks?
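
Because the three choices are independent, a given ledger design is just a point on a small grid. A hypothetical sketch (the type names are mine, not an industry standard):

```python
from dataclasses import dataclass
from enum import Enum

class Read(Enum):
    PUBLIC = "anyone may read the ledger"
    PRIVATE = "only defined members of a limited community may read"

class Write(Enum):
    PERMISSIONLESS = "anyone may attempt to add a transaction"
    PERMISSIONED = "only parties with permission may add transactions"

class Topology(Enum):
    PEER_TO_PEER = "all nodes equal, performing the same tasks"
    DECENTRALIZED = "some nodes have more power and additional tasks"

@dataclass(frozen=True)
class LedgerDesign:
    read: Read
    write: Write
    topology: Topology

# Bitcoin, on this grid: public, permissionless, peer-to-peer.
bitcoin = LedgerDesign(Read.PUBLIC, Write.PERMISSIONLESS, Topology.PEER_TO_PEER)
```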

People also need to decide whether they want to use an existing ledger service (e.g. Bitcoin, Ethereum, Ripple), copy a ledger off-the-shelf or build their own. Building your own is not easy, but it’s not impossible. People have enough trouble implementing a single database, so a welter of distributed databases is more complex, sure. However, if my firm can implement a couple of hundred with numerous variations, then it is not impossible for others.

The Coin Is Not the Chain

Another sticking point of terminology is adding transactions. There are numerous validation mechanisms for authorizing new transactions, e.g. proof-of-work, proof-of-stake, consensus or identity mechanisms. I divide these into “proof-of-work,” i.e. “mining,” and consider all the others various forms of “voting” to agree. Sometimes, one person has all the votes. Sometimes, a group does. Sometimes, more complicated voting structures are built to reflect the power and economic environment in which the MDL operates. As Stalin said, “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this — who will count the votes, and how.”
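
The “mining” branch of that divide is simple to demonstrate: a node earns the right to add a block by finding a nonce whose hash clears a difficulty target, and anyone can check the result with a single hash. A toy version (the difficulty and encoding are arbitrary choices of mine):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the proof: expensive to find, trivial to verify
        nonce += 1

nonce = mine("block 42: alice pays bob 10")
# Verification costs one hash, however long the search took:
assert hashlib.sha256(f"block 42: alice pays bob 10:{nonce}".encode()) \
    .hexdigest().startswith("0000")
```

Voting-based schemes replace that brute-force search with approval from whoever holds the votes, which is exactly why Stalin’s quip applies.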

As the various definitions above show, the blockchain is the data structure, the mechanism for recording transactions, not the mechanism for authorizing new transactions. So the taxonomy starts with an MDL or shared ledger; one kind of MDL is a permissionless shared ledger, and one form of permissionless shared ledger is a blockchain.
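
That nesting can be written down directly; a minimal sketch (the class names are mine):

```python
class MutualDistributedLedger:
    """A record of transactions shared in common, stored in multiple locations."""

class PermissionlessSharedLedger(MutualDistributedLedger):
    """An MDL to which anyone may attempt to add transactions."""

class Blockchain(PermissionlessSharedLedger):
    """A permissionless shared ledger that records transactions in blocks,
    each chained to its predecessor by a hash."""

assert issubclass(Blockchain, MutualDistributedLedger)  # every blockchain is an MDL
# ...but not every MDL is a blockchain.
```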

Last year, Z/Yen created a timestamping service, MetroGnomo, with the States of Alderney. We used a mutual distributed ledger technology, i.e. a technology that provides an immutable record of transactions shared in common and stored in multiple locations. However, we did not use “mining” to authorize new transactions. Because the incentive to cheat appears irrelevant here, we used an approach called “agnostic woven” broadcasting from “transmitters” to “receivers” — to paraphrase Douglas Hofstadter, we created an Eternal Golden Braid.

So is MetroGnomo based on a blockchain? I say that MetroGnomo uses an MDL, part of a wider family that includes the Bitcoin blockchain along with others that claim to use technologies similar to the Bitcoin blockchain. I believe that the mechanism for adding new transactions is novel (probably). For me, it is a moot point whether we “block” a group of transactions or write them out singly (blocksize = 1).

Yes, I struggle with “blockchain.” When people talk to me about blockchain, it’s as if they’re trying to talk about databases yet keep referring to “The Ingres” or “The Oracle.” They presume the technological solution, “I think I need an Oracle” (sic), before specifying the generic technology, “I think I need a database.” Yet I also struggle with MDL. It may be strictly correct, but it is long and boring. Blockchain, or even “chains” or “ChainZ” is cuter.

We have tested alternative terms such as “replicated authoritative immutable ledger,” “persistent, pervasive, and permanent ledger” and even the louche “consensual ledger.” My favorite might be ChainLedgers. Or Distributed ChainLedgers. Or LedgerChains. Who cares about strict correctness? Let’s try to work harder on a common term. All suggestions welcome!

Atlanta: The Ripening Silicon Peach

When evaluating the beginnings of established tech markets in the U.S., one finds several similarities in their regional characteristics that can serve as indicators of a region’s tech trajectory. Consider Palo Alto, New York and Seattle, also known as the centers of Silicon Valley, Silicon Alley and Silicon Forest, respectively. Each has unique advantages in its geography, easy access to Millennial tech talent, attractive quality-of-life benefits and specialized technology roots.

The same pattern is beginning to emerge in Atlanta. Atlanta’s combination of low cost of doing business, educational institutions and growing population of Fortune 1,000 companies is making it one of the fastest-growing tech hubs in the country. This year, Atlanta was ranked among the top 10 tech talent markets with a 21% growth in tech jobs since 2010, according to the latest CBRE report.

One driver in Atlanta’s recent economic and tech growth is the infiltration of insurance. The insurance industry is undergoing a tech transformation of its own, and of late several of the industry’s leading insurance companies have set up shop in the region.

Let’s take a look at how we got here.

Tech Market Drivers

In addition to a prominent business ecosystem – Georgia is home to 20 Fortune 500 headquarters and 33 Fortune 1,000 companies – Atlanta’s tech surge is largely fueled by its world-class universities, which emphasize technology specialization and diversity, and its reputation as an attractive work-life destination.

Much as Silicon Valley got its start with tech recruits from nearby Stanford University, Atlanta’s Midtown sits within walking distance of two respected universities, the Georgia Institute of Technology and Georgia State. Georgia Tech is currently ranked seventh in the nation among public universities, and its college of engineering is consistently ranked in the nation’s top five. Georgia State is ranked fifth in the nation for its risk management and insurance program. The vast pool of graduating talent each year is a huge attraction for start-ups and Fortune 1,000 companies alike.

Atlanta’s universities are also known for their emphasis on diversity. Georgia Tech is consistently rated among the top universities for graduation rates of underrepresented minorities in engineering, computer science and mathematics. This emphasis has spread beyond the universities into the region’s broader tech community: Georgia is ranked as one of the top five states for women-owned businesses, with a 132% growth rate from 1997 to 2015, according to the U.S. Census Bureau.

As for work-life attractiveness, Atlanta has been named the “top city people are moving to” by Penske for the last five years, because of its range of job opportunities, low cost of living and appealingly warm weather. Similarly, Atlanta was named one of the top 10 cities in which to be a software engineer, according to the job website Glassdoor.

Insurance Intersect

The insurance industry is one of the key drivers of economic growth in the country, and it is establishing major roots in the Atlanta region. Just in the last few years, Atlanta has seen a number of insurance companies relocate their head offices to the South. Recently, State Farm announced the addition of 3,000 jobs over the next 10 years, and MetLife just announced a significant investment in Midtown, choosing the area for its proximity to rapid transit and the international airport. From Atlanta, travelers can reach 90% of the U.S. in less than three hours.

Insurance growth in the region is also likely linked to the density and size of insurance claims on the East Coast, with the largest insurance providers located along the corridor from Boston down to Miami. The 10 costliest hurricanes in U.S. history have hit the East Coast, and four have greatly affected Georgia.

2015 and Beyond

Looking ahead, I expect the majority of insurance companies to increase their visibility in Atlanta, as they’ll find a wider pool of insurance experts and other advantages that cater to the industry’s growth. Similar to the tech hubs ahead of it, Atlanta will continue taking advantage of its geography, access to talent and cultural ideals to not only build its tech community but to also push the insurance industry forward. The U.S. will soon have another major tech hub to be proud of.

What Is a Year of Life Worth? (Part 1)

Most conservatives and liberals agree that we should not consider cost in deciding whether people should undergo medical procedures that have the potential to save lives and cure diseases. Unfortunately, most conservatives and liberals are wrong.

Declaring the idea of cost-effectiveness a “forbidden topic in the health care debate,” Aaron Carroll shows just how averse we are to the idea of comparing money cost with health outcomes. It’s even written into the Affordable Care Act:

“… We in the U.S. are so averse to the idea of cost-effectiveness that when the Patient Centered Outcomes Research Institute, the body specifically set up to do comparative effectiveness research, was founded, the law explicitly prohibited it from funding any cost-effectiveness research at all. As it says on its website, ‘We don’t consider cost-effectiveness to be an outcome of direct importance to patients.’”

He gives another example:

“Take the U.S. Preventive Services Task Force, which was set up by the federal government to rate the effectiveness of preventive health services on a scale of A to D. When it issues a rating, it almost always explicitly states that it does not consider the costs of providing a service in its assessment.

“And because the Affordable Care Act mandates that all insurance must cover, without any cost-sharing, all services that the task force has rated A or B, that means that we are all paying for these therapies, even if they are incredibly inefficient.”

Here is the brutal reality: We don’t have an unlimited pile of money to spend on anything. And if we don’t pay attention to what we get for the money we spend (which has historically been the case for government regulatory agencies), we will end up spending money in ways that actually reduce life expectancy for the average American. In a 1996 study for the National Center for Policy Analysis, Tammy Tengs found that:

  • By spending $182,000 every year for sickle cell screening and treatment for black newborns, we add 769 years collectively to their lives at a cost of only $236 for each year of life saved.
  • By spending about $253 million a year on heart transplants, we add about 1,600 years to the lives of heart patients at a cost of $158,000 per year of life saved.
  • Equipping 3% of school buses with seat belts costs about $1.6 million a year, but this effort will save less than one life-year, so the cost is about $2.8 million per year of life saved.
  • We spend $2.8 million every year on radionuclide emission control at elemental phosphorus plants (which refine mined phosphorus before it goes to other uses), but this effort will save at most one life every decade, so the cost is $5.4 million per year of life saved. (The arithmetic behind these figures is sketched below.)
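
Every figure in that list is the same ratio, annual cost divided by life-years saved per year. A quick check of the numbers above:

```python
def cost_per_life_year(annual_cost, life_years_per_year):
    return annual_cost / life_years_per_year

print(cost_per_life_year(182_000, 769))        # ~$236.67: the sickle-cell figure
print(cost_per_life_year(253_000_000, 1_600))  # ~$158,125: heart transplants

# Where the list quotes only the ratio, the life-years saved can be backed out:
# the seat-belt program buys 1.6M / 2.8M ~ 0.57 life-years a year, and the
# radionuclide controls buy 2.8M / 5.4M ~ 0.52.
print(1_600_000 / 2_800_000, 2_800_000 / 5_400_000)
```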

Tengs, along with Professor John Graham and a team of researchers at the Harvard Center for Risk Analysis, systematically gleaned from the literature annual cost and lifesaving effectiveness information for 185 interventions. Some of these interventions had been fully implemented, some partially implemented and some not implemented at all. The researchers then asked: What if we reallocated funds from regulations and procedures that give us a low rate of return to those that give us a high one?

  • The 185 interventions cost about $21.4 billion a year and saved about 592,000 years of life.
  • If that same money had been spent on the most cost-effective interventions, however, more than 1.2 million years of life could have been saved — about 638,000 more years of life than under the status quo.
  • Implementing the more cost-effective policies, therefore, could save twice as many years of life at no additional cost, as the toy reallocation below illustrates.
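
The reallocation exercise is essentially a greedy optimization: rank interventions by cost per life-year and buy the cheapest life-years first until the same budget is spent. A sketch with illustrative numbers (not the study’s data):

```python
# Each row: (cost per life-year, current annual spend, maximum useful spend).
rows = [
    (236,       182_000,     5_000_000),    # very cost-effective, underfunded
    (158_000,   253_000_000, 253_000_000),  # moderately effective, fully funded
    (2_800_000, 1_600_000,   1_600_000),    # poor return, fully funded
    (5_400_000, 2_800_000,   2_800_000),    # poor return, fully funded
]

budget = sum(spent for _, spent, _ in rows)              # same total money
status_quo = sum(spent / cpl for cpl, spent, _ in rows)  # life-years today

# Greedy reallocation: fund the cheapest life-years first, up to capacity.
optimized, remaining = 0.0, budget
for cpl, _, cap in sorted(rows):
    spend = min(remaining, cap)
    optimized += spend / cpl
    remaining -= spend

print(f"status quo:  {status_quo:,.0f} life-years a year")
print(f"reallocated: {optimized:,.0f} life-years a year")
```

The study’s result, roughly twice the life-years for the same $21.4 billion, is this mechanism applied to 185 real interventions.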

This same principle applies to health insurance. Unless you want your premium to go through the roof, you should choose an insurer that follows a reasonable standard for what care is covered. But that brings us back to Carroll’s point. How are you to know what standard your insurer is using if the whole subject is a “forbidden topic”?

A few years ago, Time Magazine reported that $50,000 for a year of life saved is

“… the international standard most private and government-run health insurance plans worldwide use to determine whether to cover a new medical procedure…. Nearly all other industrial nations — including Canada, Britain and the Netherlands — ration healthcare based on cost-effectiveness and the $50,000 threshold.”

But a Stanford University economist calculated that the threshold for kidney dialysis for Medicare enrollees should be $129,000. Mark Pauly and his colleagues suggested a standard of $100,000 in Health Affairs. Economists generally believe that such standards should be based on the implicit values people reveal when they make choices between money and risk in the job market and make choices as consumers. Studies show that the implicit “value of a statistical life year,” to use a term of art, ranges from $50,000 to $150,000. As Pam Villarreal, Biff Jones and I explained in Health Affairs:

“This is not the amount of money that people would accept to give up their lives. It is instead the implicit value that people place on their lives when making choices between additional risk and money, when the risks involved and the amount of compensation needed to induce people to accept those risks are both small.”
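
Here is how that implicit value is backed out, with illustrative numbers of my own rather than figures from the article: if workers accept an extra $500 a year to bear an additional 1-in-10,000 annual risk of death, the implied value of a statistical life is $500 / 0.0001 = $5 million, which can then be spread over expected remaining life-years:

```python
wage_premium = 500        # extra pay accepted per year (illustrative)
extra_risk = 1 / 10_000   # additional annual chance of death (illustrative)

value_of_statistical_life = wage_premium / extra_risk  # $5,000,000

# Spread over expected remaining life-years (ignoring discounting for
# simplicity), a 40-year horizon implies:
remaining_years = 40
print(value_of_statistical_life / remaining_years)  # $125,000 per life-year
```

That lands inside the $50,000-to-$150,000 range the studies report; discounting future life-years, as a real calculation would, pushes the per-year figure higher.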

For the many problems involved in arriving at a figure, see a review by Ike Brannon. For an extension of the idea to “quality adjusted life years,” or QALYs, see Aaron Carroll’s discussion and links to the literature. The main point there is that a year spent on a respirator shouldn’t count anywhere near as much as a year doing normal activities.

There remains the question of “rationing” and “death panels.” I’ll address that in a future post.

This article first appeared on Forbes.com.