Out of the Box with Jaron Lanier #exsum13

Jaron Lanier, who wrote the well-received book Who Owns The Future (WOTF), is associated with Microsoft Research and has been active in connected things and big data for a very long time. He loves the world of distributed instruments and big data. Yet his talk is one of caution and concern, and perhaps a way out:

‘I have a profound concern that we’ve taken a wrong turn and that the business models we’re pursuing are unsustainable.’

Using an analogy from thermodynamics, Maxwell’s demon, Lanier argues that “there is no free lunch. Whenever there is the appearance of easy or free money, there are serious side effects. The first stock-market flash crash was already in 1987. If you had superior computation capabilities, you could almost create money out of thin air. But if, for example, healthcare companies start excluding their most costly clients, the model will break. If in a hot climate everybody uses air-conditioning, it will actually get hotter and hotter.

The same applies in the economy: the margin is being squeezed out of everything. It’s hard to make money out of devices, hardware. If you can get the data, that’s everything! But what is the business model?

The financial sector became addicted to big data early on. Take the scheme of Long-Term Capital Management; I believe the whole world will pay for that. Any slow-changing variable can be forecast by statistics to some degree, but how much is uncertain. The scheme doesn’t actually represent a model of the world, just a limited view into the future. But that reality is hard to accept, because at first it works! First you have profits out of nothing, and then it fails. This is one of the problems.

What changed was the financial crisis of five years ago (bundled mortgages). Then statistics started to incorporate individual data, scraping the net in order to sell people stupid mortgages in a Big Data sales effort. If you look at the market for personal data, the biggest customers are insurers and banks. Because it works. I believe it will fall off a cliff.

Problems with this model
If you’re going to be selling data, it’s very hard to stay in control of the data. Google, for example, imported the same model from the financial world and uses personal data to place advertisements. But now ad blockers block the ads, and Google needs to pay the ad blockers to let its ads through. While adding almost no value, the ad blockers got in between Google and its business model.

There is a limit to how useful Big Data is: Big Data certainly is not a substitute for wisdom or understanding. If you’ve used the obvious examples of finding where to build a hospital or mall, you start running out of examples quickly. There is a limit to how much useful information is in data, so that space quickly becomes very crowded. And if a few power players apply ‘sorting’ like Maxwell’s demon, you’ll get a power-law outcome with just a few very successful startups, YouTube videos or universities. Yes, it becomes inexpensive for new entrants to join, but the tail is very long and flat.

If Big Data is only used as a routing or sorting mechanism, it can only be used to sell ‘something else’. But at the same time, all that data is being used to replace the ‘something else’ – this process of replacement is true in every industry. The people doing well are closest to the Big Data computers, until the whole thing collapses. There are more and more examples of industries that are being hollowed out, which is unsustainable.

What to do about it?
I’m not sure, but an answer I’ve been exploring is an old model, discussed in the sixties by Ted Nelson (Harvard). In this model, you keep track of where bits originated in an online system. Then you pay people in micropayments based on what data is valuable. It creates a more balanced system that is sustainable. It could create a big-data-driven society with a high degree of automation and still high employment.
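
To make Nelson’s idea concrete, here is a minimal sketch of a provenance-tracking ledger with micropayments. It only illustrates the mechanism Lanier describes; the class name, the per-use rate and the amounts are invented for this example and are not part of his or Nelson’s proposal.

```python
from collections import defaultdict

class ProvenanceLedger:
    """Toy ledger: every stored item remembers who contributed it,
    and every use of that item credits the contributor a micropayment."""

    def __init__(self, rate_per_use=0.001):
        self.rate = rate_per_use           # hypothetical price per use, in dollars
        self.origin = {}                   # item_id -> contributor
        self.balance = defaultdict(float)  # contributor -> accrued micropayments

    def contribute(self, item_id, contributor):
        # Keep track of where the bits originated.
        self.origin[item_id] = contributor

    def use(self, item_id):
        # Whoever's data is being used gets paid, however small the amount.
        contributor = self.origin[item_id]
        self.balance[contributor] += self.rate
        return contributor

ledger = ProvenanceLedger()
ledger.contribute("photo-123", "alice")
for _ in range(2500):                      # 2,500 downstream uses of Alice's data
    ledger.use("photo-123")
print(round(ledger.balance["alice"], 2))   # 2.5 dollars accrued so far
```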

Would the micro-payment paradigm result only in negligible amounts? Probably not:

  1. Mathematics: if you look at the distribution of outcomes in a hub-and-spoke network, the sources of information follow a power law. But in a graph network (like Facebook) there is a wide number of sources and the outcome follows a bell curve (a simulation sketch follows after this list). If explored well, it would create a middle class.
  2. It’s hard to calculate what data is worth, but you can look at the differential between what you would pay for something with or without sharing all your data, or look at what third parties will pay for your data. In my research I hypothesized that even for the most boring person the data is already worth hundreds or even thousands of dollars. If that number hits the poverty line, it may be a new model. Today we have basically two models to solve inequality: redistributive systems or a laissez-faire market. This could be a third model: not political, much more comprehensive, still laissez-faire, and perhaps much more valuable.
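
A quick simulation shows the statistical contrast from point 1. If each node’s ‘payout’ were proportional to its connections, a hub-and-spoke network concentrates almost everything in the hub, while a Facebook-like graph spreads it in a narrow bell curve around the mean. The graph sizes and parameters below are arbitrary; this only illustrates the mathematics, not any figures from the talk.

```python
# Illustrative only: compare "payouts" proportional to connections in two topologies.
import networkx as nx
from statistics import mean, stdev

n = 1000

# Hub-and-spoke: one central hub connected to everyone else.
star = nx.star_graph(n - 1)

# Facebook-like graph: everyone has roughly the same number of ties.
social = nx.watts_strogatz_graph(n, k=10, p=0.1)

def payout_stats(graph):
    degrees = [d for _, d in graph.degree()]
    return {"max": max(degrees),
            "mean": round(mean(degrees), 1),
            "spread": round(stdev(degrees), 1)}

print("hub-and-spoke:", payout_stats(star))    # one huge winner, everyone else gets 1
print("graph network:", payout_stats(social))  # narrow bell curve around the mean
```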

Will we be in time for IoT? It remains to be seen: Even basic functions such as the phone are still pretty glitchy. All companies have had data breaches. It is my major concern for IoT: do we even have the systems in place to make the ‘Things’ run well?

Should every business have an API?

No. Well, not yet. APIs allow data from one site to flow outside of it, be it through an app or a mashup with another Internet service. With a lot more companies sitting on huge piles of data, the idea of a business API enabling data flows to partners, employees and even consumers is catching on in business IT-land.

Last year we saw a milestone when it comes to APIs. The site ProgrammableWeb has been tracking web APIs since 2005 and crossed the 5,000 API mark in its directory, covering everything from Twitter to government sites. From their statement:

Our API directory has hit another major milestone. We now list 5,000 APIs, just a short four months since passing 4,000. No longer is the web simply about links connecting one site to another. Instead, developers are using tools to connect data and functionality from one site to another site. It’s an incredible transformation that has happened over a very short period of time.

When a business owns an enormous database that is just sitting there, it’s odd that people should consume it line by line in a spreadsheet. By having an API, this data can be re-displayed and re-shared in new, and maybe better, ways.
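
As a minimal sketch of what ‘having an API’ can look like in practice, here is a tiny read-only service built with Flask. The product data and the /api/products routes are invented for illustration; a real business API would sit in front of the actual database or data warehouse and add authentication, paging and versioning.

```python
# A minimal read-only business API (illustrative; the data and routes are invented).
from flask import Flask, jsonify

app = Flask(__name__)

# In reality this would live in a database or data warehouse, not in memory.
PRODUCTS = [
    {"sku": "A-100", "name": "Widget", "stock": 42},
    {"sku": "B-200", "name": "Gadget", "stock": 7},
]

@app.route("/api/products")
def list_products():
    """Expose the full product list as JSON so partners and apps can reuse it."""
    return jsonify(PRODUCTS)

@app.route("/api/products/<sku>")
def get_product(sku):
    """Expose a single record instead of forcing consumers to scan a spreadsheet."""
    match = [p for p in PRODUCTS if p["sku"] == sku]
    return (jsonify(match[0]), 200) if match else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(port=5000)
```

Once an endpoint like this exists, the same records can feed a partner’s app, an internal dashboard or a public mashup instead of living in one spreadsheet.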

I have embedded a video below called Containerization that should open your eyes to business APIs, at least that is what it did for me. It compares the idea of APIs to the concept of containers that revolutionized shipping by sea and created new economic opportunities.

Check it out and share your thoughts on the idea of a business API or possible applications.

Nyenrode Big Data Research at VINTlabs

“In a time where we are flooded with data we need to rethink the way we swim”

Towards the end of 2012, Nyenrode University and Sogeti/VINT embarked upon a big data research project. Through structured interviews with experts in large commercial and science-driven organizations we gained valuable insight into their experience, expectations and plans for the coming years.

Johan Schaap & Menno Hamburg, Nyenrode University

It was the biggest IT and marketing trend of 2012, and it will continue in 2013 and beyond: Big Data. Only a few still stubbornly refer to it as hype that will slowly fade away. Currently, major organizations are working out proofs of concept to make big data more valuable to their organization. No one can ignore the opportunity to extract valuable information from the variety of data that becomes available so rapidly today.

No doubt, there will be major breakthroughs during 2013. The solutions differ per industry, and at company level some move ahead of the pack while others deliberately stay behind. Most interesting is the question of which organizations will take the lead in their industry. At this moment, three elements are crucial in answering that question.

1. Knowledge
Regarding data streams and big data sets, both developing the technology to analyze them and asking the right questions dominate the current maturity phase. People who have business knowledge as well as technical understanding seem best suited for the job. Such key resources are already in place in the scientific non-profit sector, such as governmental and non-governmental research centers, since these have been working with big data for a longer period.

2. Experience
Trial and error is unavoidable when applying a new concept. Many commercial organizations focus on customer behavior by doing social media analysis in order to enhance marketing. Detecting patterns in this way goes hand in hand with gaining experience in refining the decision parameters.

3. Tooling
Especially when the analysis to be performed is more specific, the right tooling becomes essential. Currently, Hadoop in combination with dashboard software can perform the analysis needed. For organizations that deal with industry-specific questions these solutions are limited. This is often resolved by developing their own applications, sometimes building upon or connecting to BI and data warehouse platforms.
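
As an indication of what the Hadoop end of such a toolchain can look like at its simplest, below is a classic Hadoop Streaming-style pair of scripts that counts events per type in a log. The layout of the log lines (event name as the first field) is an assumption for this sketch; the aggregated output is what a dashboard would then visualize.

```python
#!/usr/bin/env python
# mapper.py -- Hadoop Streaming mapper: emit one "<event>\t1" pair per log line.
# Assumes each input line starts with an event name, e.g. "page_view 2013-01-15 ...".
import sys

for line in sys.stdin:
    fields = line.strip().split()
    if fields:
        print("%s\t1" % fields[0])
```

```python
#!/usr/bin/env python
# reducer.py -- Hadoop Streaming reducer: sum the counts per event.
# Hadoop delivers the mapper output sorted by key, so a running total is enough.
import sys

current_key, count = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key and current_key is not None:
        print("%s\t%d" % (current_key, count))
        count = 0
    current_key = key
    count += int(value)

if current_key is not None:
    print("%s\t%d" % (current_key, count))
```

Submitted through the Hadoop Streaming jar (passing the two scripts via -mapper and -reducer), the same code runs unchanged over gigabytes or terabytes of logs.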

 

The Big Data Benchmarked Corporation?

Everything that goes on inside an organization is digital, or has a digital representation. Emails, VoIP calls, meeting minutes, web conferences, collaborative online spaces, production lines, scans, you name it! There are employee directories, personalized digital entry passes to enter the building, security cameras and … almost everything happens on corporately managed devices.

Many tools try to tap into this wealth of company data. First there were tools to enable company search. Install Google inside your network and you can search all data that is hidden on the corporate networks. But then search became more savvy and started recognizing people, places and ‘topics’ to make it easier to find what you were looking for. Now, search can spot trends, display results graphically and blend search data with the more traditional BI tools of ‘slicing and dicing’.

But then it became really interesting: there are the tools (such as IBM Atlas) which analyze the interactions among employees and show the ‘real’ structure of the organization: who works with whom, who are the go-to experts on a certain topic and who are the most popular people in general.
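
For a feel of how such tools work underneath, here is a small sketch (not IBM Atlas itself) that builds a graph from who-interacts-with-whom and uses two standard centrality measures to surface the most connected people and the brokers between groups; the interaction pairs are invented.

```python
# Sketch of interaction analysis; the interaction pairs below are invented.
import networkx as nx

# Each tuple is (person, person), taken from, say, mail or chat logs.
interactions = [
    ("anna", "bob"), ("anna", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erik"), ("carol", "erik"),
    ("frank", "carol"), ("frank", "anna"),
]

g = nx.Graph()
g.add_edges_from(interactions)

# Degree centrality ~ "most connected"; betweenness ~ "broker between groups".
most_connected = nx.degree_centrality(g)
brokers = nx.betweenness_centrality(g)

print(sorted(most_connected.items(), key=lambda kv: -kv[1])[:3])
print(sorted(brokers.items(), key=lambda kv: -kv[1])[:3])
```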

But wait! What about privacy? Can organizations simply scan everything we do at work? The answer depends a bit on where you work, but generally: yes, they can! In the United States, organizations can analyze almost anything you do, as long as there is a valid business reason. In Europe, things are a bit more complicated, naturally. I think it will be interesting to see what the effect will be of ‘bring-your-own-device’ and the increased blending of business and personal: will that give us more privacy, or the company more insight?

I can imagine a world where all IT systems used inside an organization are automatically benchmarked against an industry standard. Where a company like Salesforce.com tells you how you are doing with regard to your sales process. Where, through Big Data, we can optimize business processes and find out which people to give a bonus and which ones to fire. But in a world where productivity improvement is dictated by data analysis, what would a working day look like? Crystal ball, anyone?

Milking the masses with Big Data

Making use of the internet population in all kinds of business processes, better known as open-source-inspired innovation or crowdsourcing, was what triggered BusinessWeek in 2006 to write about “milking the masses”. In today’s Big Brother world, making use of the internet population is a lot easier. No active participation of the masses is needed; milking is done automatically by making use of the human exhaust, the digital footprint.

Will we move in the right direction?
In 2006, Jesus Villasante criticized the open-source community for getting milked in his speech at the Holland Open conference. Villasante is the current head of the trust and security unit of the European Commission (DG Information Society and Media). He said:

“From the moment they [the open-source community] realize they are part of the evolution of society and try to influence it, we will be moving in the right direction.”

That message was for the open-source community. But what will happen when you and I realize that we, our data, are getting milked? In “what direction” will we move? The digital culture clash model we presented in The App Effect could give some insights.

Digital Culture Clashes
Traditional business systems want to preserve what we have. They are presented as the source of alienation, now that we face a crisis of trust. The digital subcultures in the lower right corner represent the conversation and participation economy of bloggers, trip advisors and the like. They discuss and charge. The empowered individuals in the top left corner are the activists and looters; they expose data and revolt: the London riots, Project X-ers, WikiLeaks, ethical hackers, Anonymous.

The ambition should be to realize that only a “we all benefit” model is sustainable. That is the right direction Villasante was talking about. In the digital commons, milking and being milked is part of the game, since privacy is a trade-off. In the case of the smart grid, for instance, we need to accept smart meters in our homes for better predictions of energy production and consumption. They milk your energy data and in return you get a cheaper and more sustainable energy system. Pay-as-you-go car insurance systems require a tracking device in your car. Your location is being “milked” but you get cheaper insurance. The same goes for digital healthcare systems, like the one that health insurance company Menzis introduced in the Netherlands. Their pay-less-when-you-train model is built upon the idea that you provide personal data about your behavior and in return you get lower-priced insurance.

Perhaps Jesus Villasante will repeat his message at the “Reloading Data Protection” conference in Brussels in 2013. A message about milking, a message about taking ourselves seriously, a message about a new kind of society and economy, and a message about trusted systems. And then, as soon as we realize what we get in return (a better planet, a better price or a better service), “we will be moving in the right direction”.

Expert Talk: Anjul Bhambhri and Jacob Spoelstra on big data & business opportunities

“It’s very different from what we have seen before, in terms of volume, in terms of the knowledge that is being gleaned or sought from this data.”

“One thing that’s clear is that companies that know how to make use of their data effectively are going to be the ones that are going to have a very strong competitive advantage.”

Anjul Bhambhri has 23 years of experience in the database industry with engineering and management positions at IBM, Informix and Sybase. Bhambhri is currently IBM’s Vice President of Big Data Products, overseeing product strategy and business partnerships.

Jacob Spoelstra is Global Head of R&D at Opera Solutions and has more than 19 years of experience in machine learning, focusing in particular on neural networks. He headed the Opera team that, as part of “The Ensemble,” ended up “first equals” in the prestigious Netflix data mining competition, beating out over 41,000 other entrants.

We talked to them both about using big data for competitive advantages, leveraging customer data and understanding decision makers.

New technologies allowing businesses to mine everything

Leveraging customer data for action


The Future of the Company with Big Data: Insight or Execution, Evolution or Revolution?

Innovation happens when good ideas are met with good execution. With a little bit of focus, it’s easy to find good ideas, through crowd-sourcing, an open management style and social tools.

In fact, there are usually more ideas than there is budget or organizational capacity to implement them. While originally we may have thought that ‘all we need is a great idea’, in reality execution has become the bottleneck in innovation, especially in larger organizations.

What happens when Big Data enters the picture? Today we are hunting for new revelations, great insights, new understanding of the world, our clients and our market. We want to tap into the knowledge that is hidden in the enormous data streams. But what does that insight consist of, really? Will it be revolutionary? Or will it simply mean that insights become a commodity, where anyone can do a quick market analysis from behind any device, using publicly available services, and everyone has the same insights? There are probably a few ways to remain competitive in the Big Data era, but like all competitive advantages they are bound to erode over time. I could imagine that initially access to unique data sources, a special tool for finding the hidden patterns or simply asking better questions will lead to some (or a lot of?) advantage. But then, over time, as insights start to pile up, it’s probably the same as with innovation: it’s not the insights themselves, it’s how you execute: what you do with the insights and how quickly you can make a change.

Or is there a second way to competitive advantage, a much more radical way where Big Data leads to radically new business models that open up unexplored markets? Sure, there will be data brokers and data refiners and data enhancers and the like, but those merely exist to help optimize the Big Data supply chain (which, admittedly, may in fact be very profitable). But what about revolutionary ways to do banking, healthcare, manufacturing, mining, etc.? Quite a few of today’s Big Data examples are incremental and often marketing or market-insight oriented, but there are some glimpses of a revolution. Is fighting criminals before they’ve committed the crime the start of a true revolution? IBM Watson is looking very promising (or threatening, depending on your perspective) for healthcare and finance.

Will Big Data lead to revolutions or is this simply the hype speaking?

Is it still too early for Big Data?

Talking to some C-level executives and BI experts across the USA, it struck me how Big Data is seen as a great opportunity, but also met with quite a bit of hesitance. “I love it in concept, but I’m not sure it’s for us, right now.” For years, companies have been trying to become better at Business Intelligence, make better use of their internal data, create better insights to base their decisions on, or provide more people with ‘self-serve’ access to data. In private, off the record, IT executives admit they’ve been struggling at it. Business Intelligence has ended up on the wish-list among the many other important priorities of IT: budget pressure, the demands to address mobile, virtualize the datacenters and desktops, move to the cloud, address social media, handle necessary migrations and upgrades, implement VoIP and video conferencing, increase security, think about bring-your-own-device, and so on. And these are just the technology-related innovations. I’m not even talking about the great industry transformations that are happening in finance, healthcare, education and elsewhere, each with their own impact on systems and information.

Perhaps for some companies, it’s a matter of learning to walk before trying to run. First get better at using existing, internal, low-volume, fairly structured data sources before trying to tap into the fire hose that is big data. Current business BI users, who love a simple report or dashboard, aren’t necessarily looking to give up those tools, even though Big Data may promise more and better insights.

And while one view is that it can’t hurt to just start a pilot of some sort with big data, some true BI experts I spoke to shivered at the thought of collecting and storing terabytes of data without a proper plan. They know: once the business gets hooked on something, it’s hard or impossible to touch it, to redesign it, to revisit the model. For example, once the marketing team has seen sentiment tracking linked to sales figures, that pilot system is there to stay. One CIO even regretted starting a pilot at all, because after the pilot everyone quickly retreated into familiar territory, instead of embracing the bigger change that is still ahead.

So, while the concept and insights of Big Data are increasingly clear, the actual organizational change that comes with it may keep companies from effectively using it for quite some time. Agree?

Expert Talk: Michael Chui on 5 ways to leverage Big Data

“If you can better understand the characteristics of an individual, of a customer, of a supplier, you can often tailor programmes for those people or those organizations, which makes a win-win situation for both you as well as the partner”

“Some of the things that big data allows you to do is truly improve the performance of overall companies”

Today in our video section: Michael Chui, Senior Fellow at the McKinsey Global Institute.

5 ways to leverage Big Data

Data driven decision making: higher returns

Expert Talk: John Hagel on change and learning fast on the edges of an ecosystem

“Our belief is that the individual becomes really the catalyst for change; that we as individuals will see the power of pull and will start to use it in our own lives, personal and professional. And then, as a result of our learning and actions, we’re going to become catalysts for change within the institutions”

Today in our video section: author and consultant John Hagel, who specializes in the intersection of business strategy and information technology. In 2007, Hagel, along with John Seely Brown and Lang Davison, founded the Center for the Edge.

Making sense of change in an uncertain world

Strategic focus on two horizons
