Tinkering with algorithms

I read Franklin Foer’s “Facebook’s War on Free Will”, the Guardian’s “Long read” for Tuesday 19 September 2017.

He recapped a familiar argument: you are Facebook’s product. But when he hit “data science” I turned up my sensors. He says, “There’s a whole discipline, data science, to guide the writing and revision of algorithms”. Then he picks up on Cameron Marlow, “the former head of Facebook’s data science team”:

Facebook has a team, poached from academia, to conduct experiments on users. It’s a statistician’s sexiest dream – some of the largest data sets in human history, the ability to run trials on mathematically meaningful cohorts. … Marlow said, “we have a microscope that not only lets us examine social behaviour at a very fine level that we’ve never been able to see before, but allows us to run experiments that millions of users are exposed to.”

The point the experimentalists miss is that the experiment is already directed towards particular outcomes. The ethics are, at the very least, sensitive.

There may be five billion data subjects times 100,000 data points per subject, or whatever the figure is. Yet that is still a fundamentally different proposition from addressing the human condition. It is a redaction of life and of humanity: big, but still bounded. A bounded data space with specific outcome measures (shareholder return), no matter how large, is outstripped by the possibilities of human thought and emotion. Anyone using the phrase “approaching infinity” should be thrown out of the academy. He makes an assertion that misses the point:

Algorithms have retired many of the bureaucratic, clerical duties once performed by humans – and they will soon begin to replace more creative tasks.

The fear is that (horrors!) our role (as Guardian readers?) will be replaced. Teachers worry about their jobs (justifiably). If the only outcome measure of algorithmic success is, in the end, shareholder value measured in money, can we say Facebook’s profit correlates with increased “creativity”? Or with any other quantum of human flourishing?

I suggest we won’t fix the relations evidenced by inequality of income, or any other outcome, by refining databases or getting more clever with data. Which is not an excuse for not using data better.

I am happy for much of what I do to be replaced by machines, because much of what I would rather do is communicate with infinite human possibility, not with algorithms. I may be deluded, of course. But I do think that is what makes us human. That is what learning is for: not to be read from and written to as if we were a big hard drive.

But there is still a problem in the present: in the world we inhabit.

There are good people who suggest that tinkering with the algorithms can produce a beneficially directed, utilitarian outcome. There were, and are, those who feel the greatest good for the greatest number is the best we can hope for. I have done that. Does anyone who works in education not engage with utility? Should we not tweak the algorithms? That is the 1,000-mile question.
