Showing posts with label Technological Unemployment. Show all posts

Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Saturday, February 10, 2018

Wisdom Of The Week

I wonder how many of the people making predictions about the future of truck drivers have ever ridden with one to see what they do?

One of the big failings of high-level analyses of future trends is that in general they either ignore or seriously underestimate the complexity of the job at a detailed level. Lots of jobs look simple or rote from a think tank or government office, but turn out to be quite complex when you dive into the details.


For example, truck drivers don’t just drive trucks. They also secure loads, including determining what to load first and last and how to tie it all down securely. They act as agents for the trucking company. They verify that what they are picking up is what is on the manifest. They are the early warning system for vehicle maintenance. They deal with the government and others at weighing stations. When sleeping in the cab, they act as security for the load. If the vehicle breaks down, they set up road flares and contact authorities. If the vehicle doesn’t handle correctly, the driver has to stop and analyze what’s wrong – blown tire, shifting load, whatever.

In addition, many truckers are sole proprietors who own their own trucks. This means they also do all the bookwork, preventative maintenance, taxes, etc. These people have local knowledge that is not easily transferable. They know the quirks of the routes, they have relationships with customers, they learn how best to navigate through certain areas, they understand how to optimize by splitting loads or arranging for return loads at their destination, etc. They also learn which customers pay promptly, which ones provide their loads in a way that’s easy to get on the truck, which ones generally have their paperwork in order, etc. Loading docks are not all equal. Some are very ad-hoc and require serious judgement to be able to manoeuvre large trucks around them. Never underestimate the importance of local knowledge.

I’ve been working in automation for 20 years. When you see how hard it is to simply digitize a paper process inside a single plant (often a multi-year project), you start to roll your eyes at ivory tower claims of entire industries being totally transformed by automation in a few years. One thing I’ve learned is a fundamentally Hayekian insight: When it comes to large scale activities, nothing about change is easy, and top-down change generally fails. Just figuring out the requirements for computerizing a job is a laborious process full of potential errors. Many automation projects fail because the people at the high levels who plan them simply do not understand the needs of the people who have to live with the results.

Take factory automation. This is the simplest environment to automate, because factories are local, closed environments that can be modified to make things simpler. A lot of the activities that go on in a factory are extremely well defined and repetitive. Factory robots are readily available that can be trained to do just about anything a person can physically do. And yet, many factories have not automated simply because there are little details about how they work that are hard to define and automate, or because they aren’t organized enough in terms of information flow, paperwork, processes, etc. It can take a team of engineers many man-years just to figure out exactly what a factory needs to do to make itself ready to be automated. Often that requires changes to the physical plant, digitization of manual processes, statistical analysis of variance in output to determine where the process is not being defined correctly, etc.

A lot of pundits have a sense that automation is accelerating in replacing jobs. In fact, I predict it will slow down, because we have been picking the low hanging fruit first. That has given us an unrealistic idea of how hard it is to fully automate a job.


Will truckers be automated? via MR comments

Friday, April 7, 2017

A.I. VERSUS M.D. - Siddhartha Mukherjee

“A deep-learning system doesn’t have any explanatory power,” as Hinton put it flatly. A black box cannot investigate cause. Indeed, he said, “the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.” The algorithm can solve a case. It cannot build a case.

Yet in my own field, oncology, I couldn’t help noticing how often advances were made by skilled practitioners who were also curious and penetrating researchers. Indeed, for the past few decades, ambitious doctors have strived to be at once baseball players and physicists: they’ve tried to use diagnostic acumen to understand the pathophysiology of disease. Why does an asymmetrical border of a skin lesion predict a melanoma? Why do some melanomas regress spontaneously, and why do patches of white skin appear in some of these cases? As it happens, this observation, made by diagnosticians in the clinic, was eventually linked to the creation of some of the most potent immunological medicines used clinically today. (The whitening skin, it turned out, was the result of an immune reaction that was also turning against the melanoma.) The chain of discovery can begin in the clinic. If more and more clinical practice were relegated to increasingly opaque learning machines, if the daily, spontaneous intimacy between implicit and explicit forms of knowledge—knowing how, knowing that, knowing why—began to fade, is it possible that we’d get better at doing what we do but less able to reconceive what we ought to be doing, to think outside the algorithmic black box?

I spoke to David Bickers, the chair of dermatology at Columbia, about our automated future. “Believe me, I’ve tried to understand all the ramifications of Thrun’s paper,” he said. “I don’t understand the math behind it, but I do know that such algorithms might change the practice of dermatology. Will dermatologists be out of jobs? I don’t think so, but I think we have to think hard about how to integrate these programs into our practice. How will we pay for them? What are the legal liabilities if the machine makes the wrong prediction? And will it diminish our practice, or our self-image as diagnosticians, to rely on such algorithms? Instead of doctors, will we end up training a generation of technicians?”

He checked the time. A patient was waiting to see him, and he got up to leave. “I’ve spent my life as a diagnostician and a scientist,” he said. “I know how much a patient relies on my capacity to tell a malignant lesion from a benign one. I also know that medical knowledge emerges from diagnosis.”

The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together.


- More Here

Saturday, March 18, 2017

Wisdom Of The Week

Monsanto isn’t evil. It’s run by a boring old bald guy named Hugh Grant, for Christ’s sake. Hugh Grant is not trying to starve or enslave the world. But, intentionally or not, he and the rest of biotech are making it easier for us to give up our food sovereignty in a broader environment where doing so seems to be the easiest option.

We’re all so “busy.” We have to feed 9 billion people. We’re running out of land and water. The climate is changing. The world demands cheap meat. We want quick solutions to these problems, within our lifetimes, with minimal impact on our lifestyles. We suck. We want technology to save us from ourselves. Maybe it’s this country’s founding Christian ethos: someone paid for our sins before; won’t someone do it again? Sorry, Hugh Grant ain’t Jesus.

Here’s more news: engineered food isn’t going anywhere. Not only because it’s profitable, but because it’s promising. Cultured meat really could be part of the solution to feeding valuable protein to the developing world while reducing herd sizes in the interest of the environment.

Hydroponics/aquaponics could be a clutch player in urban agriculture, shortening supply chains and helping make Local a pervasive concept. GMOs do have some environmental benefits that warrant exploring even by dyed-in-the-wool permaculturalists.

The answer here is not fighting engineering and innovation under the misguided notion that these things can (or should) be stopped. The answer is in refusing to surrender time-honored growing methods to the relentless march of technology — and that’s not nearly as exciting as it sounds. It’s not picketing, protesting, and writing witty essays about the evils of biotech to the adulation of the echo chamber. The answer is being for, not against, something. And it’s in the decisions each of us has control over.

It’s the decision to plant gardens; open farms and homesteads; save, share and sell seeds; raise and breed a little livestock; learn to can, salt, smoke, and butcher. It’s in the decision to travel less and plant more. To patronize your nearby farmers even if it’s inconvenient, and find ways to make it less inconvenient. To say no to cheap and processed food whenever, wherever, and if ever your budget allows. To reorient your social capital around how many plants you’ve grown, how much soil you’ve built, how many seeds you’ve saved, and how many people you’ve fed — instead of where you’ve traveled, what your job title is, who you’ve met, and how jelly everyone is of your IG feed.

Recognize the miracle that nature is, and exercise your birthright to participate in that miracle. Breathe life into it by putting your hands in the ground as often as you can. Leave Monsanto alone and lead by example. It’s just that easy, and it’s just that hard.

- More Here

Sunday, October 9, 2016

Does Trump's Rise Mean Liberalism's End? - Yuval Noah Harari

But history has not come to an end, and following the Franz Ferdinand moment, the Hitler moment, and the Che Guevara moment we now find ourselves in the Trump moment. This time, however, the Liberal Story is not faced by a coherent ideological opponent like imperialism, fascism, or Communism. The Trump moment is a nihilistic burlesque. Donald Trump has no ideology to speak of, just as the British Brexiteers have no real plan for the future of the Disunited Kingdom.

On the one hand, this may imply that the present crisis of faith is less severe than its predecessors. At the end of the day, people won’t abandon the Liberal Story, because they don’t have any alternative. They may give the system an angry kick but, having nowhere else to go, they will eventually come back.

Alternatively, people may look further back and seek shelter with other stories, traditional nationalist and religious tales that were pushed to the side in the twentieth century but never completely abandoned. This is arguably what has happened in places like the Middle East, where nationalist extremism and religious fundamentalism are on the rise. However, for all their sound and fury, movements such as the Islamic State don’t offer any serious alternative to the Liberal Story, because they don’t have any answers to the big questions of our era.

What will happen to the job market once artificial intelligence outperforms humans in most cognitive tasks? What will be the political impact of an enormous new class of economically useless people? What will happen to relationships, families, and pension funds when nanotechnology and regenerative medicine turn eighty into the new fifty? What will happen to human society when biotechnology enables us to have designer babies, and to open even larger gaps between the rich and poor? You are unlikely to find the answers to any of these questions in the Bible or the Koran. Radical Islam, Orthodox Judaism, or fundamentalist Christianity may promise an anchor of certainty in a world of technological and economic storms, but in order to navigate the coming twenty-first-century tsunami, you will need a good map and a strong rudder, as well.

The same is true for slogans such as “Make America Great Again” or “Give Us Back Our Country.” You can build a wall against Mexican immigrants but not against global warming; you can cut Westminster from Brussels but you cannot cut the City of London from global financial currents. If people cling in desperation to outdated national and religious identities, the global system may simply collapse in the face of climate change, economic crisis, and technological disruption that nineteenth-century nationalist myths and medieval piety can neither fathom nor solve.

Mainstream élites therefore look in horror at events such as Brexit and the rise of Trump, and hope that the masses will come to their senses and return to the fold of the Liberal Story in time to avert disaster. But it might be much harder for the Liberal Story to survive the current crisis of confidence, because the traditional alliance between liberal ethics and capitalist economics that has long underpinned the Liberal Story may be unravelling. During the twentieth century, the Liberal Story was immensely attractive because it told people and governments that they don’t have to choose between doing the right thing and doing the smart thing; protecting human liberties was both a moral imperative and the key to economic growth. Britain, France, and the United States allegedly prospered because they liberalized their economies and societies, and if Turkey, Brazil, or China wanted to become equally prosperous they had to do the same. In most cases, it was the economic rather than the moral argument that convinced tyrants and juntas to liberalize.

In the twenty-first century, however, the Liberal Story has no good answers to the biggest challenges we face: global warming and technological disruption. As the masses lose their economic importance to algorithms and robots, protecting human liberties may remain morally justified—but will the moral arguments alone be enough? Will élites and governments go on valuing the liberties and wishes of every human being even when it pays no economic dividends to do so? The masses are right to fear for their future. Even if Donald Trump loses the coming election, millions of Americans have a gut feeling that the system no longer works for them, and they are probably correct.


- More Here

Tuesday, June 7, 2016

Steve Jobs Explains Why AI Won’t Take Away Jobs


Jobs talks about the evolution of the personal computer. At the beginning of the personal computing revolution, few people had the programming skills to use a computer. In fact, experts needed to set up computers for end users. Programs were basic, with user interfaces that would seem anything but user-friendly by today’s standards. He saw all of this as a “barrier” to overcome, for both people and the industry, because this kept the power of computing in the hands of a few. Remember, this video took place back in 1980.

“Right now if you buy a computer system and you want to solve one of your problems, we [the computer industry] immediately throw a big problem right in the middle of you and your problem, which is learning how to use the computer. Right? Substantial problem to overcome. Once you overcome that, it’s a phenomenal tool. But there is a barrier of having to overcome that problem.”

Today, drawing insight from data analysis faces the same barrier. The knowledge and tools to draw insight from data are limited to data scientists and others in similar roles. As data volumes continue to grow, so does the need for data-supported decisions, and relying on a few people to analyze data can be detrimental to a business. More and more companies are looking to adopt tools that incorporate the company’s best business practices while enabling non-technical users to analyze complex data sets instantaneously.

Steve Jobs’s vision was that anyone should be able to set up and use a PC, and I believe the future of AI tools is the same. Systems will be self-service, easy to use, and will make complex data accessible to anyone in real time. Jobs said that “something special happens with one computer and one user.” He was referring to the way software can amplify human ability. Again, although the speech was given in 1980, Jobs could easily have been speaking about today, with the growth of smart machines that can dialogue, reason, and explain, all with the goal of boosting human performance.


- More Here

Tuesday, February 23, 2016

The Limits Of The Digital Revolution

Yes, I think what pretty much everybody agrees on is that we simply don’t know what is going to happen, that we have different versions of educated guesses.

I think, actually, where I disagree is that there’s a lot of arrogance of prediction – a lot of arrogant prediction that we won’t solve this problem, that we will not think of things for people to do. That’s certainly what you get from Martin Ford’s book The Rise of the Robots. That’s what you get from Frey and Osborne about the 47% of jobs that will be displaced. It’s basically a very bold prediction on the failure of human ingenuity and creativity to think of new things for people to do, and I would never make such a bet against humanity.

On the other hand, it’s also inaccurate to say that “It’s never been a problem before; therefore, it won’t be a problem this time.” Technological change has always been disruptive, it’s always created winners and losers, and this time could be worse or better than other times.

Let me say I’m much less pessimistic than many. I’ll tell you one reason, actually, that I think is underestimated or underemphasised in this discussion is that the rate of change matters as well. It’s not just where we’re going, it’s how fast we get there, because we can only adapt so rapidly.

If we knew, if we read today in The Guardian or The Wall Street Journal that 15 years from now no-one will be driving vehicles anymore because they’ll all be done by machinery, you’d say, “That’s good, but it’s going to create some challenges. We’d better stop training people to be lorry drivers and get them ready for other occupations,” but we could deal with that.

If it was announced that coming next Monday no-one will be driving vehicles, that would be a much bigger problem – not that it wouldn’t have the same economic benefits of safer, cheaper transportation, but we would have a lot of displaced workers to contend with.

It matters how fast things are changing, not just where they’re eventually going to go, and the evidence is not strong that they are changing extremely rapidly, in fact. The productivity statistics don’t show it, the investment statistics don’t show it. I think there’s a lot of enthusiasm and certainly there is no question that progress is occurring, but the sort of singularity thinking that we’re approaching this singularity – you can see it where just the rate of change is accelerating; it’s all Moore’s Law and stuff – that’s just not serious. There’s no serious data that support it.

[---]

Because there are ethical, legal, power obstacles of actually implementing this.

Also, you have to distinguish between qualitative and quantitative change. My computer can run Microsoft Word 1,000 times faster than my computer could 20 years ago, but it doesn’t make it 1,000 times more productive; maybe it’s 20% more productive. The point is there’s this false equivalence drawn between computing processor cycles and productivity or output, and it’s really diminishing marginal returns.

To give you an example of this, I was at a conference and an executive from McKinsey got up and said, “Your washing machine today has more processing power than the entire Apollo moon project.” He meant this to demonstrate the great rate of change and the fantastic progress, and to me that just said, “Diminishing marginal returns.” My washing machine is not going to the moon.
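The diminishing-returns point above can be sketched numerically. This is my own toy model, not anything from the interview: it simply assumes productivity grows roughly with the log of processing power, which reproduces the "1,000 times the cycles, nowhere near 1,000 times the output" intuition.

```python
import math

def productivity(compute: float) -> float:
    """A made-up concave mapping from compute to useful output.

    The logarithmic form is purely illustrative of diminishing
    marginal returns; it is not an empirical estimate.
    """
    return math.log10(compute) + 1

old, new = 1.0, 1000.0  # compute, in arbitrary units
gain = productivity(new) / productivity(old)
print(f"{new / old:.0f}x the compute -> {gain:.0f}x the productivity")
```

Under this assumed curve, a thousandfold jump in compute yields only a fourfold jump in output; the exact numbers are arbitrary, but the shape is the point being made.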


- More Here

Tuesday, June 2, 2015

What I've Been Reading

Alone among other animals, humans seek meaning in their lives by killing and dying for the sake of nonsense. Chief among these absurdities, in modern times, is the idea of a new humanity. 
The Soul of the Marionette: A Short Inquiry into Human Freedom by John Gray. This time around, Gray has unleashed his pessimism on AI and robotics. Being a perpetual student myself, I find that such a high dose of skepticism is the highest level of education one can attain. Thank you, John.

A degree of privacy may survive as a luxury good. Encrypting part of their lives, the rich may contrive for themselves a freedom that many people possessed without such effort in the past. For the rest, loss of privacy is the price of individualism. Anyone can achieve momentary fame, but for nearly everyone today fifteen minutes of anonymity has become an impossible dream. 

Accepting the fact that unknowing makes possible an inner freedom very different from that pursued by Gnostics. If you have this negative capability, you will not want a higher form of consciousness; your ordinary mind will give you all you need. Rather than trying to impose sense on your life, you will be content to let meaning come and go. Instead of becoming an unflattering puppet, you will make your way in the stumbling human world. Uber-marionettes do not have to wait until they can fly before they can be free. Not looking to ascend into the heavens, they can find freedom in falling to earth.


Thursday, May 7, 2015

The First News Report on the L.A. Earthquake Was Written by a Robot

Robo-journalism is often hyped as a threat to journalists’ jobs. Schwencke doesn’t see it that way.  “The way we use it, it’s supplemental. It saves people a lot of time, and for certain types of stories, it gets the information out there in usually about as good a way as anybody else would. The way I see it is, it doesn’t eliminate anybody’s job as much as it makes everybody’s job more interesting.”

Having spent some years as a local news reporter, I can attest that slapping together brief, factual accounts of things like homicides, earthquakes, and fires is essentially a game of Mad Libs that might as well be done by a machine. If nothing else, a bot seems likely to save beleaguered scribes from scouring the thesaurus for synonyms for “blaze.” (Lacking an ego, Quakebot does not concern itself with elegant variation.) And in the case of earthquakes, an algorithm may actually be better at judging the newsworthiness of a particular small quake than your average gumshoe reporter or editor. Quakebot knows, for instance, that a magnitude less than 3.0 means it’s probably not worth freaking out about, a lesson that over-eager wire reporters don't always grasp.

At the same time, Quakebot neatly illustrates the present limitations of automated journalism. It can’t assess the damage on the ground, can’t interview experts, and can’t discern the relative newsworthiness of various aspects of the story. Schwencke notes that it sometimes generates a report based on a false alert or glitch in the USGS system. (Like many of its human counterparts, Quakebot doesn’t double-check its facts before publishing.)
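The filtering rule described above can be sketched in a few lines. To be clear, Quakebot's actual code and data format are not shown in the article; the 3.0-magnitude cutoff and the false-alert problem are the only details taken from it, and the function and field names below are invented for illustration.

```python
def is_newsworthy(magnitude: float, confirmed: bool) -> bool:
    """Publish only confirmed quakes at or above magnitude 3.0.

    The confirmation check is hypothetical: the article notes that
    Quakebot sometimes publishes on false USGS alerts, i.e. it lacks
    exactly this kind of guard.
    """
    if not confirmed:
        return False
    return magnitude >= 3.0

def draft_report(magnitude: float, place: str) -> str:
    """Fill a Mad Libs-style template with the alert's facts."""
    return (f"A magnitude {magnitude:.1f} earthquake was reported "
            f"near {place}, according to the USGS.")

if is_newsworthy(3.4, confirmed=True):
    print(draft_report(3.4, "Westwood, California"))
```

The interesting part is how little judgment lives in the code: everything editorial is a threshold and a template, which is exactly why this kind of story lends itself to automation while damage assessment and interviews do not.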


- More Here

Tuesday, December 16, 2014

A Century-Long Study of the Effects of Artificial Intelligence on Society

Scientists have begun what they say will be a century-long study of the effects of artificial intelligence on society, including on the economy, war and crime, officials at Stanford University announced Monday.

The project, hosted by the university, is unusual not just because of its duration but because it seeks to track the effects of these technologies as they reshape the roles played by human beings in a broad range of endeavors.

“My take is that A.I. is taking over,” said Sebastian Thrun, a well-known roboticist who led the development of Google’s self-driving car. “A few humans might still be ‘in charge,’ but less and less so.”

[---]

Dr. Horvitz will lead a committee with Russ Altman, a Stanford professor of bioengineering and computer science. The committee will include Barbara J. Grosz, a Harvard University computer scientist; Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley; Yoav Shoham, a professor of computer science at Stanford; Tom Mitchell, the chairman of the machine learning department at Carnegie Mellon University; and Alan Mackworth, a professor of computer science at the University of British Columbia.

The committee will choose a panel of specialists who will produce a report on artificial intelligence and its effects that is to be published late in 2015. In a white paper outlining the project, Dr. Horvitz described 18 areas that might be considered, including law, ethics, the economy, war and crime. Future reports will be produced at regular intervals.


- More Here


Saturday, October 25, 2014

Wisdom Of The Week

David Autor: So, I think it's easy to see the many ways in which machines substitute for the things we used to do. And then what's harder to see, typically, is: how is that complementing us? But of course you sit back and say, could you and I actually be having this conversation, could you and I have a podcast? Could I actually do significant[?] research as an economist without all this sort of hardware increasing my output per hour? The answer is: Not very well. So, on the one hand is the: Are you directly complemented, versus substituted? The second factor that affects how that automation affects your earnings in a given activity is, sort of, the elasticity of final demand--so, in other words, if we get really productive at something but there's a fixed amount of it that people want, then eventually they just buy less and less of it. So you see, for example in agriculture, the vast increases in productivity in agriculture stemming from the green revolution and so on have eventually reduced employment dramatically in agriculture. And the reason is that, all evidence to the contrary, there seems to be a finite amount that we can eat.

Russ: It's a great example. So, food is incredibly cheap, which is a glorious thing; but it doesn't lead, therefore, to: Oh, there will be more farmers. There are fewer.

David Autor on the Future of Work and Polanyi's Paradox (Econtalk)

Saturday, September 20, 2014

Inside the Wolfram Language

Simply Brilliant !!

Wolfram Language = Programming in English with "in-built" big data (watch from 36 to 40 minutes).



Saturday, September 6, 2014

Wisdom Of The Week

The most important skill of the future will be the ability to learn and adapt. You need to be resourceful, keep your eyes open for advances coming out of nowhere, and embrace the new opportunities as they emerge. You need to be able to collaborate with others and build relationships. You need to be able to share ideas, inspire, and motivate.

Whatever you do, don’t take a mindless, meaningless job with a big company just because they offer you a big salary. Try to be somewhere where you can constantly redefine yourself and keep learning. That is what it is going to be about: constant learning and reinvention.

The future is going to be what we make it. It can be the Star Trek utopia or a Mad Max wreck, a creative playground or an Orwellian nightmare. That is why we need people with good values and ethics leading the way. We need people who care about enriching humanity rather than just themselves. We need people who can lead by example and bring along those behind them; who give back to the world and make it a better place. I really hope you will be amongst those who lead the charge, who watch out for the interests of humanity, who build the utopia.


- It’s a beautiful time to be alive and educated, commencement address by Vivek Wadhwa at Hult International Business School Friday

Wednesday, September 3, 2014

Millions of Children in England Will Begin a "Tough" New National Curriculum When They Return to School This Week

The new-look curriculum puts a stronger emphasis on skills such as "essay writing, problem-solving, mathematical modelling and computer programming".
  • The history curriculum takes primary pupils through British history from the Stone Age to the Normans. They can also study a later era, such as the Victorians. "Significant individuals" to be studied include Elizabeth I, Neil Armstrong, Rosa Parks and suffragette Emily Davison. Secondary schools will teach British history from 1066 to 1901, followed by Britain, Europe and world events from 1901, including the Holocaust and Winston Churchill
  • Maths will expect more at an earlier age. There will be a requirement for pupils to learn their 12 times table by the age of nine. Basic fractions, such as half or a quarter, will be taught to five-year-olds. By the end of Year 2, pupils should know the number bonds to 20 and be precise in using and understanding place value
  • English will strengthen the importance of Shakespeare, with pupils between the ages of 11 and 14 expected to have studied two of his plays. Word lists for eight- and nine-year-olds include "medicine" and "knowledge", by 10 and 11 they should be spelling "accommodate" and "rhythm"
  • Science will shift towards a stronger sense of hard facts and "scientific knowledge". In primary school, there will be new content on the solar system, speed and evolution. In secondary school, there will be a clearer sense of the separate subjects of physics, biology and chemistry. Climate change will also be included
  • Design and technology is linked to innovation and digital industries. Pupils will learn about 3D printing and robotics
  • Computing will teach pupils how to write code. Pupils aged five to seven will be expected to "understand what algorithms are" and to "create and debug simple programs". By the age of 11, pupils will have to "design, use and evaluate computational abstractions that model the state and behaviour of real-world problems and physical systems"
- More Here
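That computing bullet is concrete enough to sketch. Here is my own illustrative example (not from the curriculum documents) of the kind of "simple program" a primary pupil might create and debug, built around the 12 times table the maths curriculum also requires:

```python
# A "simple program" of the sort the new computing curriculum describes:
# print the 12 times table, which the maths curriculum expects pupils to
# know by age nine. A classic beginner bug is writing range(1, 12), which
# silently stops at 11 because range's end point is exclusive; the fix,
# range(1, upto + 1), is exactly the kind of debugging the curriculum means.
def times_table(n, upto=12):
    """Return the multiplication table for n as a list of products."""
    return [n * i for i in range(1, upto + 1)]

for i, product in enumerate(times_table(12), start=1):
    print(f"12 x {i} = {product}")
```

Spotting that sort of off-by-one is what "create and debug simple programs" looks like in practice for a five-to-seven-year-old's first algorithms.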

Sunday, August 31, 2014

Inside Google's Secret Drone-Delivery Program

Google X began to come up with ideas and test them theoretically and experimentally. They considered many different wild options, sketching out new and wacky transportation systems. (“What if you took a glider up on a balloon with a super long string and the glider goes up, releases, and zooms down… You can—on paper—satisfy yourself that’s not the right solution.”) But eventually, Teller realized they needed an expert. They did a search and ended up pulling Roy across the country.

Roy was perhaps a less-than-obvious choice. For one, he’d never worked on drones flying outside. The challenges of the wind were new to him. Roy neither had a traditional aeronautics background nor had he dealt in logistics. Look back on his resume from the early 2000s, as he prepared to finish his PhD at Carnegie Mellon: There are almost no signs that he’d be the guy Google X would one day tap for a drone project. His most prominent work had been on tour guide and nursing robots. But that leaves out one very important detail: Roy's thesis advisor was Sebastian Thrun, the founder of Google X, and one of the most influential people in robotics. In the years before his tour at Google, Roy did important work with the support of the Office of Naval Research on indoor drone navigation in "GPS-denied" environments, where the vehicles can't rely on satellites to position themselves.

When Roy arrived in California, Project Wing’s initial focus was on delivering defibrillators to help people who have had heart attacks. The key factor in the success of using a defibrillator is how quickly it is deployed, so saving a few minutes of transit time could make for a lifesaving application. But as time went on, the Google team realized that tying into the 911 system and other practical exigencies eliminated the speed advantage they thought they could deliver.

So, now, Teller’s—and, by extension, I will assume Brin’s—big-picture vision has shifted to the ways ubiquitous, two-minute delivery can transform people’s relationship to stuff.


- More Here

Wednesday, August 13, 2014

The Happy Demise of the 10X Engineer

How long before we have a billion-dollar acquisition offer for a one-engineer startup?

The way to describe this software coding continuum might be pre-foundation – where in its most extreme form, every piece of software started in Assembly — and post-foundation – where software is like Legos, just snap the pieces together.

Pre-foundation, even the simplest tasks took a tremendous amount of knowledge and labor, because you had to build up from the bottom. For a website this might have meant (going up the stack) a server OS you patched and managed yourself, running your homegrown or hand-tuned web server, caching system, database, account management system, rendering engine and front-end libraries, with your own hand built analytics platform, build process and bug reporting tool. If that sounds like a lot of stuff to manage, that’s because it was.

Post-foundation, one need only focus on the user-facing function at hand, working with only one level of abstraction. It will one day be laughable that building Facebook required tuning web server software, let alone building entire data centers. The other layers of the stack will be abstracted away entirely and writing software will continue to look more like assembling a collection of Github-hosted libraries and APIs.
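To make that concrete, here is a minimal sketch of the post-foundation point (my example, not from the article): Python's standard library already supplies the server loop, socket handling and HTTP parsing that a pre-foundation team would have written and tuned by hand, so a working JSON endpoint takes about a dozen lines.

```python
# A "post-foundation" web service: the stack below this file -- sockets,
# the accept loop, HTTP request parsing -- is all borrowed, not built.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as r:
    payload = json.loads(r.read())
server.shutdown()
```

The pre-foundation version of this, written against raw sockets with a hand-rolled HTTP parser, would run to hundreds of lines before it served its first request.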

[...]

But we are getting there, and software is eating software development. The foundation of open source based software platforms, infrastructure, knowledge and best practices continues to grow. I bet Stackoverflow alone has increased programming productivity by a few percentage points. Now add fifteen years of free or inexpensive developer tools (Github, too many IDEs to list), automated infrastructure (Mesosphere, AWS, Google App Engine, Heroku, DigitalOcean and more), databases (MySQL, MongoDB, PostgreSQL, Firebase and more), high level languages (Python, Ruby, PHP and more) and frameworks (Meteor, Angular, Django, Rails, Bootstrap and more): the faucets, pipes and water pumps of programming. All rooted in open source and all removing levels of detail that a creator making software for users shouldn’t have to care about.
 

Software engineering is not yet plumbing — or Legos — because our standards are incomplete, our libraries incompatible, scaling is still not free and our software still buggy.

Now, what does this mean for the 10x engineer? The “10x engineer” is still needed to build the foundation — building AWS or Mesos remains very difficult. But as we build out the common foundation, the skill and experience an individual needs to accomplish a task on top of the platform decreases.

- More Here

Wednesday, April 30, 2014

What I've Been Reading

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee. If you have missed all the recent advancements in AI, this book will help bring you up to date with "reality". Personally, I didn't learn anything new from the book, but it's well written, and the later chapters especially are insightful.

Technology is a gift of God. After the gift of life it is perhaps the greatest of God’s gifts. It is the mother of civilizations, of arts and of sciences.

- Freeman Dyson

Three sets of winners in the second machine age:
The first two sets of winners are those who have accumulated significant quantities of the right capital assets. These can be either nonhuman capital (such as equipment, structures, intellectual property, or financial assets), or human capital (such as training, education, experience, and skills). Like other forms of capital, human capital is an asset that can generate a stream of income. A well-trained plumber can earn more each year than an unskilled worker, even if they both work the same number of hours. The third group of winners is made up of the superstars among us who have special talents— or luck.

On GDP:
Americans nearly doubled the amount of leisure time they spent on the Internet between 2000 and 2011. This implies that they valued it more than the other ways they could spend their time. By considering the value of users’ time and comparing leisure time spent on the Internet to time spent in other ways, Erik and Joo Hee estimated that the Internet created about $2,600 of value per user each year. None of this showed up in the GDP statistics but if it had, GDP growth— and thus productivity growth— would have been about 0.3 percent higher each year. In other words, instead of the reported 1.2 percent productivity growth for 2012, it would have been 1.5 percent.

As Paul Samuelson and Bill Nordhaus put it, “While the GDP and the rest of the national income accounts may seem to be arcane concepts, they are truly among the great inventions of the twentieth century.” But the rise in digital business innovation means we need innovation in our economic metrics. If we are looking at the wrong gauges, we will make the wrong decisions and get the wrong outputs. If we measure only tangibles, then we won’t catch the intangibles that will make us better off. If we don’t measure pollution and innovation, then we will get too much pollution and not enough innovation. Not everything that counts can be counted, and not everything that can be counted, counts.

As more data become available and as the economy continues to change, the ability to ask the right questions will become even more vital. No matter how bright the light is, you won’t find your keys by searching under a lamppost if that’s not where you lost them. We must think hard about what it is we really value, what we want more of, and what we want less of. GDP and productivity growth are important, but they are means to an end, not ends in and of themselves. Do we want to increase consumer surplus? Then lower prices or more leisure might be signs of progress, even if they result in a lower GDP. And, of course, many of our goals are nonmonetary. We shouldn’t ignore the economic metrics, but neither should we let them crowd out our other values simply because they are more measurable.


On Technological Unemployment:
The argument that technology cannot create ongoing structural unemployment, rather than just temporary spells of joblessness during recessions, rests on two pillars: 1) economic theory and 2) two hundred years of historical evidence. But both of these are less solid than they first appear.

There is no ‘iron law’ that technological progress must always be accompanied by broad job creation.

In the long run, low wages will be no match for Moore’s Law. Trying to fend off advances in technology by cutting wages is only a temporary protection. It is no more sustainable than asking folk legend John Henry to lift weights to better compete with a steam-powered hammer.

The consequences of high neighborhood joblessness are more devastating than those of high neighborhood poverty. A neighborhood in which people are poor but employed is different from a neighborhood in which many people are poor and jobless. Many of today’s problems in the inner-city ghetto neighborhoods— crime, family dissolution, welfare, low levels of social organization, and so on— are fundamentally a consequence of the disappearance of work.

- William Julius Wilson summarized a long career’s worth of findings in his 1996 book When Work Disappears: The World of the New Urban Poor

Monday, January 27, 2014

The Future of Employment - How Susceptible are Jobs to Computerisation?

Abstract

We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupation’s probability of computerisation, wages and educational attainment. According to our estimates, about 47 percent of total US employment is at risk. We further provide evidence that wages and educational attainment exhibit a strong negative relationship with an occupation’s probability of computerisation.
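The paper's headline number comes from a Gaussian process classifier over occupation-level features. As a loose illustration only (toy data, made-up features, and GP regression on 0/1 labels as a simplified surrogate, not the authors' actual model), here is how such smooth probability estimates arise from an RBF kernel:

```python
# Simplified surrogate for the paper's method: GP regression on binary
# "computerisable" labels, with predictions clipped to [0, 1] as a
# probability. Toy features are (normalised wage, normalised education);
# none of the data below comes from the paper.
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def fit_predict(X_train, y_train, X_test, noise=1e-2):
    """GP regression mean on 0/1 labels, clipped to [0, 1]."""
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + noise * np.eye(len(X_train)), y_train)
    mean = rbf_kernel(X_test, X_train) @ alpha
    return np.clip(mean, 0.0, 1.0)

# Low-wage/low-education occupations labelled 1 (at risk), high/high 0.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array([1.0, 1.0, 0.0, 0.0])

# Probabilities for a low-wage and a high-wage hypothetical occupation.
probs = fit_predict(X, y, np.array([[0.15, 0.15], [0.85, 0.85]]))
```

Even this stripped-down version reproduces the qualitative finding in the abstract: the estimated probability of computerisation falls as wages and educational attainment rise.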

- Full paper here