Monday, March 3, 2014

The Real Privacy Problem

As Web companies and government agencies analyze ever more information about our lives, it’s tempting to respond by passing new privacy laws or creating mechanisms that pay us for our data. Instead, we need a civic solution, because democracy is at risk.

 

Why It Matters

Most proposals for enhancing our privacy treat it as an end in itself. Instead we need to be talking about how to best stimulate democracy—a balancing act that laws or market mechanisms can’t achieve alone.

In 1967, The Public Interest, then a leading venue for highbrow policy debate, published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching. Titled “The Future Computer Utility,” the essay speculated that someday a few big, centralized computers would provide “information processing … the same way one now buys electricity.”

Our home computer console will be used to send and receive messages—like telegrams. We could check to see whether the local department store has the advertised sports shirt in stock in the desired color and size. We could ask when delivery would be guaranteed, if we ordered. The information would be up-to-the-minute and accurate. We could pay our bills and compute our taxes via the console. We would ask questions and receive answers from “information banks”—automated versions of today’s libraries. We would obtain up-to-the-minute listing of all television and radio programs … The computer could, itself, send a message to remind us of an impending anniversary and save us from the disastrous consequences of forgetfulness.

It took decades for cloud computing to fulfill Baran’s vision. But he was prescient enough to worry that utility computing would need its own regulatory model. Here was an employee of the RAND Corporation—hardly a redoubt of Marxist thought—fretting about the concentration of market power in the hands of large computer utilities and demanding state intervention. Baran also wanted policies that could “offer maximum protection to the preservation of the rights of privacy of information”:

Highly sensitive personal and important business information will be stored in many of the contemplated systems … At present, nothing more than trust—or, at best, a lack of technical sophistication—stands in the way of a would-be eavesdropper … Today we lack the mechanisms to insure adequate safeguards. Because of the difficulty in rebuilding complex systems to incorporate safeguards at a later date, it appears desirable to anticipate these problems.

Sharp, bullshit-free analysis: techno-futurism has been in decline ever since.

All the privacy solutions you hear about are on the wrong track.

To read Baran’s essay (just one of the many on utility computing published at the time) is to realize that our contemporary privacy problem is not contemporary. It’s not just a consequence of Mark Zuckerberg’s selling his soul and our profiles to the NSA. The problem was recognized early on, and little was done about it.

Almost all of Baran’s envisioned uses for “utility computing” are purely commercial. Ordering shirts, paying bills, looking for entertainment, conquering forgetfulness: this is not the Internet of “virtual communities” and “netizens.” Baran simply imagined that networked computing would allow us to do things that we already do without networked computing: shopping, entertainment, research. But also: espionage, surveillance, and voyeurism.

If Baran’s “computer revolution” doesn’t sound very revolutionary, it’s in part because he did not imagine that it would upend the foundations of capitalism and bureaucratic administration that had been in place for centuries. By the 1990s, however, many digital enthusiasts believed otherwise; they were convinced that the spread of digital networks and the rapid decline in communication costs represented a genuinely new stage in human development. For them, the surveillance triggered in the 2000s by 9/11 and the colonization of these pristine digital spaces by Google, Facebook, and big data were aberrations that could be resisted or at least reversed. If only we could now erase the decade we lost and return to the utopia of the 1980s and 1990s by passing stricter laws, giving users more control, and building better encryption tools!

A different reading of recent history would yield a different agenda for the future. The widespread feeling of emancipation through information that many people still attribute to the 1990s was probably just a prolonged hallucination. Both capitalism and bureaucratic administration easily accommodated themselves to the new digital regime; both thrive on information flows, the more automated the better. Laws, markets, or technologies won’t stymie or redirect that demand for data, as all three play a role in sustaining capitalism and bureaucratic administration in the first place. Something else is needed: politics.

Even programs that seem innocuous can undermine democracy.

First, let’s address the symptoms of our current malaise. Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.

Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.

On another front, technocrats like Cass Sunstein, the former administrator of the Office of Information and Regulatory Affairs at the White House and a leading proponent of “nanny statecraft” that nudges citizens to do certain things, hope that the collection and instant analysis of data about individuals can help solve problems like obesity, climate change, and drunk driving by steering our behavior. A new book by three British academics—Changing Behaviours: On the Rise of the Psychological State—features a long list of such schemes at work in the U.K., where the government’s nudging unit, inspired by Sunstein, has been so successful that it’s about to become a for-profit operation.

Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?

This logic of preëmption is no different from that of the NSA in its fight against terror: let’s prevent problems rather than deal with their consequences. Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.

Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.

This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.

Privacy is a means to democracy, not an end in itself.

Another warning from the past. The year was 1985, and Spiros Simitis, Germany’s leading privacy scholar and practitioner—at the time the data protection commissioner of the German state of Hesse—was addressing the University of Pennsylvania Law School. His lecture explored the very same issue that preoccupied Baran: the automation of data processing. But Simitis didn’t lose sight of the history of capitalism and democracy, so he saw technological changes in a far more ambiguous light.

He also recognized that privacy is not an end in itself. It’s a means of achieving a certain ideal of democratic politics, where citizens are trusted to be more than just self-contented suppliers of information to all-seeing and all-optimizing technocrats. “Where privacy is dismantled,” warned Simitis, “both the chance for personal assessment of the political … process and the opportunity to develop and maintain a particular style of life fade.”


Three technological trends underpinned Simitis’s analysis. First, he noted, even back then, every sphere of social interaction was mediated by information technology—he warned of “the intensive retrieval of personal data of virtually every employee, taxpayer, patient, bank customer, welfare recipient, or car driver.” As a result, privacy was no longer solely a problem of some unlucky fellow caught off-guard in an awkward situation; it had become everyone’s problem. Second, new technologies like smart cards and videotex not only were making it possible to “record and reconstruct individual activities in minute detail” but also were normalizing surveillance, weaving it into our everyday life. Third, the personal information recorded by these new technologies was allowing social institutions to enforce standards of behavior, triggering “long-term strategies of manipulation intended to mold and adjust individual conduct.”

Modern institutions certainly stood to gain from all this. Insurance companies could tailor cost-saving programs to the needs and demands of patients, hospitals, and the pharmaceutical industry. Police could use newly available databases and various “mobility profiles” to identify potential criminals and locate suspects. Welfare agencies could suddenly unearth fraudulent behavior.

But how would these technologies affect us as citizens—as subjects who participate in understanding and reforming the world around us, not just as consumers or customers who merely benefit from it?

In case after case, Simitis argued, we stood to lose. Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation. As a result, “interactive systems … suggest individual activity where in fact no more than stereotyped reactions occur.”

If you think Simitis was describing a future that never came to pass, consider a recent paper on the transparency of automated prediction systems by Tal Zarsky, one of the world’s leading experts on the politics and ethics of data mining. He notes that “data mining might point to individuals and events, indicating elevated risk, without telling us why they were selected.” As it happens, the degree of interpretability is one of the most consequential policy decisions to be made in designing data-mining systems. Zarsky sees vast implications for democracy here:

A non-interpretable process might follow from a data-mining analysis which is not explainable in human language. Here, the software makes its selection decisions based upon multiple variables (even thousands) … It would be difficult for the government to provide a detailed response when asked why an individual was singled out to receive differentiated treatment by an automated recommendation system. The most the government could say is that this is what the algorithm found based on previous cases.

This is the future we are sleepwalking into. Everything seems to work, and things might even be getting better—it’s just that we don’t know exactly why or how.

Too little privacy can endanger democracy. But so can too much privacy.

Simitis got the trends right. Free from dubious assumptions about “the Internet age,” he arrived at an original but cautious defense of privacy as a vital feature of a self-critical democracy—not the democracy of some abstract political theory but the messy, noisy democracy we inhabit, with its never-ending contradictions. In particular, Simitis’s most crucial insight is that privacy can both support and undermine democracy.

Traditionally, our response to changes in automated information processing has been to view them as a personal problem for the affected individuals. A case in point is the seminal article “The Right to Privacy,” by Louis Brandeis and Samuel Warren. Writing in 1890, they sought a “right to be let alone”—to live an undisturbed life, away from intruders. According to Simitis, they expressed a desire, common to many self-made individuals at the time, “to enjoy, strictly for themselves and under conditions they determined, the fruits of their economic and social activity.”


A laudable goal: without extending such legal cover to entrepreneurs, modern American capitalism might have never become so robust. But this right, disconnected from any matching responsibilities, could also sanction an excessive level of withdrawal that shields us from the outside world and undermines the foundations of the very democratic regime that made the right possible. If all citizens were to fully exercise their right to privacy, society would be deprived of the transparent and readily available data that’s needed not only for the technocrats’ sake but—even more—so that citizens can evaluate issues, form opinions, and debate (and, occasionally, fire the technocrats).

This is not a problem specific to the right to privacy. For some contemporary thinkers, such as the French historian and philosopher Marcel Gauchet, democracies risk falling victim to their own success: having instituted a legal regime of rights that allow citizens to pursue their own private interests without any reference to what’s good for the public, they stand to exhaust the very resources that have allowed them to flourish.

When all citizens demand their rights but are unaware of their responsibilities, the political questions that have defined democratic life over centuries—How should we live together? What is in the public interest, and how do I balance my own interest with it?—are subsumed into legal, economic, or administrative domains. “The political” and “the public” no longer register as domains at all; laws, markets, and technologies displace debate and contestation as preferred, less messy solutions.

But a democracy without engaged citizens doesn’t sound much like a democracy—and might not survive as one. This was obvious to Thomas Jefferson, who, while wanting every citizen to be “a participator in the government of affairs,” also believed that civic participation involves a constant tension between public and private life. A society that believes, as Simitis put it, that the citizen’s access to information “ends where the bourgeois’ claim for privacy begins” won’t last as a well-functioning democracy.

Thus the balance between privacy and transparency is especially in need of adjustment in times of rapid technological change. That balance itself is a political issue par excellence, to be settled through public debate and always left open for negotiation. It can’t be settled once and for all by some combination of theories, markets, and technologies. As Simitis said: “Far from being considered a constitutive element of a democratic society, privacy appears as a tolerated contradiction, the implications of which must be continuously reconsidered.”

Laws and market mechanisms are insufficient solutions.

In the last few decades, as we began to generate more data, our institutions became addicted. If you withheld the data and severed the feedback loops, it’s not clear whether they could continue at all. We, as citizens, are caught in an odd position: our reason for disclosing the data is not that we feel deep concern for the public good. No, we release data out of self-interest, on Google or via self-tracking apps. We are too cheap not to use free services subsidized by advertising. Or we want to track our fitness and diet, and then we sell the data.

Simitis knew even in 1985 that this would inevitably lead to the “algorithmic regulation” taking shape today, as politics becomes “public administration” that runs on autopilot so that citizens can relax and enjoy themselves, only to be nudged, occasionally, whenever they are about to forget to buy broccoli.

Habits, activities, and preferences are compiled, registered, and retrieved to facilitate better adjustment, not to improve the individual’s capacity to act and to decide. Whatever the original incentive for computerization may have been, processing increasingly appears as the ideal means to adapt an individual to a predetermined, standardized behavior that aims at the highest possible degree of compliance with the model patient, consumer, taxpayer, employee, or citizen.

What Simitis is describing here is the construction of what I call “invisible barbed wire” around our intellectual and social lives. Big data, with its many interconnected databases that feed on information and algorithms of dubious provenance, imposes severe constraints on how we mature politically and socially. The German philosopher Jürgen Habermas was right to warn—in 1963—that “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes—the social engineers and the inmates of closed social institutions.”

The invisible barbed wire of big data limits our lives to a space that might look quiet and enticing enough but is not of our own choosing and that we cannot rebuild or expand. The worst part is that we do not see it as such. Because we believe that we are free to go anywhere, the barbed wire remains invisible. Worse, there’s no one to blame: certainly not Google, Dick Cheney, or the NSA. It’s the result of many different logics and systems—of modern capitalism, of bureaucratic governance, of risk management—that get supercharged by the automation of information processing and by the depoliticization of politics.

The more information we reveal about ourselves, the denser but more invisible this barbed wire becomes. We gradually lose our capacity to reason and debate; we no longer understand why things happen to us.

But all is not lost. We could learn to perceive ourselves as trapped within this barbed wire and even cut through it. Privacy is the resource that allows us to do that and, should we be so lucky, even to plan our escape route.

This is where Simitis expressed a truly revolutionary insight that is lost in contemporary privacy debates: no progress can be achieved, he said, as long as privacy protection is “more or less equated with an individual’s right to decide when and which data are to be accessible.” The trap that many well-meaning privacy advocates fall into is thinking that if only they could provide the individual with more control over his or her data—through stronger laws or a robust property regime—then the invisible barbed wire would become visible and fray. It won’t—not if that data is eventually returned to the very institutions that are erecting the wire around us.

Think of privacy in ethical terms.

If we accept privacy as a problem of and for democracy, then popular fixes are inadequate. For example, in his book Who Owns the Future?, Jaron Lanier proposes that we disregard one pole of privacy—the legal one—and focus on the economic one instead. “Commercial rights are better suited for the multitude of quirky little situations that will come up in real life than new kinds of civil rights along the lines of digital privacy,” he writes. On this logic, by turning our data into an asset that we might sell, we accomplish two things. First, we can control who has access to it, and second, we can make up for some of the economic losses caused by the disruption of everything analog.

Lanier’s proposal is not original. In Code and Other Laws of Cyberspace (first published in 1999), Lawrence Lessig enthused about building a property regime around private data. Lessig wanted an “electronic butler” that could negotiate with websites: “The user sets her preferences once—specifies how she would negotiate privacy and what she is willing to give up—and from that moment on, when she enters a site, the site and her machine negotiate. Only if the machines can agree will the site be able to obtain her personal data.”


It’s easy to see where such reasoning could take us. We’d all have customized smartphone apps that would continually incorporate the latest information about the people we meet, the places we visit, and the information we possess in order to update the price of our personal data portfolio. It would be extremely dynamic: if you are walking by a fancy store selling jewelry, the store might be willing to pay more to know your spouse’s birthday than it is when you are sitting at home watching TV.

The property regime can, indeed, strengthen privacy: if consumers want a good return on their data portfolio, they need to ensure that their data is not already available elsewhere. Thus they either “rent” it the way Netflix rents movies or sell it on the condition that it can be used or resold only under tightly controlled conditions. Some companies already offer “data lockers” to facilitate such secure exchanges.

So if you want to defend the “right to privacy” for its own sake, turning data into a tradable asset could resolve your misgivings. The NSA would still get what it wanted; but if you’re worried that our private information has become too liquid and that we’ve lost control over its movements, a smart business model, coupled with a strong digital-rights-management regime, could fix that.

Meanwhile, government agencies committed to “nanny statecraft” would want this data as well. Perhaps they might pay a small fee or promise a tax credit for the privilege of nudging you later on—with the help of the data from your smartphone. Consumers win, entrepreneurs win, technocrats win. Privacy, in one way or another, is also preserved. So who, exactly, loses here? If you’ve read your Simitis, you know the answer: democracy does.

It’s not just because the invisible barbed wire would remain. We also should worry about the implications for justice and equality. For example, my decision to disclose personal information, even if I disclose it only to my insurance company, will inevitably have implications for other people, many of them less well off. People who say that tracking their fitness or location is merely an affirmative choice from which they can opt out have little knowledge of how institutions think. Once there are enough early adopters who self-track—and most of them are likely to gain something from it—those who refuse will no longer be seen as just quirky individuals exercising their autonomy. No, they will be considered deviants with something to hide. Their insurance will be more expensive. If we never lose sight of this fact, our decision to self-track won’t be as easy to reduce to pure economic self-interest; at some point, moral considerations might kick in. Do I really want to share my data and get a coupon I do not need if it means that someone else who is already working three jobs may ultimately have to pay more? Such moral concerns are rendered moot if we delegate decision-making to “electronic butlers.”

Few of us have had moral pangs about data-sharing schemes, but that could change. Before the environment became a global concern, few of us thought twice about taking public transport if we could drive. Before ethical consumption became a global concern, no one would have paid more for coffee that tasted the same but promised “fair trade.” Consider a cheap T-shirt you see in a store. It might be perfectly legal to buy it, but after decades of hard work by activist groups, a “Made in Bangladesh” label makes us think twice about doing so. Perhaps we fear that it was made by children or exploited adults. Or, having thought about it, maybe we actually do want to buy the T-shirt because we hope it might support the work of a child who would otherwise be forced into prostitution. What is the right thing to do here? We don’t know—so we do some research. Such scrutiny can’t apply to everything we buy, or we’d never leave the store. But exchanges of information—the oxygen of democratic life—should fall into the category of “Apply more thought, not less.” It’s not something to be delegated to an “electronic butler”—not if we don’t want to cleanse our life of its political dimension.

Sabotage the system. Provoke more questions.

We should also be troubled by the suggestion that we can reduce the privacy problem to the legal dimension. The question we’ve been asking for the last two decades—How can we make sure that we have more control over our personal information?—cannot be the only question to ask. Unless we learn and continuously relearn how automated information processing promotes and impedes democratic life, an answer to this question might prove worthless, especially if the democratic regime needed to implement whatever answer we come up with unravels in the meantime.

Intellectually, at least, it’s clear what needs to be done: we must confront the question not only in the economic and legal dimensions but also in a political one, linking the future of privacy with the future of democracy in a way that refuses to reduce privacy either to markets or to laws. What does this philosophical insight mean in practice?

First, we must politicize the debate about privacy and information sharing. Articulating the existence—and the profound political consequences—of the invisible barbed wire would be a good start. We must scrutinize data-intensive problem solving and expose its occasionally antidemocratic character. At times we should accept more risk, imperfection, improvisation, and inefficiency in the name of keeping the democratic spirit alive.

Second, we must learn how to sabotage the system—perhaps by refusing to self-track at all. If refusing to record our calorie intake or our whereabouts is the only way to get policy makers to address the structural causes of problems like obesity or climate change—and not just tinker with their symptoms through nudging—information boycotts might be justifiable. Refusing to make money off your own data might be as political an act as refusing to drive a car or eat meat. Privacy can then reëmerge as a political instrument for keeping the spirit of democracy alive: we want private spaces because we still believe in our ability to reflect on what ails the world and find a way to fix it, and we’d rather not surrender this capacity to algorithms and feedback loops.

Third, we need more provocative digital services. It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing. We don’t want an electronic butler—we want an electronic provocateur. Instead of yet another app that could tell us how much money we can save by monitoring our exercise routine, we need an app that can tell us how many people are likely to lose health insurance if the insurance industry has as much data as the NSA, most of it contributed by consumers like us. Eventually we might discern such dimensions on our own, without any technological prompts.

Finally, we have to abandon fixed preconceptions about how our digital services work and interconnect. Otherwise, we’ll fall victim to the same logic that has constrained the imagination of so many well-meaning privacy advocates who think that defending the “right to privacy”—not fighting to preserve democracy—is what should drive public policy. While many Internet activists would surely argue otherwise, what happens to the Internet is of only secondary importance. Just as with privacy, it’s the fate of democracy itself that should be our primary goal.

After all, back in 1967 Paul Baran was lucky enough not to know what the Internet would become. That didn’t stop him from seeing the benefits of utility computing and its dangers. Abandon the idea that the Internet fell from grace over the last decade. Liberating ourselves from that misreading of history could help us address the antidemocratic threats of the digital future.

Evgeny Morozov is the author of The Net Delusion: The Dark Side of Internet Freedom and To Save Everything, Click Here: The Folly of Technological Solutionism.

http://www.technologyreview.com/featuredstory/520426/the-real-privacy-problem/

An Artificial Hand with Real Feeling


 

A new nerve interface gives a sense of touch to a prosthetic limb.

  • By David Talbot | Photographs by Ryan Donnell on February 18, 2014

http://www.technologyreview.com/photoessay/524676/an-artificial-hand-with-real-feeling/

Igor Spetic’s hand was in a fist when it was severed by a forging hammer three years ago as he made an aluminum jet part at his job. For months afterward, he felt a phantom limb still clenched and throbbing with pain. “Some days it felt just like it did when it got injured,” he recalls.

Igor Spetic lost his hand in a workplace accident. Now he’s one of the first people ever to regain realistic finger sensations, thanks to nerve interfaces implanted in the arm.

He soon got a prosthesis. But for amputees like Spetic, these are more tools than limbs. Because the prosthetics can’t convey sensations, people wearing them can’t feel when they have dropped or crushed something.

Now Spetic, 48, is getting some of his sensation back through electrodes that have been wired to residual nerves in his arm. Spetic is one of two people in an early trial that takes him from his home in Madison, Ohio, to the Cleveland Veterans Affairs Medical Center. In a basement lab, his prosthetic hand is rigged with force sensors that are plugged into 20 wires protruding from his upper right arm. These lead to three surgically implanted interfaces, each seven millimeters long with as many as eight electrodes encased in a polymer, that surround three major nerves in Spetic’s forearm.

On a table, a nondescript white box of custom electronics does a crucial job: translating information from the sensors on Spetic’s prosthesis into a series of electrical pulses that the interfaces can translate into sensations. This technology “is 20 years in the making,” says the trial’s leader, Dustin Tyler, a professor of biomedical engineering at Case Western Reserve University and an expert in neural interfaces.

Left: To evaluate his sensory feedback, he picks up blocks held to the table with magnets.

Right: With sensation restored, he can pick up cherries and remove stems 93 percent of the time without crushing, even blindfolded.

As of February, the implants had been in place and performing well in tests for more than a year and a half. Tyler’s group, drawing on years of neuroscience research on the signaling mechanisms that underlie sensation, has developed a library of patterns of electrical pulses to send to the arm nerves, varied in strength and timing. Spetic says that these different stimulus patterns produce distinct and realistic feelings in 20 spots on his prosthetic hand and fingers. The sensations include pressing on a ball bearing, pressing on the tip of a pen, brushing against a cotton ball, and touching sandpaper, he says. A surprising side effect: on the first day of tests, Spetic says, his phantom fist felt open, and after several months the phantom pain was “95 percent gone.”

On this day, Spetic faces a simple challenge: seeing whether he can feel a foam block. He dons a blindfold and noise-­canceling headphones (to make sure he’s relying only on his sense of touch), and then a postdoc holds the block inside his wide-open prosthetic hand and taps him on the shoulder. Spetic closes his prosthesis—a task made possible by existing commercial interfaces to residual arm muscles—and reports the moment he touches the block: success.

While the results are promising, research that involves surgical implants is time-consuming. Completing the pilot study, refining stimulation methods, and launching full clinical trials is likely to take 10 years. Tyler is also finishing development of an implantable electronic device to deliver stimuli “so this is not just on a bench in a lab, but gets into the home eventually,” he says. And he is working with manufacturers of prostheses to integrate force sensors and force processing technology directly into future versions of the devices.


Left: Control boxes deliver signals to electrodes surrounding nerves in Spetic’s arm, producing sensations of touch.

Right: This device might eventually be implanted in his arm, replacing lab equipment to deliver signals. Force sensors and processing technology could be integrated into future prosthetic devices.

Nerve interfaces are implanted in the arm.

When the tests are over and the equipment is disconnected, Spetic’s sensory visitation with his lost hand abruptly ends. He says he’s “blessed to know these people and be a part of this.” But he can’t help thinking wistfully about what the future might bring. “It would be nice to know I can pick up an object without having to look at it, or I can hold my wife’s hand and walk down the street, knowing I have a hold of her,” he says, as he puts on his coat and starts back home. “Maybe all of this will help the next person.”

Watch video: Restoring a Sense of Touch in Amputees

http://www.technologyreview.com/photoessay/524676/an-artificial-hand-with-real-feeling/

Sunday, January 5, 2014

RBI cautions against use of bitcoins

TNN | Dec 25, 2013, 01.31AM IST

MUMBAI: The Reserve Bank of India (RBI) has issued a caution notice against bitcoins and other virtual currencies saying that users may end up violating laws against money laundering and terror financing.

Stopping short of declaring a ban on their purchase, RBI said it is examining the issues associated with the usage, holding and trading of virtual currencies under the extant legal and regulatory framework of the country, including Foreign Exchange and Payment Systems laws and regulations. "The creation, trading or usage of virtual currencies (VCs), including bitcoins, as a medium for payment are not authorised by any central bank or monetary authority. No regulatory approvals, registration or authorisation is stated to have been obtained by the entities concerned for carrying on such activities. As such, they may pose several risks to their users," RBI said.

Governments in Europe have expressed similar concerns about bitcoins.

In its cautionary advice, RBI said that it is looking at the developments related to certain electronic records claimed to be "decentralised digital currency" or "virtual currency", such as, bitcoins, litecoins, bbqcoins and dogecoins, their usage or trading in the country and various media reports in this regard.

Driven by speculators, the value of a bitcoin had shot to $1,124 in November from $13 in January 2013. But its price crashed by 50% after China banned financial institutions from bitcoin transactions earlier this month. In India, the price of bitcoins has fallen to Rs 42,737 from Rs 74,628 in November.

Among the risks, RBI has listed loss due to hacking and malware, loss of password and the absence of an established framework for recourse to customer problems and disputes. But the central bank's main concern appears to be that virtual currencies are being used for illegal activities. "There have been several media reports of the usage of VCs, including bitcoins, for illicit and illegal activities in several jurisdictions. The absence of information of counterparties in such peer-to-peer anonymous/pseudonymous systems could subject the users to unintentional breaches of anti-money laundering and combating the financing of terrorism (AML/CFT) laws," RBI said.

The central bank had earlier described virtual currency as a type of unregulated, digital money, which is issued and usually controlled by its developers, and used and accepted among the members of a specific virtual community. Virtual currency schemes provide a financial incentive for virtual community users to continue to participate and are able to generate 'float' revenue for their owners.

Bitcoins are the fastest-growing currency on the internet. Users hold them in the form of a private key, which enables transactions. The private key can be stored in an electronic wallet, which can be software on a computer or a file on a pen drive. Unlike fiat money issued by governments, or bullion, bitcoins are electronic records of ownership of the virtual currency. The currency offers users the advantages of low transaction costs and interoperability. But regulators are worried that this route can be used to circumvent laws and government sanctions. Earlier this year, US authorities cracked down on a virtual currency service company, Liberty Reserve, alleging that it was involved in money laundering.

Virtual currencies are not backed by any underlying asset, so their value appears to be purely a matter of speculation. Their value has been hugely volatile in the recent past, exposing users to potential losses.

"It is reported that VCs, such as bitcoins, are being traded on exchange platforms set up in various jurisdictions whose legal status is also unclear. Hence, the traders of VCs on such platforms are exposed to legal as well as financial risks," RBI said.

Physical Bitcoins by Casascius

https://www.casascius.com/


Nov 27, 2013: For the time being, I have suspended accepting new orders, pending resolution of some concerns I have as to regulatory issues. I am anticipating a possibility of having to prequalify buyers, and am holding off taking orders until I know for sure.

Casascius Bitcoins are physical coins you can hold - and each one is worth real digital bitcoins. 

Bitcoin is the most widely used open-source peer-to-peer "cryptocurrency" that you can send over the Internet without a bank or a middleman.

Each Casascius Bitcoin is a collectible coin backed by real Bitcoins embedded inside.  Each piece has its own Bitcoin address and a redeemable "private key" on the inside, underneath the hologram.

Current products available for sale:

฿1 Casascius Coin: This is a solid brass coin.  Each 1-bitcoin coin is about 1.125 inches (28.6mm) in diameter (just bigger than a US quarter but smaller than a half-dollar) and weighs a quarter ounce.  Perfect as a small gift to introduce someone to Bitcoin.  Also available in a ฿0.5 version, which is slightly smaller at 1 inch (25.4mm).

฿1 Gold-Plated Fine Silver Casascius Round. This is a 39mm 1oz silver round accented with gold electroplating on the rim and on the Bitcoin logo, loaded with one digital bitcoin.

฿0.5 and ฿0.1 Fine Silver Casascius Rounds: These are half-ounce and quarter-ounce rounds, respectively, of fine silver. Diameters are 30mm and 25mm.

Casascius 2-Factor Gold-Plated Savings Bar: Dress your bitcoins for tomorrow; make them look their best in your vault today.  This is a 4.2-ounce metal-alloy bar with gold plating that would weigh about 12 ounces if it were solid gold: a neat-looking novelty that looks unmistakably valuable.  Available as a pre-loaded 100 BTC bar, as well as a non-denominated savings bar.  Two-factor encryption is available at no charge.  The bar measures 80mm x 40mm x 6mm.

How they work: The "private key" is on a card embedded inside the coin and is protected by a tamper-evident hologram.  The hologram leaves behind a honeycomb pattern if it is peeled. If the hologram is intact, the bitcoin is good. If you have purchased a 2-factor item, the private key is encrypted and will need to be decrypted using your original preselected passphrase before you can redeem the funds.

The 8-character code you see on the outside of the coin is the first eight characters of the Bitcoin address assigned specifically to that coin.  You can verify the coin's balance on Block Explorer.  There is a mathematical relationship between the Bitcoin address and the private key inside the coin. The digital bitcoin is actually located on the public "block chain" stored on the internet, but it is completely inaccessible to anyone unless the private key from the coin is loaded into a Bitcoin wallet.

To recover the digital bitcoins, there are several ways to convert the embedded code back into a digital bitcoin so it can be spent over the internet.  Most importantly, none of the methods relies on me or any other central issuer, due to Bitcoin's completely decentralized design.  The embedded private key code is everything a Bitcoin client needs to find and claim the digital Bitcoins from the peer-to-peer network. For example, you can enter (or "import") your coin's private key code directly into Bitcoin clients such as Armory, Blockchain.info, or directly into Mt. Gox as a deposit method.  (Casascius coins use the "minikey" private key format, and the main Bitcoin.org client does not yet support redeeming minikeys.)
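The minikey format mentioned above lends itself to a short sketch. This is a minimal Python illustration, assuming the publicly documented convention for Casascius-style mini private keys: a 30-character candidate beginning with "S" is considered well-formed when SHA-256 of the candidate plus a "?" begins with a zero byte, and the full 256-bit private key is simply SHA-256 of the minikey string itself. The specific strings used below are placeholders, not real keys.

```python
import hashlib


def minikey_is_well_formed(candidate: str) -> bool:
    """Typo check for a 30-character minikey: per the documented rule,
    SHA-256(candidate + '?') must begin with a 0x00 byte."""
    if len(candidate) != 30 or not candidate.startswith("S"):
        return False
    digest = hashlib.sha256((candidate + "?").encode("ascii")).digest()
    return digest[0] == 0x00


def minikey_to_private_key(candidate: str) -> str:
    """The full 256-bit private key is SHA-256 of the minikey string,
    returned here as 64 hex characters."""
    return hashlib.sha256(candidate.encode("ascii")).hexdigest()
```

Deriving the public Bitcoin address from that private key additionally requires elliptic-curve math and Base58Check encoding, which wallet clients handle internally.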

Of course, since the face value of the coins depends on the integrity of the embedded key code, you should only accept Casascius Bitcoins bearing an undamaged Casascius hologram from others.

E-mail is casascius at mc2cs.com.

Links:
  • Frequently Asked Questions
  • Casascius Wordpress Blog
  • Free High-resolution coin photos
  • For more high-quality Casascius-related photos suitable for press use, go to gettyimages.com and search for Mike Caldwell or Bitcoin.
  • More about Bitcoin: We Use Coins.
  • Casascius Coin trackers (made by fans) casascius.uberbills.com casascius.appspot.com
  • I only accept Bitcoins for payment for these items.  I do not accept any form of national currency such as Dollars or Euros for my products. However, you may be able to buy Casascius Coins from others on eBay for such currencies.
  • Please note that I am relatively slow to ship products!  Delays of 7-10 days are common; I consider myself mainly a wholesaler.  Things that slow me down include my other responsibilities, other Bitcoin-related activism and projects, the unusual and often inconvenient procedures I go through to keep my products and bitcoins secure, and the fact that I might need to create or engrave your items before I can ship them.  Please consider shopping from resellers, eBay, or BitMit for fastest service.
  • Casascius LLC Terms and Conditions

My PGP key

World’s First Bitcoin ATM Launched in Canada

 
Latest move toward mainstream use for virtual currency

By Per Liljas | Oct. 30, 2013

 


David Ryder / Getty Images

Katrina Caudle celebrates after using the world's first bitcoin ATM at Waves Coffee House on October 29, 2013 in Vancouver, British Columbia, Canada.

Bitcoins took yet another step toward mainstream use on Tuesday, as the world’s first ATM converting the virtual currency to conventional cash, and vice versa, was introduced at a coffee shop in Vancouver, Canada.

The machine will be operated by the bitcoin exchange companies Bitcoiniacs and Robocoin, and will perform transactions after a palm and ID scan, CBC reports. Four more ATMs are planned for the country in the near future.

Bitcoin, which made headlines as a method to buy illegal products on “deep web” portals such as the recently raided Silk Road, is also gaining prominence among high-street stores in Western Canada. Bitcoiniacs reported that it was selling the currency, valued at $12 in January, for around $200 today.

http://business.time.com/2013/10/30/worlds-first-bitcoin-atm-launched-in-canada/

Bitcoins: missing the real revolution

January 4, 2014

Updated: January 4, 2014 01:38 IST

Vasudevan Mukunth

BTC: Some investors have been smart enough to spot the potential for innovations that are proliferating on diverse fronts. A representational picture.

Photo: AP

The strength of cryptocurrencies like bitcoin has less to do with their monetary potential than with their technical potential

The year 2013 was unequivocally the year of bitcoin, more than it was the year of the commercialisation of 3D printers or the advent of private spaceflight. The bitcoin mining and transaction network first came online in early 2009, saw an adoption boom in early 2012, and got the attention of investors and governments late last year. It has been less a roller-coaster ride than an initiation into the Gartner hype cycle, and the slope of enlightenment is nowhere in the vicinity.

Unfortunately, there’s a bigger problem: people have been having the wrong debate, all the way from those who want to get on the bandwagon because they know a bitcoin is worth $825.43 (1616 IST, January 3), to regulators arguing over whether cryptocurrencies can replace American dollars, to political economists asking if this is a libertarian plot to subvert the Federal Reserve. Needless to say, they’re all wrong.

There are two aspects to bitcoin: the digital currency itself, which uses complex mathematical functions to be acquired, moved around and secured; and the transaction verification system. The former is the honey that attracts the bees, the occupant of mainstream imagination; the latter is the hive of the future, the real revolution.

The strength of cryptocurrencies like bitcoin has less to do with their monetary potential than with their technical potential. What Satoshi Nakamoto, the enigmatic programmer (or programmers) who conceived the bitcoin system, created bears directly on the notion of a transaction cost: the price of mobilising your resources, irrespective of the nature of those resources.

Within the bitcoin transaction verification network, both value and validity are established democratically. A person who intends to spend a bitcoin needs to show proof of work (that he mined or acquired the coin through legitimate means) and proof of knowledge (that the requested transaction is verifiable). If most users on the network agree that a transaction for some amount is legitimate, then that’s that. The identities of the transactors are irrelevant.
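The verification idea in the paragraph above can be illustrated with a toy proof-of-work sketch. This is not Bitcoin’s actual protocol (which double-hashes block headers against an adjustable difficulty target); it only shows the asymmetry the network relies on: finding a proof is expensive, but any peer can check one cheaply.

```python
import hashlib


def mine(data: str, difficulty: int = 2) -> int:
    """Search for a nonce whose SHA-256 hex digest starts with
    `difficulty` zero characters -- the expensive step."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1


def verify(data: str, nonce: int, difficulty: int = 2) -> bool:
    """Re-checking a claimed proof takes a single hash -- the cheap
    step that lets every node on the network vote on validity."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Because verification is so much cheaper than mining, “most users agree” becomes a practical decision rule rather than a costly negotiation.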

In microfinance
Therefore, adopting bitcoin would help small businesses grow unburdened by disproportionate transaction costs incurred to mobilise relatively small amounts. More broadly, bitcoin holds the potential to reform microfinance in rural India. For example, some Assamese tea growers are exploring the option of transacting in bitcoin to avoid foreign exchange fees and the need to set up complex bank wire transfers between them, the producers, and their global consumers, often through middlemen such as PayPal, whose participation comes with an automatic loss of value of around 7 to 10 per cent of the transaction.

Instead, using bitcoins means pure value transfer.

Scope for innovation
For transactions to hold their ground while preserving anonymity and ensuring security, the network of users needs to be large and consistent. And the rewards system that keeps these bees loyal is the bitcoin, the honey. Unfortunately, regulators and laymen alike have been paying undue attention to its value and the inherent anonymity. Some investors, on the other hand, have been smart enough to spot the potential for innovation.

Innovations are proliferating on diverse fronts, almost all of them building on the answer to the question of why bitcoin is actually disruptive: it is not erected on existing platforms but on one all its own, strong on privacy and security. Bitmessage, for instance, is bitcoin all over again but with emails being sent around instead of coins. Gliph is Bitmessage for push messaging. Coinbase, BTC-E and BitPay are PayPal with bitcoins. Even Visa and Mastercard are starting to see their counterparts in Canada, where the first bitcoin ATMs were installed before Christmas.

In fact, the most interesting application of bitcoin I have heard of comes from American entrepreneur Chris Dixon, and it turns on the cryptocurrency’s high divisibility. Just as the rupee’s lowest relevant denomination at the moment is 25 paise, a bitcoin’s lowest denomination is 1 satoshi, equal to 0.00000001 bitcoin. As of 2033 hrs IST on January 1, one satoshi was worth about 0.047 paise. This isn’t much.

Say that with every email you send, you are also required to send one satoshi to the recipient. Still not much. But what if you’re a spammer? What if you’re sending out tens of thousands of emails because you’re an annoying advertiser? You will also be sending out a few tens of rupees, and that’s a significant amount for an enterprise that used to be free. Suddenly, bitcoin’s high divisibility has become a tool to fight spam, and without having to forge cumbersome relationships based on credit cards.
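The arithmetic behind Dixon’s scheme reduces to a few lines. This sketch uses an assumed exchange rate of ₹50,000 per bitcoin, roughly in line with the figures quoted above:

```python
SATOSHI_PER_BTC = 100_000_000           # a satoshi is 0.00000001 BTC

# Illustrative exchange rate, close to the article's figures.
btc_price_inr = 50_000                  # rupees per bitcoin (assumed)
satoshi_inr = btc_price_inr / SATOSHI_PER_BTC

# One satoshi per email is negligible for an ordinary sender...
cost_one_email = 1 * satoshi_inr        # 0.0005 rupees, i.e. 0.05 paise

# ...but a spam run of 100,000 emails carries a real price tag.
cost_spam_run = 100_000 * satoshi_inr   # 50 rupees
```

The cost scales linearly with volume, which is exactly the property that leaves ordinary correspondence untouched while taxing bulk senders.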

For all we know, bitcoin could be on its way out. The real tragedy, though, would not be its demise so much as that we let slip the best single solution we’ve had in years to a range of problems that affect individuals and small enterprises.

vasudevan.m@thehindu.co.in

Copyright© 2014, The Hindu