

    Dealing with new digital ethics

    How IoT vendors can use digital ethics to protect personal privacy

    The so-called tech-lash continues, as a tech industry that just several years ago was a public darling keeps coming under fire. Witness the accelerating U.S. presidential election cycle, in which candidates have found it politically useful to attack tech giants for their transgressions.
     

    This reflex won't be enough to sideline the best products of our ongoing tech revolution—such as that complex of systems and solutions that cluster under the label of the Internet of Things (IoT). But it does mean that the decision-makers responsible for these solutions will have to do better in justifying them. More to the point, they'll need to make clear to the world how they'll grapple with the unforeseen consequences that any major tech initiative will bring.

    When it comes to the IoT, the sticking point is privacy: how to approach it, preserve it, think about it, claw it back where it's been lost, and even define it in the first place. The stakes are high. Just ask politicians in Canada. That's where a public outcry has meant the end of Quayside, a project in which Alphabet's Sidewalk Labs subsidiary was to build a model high-tech IoT-enabled smart city on a stretch of Lake Ontario waterfront. Five years ago, this initiative, which Prime Minister Justin Trudeau supported, would likely have sailed right through. But in the current climate of anxiety about overweening big tech, the “end of privacy," the misuse of data, and “surveillance capitalism," the rules of the game have changed.
     

    How can IoT vendors cope with these new rules? First of all, by taking them seriously. That means more than ensuring, or claiming to ensure, that you're going to handle user data responsibly. It means devising and hewing to a comprehensive framework for digital ethics writ large—a framework within which privacy forms only a part.

    Apple CEO Tim Cook has intimated that digital ethics in general and digital privacy in particular represent a definitional issue for our developing connected economy. What follows is a set of pointers to guide IoT vendors as they confront the ethical and privacy issues inherent in the products they sell.

    Pointer #1. Be sure to build ethics and privacy into a system — don't stick them on as an afterthought.


    In other words, mechanisms to safeguard privacy should form an inherent part of any system. It might sound like too much to insist that system builders consider privacy their system's most important product, whatever other function it might provide—but, as a way to keep priorities in line, that's not a bad way to think about it.
    Conveniently, help exists for IoT decision-makers in “privacy-by-design" (PbD), a developing series of industry protocols that promote what Dr. Ann Cavoukian, Executive Director of the Privacy and Big Data Institute at Ryerson University, describes as a “comprehensive, properly implemented risk-based approach," one in which “risks are anticipated and countermeasures are built into systems and operations."

    What might such an approach look like in practice?
     

    Look for the following things. First, transparency has to be the watchword when it comes to the collection and use of data. When they go to market, vendors should “lead with" their explanations of what they plan to do with any given project's data, as opposed to providing that information on a need-to-ask basis.
     

    Second, data risks, and response protocols in the event of data breaches, need to be painstakingly—and, again, transparently—mapped out. Breakdowns both large and small need to be anticipated and planned for.
     

    And finally, there has to be clear ownership of privacy and data issues at every step along the data-use process. (Hiring a chief privacy officer might not be a bad idea towards making this happen.)
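    As an illustration only—the class and field names below are hypothetical, not drawn from any real privacy-by-design toolkit—the three practices above might translate into code as a collector that refuses to ingest any data stream that lacks a published purpose, a named owner, and a documented breach protocol:

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    """Declares, up front, what a system will do with the data it collects."""
    purpose: str          # stated use, published before the system launches
    owner: str            # named person accountable for this data stream
    breach_protocol: str  # documented response if this data leaks
    retention_days: int = 30  # keep data no longer than needed

@dataclass
class SensorReading:
    sensor_id: str
    value: float

class PrivacyAwareCollector:
    """Privacy by design: no declared policy, no data collection."""

    def __init__(self) -> None:
        self.policies: dict[str, DataPolicy] = {}
        self.store: list[tuple[str, SensorReading]] = []

    def register_policy(self, stream: str, policy: DataPolicy) -> None:
        self.policies[stream] = policy

    def ingest(self, stream: str, reading: SensorReading) -> bool:
        # Refuse any reading whose stream has no registered policy.
        if stream not in self.policies:
            return False
        self.store.append((stream, reading))
        return True
```

    The point of the sketch is structural: the safeguard sits inside the ingestion path itself, rather than being bolted on as a review step after the data has already been gathered.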

    Pointer #2. Offer an easy opt-out.


    We might not all be able to define what a “dark pattern" is, but we're all familiar with the wiles of digital system-builders: how it's far easier, for instance, to opt into a particular online service than it is to opt out of it. IoT vendors should leave opacity to the digital marketers and make it as easy as possible for users to choose not to participate in a given system.

    Pointer #3. Or, for that matter, offer an opt-in instead.


    Even better than the chance to opt out of a given system is being able to choose whether to participate in the first place, and how much and in what way. As digital ethicist Luca van der Heide writes, it should be the user who makes the choice whether or not to interact with a digital system, not the system itself.
     

    To put it another way, a system shouldn't invade our space. It should offer us the chance to participate in it.
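    To make the distinction concrete, here is a minimal, hypothetical sketch (the class and consent levels are illustrative, not any real IoT API) in which non-participation is the default state: a user who never acts is never enrolled, and opting out is a single call, just as easy as opting in:

```python
from enum import Enum

class Consent(Enum):
    NONE = "none"            # default: the user has not opted in
    ANONYMOUS = "anonymous"  # share aggregate, de-identified data only
    FULL = "full"            # share identifiable data

class ConsentRegistry:
    """Opt-in by default: absence of a record means no consent."""

    def __init__(self) -> None:
        self._choices: dict[str, Consent] = {}

    def opt_in(self, user_id: str, level: Consent) -> None:
        # The user chooses not just whether to participate, but how much.
        self._choices[user_id] = level

    def opt_out(self, user_id: str) -> None:
        # Opting out is one call -- as easy as opting in.
        self._choices.pop(user_id, None)

    def level(self, user_id: str) -> Consent:
        return self._choices.get(user_id, Consent.NONE)
```

    The design choice worth noting is the default return value: a system that must find an explicit record before collecting identifiable data is one that offers participation rather than presuming it.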
     

    A couple of obvious objections pop up here regarding the issue of opting out of, or into, a comprehensive IoT system. After all, many IoT systems preclude choice. With help from IT colleagues graced with superhuman patience, an employee who works in a smart office might be able to refuse to participate in the company's IoT-enabled conference room booking system. But avoiding the city's IoT-enabled street lighting system promises to be a heavier lift.
     

    Which leads us to our next pointer...

    Pointer #4. Make sure the system makes itself obvious.


    Many IoT systems aren't only pervasive. They're also unobtrusive to the point of invisibility.
     

    Which is the point. One of the beauties of an IoT system is the way it frictionlessly augments reality, helping us save energy, manage our offices or homes, or find a parking space without our even noticing that we're getting any help at all.
     

    That said, such unobtrusiveness can raise ethical flags. To find a way around this problem, van der Heide insists that every IoT system at some point “show itself," in his words—that is, make clear to the human beings who populate its space that it exists, and that those human beings exist within its terms. “Showing itself" might be as simple a matter as signage that announces to city residents that they're entering an area where sensors are tracking how and where they move.
     

    Still, a system that shows itself isn't necessarily a system that you can easily opt out of. But there is a major ethical difference between a pervasive system that transparently announces its presence and one that doesn't.
     

    On the other hand, context matters—in digital ethics, as in all things. IoT decision-makers should approach ethical matters with an eye towards what a reasonable human being would consider reasonable. An IoT system that manages office lighting, and that uses the information it gathers for no grander purpose than to route employees between well-lit spaces, is in a different ethical category than other, more comprehensive systems with more ambitious (and intrusive) plans for how they're going to use their data.

    Pointer #5. When in doubt, rule in favor of the individual human being as free, unsurveilled subject.


    In matters of IoT ethics, any tie should be resolved in the interests of the human user, and of his or her privacy and dignity. If a given initiative raises reasonable doubts, that initiative should be scrapped. It's as simple as that.

    Pointer #6. Follow the data privacy role models that are starting to crop up on the corporate landscape.


    Bloomberg is one good example to follow. Its Data for Good Exchange positions itself as “part of a long Bloomberg tradition of advocacy for using data science and human capital to solve problems at the core of society." One of its core areas of activity is “data philanthropy," which involves “[u]nlocking the power of data to reduce information inequality and advance social good."
    Or take a look at Clue, a Berlin-based company that offers a fertility-tracking app—an area of business with especially obvious invasion-of-privacy issues. Clue's privacy policies have garnered praise from the Electronic Frontier Foundation, a watchdog group, and the company partly credits its business success to the reputation those practices have earned it.
    Or take JPMorgan Chase & Co., whose JPMorgan Institute has dedicated itself to using data “for the public good." A recent institute project involved analyzing “the role of liquidity, equity, income levels, and payment burden as determinants of mortgage default," ultimately towards helping people avoid default.

    Like other transformative technology, the IoT is going to keep generating ethical questions that are as fascinating as they are unavoidable. The above guidelines can ground IoT decision-makers as they work towards answers that will have material effects in our homes, on our streets, and throughout our economies. The good news is that, just as ethics have proven equal to other technological leaps forward in human development, they'll prove equal to the IoT, too—with benefits for us all.

    About the author

    Jonathan Weinert has been researching and writing about LED lighting and the IoT since joining Signify in 2008. He focuses on the full range of professional connected lighting systems, including smart cities, smart buildings, and other global trends in the illuminated IoT.
