Avoiding Harm in Technology Innovation

To capitalize on emerging technologies while mitigating unanticipated consequences, innovation managers need to establish a systematic review process.


    Advances in science and technology promise to solve some of the world’s most vexing problems, create new markets, and fuel economic growth. At the same time, they often raise ethical questions, and their deployment may lead to unanticipated adverse consequences that many companies are ill-equipped to identify or address.

    For example, technology that enables the editing of human DNA is leading to new medical treatments, but the practice is also rife with unintended consequences. In 2018, an ambitious researcher in China ignored both the norms of the field and the law and altered human embryos’ genetic material in an effort to confer resistance to HIV. The changes he made to the embryos’ genes may be passed down to future generations, and their effects on human development are unknown.1

    Failure to fully consider possible outcomes when deciding whether and how to develop any technology can lead to negative consequences for companies, customers, and other stakeholders. In a 2024 case involving an AI chatbot, Air Canada’s enthusiasm for automated customer service ran ahead of its understanding of generative AI’s significant limitations when it comes to accuracy of output. It fielded a chatbot that misrepresented company policy on bereavement fares in response to a customer inquiry, then denied the customer a refund for not following the actual policy and refused to take responsibility for providing bad information. When the customer sued and won, the headlines did little to burnish Air Canada’s reputation for customer service.

    When considering new developments that are truly at the forefront of applied science and technology, innovators are usually well ahead of regulations that might provide guardrails around commercialization decisions. So how do they decide when to proceed and when to hold back on releasing a new technology into the world?

    We asked executives and managers involved with technology scouting, adoption, and commercialization about the extent to which they consider the potential harmful effects of their companies’ innovations on human health and well-being, the environment, and society. (See “The Research.”) We learned that there are few systematic processes for thinking through these issues. While we gained insight into practices that help teams consider the ethical implications of commercializing emerging technologies, we observed patterns of behavior and priorities that can make it harder to recognize and mitigate potential harms.

    Why So Little Time Is Spent on Risks and Consequences

    Our research revealed common problems with companies’ cultures and processes that often limit the extent to which they explore how — or whether — to responsibly develop and deploy emerging technologies.

    One significant and unsurprising dynamic is that business imperatives claim the lion’s share of attention. Early in the process of working with a new technology, companies focus primarily on how they can use it, their market objectives, other internal goals, and business model constraints. At later stages, they attend to legal or regulatory compliance rather than the potential for new forms of harm that the law does not yet address. They give little forethought to possibly nefarious uses of the technology and its potential effects on society, or what to do about them.

    It’s not that companies don’t consider risk — but they may give more thought to offloading liability onto other parties than to avoiding harm altogether. One company we studied mitigated its risk by forming separate businesses that were responsible for developing new technologies. Others would take an observer seat on a startup’s board, where they could influence the direction of the company without incurring liability for any unintended scenarios that might emerge.

    Organizations sometimes form committees to advise on the limitations of new products and how they might be received in the market. However, according to our interviewees, these committees often focus narrowly on risk management: They ensure regulatory compliance, assess the appropriateness of marketing campaigns, and remind managers to remain aligned with the company’s mission statement.

    The practices described above deliver what they’re designed to — commercial products and revenue — but they offer companies little protection from problems they haven’t considered. With rare exceptions, individuals are left to their own devices when making calls about ethical or responsible use of an emerging technology. Yet the organization will ultimately suffer the consequences if those choices lead to widespread harm.

    A Framework for Responsible Innovation

    Companies can both profit from innovation and deploy new technologies responsibly if business leaders adopt a consistent approach to vetting proposed applications. We have created a structure for this based on the Responsible Research and Innovation (RRI) framework, a set of principles for aligning the scientific research process and its outcomes with the values, needs, and expectations of society. It emerged from social scientists’ discussions of ethical concerns about genetic research at the beginning of this century and aims to reduce the societal risks of scientific and technological advances.2

    RRI centers on four principles: anticipation, reflexivity, inclusion, and responsiveness. To adapt RRI into our Responsible Innovation and Commercialization (RIC) framework, we redefined each principle in terms relevant to business activities. We also described actions and outcomes that support each principle and address the limitations of current business practices.

    Our recommendations reflect the experiences of the executives and managers we interviewed, as well as their insights about improvements they would like to see in their organizations’ innovation processes. Let’s look at the principles of RIC and how to practice them in more detail.

    Anticipation: What could happen? Anticipation — the process of considering what future outcomes are possible or likely — is essential to proactively managing any risks or ethical challenges associated with innovations. When practiced systematically, anticipation yields a comprehensive list of possible uses for a given technology over time, the potential consequences of each, and possible means to mitigate or prevent them. With such a window on possible results, business leaders can be better informed when deciding whether to invest in an emerging technology and when navigating their relationships with innovation partners.

    Anticipation relies on thoughtful conversations with trusted, resourceful advisers. Here, technology scouts and corporate venture capital (CVC) managers are an untapped internal resource. Those we interviewed were curious and had deep subject matter expertise, including doctoral degrees in their respective fields.

    Their expertise helped them gauge the likelihood of negative consequences for specific applications with greater certainty. One CVC manager described a potential investment in a startup that had designed a microbe to remove toxic chemicals from the environment. Two chemists on the team had questions: What if the microbes were accidentally dispersed? Could they take over naturally occurring microbe colonies and cause environmental harm? Ultimately, the company declined to invest because it was not satisfied that the technology could be safely deployed.

    Managers also can be guided by a clear statement of their organization’s purpose and values. Another CVC manager told us about a time he identified a startup with compelling technology but some questionable use cases. He took the issue to his company’s ethics committee — another source of expertise — which informed him of the areas in which the company wanted to be active and those that were off-limits.

    Together, they worked through the gray areas, asking questions about the startup’s mission and how a relationship with it would reflect on the company and fit with the company’s ethical code. Ultimately, the ethics committee concluded that the investment could align with the company’s values and ethical principles under certain conditions. They advised the CVC manager to monitor those conditions as he continued to evaluate the opportunity.

    A systematic process of anticipation involves the following steps:

    • Take a systematic and thorough approach to imagining all the potential uses of the technology, both good and bad, for various groups of stakeholders and for society. Then identify the potential positive and negative consequences of each use case for each group.
    • Identify subject matter experts and trusted advisers who can help your team understand the potential consequences of commercializing the technology and your company’s ability to influence its evolution.
    • Compile what is known and predicted about the development road map for the technology. Assess whether it aligns with your company’s intended purpose for its use, as well as with the company’s mission and values. If the technology’s trajectory could potentially extend to capabilities your company is not comfortable with, define the boundaries beyond which the company will opt out of investing in further development.

    Reflexivity: Do we understand our own biases and assumptions? When managers question what they know and how they know it, they begin to acknowledge that their own views on risks and ethical decisions are necessarily limited. Every industry, profession, and organization has beliefs and values that shape its decisions. Processes that encourage reflexivity give employees and managers opportunities to surface and question these norms without risk of retribution or negative career implications.

    That, in turn, promotes a culture where anyone can raise concerns, debate potential outcomes, and challenge whether a project reflects the company’s purpose or contributes to society. This introspection can contribute to changes in practice or reveal that industry standards fall short of a company’s values and aspirations. For example, a corporate venture manager for a global chemical company thought he had found a solution to the critical problem of how plastics can be made to biodegrade in the ocean. A startup had invented a plastic additive that would meet the industry standard. But that standard made it acceptable to leave 30% of plastic trash on the ocean floor.

    The manager passed on the opportunity. “How would it reflect on us if we harmed the ocean and we knew we were going to?” he explained. Pausing to question its industry’s standards rather than simply accepting them, and considering them in light of its own ethical values, helped this company choose a more responsible course of action.

    In another case where the scope of the new technology was more ambiguous, an advisory committee that included staff experts and an ethicist was formed to function as a reference group for decision-making. The focus was on managing risk, complying with regulations, assessing public acceptability, and reminding managers of the core mission. Advisory committees like that one can provide an organizational support mechanism as managers grapple with whether to support the emergence of a specific technology and how it aligns with the company’s values and mission.

    Practicing reflexivity involves the following steps:

    • Clearly establish and promote the team’s awareness of the company’s mission and purpose to inform discussions on whether commercializing a new technology aligns with them. Considering both the technology and the business model, determine how the effects of commercialization will be measured, such as quality of life improvement, cost reductions, or waste minimization.
    • Assess psychological safety throughout the organization. Ensure that the culture lets employees feel safe raising concerns about the project’s impact on any stakeholder group, within the company or in society, and that they take responsibility for doing so. Also ensure that employees are included in the discussion that leads to a decision about how or whether the company will continue to pursue the opportunity. If the culture is not conducive to challenges from within, identify a secure mechanism through which team members can express their concerns.
    • Check inherent biases. Team members, the team leader, and the relevant governance board must ask about and assess what industry, professional, or company norms may influence their evaluation of the technology.
    • Train members of a governance board, or anyone charged with making decisions about the project, in ethical decision-making and the company’s priorities, and ensure that they’re empowered to make those decisions.

    Inclusion: What do stakeholders think? Inclusion — the process of engaging with stakeholders — is essential to getting beyond the limitations of business leaders’ assumptions about a technology and its uses. It also promotes transparency and trust in the organization and builds relationships that can support a business’s social license to operate. Note that inclusion is not the process of building consensus with stakeholders; rather, it ensures that decisions are considered from multiple perspectives.

    Managers don’t need to seek input from everyone who might have an opinion. Rather, they can focus on the most relevant stakeholders, who may include scientists, nonprofit advocacy groups, professional associations, civic volunteers, local governments, competitors, end users, suppliers, and experts in related or adjacent fields. Discussions with these groups help ensure that there is a common understanding of the science in question, the problems it can address, and potential threats it might introduce.

    When inclusion succeeds, a company can see the potential consequences of a given technology for different stakeholders and modify its commercialization approach, if necessary. One CVC group that had earlier chosen not to include advocacy groups in its discussions about genetically modified foods now regularly engages with them to surface and understand stakeholders’ concerns about its emerging technology investments, check the scientific merit underpinning those concerns, and share the results of its research.

    These practices help the organization make better-informed, forward-looking decisions. Our source noted that this effort, while costly, time-consuming, and sometimes politically charged, has raised technical issues that have prompted the company to back away from some product formulations and pivot toward others. In addition, the company has faced fewer controversies that could undermine the public’s trust.

    An inclusive process encompasses these steps:

    • Undertake a deliberate consideration of which key stakeholders will be affected, directly or indirectly, by the technology’s commercialization. Review the list to ensure that it encompasses a broad set of stakeholders.
    • Invite representatives of each identified group to participate in discussions about the consequences and alternative scenarios associated with the company’s commercialization of the emerging technology. The objective is to maximize understanding of the consequences to these stakeholders.
    • Measure the public’s trust in the company throughout the development process. Consider developing an advisory board of external stakeholders to establish, maintain, and strengthen public trust as decision points arise during development.

    Responsiveness: How will we be held accountable for our decision? Well-developed competency in anticipation, reflexivity, and inclusion can limit the need for crisis management, but companies can still be caught by surprise when a technology has unintended consequences. Responsiveness calls for decision makers to accept responsibility for the consequences of commercialization and to take action to resolve issues that arise as users begin to experience the technology firsthand.

    In some cases, interviewees described small experiments their companies had conducted prior to a technology launch to test for potential unintended consequences. One such company was using early-stage nanomaterial technology for surface modification, which had many possible applications. The first one it explored was an aerosol spray for keeping surfaces clean. Aware of stories circulating in the media about the potential toxicity of nanoparticles, the company looked into whether those fears had any basis. Its review of the scientific literature found early indications that inhalation was indeed a potential issue, so it pivoted to different applications.

    Taking action to resolve concerns usually involves adjustments or pivots, which require a capacity to learn from experience, changing conditions, and stakeholder feedback. Managers can start the conversation about how to pivot by reviewing the risks and opportunities from a scientific perspective. Then they can engage stakeholders more broadly to identify potential issues around societal acceptance. These inputs can form the basis for envisioning a path on which the new technology and end product are more sustainable and beneficial for everyone. When that balance is found early on, it may even inform regulation and become the new standard, which can yield commercial advantage. Companies that respond quickly and visibly to adverse results can earn the public trust that’s needed to maintain a loyal and growing customer base.

    In settings where there is poor learning transfer between projects, individual project members may struggle to build a capacity for change. Here, interviewees identified unlearning practices as valuable in supporting an agile capability. As one shared with us, “I wish we had organizational learning and unlearning practices. We’re good at learning, we’re somewhat OK at sharing learnings, but we’re nowhere near maturity at unlearning as an organization. We do fortnightly showcases, monthly town halls, and more. But how do you reference all those learnings to go forward? We record the learning, and we drop it somewhere, but there’s no rule to make you go back there.”

    Responsiveness also affects resource allocation and reallocation decisions. And it requires balancing the opportunity to reap short-term profit against negative consequences for some stakeholders. For example, one manager described a new technology that was being integrated into a customer-facing IT platform. Although the team was aware of a problem in the system, the pressure to adhere to an established timeline and achieve revenue targets was too great to delay the project. The team acknowledged its trade-off decision and the resulting technical debt it was accountable for: Proceeding would incur a bigger future cost than delaying the launch to fix the bug would have.

    Several of the CVC managers we spoke with recognized that their companies were accountable for any negative consequences that resulted from the technologies they introduced to the market — even if legal liability lay elsewhere. “We bear the risk, not the technology developer,” said one. “The buck stops with us. We need to own it.”

    Responsiveness can be practiced in the following ways:

    • Before launching the product, conduct market tests with specific segments of the population to check for potential negative consequences in more controlled settings, and resolve them before launch. Where negative consequences to specific stakeholders are obvious, respond preemptively.
    • Institute monitoring mechanisms to identify early warning signs of unanticipated consequences. For example, stakeholder groups that were engaged earlier (as part of inclusion) can act as watchdogs, informing leadership of any potential misuses of the technology as they arise. Have a governance mechanism in place and decision criteria about the degree of harm or unintended consequence the company is willing to allow, if any.
    • Establish mechanisms for sharing lessons learned from projects, to help managers and teams share experiences and takeaways from current or recent projects and unlearn practices that are outdated.
    • Recognize that responsibility for negative consequences ultimately rests with the company, and be prepared with a budget and contingency plans — including withdrawal of the product from the market, if necessary — at the first sign of a problem.

    Putting Responsible Innovation Into Action

    Anticipation, reflexivity, inclusion, and responsiveness aren’t stages of a linear process. Every step along the product development path may invoke one or more of these principles as new information about an innovation emerges. Incorporating these principles into the innovation process systematically will require change — and a willingness to slow down occasionally when hard questions come up.

    There can be obstacles to adopting the RIC framework, such as competitive threats that increase pressure to prioritize profits; weak or ambiguous mission and value statements that lead to disagreements about ethical principles; and a lack of sufficient resources to support an organizationwide responsible innovation capability.

    Ideally, top leaders will commit to responsible innovation and commercialization practices. However, decision makers at any level can use the RIC framework to change the practices of their teams, even if full-scale organizational adoption proves difficult.

    Managers who use the questions we’ve provided above to monitor specific projects and raise questions in ongoing conversations may influence the rest of their organization. As they apply the principles and demonstrate results, the case for responsible innovation governance across the business can grow. One of the managers we interviewed has since been promoted to a leadership role where he is applying responsible innovation principles more widely in his organization.


    Today’s rapid scientific and technological advancement means that there are increasing opportunities for companies to commercialize new discoveries. At the same time, they face increased responsibility to understand how the innovations they develop will affect society and to guard against harm. Leaders who don’t take that responsibility seriously risk losing customers and stakeholder trust and may face financial consequences. But by applying the principles in the Responsible Innovation and Commercialization framework, they can transcend the limits of regulations, corporate practices, and industry standards that have yet to catch up with the latest developments.

    With a commitment to making ethical decisions and a consistent approach, companies can deliver the benefits of new technologies to the market — and the world — safely.


    References

    1. P. Rana, “How a Chinese Scientist Broke the Rules to Create the First Gene-Edited Babies,” The Wall Street Journal, May 10, 2019, www.wsj.com.

    2. E. Pain, “To Be a Responsible Researcher, Reach Out and Listen,” Science, Jan. 17, 2017, www.science.org; and I. Nakamitsu, “Responsible Innovation for a New Era in Science and Technology,” UN Chronicle 55, nos. 3 and 4 (December 2018).
