Blog | Monday October 1, 2018
Are Companies and Stakeholders Focusing on the Same Sustainability Priorities?
We looked at online news and social media conversations around the business practices of 4,000 global companies across sectors to determine alignment between company and stakeholder priorities.
This year, we conducted our 10th annual survey on the State of Sustainable Business in collaboration with GlobeScan. This year’s results show that the line between sustainability issues and mainstream business issues is increasingly blurred as businesses experience the consequences of social and political disruption in a rapidly changing world.
This conclusion is most striking when we look at this year’s top two priority issues, both included for the first time to capture changes in the business landscape: ethics/integrity (a high priority for 76 percent of companies) and diversity/inclusion (a high priority for 71 percent of companies). While ethics has traditionally been regarded as the domain of the compliance function and diversity/inclusion has primarily been seen as a human resources issue, sustainability practitioners rank these issues at the top—suggesting, at minimum, that they are coming to be considered priorities from a holistic, organizational perspective.

Our member companies also told us that they view corporate reputation as the primary driver of their sustainability efforts, at a time when public trust in business continues to decline. In response to this, we worked with our big data partners at Polecat to understand whether companies are correctly prioritizing the issues that are most relevant to their reputations: Specifically, we looked at online news and social media conversations around the business practices of 4,000 global companies across sectors.
In online media reporting across the board, the top two priorities were (1) ethics and integrity and (2) diversity and inclusion, with climate change, water, and community health following closely behind. This seems to suggest that company priorities are highly aligned with issues in the news.
While human rights is not explicitly called out separately as a high priority in online media, each of the top five priority issues that emerged (listed above) has a strong human rights component. Journalists seem to be increasingly focused on the impact that business has on communities and other stakeholders.
We also used Polecat to track social media attention toward business—a more direct reflection of which issues the public is focused on than online media reporting affords. On Twitter and other social media sites, people are overwhelmingly focused on questions of corporate ethics and integrity, even more intently than in online media.
The second-highest concern for the public was corporate influence over public policy—arguably a corporate integrity issue. In recent years, scrutiny over lobbying and political financing has mounted dramatically. Indeed, the level of social media attention to this issue eclipses concerns about such topics as company approaches to diversity, climate change, and water.
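The kind of media analysis described above can be illustrated with a minimal sketch. Note that the issue taxonomy, keywords, and posts below are hypothetical, and the actual Polecat methodology is not described in this post; this simply shows the general shape of ranking issues by how often they surface in a corpus:

```python
from collections import Counter

# Hypothetical issue taxonomy and keyword mapping (illustrative only;
# not the taxonomy or method actually used in the analysis)
ISSUE_KEYWORDS = {
    "ethics/integrity": ["ethics", "integrity", "corruption", "lobbying"],
    "diversity/inclusion": ["diversity", "inclusion", "equity"],
    "climate change": ["climate", "emissions", "carbon"],
    "water": ["water", "drought"],
}

def rank_issues(posts):
    """Count how many posts mention each issue's keywords,
    then rank issues by number of mentioning posts."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for issue, keywords in ISSUE_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                counts[issue] += 1
    return counts.most_common()

posts = [
    "New report questions the ethics of corporate lobbying",
    "Company X pledges to cut carbon emissions by 2030",
    "Diversity and inclusion efforts lag across the sector",
    "Integrity concerns dominate shareholder meeting",
]
print(rank_issues(posts))
```

A real analysis at the scale of 4,000 companies would need far more sophisticated text classification, but the underlying question is the same: which issues dominate the conversation.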

This finding indicates that, for all the concern over reputational risk, companies may not be ready to address the public’s more challenging questions about interaction with policymakers, as well as what is perceived as the undue influence exercised by business. Given recent increases in transparency and business activism on social issues, it is likely that scrutiny of how corporate rhetoric aligns with corporate political spending will continue. BSR also considers it highly likely that questions about good jobs and tax avoidance will, over time, become more central to public understanding of what it means to be an ethical company.
Overall, despite some obvious gaps, companies generally seem to be paying attention to the issues of greatest potential reputational impact. The next question is how they should organize themselves to respond.

Approaches to diversity and integrity pose big questions for the entire organization, not just the sustainability function. While this year’s survey results show that three-quarters of practitioners are working to better integrate sustainability issues into strategic planning, they also indicate that these efforts have had limited success so far. Corporate communications, procurement, and public affairs continue to be the internal areas in which sustainability has the most traction, though fewer than half of respondents deemed their organization’s communications on sustainability effective at present.
Overall, the 2018 State of Sustainable Business results give reason for both optimism and caution when considering how prepared business is to respond to disruptive trends. Companies are clearly hearing concerns. However, they are still focused on messaging, reputation management, and on the symptoms—rather than the root causes—of reputational risk.
Finally, companies are realizing that most of the profound challenges they face require alignment and collaboration across the organization. This has led directly to our focus at BSR on exactly how and when to collaborate with key functions across the organization, as well as our reframing of the sustainability practitioner’s role as that of change agent, futurist, and builder of coalitions.
Blog | Thursday July 14, 2022
Delaying Climate Action: The Challenges of Moderating Climate Misinformation on Social Media
At a point when delays in climate action may lead to catastrophic and irreversible harm, companies must address climate misinformation urgently and decisively. Our new brief explores the specific challenges of moderating climate misinformation.
Misinformation about climate change has been around for decades, mostly in the form of climate denialism. Today, climate misinformation is focused on seeding doubt about climate science and the measures taken to mitigate climate change. Examples include suggesting that the consequences of global warming may not be as bad as scientists claim, arguing that climate change policies are bad for the economy or national security, describing clean energy as unreliable, or claiming that no action will be able to halt climate change.
These varying manifestations of climate misinformation all have the same outcome: delaying climate action.
Earlier this year, the IPCC drew attention to the impacts of climate misinformation for the first time:
Vested interests have generated rhetoric and misinformation that undermines climate science and disregards risk and urgency. Resultant public misperception of climate risks and polarized public support for climate action is delaying urgent adaptation planning and implementation.
Social media brings both opportunities and risks to the climate science dialogue. Scientific information related to climate change is accessible to larger populations through social media platforms—including real-life experiences of affected populations. On the other hand, social media can significantly undermine climate science by allowing for the rapid and widespread sharing of misinformation through user-generated content and online advertising. Social media platforms are just one part of a broader information ecosystem that also includes news media and professionally created entertainment.
At a point when delays in climate action may lead to catastrophic and irreversible harm, companies must address climate misinformation urgently and decisively.
In 2021, Ford Foundation, the Ariadne Network, and Mozilla Foundation commissioned a research project to explore grantmaking strategies that can address issues at the intersection of environmental justice and digital rights. As part of this project, BSR wrote an issue brief on the role of social media companies in creating, shaping, and maintaining a high-quality climate science information environment.
The brief explores the specific challenges of moderating climate misinformation. We describe some of these challenges below:
- Climate Misinformation Is Happening in “Subtler” Ways and Is Increasingly Intersectional. While outright climate denialism is easy to refute, it is more difficult to identify subtle ways of spreading climate misinformation—such as claims that green policies are too costly. The response to climate change is a topic of political debate, making climate misinformation closely tied to politics, elections, and the larger civic space. These intersectional ties help grow the reach of misinformation and take it to different levels that can be difficult to anticipate.
- Existing Content Moderation Frameworks Are Not Sufficient to Address Climate Misinformation. Most of the content moderation principles and frameworks used by social media platforms today were written to address immediate harms related to hate speech, incitement to violence, and other objectionable content; they are less applicable to scientific misinformation that may be associated with broader, longer-term harms.
- Content Removals May Not Be Adequately Effective in Fighting Scientific Misinformation. While the removal of content is effective in fighting harmful content such as hate speech, scientific misinformation may require different approaches. Platforms should not only rely on content removal but also focus on tactics to reduce the visibility of misinformation and display high-quality information to inform users.
- Climate Misinformation Is Political and Is Backed by Institutions. Since the 1980s, climate disinformation campaigns have been largely driven by the fossil fuel industry’s intentional efforts to undermine climate science. Today, climate misinformation can still typically be traced to fossil fuel interests. In addressing climate misinformation, it is important to consider the material incentives of the producers of such content.
In our brief, we make recommendations to social media companies, civil society actors, and funders. These include adding climate misinformation to content policies, applying content moderation frameworks to climate misinformation, strengthening fact-checking capabilities, investing in user resiliency, and increasing scrutiny of advertising by oil and gas companies. We envision a high-quality climate science information environment that supports informed public debate, ambitious business action, and science-based policy making.
Social media companies have made significant commitments to reduce the climate impacts of their businesses (i.e., reducing GHG emissions), but they also have a responsibility to address the potential harms that they may be connected to through climate misinformation on their platforms.
Civil society groups and funders have an essential role to play in holding companies accountable for their actions or omissions in addressing climate misinformation and in keeping this topic on the agenda. Among these actors, it is our observation that environmental groups are less familiar with the practical challenges, complexities, and nuance of misinformation, while the content governance community is less familiar with how climate misinformation can adversely impact our collective efforts to address the climate crisis. These communities would benefit from increased collaboration and knowledge sharing.
Fostering a deeper understanding of this topic across sectors will not only help remove one of the biggest barriers in the way of climate action, but it will also broaden our understanding of scientific information, and how human rights may be impacted online.
BSR will continue to work with social media companies, civil society groups, and funders on this topic. Please reach out if you’re interested in connecting with us.
Blog | Wednesday March 6, 2019
#ThisIsALeader: Raising the Profile of Women Making a Difference
Out of the spotlight and away from the media attention, women throughout global supply chains are quietly taking on leadership roles and driving change for their colleagues, families, and communities.
When you think of inspirational women leaders in business, who comes to mind? Is it Mary Barra, Chairman and CEO of General Motors—the first female CEO of a major global automaker? Or maybe Indra Nooyi, who served for 13 years as CEO of PepsiCo, one of the largest food and beverage businesses in the world?
With women at the helm of such large enterprises, it’s hard to believe that there had not been a woman CEO of a Fortune 500 company until 1972—when Nooyi was 17. No doubt she and Barra faced formidable obstacles in their journeys to leadership, including discrimination against women in education, in business, and across societal expectations. In spite of this, they rose to prominence and success, blazing a trail for others to follow, and we are right to celebrate what they have achieved.
However, the celebration of women leaders should not be limited to the C-suite. Out of the spotlight and away from the media attention, women throughout global supply chains are quietly taking on leadership roles and driving change for their colleagues, families, and communities. To highlight these lesser-known leaders on International Women’s Day, we have partnered with four committed partners of BSR’s HERproject—The Estée Lauder Companies, Inc., Nordstrom, UGG®, and Williams-Sonoma, Inc.—to raise the profile of and increase support for women leaders right across the supply chain, from corporate offices to the factory floor.
Our #ThisIsALeader campaign celebrates workers from factories in China, India, Vietnam, and other countries where HERproject is active. Through HERproject, these women have become peer educators: sharing knowledge and skills on health, financial inclusion, and gender equality with their colleagues. And as leaders in their respective communities, these peer educators advocate on behalf of their colleagues, provide them with support, and show what is possible even in societies where women often face overwhelming challenges.
Take Sapna, a HERproject Peer Educator in Agra, India:
I was the sole breadwinner of my family for a long time and I was responsible for the overall well-being of everyone at home. For reasons we never discovered, one of my brothers fell ill and suffered a paralytic stroke. He was bedridden for months. My mother was also unwell and eventually passed away after a heart attack. That happened just after my engagement.
Because of this series of sad events, culminating in my mother’s death, the groom’s family decided that I was a bad omen and called off our engagement. These were really bitter and tough experiences for me. But they only strengthened my determination to succeed in life.

I didn’t know what the HERfinance program was, but when I heard about it, I was curious, and I thought that I wouldn’t lose anything by attending it. So, I took part in the trainings at the factory and they helped me to regain the confidence I had lost. I realized that, along with education, financial planning is critical for our generation. The trainings made me decide to spend wisely and save so that my future is secure.
My brother has now recovered from his illness. He recently started a new job and is looking forward to a bright future. For myself, I’m pursuing higher education again and I’m hoping to graduate.
That’s just one of the amazing stories we hear through HERproject. Like Sapna, women leaders within brands are driving commitment to women’s empowerment. They are connected across geographies by the belief that they can help others and improve the lives of people around them—especially fellow women.
In addition to our #ThisIsALeader social media campaign, we are hosting an evening event on Thursday, March 7, to celebrate women’s leadership. Senior leaders from The Estée Lauder Companies, Inc., UGG®, and Williams-Sonoma, Inc. will join nonprofit leaders, public officials, and members of the media for a discussion on what is needed to ensure that women around the world can fulfil their potential as leaders. If you would like to join us, please register your interest here.
Beyond celebrating women across the global supply chain, we want to catalyze support so that they can go further as leaders: spreading knowledge, belief, and confidence, and unlocking the potential of women around them. Because we believe that when business and partners work to unlock this potential, the impact will be unprecedented.
Ahead of International Women’s Day, we call on businesses to commit to empowering women leaders right across your supply chain. And we invite you to join the conversation on social media by sharing a photo of an unsung woman leader with the sentence “#ThisIsALeader because…”. Together, we can raise the profile of the women who are making a difference and step up our support for them as they drive positive change.
Blog | Tuesday March 2, 2021
Beyond Former User @realdonaldtrump: A Human Rights-Based Approach to Content Governance
How should a company’s responsibility to respect human rights according to the UNGPs manifest itself in the context of content governance? BSR shares its four-part approach to human rights and content governance.
The recent action taken by social media companies against former user @realDonaldTrump following the insurrection at the Capitol sparked a fierce debate.
For some, the decisions were a legitimate, necessary, and proportionate response to the obvious incitement of violence and insurrection: a new line had been crossed, and tough action was merited.
For others, this action came too late—the damage had already been done, and the lack of action against previous transgressions (such as quoting “when the looting starts, the shooting starts”) had allowed users to push the boundaries of online speech too far, for too long.
For others still, the decisions were a problematic infringement on the right to freedom of expression and a step too far for companies to take.
A debate raged on about whether and how democratically elected leaders should be treated differently online. For some, the public has a right to hear from their elected officials, no matter how objectionable the message; for others, elected officials should be afforded less freedom when incitement to violence is at stake, owing to the substantial influence that the speaker has with the audience and increased risk of harm.
These are real dilemmas with potentially conflicting rights at stake, such as freedom of expression, public safety, and the right to participate in government. There are no simple solutions (if you think there are, here is a reading list) and we have nothing but admiration for those inside social media companies genuinely seeking to take principled approaches to resolve these dilemmas. It is not an easy task, and it's one where no good deed goes unpunished.
BSR’s engagement in this dialogue has focused on how social media companies can apply a human rights-based approach to the challenge of online speech. In other words, how should a company’s responsibility to respect human rights according to the United Nations Guiding Principles on Business and Human Rights (UNGPs) manifest itself in the context of content governance?
With this objective in mind, we are publishing a short paper today setting out a four-part approach to human rights and content governance, based on a combination of the UNGPs and the various human rights principles, standards, and methodologies upon which the UNGPs were built. These four parts are as follows:
- Content policy—statements about what is and is not allowed on a social media platform should encompass all human rights, be founded upon human rights standards and instruments, and draw upon engagement with affected stakeholders.
- Content policy implementation—given the challenges of enormous scale and rapid speed, companies should prioritize implementation based on the severity of human rights risk (globally, not limited to the United States), understand the link between online content and offline harm in the relevant local context, and provide effective remedy when mistakes are made.
- Product development—the features, services, and functionalities of social media platforms are constantly evolving, and it is important that potential human rights impacts are assessed during the development and deployment process, especially for high-risk and conflict-affected markets.
- Tracking and transparency—companies should maintain quantitative and qualitative indicators of the effectiveness of their approach and be transparent about the rationale for important content decisions, with reference to relevant human rights considerations.
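The severity-based prioritization described in the second part can be sketched as a simple triage exercise. The UNGPs characterize severity in terms of the scale and scope of a harm and how hard it is to remedy; the report fields, scoring scale, and queue below are hypothetical illustrations, not any platform's actual system:

```python
from dataclasses import dataclass

@dataclass
class ContentReport:
    """A flagged piece of content awaiting moderation review
    (all fields and scores are hypothetical illustrations)."""
    report_id: str
    scale: int          # gravity of the potential harm (1-5)
    scope: int          # breadth of people potentially affected (1-5)
    remediability: int  # difficulty of undoing the harm (1-5)

def severity(report: ContentReport) -> int:
    # The UNGPs assess severity by scale, scope, and remediability;
    # a simple sum of the three serves as a toy proxy here.
    return report.scale + report.scope + report.remediability

def triage(queue):
    """Order the moderation queue so the most severe potential
    human rights impacts are reviewed first."""
    return sorted(queue, key=severity, reverse=True)

queue = [
    ContentReport("r1", scale=2, scope=3, remediability=1),
    ContentReport("r2", scale=5, scope=4, remediability=5),
    ContentReport("r3", scale=3, scope=2, remediability=2),
]
for report in triage(queue):
    print(report.report_id, severity(report))
```

The point of the sketch is the ordering principle, not the arithmetic: limited review capacity goes first to the content most likely to be connected to severe, hard-to-remedy harm, globally rather than in any single market.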
There are two important features to highlight about this approach.
First, these four parts constitute a robust framework of ongoing human rights due diligence that enables content decisions to be made thoughtfully and deliberately, grounded in rights-based analysis, rather than “on the go” or according to the whim of the moment. They emphasize that the process matters as much as the decision itself—that while different people or companies may reach different decisions, these decisions should be intellectually consistent, defensible on human rights grounds, and conveyed transparently.
Second, these four parts encompass more than simply what content is and is not allowed on a platform. Our approach assumes that international human rights law and the UNGPs provide an overall framework for principled decision making and action, not a “copy and paste” set of content rules for companies to follow. The four parts are intended to be considered as a package and enable companies to adapt as the reality of social media use unfolds.
One of the concerns most frequently expressed over recent weeks has been the unease that companies have so much power—with some arguing that decisions about content should be a role for governments, not companies.
However, there are three reasons why we believe companies should play a role in content governance, and thus why setting out a human rights-based approach for companies remains essential.
First, many of the most significant public policy proposals on content governance—such as the U.K. Online Harms White Paper, the EU Digital Services Act, and proposals to reform U.S. Section 230—envision a very important role for companies taking responsibility for harm associated with content on their platforms.
Second, these public policy proposals relate to specific jurisdictions, whereas the internet is global. Human rights-based approaches enable consistency across international borders, including in jurisdictions where laws and regulations conflict with international rights standards. Indeed, a global approach based upon international human rights standards provides a strong foundation to push back against governments seeking to suppress freedom of expression and other rights.
Third, the UNGPs state that companies have a responsibility to address the adverse human rights impacts with which they are involved. User-generated content clearly has a connection to adverse human rights impacts, and therefore a human rights-based approach to content governance is essential to meet the responsibility of companies to address this connection.
Indeed, the most promising near-term contribution to dilemmas relating to @realDonaldTrump won’t come from government, but from the independent Facebook Oversight Board, which is due to review the former user’s suspension from the platform.
We hope that this paper provides a useful contribution for how respecting human rights and implementing the UNGPs can provide a foundation for this infrastructure, and we welcome comments to amend, improve, and build on this approach.
Blog | Wednesday June 19, 2019
Is Stakeholder Engagement the Key to Successful Community Standards?
Ongoing debates about leadership, governance, and regulation of social media are highly relevant to any stakeholder engagement discussion for platforms like Facebook, YouTube, and Twitter.
Building stakeholder trust has become a core goal for corporate executives. With some of the biggest investors publicly challenging corporations to think beyond short-term financial goals, companies are working to map, anticipate, and respond to concerns across societal interest groups.
This task daunts most companies, not least global social media platforms such as Facebook, YouTube, and Twitter. These platforms seek to calibrate and reflect societal views, but in the process, they have become powerful actors that dramatically affect the trajectory and impact of popular expression. How they set and implement content policies on issues such as terrorism, hate speech, and political and religious extremism has directly impacted the lives of billions of people in hundreds of countries. Today, there is a consensus that technology platforms should no longer make high-stakes decisions without granting the public structured visibility and input.
Ongoing debates about leadership, governance, and regulation of social media are highly relevant to any stakeholder engagement discussion. The public will not differentiate an organization’s approach to content standards from its view of the organization’s overall behavior. But even a transformation of the regulatory and competitive environment for social media platforms will leave open the question of how best to set and implement content policies in the best interest of society—and what that interest is. There are no easy answers. How, for example, is freedom of expression to be protected without undermining privacy?
BSR has provided independent advice to Facebook on stakeholder engagement relating to its content policies (named “Community Standards”). Our work is based on our five-step methodology and our belief in the importance of proactive stakeholder engagement strategy. It suggests some principles for engagement by social media platforms that could help set direction for more long-term solutions, such as Social Media Councils.
Why engage?
Engagement is needed to proactively identify areas in which content policies, newsfeed prioritization, and algorithms driving ads need to evolve to meet user expectations, social norms, and international standards—both to identify new issues that haven’t been explored and to revise policies on known issues as they evolve. Further engagement must then balance and resolve diverse perspectives on contentious topics which, given the near-universal reach of social media, could conceivably cover every issue of interest to any stakeholder anywhere in the world. To align with human rights and sustainability frameworks, engagement principles must be transparent, comprehensible, and consistent, even as issues play out in radically disparate ways across different geographic contexts.
Who is a stakeholder?
For most companies, stakeholder mapping involves categorizing stakeholder groups—typically investors, regulators, customers, suppliers, civil society organizations, and relevant communities. For social media platforms, however, the stakeholder landscape poses unprecedented challenges of scale, diversity, and complexity. Consider impact and representation: Beyond contemplating billions of users (itself a task of gigantic scale), social media platforms also need to consider “rightsholders”—employees, contractors, customers, and individuals whose images or words are shared even when they are not platform users. Given the sheer number and diversity of rightsholders, social media platforms need to locate organizations capable of speaking on their behalf. Depending on circumstances, rightsholders might be represented by civil society organizations, activist groups, or policymakers. How credibly any given stakeholder can represent a specific interest or opinion always requires deep analysis.
How should stakeholders be prioritized?
Unconscious biases, external pressures, and commercial incentives can easily foster approaches that fail to reflect the full range of effects on rightsholders. Given the human rights impacts of social media platforms, it is important to prioritize the voices of such vulnerable groups as rights defenders, political dissidents, women, young people, minorities, and indigenous communities.
Stakeholder mapping exercises need to begin with the landscape of contentious issues upon which to engage stakeholders. Terrorism, hate speech, sexual harassment, bullying, and disinformation are obvious examples, but new issues emerge constantly. For each issue, perspectives can be mapped across linguistic, geographical, and social identities, supplementing user data with academic expertise. This enables identification of representative organizations, should they exist. As a result of this mapping exercise, platforms will be able to evaluate gaps, seek expertise to fill them, and at least avoid uninformed attempts to balance a spectrum of views.
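The mapping exercise described above can be thought of as a matrix of contentious issues against required perspectives, with gaps surfacing wherever no credible representative has been found. A minimal sketch, with hypothetical issues, perspective categories, and coverage data chosen purely for illustration:

```python
# Hypothetical issues and stakeholder coverage (illustrative only)
issues = ["terrorism", "hate speech", "disinformation"]
coverage = {
    # issue -> perspectives for which a credible representative
    # organization has already been identified
    "terrorism": {"academic experts", "affected communities"},
    "hate speech": {"academic experts", "civil society orgs",
                    "affected communities"},
    "disinformation": {"academic experts"},
}
required = {"academic experts", "civil society orgs", "affected communities"}

def find_gaps(issues, coverage, required):
    """For each issue, list the perspectives that still lack a
    credible representative -- the gaps engagement should fill."""
    return {issue: sorted(required - coverage.get(issue, set()))
            for issue in issues}

for issue, missing in find_gaps(issues, coverage, required).items():
    print(issue, "->", missing or "fully covered")
```

In practice the matrix would also be cut by language, geography, and social identity, but the output is the same: an explicit list of who is not yet at the table for each issue.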
What is the best way to gather perspectives?
Social media platforms face strong incentives to transparently disclose their consultation processes—and the stakeholder perspectives they yield—but some vulnerable people and identity groups will prefer anonymity, for good reasons. While candid, one-on-one conversations with stakeholders can build trust, they are narrow in focus, extremely resource-intensive, and invite questions regarding overall balance and focus. Setting up groups according to geography or issue expertise is more efficient and can boost a platform’s analytical capacity, but groups are challenging to analyze and can develop blind spots. Given all this, a mix of approaches and formats is most appropriate.
Social media companies are beginning to experiment with advisory councils that typically comprise stakeholders that have a mature understanding of company policies and processes and can credibly represent interest groups or perspectives. This necessarily limits diversity and inclusivity, inviting allegations of elitism. The payment of honoraria to council members can be viewed as compromising their independence, but lack of compensation raises questions about exploitation of stakeholders and practical constraints on their ability to contribute. Setting standard industry practices and “arms-length” mechanisms would help to address this dilemma.
Facebook has proposed creating an independent oversight board to review the company’s most difficult decisions about content, and it is considering the board’s role with respect to content policy advice, too. This provides an additional channel for input, and Facebook will benefit from explaining how the board affects policy over time.
How should decisions be made and disclosed?
Intent on retaining accountability for their content decisions, social media platforms are unable and unwilling to outsource policy control. A key challenge they face will be to explain how—and why—collecting stakeholder views can and will inform their internal decision-making.
The platforms must also incorporate local political and social nuances without undermining their own global consistency; issues of origination and impact mean that setting national boundaries around content and opinion raises more questions than it solves. On highly contentious issues such as terrorism, hate speech, and incitement to violence, social media platforms need to credibly draw on existing academic expertise and then capture the range of values and opinion without defaulting to the median (or most commercially friendly) option. Human rights frameworks offer an appropriate reference point in that they take scale and severity of impact as a starting point and proceed to consider immediate, cumulative, and longer-term developments in navigating difficult trade-offs. For example, a decision to limit hate speech on a platform may protect vulnerable groups while having a detrimental long-term effect on democratic participation.
For now, the power to determine, promote, and limit content rests with a handful of companies that are both overwhelmingly powerful and keenly exposed to public anger. Rather than focusing on molding friendly legislation or reacting arbitrarily to the latest incident, social media platforms should continue to develop institutional approaches while steeling themselves to invite broader public participation.
Facebook is already posting minutes from its Content Standards Forum. As it works toward broader disclosure, the company should embrace full transparency of its positions and policy determinations and the relevant internal user data that informs them.
BSR’s work with Facebook raises questions, answers, and then more questions. What is clear is that in this rapidly evolving area, cross-industry and multi-stakeholder collaboration should become a priority. The prospect of effective external oversight from society remains nascent and contested.
Blog | Thursday March 23, 2017
Lessons in Technology and Convergence from SXSW 2017
Here are four trends that ran through this year’s South by Southwest (SXSW) in Austin, Texas.
Healthcare and social media platforms, dogs and tech wearables, environmental conservation and virtual reality—to many, these topics seem dramatically divergent. But while attending this year’s South by Southwest (SXSW) Interactive conference in Austin, Texas, I found that they are anything but.
SXSW brings together global professionals from across industries to network and explore diverse topics in the worlds of tech, entertainment, and culture. Tracks ranged from brands and marketing to health, and from web development and coding to government. And while these tracks attracted participants working in completely different arenas, one underlying theme ran throughout the event: convergence.
As Kevin Clark of the Center for Digital Media Innovation and Diversity at George Mason University put forth in a session on addressing gender norms in entertainment media, the world of technology and media is an ecosystem; it doesn’t operate in isolation. More and more, companies, particularly in the technology sector, are partnering with other companies, foundations, governments, and nonprofits to collaborate on global issues and use emerging technologies to drive social and environmental progress. In our increasingly connected world, businesses and others are finding that they, too, must find ways to come together.
Under the overarching theme of convergence, here are four trends from SXSW for companies in all sectors to keep an eye on.
1. The emergence of augmented and virtual reality (AR/VR)
These terms were everywhere, and you couldn’t walk more than a city block without an opportunity to play around with the latest gadgets. While AR and VR technologies are still nascent, many conversations revolved around the question of how they will be used—including as a tool for generating empathy and driving and scaling societal change. I put on VR goggles to go “under the canopy” in the Amazon rainforest as part of an effort led by Conservation International with support from HP Inc., Jaunt, Johnson & Johnson, the MacArthur Foundation, and the Tiffany & Co. Foundation to raise awareness of the region and its value to human well-being. In a session on VR as a tool for accelerating global policy change, I also learned about Merck For Mothers’ work using VR to immerse policymakers in maternal health issues in their countries and remind government officials of the healthcare for which they are responsible.
2. Connections between technology and journalism, activism, and politics
Following 2016’s Brexit vote and U.S. presidential election, much of SXSW was focused on what the current political climate means for big data tracking, communications and journalism, and the relationship between government and technology. As U.S. Senator Cory Booker stated in the Interactive opening keynote, “If we are silent in the face of injustice, we are complicit in that injustice.” Booker was joined on stage by Google’s Senior Counsel on Civil and Human Rights Malika Saada Saar, and they discussed the role technology played and will continue to play in politics and how it can be an essential tool for “re-stitching our fractured society.” Other sessions also focused on fake news and “alternative facts”—one speaker offered potential solutions for social media platforms to fix algorithms so that they eliminate echo chambers and show users more diverse views, with the aim of reducing political polarization. She also suggested that, as artificial intelligence (AI) expands, those companies will have more responsibility for their influence on users.
3. Partnerships between tech and other sectors
Innovation and disruption reign at SXSW, and this year emphasized how tech intersects with other sectors, particularly health and medicine. For example, David Karp, the CEO of Tumblr, is leading an initiative to create tech community support for Planned Parenthood—using #TechStandsWithPP as the rallying hashtag to generate support for access to healthcare in the United States. In his keynote, former U.S. Vice President Joe Biden celebrated the technology sector for its dedication to cancer research. He urged innovators to help with electronic data-sharing and to connect and empower cancer patients. Other notable discussions included lessons that healthcare can borrow from tech, including using rideshare apps to connect patients with medical services, using VR to alleviate pain after surgery, and even how technology can enhance veterinary medicine and the relationship between humans and dogs.
4. Machine learning and AI
Nearly every day, the SXSW program featured multiple sessions on the future of AI and its societal implications. While AI and automation will benefit society in many ways, they are also already negatively affecting jobs and changing the nature of work. With the ubiquity of AI on the horizon, conversations throughout the week focused on everything from the ethical use of AI therapists to chatbots and other AI innovations in marketing to governance, policy, and transparency issues.
Innovation is everywhere, and SXSW showcased how previously separate worlds are converging to connect people to information and services, increase empathy for global societal issues, and effect policy change.
And now, thanks to the countless innovations exhibited at SXSW, I’m also one step closer to finally being able to “speak dog.”
Blog | Wednesday February 26, 2025
Protecting Children in the Digital Environment: The Role of Impact Assessments
BSR Technology and Human Rights experts discuss the benefits of conducting a Child Rights Impact Assessment and key takeaways from their recent report.
There is more awareness of the impact of technology on children than ever before. Despite the many benefits that digital technology provides to children (e.g., access to education, free expression, and maintaining social connections), concerns are rising around the adverse impacts on mental health, attention, and protection from harm.
In 2023, Amazon entered a US$25 million settlement with the US Department of Justice and Federal Trade Commission over charges related to children’s privacy. In 2024, concerns about children and social media platforms regularly made headlines. State attorneys general in the US sued social media companies alleging harms to children’s safety and well-being. The US surgeon general called for warning labels on social media platforms, and several tech CEOs publicly apologized to parents whose children were harmed by social media.
In response, regulators have begun to take action to address these harms. The UK passed the Online Safety Act, which requires online platforms to prevent children from accessing age-inappropriate or harmful content. In Australia, efforts to protect children from harm have resulted in a social media ban for children under 16. Companies developing and deploying tech tools are being required to take stock of their impacts on children’s rights and the effectiveness of their measures to address such impacts.
While companies, governments, and civil society actors are increasingly invested in addressing the adverse impacts of technology on children, approaches remain inconsistent and fragmented. Research shows that company approaches are often focused on protection issues (e.g., illegal content and freedom from exploitation or sexual abuse) and responding to legal mandates, which may cause issues related to children’s participation in the digital environment (e.g., freedom of expression or access to culture) to be overlooked. Furthermore, current approaches to assessing impacts on children are often seen by companies as a "one-and-done" exercise, rather than an ongoing process that integrates external perspectives and evolves as new technologies and use habits arise.
Several factors complicate our ability to fully understand and mitigate the adverse impacts of technology on children, including the rapid pace of technological advancement, regulatory discrepancies across different jurisdictions, and the limited availability of data about children across diverse ages, socioeconomic statuses, gender identities, geographies, and individual circumstances. As a result of these challenges and the gaps in their current approaches, companies often fail to track and mitigate evolving risks to children.
Child Rights Impact Assessments
Companies have a responsibility to identify and address adverse human rights impacts associated with their operations, products, and services.
To assess impacts on children in particular, companies can conduct Child Rights Impact Assessments (CRIAs). CRIAs are an effective way for companies to systematically evaluate their impacts on child rights as defined in the Convention on the Rights of the Child and other internationally accepted human rights and child rights instruments. Similar to a Human Rights Impact Assessment, a CRIA uses a methodology informed by the UN Guiding Principles on Business and Human Rights (UNGPs) and seeks input from rightsholders to identify a company’s impacts on children’s rights, prioritize these impacts based on their severity, and determine appropriate action to address them.
In 2023, UNICEF engaged BSR to explore how companies are leveraging CRIAs and help develop a CRIA tool that companies operating in the digital environment can use to systematically identify, assess, and address their impacts on child rights. The project involved extensive research into existing resources to assess companies’ impacts on children, a review of current CRIA tools and practices, an assessment of child rights considerations in new regulations, and engagement with 130 stakeholders.
BSR has published a paper that brings together key findings from this research, as well as our observations on how companies are currently approaching child rights impact assessments in relation to the digital environment. It is a precursor to the digital environment CRIA tool that UNICEF will publish in 2025.
Benefits of Conducting CRIAs
BSR’s review of the current landscape showed that while many companies seek to understand their actual and potential impacts on children in the digital environment, few use CRIAs to do so. For companies operating in the digital environment, CRIAs can be a helpful tool for several reasons:
- CRIAs allow companies to assess their impacts against the full list of child rights. Assessing against a comprehensive list of rights ensures that key issues or new impacts are not overlooked and constitutes a defensible methodology that is grounded in international human rights instruments.
- CRIAs can enable age-appropriate design through early consideration of risks and opportunities. Proactively engaging stakeholders and identifying the potential risks/opportunities allows companies to integrate these considerations into the design of a technology product and address harms before they become more severe, which can happen quickly in the digital environment.
- CRIAs can support companies’ regulatory compliance efforts. Regulations like the EU Digital Services Act, the UK Online Safety Act, the Australian Online Safety Act, and the Corporate Sustainability Reporting Directive require companies to assess risks to people, including children, and implement mitigation measures. CRIAs can help companies address these requirements by aligning with the UNGPs’ due diligence approach and integrating findings into broader human rights and regulatory risk assessments.
As the digital environment continues to shape children’s lives and regulatory expectations continue to grow, respecting, protecting, and fulfilling child rights should be a priority for all businesses that engage with digital technologies—whether they are developers, deployers, or users. UNICEF’s development of a CRIA tool specific to the digital environment is a critical step in that process.
Explore BSR’s full report for detailed insights on why CRIAs are an essential practice and what to expect from UNICEF’s forthcoming tool.
Blog | Monday November 5, 2018
Our Human Rights Impact Assessment of Facebook in Myanmar
The question of how social media platforms can respect freedom of expression while also protecting users from harm is one of the most pressing challenges of our time.
Today, Facebook has published the human rights impact assessment (HRIA) of the company’s presence in Myanmar that it engaged BSR to undertake earlier this year.
The question of how social media platforms can respect the freedom of expression rights of users while also protecting rightsholders from harm is one of the most pressing challenges of our time.
This challenge is even more testing in Myanmar, where the majority of the population is still developing the digital literacy required to navigate the complex world of information sharing online, and where a minority of users are seeking to use Facebook as a platform to undermine democracy and incite offline violence. The lack of rule of law and recent political, economic, and social history in Myanmar add to the challenging environment.
Last month, the UN Human Rights Council’s Independent International Fact-Finding Mission on Myanmar concluded that serious crimes under international law have been committed by military and security forces that warrant criminal investigation and prosecution. The use of Facebook to spread anti-Muslim, anti-Rohingya sentiment featured prominently in the Mission’s report.
In this context, the commissioning of this comprehensive HRIA is an important new milestone in business efforts to undertake due diligence and respect human rights. Facebook’s decision to publish the report in unredacted form also demonstrates an impressive commitment to transparency and sets an example for other companies to follow.
The challenging human rights context for Facebook and questions about the company’s response have been well-documented by international bodies, civil society organizations, and the media. Given this context, we didn’t want the HRIA to reproduce what was already known—so we asked ourselves, what can we do to help increase human rights protections in the future?
At the basic level, addressing this question meant thoroughly applying the UN Guiding Principles on Business and Human Rights to the letter—identifying and prioritizing Facebook’s actual and potential human rights impacts and making recommendations for how to address them.
However, we concluded that answering this question would require us to be disciplined in two very important ways.
First, it was essential to take an inclusive approach. We consulted directly with over 60 potentially affected rightsholders and stakeholders during two visits to Myanmar by BSR staff, and we made sure that we spoke to as diverse a range of voices as we were able to in the time available. This approach generated very important insights. While many stakeholders—and Facebook itself—emphasized that Facebook’s efforts have fallen short in the past, we witnessed a strong determination from everyone we spoke with, inside and outside Facebook, to address human rights in Myanmar as a matter of utmost importance and urgency. We are hopeful about collaborative efforts, both now and over the long-term.
Second, it was essential to take a distinctly forward-looking approach. Understanding the mistakes and shortcomings of the past established crucial context, but it was important that we focused on forward-looking analysis and recommendations.
This approach led us to make recommendations in five main areas. These recommendations are shaped by the observation that Facebook’s human rights impacts in Myanmar cannot be addressed by Facebook alone, but also require system-wide change:
- Governance and accountability at Facebook, including human rights policies, formalized governance structures, and public communications.
- Community standards enforcement by Facebook, such as continuing to build a cross-functional team that understands the local Myanmar context, as well as a stricter implementation of Facebook’s credible violence policy and further development of artificial intelligence solutions that support human decision-making.
- Engagement, trust, and transparency, such as publishing a local, Myanmar-specific version of Facebook’s Community Standards Enforcement Report and supporting international mechanisms created to investigate violations of international human rights.
- System-wide change, including advocacy efforts aimed at policy, legal, and regulatory reform and continued investment in efforts to increase digital literacy and counter hate speech.
- Risk mitigation and opportunity enhancement, such as preparations for the 2020 elections in Myanmar and for the possibility that WhatsApp becomes more commonly used, as well as the development of new products and services that accelerate growth of the digital economy in Myanmar.
The blog post Facebook published to accompany the HRIA described in some detail the company’s stance and current status for each of these five areas.
We believe that the implementation of these recommendations is far more important than the HRIA itself. For this reason, we hope that the publication of the HRIA in full provides three main benefits: first, to provide a clear action plan for Facebook; second, to increase awareness of the human rights approaches available to Facebook and other technology companies engaged in Myanmar; and third, to stimulate further dialogue, collaboration, and solutions.
The report was made possible thanks to the many stakeholders and rightsholders in Myanmar who participated in the assessment process. We encourage everyone interested in the human rights situation in Myanmar to read the report in full, and we look forward to updates from Facebook in the future on its progress implementing the recommendations.
Blog | Wednesday September 27, 2017
How We Can Use Big Data to Understand Sustainability Risks and Opportunities
BSR has partnered with big data intelligence software firm Polecat to understand sustainability risks, opportunities, and stakeholder perceptions.
We live in a world where sustainability challenges are among the most contentious topics of conversation for millions of individuals who are now able to publish their experiences, hopes, dreams, and fears online. Local conflicts and problems can quickly escalate into global reputational threats to companies, as stakeholders share their perceptions and concerns across online and social media, driving fluid and evolving advocacy agendas. At the same time, businesses have unprecedented opportunities to engage with customers, employees, suppliers, and communities to enhance their products and services, manage their risks, and tackle systemic challenges.
In short, stakeholder engagement has never been more important—or the opportunities to leverage it to better understand material issues this diverse. That is why today we are delighted to announce our new partnership with big data intelligence firm Polecat. Polecat is an award-winning and analyst-endorsed software firm based in London that helps organizations interrogate online conversations to identify strategic and operational risks for their reputations and licenses to operate. The company uses artificial and human intelligence to scan and interpret the universe of unstructured data from online and social media, giving users rapid insight into key trends and influencers shaping debate about their businesses as related to specific assets, brands, geographies, and value drivers to enable faster, smarter decisions and interventions.
BSR will use the Polecat platform to help our members understand and anticipate changes in our current dynamic environment—work that will complement our new Sustainable Futures Lab, which we introduced last week. This will allow BSR to scope consulting and research projects that use big data analysis to amplify our expertise on key sustainability topics.
Polecat’s platform will enhance our ability to help members explore stakeholder perceptions about their material risks and opportunities, conduct sector and country analysis, and design more robust engagement plans. We will be able to provide data-driven insights on material issues, which can be used to define, sense, or differentiate sustainability priorities. We will also be able to examine both existing perceptions and emerging issues, or ‘weak signals,’ to help organizations create more future-oriented strategies. Over time, we will work with Polecat to design new offerings and tools for our members.
“We are proud and delighted that BSR has chosen to work with Polecat to help enhance its members’ insights and decisions with regard to the many pressing sustainability challenges that the world faces, and where business and markets have a critical role to play in helping shape the future,” said Polecat CEO James Lawn. “Our software is developed precisely to provide actionable intelligence on these types of risks and opportunities that are so defining of corporate reputations today.”
By combining Polecat’s insights with our deep expertise in climate change, human rights, inclusive economy, supply chain sustainability, sustainability management, and women’s empowerment, we will continue to help companies create transformative strategies that are fit for the future.
As BSR celebrates its 25th anniversary and Polecat its 10th, we look forward to collaborating to enhance how we advise and support our members in making the smartest, best-informed decisions for a just and sustainable world. Caroline Skipsey from Polecat will speak on a panel on the "Era of Misinformation" at the BSR Conference 2017 in Huntington Beach, California, this October, and we invite you to join us to discuss opportunities to leverage these capabilities for your sustainability strategy.
Blog | Thursday September 22, 2022
Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021
BSR reviewed the human rights impacts of Meta’s company policies and activities during the May 2021 crisis in Israel and Palestine. Here are the results.
In September 2021, Meta commissioned BSR to review the human rights impacts of the company’s policies and activities during the May 2021 crisis in Israel and Palestine. Today, we are publishing the results of BSR’s analysis in Arabic, English, and Hebrew.
BSR would like to thank everyone who participated in this review and provided their valuable time, insights, and perspectives.
The discussions we held with affected stakeholders to inform this review brought home to us how many detailed decisions about social media policy, technology, and practice have fundamental impacts on the protection, realization, and fulfillment of human rights as a common standard of achievement for all peoples—especially in the context of highly complex social and historical dynamics. We hope this review informs actions that result in meaningful improvements in the daily lives of all people connected to Israel and Palestine.
The primary purpose of the human rights due diligence is to provide Meta with prioritized, action-oriented, decision-useful, and forward-looking recommendations for policies and practices. In doing so, this human rights due diligence helps fulfill Meta’s commitments under its Corporate Human Rights Policy and responsibilities under the United Nations Guiding Principles on Business and Human Rights (UNGPs).
Specifically, Principle 20 of the UNGPs states that companies should track the effectiveness of their response to human rights impacts by engaging with stakeholders, integrating findings into relevant processes, and driving continuous improvement. Principle 22 states that companies should provide for or cooperate in the remediation of adverse impacts, including seeking to guarantee non-repetition of prior harms.
This human rights due diligence also helps fulfill the recommendation of the Meta Oversight Board that Meta should engage an independent entity not associated with either side of the Israeli-Palestinian conflict to determine whether Meta’s content moderation in Arabic and Hebrew has been applied without bias.
BSR found that Meta took many appropriate actions during the May 2021 crisis, including establishing a special operations center and crisis response team, prioritizing risks of imminent offline harm, seeking an approach to content removal and visibility based on necessary and proportionate restrictions consistent with the International Covenant on Civil and Political Rights (ICCPR) Article 19(3), and overturning policy enforcement errors in response to user appeals. For this reason, some of BSR’s recommendations build upon important foundations for a human rights-based approach to content governance that have already been established by Meta.
However, BSR also identified a variety of adverse human rights impacts for Meta to address, including impacts on the rights of Palestinian users to freedom of expression and related rights, the prevalence of anti-Semitic content on Meta platforms, and instances of both over-enforcement (erroneously removed content and erroneous account penalties) and under-enforcement (failure to remove violating content and failure to apply penalties to offending accounts).
BSR did not identify intentional bias at Meta, but did identify various instances of unintentional bias where Meta policy and practice (such as insufficient routing of Arabic content by dialect or regional expertise), combined with broader external dynamics (such as efforts to comply with US law), lead to different human rights impacts on Palestinian and Arabic-speaking users.
BSR has made 21 recommendations to Meta to address these adverse human rights impacts and bias.
Four recommendations relate to content policy, such as reviewing Meta’s policies relating to content that praises or glorifies violence, and specific elements of Meta’s Dangerous Individuals and Organizations policy.
Four recommendations relate to transparency, such as increasing the breadth, specificity, and granularity of information provided to users about Meta content policy enforcement, such as when action is taken on their content or accounts.
Ten recommendations relate to operations, such as determining the market composition (e.g., headcount, language, location) needed for rapid response capacities, the routing of potentially violating Arabic content to reviewers by dialect and region, improving classifiers (algorithms that assist with content moderation by identifying and sorting content that may violate Meta’s content policies), developing mechanisms to track hate speech based on type, and enhancing content moderation quality control processes to prevent large-scale errors.
Finally, three recommendations relate to systems change that goes beyond just Meta, such as how counterterrorism law applies to the social media industry and support for access to remedy.
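Two of the operational ideas above—routing potentially violating Arabic content to reviewers by dialect, and classifiers that flag content for human review—can be illustrated with a toy sketch. All queue names and terms below are hypothetical, and real content classifiers are machine-learned models rather than keyword lists; this only shows the shape of the workflow:

```python
# Toy sketch of dialect-aware routing plus a stand-in "classifier".
# Not Meta's actual system: queue names and policy terms are invented.

REVIEWER_QUEUES = {
    "arabic-levantine": "queue-levant",
    "arabic-gulf": "queue-gulf",
}
FALLBACK_QUEUE = "queue-generalist"  # dialects with no dedicated review team

def route(dialect: str) -> str:
    """Route content to a dialect-specific review queue, else to generalists."""
    return REVIEWER_QUEUES.get(dialect, FALLBACK_QUEUE)

def flag_for_review(text: str, policy_terms: set) -> bool:
    """Minimal stand-in for a classifier: flag if any policy term appears."""
    lowered = text.lower()
    return any(term in lowered for term in policy_terms)

print(route("arabic-gulf"))      # queue-gulf
print(route("arabic-egyptian"))  # queue-generalist
```

The routing table makes the gap BSR identified concrete: any dialect missing from the table silently falls through to generalist reviewers, which is exactly the kind of under-resourcing that produces uneven enforcement.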
The UNGPs lay out the expectation that Meta should avoid infringing on the human rights of others and should address adverse human rights impacts with which it is involved. In a conflict-affected context like Israel and Palestine, this includes understanding how ongoing conflict dynamics intersect with Meta’s platforms, how the online actions of a range of actors are possibly shaping offline events, and which groups are particularly vulnerable to adverse human rights impacts connected to Meta’s platforms given the conflict context.
We believe that BSR’s human rights due diligence will help Meta fulfill these expectations, and we look forward to Meta making progress reviewing and implementing our recommendations. We also hope that the insights we’ve shared today can inform the efforts of policy makers and other social media companies addressing complex challenges of content governance globally, especially in conflict-affected contexts.