May 13, 2026
Guests
- Lindsey Andersen
Associate Director, Responsible Tech, BSR
Lindsey works at the intersection of technology and human rights, helping both tech and non-tech companies identify and address human rights impacts associated with the development and use of technology, and effectively incorporate business and human rights practices. Her focus areas include content governance, end-use risks of tech products and services, and the implications of artificial intelligence (AI) and other emerging technologies.
Prior to joining BSR, Lindsey worked with digital rights organization Access Now to drive the conversation on the human rights implications of AI. As part of this, she wrote the foundational report Human Rights in the Age of Artificial Intelligence. Lindsey previously worked at Internews, implementing a large portfolio of internet freedom projects across Latin America, which focused on equipping journalists and human rights defenders with digital security skills and defending the free and open internet. Lindsey has worked and lived across Latin America and is fluent in Spanish and Portuguese.
Lindsey holds a Master’s in Public and International Affairs from Princeton University and a BA in Political Science and International Studies from the University of Nebraska-Lincoln.
Recent Insights From Lindsey Andersen
- Responsible AI in Practice / May 13, 2026 / Audio
- Taking a Responsible Approach to AI: A Guide for Business / March 5, 2026 / Insights+
- Human Rights Across the Generative AI Value Chain / February 25, 2025 / Reports
- Effective Engagement with Technology Companies / May 23, 2024 / Reports
- A Business Guide to Responsible and Sustainable AI / March 27, 2024 / Insights+
- Lale Tekişalp
Associate Director, Responsible Tech, BSR
Lale focuses on the intersection of human rights and technology, helping companies integrate human rights considerations into their technology products, services, and broader business operations. She specializes in responsible AI, end-use risks of tech products and services, and online platform risks.
Lale leads BSR's Tech Against Trafficking collaboration, which brings together some of the largest tech companies in the world to advance the use of technology to fight human trafficking and modern slavery, and to address human trafficking on online platforms and in global supply chains. Lale also leads BSR’s new multi-company working group focused on Labor Rights in the AI Data Supply Chain, which aims to promote labor rights for data enrichment workers preparing datasets and training algorithms that form the basis of AI.
Prior to BSR, Lale worked at Microsoft’s Cloud and Enterprise division, where she played a key role in establishing the company's cloud business across Turkey, the Middle East, and Africa. She also worked with the Partnership on AI to help build its AI for Social Good strategy.
Lale holds an MBA from UC Berkeley's Haas School of Business, where she focused on responsible business practices. She received a BA in Political Science and International Relations from Bogazici University in Istanbul, Turkey.
Recent Insights From Lale Tekişalp
- Responsible AI in Practice / May 13, 2026 / Audio
- Taking a Responsible Approach to AI: A Guide for Business / March 5, 2026 / Insights+
- From Innovation to Impact: Key Takeaways from the 2025 Tech Against Trafficking Summit / November 19, 2025 / Blog
- Harnessing AI in Sustainability: Emerging Use Cases / September 17, 2025 / Reports
- Protecting Children in the Digital Environment: The Role of Impact Assessments / February 26, 2025 / Blog
- David Stearns
Managing Director, Marketing and Communications, BSR
David leads BSR’s marketing and communications initiatives, working with a global team to amplify the organization’s mission and showcase its activities, impacts, and thought leadership to members, partners, and the wider business and policy community.
David previously worked for The B Team, a group of global business and civil society leaders working to catalyze a better way of doing business for the well-being of people and the planet. Throughout his 20-year career, he has worked with businesses and nonprofits in economic development, public health, and sustainability to define and communicate their purpose and impacts.
He has built high-impact communications campaigns for a collaboration to improve maternal health in Zambia and Uganda, driven top-tier media coverage for a major economic development project in upstate New York, and helped strengthen parliamentary capacity and voter education efforts in South Africa and Zambia. He began his career as a newspaper reporter.
David earned his M.A. from The Elliott School of International Affairs at the George Washington University and his B.A. in Journalism and Political Science from Michigan State University.
Recent Insights From David Stearns
- Responsible AI in Practice / May 13, 2026 / Audio
- Safeguarding Human Rights in High-Risk and Conflict-Affected Areas / April 14, 2026 / Audio
- Doing Business in an Era of Geopolitical Conflict / April 7, 2026 / Audio
- A Year of Uncertainty: Maintaining Progress Amidst a Battle of Ideas / February 13, 2025 / Audio
- A Conversation with Mario Abreu, Group VP, Sustainability, Ferrero / February 6, 2025 / Audio
Description
Lindsey Andersen and Lale Tekişalp, Associate Directors of BSR’s Responsible Tech practice, join host David Stearns to unpack the responsible AI landscape, its origins and evolution, and what responsible AI looks like in practice for tech companies developing models and products, and for those deploying and using these technologies. Together, they explore the role of sustainability teams in responsibly developing and deploying AI across environmental and societal considerations.
Transcription
David Stearns:
Welcome to BSR Insights, a series of conversations on emerging and cross-cutting business, economic, and social issues. Drawing on BSR's expertise from more than three decades of leadership in sustainable business, we'll help practitioners and decision-makers navigate today's increasingly complex world. I'm your host, David Stearns.
Thank you for joining us. Over the last couple of years, AI, and particularly generative AI, has been increasingly mainstreamed into everyday life and into the operations, products, and services of many companies. While these innovative technologies undoubtedly offer new opportunities, they also raise questions about risks to people, increased environmental impacts, and how these technologies should be governed to ensure transparency and accountability.
Today, we're kicking off a three-part series on responsible AI. In today's episode, we'll be exploring what we mean when we speak about responsible AI and what that looks like in practice, both for the tech companies developing new models and products and for the companies deploying and utilizing these technologies.
This year, BSR launched a new Responsible Tech practice and I'm happy to be joined by two of our leaders in this space who will help guide us through this rapidly evolving landscape. From New York, we're joined by Lale Tekişalp and from San Francisco, Lindsey Andersen. Lindsey and Lale are both Associate Directors on BSR's Responsible Tech team, where they advise companies on how to responsibly develop and deploy technology in line with international standards, including human rights. As a part of this, they do quite a bit of responsible AI work with companies, including things like developing responsible AI principles and strategies, conducting risk assessments and setting up risk assessment processes, and facilitating stakeholder engagement.
They also lead BSR's responsible AI working group, which brings together BSR members across industries to discuss challenges and lessons learned in operationalizing responsible AI. Welcome, Lindsey and Lale.
Lindsey Andersen:
Hello.
Lale Tekişalp:
Thanks for having us, David.
David Stearns:
Hello both. So we'll dive right in. I'm going to start with you, Lindsey, to sort of level set us. We're hearing a lot about responsible AI these days, but it's not a new topic. Can you give us a lay of the land?
Lindsey Andersen:
Yes. It is definitely not a new topic, but the field of responsible AI really follows the evolution of AI itself. I think it's helpful to think about pre-generative AI and post-generative AI periods. So responsible AI kind of coalesced as a field in the twenty-teens, which is when we saw the first AI boom. It was much smaller than the AI boom we're currently experiencing, but it was still considered a boom at the time. And in that case, it was predictive AI and it was the era of big data and machine learning. Folks probably remember those terms being thrown around.
It was during this time we started to see different types of AI technologies and use cases emerge—things like facial recognition and recommendation algorithms. You had virtual assistants like Siri on your phone. And then for corporate use cases, we started seeing things like demand forecasting, fraud detection, credit risk scoring, dynamic pricing, identifying job candidates, and hiring. All of these kinds of different use cases are actually still quite common today. And this was back in the twenty-teens when these emerged.
During this period, there was also a growing awareness of the risks associated with different AI technologies and some of the use cases I just mentioned—things like bias and discrimination, privacy issues, et cetera. And it was really through this kind of first understanding of risks that responsible AI was born. It started out of academic research but then pretty quickly, some of the major tech companies started drafting AI principles and hiring responsible AI teams.
We also started to see the first international responsible AI guidelines at that time—things like the OECD AI principles. Some companies in other industries started to work on responsible AI too, but it tended to be pretty concentrated in highly regulated industries, like financial services and healthcare. BSR started to work on responsible AI during this period as well. So that was the pre-generative AI period of responsible AI. Of course now, we are in this generative AI boom, which started with the release of ChatGPT back in 2022, and we're still very much in this period today.
This kind of marked a real shift in responsible AI because suddenly, we had this general-purpose technology that was widely available to individual users and required very little technical skill to use and do a bunch of cool things with. And so AI adoption inside of companies went from these more narrow, predictive AI systems utilized by really select teams, maybe focused on data analytics, to suddenly every employee using gen AI tools. There are tons of new use cases being rolled out all the time. Employers are really encouraging employees to innovate and adopt AI really rapidly.
Now we have things like customer service chatbots, enterprise knowledge assistants, and coding tools. You can do really complex data analysis just using natural language prompts. You don't need technical skills anymore. So this is the new era of responsible AI we're in, and we are starting to enter the agentic AI boom period—agentic AI being AI systems that can plan and execute multi-step tasks. We're still definitely in the early days here. A lot of the current use cases being branded as agentic AI are really just generative AI models with some slightly more advanced automation, but they're pretty narrowly bounded in terms of what they can do and the systems they access.
A common example that we're seeing is customer service virtual agents that can be chatting with a customer, but also pulling data from internal systems. And then they can take some bounded tasks like issuing a refund, for example. We are not yet at the point where there are fully autonomous agents that are independently executing tasks across a bunch of different surfaces. There are a variety of technical, safety, and security reasons for that, but that is definitely where we're headed.
So this is the period we're in now and because of this boom—increasing adoption, increasing integration of AI across all parts of businesses—responsible AI has become much more important for companies in every single industry that are developing and deploying AI. Because of how fast adoption is happening and how rapidly the technology is evolving, responsible AI teams are now having to grow and scale quite quickly and figure out how to really deeply embed responsible AI throughout the business in a way that hasn't been necessary before. This is obviously a big challenge and we'll talk more about it.
David Stearns:
That's such a great intro, and I can only imagine the challenges facing companies. I mean, just the general ground rules for using tech within companies at the most fundamental level are always so complicated. And now we layer in this responsible piece. For that, I'll turn quickly to Lale. How do we as BSR define responsible AI, and what is our role in the adoption of responsible AI by the companies that we work with?
Lale Tekişalp:
Yeah, great question. We're hearing this term so much these days. It's good to stop and think about what it really means. So at BSR, we define responsible AI as a set of practices that companies undertake to ensure that AI technologies benefit society and do not harm people and the environment. And these practices are informed by both voluntary AI standards, as well as regulatory developments. At BSR, we see responsible AI as an enabler of AI adoption and innovation. It's something that needs to be built in from the start as companies are adopting AI technologies. It's not something that can be added on later as a reactive measure and that's what we try to help companies do.
So that's a short answer to your question, but let me add a little bit more color to what I mean. I think one thing that's really important about our perspective is that, as BSR, we believe that social and environmental impacts are at the core of AI risks and opportunities. Currently, within the responsible AI landscape, we see a lot of attention on policies, principles, and governance frameworks focused on compliance, which of course is absolutely necessary, but we also think it's insufficient, because it's the real-world impact of AI on people and the environment that will create harms to society and risk for business.
Looking at this from the perspective of a company or an organization, let's say you're evaluating a vendor or a partner, or working on a transaction, or researching a company that says, "Oh, we have a good responsible AI policy." That's a good start, but it only means so much if their AI system is later found to show bias or create harms for children, et cetera. So assessing the real world impacts of AI is really critical because such impacts can lead to not just sustainability risk, but material business risk such as regulatory exposure, operational risk, reputational damage, and more.
And I'll add two other important perspectives here. So at BSR, we encourage companies to think about the full AI value chain when considering risks and opportunities. Often when companies think of AI impacts, they think of downstream impacts related to the use of AI, such as job displacement, bias, discrimination, et cetera. However, social and environmental impacts also occur in the upstream AI supply chain, such as worker rights issues related to data enrichment service providers or increased water use and land conversion for data centers that power AI systems.
So all of these impacts should be considered as part of responsible AI efforts. It's also important to think about responsible AI in the short, medium, and long term. Often companies focus on short-term impacts; however, AI technologies may also have impacts over the longer term. For example, in the short term, responsible AI could be about managing impacts related to accuracy, data security, and bias, but in the long term it also means understanding how AI affects information quality or the labor market, et cetera.
David Stearns:
Thanks for that, Lale. You mentioned in your comments the importance of companies assessing some of the risks that come with the adoption or deployment of AI. So Lindsey, I'll turn it back to you to get a sense of what that actually looks like in practice for companies. What do we expect to see from companies around responsible AI practices and methods?
Lindsey Andersen:
So first of all, I'll clarify that there's no kind of cookie cutter, one-size-fits-all approach to responsible AI, but there are several elements that we would expect to see at any relatively large company that are pretty broadly applicable. And we have a set of practices that we recommend to BSR member companies that are grounded in both AI-specific standards that exist, like the NIST AI Risk Management Framework and the various ISO standards, but also that align with the general corporate responsibility standards that we all know and love, like the UNGPs and the OECD guidelines.
I will just flag a few of the practices, because we could be here all day talking about all of the things that companies should do. The first is kind of the starting point for everything, which is the overall governance structure, and there are several key elements here. The first is a set of corporate-wide responsible AI principles, or a policy, that outlines a company's approach to developing or deploying AI and connects responsible AI to business strategy and to other statements of corporate values, like your human rights policy or your sustainability strategy.
This is super important because having a set of corporate principles sets a foundation for everything else and for how you're actually going to operationalize responsible AI. It can become an important check on the business if you're thinking about use cases that are riskier. Making sure you're deploying AI in line with your principles can be a helpful guiding light for the company. So they are a very important thing to have in place. Then, some of the other governance components—you need board oversight, often a board subcommittee. You need senior management responsibility so that there is decision-making power to institute policies and processes and to make go/no-go calls on AI systems.
And then you need some sort of centralized coordinating body to keep it all together. This could be a team, a cross-functional working group and it's definitely best practice for sustainability teams to be a part of this in some way. And then lastly, on the governance structure front, you need to make sure that the day-to-day responsibilities of responsible AI are delegated appropriately across the company. Given that AI adoption is kind of everywhere these days, pretty much every single team and business unit is going to have a role to play in the responsible development and deployment of AI.
So you need to make sure the legal team knows what their role is, the HR team knows what their role is, technical and IT teams know what their role is, et cetera. Those are the core components of the governance structure layer of things. But then to get specifically to your question about risk assessment, David, you need a risk assessment process that considers risk to people and the environment and that can scale across the company. What this means is, you need to be able to identify all your main AI use cases and systems, but within that, you need to be able to identify and escalate high-risk products or high-risk use cases that can undergo a deeper review and be subject to go/no-go decisions.
We often call these gating and escalation processes. And the idea here is, you can't do a risk assessment on every single AI use case, and you don't want to overburden teams with a bunch of unnecessary work if the use case is really low-risk. You've got to figure out how to create a process that scales, and this scaling issue is actually one of the main challenges that pretty much all responsible AI teams are facing these days, just because of how broadly and rapidly AI is being adopted inside companies. It's really hard to keep track of everything.
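To make the gating idea concrete, here is a minimal sketch in Python of how a triage step like the one Lindsey describes might work. The use-case fields, tier names, and thresholds are invented for illustration; they are not BSR's or any company's actual criteria.

```python
from dataclasses import dataclass

# Illustrative gating-and-escalation triage: every registered AI use case
# gets a lightweight screen, and only the high-risk ones are escalated
# for a deeper assessment and a go/no-go decision.

@dataclass
class AIUseCase:
    name: str
    affects_rights: bool       # e.g., hiring, credit, or content decisions
    consumer_facing: bool      # exposed to external users
    sensitive_data: bool       # processes personal or sensitive inputs
    autonomous_actions: bool   # acts without a human in the loop

def triage(uc: AIUseCase) -> str:
    """Assign a review tier; the criteria here are hypothetical."""
    flags = sum([uc.affects_rights, uc.consumer_facing,
                 uc.sensitive_data, uc.autonomous_actions])
    if uc.affects_rights or flags >= 3:
        return "escalate"    # full risk assessment plus go/no-go review
    if flags >= 1:
        return "standard"    # routine review with documented mitigations
    return "fast-track"      # logged in the inventory, no deep review

# A resume-screening tool is escalated; a meeting summarizer is not.
print(triage(AIUseCase("resume screening", True, False, True, False)))     # escalate
print(triage(AIUseCase("meeting summaries", False, False, False, False)))  # fast-track
```

The specific thresholds matter less than the shape of the process: low-risk uses skip the heavy review, while anything touching people's rights is escalated automatically.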
I think the last kind of component I'll talk about is a really underappreciated one, and that is performance incentives. A lot of companies don't pay attention to performance incentives and they can really clash with responsible AI goals. For example, you might have sales targets that incentivize sales reps to make a sale of an AI tool to a customer without appropriate due diligence to identify risks, or you might have product launch targets that make it really difficult for teams to take the time they need to identify and mitigate risks. So it's important to take a look at your existing performance incentives and make sure that they align with responsible AI goals and are incentivizing employees to do what they need to do to achieve your responsible AI principles in practice.
So those are kind of some core things to keep in mind, but what responsible AI looks like in practice is really going to vary depending on the type of company, your place in the AI value chain. So if you're a tech company, you're mostly a developer—you're going to be focusing on responsible product development and maybe sales due diligence. If you're more of a deployer, you're going to be focusing on procurement and reviewing use cases and AI literacy among your staff. And then of course, things like size and resources matter. Small companies are probably going to have much more informal light touch processes, larger companies are going to need processes that are much more robust and embedded, and there's definitely a maturity curve for responsible AI as well.
As I think everyone can probably tell by now, achieving the level of embeddedness that's needed can be pretty complex. It can take several years before companies can tick most of the boxes of responsible AI. In terms of where things are at today, we definitely see that the AI developers, which are mostly the tech industry, are much more mature and further along. As I mentioned in my intro, that's where responsible AI started. So they've been doing it for longer and have had the chance to iterate and learn and figure out best practices. The same goes for the highly regulated industries, like healthcare and financial services, which are further along. But others are honestly catching up quite quickly.
I think probably the majority of large companies have some sort of responsible AI process up and running at this point. But one big gap that we're seeing across the board still is in how companies are thinking about risk. Lale mentioned this—we see a lot of compliance, legal, regulatory, maybe cybersecurity risk orientations. Those issues are really important of course, but they're pretty limited. It's important to be able to take a holistic approach to risk that also considers risk to people and the environment, in addition to those more business type risks, because that's going to be most effective in preventing harm to people and the environment and then actually achieving responsible AI in practice.
So I think that's the area that we would really like to see companies make more progress on—being more holistic about how they think about risk.
David Stearns:
Lale, anything to add?
Lale Tekişalp:
I would just add that many of the things that Lindsey talked about—I don't think those are new things. Companies are used to setting up such governance structures and setting policies and cross-functional teams to deal with similar issues. So we don't want companies to think that these are all brand new concepts. Of course, in the context of AI, they would be applied differently, but I think a lot of the companies that we work with are familiar with many of the things that Lindsey described.
David Stearns:
Lindsey, you talked a bit about the upskilling and competence that need to be developed quickly, particularly for companies on the deployer side, around proper governance. Are either of you seeing how boards are getting up to speed quickly enough to be effective at the governance required to manage the deployment of these systems in a responsible way? Or is that a challenge that we're seeing across industries?
Lindsey Andersen:
I think it's safe to say it's a challenge. We're starting to see more board governance resources and at BSR, we even offer board responsible AI trainings. But I think there's definitely a question about, do you have the right people on your board to actually achieve effective oversight? There's a certain amount of upskilling, of course, that any board probably has to do. But particularly if your AI adoption could be higher-risk because, say you're a healthcare company, maybe you need to consider bringing on a board member who has the expertise there because the level of governance and oversight they're going to need to provide is going to be more technical and more substantial than at a company where maybe AI adoption is more like employee productivity and business process-based, and there's less risk to people related to it. Lale, I don't know if you have any thoughts you would add.
Lale Tekişalp:
I think the big challenge we're hearing from boards on this topic is balancing the adoption of AI with the risks, right? Well, not only from boards, but from companies in general. So being aware of how the social and environmental risks of AI and these responsible AI topics we're discussing are actually impacting business risk is, I think, key for boards.
David Stearns:
Lindsey, you had mentioned that there's an important role for sustainability teams to play in the development and deployment of responsible AI. I think we're all well aware that sustainability teams are quite adept at understanding material risks through their due diligence processes. So I'll turn it over to you, Lale, to talk a little bit about the role of sustainability teams in AI deployment.
Lale Tekişalp:
Yes, certainly. So what we're currently seeing is that responsible AI practices are often led by cross-functional groups that include teams like data science, privacy and compliance, legal, et cetera. We strongly believe that sustainability teams should be part of these cross-functional groups that are leading responsible AI. They should have a seat at the table and there are a number of reasons for that. The first one is that sustainability teams can bring significant value to this work by coordinating with existing initiatives to understand the company's environmental and social impacts, risks, and opportunities because that's what sustainability teams do, or part of what they do.
The second reason I would say is that sustainability teams already have toolkits that can be leveraged to address AI's impacts. So think of things like materiality assessments, human rights assessments, stakeholder engagement practices, sustainability reporting. These are all things that are critical for responsible AI governance and really can plug into some of those elements that Lindsey described. We see a lot of reinventing the wheel across the responsible AI field, but there are established best practices for assessing and managing risks to people and the environment that can be applied in the context of AI.
For example, the human rights framework offers a great foundation and toolkit to look at the social impacts of AI. It offers international standards that are agreed upon by governments and companies, and an established methodology to assess and address impacts. And finally, it's increasingly integrated into policies and regulations around the world such as the EU AI Act, which requires companies to assess the impact of their AI systems on fundamental human rights. So no need to reinvent the wheel. International standards on sustainability and human rights can be applied to AI and there's actually a lot of existing work that has been done on that translation of these standards to the AI context.
Maybe I'll end with a positive development. We're seeing more and more sustainability teams owning responsible AI efforts or getting involved in them. We've been having lots of conversations with chief sustainability officers and sustainability teams who are starting to think about how AI is impacting the company's sustainability and business risks, how it's affecting their climate goals, et cetera. So companies are realizing that adopting AI changes their overall risk exposure for people and the environment, and that there's a benefit in addressing AI and sustainability risks together.
David Stearns:
Thanks for that, Lale. You mentioned stakeholder engagement. I'm curious, and this is for both of you: who are the most important stakeholders in this context? Who are sustainability and responsible AI task forces talking to, and who is it important for them to be talking to?
Lale Tekişalp:
Well, I guess it depends on the AI systems that the company is deploying. I would say users and workers are two very important stakeholder groups, especially if you're a consumer-facing business and if you have AI products and services that are used by consumers, then those users are very critical. And then if you're deploying AI systems internally in ways that might impact your workers, then workers are also a very important stakeholder group. Lindsey, anything to add?
Lindsey Andersen:
I'll just add one thing: it can be overwhelming to do stakeholder engagement, particularly if you're consumer-facing. There are a lot of different people who could be impacted. So one model we've seen companies pursue to facilitate stakeholder engagement on AI is an external advisory council, where basically you identify a set of experts who can represent different stakeholder groups and different areas of knowledge, whom you can then engage with regularly. They can provide input on things like your risk assessment processes, or governance, or specific new AI systems that you're looking into developing or deploying. They can get to know your company and your issues quite well, sign NDAs, and be a safe space. That is a model that we've seen work really well. And so, if there are a lot of external stakeholder groups that are relevant for you, that's worth looking at.
David Stearns:
Thanks for that. It's really interesting. So let me wind it back a little to first steps for companies that are just getting started on this. It sounds like, given the rapid changes, many companies are having to come to grips with this really quickly and are just getting started. What is the first step? What's the first thing they should be doing, Lale, as they're getting started on a path toward responsible AI use?
Lale Tekişalp:
Well, I have three steps, or three things that I would recommend as a first step. One is setting up a working group. These are typically cross-functional groups including different functions, as we discussed, such as sustainability, privacy, legal, and of course, technology teams. I think the cross-functional aspect is really critical for responsible AI because it is such a multidimensional topic. So the first is setting up a working group that will lead the responsible AI efforts at the company. The second thing is to establish your AI principles. As Lindsey explained, these are typically a set of corporate-wide principles that outline the company's approach to developing and deploying AI systems.
And this is important because it serves as a foundation for everything else that will follow. And then the third thing we would recommend is to conduct an AI scan, taking an inventory of your AI use cases. Ask different business teams how they're using AI. For example, talk to your human resources, marketing, sales, and customer service teams, because all of these teams are using or starting to use AI-enabled solutions, and it's important to understand these different uses because AI-related impacts can occur across these different functions. The impacts will really depend on how AI is being used.
So once you have an inventory, identify the high-risk ones. Ask which use cases have the most significant impacts; this will help you prioritize efforts and resources where they're most needed. And maybe one thing I will add specifically for sustainability teams—find out whether there are existing responsible AI efforts within your company, because often there's already a responsible AI initiative or conversation happening somewhere in the company, and sustainability teams may not be aware of it. So ask your colleagues, find out if there's an existing effort within the company, and if so, make sure you're a part of it and that you're thinking through the social and environmental impacts of AI.
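As a rough illustration of the AI scan Lale describes, the sketch below gathers hypothetical use cases reported by different business functions into a single inventory and sorts them by potential impact. The teams, use cases, and scores are all invented for the example.

```python
# Hypothetical "AI scan": collect use cases reported by each business
# function into one inventory, then surface the ones with the largest
# potential impact on people and the environment first.

inventory = [
    {"team": "HR",        "use_case": "resume screening",     "impact": 5},
    {"team": "Support",   "use_case": "customer chatbot",     "impact": 4},
    {"team": "Marketing", "use_case": "ad copy generation",   "impact": 2},
    {"team": "Finance",   "use_case": "spend categorization", "impact": 1},
]

# Highest-impact use cases come first, so review effort goes where
# it is most needed.
for entry in sorted(inventory, key=lambda e: e["impact"], reverse=True):
    print(f"impact={entry['impact']}  {entry['team']:<10} {entry['use_case']}")
```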
David Stearns:
Lindsey, anything to add?
Lindsey Andersen:
Just on Lale's last point, one kind of vector we've seen help sustainability teams get involved is the growing awareness of the environmental impacts of AI, which is kind of a new thing with the generative AI boom. And it's early days from a responsible AI perspective. We have much more established best practices for identifying and addressing risk to people. On the environmental impact side, we're still very much in figure-it-out mode, but I think because it's AI meets environment, it's much more clear to companies that there's a role for sustainability teams to play. And so that can be a nice entry point for you to get involved in the broader responsible AI conversation and start talking about social impacts as well.
One helpful new resource I would flag for sustainability teams, to help wrap your head around responsible AI, is that the OECD just released its due diligence guidance for AI. What that does is take the existing OECD due diligence guidelines that everyone's super familiar with and basically apply them to responsible AI. It's a really helpful document for translating between responsible AI and sustainability and human rights. So I would definitely recommend you take a look and also bring it to other teams, whoever's currently running responsible AI, to show them what that overlap looks like and help make your case to be involved.
David Stearns:
That's great, Lindsey. And I'll also give a plug to a resource at BSR—a report we published recently called, “Taking a Responsible Approach to AI: A Guide for Business,” which both of you were instrumental in helping to put together. It is one of a number of different pieces of content in our library of resources around responsible AI and responsible tech. I encourage anyone interested to come to the BSR website and check those out.
This has been a really fantastic conversation. I think it'd be great to put a pin in the calendar and come back a year from now to see what this conversation looks like, because I think things are changing so quickly. I bet it will be a very different conversation. So let's keep this conversation going. Thanks again, Lale and Lindsey for these really great insights. For the BSR Insights podcast, I am David Stearns. Thank you for joining us and we'll see you next time.
Thanks for listening. For more in-depth insights and guidance from BSR, please check out our website at bsr.org and be sure to follow us on LinkedIn.