Shoshana Rosenberg is the Senior Vice President, Chief AI Governance and Privacy Officer at WSP, one of the world’s leading engineering and professional services firms. She is also the Founder of SafePorter, Co-founder of Women in AI Governance, and a Strategic Program Advisor at Logical AI Governance. Shoshana is a seasoned attorney with over 16 years of experience in international data protection law, a US Navy veteran, and a passionate advocate for social entrepreneurship and inclusion.
Here’s a glimpse of what you’ll learn:
- Shoshana Rosenberg’s transition from the US Navy to international law and global privacy
- An in-depth analysis and examples of AI governance
- How companies can begin developing AI governance programs
- How to obtain company buy-in for AI governance programs
- Emerging technology for managing AI governance programs
- Shoshana’s best AI privacy tip
In this episode…
In the ever-evolving and largely unsettled AI landscape, one certainty remains — the need for companies to develop governance programs to navigate and address the organizational impacts of AI. Such governance accounts for client, stakeholder, and employee expectations for AI use, as well as risk management and overarching visions for innovation. But the process involves more than simply understanding AI tools and vendors. So where do companies begin when developing AI governance programs?
AI governance isn’t another compliance program where decisions are made in a vacuum. Instead, it’s about building a centralized intelligence function across various teams to identify and understand AI tools, use cases, and vendors. A sustainable AI governance program evolves with the changing regulatory and technology landscape and is monitored and evaluated by the governance committee and other organizational stakeholders.
In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels sit down with Shoshana Rosenberg, the SVP, Chief AI Governance and Privacy Officer at WSP, to talk about how companies can build an AI governance program in an evolving landscape. Shoshana emphasizes the need for a proactive approach to AI governance and recommends regularly evaluating AI tools and use cases while creating and adapting associated risk profiles. This establishes a foundation that allows companies to keep moving forward, regardless of how business needs change and the AI landscape shifts.
Resources Mentioned in this episode
- Jodi Daniels on LinkedIn
- Justin Daniels on LinkedIn
- Red Clover Advisors’ website
- Red Clover Advisors on LinkedIn
- Red Clover Advisors on Facebook
- Red Clover Advisors’ email: info@redcloveradvisors.com
- Data Reimagined: Building Trust One Byte at a Time by Jodi and Justin Daniels
- Shoshana Rosenberg on LinkedIn
- WSP
- SafePorter
- Women in AI Governance
Sponsor for this episode…
This episode is brought to you by Red Clover Advisors.
Red Clover Advisors uses data privacy to transform the way that companies do business together and create a future where there is greater trust between companies and consumers.
Founded by Jodi Daniels, Red Clover Advisors helps companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. They work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media.
To learn more, and to check out their Wall Street Journal best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit www.redcloveradvisors.com.
Intro 0:01
Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.
Jodi Daniels 0:22
Hi, Jodi Daniels here. I’m the founder and CEO of Red Clover Advisors, a certified women-owned privacy consultancy. I’m a privacy consultant and Certified Information Privacy Professional, providing practical privacy advice to overwhelmed companies.
Justin Daniels 0:36
Hello, I’m Justin Daniels. I’m a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk. And when needed, I lead the legal cyber data breach response brigade.
Jodi Daniels 0:59
And this episode is brought to you by (ding) Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, e-commerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business together. We’re creating a future where there’s greater trust between companies and consumers. To learn more, and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. You were very smirky, and I almost had the giggles again.
Justin Daniels 1:35
I can’t — listeners, I can’t help her.
Jodi Daniels 1:38
Nothing that can be done. You know, if my biggest problem is that I have a case of the giggles, I guess that’s a good thing. So today’s episode is brought to you not by ChatGPT, Copilot, OpenAI, or fill-in-the-blank AI, but we are going to have a really fun AI conversation. Yes, we will. So today we have Shoshana Rosenberg, who is Chief AI Governance and Privacy Officer at WSP, co-founder of Women in AI Governance, founder of SafePorter, and advisor at Logical AI Governance. She is a seasoned attorney with 16-plus years in international data protection law, a US Navy veteran, and a passionate advocate for social entrepreneurship and inclusion, and we’re so excited that you’re here today.
Shoshana Rosenberg 2:25
Thank you so much for having me. I’m very, very excited to have the chance to talk to you both.
Justin Daniels 2:31
So as you know, we always like to start out with having you tell us a little bit about your career journey to where you are today.
Shoshana Rosenberg 2:41
So I thought it might be good if I kept this pretty succinct, so if I go off the rails, you’ll have to let me know. But I was a philosophy and psychology major who then went into the Navy as a way to pay for the rest of my education and figure out where I was going next. And so I was an engineer in the Navy, and I was lucky enough to work in Sicily. Interestingly enough, I was an interior communications engineer, so I was actually working on motherboards and helping to man the AFN, the Armed Forces Network television and radio station. So that is the sort of engineer I was in the Navy, and that was an exceptional experience. I got out, immediately finished school, and went toward law school, but along the way I had taken a detour and done some international development work for almost a year and a half. That was an important shift for me, and something that still informs the things I do. It was where I became really passionate about human rights, which I focused on in my international law-focused law degree. And then I went on to work for USAID, to work at law firms, and then to land at a global firm that has evolved and been acquired, and now is part of a firm that continues to acquire other firms and works across so many disciplines. And I got to wear almost every hat: I worked in M&A, I worked in employment law for four years. I’ve touched every component of this organization, and the many things that we touch internationally and globally around these many business lines. So I’ve had a very exciting opportunity to really see a full array, from contract law all the way into these other components. Alongside that, starting quite some time ago, I’m afraid, I built out the global privacy program for that original organization, which then became the larger one.
And so I’ve moved from having built out the global program at WSP Global back down into WSP USA to focus on data strategy and some of the other components. So that is me so far. And certainly AI governance was something that I immediately took to, because I had been working alongside the technology components for so long, and also building programs is probably my favorite thing.
Jodi Daniels 5:15
Well, like many people, I love building privacy programs, and also building, you know, a variety of different types of governance and compliance. And as I kind of alluded to at the beginning of our show, we were going to talk about AI today. So thinking about AI governance, that can mean a lot of different things depending on who you ask. I’d love if you can help set the foundation here and, in your mind, help us understand: what is AI governance?
Shoshana Rosenberg 5:46
So I think this is the question of the hour, and it’s the one that I’m very, very comfortable answering. AI governance is a means by which to navigate the several forces that are acting upon an organization at any given time in the AI ecosystem. You have the client expectations, be they realistic or otherwise. You have the internal stakeholder expectations, from employees to business managers and leads, around what they want for innovation from that company, what they want from AI literacy, what they want from tooling, right? And you have that as a continually changing and evolving thing. You also have the regulations, which are going to be a little bit more foreseeable; as we know in privacy, I always said the landscape is changing, but the horizon is clear. It’s not so different here, though I think that we are still going to be living into the regulatory landscape for AI for quite some time. And then the technology is changing. So in all of these moving forces, and I sort of alluded to it there with international development, you need a very, very clear understanding of how to stay steady against these many forces. The last component is the organizational appetite for risk and innovation, which is going to continue to change alongside all of these. So AI governance is, in fact, both a means by which to get a lot closer to that business intelligence that every business has been talking about forever, and also to understand and keep pace with these many components and the attendant risks. So I don’t think it is just about understanding your AI vendor inventory, which is a really critical component. And I don’t think it’s just about understanding the tools that your employees want, that your security perimeter hardens around if you don’t give them to them.
I think it really is about a holistic approach to navigating the AI ecosystem and ensuring that the decision-making processes that you put in place are agile enough to move with the information that you need to proactively be soliciting from all components of it. Was that big enough?
Jodi Daniels 7:53
Yeah, no, that was meaty enough, for sure. And what I was going to say is, you mentioned, right, it’s more than just knowing the tools and vendors. Can you provide perhaps an example? Because so many people focus and start there. I mean, it is a natural place.
Shoshana Rosenberg 8:12
It is. It’s an absolutely critical place to start.
Jodi Daniels 8:16
Yeah, what would be an example of what you’ve described, how it’s bigger than that? Just to put some color to it for people, of course.
Shoshana Rosenberg 8:22
So there’s one thing, right? You want to run, I think, in parallel, and I think this leaks over into something we were going to talk about anyway, which is where you’re starting in terms of your use case inventory, both actual and possible, which is a really important thing to look at internally and externally for an organization: the foreseeable array of engagements that you want to consider, or could consider, as an organization, so you can start to narrow down where you actually want to be, as well as categorizing your vendors into AI and non-AI, and then, of course, the sort where it’s on their roadmap, or they have tools, but we’re not engaging with them. Beyond just that, the really critical point that I would suggest is that you understand the need for centralized intelligence within the organization with regard to AI. It’s the Wild West. People are going to be giving training to other people because they’ve learned how to play with a tool, and they want to show everyone in the organization, whether it’s a sanctioned tool or not. You’re going to have a lot of people wanting to build something, or saying, I built something with public data that you should use. I’m sure you’ve heard all of these at your organizations at this point. And so it is really about building that centralized intelligence function, and that willingness to understand the holistic nature of AI governance and to acknowledge it even gradually, not like in an emergency (GDPR, you know, there’s a fine coming, you better run), but understanding that you need to be proactive about soliciting this feedback and have a program that can adapt to it. So knowing why the people in your organization might be using the tools that they are, and wondering, and helping to work to get them the functionalities that they’re demanding, or explaining to them clearly why they can’t have them.
And then also working to understand what the client expectations are in the competitive marketplace, which I failed to mention prior, but that’s one of the components. And also endeavoring to make sure that you have a way to get an actual intake from several different levels of the organization. If we think about what we do with regard to governments and other things, we always want the problems to rise up, right, and for us to be able to hear them. So you want a number of ways in which to hear from your stakeholders, and especially your internal stakeholders, about their needs and their concerns. So whether you’re, you know, doing it just as an intake form that allows you to say it’s a question, a concern, or a risk review, and developing that piece, or you’re able to build more layers, where you could perhaps have someone from every group who is designated as sort of the AI-curious or AI-savvy person for finance, for admin, for a business line, and have them come together in a delegation committee and meet periodically, so that you can see the things that are accumulating, or where there are synergies or opportunities. And then, of course, to have an AI governance or AI oversight committee and a leadership council. So you have this sort of (I’m making sort of a line with my hands) spine of the thing, as I envision it: if you don’t have a way to intake that information at many levels and have a bi-directional flow for it, then AI is going to be something that proliferates in corners, and you lose the opportunity to make more efficient your use of these vendor technologies, or of the innovation that your staff is clamoring to build, or actually has been given the remit to build.
Jodi Daniels 11:55
That makes sense. Thank you, and we will dive more into much of that.
Justin Daniels 12:00
So with the end in mind, where should companies begin their AI governance program?
Shoshana Rosenberg 12:08
Well, I think we sort of touched on that, right? It’s really the current use cases, through the vendors and through understanding what your employees are doing or want to be doing, as well as looking at the potential use cases for your organization and understanding your organizational appetite there. But I think you really want to start with figuring out what cross-functional team is going to make up that AI oversight or governance committee, because the truth is this is not a compliance program, and it doesn’t rest with whoever is charged, in terms of title, with the AI governance component, because they cannot drive all of those many components, or be there to respond to them, or make those decisions in a vacuum. The program itself also has to be monitored and evaluated and evolved as well. So there are a number of things here, but the key part is that governance committee, who really sees the summaries of all of the other inputs and awareness that is coming through the organization, around questions, concerns, requests for tooling, new tools for new uses, old tools for new uses, right? You have to have a body of people who are actually taking this into consideration and able to run that up the chain to whoever in leadership is addressing the strategy component.
Justin Daniels 13:24
So let me be a little bit more specific and reformulate my question a little bit. In your experience, do you have it where companies say, hey, we’re going to put together a committee and come up with a risk profile for our company and then evaluate use cases? Or is it more, hey, these are the use cases we’re looking at, we’re going to try to create a risk profile out of that? In my experience advising companies with AI, it’s the use cases that tend to drive what they’re doing, as opposed to creating the committee to come up with the company’s risk profile to then evaluate use cases.
Shoshana Rosenberg 13:58
I’m sorry if you misunderstood, or if I said it in a way that wasn’t clear. There are a couple of things that have to happen. First of all, the use cases that the company thinks it wants today are going to shift, and it is all pretty foreseeable, right? There’s a finite number of different things that the technology can do, that can be applied in ways that can be transformative, in ways that we can’t quite anticipate, but where you’re looking for predictions, decisions, the application of this as part of robotics, right? You can actually flesh out for that committee or that organization, depending on the size, both what’s currently happening, take in from them what they anticipate wanting to use it for, and show them a great deal more of what the future might hold, and what they might want to, initially or at some later point, say: we will never do that, we may someday do that. All I’m saying is that your sample use cases should exceed the boundaries of the current demand, because looking ahead and understanding the types of things that that industry is doing, or may do, is actually a way to start to evaluate the roadmap from afar. What they want to do exactly this minute you can facilitate, if you have a program in place that allows them to put their proper mitigations and controls in place, but they should be looking further ahead, even just to understand what they think their current appetite is, and to be able to see later how it shifts. That’s all I’m saying. You absolutely want the current vendor use cases, and you absolutely want to know what they currently think they want to do by way of innovation. But even if you have a company that says, we want nothing to do with AI, we just want to be compliant.
Just give us a program that tells us what we can do, you know. And you help them to understand both the fact that it’s not just a compliance program, and where they think that they are. It’s still worth, I think, letting your clients know and anticipate where their industry could go, should go, or is going with regard to this, so that they aren’t surprised later, and you’ve already helped them lay out that roadmap of what comes and how to deal with it. Does that make sense? Any other thoughts? No other thoughts? One of you shook his head, but it seemed to be in assent, so I’m not sure.
Justin Daniels 16:22
No, I think you answered my question. I just, in my own personal experience with AI, you deal with people who are trying to just drink from a fire hose of what’s going on today. And my view is, I want to set up a framework of what risk looks like, what the risk profile of your organization looks like, because obviously it’ll evolve. And my view has always been, I think that informs how you evaluate the use cases you have today and into the future, because different companies have different risk profiles for how they want to deploy AI.
Shoshana Rosenberg 16:55
So I want to say this, though. So many of us in privacy and compliance, you know, in security, certainly focus on risk, but AI is so hand in hand with opportunity for organizations that I think looking at it as a risk-based component alone is part of the problem. I have many people come to me from different organizations and say, well, how do I sell the fact that we need AI governance? And the truth is, I don’t know why, but I always imagine that I Love Lucy conveyor belt, right? You need a system and a process that is in excess of what is currently there. This notion that privacy should just eat AI governance and be fine, because privacy is not a full-time job, is absolutely crazy. But this notion that what’s already in place is sufficient, I think, is not right. It doesn’t mean that you need a full overhaul, but at the same time, in order for the company to run at the things that it will suddenly realize it wants to run at, or be asked to run at, you need a system in place that can consistently apply the current appetite and controls to every opportunity that it has. So it’s great that you’re looking at the risk profile, and it’s important, because it helps them understand the vectors and the components that they need to have eyes on and have monitoring on throughout the organization. But I do think you need to start to put the conveyor belt in place in terms of: what are our current risk thresholds for evaluation, what do we do with those different risk thresholds, and how do we inform them and allow them to adapt, right, so that you have that throughput to allow the business to keep moving forward, regardless of how things shift.
And the other piece there is understanding that the first chatbot that you apply to your systems, or to something that’s client- or consumer-facing, is going to be something that is a much higher risk than a year from now, when you’ve deployed six chatbots and you fully understand the mitigations and contingency plans that you’re going to put in place. Those risk thresholds will change, and that is something that that team has to be able to work with. So you’re being asked to come in and look at the risk profile for the organization. What they have to do with that risk profile gets a little deeper into the processes they’re going to put in place, and all I’m saying is those processes must be adaptable in excess of what companies are currently used to.
Jodi Daniels 19:26
One of the challenges, I think, and you alluded to this a little bit: privacy people are sometimes handed this AI thing and told to go solve and figure it out. And you have a lot of companies who are trying to put in place those steering committees that we’ve been talking about, and then it’s like analysis paralysis. It just doesn’t go anywhere, and it’s, we’re just going to keep talking and talking and trying to decide, and no one wants to own it. What do you say to those professionals or companies where you have people who are trying to do the right thing and they can’t quite get the rest of the organization to move alongside them? What have you seen to help companies break through that challenge?
Shoshana Rosenberg 20:09
In many organizations, I think it starts when the board starts to worry. Because, and here’s one of the funny things, I’m just stepping slightly aside to say it: people say, oh, our privacy people are the best people for it. And a lot of times, because we have this human rights basis, and there’s an analysis of risk, we are. But also, the OG privacy people who’ve been here for a long time, they’re scrappy, and they know how to fight for something that other people don’t see as necessary right now, right? And so there is an element of saying, you must be an advocate. But the truth is, when people say, you know, the old thing of, look for the helpers, we so often think of privacy people. But for people looking to implement AI governance, you will find in leadership throughout these organizations that if you get to the people who are actually touching the clients or the consumers, or taking on that feedback, they want it in place too, because they want the innovation and opportunity component of what AI governance can bring. So I think that you do have helpers within the organization, and in getting to the right team of interdisciplinary, cross-functional people, who are also hearing their employees clamor for AI literacy, hearing people clamor for innovation inside and outside the organization, you will find that they will support you. Not only that, but they are part of the reason that the strategy will unfold, not just for the AI governance point, but for the innovation. Those two go hand in hand. You have a part of the business that will try to run with this, and a part of the business that will help to facilitate it.
And so I would suggest that it may not be something that you immediately get the top level of leadership to acknowledge, as in, you know, sort of put a budget to this and let’s really allocate everything we have. But as that next level down is dealing with all of the issues, concerns, and security components that come into play, I think you do have people within your organization that you can join forces with to make sure that that falls into place, to facilitate what they’re going to want to do.
Jodi Daniels 22:17
I really like that you emphasize finding the people with the opportunity, because you have business people and marketing people and those trying to drive innovation, who want these projects, and then you have others who are concerned about some of the risks, or the notice, or just the perception of what is actually happening. And I think that’s a great approach. I’ve seen it be successful where companies are able to bring those pieces together.
Shoshana Rosenberg 22:46
But Justin, you also said earlier, when we were talking pre-call, that you’re playing with the AI, and that is one other thing that I would say: get your leadership on a call and give them a demo, with public information, of how easy it is to build a GPT for yourself, to have a way to consult one pre-made, self-made GPT that deals with one particular topic. And then think about how likely it is that your employees are using your company’s information to do this without authorization, because it is that quick, that easy, and potentially that useful to them. And then figure out how you’re going to bridge that gap, because that will excite them. It’ll get them thinking. It’ll mean that they’ve had at least one engagement with how simple and accessible this is, and it will get them going in terms of either risk or opportunity, and either one is going to help propel that forward.
Jodi Daniels 23:43
A really great idea. Thanks for sharing.
Justin Daniels 23:46
That’s good. My next AI dinner won’t be with in-house counsel; it’ll be with board members. All right. So, a question that Jodi and I get a lot, and we’d love your perspective: what types of tools, technology, or methods have you seen companies use to try to manage these AI programs? Has anything caught your fancy?
Shoshana Rosenberg 24:08
So there are some exciting ones being built that I’m aware of, but I don’t think they’re here yet, right? We know the pain of building tech, or dealing with built technology that wasn’t ready for prime time, that then you have to hire people to get trained on how to use, to then train your business on how to use it and keep holding its hand forever. So on the technology, I’m going to shift the question to say I haven’t seen anything that can rise to the occasion at the moment. But what we’re really, really looking at and looking for as a landscape, I think, is the means by which to turn continuous monitoring into business intelligence that actually facilitates not only changing our decision-making processes as we evolve as organizations, but also reaching into different components that exceed the AI governance boundaries to leverage that as well. Once an organization understands how to adapt to feedback and adapt to changing terrain, it is more adept and able to do so. The great part is, AI is the perfect companion in this, and when we get the right company that has built the right tools, I think you might see something really spectacular. Now, one other thing, because he said security, that is key there: when we think about certain tools that go across an entire enterprise, a federated model is a comfort, but it is still a model, and it is still something that can be accessed by threat actors, right?
And so when you put your eggs in one basket, that is the one thing that I would urge us all to consider: all of that organizational intelligence, be it with AI deployed across many, many tools in your organization in a large federated model, or, for the people out there listening who are building the tools to facilitate AI governance, this huge amount of data about the organization, the way that it shifts, the way that it adapts, the forces acting upon it, is a gold mine for a threat actor, and it’s something that has to be really considered in terms of making sure that you have as many safeguards and protections in place as possible.
Jodi Daniels 26:36
Now, as you know, we ask everyone: what is your best personal privacy, or AI, since we’re talking about that too, tip that you might give someone at a party?
Shoshana Rosenberg 26:49
I should have something ready for this one, and I’m not sure that I do. You know what, I’m going to say this for privacy people right now: even if you don’t want to take on AI governance, I think my tip would be that you shouldn’t be afraid of it, and you should move gradually and at your own pace toward a greater comfort with technology and understanding exactly how systems are built and how they deal with information, even if you don’t get into the coding aspect. I think it is a key way to make sure that your privacy stance is well informed and that you are navigating and meeting the landscape to come.
Jodi Daniels 27:32
Very well.
Justin Daniels 27:34
So when you are not practicing privacy and advising on AI, what do you like to do for fun?
Shoshana Rosenberg 27:43
Graphic design?
Jodi Daniels 27:45
Oh, what kind of graphic design?
Shoshana Rosenberg 27:48
I do a lot of graphic design for the many businesses that I help and deal with in my own capacity, right? So whether it’s SafePorter or Women in AI Governance or Logical AI Governance, I build logos, and I really enjoy it. It sort of lets me step away from all the other things. It’s a creative outlet; I’m still at the beck and call, at 10 o’clock at night, of whatever else needs me, but if I can distract myself with a little bit of color and energy, it’s always a nice thing.
Jodi Daniels 28:22
So I hope it’s okay to ask: do you do it all freehand, using existing tools, or with AI tools as well?
Shoshana Rosenberg 28:32
I don’t like to work with AI because I’m doing these things for fun, right? Color schemes and things like that. So I’m going to plug it: I mean, everybody knows, I think, the young woman who founded Canva did the smartest thing and the best thing anybody’s done in so, so long. So Canva is definitely the place that I like to play.
Jodi Daniels 28:53
I played yesterday in Canva. I was so proud of my little design. Now, I did use AI. I entered what I was looking for, changed it a few times, and I was having a lot of fun. Oh yeah, I was so proud of myself. It turned me into a graphic designer. But yes, you’re looking at me strangely.
Justin Daniels 29:09
Yes, because I’m going to ask, are you looking to commercialize that? Because then you get into the whole copyright and intellectual property issues.
Jodi Daniels 29:17
Now, Shoshana, if people would like to learn more about you, SafePorter, Logical AI Governance, or Women in AI Governance, where should they go?
Shoshana Rosenberg 29:31
If you want to get to know more about me, you’re going to have to ask me. Logical AI Governance is a very cool program. SafePorter is a very important thing. I think, as a privacy community, when it comes to engagement feedback, we’ve too long said, oh, that doesn’t look like anything to me; that’s not personal data, is it? Whereas it is. And so SafePorter has helped with both DEI and engagement feedback to make sure that privacy and privacy rights are maintained. That’s on the safeportersecure.com website. And Women in AI Governance will soon be live, give us a couple of weeks, at www.wiaig.com. We have a lot of exciting things coming up for our members, and to really support that community and its allies, who are a very welcome group, in making sure that we’re breaking down the silos and doing the knowledge sharing that we need to around AI governance, because it doesn’t just impact our organizations and the individuals who are impacted by them; it actually has a horizontal and vertical impact well, well beyond, to our societies.
Jodi Daniels 30:37
Well, thank you so much for sharing all that you have today on AI governance, and we are so grateful for your time.
Shoshana Rosenberg 30:43
Thank you for having me. I really appreciate it. Lovely to see you.
Outro 30:51
Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven’t already, be sure to click Subscribe to get future episodes and check us out on LinkedIn. See you next time.
The post Beyond AI Governance: Building a Program for the Future appeared first on Red Clover Advisors.