


Intro 0:00

Welcome to the She Said Privacy/He Said Security Podcast. Like any good marriage, we will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Jodi Daniels 0:22

Hi, Jodi Daniels here. I'm the founder and CEO of Red Clover Advisors, a certified women's privacy consultancy. I'm a privacy consultant and Certified Information Privacy Professional providing practical privacy advice to overwhelmed companies.

Justin Daniels 0:35

Hello, Justin Daniels here. I am a shareholder and corporate M&A and tech transaction lawyer at the law firm Baker Donelson, advising companies in the deployment and scaling of technology. Since data is critical to every transaction, I help clients make informed business decisions while managing data privacy and cybersecurity risk, and when needed, I lead the legal cyber data breach response brigade.

Jodi Daniels 0:58

This episode is brought to you by... no one can hear that when you put your hand on my head. People, this is what I have to contend with. It is brought to you by Red Clover Advisors. We help companies to comply with data privacy laws and establish customer trust so that they can grow and nurture integrity. We work with companies in a variety of fields, including technology, ecommerce, professional services, and digital media. In short, we use data privacy to transform the way companies do business. Together, we're creating a future where there's greater trust between companies and consumers. To learn more and to check out our best-selling book, Data Reimagined: Building Trust One Byte at a Time, visit redcloveradvisors.com. How are you today?

Justin Daniels 1:42

I think our viewers should tune into the video so they can see that you’re wearing your festive fall outfit today.

Jodi Daniels 1:49

Well, that's because fall is amazing. And, oh, actually, I think we're recording right when it's officially going to be fall. So exciting, best season ever, everyone. But no, I don't need pumpkin everything. I just like the occasional pumpkin and a pumpkin muffin, and I'm pretty good.

Justin Daniels 2:07

But wouldn’t you like to see fall in Colorado?

Jodi Daniels 2:09

I would just like to have some cool weather. Actually, it snowed in Utah yesterday. Awesome, I know. All right, but we're going to bring it back to privacy, security, and AI today, because we have Rob Black. As the founder of Fractional CISO, Rob has guided numerous companies in enhancing their security posture. With extensive experience in product and corporate security roles at prominent companies such as PTC, Axeda, and RSA Security, Rob is recognized as a trusted authority in risk management and cybersecurity innovation. So Rob, welcome to the show.

Rob Black 2:43

Great to be here. Thanks, Jodi. Thanks, Justin.

Jodi Daniels 2:47

Now it's your turn. I'm sure you can hear the smirky smile.

Justin Daniels 2:52

I get to speak? Yes, you get to speak. So Rob, tell us about your career journey that brought you to where you are today.

Rob Black 2:59

Cool. All right, we'll do. So I started my security career at RSA Security about 17-18 years ago. Before that, I was in the technology field, a developer, various technology roles, and went to RSA Security. And, you know, as a pro tip for anyone who's interested in getting into security, it's a great idea to work for a big security company. Whatever function you're in, you will learn a lot about security. So I was there for a little under five years working on their identity and access management products. And then went to Axeda, which is an IoT company, connecting our customers' devices on the edge to cloud servers. A lot of security needs there, and I was doing security and product for them, and I was like, boy, every company seems to need the things I'm doing. Then we got acquired by a bigger company. And I was like, oh, I guess this is over. But no, it wasn't, I was still doing security programs for them. And I decided, you know, it's time to take the leap. I'm going to go start my own thing and help companies, on a fractional basis, comply and meet their security needs. So I started Fractional CISO in 2017 and have been doing it ever since. And now we help mid-sized companies with their cybersecurity leadership. A lot of times, companies are selling to a large enterprise, the large enterprise says their security program is not good, and we help those companies get on the right path and stay on the right path and meet their security and compliance needs. And so I've been doing it since 2017. It was just me then, sitting at a desk in my attic. But now, fortunately, we've got bigger teams. There's 16 of us, and it's been awesome.

Justin Daniels 4:56

See, 2017, isn’t that eerily familiar to you?

Jodi Daniels 4:58

It is. Rob's and my trajectory is very similar. It is indeed. And Rob, I love what you just said about the smaller company trying to work with the enterprise company. I have been talking to a company for months about their privacy program in the B2B space, and I knew they were a little hesitant to keep going forward. And then they got a call. A large company has a lot of privacy requirements, and now they need to really get their privacy program in order. And it's exactly the same thing. It is often, what is that motivating factor, those customers that really —

Rob Black 5:36

Yeah, for sure. I mean, it's great to say, "oh, security is important." Then they get the bill and they're like, yeah, I don't want to pay for it or anything, I just kind of wanted it to be better. But when dollars are on the line, when large enterprises say you have to do it, when the government says you have to do it, or maybe you had a past incident and you realize, boy, that was very costly and it would have been better if we were proactive, those are all key drivers for making a buying decision. But yeah, security is kind of like environmental stuff, right? Like, yes, you want a clean environment, but wouldn't it be a lot cheaper if I could just dump the chemicals into the river behind my factory? And that's why you need some other entity to kind of force you to do the right thing, in this case, the large enterprise.

Justin Daniels 6:24

Well, speaking of large enterprise, Rob, I’m just curious, from what you’re seeing in your practice, how much of a role yet is the SEC cyber rule playing on these larger enterprises telling your clients, hey, you’re part of our third party supply chain. These are things we require that you do if you want to do business with us, because we have the new cyber rule to deal with. Are you seeing a lot of that, or is it a trickle still?

Rob Black 6:48

So I think what's interesting is we've definitely had an uptick in small, publicly traded companies coming to us, so I think that has forced the buying decision on their part, because they want to be in compliance themselves. In terms of large enterprise, I'm not sure we see the SEC rule as the governing factor. I think the vendor management space probably precedes that, but I could be wrong about the cause. Just in general, vendor management is very important for large enterprise, and I'm sure the SEC bit added to it, but I'm not sure it was the catalyst. I wouldn't say we have seen more of it recently; it's probably been constant the past five years.

Jodi Daniels 7:42

Many companies are using all different kinds of cool AI tools, and I appreciate that we were discussing different AI tools, and today we want to talk about how to use AI in a governance, risk, and compliance program. Rob, can you share a little bit about where are some of the easy places to start, or common places to start, or just a good place for companies to be thinking about where to fit this in?

Rob Black 8:07

So probably the easiest place to start is summarization of large documents. All the time, people will give you a big PDF. It might be a SOC 2 report. It might be just some big set of requirements, and understanding that document as a whole is pretty difficult. So uploading that to Claude and having it give you a nice summary, maybe asking it specific questions about some of the elements, can be a huge time saver. And so that has been one of the bigger ones that we've used. So many things in security compliance, of course, are text generation. So anytime you get a questionnaire, also a great opportunity for maybe ChatGPT to answer those questions. We've done an anonymized little paragraph about the environment for a customer, give that to ChatGPT, give it the questions, and it will give pretty good answers already. And the trick isn't necessarily to nail it. It's can you generate an answer for these questionnaires as a jump start, and then a human can edit it and respond, and it can save a tremendous amount of time. In terms of completely doing something, I would stay away from that. I think, today, the technology is not good enough. You still need human intervention. You know, getting into the weeds, there was one SOC 2 report that I knew was an unfavorable report, and Claude actually gave the inverse answer when I asked it to summarize it. Qualified is basically bad in SOC 2 language, and it kind of got it backwards. So you kind of have to be careful. You can't just totally trust it. Now I kind of read some of the things, but also sometimes I'll scan it. Where AI has not been good, not that you asked this question, is I've had a list of, let's say, vulnerability scans, lots of data from something, and I would give it to ChatGPT, and I would say, find all the instances of this vulnerability and tell me all the servers that have it.
And it will say, IP 123445678, and a bunch of others. I'm like, that's super uncool. It's kind of like the lazy intern. It's really smart, but it just decided to not do its job. So, you know, we still have challenges getting AI to do all the things that we want it to do. We definitely need a lot of oversight. But anything involving text generation or text summary is a really good initial use case.
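Rob's vulnerability-scan example is worth pausing on: finding every server with a given vulnerability is a deterministic lookup, which a few lines of ordinary code handle exhaustively, with no "lazy intern" risk. Here's a minimal sketch, assuming a hypothetical CSV export with `ip` and `vulnerability` columns (the format and the CVE IDs are made up for illustration):

```python
import csv
import io

# Hypothetical scan export: one row per finding. A deterministic script
# never answers "and a bunch of others" -- it lists every match.
SCAN_CSV = """\
ip,vulnerability
10.0.0.1,CVE-2024-1234
10.0.0.2,CVE-2023-9999
10.0.0.3,CVE-2024-1234
10.0.0.4,CVE-2024-1234
"""

def servers_with(vuln_id: str, scan_csv: str) -> list[str]:
    """Return every server IP reporting the given vulnerability ID."""
    reader = csv.DictReader(io.StringIO(scan_csv))
    return [row["ip"] for row in reader if row["vulnerability"] == vuln_id]

print(servers_with("CVE-2024-1234", SCAN_CSV))
# -> ['10.0.0.1', '10.0.0.3', '10.0.0.4']
```

The takeaway isn't that LLMs are useless here, but that exhaustive extraction over structured data is usually better left to deterministic code, with the LLM reserved for the unstructured parts.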

Justin Daniels 10:58

So Rob, my thought there is, and Jodi, maybe you have some thoughts as well... I realized that ChatGPT now has a button where you can upload a document, and I like Claude as well. So what I'm starting to do, just to see, is I'll do the same prompt, upload the same document in both, just to see what they come up with, whether they're pretty similar, whether there are some differences. And then the other part I was thinking is, in your example about the IP addresses, what would happen, I'm just thinking out loud, if you had a prompt that said, hey, I need to know the IP addresses, and then you described what the IP address is? Would that have further iterated your prompt to help it get you the results you wanted?

Rob Black 11:44

As I recall, I tried multiple prompts on that case, and it just was not working. And it wasn't that big, it wouldn't have been that hard for me to just do it manually, you know. So it was just one of those things like, oh, I thought I could save 5-10 minutes, but I ended up spending 15 minutes trying to get the prompt to work properly. And I'm sorry, Justin, you asked the first part of the question. I just answered the second.

Justin Daniels 12:13

And thank you for that answer, because I find I can spend 20 minutes on a prompt to get where I want to go, and I'm like, I should have just done this myself. But the first part of my question was, have you done things where you might take the same prompt, upload the same report or whatever, and do it in both ChatGPT and Anthropic or Gemini, just to see what they get, and if it's similar, or what some of the differences might be, or do you have some preferences?

Rob Black 12:37

Yeah, for sure. I mean, one of the reasons why we settled on Claude for summarization is because it just blew away ChatGPT. Now, it's possible that the latest, you know, we keep on testing, but I couldn't tell you if I tested today that that would still be the case. It's just historically been way, way better. In terms of certain use cases, I definitely have preferences. Like, my son needed to generate an image for school, and the assignment was to use AI, and I suggested Grok, because it is way better at generating kind of what you need, like, all the time. You do that with OpenAI, and the image almost always has problems, and Grok is usually pretty close. And I know that might not be a great compliance use case, but certainly a marketing use case might be. Another Grok use case, not that we're talking about Grok, but my one other Grok use case is to have it summarize a recent event, you know, tell me all the things that have happened recently about X, whatever the topic is, and it's really good at recent events. Anyway, so I would use Grok for recent events and images, I'd use ChatGPT mostly for text generation, and I'd use Claude for summarization. And, you know, we have tested a bunch of different problems and seen the results for those systems and some others. We used Llama as well. We've had less success with Llama, but haven't tested the most recent version extensively. So my data there might be a little —

Justin Daniels 14:27

And where does Grok come from? Whose is Grok?

Rob Black 14:30

G-R-O-K, okay. Grok is X's, or Twitter's, AI. Okay, yeah. So I think the reason why it's so good with current events is just because it has access to all the, you know, tweets, news articles, links. And I kind of feel like at some point you could have it generate, like, a newsletter for topics you're interested in every morning to kind of get the latest info.

Jodi Daniels 14:59

There are a lot of software tools that people will use in a compliance arena, and many of them today have all different kinds of AI bells and whistles. Are there any of those types of tools that you have found, either for yourself or your clients, that you think have been helping improve efficiency for programs?

Rob Black 15:23

So I wouldn't necessarily say a specific tool. In general, I think the whole security questionnaire process is very broken. With the security questionnaire, a company will send you a list of 100 questions, 300 questions, some huge number. And you're like, oh, I have this document, can you just take my answers from there? Like, no, no, we need our answers. I feel like that translation would be super valuable. I've seen some of those tools. I think they work okay. We have not settled on one. We have not pushed clients to one. Right now, I would say it's manual-ish, probably using prompts on your own to answer those questions. But I can imagine three months from now, six months from now, one of those tools being amazing. We have not seen it. It feels like they're kind of there, but they still require a lot of manual intervention. And the reason it requires manual intervention is, if they get the answer right 80% of the time, that means 20% of the stuff is not right, which isn't good.
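The "jump start" workflow described above, an anonymized environment paragraph plus the customer's questions, can be sketched as simple prompt assembly. Everything here is a hypothetical illustration: the environment text, the template wording, and the NEEDS-HUMAN flag are assumptions for the sketch, not a tested or recommended prompt.

```python
# Hypothetical prompt assembly for drafting questionnaire answers.
# Whatever an LLM produces from this prompt is a draft only; a human
# edits every answer before it goes back to the customer.
ENVIRONMENT = (
    "We are a 50-person SaaS company running entirely on Microsoft "
    "Azure, with SSO and MFA enforced for all employees."
)

def build_prompt(questions: list[str]) -> str:
    """Assemble one prompt from the environment blurb and the questions."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "You are drafting answers to a security questionnaire.\n"
        f"Environment: {ENVIRONMENT}\n\n"
        "Answer each question concisely. If the environment description "
        "does not cover a question, reply NEEDS-HUMAN instead of guessing.\n\n"
        f"{numbered}"
    )

print(build_prompt(["Do you encrypt data at rest?", "Do you enforce MFA?"]))
```

Asking the model to flag what it cannot answer, rather than guess, is one small way to surface that 20% that would otherwise come back confidently wrong.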

Justin Daniels 16:41

I'm right 100% of the time. But Rob, you bring up a good point, because with your practice and your advising clients on a variety of security issues, obviously security has become a really important component of AI. Can you share any experiences, when you're talking to clients, or maybe in your own business, of how you're training, or putting together guidelines, to make sure that people understand that there needs to still be a pretty good amount of human intervention, human feedback? Because this is really, like you said, a very smart but lazy intern.

Rob Black 17:21

So we've helped a lot of clients with their AI governance document, and we usually recommend having a specific document and not just kind of treating it as a vendor thing, just because the iterations are happening so fast. And people want to know specifically, am I allowed to do this? Am I allowed to do that? In terms of putting things in one of these third-party AIs, where it's hosted elsewhere and you're putting your proprietary information in, we highly recommend against that anytime there's proprietary information, PII, very company-specific type things. However, if you're uploading, let's say, a questionnaire, and the prompt says something like, we have a total Microsoft environment, answer these questions, you know that Claude or ChatGPT is not going to get a huge amount of proprietary info out of that. Maybe your Microsoft environment, which they could probably figure out from the DNS records, right? So it's not like you're really revealing a lot of info. So we'll put guidance around that. A lot of bigger companies today have their own AI, meaning they've licensed something and are running it on their servers, or running it on Azure or AWS. And so for those ones, they should just do whatever the business need is. Obviously, you do have to be careful about who has access to that information. So for instance, if your training data has data that only one department's allowed to see, you don't want that in the general training data, where everyone gets the results, right? So there still are some things to be careful with there, but certainly, at least it's not being fed into the public LLMs.
And so we're encouraging customers to be using their own, or encouraging them, when they are using the public ones, to be really careful. We are unaware of any great tools that would give you warnings, or take things out of your prompts, that would make life easier. I'm sure some of those tools are out there, but probably not for the mid-market customers.
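One low-tech way to act on that "be really careful with public LLMs" guidance is to scrub obvious identifiers from a prompt before it leaves your environment. This is a minimal sketch, not a DLP tool: the two regexes and the placeholder tokens are assumptions for illustration, and real proprietary data takes far more than pattern matching to catch.

```python
import re

# Illustrative only: matches obvious email addresses and IPv4 addresses.
# What counts as proprietary is company-specific and needs its own rules.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting."""
    text = EMAIL.sub("[EMAIL]", text)
    return IPV4.sub("[IP]", text)

print(scrub("Server 10.1.2.3 managed by ops@example.com failed the audit."))
# -> Server [IP] managed by [EMAIL] failed the audit.
```

Even a rough pre-filter like this changes the default from "paste the raw data" to "paste a sanitized version," which is most of what a governance document is trying to achieve.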

Justin Daniels 19:39

So Rob, another question. Let's say I'm a middle market company, and I come to you and I say, hey, Rob, we want to use the enterprise-level version of ChatGPT, and hey, in the configuration of it, there's a little button I can press that says, don't train on it. If I do that, from your expert perspective, is that something that you feel I can rely on? Or, okay, you bought the enterprise version, but that's probably still not good enough. Do you have a viewpoint, or have you encountered a situation around something like that?

Rob Black 20:15

I would not rely on it. Okay? So by the way, as you would want with your security person, I'm pretty paranoid about lots of things. I have much more trust in Google and Microsoft than I do in OpenAI. OpenAI, if you just look at their governance situation, not tied to security, but just their corporate governance situation from a year ago, with the CEO situation, I would just say I don't think they have their governance ironed out. They're moving very, very fast. They don't have a history of good data practices around managing things. And so, totally independent of any exact company knowledge that I have, I would be wary of any assertions they're making. Now, if Microsoft is taking the OpenAI code and running it in Azure, and they're operating it, that's a totally different story. Because if they say, hey, they've locked it down, just like any other Azure service, could there be a vulnerability? Sure. But it would be a totally different situation. So I would be wary of the things that are hosted by, let's say, not AWS, Microsoft, or Google in general.

Jodi Daniels 21:34

When we talk about AI, people are always wondering, "Well, is it going to help my job, or is it going to replace me?" I'm curious how you're seeing companies use these tools. Are they reallocating budget or time spent in other areas, for example?

Rob Black 21:55

I think, okay, so let's just go to the job replacement. If you're wary of AI, your job will be replaced, but not by AI, by a human that can do your job and knows AI. There are so many things where AI today, or LLMs specifically, are just a huge time saver. So now, instead of you cranking out some report and spending a ton of time, you can write an outline and have ChatGPT fill it in, make sure the grammar is correct, and then you edit it, right? So whatever text you're generating now, it should be a lot faster. So now you can do your job a lot faster. LLMs today don't sit around and think about your job and what you do. So for today's technology to replace people is totally unrealistic. In terms of reallocating resources toward hiring, I totally believe that's true. I suspect that in many cases, senior management's pushing people to do more with less. Oh, we need to hire a new compliance person? Well, maybe you should just be using more AI technology to divvy up the work amongst the existing people. But, you know, I have not seen this huge payoff with AI today. My suspicion is companies are spending, let's say, 100 billion, and they're getting 10 billion of value today. So I think we'd have to see some sort of equalization of spend to benefit for things to get ironed out. I can imagine some killer use cases today. I mean, to me, I think the problem is inconsistency with the data. It totally requires human operators for things to be effective today, unless it was a very, very specific use case. But then you'd be spending a fair amount of time on engineering that, making sure it didn't mess up.

Jodi Daniels 23:57

I appreciate you sharing your thoughts. Time will tell, but I have heard many times what you just shared, which is that those that do not know how to use AI tools will be replaced by the people who do. For anyone listening who has not tried it, dabbling just a little bit every day will make you more comfortable.

Rob Black 24:00

It's very easy to try them. I mean, they basically make the entry-level ones free. You might have to sign up, but it's very easy to just play with it and try it.

Justin Daniels 24:33

But I bet you the security-minded Rob would say, “remember, if it’s free, it’s because they want to train on your data, and security is at a premium.”

Rob Black 24:43

Yeah, premium. If you're just trying it out, you know, say, write me a poem about, you know, your kids' names, and it'll make a little poem about your kids you can share with the kids. It's fun.

Justin Daniels 24:53

Um, Jodi knows my favorite use case for what I did.

Jodi Daniels 24:58

Justin likes to use AI to create songs. Yeah, that's good.

Justin Daniels 25:03

Yeah, that's good. No, the viewers want it, I'll repost it. So I used ChatGPT to create a song for the Secret Service, because I spoke at one of their events, and the song was an ode to the Secret Service based off of their mission, with the lyrics built on Billy Joel's "Piano Man." And, you know, it was scenes from a threat actor restaurant, very cool, and I sang it.

Jodi Daniels 25:32

And even though I can't see anything, no, that is not your specialty.

Justin Daniels 25:35

And now you’re on a list.

Justin Daniels 25:37

They laughed. That's what I wanted. Um, well, anyway, Rob, one thing we'd love to ask you is, you know, when you're out at a party and people are like, hey, Rob, we're thinking about cybersecurity over this cocktail, do you have a best security tip that you might give Jodi or me over a cocktail?

Rob Black 25:58

Yeah, for sure. So if my wife is nearby, the security tip for me is to stay away from the security topic. It's a total party killer. But if this were legit, I would say, use a password manager, and use a purpose-built password manager. And the reason I say that is I know a lot of people who are like, oh yeah, I keep all my passwords in a spreadsheet. That's not a good idea. Just for everyone who's listening, that is a really bad idea. But a purpose-built password manager is going to be a lot more secure. Obviously, you want to have a really good password on it, you want to have multi-factor authentication on it, but a really good password manager is going to allow you to have specific passwords for everything you log into. You can have very long strings you don't even remember, have it generate them, and then copy and paste them into the website when you're logging in. And that is probably going to reduce so many people's risks from where they are today. They either have a spreadsheet with all their passwords, or they have the same password everywhere, and both those things are really bad. You know, if your password manager is a written piece of paper, and you have it in a secured location, and there's no one that's going to come get it, I think that might be an okay alternative. But obviously that's still subject to being caught in a fire, or other bad things happening to it. But to me, a password manager is probably the one thing the general public could do that would make an outsized difference in reducing their risk.
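Those long random strings you never memorize are essentially what a password manager's generator produces under the hood. A rough sketch using Python's `secrets` module, which is designed for cryptographic randomness (the length and character set here are arbitrary choices for illustration, not a standard):

```python
import secrets
import string

# Unique, high-entropy password per site, meant to live in a manager
# and be pasted at login, never memorized.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different every run
```

Using `secrets` rather than the general-purpose `random` module matters here: `random` is predictable by design, while `secrets` draws from the operating system's cryptographically secure source.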

Justin Daniels 27:39

Is there a particular one that you like, or that you think is pretty good? I know I had to fire one because they got hacked a bunch, and went to a different one.

Rob Black 27:47

I don't recommend any specific one. Okay, so this is going to sound bad, because I just said to use the product: in general, I believe any hosted password manager will be compromised at some point. Okay? So I know that may be the opposite of what you heard before, and Rob's still recommending it. Keep in mind, even when they compromise it, they do not have your password, right? They have an encrypted blob with your password, which is why you need to have a good password to protect it. So there's that degree of protection. And also, probably, if you're not a target, whoever downloads all the passwords from that site is probably not trying to attack you specifically. But yeah, my belief is all the hosted ones will be compromised at some point.

Jodi Daniels 28:48

Non-security news, when you are not advising companies on security topics, what do you like to do for fun?

Rob Black 28:54

What do I do for fun? So I joke in the winter that my full-time job is actually basketball coach, and I just do part-time security. So I have a 13-year-old and an 11-year-old, and I coach both their town league teams, which is really fun. I'm looking forward to it, and I really enjoy that. So I'd say that is like my hobby. And maybe that sounds lame, but, you know, I don't know, my kids, work, those seem to be like the main things I focus on.

Jodi Daniels 29:28

So if people would like to learn more, where can they go to connect?

Rob Black 29:32

Yeah, sure. So first thing is, I have an outsized presence on LinkedIn, so please feel free to follow me on LinkedIn. You'll probably find me. You may find in my profile some videos of me with wigs on. That is me. I do funny security videos with wigs. And then if you want to connect even more, fractionalciso.com. Yes, I got that website early, obviously, good for me, helps with SEO. And, you know, if you're actually interested in getting some help, please feel free to reach out to us.

Jodi Daniels 30:20

Well, Rob, we’re so glad that you joined us today.

Rob Black 30:23

Thank you so much. My pleasure. Great being here.

Outro 30:31

Thanks for listening to the She Said Privacy/He Said Security Podcast. If you haven't already, be sure to click Subscribe to get future episodes, and check us out on LinkedIn. See you next time.



Privacy doesn’t have to be complicated.

The post A CISO’s Guide To Using AI in Governance, Risk, & Compliance Programs appeared first on Red Clover Advisors.