New to Defense Mavericks? Start here
Jan. 16, 2024

Balancing AI Automation, Privacy, and Federal Compliance with Igor Jablokov

This week, Bonnie sits down with Igor Jablokov, CEO and founder of Pryon, to discuss the balancing act that AI-driven companies currently face when it comes to automation, privacy, and federal compliance. Igor breaks down the President’s recent Executive Order on AI safety as well as how to approach AI risk management, build trust through attribution, and promote responsible AI. Tune in to hear how a true AI legend approaches the power of this technology.

TIMESTAMPS:

(02:25) What’s happening in the world of AI today?

(05:57) How “big tech” benefits from the Executive Order

(12:42) Will the Executive Order change the relationship between government and industry?

(15:24) The power of AI literacy in privacy development

(19:25) What does the next decade of AI look like?

(25:54) Why the majority of “big tech” has displaced their AI ethicists

(29:10) Positive uses of AI to mitigate potential threats

(32:04) How to stay updated on the AI landscape

LINKS:

Follow Igor: https://www.linkedin.com/in/ryan-connell-8413a03a/

Follow Bonnie: https://www.linkedin.com/in/bonnie-evangelista-520747231/

CDAO: https://www.ai.mil/

Tradewinds AI: https://www.tradewindai.com/

Executive Order: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/

Pryon: https://pryon.com/

Transcript

Igor Jablokov [00:00:00]:
One of the things that makes me squeamish is whenever I hear this phrase that a lot of Silicon Valley likes to bat around, which is, if software is eating the world, then AI is its teeth. I mean, you guys have heard me say that before, and that's wildly inappropriate, because the old guard of AI, when we first got attracted to these technologies, it was actually because we foresaw AI being the heart, not the teeth.

Bonnie Evangelista [00:00:42]:
This is Bonnie Evangelista with the Chief Digital and Artificial Intelligence Office, and I feel like I'm joined by AI royalty with Mr. Igor Jablokov, the CEO and founder of Pryon. Can you introduce yourself, sir? Tell us who you are, what you do?

Igor Jablokov [00:00:59]:
Yeah, yeah. So I run a private company called Pryon. Pryon was actually the code name for Alexa, because our last company ended up becoming Amazon's first AI-related acquisition. Prior to that, I led the multimodal research team at IBM, where we developed the baby version of Watson. We were frustrated because they didn't greenlight that, and that's why we departed and stood up the last company. When it comes to Pryon, we knew that natural language experiences would come to more serious pursuits like defense, critical infrastructure. And so we decided to catch our own football, reconvene the team, and started working on this journey many years ago.

Bonnie Evangelista [00:01:36]:
Yeah, and that's why I was being funny when I called you AI royalty, because you've definitely been an AI thought leader. Maybe that's the more appropriate term, having founded the tech that became Alexa, and that is certainly tech that the younger generations have grown up with and become accustomed to. And now we're in this space where the proliferation of some of those, I won't say beginnings, but some of those things that have impacted our day-to-day lives, I would say the velocity of the tech is growing tremendously. And so now we're in this space where everyone is kind of struggling in the chaos, trying to figure out, what do I do? How do I do it? And I think this is where I wanted to bring some conversation from you as well. We were just talking off the record, or before we started recording, about how you were our first guest on what was then AI Proficiency, now Defense Mavericks. And you data dumped a ton of concepts and themes for us surrounding what was going on. So it's been a year, and a lot has happened in a year.

Bonnie Evangelista [00:02:38]:
So from your mind, what are some of the big things that are sticking out in your headspace in terms of what's going on and what are people thinking about talking about?

Igor Jablokov [00:02:48]:
Yeah, and there's nice symmetry to doing this a year later as well, because obviously a lot of things since ChatGPT's reveal happened that kind of just upset the apple cart, if you will. You know, sometimes people don't know what they have even when it's right under their noses. A year after we started the last venture, we were on a stage at the first-ever TechCrunch Disrupt conference. If you guys ever saw HBO's comedy Silicon Valley, they lampoon that conference. And I pull a Razr flip phone out of my jacket pocket, I speak into it and it talks out, and crickets. Nobody in the audience knows what the heck I'm showing them. Marissa Mayer is there. Guy Kawasaki, the famed Apple evangelist, is there. Marc Andreessen is there. And what they didn't know at the time is we were secretly working with Apple on Siri, and this is before an iPhone existed.

Igor Jablokov [00:03:41]:
So there's a lot of things that you're now encountering where you can't even tell how big a deal they're going to become at a certain point in time, because they're so rough around the edges as well. Right. But it's moving a bit faster now because you have the combination and the collision of several different things between cloud computing, the GPUs, the software, the models, and things of that sort. And then certainly what you're seeing is just a flush of capital going into it, which is going to create all forms of rampant experimentation at every layer of the AI stack. And so that's where everybody's expecting the leapfrog. Whether it's going to be large language models and these diffusion models that are ultimately going to be the thing that we use is practically irrelevant. It's just the fact that there's a lot of shots on goal being taken now, and so something is going to come out. We don't even exactly know what's going to come out.

Igor Jablokov [00:04:33]:
We all can bet that something's going to come out, whether it's things to support our way of life or from adversarial nation states overseas.

Bonnie Evangelista [00:04:42]:
So I'm just going to jump right into the executive order, because it's a little bit hard as a practitioner to consume the words in the order. There were a lot of values or principles promoted, and there were a lot of, I'll say, do-outs for lots of different agencies across the federal government, in terms of within 90 days or 120 days, like, we want action, we want this done. And I'm curious, you on the industry side, how did you take the executive order? Was it positive, negative, wishful thinking? Because there was a lot in there. What's your read?

Igor Jablokov [00:05:17]:
I think at least for some of the folks that I talked to that are practitioners, they were pleasantly surprised. I mean, it covered a lot of ground and it was fairly rational in terms of its approach, and it gave people timescales. Now, whether they're aggressive timescales or too long a timescale to figure some of these things out, at least it basically said, figure out how to deploy AI. Get your senior executives trained up on it as well. Hire some folks that understand what this technology is. Don't leave workers behind. Be aware of these types of models, especially when they're used in sensitive things like the financial services industry and things of that sort. I think as a broad brush, it was relatively positive.

Igor Jablokov [00:05:57]:
Now, of course, the proof is in the pudding when we finally take this to legislation and turn it into actual packages that people have to regulate and comply against. That's where things get a little weird, because what you're going to see is big tech try to make this glide slope where they actually like regulation, because it gums up the works and kind of freezes out competition, so that you have to be of a certain size in order to conform to all these conditions that are going to be part of that. But at the same time, their lobbyists are going to ensure that they have freedom of action as well. This is similar to me sitting in front of the CEO of Bank of America in the past, a fellow that essentially built that bank into a prominent position, Hugh McColl. And he talked about how he loved regulation, because it essentially froze out competition. And he always had enough lobbyists and staff members on his side to ensure that they had freedom of action and could still build and operate a business. So that's, I think, what people foresee. Basically, what that means is we need the other shoe to drop in terms of what the actuals are going to be when this stuff rolls out.

Bonnie Evangelista [00:07:03]:
Yeah, I'm not sure how this is going to play out because I'll say traditionally or typically, whenever governance is brought to the table, not saying that's a bad thing, it will slow everything down. And there is also a drive or a push to speed up. I'm feeling that in the department because we need to speed up delivery of capability because of operational imperatives and whatnot. So how do we balance the two from your perspective?

Igor Jablokov [00:07:30]:
Well, here's the thing. I mean, it's almost like big tech willed the regulation that happened in the first place, because they released products that, I mean, we've all read stories of reporters interacting with Bing, now rebranded, from the Microsoft portfolio, that started saying, hey, let's murder your spouse, and figure out how to doom humanity. I mean, how is that appropriate for a product that you reveal to the public? And of course, we all have our favorite hallucination stories from interacting with ChatGPT or things of that sort. So if you look at the root cause of things like ChatGPT, they had plenty of computing resources that were lying fallow. They started experimenting with things. They were more worried about competing with DeepMind, where DeepMind was getting a lot of attention for some of their experiments, and they just threw a Hail Mary and it ended up landing on something. But remember, most of us that were trained as computer scientists and engineers, we typically created products for you, right? Technologies for you, whether we were sitting on the government side or we were sitting on the commercial side, that were ones and zeros. When you press a button and you expect your lights to turn on, that's exactly what you're expecting, is that light turns on or off, right? If you're turning on the seat heater in your car, you're expecting it to turn on.

Igor Jablokov [00:08:43]:
And when you press the button again, you're expecting it to turn off as well. This is unusual technology that's kind of fuzzy, and you're not even sure what is coming out of it as well. So this is like a new world order, if you will, in terms of deliveries that people need to wrap their minds around. And in some ways, that's why the EO even exists, right? Because it's been a year in the making, which was in parallel with the world being exposed to some of the generative risks, like not just the hallucinations, but prompt injection attacks, reverse engineering the training data, is there copyrighted content in there as well? So that's what it's trying to address, is these new things that are getting discovered by the day.

Bonnie Evangelista [00:09:21]:
I think the values and tenets aren't too dissimilar to some of the things we've been seeing over the last year, whether it's even the department's stance on responsible AI and what those ethical principles look like, or others out there. But I still haven't seen it in practice, I'll say, because that's where the rubber meets the road. If we ask a company to make sure your technology is explainable, for example, what does that mean? What does that look like? And I'm not sure how we're going to converge. I really thought it was interesting that the EO talked about an AI Bill of Rights, and I'm curious, do you see that actually being a meaningful step in the right direction for a practitioner to get towards, okay, responsible or ethical AI in practice?

Igor Jablokov [00:10:11]:
Let me take it in two parts. The first thing, explainable AI, right? I mean, not all AI can be explainable, but at least make the stuff that's connected to a trigger explainable, right? So it's not about essentially spreading it like peanut butter across the entire portfolio of technologies, but the stuff that's closest to lethality, the stuff that's closest to changing people's lives in financial services and things of that sort. Yeah, you better know why you're giving somebody a loan and why you're rejecting somebody, why you're going to take a certain course of action and knock down an airframe versus not. That should be well known. Some of the magical parts of these generative models are not explainable, some of the stuff where they're discovering molecules for drug discovery and things of that sort. Look, some of us that are hearing this podcast are overweight, and yet we still eat donuts, right? So it's not like even humans can explain their actions on a day-to-day basis. And to think that we can have all manner of technologies where their internal machinations can be fully understood, that's not realistic.

Igor Jablokov [00:11:13]:
I mean, look, I hate to say this and to torture most of you, but some aeronautical engineers will tell you, if you really get enough drinks in them, that they don't 100% know how flight works. They know 99% of how the physics of flight works, but there are some strange things happening in there. And it doesn't mean that all of us don't get on that piece of technology, which is highly complex and takes us from point A to point B, safer, by the way, than being in any sort of road-bound vehicle. That's kind of where we want AI to go. You're never going to be able to 100% understand what's happening in there. Now, don't get me wrong, there are lots of great scientists and academics and such that are trying to understand the internal machinations of what's happening in the neural network and things of that sort. But look, we don't even understand what's happening inside of our own human mind. So it's a little bit of a fallacy to think that you can understand everything around us.

Igor Jablokov [00:12:08]:
Now, science and engineering does try to understand everything so that we can replicate behaviors and smooth certain behaviors out, like multimodal neurons, understanding all of the things that go into a particular memory so that you can essentially maybe remove some of the toxic elements in there as well. But the field is very nascent in doing so.

Bonnie Evangelista [00:12:30]:
How do you think, on that note, this executive order or the second or third order effects from the executive order is going to change the relationship between government and industry?

Igor Jablokov [00:12:42]:
Yeah, I think in some ways they're trying to prevent what ended up happening with social media. Right. With social media, the approach was a lot more hands-off on the evolution of that technology, and we've seen all forms of disinformation, whether it came to vaccines, whether it came to election interference, and just all sorts of knuckleheaded things that were happening on those platforms, triggering genocides, disinformation, and things of that sort as well. And look, there were issues even in 2014 where some social media companies were A/B testing good news and bad news, and it may have triggered teen suicides. So when you're completely hands-off on a new piece of technology, where humans aren't able to fully grasp the second and third order effects, as you mentioned them, what they're trying to do is kind of leap in front of some of these risks before they become wider scale. But look, the first AI-supported suicide already happened in Switzerland. Let's not sweep that under the rug, right? You have some elements of AI practitioners that are drawing people into interacting with these things as pseudo family members, pseudo friends.

Igor Jablokov [00:13:52]:
Right? Literally, they're saying you can use it for mental health support, and that is wildly inappropriate. So I get that there's a wild west whenever we end up discovering new technologies. But you know what? Just like a normal distribution curve, you're going to have certain folks that are just very laissez-faire, and they just wash their hands like Pontius Pilate, and they just say, oh, well, if you guys take it to a bad place, you know, it's not my problem, you know, I'm just making the tool.

Bonnie Evangelista [00:14:19]:
Right? And how much of a global impact do you think this is going to have? Because I think especially the EU in particular is already taking some firmer stances and whatnot. So do you think we'll be a pathfinder globally, or do you think we're just playing catch-up?

Igor Jablokov [00:14:36]:
You know, states like California tend to be a little bit more forward-leaning on some of this regulatory stuff. Think about what California did with CARB. Think about what they did with the privacy stuff, where they tried to create an analog of the European laws. They're going to be doing something similar on the AI stuff. And that may just end up setting the tone for the rest of the states in some ways. Look, even the Europeans haven't fully figured out how they want to roll this out, whether they want to be policing the foundation models or not. In the UK, they find that it's wildly inappropriate to have a GPT-style model in things like Snapchat, where minors are interacting with it and, again, using it for psychological support. And they're worried that that's going to get a lot of teens and children into trouble as well. So you're going to have this patchwork quilt.

Igor Jablokov [00:15:24]:
This is why, whenever you had me give talks on AI literacy to many of the acquisition officers, I spent act one of the three-part presentation on understanding the personal history and journey of our development, of the style of AIs that we've taken to market. Because when it's a black box like this, you kind of have to fall back to, what are the values of the teams that are creating this stuff? Are they going to allow certain things to happen? Because you can't be in all places at all times. And that's something that we take very seriously, at least in our organization, in terms of how we do this stuff. And we always err on the side of privacy and security. We always do. And this isn't just me saying these words. I'll give you a perfect example. In our last company, we were the first ones to ever do speech recognition at scale, right? Cloud-based speech recognition.

Igor Jablokov [00:16:16]:
And while everybody else was doing the transcriptions of your voicemails, of your phone calls and things of that sort, and taking it to human transcriptionists afterwards in order to correct these things, I bet the entire company on full automation, meaning it was only AI engines that would essentially crack these open and send the result back to you. If Bonnie would call me and leave a voicemail, I would get a text message of it that nobody else would see, you know, it would be for the recipient's eyes only. And everybody thought we were nuts at that point, because literally the totality of the industry was using humans to correct these things. And I'm like, no, we're going to be like the conquistadors burning the ships. We're going to bet the whole company on that. It's more private, it's more secure, it's less costly, meaning you open up the aperture, you democratize access to these styles of technologies.

Igor Jablokov [00:17:04]:
It's faster. Let's do it. Let's figure this out. So I think that level of discipline needs to exist in many of these organizations, and that's hard to police. Remember, regulations are always going to be a lagging indicator that tries to correct what people are already doing, rather than something that is going to leap ahead and try to prevent these things before they happen. That's the majority of the time.

Bonnie Evangelista [00:17:25]:
So have you and your company done anything differently since the EO came out, or do you feel like you're pretty aligned with what it's asking?

Igor Jablokov [00:17:34]:
Yeah, we feel we're pretty aligned. It was good to read it, because in some cases it laid out the literals of what some of our values and execution already were in terms of our research methods and things of that sort. Like here, I'll give you a perfect example. One of the things is, don't steal other people's data in order to build your models. I don't know, maybe I should be wearing a Captain Obvious Halloween outfit when I say that. It just seems like a good idea to treasure and respect other people's creativity and ability to derive an income from it as well. So we know how to create synthetic data. That's how we create extra data for our models if we need it.

Igor Jablokov [00:18:10]:
It just seems like there's a whole bunch of Captain Obvious stuff that we end up doing. It's just mind-boggling to me that it's not obvious to other folks.

Bonnie Evangelista [00:18:18]:
Other elements of the EO talked about risk based approaches. You're kind of, I feel like describing it in snippets. What other risk based approaches are you taking with your product in particular? Just as an example?

Igor Jablokov [00:18:32]:
Yeah. So we're working at the intersection between artificial intelligence and knowledge management, right? So think about what a digital library would look like inside of your agencies, inside of your services and things of that sort. It would be knowledge apps and a knowledge OS that turns into a knowledge fabric. Because one of the things that we're foreseeing with the rise of generative content is that the Internet and the web that we knew died last year. I mean, you guys have heard me say that before, and that's because when we used to do Google searches, I mean, we would hear audio, we would see videos, we would read text that were composed by human beings' ingenuity and creativity, right? That's what we were discovering the majority of the times we did our Google searches. I know there were bot campaigns and things of that sort, and some automation, but it was far smaller than what you're getting on the other side of generative technologies, right? It's going to go up orders of magnitude.

Igor Jablokov [00:19:25]:
What that means, generally speaking, is well before the end of the decade, the majority of what we're going to be discovering out there is going to be a hall of mirrors. And that's where you're hearing terms like model collapse, where there are already examples now of foundation models eating output of foundation models and starting to get a little wonky over time. That's happening on the vision side, that's also happening on the text generation side as well. For us, it's everything you can think of. How do you make it so people can't reverse engineer the training data? How do you make it so you have per-user access controls that mimic the controls that an organization already has? How do you make sure that it's a highly encrypted connection between the AI and the content repositories that it's ingesting, before you can start doing searches or workflows against that content as well? So it's more than just, you know, access controls. It's the network connectivity between the AI and the platform, and then ensuring that the model is essentially extracting exactly what it needs to. And that's why in our platform, we give organizations the choice of two different modes.

Igor Jablokov [00:20:31]:
One is extractive, so they literally only see the production of answers from their data. And the other is answer smoothing. That's sort of doing a pseudo-summarization of their documents. But the curious thing is, in both cases, you can tap on the answer and it literally opens up the exact page, or takes you right to the timecode of a video or an audio file where it learned it from, so that you always have attribution to source and method. Remember this, this is a very important thing to say. People do not trust technology, they trust other people, right? There are subject matter experts that you have in your organization. It's important for our AI, at least Pryon's platform, to not show up in your organization as this God-like entity, this magic eight ball giving you all sorts of solutions.

Igor Jablokov [00:21:22]:
Its job is to reveal the great resources that you have inside of your organization. Because then when you see the document from, let's say, Diane or someone else in your organization, you're going to trust that a lot more than this thing just blithering on, where you're not even sure where it came up with the solution from.
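
To make the attribution idea concrete, here is a minimal, hypothetical sketch in Python. It is not Pryon's actual API; every type, field, and file name below is illustrative. It only shows the shape of an extractive answer that carries pointers back to the exact page or timecode it was drawn from, which is what lets a user tap through and verify the source.

```python
# Illustrative only: an "extractive" answer bundled with attribution back to
# the exact page or timecode it came from, so a reader can verify the claim.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attribution:
    document: str                    # e.g. a PDF, video, or audio file name
    page: Optional[int] = None       # exact page for text sources
    timecode: Optional[str] = None   # "HH:MM:SS" for audio/video sources

@dataclass
class Answer:
    text: str
    mode: str                        # "extractive" or "smoothed"
    sources: list[Attribution]       # every claim traces back to a source

def render(answer: Answer) -> str:
    """Show the answer alongside its citations, the way a UI might."""
    cites = "; ".join(
        s.document
        + (f", p.{s.page}" if s.page else "")
        + (f" @ {s.timecode}" if s.timecode else "")
        for s in answer.sources
    )
    return f"{answer.text}\n  [sources: {cites}]"

# Hypothetical example: an answer extracted from a memo page and a recording,
# with attribution the user can click through to the original.
example = Answer(
    text="The program review is scheduled for the third quarter.",
    mode="extractive",
    sources=[Attribution("program_review_memo.pdf", page=4),
             Attribution("all_hands_recording.mp4", timecode="00:12:05")],
)
print(render(example))
```

The point of the structure is the one made in the conversation: the answer is never presented on its own authority, it always arrives with a pointer to the person or document it came from.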

Bonnie Evangelista [00:21:41]:
So my mind is now somewhat pivoting, not tremendously, but you've given us a little bit of a download on what your approach is, especially in light of the executive order. But has your approach changed in terms of collaborating with other companies or AI companies? Because it sounds like you were already moving toward this direction, so maybe not so much. But I'm wondering if anything has changed in terms of who you're working with, who you're partnering with or collaborating with to do X for the department, as an example. Because one thing I know for sure is no one company can do it all. We have to work together. Has the approach changed your way of thinking about how to collaborate with others?

Igor Jablokov [00:22:21]:
When you encounter AI companies, they essentially fall into one of three different buckets. So I'll describe that first, then I'll describe what we're doing, and then I'll describe our approach to partnering with other companies. The first bucket tends to just be application companies building on top of other people's APIs. These are the ones that show up and say, hey, I can solve this particular workflow for you, but when you actually scratch the surface, they're built on top of AWS, Azure, or GCP APIs, Cohere, Anthropic, OpenAI. They're essentially built on top of quicksand. They don't really control their cost of goods sold, their security, or anything of that sort. They could build an interesting lifestyle company, but I worry about what they actually can do, especially in more serious and more secure environments. So that's the first bucket, and that's...

Bonnie Evangelista [00:23:10]:
Because of the dependency.

Igor Jablokov [00:23:12]:
Correct.

Bonnie Evangelista [00:23:13]:
To the underlying tech. Okay.

Igor Jablokov [00:23:14]:
Right. Yeah. They have to make API calls out to somewhere else. And of course your security personnel, whenever something goes down a particular pipe and they don't know where it ends up on the other side, they always start worrying. The second and the third buckets tend to be generationally and technologically different. So the second bucket tends to be created by younger founders and teams that don't have a lot of flight hours in AI yet. And so what they do is they end up going with a narrow niche. They'll just make a foundation model, they'll just make a dialogue manager, a speech recognition engine and things of that sort.

Igor Jablokov [00:23:46]:
And because they don't really know how to connect their technology to the business yet, what they do is build a developer ecosystem around themselves just for trial and error. Now, that's personally disinteresting to me, because that's like sending any one of us into an AutoZone and telling us to build a car from parts. You can take several of those different vendors and try to create an anti-prime, if you will, and pull it all together. But it's never going to achieve the same level of accuracy, scale, security, and speed of the category-three company, which is a full-stack AI company where they know how to build all of this stuff for themselves. That leads to the highest level of performance and, practically speaking, the lowest energy use as well. This, by the way, is why Apple always tries to build their own chips, devices, operating systems, and applications. By doing so, that's why they always have market-leading performance as well. And so that's what we've always tried to do.

Igor Jablokov [00:24:40]:
Now, that also means we can control where it gets consumed, so that we can meet whatever the mission dictates. So we can go from public cloud multi-tenant for some of your publicly available content to private cloud, whether it's GovCloud or things of that sort. We're certainly working towards that. And then for the most sensitive missions, we can even go on premises. Literally, you have some pizza boxes that go in there, and it's completely disconnected from the world for everything that you need. And because it is so energy efficient, we even foresee that this could be put in a vehicle, in an airframe, and even on an edge device at a certain point in time. So with respect to partnerships, it tends to be folks like, we're working closely with Dell to qualify their servers so that we can do some on-premises work. As well, we work with partners like Nvidia.

Igor Jablokov [00:25:30]:
They're one of our customers. Dell is one of our customers as well. And that way we're always looking at the latest things that are happening in their CUDA stack and some of the other adjacent things that they have in dialogue management, speech recognition, their foundation models and things of that sort as well. That's how we tend to work with partners.

Bonnie Evangelista [00:25:49]:
Has it been hard finding partners that align to the same values as you?

Igor Jablokov [00:25:54]:
I mean, the ones that I mentioned are fairly aligned. The ones that are a little bit off the reservation, shall we say, I mean, they're not really even in our bubble, where we would typically have day-to-day interactions with them. So they're kind of hands-off to us anyway. Again, this isn't something that we just woke up to in the last year. I mean, we've been operating this way for over two decades. I mean, my chief scientist has been working on AI since 1981, when he got a letter from Fred Jelinek, who invented speech recognition, to have him start working on, quote unquote, real artificial intelligence. It's just been part of our values. Now, you can tell that, because one of the things that makes me squeamish is whenever I hear this phrase that a lot of Silicon Valley likes to bat around, which is, if software is eating the world, then AI is its teeth. I mean, you guys have heard me say that before, and that's wildly inappropriate, because the old guard of AI, when we first got attracted to these technologies, it was actually because we foresaw AI being the heart, not the teeth.

Igor Jablokov [00:26:58]:
Here's why. All the early uses of AI were what? Accessibility, to help handicapped folks interact and bring them into knowledge work. For instance, my chief scientist at the time, who's currently at Google, was a blind fellow, and technologies like that were very liberating to folks of that sort. You had us installing these technologies into cars so that all of us wouldn't crash into trees or wrap ourselves around telephone poles texting while driving. You had these technologies that were driven towards machine translation in order to bridge cultural divides, especially as everybody listening to this at times represented a multinational force going out into the world, right, and trying to affect outcomes and create more peace in the world as well. Those are the things that we were really known for and trying to drive into the industry. It wasn't this thing that was a predatory use of technologies to shove more ads in your faces, or to make sure that the next video was always playing in front of your eyeballs so that you wouldn't connect with humanity anymore.

Igor Jablokov [00:28:02]:
That was never our intention.

Bonnie Evangelista [00:28:04]:
Yeah, that's interesting. It's the heart, not the teeth. The reason I asked you that question was more about trying to get a pulse on that tension point that you talked about earlier with industry, that if they're criticizing regulation and governance, it's sometimes maybe coming from that place of, I just want to go fast and make money and put stuff out there for people to consume, whether I understand the second or third order effects or not. So I'm wondering if the people saying it's the teeth, that they're more of that ilk or whatever. And then I see things out there, like, I was trying to Google it real quick because I can't quite remember, but I believe Meta disbanded their responsible AI team. So there's things like that happening, and again, we're all here, it's happening to us, I feel like. And what do we do with stuff like that in the face of trying to be responsible, ethical, but still be swift to deliver what we need, especially, you know, I'm thinking from a department lens.

Bonnie Evangelista [00:29:03]:
Can we balance it all? Can we really have it all? Or do we have to sacrifice one to get the other two kind of thing? What are your thoughts on that?

Igor Jablokov [00:29:10]:
Yeah, you brought up an important point here, and I think, frankly speaking, this is why the EO was birthed over the last year, because you're exactly right. It wasn't just Meta. The majority of the big tech entities have displaced and dismissed their AI ethicists, which is counterintuitive, because if you're now saying, hey, this is going to be a big deal, and we're going to drive AI technologies into all of our products, that's when you would want people looking over your shoulders. This is why even this company, as a mid-market entity, has a new chief trust officer that used to be an undersecretary in one of the administrations, has experience as a govvie, to make sure that everything that we're doing matches the values of the citizenry that we all support. On the flip side, now, on the other side, let's start talking about the positive uses, right? We have an order of magnitude fewer citizens than some of the adversarial nations that are plotting against us as well. And so the reason why we're all fascinated by AI is the fact that it becomes a force multiplier. Each one of us is going to have the same productivity as what used to take ten of us in the past. That's what needs to happen over the course of the next 24 months or so, especially if we're going to go into a dicey second half of this decade, right? So that's what AI technology can do, by the way.

Igor Jablokov [00:30:27]:
Let's make it very simple. Do you know what AI actually is? All it does is something a human can do, at scale. That's it. If you put a whole bunch of pictures of dogs and cats in front of Bonnie and Igor and say, hey, let's sort them out based on which ones are brown, which ones are black, which ones are white, which ones are this, which ones are that, which ones are cats, which ones are dogs, and then break it down into the different dog breeds and things of that sort, an AI can classify that a lot faster than you and I can. But it's understood that what it's doing is what we can do, at scale, right? That's what it means. I think I just oversimplified Project Maven, by the way, into finding cats and dogs. But in all practicality, that's what we need to do. We're trying to increase the productivity of everyone.
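
As a toy illustration of that "something a human can do, at scale" point, here is a minimal sketch in plain Python. The records, features, and labels are entirely made up for illustration; it simply labels each new item by comparing it to a handful of hand-labeled examples, which is the same judgment a person would make, repeated over many items quickly.

```python
# Toy nearest-neighbor classifier: copy the label of the closest example a
# person already sorted by hand. All data below is invented for illustration.

labeled = [
    # (weight_kg, ear_pointiness 0-1, label)
    (4.0, 0.9, "cat"),
    (5.5, 0.8, "cat"),
    (20.0, 0.3, "dog"),
    (30.0, 0.2, "dog"),
]

def classify(weight_kg: float, ear_pointiness: float) -> str:
    """Return the label of the closest hand-labeled example."""
    def distance(example):
        w, e, _ = example
        return (w - weight_kg) ** 2 + (e - ear_pointiness) ** 2
    return min(labeled, key=distance)[2]

# "At scale" just means running the same judgment over many records quickly.
unlabeled = [(3.8, 0.95), (25.0, 0.25), (6.0, 0.7)]
for w, e in unlabeled:
    print((w, e), "->", classify(w, e))
```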

Igor Jablokov [00:31:15]:
The best part for me is when I see Pryon deployments in all sorts of places, whether it's engineering or technical staff that use it, or front-of-the-house staff with business teams, the fact that they say, holy smokes, this used to take me 2 hours and now I just got an answer in 2 seconds. That part is fantastic to me, because it frees us as people to do the harder things that AI can't do yet. That's our vision for why we're really excited to be working in the knowledge management space.

Bonnie Evangelista [00:31:44]:
I mean, if our conversation isn't evident enough, this stuff is changing wildly. So what are, I would say, your recommendations on how listeners can stay updated about the implementation and the effects of either the executive order or the AI landscape in general? How do we keep abreast of what is going on?

Igor Jablokov [00:32:04]:
Well, obviously this show is the first place to go. And what I would say is, feel free to visit our website as well, pryon.com. That's spelled P-R-Y-O-N, N as in Nancy, dot com, because we are going to start revealing some content on the executive order that's going to be co-developed by ourselves, our chief trust officer, and similar folks. And then we're going to start recording some videos and expressions as well in terms of what these things mean, at least from our vantage point, from the industry side as well. And the goodness is, for us, we do want to fully understand the EO and eventually what it's going to do, because that's going to become baked into the actual product, where hopefully, at a certain point in time when NIST and others are going to have a certification for these platforms, we can be amongst the first, if not the first, to be fully qualified, that it's compliant with all of the machinations and ideas that the federal government has to both secure these platforms, ensure their safety, and then do the job and mission that they're qualified to do. So those are the things that we're going to be supporting in the coming weeks and months.

Bonnie Evangelista [00:33:13]:
Very cool. Are you going to do the three part act? Are you going to make that a video? You should if you're not.

Igor Jablokov [00:33:18]:
I know. I mean, we have to do a refresher on that as well, because look, the one thrilling thing about this industry is it's literally changing by the day, and we all have to have a brisker tempo in terms of understanding these things as well. But ultimately, to align values, we're going to be fielding responsible AI. And I know that's a loaded term, and everybody thinks it sounds like a hippie-esque term in some cases, but it really means this: create technologies that you would want leveraged against yourself. That's it. Keep it that simple. When you show up on the field, what type of technologies are you hoping to encounter, even in an adversarial way, whether you're talking about commercially or in a more militaristic way, where there are still expressions, where there's still humanity governing the world, if you will? That's what's important. When we lose sight of that, that's where we get into trouble.

Bonnie Evangelista [00:34:13]:
Yeah, it's the golden rule.

Igor Jablokov [00:34:15]:
Right, right.

Bonnie Evangelista [00:34:16]:
Do unto others as you would do to yourself. We've forgotten that. We just need to get back to it. Sounds like that's right. All right. This was such a pleasure. Thank you so much. I appreciate your time, your insights.

Bonnie Evangelista [00:34:27]:
You're always fun to talk to. I'm always learning when I talk to you, and I greatly appreciate that. Thank you so much.

Igor Jablokov [00:34:32]:
Yeah, thanks for having me. And have a happy holiday. Thanks.