Why Architecture Matters for Security and Privacy in Citizen ID
In this video blog, our CTO, Rohan Pinto, joins our CMO, Michael Cichon, to discuss why architecture matters for security and privacy in citizen ID.
Well, good morning everybody. This is Michael Cichon, chief marketing officer at 1Kosmos. I’m here today with Rohan Pinto, our co-founder and chief technology officer, to talk about citizen ID. So once again, Rohan, welcome to our video blog.
Thank you. Thank you, Michael. Good morning and welcome once again.
So citizen ID, this need for citizens to have a digital analog, if you will, for accessing government services. Can you talk a little bit about this need? Where has it arisen from, and what are some of the requirements, I guess, for a citizen ID?
Absolutely. So citizen services are pretty crucial to our ecosystem. I mean, from the user or citizen having to go and pay his taxes online, pay his bills online, or recoup, let’s say, his insurance costs, et cetera. Everything is digital and online nowadays. And back in the day, when we were trying to secure access for a citizen, we would always think along the lines of MFA, which is: you have your user ID and password, and then you send an OTP code over to the user. And in this day and age, when we know that OTP codes are not really as secure as they’re supposed to be, especially with the advent of issues along the lines of SIM jacking, et cetera, the entire concept around what you know, what you have, and who you are has changed. The paradigm has changed. And we tend to use biometrics to identify an individual because biometrics literally represent who the user really is.
So it’s extremely easy for a person to represent himself using his own live biometrics to access systems. And that’s where biometrics, facial recognition, et cetera, play a very crucial role in how citizens access government services, and even private services, for that matter.
Okay. So a couple of questions here. First of all, these government services, these are federal, state, local? Are they one or the other? I presume they’re all three, right?
They’re all of them. It’s literally all of the above.
Okay. So essentially this is, if you will, the business-to-consumer use case, but it’s now government-to-citizen.
Absolutely. It is.
All right. So you mentioned biometrics and using biometrics to authenticate versus user ID, password, and SMS codes and all that. This has been in the news recently. So first of all, let’s qualify the biometrics. Are you talking about a fingerprint or a thumbprint or a selfie? What type of biometrics for authentication?
Okay. Fingerprints are very valid biometrics, but let’s be a bit practical about it. In order to capture a fingerprint, you would need specialized equipment, and not every citizen in this world has access to specialized equipment or the ability to connect a specialized device to his laptop or computer in order to use biometric services like fingerprint detection. However, what every user has today is a mobile phone with a very high-definition camera, as opposed to the cameras you have on laptops. And even laptops today have pretty good quality cameras, which enables us to capture the facial features of a particular user rather than a fingerprint. So the most commonly used biometric that has been adopted by the industry, and also by the federal government and citizen services, is facial recognition, over fingerprint detection and other forms of biometrics.
Okay. All right. So this idea of capturing a facial image and facial recognition has been in the news recently, and it’s been in the news in a bad way. Some concerns out of Congress about things like racial bias or decisioning bias. So can you talk a little bit about those issues and perhaps overcoming them?
Absolutely. There’s more to it than just identity profiling or racial bias or identity bias. There’s also ensuring that the person who is trying to authenticate himself using his live biometrics, which is his face, is the real person, an actual live person. Now, note there’s a difference between liveness detection and liveliness detection. I’ve seen a lot of vendors or providers out there who go by looking at perspectives: they ask the user to move toward the camera and move backwards, just to ensure that the different perspectives they capture from that particular individual indicate liveliness. Now, liveliness is not necessarily liveness detection, because you can always move a video recording of a user toward a camera and back and thereby bypass liveliness detection. But liveness detection is something else altogether, where you not only expect the user to express random human emotions, like smiling or blinking or winking, but you also do a lot of analytics on the face itself.
For example, you could, in real time, calculate the distance between the nose and the ears, or the lips and the eyes, the distance between the two pupils, and the depth between the edge of the nose and the ears, to ensure that it is a 3D image of a person and not a 2D image. And even 3D images can be spoofed nowadays by using 3D masks. And that’s where random facial expressions come into play. We call this PAD (presentation attack detection), which is a combination of passive and active liveness detection, where you do a whole bunch of liveness detection actively on the phone or on the camera while the user is trying to authenticate himself.
And then you also have a whole bunch of passive detection that happens on the server side to ensure that this is a valid person. You go through a whole bunch of templates to ensure that there’s no identity bias, no racial profiling, and no twins in the picture before you actually authenticate that individual using his biometrics. So it’s pretty crucial for us to also understand the difference between liveness and liveliness before we let a user use live biometrics to authenticate into a platform.
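The geometric analysis Rohan describes can be illustrated with a small sketch. This is not 1Kosmos code; the landmark names and ratios here are hypothetical, and real systems use far more measurements. The point it demonstrates: distances normalized by a reference distance (here, the inter-pupillary distance) are unchanged when a flat photo is simply moved toward or away from the camera, which is why perspective-only "liveliness" checks can be spoofed by a 2D image.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometry_ratios(lm):
    """Scale-invariant ratios of facial distances, normalized by
    inter-pupillary distance. lm maps landmark names to (x, y)."""
    ipd = dist(lm["left_pupil"], lm["right_pupil"])
    return {
        "nose_to_ear": dist(lm["nose_tip"], lm["left_ear"]) / ipd,
        "lips_to_eyes": dist(lm["upper_lip"], lm["left_pupil"]) / ipd,
    }

def move_photo(lm, k):
    """Simulate moving a flat photo closer to the camera: every point
    scales uniformly, so every ratio stays exactly the same."""
    return {name: (x * k, y * k) for name, (x, y) in lm.items()}
```

Because a real, three-dimensional face changes these ratios as the head rotates or approaches the camera (foreshortening affects depth landmarks differently), comparing ratios across frames, combined with challenge-response expressions, gives a stronger signal than perspective changes alone.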
Got it. So all of these facial recognition approaches, I guess, are not created equal. I’ve read about facial recognition rates and, I guess, failure rates. What are some of the metrics of a good facial recognition system versus a poor one? I mean, what should we be looking for if we’re looking for a good facial recognition system?
Gender bias is one thing we need to pay attention to, because we cannot be profiling users based on the color tone of their skin, or how they look, or whether they are male or female. So we need to ensure that when a user is authenticating, we don’t qualify the facial recognition capability based on the gender of the person, or the skin tone, or the race. We also look at things like twins. You can have two identical twins who might be authenticating into a citizen service, and you want to make sure it’s the one user and not the other. I mean, it would be disastrous, Michael, if I looked like you and I could authenticate into a platform and access all your tax records. So ensuring that identity bias is also taken care of is pretty crucial to any biometric authentication platform.
Yeah. Well, it might be disastrous if you looked like me just period, but let’s get beyond that. Security is another issue. So it’s one thing to, I guess, capture the facial image, but now you’ve got to store it. And I know that has raised a lot of concerns and arguments as well. So what are some of the issues with storing these biometrics then?
Absolutely. Now, storing the biometrics is going to open up a can of worms. For example, in the traditional world you had user IDs, passwords, and other PII stored in a centralized database, and one hack of that centralized database leaks the person’s PII all over the world, and identities can be proliferated based on that. So I could actually go and create another account claiming to be Michael Cichon, based on information I have hacked from some centralized identity management platform.
Now imagine the same effect if facial biometrics are also stored in a platform. So typically you should not be storing any biometrics. I’ve seen systems where all facial recognition templates are stored in a platform and you match a face against the entire library of templates, which is not what we do at 1Kosmos, because at 1Kosmos the identity is in the possession and control of the user. The entire facial biometric template is encrypted and stored within the secure enclave of the device that the user is holding. And the actual match, between the template the user used to register himself on the platform and the template the user is using to authenticate into the platform, is done in real time and never stored on any backend at any given point in time.
Now, that’s not to say that systems that store biometrics in some kind of centralized platform are bad. All I’m saying is that if biometrics are stored in a centralized platform, they need to go through a whole bunch of more stringent rules to ensure that the system cannot be compromised, and that you’re not comparing the person’s face against a library of stored images, but rather comparing the user’s face against the user’s own face captured at the time of registration. It’s a one-to-one match rather than one-to-many.
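The one-to-one versus one-to-many distinction can be sketched in a few lines. This is an illustrative sketch only, assuming biometric templates are represented as feature vectors compared by cosine similarity; actual template formats, similarity measures, and thresholds vary by vendor and are not described in the discussion above.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two template feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify_one_to_one(enrolled, probe, threshold=0.9):
    """1:1 verification: compare the live probe against the single
    template this user enrolled at registration."""
    return cosine_similarity(enrolled, probe) >= threshold

def identify_one_to_many(library, probe, threshold=0.9):
    """1:N identification: search a whole library of stored templates.
    This is the centralized pattern the discussion above cautions against."""
    return [uid for uid, tpl in library.items()
            if cosine_similarity(tpl, probe) >= threshold]
```

The design point: `verify_one_to_one` needs only the one template held on the user’s own device, while `identify_one_to_many` requires a central store of everyone’s templates, which is exactly the honeypot a breach would expose.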
Got it. So I’m being a little coy here in asking this last question because I know that a lot of the thorny issues you’re describing, these were anticipated years ago when the 1Kosmos platform was developed. We talk about things like privacy, privacy by design. But could you walk through some of the capabilities? Because a lot of these capabilities, you don’t know you need them until you need them. But can you explain a little bit about the architectural advantage of the 1Kosmos platform?
Right. Our primary advantage, or what I would say is our trump card, is that our entire architecture is based on the principles of privacy by design, which means ensuring that the user is in control of his own identity. We treat security as a core protocol on top of which we build the solution, rather than building a solution and then trying to have privacy principles added to it afterward. So any architecture or platform should adhere to the seven principles of privacy by design. I mean, you can Google it. Dr. Ann Cavoukian has a fantastic writeup online on the seven principles and what one can do to adhere to them, because if I start talking about it, we could go on forever.
But ensuring that any system or platform adheres to those standards and principles is critical, not just to the success of the solution: it also ensures the authenticity of the individual, makes sure the platform cannot be compromised, and keeps the information that is captured and verified in real time secure.
Okay. So privacy, interoperability. There are standards for this. And we’re not just compliant with the standards; we’ve gone through the rigor of certification. Correct?
Absolutely. We are NIST-certified. We are FIDO-certified. We are SOC-compliant. We are, I believe, ISO 27001 compliant. The reason we have gone through all these certifications is to put ourselves in a position not to say, “Hey, you know what? Our platform is secure just because we say it is secure.” Our platform is secure because we have been certified to be secure by third parties, agencies like Kantara or NIST or the FIDO Alliance, et cetera.
Right. Well, you’ve built one amazing system. I appreciate your time today stepping us through it. As usual, the pleasure is all mine. Thank you very much, and that was Rohan Pinto, everybody.
Thank you so much, Michael. The pleasure is mine as well. It was great talking to you once again.