Meeting Date

26 Apr 2023

Zoom Meeting Link / Recording

Attendees

Main Goal of this Meeting

Introduction of participating members and context setting for future work

Agenda Items and Notes (including all relevant links)

Time | Agenda Item | Lead | Notes

5 min | Start recording; Welcome & antitrust notice; Introduction of new members; Agenda review | Chairs
  • Antitrust Policy Notice: Attendees are reminded to adhere to the meeting agenda and not participate in activities prohibited under antitrust and competition laws. Only members of ToIP who have signed the necessary agreements are permitted to participate in this activity beyond an observer role.
  • New Members: Shreya Kothari, Callum Haslam (find all the member introductions in the recording of this meeting)
  • Scott urges participants to learn more about the vLEI work at GLEIF in the context of the work underway in this Task Force.
  • Philip mentions the topic of "How do we trust the issuers?"
  • Steven mentions the paradox of whether to do governance first or the technology/implementation, especially since the technology moves much more rapidly.


5 mins | Review of action items from previous meeting | Chairs | Not applicable at this meeting
5 mins | Announcements | TF Leads
5 mins | Determining meeting content and capture of Issuer Reqs and related experience from "Issuers in the wild" | Co-Leads
  • (from Slack) Suggested an iterative approach, starting with:

    • General approach to requirements gathering with the group (expanding on what is here).

    • A quick template to capture and categorize that experience with the target of requirements

    • Have each of the following groups table what information they have and give a briefing at the weekly meetings. The list so far:

      • The GLEIF team - banking-driven Identities, corporate roles, etc.
      • BC Gov (via yourself) - mining, emissions, registered business
      • Sezoo (John Phillips & Jo Spencer) - SSI In Australia
      • Savita - IEEE and Blockchain SSI
      • Ted Delvecchia

As most of the possible presenters were not at the meeting, this is postponed to another meeting.

50 mins | Discussion on Issuer Requirements | Co-Leads

Sal: The question is whether what we are talking about is trust of verifiable data; at this point I don't think we are (this is mostly about data associated with verifiable credentials, not about personal data being exchanged - Neil's observation).

Directed to Phil Feairheller (GLEIF): Observation that the GLEIF annual review requirements for Qualified vLEI Issuers (e.g., Provenant) are a list of roughly 12 to 15 documents covering governance and operations, which are only mentioned by title. Steve Milstein and I have been unable to find the content or templates for those governance review documents.

Phil: there are a number of qualified vLEI Issuer (QVI) applicants who have submitted documents, and GLEIF is in a back-and-forth review process with those candidates.
The documents cover a list of questions concerning:

  • Their business and business processes
  • Technical preparedness for becoming a QVI

That is work that this group will have to do. There are some core questions on the technology side, but also the question of establishing the authoritative reference or organizational (governance) authority role, which fits right into providing both a cryptographic and a governance root of trust.

The DIF Trust Establishment WG has a Trust Establishment document which is signed by the DID of the document and is designed to hold key claims information in a Verifiable Credential "lite" approach that could be applied to ecosystem technical components as well as ecosystem structure. The only thing they are missing is the ability to multi-sign the document by someone in a governance authority role, but that would only be a small extension to their model.
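
A minimal sketch of that idea follows (Python; the field names, the HMAC-simulated signatures, and the governance co-signature are illustrative assumptions, not the DIF data model): an author DID signs the document's entries, and the small extension discussed above adds a second signature from a governance authority role.

    # Illustrative sketch only: field names and signing are assumptions, not the
    # DIF Trust Establishment data model. Signatures are simulated with HMAC so
    # the example stays self-contained and runnable.
    import hashlib
    import hmac
    import json

    def sign(secret: bytes, payload: dict) -> str:
        """Stand-in for a real DID-key signature (e.g., resolved via a DID document)."""
        message = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(secret, message, hashlib.sha256).hexdigest()

    # Trust-establishment-style document: the author's DID asserts claims about
    # ecosystem participants (technical components as well as ecosystem structure).
    doc = {
        "author": "did:example:ecosystem-steward",
        "entries": {
            "did:example:issuer-1": {"role": "qualified-issuer", "status": "accredited"},
        },
    }

    doc["proofs"] = [
        # Author signature over the entries.
        {"signer": doc["author"], "sig": sign(b"author-demo-key", doc["entries"])},
        # The "small extension" discussed above: a governance authority co-signs
        # the same entries, making the document multi-signed.
        {"signer": "did:example:governance-authority",
         "sig": sign(b"governance-authority-demo-key", doc["entries"])},
    ]

    print(json.dumps(doc, indent=2))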

A key point for today's discussion - Sal: what is the frequency of confirming the validity of a (verifiable) credential (including a credential or certificate for an actor/entity/party such as an Issuer)? There is currently no ability to understand its "currency" (when it was last verified at depth).

Discussion: this is a real issue that will need to be addressed.

In other words, an annual review of a QVI organization is not frequent enough, and an annual review of documents does not guarantee that they reflect day-to-day Issuer operation.

Sal suggests that there is an internal state that could be monitored. Suggested that the VC be re-verified (details? what to monitor) and also the state of the Issuer (details?). This suggests using the KERI watcher/witness approach, where key rotation and other actions cause a state change that the witnesses/watchers monitor - something that is refreshed with each interaction. Another solution is logging of internal processes and events (a form of monitoring).

Ideally, all credentials and certificates need to be verifiably valid and usable now (at the time of use).

Phil: so this impacts the ability of an external organization to make decisions on the trustability of the Issuer (and all the components in the ecosystem). Is validation of the credentials, and of the ecosystem that produced them, being done sufficiently frequently that - as a business using the system - we are satisfied we are working with continually verified credentials in a sufficiently frequently verified ecosystem? Or do I (as a business) need something that has more rigour?

Sal: as a business, you control your identifiers and internal credentials, but when it comes to dealing with ecosystem components and credentials outside of your control, I think you need co-governance. You need all the parties involved in a transaction/interaction to be part of the decision to trust the ecosystem support for the transaction, not just the business (one of the parties) itself.

Neil: that's getting complex very fast.

Sal: if all we are doing is registering verifiable data cryptographically, then the point is moot. But if we are providing credentials/certificates for all the moving parts of the ecosystem, then we are dealing with more than just cryptographic trust.

Neil: Use Case: an Issuer organization is taken over by a bad actor. It gives the appearance of leaving all the technology in place and continuing to operate, but could well back-door the technology and leak information or issue credentials to unqualified entities, which currently is undetectable by external technical means. Note: this also points to the authenticity of the executables (e.g., signed executables) as part of trust.

Sal: exactly my point. So it comes down to identifying the responsible individuals and ensuring, on a business takeover, that they are still around - that they still work for the company - and knowing what (governance) delegation chain each such individual is involved in.

Neil: This also touches on the revocation or expiry of a credential. I don't see anyone talking about a publish/subscribe/notification system such that all users of a VC are notified in a timely manner that the VC is no longer valid.

Phil: do you want a detailed explanation of how KERI handles this type of scenario?

Neil: this was more a question of - does KERI and ACDC handle this?

Sal: a Notice can show the state of a relationship at any given time

Bree Blasicevic: Our assumption is that, as part of the business process, you would check for validity at the time of use vs. needing an event notification.

Neil: the rationale for push notification is that, particularly with a rare event such as expiry/revocation, constantly checking that a VC is still valid on each use (including in long-running processes) creates a significantly higher level of validation/polling traffic - particularly if all uses of VCs need this level of traffic - versus a notification on what amounts to an exception condition.

Rob: I'm not sure about the potential issues around a verifiable credential. As you know, Sal, the issue with OCSP (Online Certificate Status Protocol) is around privacy, in the sense that it's possible for the issuer of the credential to understand where the user is using it (as they are required to check in). Whether it's pub/sub or check-on-demand, there is still a concern about whether the issuer of the credential should have any knowledge of where you use that credential.

In other words, if a Holder or Verifier is continually going back to the Issuer to verify credential validity at each point of use, then that may be providing information on the use of the credential to the Issuer. A pub/sub solution resolves that problem.
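
A minimal sketch of the pub/sub shape being discussed (Python; all class and method names are hypothetical, not a proposed standard): relying parties subscribe once, and traffic occurs only on the rare expiry/revocation event rather than on every verification - though, as noted later in the discussion, whoever runs the notifier still learns who is interested in which credential.

    # Hypothetical pub/sub revocation notifier; only the exception event
    # (revocation) generates traffic, instead of per-use polling.
    from collections import defaultdict
    from typing import Callable

    class RevocationNotifier:
        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

        def subscribe(self, credential_id: str, callback: Callable[[str], None]) -> None:
            # Privacy caveat: a naive notifier learns who cares about which credential.
            self._subscribers[credential_id].append(callback)

        def revoke(self, credential_id: str) -> None:
            # Push the exception condition to everyone who registered interest.
            for notify in self._subscribers[credential_id]:
                notify(credential_id)

    notifier = RevocationNotifier()
    notifier.subscribe("vc-123", lambda cid: print(f"relying party notified: {cid} revoked"))
    notifier.revoke("vc-123")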

Sal: If you digital-twin both parties, and if all we're talking about is the Controller information - no identifiers whatsoever related to the data subject - and you've captured state somehow, then document whatever that is and go from there, rather than having to push/pull at any point. You've already got state, and all you need to determine is whether the state you're in is the same as the one you think you're in.

Rob Sherwood: And again, I'm not sure how relevant privacy is going to be in all of these contexts. If we're talking about, say, my privacy rights over my job title from my employer, it's probably actually my employer's information - they decide how it's used. But whether it's pub/sub or check-on-demand, there may need to be a third option, which is that the holder of the verifiable credential provides the status information at the time it makes a request to a relying party.

It may be that the Holder of the VC determines the overall state to relay to the relying party.

Neil: I'm skeptical on what is being achieved with what appears to be a large amount of additional traffic.

Rob: if I have a VC (as a holder), which I get from you, Neil (as an issuer), and then I want to go to Bree and present that (my) verifiable credential in order to get access/permission for an operation, Bree (as relying party) goes to Neil (as the issuer) and asks for the status of my credential (#6). So you, Neil, now know that I'm using the VC you issued to me to talk to Bree. I may not want you to know who I am or who I'm sharing my VC with. That was the issue with OCSP and hence the concept of OCSP "stapling".

That solution is where I (Rob, as the Holder) go back to you (Neil, as Issuer) for an updated validation of the credential and present that to Bree. If Bree says she needs a more recent validation, I (the holder) can ask for a more rigorous, more recent validation without disclosing that Bree is asking for it.

Point: if all the users of a VC (e.g., Rob) register with an Issuer for notification of expiry/revocation, then that reveals to the Issuer who is using the VC. That problem is solved if the Issuer sends a notification to the Holder, who holds a list of entities that are using that VC and can do the notification themselves, which preserves privacy. This is a variation on Rob's solution (or the approach that OCSP "stapling" used).
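
A rough sketch of that stapling-style flow (Python; the token format and freshness policy are illustrative, and HMAC stands in for a real issuer signature that a verifier would check with the Issuer's public key): the Holder fetches a fresh, signed status assertion from the Issuer and staples it to the presentation, so the Issuer never learns which relying party is asking.

    # Sketch of an OCSP-stapling-like status proof carried by the Holder.
    import hashlib
    import hmac
    import json
    import time

    ISSUER_SECRET = b"issuer-demo-key"  # stand-in for the Issuer's signing key

    def issue_status_token(credential_id: str, status: str) -> dict:
        """Issuer gives the Holder a short-lived, signed statement of credential status."""
        payload = {"credential_id": credential_id, "status": status, "issued_at": int(time.time())}
        sig = hmac.new(ISSUER_SECRET, json.dumps(payload, sort_keys=True).encode(),
                       hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

    def verify_status_token(token: dict, max_age_seconds: int) -> bool:
        """Relying party checks the staple locally; no call back to the Issuer is needed."""
        expected = hmac.new(ISSUER_SECRET, json.dumps(token["payload"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        fresh = int(time.time()) - token["payload"]["issued_at"] <= max_age_seconds
        return (hmac.compare_digest(expected, token["sig"])
                and fresh
                and token["payload"]["status"] == "valid")

    stapled = issue_status_token("vc-123", "valid")             # Holder requests this from the Issuer
    print(verify_status_token(stapled, max_age_seconds=3600))   # Relying party (e.g., Bree) verifies it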

Phil: I would agree that is one way, but it is not the only way.

Neil: I still want to emphasize that we are requiring a large amount of routine traffic for what is an exception condition (expire/revoke). With this approach, each time I get in my car it may be checking whether my driver's license is still valid, which is a large amount of unnecessary traffic for an exception event. We're not talking about a little bit of traffic; we're talking about orders of magnitude of additional traffic.

Neil: observation, we're talking solutions here; what are the requirements?

Rob: Relying parties require that they know whether a VC has expired or been revoked. In asking the Issuer to provide notification, the Issuer must not be able to know with whom the VC is being used (relying party vs. the holder).

Neil: additional requirement - we need to verify, on a more frequent basis, the validity of not only the VC but also the Issuer of the VC. We need to know that the ecosystem and its actors are still valid; ideally, we need a near real-time understanding that the ecosystem is still valid. That includes signatures of ecosystem stewards who have co-signed on the validity of a credential, credential process, or ecosystem actor (Issuer) or component. For example, if Mary is no longer a valid governance authority, then wherever her signature has been used it needs to be updated by Mary's replacement.

Rob: a possible solution is the publication of all expiry/revocation to a separate registry. That is driven by the requirement not to expose use of a VC to other parties (including the Issuer). However, you have now centralized revocation.

Neil (post-meeting observation): if detection of credential expiry/revocation is the responsibility of the Issuer, then revealing the use of a VC is avoided by posting to/notifying the holder (or a holder-controlled (but mandated) registry acting on behalf of the holder), which then does the notifications to VC users (relying parties and others). That resolves both the privacy and the centralization problem.

What is becoming clear is that placing all these requirements on a VC/certificate Issuer will surface many issues that will be encountered throughout a governed SSI ecosystem, which validates that focusing on the Issuer is a valuable exercise.

Bree - my understanding of the trust triangle is that when a VC is provided by a Holder to a Verifier, the Verifier will need to check whether the VC has been revoked, and that Verifiers are therefore connected to the Issuer.

Sal - the point is that if the Verifier is going back to the Issuer, then that reveals that the Holder is interacting with that Verifier (Relying Party)

Neil: the point of the Verifiable Data Registry is that it contains the Issuer public key and schema of the VC such that the Issuer does not need to be contacted for verification. But, in the current I/H/V triangle workflow, the VDR does not cover individual VC (e.g., the Holder's VC) revocation.
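
A small sketch of that point (Python; the registry layout and field names are invented for illustration): the Verifiable Data Registry supplies the Issuer's verification key and the credential schema so verification happens locally, but nothing in it answers whether a specific Holder's VC has been revoked.

    # Toy Verifiable Data Registry: key material and schema only, no per-VC status.
    import hashlib
    import hmac
    import json

    VDR = {
        "did:example:issuer-1": {
            "verification_key": b"issuer-demo-key",  # stand-in for a public key
            "schemas": {"EmploymentCredential": ["name", "role"]},
        }
    }

    def verify_against_vdr(credential: dict) -> bool:
        entry = VDR.get(credential["issuer"])
        if entry is None:
            return False
        # 1. Schema check using the registry.
        required = entry["schemas"].get(credential["type"], [])
        if not all(field in credential["claims"] for field in required):
            return False
        # 2. Signature check using registry key material (HMAC as a placeholder).
        #    Note what is absent: no check for revocation of this individual VC.
        expected = hmac.new(entry["verification_key"],
                            json.dumps(credential["claims"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential["sig"])

    claims = {"name": "Mary", "role": "CFO"}
    sig = hmac.new(b"issuer-demo-key", json.dumps(claims, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    print(verify_against_vdr({"issuer": "did:example:issuer-1",
                              "type": "EmploymentCredential", "claims": claims, "sig": sig}))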

Bree: can the revocation not be written to a ledger?

Neil: that has been a proposal, and there are "blind revocation ledgers" that have been proposed. However, they still have the problem of polling - high traffic to check for an exception condition. An alternative is for all users of verifiable credentials to subscribe to revocation events from a revocation register. That solution provides both event pub/sub and polling.

Rob: using a public revocation ledger does remove the issue of revealing who is using a VC.

Neil (post-meeting): however, both polling the revocation register and registering for notification reveal the interest of a specific verifier in a specific VC (of a Holder), unless there is a "blinding" dis-intermediary which notifies users of the VC (the Holder of the VC controls that). It's not that all components leak all information; it's just that if you don't control the component, you can't count on privacy or confidentiality.

Phil Feairheller on how KERI/ACDC handles these scenarios:

Revocation would be handled by credential transaction event logs, which are published to a KERI Witness. KERI maintains or creates decentralization through the concept of watchers and witnesses. The witnesses to which these events (like revocation) are published are under the control of the people publishing the data.

Watchers are under the control of the people who are trying to keep an eye on what's going on, to make sure that there's no duplicity, as we call it - no one doing things that they shouldn't with their identifiers, signatures, etc.

And so, in that way, there is no single centralized push, or centralized poll, or centralized place where things belong. It's simply: I publish my stuff to my set of witnesses, and if you are interested in anything I'm doing with my identifiers, you keep an eye on my witnesses with your watchers, and you keep an eye on them at all times.

Now, central to that working is the fact that everything in KERI is end-verifiable. So you don't have to go back to a blockchain to check that something is valid; you verify it yourself.

When you get a credential, you get it with all of its signatures and all of its attachments - that is, the key event logs of the identifiers that performed those signatures - and you verify all of the cryptography yourself. So you're not relying on one centralized place.
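
A very rough sketch of the end-verifiability idea (Python; this is not the KERI wire format or API - the event and bundle structures are invented): the credential travels with its signature and the signer's key event log, and the recipient replays that log locally to find the authoritative key instead of consulting a blockchain or central registry.

    # Toy end-verification: replay the attached key event log, then check the signature.
    import hashlib
    import hmac
    import json

    def current_key(key_event_log: list[dict]) -> bytes:
        """Replay inception/rotation events (assumed ordered) to find the current key."""
        key = b""
        for event in key_event_log:
            if event["type"] in ("inception", "rotation"):
                key = event["key"]
        return key

    def verify_end_to_end(bundle: dict) -> bool:
        signing_key = current_key(bundle["key_event_log"])
        expected = hmac.new(signing_key,
                            json.dumps(bundle["credential"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, bundle["signature"])

    log = [{"type": "inception", "key": b"key-1"}, {"type": "rotation", "key": b"key-2"}]
    credential = {"holder": "did:example:mary", "role": "CFO"}
    bundle = {
        "credential": credential,
        "key_event_log": log,
        "signature": hmac.new(b"key-2", json.dumps(credential, sort_keys=True).encode(),
                              hashlib.sha256).hexdigest(),
    }
    print(verify_end_to_end(bundle))  # verified locally, no external lookup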

So that's kind of a real general answer to the questions you ask.

So take the specific example where someone is working for an organization and has a role credential for that job, and that credential gets revoked - let's say by the organization that issued the credential.

So they revoke it and publish the revocation event to their witnesses. If I was presented that credential, it is up to me to decide how often I want to check for revocation; if that is every single time I receive the credential, then I'll check every single time, using my watchers.

They have probably been polling the witnesses for events, so they'll probably already have that event and will see that that credential has been revoked. As for the ability to ensure privacy there:

There's a certain amount of herd privacy already inherent in the way the ecosystem works right now; however, we do have the concept of a blinded revocation registry, where you publish cryptographic accumulators instead of specific revocation events, so you'd be checking against the cryptographic accumulator. I think that touches on all the issues.
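
As a toy illustration of the blinded-registry idea (Python; a salted-hash set is not a real cryptographic accumulator, but it shows the shape of the check): the registry publishes opaque values, and only parties who already know a credential's identifier and blinding factor can test whether it has been revoked.

    # Stand-in for a blinded revocation registry (NOT a real cryptographic accumulator).
    import hashlib

    def blind(credential_id: str, blinding_factor: str) -> str:
        return hashlib.sha256(f"{credential_id}:{blinding_factor}".encode()).hexdigest()

    # Published by the issuer: opaque entries reveal nothing about which credentials they cover.
    blinded_revocation_registry = {blind("vc-123", "salt-shared-with-credential-parties")}

    def is_revoked(credential_id: str, blinding_factor: str) -> bool:
        return blind(credential_id, blinding_factor) in blinded_revocation_registry

    print(is_revoked("vc-123", "salt-shared-with-credential-parties"))  # True (revoked)
    print(is_revoked("vc-456", "some-other-salt"))                      # False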

We don't really have the concept of identifiers becoming invalidated; everyone is in control of their own identifier. So you don't really put any reputation into an identifier itself - it goes into the credentials that have been issued to the identifier. So it's the credential revocation that you care about.

Neil: so Mary's identifier does not get revoked; the credential which assigns Mary to a Role (which may also have a credential) is what gets revoked.

Phil: yes

Neil (post mtg)

  • Is the Watcher/Witness architecture a general pattern?
  • An Ecosystem will have a graph of Trust Registries, one for each level of Issuer, starting with the Ecosystem as the top Issuer. This is a mechanism for having traceability and verifiability of the Issuer/Credential pattern (see the sketch below).
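
A small sketch of that registry graph (Python; the data layout and names are hypothetical): each registry records its parent, so an Issuer's accreditation can be traced back to the ecosystem's top-level registry.

    # Hypothetical graph of trust registries with traceability to the ecosystem root.
    from typing import Optional

    registries = {
        "ecosystem-root-tr": {"parent": None, "members": {"sector-tr"}},
        "sector-tr": {"parent": "ecosystem-root-tr", "members": {"did:example:issuer-1"}},
    }

    def trace_to_root(entity: str) -> Optional[list[str]]:
        """Return the chain of registries from the entity up to the root, or None if unknown."""
        for name, registry in registries.items():
            if entity in registry["members"]:
                chain = [name]
                while registries[chain[-1]]["parent"] is not None:
                    chain.append(registries[chain[-1]]["parent"])
                return chain
        return None

    print(trace_to_root("did:example:issuer-1"))  # ['sector-tr', 'ecosystem-root-tr']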

Tim Bouma: Basically, a trust registry facilitates access to authoritative sources of information needed to perform trust tasks and make trust decisions within the context of governance frameworks. A trust registry is a centralized database, or anything that facilitates that access - it could be decentralized; it can be distributed. GLEIF, for example, is a governance framework for issuing identifiers to legitimate organizations.

Neil: (and GLEIF) uses accreditation and certification to qualify QVIs (Qualified vLEI Issuers), who are issuers with lineage and traceability back to GLEIF as the governance framework.

Tim B: I like the idea of Witnesses and Watchers (more evidence for a useful governance/tech pattern). Watchers are entities that can monitor for changes, including change of state, and don't need to notify (Neil: or they can decide to notify using their own criteria).

Judith - GLEIF is creating (issuing) Identifiers for an organization and roles within that organization, whereas a Trust Registry is a list of Entities (e.g., Issuers, identified by an Identifier, possibly a GLEIF LEI/vLEI identifier) that are, for example, verified Issuers of Verifiable Credentials. For example, an Academy of Banks (which verifies Banks and puts them in a Bank TR) issues trusted Identifiers.

Neil: substitute (for Issuers) the issuing of VCs. In addition, the ecosystem will be qualifying Issuers, issuing them an Ecosystem Credential (from a higher-level issuer) as a Verifiable Issuer, and including them in a Trust Registry of Issuers.

Credentials are the key, and Credentials are issued to Identifiers; that is the relationship.

Phil F.: It seems to me that the concept and operation of Trust Registries and what we're doing with KERI/ACDC are coming together in what we will be doing with the DID:KERI DID method. In the (KERI/ACDC) ecosystem as it stands right now, we don't rely on any centralized location to discover anything about other credentials or identifiers in the ecosystem; the concept we use is called "percolated discovery".

So if I need to present a credential to someone, I can also introduce them to the Issuer, and then they can make the connection they need to get whatever information they need from the Issuer, or to verify the cryptography - because, again, remember, everything is end-verifiable. How the introduction between the Verifier and Issuer is done is not important; it's up to the Verifier/Relying Party to interact with the Issuer for verification, etc.

So, in looking at creating a DID:KERI method, one of the biggest problems with having an identifier system that's not centralized on a blockchain or on the web (as with did:ethr or did:web) is that we have no centralized discovery mechanism. So we have to create one (centralized, or reachable decentralized, discovery) in order to have a valid DID method.

So we are leveraging the concept of a "super watcher" to keep an eye on the entire ecosystem. It gets configured with all the witnesses in the ecosystem, and part of our framework demands that every QVI (3rd-party vLEI Issuer), as well as GLEIF, publishes their witness information.

Then we can have a DID method implementation that can query that Super Watcher, which will have knowledge of every single AID (or other identifier) that is involved in the KERI ecosystem. And if you were to take that and expand the DID method, the resolver implementation can point to another super watcher that is looking at another ecosystem. To me, that sounds an awful lot like a trust registry (or a graph of trust registries).
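
A sketch of the super-watcher resolution idea (Python; class and field names are invented, not the DID:KERI specification): the resolver queries a super watcher configured with every witness in its ecosystem, and can fall back to a peer super watcher covering another ecosystem.

    # Hypothetical super watcher used as a discovery/resolution backend.
    from typing import Optional

    class SuperWatcher:
        def __init__(self, known_identifiers: dict[str, dict],
                     peer: Optional["SuperWatcher"] = None) -> None:
            self._known = known_identifiers  # AID -> key state gathered from ecosystem witnesses
            self._peer = peer                # a super watcher for another ecosystem

        def resolve(self, aid: str) -> Optional[dict]:
            if aid in self._known:
                return self._known[aid]
            return self._peer.resolve(aid) if self._peer else None

    other_ecosystem = SuperWatcher({"AID-42": {"current_key": "key-xyz", "witnesses": ["wit-9"]}})
    this_ecosystem = SuperWatcher({"AID-1": {"current_key": "key-abc", "witnesses": ["wit-1", "wit-2"]}},
                                  peer=other_ecosystem)

    print(this_ecosystem.resolve("AID-1"))   # found locally
    print(this_ecosystem.resolve("AID-42"))  # found via the peer super watcher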

Neil: There are two levels of verification. One is that any SSI object is signed by the keys controlled by the object's identifier (DID). That provides tamper-detection and shows that the content of the SSI object is controlled by the DID/sender. The next level of trust is signatures on that object by a governance authority - which, for example, for a VC would be the Issuer of the VC.
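
A minimal sketch of those two levels (Python; HMAC stands in for DID-controlled public-key signatures): level one checks the controller's own signature for tamper-detection, level two checks a counter-signature from a governance authority (for a VC, its Issuer).

    # Two-level verification sketch: controller signature, then governance counter-signature.
    import hashlib
    import hmac
    import json

    def sig(key: bytes, obj: dict) -> str:
        return hmac.new(key, json.dumps(obj, sort_keys=True).encode(), hashlib.sha256).hexdigest()

    CONTROLLER_KEY = b"controller-demo-key"   # stand-in for the DID controller's key
    GOVERNANCE_KEY = b"governance-demo-key"   # stand-in for the governance authority's key

    ssi_object = {"id": "vc-123", "claims": {"role": "CFO"}}
    signatures = {"controller": sig(CONTROLLER_KEY, ssi_object),
                  "governance_authority": sig(GOVERNANCE_KEY, ssi_object)}

    def verify_two_levels(obj: dict, sigs: dict) -> bool:
        level1 = hmac.compare_digest(sigs["controller"], sig(CONTROLLER_KEY, obj))            # tamper-detection
        level2 = hmac.compare_digest(sigs["governance_authority"], sig(GOVERNANCE_KEY, obj))  # governance backing
        return level1 and level2

    print(verify_two_levels(ssi_object, signatures))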

Tim Bouma: I oversimplify this into two things. There is the crypto stuff: it's been issued, the cryptographic signatures check out, so everything's fine. That's the dry code stuff. And then there's what I call the wet code stuff.

Which is: is this true to the actor's intention? There are a lot of externalities there; it's not just about the technical system, it's about whether the institution is doing what it's supposed to do, and there are a lot of rules that might not necessarily be encoded in software. That wet code stuff is a lot harder to solve.

For example, GLEIF: if I see a GLEIF identifier, I can check it out - I can check that it's legitimate and that everything has been signed by the right parties (governance steward, authority, etc.). But how do I know that GLEIF (the organization) is a legitimate organization and that their own governance is in order?

That's a much more difficult problem, and I basically have to trust GLEIF that their processes handle that. Hence the distinction between Wet Code (the accreditation, certification, and auditing processes) and Dry Code.

Phil - so, trust is a business decision? (consensus - yes)

Judith - So the (KERI) watchers/witnesses (and the data they capture) support cryptographic integrity and authenticity of identifiers. Trust Registries, while they employ signatures and credentials about Issuers, are primarily about governance - do I trust the organization that certified the Issuer, and the Issuer organization itself?

And a business making a decision on trusting any identifier, issuer or trust registry will base that decision both on cryptographic proofs and on what governance and governance authorities are backing components like issuers and trust registries; businesses will weigh both when evaluating whether they trust identifiers, issuers and trust registries in an SSI Ecosystem.

Steven Milstein - Trust Registries (and higher-level registries) are "white lists" for the ecosystem.

Ultimately, while cryptographic integrity and proofs are required for trust, without the governance - the "wet code/wetware" of organizational and ecosystem governance - SSI ecosystems have not met their dual-stack goals of tech and governance.

Neil - we may find that trust in the component executables will be required, in that the executables and support files must have been cryptographically signed and be verifiable. How else could an organization determine whether the code had been tampered with?
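
A bare-bones sketch of that check (Python; HMAC over the file contents stands in for real code-signing with certificates or detached signatures): an organization verifies that the artifact it is about to run matches a signature published by the software's issuer.

    # Minimal signed-executable check (placeholder for real code-signing infrastructure).
    import hashlib
    import hmac
    from pathlib import Path

    VENDOR_KEY = b"vendor-demo-key"  # stand-in for the vendor's signing key

    def sign_artifact(path: Path) -> str:
        return hmac.new(VENDOR_KEY, path.read_bytes(), hashlib.sha256).hexdigest()

    def artifact_untampered(path: Path, published_signature: str) -> bool:
        return hmac.compare_digest(sign_artifact(path), published_signature)

    # Demo with a stand-in "executable" file.
    artifact = Path("component-demo.bin")
    artifact.write_bytes(b"demo executable bytes")
    published = sign_artifact(artifact)              # signature published by the vendor
    print(artifact_untampered(artifact, published))  # True until the file is modified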

Sal to Phil - do the (KERI) watchers (and witnesses) watch the GLEIF registration (process)?

Phil - no, the watchers are simply a technical component of securing the keys & cryptography

Neil to Sal - but possibly we need something like a watcher (/witness) component to capture such processes?

Sal - yes

End of Meeting.

Screenshots/Diagrams (numbered for reference in notes above)

#1


Decisions

  • Sample Decision Item

Action Items

  • Next meeting in two weeks: discuss what the objectives of the TF are and what the targeted deliverables would be.

