

Contributors/users in ToIP come from various backgrounds. Their culture may not be Western. English may not be their native tongue. They may be experts in non-technological topics that are relevant for ToIP. Working with one another presumes a setting where participants have some level of shared understanding. Often, sharing one's understanding at a superficial level suffices. Other situations require that underlying concepts are shared in a more in-depth fashion. It's like cars: people buying, selling, or driving cars do not need in-depth shared knowledge about cars, whereas (maintenance or construction) engineers or liability lawyers need to share a deeper knowledge of how cars do (or do not) work.

We expect to see situations of "language confusion", i.e. in which people use words or phrases whose intension (not: intention) differs from the interpretation of some listeners/readers. Sometimes a casual glance at a dictionary or glossary is the solution. In other cases, deeper understanding matters, e.g. when drafting specifications or contracts. Then we need more than a set of definitions.

This WG aims to help ToIP community participants understand one another at whatever level of precision they need.

Scope Statement (for the JDF Working Group Charter)

The scope of the Concepts and Terminology Working Group is to develop shared concepts and terminology available to all stakeholders in the Trust over IP stack. This includes developing artifacts and tools for discovering, documenting, defining, and (deeply) understanding the concepts and terms used within ToIP. Key deliverables include one or more glossaries together with a corpus of data underlying them. This data will consist of formally modeled concepts, plus their relations and constraints, and will encompass perspectives from technical, governance, business, legal and other realms. Although this Working Group (WG) will maintain these glossaries and this corpus of data via repositories that all ToIP WGs and Task Forces (TFs) can contribute to and inherit from, this does not preclude WGs or TFs from maintaining their own specialized glossaries if they require. Such specialized glossaries, together with other generators of concepts and terminology elsewhere in the industry, are expected to feed back into the glossaries and corpus of data maintained by this WG in a cycle of continuous improvement.

Intellectual Property Rights (Copyright, Patent, Source Code)

This WG uses the same IPR licensing selections as other ToIP Foundation WGs:

Conveners (add your name if you are interested to become one of the conveners)

  • Rieks Joosten, TNO
  • Drummond Reed, Evernym

Interested Members (add your name and organization if you may be interested in joining this proposed WG)

  • Daniel Hardman
  • Oskar van Deventer
  • Scott Perry
  • Shashishekhar S, Dhiway
  • Philippe Page
  • Paul Knowles
  • Taylor Kendal
  • Scott Whitmire
  • Arjun Govind
  • Vinod Panicker
  • sankarshan, Dhiway
  • Steven Milstein
  • Joaquin Salvachua
  • James Hazard
  • Ajay Madhok
  • Eric Drury
  • Abilash Soundararajan
  • Natarajan Chandrasekhar


The primary focus of the ToIP Foundation is not just on technology (e.g. cryptography, DIDs, protocols, VCs, etc.), but also on governance and on business, legal and social aspects. Its mission, to construct, maintain and improve a global, pervasive, scalable and interoperable infrastructure for the (international) exchange of verified and certified data, is quite complex and daunting. This not only requires technology to be provided (which is, or should be, the same for everyone, i.e. an infrastructure). It also requires that different businesses with their different business models can use it for their specific, subjective purposes, and that each individual business and user is provided with capabilities that facilitate its compliance with the rules, regulations and (internal and external) policies that apply to that entity, the set of such rules, regulations and policies being different for every entity and dependent on the society, the legal jurisdictions and individual preferences. All this is to be realized by people and organizations from different backgrounds, cultures, languages, areas of expertise, jurisdictions etc., all of whom have their own mindset, objectives and interests that they would like to see served.

The aim of this WG is to enable people in the ToIP community to actually understand what someone else means, to the extent and (in-depth) precision that they need, and to facilitate this by producing deliverables/results/products that are fit for the purposes they pursue.

We expect such results to include a common glossary that lists the basic words we use in the ToIP community and briefly explains/defines them, using existing sources such as NIST, Sovrin, W3C's VC and DID standards, and others. We may be able to leverage the new 'glossary effort' that the W3C CCG has recently initiated. We also expect such results to include additional glossaries that subgroups of the ToIP community (e.g. TIPs) create to serve their needs as they focus on specific objectives (thus facilitating domain/objective-specific jargon). We currently envisage 'technology stack' and 'governance stack' glossaries that serve the specific needs of the associated WGs. We leverage the ToIP Glossary WG proposal from Dan Gisolfi (IBM).

Also, we expect such results to include more precise (theoretical?) specifications of underlying concepts, e.g. in terms of conceptual/mental models. Such models help to obtain a more in-depth understanding of ideas that are worth sharing, or need to be shared, within one or more community sub-groups. They may also facilitate the learning process that (new) community members go through as they try to understand what it is we're actually doing. And they may help to 'spread the word' to specifically targeted (e.g. business and legal) audiences. A specific focus of this WG is to establish relations between the concepts of the mental models and the terms defined in the various glossaries.

A model for some of the deliverables of this WG is one or more websites resembling the Legal Dictionary. That site not only provides definitions of various terms, but also brief descriptions of their backgrounds, various use-cases that exemplify the relevance of (and the distinctions made by) the terms, and other useful information.

Finally, we expect to see results that we haven't thought of yet, the construction of which will be initiated as the need arises by (representatives of) those who need such results for a specific purpose. Perhaps we might produce a method for resolving terminological discussions, which can be lengthy and do not always get properly resolved (e.g. did-core issues #4 and #122). Here, we leverage a prior collaboration between Daniel Hardman (Evernym) and Rieks Joosten (TNO).


  1. Develop and maintain a high-quality corpus of terminology that covers the needs of the ToIP community.
  2. Develop a process whereby this corpus can be:
    1. Curated, based on evidence and using expert opinion, such that concepts, relations between concepts, and constraints can e.g. be
      1. carefully defined,
      2. assigned an identifier (name/number/label) that distinguishes them from any other item in the corpus,
      3. mapped onto terms that are defined and/or commonly accepted in various relevant domains/contexts,
      4. documented, from organic sources, as to their usage and relevance,
      5. assigned a status, e.g. 'working', 'preferred', 'accepted', 'superseded' or 'deprecated'.
    2. Enhanced in a collaborative, open, and fair manner by interested community members.
    3. Versioned.
    4. Published in different ways (e.g. as a glossary, concept map, use-case stories, ...), for specific purposes (e.g. education, reference, ...), by different means (e.g. a PDF, a website, presentations/webinars, ...) and as needed by different audiences/stakeholders or domains (e.g. business domains, architectural domains, ...)
    5. Promoted as a valuable public resource and an influence for convergence and excellence.
  3. Train and organize volunteers so the initiative develops sustainable long-term momentum.
  4. Disseminate/promote the work across ToIP WGs and other relevant audiences.
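The curation statuses listed under goal 2 imply a lifecycle for each term. A minimal sketch of how status transitions might be checked mechanically; the transition table itself is an assumption for illustration, not something this proposal specifies:

```python
from enum import Enum

class Status(Enum):
    WORKING = "working"
    PREFERRED = "preferred"
    ACCEPTED = "accepted"
    SUPERSEDED = "superseded"
    DEPRECATED = "deprecated"

# Hypothetical transition table; the WG would define the actual rules.
ALLOWED = {
    Status.WORKING: {Status.ACCEPTED, Status.DEPRECATED},
    Status.ACCEPTED: {Status.PREFERRED, Status.SUPERSEDED, Status.DEPRECATED},
    Status.PREFERRED: {Status.SUPERSEDED, Status.DEPRECATED},
    Status.SUPERSEDED: {Status.DEPRECATED},
    Status.DEPRECATED: set(),
}

def can_transition(current: Status, new: Status) -> bool:
    """Return True if a term may move from `current` to `new`."""
    return new in ALLOWED[current]
```

Encoding the rules as data rather than prose would let a CI check reject PRs that make an invalid status change.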


The Corpus of Terminology MUST have:

  1. Source control and build processes managed in GitHub.
  2. A well-defined syntax for contributing concepts/relations, each with an identifier that is unique within the scope of the Corpus.
  3. A well-defined syntax for attributing terms to such (established) concepts/relations for specific contexts/domains.
  4. A well-defined CI/CD process that includes auto-sorting of terms and concepts. (??? RJ: I'm not sure what this means.)
  5. A simple process for contributing further content.
  6. A simple publicly accessible website, containing at least the Corpus-identifiers and their definitions, possibly inspired by the 'Legal Dictionary'.
  7. A PDF document for every published version, containing at least the Corpus-identifiers and their definitions.

The Corpus MUST NOT have:

  1. A requirement for programming skills, as that would reduce the number of contributors.

The Corpus SHOULD be:

  1. Reusable and easy to leverage in TIP repos.
  2. Usable for language translation via separate self-organized language specific repos. These repos should be aggregators of the baseline glossary and any TIPs.
  3. Usable for mapping its identifiers/terms to those in use in other contexts/domains.
  4. Consumable at the RAW content level (.md files) by external groups who wish to render content in a different manner.

Solution Approaches


  1. Use a GitHub repo to manage the corpus.
    1. Consider using a Creative Commons license instead of an Apache license; it may be more appropriate.
    2. Require DCO/IPR for contributors to the repo. Anybody who complies with the DCO/IPR requirements can submit to the corpus by raising a PR.
    3. No need to manually maintain metadata about who edited what, and when; we have the commit history and git blame.
    4. Use GitHub issues to debate decisions about term statuses. Anybody can raise an issue.
  2. Use existing, pervasive open source documentation tools such as MkDocs, Docusaurus, or GitHub Pages:
    1. Each concept is described in a separate markdown doc that conforms to a simple template (see below). Concepts link to related concepts.
    2. Each term is a separate markdown doc that conforms to a different simple template (see below again). Terms label concepts; links from concepts to terms remain implicit in the markdown version of the data, to avoid redundant editing. Having terms and concepts as separate documents that cross-link allows for synonyms, antonyms, preferred/deprecated/superseded labels for the same concept, localization, and so forth. It also allows for the peaceful co-existence of multiple terminologies (= sets of terms, namespaces, ...)
    3. Each context glossary is a separate markdown doc that conforms to another different simple template (see below once again). A glossary is an alphabetic list of terms relating to a specific subject, or for use in a specific domain, with explanations. The markdown document specifies the scope of the glossary, and the selection criteria for terms. 
    4. Provide an extendable CI/CD pipeline for the repo, and write unit tests to enforce any process rules, quality checks, and best practices the WG adopts.
    5. The CI/CD process should enable a live website and a refreshed PDF document after each approved and merged PR.
  3. Define the criteria for assigning a term each of the statuses: what are the grounds for saying it is deprecated, superseded, etc.? (Criteria are published in a doc in the repo, so debating changes to criteria means a PR and a GitHub issue.)
  4. Create release process guidelines.
    1. Define the difference between the live glossary and a “blessed version”. Suggest once per quarter, with names like “2019v1” (where 1 is the quarter). This format is not semver-compatible, because we have no need to wrestle with issues of forward and backward compatibility, but it is easy to understand, parse, and reference in a URI.
  5. Establish a ToIP website-level access experience:
    1. Access to main Glossary in all language versions
    2. Access to TIP Glossaries
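The quarterly release label suggested above ("2019v1") is simple enough to validate and parse mechanically, which matters if it is to be embedded in URIs. A sketch, assuming exactly that YYYYvQ shape:

```python
import re

# Matches labels like "2019v1": a 4-digit year, 'v', and a quarter 1-4.
RELEASE_RE = re.compile(r"(?P<year>\d{4})v(?P<quarter>[1-4])")

def parse_release(label: str) -> tuple[int, int]:
    """Split a label like '2019v1' into (year, quarter); raise on anything else."""
    match = RELEASE_RE.fullmatch(label)
    if match is None:
        raise ValueError(f"not a release label: {label!r}")
    return int(match.group("year")), int(match.group("quarter"))
```

Because the labels sort lexicographically in chronological order within this scheme, a website could list blessed versions with a plain string sort.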


  1. Leverage existing CI/CD approaches (sample code repos) for incorporating MkDocs, Docusaurus, or GitHub Pages.
  2. Suggest to the tech WG that they may write a generator tool that walks the repo, building in memory a semantic network of concepts that are cross-linked to terms, and emitting various incarnations of the content:
    1. Browsable static HTML that’s copied to a website. The website should be indexed by Google and have search based on Elasticsearch.
    2. A .zip file of the static html that could be copied to other web sites.
    3. An ebook format (e.g., epub).
    4. Possibly, occasionally, a JIT-printed SKU published on
  3. Create a crawler process that collects terminology from various sources (contexts), for the purpose of mapping terminology as used and/or defined in that context onto the concepts/relations in our Corpus.
  4. Create a process for pulling new content (terms, concepts) from the MM_WG
    1. A source is declared in a config file that’s committed to the repo. This means anybody can propose a source by submitting a PR and debating its validity in a github issue.
    2. Sources could include W3C Respec docs, IETF RFCs, Aries RFCs, DIDComm specs hosted at DIF, etc. Corporate websites wouldn’t work because A) they’re too partisan; B) they’d require random, browser-style web crawling, which is too hard to automate well.
    3. Crawler pulls docs and scans them, looking for regexes that allow it to isolate term declarations, their associated definitions, and examples that demonstrate their usage.
    4. Output from crawler is a set of candidate terms that must be either admitted to a pipeline, or rejected, by human judgment. Candidates that are already in the corpus are ignored, so this just helps us keep up to date with evolving term usage in our industry.
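The crawler's scanning step (regexes that isolate term declarations and their definitions) could start as small as a single pattern over fetched text. The "Term:"/"Definition:" markers below are hypothetical; each real source format (ReSpec, IETF RFCs, Aries RFCs, ...) would need its own pattern:

```python
import re

# Hypothetical declaration format; real sources each need their own regex.
TERM_DECL = re.compile(
    r"^Term:\s*(?P<term>.+?)\s*$\n"
    r"^Definition:\s*(?P<definition>.+?)\s*$",
    re.MULTILINE,
)

def extract_candidates(text: str) -> list[dict]:
    """Return candidate term/definition pairs found in a fetched document."""
    return [match.groupdict() for match in TERM_DECL.finditer(text)]
```

The output feeds the human-judgment pipeline described in step 4: candidates already in the corpus are dropped, and the rest are admitted or rejected by curators.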

Content Templates

Concept Template (to be further developed on github)

Concept ID: 12345 (this is a 5-digit number that’s embedded in the filename, such as


en text: <text that allows the reader to evaluate whether or not something qualifies as an instance of the concept in every (yes, every) relevant use-case>


en text: blah blah blah

<other language code> text: lorem ipsum cu prorat

links to media (diagrams, audio, video)

Links to any discussions in github issues


history and theory of the concept in its larger mental model


Related Concepts


Term Template (to be further developed on github)

Term: faster than light

Short form:

Acronym: FTL

Language: en

Labels concept: c-12345 (filename for this term would be, where 12345 comes from the concept, and x is 1-3 digits that uniquely identify the term in the context of its concept)

Links to any discussions in github issues


metaphors or mental/conceptual models (or namespaces) that inform the choice of this label for the concept


Examples of usage

Scope: (description of the scope of application)
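The term filename convention is only sketched above (the 5-digit number comes from the concept, plus 1-3 digits that uniquely identify the term within that concept). Assuming a hypothetical pattern such as t-12345-1.md, a repository consistency check could look like:

```python
import re

# Hypothetical filename pattern; the actual convention is still to be fixed on GitHub.
TERM_FILENAME = re.compile(r"t-(?P<concept>\d{5})-(?P<term>\d{1,3})\.md")

def labels_concept(filename: str, concept_id: str) -> bool:
    """Check that a term file's name ties it to the given 5-digit concept ID."""
    match = TERM_FILENAME.fullmatch(filename)
    return match is not None and match.group("concept") == concept_id
```

A CI unit test built on a check like this could reject PRs where a term file's "Labels concept" field and its filename disagree.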


Glossary Template (to be further developed on github)

Name: ToIP Governance Glossary

Language: en

Scope: (description of the scope and purpose for which the glossary is supposed to be used.)

Taglist: (any term that has a tag from this list will be included in this glossary)

Links to any discussions in github issues



  1. On 21 May, before this WG was proposed and before the WG/TF Wiki submission process was outlined, I had submitted a Glossary WG Proposal. The MM&T proposal seems very theoretical and, minimally, a larger effort that will take some time to get off the ground. Reading it forces me to reflect on the 1879 endeavor to compile the Oxford English Dictionary.

    We (the ToIP community) need to establish a working and living glossary with a simple yet governed process asap. My proposal focuses on HOW we can do that NOW!

    My questions for the backers of this proposal:

    a) How do you differentiate the two proposals?

    b) Can you take on this endeavor without minimizing the goals / requirements of my proposal under goal "1.f" above?

    c) Is there value in keeping the two proposals separate? If so, I will formally submit it under this new Wiki submission process.

    Maybe we should have a discussion before this is brought to the SC...

  2. I have made an attempt to integrate your proposal into this one. The particular changes I made have to do with the difference between 'glossary' (a list of words and explanations (OED)) vs. 'mental/conceptual model' (a set of carefully designed concepts, relations between them and constraints, designed to enable logical reasoning and arguing in groups of people). Does it make sense to you?

    I'm not trying to be theoretical, but I do think that what we do has to be theoretically sound. I've not seen glossaries being helpful in resolving discussions where the meaning of a term caused conflicts/misunderstandings (lots of evidence in e.g. W3C/DIF/ISO mailing lists, issues, etc.), whereas I did see mental/conceptual models being very helpful there. Perhaps that's the main difference...

    Let's see if we can talk this through before the SC. 

  3. For purposes of actually approving a WG charter, which would ideally be done at the 2020-06-10 ToIP Steering Committee meeting, the section titled "Working Group Charter" should ideally be condensed further as it currently contains a greater level of detail than most JDF WG charters. I'm happy to help with this but the WG conveners may wish to have a first go at it.

  4. Is it possible to address the Working Group Charter from an Agile perspective? Maybe something like an Epic Story to illustrate the problem we're trying to solve and its scope? We could write User Stories to depict how the community will derive value from a hypothetical solution we deliver.

    If so, then we don't have to commit to any execution details in the charter to get started. We could focus our energies on delivering a Minimum Viable Product, in this case, a Minimum Viable Glossary, so people could start contributing to it and referencing it in their own initiatives.

    One option is using a Confluence Glossary Plug-in. While it does not meet all the requirements described above, it will enable us to deliver a quick win. I already confirmed with Todd that we could use it. Also, we can export its data so it's disposable.

    Consider it an experiment to help us:

    1. Enable Users to start benefiting from a Glossary
    2. Gather their feedback and assess their priorities
    3. Further document our Requirements
    4. Iterate over possible solutions 
    5. Get some traction and start building momentum