- Drummond Reed
- Daniel Hardman
- Dan Gisolfi
- Scott Whitmire
- RJ Reiser
- Michael Herman
- Foteinos Mergroupis-Anagnou (GRNET) (joined at the end of the meeting due to Daylight Saving Time issues)
Main Goal of this Meeting:

| Time | Item | Lead | Notes |
| --- | --- | --- | --- |
| 40 mins | Review and discussion of PR #45 as a complete spec for CTWG tooling | Daniel Hardman | See CTWG PR #45 |
| 10 mins | Discussion of next steps | All | |
| 5 mins | Review of Decisions and Action Items & planning for next meeting | All | |
- New members
- Michael Herman introduced himself, his company Hyperonomy, and his Digital Identity Lab project
- Review and discussion of PR #45 as a complete spec for CTWG tooling — Daniel Hardman
- The first portion of the meeting focused on Dan Gisolfi's questions, as he had to leave at the end of the hour
- Dan's main questions were around the export process as articulated in this issue he posted on Daniel's PR.
- Dan's goal was to make sure that a glossary can be rendered using MkDocs or SpecUp.
- Daniel Hardman clarified that he conceived of the export-def process as producing just a subset of the corpus as filtered by the export definition. He proposed that the formatting of the output become an input to another process (the Unix "piping" model) that formats the data for direct ingestion by a rendering tool.
- DECISION: There was consensus to follow the Unix piping model for piping exported data to another process to format it for ingestion into rendering.
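The piping model agreed on above could be sketched as follows. This is purely illustrative: the function names, field names, and export-definition shape are assumptions for the sketch, not anything specified in PR #45.

```python
# Sketch of the agreed Unix piping model: step 1 exports a filtered
# subset of the corpus, and step 2 is a *separate* process that formats
# the exported data for a rendering tool (here, MkDocs-style Markdown).
# All names and data shapes below are illustrative assumptions.

def export_terms(corpus, export_def):
    """Step 1: produce the subset of the corpus selected by the export definition."""
    return [entry for entry in corpus if entry["tag"] in export_def["tags"]]

def format_for_mkdocs(entries):
    """Step 2: format exported entries as Markdown for ingestion by a renderer."""
    lines = []
    for entry in sorted(entries, key=lambda e: e["term"]):
        lines.append(f"## {entry['term']}\n\n{entry['definition']}\n")
    return "\n".join(lines)

# Hypothetical corpus and export definition:
corpus = [
    {"term": "wallet", "definition": "A store of keys and credentials.", "tag": "ssi"},
    {"term": "ledger", "definition": "A distributed record of transactions.", "tag": "infra"},
]
export_def = {"tags": {"ssi"}}

# The composition below is the in-process equivalent of `export | format`:
markdown = format_for_mkdocs(export_terms(corpus, export_def))
print(markdown)
```

Keeping the formatter separate from the exporter means other formatters (e.g. one targeting SpecUp) can consume the same exported data without changes to the export step.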
- We continued the discussion of Daniel's PR #45 by talking about the internal data model. We reviewed the proposed internal data model and the exported data model to assess whether they covered everything we needed without going too deep.
- Michael Herman brought up the question of language-independence. Daniel explained that language-independence can be achieved by defining a concept and then providing terms and definitions in different languages that map to the same concept.
- We discussed concept mapping with the example of a visual thesaurus, e.g., https://www.visualthesaurus.com/
- We agreed with the idea that we need to be able to express certain types of cross-relationships between terms
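The concept-centric approach Daniel described, together with cross-relationships between terms, could be sketched as a small data structure like the one below. The field names and relationship labels are assumptions made for illustration; they are not part of PR #45.

```python
# Sketch of language-independence via concepts: a single concept id maps
# to terms and definitions in multiple languages, and cross-relationships
# (e.g. "related", thesaurus-style) link concept ids rather than strings.
# All field names and labels here are illustrative assumptions.

concepts = {
    "c-credential": {
        "terms": {
            "en": {"term": "credential", "definition": "A set of claims made by an issuer."},
            "fr": {"term": "attestation", "definition": "A set of claims made by an issuer (French entry)."},
        },
        "relationships": [("related", "c-wallet")],
    },
    "c-wallet": {
        "terms": {"en": {"term": "wallet", "definition": "A store of keys and credentials."}},
        "relationships": [],
    },
}

def term_in(concept_id, lang):
    """Resolve a language-independent concept to its term in a given language."""
    return concepts[concept_id]["terms"][lang]["term"]

def related(concept_id):
    """Follow cross-relationships from one concept to other concepts."""
    return [target for _, target in concepts[concept_id]["relationships"]]
```

Because relationships connect concept ids rather than language-specific strings, a "related terms" view (as in a visual thesaurus) works the same way in every language the glossary supports.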
- Michael shared this diagram of the glossary work he has been doing. Also this example of the tool he is using (ArchiMate). And this example of a glossary entry in XML. See this page of his website.
- Daniel clarified the relative depth we are trying to achieve with the CTWG work and his proposed ToIP Term (TT) tool: it aspires to some of the power of ArchiMate for glossary modeling but will realistically have just a fraction of the functionality.
- Discussion of next steps
- See the list of Action Items below
- Review of Decisions and Action Items & planning for next meeting
- We will follow the Unix piping model to pipe exported data from the TT tool into another process to format it for ingestion into rendering.
- All: read PR #45 and post feedback about it or issues against it
- Daniel Hardman: update the spec for the TT tool to support piping into another process that formats for rendering, as proposed by Dan Gisolfi
- Daniel Hardman: reconcile his PR against Dan Gisolfi's PR to see whether all the issues will be addressed
- Michael Herman: if possible, compare PR #45 against the features of ArchiMate that might be most desirable
- RJ Reiser: review the ingestible data model
- Drummond Reed to post notes and request additional feedback on the spec during our next 2-week cycle
- Drummond Reed to provide a heads-up to the ToIP Steering Committee of a potential budget request of between $5K and $20K for the TT tool