@rossbar suggested I post this here.
Motivation: There is https://devstats.scientific-python.org/, but that is more for high-level management metrics. What would be useful in day-to-day operations, at least for me, is a dashboard covering things such as (but not limited to):
- Various CI status badges (e.g., which repo failed nightly test last night)
- Who last committed to the repo and when
- What is the latest release and when did it happen
The Astropy Project tried astropy/astropy-dashboard (Status Dashboard for Astropy Project) at one point, but we did not have the resources to keep it going. Then the Space Telescope Science Institute tried spacetelescope/repostats (creates a sortable HTML table summarizing all repositories for a specified GitHub organization), but it suffered a similar fate.
In principle, setting it up should be simple. The cost is in finding a place to run it on a cadence, store and host the data, and keep it going as needed. It is not a high cost, but when maintainers are already stretched thin and there are fires to put out, this never becomes a high enough priority.
I am wondering if Scientific Python is interested in enabling such a dashboard in addition to the other one. Thank you!
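To make the scope concrete, here is a minimal sketch of such a collector against GitHub's REST v3 API. The endpoint paths (`/repos/{org}/{repo}/releases/latest` and `/repos/{org}/{repo}/commits`) are real GitHub API routes; the `collect` helper, the row format, and any org/repo names are illustrative placeholders, not part of any existing dashboard.

```python
"""Sketch of a daily repo-status collector, assuming GitHub's REST v3 API."""
import json
import urllib.request

API = "https://api.github.com"


def fetch_json(path):
    """GET a GitHub API endpoint and decode the JSON response."""
    with urllib.request.urlopen(f"{API}{path}") as resp:
        return json.load(resp)


def summarize(repo, latest_release, last_commit):
    """Reduce the raw API payloads to one dashboard row."""
    return {
        "repo": repo,
        "release": latest_release.get("tag_name", "none"),
        "released_on": latest_release.get("published_at", ""),
        "last_commit_by": last_commit["commit"]["author"]["name"],
        "last_commit_at": last_commit["commit"]["author"]["date"],
    }


def collect(org, repos):
    """Build one summary row per repository in the organization."""
    rows = []
    for repo in repos:
        release = fetch_json(f"/repos/{org}/{repo}/releases/latest")
        commits = fetch_json(f"/repos/{org}/{repo}/commits?per_page=1")
        rows.append(summarize(repo, release, commits[0]))
    return rows
```

A cron job could run `collect` daily and render the rows as a static HTML table; unauthenticated requests are rate-limited, so a real deployment would want a token.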
Thanks for the suggestion @pllim! I think you are right that this should be fairly straightforward to build, probably as a cronjob that runs and updates a website daily. Perhaps before we work on such a dashboard, we can start with your list above, but make it more detailed: for each measurement, who is the target audience and what would they learn from it / what action does it enable.
Thanks for your response, @stefanv !
> who is the target audience and what would they learn from it
Maintainers. To me, the board is like a log file: It is always there even though most of the time no one reads it, but when something goes wrong, it will have useful (even if incomplete) info for debugging.
> what action does it enable.
Example use case: Astropy has many packages in its org (“coordinated”, infrastructure, etc.) and over 40 affiliated packages. When a breaking change gets introduced (either in the astropy core library or upstream, e.g., in numpy), at a glance we want to know how many repositories now have failing CI (the failures might not be related to that change, but they are a good first indication of which packages to even pay attention to). Or, when we wonder whether an affiliated package is still well enough maintained to be called affiliated, this dashboard would give a first glance at its upkeep (kind of like the automated vetting that someone else suggested during the “domain stacks summit” on Zoom not too long ago).
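The “how many repos are failing CI” view above could be backed by GitHub Actions data. A hedged sketch, assuming Actions is the CI in use: `/repos/{org}/{repo}/actions/runs` is a real GitHub API endpoint whose payload contains a `workflow_runs` list with a `conclusion` field; the `failing_repos` helper and its injectable `fetch` parameter are illustrative, not an existing tool.

```python
"""Sketch: flag repos whose latest GitHub Actions run did not succeed."""
import json
import urllib.request


def latest_conclusion(payload):
    """Extract the conclusion ('success', 'failure', ...) of the most
    recent workflow run from an /actions/runs API payload."""
    runs = payload.get("workflow_runs", [])
    return runs[0].get("conclusion") if runs else None


def failing_repos(org, repos, fetch=None):
    """Return the subset of repos whose latest run did not succeed.

    `fetch` can be injected for testing; by default it hits the live API.
    """
    if fetch is None:
        def fetch(path):
            with urllib.request.urlopen(f"https://api.github.com{path}") as resp:
                return json.load(resp)
    bad = []
    for repo in repos:
        payload = fetch(f"/repos/{org}/{repo}/actions/runs?per_page=1")
        if latest_conclusion(payload) != "success":
            bad.append(repo)
    return bad
```

Affiliated packages hosted outside the org, or using other CI providers, would need per-repo configuration rather than a single org-wide query.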
I’m not sure how much this applies, but there is also CHAOSS (Software - CHAOSS), which I heard about recently. What they are doing sounds very similar to what is proposed here. From their website:
> In response to these issues, the CHAOSS project develops metrics, practices, and software for making open source project health more understandable.

> Augur is a Flask web application, Python library and REST server that presents metrics on open source software development project health and sustainability.

> GrimoireLab is a set of free, open source software tools for software development analytics. They gather data from several platforms involved in software development (Git, GitHub, Jira, Bugzilla, Gerrit, Mailing lists, Jenkins, Slack, Discourse, Confluence, StackOverflow, and more), merge and organize it in a database, and produce visualizations, actionable dashboards, and analytics of all of it.
E.g. here is their dashboard used by the Document Foundation.