In one of your posts, Nick, you state you are looking to group nodes for easier searching. Yes! Has any thought been given to purging the catalog? There are nodes that haven't been touched for years, and while some may still work, many don't. This may be - no, I'm sure it will be - a contentious subject, but maybe a purge of old nodes should be adopted? It's frustrating enough to search for a node, only to find one that doesn't work and hasn't been touched in years. Yes, you can look at the dates, but sometimes I'm in a hurry and forget to. Node creators can chime in, but for those of us who can't create nodes it would be nice to see the old ones gone.
I think the main obstacle is finding a reliable and correct way to identify these broken nodes.
The date of the last publication is a factor, but not a reliable one on its own.
I agree with that, but dragging along hundreds of broken nodes, maybe more, is frustrating. With all the brain power lurking in here, there has to be a way.
Encourage users to use the "report" button when a node is completely broken.
Then, if the GitHub repo leaves something to be desired, the node can be ejected.
It might be necessary to create a contribution policy that makes clear the Node-RED team's intention to eject such nodes.
What would be ideal would be to expand the Flows catalogue to provide a more focussed and reliable assurance score, and then to leverage that in the palette manager. I realise, though, that's potentially a fair bit of work.
This is a topic I had already raised but never moved forward on. There is a package that could be used (I think - or at least I hope).
An extension of that would be to display the score in the palette.
I hope that the review scoring is revisited though - it really doesn't work well right now. It should probably also take the voting into account.
I've never heard of, or seen, a report button. But there should be a way to kill off unusable nodes. If the creator is still around they can fix and republish. True, I don't know how much trouble it is, but dragging along dead weight for years isn't healthy.
If you are logged in, then you have the ability to report modules.
I hope that the review scoring is revisited though - it really doesn't work well right now
Do you mean the scorecard? Forgive me if you have shared this previously, but which areas do you think don't work well? I'm not being defensive - it's a genuine question to help focus the discussion on what concrete improvements could be made.
On the general subject, this is something I've been thinking about a lot recently. In doing the work to categorise nodes, I have a spreadsheet snapshot of the entire library from last week that includes download stats, when each node was first published, and when it was last published.
Unfortunately, download stats can be quite misleading - as there are so many bots pulling packages from npm for various reasons, it's hard to get a sense of true usage once you get into the very long tail of modules with < 100 downloads a week.
For every package that hasn't been updated in over 2 years and so looks "clearly abandoned", I can point at another in a similar position that has really high levels of usage because it just works and hasn't required any updates.
Trying to find a sensible heuristic to flag up modules is proving quite hard.
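To make that concrete, the sketch below shows roughly the kind of rule being discussed - purely an illustration, with made-up thresholds (2 years, 100 downloads/week) and a hypothetical ModuleSnapshot shape, not an agreed policy:

```typescript
// Purely a sketch: flag modules that *might* need a human look.
// The thresholds and the ModuleSnapshot shape are assumptions for
// illustration, not an agreed policy.

interface ModuleSnapshot {
  name: string;
  lastPublished: Date;     // e.g. from the npm registry "time" metadata
  weeklyDownloads: number; // e.g. from api.npmjs.org/downloads/point/last-week/<pkg>
}

const TWO_YEARS_MS = 2 * 365 * 24 * 60 * 60 * 1000;
const LOW_DOWNLOADS = 100;

function needsReview(mod: ModuleSnapshot, now = new Date()): boolean {
  const stale = now.getTime() - mod.lastPublished.getTime() > TWO_YEARS_MS;
  const quiet = mod.weeklyDownloads < LOW_DOWNLOADS;
  // Only flag when *both* signals agree: an old-but-popular module is probably
  // just stable, and a new-but-quiet module is probably just niche.
  return stale && quiet;
}
```

Even then it would only produce a candidate list for human review - a stable but niche node would still get caught, which is exactly the problem described above.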
Another challenge is how many nodes have an inaccurate git url in their package.json - usually because it's a fork of another module and there hasn't been the appropriate attention to detail to update that field of the package.json. We did spend some time when the scorecard was first introduced trying to automatically validate that the src url points to the right repo for the module. Unfortunately that's quite a hard problem to do automatically as there are lots of edge cases.
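For illustration only, a naive version of that check might look like the sketch below - it reads the repository url from the public npm registry and compares the name field in the repo's own package.json. The monorepo, non-GitHub and renamed-fork cases are exactly the edge cases that make this unreliable in practice, and it isn't meant to represent what the scorecard actually tried:

```typescript
// Naive sketch: does the repository url in package.json point at a repo
// that actually declares this package name? Assumes the public npm registry,
// a GitHub-hosted repo, and package.json at the repo root - monorepos,
// other hosts and unusual URL shapes all fall through as "unknown".

async function repoMatchesPackage(pkgName: string): Promise<boolean | "unknown"> {
  const meta = await fetch(`https://registry.npmjs.org/${pkgName}/latest`).then(r => r.json());
  const repoUrl: string | undefined = meta?.repository?.url;
  if (!repoUrl) return "unknown";

  // Pull "owner/repo" out of common GitHub URL forms (git+https, git://, git@, .git suffix).
  const m = repoUrl.match(/github\.com[/:]([^/]+)\/([^/.]+)/);
  if (!m) return "unknown";

  const [, owner, repo] = m;
  const res = await fetch(`https://raw.githubusercontent.com/${owner}/${repo}/HEAD/package.json`);
  if (!res.ok) return "unknown"; // repo gone, private, or package.json not at the root

  const repoPkg = await res.json();
  // Catches the common fork case where repository.url still points at the upstream project.
  return repoPkg?.name === pkgName;
}
```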
All of which is to say, I'm very aware of these issues and the category work is a small step in the right direction. I would love it if interested parties came together to help thrash out what improvements are needed; but that's where discussions often dry up.
The rating system: if a node has only a single 5-star vote, sorting by "rating" gives it more weight/highlights it above a node with more votes (e.g. one with ten 5s and one 4, which averages just under 5).
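One common way to address that would be a weighted ("Bayesian") average that pulls low-vote scores towards a neutral prior - just a sketch, and the prior mean of 3.5 and weight of 5 virtual votes are arbitrary choices:

```typescript
// Sketch of a Bayesian-style weighted rating so that a single 5-star vote
// cannot outrank a node with many votes. Prior values are arbitrary assumptions.

function weightedRating(votes: number[], priorMean = 3.5, priorWeight = 5): number {
  const sum = votes.reduce((a, b) => a + b, 0);
  // Behaves like the plain average once there are many real votes,
  // but stays near the prior while the vote count is small.
  return (priorWeight * priorMean + sum) / (priorWeight + votes.length);
}

weightedRating([5]);                                // ≈ 3.75 - one vote no longer dominates
weightedRating([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4]);  // ≈ 4.47 - many votes now rank higher
```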
Improve the scorecard with a real package evaluation.
What do you mean by a "real package evaluation"?
edit - followed the link to your previous thread.
Yes, there are lots of things that could be done - but it's a lot of work. What are the quick wins that can help move things forward?
Check if we can use their algorithm without too much work.
Hi Nick, I think I may have fed back originally - let me try to remember:
- The max number of dependencies is somewhat low I think - it's set at 6, isn't it? If you have a few small dependencies, for example, it is easy to exceed this.
- I seem to remember that the node.js version check against Node-RED's engines setting was rather overly strict. For example, I think it is set to the exact node.js version that is in Node-RED's package.json, which does not allow for minor version variations. I think it would be best set to the major version of node.js to avoid this issue (see the sketch just after this list).
- Dependencies use latest versions - this is a rather blunt instrument, especially at a time when some authors are deliberately dropping CJS versions and only publishing ESM ones. This got uibuilder marked down because I was relying on a dependency that had done this and it was quite hard to unpick. I have done that now, but it wasn't pleasant and required a lot of work that could have been focused elsewhere, since it really wasn't causing an issue.
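To illustrate the engines point above, a looser check could compare only the major node.js versions rather than the exact ranges - purely a sketch using the semver package, with made-up example ranges:

```typescript
// Sketch: compare only the *major* node.js version a node requires against
// the major version in Node-RED's own engines field. Uses the `semver`
// package; the example ranges below are made up.

import semver from "semver";

function enginesMajorCompatible(nodeEngines: string, nodeRedEngines: string): boolean {
  // minVersion() gives the lowest version satisfying a range, e.g. ">=18.5.0" -> 18.5.0
  const pkgMin = semver.minVersion(nodeEngines);
  const redMin = semver.minVersion(nodeRedEngines);
  if (!pkgMin || !redMin) return false; // unparseable range - leave it to a human

  // Pass unless the node demands a newer major than Node-RED itself targets.
  return pkgMin.major <= redMin.major;
}

enginesMajorCompatible(">=18.5.0", ">=18.0.0"); // true  - same major, minor differs
enginesMajorCompatible(">=20.0.0", ">=18.0.0"); // false - needs a newer major
```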
I think those were my main issues. Nothing massive but certainly added some overheads that might have been avoided.
I think that the scorecard also potentially misses some things that might help provide a better assessment:
- Any package that doesn't publish the source code (though maybe that's covered by the issues link?)
- Integration with the user-provided star rating.
- Number of outstanding issues (would need to choose a sensible threshold of course - maybe 10?)
- Number of monthly downloads - I'm not so sure about that one though; I'd be interested to hear other people's thoughts. It might unnecessarily penalise people who publish specialised nodes.
- Stretching it a bit now, but maybe a threat measure of some kind. npm and GitHub both have methods for assessing outstanding threats, at least in your dependencies. This would probably be a better measure than simply looking at whether all dependencies are "current"? (A rough sketch of what that might look like follows this list.)
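As a rough illustration of that last point, something like the sketch below could count known high/critical advisories via `npm audit` rather than penalising any non-latest dependency. It assumes the module has already been installed into a scratch directory and a unix-like environment; the severity threshold is an arbitrary choice:

```typescript
// Sketch only: score a module on *known* vulnerabilities in its installed
// dependency tree, instead of on whether every dependency is the latest version.
// Assumes `npm install` has already been run in moduleDir.

import { execFile } from "node:child_process";

function auditSeverities(moduleDir: string): Promise<{ high: number; critical: number }> {
  return new Promise((resolve) => {
    // npm exits non-zero when it finds vulnerabilities, so ignore the error
    // and just parse whatever JSON it printed.
    execFile("npm", ["audit", "--json"], { cwd: moduleDir }, (_err, stdout) => {
      try {
        const counts = JSON.parse(stdout)?.metadata?.vulnerabilities ?? {};
        resolve({ high: counts.high ?? 0, critical: counts.critical ?? 0 });
      } catch {
        resolve({ high: 0, critical: 0 }); // no parseable report - treat as "no data"
      }
    });
  });
}

// A module would only lose points when something actually actionable is open.
async function hasActionableThreats(moduleDir: string): Promise<boolean> {
  const { high, critical } = await auditSeverities(moduleDir);
  return high + critical > 0;
}
```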
I suspect that the star rating would probably be the most useful to integrate.
I certainly see that. The stats for uibuilder have been consistently around 2k a month, I think, for the whole decade, and I really don't believe we have anywhere near those kinds of user numbers.
Agreed, which is why I mentioned above the idea of tracking critical outstanding dependency updates instead. It is more complex, but a much better measure.
Both npm and GitHub seem to be able to do this for individual packages. It would be a little complex, but maybe either a quarterly trawl through all node packages or a rolling assessment programme would work?
And I think we all appreciate the difficulties, and I don't think we should make this into too big an issue. I'm not sure what we can do to help - maybe some kind of working group, at least to work through a few ideas and agree on some actions? A Teams/Zoom call might settle a few things?
There are nodes that just work and haven't been touched for a long time. I get that. I suppose what I was looking for was somewhere I could go and say "this node Xxxx doesn't work, no one responds to requests, can we delete it?", and then someone in the Node-RED world says "sure", verifies what I reported, and presses the magic delete button.