Polling API data to make a trend - how to process data?

Hi there,

For practice, I've gotten the idea of using Node-RED to poll the Google Maps API for a specific commute in both directions, to trend the real commute time (taking traffic into account) at various times of day.
My goal is to collect these data points every 5 minutes all day every day to create a trend line for both sets of data on the same graph.

So my flow looks like:

  • Every 5 minutes, send a pulse out to trigger HTTP requests.
  • Filter the HTTP response down to [Origin, Destination, Trip Time, Timestamp] in each row.
  • Send to 'convert CSV' node to create CSV data, then
  • Write to file
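The "filter" step could be sketched as a Node-RED function node along these lines. This is a sketch assuming the Google Distance Matrix API response shape (`origin_addresses`, `destination_addresses`, `rows[].elements[]`); verify the field names against an actual response before relying on them:

```javascript
// Sketch of the "filter" function node, assuming the Google Distance Matrix
// API response shape. Field names below come from that API; verify against
// an actual response before relying on them.
function filterResponse(body, timestampMs) {
  const origin = body.origin_addresses[0];
  const destination = body.destination_addresses[0];
  const element = body.rows[0].elements[0];
  // duration_in_traffic is only returned when the request set a departure_time;
  // fall back to the plain duration otherwise.
  const tripTimeSeconds = (element.duration_in_traffic || element.duration).value;
  return [origin, destination, tripTimeSeconds, timestampMs];
}
```

Inside the function node this would be roughly `msg.payload = filterResponse(msg.payload, Date.now()); return msg;`, feeding straight into the CSV node.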

First of all, do you see any issue with continually making an API call every 5 minutes for a very long period of time?

Second, I'm having some trouble processing the data. I'm using Excel, but I'm not sure if there is a better platform for this.
Even for one day's worth of data, it's 288 points per trend line. I'm not sure how well it will handle overlapping multiple days of data onto the graph.
Additionally, I've had to do some processing in Excel to play nice with times/dates, so the file had to be converted to .xlsx.
Given this, is there a way to have new saved data auto-entered into the file? Right now, I would have to open the file each time, do my re-formats, then save as .xlsx for updated data.
Further, I have to manually separate the two sets of data: A->B and B->A.

Any tips on this stuff?

You might run into a request limit enforced by Google. You should look into that.

Not knowing how or what you are doing to process the data makes this difficult to help with.

You don't say how many trend lines you have, but if you send too many points to a graph you will run into slowdowns as the work to create the graph increases.

You don't mention how you are doing this, but why not save the data into a database? You could do that from Node-RED.

Thanks for your reply, I appreciate you taking the time.

Regarding your questions:

  • Data processing:
    • I am not doing much. I am converting the UNIX timestamp to something that Excel recognizes
      • (((Timestamp/1000)-18000)/86400) + DATE(1970,1,1)
        • Divide by 1,000 to convert ms into seconds. Subtract 18,000 seconds for my local time zone (UTC-5). Divide by 86,400 (seconds in a day) to get the fractional day that Excel recognizes. Then add DATE(1970,1,1) to shift from the UNIX epoch (1970) to Excel's serial date system, which starts in 1900.
      • I then separate the whole and fractional portions to create a Date column and a Time column.
    • I'm also trying to separate the 2 data sets. The issue is that they post at the same time - one data entry for trip time from A->B, and the other trip time for B->A. So if I sort the whole data set by time, then it intermixes the two. To semi-combat this, I'm using the Table function in Excel to sort the data sets. This is clumsy at best.
  • As far as trend lines, I think that works well when only using a few days' worth of data, but would quickly get cluttered when trying to aggregate many days' worth of data. I think what might be more accurate is a scatter chart that shows general clusters. But still with 2 different data sets: drive time duration from: A->B; B->A.
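Both the timestamp conversion and the A->B / B->A split could be done before the data ever reaches Excel. A minimal sketch in plain JavaScript, mirroring the formula above (25569 is Excel's serial number for 1970-01-01, i.e. what DATE(1970,1,1) returns; the 18,000 s offset is the UTC-5 zone from the formula, so adjust it for your own time zone):

```javascript
// Mirrors the Excel formula: (((Timestamp/1000)-18000)/86400) + DATE(1970,1,1).
// 25569 is Excel's serial number for 1970-01-01; 18000 s is a UTC-5 offset.
const EXCEL_EPOCH_1970 = 25569;
const TZ_OFFSET_SECONDS = 18000;

function unixMsToExcelSerial(timestampMs) {
  return ((timestampMs / 1000 - TZ_OFFSET_SECONDS) / 86400) + EXCEL_EPOCH_1970;
}

// Split rows of [origin, destination, tripTime, timestamp] by direction so
// A->B and B->A never have to be untangled by sorting inside Excel.
function splitByDirection(rows, originA) {
  return {
    aToB: rows.filter(r => r[0] === originA),
    bToA: rows.filter(r => r[0] !== originA),
  };
}
```

Writing the two directions to two separate outputs (or files) in the flow would remove the clumsy Table-sorting step entirely.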

I think the database option is the right way to go. This is my first experience with this sort of stuff, so I'll learn as I go and as people like you make recommendations. I realize what I'm doing is not scalable.
Thinking out loud, with no experience, it seems like a "hub" database would make the most sense: Node-RED deposits into the database while Excel draws from it separately, so the Excel file can be auto-updated.
Does this make sense?
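As a sketch of that hub idea, assuming the node-red-node-sqlite node, which takes the SQL statement in msg.topic (check that node's docs for its exact prepared-statement convention); the table and column names here are illustrative, not anything the flow already defines:

```javascript
// Illustrative schema for the commute log -- one row per API poll.
const CREATE_TABLE =
  "CREATE TABLE IF NOT EXISTS commute (" +
  " ts_ms  INTEGER NOT NULL," +   // Unix timestamp, milliseconds
  " origin TEXT    NOT NULL," +
  " dest   TEXT    NOT NULL," +
  " trip_s INTEGER NOT NULL)";    // trip time in seconds

// Build the message for the sqlite node from one filtered row:
// [origin, destination, tripTimeSeconds, timestampMs]
function buildInsert(row) {
  return {
    topic: "INSERT INTO commute (ts_ms, origin, dest, trip_s) " +
           "VALUES ($ts, $o, $d, $t)",
    params: { $ts: row[3], $o: row[0], $d: row[1], $t: row[2] },
  };
}
```

Excel can then pull from the database on a refresh (e.g. via an ODBC driver and Power Query) instead of the CSV being re-opened and re-formatted by hand each time.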
