For each Line in a text file do

Yes, and what is he planning to do with the information? Request URLs, right? Those URLs will not work for retrieving data from those pages, because the page loads its data with an AJAX POST request.

@zenofmud I see that mustache works, but with three curly braces: {{{

Huh?

I am not following you. I looked at all the URLs he posted. What exact URLs are you talking about? Please repost them.

Thanks!

EDIT: Here is the URL he posted, right:

and the page is www.radiopopular.pt/pesquisa/ where I can get the product name and the price...

If it were me, I would pull this data with a Python or PHP script in the shell, as I mentioned. It's trivial to do.
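
Just to illustrate the kind of one-off script I mean, here is a minimal Python sketch: fetch one URL and dump the body to disk. The file name is an assumption, and it only helps if the page serves its data server-side, which for this site is disputed below.

    # Sketch only: fetch one URL and save the raw body to disk.
    # The radiopopular.pt search page reportedly loads its results via AJAX,
    # so the saved HTML may not contain the price at all.
    import urllib.request

    url = "https://www.radiopopular.pt/pesquisa/6901443274383"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    with open("page.html", "w", encoding="utf-8") as f:
        f.write(body)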

What other URLs did he mention? I missed them haha

Funny!

The info that I want is the price and the description for the EAN product. Example:

https://www.radiopopular.pt/pesquisa/6901443274383

price

SMARTPHONE HUAWEI PSMART 2019 BLK
159,99 €
and save this info in a .txt file
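
In other words, the end result should be one record like this per EAN, written to the text file. Just to show the target format, a minimal Python sketch of that last step with the example values hard-coded (the file name is made up):

    # Sketch: append one "description + price" record to a text file.
    # In a real flow the values would come from the scraping step.
    ean = "6901443274383"
    description = "SMARTPHONE HUAWEI PSMART 2019 BLK"
    price = "159,99 €"

    with open("products.txt", "a", encoding="utf-8") as f:
        f.write(f"{ean}\t{description}\t{price}\n")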

You are "screen scraping" right? Or are you using an API?

Only screen scraping, no API...

That's exactly what I thought you posted .. LOL

Of course you can pull this with wget, curl, PHP, Python, AJAX, anything under the sun, and process the text as you like.

I do this all the time. As mentioned earlier, I just wrote an app for coronavirus stats which grabs the web page (from China, in Chinese) using PHP's file_get_contents(), processes the text, publishes the stats we want with MQTT, and receives them via MQTT in Node-RED.
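
In Python instead of PHP, that grab-and-publish pattern is only a few lines. A sketch, not my actual app: the URL, broker address, and topic are placeholders, and paho-mqtt is a third-party package.

    # Sketch: fetch a page and publish a piece of it over MQTT so Node-RED
    # can pick it up with an "mqtt in" node. URL, broker and topic are placeholders.
    import urllib.request
    import paho.mqtt.publish as publish  # pip install paho-mqtt

    url = "https://example.com/stats"  # placeholder source page
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")

    payload = html[:200]  # stand-in for the real text processing

    publish.single("scrape/stats", payload, hostname="localhost")  # assumed local broker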

However, I see you are on Windows, Lucas, and sorry, I don't do Windows :).

The only reason I am still posting in your thread is that I read you wanted to use wget to screen-scrape, as you mentioned before. Thanks for confirming... :slight_smile:

I've already posted at least twice on how I would easily accomplish your task in the Linux shell. I'm "outta here" for now.

Best Regards!!

It doesn't work, @unixneo... I have a server at home with Linux... If this works on Linux, we can do it on my personal server (I have Proxmox with Node-RED at home).

No, you can't, because it's like @bakman2 says...

Ah... I have no idea about Proxmox.

My bad. Sorry. I thought you were on Linux or the PI. I missed that part.

Whoops. Then I'm outta your thread for good. I have zero idea about Proxmox, really. LOL, I thought most folks here were Pi people...

I'll stop offering Linux shell solutions in your case then, for sure.

:)

@bakman2 sorry for the misunderstanding; I thought this was a Linux/Pi kind of problem.

But wait... I just Googled Proxmox... it said:

Server virtualization with support for KVM and LXC

Proxmox is a Debian-based platform. Proxmox VE is based on Debian GNU/Linux and uses a customized Linux kernel. The Proxmox VE source code is free, released under the GNU Affero General Public License, v3 (GNU AGPL, v3). This means that you are free to use the software, inspect the source code at any time, or contribute to the project yourself.

Using open-source software guarantees full access to all functionalities at any time as well as a high level of reliability and security. We encourage everybody to contribute to the Proxmox VE project while Proxmox, the company behind it, ensures that the product meets consistent and enterprise-class quality criteria.

So why is this NOT a Linux-based solution?

Seems like just another VM, like Linode. Right?

You are not running Linux on Proxmox? What are you running, then?

Proxmox is for virtualization. I have Debian, Ubuntu, Debian with Docker, and other virtual machines on Proxmox... so yeah, I have Linux at home. Proxmox = VMware, Wine, QEMU, etc...

We do like your opinions :wink:

I was trying to reverse-engineer the request, but they use multiple mechanisms to protect their data. The main issue seems to be the Content-Length header and the cookies; we can't specify the length because we don't know it.
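
For what it's worth, the general shape of what I was attempting looks like this in Python: a requests.Session that loads the public page first to pick up cookies, then replays the XHR. The POST endpoint and payload below are pure placeholders (they would have to be copied from the browser's network tab), and the site may well still reject the request.

    # Sketch of the session-based approach, not a working solution.
    # Endpoint and payload are placeholders, NOT the site's real API.
    import requests  # pip install requests

    with requests.Session() as s:
        # First request picks up whatever cookies the site sets.
        s.get("https://www.radiopopular.pt/pesquisa/6901443274383", timeout=10)

        # Hypothetical replay of the AJAX POST seen in the browser's network tab.
        resp = s.post(
            "https://www.radiopopular.pt/REPLACE-WITH-REAL-ENDPOINT",
            data={"search": "6901443274383"},  # placeholder payload
            timeout=10,
        )
        print(resp.status_code, len(resp.text))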

Well, I just pulled this:

The Proxmox Linux Kernel

Our software is based on a rock-solid Linux kernel. Linux is a free Unix-type operating system originally created by Linus Torvalds with the assistance of developers around the world. It is published under the GNU General Public License.

Linux provides great performance and is an extremely stable and reliable platform for our servers. It also supports a wide range of hardware configurations - from cheap systems for testing to high performance multiprocessor servers connected to SAN solutions.

It's just a VM.....

What does running in a VM have to do with "cannot use wget, curl (or even PHP, Python, etc.)"?

Now I am really lost...LOL

I use a lot of VM services, as I assume everyone here does... that has very little to do with how you exec a shell command or script.

You are running Node-RED on Windows? No? Yes?

Because... this?

I'm running on Windows and Linux... I have multiple VMs.

Yes, and then before my wife called me to sleep, I Googled Proxmox and saw it was a Linux-based VM...

You said you are a noob... well, I was a Linux noob back at Slackware 0.8, before 1993-07-17 when Slackware 1.0 was released (27 years ago, so hardly a noob on Linux now).

We were screen-scraping web pages way back then (believe it or not, before Google)... so what's the big deal? It's not rocket science to take a few URLs, read them a line at a time, get the content, and save it (content and all) to disk, or even process it in a script.

This can be done in more ways than I can quickly think of (JavaScript XHR, PHP, Python, Vue.js Axios, fetch, wget, curl... the list goes on and on)... for miles. This is not rocket science, grabbing data from the web...
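
And since the thread title is literally "for each line in a text file", here is that loop in Python: read a file of URLs one line at a time, fetch each one, and save the body to disk. The urls.txt path and output file names are assumptions.

    # Sketch: read URLs from a text file, one per line, fetch each one,
    # and save the body to its own file. File names are assumptions.
    import urllib.request

    with open("urls.txt", encoding="utf-8") as infile:
        for line_number, line in enumerate(infile, start=1):
            url = line.strip()
            if not url:
                continue  # skip blank lines
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read()
            with open(f"page_{line_number}.html", "wb") as out:
                out.write(body)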

Good night....

Sooo, @bakman2... it's impossible using Node-RED?

This is not rocket science, grabbing data from the web...

Please enlighten us with a working flow or even a script (no difference here) for the page @LucasSaraivaAzevedo is requesting, and extract the price as text that can be saved to a text file.

Because you're using Windows, and I gave you a flow that works on Linux.

I have not used Windows from 2003 until now, so I can't help you with file access on a Windows machine... I have no clue...

These guys

https://www.radiopopular.pt/pesquisa/6901443274383

are really good at obfuscating their data. But if you click the link (the image), the next page that loads has the data you need to scrape.

    <meta itemprop="price" content="159.99">

You need to emulate the click on the image from the page, then scrape the result.
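
Once the detail-page HTML is in hand (however the click ends up being emulated), pulling that tag out is the easy part. A sketch with a simple regex; a proper HTML parser would be more robust, and the html variable here is just a stand-in for the real page body.

    # Sketch: extract the price from the detail page's HTML once you have it.
    # The pattern targets the <meta itemprop="price" content="..."> tag above.
    import re

    html = '<meta itemprop="price" content="159.99">'  # stand-in for the real page

    match = re.search(r'<meta\s+itemprop="price"\s+content="([^"]+)"', html)
    if match:
        print("price:", match.group(1))  # -> price: 159.99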

No problem doing it on Linux. I have Linux.