It returns a single huge, messy item. Basically, I only need their LinkedIn profile usernames, i.e. the part that comes after https://www.linkedin.com/in/username. Eventually, I was able to find one within this element: $json.data.data.premiumDashAnalyticsObjectByAnalyticsEntity.elements[11].content.analyticsEntityLockup.ctaItem.actionData.entityProfile.publicIdentifier However, not all of the other element indexes contain any data (only some do).
I wonder if it would be possible to pass along only those usernames (and maybe also names and titles, if present). Another (or related) complication: when users who visit my page have their privacy mode turned on, I can't see them or their profiles, only e.g. their company or university (not even location, I guess). I wish I could also save those as a separate stream of items (including the custom search link, if there is one) so I can try to identify them as well.
Also, following the doc above, I wonder how I could implement the approach support recommended to me for scrolling the page and scraping more of my LinkedIn profile viewers: “you have to follow the process at start of doc page to catch the request in your webconsole and copy the url”.
Thank you, @ihortom! I was able to do it with an AI-generated piece of code along these lines (a simplified sketch rather than the exact snippet; it assumes the element path from the expression above, and the name/title fields are guesses to adjust to the real payload):
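```js
// n8n Code node (Run Once for All Items) — a simplified sketch.
// Walks the elements array, keeps only entries that actually contain
// profile data, and emits one item per viewer.
const elements =
  $input.first().json.data.data.premiumDashAnalyticsObjectByAnalyticsEntity.elements ?? [];

const out = [];
for (const el of elements) {
  const lockup = el?.content?.analyticsEntityLockup;
  const profile = lockup?.ctaItem?.actionData?.entityProfile;
  if (profile?.publicIdentifier) {
    out.push({
      json: {
        username: profile.publicIdentifier,
        url: `https://www.linkedin.com/in/${profile.publicIdentifier}`,
        // name/title paths are assumptions — adjust to whatever
        // your actual payload exposes in the lockup object
        name: lockup?.title?.text ?? null,
        title: lockup?.subtitle?.text ?? null,
      },
    });
  }
}
return out;
```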
Also, if I change the number after “start:” in the API call’s request_url, I get 10 URLs at a time, in chronological order.
Now it’s not about the additional Unipile calls I’ll also need to use, but about the proper n8n algorithm to, e.g., daily pull approximately 200 profile viewers (from start:0 to start:19) and then:
1. Compare the up-to-200 pulled URLs against the previous day’s URL list and only store the new ones (lists #2 and #3 below).
2. For the URLs that actually lead to user profiles (i.e. don’t contain ‘/search/results/people/’): get those profiles’ locations (profile enrichment via another Unipile API call), filter out irrelevant countries, and use yet another Unipile API call to send them an invite request (with a custom name). See the sketch after this list for the profile-vs-search split.
3. For the cases where people hide their profiles: store those searches separately, specifically the keywords they contain, in order to compare them against the data (companies / universities / industries) of another LinkedIn leads list and match on that within at most a few days’ timestamp difference.
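For step 2 above, the profile-vs-search split I have in mind is roughly this (a sketch; the `url` field name and the downstream IF-node routing are assumptions about my own workflow):

```js
// n8n Code node — tags each pulled viewer URL so a downstream IF node
// can route real profile URLs (enrichment + invite) separately from
// the privacy-mode search links (keyword-matching flow).
// Assumes each incoming item carries a `url` field.
return $input.all().map((item) => {
  const url = item.json.url ?? '';
  const isSearch = url.includes('/search/results/people/');
  let keywords = null;
  if (isSearch) {
    // Pull the keywords out of the search link for later matching
    try {
      keywords = new URL(url).searchParams.get('keywords');
    } catch (e) {
      // leave keywords null if the URL doesn't parse
    }
  }
  return { json: { ...item.json, isSearch, keywords } };
});
```

A downstream IF node on `isSearch` could then send one branch to the Unipile enrichment/invite calls and the other to the separate storage for hidden viewers.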
Hey @Dan_Burykin, it doesn’t sound too complicated. You probably need some sort of storage to hold the data you’ll compare against the next day. Pick yours, whether Google Sheets, Airtable, or something else.
As for the start range from 0 to 19, you can do something like this (a sketch; it assumes the request_url you captured from the web console contains a literal “start:0” that can be substituted):
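```js
// n8n Code node — generates one item per page so a downstream
// HTTP Request (Unipile) node can fetch all 20 pages of 10 viewers.
// BASE_URL is a placeholder for the request_url you captured from
// the web console — paste your real one.
const BASE_URL = 'PASTE_YOUR_CAPTURED_REQUEST_URL_HERE'; // must contain "start:0"

const pages = [];
for (let start = 0; start <= 19; start++) {
  pages.push({
    json: {
      start,
      request_url: BASE_URL.replace('start:0', `start:${start}`),
    },
  });
}
return pages;
```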
Yes, I was able to pull the needed amount of data.
You helped me with using Google Sheets as a database before, so I’m likely able to use it again. Side note: my current n8n installation also has Redis, so I wonder if it makes sense to use that instead. I’m not tech-savvy enough for it though, so please share your opinion or advice on this as well.
Anyway, my question is how to sync up each daily pull with the previous day’s, because the number of visitors will differ and the initial scrape data doesn’t carry any timestamp value to sync on.
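What I’ve been considering is something like this (a sketch; “Read Sheet” is a hypothetical name for a Google Sheets node that reads yesterday’s list, and the `username` column is my assumption):

```js
// n8n Code node — keeps only viewers not already present in the sheet
// and stamps each new one with today's date, since the scrape itself
// carries no timestamp. Node and column names are assumptions —
// rename them to match your workflow.
const seen = new Set(
  $('Read Sheet').all().map((item) => item.json.username),
);

const today = new Date().toISOString().slice(0, 10); // YYYY-MM-DD

return $input
  .all()
  .filter((item) => !seen.has(item.json.username))
  .map((item) => ({
    json: { ...item.json, firstSeen: today },
  }));
```

The `firstSeen` column would then supply the timestamp the raw scrape lacks, so the daily comparison becomes a simple set difference on username.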