Having already tasked N.I.C.K., my OpenClaw AI assistant, with handling translations and Facebook uploads, I expected the rest of the process would just be a repeat of the same pattern. And that expectation wasn’t far off.
The WordPress publishing flow was incredibly smooth. All it took was creating an app, granting the necessary permissions, setting the auth key, and handing the skill to N.I.C.K., and it was done. I could even choose between saving a post as a draft for later review or publishing it automatically. For now, I chose the draft route.
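For reference, the WordPress side of this is simple enough to sketch. The snippet below creates a draft through the standard WordPress REST API (`/wp-json/wp/v2/posts`) using an application password over Basic auth. The site URL, username, and password are placeholders, and the actual skill wiring inside OpenClaw is not shown, this is just the API call underneath.

```python
import base64
import json
import urllib.request

# Placeholder site; a real setup would point at your own WordPress install
API_URL = "https://example.com/wp-json/wp/v2/posts"


def make_draft_payload(title: str, content: str) -> dict:
    # status="draft" keeps the post out of the public feed until reviewed;
    # switching to "publish" would be the "publish first, edit later" route
    return {"title": title, "content": content, "status": "draft"}


def post_draft(title: str, content: str, user: str, app_password: str) -> dict:
    # WordPress application passwords work with plain HTTP Basic auth
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(make_draft_payload(title, content)).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Flipping between the draft route and full automation is then a one-word change in the payload.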
However, if I decide to push for a higher level of automation later, I might have to embrace a “publish first, edit later” approach. I’m not a major influencer, so it’s not like there’s a flood of people waiting the moment I post. I’d likely have enough time to finish edits before the first reader arrives. And even if someone came early, they would just see something less polished, not something factually incorrect.
Riding that momentum, I moved on to LinkedIn. This one was a bit different.
First, to create an app, I needed a company page to formally associate it with. Creating a page for a non-existent company with a plausible name wasn’t hard. The problem came next.
LinkedIn’s access tokens have a maximum validity of 60 days.
I’ve heard that official company apps can sometimes get tokens valid for up to a year, but the fact remains: they expire. The real issue wasn’t the inconvenience of periodic re-authentication; it was the complexity of the token issuance process, which makes it difficult to automate.
Here’s a simplified breakdown of the token issuance flow:
- Enter the authorization URL in a browser, log in to LinkedIn, and an approval screen appears.
- Clicking “Approve” redirects to a results page. But since I hadn’t set up a handler page on localhost, it naturally lands on an error page instead.
- The URL of that error page, however, contains the authorization code. I can copy that code, paste it into a token issuance script, run it, and finally get the token.
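The mechanical parts of that flow can be sketched in a few lines. The authorization and token endpoints below are LinkedIn’s documented OAuth 2.0 URLs; the client ID, secret, and redirect URI are placeholders, and `extract_code` simply pulls the `code` parameter out of whatever URL the browser lands on, error page or not.

```python
import urllib.parse

AUTH_ENDPOINT = "https://www.linkedin.com/oauth/v2/authorization"
TOKEN_ENDPOINT = "https://www.linkedin.com/oauth/v2/accessToken"


def build_authorization_url(client_id: str, redirect_uri: str,
                            scope: str, state: str) -> str:
    # This is the long, parameter-laden URL that gets typed into the browser
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{AUTH_ENDPOINT}?{urllib.parse.urlencode(params)}"


def extract_code(landed_url: str) -> str:
    # After approval, the browser ends up on redirect_uri (here, an
    # unhandled localhost page) with ?code=... in the query string
    query = urllib.parse.urlparse(landed_url).query
    return urllib.parse.parse_qs(query)["code"][0]


def token_request_body(code: str, client_id: str, client_secret: str,
                       redirect_uri: str) -> bytes:
    # POSTing this form body to TOKEN_ENDPOINT exchanges the one-time
    # authorization code for the (60-day) access token
    return urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }).encode()
```

The awkward manual step is precisely the hand-off between `build_authorization_url` and `extract_code`: a human has to carry the code from the browser’s address bar back to the script.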
The cleanest solution would be to set up a site to handle the callback page, and after authentication, have localhost connect to that site to retrieve the code. But building all that for something I’d use by myself only once every 60 days felt like overkill.
So, I decided to handle it in a more OpenClaw-native way. After all, if it can control the browser directly, there’s no reason it can’t read the value from the address bar.
I had a brief discussion with N.I.C.K. about this flow to confirm my understanding and see if I was missing anything. As expected, reading the browser’s address bar or clicking the approval button wasn’t the issue.
The problem was the LinkedIn login itself.
Given how the OAuth flow works, this was perfectly obvious in hindsight.
> “If you want me to handle the login, I’ll need your password… and you probably don’t want me storing that, right? 🔐”
N.I.C.K. hit the nail on the head.
After our discussion, I gave up on the plan to fully automate token issuance. No matter how I thought about it, handing over my login credentials to N.I.C.K. just didn’t sit right with me.
So, we designed a collaboration structure: N.I.C.K. opens the browser and enters the (quite complex) authorization URL, which includes app identifiers and other parameters. Then, I handle the login and click the approval button. Once the result page appears, N.I.C.K. takes over to handle the remaining steps.
The only thing left was to run this process periodically. We agreed that N.I.C.K. would notify me during the writing workflow if the token was nearing expiration or had already expired, rather than forcing a strict 60-day schedule.
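That expiry check is easy to reason about if the token’s issue date is stored alongside it. Here is a minimal sketch; the field names and the one-week warning window are my own inventions, not anything N.I.C.K. actually exposes.

```python
from datetime import date, timedelta

VALIDITY_DAYS = 60  # LinkedIn's maximum token lifetime
WARN_DAYS = 7       # start nagging a week in advance (arbitrary choice)


def token_status(issued_on: date, today: date) -> str:
    # Returns "ok", "expiring", or "expired" so the writing workflow
    # can decide whether to interrupt with a re-auth request
    expires_on = issued_on + timedelta(days=VALIDITY_DAYS)
    if today >= expires_on:
        return "expired"
    if today >= expires_on - timedelta(days=WARN_DAYS):
        return "expiring"
    return "ok"
```

Tying the check to the writing workflow rather than a cron job means re-authentication only ever comes up when there is actually something to post.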
Once the token issue was resolved, the actual posting process went off without a hitch. N.I.C.K. summarized the content to fit within LinkedIn’s 3,000-character limit and added a message directing readers to the full post on my blog, complete with the link. Honestly, the result looked more polished than the messages I used to write myself.
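The character budget itself is the easy part: reserve room for the closing line and link first, then make sure the summary fits in what is left. A sketch, with the summarization itself (the part N.I.C.K. actually does) out of scope and hard truncation as a fallback:

```python
LINKEDIN_LIMIT = 3000  # LinkedIn's post character limit


def fit_post(summary: str, blog_url: str) -> str:
    # The summary plus the pointer to the full post must stay within the limit
    footer = f"\n\nRead the full post: {blog_url}"
    budget = LINKEDIN_LIMIT - len(footer)
    if len(summary) > budget:
        # Last-resort truncation; a real summary would be rewritten upstream
        summary = summary[: budget - 1] + "…"
    return summary + footer
```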
Now, all that’s left are Instagram, X, and Threads.
