this post was submitted on 08 May 2024
28 points (100.0% liked)

Programming


Nearly a decade back I wrote a lot of browser CI tests with headless Chrome as well as BrowserStack. I loved the idea, but the tests just didn't handle things being a bit outside of perfect IRL, like a page taking a moment longer to load. They ended up full of waits, took a long time to write, and were prone to being flaky. They basically lacked "common sense", and it made me think that one day someone would figure out how to make them work better.

I'm wondering if there are new frameworks, workflows, or startups that have made this stuff easier and better. I'm not really in tech anymore, but I wouldn't mind writing some tests if the experience were better.

top 10 comments
[–] best_username_ever@sh.itjust.works 12 points 6 months ago (2 children)

People around me have used https://playwright.dev/ successfully, and you can program it in multiple languages. That's for websites, though.

For native applications (like C++ or C#) it's still a mess, but it works fine; nothing has really broken in the past 10 to 15 years.

[–] vvv@programming.dev 3 points 6 months ago

I just started using this at $jorb. Check out their "UI mode"; that's all I'm going to say about that.

[–] hallettj@leminal.space 2 points 6 months ago

Modern frameworks like Playwright do a good job of avoiding those waits, so the tests are less flaky and faster.
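
For illustration, a minimal sketch of what that looks like in TypeScript with @playwright/test (the URL, labels, and headings here are made up): the actions and the web-first assertion retry automatically until they succeed or time out, so there are no hand-written sleeps.

```typescript
import { test, expect } from '@playwright/test';

test('logging in shows the dashboard', async ({ page }) => {
  await page.goto('https://example.test/login');

  // Actions auto-wait for the element to be attached, visible and enabled.
  await page.getByLabel('Email').fill('user@example.test');
  await page.getByLabel('Password').fill('correct horse battery staple');
  await page.getByRole('button', { name: 'Log in' }).click();

  // Web-first assertion: retries until it passes or the timeout expires,
  // so a slow page load doesn't need an explicit sleep.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```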

[–] Kache@lemm.ee 9 points 6 months ago* (last edited 6 months ago)

The synchronization problem (the flakiness and all the waits) is tricky to get right. Browsers are concurrent systems, and programming around one is specialized enough that many devs don't do it well; IMO, if you're adding ad-hoc waits or nesting timeouts, you've already lost.
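
To make that concrete, here is a hedged sketch (hypothetical page, selector, and API endpoint) of the difference between an ad-hoc wait and synchronizing on the event the test actually cares about:

```typescript
import { test, expect } from '@playwright/test';

test('search results appear after submitting a query', async ({ page }) => {
  await page.goto('https://example.test/search');

  // Anti-pattern: a fixed delay that is either too short (flaky) or too long (slow).
  // await page.getByPlaceholder('Search').fill('widgets');
  // await page.keyboard.press('Enter');
  // await page.waitForTimeout(3000);

  // Better: wait for the thing the test actually depends on.
  await page.getByPlaceholder('Search').fill('widgets');
  await Promise.all([
    page.waitForResponse((resp) => resp.url().includes('/api/search') && resp.ok()),
    page.keyboard.press('Enter'),
  ]);
  await expect(page.getByTestId('result-row').first()).toBeVisible();
});
```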

[–] MajorHavoc@programming.dev 7 points 6 months ago* (last edited 6 months ago) (1 children)

RobotFramework is pretty nice.

The core challenges still exist, particularly with web automation. But RobotFramework at least has much better helper methods, syntax and quality of life tools than a few years back.

I still don't typically see teams maintaining anything deeper than a few smoke tests, for most projects.

Edit: I also see a decent number of folks using Playwright pretty happily.

Source: I consult with various dev teams on this kind of thing.

[–] paige@lemmy.ca 2 points 6 months ago

Sounds like just a few integration tests for the core use cases is the ticket, just like before. Really unfortunate; I would have bet that by now there would be some startup that had made an automated user you train to do tests with a Chrome extension or something.

[–] arthur@lemmy.zip 4 points 6 months ago

I think the situation has improved, but it's still far from perfect. Last time, I used robocop and Python to build some tests, but that's not something I do often, so there may be better options available.

[–] qaz@lemmy.world 4 points 6 months ago* (last edited 6 months ago)

Laravel has a system called Dusk where you can specify what buttons to click and what to expect, and it saves screenshots of each step. It was pretty reliable in my experience.

I have also tried using Selenium myself to scrape a site, but it was very flaky and unreliable.
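
For reference, plain Selenium can be made less flaky with explicit waits, but you have to write them yourself. A rough sketch using the Node selenium-webdriver package (the URL and selector are invented):

```typescript
import { Builder, By, until } from 'selenium-webdriver';

async function scrapeHeadlines(): Promise<string[]> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.test/news');

    // Explicit wait: poll for up to 10 s until the elements exist,
    // instead of assuming the page has finished rendering.
    await driver.wait(until.elementLocated(By.css('.headline')), 10_000);

    const elements = await driver.findElements(By.css('.headline'));
    return Promise.all(elements.map((el) => el.getText()));
  } finally {
    await driver.quit();
  }
}
```

Even then you end up sprinkling these waits through every test, which is exactly the drudgery the auto-waiting frameworks try to remove.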

[–] abhibeckert@lemmy.world 3 points 6 months ago* (last edited 6 months ago) (1 children)

Yeah I've given up on integration tests.

We just do "smoke testing": essentially a documented list of steps that a human follows after every major deployment. We also have various monitoring tools that do a reasonably good job of detecting and reporting problems (for example, how much money to charge a customer is calculated twice by separate systems, and if they disagree an alert is triggered and a human investigates; if sales are lower than expected, that gets investigated too).

Having said that, you can drastically reduce the bug surface area, and how often you need to do smoke tests, by moving as much as possible out of the user interface layer into a functional layer that closely matches it. For example, if a credit card is going to be charged, the user interface is just "invoice number, amount, card detail fields, submit, cancel", and all the submit button does is read every field (including the invoice number and amount) and send it to an API endpoint.
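
A hedged sketch of that split (the endpoint, field names, and handler are all made up for illustration): the click handler only gathers values and forwards them, so there is very little logic left in the UI to break.

```typescript
// All the submit button does: read the fields and hand them to the API.
// Endpoint and field names are hypothetical.
async function onChargeSubmit(form: HTMLFormElement): Promise<void> {
  const payload = {
    invoiceNumber: (form.elements.namedItem('invoice') as HTMLInputElement).value,
    amountCents: Math.round(
      Number((form.elements.namedItem('amount') as HTMLInputElement).value) * 100,
    ),
    cardToken: (form.elements.namedItem('cardToken') as HTMLInputElement).value,
  };

  const response = await fetch('/api/charges', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });

  if (!response.ok) {
    throw new Error(`Charge failed: ${response.status}`);
  }
}
```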

From there, all of the possible code paths are covered by unit tests, and unit tests work really well if your code follows industry best practices (avoid side effects, have a good dependency injection system, etc.).
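
And a sketch of the unit-test side, assuming the charge logic takes its dependencies as arguments (the gateway interface, function name, and test runner choice are illustrative, not from the comment): the payment gateway is injected, so the test swaps in a stub and covers the code path without a browser or a real card.

```typescript
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical charge logic with its side effect injected.
interface PaymentGateway {
  charge(cardToken: string, amountCents: number): Promise<{ id: string }>;
}

async function chargeInvoice(
  gateway: PaymentGateway,
  invoiceNumber: string,
  amountCents: number,
  cardToken: string,
): Promise<string> {
  if (amountCents <= 0) throw new Error('Amount must be positive');
  const result = await gateway.charge(cardToken, amountCents);
  return `${invoiceNumber}:${result.id}`;
}

// Unit test with a stub gateway: no browser, no network.
test('charges the card and links the charge to the invoice', async () => {
  const calls: Array<[string, number]> = [];
  const stubGateway: PaymentGateway = {
    async charge(cardToken, amountCents) {
      calls.push([cardToken, amountCents]);
      return { id: 'ch_123' };
    },
  };

  const receipt = await chargeInvoice(stubGateway, 'INV-42', 1999, 'tok_abc');

  assert.deepEqual(calls, [['tok_abc', 1999]]);
  assert.equal(receipt, 'INV-42:ch_123');
});
```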

I generally don't bother with smoke testing if nothing that remotely affects the UX has changed, and I keep the UX as a separate project so I can be confident it hasn't changed. That code might go a year without a single commit, even on a project with a full-time team of developers. Users also appreciate it when you don't force them to relearn how their app works every few months.

[–] paige@lemmy.ca 1 points 6 months ago

This makes it sound like 10 years later, nothing much has changed.