It seems like they implemented permission checks purely in the frontend, and not just on one endpoint, but almost everywhere.
While it is conceptually easy to avoid this, I have seen similar mistakes much more frequently than I would like to admit.
Edit: the solution "check all permissions on the backend" reminds me of the solution to buffer overflows: "just add bounds checks everywhere". It's clear to the community at large what needs to be done, but getting everyone to apply this consistently is... not so easy.
> Edit: the solution "check all permissions on the backend" reminds me of the solution to buffer overflows: "just add bounds checks everywhere". It's clear to the community at large what needs to be done, but getting everyone to apply this consistently is... not so easy.
I don't see those as the same. Buffer overflow checks are a very specific implementation (and language) detail and can happen absolutely anywhere in a codebase. Permission checks happen at a specific boundary and are related to how you design your application.
Whenever I had any say on how a project was developed, I'd always insist on a clear separation between the development of the backend API and the frontend client code. In my experience, it makes things like this much easier to avoid (and test for). You also get a developer API for "free" (which to be honest, is the main reason I prefer to do it that way).
Junior developer probably opened a Jira ticket, saw a UI mockup of a permission dialog, and did exactly that task with nobody senior enough to know better. That's how you reproduce the bugs that were in fashion 15-20 years ago, in my experience!
There are two new frontend guys on the team and an understaffed backend. It's manageable, nothing as dramatic as this, but constantly having to remind them of the benefits of doing as much as possible in the backend and keeping only high-level logic on the frontend is aggravating.
Also no review or planning anywhere in that process.
I'm semi-confident that if a Junior were to talk to another Junior before starting about things to look out for, and then the code was reviewed by say a third Junior, they would not have this bug.
Call me naive, but I don't think Juniors are as oblivious as they are made out to be
As a long time engineering manager, I have significantly benefited from this: juniors leveling me up on WCAG/ADA, strcmp timing attacks, performance timing. Many of us start managing younger than we maybe should, and the burnout I had in my early years pulled me out of my passion to learn more about comp sci. It was the random enthusiasm of younger folk that, in those times, was my exposure to topics I hadn't dived into yet.
I have witnessed how hiring, listening to, and supporting early-career enthusiasm has significantly improved every startup I've had the joy to be a part of.
This is so true. I've seen this so many times. The darling of product, who can deliver so fast. They leave a trail of smoking rubble and half working features behind them.
Why can't you be more like darling of product over there.
What do you even do around here; all you seem to ever do is take darling of product's code, make a few changes (which I don't understand), and commit it as your own work. It appears you are either trying to take credit for darling of product's work or are sabotaging their amazing 10x work.
However, I also try to make it a habit to not blame people for not knowing something. This presents as a structural problem in that company: they needed to hire people who do know how to secure server code and put them into a position to do so. Blame the company and those who decided to save every last penny in personnel cost.
> I also try to make it a habit to not blame people for not knowing something
There’s a point where critical thinking skills come into play. I’ve seen people walked off the premises for doing stuff like this with customer data. Actual seniors who had never been blamed for anything suddenly become intolerable threats to the company because they didn’t bother to check what they were doing and forced the company to disclose a breach.
Sexual preferences and such are special category data, and if you are an engineer dealing with this stuff you should treat it as though data breaches could get someone killed.
Sure, part of the responsibility of this is on management, but it's absolutely on the engineers too.
So if someone who can't drive, finds a car with the keys in it, and starts driving it, and causes an accident, who do you blame?
And do you have any reason at all to believe the backend people didn't know? They wrote a fair amount of code and infrastructure, so they cannot have been blank slates.
The people who hired the person who can't drive and gave them a job as a driver.
> do you have any reason at all to believe the backend people didn't know?
Well, either they knew and wanted to implement proper auth and were prevented from doing it, or they knew and couldn't be bothered, or they didn't know that their backend system wasn't properly locked down and were too incompetent to have a clue.
Yeah, the people who put those people into the position to touch server side code are to blame. But then the OP is right: the people who made these code changes should really not have touched anything server side, or anything security relevant, in the first place.
They shouldn't have to - the architecture should be made in such a way that permission checks are done without you specifically having to call them every time. This is the entire reason middleware exists!
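For what it's worth, the idea fits in a few lines. A sketch only (hypothetical route and permission names, no particular framework): handlers get wrapped in the auth check once, at registration time, so no individual endpoint can forget to call it.

```javascript
// Sketch: wrap every handler in a permission check at registration,
// instead of hoping each handler remembers to do it itself.
function withAuth(permission, handler) {
  return (req) => {
    if (!req.user || !req.user.permissions.includes(permission)) {
      return { status: 403, body: "forbidden" }; // deny by default
    }
    return handler(req);
  };
}

// All routes pass through withAuth in one place (names are made up).
const routes = {
  "GET /profile": withAuth("profile:read", (req) => ({
    status: 200,
    body: `profile of ${req.user.id}`,
  })),
  "DELETE /photo": withAuth("photo:delete", () => ({
    status: 200,
    body: "deleted",
  })),
};

const dispatch = (route, req) => routes[route](req);
```

In a real framework this would be an `app.use(...)` middleware or a route-group guard; the point is that the check lives at the boundary, not inside each handler.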
Well, but apparently they let people create the architecture who just shouldn't have touched the backend code. That's the whole point. Since it was not just a single endpoint or so - it was everywhere!
I agree they should just quit but that requires experience to understand too. By the time they've learned that, they have also learned that client-side access controls are decorative.
Eternal September. Everyone starts somewhere, it’s just all the time now. In ten years, the dev will explain to a junior how bad they messed up, and why they have to validate this way. Well, I don’t know, but that’s what I hope.
yeah, now imagine another engineer going "my first bridge just fell apart the first time a real truck tried to cross it lol" or "man, my first plane crashed so hard"...
Ya know, the Roman tradition was, you gotta stand under the bridge while the army marches over it. If it collapses, you die too. Maybe there's something to having nudes of that dev.
Real engineering is expensive. And hard. Moving atoms around is tough. I've never cut stone, but I've melted and cast copper and aluminum. That's real and dangerous work.
Computation is cheap and plentiful. And I kinda like having full control of "stuff". But maybe we do need licensing or personal liability. If I could wave a magic wand, and make that exist, I don't really know what rules I'd put in place.
> Maybe there's something to having nudes of that dev.
Most users of these sorts of app don't pay enough attention to security to care. Do you really think that most developers are any better?
Most developers are just normal people who happen to be able to write a bit of code and convinced someone to employ them. Just like anyone else, far too many live under the delusion that "it can't happen to me."
Translation: making them eat their own dogfood and risk their own embarrassment won't help; they would have to know better, first! =)
That's an interesting idea. Bridge builders and flight sims are used in industry to test to see if a bridge design will fail or if a plane will crash. They're not limited to oversimplified and fun video games.
I wonder if there's a market for a "write a CRUD app and let it loose on the Internet and watch it get pwned" simulator/game.
That's hiring a pen tester, and there is a market for it, but companies don't do it as much as they should because it costs money while the app already "works" and brings in revenue. Of the 3 I've worked at, only one had yearly pen tests done.
No one hires someone to test what happens when a bridge is shot with a missile from 6000 miles away. The bridge "works" in the same way that the software "works".
Oh, I see. No, the missile is a hacker attacking your software remotely. Bridges are just accepted that they will collapse if deliberately attacked by a determined attacker. Software is held to a higher standard, not a lower one.
Yet after decades of this messaging, we still have these people touching the server-side code. Is it likely another few years of the same messaging will fix it?
I work in security and I don't trust myself to tie my shoes correctly every day.
OP's comparison is great. Bounds checks are easy. There are many overconfident C++ programmers who say they would never introduce a vulnerability like that. But it still happens, because in this class of vulnerabilities it's often enough to forget one check.
I once caught a webdev doing frontend authentication with a plain JavaScript dialog. Yes, simple! Put the password in the JS, and do a simple comparison. Why did I notice? Because the owner of the lamp account contacted me that all their data was suddenly gone. Checked the logs, and yep, Googlebot clicked all the "Delete" links in their internal management view. Simply because JavaScript is opt-in :-) Called the developer and educated him on what he just did. I lost a lot of trust in web people that day.
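For anyone who hasn't seen this failure mode, here's a reconstructed sketch (hypothetical paths, not the actual site) of why a crawler that never runs JavaScript ends up deleting everything: the only "auth" is client-side, and the destructive action is a plain GET link.

```javascript
// Anti-pattern sketch. The client-side "login" that a crawler never runs:
//   if (prompt("Password?") !== PASSWORD_EMBEDDED_IN_THE_JS) location.href = "/";
// Meanwhile the server trusts every request and mutates state on GET:
const db = new Map([["42", { name: "entry 42" }]]);

function handle(method, path) {
  const m = path.match(/^\/delete\/(\w+)$/);
  if (method === "GET" && m) {
    db.delete(m[1]); // destructive side effect on a crawlable GET link
    return { status: 200, body: "deleted" };
  }
  return { status: 404, body: "not found" };
}
```

Two independent sins here: no server-side auth at all, and a state-changing GET (crawlers and link prefetchers assume GET is safe, per HTTP semantics).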
And that’s a very good reason never to fill in exact personal data, e.g. date of birth. Especially dating apps seem to need them, but don’t do it. Fill in something within a year or so from your real birthday.
And while this dating app isn’t well known, it caters to people with different tastes (such as bdsm and group sex) and queer people. Needless to say that this is very sensitive in many parts of the world.
It's not about making it bad, it's about making it cheaper/faster. They probably hired less experienced developers or didn't give them proper time to implement the features they wanted.
The costs involved with maintaining garbage are infinitely more than maintaining something well built.
This is why software is so lucrative.. because the true cost of the software isn't how much you pay for it .. it's "how much is it going to cost you to change to something else?"
I was that cheap contractor. My bosses were oblivious to anything but the schedule and bugs surfacing to the client's reviewer. I guess threats of imprisonment in the US and EU, plus mandatory insurance for data (if the photos are not suitable for LinkedIn, you pay eye-watering premiums), will be the only deterrent.
Of course, the incentives shouldn't promote coverups.
I used the app briefly a few months prior to their discovery. The app was riddled with bugs. Things like chats not loading (received the push notification, but in the app not visible until force quit/reload). I’m not surprised it took them so long to remediate. I would guess a shoestring contractor dev team.
This is what happens when both founders are not technical. I use the app and it was obvious from day one it’s been designed and implemented by the lowest bidder.
Not necessarily the lowest bidder. It's quite easy for a consulting company that is bad at development to make a convincing pitch to a nontechnical founder as long as they're better at sales than they are at development.
Because a technical person would immediately find all of the glaring flaws and issues with their app and fix it promptly. Unless they’re incompetent. Which might be worse than non-technical.
Attending node.js events does not mean you are technical. A lot of people, I would say most people in my experience, go to those events to connect with technical talent.
The problem is they probably don’t have full time developers. They probably built the app once years ago via a dev shop and then never updated it again. The talent moved on and updating it is expensive now.
Cost minimizing aligns well with the criminal-negligence theory. In fact every egregious security issue I've come across, like plain text passwords, public S3 buckets, publicly-accessible internal tools... it all directly correlates to being cheap in my experience.
They had turnover of £39m last year and profits of £5.5m (double the previous year, quite good for a UK business of this scale). If they don't have full time devs it'll be shocking, certainly had the money to sort crap like this out
They (or someone they hired) actually rewrote their whole app about a year ago, I remember seeing lots of people complaining about how much worse and buggier it got after the rewrite.
I have no idea if the back end was also replaced then or if the vulnerabilities were present in the previous version as well.
The online dating space (I use the term liberally) is a huge fucking mess. There's only 2 or 3 companies with an offering that is anywhere near useful, and they're either evil, incompetent, or both.
Maybe it's time for an open source federated dating service or something. Or at least something that doesn't sell your data, leak your nudes, or get you beaten up/raped/murdered. Probably easier said than done.
I’ve been conceptualizing one for a few years, but just don’t have the free dopamine to build it alongside my day job.
ActivityPub even has the mechanics to facilitate it through publishing Person records. There is MASSIVE space for innovation, especially if you prioritize on non-monogamy, non-heterosexual, non-gender-conforming needs.
Dating apps are a REALLY hard space to get into, however. You need a critical mass of users in a given area before they’re useful, and monetizing it inevitably means making the app less useful. There’s a reason okcupid went to shit after it stopped being a non-profit.
Now you've piqued my interest, especially if it could be done in a safe but distributed way, without a focus on profits.
How you'd envision it to work, considering the open nature of ActivityPub but the need/want from the users to remain private when using dating applications/protocols?
Profile data can be restricted based on authorized fetch, just like mastodon.
For messaging, I hadn’t put much thought into it, but one could establish end-to-end encryption based on mutual validation signatures. Theoretically. Encryption isn’t my strong suit, but as long as the encoded body is unicode, it’s just as easy to transmit as any other text.
But, like, I also don’t know of any dating site that professes to be encrypting message contents.
I feel like one's nudity is something that should stay in analog form, where you have almost complete and absolute control over its distribution. If people want to make digital copies of their analog form, that's their right, but they need to realize that no system is, nor ever will be, secure enough to prevent their inevitable release and distribution.
This is something that young people especially don't take into account; the potential and probable long term ramifications and embarrassment. Providing such a facility is merely inviting such negative effects.
I see where you're coming from, but I don't agree. It should be possible for people to share stuff (nudity or anything else, really) with the people they want, without worrying that $incompetentCompany will leak it for all to see.
That, and a bit of embarrassment isn't really all that bad. The problem is that the leaked stuff keeps on circulating :-(
I have a pet theory that seeing more "real" people in the nude is good for your body image. There's a lot less nudity than there was 30 years ago (from movies to locker rooms and everything in between), there's a lot more shame, and everyone is wistfully staring at Instagram garbage.
This is utterly horrifying, clearly absolutely zero thought was put into security at all.
I'm a game developer and we put more effort into keeping our game fair than this company does in keeping its users safe. They should be sued into oblivion.
Before I realised the app was a buggy mess I was very surprised to see it had an interests section that provided no context for the interests. For example: virtually everyone had Domination or Submission as one of their interests but no context whatsoever of which role they wanted. To not realise how fundamentally wrong this is for that scene implies they're clueless across the board.
Do note that profiles in a dating app are in principle accessible to everyone. You open the app, profiles appear. There's no ACL or anything like that.
GraphQL allows your front-end to query your data. Which is cool. But from the backend this is all really opaque (and usually implemented by a 3rd party library that has no idea about your access control).
Unless you're going to implement your access control in the database itself (not the worst idea, certainly better than doing it in the front end), then it's very hard to unwrap the GraphQL query in backend code to work out exactly what records should be returned/restricted.
Implementing decent access control in the backend means understanding the query and implementing a whole set of models/classes/functions/whatever that grok the database schema and can make decisions about "if the user_id is XXX then it can/cannot see this image in this context" [0]. They obviously implemented this in the front end because that's a lot easier with GraphQL.
I'm not saying this is a good implementation of GraphQL and that therefore the problem lies with GraphQL exclusively. I'm saying that GraphQL makes this mistake easier to make because it explicitly tries to remove the need for the backend to understand the query and so makes this kind of complex security situation harder.
[0] e.g. a specific image may be publicly accessible from the user's profile, or only available to matches, or only in a chat context (but not group chats), and inaccessible at any time from blocked users, etc. You can easily come up with a bunch of complex edge cases for just this one case.
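Those edge cases do fit into a single deny-by-default policy function, though. A sketch with made-up field names (this is not the app's actual data model):

```javascript
// Hypothetical policy function for the cases in [0]; fields are invented.
function canSeeImage(viewer, image, ctx) {
  if (image.owner.blocked.includes(viewer.id)) return false; // blocked: never
  switch (image.visibility) {
    case "public":
      return true; // visible from the profile to anyone
    case "matches":
      return image.owner.matches.includes(viewer.id); // matches only
    case "chat":
      // visible only inside a 1:1 chat the viewer is a participant of
      return ctx.kind === "chat" && !ctx.isGroup &&
             ctx.participants.includes(viewer.id);
    default:
      return false; // deny anything unrecognized
  }
}
```

The hard part isn't writing this function, it's making sure every resolver that returns an image actually calls it.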
It's pretty easy. Treat each resolver that retrieves data like it's a REST endpoint and secure it, and add a query allowlist that you append items to during your CI builds.
You don't need to touch the AST or understand the context of the rest of the query. Just answer the question "can user ABC see the photos of user XYZ?" in the resolver that fetches the photos. If this is inefficient then prefetch some data or use a dataloader.
Now, if you're using some magic library that turns GraphQL into SQL, that's going to be different.
Still I think this type of thing is much more likely to happen with GraphQL including various N + 1 and even worse performance issues.
Like if you imagine having junior engs they will be much more likely to make the mistake with GraphQL than otherwise and it is harder to review as well.
The permissions checking becomes a real spaghetti and difficult to understand in practice compared to just one by one checks.
The permissions checking is one-by-one checks. It's exactly as hard a mistake to make in GraphQL as it is in REST unless you've got more resolvers than an equivalent REST app would have, which is unlikely and would mean GraphQL wasn't a good choice.
I do think that you've got a good point about how the knowledge isn't widespread yet, that it's easier for frontend engineers to write awful expensive queries, and that GraphQL is very hard to secure against DoS unless you lock it down with query hashes.
I was responding to someone who was talking about one-by-one checks in REST. It is in fact true that using one-by-one checks in GraphQL is pretty similar to using them in REST.
You can do the equivalent of applying middleware at a routing level in GraphQL by wrapping multiple resolvers, although the semantics will be different because you're not working with a tree of routes and so you'll need to group your resolvers together in some other way. In the Node.js libraries a resolver is just a function, so you can very easily wrap a bunch of them in another function:
    // auth.js
    export const checkParent = (permission, fn) => (parent, args, ctx) => {
      ctx.can(permission, parent); // CASL
      return fn(parent, args, ctx);
    };

    // resolvers.js
    import * as auth from './auth';

    export const resolvers = {
      User: {
        // could also iterate over all the resolvers within User using
        // Object.entries and apply auth.checkParent if you wanted
        photoURLs: auth.checkParent('read', (parent, _, ctx) => {
          return parent.getSignedPhotoURLs();
        }),
      },
      Query: {
        user: async (_, { id }, ctx) => {
          const user = await ctx.db.users.getById(id);
          ctx.can('read', user);
          return user;
        },
      },
    };
I'm not sure what you mean by "deny requests automatically" because there's obviously no manual step here, and equally obviously I'm not sure what you mean by "scenarios [I] never considered". Are you talking about rate limiting or heuristic detection? You can do those in GraphQL too.
Yes, this stuff is slightly different, but it's genuinely not that hard to secure a GraphQL API.
You don't treat resolvers like RESTful endpoints. You check that the user has permission to access the object (edit: or other value) which the resolver returns. This has nothing to do with RPC and does not stop you using the "graph" part of GraphQL.
For the purposes of comparing a REST API, where permissions checking is done for every endpoint, to a GraphQL API, where permissions checking is done for any resolver which loads data, it is necessary to compare the number of permissions checks you would need across the two services. This does not mean resolvers are in any way equivalent to RESTful endpoints except for comparing how many times you'd need to write `ctx.can('read', photo);` across the two, and even then the numbers will almost certainly be different because the APIs will be different.
The problem is the 'graph' nature of the system; you can check the permission for the object that the resolver returns but that object might be linked to another object that you're not checking for. Because anything can just link to anything, you would have to recursively check the permissions of the entire graph.
If the root query lets you query a user of type User, and the User object embeds an array photos of type [Photo], then there are two possibilities: either the resolver for user is loading the photos and letting the default resolver return them, in which case you know about it and can check permissions for them, or there's a resolver defined for photos, in which case you can check permissions in that second resolver.
Think about it. GraphQL won't go retrieve rows from your database without either a) you installing some other library to do the magic, in which case we should talk about that library instead, or b) you telling it to query your database, in which case you know what data you're querying in each resolver you write and can check that the user has permission to see it.
How do you guys bridge the abstraction gap/wall between resolvers to prevent N+1 queries? I have the suspicion that GraphQL is great for exposing a really generic API, useful when you have no idea what shape the front end will take (how often is that?). But it comes at a heavy price; genericity is always the opposite of specialization. And optimization can only occur during specialization.
Having worked with it for a bit over a year now, it really feels like GraphQL is just a different protocol for writing the same old REST CRUD, while introducing a huge framework with lots of annoying magic and language-level reflection that isn't amenable to extension or modification according to the needs of the developers.
Is that all worth it, just to reduce the amount of HTTP requests? Is it that much of a sacrilege to add specialized REST HTTP endpoints to remedy that otherwise?
Putting a dataloader in front of batch APIs usually works okay. You end up with round trips but they're 1+1 and inside the data centre. I've used AST traversal to compute joins a couple of fields ahead + custom resolvers that only load their data if it wasn't loaded by the parent, but I don't think that's necessary to get decent performance and I wouldn't do it again unless there was a real business need.
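In case anyone hasn't seen the pattern, here's a hand-rolled version of what a dataloader does (a sketch, not the npm `dataloader` package): keys requested by resolvers within the same tick are collected and served by one batch query.

```javascript
// Collect keys requested in the same tick, then resolve them all with a
// single batch call: N+1 becomes 1+1.
function makeLoader(batchFn) {
  let queue = [];
  return (key) =>
    new Promise((resolve) => {
      queue.push({ key, resolve });
      if (queue.length === 1) {
        // flush after every resolver in this tick has enqueued its key
        process.nextTick(async () => {
          const batch = queue;
          queue = [];
          const results = await batchFn(batch.map((q) => q.key));
          batch.forEach((q, i) => q.resolve(results[i]));
        });
      }
    });
}

// Stand-in for `SELECT ... WHERE user_id IN (...)` (invented data).
let queriesIssued = 0;
const loadPhotos = makeLoader(async (userIds) => {
  queriesIssued += 1; // one round trip for the whole batch
  return userIds.map((id) => [`photo-of-${id}`]);
});
```

Each per-user resolver just calls `loadPhotos(parent.id)` and stays oblivious to the batching.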
I agree that genericity is often the opposite of specialisation. I disagree that it's a heavy price. REST is pretty general. To my mind specialist APIs are things like streaming video, file uploads, anything that relies on caching in an intermediate layer, etc. and these are all examples of where you'd follow the established standards and add some RESTful routes/services. I don't think it's sacrilege to upload files in a different way to how you load your user dashboard or your interface for editing project permissions.
Any third party GraphQL library worth its salt should implement some kind of ACL. It seems to be the case with the most popular ones [1] [2]. One simple idea is to implement authorization in the data models: GraphQL delegates `get` and `list` to a resource model that can implement authorization based on the context of the request.
I did not have this issue when using HotChocolate. You can easily attach authorization rules to entities or properties of entities, which are then handled automatically, and also to mutations.
Have they included real profiles in the screenshots of the "Discover profiles" menu and the list of likes? If so that's pretty irresponsible even with the faces obscured.
I'm not terribly surprised. I use it, but would describe it as incompetently put together as my bank's app? Maybe worse; it barely functions at all. I don't know how they managed it.
When I used it, I enjoyed the community, but the app was never competently written. Then a while back they had a flag day where they rolled out a new app and a new server to everyone all at once, and most people were not able to log in; those that were lost their premium perks if they were paying customers, likes and chats got lost, etc. I was never actually able to log in, and just dropped the app at that point.
I am honestly amazed that these researchers held off for as long as they did on publishing. If crappy startups are given 6 months to close egregiously bad privacy holes like this, they will continue to abuse the privilege they have in collecting this information to begin with. I say give them 2 months and then release. Fuckers need to learn not to play dice with people's private information.
As decades of Windows blue screens proved, shitty software won't scare away users if the software can provide a service or capability that the users can't easily get elsewhere.
And the fact that this, the literal only solution that has any chance of succeeding, is this far buried down in the comments, says so much about this industry.
Saddest part is that this sort of stuff (or at least missing authorization checks) is very common. I do not really know what the solution is at this point. Clearly not enough developers care. Or can stop it...
Is it an education problem? If so, and if there were a training budget, a day or two running against some simple capture-the-flag exercises might do a lot...
Applies to all dating apps, really: just treat any info you put in your profile as 100% public, for anyone, worldwide. Location is easily faked, other filtering options are about as effective as a lone "do not enter" sign with no fence - I can put any info I like into my profile to fit your criteria and have you show up in my feed.
Chats? The only IM apps with functional E2EE are: Signal, iMessage, WhatsApp; and even those have trade-offs. Treat everything else as readable by some third party, and dating apps by design need to be able to look into people's chats to be able to handle harassment cases.
That of course is no excuse for having gaping security/privacy holes, but you're trading off quite a bit of privacy by design; it's like meeting in a public space where you can feel a little bit safer with someone you don't know yet.
I'd say if you're concerned with any of that, go meet new people IRL, but there are 100% legitimate cases where this is not the most effective strategy (e.g. Feeld's primary target audience).
> Has WhatsApp ever had a security leak that we know about?
I don't know of any, but I distrust anything Meta/FB/MZ does, out of principle.
I have more trust in iMessage, but it's incredibly tightly tied to Apple's devices (as far as I can tell, part of its security architecture relies on the hardware/SEP).
Signal (as a non-profit org) could have been a neutral third party everyone could feel safe to trust, but they've lost my confidence when they introduced support for cryptocurrencies - I can no longer trust their motives. It also does not offer any choice over some security/usability trade-offs (like syncing your chat history to a new device); I understand this is critical for e.g. whistleblowers, but a deal-breaker for many of the rest of us.
Hey, sign me up. Outside the "necessary evil" trade-offs inherent to facilitating one-on-one meetups, I would really love a platform that treats people like people, not cattle to be milked for money.
(This is a throwaway account but I've been on HN for a decade)
I just read this and attempted to delete my and my partner's profile data. The process is currently totally broken in-app. There is no way to proceed past a certain point. There's nothing self-identifying about us in the app but still... I'm furious.
I didn't exactly know of it, but I hit enough glitches on that terrible app when I was using it that it was obvious it was sending info it didn't mean to, and there were some atrocious performance issues that made it feel crudely thrown together.
Pretty sure I flagged something or another as a security issue but can't recall what it was
Anybody who's ever used this app is probably not surprised to hear this. It's been a shitshow since day one, one of the buggiest apps I think I've ever used.
Even with a full redesign/rebuild over the past year it still is nothing but glitchy software.
This is pretty funny. I've been abusing this shitty API for a while to see who likes me in this dating app.
I didn't realise the problems were this bad. They've had massive issues with their tech stack from a user POV. I've multiple times had my phone running incredibly hot while using it.
Useful context is that they completely redid the app from scratch in 2023 using a contractor instead of in house developers and the launch was not very smooth
You shouldn't be touching the server-side code if you find this hard to keep straight.
Junior developer probably opened a Jira ticket, saw a UI of a permission dialog, and did exactly that task with nobody senior enough to know better. That's how you reproduce the bugs that were in-fashion 15 - 20 years ago in my experience!
There are two new frontend guys on the team and an understaffed backend. It's manageable, nothing as dramatic as this, but constantly having to remind them of the benefits of doing as much as possible on the backend and keeping the frontend logic high-level is aggravating.
Also no review or planning anywhere in that process.
I'm semi-confident that if a Junior were to talk to another Junior before starting about things to look out for, and then the code was reviewed by say a third Junior, they would not have this bug.
Call me naive, but I don't think Juniors are as oblivious as they are made out to be
I should add this works best if you hire with some diversity, such as one Junior with a preference for security topics.
If you go up to the counter and yell "10 React devs please", don't be surprised
As a long time engineering manager, I have significantly benefited from this: leveling me up in WCAG/ADA, strcmp timing attacks, performance timing. Many managers start younger than we maybe should, and the burnout that I had in my early years pulled me out of my passion to learn more about comp sci. It was the random enthusiasm of younger folk that, in those times, was my exposure to topics I hadn't dived into yet.
I have witnessed how hiring, listening to, and supporting early-career enthusiasm has significantly improved every startup I've had the joy to be a part of.
Seems like a solid development cycle.
Junior tries something -> hit production
I do not see multiple issues with this.
9 out of 10 PMs love this one hack to boost velocity
they were probably thinking what a 10x engineer they'd found to be so rapid at delivery...
This is so true. I've seen this so many times. The darling of product, who can deliver so fast. They leave a trail of smoking rubble and half working features behind them.
Why can't you be more like darling of product over there.
What do you even do around here; all you seem to ever do is take darling of product's code, make a few changes (which I don't understand), and commit it as your own work. It appears you are either trying to take credit for darling of product's work or are sabotaging their amazing 10x work.
Are you my manager
Hey, it worked on my machine
Ultimately, I don't disagree.
However, I also try to make it a habit to not blame people for not knowing something. This presents as a structural problem in that company: they needed to hire people who do know how to secure server code and put them into a position to do so. Blame the company and those who decided to save every last penny in personnel cost.
> I also try to make it a habit to not blame people for not knowing something
There’s a point where critical thinking skills come into play, I’ve seen people walked off the premises for doing stuff like this with customer data. Actual seniors who have never been blamed for anything are suddenly intolerable threats to the company because they didn’t bother to check what they were doing and forced the company to disclose a breach.
Sexual preferences and such are special category data, and if you are an engineer dealing with this stuff you should treat it as though data breaches could get someone killed.
Sure, part of the responsibility of this is on management, but it's absolutely on the engineers too.
So if someone who can't drive, finds a car with the keys in it, and starts driving it, and causes an accident, who do you blame?
And do you have any reason at all to believe the backend people didn't know? They wrote a fair amount of code and infrastructure, so they cannot have been blank slates.
> who do you blame?
The people who hired the person who can't drive and gave them a job as a driver.
> do you have any reason at all to believe the backend people didn't know?
Well, either they knew and wanted to implement proper auth and were prevented from doing it, or they knew and couldn't be bothered, or they didn't know that their backend system wasn't properly locked down and were too incompetent to have a clue.
People getting paid to create software should know better then these basic mistakes.
than
Yeah, the people who put those people into the position to touch server side code are to blame. But then the OP is right: the people having made these code changes should really not have touched anything server side or even anything security relevant in the beginning.
They shouldn't have to - the architecture should be made in such a way that permission checks are done without you specifically having to call them every time. This is the entire reason middleware exists!
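A minimal, framework-free sketch of that middleware idea (route names and permission strings are hypothetical): every request passes through one central check before any handler runs, so a forgotten per-handler check fails closed rather than open.

```javascript
// Deny-by-default permission middleware (hypothetical, framework-free sketch).
// Each route declares the permission it needs; the handler itself never has
// to remember to check anything.

const routes = {
  "GET /profile":    { needs: "profile:read",  handler: () => "profile data" },
  "DELETE /account": { needs: "account:admin", handler: () => "deleted" },
};

function authMiddleware(user, routeKey) {
  const route = routes[routeKey];
  if (!route) return { status: 404 };
  // Central check: runs before every handler, for every route.
  if (!user.permissions.includes(route.needs)) return { status: 403 };
  return { status: 200, body: route.handler() };
}

// A user with read access can view their profile but not delete the account:
const alice = { permissions: ["profile:read"] };
console.log(authMiddleware(alice, "GET /profile").status);    // 200
console.log(authMiddleware(alice, "DELETE /account").status); // 403
```

The point is that the default outcome of "nobody thought about permissions for this endpoint" becomes a 403, not a data leak.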
Well, but apparently they let people create the architecture who just shouldn't have touched the backend code. That's the whole point. Since it was not just a single endpoint or so - it was everywhere!
I agree they should just quit but that requires experience to understand too. By the time they've learned that, they have also learned that client-side access controls are decorative.
right*
u are wise and right
Eternal September. Everyone starts somewhere, it’s just all the time now. In ten years, the dev will explain to a junior how bad they messed up, and why they have to validate this way. Well, I don’t know, but that’s what I hope.
yeah now imagine another engineer go "my first bridge just fell apart the first time a real truck tried to cross over it lol" or "man my first plane crashed so hard"...
Ya know, the Roman tradition was, you gotta stand under the bridge while the army marches over it. If it collapses, you die too. Maybe there's something to having nudes of that dev.
Real engineering is expensive. And hard. Moving atoms around is tough. I've never cut stone, but I've melted and cast copper and aluminum. That's real and dangerous work.
Computation is cheap and plentiful. And I kinda like having full control of "stuff". But maybe we do need licensing or personal liability. If I could wave a magic wand, and make that exist, I don't really know what rules I'd put in place.
How do you think people should get skilled up?
> How do you think people should get skilled up?
You didn't ask me but I can give you my answer: not on prod and with a lot of reviews!
> Maybe there's something to having nudes of that dev.
Most users of these sorts of app don't pay enough attention to security to care. Do you really think that most developers are any better?
Most developers are just normal people who happen to be able to write a bit of code and convinced someone to employ them. Just like anyone else, far too many live under the delusion that "it can't happen to me."
Translation: making them eat their own dogfood and risk their own embarrassment won't help; they would have to know better, first! =)
That's an interesting idea. Bridge-building and flight sims are used in industry to test whether a bridge design will fail or a plane will crash. They're not limited to oversimplified and fun video games.
I wonder if there's a market for a "write a CRUD app and let it loose on the Internet and watch it get pwned" simulator/game.
That's hiring a pen tester, and there is a market for it, but companies don't do it as much as they should because it costs money while the app already "works" and brings in revenue. Of the 3 I've worked at, only one had yearly pen tests done.
No one hires someone to test what happens when a bridge is shot with a missile from 6000 miles away. The bridge "works" in the same way that the software "works".
A software penetration tester has the same techniques and suite of tools for pwning as "the internet".
I don't see how that statement follows mine. Can you connect them at all?
I thought you were making the comparison that a pentester is like a missile shot at a bridge whereas the internet is the army walking over the bridge.
Oh, I see. No, the missile is a hacker attacking your software remotely. Bridges are just accepted that they will collapse if deliberately attacked by a determined attacker. Software is held to a higher standard, not a lower one.
Yet after decades of this messaging, we still have these people touching the server-side code. Is it likely another few years of the same messaging will fix it?
I work in security and I don't trust myself to tie my shoes correctly every day.
OP's comparison is great. Bounds checks are easy. There are many overconfident C++ programmers who say they would never introduce a vulnerability like that. But it still happens, because in this class of vulnerabilities it's often enough to forget one check.
They didn't ;)
I feel bad now I'm sorry to the person I replied to lol. I didn't mean 'you' I meant a generalized third party person.
I once caught a webdev doing frontend authentication with a plain JavaScript dialog. Yes, that simple: put the password in the JS, and do a simple comparison. Why did I notice? Because the owner of the LAMP account contacted me that all their data was suddenly gone. Checked the logs, and yep, Googlebot had clicked all the "Delete" links in their internal management view, simply because JavaScript is opt-in :-) Called the developer and educated him on what he had just done. I lost a lot of trust in web people that day.
This can happen very easily I think if one uses "automatic db APIs" on the backend. I'm thinking of some automatic graphql setups for example.
I flag it whenever I see it, but it is very worrying how little thought is sometimes put into the scope of client APIs.
This is unfortunately quite common in mobile apps, because "why would a user look closer in a mobile app".
I want to blame juniors, the no-code and ai-code crowd, but I'm as lazy as they are and will just shake my head and move on.
[dead]
And that’s a very good reason never to fill in exact personal data, e.g. date of birth. Especially dating apps seem to need them, but don’t do it. Fill in something within a year or so from your real birthday.
And while this dating app isn’t well known, it caters to people with different tastes (such as bdsm and group sex) and queer people. Needless to say that this is very sensitive in many parts of the world.
They were in the press a lot this week, but for earning money.
https://www.theguardian.com/technology/article/2024/sep/08/t...
It's been observed by many that making bad things seems to be a lot more profitable these days than making good things
It's not making bad, it's making cheaper/faster. They probably hired less experienced developers or didn't give them proper time to implement the features they wanted.
I agree, making something cheaply and quickly is a great way to make it bad, and thus profitable
It's always been like that.
The costs involved with maintaining garbage are infinitely more than maintaining something well built.
This is why software is so lucrative.. because the true cost of the software isn't how much you pay for it .. it's "how much is it going to cost you to change to something else?"
A great argument against relying on any software you can't control if ever I heard one
the costs of maintaining something well built that no-one uses are very low indeed.
unfortunately trash is cheaper and faster, and it takes a certain kind of genius insanity to sell something well built that doesn't exist yet.
It's been a long-running joke between me and some friends that if you want to get rich, you should make a dating app.
1. Desperate men come in hordes. 2. You will probably get bought out by match group for millions.
Of course there are moral qualms and also it may not be actually just as easy as that.
Someone should make sure that The Guardian sees this
Criminal negligence levels of failure, especially given the category of app.
I was that cheap contractor. My bosses were oblivious to anything but the schedule and bugs surfacing to the client's reviewer. I guess threats of imprisonment in the US and EU, plus mandatory insurance for data (if the photos are not suitable for LinkedIn, you pay eye-watering prices), will be the only deterrent.
Of course, the incentives shouldn't promote coverups.
Wow you weren't kidding. These are vulns that would have been embarrassing a decade ago.
I think the timeline is the more damaging part too. Not only was their design woefully inadequate, they don't seem to care.
I used the app briefly a few months prior to their discovery. The app was riddled with bugs. Things like chats not loading (received the push notification, but in the app not visible until force quit/reload). I’m not surprised it took them so long to remediate. I would guess a shoestring contractor dev team.
This is what happens when both founders are not technical. I use the app and it was obvious from day one it’s been designed and implemented by the lowest bidder.
Not necessarily the lowest bidder. It's quite easy for a consulting company that is bad at development to make a convincing pitch to a nontechnical founder as long as they're better at sales than they are at development.
Replace ‘founder’ with organization
The founder used to attend node.js events in London. Not sure why you think he's non-technical.
Because a technical person would immediately find all of the glaring flaws and issues with their app and fix it promptly. Unless they’re incompetent. Which might be worse than non-technical.
Attending node.js events does not mean you are technical. A lot of people, I would say most people in my experience, go to those events to connect with technical talent.
The problem is they probably don’t have full time developers. They probably built the app once years ago via a dev shop and then never updated it again. The talent moved on and updating it is expensive now.
Cost minimizing aligns well with the criminal-negligence theory. In fact every egregious security issue I've come across, like plain text passwords, public S3 buckets, publicly-accessible internal tools... it all directly correlates to being cheap in my experience.
They had turnover of £39m last year and profits of £5.5m (double the previous year, quite good for a UK business of this scale). If they don't have full time devs it'll be shocking, certainly had the money to sort crap like this out
They (or someone they hired) actually rewrote their whole app about a year ago, I remember seeing lots of people complaining about how much worse and buggier it got after the rewrite.
I have no idea if the back end was also replaced then or if the vulnerabilities were present in the previous version as well.
The online dating space (I use the term liberally) is a huge fucking mess. There's only 2 or 3 companies with an offering that is anywhere near useful, and they're either evil, incompetent, or both.
Maybe it's time for an open source federated dating service or something. Or at least something that doesn't sell your data, doesn't leak your nudes, or doesn't get you beaten up/raped/murdered. Probably easier said than done.
I’ve been conceptualizing one for a few years, but just don’t have the free dopamine to build it alongside my day job.
ActivityPub even has the mechanics to facilitate it through publishing Person records. There is MASSIVE space for innovation, especially if you prioritize on non-monogamy, non-heterosexual, non-gender-conforming needs.
Dating apps are a REALLY hard space to get into, however. You need a cumulative mass of users in a given area before they’re useful, and monetizing it inevitably means making the app less useful. There’s a reason okcupid went to shit after it stopped being a non-profit.
And then there’s the moderation problem…
Now you've piqued my interest, especially if it could be done in a safe but distributed way, without a focus on profits.
How you'd envision it to work, considering the open nature of ActivityPub but the need/want from the users to remain private when using dating applications/protocols?
Profile data can be restricted based on authorized fetch, just like mastodon.
For messaging, I hadn’t put a much thought into it, but one could establish end to end encryption based on mutual validation signatures. Theoretically. Encryption isn’t my strong suit, but as long as the encoded body is unicode, it’s just as easy to transmit as any other text.
But, like, I also don’t know of any dating site that professes to be encrypting message contents.
I feel like one's nudity is something that should stay in analog form, where you have almost complete and absolute control over its distribution. If people want to make digital copies of their analog form, that's their right, but they need to realize that no system is, nor ever will be, secure enough to prevent their inevitable release and distribution.
This is something that young people especially don't take into account; the potential and probable long term ramifications and embarrassment. Providing such a facility is merely inviting such negative effects.
I see where you're coming from, but I don't agree. It should be possible for people to share stuff (nudity or anything else, really) with the people they want, without worrying that $incompentCompany will leak it for all to see.
That, and a bit of embarrassment isn't really all that bad. The problem is that the leaked stuff keeps on circulating :-(
I have a pet theory that seeing more "real" people in the nude is good for your body image. There's a lot less nudity than there was 30 years ago (from movies to locker rooms and everything in between), there's a lot more shame, and everyone is wistfully staring at Instagram garbage.
This is utterly horrifying, clearly absolutely zero thought was put into security at all.
I'm a game developer and we put more effort into keeping our game fair than this company does in keeping its users safe. They should be sued into oblivion.
Zero thought was put into anything.
Before I realised the app was a buggy mess I was very surprised to see it had an interests section that provided no context for the interests. For example: virtually everyone had Domination or Submission as one of their interests but no context whatsoever of which role they wanted. To not realise how fundamentally wrong this is for that scene implies they're clueless across the board.
Do note that profiles in a dating app are in principle accessible to everyone. You open the app, profiles appear. There's no ACL or anything like that.
Messages and private pictures are another matter.
Hot take: this is a problem with GraphQL.
GraphQL allows your front-end to query your data. Which is cool. But from the backend this is all really opaque (and usually implemented by a 3rd party library that has no idea about your access control).
Unless you're going to implement your access control in the database itself (not the worst idea, certainly better than doing it in the front end), then it's very hard to unwrap the GraphQL query in backend code to work out exactly what records should be returned/restricted.
Implementing decent access control in the backend means understanding the query and implementing a whole set of models/classes/functions/whatever that grok the database schema and can make decisions about "if the user_id is XXX then it can/cannot see this image in this context" [0]. They obviously implemented this in the front end because that's a lot easier with GraphQL.
I'm not saying this is a good implementation of GraphQL and that therefore the problem lies with GraphQL exclusively. I'm saying that GraphQL makes this mistake easier to make because it explicitly tries to remove the need for the backend to understand the query and so makes this kind of complex security situation harder.
[0] e.g. a specific image may be publicly accessible from the user's profile, or only available to matches, or only in a chat context (but not group chats), and inaccessible at any time from blocked users, etc. You can easily come up with a bunch of complex edge cases for just this one case.
It's pretty easy. Treat each resolver that retrieves data like it's a REST endpoint and secure it, and add a query allowlist that you append items to during your CI builds.
You don't need to touch the AST or understand the context of the rest of the query. Just answer the question "can user ABC see the photos of user XYZ?" in the resolver that fetches the photos. If this is inefficient then prefetch some data or use a dataloader.
Now, if you're using some magic library that turns GraphQL into SQL, that's going to be different.
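The per-resolver check described above might look like this (a hypothetical, library-free sketch; the in-memory "database" and match table stand in for real storage):

```javascript
// Resolver-level authorization: the photos resolver itself answers
// "can the viewer see this user's photos?" before returning anything.

const photosDb = {
  alice: [{ url: "a1.jpg", visibility: "matches" }],
  bob:   [{ url: "b1.jpg", visibility: "public" }],
};
const matches = new Set(["carol:alice"]); // "viewer:owner" pairs

function canSeePhoto(viewerId, ownerId, photo) {
  if (photo.visibility === "public") return true;
  if (photo.visibility === "matches") {
    return matches.has(`${viewerId}:${ownerId}`);
  }
  return false; // deny by default
}

// The resolver for User.photos: ctx carries the authenticated viewer,
// exactly as it would in a real GraphQL server.
function photosResolver(parentUser, _args, ctx) {
  return photosDb[parentUser.id].filter((p) =>
    canSeePhoto(ctx.viewerId, parentUser.id, p));
}

// carol matched alice, so she sees the match-only photo; dave does not.
console.log(photosResolver({ id: "alice" }, {}, { viewerId: "carol" }).length); // 1
console.log(photosResolver({ id: "alice" }, {}, { viewerId: "dave" }).length);  // 0
```

No AST traversal needed: the check lives next to the data access, regardless of what shape the enclosing query had.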
Still, I think this type of thing is much more likely to happen with GraphQL, including various N+1 and even worse performance issues.
Like if you imagine having junior engs they will be much more likely to make the mistake with GraphQL than otherwise and it is harder to review as well.
The permissions checking becomes a real spaghetti and difficult to understand in practice compared to just one by one checks.
The permissions checking is one-by-one checks. It's exactly as hard a mistake to make in GraphQL as it is in REST unless you've got more resolvers than an equivalent REST app would have, which is unlikely and would mean GraphQL wasn't a good choice.
I do think that you've got a good point about how the knowledge isn't widespread yet, that it's easier for frontend engineers to write awful expensive queries, and that GraphQL is very hard to secure against DoS unless you lock it down with query hashes.
> The permissions checking is one-by-one checks
Not true, authorization can be done in middleware. You can deny requests automatically, even scenarios you never considered.
I was responding to someone who was talking about one-by-one checks in REST. It is in fact true that using one-by-one checks in GraphQL is pretty similar to using them in REST.
You can do the equivalent of applying middleware at a routing level in GraphQL by wrapping multiple resolvers, although the semantics will be different because you're not working with a tree of routes and so you'll need to group your resolvers together in some other way. In the Node.js libraries a resolver is just a function, so you can very easily wrap a bunch of them in another function:
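Something like this (a sketch with hypothetical resolver names; the "middleware" is just a higher-order function over resolver functions):

```javascript
// Wrap a group of resolvers with one shared auth check, the GraphQL
// equivalent of router-level middleware.

function requirePermission(permission, resolver) {
  return (parent, args, ctx, info) => {
    if (!ctx.user || !ctx.user.permissions.includes(permission)) {
      throw new Error("FORBIDDEN");
    }
    return resolver(parent, args, ctx, info);
  };
}

// Apply the check to a whole bag of resolvers at once:
function protectAll(permission, resolvers) {
  return Object.fromEntries(
    Object.entries(resolvers).map(([name, fn]) =>
      [name, requirePermission(permission, fn)]));
}

const adminResolvers = protectAll("admin", {
  banUser:    (_p, args) => `banned ${args.id}`,
  deletePost: (_p, args) => `deleted ${args.id}`,
});

const adminCtx = { user: { permissions: ["admin"] } };
console.log(adminResolvers.banUser({}, { id: "42" }, adminCtx)); // "banned 42"
```

Grouping by permission instead of by route tree changes the bookkeeping, but the enforcement itself is the same few lines.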
I'm not sure what you mean by "deny requests automatically" because there's obviously no manual step here, and equally obviously I'm not sure what you mean by "scenarios [I] never considered". Are you talking about rate limiting or heuristic detection? You can do those in GraphQL too. Yes, this stuff is slightly different, but it's genuinely not that hard to secure a GraphQL API.
Wouldn't it seem contra to the principles of GraphQL if you treat resolvers like rest endpoints?
At this point, it's just RPC, no? It's not really a graph. Why didn't I just use RPC/Rest the whole time?
You don't treat resolvers like RESTful endpoints. You check that the user has permission to access the object (edit: or other value) which the resolver returns. This has nothing to do with RPC and does not stop you using the "graph" part of GraphQL.
For the purposes of comparing a REST API, where permissions checking is done for every endpoint, to a GraphQL API, where permissions checking is done for any resolver which loads data, it is necessary to compare the number of permissions checks you would need across the two services. This does not mean resolvers are in any way equivalent to RESTful endpoints except for comparing how many times you'd need to write `ctx.can('read', photo);` across the two, and even then the numbers will almost certainly be different because the APIs will be different.
The problem is the 'graph' nature of the system; you can check the permission for the object that the resolver returns but that object might be linked to another object that you're not checking for. Because anything can just link to anything, you would have to recursively check the permissions of the entire graph.
This does not match my experience.
If the root query lets you query a user of type User, and the User object embeds an array photos of type [Photo], then there are two possibilities: either the resolver for user is loading the photos and letting the default resolver return them, in which case you know about it and can check permissions for them, or there's a resolver defined for photos, in which case you can check permissions in that second resolver.
Think about it. GraphQL won't go retrieve rows from your database without either a) you installing some other library to do the magic, in which case we should talk about that library instead, or b) you telling it to query your database, in which case you know what data you're querying in each resolver you write and can check that the user has permission to see it.
How do you guys bridge the abstraction gap/wall between resolvers to prevent N+1 queries? I have the suspicion that GraphQL is great for exposing a really generic API, useful when you have no idea what shape the front end will take (how often is that?). But it comes at a heavy price; genericity is always the opposite of specialization. And optimization can only occur during specialization.
Having worked with it for a bit over a year now, it really feels like GraphQL is just a different protocol for writing the same old REST CRUD, while introducing a huge framework with lots of annoying magic and language-level reflection that isn't amenable to extension or modification according to the needs of the developers.
Is that all worth it, just to reduce the amount of HTTP requests? Is it that much of a sacrilege to add specialized REST HTTP endpoints to remedy that otherwise?
Putting a dataloader in front of batch APIs usually works okay. You end up with round trips but they're 1+1 and inside the data centre. I've used AST traversal to compute joins a couple of fields ahead + custom resolvers that only load their data if it wasn't loaded by the parent, but I don't think that's necessary to get decent performance and I wouldn't do it again unless there was a real business need.
I agree that genericity is often the opposite of specialisation. I disagree that it's a heavy price. REST is pretty general. To my mind specialist APIs are things like streaming video, file uploads, anything that relies on caching in an intermediate layer, etc. and these are all examples of where you'd follow the established standards and add some RESTful routes/services. I don't think it's sacrilege to upload files in a different way to how you load your user dashboard or your interface for editing project permissions.
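The dataloader pattern mentioned above can be sketched in a few lines without the actual `dataloader` package (this is a simplified, hypothetical reimplementation, not the real library): individual `.load()` calls made during one tick are coalesced into a single batch fetch, turning N+1 queries into 1+1.

```javascript
// Minimal dataloader-style batcher (hypothetical sketch, no external deps).
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];
  }
  load(key) {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // Flush once the current tick's resolvers have all enqueued keys.
        queueMicrotask(() => this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }
  async flush() {
    const batch = this.queue.splice(0);
    const results = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(results[i]));
  }
}

let queries = 0;
const userLoader = new TinyLoader(async (ids) => {
  queries += 1; // one query per batch, however many resolvers asked
  return ids.map((id) => ({ id, name: `user-${id}` }));
});

// Three resolvers ask for users independently; only one "query" runs.
Promise.all([userLoader.load(1), userLoader.load(2), userLoader.load(3)])
  .then((users) => console.log(queries, users.length));
```

The real library adds per-request caching and error propagation on top, but the round-trip math is the same: one batch per tick, not one query per resolver.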
Also if it's really the problem with HTTP Requests, you could still technically abstract multiple REST API/RPC calls into a single HTTP Request.
[dead]
GraphQL requires you to either define per-property access, or precompile queries and put them into a whitelist. Everything else leaks data.
https://hasura.io/docs/2.0/security/allow-list/
Any third party GraphQL library worth its salt should implement some kind of ACL. It seems to be the case with the most popular ones [1] [2]. One simple idea is to implement authorization in the data models: GraphQL delegates ~get~ and ~list~ to a resource model that could implement authorization based on the context of the request.
[1] https://www.apollographql.com/docs/apollo-server/security/au...
[2] https://docs.graphene-python.org/projects/django/en/latest/a...
I did not have this issue when using HotChocolate. You can easily attach authorization rules to entities or properties of entities, which will automatically be enforced, and also to mutations.
Wow. Remarkably responsible, and compassionate, disclosure.
Have they included real profiles in the screenshots of the "Discover profiles" menu and the list of likes? If so that's pretty irresponsible even with the faces obscured.
Actions don't match words.
What? They did hold off. Their actions matched their words.
I'm not terribly surprised. I use it but would describe it as incompetently put together as my bank app, maybe worse; it barely functions at all. I don't know how they managed it.
It was so bad when I used it; if it wasn't a bizarre memory leak or a privacy issue, it was extremely poorly executed UX.
Between it and Fetlife, there are some huge issues with those communities just sticking with the first app that emerges, regardless of quality.
Given the overlap between those communities and OSS people I’m amazed no one has created a B Corp that does this stuff right.
No one migrates from the first big thing, so it's a waste of time. Feeld should be killed by this and it'll barely make a dent.
When I used it, I enjoyed the community, but the app was never competently written. Then a while back they had a flag day where they rolled out a new app and a new server to everyone all at once, and most people were not able to log in; those that were lost their premium perks if they were paying customers, likes and chats got lost, etc. I was never actually able to log in, and just dropped the app at that point.
I am honestly amazed that these researchers held off for as long as they did on publishing. If crappy startups are given 6 months to close egregiously bad privacy holes like this, they will continue to abuse the privilege they have in collecting this information to begin with. I say give them 2 months and then release. Fuckers need to learn not to play dice with people's private information.
The question is -- did others know about it?
e.g. https://news.ycombinator.com/item?id=41517747
God damn it. People deserve better than this. Almost inclined to take a pay cut to go and fix this mess.
However little you're willing to take, they can hire a less competent person cheaper.
You would hope that a mission driven company like them would care.
Or at least, a profit driven company would care about scaring away users.
As decades of Windows blue screens proved, shitty software won't scare away users if the software can provide a service or capability that the users can't easily get elsewhere.
They don't need your charity, they need to be fined
And the fact that this, the literal only solution that has any chance of succeeding, is this far buried down in the comments, says so much about this industry.
Saddest part is that this sort of stuff, or at least missing authorization checks, is very common. I don't really know what the solution is at this point. Clearly not enough developers care. Or can stop it...
Is it an education problem? If so, and if there were a training budget, a day or two running against some simple capture-the-flag exercises might do a lot...
Who do you trust? Would tinder and bumble have the same mindset?
Applies to all dating apps, really: just treat any info you put in your profile as 100% public, for anyone, worldwide. Location is easily faked, other filtering options are about as effective as a lone "do not enter" sign with no fence - I can put any info I like into my profile to fit your criteria and have you show up in my feed.
Chats? The only IM apps with functional E2EE are: Signal, iMessage, WhatsApp; and even those have trade-offs. Treat everything else as readable by some third party, and dating apps by design need to be able to look into people's chats to be able to handle harassment cases.
That of course is no excuse for having gaping security/privacy holes, but you're trading off quite a bit of privacy by design; it's like meeting in a public space where you can feel a little bit safer with someone you don't know yet.
I'd say if you're concerned with any of that, go meet new people IRL, but there are 100% legitimate cases where this is not the most effective strategy (e.g. Feeld's primary target audience).
Lots of great points in your post.
Real question: Has WhatsApp ever had a security leak that we know about? Example: Someone can break into accounts, or chats were leaked?
> Real question: Has WhatsApp ever had a security leak that we know about? Example: Someone can break into accounts, or chats were leaked?
Yes, a bunch of them. I don't remember any of the years, but from the top of my head:
- Pegasus was installable via WhatsApp calls that didn't even need to be answered, probably the most famous vulnerability with the largest impact
- A bunch of multimedia vulnerabilities that allowed attackers remote code execution
- At least one huge database dump was released at some point
Oh, I forgot about Pegasus. Hat tip there.
> Has WhatsApp ever had a security leak that we know about?
I don't know of any, but I distrust anything Meta/FB/MZ does, out of principle.
I have more trust in iMessage, but it's incredibly tightly tied to Apple's devices (as far as I can tell, part of its security architecture relies on the hardware/SEP).
Signal (as a non-profit org) could have been a neutral third party everyone could feel safe to trust, but they've lost my confidence when they introduced support for cryptocurrencies - I can no longer trust their motives. It also does not offer any choice over some security/usability trade-offs (like syncing your chat history to a new device); I understand this is critical for e.g. whistleblowers, but a deal-breaker for many of the rest of us.
Those types of bugs can be sold for millions, so you probably won't hear about them.
The for-profit dating scene is a quagmire. Sure it can work, I've seen it work, but at what cost?
We desperately need a new platform owned and operated by the people, for the people.
The Tokyo government is trying that: https://www.cnbc.com/2024/06/07/japan-pushes-citizens-toward...
Hey, sign me up. Outside the "necessary evil" trade-offs inherent to facilitating one-on-one meetups, I would really love a platform that treats people like people, not cattle to be milked for money.
(This is a throwaway account but I've been on HN for a decade)
I just read this and attempted to delete my and my partner's profile data. The process is currently totally broken in-app; there is no way to proceed past a certain point. There's nothing self-identifying about us in the app, but still... I'm furious.
With an article like this, it wouldn't surprise me if there's still a way to delete your data if you intercept the network traffic!
It's hard to expect any improvement while the personal data insecurity is tolerated without any penalty or fines.
interesting read - anyone have pointers to other app pentesting walk throughs like this?
I wrote up finding some of these issues entirely independently: https://mjg59.dreamwidth.org/70061.html
So the question is -- how many others knew about this and were exploiting it without discussing it? :(
Great question that would ideally be asked of the people who have logs
I didn't exactly know of it, but I hit enough glitches on that terrible app when I was using it that it was obvious info was being sent that wasn't meant to be, plus some atrocious performance issues that made it feel crudely thrown together.
Pretty sure I flagged something or other as a security issue, but I can't recall what it was.
https://github.com/juliocesarfort/public-pentesting-reports is a substantial collection of public reports
Off the top of my head, DoyenSec has some good reports in there targeting web apps
For pentesting, the company often hires people to test under an NDA and keeps everything secret, because they don't want to be embarrassed.
There are some public pentests out there. For example https://www.opentech.fund/impact/security-safety-audits/
If you want to read some really hard core security vuln hunting, see https://googleprojectzero.blogspot.com/
Anybody who's ever used this app is probably not surprised to hear this. It's been a shitshow since day one, one of the buggiest apps I think I've ever used.
Even with a full redesign/rebuild over the past year it still is nothing but glitchy software.
> View other people’s matches
"BRB going to slaughter everyone my wife has chatted to"
Hard to believe the levels of incompetence here
They have investor funding... how come no due diligence was done?
This is pretty funny. I've been abusing this shitty API for a while to see who likes me in this dating app.
I didn't realise the problems were this bad. They've had massive issues with their tech stack from a user POV. I've multiple times had my phone running incredibly hot while using it.
It gets worse. As of this moment it's impossible to delete your account data due to errors.
(Ask me how I know)
Useful context is that they completely redid the app from scratch in 2023 using a contractor instead of in-house developers, and the launch was not very smooth.
https://mashable.com/article/feeld-app-down