It is a mysterious thing how the influencers of various kinds in software engineering have to jump from one new thing to the next, even if the "new" new was created a long time ago.
Using old systems is in no way fashionable. Everyone wants the latest whatnot on their resume to be more attractive for future job opportunities.
We are forever stuck in a fast fashion web.
I like to think about typed vs untyped languages. For a long time they both existed in relative harmony (much more so than emacs vs vi).
Then it became THE THING to use untyped languages. In part because "having to write the type in the code was far too much work".
We got young software developers who had never used a typed language but who had joined the "church of untyped".
Skipping ahead, some influencer discovers typed languages and how they solve many problems. How about that.
Choices are good. We can make informed choices as to what tool makes the most sense in the context of what we are trying to solve (and its fit with the team, the legacy code, etc.).
Is it so mysterious, though? Isn't it "simply" the thesis - antithesis - synthesis cycle? And this cycle needs time: years, sometimes decades.
OO starts in the 1970s. C++ comes in 1985, Java in 1995 (while Smalltalk was born in 1972).
Hoare talks about CSP in 1978. Go mainlined it in 2009.
SQL started in the 1970s and reached first consensus in the 1980s.
Or take the evolution from SGML -> XML -> JSON w/ JSON Schema...
Concepts need time to mature.
And merging concepts (like duck typing and static type analysis in Python) additionally needs both base concepts to be mature enough first.
And likewise, at a personal level, brains need time to absorb the concepts. So of course "younglings" only know part of the world.
And on the other hand, mature technologies often have evolved their idiosyncrasies, their "warts", and they sometimes paint themselves into a corner. C++ with ABI stability; Perl with sigils; Java with "Java only" and a poor JNI and "xml xml xml"; UML with "graphical language"; many languages with no syntax baseline marker ("this is a C++-95 file", "this is a Python 2 file"), freezing syntax...
So imho it's not simply a mysterious urge for the shiny new stuff, but a mixed bag of overwhelm (I'd rather start something new than dig through all that's already here) and deliberate decisions (I do know what I'm talking about, and X and Y can't be "simply evolved" into Z, so I start Z).
I feel the main reason this cycle still needs those decades to loop back again is that the world underwent its digital revolution during the current iteration. We can't do synthesis without breaking everything, so we instead spawn sub-cycles on top - and then most of those end up frozen mid-way for the same reason, and we spawn sub-cycles on top of them, and so on. That's why our systems look like geological layers - XML at the bottom, JSON in the middle, and at the top JSON w/ schemas, plus a little Cambrian explosion of alternatives forming its antithesis... and looking at it, SGML starts looking sexy again. And sure enough, you can actually use it today - but guess how? That's right, via an npm package[0].
I wonder if we'll ever get a chance to actually pull all the accumulated layers of cruft back - to go through synthesis, complete the cycle and create a more reasonable base layer for the future.
--
[0] - https://sgmljs.net/docs/producing-html-tutorial/producing-ht... / https://sgmljs.net/
There's rarely, if ever, a time when it's cheaper to pull that much back; some people do it, but they are overwhelmed by the inertia of the accumulated cruft.
Having lived through all that, I think it's worth remembering that old typed languages were no match for modern ones. I'm not talking about arcane environments here, only about practical and widely used ones.
People didn't really go just "untyped". Untyped languages were simply quicker at becoming easier to use and less broken than the old ones. Or you may even view it as paving the road. Typed languages slowly caught up to the same quality under new names.
This nuance is easy to miss, but set the cut-off date to 2005 and recall what your "options" were again. They weren't that shiny.
I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with sufficient manual tests to catch type errors, because no one did the last bit. Things just broke at runtime instead.
> I remember a lot of praise for duck "typing" as if it was an innovation, and claims you should just write unit tests to catch type errors.
> People kept glossing over the fact that writing typed code would be so much easier than writing untyped code with sufficient manual tests to catch type errors, because no one did the last bit. Things just broke at runtime instead.
That was the dumbest thing ever. Exchanging automation for lots of manual work was supposed to be innovation?
So many tradeoffs, just so that sometimes someone could write a clever, unmaintainable bit of brainfuck in the production app.
Agreed. I caught a bug in Python code I wrote yesterday by adding a type hint to a variable in a branch that had never been executed. MyPy immediately flagged it, and I fixed it, without ever having to watch it fail.
I put typehints on everything in Python, even when it looks ridiculous. I treat MyPy errors as show-stoppers that must be fixed, not ignored. This works well for me.
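For readers who haven't tried it, here is a minimal sketch of what that looks like (the handle/payload/retries names are invented for the example); mypy checks the branch even though nothing ever executes it:

    def handle(payload: dict[str, str], verbose: bool = False) -> int:
        if verbose:
            # Never executed in any test, but adding the annotation is enough:
            # mypy reports: Incompatible types in assignment
            # (expression has type "str", variable has type "int")
            retries: int = payload["retries"]
            return retries
        return 0

Run mypy over the file and the error shows up without the branch ever being hit.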
Yes, this always struck me as completely bananas. Types are a vastly superior form of test for certain conditions.
Duck typing with type inference would be nice but seems to be completely esoteric. You could have both the flexibility of being able to write "func(arg)" without having to specify the type, and the constraint that if you call arg.foo() in your function the compiler should enforce that you actually have a foo method on arg.
(special side-eye to the people materializing methods and properties all over the place at runtime. This seems to have been a rails/python thing that is gradually going out of fashion.)
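For what it's worth, Python's typing.Protocol gets surprisingly close to this: the call site stays "just call func(arg)", while mypy checks structurally that the argument really has the method. A small sketch (HasFoo/Duck/Brick are made-up names):

    from typing import Protocol

    class HasFoo(Protocol):
        def foo(self) -> str: ...

    def call_foo(arg: HasFoo) -> str:
        # mypy enforces that whatever is passed in actually has foo()
        return arg.foo()

    class Duck:
        def foo(self) -> str:
            return "quack"

    class Brick:
        pass

    call_foo(Duck())     # fine: Duck structurally satisfies HasFoo, no inheritance needed
    # call_foo(Brick())  # rejected by mypy: "Brick" is missing foo()

It's structural checking rather than true inference of the protocol, but it gives you the "duck-typed call site, compiler-enforced foo()" combination.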
I love Ruby for relatively small scripts, but I wouldn't want to attempt anything very large with it. Once my scripts get too long, I find I need to start adding manual type checking to keep from tripping over myself.
I think it's a "forgetting" again: Hindley-Milner type inference dates back to 1969! And still very few languages let you use it. Some have wisely added a very weak type inference (var/auto).
Let's not forget the massive handicap that there is one and only one programming language that the browser allows: Javascript.
I think HM is simply not practical. You don't want your types to be a multidimensional logic puzzle solved by a computer, because you want to reason about them and you are much weaker than a computer. You want clear contracts and the rigidity they provide to the whole code structure, and only then automatic filling-in of the gaps to avoid spelling out the obvious. You also rarely want Turing completeness in your types (although some people are still in this phase, looking at some d.ts'es).
Weak var/auto is practical. An average program and an average programmer have average issues that do not include "sound type constraints" or whatever. All they want is to write "let" in place of a full-blown declaration that must be named, exported, imported and then used with various modifiers like Optional<T> and Array<T>. This friction is one of the biggest reasons people may turn to the untyped mess: because it doesn't require this tedious maintenance every few lines and just does the job.
Very few people use HM type systems even today, though.
I think it really is worth considering that Java effectively didn't have sum types until, I think, version 17, and nowadays, many modern and popular statically typed languages have them.
Quicker only when not using IDEs, which some folks apparently failed to learn while hyping scripting languages.
Also, some of the coolness of today's typed languages was already available in those days; I had ML languages in the early 1990s during my degree.
ML for the Working Programmer was published in 1991.
I have no issue at all recommending TypeScript over Python today. This is not something I'd ever have done before ~2020. I completely agree this has nothing to do with fashion and everything to do with TypeScript and V8 being amazing, despite the ugly JavaScript in between.
In the meantime, the parallel universe (to HN at least) of dotnet happily and silently keeps delivering… can't wait for the C# renaissance.
I've been doing C# for a long time, and throughout C# has quietly got on with delivering.
No drama over Generics, Exceptions, Lambdas/Closures, functional vs procedural syntax, etc. C# either did a lot of this well already or the features were quietly added to the language in broadly sensible ways. The package management isn't perfect, but I don't think any ecosystem has mastered that yet. It's more sensible than NPM at least. (But what isn't!)
And all the while, a lot of work has been done on performance, to the point where I'd now trust it to be as fast as almost anything else. I'm sure the SIMD-wielding C/Rust experts can out-perform it, but for everyday code C# is writable, readable and still performs well.
Okay, so C# 1.0 was a clunky beast; it wasn't until version 2, when they fixed things like variable capture in closures, that it became pleasant to write. I don't know if it was Microsoft's ownership or the 1.x experience that put people off.
Basically Java, right?
C++ was mostly gnarly, especially considering the tools available to most people. There were cool languages but the tooling was not cool. GHC...
C++ Builder and Delphi were definitely cool.
Most of the mainstream untyped languages are "children of the 1990s", the era of the delusion that PCs can only get faster and that a computer is always plugged into a free power source.
Then along came the iPhone that prohibited inefficient languages.
So these cycles don't just happen for the sake of cycles, there are always underlying nonlinear breakthroughs in technology. We oscillate between periods of delusions and reality checks.
I'm not sure the runtime efficiency is the critical determinant: assembly is after all an untyped language (+), Javascript is not really typed and is the major language run on the iPhone (on every web page!).
It was more of a natural language/DWIM movement, especially from Perl.
(+ one of these days I will turn my "typesafe macro assembler" post-it note into a PoC)
> Even if the "new" new was created a long time ago.
laughs in common lisp
(+ ha ha ha ha)
The biggest sin IMO is the language devs who decide to "bring in types" to previously untyped langs. Dropped spaghetti everywhere.
Kubernetes is great. Legacy bash and cron is hip and cool again just because you can wrap stuff in a k8s cronjob.
Is that Alfonzo Church of the untyped lambda calculus?
Yes, however his brother Alonzo did not mind the typed lambda calculus all that much.
It's also part of the economic cycle: centralized architectures thrive in a monopolistic oligarchy, and would quickly fall out of fashion after some trustbusting.
There is a management principle here that the tech world doesn't seem to have internalised yet. It is linked to the basic problem that we can't measure programmer productivity.
Software tends to be modularised based on a company's management structure. What the components do tends to reflect the manager. When the manager changes, the software either starts to change too or succumbs to bitrot. It is pretty normal for tech companies to start listing after the original founders move on - even a company like Google or Microsoft, with money and technically strong individual employees, starts to regress toward the mean as new managers come in with a different and murkier vision than the founders'.
With luck the transition can be from excellence to a different form of excellence, but the usual strategy, encoding management in the org structure and bureaucratic processes, doesn't seem to work at preserving high-performing tech companies. It depends on specific people in key leadership positions. Because if they get swapped out, there is no way to measure the productivity hit that less effective managers foist on developers until the company has notably regressed toward the mean years later.
I'd bet something like this happened here. A team lead or someone was swapped out and replaced with someone who thought differently.
Or, what happens sometimes, a "team lead or someone" was replaced with a pack of freshly appointed team leads, with nobody being positioned to "break ties" with them and with an explicit ban on any sane engineering guidance "because we don't want to depend on one person".
Then those team leads would try to drive their teams and efforts in alignment with the ways they saw familiar or acceptable, and with zero interest for the "big picture" - they would be focused on the survival of their position and their team only. Fiefdoms would emerge, and every little such fiefdom would be pulling the rug its way. With disastrous consequences for all of security, end user convenience, stability, resilience and malleability of the system.
The zero interest in big picture and fiefdoms hits home pretty hard. Even if a subgroup wants to put emphasis on the big picture that's just one more set of hands on the rug. Suddenly you're spending more time arguing and fighting political battles instead of building great stuff. You can win some battles, but you have to have amazing persistence to not burn out or retreat to your subgroup mentality after a while.
I really don't care if I disagree with top level technical leadership. Having no technical leadership is even worse.
> There is clearly a failure of engineering leadership. I am still puzzled why.
I personally believe it's a lack of good engineering leadership, and it's only getting worse at this point. I've experienced a fair share of technical leaders without enough technical background and/or experience. People/leadership skills are equally important imho, but people with both are hard to find (a scarce resource), and it seems like it's often the tech side that gets compromised.
I think one of the reasons is that the IT industry is simply growing too fast, meaning we have a very small pool of people with many years of experience and a very large pool of people with less experience (compared to other industries). But we do need technical leaders, hence the compromise (need?) to pick people with less experience. Unfortunately.
As a rough generality, being good at something, like proper good at it, is incompatible with the basic forces of how modern organisations work. By this I mean the systems of reward, power, decision-making.
That doesn't mean all organisations work this way (mythically, startups of old didn't), nor that some leaders can't manage this tension. But tension it is, which means that while these leaders will often achieve "better" outcomes (that is, "more good" outcomes with respect to the area of expertise or craft they're good at, often but not always with more empathy and better working conditions for those delivering), they'll also encounter more barriers to reaching and holding positions of power, face greater scrutiny, probably be confused why the organisation around them isn't understanding or valuing their capability (and spend a lot of time translating into frames it does understand and value), and ultimately burn out more.
I think the challenge is largely a matter of scale. Once you have thousands or tens of thousands of people, communication becomes by far the dominant factor in outcomes. You can set up different structures and cultural norms to try to nudge things in the right direction, but there is no org structure that can solve this because local details always matter. The trick is understanding which details matter and why.
Ultimately I think this depends on management competence and judgement. I also think IC leadership is a critical counterweight to the distortions of empire building that incentivize creating messes. And finally, as a competent individual you need the maturity to recognize the difference between unavoidable but tolerable organizational dysfunction and broken leadership where it's impossible to do good work, and hence time to cut bait and leave.
Mediocre developers are the first to hop over to the management track or non-coding sanctuaries like system architecture. So naturally they are overrepresented in tech leadership.
I don't know why the author blames Spring in this.
You can't swing a dead cat in HN without someone saying "just use Spring, it handles session cookies for you automatically"
Likewise, why blame JWTs for needing to reauthenticate? Not needing to reauthenticate is JWTs' unique selling point.
And I don't think we developers ever knew how to write distributed systems. It sounds like the Weblogic guys did though.
I share the author's frustration with this trend that _everything_ be a JSON object.
Just recently I had some colleagues create a 200 MB JSON blob representing data records, with the goal of querying the data in various ways.
I instead loaded it into a SQL database and wrote a few queries which quickly got the job done.
Also I totally agree that people have forgotten how to stream data via HTTP now, coming up with all sorts of crappy handcrafted approaches, often involving JSON. I've seen many sorry examples of this too.
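For what it's worth, the "load it into SQL" route is only a handful of lines with the standard library. A rough sketch, assuming the blob is a JSON array of flat records with made-up id/status/amount fields:

    import json
    import sqlite3

    # Assumed shape: one big JSON array of flat record objects.
    with open("records.json") as f:
        records = json.load(f)

    conn = sqlite3.connect("records.db")
    conn.execute("CREATE TABLE IF NOT EXISTS records (id TEXT, status TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO records (id, status, amount) VALUES (?, ?, ?)",
        ((r["id"], r["status"], r["amount"]) for r in records),
    )
    conn.commit()

    # The "various ways of querying" are now just SQL.
    for status, total in conn.execute(
        "SELECT status, SUM(amount) FROM records GROUP BY status"
    ):
        print(status, total)
    conn.close()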
See: https://jargonfile.johnswitzerland.com/stone-knives-and-bear...
As an older coder, past my 40s, I'd say the industry hasn't forgotten; it's just moved on.
Distributed transactions across many microservices are too inefficient and complex to deal with. DT is fine if you don't have microservices, but then scaling out quickly is hard.
Session cookies likewise: how do authN and authZ work through each microservice without overloading a central service? JWTs do decently.
Don’t use microservices? That’s a different debate.
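What "JWTs do decently" usually cashes out to: each service verifies the token locally against the issuer's public key instead of calling a central session service on every request. A sketch assuming the PyJWT library and an RS256-signed token; the key file, audience and header handling are illustrative:

    import jwt  # PyJWT

    # The auth service's public key, distributed to every microservice
    # (in practice often fetched from a JWKS endpoint and cached).
    with open("issuer_public_key.pem") as f:
        ISSUER_PUBLIC_KEY = f.read()

    def authenticate(headers: dict) -> dict:
        """Verify the bearer token locally; no round-trip to a central session store."""
        token = headers["Authorization"].removeprefix("Bearer ")
        return jwt.decode(
            token,
            ISSUER_PUBLIC_KEY,
            algorithms=["RS256"],
            audience="orders-service",  # illustrative audience claim
        )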
> Distributed transactions across many microservices are too inefficient and complex to deal with. DT is fine if you don't have microservices, but then scaling out quickly is hard.
Perhaps microservices were the wrong idea then... They added complexity and split up what is still conceptually a single coordinated operation.
>Don’t use microservices? That’s a different debate
If it's the proper debate though, we should get to it - not just ignore that part, and say "we moved on", as if we improved things...
There is no "we": everyone does it differently. The term "engineering", which we stole from engineers, implies making pragmatic decisions based on cost, complexity and requirements. Honestly, in real life I see erring on the side of the traditional more than I see the extravagant.
> The Weblogic servers were configured to enable distributed transactions, so … there was a high degree of assurance that there will be no data corruption nor race conditions.
> we have forgotten how to make distributed apps
Have you, really? Did you ever know? It sounds like you just:
- used a framework (I hope the “high assurance” bit is a joke…)
- then forgot to configure it correctly for a new deployment environment.
Honestly, “Engineering excellence” and “we build things with Weblogic distributed transactions” are two things that are not commonly put together.
Ouch!
I wrote much of the Weblogic server's distributed transactions framework back in the day. Forget EJB (which I also wrote :(, but in my defense it was forced on us by IBM and Oracle, and I personally hated it).
The JTA bit was a thin sliver over the various database and messaging vendors' XA drivers, which differed in important and annoying ways. Some were not thread-safe (SQL Server), some drivers' connections had to be pinned to the same thread (Informix), etc. As a user, if you had to have distributed transactions, you could have done worse than Weblogic, methinks.
Sorry! I didn’t mean to throw shade on your work.
What I mean is that being able to turn a key in a car ignition doesn’t make one a car mechanic.
> “high assurance” bit is a joke…
OP here.
It is not a joke; Weblogic distributed transactions use two-phase commit under the hood. From the point of view of contemporary computer science, two-phase commit gives up availability in the CAP theorem. Deadlocks may still happen.
The issue with moving from Weblogic to Kubernetes is that previously Weblogic enforced consistency and prevented partitioning. With Kubernetes, this responsibility moved from Weblogic to the application developers. Were they prepared or empowered to take it over? I think not all of them, and not everywhere.
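For readers who haven't met it, here is a toy sketch of the two-phase commit idea (in-memory, no networking, entirely illustrative). The availability cost shows up in the fact that a participant that has voted "yes" stays blocked, holding its locks, until the coordinator tells it to commit or abort:

    class Participant:
        def __init__(self, name):
            self.name = name
            self.prepared = False

        def prepare(self, txn) -> bool:
            # Phase 1: write txn to a durable log, acquire locks, then vote.
            self.prepared = True
            return True  # vote "yes" (a real resource might vote "no")

        def commit(self, txn):
            # Phase 2: make the change permanent and release locks.
            self.prepared = False

        def abort(self, txn):
            self.prepared = False

    def two_phase_commit(coordinator_log, participants, txn):
        # Phase 1: ask everyone to prepare.
        votes = [p.prepare(txn) for p in participants]
        if all(votes):
            # If the coordinator crashes here, prepared participants stay
            # blocked holding locks until it recovers: the availability cost.
            coordinator_log.append(("commit", txn))
            for p in participants:
                p.commit(txn)
            return "committed"
        coordinator_log.append(("abort", txn))
        for p in participants:
            p.abort(txn)
        return "aborted"

    log = []
    print(two_phase_commit(log, [Participant("db"), Participant("queue")], "txn-1"))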
> Screen readers and many kinds of web accessibility are barely useable, all that... because we have forgotten about session cookies.
What did the author mean by this? In what way are accessibility and session cookies related?
My presumption would be a natively reliable back button, and navigation in general that doesn't end up accidentally losing your session.
> I do not know what was the reason to get rid of session cookies. Maybe the fear of GDPR violations
Just to clarify, GDPR has nothing to do with cookies. GDPR applies exactly the same whether you use cookies, JWTs in local or session storage, some magic session id tacked at the end of every URL or device fingerprinting.
I am not surprised. A lot of people conflate GDPR with the well-intentioned but misdirected cookie directive.
A lot of people blame the EU for “forcing cookie banners onto the web”, while the GDPR solely demands that you ask for consent before storing data that’s outside of your core functional needs to operate the app/website.
The UX of those dialogs is largely a dark pattern because the law did not demand implementation details, yet people blame GDPR because businesses designed them to be a nightmare to use. Meanwhile, people applaud App Tracking Transparency dialogs.
It’s ironic how the GDPR is painted as a villain.
> The UX of those dialogs is largely a dark pattern because the law did not demand implementation details, yet people blame GDPR because businesses designed them to be a nightmare to use. Meanwhile, people applaud App Tracking Transparency dialogs.
True, it does not mandate specific implementation details, but Recital 32 of the GDPR [1] demands that "request[s] must be clear, concise and not unnecessarily disruptive to the use of the service [...]", which is mostly not the case with dark-pattern implementations.
[1] https://gdpr.eu/Recital-32-Conditions-for-consent/
> The UX of those dialogs is largely a dark pattern because the law did not demand implementation details, yet people blame GDPR because businesses designed them to be a nightmare to use.
IMO the law was clear enough, as highlighted in the sibling comment. It is poor enforcement that's been the major issue. If a company registered in country A flouts GDPR, even in country B, there's nothing country B can do other than delegate to country A's data protection / privacy authority. If country A then drags its feet and takes no action, we arrive at the current situation.
The problem there was that the GDPR was dumbed down, by either the legislators behind it or the media, to "the cookie law" or "the cookie banner", but nobody, especially not the decision makers, seems to have actually looked into it. It's a huge cargo cult, in that people put up the same banners they saw the early adopters use and think that's complying with the regulations.
> Just to clarify, GDPR has nothing to do with cookies.
Not strictly true; they are highlighted as a potential source of PII. https://gdpr.eu/cookies/
But as others have pointed out, the law is extremely technology-agnostic. Sticking the same information in a JWT makes no difference either way.
We haven't forgotten anything - rather we were forced to remain silent.
https://blog.julik.nl/2024/03/those-people-who-say-no
I may do a writeup on how the mentioned things are very indicative of a certain org failure, since I've seen something very similar first hand.
These are the new rockstar twenty-something devs who don't know these things and invent new ways, probably.
JWT is one of the worst things that came out of the JS frontend ecosystem and spread everywhere. It made working with services so much more cumbersome in the name of security.
Then people started using it to carry contextual information, and now it’s pretty common to see ginormous tokens being sent out in every S2S call.
Also, everyone loves working with obnoxiously short token refresh time.
The number of times I have had to remind a developer that the contents of a JWT are readable is nuts. Way too often they want to treat it like a general session store and just toss any old state in there, regardless of whether the end user should be able to see the value.
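The point is easy to demonstrate: the payload is just base64url-encoded JSON, and no key is needed to read it. A sketch (the token is whatever your app hands out to the browser):

    import base64
    import json

    def jwt_payload(token: str) -> dict:
        """Decode the (unverified!) payload of a JWT. Anyone holding the token can do this."""
        payload_b64 = token.split(".")[1]
        # Re-add the base64 padding that JWTs strip off.
        payload_b64 += "=" * (-len(payload_b64) % 4)
        return json.loads(base64.urlsafe_b64decode(payload_b64))

    # Anything tossed in there (internal flags, prices, feature gates...)
    # is plainly visible to the end user:
    # print(jwt_payload(some_token_from_the_browser))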
The contents of a session cookie are also readable...
I think he's talking about a server-side session store (or perhaps an encrypted cookie payload)
Idea for your next article: progressive enhancement. I can't wait for people to discover it again.
Hah.
Back in 2011 my team built a news website with adaptive delivery. It loaded a small HTML page with a JavaScript snippet that checked the screen size and user agent, then, based on whether the user was on a phone, a tablet or a desktop, downloaded and displayed the content crafted for that particular device. It then left a cookie to avoid the extra round-trip for returning visitors.
Nowadays people tend to adapt the design to devices with CSS frameworks and flexbox layout, but this does not always reduce traffic and CPU time for low-powered devices.
While our engineering feat was adorable and I praised the team for the achievement, this architecture did not last. The editorial team did not want to maintain essentially three different content layouts daily, and the marketing team was not willing to compromise on ads on smaller screens.
No one was happy except the readers.
The browser session concept has "eroded" over time. Many, if not most, laptop users rarely close all browser windows, and hence sessions can last months. I haven't checked, but I wouldn't be surprised if session cookies survive a browser update started from the menu.
It does. Google is a good example. I have fully shut my computer down for a while, restarted it, and the session is still there. It is amazing just how long they can last if you use them regularly.
I believe that Google login cookies are persistent cookies with a longish Max-Age/Expires value, so that is not surprising, and hence why you have to use Google periodically to "refresh" the cookies.
Here is what MDN[1] says about session cookies:
> Session cookies — cookies without a Max-Age or Expires attribute – are deleted when the current session ends. The browser defines when the "current session" ends, and some browsers use session restoring when restarting. This can cause session cookies to last indefinitely.
I was unaware of the "session restoring" point. It has to be considered if we want to "unforget" the use of session cookies.
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies
> Screen readers and many kinds of web accessibility are barely useable, all that... because we have forgotten about session cookies.
No I feel like screen readers and many kinds of web accessibility are barely useable because we forgot about screen readers and web accessibility.
> I do not know what was the reason to get rid of session cookies. Maybe the fear of GDPR violations or
Aren't regular session cookies already considered Strictly Necessary For Functionality?
Yes they are!
I think I know the reason: OAuth2 naming. In the OAuth RFC https://www.rfc-editor.org/rfc/rfc6749 they named one of the roles "client", but it is meant to represent a server in the more standard flow, while they named the browser "user-agent". Then people understood "client" to mean the browser, so they went nuts storing access tokens and refresh tokens in the browser's local storage, instead of storing them in the server-side session storage, accessible with a session cookie.
So, what people should have done: https://www.rfc-editor.org/rfc/rfc6749#section-4.1 (you can see there that the tokens never reach the user agent, so the server can keep them in a session and identify the user agent with a cookie). And what most people did: https://www.rfc-editor.org/rfc/rfc6749#section-4.2
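A rough sketch of that section 4.1 shape, where the tokens live in the server-side session and the browser only ever carries a session cookie. This assumes Flask plus requests and a server-side session backend (e.g. Flask-Session); the authorization-server URLs and client credentials are placeholders, not anything prescribed by the RFC:

    import secrets
    import requests
    from flask import Flask, redirect, request, session

    app = Flask(__name__)
    app.secret_key = "replace-me"  # signs the session cookie

    AUTH_URL = "https://auth.example.com/authorize"      # placeholder endpoints
    TOKEN_URL = "https://auth.example.com/token"
    CLIENT_ID, CLIENT_SECRET = "my-client", "my-secret"  # placeholder credentials
    REDIRECT_URI = "https://app.example.com/callback"

    @app.route("/login")
    def login():
        session["oauth_state"] = secrets.token_urlsafe(16)
        return redirect(
            f"{AUTH_URL}?response_type=code&client_id={CLIENT_ID}"
            f"&redirect_uri={REDIRECT_URI}&state={session['oauth_state']}"
        )

    @app.route("/callback")
    def callback():
        if request.args.get("state") != session.get("oauth_state"):
            return "state mismatch", 400
        tokens = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": request.args["code"],
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        }).json()
        # Access/refresh tokens stay server-side, keyed by the session;
        # the user agent only ever sees its session cookie.
        session["access_token"] = tokens["access_token"]
        session["refresh_token"] = tokens.get("refresh_token")
        return redirect("/")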
This is a very useful piece of information.
According to the saying "There are two hard problems in Computer Science: naming things and cache invalidation"...
OAuth 2.0 editors failed to properly name things.
But there are many times when the client _is_ the browser. Are you sure you're not confused by that?
In the RFC, the browser is named "user-agent". And in the OAuth2 flows, the browser acts as the client only in the implicit flow. Also, the authors' intent for the implicit flow was that the "client" would be a mobile/desktop application, not necessarily something running in a browser.
Similar in spirit: https://rakhim.org/its-all-just-fashion-shows/
Companies have outsourced their thinking to Amazon Web Services.
Software developers and architects when faced with a problem, challenge, question about how to do things, simply look up the answer in Amazon Web Services documentation.
I think the author is not aware that most modern apps don't interface with traditional files. Many modern apps rather use S3 with presigned URLs.
Maybe, but the end user of a browser-based application (which is what this post feels like it references) still deals with regular files served over HTTP. The source may be S3 in the end, but ultimately it's web servers and files, be they self-hosted or one of the major content delivery networks' servers.
And that's absolutely fine: it's simple, highly optimized, standardized, etc. S3 blobs aren't, and except for very specific use cases I wouldn't use them for serving files to end users.
I think you've thrown around the words 'most' and 'many' a little too freely.
Not everyone moves all their applications, files, kitchen sinks and glasses to the cloud. There are still tons of traditional systems out there, and many of these organizations have only a small requirement for local object storage systems.
... in the corps where HN crowds hang out; in the companies we consult at (very large to very small), things are very different: from NFS and NAS/SAN to just the filesystem. S3 and the like are still rare, I would say, where we go.
The author is wrong, but explaining why would take a whole blog post of its own.
No, cookies are not gone because of GDPR.
You can have a JWT in a cookie. Keeping it in local storage is bad practice; it should be in an HttpOnly cookie.
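A minimal sketch of that last point with Flask (the cookie name and the token-minting helper are placeholders): HttpOnly keeps the token out of reach of page JavaScript, while Secure and SameSite limit where it gets sent.

    from flask import Flask, make_response

    app = Flask(__name__)

    def issue_jwt_for_current_user() -> str:
        # Placeholder: however your app actually mints and signs the JWT.
        return "header.payload.signature"

    @app.route("/login", methods=["POST"])
    def login():
        resp = make_response({"ok": True})
        resp.set_cookie(
            "access_token", issue_jwt_for_current_user(),
            httponly=True,   # not readable from page JavaScript
            secure=True,     # only sent over HTTPS
            samesite="Lax",  # withheld on most cross-site requests
            max_age=3600,
        )
        return resp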
> application server that synchronized state among instances
Huge sessions that move around frequently (even if the original developers understand this and are disciplined enough, the juniors who continue development of the app are not, so the session size grows). This eats all the available memory and is slow. So it's better to go, if not completely stateless, then at least to restrict the session to a viable minimum.
Seemingly we have also forgotten the pain brought by session cookies. Applications relying on session cookies typically broke when users opened several tabs and switched between them, used the browser 'back' button, or accessed the system from two devices simultaneously. It was difficult and required a lot of discipline to write a good application with session cookies, at least when you used them for more state than just authentication data.
I for one am very happy that we have passed the age of session cookies. That doesn't mean that everything is perfect now, but applications generally work better than they used to do before the JS+API pattern.
A major part of the PHP crowd would beg to differ; it was always pretty trivial for both state and auth.
I think you're conflating something here. Whether you send a session ID in a cookie or a JWT makes no difference for the app's general behavior, even when you use multiple tabs or multiple devices.
But I remember a time when especially bank websites added an additional token (like a super strict CSRF token) to their app, which tracked the current page of a user and if you browsed the same website from another tab, this other tab didn't have the proper token and the whole thing just returned "invalid user action" or something like that.
However, this has nothing to do with session cookies.
Typically, in the Weblogic days, session cookies were used to hold a server-side session containing the app state. If you just hold auth data in the session this is not a problem. But if you hold state like form data in the session, it becomes a huge source of errors. Virtually all non-trivial web-based applications had these issues 20 years ago (before „Ajax"). J2EE servers like Weblogic even supported stateful EJBs, which brought server-side state to a new (insane) level.
While you could theoretically use JWTs for the same purpose, they are typically only used for authentication. And back then JWT wasn’t a thing.
> Whether you send a session ID in a cookie or a JWT makes no difference for the app's general behavior
It does make a difference. The cookie is sent by the browser to the server; the JWT is sent in the Authorization header by the JavaScript code executed by the browser.
Using an opaque JWT wrapped in a cookie is OK. Using a JWT in the Authorization header is not.
We try new things, keep what is useful, and discard what is not. That is normal; why complain? Just do what is best for your problem.
They complain because they discarded what is useful, breaking solved problems and needing to spend a lot of resources on working around that instead of solving the actual problem.
But this is a recurring problem in software development, people spend all their energy on setup and innovation, and relatively little on the actual value.