Bah, that old thing again. FWIW the demo scene 'coders' call themselves that with pride, and those are often also brilliant 'programmers' and 'software engineers'.
Coder, programmer, software engineer call yourself whatever you want, the point is to make a computer do what you want it to do.
Normally I agree, but this is a Lamport talk. His stuff is pretty much always worth engaging with. I have learned more from his writings than nearly any other human in my life.
Some slides are just a series with text slowly revealed. If you merge them down, it's less than 1/5 of this number (the page count ends at 41, which should've been 42, since the ending slide is distinct from the previous one).
> Coder, programmer, software engineer call yourself whatever you want, the point is to make a computer do what you want it to do.
That is definitely not what software engineering is. It requires translating requirements and ensuring that the code is maintainable in the context of a team and the product's lifecycle.
Oliver Heaviside[1] was rejected when he attempted to join the Society of Telegraph Engineers because they said he was a clerk and not an engineer. Thing is, no one cares about them now and his achievements live on.
People protecting titles by putting an arbitrary barrier associated with possessing a piece of paper rather than actually having skill and knowledge should be treated with scorn in my opinion.
Requiring the sheet of paper is less about ensuring the person is qualified, and more about having something you can revoke if they act negligently. It turns out to be important some small percentage of the time that you can say "we will revoke your license if you aren't more careful in the future". And engineers can reject terrible suggestions from non-technical bosses by saying "I'd lose my license if I did that." That's its main value-add.
I've been to work social events where all the alcohol was provided by the company, but they still had to hire a bartender. You'd pick up a drink, hand it to the bartender, and they'd open it and give it back to you. It sure seems like a stupid, wasteful ceremony; that is, until someone is on their way to getting blackout drunk and the bartender can start refusing to open more drinks for them. They can even use the line "I could lose my bartending license if I keep serving you." The requirement for a licensed bartender was not to make sure someone knew how to open the drinks, it was to make sure someone had the authority and accountability for keeping things under control. (Not to mention making sure everyone was over 21.)
Requiring a stamp from a licensed professional seems pointless up until it prevents a big problem. I'm not opposed to requiring that a Licensed Software Engineer (or whatever) sign off on the software running on chemotherapy machines, as long as that licensing is all done in good faith.
Having an engineering diploma is not the same as having an engineering license though. In the US, 99% of engineers do not have a professional engineer license.
Maybe it's just about the ratio of scammers vs. honestly capable people.
You should also have mentioned that: "This riled Heaviside, who asked Thomson to sponsor him, and along with support of the society's president he was admitted 'despite the P.O. snobs'".
Thanks. I didn't actually know that - just knew that he had been rejected initially. The guy was a total beast of a mathematician so I have a lot of admiration for him. Reading that wiki page is fascinating. I knew about him because of vector calculus etc, but among a bunch of things the dude invented the coaxial cable and went nuts at the end of his life, painting his nails pink and using granite blocks as furniture while ranting against the theory of relativity. Really something else.
While I agree with the sentiments, there is a societal need for gatekeepers in some professions. The engineer title often comes with legal liabilities. It’s not necessarily just about talent. Of course it often becomes misused as well.
"While I agree with the sentiments, there is a societal need for gatekeepers in some professions. The engineer title often comes with legal liabilities. It’s not necessarily just about talent. Of course it often becomes misused as well."
I fixed this for you.
"I disagree with the sentiments, there is no societal need for gatekeepers in most professions. The engineer title rarly comes with legal liabilities. It’s not necessarily just about gatekeeping. Of course it often becomes misused as well."
> People protecting titles by putting an arbitrary barrier associated with possessing a piece of paper rather than actually having skill and knowledge should be treated with scorn in my opinion.
What’s your solution then? No attempt at providing professional standards at all?
Systems made by people will always be flawed. That is the reason for and criticism of certification and regulation.
I think professional standards around titles should only be in place where there’s some underlying public interest.
Like, you’re developing some land, and it turns out you need a geotechnical engineer. The engineer you hired is certified, their plans get put on file with government records office, and the contractors you hire are responsible for following those plans. Among other things, it creates a paperwork trail to figure out who gets the blame if your building collapses and takes out three neighboring apartment complexes with it.
I think we should have these kinds of rules for people who design bridges, medical equipment, and passenger airplanes. I don’t think we should have these rules for people who design hard drive controllers, compilers, and physics simulation software.
> some underlying public interest ... I think we should have these kinds of rules for people who design bridges, medical equipment, and passenger airplanes. I don’t think we should have these rules for people who design hard drive controllers, compilers, and physics simulation software.
You mean the hard drives that are storing inspection records for those bridges, and the compilers that were used to build that medical-equipment software, and the physics simulation software that was used to verify the required load strength of that passenger airplane?
It's up to the (licensed) people working in the high risk industry to account for the potential issues caused by any "consumer" level gear they choose to incorporate. If you require audited and certified simulation software then it's on you (the licensed professional) to choose to use that instead.
Verified programming languages and compilers for safety critical systems are a thing. Khronos even has some APIs for use in safety critical applications.
> People protecting titles by putting an arbitrary barrier associated with possessing a piece of paper rather than actually having skill and knowledge should be treated with scorn in my opinion.
Would you say the same about a medical degree? How do you judge skill and knowledge in medicine without a professional association?
Medicine has a lot of issues around this honestly. Obviously you don't want someone unqualified practicing but the flip side is that around the world various countries have been setting various types of quotas on who can become a doctor with serious negative effects to society (long wait times for medical care, expensive care, etc.). A system where qualifications and experience are well documented but without the quotas controlled by vested interests would be much more preferable.
Software engineering is lucky not to have the shackles that medicine had here. The door is more open. We still require people to prove their capabilities but we don’t allow organisations such as engineering or medical associations to add limits.
Medicine is even more weird with titles. At least in the UK, if you get qualified people call you “Doctor”, unless you’re a surgeon, in which case you are called “Mr/Ms/Mrs/Miss/etc”, unless you are a professor, which some consultants are, in which case it’s “Professor”. But if you work in medicine and call a consultant surgeon “Dr <whatever>” they will actually correct you.
Yes, the medical field has its share of bureaucracy that mainly serves as job gatekeeping. Just consider the cases where a small lateral move in the medical field means you have to go back to school to recredentialize in the thing you've already been doing.
It's very HN and self-aggrandizing to act like the software 99.9% of us are writing is so important that it needs gatekept credentials. And the important software that does exist already needs its own processes for quality and safety, not to depend on software engineers having some feel-good credentials.
If it's status you want, then just get a higher-status tech job.
I think we should stop glorifying Heaviside for his short-sighted view of quaternions for the EM field, which he considered an unnecessary "evil" [1]. He then developed a crippled version of the unintuitive vector calculus, a mathematical hack that some people still consider the gold standard for doing EM calculations.
IMHO, by doing this and campaigning against quaternions, he has hindered progress in EM for more than a century, since that's what is taught in textbooks, and most people, including EM engineers, don't care to look at what's available beyond the run-of-the-mill textbooks.
There's a very famous saying by Einstein, make it simple but not simpler, and in this case Heaviside's "Vectors Versus Quaternions" paper and related advocacy have caused more harm than good to the math, science and engineering of EM-based systems by making things simpler than simple (pardon the pun).
I also have a hypothesis that perhaps someone can research: if Einstein had been properly exposed to quaternion-based EM, he might have solved the general theory of relativity much earlier than the ten years it took him after the discovery of special relativity. The quaternion form of relativity was even presented by L. Silberstein in 1908, just three years after the discovery of special relativity [3].
It is a shame that even today the Wikipedia entries for both special relativity and general relativity apparently do not even mention quaternions (zero, nada, zilch), as if it were taboo to do so, perhaps partly thanks to Heaviside and his seemingly very successful propaganda, even after more than a century of progress in math, science and engineering.
I understand your point, but having an engineering degree is not just the possession of a certificate. It's a piece of paper that testifies that the holder has the skill and knowledge, and has passed exams designed by experts in the field.
The response given to Heaviside does suggest that snobbery was a more likely reason for refusing his membership, but that's just my impression.
Nobody cares in Germany, because nobody is calling themselves "Softwareingenieur". Everyone says "Softwareentwickler" aka software developer, or colloquially just "Entwickler"/developer.
As a fun bit of trivia: entwickeln also means to unravel, with ent=un and wickeln = to ravel.
That should put a lot of the arguments in this thread to rest. Employers aren't actually seeing any difference in the gatekept title and aren't hiring for that title in Germany.
This is true, as long as you are aware that if you have to state your occupation to an authority, you better be an engineer if you tell them that you're a software engineer. Or if your employer requires this, but they'd check the documents anyway.
So you'd better err on the side of being a software developer, because you'd always be telling the truth (and hope your subtly arrogant German engineer colleagues don't pick on you).
I didn't study Engineering. I work in Germany and all my software positions said literally "Engineer" in the contract (both in German and English version). Maybe if it is in English it's not illegal who knows.
The real problem is that coding is more like being a great musician than a traditional engineer.
It's like telling Eric Clapton, Jimi Hendrix, George Benson, or Wes Montgomery that they're not qualified to teach at a music conservatory because they lack formal diplomas. Sure, technically they're not "certified," but put them next to your average conservatory professor and they'll play circles around 99.99% of them.
Same goes for brilliant coders versus formal engineers.
Trying to think of the coding analogues to those folks.
The only ones I can think of are folks who are self-taught, e.g., Bill Gates, but he famously noted that he had read the three (then extant) volumes of Knuth's _The Art of Computer Programming_, sometimes taking an entire day to contemplate a single page/idea/exercise (and doing all the exercises). He then went on to say that anyone who had read all three volumes (with the unspoken caveat of "also done all the exercises") should send him a résumé.
My best friend in high school was a self-taught guitarist, incredibly talented, he also had perfect pitch, and a wonderful insight into what a given piece of music was expressing.
All? of the original self-taught programmers were in an academic context, no? Even those who aren't have either read or researched a great deal, or they have laboriously re-created algorithms and coding concepts from first principles.
Maybe this is just another example of how society has not yet mastered the concept of education?
>There are masters of a thing, and then there are masters of a thing.
Where the difference is, some folks can do, but can't teach/communicate what they do or how they do it, while other folks have both mastered a skill, and are able to raise others up in how to also learn that skill.
My guitar-playing friend also would wear out cassette tapes and records playing back specific sequences working out how they had been played, and perfecting his technique....
In this case they all refer to building the same thing, but at different steps in the process. Theoretically each step could be carried out by a different person, but in modern practice it would be quite unusual to do so – which is why they can be, and are, used interchangeably. If someone is doing one of them it is almost certain they are doing all of them.
I recently attended ICWMM, a conference of people who model stormwater for a living.
Many of the people there technically program for a living, because some of their models rely on Python scripts.
They are not software engineers. These aren't programs designed around security issues, or scalability issues, maintainability issues, or even following processes like pull requests.
>They are not software engineers. These aren't programs designed around security issues, or scalability issues, maintainability issues, or even following processes like pull requests.
That's been the obvious problem for a while now.
These are the people whom software engineers need to be cheerfully working for.
Any less-effective hierarchy and they'll just have to continue making progress on their own terms.
> These are the people whom software engineers need to be cheerfully working for.
Nope: We are equals with very different goals.
I (a software engineer) happen to work on a product, with customers. Modeling stormwater is part of the product. Within my company, the people who do modeling are roughly equal to me.
In the past, I've built products for customers where the products do not provide programming / scripting tools.
> That's been the obvious problem for a while now.
A script that someone runs on their desktop to achieve a narrowly-defined task, such as running a model, does not need to handle the same concerns that I, as a software engineer, need to handle. To be quite honest, it could be awful spaghetti code, but if no one else will ever touch it once the model is complete, there is no need for things like code reviews, etc.
I think it would do most developers a lot of good if they were to pretend like they were a servant class to the customer & team for a period of time.
They will hopefully learn that a contest of egos is not enjoyable at scale over long timeframes. A happy customer provides a much bigger dopamine rush than getting some smart-ass jab in on your coworkers.
A job done right and seeing the recipient legitimately happy about what's been done for them should be the #1 goal of a person who identifies as being a fucking genius with computers. Most people suck at this and you should be helping them out, not competing with them in some bullshit hierarchy that only exists in your own head.
This is very reductive of scientific computing. Maybe that conference is particularly narrowly focused on certain applications but there's plenty of people developing "real" programs (not python scripts) in academia/research labs, and you usually see those in conferences.
Look for instance at multi-purpose solvers like FreeFEM or FEniCS, or the distributed parallelism linalg libraries like PETSc or MUMPS, or more recent projects like OCCA. These are not small projects, and they care a lot about scalability and maintainability, as well as some of them being pretty "CS-y".
Computer science isn't "science with a computer". It's not about calculations and simulations.
Computer science is the science of computation. It's more to do with theory of computation, information theory, data structures, algorithms, cryptography, databases.
"Science with a computer" is experimental physics, chemisty, biology, etc.
"Computer science" (in contrast to software engineering) is the science that looks into the properties of computers and thing related to it (algorithms, database systems). E.g. examining those systems on a scientific level. Depending on how abstract that can have more or less direct input on how to engineer systems in their software/hardware implementations.
"Science with a computer" on the other hand are usually (in terms of majors/academic disciplines) taught in the adjecent subfields: Bioinformatics, Chemoinformatics, Computational biology, etc.. Of course basic computing is also on some level seeping into every science subfield.
I laughed at this. Having a lovely fixed and unmovable perspective on many pursuits is so engineering. Coding, programming, etc, shapes our communications skills and our outlook in interesting ways. Dating truly benefits as well.
You clearly have not watched the talk. His point isn't that "coders" aren't "programmers", but that coding — i.e. the act of writing code, i.e. text in a programming language, i.e. text that can be interpreted and run by a computer — isn't the same as programming, which is the act of efficiently producing software that does what's expected of it. He attempts to show why only writing code is not the best way to produce software that does what's expected of it. You can agree or disagree, but it has nothing to do with how anyone calls themselves.
Those are idiosyncratic definitions that are not even remotely common or shared. If people misunderstand your message because you chose your words poorly, that's your fault.
It also feels like something of a straw man -- in reality, you have junior programmers/coders and senior programmers/coders and they get better over time.
In real life, they aren't two distinct activities. People write code to get things done, and as they get better they are able to write code that is faster, more elegant, or more scalable. But premature optimization is also the root of all evil, as one saying goes -- plenty of code doesn't need to be faster, more elegant, or more scalable. It just has to work and meet a deadline. And there's nothing wrong with that.
The point isn't better or faster, at least in and of itself. It's the higher level approach to system design. The balancing of competing goals, including things like less error prone or less effort to maintain.
You speak of programmers getting better over time. The point is to break that improvement down into distinct categories.
Of course they're idiosyncratic definitions. He's attempting to make a distinction which he feels is useful. He needs words to communicate that distinction.
> Of course they're idiosyncratic definitions... He needs words to communicate that distinction.
Generally people invent new terms by combining words, or call them programmers of "type 1" and "type 2" or something.
Using existing term that are synonyms, and inventing a distinction between them, is unusual, unhelpful, and confusing.
It's like me saying that sofas are always better than couches because sofas are designed with comfort in mind while couches are intended to maximize manufacturer profit. Huh?
I’d argue this whole discussion is a flaw of the English language. Quoting Pratchett “English doesn’t borrow from other languages. English follows other languages down dark alleys, knocks them over and goes through their pockets for loose grammar.”
English has a lot of words from different origins that are more or less synonyms, and one of the most common time-wasting behaviours I've seen is people endlessly trying to categorize those words in endless debate.
The web is full of endless slideshows with titles such as ‘the difference between management and leadership’ or some such. One’s a Latin word and the other’s Germanic, for basically the same concept, and since English has both (it steals from every other language without care) you’ll find a million slideshows people have created on the differences between the words. This whole thread is yet another example of this behaviour, and if you’re aware of it you’ll very quickly tire of every fucking ‘the difference between [English loanword from Latin] and [English loanword from German]’ thread you’ll see.
“The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary.”
English doesn’t have a lot of duplicate words because of some aggressive vocabulary-stealing nature. It has a lot of duplicate words for the same reason modern Nahuatl has a lot of loanwords from Spanish: England was colonized and ruled for centuries by non-English-speaking people.
These fine-grained attempts at parsing the words to mean things other than how they are commonly used don't serve any purpose. Yes, of course the act of physically typing in characters in a programming language is not exactly the same thing as thinking about the algorithms you want the computer to do, but so what? It's trivial and it doesn't matter, and specifically using language to highlight that difference is pointless. To use an analogy, people will often say that "so and so website says X", but the website didn't actually say anything; it can't talk. What they mean is they read so and so text on a website. But we all know what they mean, and it's annoying and pointless to jump in to correct the language there. Similarly, it's annoying and pointless to pedantically argue that "well actually that's not programming, that's coding".
> These fine-grained attempts at parsing the words to mean things other than how they are commonly used don't serve any purpose.
Except we're talking about a talk title. Lamport explains what he means in the talk. What I responded to was a comment on the content based entirely on the title.
> It's trivial and it doesn't matter, and specifically using language to highlight that difference is pointless.
Sure, and that is precisely Lamport's point. You really need to watch the talk. He shows how abstract algorithms cannot be fully described in any language that is intended to be executed by a computer.
> Similarly, it's annoying and pointless to pedantically argue that "well actually that's not programming, that's coding".
And Lamport is not doing that. You're arguing over a pithy title to a rather deep talk.
>These fine-grained attempts at parsing the words to mean things other than how they are commonly used don't serve any purpose.
It's meant to differentiate human "programmers" from AI "coders". Ever since LLMs showed up there has been a noticeable urge to redefine the value proposition of software work to be less about what LLMs can do (write code) and more about what humans can do (write programs).
> Ever since LLMs showed up there has been a noticeable urge to redefine the value proposition of software work to be less about what LLMs can do (write code) and more about what humans can do (write programs).
We’ve been having this debate for my entire career (since at least 2010). The suits have always misunderstood what programmers/engineers do and we’ve always pushed back explaining that it really is closer to city planning than to brick laying.
The agile manifesto (2001) was in large part a response to these pressures to think of programmers as the people who “just implement” what the smart suits have already figured all out.
DateTime.Now() is a perfectly valid thing to write while coding. Unless you are in a distributed system, where 'now' doesn't exist, so all source code using 'DateTime.Now()' is automatically suspect. How do you know if you're in a distributed system? That's a programming question, not a coding question. And from a lot of the microservice push-back you get here on HN ("just use a single DB instance - then your data is always consistent"), a lot of devs don't realise they're in a distributed system.
"Backtracking", "idempotent", "invariant", "transaction", "ACID", "CAP", "secure", "race condition", "CRDT", are all terms that exist at a programming level, but they don't show up in the source code. A single effectful statement in the source code can be enough to break any of these quoted programming terms.
At my org they are trying to move away from programmer/analyst to software engineer. I told my manager: call me whatever you want, just don't call me late to dinner.
As long as it never gets to "grokker" (I've never heard the term "grokker", just saying). I cannot stand the term "grok". I don't know why, but it just grates on me.
Imagine if people started calling constants functions, programs methods, etc. Words have meanings. "Coder" does imply that the key thing those people do is produce code, the same way cows make milk. Imagine calling an accountant a typist because the point is to make the computer do what you want it to do.
The point was that coders are often valued by the amount of code they produce, in the same way cows are valued by the amount of milk they produce. I feel the usage of "coder" promotes this mentality.
> Coder, programmer, software engineer call yourself whatever you want, the point is to make a computer do what you want it to do.
The distinction is important because how you make the computer do things is important in many cases. At one end of the spectrum, you're wasting resources if you have a software engineer working on a program that will be used for 5 years. At the other end of the spectrum, you're setting yourself up for pain if the program is going to be in use for 30 years (or is safety critical, or must produce verifiable results, or ...).
This isn't to say that the "hierarchy" should reflect some sort of software developer social class. It's just different skill-sets for different purposes.
When I made this decision 20 years ago I landed on Software Engineer because people calling themselves that generally earned more than people calling themselves programmer, coder or developer.
This hierarchy doesn’t actually exist in real life. Software engineer, programmer, “member of technical staff”, etc. are just all used interchangeably, at least in the US.
I see the keynote next year: 'vibing isn't coding isn't programming isn't ...; a pyramid of shite that sometimes works a bit'. I am happy Dijkstra doesn't have to see this; he was angry in the 80s in my parents' living room; I cannot even imagine what he would be with 'vibe coding'.
"Vibe coding" seems like one of those things where there's way more people complaining about the concept than there ever were people actually promoting it in the first place.
Yes, there are results for people saying "vibe coding" on HN. I don't feel like reading the ~500 or so mentions of it to tally which ones are people promoting it vs. which are people complaining about it. Do you have a summary of your results?
The first page seems mostly to be people complaining.
> ... seems like one of those things where there's way more people complaining ... than there ever were people actually promoting it in the first place.
The entire social and political discourse since around 2016.
My father was a commercial guy (and a good programmer) who wanted to sell development services to companies. Dijkstra didn't really like this idea, especially if it was done in a way where badly educated (...) people would write code that wasn't proven correct and it would be distributed to the clients, making the world worse.
> wanted to sell development services to companies
So he wanted to start a contracting service? Contractors are famous for writing piles of untested, spaghetti code that only kind of works but they've finished it by then and the contract is over. Then some poor sap gets stuck trying to maintain it. Probably one of the worse ways businesses acquire custom software.
At least if you’re programming high (or drunk) you get a different kind of fun and the next day you know you have to recheck your work. The one thing that bothers me about LLM coding¹ is that the vast majority of people doing it don’t understand the issues. Those who do check their work and believe everyone else also does are immensely naive and will one day be severely bitten² by bad unreviewed code written by an LLM.
¹ Not calling it “vibe coding” to not go into pedantic definition discussions.
Is it, though? What does the term mean? Are we talking to our computers, not reading anything, and just judging the non-coder end result? Or are you just using it to help coding? Vibe coding would be the former, and while it sometimes works, it doesn't in many cases, including LoB tools if the backoffice is large and complex enough. Unless you have some interesting tooling to help out, in which case it is possible, but that's not really vibe coding at the moment.
I now have 5 "vibe coded" programs in our production chain at my non-tech company. Some have been running for over two years now.
I think what tech companies miss is that customers buy these expensive feature-packed software suites and then use 3% of the functionality. Cutting-edge LLMs, and especially the newer thinking ones, are incredibly talented at creating programs that offer 3% of the functionality of software suites.
Just a few weeks ago we skipped out on a $1k/mo CAD package because Claude wrote a program with a simple GUI that enabled our engineers to do the one task we needed the custom CAD software for (manually creating part placement files from 90s-era manufacturing files).
>Is it, though? What does the term mean? Are we talking to our computers, not reading anything, and just judging the non-coder end result? Or are you just using it to help coding? Vibe coding would be the former, and while it sometimes works, it doesn't in many cases, including LoB tools if the backoffice is large and complex enough. Unless you have some interesting tooling to help out, in which case it is possible, but that's not really vibe coding at the moment.
I spent 10 years of my life as an FE with a total obsession for the field, and took huge pride in my hand tuned HTML/CSS interfaces and well architected JS. I was "the UI guy" that anyone would come to with questions, as it was my expertise and what I loved to do all day.
Now I'll never write a line of that stuff ever again as long as I live.
I do not write code anymore. Everything goes through an agent. Occasionally I will flip a config value here or there, or clean up some leftover logs or whatever, but I don't type code at all anymore. My job is now to guide the agent and review its PRs. Yet, I now easily have the productivity of an entire team from a few years ago.
You can keep your head in the sand over this stuff for a little while longer maybe, but you will be completely obsolete if you don't adapt soon.
>"Yeah not every developer does html templates for wordpress. Of course if your job is extremely simple and generic, AI can kinda figure it out."
Keep telling yourself this. It's only hurting you.
I think anyone who isn't working heavily with agentic systems right now has absolutely no clue about the revolutionary things that have happened in the last six months or so. Our industry is going to be in a completely different universe this time next year.
"Past performance does not guarantee future results"
You can't extrapolate from the last 6 months, especially with technologies this new. There's no Moore's Law of LLMs. Context windows are still tiny and the codebases that actually run our world are still huge.
This time next year I expect you're going to be disappointed.
Let me rephrase that. This time next year I expect that I will be disappointed but you will still be predicting a completely different universe... next year.
You're obviously getting a lot of pushback, do you think people here haven't used these tools?
Yes it's great at React/Frontend. Because that is mainly boilerplate. Hooking up click handlers and validators and submit buttons and fetch calls is boilerplate. It's just busy work. There can be complicated FE work, like making something like google maps, but how many of us ever need to do that? Most of the time if you did need to do something complicated, you get a library to do it ALL for you. Charts, maps, etc.
If you've worked on more niche languages/tools/problems you start seeing behind the curtain. The trick. It's essentially just copying someone else's code and replacing the variables. I've inherited some Robot Framework tests. I tried using it to fix a problem; the code it spat out (v1) was nonsense. I told it. It spat out some different code, v2. It was still wrong. It then spat out v1 again! It seemed to only have these two "templates" to use. No matter how I framed the questions: v1, v2, v1, v2, v1, v2. They didn't even compile.
Hence why it's great at React, there are millions of examples for it to combine and refine.
I'd love AI to take away a lot of boring coding. But in my honest experience anything much longer than 100/200 lines and the AI starts screwing it up. It can't work on a full codebase. It even does incredibly irritating things like remove code that's needed, forgetting code changes it's already made, and worse still deleting code updates or fixes you've made.
Maybe... Why are you so sure? It's incredible what these models do, but right now it's all incremental, and it's pretty clear LLMs are as bad at complexity as humans. If a system is of realistic business-system size, the LLM will fuck things up most of the time unless you are a good coder with patience. It saves a lot of time, but it saves a lot of time on stuff that is 'just typing'; that is more than many programmers in this world can claim, but that's really another case. You are not doing anything that requires more than hello-world context, because beyond that LLMs really break down.
I am aware, I just don't see this. I too write a lot of code like this, but the LLMs cannot figure out many things. If you are on that track, it will sometimes just loop forever. But sure, most humans are pretty bad too, so many fail at the same things the LLMs do, and then the LLMs are a lot cheaper.
Most of the time, yes. But there are certain times where I prompt an AI to build X, and it delivers a seriously neat project. It's not all of the time, though.
It has only happened like twice where I saw the full power of LLMs. I don't understand it. It's like there's a switch to dumb these things down, but if you catch them at the right time, they will blow your mind. But yes, most times, they are horrible.
Frontend is a low bar for anyone and is the exact job which will be wiped out by AI. You are trapped in your own skillset prison that you don't understand what more complex engineering looks like.
My experience is that around 200-300 lines, the LLM is no longer able to understand the context of its own work. At that point, it also usually contains a significant mistake.
And around that scale, you’ll get much better code if you refactor it yourself and clean up its logic (which tends to be verbose, at best).
Someday, it will be better; today, none of them (Grok, ChatGPT, Claude) are good enough for what you describe.
Funny, while getting high would make me creative in the past (drawing), I could not do anything hard like writing logic. I also used to think my dumb ideas were good, so yeah, I don't do it anymore.
Why do some people use such abominations like "shite" or "shoot" instead of just using "shit"? C'mon lads, everyone knows what you want to say, why not just say it?
“Shite” isn’t a minced oath. It’s a different word to “shit” with a different etymology. Same as “feck” is a different word to “fuck”, even if they are used in the same way.
Some people don't live in the US, woooow... In the UK, and especially certain parts of it, shite is the actual word to use. Shite is definitely not the same as gosh or shoot. It's as vulgar as shit (or more so), but our own.
I personally also don't like gosh or shoot as we all know what you mean, but this is not that case, look it up.
Leslie Lamport's SCaLE 22x Keynote: Think, Abstract, Then Code
Lamport argued for a fundamental shift in programming approach, emphasizing thinking and abstraction before coding, applicable to all non-trivial code, not just concurrent systems.
Abstraction First: Define an abstract view (the "what" and "how") of your program before writing code. This high-level design clarifies logic and catches errors early. Focus on ideas, not specific languages.
Algorithms != Programs: Algorithms are abstract concepts; programs are concrete implementations.
Executions as States: Model program executions as sequences of states, where each state's future depends only on its present. This simplifies reasoning, especially for concurrency.
Invariants are Crucial: Identify invariants—properties true for all states of all executions. Understanding these is essential for correctness.
Precise Specifications Matter: Many library programs lack clear specifications, hindering correct usage, particularly with concurrency. Precise, language-independent descriptions of functionality are necessary.
Writing is Thinking: Writing forces clarity and exposes sloppy thinking. Thinking improves writing. It's a virtuous cycle.
Learn to Abstract: Abstraction is a core skill, central to mathematics. Programmers need to develop this ability.
AI for Abstraction? The question was raised whether AI models could be used for the abstract thought processes in programming.
The core message: Programming should be a deliberate process of thoughtful design (abstraction) followed by implementation (coding), with a strong emphasis on clear specifications and understanding program behavior through state sequences and invariants. Thinking is always better than not thinking.
With all due respect to Dr. Lamport, whose work I greatly admire, I take issue with his example of the max function. The problem he sets up is “what to do with an empty sequence.” He proceeds to handwave the integers and suggest that negative infinity is the right value. But in realistic use cases for max, I want to assume that it produces a finite number, and if the inputs are integral, I expect an integer to pop out.
Apologies to Lamport if he addresses this later; this is where I quit watching.
There are basically two ways out of this, that I’m aware of: either return an error, or make the initial value of the reduction explicit in the function signature, i.e.:
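Something like this, in Python (a rough sketch; the names are just illustrative):

    from typing import Iterable, Optional

    # Option 1: make the empty case an explicit absence/error in the return type.
    def max_or_none(xs: Iterable[int]) -> Optional[int]:
        best: Optional[int] = None
        for x in xs:
            if best is None or x > best:
                best = x
        return best  # None means "no maximum exists"

    # Option 2: make the initial value of the reduction explicit in the
    # signature, so the caller decides what the empty case means.
    def max_with_initial(xs: Iterable[int], initial: int) -> int:
        best = initial
        for x in xs:
            if x > best:
                best = x
        return best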
I had the same thought skimming the PDF, but he does address it. One of his suggestions is to use an error value as the point at infinity. So I suppose the point is the coding looks the same, but you're taking a more principled view of what you're doing for reasoning purposes.
> But in realistic use cases for max, I want to assume that it produces a finite number, and if the inputs are integral, I expect an integer to pop out.
But that's his point. You're talking about a concrete concern about code, whereas he says that first you should think in abstractions. Negative infinity helps you, in this case, to think in abstractions first.
Algorithm vs. Program: An algorithm is a high-level, abstract concept implementable in various programming languages. We should focus on the ideas, not specific languages.
Concurrency Challenges: Concurrent programs are hard due to thread interleaving, leading to numerous possible executions. Debugging is unreliable, and changes can expose hidden bugs.
Abstraction is Key: Find an abstract view of the program, especially for concurrency, describing how threads synchronize. This higher-level thinking is crucial before coding.
All programs: any piece of code that requires thinking before you write code.
What and How: For most programs, write both "what" the program does and "how" it does it. Precise languages (like Lamport's TLA+) are vital for verifying concurrent programs but not essential for all.
Trivial Example: A simple "max element in array" function illustrates how abstraction simplifies even trivial problems. It revealed a bug in the initial "what" (handling empty arrays) and led to a more robust solution using mathematical concepts (minus infinity); see the short sketch after this summary.
Executions as State Sequences: View program executions as sequences of states, where the next state depends solely on the current one. This simplifies understanding.
Invariants: An invariant is a condition true for all states of all executions. Understanding invariants is crucial for proving program correctness.
Termination: The example program doesn't always terminate, showcasing the need to consider termination conditions.
Real-World TLA+ Usage: Amazon Web Services uses TLA+ to find design flaws before coding, saving time and money. The Rosetta mission's operating system, Virtuoso, used TLA+ for a cleaner architecture and 10x code reduction.
Abstraction Beyond Concurrency: Even when the "what" is imprecise (like a pretty-printer), abstraction helps by providing high-level rules, making debugging easier.
Thinking and Writing: Thinking requires writing. Writing helps you think better, and vice versa.
Learn Abstraction: Abstraction is central to math. Programmers should learn to think abstractly, possibly with guidance from mathematicians.
Why Programs Should Have Bugs: Library programs often lack precise descriptions, making them hard to use correctly. This is especially true for concurrent programs where input/output relations aren't sufficient.
Lamport emphasizes that programming should be about thoughtful design through abstraction before coding. He advocates viewing executions as state sequences, understanding invariants, and using writing to clarify thinking. He also highlights the importance of precise specifications, especially for library programs. He also touches on the issue of whether or not it is better to outsource abstract thinking to AI.
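Roughly, for the max example above, the state-sequence/invariant view can look like this (a Python sketch of my own, not code from the talk; float('-inf') stands in for Lamport's minus infinity, and the assert lines only state part of the invariant rather than being part of the algorithm):

    def max_of(a):
        # "What": x ends up as the smallest value that is >= every element of a;
        # for an empty a that value is minus infinity.
        x = float("-inf")
        for i, v in enumerate(a):
            # Invariant: at this point x is >= every element of a[0:i]
            # (and equals minus infinity when i == 0).
            assert all(x >= w for w in a[:i])
            if v > x:
                x = v
        # On exit the invariant covers the whole array.
        assert all(x >= w for w in a)
        return x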
An undergraduate mathematics professor of mine liked to use the word coding to refer to any act of transforming concepts into a more precise and machine readable form. Not just writing what you want a computer to do in a programming language, but also encoding data. The word "encode" should make this clear: we are turning something into code. Right afterwards, he drew some binary trees on a blackboard and asked us to come up with a coding scheme to turn these differently shaped binary trees into natural numbers. That means defining an injective function from the set of binary trees to the set of natural numbers.
Coding is too ambiguous a word and I don't really use it much.
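For what it's worth, one such injective coding scheme, sketched in Python (the Node class and the use of the Cantor pairing function are my choices, not necessarily what the professor had in mind):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def pair(a: int, b: int) -> int:
        # Cantor pairing: a bijection from pairs of naturals to naturals.
        return (a + b) * (a + b + 1) // 2 + b

    def encode(t: Optional[Node]) -> int:
        # Empty tree -> 0; a node -> 1 + pair(code of left, code of right).
        # Distinct shapes get distinct numbers, so the encoding is injective.
        if t is None:
            return 0
        return 1 + pair(encode(t.left), encode(t.right))

    print(encode(Node(left=Node())))   # 2
    print(encode(Node(right=Node())))  # 3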
Lamport argues that we should separate the "What?" from the "How?". I wonder, though: for most problems, don't the "What" and the "How" of a program somewhat merge?
For example, are performance considerations part of the "What", or of the "How"?
This is a huge mental error that people make over and over again. There is no absolute "what" and "how", they are just different levels of abstraction - with important inflection points along the way.
The highest level of abstraction is to simply increase happiness. Usually this is accomplished by trying to increase NPV. Let's do that by making and selling an accounting app. This continues to get more and more concrete down to decisions like "should I use a for loop or a while loop"
Clearly from the above example, "increase NPV" is not an important inflection point as it is undifferentiated and not concrete enough to make any difference. Similarly "for loop vs while loop" is likely not an inflection point either. Rather, there are important decisions along the way that make a given objective a success. Sometimes details can make a massive difference.
I could not agree more. When trying to understand the distinction between "declarative" and "non-declarative" code, I came to the same conclusion: it's all relative.
I think he argues that more thought should go into explicitly designing the behaviour of a software system ("What?"), independently from its implementation ("How?"). Explicitly designed program behaviour is an abstraction (an algorithm or a set of rules) separate from its implementation (code written in a specific language).
Even if a user cannot precisely explain what a program needs to do, programmers should still explicitly design its behaviour. The benefit, he argues, is that having this explicit design enables you to verify whether the implementation is actually correct. Real-world programs without such designs, he jokes, are by definition bug-free, because without a design you can't determine if certain behaviour is intentional or a bug.
Although I have no experience with TLA+ (which he designed for this purpose in the context of concurrency), this advice does ring true. I have mentored countless programmers, and I've observed that many (junior) programmers see program behaviour as an unintentional consequence of their code, rather than as deliberate choices made before or during programming. They often do not worry about "corner cases", accepting whichever behaviour emerges from their implementation.
Lamport says: no, all behaviour must be intentional. Furthermore, he argues that if you properly design the intended program behaviour, your implementation becomes much simpler. I fully agree!
The problem I have with this is that I haven't seen a precise definition of the difference between behavior and implementation. Another word that people use for behavior is 'specification'.
However, a spec that has been sufficiently formalized (so that it can be executed) is an implementation. Maybe an implementation with certain (desirable or undesirable) characteristics, but still an implementation.
Of course there are informal, incomplete specifications that can't be executed. Those also have value of course, but I'd argue that writing those isn't programming.
Separating the "What" from the "How" is a pretty old idea originating from Prolog, and yes you can do that. Erlang for example, in which many of Lamport's ideas are implemented, is a variation of Prolog.
Prolog by itself hasn't found any real-world applications as of today, because by ignoring the "How", performance suffers a lot, even though the algorithm might be correct.
That's the reason algorithms are implemented procedurally, and then some invariants of the program are proved on top of the procedural algorithm using a theorem prover like TLA+.
But in practice we implement procedural algorithms and we don't prove any property on top of them. It is not like the average programmer writes code for spaceships. The spaceship market is not significant enough to warrant that much additional effort.
Isn't that more or less what all abstraction is about, getting an interface that allows you to work with the "what" but not the "how". When I call file open, I get the "what", a file to work with, but not the "how", DMA access, Spinning rust, SSDs, could be an NFS in there who knows.
Yes, the "hows" have impacts on performance. When implementing or selecting the "how" those performance criteria flow down into "how" requirements. As far as I am concerned that is no difference from correctness requirements placed on the "how".
Coming from the other direction. If I am hard disk manufacturer I don't care about the "what". I only care about the "how" of getting storage and the disk IO interface implemented so that my disk is usable by the file system abstraction that the "what" cares about. I may not know the exact performance criteria for your "what", but more performance equals more "whats" satisfied and willing to use my how.
"not 2, not 1" is the old Zen abstraction for deeply interrelated, but separate concepts.
The What puts constraints on the How, while the How puts constraints on the What.
I would say the What is the spatial definition of the algorithm's potential data set, while the How is the temporal definition of the algorithm's potential execution path.
>Algorithms are not programs and should not be written in programming languages and can/should be simple, while programs have to be complex due to the need to execute quickly on potentially large datasets.
Specifically discussed in the context of concurrent programs executing on multiple CPUs due to order of execution differing.
Defining programs as:
>Any piece of code that requires thinking before coding
>Any piece of code to be used by someone who doesn't want to read the code
Apparently he has been giving this talk for a while:
Interesting that the solid example of reframing "find the largest item in a set" as "find the smallest value that is greater than or equal to every element", and starting the search with the value negative infinity, was exactly "Define Errors Out of Existence" from John Ousterhout's book:
> starting the search with the value negative infinity
But then there is no differentiation between the result of the empty set and a set containing only negative infinities.
I consider them separate conditions and would therefore make them separate return values.
That's why I would prefer returning a tuple of a bool hasMax and an int maxV, where maxV's int value is only meaningful if hasMax is true, meaning that the set is non-empty.
Another way of doing such would be passing in an int defMax that is returned for the empty set, but that runs into the same problem should the set's actual max value be that defMax.
Anyway, I typed it up differently in my other toplevel comment in this thread.
Abstractly dealing with a problem where there's already a negative infinity is no problem. e.g. you can just add another negative infinity that's even smaller. That negative infinity can also be your error value; concretely you might do that by writing a generic algorithm that operates on any Ordered type, and then provide an Ordering instance on Result[E,A] where an E is less than all A. `A` could be some extended number type, or another `Result[E,A']`, or whatever thing with its own `Ordering`.
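A rough Python sketch of that idea (the NegInf sentinel and the names are mine, just to show an "error value" that compares below everything, including an actual float('-inf') in the data):

    from functools import total_ordering

    @total_ordering
    class NegInf:
        # Sentinel/error value that compares strictly below every other value.
        def __eq__(self, other):
            return isinstance(other, NegInf)
        def __lt__(self, other):
            return not isinstance(other, NegInf)
        def __repr__(self):
            return "NegInf"

    def max_or_neginf(xs):
        # Empty input yields NegInf(); a set containing only float('-inf')
        # yields -inf, which stays distinguishable from the error value.
        best = NegInf()
        for x in xs:
            if best < x:
                best = x
        return best

    print(max_or_neginf([]))               # NegInf
    print(max_or_neginf([float("-inf")]))  # -inf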
I'm enjoying the irony of the comments section being primarily occupied by people who don't get the message while simultaneously being AI maximalists. Leslie Lamport's entire point is that developing abstract reasoning skills leads to better programs. Abstraction in the math and logical sense lets you do away with all the irrelevant tiny details, just as AI-guided software development is supposed to do.
What's sad, but to be expected with anything involving rigor, is how many people only read the title and developed an adverse reaction. The Hacker in Hacker News can be a skilled programmer who can find their way around things. Maybe now it's more of the hack as in "You're a Hack", meaning you're unskilled and produce low-quality results.
As I look around at the state of software development in the tech industry, I see a desperate need for more pedantry, not less. As a whole the industry seems bent on selling garbage that barely works, then gradually making it worse in pursuit of growth-at-all-costs. When nobody cares if their systems work in any meaningful way, of course they'll dismiss anything that hints at rigor and professionalism as pedantry.
Yes, I had to pump it through Gemini 3 times before I could really get what he was talking about. He spends the first three and a half minutes just starting to frame the subject, which I gleaned from reading the transcript. I wonder if this is just his academic background or a long career of billing by the hour. Probably both.
There is a good article in the current ACM about how we don't agree on what abstractions are; and in spite of that, they are very useful. Point being that we largely agree where the important points are. We don't really agree on exactly what they are, or why they are important.
In that vein, I think people will often talk about the same general topics with massive agreement that they are the important ones. All the while not actually agreeing on any real points.
You will find lots of inspiration. Which, I think, has its own value.
My favourite difference is that coding has the letters c and d (which in my opinion are inferior) whereas programming has p, r, g, a, and m (which in my opinion are underrated). I dislike that they both have o, i, n, and g. I wish this distinction was touched on as well.
Hacking isn't coding isn't programming isn't software development isn't software engineering. But in the end many people use these terms mostly interchangeably and making a point of the differences between the definitions you personally use is rarely a productive use of time.
I'm pretty sure I can't program without coding, and likewise, I'm pretty sure I can't code without programming. This is a lot like guitarists arguing about "playing" vs "noodling".
I think the distinction is evident in the words themselves. Coding creates code, programming creates programs.
If you write code but it doesn't create a program (i.e. you are just recording data) then you're coding but not programming.
Likewise, if you create a program without writing any code (for example, the way a pilot programs routes into his nav computer, or the way you can program a modern thermostat to have specific temperatures at specific times of the day) you're programming but not coding.
I don't want to comment on this without giving the source a fair go, but between the knee-jerk reaction that it's time-wasting pedantry and the reading directly off slides without providing any real value, I find it hard to even try to make myself give it a chance. I tried skipping to the questions first to see if there is anything more valuable, but it hasn't changed my mind. Would love to know if I'm missing something from people who find this valuable.
Oh! Oh! About 15+ years ago, I once wrote an article asking, “Are you a Programmer or a Coder?” after reading, and being inspired by, a newspaper article. It even became popular on HackerNews and elsewhere.
I got threats of many dire consequences and the like. I ignored them and I was OK. Just like I sometimes get threatening emails about my posts on HackerNews, which I ignore and am advised to ignore. ;-)
[I haven’t watched the video in the submission above.]
Reminds me of: "People think software development is just writing code. But it isn't just writing: it requires planning, experimentation, research, and consideration of style and structure."
Writers everywhere: "Excuse me?"
I'm tired of all these bullshit attempts to establish one word for doing it poorly and a different word for doing it well. It's a dumb idea that teaches nobody anything and does nothing to raise the level of professionalism in the industry. Let's take a common word that people use to describe what they do and declare it to be pejorative; that'll show them! I'm disappointed that somebody with such extraordinary theoretical achievements would lower himself to such empty corporate consultant level rhetorical bullshit.
I'm sure the talk is fine, but right now I feel like I don't need to listen to it.
I'm curious about the part where LL is talking about how to prove the 3 parts of the max() example's invariant (around 31:46) and omits #2. Did you all understand why this condition holds or are you now scared of AI replacing you, like me?
Typist still implies human involvement, whereas compilers supplanted the need for coding ages ago.
But when words are no longer useful tech people love to resurrect them into new uses, so, ultimately coding and programming have become equivalent terms.
This is so true, it took me a while to get used to it.
If you started in the punch-card days, Coders were well-established.
They were the ones who punched the cards, that was pretty much the proper job description.
They operated the keypunch machines in the way they saw fit to get the most out of the electronics in their own domain.
With coding pods, quality control routines, the whole nine yards.
By the 1970's it was recognized as a declining technology need, there were still thousands of Coders anyway because jobs were scarce, but fewer every year.
No matter what direction people may want to migrate the meaning, that's the real root across the computer industry, even if it has been almost forgotten through disuse.
I wanted to like this talk. I started to like this talk. But when he jumps from talking about a problem in clear language to mathematical language, I felt he'd gone badly wrong.
For me, there's no way "Set x to the smallest number >= to all elements of A" is clear to almost anyone in the world. And I was educated in formal logic at uni, taking an advanced, 3rd year course with like just 5 out of the 20,000+ odd students at a top UK uni. So a tiny percentage of a small percentage of the population have ever got exposed to that sort of logic.
The (massive) mistake he's made is thinking that logical language is clear and concise, and that changing the words somehow magically changes the meaning. It doesn't.
I'm sure it works for him. It would not work for most of us that need to communicate clearly with other people, including programmers who are not aware of the full meaning of that sentence.
It's not how most humans work, think or communicate.
Part of many programmers' jobs is to use language that is understandable. Using the specialist language of maths does not make the requirements any clearer for most people, just for mathematicians. Worse still, the sharp listeners among you will have noticed that, having completely rewritten his "what", he almost immediately said "most mathematicians agree that the smallest number...".
Notice the "most". So his rewritten "what" is not even right for all mathematicians.
He has written and spoken a number of times about the need for more mathematical (which I think would include logic and metalogic) education and thinking in CS.
If all programmers/ software engineers had a reasonable level of mathematical formalism (could understand and write proofs), and some amount of formal logic, that would be much better.
I think that is his position
Also, understand this is coming from one of the most prominent living people in the field. People ask him for his opinion about this stuff, so he shares it.
In reality that is a waterfall process disguised as programming advice.
What that is saying is that if only you really specified the code in a clear technical language, you wouldn't need to write much of it. It's like his version of UML.
But it all boils down to what we know doesn't work, a waterfall development process.
But worse still, it's a waterfall process where no-one outside a small clique, those who can speak the special language, can join in the design process.
Perhaps you agree with Lamport even harder than Lamport does.
If your team is going to get together and implement "Set x to the smallest number >= to all elements of A", and you feel that it's an ambiguous or incomplete description, then it's up to you guys to formalise what is meant - a programming activity - rather than just coding it.
If it is as poorly-specified as you suggest, then someone on your team will think that they have correctly implemented it (tests and all!) but it will give the wrong answer in production.
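For what it's worth, here is a minimal sketch of what that formalisation can look like once it's pinned down (not from the talk; it assumes float elements so that negative infinity is actually representable):

    from functools import reduce

    def max_of(A):
        # "The smallest number >= all elements of A"; for the empty
        # collection that number is negative infinity.
        # float('-inf') is <= everything, so it is the identity for max().
        return reduce(max, A, float('-inf'))

    assert max_of([3.0, 1.5, 2.0]) == 3.0
    assert max_of([]) == float('-inf')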
I made it through the first 28 minutes of the video. Nothing of substance. Perhaps there's some grand point he made at the end, but there's no way to represent negative infinity in most programming languages.
At around 32 minutes, he explained that a sufficient implementation of negative infinity would be the smallest possible value representable by the data type (e.g. 32 bit integers).
That's just moving the problem elsewhere. Since the smallest possible value is a possible value, then it may be present in the array, or it may not actually be present in the array. The caller would have to determine which case is which by testing for the length of the array, which if done, negates the entire utility of returning the smallest possible value.
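A tiny sketch of that collision, with Python ints standing in for a 32-bit type (my illustration, not code from the talk):

    INT_MIN = -2**31  # the stand-in for "negative infinity" on 32-bit ints

    def max_with_sentinel(xs):
        best = INT_MIN
        for x in xs:
            if x > best:
                best = x
        return best

    # Both calls return -2147483648, so the caller can't tell "empty input"
    # from "the max really is INT_MIN" without also checking len(xs).
    print(max_with_sentinel([]))
    print(max_with_sentinel([INT_MIN]))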
The video is over an hour long, where does he state what the actual difference is? I assume it is just along the lines of "coding requires high level thinking, while programming does not". If so, the question becomes: who has a "programming" job today then? If the answer is "no one", then "programming" isn't an actual job and therefore fine to use either term.
https://www.socallinuxexpo.org/sites/default/files/presentat...
After reading the whole thing I'm not sure how the title describes the presentation.
>Programming should be thinking followed by coding.
Maybe if the nouns were switched, i.e. "Programming Isn't Coding", it would've been better.
> The professional title "engineer" is legally protected under the German Engineering Act [0].
[0] https://verwaltung.bund.de/leistungsverzeichnis/EN/leistung/...
[1] “Heaviside step function” and the “Coverup” method for partial fraction expansion when doing integrals are among his discoveries. https://en.wikipedia.org/wiki/Oliver_Heaviside
https://en.wikipedia.org/wiki/Therac-25
I fixed this for you.
"I disagree with the sentiments, there is no societal need for gatekeepers in most professions. The engineer title rarly comes with legal liabilities. It’s not necessarily just about gatekeeping. Of course it often becomes misused as well."
What’s your solution then? No attempt at providing professional standards at all?
Systems made by people will always be flawed. That is the reason for and criticism of certification and regulation.
Like, you’re developing some land, and it turns out you need a geotechnical engineer. The engineer you hired is certified, their plans get put on file with government records office, and the contractors you hire are responsible for following those plans. Among other things, it creates a paperwork trail to figure out who gets the blame if your building collapses and takes out three neighboring apartment complexes with it.
I think we should have these kinds of rules for people who design bridges, medical equipment, and passenger airplanes. I don’t think we should have these rules for people who design hard drive controllers, compilers, and physics simulation software.
You mean the hard drives that are storing inspection records for those bridges, and the compilers that were used to build that medical-equipment software, and the physics simulation software that was used to verify the required load strength of that passenger airplane?
Verified programming languages and compilers for safety critical systems are a thing. Khronos even has some APIs for use in safety critical applications.
Would you say the same about a medical degree? How do you judge skill and knowledge in medicine without a professional association?
Software engineering is lucky not to have the shackles that medicine had here. The door is more open. We still require people to prove their capabilities but we don’t allow organisations such as engineering or medical associations to add limits.
https://www.bma.org.uk/advice-and-support/international-doct...
It's very HN and self-aggrandizing to act like the software 99.9% of us are writing is so important that it needs gatekeep credentials. And the important software that does exist already needs its own processes for quality and safety, not depend on software engineers having some feelgood credentials.
If it's status you want, then just get a higher-status tech job.
IMHO by doing this and going against quaternions he has hindered much progress in EM for more than a century, since that's what's being taught in textbooks, and most people including EM engineers don't care to look at what's available beyond the run-of-the-mill textbooks.
There's a very famous saying attributed to Einstein: make it as simple as possible, but not simpler. In this case Heaviside's "Vectors Versus Quaternions" paper [1] and related advocacy have caused more harm than good to the math, science and engineering of EM-based systems by making things simpler than simple (pardon the pun).
I also have a hypothesis that perhaps someone could research: if Einstein had been properly exposed to quaternion-based EM, he might have solved the general theory of relativity [2] much earlier than the 10 years it took him after the special relativity discovery. A quaternion form of relativity was presented by L. Silberstein as early as 1912, only a few years after the discovery of special relativity [3].
It is a shame that even today the Wikipedia entries for both special relativity and general relativity do not even mention quaternions (zero, nada, zilch), as if it is a taboo to do so, perhaps partly thanks to Heaviside and his seemingly very successful propaganda, even after more than a century of progress in math, science and engineering.
[1] Vectors Versus Quaternions (1893):
https://www.nature.com/articles/047533c0
[2] General relativity:
https://en.wikipedia.org/wiki/General_relativity
[3] Quaternionic Form of Relativity. L. Silberstein (1912) [PDF]:
https://dougsweetser.github.io/Q/Stuff/pdfs/Silberstein-Rela...
The response given to Heaviside does suggest that snobbery was a more likely reason for refusing his membership, but that's just my impression.
As a fun bit of trivia: entwickeln also means to unravel, with ent=un and wickeln = to ravel.
So you'd better err on the side of calling yourself a software developer, because you'd always be telling the truth (and hope your subtly arrogant German engineer colleagues don't pick on you).
It's like telling Eric Clapton, Jimi Hendrix, George Benson, or Wes Montgomery that they're not qualified to teach at a music conservatory because they lack formal diplomas. Sure, technically they're not "certified," but put them next to your average conservatory professor and they'll play circles around 99.99% of them.
Same goes for brilliant coders versus formal engineers.
The only ones I can think of are folks who are self-taught, e.g., Bill Gates. But he famously noted that he had read the three (then extant) volumes of Knuth's _The Art of Computer Programming_, sometimes taking an entire day to contemplate a single page/idea/exercise (and doing all the exercises), and went on to say that anyone who had read all three volumes (with the unspoken caveat of having also done all the exercises) should send him a résumé.
My best friend in high school was a self-taught guitarist, incredibly talented, he also had perfect pitch, and a wonderful insight into what a given piece of music was expressing.
All? of the original self-taught programmers were in an academic context, no? Even those who aren't have either read or researched a great deal, or they have laboriously re-created algorithms and coding concepts from first principles.
Maybe this is just another example of how society has not yet mastered the concept of education?
>There are masters of a thing, and then there are masters of a thing.
Where the difference is, some folks can do, but can't teach/communicate what they do or how they do it, while other folks have both mastered a skill, and are able to raise others up in how to also learn that skill.
My guitar-playing friend also would wear out cassette tapes and records playing back specific sequences working out how they had been played, and perfecting his technique....
Each impressive in their own way, but clearly there is a difference both in terms of the skills required and the outcomes enabled.
I think the specific term used matters less than the ability to distinguish between different types of building things.
If you want to build a bridge, you’re not going to hire the sand castle builder if sand castles are all they have built.
Many of the people there technically program for a living, because some of their models rely on Python scripts.
They are not software engineers. These aren't programs designed around security issues, or scalability issues, maintainability issues, or even following processes like pull requests.
>They are not software engineers. These aren't programs designed around security issues, or scalability issues, maintainability issues, or even following processes like pull requests.
That's been the obvious problem for a while now.
These are the people whom software engineers need to be cheerfully working for.
Any less-effective hierarchy and they'll just have to continue making progress on their own terms.
Nope: We are equals with very different goals.
I (a software engineer) happen to work on a product, with customers. Modeling stormwater is part of the product. Within my company, the people who do modeling are roughly equal to me.
In the past, I've built products for customers where the products do not provide programming / scripting tools.
> That's been the obvious problem for a while now.
A script that someone runs on their desktop to achieve a narrowly-defined task, such as running a model, does not need to handle the same concerns that I, as a software engineer, need to handle. To be quite honest, it could be awful spaghetti code, but if no one else will ever touch it once the model is complete, there is no need for things like code reviews, etc.
I think it would do most developers a lot of good if they were to pretend like they were a servant class to the customer & team for a period of time.
They will hopefully learn that a contest of egos is not enjoyable at scale over long timeframes. A happy customer provides a much bigger dopamine rush than getting some smart-ass jab in on your coworkers.
A job done right and seeing the recipient legitimately happy about what's been done for them should be the #1 goal of a person who identifies as being a fucking genius with computers. Most people suck at this and you should be helping them out, not competing with them in some bullshit hierarchy that only exists in your own head.
Look for instance at multi-purpose solvers like FreeFEM or FEniCS, or the distributed parallelism linalg libraries like PETSc or MUMPS, or more recent projects like OCCA. These are not small projects, and they care a lot about scalability and maintainability, as well as some of them being pretty "CS-y".
At its simplest:
"Engineering Software" is what people who build software products do.
"Science with a Computer" is something that people doing real calculations, simulations, etc do.
I, too, forget that the tools we use to build complex solutions can still be used by people who don't live in a text editor.
Computer science is the science of computation. It's more to do with theory of computation, information theory, data structures, algorithms, cryptography, databases.
"Science with a computer" is experimental physics, chemisty, biology, etc.
"Computer Science" != "Science with a Computer"
"Computer science" (in contrast to software engineering) is the science that looks into the properties of computers and thing related to it (algorithms, database systems). E.g. examining those systems on a scientific level. Depending on how abstract that can have more or less direct input on how to engineer systems in their software/hardware implementations.
"Science with a computer" on the other hand are usually (in terms of majors/academic disciplines) taught in the adjecent subfields: Bioinformatics, Chemoinformatics, Computational biology, etc.. Of course basic computing is also on some level seeping into every science subfield.
It also feels like something of a straw man -- in reality, you have junior programmers/coders and senior programmers/coders and they get better over time.
In real life, they aren't two distinct activities. People write code to get things done, and as they get better they are able to write code that is faster, more elegant, or more scalable. But premature optimization is also the root of all evil, as one saying goes -- plenty of code doesn't need to be faster, more elegant, or more scalable. It just has to work and meet a deadline. And there's nothing wrong with that.
You speak of programmers getting better over time. The point is to break that improvement down into distinct categories.
Of course they're idiosyncratic definitions. He's attempting to make a distinction which he feels is useful. He needs words to communicate that distinction.
Generally people invent new terms by combining words, or call them programmers of "type 1" and "type 2" or something.
Using existing term that are synonyms, and inventing a distinction between them, is unusual, unhelpful, and confusing.
It's like me saying that sofas are always better than couches because sofas are designed with comfort in mind while couches are intended to maximize manufacturer profit. Huh?
English has a lot of words from different origins that are more or less synonyms, and one of the most common time-wasting behaviours I've seen is people endlessly trying to categorize those words in endless debate.
The web is full of endless slideshows with titles such as ‘the difference between management and leadership’ or some such. One’s a Latin word and the other’s Germanic for basically the same concept, and since English has both (it steals from every other language without care) you’ll find a million slideshows people have created on the differences between the words. This whole thread is yet another example of this behaviour, and if you’re aware of it you’ll very quickly tire of every fucking ‘the difference between [English loanword from Latin] and [English loanword from German]’ thread you see.
"The problem with defending the purity of the English language is that English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary." ― James D. Nicoll
English doesn’t have a lot of duplicate words because of some aggressive vocabulary-stealing nature. It has a lot of duplicate words for the same reason modern Nahuatl has a lot of loanwords from Spanish: England was colonized and ruled for centuries by non-English-speaking people.
Except we're talking about a talk title. Lamport explains what he means in the talk. What I responded to was a comment on the content based entirely on the title.
> It's trivial and it doesn't matter, and specifically using language to highlight that difference is pointless.
Sure, and that is precisely Lamport's point. You really need to watch the talk. He shows how abstract algorithms cannot be fully described in any language that is intended to be executed by a computer.
> Similarly, it's annoying and pointless to pedantically argue that "well actually that's not programming, that's coding".
And Lamport is not doing that. You're arguing over a pithy title to a rather deep talk.
It's meant to differentiate human "programmers" from AI "coders". Ever since LLMs showed up there has been a noticeable urge to redefine the value proposition of software work to be less about what LLMs can do (write code) and more about what humans can do (write programs).
We’ve been having this debate for my entire career (since at least 2010). The suits have always misunderstood what programmers/engineers do and we’ve always pushed back explaining that it really is closer to city planning than to brick laying.
The agile manifesto (2001) was in large part a response to these pressures to think of programmers as the people who “just implement” what the smart suits have already figured all out.
They are different and they absolutely do matter.
DateTime.Now() is a perfectly valid thing to write while coding. Unless you are in a distributed system, where 'now' doesn't exist, so all source code using 'DateTime.Now()' is automatically suspect. How do you know if you're in a distributed system? That's a programming question, not a coding question. And from a lot of the microservice push-back you get here on HN ("just use a single DB instance - then your data is always consistent"), a lot of devs don't realise they're in a distributed system.
"Backtracking", "idempotent", "invariant", "transaction", "ACID", "CAP", "secure", "race condition", "CRDT", are all terms that exist at a programming level, but they don't show up in the source code. A single effectful statement in the source code can be enough to break any of these quoted programming terms.
As long as it never gets to "grokker" (I've never heard the term "grokker", just saying) . I can not stand the term "grok". I don't know why but it just grates on me
Bad analogy because accountants need to be qualified by a centralized body.
We are already calling procedures "functions". So it's kinda game over already unfortunately.
lol what?
https://www.merriam-webster.com/dictionary/milcher
The distinction is important because how you make the computer do things is important in many cases. At one end of the spectrum, you're wasting resources if you have a software engineer working on a program that will only be used for 5 years. At the other end of the spectrum, you're setting yourself up for pain if the program is going to be in use for 30 years (or is safety critical, or must produce verifiable results, or ...).
This isn't to say that the "hierarchy" should reflect some sort of software developer social class. It's just different skill-sets for different purposes.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
The first page seems mostly to be people complaining.
you could probably even knock together a script to do that
in fact you might be able to get a language model to knock together such a script with a bit of iteration
The entire social and political discourse since around 2016.
Yikes, how bad was your parents’ living room?
So he wanted to start a contracting service? Contractors are famous for writing piles of untested, spaghetti code that only kind of works but they've finished it by then and the contract is over. Then some poor sap gets stuck trying to maintain it. Probably one of the worse ways businesses acquire custom software.
¹ Not calling it “vibe coding” to not go into pedantic definition discussions.
² As will we all.
I think what tech companies miss is that customers buy these expensive feature-packed software suites and then use 3% of the functionality. Cutting-edge LLMs, and especially the newer thinking ones, are incredibly talented at creating programs that offer that 3% of the functionality of software suites.
Just a few weeks ago we skipped out on a $1k/mo CAD package because Claude wrote a program with a simple GUI that enabled our engineers to do the one task we needed the custom CAD software for (manually creating part placement files from 90's-era manufacturing files).
more or less this, yes. a quick glance, maybe skimming through a tricky part. otherwise, if it works, it ships.
I spent 10 years of my life as an FE with a total obsession for the field, and took huge pride in my hand tuned HTML/CSS interfaces and well architected JS. I was "the UI guy" that anyone would come to with questions, as it was my expertise and what I loved to do all day.
Now I'll never write a line of that stuff ever again as long as I live.
I do not write code anymore. Everything goes through an agent. Occasionally I will flip a config value here or there, or clean up some leftover logs or whatever, but I don't type code at all anymore. My job is now to guide the agent and review its PRs. Yet, I now easily have the productivity of an entire team from a few years ago.
You can keep your head in the sand over this stuff for a little while longer maybe, but you will be completely obsolete if you don't adapt soon.
Of course if your job is extremely simple and generic, AI can kinda figure it out.
Keep telling yourself this. It's only hurting you.
I think anyone who isn't working heavily with agentic systems right now has absolutely no clue about the revolutionary things that have happened in the last six months or so. Our industry is going to be in a completely different universe this time next year.
You can't extrapolate from the last 6 months, especially with technologies this new. There's no Moore's Law of LLMs. Context windows are still tiny and the codebases that actually run our world are still huge.
This time next year I expect you're going to be disappointed.
Let me rephrase that. This time next year I expect that I will be disappointed but you will still be predicting a completely different universe... next year.
I don't tell myself this. I just read your comment.
You're welcome to be hired in my team, we're hiring. Then you can sit there and ask claude or copilot and be fired when your trial period ends.
I see this attitude a lot. As if "uses AI" === "can't program". Keep swinging that hammer, John Henry.
Yes it's great at React/Frontend. Because that is mainly boilerplate. Hooking up click handlers and validators and submit buttons and fetch calls is boilerplate. It's just busy work. There can be complicated FE work, like making something like google maps, but how many of us ever need to do that? Most of the time if you did need to do something complicated, you get a library to do it ALL for you. Charts, maps, etc.
If you've worked on more niche languages/tools/problems you start seeing behind the curtain. The trick. It's essentially just copying someone else's code and replacing the variables. I've inherited some robot framework tests. I tried it to fix a problem, the code it spat out (v1) was nonsense. I told it. It spat out some different code, v2. It's still wrong. It then spat out v1 again! It seemed to only have these two "templates" to use. No matter how I framed the questions, v1, v2, v1, v2, v1, v2. They didn't even compile.
Hence why it's great at React, there are millions of examples for it to combine and refine.
I'd love AI to take away a lot of boring coding. But in my honest experience anything much longer than 100/200 lines and the AI starts screwing it up. It can't work on a full codebase. It even does incredibly irritating things like remove code that's needed, forgetting code changes it's already made, and worse still deleting code updates or fixes you've made.
Maybe... Why are you so sure? It's incredible what these models do, but now it's all incremental, and it's pretty clear LLMs are as bad at complexity as humans. If a system is of a realistic size for a business system, the LLM will fuck things up most of the time unless you are a good coder with patience. It saves a lot of time, but it saves time on stuff that is 'just typing'; that is more than many programmers in this world can claim, but that's really another matter. You are not doing anything that requires more than hello-world context, because that's where LLMs really break down.
Step out of that bubble, though, and it's a different world.
It has only happened like twice where I saw the full power of LLMs. I don't understand it. It's like there's a switch to dumb these things down, but if you catch them at the right time, they will blow your mind. But yes, most times, they are horrible.
Or maybe I'm just crazy, i dunno.
And around that scale, you’ll get much better code if you refactor it yourself and clean up its logic (which tends to be verbose, at best).
Someday, it will be better; today, none of them (Grok, ChatGPT, Claude) are good enough for what you describe.
You're living in 2023. Try Cline with Sonnet 3.7.
Why do some people use such abominations like "shite" or "shoot" instead of just using "shit"? C'mon lads, everyone knows what you want to say, why not just say it?
I personally also don't like gosh or shoot as we all know what you mean, but this is not that case, look it up.
Lamport argued for a fundamental shift in programming approach, emphasizing thinking and abstraction before coding, applicable to all non-trivial code, not just concurrent systems.
The core message: Programming should be a deliberate process of thoughtful design (abstraction) followed by implementation (coding), with a strong emphasis on clear specifications and understanding program behavior through state sequences and invariants. Thinking is always better than not thinking.
Apologies to Lamport if he addresses this later; this is where I quit watching.
There are basically two ways out of this, that I’m aware of: either return an error, or make the initial value of the reduction explicit in the function signature.
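A sketch of what those two options could look like (an illustration only; the function names are made up):

    def max_or_raise(xs):
        # Option 1: refuse to answer for the empty collection.
        it = iter(xs)
        try:
            best = next(it)
        except StopIteration:
            raise ValueError("max() of an empty collection")
        for x in it:
            if x > best:
                best = x
        return best

    def max_from(xs, initial):
        # Option 2: the caller supplies the starting value of the reduction,
        # so the empty case has an explicit, caller-chosen answer.
        best = initial
        for x in xs:
            if x > best:
                best = x
        return best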
But that's his point. You're talking about a concrete concern about code, whereas he says that first you should think in abstractions. Negative infinity helps you, in this case, to think in abstractions first.
"Truer words were never spoken." --Case from Gibson's Neuromancer
Coding is too ambiguous a word and I don't really use it much.
For example, are performance considerations part of the "What", or of the "How"?
This is a huge mental error that people make over and over again. There is no absolute "what" and "how", they are just different levels of abstraction - with important inflection points along the way.
The highest level of abstraction is to simply increase happiness. Usually this is accomplished by trying to increase NPV. Let's do that by making and selling an accounting app. This continues to get more and more concrete down to decisions like "should I use a for loop or a while loop"
Clearly from the above example, "increase NPV" is not an important inflection point as it is undifferentiated and not concrete enough to make any difference. Similarly "for loop vs while loop" is likely not an inflection point either. Rather, there are important decisions along the way that make a given objective a success. Sometimes details can make a massive difference.
Even if a user cannot precisely explain what a program needs to do, programmers should still explicitly design its behaviour. The benefit, he argues, is that having this explicit design enables you to verify whether the implementation is actually correct. Real-world programs without such designs, he jokes, are by definition bug-free, because without a design you can't determine if certain behaviour is intentional or a bug.
Although I have no experience with TLA+ (which he designed for this purpose in the context of concurrency), this advice does ring true. I have mentored countless programmers, and I've observed that many (junior) programmers see program behaviour as an unintentional consequence of their code, rather than as deliberate choices made before or during programming. They often do not worry about "corner cases", accepting whichever behaviour emerges from their implementation.
Lamport says: no, all behaviour must be intentional. Furthermore, he argues that if you properly design the intended program behaviour, your implementation becomes much simpler. I fully agree!
However, a spec that has been sufficiently formalized (so that it can be executed) is an implementation. Maybe an implementation with certain (desirable or undesirable) characteristics, but still an implementation.
Of course there are informal, incomplete specifications that can't be executed. Those also have value of course, but I'd argue that writing those isn't programming.
Prolog by itself hasn't found any real-world applications as of today, because by ignoring the "How", performance suffers a lot, even though the algorithm might be correct.
That's the reason algorithms are implemented procedurally, and then some invariants of the program are proved on top of the procedural algorithm using a specification and verification tool like TLA+.
But in practice we implement procedural algorithms and we don't prove any property on top of them. It is not like the average programmer writes code for spaceships. The spaceship market is not significant enough to warrant that much additional effort.
Yes, the "hows" have impacts on performance. When implementing or selecting the "how" those performance criteria flow down into "how" requirements. As far as I am concerned that is no difference from correctness requirements placed on the "how".
Coming from the other direction: if I am a hard disk manufacturer I don't care about the "what". I only care about the "how" of getting storage and the disk IO interface implemented so that my disk is usable by the file system abstraction that the "what" cares about. I may not know the exact performance criteria for your "what", but more performance equals more "whats" satisfied and willing to use my how.
The What puts constraints on the How, while the How puts constraints on the What.
I would say the What is the spatial definition of the algorithm's potential data set, while the How is the temporal definition of the algorithm's potential execution path.
>Algorithms are not programs and should not be written in programming languages and can/should be simple, while programs have to be complex due to the need to execute quickly on potentially large datasets.
Specifically discussed in the context of concurrent programs executing on multiple CPUs due to order of execution differing.
Defining programs as:
>Any piece of code that requires thinking before coding
>Any piece of code to be used by someone who doesn't want to read the code
Apparently he has been giving this talk for a while:
https://www.youtube.com/watch?v=uyLy7Fu4FB4
(previous discussions of it here?)
Interesting that the solid example of simplifying finding the largest item in a set to finding the smallest value greater than or equal to all elements, and starting the search with the value negative infinity, was exactly "Define Errors Out of Existence" from John Ousterhout's book:
https://www.goodreads.com/book/show/39996759-a-philosophy-of...
as discussed here previously: https://news.ycombinator.com/item?id=27686818
(but of course, as an old (La)TeX guy, I'm favourably disposed towards anything Leslie (La)mport brings up)
But then there is no differentiation between the result of the empty set and a set containing only negative infinities.
I consider them separate conditions and would therefore make them separate return values.
That's why I would prefer returning a tuple of a bool hasMax and an int maxV, where maxV's int value is only meaningful if hasMax is true, meaning that the set is non-empty.
Another way of doing such would be passing in an int defMax that is returned for the empty set, but that runs into the same problem should the set's actual max value be that defMax.
Anyway, I typed it up differently in my other toplevel comment in this thread.
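A minimal sketch of that (hasMax, maxV) shape, assuming ints (not the commenter's actual code):

    def try_max(xs):
        # Returns (hasMax, maxV); maxV is only meaningful when hasMax is True.
        has_max, max_v = False, 0
        for x in xs:
            if not has_max or x > max_v:
                has_max, max_v = True, x
        return has_max, max_v

    assert try_max([]) == (False, 0)
    assert try_max([-2**31]) == (True, -2**31)  # distinguishable from empty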
What's sad, but to be expected with anything involving rigor, is how many people only read the title and developed an adverse reaction. The Hacker in Hacker News can be a skilled programmer who can find their way around things. Maybe now it's more of the hack as in "You're a hack", meaning you're unskilled and produce low-quality results.
When nobody cares if their systems work in any meaningful way, of course they'll dismiss anything that hints at rigor and professionalism as pedantry.
> Probably both.
Maybe neither. Maybe PEBCAK.
By your book analogy, your coworkers are doing typing, not writing. So keep calling them typists until they start writing something.
and maxV is only meaningful if hasMax is true
LL is a mathematician, which is the problem here ;-)
because -Inf for the return value, for when there are no elements, is semantically different than if the array has elements and they're all -Inf.
That's why exceptional conditions must be coded as a separate piece of result data.
Things must be as simple as possible, but not simpler.