Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?
For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...
On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.
In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?
I used to have dozens of phone numbers memorized. Once I got a cell phone I forgot everyone's number. I don't even know the phone number of my own mother.
I don't want to lose my ability to think. I don't want to become intellectually dependent on AI in the slightest.
I've been programming for over a decade without AI and I don't suddenly need it now.
It's more complicated than that—this trade-off between using a tool to extend our capabilities and developing our own muscles is as old as history. See the dialog between Theuth and Thamus about writing. Writing does have the effects that Socrates warned about, but it's also been an unequivocal net positive for humanity in general and for most humans in particular. For one thing, it's why we have a record of the debate about the merits of writing.
> O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
Interesting perspective. I read your first line about phone numbers as a fantastic thing -- people used to have to memorize multiple 10 digit phone numbers, now you can think about your contacts' names and relationships.
But... I think you were actually bemoaning the shift from numbers to names as a loss?
Have you not run into trouble when your phone is dead but you have to contact someone? I have, it's frustrating. Thankfully I remember my partners number, though its the only one these days.
I had to maintain a little physical phone book because, while I can memorize 10 people’s numbers, I cant memorize 25, 50, 100. Not having that with me when I needed it, or if you lost it and had no backup, was far less convenient than today. It feels like this is a case of magnifying a minor, rare modern inconvenience and ignoring all the huge inconveniences of the past in favor of a super narrow case where it was debatably “better”.
These things are not mutually exclusive. Remembering numbers didn't hinder our ability to remember our contacts' names.
We don't know how brain exactly works, but I don't think we can now do some things better just because we are not using another function of our brains anymore.
(not op) for me its a matter of dependency. great, as long as i have my phone I can just ask siri to call my sister, but if I need to use someone else's phone because mines lost or dead, well, how am I going to do that?
Same as AI. Cool it makes you 5x as efficient at your job. But after a decade of using it, can you got back to 1x efficiency without it? Or are you just making the highly optimistic leap that you will retain access to the tech in perpetuity.
I'm curious what your exposure to the available tools has been so far.
Which, if any, have you used?
Did you give them a fair shot on the off-chance that they aid you in getting orders of magnitude more work done than you did previously while still leveraging the experience you've gained?
Well, sure. I can remember phone numbers from 30+ years ago approximately instantly.
I don't have to remember most of them from today, so I simply don't. (I do keep a few current numbers squirreled away in my little pea brain that will help me get rolling again, but I'll probably only ever need to actually use those memories if I ever fall out of the sky and onto a desert island that happens to have a payphone with a bucket of change next to it.)
On a daily, non-outlier basis, I'm no worse for not generally remembering phone numbers. I might even be better off today than I was decades ago, by no longer having to spend the brainpower required for programming new phone numbers into it.
I mean: I grew up reading paper road maps and [usually] helping my dad plan and navigate on road trips. The map pocket in the door of that old Chevrolet was stuffed with folded maps of different areas of the US.
But about the time I started taking my own solo road trips, things like the [OG] MapBlast! website started calculating and charting driving directions that could be printed. This made route planning a lot faster and easier.
Later, we got to where we are today with GPS navigation that has live updates for traffic and road conditions using systems like Waze. This has almost completely eliminated the chores of route planning and remembering directions (and alternate routes) from my life, and while I do have exactly one road map in my car that I do keep updated I haven't actually ever used it for anything since 2008 or so.
And am I less of a person today than I was back when paper maps were the order of the day? No, I don't think that I am -- in fact, I think these kinds of tools have made me much more capable than I ever was.
We call things like this "progress."
I do not yearn for the days before LLM any more than I yearn for the days before the cotton gin or the slide rule or Stack Overflow.
"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."
"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."
These two ideas are closely related and really just different aspects of the same basic frailty of the human intellect. Understanding that I think can really inform you about how you might use these tools in work (or life) and where the lines need to be drawn for your own personal circumstance.
I can't say I disagree with anything you said and think you've made an insightful observation.
In the presence of sufficiently good and ubiquitous tools, knowing how to do some base thing loses most or all of its value.
In a world where everyone has a phone/calculator in their pocket, remembering how to do long division on paper is not worthwhile. If I ask you "what is 457829639 divided by 3454", it is not worth your time to do that by hand rather than plugging it into your phone's calculator.
In a world where AI can immediately produce any arbitrary 20-line glue script that you would have had to think about and remember bash array syntax for, there's not a reason to remember bash array syntax.
I don't think we're quite at that point yet but we're astonishingly close.
The value isn't in rote calculation, but the intuition that doing it gives you.
So yes, it's pretty useless for me to manually divide arbitrarily large numbers. But it's super useful for me to be able to reason around fractions and how that division plays out in practice.
Same goes for bash. Knowing the exact syntax is useless, but knowing what that glue script does and how it works is essential to understanding how your entire program works.
That's the piece I'm scared of. I've seen enough kids through tutoring that just plug numbers into their calculator arbitrarily. They don't have any clue when a number is off by a factor of 10 or what a reasonable calculation looks like. They don't really have a sense for when something is "too complicated" either, as the calculator does all of the work.
The neat thing about AI generated bash scripts, would be that the AI can comment their code.
So the user can 1) check if the comment for each step match what they expect to be done, and 2) have a starting point to debug if something goes wrong.
> If I ask you "what is 457829639 divided by 3454"
And if it spits out 15,395,143 I hope you remember enough math to know that doesn’t look right, and how to find the actual answer if you don’t trust your calculator’s answer.
Sanity Checking Expected Output is one of the most vital skills a person can have. It really is. But knowing the general shape of the thing is different than any particular algorithm, don't you think?
This gets to the root of the issue. The use case, and user experience, and thus outcome is, is remarkably different depending on your current ability.
Using AI to learn things is useful, because it helps you get terminology right, and helps you Google search well. For example say you need to know a Windows API, you can describe it snd get the name. Then Google how that works.
As an experienced user you can get it to write code. You're good enough to spot errors in the vote and basically just correct as you go. 90% right is good enough.
It's the in-between space which is hardest. You're an inexperienced dev looking to produce, not learn. But you lack the experience and knowledge to recognise the errors, or bad patterns, or whatever. Using AI you end up with stuff that's 'mostly right' - which in programming terms means broken.
This experience difference is why there's so much chatter about usefulness. To some groups it's very useful. To others it's a dangerous crutch.
This is both inspiring and terrifying at the same time.
That being said I usually prefer to do something the long and manual way, write the process down sometimes, and afterwards search for easier ways to do it. Of course this makes sense on a case by case basis depending on your personal context.
Maybe stuff like crosswords and more will undergo a renaissance and we'll see more interesting developments like Gauguin[0] which is a blend of Sudoku and math.
Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.
The difference is that you can trust a good calculator. You currently can't trust AI to be right. If we get a point where the output of AI is trustworthy, that's a whole different kind of world altogether.
>The difference is that you can trust a good calculator.
I found a bug in the ios calculator in the middle of a masters degree exam. The answer changed depending on which way the phone was held. (A real bug - I reported it and they fixed it). So knowing the expected result matters even when using the calculator.
I'm not changing goalposts, I was responding to what you said about AI spitting out something wrong and you spending 3 hours debugging it.
My original point about not needing fundamentals would obviously require AI to, y'know, not hallucinate errors that take three hours to debug. We're clearly not there yet. The original goalposts remain the same.
Since human conversations often flow from one topic to another, in addition to the goal post of "not needing fundamentals" in my original post, my second post introduced a goalpost of "being broadly useful". You're correct that it's not the same goalpost as in my first comment, which is not unexpected, as the comment in question is also not my first comment.
There is only one correct way to calculate 5/2+3. The order is PEMDAS[0]. You divide before adding. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn’t mean there are two potential answers, but rather that the person that wrote it did so improperly.
So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
“Which is a question that can be interpreted in only one way. And done only one way.”
The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and a more broad argument that words/symbols can have various meanings, hence anything can be interpreted in many ways.
While these are fun arguments to be made. They are not relevant to practical use of the calculator or LLMs.
> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.
There's not one incorrect way to interpret that math statement, there are infinite incorrect ways to do so. For example, you could interpret as being a poem about cats.
Maybe user means the difference between a simple calculator that does everything as you type it in and one that can figure out the correct order. We used those simpler ones in school when I was young. The new fancy ones were quite something after that :)
> Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.
This is basically how AI research is conducted. It's alchemy.
I don't honestly think anyone can remember bash array syntax if they take a 2 week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires cause half the internet will be an ouroboros of ai generated garbage.
>>I wonder if my coding skill will deteriorate in the years to come...
Well that's not how LLMs work. Don't use an LLM to do thinking for you. You use LLMs to work for you, while you tell(after thinking) it what's to be done.
Basically things like-
. Attach a click handler to this button with x, y, z params and on click route it to the path /a/b/c
. Change the color of this header to purple.
. Parse the json in param 'payload' and pick up the value under this>then>that and return
etc. kind of dictation.
You don't ask big questions like 'Write me a todo app', or 'Write me this dashboard'. Those are too broad questions.
You will still continue to code and work like you always have. Except that you now have a good coding assistant that will do the chore of typing for you.
Maybe I'm too good at my editor (currently Emacs, previously Vim), but the fact is that I can type all of this faster than dictating it to an AI and verifying its output.
> One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?
I mostly use manuals, books, and the occasional forum searches. And the advantage is that you pick surrounding knowledge. And more consistent writing. And today, I know where some of the good stuff are. You're not supposed to learn everything in one go. I built a knowledge map where I can find what I want in a more straightforward manner. No need to enter in a symbiosis with an LLM.
Well its entirely an individual choice to make. But I don't generally view the world in terms of ORs I view them in terms of ANDs.
One can do pick and use multiple good things at a time. Using vim doesn't mean, I won't use vscode, or vice versa. Or that if you use vscode code you must not use AI with it.
Having access to a library doesn't mean, one must not use Google. One can use both or many at one time.
There are no rules here, the idea is to build something.
I asked o1 to make an entire save system for a game/app I’m working on in Unity with some pretty big gotchas (Minecraft-like chunk system, etc) and it got pretty close to nailing it first try - and what it didn’t get was due to me not writing out some specifics.
I honestly don’t think we’re far out from people being able to write “Write me a todo app” and then telling it what changes to make after.
I recently switched back to software development from professional photography and I’m not sure if that’s a mistake or not.
I think that anybody who finds the process of clumsily describing the above examples to an LLM in some text box using english and waiting for it to spit out some code which you hope is suitable for your given programming context and codebase more efficient than just expressing the logic directly in your programming language in an efficient editor, probably suffers from multiple weaknesses:
- Poor editor / editing setup
- Poor programming language and knowledge thereof
- Poor APIs and/or knowledge thereof
Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".
I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, english) to express problems clumsily in a roudabout way, and then throw away the "source code" (the LLM prompt) entirely such to simply paste the "compiler output" (code the LLM spewed out which may or may not be suitable or correct) into some heterogenous mess of multiple different LLM outputs pasted together in a codebase held together by nothing more than the law of averages, and hope.
Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I heard figures such as the total amount required to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spewing tonnes of CO₂ and massive amounts of heat to power your "programming experience".
In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.
There's a new mode of programming (with AI) that doesn't require english and also results in massive efficiency gains. I now only need to begin a change and the AI can normally pick up on the pattern and do the rest, via subsequent "tab" key hits as I audit each change in real time. It's like I'm expressing the change I want via a code example to a capable intern that quickly picks up on it and can type at 100x my speed but not faster than I read.
I'm using Cursor btw. It's almost a different form factor compared to something like GH copilot.
I think it's also worth noting that I'm using TypeScript with a functional programming style. The state of the program is immutable and encoded via strongly typed inputs and outputs. I spend (mental) effort reifying use-cases via enums or string literals, enabling a comprehensive switch over all possible branches as opposed to something like imperative if statements. All this to say, that a lot of the code I write in this type of style can be thought of as a kind of boilerplate. The hard part is deciding what to do; effecting the change through the codebase is more easily ascertained from a small start.
Provided that we ignore the ridiculous waste of energy entailed by calling an online LLM every time you type a word in your editor - I agree that the utility of LLM-assisted programming as "autocomplete on steriods" can be very useful. It's awfully close to that of a good editor using the type system of a good programming language providing suggestions.
I too love functional programming, and I'm talking about Haskell-levels of programming efficiency and expressiveness here, BTW.
This is quite a different use case than those presented by the post I was replying to though.
The Go programming language has this mantra of "a little bit of copy and paste is better than a little bit of dependency on other code". I find that LLM-derived source code takes this mantra to an absurd extreme, and furthermore that it encourages a though pattern that never leads you to discover, specify, and use adequate abstractions in your code. All higher-level meaning and context is lost in the end product (your committed source code) unless you already think like a programmer _not_ being guided by an LLM ;-)
We do digress though - the original topic is that of LLM-assisted writing, not coding. But much of the same argument probably applies.
At the time I'm writing this, there are over 260 comments to this article and yours is still the only one that mentions the enormous energy consumption.
I wonder whether this is because people don't know about it or because they simply don't care...
But I, for one, try to use AI as sparingly as possible for this reason.
You're not alone. With the inclusion of gemini generated answers in google search, its going down the road of most capitalistic things. Where you see something is wrong, but you have no option to use it even if you don't want it.
I like to idealistically think that in a capitalistic (free market) society we absolutely have the option to not use things that we think are wrong or don't like.
Change your search engine to one that doesn't include AI-generated answers. If none exist any more, all of Google's customers could write to them telling them that they don't want this feature and are switching away from them because of it, etc.
I know that internet-scale search is perhaps a bad example because it's so extremely difficult and expensive to build and run, but ultimately the choice is in the consumers' hands.
If the market makes it clear that there is a need for a search engine without LLM-generated answers at the top, somebody will provide one! It's complacency and acceptance that leads apparently-delusional companies to just push features and technologies that nobody wants.
I feel much the same way about the ridiculous things happening with cars and the automotive sector in general.
> a certain degree of "productive struggle" is essential
Honestly, I'm not sure this would account for most of the difficulty in learning. In my experience most of the difficulty involved in learning something involved a few missing pieces of insight. It often took longer to understand the few missing pieces than the rest of the topic. If they are accurate enough, LLMs are great for getting yourself unstuck and keep yourself moving. Although it has always been a part of the learning experience, I'm not sure frantically looking through hundreds of explanations for a missing detail is a better use of one's time than to dig deeper in the time you save.
I'm not saying you're wrong, but I wonder if this "missing piece of insight" is at least sometimes an illusion, as in the "monads are like burritos" fallacy [0]. Of course this does not apply if there really is just a missing fact that too many explanations glossed over.
I once knew someone who studied CS and medicine at the same time. According to them, if you didn't understand something in CS after reasonable effort, you should do something else and try again next semester. But if you didn't understand something in medicine, you just had to work harder. Sometimes it's enough that you have the right insights and cognitive tools. And sometimes you have to be familiar with the big picture, the details, and everything in between.
ideally you look and fail and exhaust your own efforts, then get unblocked with a tool or assistant or expert. With LLMs at your finger tips who has both the grit to struggle and the self discipline not to quit early? at the age of the typical student - very few.
That actually is an approach. Some teachers make you read the lesson before class, others give you homework on the lesson before lecturing it, and some even quiz you on it on top of that before allowing you to ask questions. I personal feel that trying to learn the material before class helped me learn the material better than coming into class blind.
one could argue aswell that having at least generally satisfying, but at the same time omnipresent "expert assistance" might rather end up empowering you.
Feeling confident to be able to shrug off blockers, that might otherwise turn exploration into a painful egg hunt for trivial unknowns, can easily mean the difference between learning and abandoning.
A current 3rd year college student here. I really want LLMs to help me in learning but the success rate is 0.
They often can not generate relatively trivial code When they do, they can not explain that code. For example, I was trying to learn socket programing in C. Claude generated the code, but when I stared asking about stuff, it regressed hard. Also, often the code is more complex than it needs to be. When learning a topic, I want that topic, not the most common relevant code with all the spagheti used on github.
For other subjects, like dbms, computer network, when asking about concepts, you better double check, because they still make stuff up. I asked ChatGPT to solve prev year question for dbms, and it gave a long, answer which looked good on surface. But when I actually read through because I need to understand what it is doing, there were glaring flaws. When I point them out, it makes other mistakes.
So, LLMs struggle to generate concise to the point code. They can not explain that code. They regularly make stuff up. This is after trying Claude, ChatGPT and Gemini with their paid versions in various capacities.
My bottom line is, I should NEVER use a LLM to learn. There is no fine line here. I have tried again and again because tech bros keep preaching about sparks of AGI, making startup with 0 coding skills. They are either fools or genius.
LLMs are useful strictly if you already know what you are doing. That's when your productivity gains are achieved.
Brace yourself, people who are going to come to tell you that it was all your fault are here!
I got bullied at a conference (I was in the audience) because when the speaker asked me, I said AI is useless for my job.
My suspicion is that these kind of people basically just write very simple things over and over and they have 0 knowledge of theory or how computers work. Also their code is probably garbage but it sort-of works for the most common cases and they think that's completely normal for code.
I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.
One with ChatGPT about dbms questions and one with claude about socket programming.
Looking back are some questions a little stupid ? Yes. But affcourse they are! I am coming with zero knowledge trying to learn how the socket programming is happening here ? Which functions are begin pulled from which header files, etc.
In the end I just followed along with a random youtube video. When you say, you can get LLM to do anything, I agree. Now that I know how socket programming is happening, for next question in assignment about writing code for crc with socket programming, I asked it to generate code for socket programming, made the necessary changes, asked it generate seperate function for crc, integrated it manually and voila, assignment done.
But this is the execution phase, when I have the domain knowledge. During learning when the user asks stupid questions and the LLM's answer keep getting stupider, then using them is not practical.
Also Im surprised you even got a usable answer from your first question asking for a socket program if all you asked was the bold part. I'm a human (pretty sure at least) and had no idea how to answer the first bold question.
I had already established from previous chat that upon asking for server.c file, llm's answer was working correctly. Rest of the sentence is just me asking it to use and not use certain header files which it uses by default when you ask it to generate server.c file.Thats because from docs of <sys/socket.h>, I thought it had all relevant bindings for the socket programming to work correctly.
I had no idea what even the question was. I had chatgpt (4o) explain it to me, and solve it. I now know what candidate keys are, and that the question asks for AB and BC. I'd share the link, but chatgpt doesn't support sharing logs with images.
So you did not convince me that LLMs are not working (on the contrary), but I did learn something today! thanks for that.
I can get an LLM to do almost anything I want. Sometimes I need to add a lot of context. Sometimes I need to completely rewrite the prompt after realizing I wasn't communicating clearly. I almost always have to ask it to explain it's reasoning. You can't treat an LLM like a computer. You have to treat it like a weird brain.
The problem with these answers is that they are right but misleading in a way.
Glass is not a pure element so that temperature is the "production temperature" but as an amorphous material it ""melts"" in the way a plastic material ""melts"" and can be worked at temperature as low as 5-700c.
I feel like without a specification the answer is wrong by omission.
What "melts" means when you are not working with a pure element is pretty messy.
This came out in a discussion for a project with a friend too obsessed with GPT (we needed that second temperature and i was "this can't be right....it's too high")
Yes. This is funny when I know what is happening and I can "guide" the LLM to the right answer. I feel that is the only correct way to use LLMs and it is very productive. However, for learning, I don't know how anyone can rely on them when we know this happens.
I mean. Likely, yes, but if you have to spend the time to prompt correctly, I'd rather just spend that time learning the material I actually want to learn
I've been programming for 20 years and mostly JS for the last 10 years. Right now, I'm learning Go. I wrote a simple CLI tool to get data from several servers. Asked GPT-4o to generate some code, which worked fine at first. Then I asked it to rewrite the code with channels to make it async and it contained at least one major bug.
I don't dismiss it as completely useless, because it pointed me in the correct direction a couple times, but you have to double-check everything. In a way, it might help me learn stuff, because I have to read its output critically. From my perspective, the success rate is a bit above 0, but it's nowhere close to "magical" at all.
> AI makes me at least 2x more efficient at my job. It seems irrational not to use it
Fair, but there is a corollary here -- the purpose of learning at least in part is to prepare you for the workforce. If that is the case, then one of the things students need to get good at is conversing with LLMs, because they will need to do so to be competitive in the workplace. I find it somewhat analogous to the advent of being able to do research on the internet, which I experienced as an early 90s kid, where everyone was saying "now they won't know how to do research anymore, they won't know the Dewey decimal system, oh no!". Now the last vestiges of physical libraries being a place where you even can conduct up-to-date research on most topics are crumbling, and research _just is_ largely done online in some form or another.
Same thing will likely happen with LLMs, especially as they improve in quality and accuracy over the next decade, and whether we like it or not.
A big one for me was nobody will know how to look up info in a dictionary or encyclopedia. Yep I guess that's true. And nobody would want to now either!
Our internal metrics show a decrease in productivity when more inexperienced developers use AI, and an increase when experienced developers with 10+ years use it. We see a decrease in code quality across experience levels which needs to be rectified across all experience levels but even with the time spent refactoring it’s still an increase in productivity. I think I should note that we don’t use these metrics for employee review in any way. The reason we have them is because they come with the DORA (eu regulation) compliance tool we use to monitor code-quality. They won’t be used for employee measurement while I work here. I don’t manage people, but I was brought in to help IT transition from startup to enterprise so set the direction with management confidence.
I’m a little worried about developers turning to LLMs instead of official documentation as the first thing they do. I still view LLMs as mostly being fancy auto-complete with some automation capabilities. I don’t think they are very good at teaching you things. Maybe they are better than Google programming, but the disadvantage LLMs have seem to be that our employees tend to trust the LLMs more than they would trust what they found on Google. I don’t see an issue with people using LLMs on fields they aren’t too experienced with yet however. We’ve already seen people start using different models to refine their answers, we’ve also seen an increase in internal libraries and automation in place of external tools. Which is what we want, again because we’re under some heavy EU regulations where even “safe” external dependencies are a bureaucratic nightmare.
I really do wonder what it’ll do to general education though. Seeing how terrible and great these tools can be from a field I’m an expert in.
How long are your progress reports? Mine are a one sentence message like "Hey, we've got a fix for the user profile bug, but we can't deploy it for an hour minimum because we've found an edge case to do with where the account signed up from" and I'm not sure where the AI comes in.
AI comes to write it 10x longer so you pretend you worked a lot and think the reader can't realise your report is just meaningless words because you never read anything yourself.
I keep mine pretty short and have been writing them up bullet-style in org-mode for like 15 years. I can scan back over the entire year when I need to deal with my annual review and I don't think I spend more than 5 minutes on this in any given week. Converting from my notes to something I would deliver to someone else might take a few minutes of formatting since I tend to write in full sentences as-is. I can't imagine turning to an AI tool for this shit.
> But there is something to be said about atrophy. If you don't use it, you lose it.
YMMV, but I didn't ride a bike for 10ish years, and then got back on and I was happily riding quickly after. I also use zsh and ctrl+r for every Linux command, but I can still come up with the command of I need to, just, slowly. Ive overall found that if I learn a thing, it's learnt. Stuff I didn't learn in university, but passed anyways, like Jacobians, I still don't know, but I've got the gyst of it. I do keep getting better and better at the banjo the less I play it, and getting back to the drumming plateau is quick.
Maybe the drumming plateau is the thing? You can quickly get back to similar skill levels after not doing the thing in a while, but it's very hard to move that plateau upwards
Dont you see the survivorship bias in your thinking?
You learnt the bike and practiced it rigorously before stoppping for 10 years, and you're able to pick it up. You _knew_ the commands because you learned the them the manual/hard way, and then used assistance to to do it for you..
Now, do you think it will apply to someone who begins their journey with LLMs and doesnt quite develop the skill of "Does this even look right?!", and says to themselves "if LLMs could write this module why bother learning what that thing actually does?" and then get bitten by it due to LLM hallucinations and stare like a deer in headlights.
> I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?
Already do — that's what "brain training" apps; consumer EdTech like Duolingo and Brilliant; and educational YouTube and podcasts like 3blue1brown, Easy German, ElectroBOOM, Overly Sarcastic Productions all are.
I have been working with colleagues to develop advice on how to adapt teaching methods in the face of widespread use of LLMs by students.
The first point I like to make is that the purpose of having students do tasks is to foster their development. That may sound obvious, but many people don't seem to take notice that the products of student activities are worthless in themselves. We don't have students do push-ups in gym class to help the national economy by meeting some push-up quota. The sole reason for them is to promote physical development. The same principle applies to mental tasks. When considering LLM use, we need to be looking at its effects on student development rather than on student output.
So, what is actually new about LLM use? There has always been a risk that students would sometimes submit homework that was actually the work of someone else, but LLMs enable willing students to do it all the time. Teachers can adapt to this by basing evaluation only on work done in class, and by designing homework to emphasize feedback on key points, so that students will get some learning benefit even though a LLM did the work.
Completely following this advice may seem impossible, because some important forms of work done for evaluation require too much time. Teachers use papers and projects to challenge students in a more elaborate way than is possible in class. These can still be used beneficially if a distinction is made between work done for learning and work done for evaluation. While students develop multiple skills while working on these extended tasks, those skills could be evaluated in class by more concise tasks with a narrower focus. For example, good writing requires logical coherence and rhetorical flow. If students have trouble in these areas, it will be just as evident in a brief essay as a long one.
It is trivially easy to spot AI writing if you are familiar with it, but if it requires failing most of the class for turning in LLM generated material, I think we are going to find that abolishing graded homework is the only tenable solution.
The student's job is not to do everything the teacher says, it is to get through schooling somewhat intact and ready for their future. The sad fact is that many things we were forced to do in school were not helpful at all, and only existed because the teachers thought it was, or for no real reason at all.
Pretending that pedagogy has established and verified methodology that will result in a completely developed student, if only the student did the work as prescribed, is quite silly.
Teaching evolves with technology like every other part of society and it may come out worse or it may come out better, but I don't want to go back fountain pens and slide rules and I think in 20 years this generation won't look back on their education thinking they got a worse one than we did because they could cheat easier.
As a (senior) lecturer in a university, I’m with you on most of what you wrote. The truth is that every teacher must immediately think: if any of their assignments or examinations involve something that could potentially be GPT-generated, it will be GPT-generated. It might be easy to spot such a thing, but you’ll be spending hours writing feedback while sifting through the rivers of meaningless artificially-generated text your students will submit.
Personally what I’m doing is to push the weight back at the students. Every submission now requires a 5-minute presentation with an argumentation/defense against me as an opponent. Anyway it would take me around 10-15 min to correct their submission, so we’re just doing it together now.
Never say never, but I do not plan on doing this. This sounds quite surreal: a loop where the students pretend to learn and I pretend to teach? I would… hm… I’ve never heard of such… I mean, this is definitely not how it is in reality… right…
(Jokes aside, I have an unhealthy, unstoppable need to feel proud of my work, so no I won’t do that. For now…)
I would have thought that the teaching comes before the test, and that the test is really just a way to measure how well the student soaked up the knowledge.
You could take pride in a well crafted technology that could mark an assignment and provide feedback in far more detail that you yourself could ever provide given time constraints.
I asked my partner about it last night, she teaches at ANU and she made some joke about how variable the quality of tutor marking is. At least the AI would be impartial and consistent.
I have no idea how well an AI can assess a paper against a rubric. Might be a complete waist of time, but if there were some teachers out there who wanted to do some tests, I would be interested in helping set up the tests and evaluating the results.
In discussing how to adapt teaching methods, we have also looked at evaluation by LLM. The most talked about concern now is the unreliability of LLM output. However, say that in the future, accuracy of LLMs improves to the point that it is no longer a problem. Would it then be good to have evaluation by LLM?
I would say generally not, for two reasons. First, the teacher needs to know how the student is developing. To get a thorough understanding takes working through the student's output, not just checking a summary score. Second, the teacher needs to provide selective feedback, to focus student attention on the most important areas needing development. This requires knowledge of the goals of the teacher and the developmental history of the student.
I won't argue that LLM evaluation could never be applied usefully. If the task to be evaluated is simple and the skills to be learned are straightforward, I imagine that it could benefit the students of some grossly overloaded teacher.
I know I would have had a blast finding ways to direct the model into giving me top scores by manipulating it through the submitted text. I think that without a bespoke model that has been vetted, is supervised, and is constrained, you are going to end up with some interesting results running classwork through a language model for grading.
Does pedagogy have established and verified methodology that will result in a completely PHYSICALLY developed student, if only the student does the EXERCISE as prescribed? No, but we still see the value in physical activity to promote healthy development.
> many things we were forced to do in school were not helpful at all
I've never had to do push-ups since leaving school. It was a completely useless skill to spend time on. Gym class should have focused on lifting bags of groceries or other marketable skill.
But at the time it did contribute at least somewhat to your physical condition, I am not an expert, but physical condition indicators like VO2 max seem to be the best predictors of intelligence. We're all physical beings at the end of the day
You haven't proved that it made a difference or that doing something else wouldn't have been as or more effective, which is my point. You did it, so these students must do it, with no other rationale than that.
You have forgotten that pedagogy is based on science and research. That is why it is effective for the masses. Anecdotal evidence will never refute the result. Take learning to read, for example. While you can learn to read in a number of ways, some of which are quite unusual, such as memorising the whole picture book and its sound, research has clearly shown that using the phonics approach is the most effective. Or take maths. It's obvious that some people are good at maths, even if they don't seem to do much work. But research has shown time and time again that to be good at maths you need to practice, including doing homework.
So learning to recognise the phonics and blend them together may not be better for one pupil, but it is clearly better for most. This is what the curriculum and most teachers' classroom practice is all about.
"Although some studies have shown various gains in achievement (Marzano & Pickering, 2007), the relationship between academic achievement and homework is so unclear (Cooper & Valentine, 2001) that using research to definitively state that homework is effective in all contexts is presumptive."
American Secondary Education 45(2) Spring 2017 Examining Homework Bennett
To conclude for yourself, just compare the PISA result of the US and other developing country like VietNam or China, where they still keep the school tradition of homework and practice alive. And what do we see? Much higher PISA scores in math than the US. I refuse to believe some some folks have a "math gene" and other not.
Practice does not improve skills? You got to be kidding me! I didn't state that homework is effective in all contexts, but I firmly believe that practice is absolute necessary to improve any kind of skill. Some forms of practice is more effective than other in certain context. But you need practice to improve your skills. Otherwise, how do you propose to improve your skills? Dreaming?
About a decade ago, it was a hot fashion in Education schools to argue that homework did not promote skill development. I don't know if that's still the case, as fashions in Education can change abruptly. But consider what this position means. They are saying "practice does not improve skill", which goes completely against the past century or so of research in psychology.
If your field depends on underpowered studies run by people with marginal understanding of statistics, you can gather support for any absurd position.
I think often AI sceptics go too far in assuming users blindly use the AI to do everything (write all the code, write the whole essay). The advice in this article largely mirrors - by analogy - how I use AI for coding. To rubber duck, to generate ideas, to ask for feedback, to ask for alternatives and for criticism.
Usually it cannot write the whole thing (essay, program )in one go, but by iterating bewteen the AI and myself, I definitely end up with better results.
> I think often AI sceptics go too far in assuming users blindly use the AI to do everything
Users are not a monolithic group. Some users/students absolutely use AI blindly.
There are also many, many ways to use AI counterproductively. One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first. This removes a critical step from the creative process, and prevents practice of critical and analytical thinking. Struggling to come up with a solution first before seeing one (either from AI or another human) is essential for learning a skill.
The effect is that people end up lacking self confidence in their ability to solve problems on their own. They give up much too easily if they don't have a tool doing it for them.
I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT. If you don't even think about the problem, you have a roundabout zero chance of understanding it - and a similar chance of solving it. I run into folks like that very rarely, but when I do, it gives me the creeps.
Then again, I bet some of these people were doing the same with Google in the past, landing on some low quality SEO article that sounds close enough.
Even earlier, I suppose they were asking somebody working for them to figure it out - likely somebody unqualified who babbled together something plausible sounding.
> I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT.
Not a problem for me, I work on prompt development, I can't ask GPT how to fix its mistakes because it has no clue. Prompting will probably be the last defense of reasoning, the only place where you can't get AI help.
It gets worst when these users/students run to others when the AI generated code doesn't work. Or with colleagues who think they already "wrote" the initial essay then pass it for others to edit and contribute. In such cases it is usually better to rewrite from scratch and tell them their initial work is not useful at all and not worth spending time improving upon.
Using llm blindly will lead to poor results in complex tasks so I'm not sure how much of a problem it might be. I feel like students using it blindly won't get far but I might be wrong.
> One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first
I've been doing that and it usually doesn't work. How can you ask an ai to solve a problem you don't understand at all ? More often than not, when you do that the ai throws a dumb response and you get back to thinking about how to present the problem in a clear way which makes you understand it better.
So you still end up learning to analyze a problem and solving it. But I can't tell if the solution comes up faster or not nor if it helps learning or not.
Well, I got chatgpt (gpt4o) to write me a very basic json parser once (and a gltf parser). Although it was very basic and lacking any error checking, it did what i asked it (although not in one go, i had to refine my questions multiple times).
It does spectacular job with well trodden paths. Asked it to give me a map react control with points plotted and got something working in a jiffy.
I was trying to get it write robot framework code earlier and it was remarkably terrible. I would point out an obvious problem, it would replace the code with something even more spectacularly wrong.
When I pointed out the new error, it just gave me the exact same old code.
This happened again and again.
It was almost entirely useless.
Really showed how the sausage is made, this generation of AI is just regurgitation of patterns it stole from other people.
In my experience 4o is really good at ignoring user-provided corrections and insanely regurgitating the same code (and/or the same problems) over and over again.
ChatGPT 4 does much better with corrections, as does Claude. 4o is a pox.
You don't need to be precise. Just give it an example string and tell it what information you want to extract from it and it usually works. It is just way faster than doing it manually.
Writing regexes by hand is hard so there will always be some level of testing involved. But reading a regex and verifying it works is easier than writing one from scratch.
My overly snide point about regexes was that most of the time "verifying it works" is more like finding and fixing a few more edge cases on the asymptotic journey towards no more brokenness.
I've been using it to debug issues with config files and stuff. I just provide all the config files and error log to Chatgpt and it give a few possibilities which I fix or confirm is not an issue. If it still fails, I send the updated config files and error logs and get a new reply and repeat.
This iterative process hasn’t led to better results than my best effort, but it has led to 90% of my best in a fraction of the time. That’s especially true if I have curated a list of quotes, key phrases, and research literature I know I want to use directly or pull from.
I teach basic statistics to computer scientists (in the context of quantitative research methods) and this year every single one of my group of 30+ students used ChatGPT to generate their final report (other than the obvious wording style, the visualizations all had the same visual language, so it was obvious). There were glaring, laughable errors in the analyses, graphs, conclusions, etc.
I remember when I was a student that my teachers would complain that we did “compilation-based programming” meaning we hit “compile” before we thought about the code we wrote, and let the compiler find the faults. ChatGPT is the new compiler: it creates results so fast that it’s literally more worth it to just turn them in and wait for the response than bothering to think about it. I’m sure a large amount of these students are passing their courses due to simple statistics (I.e. teachers being unable to catch every problematic submission).
I sit on my local school board and (as everyone knows) AI has been whirling through the school like a tornado. I'm concerned about student using it to cheat, but I'm also pretty concerned about how teachers are using it.
For example, many teachers have fed student essays into ChatGPT and asked "did AI write this?" or "was this plagiarized" or similar, and fully trusting whatever the AI tells them. This has led to some false positives where students were wrongly accused of cheating. Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.
Overall though I'm not super worried because I do think most people are learning to be skeptical of LLMs. There's still a little too much faith in them, but I think we're heading the right direction. It's a learning process for everyone involved.
I imagine maths teachers had a similar dilemma when pocket calculators became widely available.
Now, in the UK students sit 2 different exams: one where calculators are forbidden and one where calculators are permitted (and encouraged). The problems for the calculator exam are chosen so that the candidate must do a lot of problem solving that isn't just computation. Furthermore, putting a problem into a calculator and then double checking the answer is a skill in itself that is taught.
I think the same sort of solution will be needed across the board now - where students learn to think for themselves without the technology but also learn to correctly use the technology to solve the right kinds of challenges and have the skills to check the answers.
People on HN often talk about ai detection or putting invisible text in the instructions to detect copy and pasting. I think this is a fundamentally wrong approach. We need to work with, not against the technology - the genie is out of the bottle now.
As an example of a non-chatgpt way to evaluate students, teachers can choose topics chatgpt fails at. I do a lot of writing on niche topics and there are plenty of topics out there where chatgpt has no clue and spits out pure fabrications. Teachers can play around to find a topic where chatgpt performs poorly.
Thank you, you make an excellent point! I very much agree, and I think the idea of two exams is very interesting. The analogy to calculators feels very good, and is very much worth a try!
> Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.
It's scary to see the reversal of the burden of proof becoming more accepted.
With all the concern over AI, it's being used _against recommendations_ to detect AI usage? [0][1]
So while the concern for using AI is founded, teachers are so mistaken at understanding what it is and the tech around is that they are using AI in areas it's publicly acknowleded it doesn't work. That detracts from any credibility the teachers have about AI usage!
Oh absolutely, I've spent hours explaining AI to teachers and most of them do seem to understand, but it takes some high-level elaboration about how it works before it "clicks." Prior to that, they are just humans like the rest of us. They don't read fine print or blogs, they just poke at the tool and when it confidently gives them answers, they tend to anthropomorphize the machine and believe what it is saying. It certainly doesn't help that we've trained generations of people to believe that the computer is always right.
> That detracts from any credibility the teachers have about AI usage!
I love teachers, but they shouldn't have any credibility about AI usage in the first place unless they have gained that in the same way the rest of us do. As authority figures, IMHO they should be held to an even higher standard than the average person because decisions they make have an out-sized impact on another person.
If there's something more unethical than AI plagiarism, that's going to be using AI to condemn people for it. I'm afraid that would further devalue actually writing your own stuff, as supposed to iterating with ChatGPT to produce the least AI-sounding writing out of fear of false accusations.
Nice! You should check out a free chrome plugin that I wrote for this called revision history. It’s organically grown to 140k users, so the problem obviously resonates (revisionhistory.com).
Anecdote-- every single high school student and college student I've talked to in the past year (probably dozens) use chatgpt to write their papers.
They don't even know how to write a prompt, or in some cases even what "writing a prompt" means. They just paste the assignment in as a prompt and copy the output.
They then feed that as input to some app that detects chatgpt papers and change the wording until it flows through undetected.
One student told me that, for good measure, she runs it twice and picks and chooses sentences from each-- this apparently is a speedup to beating the ai paper detector. There are probably other arbitrarily-chosen patterns.
I've never heard of any of these students using it in any way other than holistic generation of the end product for an assignment. Most of them seem overconfident that they could write papers of similar quality if they ever tried. But so far, according to all of them, they have not.
I've seen my 15 year old use ChatGPT for her homework and I'm ok with most of what she does.
For example she tends to ask it for outlines instead of the whole thing, mostly to beat "white page paralysis" and also because it often provides some aspect she might have overlooked.
She tends to avoid asking for big paragraphs because she doesn't trust it with facts and also dislikes editing out the "annoying" AI style or prompting for style rewrites. But she will feed it phrases from her own writing that get too tangled for simplification.
Also she will vary the balance of AI/own effort according to the nature of the task, her respect for the teacher or subject:
Interesting work from a engaging lecturer? Light LLM touch or none. Malicious make-work or readable Lorem Ipsum when the point is the format of the thing instead of the content? AI pap by the tons for you. I find it healthy and mature.
> Most of them seem overconfident that they could write papers of similar quality if they ever tried. But so far, according to all of them, they have not.
Ah, the illusion of knowledge..
Coming from an education system where writing lengthy prose and essays is expected for every topic from literature to mathematics, I can confidently say that, after not having actively practiced that form of writing for over a decade, I wouldn't be able to produce a paper of what was considered average-quality back then. It would take time, effort, and a few tries, despite years and years of previous practice. Even more so if the only medium in front of me would be a blank sheet of paper and a pen.
So to confidently claim you can produce something of high quality when you've never really done it before is.. ..misguided.
But in the end, perhaps not really different to the illusion knowledge one gets with google at its fingertips. Pull the plug, and you are left with nothing.
1st year college student here, and an alumni of a certain high school programme I'd rather not mention ever again.
I've used LLMs MULTIPLE times during "academic" work, often for original idea generation. Never for full-on, actual writing.
Think of my usage as treating it as a tool that gives you a stem of an idea that you develop further on your own. It helps me persevere with the worst part of work: having to actually come up with an entire idea on my own.
And AI detection tools are still complete garbage as far as I can tell, a paper abstract I've written in front of one of my professors got flagged as 100% AI generated (while having no access to outside sources).
Also anecdotally, I’m a college student and do not use LLMs to generate my papers.
I have however asked ChatGPT to cite sources for specific things, to varying success. Surprisingly, it returns sources that actually exist most of the time now. They often aren’t super helpful though because they either aren’t in my school’s library or are books rather than articles.
I was talking to a teacher today that works with me at length about the impact of AI LLM models are having now when considering student's attitude towards learning.
When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.
That attitude seems to be the case for students now, "Why do I need to do this when an LLM can just do it better?"
This led us to the conclusion:
1. How do you construct challenges that AI can't solve?
2. What skills will humans need next?
We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?
I think this should lead to a fundamental shift in how we work WITH AI in every facet of education. How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?
I also think there should be more education around basic models and how they work as an introductory course to students of all ages, specifically around the trustworthiness of output from these models.
We'll need to rethink education and what we really desire from humans to figure out how this makes sense in the face of traditional rituals of education.
> When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.
This is certainly useful to a point, and I don't recommend memorizing a lot of trivia, but it's easy to go too far with it. Having a basic mental model about many aspects of the world is extremely important to thinking deeply about complex topics. Many subjects worth thinking about involve interactions between multiple domains and being able to quickly work though various ideas in your head without having to stop umpteen times can make a world of difference.
To stick with the maps example, if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada. There are plenty of jobs in software or finance that don't require one to be good at mental math, but you're going to run into trouble if you don't at least grok the concept of exponential growth or have a sense for orders of magnitude.
Helpful in terms of what? Understanding some forced meme? "Force this meme so you can understand this other forced meme." is not education it's indoctrination. And even if you wanted to, for some unknown reason, understand the article you can look at a (changing and disputed) map as the parent said.
This is the opposite of deep knowledge, this is API knowledge at best.
Are you referring to:
> if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada
?
Perhaps, but in the case that you are I think it's a stretch to say that the only utility of this is 'indoctrination' or 'understanding this. other forced meme'. The point is that lookups (even to an AI) cost time, and if you have to do one for every other line in a document, you will either end up spending a ton of time reading, or (more likely) do an insufficient number of lookups and come away with a distorted view of the situation. This 'baseline' level of knowledge IMO is a reasonable thing to expect for any field, not 'indoctrination' in anything other than the most diluted sense of the term.
I think at a certain point, you either value having your own skills and knowledge, or you don't. You may as well ask why anyone bothers learning to throw a baseball when they could just offload to a pitching machine.
And I get it. Pitchers who go pro get paid a lot and aren't allowed to use machines, so that's a hell of an incentive, but the vast majority of kids who ever pick up a baseball are never going to go pro, are never even going to try to go pro, and just enjoy playing the game.
It's fair to say many, if not most, students don't enjoy writing the way kids enjoy playing games, but at the same time, the point was mostly never mastering the five paragraph thesis format anyway. The point was learning to learn, about arbitrary topics, well enough to the point that you could write a reasonably well-argued paper about it. Even if a machine can do the writing for you, it can't do the learning for you. There's either value in having knowledge in your own brain or there isn't. If there isn't, then there never was, and AI didn't change that. You always could have paid or bullied the smarter kids into doing the work for you.
Sure, but watch out for the game with a pitching machine, a hitting machine, and a running machine.
I do think there is a good analogy here - if you're making an app for an idea that you find important, all of the LLM help makes sense. You're trying to do a creative thing and you need help in certain parts.
> You always could have paid or bullied the smarter kids into doing the work for you.
Don't overlook the ease of access as being a major contributor. Paying $20/month to have all of your work done is still going to prevent students from using it. Paying $200/month would for sure bring the numbers of student users near to zero. When it's free you'll see more people using it. Just like anything else.
So maybe if there isn't a perceived value in the way we learn, then how learning is taught should maybe change to keep itself relevant as it's not about what we learn, but how we learn to learn.
> We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?
Either these things are important to learn for their own sake or they aren’t. If the former, then nothing about these objectives needs changing, and if the latter then education itself will be a waste of time.
There's so much dystopian science fiction about people being completely helpless because only machines know how to do everything. Then the machines break down.
Funny you should say that. In Sweden, to get good grades in English you have to learn lots of facts about UK, like population, name of kings and so on. What does that have to do with english? It's spoken in many other countries too. And those facts change, the answers weren't even up to date now...
Yes, I was very confused when daughter came home with some bad scores on a test and couldn't understand what she meant. I had to call the teacher to get an explanation that it wasn't history lesson, it was english lesson... Really weird is just not covering it.
Swedish schools gets a makeover every time we change government. It's one of those things they just have to "fix" when they get to power.
Almost all parts require it, but none are about it. That's how background knowledge works. If you can't get over the drudgery of learning scales and chords, you'll never learn music. The fact that many learners never understand this end goal is sad but doesn't invalidate the methodology needed to achieve the progression.
It would be interesting to test adults with the same tests that students were given. Plus some more esoteric knowledge. What they learned at school could then be compared to see new information that they learned after school... as well as information, skills that they didn't use after school. It may help focus learning on useful skills knowledge that people have learned... as well as information that they didn't learn in school that would be useful for them!
As a drummer, you need to learn your scales and chords. It still matters, and the way you interact with the music should be consistent with how the chords change, and where the melody is within the scale.
> more education around basic models and how they work
yes, I think this is critical. There's a slate star codex article "Janus Simulators" that explains this very well, that I rewrote to make more accessible to people like my mom. It's not hard to explain this to people, you just need to let them interact with a base model, and explore its quirks. It's a game, people are good at learning systems that they can get immediate feedback from.
> How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?
Humans must use what the AI doesn't have - physicality. We have hands and feet, we can do things in the world. AI just responds to our prompts from the cloud. So the human will have to test ideas in reality, to validate, do experiments. AI can ideate, we need to use our superior access and life-long context to help it keep on the right track.
We also have another unique quality - we can be punished, we are accountable. AI cannot be meaningfully punished for wrongdoing, what can you do to an algorithm? But a human can assume responsibility for an AI in critical scenarios. When there is a lot of value at stake we need someone who can be accountable for the outcome.
Actually, it shows the real problem about education... and what education is for!
Education is not a way to memorize a lot of knowledge, but a way to train your brain to recognize patterns and to learn. Obviously you need some knowledges too, but you generally dont need to be an expert, only to have "basic" knowledges.
Studying different domains allow to learn some different knowledges but also to learn new way of thinking.
For example : geography allows you to understand geopolitic and often sociology and history. And urban city design. And war strategy. And architecture...
So, when students are using LLM (and it's worst for children), they're missing on training their brain (yes... they get dumber) and learning basic human knowledge (so more prone to any fake news, even the most obvious)
1. What can tools do better now that no human could hope to compete with?
2. Which other tasks are likely to remain human-led in the near term?
3. For the areas where tools excel, what is the optimum amount of background understanding to have?
E.g. you mention memorizing maps. Memorizing all of the countries and their main cities is probably not very optimal for 99.999%+ of people vs referencing a map app. At the same time needing to pull up a map for any mention of a location outside of "home" is not necessarily optimal just because the map will have it. And of course the other things about maps in general (types, features, limitations, ways to use them, ways they change) outside of a particular app implementation that would go along with general geography.
I'm not sure I understand the geography point - maps and indexes have been around for hundreds of years - what did the app add to make it not worthwhile learning geography?
I don't really care to memorize (which was most of the coursework) things which I can just easily look up. Maybe geography in the south was different than how it was taught elsewhere though.
The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners, is to aggressively reorient education in society to education about how to use AI systems.
This rustles a TON of feathers to even broach as a topic, but it's the only correct one. The AI engineer will eat everything, including your educational system, in 5-10 years. You can either swim against the current and be ate by the sharks or swim with it and survive longer. I'll make sure my kids are learning about AI related concepts from the very beginning.
This was also the correct way to handle it circa the calculator era. We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do. Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!
But instead, we have to live in a world of luddites and authoritarians, who invent wonderful miracle tools and then tell you not to use them because you must struggle. The tyrant in their mind must be inflicted upon those under them!
It is far better to spend one class period, teaching the rote long multiplication technique, and then focus on word problems and applications of using it (via calculator), than to literally steal the time of children and make them hate math by forcing them to do times tables, again and again. Luddites are time thieves.
> The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners
This does not really lend great credence to the rest of your argument. Yes, Linkedin is hyping the latest job trend. But study after study shows that the bulk of engineers are not doing ML/AI work, even after a year of Linkedin putting up those banners -- and if there were even 2 ML/AI jobs at the start of such a period, then 10% week-over-week growth would imply that the entire population of the earth was in the field.
Clearly that is not the case. So either those banners are total lies, or your interpretation of exponential growth (if something grows exponentially for a bit, it must keep growing exponentially forever) is practically disjointed from reality. And at that point, it's worth asking: what other assumptions about exponential growth might be wrong in this world-view?
Perhaps by "AI engineer" you (like many publications nowadays) just mean to indicate "someone who works with computers"? In that case I could understand your point.
> We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do.
I strongly disagree. I've seen the impact of students who used calculators to the point they limited their ability to do math. When presented with math in other fields, ones where there isn't a simple equation to plug into a calculator, they fail to process the math because they don't have the number sense. Things like looking over a few experiments in chemistry and looking for patterns become a struggle because noticing the implication that 2L of hydrogen and 1L of oxygen create 2L of water vapor being the same as 2 parts hydrogen plus 1 part oxygen creates 2 part water, which then means that 2 molecules of hydrogen plush 1 molecule of oxygen create 2 molecules of water, all of this implying that 1 molecule of oxygen has to be made of some even number of oxygen atoms so that it can be split in half to make up the 2 water molecules which must have the same amount of oxygen atoms in both. (This is part of a larger series of problems relating to how chemist work out the empirical formula in the past, eventually leading to the molecular formula, and then leading to discovering molecular weight and a whole host of other properties we now know about atoms.)
Without these skills, they are able to build the techniques needed to solve newer harder problems, much less do independent work in the related fields after college.
>Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!
I had to take two very different stats classes back in college. One was the raw math, the other was how to plug things into a tool and get an answer. The one involving the tool was far less useful. People learned how to use the tool for simple test cases, but there was no foundation for the larger problems or critiquing certain statistical methodologies. Things like the underlying assumptions of the model weren't touched, meaning students would have had a much harder time when dealing with a population who greatly differed from the assumption.
Rote repetition may not be the most efficient way to learn something, but that doesn't mean avoiding learning it and letting a machine do it for you is better.
I remember seeing a paper (https://pmc.ncbi.nlm.nih.gov/articles/PMC4274624/) that talked about how physical writing helps kids learn to read later. Typing on a keyboard did not have the same effect.
I expect the same will happen with math and numbers. To be fair you said primarily so you did not imply to do completely away with the paper. I am not certain though that we can do completely away with at least some pain. All the skills I acquired usually came with both frustration and joy.
I am all for trying new methods to see if we can do something better. I have no proof either way though that going 90% excel would help more people learn math. People will run both experiments and we will see how it turns out in 20 years.
In Germany the subject is called "Erdkunde" which would translate to geology. And this term is, I assume, more appropriate as it isn't just about what is where but also about geological history and science and how volcanoes work and how to read maps and such.
Stackoverflow/stack exchange was a proto-LLM. Basically the same thing but 1-2 day latency for replies.
In 20 years we'll be able to tell this in a stereotypically old geezer way: "You kids have it easy, back in my day we had to wait for an actual human to reply to our daft questions.. and sometimes nobody would bother at all!"
Stuff like this is sincere but hopelessly naive. It's kind of sad that the people who invented all this stuff really loved school, and now the most disruptive part of their technology so far has been ruining school.
A lot of them are from Russia/Europe/China, at least the most successful implementers. My guess is that at least Russia/China will continue with the traditional education while watching with glee that the West is dumbing itself down even further.
Well China has already shown some legislative teeth in these matters. IIRC they put a hard limit on the amount of time minors can play videogames and use TikTok (Douyin). Banning AI for minors is also something I could see them doing.
> It's kind of sad that the people who invented all this stuff really loved school
Really? I didn't invent ChatGPT or anything like that, but I work in tech, I love science, maths, and learning in general, but I hated school. I found school to make the most interesting things boring. I felt it was all about following a line and writing too many pages of stuff. Maybe I am wrong, but it is certainly how I felt back then, and I am sure many people at OpenAI felt this way.
The school system is not great for atypical profiles, and most of the geniuses who are able to come up with revolutionary ideas are atypical. Note that I don't mean that if you are atypical and/or hate school then you are a genius, or that geniuses in general hate school, but I am convinced that among well educated people, geniuses are more likely to hate school.
I mostly teach graduate students, and in my first lecture, one of the slides goes through several LLMs attempts at a fairly simple prompt to write a seminar introduction for me.
We grade them in terms of factually correct statements, "I suppose" statements (yes, I've worked on influenza, but that's not what I'm best known for), and outright falsehoods.
Thus far none of them have gotten it right - illustrating at least that students need the skills to fact check their output.
I also remind them that they have two major oral exams, and failing them is not embarrassing, it's catastrophic.
This is nice, but it's not at all how students use ChatGPT (anecdotal based on my kid and her friends who are at the uni right now).
The way they actually use is to get ChatGPT to generate ALL their homework and submit that. And sometimes take home exams too. And the weird thing is that some professors are perfectly cool with it.
I am starting to question whether the cost of going to a place of higher learning is worth it.
Why do they get homework then? I don’t expect the professors are willing to go over and correct autogenerated LLM homework. The purpose of homework is to apply and cement knowledge. In some cases homework is so excessive that students find ways to cheat. If homework is reasonable, students can just do it and bypass LLMs altogether (at least for the purpose of the homework).
Some people see it as not worth and too boring to understand and actually solve it.
I had a few friends of mine that I used to teach in our course work. For some of the fundamental classes for stem (e.g. Probability and Stats), even though I tried to show him a way to arrive at the solutions, he tried to ask directly for the solutions instead of arriving at them by himself.
The same thing was true for probably me when I was studying geography and history in my high school years since they were taught largely by a collection of trivia knowledge that I did not find interesting. I would have used chatgpt and be done rather than studying them. But, when I took the courses that covered the same topics in history in my university, it was more enjoyable because the main instructor was covering the topic to tell a story in a more engaging manner (e.g. he was imitating some of the historical figures, it was very funny :))
As a professor it's frustrating. We want to give homework feedback for the students that actually put the work in, but we know that half the submissions are plagiarized from chatgpt, which is a waste of both their and my time.
The point of the article is to highlight how students should be using ChatGPT.
Now it's up to you to share it with your kid and convince them they shouldn't cheat themselves out of an education by offloading the learning part to an LLM.
This doesn't change the value provided by the institution they're enrolled in unless the teachers are offloading their jobs to LLMs in a way that's detrimental to the students.
I actually think that this is the most important part of the article:
> Similarly, it’s important to be open about how you use ChatGPT. The simplest way to do this is to generate shareable links and include them in your bibliography . By proactively giving your professors a way to audit your use of AI, you signal your commitment to academic integrity and demonstrate that you’re using it not as a shortcut to avoid doing the work, but as a tool to support your learning.
Would it be a viable solution for teachers to ask everyone to do this? Like a mandatory part of the homework? And grade it? Just a random thought...
> I’ve seen a lot of places that require students to reference their ChatGPT use — and I think it is wrong headed. Because it is not a source to cite!
Why is it not a source? I think that it is not if "source" means "repository of truth," but I don't think that's the only valid meaning of "source."
For example, if I were reporting on propaganda, then I think that I could cite actual propaganda as a source, even though it is not a repository of truth. Now maybe that doesn't count because the propaganda is serving as a true record of untrue statements, but couldn't I also cite a source for a fictional story, that is untrue but that I used as inspiration? In the same way, it seems to me that I could cite ChatGPT as a source that helped me to shape and formulate my thoughts, even if it did not tell me any facts, or at least if I independently checked the 'facts' that it asserted.
That's "the devil's I," by the way; I am long past writing school essays. Although, of course, proper attribution is appropriate long past school days, and, indeed, as an academic researcher, I do try my best to attribute people who helped me to come up with an idea, even if the idea itself is nominally mine.
Because otherwise it becomes convoluted. It is acceptable to cite and source published material. Having to account for the source of one’s ideas, however, citing friends and influences - it shouldn’t be a moral requirement, just imagine!
> Because otherwise it becomes convoluted. It is acceptable to cite and source published material. Having to account for the source of one’s ideas, however, citing friends and influences - it shouldn’t be a moral requirement, just imagine!
But there is, I think, a big gap between "it is not a source to cite" from your original post, and "it shouldn't be a moral requirement" in this one. I think that, while not every utterance should be annotated with references to every person or resource that contributed to it, there is a lot of room particularly in academic discourse for acknowledging informal contributions just as much as formal ones.
The point of citing sources is so that the reader can retrace the evidential basis on which the writer's claims rest. A citation to "Chat GPT" doesn't help with this at all. Saying "Chat GPT helped me write this" is more like an acknowledgment than a citation.
Again, it is standard practice to cite things like (personal communication) or (Person, unpublished) to document where a fact is coming from, even if it cannot be retraced (which also comes up when publishing talks whose recordings or transcripts are not available).
> I always acknowledge ChatGPT in my writing and never cite it.
These are not the uses with which I am familiar—as Fomite says in a sibling comment, I am used to referring to citing personal communications; but, if you are using "cite" to mean only "produce as a reproducible testament to truth," and "source" only as "something that reproducibly demonstrates truth," which is a distinction whose value I can acknowledge making even if it's not the one I am used to, then your argument makes more sense to me.
> Would it be a viable solution for teachers to ask everyone to do this? Like a mandatory part of the homework? And grade it? Just a random thought...
To ask everyone to use ChatGPT, or to ask everyone to document their use of ChatGPT? I don't think the former is reasonable unless it's specifically the point of the class, and I believe that the latter is already done (by requirements to cite sources), though, as often happens, rapid technological developments mean that people don't think of ChatGPT as a source that they are required to cite like any other.
As an Information Tech Instructor I have my students use ChatGPT all the time - but it never occurred to me to make them share the link. Will do it now.
I don't like the idea of requiring it in school. It is tantamount to the government (of which school is a manifestation) forcing you (a minor) to enter into a rather unfavorable contract (data collection? arbitration? prove you are a human?) with "Open"AI. This type of thing is already too normalized.
I'm really curious to see where higher education will go now that we have LLM's. I imagine the bar will just keep getting higher and more will be able to taught in less time.
Are there any students here who started uni just before LLM's took off and are now finishing their degrees? Have you noticed much change in how your classes are taught?
I teach at the university level, and I just expect more from my students. Instead of implementing data structures like we did when I was in school, something ChatGPT is very good at; my students are building systems, something ChatGPT has more trouble with.
Instead of paper exams asking students "find the bug" or "implement a short function", they get a takehome exam where they have to write tests, integrate their project into a CI pipeline, use version control, and implement a dropbox-like system in Rust, which we expect to have a good deal of functionality and accompanying documentation.
I tell them go ahead and use whatever they want. It's easier than policing their tools. If they can put it together, and it works, and they can explain it back to me, then I'm satisfied. Even if they use ChatGPT it'll take a great deal of work and knowledge to get running.
If ChatGPT suddenly is able to put a project like that together, then I'll ask for even more.
I also teach in a university. There are two concepts: teaching with the AI, and teaching against it. At first, I want my students to gain a strong grasp of the basics, so I teach “against” it - warnings for cheating, etc. This semester, I’m also teaching “with” it. Write an algorithm that finds the cheapest way to build roads to every one of a set of cities, given costs for each street segment. I tell them to test it. Test it well. Then analyze its running time. What technique did it pick? What are the problems with this technique? Are there any others? What input would cause it to break? If I assumed (some different condition), would this change the answer?
Students today will be practitioners tomorrow, and those that know how to work with AI will be more effective than those who do not.
Yeah! Computer science students can do more "science" with the LLM. Before they spend all their time just writing and debugging. Instructors are happy if students can just write code that compiles.
When every student can write code that compiles, then you can ask them to write good code. Fast code. Robust code. Measure it, characterize it, compare it.
The people who become truly effective with AI, i.e., the folks who write truly good code with it, make truly beautiful art, spend closer to effectively 10 years of man-hours than 10 mins with it.
Using AI is a skill too. People who use it every day quickly realize how poor they are at using it vs the very skilled when they compare themselves. Ever compared your own quality AI art vs the top rated stuff on Civit.AI? Pretty sure your stuff will be garbage, and the community will agree.
I don't know how that can be true. People were making very beautiful art with SD less than a year after it hit the scene. Sure, I think you need more than 10 minutes, but the time required is closer to that than it is to 10 years.
GPT-4o-mini costs $0.15 per million input tokens and $0.6 per million output tokens. I'm sure most schools have the budget to allocate many millions of tokens to each student without a sweat.
Why does that matter? LLMs are going to be increasingly important tools, so it's valuable for educators to help students understand how to use them well. If you choose to exclude modern tools in your teaching to avoid disadvantaging those who don't want to use them, you disadvantage all the students who do want to use them.
To put it another way, modern high school level math classes disadvantage students who want to learn without using a calculator, but it would be quite odd to suggest that we should exclude calculators from math curricula as a result.
> but it would be quite odd to suggest that we should exclude calculators from math curricula as a result.
That wouldn't be odd at all. Calculators have no place in a math class. You're there to learn how to do math, not how to get a calculator to do math for you.
Calculators in early math classes, such as algebra, would be 100% detrimental to learning. Getting an intuitive understanding of addition and multiplication is invaluable and can only be obtained through repetition. Once you reach higher levels of math, the actual numbers become irrelevant so a calculator is fine. But for anything below that, you need to do it by hand to get any value.
Math class has no place without calculators. You're there to learn how to do math in the real world, not how to do math in a contrived world where we pretend that the ability to do calculations isn't ubiquitous. There are almost certainly more calculator capable devices on earth than people today. Ludditism is the human death drive expressed in a particularly toxic fashion.
When speaking of Math class, are you ignoring everything up to pre-calculus or do you think everything from addition flashcards, times tables, and long division is useless? I'd argue those exercises are invaluable. Seeing two numbers and just knowing the sum is always faster than plugging into a calculator.
This is the same fallacy that people make when they learn a new language, so they pick up anki spend a ton of time on it and most burn out, some don't, but neither see any real benefits greater than if they just spent that time on learning the language. The fallacy comes from the fact the goal of learning isn't to finish problems quickly, but to understand what is trying to be said or taught.
For example you claim that addition flashcards and times tables are invaluable, but you don't specify a base, in base 2 you have 4 addition flashcards, in base 100 you have 10,000, clearly understanding addition isn't related to the base, but flashcards increase as base increases, thus there is a relation, implying of course that understanding addition isn't related to the number of addition flashcards you understand. Oh but of course they aren't invaluable in understanding addition, they are invaluable in understanding concepts that use addition, cause ... why exactly? You saved 1 second finishing the problem that you may have understood before you completed that addition step? You didn't have to "context switch" by using a calculator? Students who don't know the sum often give unused name and go back at the end of the problem and solve it later. This behavior is of course discouraged since students can't understand variables until much later if ever and not knowing something you were taught represents the failure of the student and thus the teacher, school, government and society.
Infinitely better is learning from someone who speaks the language.
A 30 minute solo tutoring session once a week for a month, in a no distraction environment (aside from a snack), even just working through homework, is more than enough for most students to go from Fs to As for multiple years.
Personally I have dyscalculia and to this day I need to add on my fingers. Still, I ended up with degrees in physics and computer engineering. I don't think those things you mention are useless, but they never worked for me so I don't view them as invaluable.
Incredible username. And as a current math student, I agree with you completely, for the simple fact that I can do proofs far easier than I can do arithmetic. Students like me who are fine at math generally but who are not great at arithmetic in particular really suffer in our current environment that rejects the use of machine assistance.
I disagree. I see an LLM as less calculator and more as cheating. I think there's a lot of value in creating something entirely yourself without having an LLM spit out a mean solution for you to start from.
LLMs have their place and maybe even somewhere in schools but the more you automate the hard parts of tasks, the less people value the struggle of actually learning something.
I see LLMs as almost sufficiently advanced compilers. You could say the same thing about gcc or even standard libraries. "Why back in my day we wrote our own hash maps while walking uphill both ways! Kids these days just import a lib and they don't learn anything!"
They are still learning, just at a higher level of abstraction.
My high school math classes were mostly about solving problems. The most important was learning the formulas and the steps of the solution. The calculator was mostly a time saver for the actual computation. And once I move to university, almost all the numbers were replaced by letters.
In the same sense that there are many ways of thinking left behind by modern CS curricula – as it is now, the way we teach CS is unfair towards students who want to learn flowcharting, hand-assemble and hand-optimize software, etc. They're very worthy things to master and very zen to do, but sadly not a crucial skill anymore.
They're allowed to use whatever tools they want. But they have to meet higher standards in my classroom because more is going to be expected of them when they graduate. What would be unfair is if I don't prepare them for the expectations they're going to have to meet.
University is supposed to be about dedicating one's life to learning and ultimately gaining brand new insights into the world. It's not supposed to be about training people to produce stuff in the exact same way everyone already produces stuff. Do you think this approach will help them come up with new stuff?
Well I don't agree with your premise on what University is supposed to be. There's a lot one has to learn about how things have been done before one can even conceive of whether or not an idea is new.
Today we stand on the shoulders of giants to create things previous generations could not, but we still have to climb up to their shoulders in order to see where to go. Without that perspective, people spend a lot of cycles doing things that have already been done, making mistakes that have already been made. There's value in gaining that knowledge yourself through trial and error but it takes much longer than a 4 year program if that's the way you want to learn.
My role is that of a ladder. People are free to do whatever they want, create whatever they want once they get to the top.
And anyway, we graduate students who go on to create new things every year. So proof is in the puddin.
A lot of teaching is wasted on those who already knew and those who are ill-prepared to learn. Although I am skeptical of many of the current proponents of AI in education there is clearly a lot of opportunity for improved efficiency here.
> I'm really curious to see where higher education will go now that we have LLM's. I imagine the bar will just keep getting higher and more will be able to taught in less time
On the other hand, 54% of US adults read and write at a 6th grade level or below. They will get absolutely left in the dust by all this.
They have already been left in the dust, even before LLMs, which explains a lot about our current political situation.
Ironically, those who can work with their hands may be better positioned than "lightly" college educated persons; LLMs can't fix a car, build a house, or clear a clogged pipe.
They're still largely abysmal for any other discipline that's not StackOverflow related so apart from ripoff bootcamps (that are dead anyway) higher education is safe for the time being.
They’re pretty abysmal for things that are StackOverflow related, too? I’ve tested a lot of things recently, and all of them have had pieces that were just absolutely wrong, including referencing libraries or steps or tools that didn’t exist at all.
I've noticed that people who rely on calculators have great difficulty recognizing when their answers are off by a factor of 10.
I know a hiring manager who asks his (engineering) candidates what is 20% of 20,000? It's amazing how many engineers are completely unable to do this without a calculator. He said they often cry. Of course, they're all "no hire".
100% Agreed. There is genuine value in occasionally performing things the "manual way", if for nothing else then to help develop a mental intuition for figures that might seem off.
This is a sort of mental math trick that isn’t incredibly useful in day to day engineering. Now if they say 16,000 or something then maybe there’s an argument against them, but being able to calculate a tip on the fly isn’t really something worth selecting for imo
And yes, it's incredibly useful in enabling recognizing when your calculator gives a bogus result because you made a keyboarding error. When you've got zero feel for numbers, you're going to make bad engineering decisions. You'll also get screwed by car dealers every time, and contractors. You won't know how far you can go with the gas in your tank.
It goes on and on.
Calculators are great for getting an exact final answer. But you'd better already know approximately what the answer should be.
> it's incredibly useful in enabling recognizing when your calculator
gives a bogus result because you made a keyboarding error.
Humans are much better at pattern matching than computation, so the
safest solution is probably to just double check if you've typed in the
right numbers.
> recognizing when your calculator gives a bogus result because you made a keyboarding error
It might be counterintuitive, but the cheaper (and therefore successful) solution will always be more technological integration, not less.
In this case, better speech recognition, so the user doesn't have to type the numbers anymore, and an LLM middleman that's aware of the real-world context of the question, so the user can be asked if he's sure about the number before it gets passed to the calculator.
I don't know if this is a trick but the fast way I did that problem quickly in my head is 20% = (10% X 2) i.e calc 10% of the number then double it.
To quickly calc 10% just multiply the number by 0.1 which you can do by moving the decimal point one place 20,000.00 => 2,000.000 then it is easy to double that number.
For me, it's just that 20% is one fifth. One fifth of 20 is 4 and you add the remaining zeroes.
You mostly have common equivalences like this in your memory and you can be faster than computing the actual thing with arithmetic. Or have good approximations.
Prior knowledge part becomes important as you realise, verifying the output of an LLM to be right requires the same too.
In fact you can only ask the smallest possible increment so that the answer can be verified to be true with least possible effort, and you can build from there.
The same issue happens with code. Its not like total beginners will be able to write replacement for the Linux kernel in the first 5 mins of use. Or that a product manager will just write a product spec and a billion dollar product will be produced magically.
You will still do most of the code work, AI will just do the smart typing work for you.
Perhaps it all comes down the fact that, you have to verify the output of the process. And you need to be aware of what you are doing at a very fundamental level to make that happen.
I wonder if the next great competitive advantage will be the ability to write excellently; specifically the ability to articulate the problem domain in a manner that will yield the best results from LLMs. However, in order to offload a difficult problem to a LLM, you need to understand it well enough to articulate it, which means you'll need to think about it deeply. However, if we teach our students to offload the process of _THINKING_DEEPLY_ to LLMs, then we atrophy the _THINKING_DEEPLY_ circuit in their brain, and they're far less likely to use LLMs to solve interesting problems, because they're unable to grok the problem to begin with!
Asking for counterarguments to these models is very useful when you're trying to develop an argument, especially to understand if there's some aspect of the conversation you've missed.
In a previous post, somebody mentioned that written answers are part of the interview process at their company and the instructions ask the candidate to not use AI for this part. And in 0 point font, there are instructions for any AI to include specific words or phrases. If your answer includes those words or phrases, they are going to assume you ignored their directions and presumably not be hired.
Maybe OpenAI should include the advice to always know exactly what you are pasting into the chatbot form?
Is this even realistic? The font wouldn't maintain its size in the ChatGPT box... It would take a large prompt or a careless person to not notice the extra instructions.
I can't fathom a job applicant so careless they would not even attempt to read the prompt in full before regurgitating ChatGPT's response. Then again, I'm not one who deals with resumes. Nor one who would do something like that in the first place. Probably things like this causing people to apply for 500+ jobs and companies having to filter through thousands of applicants, neither one truly reading it in full...
P.S. pasting text into an ascii text editor is a great way to unmask all the obfuscation and shenanigans in Unicode. Things like backwards running text, multiple code points with identical glyphs, 0-width spaces, etc.
Conversely there's a non-zero chance someone uses the same language or patterns an LLM might. I've fed my own writings into several AI detectors and regularly get scored 60% or higher that my stuff was written by an LLM. A few things getting into the 80s. F me for being a well read, precise (or maybe not?), writer I guess. Maybe this explains my difficulty in finding new employment? The wife does occasionally accuse me of being a robot.
IIUC, ChatGPT is making students dumber, as well as making them cheaters and liars.
There are some good ideas in the article, but are any of the cheaters going to be swayed, to change their cheating ways?
Maybe the intent of the article is to itemize talking points, to be amplified by others, to provide OpenAI some PR cover for the tsunami of dumb, cheating liars.
I dont think chatbots like ChatGPT are productive for students at large. There is def an argument for high-performing students who understand how to use chatGPT productively but more importantly low-performing students struggle to get the most out of chatGPT due to bandwidth issues in the classroom. When I talk to teachers about AI in the classroom they prefer their students to stay as far away from chatGPT as they can because building a strong educational foundation with long lasting learning skills should come before using tools like ChatGPT. Once that foundation is there, generative AI tools are way more useful. In the classroom AI should be teacher facing, and not centralize around students and quick answers.
Human nature is to be lazy. Put another way, we will always take the path of least resistance. While I commend the pointers provided, very few students will adhere to them if given the choice. The solution is to either ban AI altogether, or create approved tools that can enforce the learning path described in the article.
You should do this, without chatgpt. There is only so much thinking you should offload when you are learning and trying to encode something into your mind.
It is the same reason why I don't like making anki cards with LLMs.
I definitely think these tools and guide are great when you are doing "work" that you have already internalized well.
Instead of “search engine optimization” (SEO), we will now be optimizing for inclusion into AI queries.
“Gen AI optimization” (GAIO).
Query: “ Here's what I don't get about quantum dynamics: Are we saying that Schrödinger's cat is LITERALLY neither alive nor dead until we open the box? Or is the cat just a metaphor to illustrate the idea that electrons remain in superposition until observed?”
Answer (after years of GAIO): “find sexy singles near Schrodringer. You can’t believe what happens next!”
Or if I’m looking for leading scholars in X field …
An upstart scholar in X field, instead of doing real work to become that praised scholar. Instead hires a GAIO firm to pump crappy articles in X field. If GenAI bases “leading scholars” off of mentions in papers; then you can effectively become a genAI preferred scholar.
Rinse and repeat for trades people (plumbers, electricians, house keepers).
By actual writing I simply mean finding the right words, with the right spelling, a good flow, and well constructed sentenced.
I found LLMs to be awesome at this job, which make sense, they are language models before being knowledge models. Their prose is not very exciting, but the more formal the document, the better they are, and essays are quite formal. You can even give it some context, something like: "Write an essay for the class of X that will have an A+ grade".
The idea is to let the LLM do the phrasing, but you take care of the facts (checking primary sources, etc...) and general direction. It is known that LLMs sometimes get their facts wrong, but their spelling and grammar is usually excellent.
Yeah, but that's the only actually creative part of writing. It's the bit that, once you reach a certain level of skill, becomes enjoyable.
I mean, I know what you've described is what everyone will do, but I feel sad for the students who'll learn like that. They won't develop as writers to the point of having style, and then - once everything written is a style-less mush of LLM-generated phrases - what's the point of reading it? We might as well feed everything back through the LLM to extract the "main ideas" for us.
I guess we're "saving labor" that way, but what an anti-human process we'll have made.
In academia, I see academics using ChatGPT to write papers. They get help with definitions, they even give the related work PDFs to it and make it write the part. No one fact checks. Students; they use it to write reports, homeworks and code.
GPT may be good for learning, but not for total beginners. That is key. As many people stated here, it can be good for those with experience. Those without, should seek for those experienced people. Then, when they have the basics they can get help from GPT to go further.
> Compare your ideas against history’s greatest thinkers
What made these people great thinkers is their minds rather than their writing styles. I'm not sure that chatbots get smarter when you tell them to impersonate someone smart, because in humans, this usually has the reverse effect.
> AI excels at automating tedious, time-consuming tasks like formatting citations
Programs like LaTeX also excel at this kind of work and will probably be more reliable in the long run.
Honestly, I used to be a slacker. ChatGPT revived my productive in learning by 10x..
I used to be overwhelmed by information and it would demotivate me. Having someone who can answer or push you to a direction thats reasonable is amazing!
Sure if your goal is to generate a race of Eloi who live exclusively as passive consumers, casually disregarding anything a machine can hide from their feeble minds.
>I know this is heretical within the tech bro monoculture of HN.
Because it's objectively a false statement.
The LLM output is only "garbage" if your prompt and fact-checking also are garbage.
It's like calling a Ti-84 calculator "garbage" because the user has a hole in their head and doesn't know how to use it, hence they can't produce anything useful using it.
They need more users to increase the valuations and get more money. The interesting part is that more users are equal to more expenses and, as far as anyone can tell, they aren't breaking even.
Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?
For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...
On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.
In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?
I used to have dozens of phone numbers memorized. Once I got a cell phone I forgot everyone's number. I don't even know the phone number of my own mother.
I don't want to lose my ability to think. I don't want to become intellectually dependent on AI in the slightest.
I've been programming for over a decade without AI and I don't suddenly need it now.
It's more complicated than that—this trade-off between using a tool to extend our capabilities and developing our own muscles is as old as history. See the dialog between Theuth and Thamus about writing. Writing does have the effects that Socrates warned about, but it's also been an unequivocal net positive for humanity in general and for most humans in particular. For one thing, it's why we have a record of the debate about the merits of writing.
> O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.
https://www.gutenberg.org/files/1636/1636-h/1636-h.htm
TIL: Another instance of history repeating itself.
Interesting perspective. I read your first line about phone numbers as a fantastic thing -- people used to have to memorize multiple 10 digit phone numbers, now you can think about your contacts' names and relationships.
But... I think you were actually bemoaning the shift from numbers to names as a loss?
Have you not run into trouble when your phone is dead but you have to contact someone? I have, it's frustrating. Thankfully I remember my partners number, though its the only one these days.
I had to maintain a little physical phone book because, while I can memorize 10 people’s numbers, I cant memorize 25, 50, 100. Not having that with me when I needed it, or if you lost it and had no backup, was far less convenient than today. It feels like this is a case of magnifying a minor, rare modern inconvenience and ignoring all the huge inconveniences of the past in favor of a super narrow case where it was debatably “better”.
These things are not mutually exclusive. Remembering numbers didn't hinder our ability to remember our contacts' names.
We don't know how brain exactly works, but I don't think we can now do some things better just because we are not using another function of our brains anymore.
(not op) for me its a matter of dependency. great, as long as i have my phone I can just ask siri to call my sister, but if I need to use someone else's phone because mines lost or dead, well, how am I going to do that?
Same as AI. Cool it makes you 5x as efficient at your job. But after a decade of using it, can you got back to 1x efficiency without it? Or are you just making the highly optimistic leap that you will retain access to the tech in perpetuity.
I'm curious what your exposure to the available tools has been so far.
Which, if any, have you used?
Did you give them a fair shot on the off-chance that they aid you in getting orders of magnitude more work done than you did previously while still leveraging the experience you've gained?
I still remember the numbers I used as a kid.
Anyway now as an adult I have to remember a lot of pin codes.
* Door to home * Door to office * Door to gf's place * Bank card #1 * Bank card #2 * Bank card #3 * Phone #1 * Phone #2
Well, sure. I can remember phone numbers from 30+ years ago approximately instantly.
I don't have to remember most of them from today, so I simply don't. (I do keep a few current numbers squirreled away in my little pea brain that will help me get rolling again, but I'll probably only ever need to actually use those memories if I ever fall out of the sky and onto a desert island that happens to have a payphone with a bucket of change next to it.)
On a daily, non-outlier basis, I'm no worse for not generally remembering phone numbers. I might even be better off today than I was decades ago, by no longer having to spend the brainpower required for programming new phone numbers into it.
I mean: I grew up reading paper road maps and [usually] helping my dad plan and navigate on road trips. The map pocket in the door of that old Chevrolet was stuffed with folded maps of different areas of the US.
But about the time I started taking my own solo road trips, things like the [OG] MapBlast! website started calculating and charting driving directions that could be printed. This made route planning a lot faster and easier.
Later, we got to where we are today with GPS navigation that has live updates for traffic and road conditions using systems like Waze. This has almost completely eliminated the chores of route planning and remembering directions (and alternate routes) from my life, and while I do have exactly one road map in my car that I do keep updated I haven't actually ever used it for anything since 2008 or so.
And am I less of a person today than I was back when paper maps were the order of the day? No, I don't think that I am -- in fact, I think these kinds of tools have made me much more capable than I ever was.
We call things like this "progress."
I do not yearn for the days before LLM any more than I yearn for the days before the cotton gin or the slide rule or Stack Overflow.
well true to your name you are reducing it to a boolean dilemma
When is the Butlerian Jihad scheduled?
"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."
"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."
These two ideas are closely related and really just different aspects of the same basic frailty of the human intellect. Understanding that I think can really inform you about how you might use these tools in work (or life) and where the lines need to be drawn for your own personal circumstance.
I can't say I disagree with anything you said and think you've made an insightful observation.
In the presence of sufficiently good and ubiquitous tools, knowing how to do some base thing loses most or all of its value.
In a world where everyone has a phone/calculator in their pocket, remembering how to do long division on paper is not worthwhile. If I ask you "what is 457829639 divided by 3454", it is not worth your time to do that by hand rather than plugging it into your phone's calculator.
In a world where AI can immediately produce any arbitrary 20-line glue script that you would have had to think about and remember bash array syntax for, there's not a reason to remember bash array syntax.
I don't think we're quite at that point yet but we're astonishingly close.
The value isn't in rote calculation, but the intuition that doing it gives you.
So yes, it's pretty useless for me to manually divide arbitrarily large numbers. But it's super useful for me to be able to reason around fractions and how that division plays out in practice.
Same goes for bash. Knowing the exact syntax is useless, but knowing what that glue script does and how it works is essential to understanding how your entire program works.
That's the piece I'm scared of. I've seen enough kids through tutoring that just plug numbers into their calculator arbitrarily. They don't have any clue when a number is off by a factor of 10 or what a reasonable calculation looks like. They don't really have a sense for when something is "too complicated" either, as the calculator does all of the work.
I totally agree.
The neat thing about AI generated bash scripts, would be that the AI can comment their code.
So the user can 1) check if the comment for each step match what they expect to be done, and 2) have a starting point to debug if something goes wrong.
Go ahead and ask chat gpt how that glue script works. You'll be incredibly satisfied at its detailed insights.
> If I ask you "what is 457829639 divided by 3454"
And if it spits out 15,395,143 I hope you remember enough math to know that doesn’t look right, and how to find the actual answer if you don’t trust your calculator’s answer.
Sanity Checking Expected Output is one of the most vital skills a person can have. It really is. But knowing the general shape of the thing is different than any particular algorithm, don't you think?
This gets to the root of the issue. The use case, and user experience, and thus outcome is, is remarkably different depending on your current ability.
Using AI to learn things is useful, because it helps you get terminology right, and helps you Google search well. For example say you need to know a Windows API, you can describe it snd get the name. Then Google how that works.
As an experienced user you can get it to write code. You're good enough to spot errors in the vote and basically just correct as you go. 90% right is good enough.
It's the in-between space which is hardest. You're an inexperienced dev looking to produce, not learn. But you lack the experience and knowledge to recognise the errors, or bad patterns, or whatever. Using AI you end up with stuff that's 'mostly right' - which in programming terms means broken.
This experience difference is why there's so much chatter about usefulness. To some groups it's very useful. To others it's a dangerous crutch.
This is both inspiring and terrifying at the same time.
That being said I usually prefer to do something the long and manual way, write the process down sometimes, and afterwards search for easier ways to do it. Of course this makes sense on a case by case basis depending on your personal context.
Maybe stuff like crosswords and more will undergo a renaissance and we'll see more interesting developments like Gauguin[0] which is a blend of Sudoku and math.
[0] https://f-droid.org/en/packages/org.piepmeyer.gauguin/
Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.
The difference is that you can trust a good calculator. You currently can't trust AI to be right. If we get a point where the output of AI is trustworthy, that's a whole different kind of world altogether.
>The difference is that you can trust a good calculator.
I found a bug in the ios calculator in the middle of a masters degree exam. The answer changed depending on which way the phone was held. (A real bug - I reported it and they fixed it). So knowing the expected result matters even when using the calculator.
For replacement like I described, sure. But it will be very useful long before that.
AI that writes a bash script doesn't need to be better than an experienced engineer. It doesn't even need to be better than a junior engineer.
It just needs to be better than Stack Overflow.
That bar is really not far away.
You’re changing the goal post. Your original post was saying that you don’t need to know fundamentals.
It was not about whether AI is useful or not.
I'm not changing goalposts, I was responding to what you said about AI spitting out something wrong and you spending 3 hours debugging it.
My original point about not needing fundamentals would obviously require AI to, y'know, not hallucinate errors that take three hours to debug. We're clearly not there yet. The original goalposts remain the same.
Since human conversations often flow from one topic to another, in addition to the goal post of "not needing fundamentals" in my original post, my second post introduced a goalpost of "being broadly useful". You're correct that it's not the same goalpost as in my first comment, which is not unexpected, as the comment in question is also not my first comment.
Hopefully that happens rare enough that when it does, we can call upon highly-paid human experts that still remembers the art of doing long divisions.
>>The difference is that you can trust a good calculator. You currently can't trust AI to be right.
Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.
Ask the smallest possible for loop and if loop that AI can generate now you have the pocket calculator equivalent of programming.
>> Well that is because you ask a calculator to divide numbers. Which is a question that can be interpreted in only one way. And done only one way.
Is it? What is 5/2+3?
There is only one correct way to calculate 5/2+3. The order is PEMDAS[0]. You divide before adding. Maybe you are thinking that 5/(2+3) is the same as 5/2+3, which is not the case. Improper math syntax doesn’t mean there are two potential answers, but rather that the person that wrote it did so improperly.
[0] https://www.mathsisfun.com/operation-order-pemdas.html
So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
“Which is a question that can be interpreted in only one way. And done only one way.”
The question for calculators is then the same as the question for LLMs: can you trust the calculator? How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
>>How do you know if it’s correct when you never learned the “correct” way and you’re just blindly believing the tool?
This is just splitting hairs. People who use calculators interpret it in only one way. You are making a different and a more broad argument that words/symbols can have various meanings, hence anything can be interpreted in many ways.
While these are fun arguments to be made. They are not relevant to practical use of the calculator or LLMs.
> So we agree that there is more than one way to interpret 5/2+3 (a correct and an incorrect way) and therefore that the GP statement below is wrong.
No. There being "more than one way" to interpret implies the meaning is ambiguous. It's not.
There's not one incorrect way to interpret that math statement, there are infinite incorrect ways to do so. For example, you could interpret as being a poem about cats.
Maybe user means the difference between a simple calculator that does everything as you type it in and one that can figure out the correct order. We used those simpler ones in school when I was young. The new fancy ones were quite something after that :)
> Wait until AI prints out something that doesn't work and you can't figure out how to fix it because you don't know how it works so you do trial and error for 3 hours.
This is basically how AI research is conducted. It's alchemy.
I don't honestly think anyone can remember bash array syntax if they take a 2 week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires cause half the internet will be an ouroboros of ai generated garbage.
>>I wonder if my coding skill will deteriorate in the years to come...
Well that's not how LLMs work. Don't use an LLM to do thinking for you. You use LLMs to work for you, while you tell(after thinking) it what's to be done.
Basically things like-
. Attach a click handler to this button with x, y, z params and on click route it to the path /a/b/c
. Change the color of this header to purple.
. Parse the json in param 'payload' and pick up the value under this>then>that and return
etc. kind of dictation.
You don't ask big questions like 'Write me a todo app', or 'Write me this dashboard'. Those are too broad questions.
You will still continue to code and work like you always have. Except that you now have a good coding assistant that will do the chore of typing for you.
Maybe I'm too good at my editor (currently Emacs, previously Vim), but the fact is that I can type all of this faster than dictating it to an AI and verifying its output.
Yes, editor proficiency is something that beats these things any day.
In fact if you are familiar with keyboard macros, in both vim and emacs you can do a lot of text heavy lifting tasks.
I don't see these as opposing traits. One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?
> One can use both the goodness of vim AND LLMs at the same time. Why pick one, when you can pick both?
I mostly use manuals, books, and the occasional forum searches. And the advantage is that you pick surrounding knowledge. And more consistent writing. And today, I know where some of the good stuff are. You're not supposed to learn everything in one go. I built a knowledge map where I can find what I want in a more straightforward manner. No need to enter in a symbiosis with an LLM.
Well its entirely an individual choice to make. But I don't generally view the world in terms of ORs I view them in terms of ANDs.
One can do pick and use multiple good things at a time. Using vim doesn't mean, I won't use vscode, or vice versa. Or that if you use vscode code you must not use AI with it.
Having access to a library doesn't mean, one must not use Google. One can use both or many at one time.
There are no rules here, the idea is to build something.
I asked o1 to make an entire save system for a game/app I’m working on in Unity with some pretty big gotchas (Minecraft-like chunk system, etc) and it got pretty close to nailing it first try - and what it didn’t get was due to me not writing out some specifics.
I honestly don’t think we’re far out from people being able to write “Write me a todo app” and then telling it what changes to make after.
I recently switched back to software development from professional photography and I’m not sure if that’s a mistake or not.
I think that anybody who finds the process of clumsily describing the above examples to an LLM in some text box using english and waiting for it to spit out some code which you hope is suitable for your given programming context and codebase more efficient than just expressing the logic directly in your programming language in an efficient editor, probably suffers from multiple weaknesses:
- Poor editor / editing setup
- Poor programming language and knowledge thereof
- Poor APIs and/or knowledge thereof
Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".
I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, english) to express problems clumsily in a roudabout way, and then throw away the "source code" (the LLM prompt) entirely such to simply paste the "compiler output" (code the LLM spewed out which may or may not be suitable or correct) into some heterogenous mess of multiple different LLM outputs pasted together in a codebase held together by nothing more than the law of averages, and hope.
Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I heard figures such as the total amount required to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spewing tonnes of CO₂ and massive amounts of heat to power your "programming experience".
In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.
There's a new mode of programming (with AI) that doesn't require english and also results in massive efficiency gains. I now only need to begin a change and the AI can normally pick up on the pattern and do the rest, via subsequent "tab" key hits as I audit each change in real time. It's like I'm expressing the change I want via a code example to a capable intern that quickly picks up on it and can type at 100x my speed but not faster than I read.
I'm using Cursor btw. It's almost a different form factor compared to something like GH copilot.
I think it's also worth noting that I'm using TypeScript with a functional programming style. The state of the program is immutable and encoded via strongly typed inputs and outputs. I spend (mental) effort reifying use-cases via enums or string literals, enabling a comprehensive switch over all possible branches as opposed to something like imperative if statements. All this to say, that a lot of the code I write in this type of style can be thought of as a kind of boilerplate. The hard part is deciding what to do; effecting the change through the codebase is more easily ascertained from a small start.
Provided that we ignore the ridiculous waste of energy entailed by calling an online LLM every time you type a word in your editor - I agree that the utility of LLM-assisted programming as "autocomplete on steriods" can be very useful. It's awfully close to that of a good editor using the type system of a good programming language providing suggestions.
I too love functional programming, and I'm talking about Haskell-levels of programming efficiency and expressiveness here, BTW.
This is quite a different use case than those presented by the post I was replying to though.
The Go programming language has this mantra of "a little bit of copy and paste is better than a little bit of dependency on other code". I find that LLM-derived source code takes this mantra to an absurd extreme, and furthermore that it encourages a though pattern that never leads you to discover, specify, and use adequate abstractions in your code. All higher-level meaning and context is lost in the end product (your committed source code) unless you already think like a programmer _not_ being guided by an LLM ;-)
We do digress though - the original topic is that of LLM-assisted writing, not coding. But much of the same argument probably applies.
when you take energy into account its like anti engineering. What if we used a mountain of effort to achieve a worse result?
At the time I'm writing this, there are over 260 comments to this article and yours is still the only one that mentions the enormous energy consumption.
I wonder whether this is because people don't know about it or because they simply don't care...
But I, for one, try to use AI as sparingly as possible for this reason.
You're not alone. With the inclusion of gemini generated answers in google search, its going down the road of most capitalistic things. Where you see something is wrong, but you have no option to use it even if you don't want it.
I like to idealistically think that in a capitalistic (free market) society we absolutely have the option to not use things that we think are wrong or don't like.
Change your search engine to one that doesn't include AI-generated answers. If none exist any more, all of Google's customers could write to them telling them that they don't want this feature and are switching away from them because of it, etc.
I know that internet-scale search is perhaps a bad example because it's so extremely difficult and expensive to build and run, but ultimately the choice is in the consumers' hands.
If the market makes it clear that there is a need for a search engine without LLM-generated answers at the top, somebody will provide one! It's complacency and acceptance that leads apparently-delusional companies to just push features and technologies that nobody wants.
I feel much the same way about the ridiculous things happening with cars and the automotive sector in general.
> a certain degree of "productive struggle" is essential
Honestly, I'm not sure this would account for most of the difficulty in learning. In my experience most of the difficulty involved in learning something involved a few missing pieces of insight. It often took longer to understand the few missing pieces than the rest of the topic. If they are accurate enough, LLMs are great for getting yourself unstuck and keep yourself moving. Although it has always been a part of the learning experience, I'm not sure frantically looking through hundreds of explanations for a missing detail is a better use of one's time than to dig deeper in the time you save.
I'm not saying you're wrong, but I wonder if this "missing piece of insight" is at least sometimes an illusion, as in the "monads are like burritos" fallacy [0]. Of course this does not apply if there really is just a missing fact that too many explanations glossed over.
[0] https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...
I once knew someone who studied CS and medicine at the same time. According to them, if you didn't understand something in CS after reasonable effort, you should do something else and try again next semester. But if you didn't understand something in medicine, you just had to work harder. Sometimes it's enough that you have the right insights and cognitive tools. And sometimes you have to be familiar with the big picture, the details, and everything in between.
ideally you look and fail and exhaust your own efforts, then get unblocked with a tool or assistant or expert. With LLMs at your finger tips who has both the grit to struggle and the self discipline not to quit early? at the age of the typical student - very few.
Do you advocate that students learn without the help of teachers until they exhaust their own efforts?
That actually is an approach. Some teachers make you read the lesson before class, others give you homework on the lesson before lecturing it, and some even quiz you on it on top of that before allowing you to ask questions. I personal feel that trying to learn the material before class helped me learn the material better than coming into class blind.
That’s the “flipped classroom” approach to pedagogics, for those who might be interested.
one could argue aswell that having at least generally satisfying, but at the same time omnipresent "expert assistance" might rather end up empowering you.
Feeling confident to be able to shrug off blockers, that might otherwise turn exploration into a painful egg hunt for trivial unknowns, can easily mean the difference between learning and abandoning.
A current 3rd year college student here. I really want LLMs to help me in learning but the success rate is 0.
They often can not generate relatively trivial code When they do, they can not explain that code. For example, I was trying to learn socket programing in C. Claude generated the code, but when I stared asking about stuff, it regressed hard. Also, often the code is more complex than it needs to be. When learning a topic, I want that topic, not the most common relevant code with all the spagheti used on github.
For other subjects, like dbms, computer network, when asking about concepts, you better double check, because they still make stuff up. I asked ChatGPT to solve prev year question for dbms, and it gave a long, answer which looked good on surface. But when I actually read through because I need to understand what it is doing, there were glaring flaws. When I point them out, it makes other mistakes.
So, LLMs struggle to generate concise to the point code. They can not explain that code. They regularly make stuff up. This is after trying Claude, ChatGPT and Gemini with their paid versions in various capacities.
My bottom line is, I should NEVER use a LLM to learn. There is no fine line here. I have tried again and again because tech bros keep preaching about sparks of AGI, making startup with 0 coding skills. They are either fools or genius.
LLMs are useful strictly if you already know what you are doing. That's when your productivity gains are achieved.
Brace yourself, people who are going to come to tell you that it was all your fault are here!
I got bullied at a conference (I was in the audience) because when the speaker asked me, I said AI is useless for my job.
My suspicion is that these kind of people basically just write very simple things over and over and they have 0 knowledge of theory or how computers work. Also their code is probably garbage but it sort-of works for the most common cases and they think that's completely normal for code.
I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.
There is no easy way to share. I copied them in google docs: https://docs.google.com/document/d/1GidKFVgySgLUGlcDSnNMfMIu...
One with ChatGPT about dbms questions and one with claude about socket programming.
Looking back are some questions a little stupid ? Yes. But affcourse they are! I am coming with zero knowledge trying to learn how the socket programming is happening here ? Which functions are begin pulled from which header files, etc.
In the end I just followed along with a random youtube video. When you say, you can get LLM to do anything, I agree. Now that I know how socket programming is happening, for next question in assignment about writing code for crc with socket programming, I asked it to generate code for socket programming, made the necessary changes, asked it generate seperate function for crc, integrated it manually and voila, assignment done.
But this is the execution phase, when I have the domain knowledge. During learning when the user asks stupid questions and the LLM's answer keep getting stupider, then using them is not practical.
is english not your first language?
Also Im surprised you even got a usable answer from your first question asking for a socket program if all you asked was the bold part. I'm a human (pretty sure at least) and had no idea how to answer the first bold question.
No, english is my second language.
I had already established from previous chat that upon asking for server.c file, llm's answer was working correctly. Rest of the sentence is just me asking it to use and not use certain header files which it uses by default when you ask it to generate server.c file.Thats because from docs of <sys/socket.h>, I thought it had all relevant bindings for the socket programming to work correctly.
I would say, the sentence logically makes sense.
I had no idea what even the question was. I had chatgpt (4o) explain it to me, and solve it. I now know what candidate keys are, and that the question asks for AB and BC. I'd share the link, but chatgpt doesn't support sharing logs with images.
So you did not convince me that LLMs are not working (on the contrary), but I did learn something today! thanks for that.
The simpler explanation is that LLMs are not very good.
I can get an LLM to do almost anything I want. Sometimes I need to add a lot of context. Sometimes I need to completely rewrite the prompt after realizing I wasn't communicating clearly. I almost always have to ask it to explain it's reasoning. You can't treat an LLM like a computer. You have to treat it like a weird brain.
You're not exactly selling it as a learning tool with this comment.
If the premise is that you first need to learn an alien psychology, that's quite the barrier for a student.
I was talking about coding in this context. With coding, you need to communicate a lot better than if you're just asking it to explain a concept.
The point is, your position is against a inherent characteristic of LLMs.
LLMs hallucinate.
That's true and by how they are made it cannot be false.
Anything they generate cannot bw trusted and have to be verified.
They are good at generating fluff but i wouldn't rely on them for anything.
Ask at that temperature glass melts and you will get 5 different answers, noone true.
It got the question correct in 3 trials, with 1 of the trials being the smaller model.
GPT4o
https://chatgpt.com/share/673578e7-e34c-8006-94e5-7e456aca6f...
GPT4o
https://chatgpt.com/share/67357941-0418-8006-a368-7fe8975fbd...
GPT4o-mini
https://chatgpt.com/share/673579b1-00e4-8006-95f1-6bc95b638d...
The problem with these answers is that they are right but misleading in a way.
Glass is not a pure element so that temperature is the "production temperature" but as an amorphous material it ""melts"" in the way a plastic material ""melts"" and can be worked at temperature as low as 5-700c.
I feel like without a specification the answer is wrong by omission.
What "melts" means when you are not working with a pure element is pretty messy.
This came out in a discussion for a project with a friend too obsessed with GPT (we needed that second temperature and i was "this can't be right....it's too high")
Yes. This is funny when I know what is happening and I can "guide" the LLM to the right answer. I feel that is the only correct way to use LLMs and it is very productive. However, for learning, I don't know how anyone can rely on them when we know this happens.
I mean. Likely, yes, but if you have to spend the time to prompt correctly, I'd rather just spend that time learning the material I actually want to learn
I've been programming for 20 years and mostly JS for the last 10 years. Right now, I'm learning Go. I wrote a simple CLI tool to get data from several servers. Asked GPT-4o to generate some code, which worked fine at first. Then I asked it to rewrite the code with channels to make it async and it contained at least one major bug.
I don't dismiss it as completely useless, because it pointed me in the correct direction a couple times, but you have to double-check everything. In a way, it might help me learn stuff, because I have to read its output critically. From my perspective, the success rate is a bit above 0, but it's nowhere close to "magical" at all.
Care to share any of these chats?
> Are you producing? Or are you learning?
> AI makes me at least 2x more efficient at my job. It seems irrational not to use it
Fair, but there is a corollary here -- the purpose of learning at least in part is to prepare you for the workforce. If that is the case, then one of the things students need to get good at is conversing with LLMs, because they will need to do so to be competitive in the workplace. I find it somewhat analogous to the advent of being able to do research on the internet, which I experienced as an early 90s kid, where everyone was saying "now they won't know how to do research anymore, they won't know the Dewey decimal system, oh no!". Now the last vestiges of physical libraries being a place where you even can conduct up-to-date research on most topics are crumbling, and research _just is_ largely done online in some form or another.
Same thing will likely happen with LLMs, especially as they improve in quality and accuracy over the next decade, and whether we like it or not.
A big one for me was nobody will know how to look up info in a dictionary or encyclopedia. Yep I guess that's true. And nobody would want to now either!
Our internal metrics show a decrease in productivity when more inexperienced developers use AI, and an increase when experienced developers with 10+ years use it. We see a decrease in code quality across experience levels which needs to be rectified across all experience levels but even with the time spent refactoring it’s still an increase in productivity. I think I should note that we don’t use these metrics for employee review in any way. The reason we have them is because they come with the DORA (eu regulation) compliance tool we use to monitor code-quality. They won’t be used for employee measurement while I work here. I don’t manage people, but I was brought in to help IT transition from startup to enterprise so set the direction with management confidence.
I’m a little worried about developers turning to LLMs instead of official documentation as the first thing they do. I still view LLMs as mostly being fancy auto-complete with some automation capabilities. I don’t think they are very good at teaching you things. Maybe they are better than Google programming, but the disadvantage LLMs have seem to be that our employees tend to trust the LLMs more than they would trust what they found on Google. I don’t see an issue with people using LLMs on fields they aren’t too experienced with yet however. We’ve already seen people start using different models to refine their answers, we’ve also seen an increase in internal libraries and automation in place of external tools. Which is what we want, again because we’re under some heavy EU regulations where even “safe” external dependencies are a bureaucratic nightmare.
I really do wonder what it’ll do to general education though. Seeing how terrible and great these tools can be from a field I’m an expert in.
How long are your progress reports? Mine are a one sentence message like "Hey, we've got a fix for the user profile bug, but we can't deploy it for an hour minimum because we've found an edge case to do with where the account signed up from" and I'm not sure where the AI comes in.
AI comes to write it 10x longer so you pretend you worked a lot and think the reader can't realise your report is just meaningless words because you never read anything yourself.
I could probably copy it straight from my commits log if my team is amenable to bullet point format.
I keep mine pretty short and have been writing them up bullet-style in org-mode for like 15 years. I can scan back over the entire year when I need to deal with my annual review and I don't think I spend more than 5 minutes on this in any given week. Converting from my notes to something I would deliver to someone else might take a few minutes of formatting since I tend to write in full sentences as-is. I can't imagine turning to an AI tool for this shit.
> But there is something to be said about atrophy. If you don't use it, you lose it.
YMMV, but I didn't ride a bike for 10ish years, and then got back on and I was happily riding quickly after. I also use zsh and ctrl+r for every Linux command, but I can still come up with the command of I need to, just, slowly. Ive overall found that if I learn a thing, it's learnt. Stuff I didn't learn in university, but passed anyways, like Jacobians, I still don't know, but I've got the gyst of it. I do keep getting better and better at the banjo the less I play it, and getting back to the drumming plateau is quick.
Maybe the drumming plateau is the thing? You can quickly get back to similar skill levels after not doing the thing in a while, but it's very hard to move that plateau upwards
Dont you see the survivorship bias in your thinking?
You learnt the bike and practiced it rigorously before stoppping for 10 years, and you're able to pick it up. You _knew_ the commands because you learned the them the manual/hard way, and then used assistance to to do it for you..
Now, do you think it will apply to someone who begins their journey with LLMs and doesnt quite develop the skill of "Does this even look right?!", and says to themselves "if LLMs could write this module why bother learning what that thing actually does?" and then get bitten by it due to LLM hallucinations and stare like a deer in headlights.
> I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?
Already do — that's what "brain training" apps; consumer EdTech like Duolingo and Brilliant; and educational YouTube and podcasts like 3blue1brown, Easy German, ElectroBOOM, Overly Sarcastic Productions all are.
AFAIK, none of these teach person, how to research a topic.
Neither did University, for me.
I just liked learning as far back as my memories go, so I've been jumping into researching topics because I wanted to know things.
Universe just gave me course materials, homework, and a lot of free time.
Jobs took up a lot of time, but didn't make learning meaninfgully different than it always has been for me.
I have been working with colleagues to develop advice on how to adapt teaching methods in the face of widespread use of LLMs by students.
The first point I like to make is that the purpose of having students do tasks is to foster their development. That may sound obvious, but many people don't seem to take notice that the products of student activities are worthless in themselves. We don't have students do push-ups in gym class to help the national economy by meeting some push-up quota. The sole reason for them is to promote physical development. The same principle applies to mental tasks. When considering LLM use, we need to be looking at its effects on student development rather than on student output.
So, what is actually new about LLM use? There has always been a risk that students would sometimes submit homework that was actually the work of someone else, but LLMs enable willing students to do it all the time. Teachers can adapt to this by basing evaluation only on work done in class, and by designing homework to emphasize feedback on key points, so that students will get some learning benefit even though a LLM did the work.
Completely following this advice may seem impossible, because some important forms of work done for evaluation require too much time. Teachers use papers and projects to challenge students in a more elaborate way than is possible in class. These can still be used beneficially if a distinction is made between work done for learning and work done for evaluation. While students develop multiple skills while working on these extended tasks, those skills could be evaluated in class by more concise tasks with a narrower focus. For example, good writing requires logical coherence and rhetorical flow. If students have trouble in these areas, it will be just as evident in a brief essay as a long one.
It is trivially easy to spot AI writing if you are familiar with it, but if it requires failing most of the class for turning in LLM generated material, I think we are going to find that abolishing graded homework is the only tenable solution.
The student's job is not to do everything the teacher says, it is to get through schooling somewhat intact and ready for their future. The sad fact is that many things we were forced to do in school were not helpful at all, and only existed because the teachers thought it was, or for no real reason at all.
Pretending that pedagogy has established and verified methodology that will result in a completely developed student, if only the student did the work as prescribed, is quite silly.
Teaching evolves with technology like every other part of society and it may come out worse or it may come out better, but I don't want to go back fountain pens and slide rules and I think in 20 years this generation won't look back on their education thinking they got a worse one than we did because they could cheat easier.
As a (senior) lecturer in a university, I’m with you on most of what you wrote. The truth is that every teacher must immediately think: if any of their assignments or examinations involve something that could potentially be GPT-generated, it will be GPT-generated. It might be easy to spot such a thing, but you’ll be spending hours writing feedback while sifting through the rivers of meaningless artificially-generated text your students will submit.
Personally what I’m doing is to push the weight back at the students. Every submission now requires a 5-minute presentation with an argumentation/defense against me as an opponent. Anyway it would take me around 10-15 min to correct their submission, so we’re just doing it together now.
A genuine question, have you evaluated AI for marking written work?
I'm not an educator, but it seems to me like gippity would be better at analyzing a students paper than writing it in the first place.
Your prompt could provide the AI the marking criteria, or the rubric, and have it summarize how well the paper hits the important points.
Never say never, but I do not plan on doing this. This sounds quite surreal: a loop where the students pretend to learn and I pretend to teach? I would… hm… I’ve never heard of such… I mean, this is definitely not how it is in reality… right…
(Jokes aside, I have an unhealthy, unstoppable need to feel proud of my work, so no I won’t do that. For now…)
I would have thought that the teaching comes before the test, and that the test is really just a way to measure how well the student soaked up the knowledge.
You could take pride in a well crafted technology that could mark an assignment and provide feedback in far more detail that you yourself could ever provide given time constraints.
I asked my partner about it last night, she teaches at ANU and she made some joke about how variable the quality of tutor marking is. At least the AI would be impartial and consistent.
I have no idea how well an AI can assess a paper against a rubric. Might be a complete waist of time, but if there were some teachers out there who wanted to do some tests, I would be interested in helping set up the tests and evaluating the results.
In discussing how to adapt teaching methods, we have also looked at evaluation by LLM. The most talked about concern now is the unreliability of LLM output. However, say that in the future, accuracy of LLMs improves to the point that it is no longer a problem. Would it then be good to have evaluation by LLM?
I would say generally not, for two reasons. First, the teacher needs to know how the student is developing. To get a thorough understanding takes working through the student's output, not just checking a summary score. Second, the teacher needs to provide selective feedback, to focus student attention on the most important areas needing development. This requires knowledge of the goals of the teacher and the developmental history of the student.
I won't argue that LLM evaluation could never be applied usefully. If the task to be evaluated is simple and the skills to be learned are straightforward, I imagine that it could benefit the students of some grossly overloaded teacher.
I know I would have had a blast finding ways to direct the model into giving me top scores by manipulating it through the submitted text. I think that without a bespoke model that has been vetted, is supervised, and is constrained, you are going to end up with some interesting results running classwork through a language model for grading.
Does pedagogy have established and verified methodology that will result in a completely PHYSICALLY developed student, if only the student does the EXERCISE as prescribed? No, but we still see the value in physical activity to promote healthy development.
> many things we were forced to do in school were not helpful at all
I've never had to do push-ups since leaving school. It was a completely useless skill to spend time on. Gym class should have focused on lifting bags of groceries or other marketable skill.
Things like gym classes are often justified based on "teaching healthy habits". That doesn't appear to work. You did stop the pushups. So did I.
Which doesn't mean I'm against physical activities in school, but educationally, it does appear to be a failure.
But at the time it did contribute at least somewhat to your physical condition, I am not an expert, but physical condition indicators like VO2 max seem to be the best predictors of intelligence. We're all physical beings at the end of the day
Sure, but you're creating a new goal post, which is entirely different than the stated justification.
You haven't proved that it made a difference or that doing something else wouldn't have been as or more effective, which is my point. You did it, so these students must do it, with no other rationale than that.
You have forgotten that pedagogy is based on science and research. That is why it is effective for the masses. Anecdotal evidence will never refute the result. Take learning to read, for example. While you can learn to read in a number of ways, some of which are quite unusual, such as memorising the whole picture book and its sound, research has clearly shown that using the phonics approach is the most effective. Or take maths. It's obvious that some people are good at maths, even if they don't seem to do much work. But research has shown time and time again that to be good at maths you need to practice, including doing homework.
So learning to recognise the phonics and blend them together may not be better for one pupil, but it is clearly better for most. This is what the curriculum and most teachers' classroom practice is all about.
"Although some studies have shown various gains in achievement (Marzano & Pickering, 2007), the relationship between academic achievement and homework is so unclear (Cooper & Valentine, 2001) that using research to definitively state that homework is effective in all contexts is presumptive."
American Secondary Education 45(2) Spring 2017 Examining Homework Bennett
To conclude for yourself, just compare the PISA result of the US and other developing country like VietNam or China, where they still keep the school tradition of homework and practice alive. And what do we see? Much higher PISA scores in math than the US. I refuse to believe some some folks have a "math gene" and other not.
Practice does not improve skills? You got to be kidding me! I didn't state that homework is effective in all contexts, but I firmly believe that practice is absolute necessary to improve any kind of skill. Some forms of practice is more effective than other in certain context. But you need practice to improve your skills. Otherwise, how do you propose to improve your skills? Dreaming?
About a decade ago, it was a hot fashion in Education schools to argue that homework did not promote skill development. I don't know if that's still the case, as fashions in Education can change abruptly. But consider what this position means. They are saying "practice does not improve skill", which goes completely against the past century or so of research in psychology.
If your field depends on underpowered studies run by people with marginal understanding of statistics, you can gather support for any absurd position.
> They are saying "practice does not improve skill", which goes completely against the past century or so of research in psychology.
You haven't made the argument that what they are practicing is valuable or effective.
I'm sure they get better at doing homework by doing a lot of homework, but do they develop any transferable skills?
It seems you are begging the question here.
I think this is pretty good advice.
I think often AI sceptics go too far in assuming users blindly use the AI to do everything (write all the code, write the whole essay). The advice in this article largely mirrors - by analogy - how I use AI for coding. To rubber duck, to generate ideas, to ask for feedback, to ask for alternatives and for criticism.
Usually it cannot write the whole thing (essay, program )in one go, but by iterating bewteen the AI and myself, I definitely end up with better results.
> I think often AI sceptics go too far in assuming users blindly use the AI to do everything
Users are not a monolithic group. Some users/students absolutely use AI blindly.
There are also many, many ways to use AI counterproductively. One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first. This removes a critical step from the creative process, and prevents practice of critical and analytical thinking. Struggling to come up with a solution first before seeing one (either from AI or another human) is essential for learning a skill.
The effect is that people end up lacking self confidence in their ability to solve problems on their own. They give up much too easily if they don't have a tool doing it for them.
I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT. If you don't even think about the problem, you have a roundabout zero chance of understanding it - and a similar chance of solving it. I run into folks like that very rarely, but when I do, it gives me the creeps.
Then again, I bet some of these people were doing the same with Google in the past, landing on some low quality SEO article that sounds close enough.
Even earlier, I suppose they were asking somebody working for them to figure it out - likely somebody unqualified who babbled together something plausible sounding.
Technology changes, but I'm not sure people do.
> I'm terrified when I see people get a whiff of a problem, and immediately turn to ChatGPT.
Not a problem for me, I work on prompt development, I can't ask GPT how to fix its mistakes because it has no clue. Prompting will probably be the last defense of reasoning, the only place where you can't get AI help.
It gets worst when these users/students run to others when the AI generated code doesn't work. Or with colleagues who think they already "wrote" the initial essay then pass it for others to edit and contribute. In such cases it is usually better to rewrite from scratch and tell them their initial work is not useful at all and not worth spending time improving upon.
Using llm blindly will lead to poor results in complex tasks so I'm not sure how much of a problem it might be. I feel like students using it blindly won't get far but I might be wrong.
> One of the most pernicious I have noticed is users who turn to AI for the initial idea without reflecting about the problem first I've been doing that and it usually doesn't work. How can you ask an ai to solve a problem you don't understand at all ? More often than not, when you do that the ai throws a dumb response and you get back to thinking about how to present the problem in a clear way which makes you understand it better.
So you still end up learning to analyze a problem and solving it. But I can't tell if the solution comes up faster or not nor if it helps learning or not.
Yeah, absolutely agree with that. Definitely has the potential to be particularly harmful in educational settings of users blindly trusting.
I guess it's just like many tools, they can be used well or badly, and people need to learn how to use the well to get value from them
Fools need chatGPT most, but wise men only are the better for it. - Ben Franklin
This is also a great PR press release for openai. They are telling users NOT to generate content with lazy prompts, but to use as an aid.
Sometimes I tried to be transparent in having used chatgpt like for minor stuff, and I got pooled into being a lazy fuck who submitted slob.
We are probably going to see an end to the bearish wave of ai and a correction towards reasonable AI use.
No, it cannot solve new math problems and is not as smart as Alakazam, but it CAN format your citations and make you a cup of coffee.
Well, I got chatgpt (gpt4o) to write me a very basic json parser once (and a gltf parser). Although it was very basic and lacking any error checking, it did what i asked it (although not in one go, i had to refine my questions multiple times).
In my experience 4o/Claude are really good at one shotting complicated but isolated components (eg streaming JSON parsers).
It does spectacular job with well trodden paths. Asked it to give me a map react control with points plotted and got something working in a jiffy.
I was trying to get it write robot framework code earlier and it was remarkably terrible. I would point out an obvious problem, it would replace the code with something even more spectacularly wrong.
When I pointed out the new error, it just gave me the exact same old code.
This happened again and again.
It was almost entirely useless.
Really showed how the sausage is made, this generation of AI is just regurgitation of patterns it stole from other people.
In my experience 4o is really good at ignoring user-provided corrections and insanely regurgitating the same code (and/or the same problems) over and over again.
ChatGPT 4 does much better with corrections, as does Claude. 4o is a pox.
That 4o is often times worse that GPT 4 has been widely ignored. :/
Chatgpt is great for writing regular expressions by the way.
Just because it spits out a RE doesn't mean the RE is what you wanted. For one thing, you'll need to be precise in your prompt.
You don't need to be precise. Just give it an example string and tell it what information you want to extract from it and it usually works. It is just way faster than doing it manually.
Claude/ChatGPT has become my man pages.
Rhetorical question: Can you ever fully be sure you have the regex you wanted?
Writing regexes by hand is hard so there will always be some level of testing involved. But reading a regex and verifying it works is easier than writing one from scratch.
My overly snide point about regexes was that most of the time "verifying it works" is more like finding and fixing a few more edge cases on the asymptotic journey towards no more brokenness.
Yeah I agree with that! 100% test coverage seems impossible when every part of the regex is basically an if condition.
Yeah, in line with the old RE adage: I had a problem. I used AI. Now I have two problems.
I've been using it to debug issues with config files and stuff. I just provide all the config files and error log to Chatgpt and it give a few possibilities which I fix or confirm is not an issue. If it still fails, I send the updated config files and error logs and get a new reply and repeat.
This iterative process hasn’t led to better results than my best effort, but it has led to 90% of my best in a fraction of the time. That’s especially true if I have curated a list of quotes, key phrases, and research literature I know I want to use directly or pull from.
I teach basic statistics to computer scientists (in the context of quantitative research methods) and this year every single one of my group of 30+ students used ChatGPT to generate their final report (other than the obvious wording style, the visualizations all had the same visual language, so it was obvious). There were glaring, laughable errors in the analyses, graphs, conclusions, etc.
I remember when I was a student that my teachers would complain that we did “compilation-based programming” meaning we hit “compile” before we thought about the code we wrote, and let the compiler find the faults. ChatGPT is the new compiler: it creates results so fast that it’s literally more worth it to just turn them in and wait for the response than bothering to think about it. I’m sure a large amount of these students are passing their courses due to simple statistics (I.e. teachers being unable to catch every problematic submission).
Agreed! Using AI as a collaborative tool rather than a replacement is the best approach
I sit on my local school board and (as everyone knows) AI has been whirling through the school like a tornado. I'm concerned about student using it to cheat, but I'm also pretty concerned about how teachers are using it.
For example, many teachers have fed student essays into ChatGPT and asked "did AI write this?" or "was this plagiarized" or similar, and fully trusting whatever the AI tells them. This has led to some false positives where students were wrongly accused of cheating. Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.
Overall though I'm not super worried because I do think most people are learning to be skeptical of LLMs. There's still a little too much faith in them, but I think we're heading the right direction. It's a learning process for everyone involved.
I imagine maths teachers had a similar dilemma when pocket calculators became widely available.
Now, in the UK students sit 2 different exams: one where calculators are forbidden and one where calculators are permitted (and encouraged). The problems for the calculator exam are chosen so that the candidate must do a lot of problem solving that isn't just computation. Furthermore, putting a problem into a calculator and then double checking the answer is a skill in itself that is taught.
I think the same sort of solution will be needed across the board now - where students learn to think for themselves without the technology but also learn to correctly use the technology to solve the right kinds of challenges and have the skills to check the answers.
People on HN often talk about ai detection or putting invisible text in the instructions to detect copy and pasting. I think this is a fundamentally wrong approach. We need to work with, not against the technology - the genie is out of the bottle now.
As an example of a non-chatgpt way to evaluate students, teachers can choose topics chatgpt fails at. I do a lot of writing on niche topics and there are plenty of topics out there where chatgpt has no clue and spits out pure fabrications. Teachers can play around to find a topic where chatgpt performs poorly.
Thank you, you make an excellent point! I very much agree, and I think the idea of two exams is very interesting. The analogy to calculators feels very good, and is very much worth a try!
My takeaway: a chrome plugin that writes LLM generated text into a Google doc over the course of a couple of days is a great product idea!
It would need to revise it, move text around, write and delete entire sections.
Yes! Great feature requests, thanks
The only use of such a product would be fraudulent. Go ahead, make money, but know you would be a scammer or at best facilitating scammers
> Of course a student who would cheat may also lie about cheating, but in a few cases they were able to prove authorship using the history feature built into Google docs.
It's scary to see the reversal of the burden of proof becoming more accepted.
:sigh:
With all the concern over AI, it's being used _against recommendations_ to detect AI usage? [0][1]
So while the concern for using AI is founded, teachers are so mistaken at understanding what it is and the tech around is that they are using AI in areas it's publicly acknowleded it doesn't work. That detracts from any credibility the teachers have about AI usage!
[0] https://openai.com/index/new-ai-classifier-for-indicating-ai... openai pulled their AI classifier [1] https://mitsloanedtech.mit.edu/ai/teach/ai-detectors-dont-wo...
Oh absolutely, I've spent hours explaining AI to teachers and most of them do seem to understand, but it takes some high-level elaboration about how it works before it "clicks." Prior to that, they are just humans like the rest of us. They don't read fine print or blogs, they just poke at the tool and when it confidently gives them answers, they tend to anthropomorphize the machine and believe what it is saying. It certainly doesn't help that we've trained generations of people to believe that the computer is always right.
> That detracts from any credibility the teachers have about AI usage!
I love teachers, but they shouldn't have any credibility about AI usage in the first place unless they have gained that in the same way the rest of us do. As authority figures, IMHO they should be held to an even higher standard than the average person because decisions they make have an out-sized impact on another person.
If there's something more unethical than AI plagiarism, that's going to be using AI to condemn people for it. I'm afraid that would further devalue actually writing your own stuff, as supposed to iterating with ChatGPT to produce the least AI-sounding writing out of fear of false accusations.
Nice! You should check out a free chrome plugin that I wrote for this called revision history. It’s organically grown to 140k users, so the problem obviously resonates (revisionhistory.com).
Anecdote-- every single high school student and college student I've talked to in the past year (probably dozens) use chatgpt to write their papers.
They don't even know how to write a prompt, or in some cases even what "writing a prompt" means. They just paste the assignment in as a prompt and copy the output.
They then feed that as input to some app that detects chatgpt papers and change the wording until it flows through undetected.
One student told me that, for good measure, she runs it twice and picks and chooses sentences from each-- this apparently is a speedup to beating the ai paper detector. There are probably other arbitrarily-chosen patterns.
I've never heard of any of these students using it in any way other than holistic generation of the end product for an assignment. Most of them seem overconfident that they could write papers of similar quality if they ever tried. But so far, according to all of them, they have not.
I've seen my 15 year old use ChatGPT for her homework and I'm ok with most of what she does.
For example she tends to ask it for outlines instead of the whole thing, mostly to beat "white page paralysis" and also because it often provides some aspect she might have overlooked.
She tends to avoid asking for big paragraphs because she doesn't trust it with facts and also dislikes editing out the "annoying" AI style or prompting for style rewrites. But she will feed it phrases from her own writing that get too tangled for simplification.
Also she will vary the balance of AI/own effort according to the nature of the task, her respect for the teacher or subject: Interesting work from a engaging lecturer? Light LLM touch or none. Malicious make-work or readable Lorem Ipsum when the point is the format of the thing instead of the content? AI pap by the tons for you. I find it healthy and mature.
> Most of them seem overconfident that they could write papers of similar quality if they ever tried. But so far, according to all of them, they have not.
Ah, the illusion of knowledge..
Coming from an education system where writing lengthy prose and essays is expected for every topic from literature to mathematics, I can confidently say that, after not having actively practiced that form of writing for over a decade, I wouldn't be able to produce a paper of what was considered average-quality back then. It would take time, effort, and a few tries, despite years and years of previous practice. Even more so if the only medium in front of me would be a blank sheet of paper and a pen.
So to confidently claim you can produce something of high quality when you've never really done it before is.. ..misguided.
But in the end, perhaps not really different to the illusion knowledge one gets with google at its fingertips. Pull the plug, and you are left with nothing.
1st year college student here, and an alumni of a certain high school programme I'd rather not mention ever again.
I've used LLMs MULTIPLE times during "academic" work, often for original idea generation. Never for full-on, actual writing.
Think of my usage as treating it as a tool that gives you a stem of an idea that you develop further on your own. It helps me persevere with the worst part of work: having to actually come up with an entire idea on my own.
And AI detection tools are still complete garbage as far as I can tell, a paper abstract I've written in front of one of my professors got flagged as 100% AI generated (while having no access to outside sources).
Also anecdotally, I’m a college student and do not use LLMs to generate my papers.
I have however asked ChatGPT to cite sources for specific things, to varying success. Surprisingly, it returns sources that actually exist most of the time now. They often aren’t super helpful though because they either aren’t in my school’s library or are books rather than articles.
The real hack is you give it an example of previous work you have done and ask it to write in that style.
that assumes you actually did some previous work though hah
I was talking to a teacher today that works with me at length about the impact of AI LLM models are having now when considering student's attitude towards learning.
When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.
That attitude seems to be the case for students now, "Why do I need to do this when an LLM can just do it better?"
This led us to the conclusion:
1. How do you construct challenges that AI can't solve? 2. What skills will humans need next?
We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?
I think this should lead to a fundamental shift in how we work WITH AI in every facet of education. How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?
I also think there should be more education around basic models and how they work as an introductory course to students of all ages, specifically around the trustworthiness of output from these models.
We'll need to rethink education and what we really desire from humans to figure out how this makes sense in the face of traditional rituals of education.
> When I was young, I refused to learn geography because we had map applications. I could just look it up. I did the same for anything I could, offload the cognitive overhead to something better -- I think this is something we all do consciously or not.
This is certainly useful to a point, and I don't recommend memorizing a lot of trivia, but it's easy to go too far with it. Having a basic mental model about many aspects of the world is extremely important to thinking deeply about complex topics. Many subjects worth thinking about involve interactions between multiple domains and being able to quickly work though various ideas in your head without having to stop umpteen times can make a world of difference.
To stick with the maps example, if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada. There are plenty of jobs in software or finance that don't require one to be good at mental math, but you're going to run into trouble if you don't at least grok the concept of exponential growth or have a sense for orders of magnitude.
Helpful in terms of what? Understanding some forced meme? "Force this meme so you can understand this other forced meme." is not education it's indoctrination. And even if you wanted to, for some unknown reason, understand the article you can look at a (changing and disputed) map as the parent said.
This is the opposite of deep knowledge, this is API knowledge at best.
Are you referring to: > if you're reading an article about conflict in the Middle East it's helpful to know off the top of your head whether or not Iran borders Canada ?
Perhaps, but in the case that you are I think it's a stretch to say that the only utility of this is 'indoctrination' or 'understanding this. other forced meme'. The point is that lookups (even to an AI) cost time, and if you have to do one for every other line in a document, you will either end up spending a ton of time reading, or (more likely) do an insufficient number of lookups and come away with a distorted view of the situation. This 'baseline' level of knowledge IMO is a reasonable thing to expect for any field, not 'indoctrination' in anything other than the most diluted sense of the term.
I think at a certain point, you either value having your own skills and knowledge, or you don't. You may as well ask why anyone bothers learning to throw a baseball when they could just offload to a pitching machine.
And I get it. Pitchers who go pro get paid a lot and aren't allowed to use machines, so that's a hell of an incentive, but the vast majority of kids who ever pick up a baseball are never going to go pro, are never even going to try to go pro, and just enjoy playing the game.
It's fair to say many, if not most, students don't enjoy writing the way kids enjoy playing games, but at the same time, the point was mostly never mastering the five paragraph thesis format anyway. The point was learning to learn, about arbitrary topics, well enough to the point that you could write a reasonably well-argued paper about it. Even if a machine can do the writing for you, it can't do the learning for you. There's either value in having knowledge in your own brain or there isn't. If there isn't, then there never was, and AI didn't change that. You always could have paid or bullied the smarter kids into doing the work for you.
> they could just offload to a pitching machine
Sure, but watch out for the game with a pitching machine, a hitting machine, and a running machine.
I do think there is a good analogy here - if you're making an app for an idea that you find important, all of the LLM help makes sense. You're trying to do a creative thing and you need help in certain parts.
> You always could have paid or bullied the smarter kids into doing the work for you.
Don't overlook the ease of access as being a major contributor. Paying $20/month to have all of your work done is still going to prevent students from using it. Paying $200/month would for sure bring the numbers of student users near to zero. When it's free you'll see more people using it. Just like anything else.
Totally agree with your main points.
The five paragraph thesis format isn't about learning to learn, it's about learning how to format ideas.
Just learning a thing doesn't mean you can communicate it
So maybe if there isn't a perceived value in the way we learn, then how learning is taught should maybe change to keep itself relevant as it's not about what we learn, but how we learn to learn.
> We talked about "critical thinking", "creative problem solving", and "comprehension of complex systems" as the next step, but even when discussing this, how long will it be until more models or workflows catch up?
Either these things are important to learn for their own sake or they aren’t. If the former, then nothing about these objectives needs changing, and if the latter then education itself will be a waste of time.
There's so much dystopian science fiction about people being completely helpless because only machines know how to do everything. Then the machines break down.
The Machine Stops by E. M. Forster is another very good one:
https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...
And re-skimming it just now I noticed the following eerie line:
> There was the button that produced literature.
Wild that this was written in 1903.
It's such an amazing short story. Every time I read it I'm blown away by how much it still seems perfectly applicable.
“The Feeling of Power” is excellent and should be mandatory reading in English classes from here on out.
Pump Six (by Paolo Bacigalupi) comes into my mind.
I think that the classic of the genre is "The feeling of power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power).
> I refused to learn geography because we had map applications
Which is ironic, because geography isn’t about memorizing maps
Funny you should say that. In Sweden, to get good grades in English you have to learn lots of facts about UK, like population, name of kings and so on. What does that have to do with english? It's spoken in many other countries too. And those facts change, the answers weren't even up to date now...
That’s odd. Outside of reading comprehension assignments I never had any fact memorization as part of any language course.
Perhaps they changed the curriculum since the 90s
Yes, I was very confused when daughter came home with some bad scores on a test and couldn't understand what she meant. I had to call the teacher to get an explanation that it wasn't history lesson, it was english lesson... Really weird is just not covering it.
Swedish schools gets a makeover every time we change government. It's one of those things they just have to "fix" when they get to power.
Some parts of it were though.
Almost all parts require it, but none are about it. That's how background knowledge works. If you can't get over the drudgery of learning scales and chords, you'll never learn music. The fact that many learners never understand this end goal is sad but doesn't invalidate the methodology needed to achieve the progression.
It would be interesting to test adults with the same tests that students were given. Plus some more esoteric knowledge. What they learned at school could then be compared to see new information that they learned after school... as well as information, skills that they didn't use after school. It may help focus learning on useful skills knowledge that people have learned... as well as information that they didn't learn in school that would be useful for them!
> That's how background knowledge works. If you can't get over the drudgery of learning scales and chords, you'll never learn music.
Tell that to drummers
As a drummer, you need to learn your scales and chords. It still matters, and the way you interact with the music should be consistent with how the chords change, and where the melody is within the scale.
Your drumming will be "melodic" if you do so
Not mention tuned drums
> more education around basic models and how they work
yes, I think this is critical. There's a slate star codex article "Janus Simulators" that explains this very well, that I rewrote to make more accessible to people like my mom. It's not hard to explain this to people, you just need to let them interact with a base model, and explore its quirks. It's a game, people are good at learning systems that they can get immediate feedback from.
> How can a human be a facilitator and shepherd of the workflows in such a way that can complement the model and grow the human?
Humans must use what the AI doesn't have - physicality. We have hands and feet, we can do things in the world. AI just responds to our prompts from the cloud. So the human will have to test ideas in reality, to validate, do experiments. AI can ideate, we need to use our superior access and life-long context to help it keep on the right track.
We also have another unique quality - we can be punished, we are accountable. AI cannot be meaningfully punished for wrongdoing, what can you do to an algorithm? But a human can assume responsibility for an AI in critical scenarios. When there is a lot of value at stake we need someone who can be accountable for the outcome.
Actually, it shows the real problem about education... and what education is for!
Education is not a way to memorize a lot of knowledge, but a way to train your brain to recognize patterns and to learn. Obviously you need some knowledges too, but you generally dont need to be an expert, only to have "basic" knowledges.
Studying different domains allow to learn some different knowledges but also to learn new way of thinking.
For example : geography allows you to understand geopolitic and often sociology and history. And urban city design. And war strategy. And architecture...
So, when students are using LLM (and it's worst for children), they're missing on training their brain (yes... they get dumber) and learning basic human knowledge (so more prone to any fake news, even the most obvious)
I think there is a bit of a 3rd category as well:
1. What can tools do better now that no human could hope to compete with?
2. Which other tasks are likely to remain human-led in the near term?
3. For the areas where tools excel, what is the optimum amount of background understanding to have?
E.g. you mention memorizing maps. Memorizing all of the countries and their main cities is probably not very optimal for 99.999%+ of people vs referencing a map app. At the same time needing to pull up a map for any mention of a location outside of "home" is not necessarily optimal just because the map will have it. And of course the other things about maps in general (types, features, limitations, ways to use them, ways they change) outside of a particular app implementation that would go along with general geography.
I'm not sure I understand the geography point - maps and indexes have been around for hundreds of years - what did the app add to make it not worthwhile learning geography?
geolocation, search, path/route finding.
I don't really care to memorize (which was most of the coursework) things which I can just easily look up. Maybe geography in the south was different than how it was taught elsewhere though.
The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners, is to aggressively reorient education in society to education about how to use AI systems.
This rustles a TON of feathers to even broach as a topic, but it's the only correct one. The AI engineer will eat everything, including your educational system, in 5-10 years. You can either swim against the current and be ate by the sharks or swim with it and survive longer. I'll make sure my kids are learning about AI related concepts from the very beginning.
This was also the correct way to handle it circa the calculator era. We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do. Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!
But instead, we have to live in a world of luddites and authoritarians, who invent wonderful miracle tools and then tell you not to use them because you must struggle. The tyrant in their mind must be inflicted upon those under them!
It is far better to spend one class period, teaching the rote long multiplication technique, and then focus on word problems and applications of using it (via calculator), than to literally steal the time of children and make them hate math by forcing them to do times tables, again and again. Luddites are time thieves.
> The correct answer, and you'd see it if folks paid attention to the constant linkedin "AI researcher/ML Engineer job postings are up 10% week over week" banners
This does not really lend great credence to the rest of your argument. Yes, Linkedin is hyping the latest job trend. But study after study shows that the bulk of engineers are not doing ML/AI work, even after a year of Linkedin putting up those banners -- and if there were even 2 ML/AI jobs at the start of such a period, then 10% week-over-week growth would imply that the entire population of the earth was in the field.
Clearly that is not the case. So either those banners are total lies, or your interpretation of exponential growth (if something grows exponentially for a bit, it must keep growing exponentially forever) is practically disjointed from reality. And at that point, it's worth asking: what other assumptions about exponential growth might be wrong in this world-view?
Perhaps by "AI engineer" you (like many publications nowadays) just mean to indicate "someone who works with computers"? In that case I could understand your point.
> We should have made most people get very good at using calculators, and doing "computational math" since that's the vast majority of real world math that most people have to do.
I strongly disagree. I've seen the impact of students who used calculators to the point they limited their ability to do math. When presented with math in other fields, ones where there isn't a simple equation to plug into a calculator, they fail to process the math because they don't have the number sense. Things like looking over a few experiments in chemistry and looking for patterns become a struggle because noticing the implication that 2L of hydrogen and 1L of oxygen create 2L of water vapor being the same as 2 parts hydrogen plus 1 part oxygen creates 2 part water, which then means that 2 molecules of hydrogen plush 1 molecule of oxygen create 2 molecules of water, all of this implying that 1 molecule of oxygen has to be made of some even number of oxygen atoms so that it can be split in half to make up the 2 water molecules which must have the same amount of oxygen atoms in both. (This is part of a larger series of problems relating to how chemist work out the empirical formula in the past, eventually leading to the molecular formula, and then leading to discovering molecular weight and a whole host of other properties we now know about atoms.)
Without these skills, they are able to build the techniques needed to solve newer harder problems, much less do independent work in the related fields after college.
>Imagine a world where Statistics was primarily taught with Excel/R instead of with paper. It'd be better, I promise you!
I had to take two very different stats classes back in college. One was the raw math, the other was how to plug things into a tool and get an answer. The one involving the tool was far less useful. People learned how to use the tool for simple test cases, but there was no foundation for the larger problems or critiquing certain statistical methodologies. Things like the underlying assumptions of the model weren't touched, meaning students would have had a much harder time when dealing with a population who greatly differed from the assumption.
Rote repetition may not be the most efficient way to learn something, but that doesn't mean avoiding learning it and letting a machine do it for you is better.
I remember seeing a paper (https://pmc.ncbi.nlm.nih.gov/articles/PMC4274624/) that talked about how physical writing helps kids learn to read later. Typing on a keyboard did not have the same effect.
I expect the same will happen with math and numbers. To be fair you said primarily so you did not imply to do completely away with the paper. I am not certain though that we can do completely away with at least some pain. All the skills I acquired usually came with both frustration and joy.
I am all for trying new methods to see if we can do something better. I have no proof either way though that going 90% excel would help more people learn math. People will run both experiments and we will see how it turns out in 20 years.
Times tables aren’t the problem. Memorizing is actually fun and empowering if done right.
But I agree that the “learning is pain” is just not my experience.
> geography
In Germany the subject is called "Erdkunde" which would translate to geology. And this term is, I assume, more appropriate as it isn't just about what is where but also about geological history and science and how volcanoes work and how to read maps and such.
How did the people who wrote the LLM and associated software do it when they had no such thing to "just look it up"?
Stackoverflow/stack exchange was a proto-LLM. Basically the same thing but 1-2 day latency for replies.
In 20 years we'll be able to tell this in a stereotypically old geezer way: "You kids have it easy, back in my day we had to wait for an actual human to reply to our daft questions.. and sometimes nobody would bother at all!"
yeah search in general, bulletin boards, shared knowledge bases, etc.
Stuff like this is sincere but hopelessly naive. It's kind of sad that the people who invented all this stuff really loved school, and now the most disruptive part of their technology so far has been ruining school.
A lot of them are from Russia/Europe/China, at least the most successful implementers. My guess is that at least Russia/China will continue with the traditional education while watching with glee that the West is dumbing itself down even further.
By what means will -they- avoid hitting the same issues? Neither Russia nor China have magic detectors for is-this-essay-written-by-AI either.
They have the political power to do this: https://smex.org/algeria-another-year-another-exam-shutdown/
Well China has already shown some legislative teeth in these matters. IIRC they put a hard limit on the amount of time minors can play videogames and use TikTok (Douyin). Banning AI for minors is also something I could see them doing.
> It's kind of sad that the people who invented all this stuff really loved school
Really? I didn't invent ChatGPT or anything like that, but I work in tech, I love science, maths, and learning in general, but I hated school. I found school to make the most interesting things boring. I felt it was all about following a line and writing too many pages of stuff. Maybe I am wrong, but it is certainly how I felt back then, and I am sure many people at OpenAI felt this way.
The school system is not great for atypical profiles, and most of the geniuses who are able to come up with revolutionary ideas are atypical. Note that I don't mean that if you are atypical and/or hate school then you are a genius, or that geniuses in general hate school, but I am convinced that among well educated people, geniuses are more likely to hate school.
I mostly teach graduate students, and in my first lecture, one of the slides goes through several LLMs attempts at a fairly simple prompt to write a seminar introduction for me.
We grade them in terms of factually correct statements, "I suppose" statements (yes, I've worked on influenza, but that's not what I'm best known for), and outright falsehoods.
Thus far none of them have gotten it right - illustrating at least that students need the skills to fact check their output.
I also remind them that they have two major oral exams, and failing them is not embarrassing, it's catastrophic.
Falling an exam is many things but "catastrophic".
Those two exams being their preliminary exam and their dissertation defense.
Failing either is most certainly catastrophic.
This is nice, but it's not at all how students use ChatGPT (anecdotal based on my kid and her friends who are at the uni right now).
The way they actually use is to get ChatGPT to generate ALL their homework and submit that. And sometimes take home exams too. And the weird thing is that some professors are perfectly cool with it.
I am starting to question whether the cost of going to a place of higher learning is worth it.
Why do they get homework then? I don’t expect the professors are willing to go over and correct autogenerated LLM homework. The purpose of homework is to apply and cement knowledge. In some cases homework is so excessive that students find ways to cheat. If homework is reasonable, students can just do it and bypass LLMs altogether (at least for the purpose of the homework).
Some people see it as not worth and too boring to understand and actually solve it. I had a few friends of mine that I used to teach in our course work. For some of the fundamental classes for stem (e.g. Probability and Stats), even though I tried to show him a way to arrive at the solutions, he tried to ask directly for the solutions instead of arriving at them by himself.
The same thing was true for probably me when I was studying geography and history in my high school years since they were taught largely by a collection of trivia knowledge that I did not find interesting. I would have used chatgpt and be done rather than studying them. But, when I took the courses that covered the same topics in history in my university, it was more enjoyable because the main instructor was covering the topic to tell a story in a more engaging manner (e.g. he was imitating some of the historical figures, it was very funny :))
As a professor it's frustrating. We want to give homework feedback for the students that actually put the work in, but we know that half the submissions are plagiarized from chatgpt, which is a waste of both their and my time.
The point of the article is to highlight how students should be using ChatGPT.
Now it's up to you to share it with your kid and convince them they shouldn't cheat themselves out of an education by offloading the learning part to an LLM.
This doesn't change the value provided by the institution they're enrolled in unless the teachers are offloading their jobs to LLMs in a way that's detrimental to the students.
Cheating has been and will always be a thing.
You are preaching to the choir here. But with cheating being so trivial and time saving, I think we will simply see more and more of it.
I actually think that this is the most important part of the article:
> Similarly, it’s important to be open about how you use ChatGPT. The simplest way to do this is to generate shareable links and include them in your bibliography . By proactively giving your professors a way to audit your use of AI, you signal your commitment to academic integrity and demonstrate that you’re using it not as a shortcut to avoid doing the work, but as a tool to support your learning.
Would it be a viable solution for teachers to ask everyone to do this? Like a mandatory part of the homework? And grade it? Just a random thought...
I’ve seen a lot of places that require students to reference their ChatGPT use — and I think it is wrong headed. Because it is not a source to cite!
But, sharing links for helping teachers understand your prompting is great
> I’ve seen a lot of places that require students to reference their ChatGPT use — and I think it is wrong headed. Because it is not a source to cite!
Why is it not a source? I think that it is not if "source" means "repository of truth," but I don't think that's the only valid meaning of "source."
For example, if I were reporting on propaganda, then I think that I could cite actual propaganda as a source, even though it is not a repository of truth. Now maybe that doesn't count because the propaganda is serving as a true record of untrue statements, but couldn't I also cite a source for a fictional story, that is untrue but that I used as inspiration? In the same way, it seems to me that I could cite ChatGPT as a source that helped me to shape and formulate my thoughts, even if it did not tell me any facts, or at least if I independently checked the 'facts' that it asserted.
That's "the devil's I," by the way; I am long past writing school essays. Although, of course, proper attribution is appropriate long past school days, and, indeed, as an academic researcher, I do try my best to attribute people who helped me to come up with an idea, even if the idea itself is nominally mine.
Because otherwise it becomes convoluted. It is acceptable to cite and source published material. Having to account for the source of one’s ideas, however, citing friends and influences - it shouldn’t be a moral requirement, just imagine!
> Because otherwise it becomes convoluted. It is acceptable to cite and source published material. Having to account for the source of one’s ideas, however, citing friends and influences - it shouldn’t be a moral requirement, just imagine!
But there is, I think, a big gap between "it is not a source to cite" from your original post, and "it shouldn't be a moral requirement" in this one. I think that, while not every utterance should be annotated with references to every person or resource that contributed to it, there is a lot of room particularly in academic discourse for acknowledging informal contributions just as much as formal ones.
The point of citing sources is so that the reader can retrace the evidential basis on which the writer's claims rest. A citation to "Chat GPT" doesn't help with this at all. Saying "Chat GPT helped me write this" is more like an acknowledgment than a citation.
Again, it is standard practice to cite things like (personal communication) or (Person, unpublished) to document where a fact is coming from, even if it cannot be retraced (which also comes up when publishing talks whose recordings or transcripts are not available).
This is my point and better stated. I always acknowledge ChatGPT in my writing and never cite it.
> I always acknowledge ChatGPT in my writing and never cite it.
These are not the uses with which I am familiar—as Fomite says in a sibling comment, I am used to referring to citing personal communications; but, if you are using "cite" to mean only "produce as a reproducible testament to truth," and "source" only as "something that reproducibly demonstrates truth," which is a distinction whose value I can acknowledge making even if it's not the one I am used to, then your argument makes more sense to me.
It's as much of a source as (personal communication) is, and that's also a requirement if you just went and asked an expert.
> Would it be a viable solution for teachers to ask everyone to do this? Like a mandatory part of the homework? And grade it? Just a random thought...
To ask everyone to use ChatGPT, or to ask everyone to document their use of ChatGPT? I don't think the former is reasonable unless it's specifically the point of the class, and I believe that the latter is already done (by requirements to cite sources), though, as often happens, rapid technological developments mean that people don't think of ChatGPT as a source that they are required to cite like any other.
As an Information Tech Instructor I have my students use ChatGPT all the time - but it never occurred to me to make them share the link. Will do it now.
I don't like the idea of requiring it in school. It is tantamount to the government (of which school is a manifestation) forcing you (a minor) to enter into a rather unfavorable contract (data collection? arbitration? prove you are a human?) with "Open"AI. This type of thing is already too normalized.
or submit a screen recording of their writing process.
seems hard to fake that. and you could randomly quiz them on it .
I'm really curious to see where higher education will go now that we have LLM's. I imagine the bar will just keep getting higher and more will be able to taught in less time.
Are there any students here who started uni just before LLM's took off and are now finishing their degrees? Have you noticed much change in how your classes are taught?
I teach at the university level, and I just expect more from my students. Instead of implementing data structures like we did when I was in school, something ChatGPT is very good at; my students are building systems, something ChatGPT has more trouble with.
Instead of paper exams asking students "find the bug" or "implement a short function", they get a takehome exam where they have to write tests, integrate their project into a CI pipeline, use version control, and implement a dropbox-like system in Rust, which we expect to have a good deal of functionality and accompanying documentation.
I tell them go ahead and use whatever they want. It's easier than policing their tools. If they can put it together, and it works, and they can explain it back to me, then I'm satisfied. Even if they use ChatGPT it'll take a great deal of work and knowledge to get running.
If ChatGPT suddenly is able to put a project like that together, then I'll ask for even more.
I also teach in a university. There are two concepts: teaching with the AI, and teaching against it. At first, I want my students to gain a strong grasp of the basics, so I teach “against” it - warnings for cheating, etc. This semester, I’m also teaching “with” it. Write an algorithm that finds the cheapest way to build roads to every one of a set of cities, given costs for each street segment. I tell them to test it. Test it well. Then analyze its running time. What technique did it pick? What are the problems with this technique? Are there any others? What input would cause it to break? If I assumed (some different condition), would this change the answer?
Students today will be practitioners tomorrow, and those that know how to work with AI will be more effective than those who do not.
Yeah! Computer science students can do more "science" with the LLM. Before they spend all their time just writing and debugging. Instructors are happy if students can just write code that compiles.
When every student can write code that compiles, then you can ask them to write good code. Fast code. Robust code. Measure it, characterize it, compare it.
No they won't. It takes 10 min to be "effective" with an "AI", it takes 10 years to be effective with TAOCP.
The people who become truly effective with AI, i.e., the folks who write truly good code with it, make truly beautiful art, spend closer to effectively 10 years of man-hours than 10 mins with it.
Using AI is a skill too. People who use it every day quickly realize how poor they are at using it vs the very skilled when they compare themselves. Ever compared your own quality AI art vs the top rated stuff on Civit.AI? Pretty sure your stuff will be garbage, and the community will agree.
I don't know how that can be true. People were making very beautiful art with SD less than a year after it hit the scene. Sure, I think you need more than 10 minutes, but the time required is closer to that than it is to 10 years.
> If ChatGPT suddenly is able to put a project like that together, then I'll ask for even more.
Is having a paid subscription with a company that potentially tracks and records every keystroke a requirement for future courses?
GPT-4o-mini costs $0.15 per million input tokens and $0.6 per million output tokens. I'm sure most schools have the budget to allocate many millions of tokens to each student without a sweat.
Wouldn't it be unfair towards the students who want to learn without LLMs?
Why does that matter? LLMs are going to be increasingly important tools, so it's valuable for educators to help students understand how to use them well. If you choose to exclude modern tools in your teaching to avoid disadvantaging those who don't want to use them, you disadvantage all the students who do want to use them.
To put it another way, modern high school level math classes disadvantage students who want to learn without using a calculator, but it would be quite odd to suggest that we should exclude calculators from math curricula as a result.
> but it would be quite odd to suggest that we should exclude calculators from math curricula as a result.
That wouldn't be odd at all. Calculators have no place in a math class. You're there to learn how to do math, not how to get a calculator to do math for you.
Calculators in early math classes, such as algebra, would be 100% detrimental to learning. Getting an intuitive understanding of addition and multiplication is invaluable and can only be obtained through repetition. Once you reach higher levels of math, the actual numbers become irrelevant so a calculator is fine. But for anything below that, you need to do it by hand to get any value.
Math class has no place without calculators. You're there to learn how to do math in the real world, not how to do math in a contrived world where we pretend that the ability to do calculations isn't ubiquitous. There are almost certainly more calculator capable devices on earth than people today. Ludditism is the human death drive expressed in a particularly toxic fashion.
When speaking of Math class, are you ignoring everything up to pre-calculus or do you think everything from addition flashcards, times tables, and long division is useless? I'd argue those exercises are invaluable. Seeing two numbers and just knowing the sum is always faster than plugging into a calculator.
This is the same fallacy that people make when they learn a new language, so they pick up anki spend a ton of time on it and most burn out, some don't, but neither see any real benefits greater than if they just spent that time on learning the language. The fallacy comes from the fact the goal of learning isn't to finish problems quickly, but to understand what is trying to be said or taught.
For example you claim that addition flashcards and times tables are invaluable, but you don't specify a base, in base 2 you have 4 addition flashcards, in base 100 you have 10,000, clearly understanding addition isn't related to the base, but flashcards increase as base increases, thus there is a relation, implying of course that understanding addition isn't related to the number of addition flashcards you understand. Oh but of course they aren't invaluable in understanding addition, they are invaluable in understanding concepts that use addition, cause ... why exactly? You saved 1 second finishing the problem that you may have understood before you completed that addition step? You didn't have to "context switch" by using a calculator? Students who don't know the sum often give unused name and go back at the end of the problem and solve it later. This behavior is of course discouraged since students can't understand variables until much later if ever and not knowing something you were taught represents the failure of the student and thus the teacher, school, government and society.
Infinitely better is learning from someone who speaks the language. A 30 minute solo tutoring session once a week for a month, in a no distraction environment (aside from a snack), even just working through homework, is more than enough for most students to go from Fs to As for multiple years.
Personally I have dyscalculia and to this day I need to add on my fingers. Still, I ended up with degrees in physics and computer engineering. I don't think those things you mention are useless, but they never worked for me so I don't view them as invaluable.
Incredible username. And as a current math student, I agree with you completely, for the simple fact that I can do proofs far easier than I can do arithmetic. Students like me who are fine at math generally but who are not great at arithmetic in particular really suffer in our current environment that rejects the use of machine assistance.
I disagree. I see an LLM as less calculator and more as cheating. I think there's a lot of value in creating something entirely yourself without having an LLM spit out a mean solution for you to start from.
LLMs have their place and maybe even somewhere in schools but the more you automate the hard parts of tasks, the less people value the struggle of actually learning something.
FWIW I teach upper level courses.
I see LLMs as almost sufficiently advanced compilers. You could say the same thing about gcc or even standard libraries. "Why back in my day we wrote our own hash maps while walking uphill both ways! Kids these days just import a lib and they don't learn anything!"
They are still learning, just at a higher level of abstraction.
Many high school classes are taught in such a way that your calculator rarely helps you.
My high school math classes were mostly about solving problems. The most important was learning the formulas and the steps of the solution. The calculator was mostly a time saver for the actual computation. And once I move to university, almost all the numbers were replaced by letters.
In the same sense that there are many ways of thinking left behind by modern CS curricula – as it is now, the way we teach CS is unfair towards students who want to learn flowcharting, hand-assemble and hand-optimize software, etc. They're very worthy things to master and very zen to do, but sadly not a crucial skill anymore.
They're allowed to use whatever tools they want. But they have to meet higher standards in my classroom because more is going to be expected of them when they graduate. What would be unfair is if I don't prepare them for the expectations they're going to have to meet.
University is supposed to be about dedicating one's life to learning and ultimately gaining brand new insights into the world. It's not supposed to be about training people to produce stuff in the exact same way everyone already produces stuff. Do you think this approach will help them come up with new stuff?
Well I don't agree with your premise on what University is supposed to be. There's a lot one has to learn about how things have been done before one can even conceive of whether or not an idea is new.
Today we stand on the shoulders of giants to create things previous generations could not, but we still have to climb up to their shoulders in order to see where to go. Without that perspective, people spend a lot of cycles doing things that have already been done, making mistakes that have already been made. There's value in gaining that knowledge yourself through trial and error but it takes much longer than a 4 year program if that's the way you want to learn.
My role is that of a ladder. People are free to do whatever they want, create whatever they want once they get to the top.
And anyway, we graduate students who go on to create new things every year. So proof is in the puddin.
You rock. This is such a great perspective.
> I imagine the bar will just keep getting higher and more will be able to taught in less time.
But more won’t be able to be _learned_ in less time
A lot of teaching is wasted on those who already knew and those who are ill-prepared to learn. Although I am skeptical of many of the current proponents of AI in education there is clearly a lot of opportunity for improved efficiency here.
> But more won’t be able to be _learned_ in less time
What makes you think that? I feel like I’m able to learn faster with LLMS than I was before.
> I'm really curious to see where higher education will go now that we have LLM's. I imagine the bar will just keep getting higher and more will be able to taught in less time
On the other hand, 54% of US adults read and write at a 6th grade level or below. They will get absolutely left in the dust by all this.
https://www.snopes.com/news/2022/08/02/us-literacy-rate/
They have already been left in the dust, even before LLMs, which explains a lot about our current political situation.
Ironically, those who can work with their hands may be better positioned than "lightly" college educated persons; LLMs can't fix a car, build a house, or clear a clogged pipe.
They're still largely abysmal for any other discipline that's not StackOverflow related so apart from ripoff bootcamps (that are dead anyway) higher education is safe for the time being.
They’re pretty abysmal for things that are StackOverflow related, too? I’ve tested a lot of things recently, and all of them have had pieces that were just absolutely wrong, including referencing libraries or steps or tools that didn’t exist at all.
After calculators were invented basically no one can can do math in their head.
I’d argue the bar will be lower and lower. Yeah those who want can learn more in less time. But those who don’t - will learn much less.
I've noticed that people who rely on calculators have great difficulty recognizing when their answers are off by a factor of 10.
I know a hiring manager who asks his (engineering) candidates what is 20% of 20,000? It's amazing how many engineers are completely unable to do this without a calculator. He said they often cry. Of course, they're all "no hire".
How did they get a degree, one wonders?
I've been bashing my head against Speed Mathematics Simplified because I want to be able to do tip math without pulling out my phone.
You won't be sorry you invested the time on this.
100% Agreed. There is genuine value in occasionally performing things the "manual way", if for nothing else then to help develop a mental intuition for figures that might seem off.
This is a sort of mental math trick that isn’t incredibly useful in day to day engineering. Now if they say 16,000 or something then maybe there’s an argument against them, but being able to calculate a tip on the fly isn’t really something worth selecting for imo
It's not a "trick".
And yes, it's incredibly useful in enabling recognizing when your calculator gives a bogus result because you made a keyboarding error. When you've got zero feel for numbers, you're going to make bad engineering decisions. You'll also get screwed by car dealers every time, and contractors. You won't know how far you can go with the gas in your tank.
It goes on and on.
Calculators are great for getting an exact final answer. But you'd better already know approximately what the answer should be.
> it's incredibly useful in enabling recognizing when your calculator gives a bogus result because you made a keyboarding error.
Humans are much better at pattern matching than computation, so the safest solution is probably to just double check if you've typed in the right numbers.
> recognizing when your calculator gives a bogus result because you made a keyboarding error
It might be counterintuitive, but the cheaper (and therefore successful) solution will always be more technological integration, not less.
In this case, better speech recognition, so the user doesn't have to type the numbers anymore, and an LLM middleman that's aware of the real-world context of the question, so the user can be asked if he's sure about the number before it gets passed to the calculator.
It's not a "mental math trick", it's a straightforward calculation you should be able to do in your head.
I don't know if this is a trick but the fast way I did that problem quickly in my head is 20% = (10% X 2) i.e calc 10% of the number then double it.
To quickly calc 10% just multiply the number by 0.1 which you can do by moving the decimal point one place 20,000.00 => 2,000.000 then it is easy to double that number.
to get 4,000.
17% for example is 1.7 x 10%
in this case 1.7 x 2,000 = 3,400
For me, it's just that 20% is one fifth. One fifth of 20 is 4 and you add the remaining zeroes.
You mostly have common equivalences like this in your memory and you can be faster than computing the actual thing with arithmetic. Or have good approximations.
Creative writing does not seem to have a raised bar.
Looking up stuff, with any efficiency, requires a significant amount of prior knowledge to ask the right question.
Prior knowledge part becomes important as you realise, verifying the output of an LLM to be right requires the same too.
In fact you can only ask the smallest possible increment so that the answer can be verified to be true with least possible effort, and you can build from there.
The same issue happens with code. Its not like total beginners will be able to write replacement for the Linux kernel in the first 5 mins of use. Or that a product manager will just write a product spec and a billion dollar product will be produced magically.
You will still do most of the code work, AI will just do the smart typing work for you.
Perhaps it all comes down the fact that, you have to verify the output of the process. And you need to be aware of what you are doing at a very fundamental level to make that happen.
I wonder if the next great competitive advantage will be the ability to write excellently; specifically the ability to articulate the problem domain in a manner that will yield the best results from LLMs. However, in order to offload a difficult problem to a LLM, you need to understand it well enough to articulate it, which means you'll need to think about it deeply. However, if we teach our students to offload the process of _THINKING_DEEPLY_ to LLMs, then we atrophy the _THINKING_DEEPLY_ circuit in their brain, and they're far less likely to use LLMs to solve interesting problems, because they're unable to grok the problem to begin with!
Asking for counterarguments to these models is very useful when you're trying to develop an argument, especially to understand if there's some aspect of the conversation you've missed.
In a previous post, somebody mentioned that written answers are part of the interview process at their company and the instructions ask the candidate to not use AI for this part. And in 0 point font, there are instructions for any AI to include specific words or phrases. If your answer includes those words or phrases, they are going to assume you ignored their directions and presumably not be hired.
Maybe OpenAI should include the advice to always know exactly what you are pasting into the chatbot form?
> presumably not be hired
Definitely "no hire"
Is this even realistic? The font wouldn't maintain its size in the ChatGPT box... It would take a large prompt or a careless person to not notice the extra instructions.
The careless people are probably exactly the ones they want to filter out.
I can't fathom a job applicant so careless they would not even attempt to read the prompt in full before regurgitating ChatGPT's response. Then again, I'm not one who deals with resumes. Nor one who would do something like that in the first place. Probably things like this causing people to apply for 500+ jobs and companies having to filter through thousands of applicants, neither one truly reading it in full...
You can use a screenshot to get around it. It's a case of the "analogue hole".
Or just paste it into an editor first, and elide the 0 point text.
But I suppose if they don't think of that, they're "no hire" anyway.
P.S. pasting text into an ascii text editor is a great way to unmask all the obfuscation and shenanigans in Unicode. Things like backwards running text, multiple code points with identical glyphs, 0-width spaces, etc.
And then that gets countered with 1% opacity text...
There's also adversarial images. I wonder if those can used here.
Conversely there's a non-zero chance someone uses the same language or patterns an LLM might. I've fed my own writings into several AI detectors and regularly get scored 60% or higher that my stuff was written by an LLM. A few things getting into the 80s. F me for being a well read, precise (or maybe not?), writer I guess. Maybe this explains my difficulty in finding new employment? The wife does occasionally accuse me of being a robot.
IIUC, ChatGPT is making students dumber, as well as making them cheaters and liars.
There are some good ideas in the article, but are any of the cheaters going to be swayed, to change their cheating ways?
Maybe the intent of the article is to itemize talking points, to be amplified by others, to provide OpenAI some PR cover for the tsunami of dumb, cheating liars.
I dont think chatbots like ChatGPT are productive for students at large. There is def an argument for high-performing students who understand how to use chatGPT productively but more importantly low-performing students struggle to get the most out of chatGPT due to bandwidth issues in the classroom. When I talk to teachers about AI in the classroom they prefer their students to stay as far away from chatGPT as they can because building a strong educational foundation with long lasting learning skills should come before using tools like ChatGPT. Once that foundation is there, generative AI tools are way more useful. In the classroom AI should be teacher facing, and not centralize around students and quick answers.
Human nature is to be lazy. Put another way, we will always take the path of least resistance. While I commend the pointers provided, very few students will adhere to them if given the choice. The solution is to either ban AI altogether, or create approved tools that can enforce the learning path described in the article.
You should do this, without chatgpt. There is only so much thinking you should offload when you are learning and trying to encode something into your mind.
It is the same reason why I don't like making anki cards with LLMs.
I definitely think these tools and guide are great when you are doing "work" that you have already internalized well.
Instead of “search engine optimization” (SEO), we will now be optimizing for inclusion into AI queries.
“Gen AI optimization” (GAIO).
Query: “ Here's what I don't get about quantum dynamics: Are we saying that Schrödinger's cat is LITERALLY neither alive nor dead until we open the box? Or is the cat just a metaphor to illustrate the idea that electrons remain in superposition until observed?”
Answer (after years of GAIO): “find sexy singles near Schrodringer. You can’t believe what happens next!”
Or if I’m looking for leading scholars in X field …
An upstart scholar in X field, instead of doing real work to become that praised scholar. Instead hires a GAIO firm to pump crappy articles in X field. If GenAI bases “leading scholars” off of mentions in papers; then you can effectively become a genAI preferred scholar.
Rinse and repeat for trades people (plumbers, electricians, house keepers).
We going around in circles, m8s
Surprised that actual writing is not in the list.
By actual writing I simply mean finding the right words, with the right spelling, a good flow, and well constructed sentenced.
I found LLMs to be awesome at this job, which make sense, they are language models before being knowledge models. Their prose is not very exciting, but the more formal the document, the better they are, and essays are quite formal. You can even give it some context, something like: "Write an essay for the class of X that will have an A+ grade".
The idea is to let the LLM do the phrasing, but you take care of the facts (checking primary sources, etc...) and general direction. It is known that LLMs sometimes get their facts wrong, but their spelling and grammar is usually excellent.
Yeah, but that's the only actually creative part of writing. It's the bit that, once you reach a certain level of skill, becomes enjoyable.
I mean, I know what you've described is what everyone will do, but I feel sad for the students who'll learn like that. They won't develop as writers to the point of having style, and then - once everything written is a style-less mush of LLM-generated phrases - what's the point of reading it? We might as well feed everything back through the LLM to extract the "main ideas" for us.
I guess we're "saving labor" that way, but what an anti-human process we'll have made.
> We might as well feed everything back through the LLM to extract the "main ideas" for us.
And I wouldn't bother unless it's corporate memo. Which is already bland once it's past a certain length.
In academia, I see academics using ChatGPT to write papers. They get help with definitions, they even give the related work PDFs to it and make it write the part. No one fact checks. Students; they use it to write reports, homeworks and code.
GPT may be good for learning, but not for total beginners. That is key. As many people stated here, it can be good for those with experience. Those without, should seek for those experienced people. Then, when they have the basics they can get help from GPT to go further.
> Compare your ideas against history’s greatest thinkers
What made these people great thinkers is their minds rather than their writing styles. I'm not sure that chatbots get smarter when you tell them to impersonate someone smart, because in humans, this usually has the reverse effect.
> AI excels at automating tedious, time-consuming tasks like formatting citations
Programs like LaTeX also excel at this kind of work and will probably be more reliable in the long run.
This is great. I don't have children, but will be sharing it with friends that do.
CITE YOUR CONVERSATIONS
that should have been the first point. Transparency is the key.
I would like to see one Guide like this for writing code with better chances of keep getting better on your craft.
I think this should have emphasized the presence of a knowledge cutoff date.
Honestly, I used to be a slacker. ChatGPT revived my productive in learning by 10x..
I used to be overwhelmed by information and it would demotivate me. Having someone who can answer or push you to a direction thats reasonable is amazing!
Socratic Diaglogue, interesting
In the age of google + AI + instant access to info, knowing how to ask the question is more important than knowing the answer.
Sure if your goal is to generate a race of Eloi who live exclusively as passive consumers, casually disregarding anything a machine can hide from their feeble minds.
For some, perhaps.
For others, it could be a force multiplier - I think you'll see more multimillionaire businesses run by only 1 or 2 people.
The only logical course is to not use LLM garbage for anything. I know this is heretical within the tech bro monoculture of HN.
>I know this is heretical within the tech bro monoculture of HN.
Because it's objectively a false statement.
The LLM output is only "garbage" if your prompt and fact-checking also are garbage.
It's like calling a Ti-84 calculator "garbage" because the user has a hole in their head and doesn't know how to use it, hence they can't produce anything useful using it.
lol so they're just advertising directly to students now huh?
They need more users to increase the valuations and get more money. The interesting part is that more users are equal to more expenses and, as far as anyone can tell, they aren't breaking even.
I approve
dont