> ~/.cargo/
This reminds me...
STOP putting your shitty dot-directories full of a mix of config, cache and data in my god damned home directory.
Concerned that not doing this will break things? Just check if you've dumped your crap in there already and if not then put it in the right places.
Worried about confusing existing users migrating to other machines? Document the change.
Still worried? How about this: I'll put a file called opt-into-xdg-base-directory-specification inside ${XDG_CONFIG_HOME:-$HOME/.config} and if you find that file you can rest assured I won't be confused.
Thanks in advance!
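For what it's worth, honoring the spec is a handful of lines. A minimal sketch in Rust (the env-var names and fallback paths are straight from the XDG Base Directory Specification; a real program would likely use an existing crate such as `dirs` or `directories` instead):

```rust
use std::env;
use std::path::PathBuf;

/// Resolve one XDG base directory: use $VAR if it is set to an
/// absolute path, otherwise fall back to the spec's default.
fn xdg_dir(var: &str, default_rel: &str) -> PathBuf {
    match env::var(var) {
        // The spec says relative values should be ignored.
        Ok(v) if v.starts_with('/') => PathBuf::from(v),
        _ => PathBuf::from(env::var("HOME").expect("HOME not set")).join(default_rel),
    }
}

fn main() {
    let config = xdg_dir("XDG_CONFIG_HOME", ".config");    // config files
    let cache = xdg_dir("XDG_CACHE_HOME", ".cache");       // disposable caches
    let data = xdg_dir("XDG_DATA_HOME", ".local/share");   // persistent data
    println!("{} {} {}", config.display(), cache.display(), data.display());
}
```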
I would upvote at least 10 times if I could.
This!
>Debug information tends to be large and linking it slows down linking quite considerably. If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger, then this is wasted time.
Interesting. Is this true? In my work (java/kotlin, primarily in app code on a server, occasional postgres or frontend js/react stuff), I'm almost always reaching for a debugger as an enormously more powerful tool than println debugging. My tests are essentially the println, and if they fail for any interesting reason I'll want the debugger.
In my experience Java debuggers are exceptionally powerful, much more so than what I've seen from C/C++/Rust debuggers.
If I'm debugging some complicated TomEE application that might take 2 minutes to start up, then I'm absolutely reaching for the IntelliJ debugger as one of my first tools.
If I'm debugging some small command line application in Rust that will take 100ms to exhibit the failure mode, there's a very good chance that adding a println debug statement is what I'll try first.
CLion adds the power of IntelliJ debuggers to Rust. It works exceptionally well.
Do you have more information about this?
Last time I debugged Rust with CLion/RustRover, the debugger was the same one VS Code uses.
Sure. It’s got breakpoints, and conditional breakpoints, using the same engine as IntelliJ. It’s got evaluate, it’s got expression and count conditionals, it’s got rewind. It has the standard locals view.
Rust support has improved in 2024 pretty strongly (before this year it just shelled out to lldb); the expr parser and more importantly the variable viewer are greatly improved since January.
Does it support evaluating code, in context, while debugging?
Yes*, mostly
It can do any single expression, and the results are better than lldb, but it can't do multiple statements, and not everything in Rust can be one expression; you can't use `{}` blocks here.
Depends on the developer. In The Practice of Programming (https://en.m.wikipedia.org/wiki/The_Practice_of_Programming), Brian W. Kernighan and Rob Pike say they use debuggers only to get a stack trace from a core dump and use printf for everything else. You can disagree, but those are known to be very good programmers.
But what source code debuggers did they have available?
Other than gdb, I can't name any Unix C source code debuggers. I believe they were working on Unix before GDB was created (Wikipedia says GDB was created in 1986 - https://en.wikipedia.org/wiki/Gdb).
Plan 9 has acid, but judging from the man page and the English manual, the debugger is closer to a CLI/TUI than a GUI.
see https://9fans.github.io/plan9port/man/man1/acid.html and https://plan9.io/sys/doc/acid.html
Looks like there was an adb in 1979 which is supposedly the successor to db.
https://en.wikipedia.org/wiki/Advanced_Debugger
What if the program doesn't crash, and just silently produces incorrect data? I can find that error infinitely faster with a debugger.
Printf debugging excels on environments where there's already good logging. Here I just need to pinpoint where in my logs things have already gone wrong and work my way backwards a bit.
You could do the same with a debugger setting up a breakpoint, but the logs can better surface key application-level decisions made in the process to get to the current bad state.
In a debugger I need to wind back through all the functions, which can get awful: many of them are probably-correct library calls that you have to skip over when going back in time, and those make up a huge portion of the calls before the breakpoint. I don't think it's impossible to do with a debugger, but logging sort of bypasses the process of telling the debugger what's relevant so it can hide the rest. The log statements may already be in your codebase, whereas there are no equivalent annotations in the code to help the debugger understand what's important.
To me, printf helps surface the relevant application-level steps that led to a broken state, and debuggers help untangle hairy situations where things have gone wrong at a lower level, say missing fields or memory corruption. But these days, with safer languages, lower-level issues should be far less frequent.
---
On a side-note, it doesn't help debuggers that going back in time was really hard with variable-length instructions. I might be wrong here, but it took a while until `rr` came out.
I do think that complexities like that resulted in spending too much time dealing with hairy details instead of improving the UI for debugging.
I appreciate your candor.
I really value debuggers. I have spent probably half my career solving problems that weren’t possible to solve with a debugger. When I can fall back on it, it helps me personally quite a bit.
I find them amazing, it's just that printf is unreasonably good given how cheap it is.
If I had the symbols, the metadata, a powerful debugger engine and a polished UI, I'd take that over printf every day, but in the average situation printf is just too strong when you're fighting in the mud.
Your existing logs will tell you roughly where and you just insert some more log lines to check the state of the data.
Whether a debugger will be faster/easier depends on how fast your build/run cycle is and how many different processes/threads are involved, but a lot of it just comes down to preference. Most of my time spent debugging, at least, is spent thinking about the probable cause and then choosing what state to look at.
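A sketch of that insert-a-log-line move, using the `log` facade with `env_logger` as the backend (the `Order` type, `apply_discount`, and the field names are all hypothetical):

```rust
// Cargo.toml would need: log = "0.4", env_logger = "0.11"
#[derive(Debug)]
struct Order { id: u32, total: f64 }

fn apply_discount(o: &mut Order) { o.total *= 0.9; }

fn main() {
    env_logger::init(); // run with RUST_LOG=debug to see the output
    let mut order = Order { id: 42, total: 100.0 };
    log::debug!("before discount: {order:?}");
    apply_discount(&mut order);
    log::debug!("after discount: {order:?}");
}
```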
Logs are gold. Parsing logs can be very exhausting.
They use printf. Which they claim is faster.
I'm not against printf at all, my lifetime commit history is evidence of that. Do you also think that in the case of a coredump not existing, that printf is faster? Sincere question. I'm having an internal argument with myself about it at the moment and some outside perspective would be most welcome.
Most of my time with printf debugging is spent trying to reason about the code not compiling.
though you should note that I'm repeating their claims. What I think is hidden.
printf isn't faster if you want to single-step through code to find math precision errors.
I've had to do that on an embedded system that didn't support debugging. It was hell.
I’ve always wondered why embedded devs make less than “JavaScript-FOTM” devs.
The only time I use a debugger with Rust is when unsafe code in some library crate messes up. My own code has no "unsafe". I have debug symbols on and a panic catcher that displays a backtrace in a popup window. That covers most cases.
Rust development is mostly fixing compile errors, anyway. Once it compiles, it often works the first time. What matters is compile time for error compiles, which is pretty good.
Incremental compile time for my metaverse client is 1 minute 8 seconds in release mode. That's OK. Takes longer to test a new version.
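For reference, that kind of panic catcher can be tiny. A minimal sketch using std::panic::set_hook and std::backtrace::Backtrace, printing to stderr rather than a popup (a popup would need whatever UI toolkit the app already has):

```rust
use std::backtrace::Backtrace;
use std::panic;

fn main() {
    panic::set_hook(Box::new(|info| {
        // force_capture ignores RUST_BACKTRACE; the frames are only
        // readable if the build kept debug symbols.
        let bt = Backtrace::force_capture();
        eprintln!("panic: {info}\n{bt}");
        // A GUI app could hand this string to a dialog instead.
    }));
    panic!("demo panic");
}
```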
I use the debugger fairly regularly, though for me I'm on a stack where friction is minimal. In Go w/ VS Code, you can just write a test, set your breakpoints, hit "debug test", and you're in there in probably less than 20 seconds.
I am like you though, I don't typically resort to it immediately if I think I can figure out the problem with a quick log. And the times where I've not had access to a debugger with good UX, this tipping point can get pushed quite far out.
Debugging in Rust is substantially less common for me (and probably not only for me) because it is less often needed and more difficult - many things that are accessible in the interpreted world don't exist in a native binary.
I do care about usable tracebacks in error reports though.
The main challenge with debuggers in Rust is mapping the data correctly onto the complex type system. For this reason I rarely use debuggers, because dbg! is superior in that sense.
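For anyone who hasn't used it: dbg! prints the file, line, the expression's source text, and a Debug rendering of its value, and it passes the value through, so it can be dropped into the middle of an expression:

```rust
fn main() {
    let xs = vec![1, 2, 3];
    // Prints something like: [src/main.rs:4:17] xs.iter().sum::<i32>() = 6
    // and still yields the value, so the surrounding code is unchanged.
    let total = dbg!(xs.iter().sum::<i32>());
    assert_eq!(total, 6);
}
```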
println debugging is where everyone starts. Some people never graduate to knowing how to use a debugger.
Debugging through log data still has a place, of course. However, trying to do all of your debugging through println is so much harder, even though it feels easier than learning to use a debugger.
I am comfortable using a debugger, but println debugging is easy, fast, and disproportionately effective for most of my debugging in practice.
I reach for a “real” debugger when necessary, but that’s less than 5% of the time.
To be fair, if your code is multithreaded and sensitive to pauses, it becomes harder to debug with a debugger.
Ultimately, if you have a good logging setup and kinda know where the issue is, a quick log message could be faster than debugging if all you want to do is look at a variable value.
That is where OS tracing facilities like DTrace and ETW come into play; their traces can then be loaded into a debugging session.
printf/println debugging works if you wrote the code or have a good idea of where to go.
I frequently find myself debugging large unfamiliar code bases, and typically it’s much easier to stick a breakpoint in and start following where it goes rather than blindly start instrumenting with print statements and hoping that you picked the right code path.
I trained in the cout school of debugging. I can use a debugger, and sometimes do, but it's really hard to use one effectively when you're also dealing with concurrency and network clients. Maybe one day I'll learn how to use one of the time-traveling debuggers, so I can record the problem and then step through the recording to debug it.
I came here to write exactly this... if I was drinking something, I would have spit it everywhere laughing when I read it.
I guess 'many developers' here probably refers to web developers who don't use the debugger, cause it's mostly useless/perpetually broken in JS land ..? I rely heavily on the debugger; can't imagine how people work without one.
When you write async JS code, the debugger essentially adds no value over printing.
> If you’re like many developers and you generally use println for debugging and rarely or never use an actual debugger
You lost me here.
Using a debugger to step through the code is a huge timesaver.
Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Engh... I used to rely on a debugger a lot 25 years ago when I was learning to program, and was extremely good at making it work and using its quirks; but, I was building overly-simplistic software, and it feels so rare of a thing to be useful anymore. The serious code I now find myself actually ever needing to debug, with multiple threads written in multiple languages all linked together--code which is often performance sensitive and even often involves networked services / inter-process communication--just isn't compatible with interactive debugging.
So like, sure: if I have some trivial one-off analysis tool that's running into some issue, I could use a debugger. But even then I'd have to figure out yet another debugging environment for yet another language, and of course surmount the hassle of ensuring that sufficient debugging information is available and that I'm running a build that is somehow not too optimized for debugging and yet also not so slow that I'm gouging my eyes out. I'd rather sit and stare at the code longer than start arguing with a debugger.
Your serious code isn't performance sensitive if it's a distributed monolith like you're describing.
It's just another wannabe big-scale dumpster fire some big-brain architect thought up, thinking he's implementing a platform that's actively being used by millions of users simultaneously, i.e. Google or a few social media sites.
Trace debugging can be inefficient, but it's also highly effective, portable between all languages, and requires no additional tooling. Hard to beat that combo.
> Using a debugger to step through the code is a huge timesaver.
This is too slow and manual a process to be a huge timesaver; you'd need a "time-travel" debugging capability that eliminates these steps to really save time.
> Inserting println statements, compiling, running, inserting more println, and repeating is very inefficient.
That all depends on how long it takes to compile and run.
> If you learn to use a debugger, set breakpoints, and step through code examining values while you go then the 8 seconds spent compiling isn’t an issue.
Meh, it's fine.
You still have to set new breakpoints and re-run your code to trigger the reproduction iteratively if you don't know where anything is wrong.
Clicking "add logging break point here" and adding a print statement is really not very different. In my experience the hard part is know where you want to look. Stepping through all your code line by line and looking at all values every step is not a quick way to do anything. You have to divide and conquer your debugging smartly, like a binary search all over your call sites in the huge tree-walk that is your program.
Stepping through code and adding breakpoints is a spectacular way to figure out where to put log statements. Modern code is so abstract that it’s nigh impossible to figure out what the fuck function even gets called just from looking at code.
I guess it depends on whether you are working with your own code or someone else's. If you work with your own code, you should have a pretty good idea already.
Debuggers are pretty good when you want to understand someone else's code.
Is it really saving time, or are you not thinking enough about what is wrong until you stumble on an answer? I can't answer for you, but I find that the forced wait for a build also forces me to think, and so I find the problem faster. It feels slower, but the clock is the real measure.
It's one click to set the breakpoint in the IDE, or one line if you're using gdb from the command line. I'm not sure how printf debugging could be quicker even if you didn't have to rebuild. Having done both, I'd take the debugger any day.
The important time is thinking time, and debuggers don't help with that. Often they hurt, because it is so seductive to set those breakpoints instead of stopping to think about why.
Sometimes "just thinking harder" works, but often not. A debugger helps you understand what your code is actually doing, while your brain is flawed and makes flawed assumptions. Regardless of who you are, it's unlikely you will be manually evaluating code in your head as accurately as gdb (or whatever debugger you use).
I think a lot of Linux/Mac folks tend to printf debug, while Windows folks tend to use a debugger, and I suspect it is a culture-based choice that is justified post hoc.
However, few things have been better for my code than stepping through anything complex at least once before I move on (I used to almost exclusively use printf debugging).
Step through debuggers and tracers are two different dimensions of debugging, and not directly comparable.
Just ignore that part then? You don't have to stop reading the rest of the (very good) article lol.
There are a lot of different views on debugging with a debugger vs print statements and what works better. This often seems to be based on user preference and familiarity. One thing that I haven't seen mentioned is issues with dependencies: setting up a path dep in Rust, or fetching the code in whatever language you're using for your project, usually takes more time than simply adding some breakpoints in the library code.
Not linking debug info must be some kind of sick joke. What is the point of a debug build without symbols?
Enable them when you need to debug. This is for speeding up "edit-build-run" workflows.
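Concretely, it's a one-line profile setting in Cargo.toml, so turning debug info back on when you actually need the debugger is cheap (a sketch; "line-tables-only" is the middle-ground value if memory serves):

```toml
# Cargo.toml
[profile.dev]
debug = false                  # skip debuginfo for fast edit-build-run loops
# debug = "line-tables-only"   # middle ground: panic backtraces still resolve
```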
It's a lot faster to add a log point in a debugger than to add a print statement and recompile. Especially with cargo check, I really don't see the point of non-debuggable builds (outside of embedded, but the size of debuginfo already makes that a non-starter).
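For those who haven't met log points: in gdb they're the dprintf command, which injects a printf at a source location without recompiling anything (the file, line, and variable name here are made up):

```
(gdb) dprintf src/main.rs:42, "count = %d\n", count
(gdb) run
```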
1. Skipping some optimizations to build faster.
2. Conditionally compiling some code like logging (not sure if matters for typical Rust projects, but for embedded C projects it's typical).
3. Conditionally compiling assertions to catch more bugs (a sketch of 2 and 3 follows below).
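A minimal Rust sketch of points 2 and 3, using only the built-in debug_assertions cfg (no custom feature flags assumed):

```rust
fn checked_divide(a: i32, b: i32) -> i32 {
    // Point 3: compiled and checked only when debug assertions are on;
    // the check vanishes entirely from release builds.
    debug_assert!(b != 0, "divide by zero: {a}/{b}");

    // Point 2: conditionally compiled logging; this statement does not
    // exist in release builds at all, not even as a dead branch.
    #[cfg(debug_assertions)]
    eprintln!("checked_divide({a}, {b})");

    a / b
}

fn main() {
    println!("{}", checked_divide(10, 2));
}
```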
I'm using logs, because the debugger breaks the hardware. Very rarely do I need to reach for a debugger. Even when a hard exception occurs, usually enough info is logged to find out the root cause of the bug.
> because the debugger breaks the hardware
What? Seems like you’re talking about embedded but I’ve done a lot of embedded projects in my time and I’ve never had a debugger that breaks the HW.
Depends what you're working on. Stopped in an unfortunate place? That one element didn't get turned off and burned out. Or the motor didn't stop. Or the crucial interrupts got missed and your state is now reset. Or...
Are there debugging tools specifically for situations like that? Do you just write code to test manually? How do you ensure dev builds don't break stuff like that, even without considering debugging?
You ensure dev builds don't break stuff like that with realtime programming techniques. Dev tools exist and they're usually some combination of platform specific, expensive, buggy, and fragile.
printf and friends are fantastic when applicable. Sometimes, though, even an async print is too costly, or building in any mode except stripped release is impossible, which usually leads to !fun!.
The most useful tool is a full tracing system (basically a stream of executed instructions you can use to trace the code without interrupting it), but unfortunately they're quite expensive and proprietary, and require extra connections to the systems that do support them, so they're not particularly commonly used. Most people just use some kind of home-grown logging/tracing system that tracks the particular state they're interested in, possibly logged into a ring buffer which can be dumped when triggered by some event.
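A toy sketch of that ring-buffer pattern (overwrite-oldest; a real embedded version would use a static byte buffer and avoid heap allocation and formatting entirely):

```rust
struct RingLog {
    slots: Vec<Option<String>>,
    next: usize,
}

impl RingLog {
    fn new(cap: usize) -> Self {
        Self { slots: vec![None; cap], next: 0 }
    }

    fn log(&mut self, msg: impl Into<String>) {
        let cap = self.slots.len();
        self.slots[self.next % cap] = Some(msg.into());
        self.next += 1;
    }

    /// Dump oldest-to-newest, e.g. from a fault handler.
    fn dump(&self) {
        let cap = self.slots.len();
        for i in self.next..self.next + cap {
            if let Some(m) = &self.slots[i % cap] {
                eprintln!("{m}");
            }
        }
    }
}

fn main() {
    let mut log = RingLog::new(4);
    for i in 0..10 {
        log.log(format!("state {i}"));
    }
    log.dump(); // only the last 4 entries survive: state 6..=9
}
```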
On embedded, debuggers almost never work until you get to really expensive ones.
In addition, debuggers tend to obscure the failure because they turn on all the hardware which tends to make bugs go away if they are related to power modes.
One of my "best" debuggers for embedded was putting an interactive interpreter over a serial interface on an interrupt so I could query the state of things when a device woke up even if it was hung--effectively a run time injectable "printf".
Crude, but it could trace down obscure bugs that were rare because the device would stay in the failure mode.
The biggest problem was maintaining the database of code so that we knew exactly what build was on a device. We had to hash and track the universe.
I've worked with many M3s and M4s and some Cypress microchips, and the JTAG debuggers always worked fine as far as I recall. There were some vendors that liked to force you to buy really expensive ancillary HW, but a) there was plenty of OSS that worked fairly well and b) you could pick which vendor you went with.
All of those chips you mentioned will turn on all the units at full power when you connect a debugger to them.
And the OSS stuff never works correctly. I wind up debugging the OSS stuff more than my own hardware. And I've used a LOT of OSS (to the point that I wrote software to use a Beaglebone as my SWD debugger to work around all the idiocies--both commercial and OSS).
Doesn't Rust have overflow checks in debug and skip them in release?
Rust has compile-time overflow checks enabled by default in every profile. Runtime overflow checks are disabled by default in the release profile.
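A quick way to see both behaviors (black_box just stops the compiler from spotting the overflow at compile time and rejecting the program outright):

```rust
use std::hint::black_box;

fn main() {
    let x: i32 = black_box(i32::MAX);
    let y = x + 1;
    // cargo run           -> panics: "attempt to add with overflow"
    // cargo run --release -> prints -2147483648 (wraps; checks off by default)
    println!("{y}");
}
```

And if you want the checks in release too, setting overflow-checks = true under [profile.release] in Cargo.toml turns them back on.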
split-debuginfo = "unpacked" (-gsplit-dwarf for C/C++) is the way. Your tools (debuggers, profilers, etc) probably support it by now.
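As a Cargo profile that looks like this (the values per the Cargo docs are "unpacked", "packed", and "off"; platform support varies):

```toml
# Cargo.toml
[profile.dev]
# Leave debug info in the object files instead of having the linker
# copy it all into the final binary.
split-debuginfo = "unpacked"
```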
Surely it would be better to use a debugger and avoid recompiling your code for the added print statements than to strip out the debug information to decrease build times.
I build wasm as a target and this is sadly not an option, I have to rebuild for each little bit. I will be trying a different linker and see how it goes though!
This post is from February, any interesting changes/improvements since then? Progress on “wild”?
Looks like he's been continuing to work on it.
GitHub repo: https://github.com/davidlattimore/wild
List of his blog posts, a couple of which are about wild: https://davidlattimore.github.io/
Just my lack of experience here, but I'm trying to verify I'm successfully using sold. The mold linker claims it leaves metadata in the .comment section, but does that section exist on Mach-O? And is there an equivalent objdump invocation to the readelf command mold documents?
Good set of tips. Thank you.
It was my pleasure.
thanks
You're welcome
I get downvoted for thanking you. Stupid HN folks :D