Can't wait to ask this one as a Leetcode warmup.
I wonder how well Monte Carlo works with a problem like this (for the online version), where you try all the potential places and run simulations adding new random boxes, similar to the ones added so far, in order to see which option allows for a better outcome. For sure it's slow, yet it should work well, which is interesting per se since it would show once more how you can come up with slow but good solutions just by simulating stuff. But likely this is how heuristics like Skyline were found: by analyzing the properties of the brute-forced best picks.
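A minimal sketch of that idea, assuming a small occupancy-grid bin and a made-up distribution of future box sizes (everything here is illustrative, not the article's algorithm): try every feasible position for the incoming box, then score each position by greedily packing a few random future boxes and averaging the packed area.

```python
import random

W, H = 64, 64  # bin size assumed for the sketch

def fits(grid, x, y, w, h):
    """True if a w x h box can sit with its corner at (x, y) without overlap."""
    if x + w > W or y + h > H:
        return False
    return all(not grid[yy][xx] for yy in range(y, y + h) for xx in range(x, x + w))

def place(grid, x, y, w, h, value=True):
    """Mark (or unmark) the cells covered by the box."""
    for yy in range(y, y + h):
        for xx in range(x, x + w):
            grid[yy][xx] = value

def candidates(grid, w, h):
    """All feasible corner positions (brute force; a real packer would prune)."""
    return [(x, y) for y in range(H) for x in range(W) if fits(grid, x, y, w, h)]

def rollout(grid, n_future=10):
    """Greedily pack random future boxes bottom-left; return the packed area."""
    g = [row[:] for row in grid]
    packed = 0
    for _ in range(n_future):
        w, h = random.randint(2, 16), random.randint(2, 16)
        cands = candidates(g, w, h)
        if not cands:
            continue
        x, y = min(cands, key=lambda p: (p[1], p[0]))  # bottom-left heuristic
        place(g, x, y, w, h)
        packed += w * h
    return packed

def monte_carlo_place(grid, w, h, n_sims=20):
    """Try every feasible position and score it by simulated future packings."""
    best, best_score = None, -1
    for (x, y) in candidates(grid, w, h):
        place(grid, x, y, w, h)
        score = sum(rollout(grid) for _ in range(n_sims)) / n_sims
        place(grid, x, y, w, h, value=False)  # undo the trial placement
        if score > best_score:
            best, best_score = (x, y), score
    return best

grid = [[False] * W for _ in range(H)]
print(monte_carlo_place(grid, 10, 7))  # chosen (x, y) for a 10x7 box, or None
```

As the comment says, this is very slow (every candidate times every rollout times every future box), but it needs no insight about the problem at all.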
AKA "static memory allocation", since the heights of the rectangles can be interpreted as buffer sizes, and their widths as lifetimes.
Most of my PhD's effort has been devoted to beating the SOTA on this. The problem's importance nowadays is owed to deep learning's memory wall.
Nice! Do you have your thesis or any related paper available online?
Nice, I didn’t know stb had a rect packer: https://github.com/nothings/stb/blob/master/stb_rect_pack.h
Fun write up, kudos!
At a glance, this feels like a good case for an exact cover algorithm. Would be neat to see how that compares here.
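For the perfect-packing case (no wasted space) the encoding is fairly direct, so here is a toy sketch, assuming a tiny grid, made-up piece sizes, and a naive set-based Algorithm X rather than dancing links; general packing with unused cells would need extra slack (e.g. optional 1x1 filler pieces).

```python
# Encode tiling a bin with given rectangles as exact cover:
# one column per grid cell plus one column per piece; each row is one placement.
GRID_W, GRID_H = 4, 4
pieces = [(3, 2), (1, 2), (2, 2), (2, 2)]  # areas sum to the grid area

def placements():
    rows = []
    for i, (w, h) in enumerate(pieces):
        for y in range(GRID_H - h + 1):
            for x in range(GRID_W - w + 1):
                cols = frozenset({('piece', i)} | {('cell', xx, yy)
                                 for yy in range(y, y + h)
                                 for xx in range(x, x + w)})
                rows.append((cols, (i, x, y)))
    return rows

def solve(columns, rows, partial):
    """Knuth's Algorithm X over plain sets: cover every column exactly once."""
    if not columns:
        return partial
    # choose the column with the fewest candidate rows (fail fast)
    col = min(columns, key=lambda c: sum(1 for cols, _ in rows if c in cols))
    for cols, label in [r for r in rows if col in r[0]]:
        sub_rows = [r for r in rows if not (r[0] & cols)]
        sol = solve(columns - cols, sub_rows, partial + [label])
        if sol is not None:
            return sol
    return None

all_cols = frozenset({('piece', i) for i in range(len(pieces))} |
                     {('cell', x, y) for x in range(GRID_W) for y in range(GRID_H)})
print(solve(all_cols, placements(), []))  # list of (piece, x, y) placements, or None
```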
Great writeup. Sucks when this gets asked in a coding interview to be solved in 15min :)
This blog post references https://github.com/juj/RectangleBinPack/blob/master/Rectangl... (and implicitly https://github.com/juj/RectangleBinPack/ ) as its primary source.
The README even mentions the guillotine algorithm / method that someone else posted (not the same link, but the same method).
If you're printing randomly sized rectangles and want to cut them apart with a guillotine paper cutter, then the algorithm can be much simpler: https://github.com/trathborne/guillot
Very fun. I wonder if a heap could improve the performance of the "loop through the skyline in order to find the best candidate location" step. (I think it would improve the asymptotic time, if I'm understanding the algorithm correctly, but obviously the real performance is more complex).
With some clever bit twiddling hacks, you might be able to avoid the loop entirely.
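A minimal sketch of the heap idea from above, under a big simplification: the "best candidate" is just the lowest skyline segment wide enough for the rectangle, and a rectangle never spans two segments. The article's heuristic scores candidates more carefully, so this only shows where a log-time lookup could come from.

```python
import heapq

class Skyline:
    def __init__(self, bin_width):
        # each segment: (y, x, width); start with one flat segment at y = 0
        self.heap = [(0, 0, bin_width)]

    def place(self, w, h):
        """Pop segments from lowest to highest until one is wide enough, place
        the rect there, then push back the raised segment and any leftover strip."""
        skipped = []
        while self.heap:
            y, x, width = heapq.heappop(self.heap)
            if width >= w:
                heapq.heappush(self.heap, (y + h, x, w))              # raised part
                if width > w:
                    heapq.heappush(self.heap, (y, x + w, width - w))  # leftover strip
                for seg in skipped:                                    # restore the rest
                    heapq.heappush(self.heap, seg)
                return (x, y)
            skipped.append((y, x, width))
        for seg in skipped:
            heapq.heappush(self.heap, seg)
        return None  # does not fit on any single segment

packer = Skyline(64)
print(packer.place(20, 10), packer.place(30, 5), packer.place(20, 8))
```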
This looks useful for auto-placing parts inside a PCB.
There are all sorts of applications for this. E.g. optimally cutting carpets from rolls of carpet. It is a variant of the 2D knapsack problem, one of the classic operational research problems: https://en.wikipedia.org/wiki/Knapsack_problem
The knapsack problem gets much harder as you increase the dimensions.
Placing parts on a PCB is harder if you have to care about where the components go relative to each other (e.g. because they have to be electrically connected, kept a certain distance from ink markings or drill holes, or because of thermal or interference issues, etc.) rather than just optimizing the space used.
Packing has so many applications, such as optimally packing a truck or shipping container, or packing graphics into a GPU texture.
I had this guy as a prof. https://en.wikipedia.org/wiki/David_A._Klarner
I have never encountered someone so excited about dividing up rectangles and how it relates to combinatorics. Also with such a seething hatred for floating point numbers.
Hell, I needed something like this when working on my Halloween costumes this year!
3D printing build plate optimization, passenger transport, real estate plot division. Let's make this list longer?