The old way to run LEDs was simply to put a resistor in series with them. The resistor value is chosen so that the source voltage, minus the forward voltage drop of the LED, pushes the correct current through it. It was a perfectly good method when LEDs were low-power devices.
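As a rough sketch of that calculation (the 20mA target and the resulting part value here are just illustrative assumptions, not from any particular design), the resistor is sized from Ohm's law across the resistor itself:

```python
def ballast_resistor(v_source, v_forward_total, i_target):
    """Series (ballast) resistor needed to set the LED current, via Ohm's law."""
    return (v_source - v_forward_total) / i_target

# Illustrative values: 3.7V source, one 1.8V LED, 20mA target current.
r = ballast_resistor(3.7, 1.8, 0.020)
print(f"Resistor: {r:.0f} ohms")  # 95 ohms; in practice you'd pick the nearest standard value
```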
You can increase efficiency by putting multiple LEDs in series with the resistor. This lowers the voltage across the resistor, so less energy is wasted for the same current. The problem, though, is that the setup becomes far more sensitive to any drop in the source voltage.
For example, with a 3.7V battery, a resistor, and an LED with a 1.8V forward voltage, the voltage across the resistor would be 3.7 - 1.8 = 1.9V.
If the battery voltage drops by 0.1V, the voltage across the resistor drops to 1.8V. The current, and therefore the brightness, would drop by about 5%.
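A quick check of that figure (assuming the LED's forward voltage stays fixed at 1.8V as the current changes, which is a simplification):

```python
v_battery, v_led = 3.7, 1.8
v_resistor_full = v_battery - v_led          # 1.9V across the resistor
v_resistor_sag  = (v_battery - 0.1) - v_led  # 1.8V after a 0.1V battery sag

# Current is proportional to the resistor voltage, so brightness falls in step with it.
drop = 1 - v_resistor_sag / v_resistor_full
print(f"Brightness drop: {drop:.1%}")  # about 5.3%
```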
But if you used 2 LEDs in series instead of one, the voltage across the resistor would start at 3.7 - 1.8 - 1.8 = 0.1V.
You could still choose the resistor to give the correct current; you'd get twice the light output (two LEDs) while wasting 19 times less energy in the resistor.
BUT if the battery drops by that same 0.1V, there would be no voltage across the resistor, so no current and no light.
Instead of the brightness dropping by about 5%, you'd get nothing - 100% less light!
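Putting the two configurations side by side makes the point. This is only a sketch with the same fixed forward-voltage assumption as above, and the 95 ohm / 5 ohm values are illustrative picks that give roughly 20mA at 3.7V:

```python
def led_current(v_batt, v_forward_total, r_ohms):
    """Approximate current with a fixed forward-voltage LED model; clamped at zero."""
    return max(0.0, (v_batt - v_forward_total) / r_ohms)

# One 1.8V LED with 95 ohms vs two in series with 5 ohms: both ~20mA at 3.7V.
for label, vf, r in [("1 LED, 95 ohm", 1.8, 95.0), ("2 LEDs, 5 ohm", 3.6, 5.0)]:
    i_full = led_current(3.7, vf, r)
    i_sag = led_current(3.6, vf, r)
    print(f"{label}: {i_full * 1000:.1f} mA -> {i_sag * 1000:.1f} mA after a 0.1V sag")
```

The single-LED setup sags from 20.0mA to about 18.9mA, while the two-LED setup goes straight from 20.0mA to zero.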