Excellent question, @AaTu9903, and you've actually identified a genuine error in the article.
Since the LoRA adapter contribution is scaled by α/r, a larger alpha does amplify the adapter's effect, making the fine-tuning more pronounced, not less. That was a typo on my part, and I've now fixed it. I really appreciate you catching it.
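To make the point concrete, here is a minimal NumPy sketch of the LoRA forward pass, `W x + (α/r)·B A x`. The dimensions, weights, and alpha values are illustrative, not from the article; it just shows that increasing alpha proportionally increases the adapter's contribution to the output:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 4                     # hidden size and LoRA rank (illustrative)
W = rng.normal(size=(d, d))     # frozen base weight
A = rng.normal(size=(r, d))     # LoRA down-projection (trainable)
B = rng.normal(size=(d, r))     # LoRA up-projection (trainable)
x = rng.normal(size=(d,))

def lora_forward(x, alpha):
    # Adapter output is scaled by alpha / r before being added to the base.
    return W @ x + (alpha / r) * (B @ (A @ x))

base = W @ x
delta_lo = np.linalg.norm(lora_forward(x, alpha=4) - base)
delta_hi = np.linalg.norm(lora_forward(x, alpha=32) - base)

# Larger alpha => proportionally larger adapter contribution.
assert delta_hi > delta_lo
```

Since the scaling is linear in alpha, going from α=4 to α=32 makes the adapter's contribution exactly 8× larger here, which is why alpha is often described as the adapter's effective learning-strength knob.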