Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts to adapt LLMs to longer context lengths usually require fine-tuning at the target length (Full-length fine-tuning), which incurs intensive training costs.