We consider a stochastic differential game in the context of forward-backward stochastic differential equations, where one player implements an impulse control while the opponent controls the system continuously. Utilizing the notion of "backward semigroups," we first prove the dynamic programming principle (DPP) for a truncated version of the problem in a straightforward manner. A uniform convergence argument then enables us to establish the DPP for the general setting. Our approach avoids technical constraints imposed in previous works on the same problem and, more importantly, allows us to consider impulse costs that depend on the present value of the state process, as well as unbounded coefficients. Using the dynamic programming principle, we deduce that the upper and lower value functions are both viscosity solutions to the same Hamilton-Jacobi-Bellman-Isaacs obstacle problem. By showing uniqueness of solutions to this partial differential inequality, we conclude that the game has a value.

© 2023 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).