This paper considers derivative-free optimization of smooth functions arising in machine learning. Two novel derivative-free methods are proposed for minimizing functions with Lipschitz continuous gradients. Both methods construct gradient approximations from adaptively chosen finite differences. Numerical experiments demonstrate the advantages of the proposed methods over existing alternatives.
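As a concrete illustration (a generic estimator, not the paper's specific rule), a forward finite-difference scheme approximates the gradient componentwise as

\[
  g_h(x) \;=\; \sum_{i=1}^{n} \frac{f(x + h e_i) - f(x)}{h}\, e_i,
\]

where $e_i$ is the $i$-th coordinate vector and $h > 0$ is the finite-difference step. When $\nabla f$ is $L$-Lipschitz, the standard bound $\|g_h(x) - \nabla f(x)\| \le \tfrac{L h \sqrt{n}}{2}$ holds, which is what motivates choosing $h$ adaptively; the particular adaptive update of $h$ used by the proposed methods is not reproduced here.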