
When grading code involves unit-style testing, you may want to use testthat expectation functions to test the user's submitted code. In these cases, to differentiate between expected errors and internal errors that indicate a problem with the grading code itself, gradethis requires that authors wrap assertion-style tests in fail_if_error(). This function catches any error and converts it into a fail() grade. It also makes the error and its message available for use in the message glue string as .error and .error_message, respectively.
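As a minimal sketch of this behavior (the expected result of 4, the .result object, and the pass message mirror the examples below; the custom message text is illustrative), the caught error's message can be surfaced through the glue string:

grade_this({
  fail_if_error(
    message = "Your code produced an error: {.error_message}",
    testthat::expect_equal(.result, 4)
  )
  pass("Good job!")
})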

Usage

fail_if_error(
  expr,
  message = "{.error_message}",
  ...,
  env = parent.frame(),
  hint = TRUE,
  encourage = getOption("gradethis.fail.encourage", FALSE)
)

Arguments

expr

An expression to evaluate whose errors are safe to convert into failing grades with fail().

message

A glue string containing the feedback message to be returned to the user. Additional .error and .error_message objects are made available for use in the message.

...

Additional arguments passed to graded() or additional data to be included in the feedback object.

env

The environment in which to evaluate the glue message. Most users of gradethis will not need to use this argument.

hint

Include a code feedback hint with the failing message? This argument only applies to fail() and fail_if_equal() and the message is added using the default options of give_code_feedback() and maybe_code_feedback(). The default value of hint can be set using gradethis_setup() or the gradethis.fail.hint option.

encourage

Include a random encouraging phrase with random_encouragement()? The default value of encourage can be set using gradethis_setup() or the gradethis.fail.encourage option; see the sketch below.
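For example, a short sketch of setting these defaults globally using the options named above (the values chosen here are illustrative):

# Package-wide defaults for failing grades; gradethis_setup() can set the same options
options(
  gradethis.fail.hint = FALSE,      # omit the code feedback hint from fail() messages
  gradethis.fail.encourage = TRUE   # append a random encouraging phrase
)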

Value

If an error occurs while evaluating expr, the error is returned as a fail() grade. Otherwise, no value is returned.

Examples

# The user is asked to add 2 + 2, but they take a shortcut
ex <- mock_this_exercise("'4'")

# Normally, grading code with an author error returns an internal problem grade
grade_author_mistake <- grade_this({
  if (identical(4)) {
    pass("Great work!")
  }
  fail()
})(ex)
#> #> grade_this({
#> #>     if (identical(4)) {
#> #>         pass("Great work!")
#> #>     }
#> #>     fail()
#> #> })(ex)
#> Error in identical(4): argument "y" is missing, with no default

# This returns a "problem occurred" grade
grade_author_mistake
#> <gradethis_graded: [Neutral]
#>   A problem occurred with the grading code for this exercise.
#> >
# ...that also includes information about the error (not shown to users)
grade_author_mistake$error
#> $message
#> [1] "argument \"y\" is missing, with no default"
#> 
#> $call
#> [1] "identical(4)"
#> 
#> $gradethis_call
#> [1] "grade_this({\n    if (identical(4)) {\n        pass(\"Great work!\")\n    }\n    fail()\n})(ex)"
#> 

# But sometimes we'll want to use unit-testing helper functions where we know
# that an error is indicative of a problem in the user's code
grade_this({
  fail_if_error({
    testthat::expect_length(.result, 1)
    testthat::expect_true(is.numeric(.result))
    testthat::expect_equal(.result, 4)
  })
  pass("Good job!")
})(ex)
#> <gradethis_graded: [Incorrect]
#>   is.numeric(.result) is not TRUE
#> 
#>   `actual`: FALSE
#>   `expected`: TRUE
#> >

# Note that you don't need to reveal the error message to the user
grade_this({
  fail_if_error(
    message = "Your result isn't a single numeric value.",
    {
      testthat::expect_length(.result, 1)
      testthat::expect_true(is.numeric(.result))
      testthat::expect_equal(.result, 4)
    }
  )
  pass("Good job!")
})(ex)
#> <gradethis_graded: [Incorrect]
#>   Your result isn't a single numeric value.
#> >