test(clouddriver): Test Suite Refinement: MonitoredDeployTask and getRetrofitLogMessage() Behaviour #4617

Open
wants to merge 1 commit into base: master
1 change: 1 addition & 0 deletions orca-clouddriver/orca-clouddriver.gradle
@@ -59,6 +59,7 @@ dependencies {
testImplementation("io.strikt:strikt-core")
testImplementation("io.mockk:mockk")
testImplementation("ru.lanwen.wiremock:wiremock-junit5:1.3.1")
testImplementation("org.mockito:mockito-inline")

testCompileOnly("org.projectlombok:lombok")
testAnnotationProcessor("org.projectlombok:lombok")
@@ -16,10 +16,12 @@

package com.netflix.spinnaker.orca.clouddriver.tasks.monitoreddeploy

import com.fasterxml.jackson.databind.ObjectMapper
import com.netflix.spectator.api.NoopRegistry
import com.netflix.spinnaker.config.DeploymentMonitorDefinition
import com.netflix.spinnaker.orca.api.pipeline.models.ExecutionStatus
import com.netflix.spinnaker.orca.api.pipeline.TaskResult
import com.netflix.spinnaker.orca.clouddriver.MortServiceSpec
import com.netflix.spinnaker.orca.deploymentmonitor.DeploymentMonitorService
import com.netflix.spinnaker.orca.deploymentmonitor.models.DeploymentMonitorStageConfig
import com.netflix.spinnaker.orca.deploymentmonitor.models.DeploymentStep
@@ -28,9 +30,13 @@ import com.netflix.spinnaker.orca.deploymentmonitor.models.MonitoredDeployIntern
import com.netflix.spinnaker.orca.pipeline.model.PipelineExecutionImpl
import com.netflix.spinnaker.orca.pipeline.model.StageExecutionImpl
import retrofit.RetrofitError
import retrofit.client.Response
import retrofit.converter.ConversionException
import retrofit.converter.JacksonConverter
import spock.lang.Specification
import com.netflix.spinnaker.orca.deploymentmonitor.DeploymentMonitorServiceProvider
import spock.lang.Unroll
import org.springframework.http.HttpStatus

import java.time.Instant
import java.util.concurrent.TimeUnit
@@ -41,7 +47,7 @@ class EvaluateDeploymentHealthTaskSpec extends Specification {
PipelineExecutionImpl pipe = pipeline {
}

def "should retry retrofit errors"() {
def "should handle retrofit network error and return the task status depending on the scenarios"() {
given:
def monitorServiceStub = Stub(DeploymentMonitorService) {
evaluateHealth(_) >> {
@@ -198,6 +204,202 @@ class EvaluateDeploymentHealthTaskSpec extends Specification {
false | null || ExecutionStatus.FAILED_CONTINUE
}

def "should handle retrofit http error and return the task status depending on the scenarios"() {
Contributor

I'm not sure I see the point of all this added test coverage. I mean, I'm all for more test coverage, but if I'm reading the code correctly, EvaluateDeploymentHealthTask doesn't catch any RetrofitErrors (nor have any catch blocks at all), so I don't see how adding SpinnakerRetrofitErrorHandler to DeploymentMonitorService (e.g. in #4614) could change behavior specifically in EvaluateDeploymentHealthTask.
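For context, registering SpinnakerRetrofitErrorHandler on a Retrofit 1.x client typically looks roughly like the sketch below. This is illustrative only, not the actual change in #4614: the endpoint is a placeholder and the kork import path is from memory.

import com.fasterxml.jackson.databind.ObjectMapper;
import com.netflix.spinnaker.kork.retrofit.exceptions.SpinnakerRetrofitErrorHandler;
import com.netflix.spinnaker.orca.deploymentmonitor.DeploymentMonitorService;
import retrofit.RestAdapter;
import retrofit.converter.JacksonConverter;

class DeploymentMonitorClientSketch {
  // Illustrative wiring only: with the handler registered, a RetrofitError raised by
  // evaluateHealth() is translated into a Spinnaker*Exception before reaching callers
  // such as EvaluateDeploymentHealthTask / MonitoredDeployBaseTask.
  static DeploymentMonitorService create(String baseUrl) {
    return new RestAdapter.Builder()
        .setEndpoint(baseUrl) // placeholder endpoint
        .setConverter(new JacksonConverter(new ObjectMapper()))
        .setErrorHandler(SpinnakerRetrofitErrorHandler.getInstance())
        .build()
        .create(DeploymentMonitorService.class);
  }
}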

Contributor Author

@dbyron-sf - This is exactly what I mentioned in the comments: #4614 (comment)

There is no behavioural change. I added these tests to avoid the confusion we had in the discussion above.
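As a reading aid, the hypothetical helper below restates the outcome matrix these scenarios assert on. It is not the real MonitoredDeployBaseTask logic: only MAX_RETRY_COUNT comes from the actual class, and the method and parameter names are invented for illustration.

import com.netflix.spinnaker.orca.api.pipeline.models.ExecutionStatus;

class RetryOutcomeSketch {
  // Hypothetical restatement of the expectations, assuming the same package as
  // MonitoredDeployBaseTask so MAX_RETRY_COUNT is visible.
  static ExecutionStatus expectedStatus(int retryCount, boolean failOnError, Boolean failOnErrorOverride) {
    if (retryCount < MonitoredDeployBaseTask.MAX_RETRY_COUNT) {
      return ExecutionStatus.RUNNING; // still retrying; deployMonitorHttpRetryCount is incremented
    }
    boolean effectiveFailOnError = failOnErrorOverride != null ? failOnErrorOverride : failOnError;
    return effectiveFailOnError ? ExecutionStatus.TERMINAL : ExecutionStatus.FAILED_CONTINUE;
  }
}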

Contributor

The easiest way to resolve that thread is with some git juggling to move the addition of that one specific test case to a separate commit before the code change in that PR. I suppose these additional tests could be related, but it's hard to see how.

Contributor Author

These additional test cases are similar; the only difference is the type of Spinnaker*Exception involved. However, if these test cases are confusing or irrelevant, I can go ahead and remove them.

Contributor

I'd rather focus on #4614.

Contributor Author

Added a test case for EvaluateDeploymentTask and moved it to a separate commit: d481b84


def converter = new JacksonConverter(new ObjectMapper())

Response response =
new Response(
"/deployment/evaluateHealth",
HttpStatus.BAD_REQUEST.value(),
"bad-request",
Collections.emptyList(),
new MortServiceSpec.MockTypedInput(converter, [
accountName: "account",
description: "simple description",
name: "sg1",
region: "region",
type: "openstack"
]))

given:
def monitorServiceStub = Stub(DeploymentMonitorService) {
evaluateHealth(_) >> {
throw RetrofitError.httpError("https://foo.com/deployment/evaluateHealth", response, converter, null)
}
}

def serviceProviderStub = getServiceProviderStub(monitorServiceStub)

def task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())

MonitoredDeployInternalStageData stageData = new MonitoredDeployInternalStageData()
stageData.deploymentMonitor = new DeploymentMonitorStageConfig()
stageData.deploymentMonitor.id = "LogMonitorId"

def stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [application: pipe.application])
stage.startTime = Instant.now().toEpochMilli()

when: 'we can still retry'
TaskResult result = task.execute(stage)

then: 'should retry'
result.status == ExecutionStatus.RUNNING
result.context.deployMonitorHttpRetryCount == 1

when: 'we ran out of retries'
stage.context.deployMonitorHttpRetryCount = MonitoredDeployBaseTask.MAX_RETRY_COUNT
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL

when: 'we ran out of retries and failOnError = false'
serviceProviderStub = getServiceProviderStub(monitorServiceStub, {DeploymentMonitorDefinition dm -> dm.failOnError = false})
task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())
result = task.execute(stage)

then: 'should return fail_continue'
result.status == ExecutionStatus.FAILED_CONTINUE

when: 'we ran out of retries and failOnError = false but there is a stage override for failOnError=true'
stageData.deploymentMonitor.failOnErrorOverride = true
stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [
application: pipe.application,
deployMonitorHttpRetryCount: MonitoredDeployBaseTask.MAX_RETRY_COUNT
])
stage.startTime = Instant.now().toEpochMilli()
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL
}

def "should handle retrofit conversion error and return the task status depending on the scenarios"() {
def converter = new JacksonConverter(new ObjectMapper())

Response response =
new Response(
"/deployment/evaluateHealth",
HttpStatus.BAD_REQUEST.value(),
"bad-request",
Collections.emptyList(),
new MortServiceSpec.MockTypedInput(converter, [
accountName: "account",
description: "simple description",
name: "sg1",
region: "region",
type: "openstack"
]))

given:
def monitorServiceStub = Stub(DeploymentMonitorService) {
evaluateHealth(_) >> {
throw RetrofitError.conversionError("https://foo.com/deployment/evaluateHealth", response, converter, null, new ConversionException("Failed to parse/convert the error response body"))
}
}

def serviceProviderStub = getServiceProviderStub(monitorServiceStub)

def task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())

MonitoredDeployInternalStageData stageData = new MonitoredDeployInternalStageData()
stageData.deploymentMonitor = new DeploymentMonitorStageConfig()
stageData.deploymentMonitor.id = "LogMonitorId"

def stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [application: pipe.application])
stage.startTime = Instant.now().toEpochMilli()

when: 'we can still retry'
TaskResult result = task.execute(stage)

then: 'should retry'
result.status == ExecutionStatus.RUNNING
result.context.deployMonitorHttpRetryCount == 1

when: 'we ran out of retries'
stage.context.deployMonitorHttpRetryCount = MonitoredDeployBaseTask.MAX_RETRY_COUNT
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL

when: 'we ran out of retries and failOnError = false'
serviceProviderStub = getServiceProviderStub(monitorServiceStub, {DeploymentMonitorDefinition dm -> dm.failOnError = false})
task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())
result = task.execute(stage)

then: 'should return fail_continue'
result.status == ExecutionStatus.FAILED_CONTINUE

when: 'we ran out of retries and failOnError = false but there is a stage override for failOnError=true'
stageData.deploymentMonitor.failOnErrorOverride = true
stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [
application: pipe.application,
deployMonitorHttpRetryCount: MonitoredDeployBaseTask.MAX_RETRY_COUNT
])
stage.startTime = Instant.now().toEpochMilli()
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL
}

def "should handle retrofit unexpected error and return the task status depending on the scenarios"() {

given:
def monitorServiceStub = Stub(DeploymentMonitorService) {
evaluateHealth(_) >> {
throw RetrofitError.unexpectedError("url", new IOException())
}
}

def serviceProviderStub = getServiceProviderStub(monitorServiceStub)

def task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())

MonitoredDeployInternalStageData stageData = new MonitoredDeployInternalStageData()
stageData.deploymentMonitor = new DeploymentMonitorStageConfig()
stageData.deploymentMonitor.id = "LogMonitorId"

def stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [application: pipe.application])
stage.startTime = Instant.now().toEpochMilli()

when: 'we can still retry'
TaskResult result = task.execute(stage)

then: 'should retry'
result.status == ExecutionStatus.RUNNING
result.context.deployMonitorHttpRetryCount == 1

when: 'we ran out of retries'
stage.context.deployMonitorHttpRetryCount = MonitoredDeployBaseTask.MAX_RETRY_COUNT
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL

when: 'we ran out of retries and failOnError = false'
serviceProviderStub = getServiceProviderStub(monitorServiceStub, {DeploymentMonitorDefinition dm -> dm.failOnError = false})
task = new EvaluateDeploymentHealthTask(serviceProviderStub, new NoopRegistry())
result = task.execute(stage)

then: 'should return fail_continue'
result.status == ExecutionStatus.FAILED_CONTINUE

when: 'we ran out of retries and failOnError = false but there is a stage override for failOnError=true'
stageData.deploymentMonitor.failOnErrorOverride = true
stage = new StageExecutionImpl(pipe, "evaluateDeploymentHealth", stageData.toContextMap() + [
application: pipe.application,
deployMonitorHttpRetryCount: MonitoredDeployBaseTask.MAX_RETRY_COUNT
])
stage.startTime = Instant.now().toEpochMilli()
result = task.execute(stage)

then: 'should terminate'
result.status == ExecutionStatus.TERMINAL
}

private getServiceProviderStub(monitorServiceStub) {
return getServiceProviderStub(monitorServiceStub, {})
}
@@ -17,15 +17,20 @@
package com.netflix.spinnaker.orca.clouddriver.tasks.monitoreddeploy;

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.io.CharStreams;
import com.netflix.spectator.api.NoopRegistry;
import com.netflix.spinnaker.config.DeploymentMonitorDefinition;
import com.netflix.spinnaker.orca.deploymentmonitor.DeploymentMonitorServiceProvider;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import org.junit.jupiter.api.BeforeEach;
@@ -168,6 +173,61 @@ void shouldReturnDefaultLogMsgWhenUnexpectedErrorHasOccurred() {
assertThat(logMessageOnUnexpectedError).isEqualTo("<NO RESPONSE>");
}

@Test
void returnsEmptyHttpErrorDetailsWhenExceptionReadingHttpStatus() {

Response response = mock(Response.class);

when(response.getStatus()).thenThrow(IllegalArgumentException.class);

String logMessageOnHttpError = monitoredDeployBaseTask.getRetrofitLogMessage(response);

assertThat(logMessageOnHttpError).isEqualTo("status: \nheaders: \nresponse body: ");
}

@Test
void returnsOnlyStatusWhenExceptionParsingHttpErrorBody() {

Response response = mock(Response.class);

when(response.getStatus()).thenReturn(HttpStatus.BAD_REQUEST.value()); // arbitrary
when(response.getReason()).thenReturn("arbitrary reason");
when(response.getBody()).thenThrow(IllegalArgumentException.class);

String logMessageOnHttpError = monitoredDeployBaseTask.getRetrofitLogMessage(response);

String status = String.format("%d (%s)", response.getStatus(), response.getReason());

assertThat(logMessageOnHttpError)
.isEqualTo(String.format("status: %s\nheaders: \nresponse body: ", status));
}

@Test
void returnsOnlyStatusAndBodyWhenExceptionReadingHttpHeaders() throws IOException {

var converter = new JacksonConverter(objectMapper);
var responseBody = new HashMap<String, String>();

responseBody.put("error", "400 - Bad request, application name cannot be empty");

Response response = mock(Response.class);

when(response.getStatus()).thenReturn(HttpStatus.BAD_REQUEST.value()); // arbitrary
when(response.getReason()).thenReturn("arbitrary reason");
when(response.getBody()).thenReturn(new MockTypedInput(converter, responseBody));
when(response.getHeaders()).thenThrow(IllegalArgumentException.class);

String logMessageOnHttpError = monitoredDeployBaseTask.getRetrofitLogMessage(response);

String status = String.format("%d (%s)", response.getStatus(), response.getReason());
String body =
CharStreams.toString(
new InputStreamReader(response.getBody().in(), StandardCharsets.UTF_8));

assertThat(logMessageOnHttpError)
.isEqualTo(String.format("status: %s\nheaders: \nresponse body: %s", status, body));
}

static class MockTypedInput implements TypedInput {
private final Converter converter;
private final Object body;