
Test Cases

Group 8 Test Cases
WeatherBrain Version 1.1
Name Responsible person
REST API Fredrik Lundström
Machine Learning Casper Kristiansson
IoT Module Ville Vik
React Website Philip Hägg, Fredrik Janetzky, Daniel Chouster

Weather Brain Project - React Website Test Case

Abstract

This document provides a specification of the React Website Test Case, including a defined set of test inputs, execution conditions and expected outputs.

Version History

Date Version Author Description
14/04/2022 1.0 Philip Hägg, Fredrik Janetzky, Daniel Chouster First draft
24/04/2022 1.1 Philip Hägg, Fredrik Janetzky, Daniel Chouster Completed draft
19/05/2022 1.2 Philip Hägg, Fredrik Janetzky, Daniel Chouster Updated the document

An Essential Unified Process Document

1. Introduction

1.1 Document Purpose

The primary objectives of the React Website Test Case are to provide a specification of all aspects of this test case, including:
  • Purpose and scope of the test case

  • Test inputs and outputs

  • Test procedures and constraints

  • Expected outcome and associated evaluation criteria.

1.2 Document Scope

The scope of this document is limited to consideration of:
  • Documentation of this specific test case.

The scope of this document does not include consideration of:

  • Other test cases, except to define the dependencies of this test case

  • Test results relating to test executions that utilize this test case.

1.3 Document Overview

This document contains the following sections:
  • Identification and Coverage

    • Identification - uniquely identifies the Test Case, including a unique identifier, a unique and meaningful name, a purpose description
    • Coverage – defines scope coverage in terms of the scope items tested and the aspects of the scope items that are verified by the test case.
  • Input Specifications – full specification of the test input data

  • Output Specifications – full specification of all expected test output data

  • Constraints – including environment requirements and constraints and the test preconditions and postconditions

  • Test Procedure – the procedure that must be followed to execute the test case, including the required execution control points and observation points

  • Dependencies – dependencies that exist between this test case and other test cases, including predecessor cases that must be executed prior to this test case

  • Outcome – the expected outcome (Pass or Fail) and the criteria for objectively assessing the actual outcome.

  • References – provides full reference details for all documents, white papers and books referenced by this document.

2. Identification and Coverage

2.1 Identification

This section provides headline information that uniquely identifies the Test Case.
Test Case Identifier 1
Test Case Name React website
Test Case Purpose To test the website and verify that it works as expected
2.2 Coverage

The table below specifies the scope coverage for the test case in terms of:
  • Scope Type – type of scope item, such as Use-Case Flow, Supplementary Requirement, Subsystem, Component etc.
  • Scope Item – unique identifier for the scope item
  • Coverage – textual description of the aspects of the scope item that the test case validates.
Scope Type Scope Item Coverage
View Startpage Renders the start page as expected
Component currentWeatherTile Renders the component as expected
Component currentWeatherTile Renders data in the correct position on the website
Component currentWeatherTile Sends error back, no data
Component currentWeatherTile Renders the date in the right place
Component currentWeatherTile Renders the correct symbol
Use-Case Flow fetch Returns a promise of all three API calls
Component currentWeatherTileHolder Renders the component as expected
View aboutus Renders the component as expected
Component weathertile Renders the component as expected
Component weathertile Renders date as expected
Component weathertile Renders temperature in correct position
Component weathertile Renders humidity in the correct position
Component weathertile Renders symbol as expected
Component weathertileHolder Renders the component as expected
Component chart Renders the correct temperatures on the graph.
View statistics Calculates average, min and max values for temperature, humidity and air pressure correctly.
Component SMHIstats Renders the component as expected.
View contactUs Renders the component as expected.
3. Input Specifications

The input data for these tests is fake data, used to verify that the correct data is shown in the right place. There is only a small amount of logic in these components, which makes the testing somewhat superficial.

This section contains a full specification of the test input data.

Scope Item Input Coverage
Startpage None Renders the start page as expected
currentWeatherTile None Renders the component as expected
currentWeatherTile JSON Renders data in the correct position on the website
currentWeatherTile JSON Sends error back, no data
currentWeatherTile JSON Renders the date in the right place
currentWeatherTile JSON Renders the correct symbol
fetch Make API call Returns a promise of all three API calls
currentWeatherTileHolder JSON Renders the component as expected
aboutus None Renders the component as expected
weathertile None Renders the component as expected
weathertile JSON Renders date as expected
weathertile JSON Renders temperature in correct position
weathertile JSON Renders humidity in the correct position
weathertile JSON Renders symbol as expected
weathertileHolder None Renders the component as expected
chart Two objects, each containing a temperature value and a date, just like the objects that the REST API generates. The two objects represent weather data from two consecutive days. The expected result is stored in a separate array that contains the temperatures from the two objects on the first and second index respectively. Renders the correct temperatures, as stored in the objects, on the graph.
statistics Two objects, each containing values for temperature, air pressure and humidity, plus a third object containing all statistics expected to be generated, to simulate the statistics page. Calculates average, min and max values for temperature, humidity and air pressure correctly.
SMHIstats A DOM element as a render target. Renders the component as expected.
contactUs None Renders the component as expected
4. Output Specifications

Fake data is used as the expected outcome and compared to the actual output to verify that the correct data has been generated. Small data samples are created manually so that it is easy to verify that minimum, maximum, average values etc. are correctly calculated.

This section contains a full specification of all expected test output data.

Scope Item Output Coverage
Startpage true Renders the start page as expected
currentWeatherTile true Renders the component as expected
currentWeatherTile Same data as input Renders data in the correct position on the website
currentWeatherTile Error Sends error back, no data
currentWeatherTile Same date as input Renders the date in the right place
currentWeatherTile The branch that the input triggers Renders the correct symbol
fetch Check that the returned value is a promise Returns a promise of all three API calls
currentWeatherTileHolder true Renders the component as expected
aboutus true Renders the component as expected
weathertile true Renders the component as expected
weathertile Same date as input Renders date as expected
weathertile Same temperature as input Renders temperature in correct position
weathertile Same humidity as input Renders humidity in the correct position
weathertile The symbol that the input value should render Renders symbol as expected
weathertileHolder true Renders the component as expected
chart The result of the calculations; the unit test compares the expected result to the actual result. Renders the correct temperatures, as stored in the objects, on the graph.
statistics The result of the calculations; the unit test compares the expected result to the actual result. Calculates average, min and max values for temperature, humidity and air pressure correctly.
SMHIstats true Renders the component as expected.
contactUs true Renders the component as expected.
5. Constraints

This section details any constraints that apply to the execution of the test case, including:
  • Environmental Needs – physical environment requirements and constraints
  • Applicable Releases – releases to which the Test Case should or should not be applied
  • Preconditions – the state that the system must be in prior to the execution of this test case, including preloaded test data and reference data requirements
  • Postconditions – the state that the system will be left in once the test case has executed, including conditions necessary for successor test cases to be executed.
5.1 Environmental Needs

In order to test the components we need a testing environment. For this particular test case we chose to use React Testing Library [1] and Jest [2], which are installed into our project folder hosted on GitHub. The website should be hosted on Azure, where some manual tests, such as routing and design, can also be executed.
5.2 Applicable Releases

A deployment pipeline should be implemented on Azure in order to run the tests before each deployment. This ensures that the website works to its full capacity before a new update is released.
5.3 Preconditions

All tests should be pre-loaded with testing data in the GitHub repository before each release. The preconditions and inputs for these unit tests follow the Arrange Act Assert pattern, as described by JeffGrigg: the unit tests first arrange all necessary preconditions, then act on the object or method under test, and finally assert that the expected result has occurred [3].

Before each test, a container is created and the component under test is attached to it; in most of the tests the component is attached to a “div”. The test is then executed and the assertion is checked.

In some cases, data is created manually and compared to the result to check that the correct data is produced.

5.4 Postconditions

When every test has passed, the website is considered ready for release, which triggers the deploy stage of the pipeline.

After each test the attached component will be unmounted and removed.

6. Test Procedure

This section details the procedure that must be followed to execute the test case, including the required execution steps, control points and observation points.
7. Dependencies

7.1 Predecessors

The table below lists all the Test Cases that must be run prior to this test case and specifies the nature of the dependency that exists between this test case and each predecessor test case.
Test Case ID Test Case Name Dependency
none none none
There are no predecessor test cases.
7.2 Successors

The table below lists all the Test Cases that must be run after this test case and specifies the nature of the dependency that exists between this test case and each successor test case.
Test Case ID Test Case Name Dependency
none none none
There are no successor test cases.
8. Outcome

8.1 Expected Outcome

After all tests have been executed, we expect every test to pass its assertions.

8.2 Evaluation Criteria

When a test fails, the site is not deployed and we need to investigate why the test failed: is the fault in the test itself or in the function under test?

Appendix A - References

Use this section to give full reference details for all documents, white papers and books referenced by this document.

[1] Kent C. Dodds and contributors, “React Testing Library builds on top of DOM Testing Library by adding APIs for working with React components.” React Testing Library https://testing-library.com/docs/react-testing-library/intro/ (Accessed Apr 24, 2022)

[2] Facebook, “Jest is a delightful JavaScript Testing Framework with a focus on simplicity”, Jest, https://jestjs.io/ (Accessed Apr 24, 2022)

[3] JeffGrigg, “Arrange Act Assert” http://wiki.c2.com/?ArrangeActAssert (Accessed May 3, 2022)

Weather Brain Project - Machine Learning Test Case

Abstract

This document provides a specification of the Machine Learning Test Case, including a defined set of test inputs, execution conditions and expected outputs.

Version History

Date Version Author Description
12/04/2022 1.0 Casper Kristiansson First draft
25/05/2022 1.1 Casper Kristiansson Fixed formatting and updated tables
1. Introduction

1.1 Document Purpose

The primary objectives of the Machine Learning Test Case are to provide a specification of all aspects of this test case, including:
  • Purpose and scope of the test case

  • Test inputs and outputs

  • Test procedures and constraints

  • Expected outcome and associated evaluation criteria.

1.2 Document Scope

The scope of this document is limited to consideration of:
  • Documentation of this specific test case.

The scope of this document does not include consideration of:

  • Other test cases, except to define the dependencies of this test case

  • Test results relating to test executions that utilize this test case.

1.3 Document Overview

This document contains the following sections:
  • Identification and Coverage

    • Identification - uniquely identifies the Test Case, including a unique identifier, a unique and meaningful name, a purpose description
    • Coverage – defines scope coverage in terms of the scope items tested and the aspects of the scope items that are verified by the test case.
  • Input Specifications – full specification of the test input data

  • Output Specifications – full specification of all expected test output data

  • Constraints – including environment requirements and constraints and the test preconditions and postconditions

  • Test Procedure – the procedure that must be followed to execute the test case, including the required execution control points and observation points

  • Dependencies – dependencies that exist between this test case and other test cases, including predecessor cases that must be executed prior to this test case

  • Outcome – the expected outcome (Pass or Fail) and the criteria for objectively assessing the actual outcome.

  • References – provides full reference details for all documents, white papers and books referenced by this document.

2. Identification and Coverage

2.1 Identification

This section provides headline information that uniquely identifies the Test Case.
Test Case Identifier 2
Test Case Name Predictions
Test Case Purpose To test and make sure that all of the different components work as they should. This includes the components for creating the predictions and the components for establishing a connection to the database and uploading data.
2.2 Coverage

The table below specifies the scope coverage for the test case in terms of:
  • Scope Type – type of scope item, such as Use-Case Flow, Supplementary Requirement, Subsystem, Component etc.
  • Scope Item – unique identifier for the scope item
  • Coverage – textual description of the aspects of the scope item that the test case validates.
Scope Type Scope Item Coverage
Component LoadCSV Test if a function is able to load in a specific CSV file which contains weather data. The data is then stored in a weather object.
Component GetLatestWeather Test if a function is able to get the latest weather data from Microsoft Azure database
Component CompareWeather Test if a function is able to detect which weather entries need to be added to the local data
Component SaveCSV Test if a function is able to save a weather object to a CSV file.
Component load_model Test if the class can load a specific prediction model
Component load_old_data Test if the class can load in the previous weather data
Component get_prediction Test if a function is able to produce a prediction
Component time_series_forecasting Test if a function is able to split data into the correct format (train, validation and test dataframe, time_series).
Component auto_regression Test if a model is able to predict data
Component multi_recurrent Test if a model is able to predict data
3. Input Specifications

This section contains a full specification of the test input data.
Scope Item Input
LoadCSV Path to file (string)
GetLatestWeather No input
CompareWeather pastWeather (custom data object), compareWeather (custom data object)
SaveCSV Weather (custom data object)
load_model Path to model (string)
load_old_data Path to file (string)
get_prediction No input
time_series_forecasting Weather (pandas dataframe)
auto_regression Window (custom object), num_features (input width, int)
multi_recurrent Window (custom object), num_features (input width, int)
4. Output Specifications

This section contains a full specification of all expected test output data.
Scope Item Output
LoadCSV pastWeather (custom data object)
GetLatestWeather latestWeather (custom data object)
CompareWeather weatherDifference (custom data object)
SaveCSV No output (writes to file)
load_model No output (writes to local class object)
load_old_data No output (writes to local class object)
get_prediction Temperature (float), humidity (float), air pressure (float)
time_series_forecasting Train set (dataframe), Validation set (dataframe), Test set (dataframe), Num Features (int), Column Indices (int), Time Series (datetime dataframe)
auto_regression Prediction success rate validation set (float), Prediction success rate test set (float)
multi_recurrent Prediction success rate validation set (float), Prediction success rate test set (float)
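To make the time_series_forecasting output concrete, here is a minimal sketch of such a split over a pandas dataframe; the 70/20/10 train/validation/test ratio is an assumption, since this document does not specify the exact ratios used by the project:

```python
import pandas as pd


def time_series_split(df: pd.DataFrame):
    """Split weather data into train, validation and test dataframes."""
    n = len(df)
    train_df = df[: int(n * 0.7)]            # first 70% for training
    val_df = df[int(n * 0.7): int(n * 0.9)]  # next 20% for validation
    test_df = df[int(n * 0.9):]              # final 10% for testing

    num_features = df.shape[1]
    column_indices = {name: i for i, name in enumerate(df.columns)}
    return train_df, val_df, test_df, num_features, column_indices
```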
5. Constraints

This section details any constraints that apply to the execution of the test case, including:
  • Environmental Needs – physical environment requirements and constraints
  • Applicable Releases – releases to which the Test Case should or should not be applied
  • Preconditions – the state that the system must be in prior to the execution of this test case, including preloaded test data and reference data requirements
  • Postconditions – the state that the system will be left in once the test case has executed, including conditions necessary for successor test cases to be executed.
5.1 Environmental Needs

The tests have no physical environment requirements or constraints. They do, however, require the Microsoft Azure database to be online, the local data to exist and the prediction model to exist.
5.2 Applicable Releases

Because these tests only cover the back-end part of the code, no deployment pipeline has been set up. In most cases a deployment pipeline, in which specific tests are run before a new version of the program is released, is important.
5.3 Preconditions

Nearly every test mentioned above has preconditions that must be fulfilled before the method under test is exercised. For example, in order to test GetLatestWeather, the test needs to have created a new DownloadCurrentWeather object and initiated a new database connection. Other tests, like get_prediction, need the model and the past weather data to be loaded beforehand.
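As an illustration, a minimal sketch of such a precondition setup with Python's unittest; the module name and the ConnectDatabase method are assumptions, not the project's actual API:

```python
import unittest

# Hypothetical import; the real module layout may differ.
from download_current_weather import DownloadCurrentWeather


class TestGetLatestWeather(unittest.TestCase):
    def setUp(self):
        # Arrange: create the downloader object and initiate a new
        # database connection before the method under test is exercised.
        self.downloader = DownloadCurrentWeather()
        self.downloader.ConnectDatabase()  # assumed method name

    def test_returns_latest_weather(self):
        # Act: fetch the latest weather data from the Azure database.
        latest = self.downloader.GetLatestWeather()
        # Assert: a weather object was returned.
        self.assertIsNotNone(latest)


if __name__ == "__main__":
    unittest.main()
```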
5.4 Postconditions

All of the tests have their own postconditions: each test is set up so that, once it is done, it resets the system back to its initial state. Take, for example, the test that writes a weather object to a CSV file. Before the function under test is run, the test reads the current contents of the CSV file into an object; it then tests the function and finally writes the initial CSV data back to the file.
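A sketch of that backup-and-restore pattern, again with hypothetical module and path names:

```python
import unittest

# Hypothetical import and path; the real names may differ.
from weather_io import LoadCSV, SaveCSV

CSV_PATH = "data/weather.csv"


class TestSaveCSV(unittest.TestCase):
    def setUp(self):
        # Back up the current contents of the CSV file so the system
        # can be reset to its initial state afterwards.
        self.original_weather = LoadCSV(CSV_PATH)

    def test_save_csv_runs_without_error(self):
        # SaveCSV returns nothing, so the test only checks that the
        # call completes without raising an error.
        SaveCSV(self.original_weather)

    def tearDown(self):
        # Postcondition: write the initial data back to the file.
        SaveCSV(self.original_weather)
```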
6. Test Procedure

This section details the procedure that must be followed to execute the test case, including the required execution steps, control points and observation points.

Most tests use assertions to check that a function's output is correct. In cases where a function does not return a value, the test instead simply checks that the function can be run without raising an error. SaveCSV is an example of a test with this structure.
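An assertion-style test for get_prediction might look roughly like the sketch below; the Prediction class name and the file paths are assumptions based on the components listed above:

```python
import unittest

# Hypothetical import; the real class and module names may differ.
from prediction import Prediction


class TestGetPrediction(unittest.TestCase):
    def test_prediction_returns_three_floats(self):
        predictor = Prediction()
        predictor.load_model("models/prediction_model")  # assumed path
        predictor.load_old_data("data/weather.csv")      # assumed path

        temperature, humidity, air_pressure = predictor.get_prediction()

        # Assertion-style checks on the three predicted values.
        self.assertIsInstance(temperature, float)
        self.assertIsInstance(humidity, float)
        self.assertIsInstance(air_pressure, float)
```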

7. Dependencies

7.1 Predecessors

The table below lists all the Test Cases that must be run prior to this test case and specifies the nature of the dependency that exists between this test case and each predecessor test case.
Test Case ID Test Case Name Dependency
None None None
No test requires any other test to be run before it.
7.2 Successors

The table below lists all the Test Cases that must be run after this test case and specifies the nature of the dependency that exists between this test case and each successor test case.
Test Case ID Test Case Name Dependency
None None None
No test requires any other test to be run after it.
8. Outcome

8.1 Expected Outcome

The expected outcome of running the tests is that all assertions hold and that no method raises an error during execution.

8.2 Evaluation Criteria

If a test fails, the developer uses the information logged by the test framework to figure out what is wrong. Because the tests are run manually, it is straightforward to go directly to the corresponding methods under test and fix them.


Weather Brain Project - Rest API / Database Test Case

Abstract

This document provides a specification of the REST API Test Case, including a defined set of test inputs, execution conditions and expected outputs.

Version History

Date Version Author Description
20/04/2022 1.1 Fredrik Lundström Described all tests
25/05/2022 1.2 Fredrik Lundström Made some changes
1. Introduction

1.1 Document Purpose

The primary objectives of the REST API Test Case are to provide a specification of all aspects of this test case, including:
  • Purpose and scope of the test case

  • Test inputs and outputs

  • Test procedures and constraints

  • Expected outcome and associated evaluation criteria.

1.2 Document Scope

The scope of this document is limited to consideration of:
  • Documentation of this specific test case.

The scope of this document does not include consideration of:

  • Other test cases, except to define the dependencies of this test case

  • Test results relating to test executions that utilize this test case.

1.3 Document Overview

This document contains the following sections:
  • Identification and Coverage

    • Identification - uniquely identifies the Test Case, including a unique identifier, a unique and meaningful name, a purpose description
    • Coverage – defines scope coverage in terms of the scope items tested and the aspects of the scope items that are verified by the test case.
  • Input Specifications – full specification of the test input data

  • Output Specifications – full specification of all expected test output data

  • Constraints – including environment requirements and constraints and the test preconditions and postconditions

  • Test Procedure – the procedure that must be followed to execute the test case, including the required execution control points and observation points

  • Dependencies – dependencies that exist between this test case and other test cases, including predecessor cases that must be executed prior to this test case

  • Outcome – the expected outcome (Pass or Fail) and the criteria for objectively assessing the actual outcome.

  • References – provides full reference details for all documents, white papers and books referenced by this document.

2. Identification and Coverage

2.1 Identification

This section provides headline information that uniquely identifies the Test Case.
Test Case Identifier 3
Test Case Name REST API Test
Test Case Purpose Test that the REST API returns correct values from the database.
2.2 Coverage

The table below specifies the scope coverage for the test case in terms of:
  • Scope Type – type of scope item, such as Use-Case Flow, Supplementary Requirement, Subsystem, Component etc.
  • Scope Item – unique identifier for the scope item
  • Coverage – textual description of the aspects of the scope item that the test case validates.
Scope Type Scope Item Coverage
Component testControllerGetCurrentWeather Test that the Controller layer returns the current weather correctly.
Component testControllerGet7DaysAhead Test that the Controller layer returns the forecast weather correctly.
Component testDay Test that the day model is constructed correctly.
Component get7DaysAhead Test that the Integration layer returns the forecast weather correctly.
Component getCurrentWeather Test that the Integration layer returns the current weather correctly.
3. Input Specifications

The input data for testing the Controller layer and the Integration layer is taken from the WeatherBrain database using SQL queries. The input data for testing the Model layer consists of expected values chosen by the tester.
4. Output Specifications

The test output is shown in the VS Code Testing window [2], where the tester can see whether each test passes or fails. If a test fails, the error is also shown for debugging. When testing the Controller and Integration layers, the expected outputs are day model objects. When testing the Model layer, the expected outputs are the day objects' values.
5. Constraints

This section details any constraints that apply to the execution of the test case, including:
  • Environmental Needs – physical environment requirements and constraints
  • Applicable Releases – releases to which the Test Case should or should not be applied
  • Preconditions – the state that the system must be in prior to the execution of this test case, including preloaded test data and reference data requirements
  • Postconditions – the state that the system will be left in once the test case has executed, including conditions necessary for successor test cases to be executed.
5.1 Environmental Needs

There are no physical environment requirements or constraints for performing the tests of the REST API application.
5.2 Applicable Releases

The Test Case should always be applied before deployment. Since this is a back-end application that is not part of the deployment pipeline, it is tested by the tester before each manual deployment.
5.3 Preconditions

The system does not have to be in any particular state for these tests to be performed. The only condition required is that the database is running, so that a connection can be made.
5.4 Postconditions

Since the REST API application does not alter the database in any way, it leaves the system in the same state it began in.
6. Test Procedure

The test procedure is simple, since the tests are written with JUnit [1] and run in the VS Code Testing window [2]. JUnit uses assertions to assert that certain values or objects are equal [1]. The tests also verify that the functions involved can be run without throwing errors [1].
7. Dependencies

7.1 Predecessors

The table below lists all the Test Cases that must be run prior to this test case and specifies the nature of the dependency that exists between this test case and each predecessor test case.
Test Case ID Test Case Name Dependency
None None None
7.2 Successors

The table below lists all the Test Cases that must be run after this test case and specifies the nature of the dependency that exists between this test case and each successor test case.
Test Case ID Test Case Name Dependency
None None None
8. Outcome

8.1 Expected Outcome

The expected outcome of these tests is that no test fails.

8.2 Evaluation Criteria

When using JUnit [1] and VS Code [2], each test is shown either red (failed) or green (passed). When a test fails, an error message is shown so that the tester can use that information to debug what went wrong.

Appendix A - References

[1] JUnit, https://junit.org/junit5/ (Accessed May 25, 2022)

[2] Visual Studio Code, “Java Testing in Visual Studio Code”, https://code.visualstudio.com/docs/java/java-testing (Accessed May 25, 2022)

Weather Brain Project - IoT Module Test Case

Abstract

This document provides a specification of the IoT Module Test Case, including a defined set of test inputs, execution conditions and expected outputs.

Version History

Date Version Author Description
20/04/2022 1.0 Ville Vik
19/05/2022 1.1 Ville Vik
1. Introduction

1.1 Document Purpose

The primary objectives of the IoT Module Test Case are to provide a specification of all aspects of this test case, including:
  • Purpose and scope of the test case

  • Test inputs and outputs

  • Test procedures and constraints

  • Expected outcome and associated evaluation criteria.

1.2 Document Scope

The scope of this document is limited to consideration of:
  • Documentation of this specific test case.

The scope of this document does not include consideration of:

  • Other test cases, except to define the dependencies of this test case

  • Test results relating to test executions that utilize this test case.

1.3 Document Overview

This document contains the following sections:
  • Identification and Coverage

    • Identification - uniquely identifies the Test Case, including a unique identifier, a unique and meaningful name, a purpose description
    • Coverage – defines scope coverage in terms of the scope items tested and the aspects of the scope items that are verified by the test case.
  • Input Specifications – full specification of the test input data

  • Output Specifications – full specification of all expected test output data

  • Constraints – including environment requirements and constraints and the test preconditions and postconditions

  • Test Procedure – the procedure that must be followed to execute the test case, including the required execution control points and observation points

  • Dependencies – dependencies that exist between this test case and other test cases, including predecessor cases that must be executed prior to this test case

  • Outcome – the expected outcome (Pass or Fail) and the criteria for objectively assessing the actual outcome.

  • References – provides full reference details for all documents, white papers and books referenced by this document.

2. Identification and Coverage

2.1 Identification

This section provides headline information that uniquely identifies the Test Case.
Test Case Identifier 4
Test Case Name IoT module
Test Case Purpose Make sure that the Azure database gets valid data from the IoT device
2.2 Coverage

The table below specifies the scope coverage for the test case in terms of:
  • Scope Type – type of scope item, such as Use-Case Flow, Supplementary Requirement, Subsystem, Component etc.
  • Scope Item – unique identifier for the scope item
  • Coverage – textual description of the aspects of the scope item that the test case validates.
Scope Type Scope Item Coverage
Connection Azure connection A connection is made between the device and Azure
Connection I2C sensor connection The sensor is recognized over I2C by the device
Sensor reading sensor The sensor picks up weather data as expected
Format format The device successfully sends data to Azure in the right format
Location location The physical positioning of the device is good and consistent
Pipeline Stream Analytics Job The CSV data is turned into SQL for the database
3. Input Specifications

The input data for the sensor reading test, the format test and the location test are the readings that the sensor picks up. For the connection tests, Azure provides a connection string for the IoT Hub, and for connecting to the sensor the script needs the I2C address.
4. Output Specifications

When all tests are done, the output should be data in CSV format that is ready for the pipeline to the database. The output can be monitored in either the Stream Analytics job or the Visual Studio Code extension for Azure IoT Hub. For the Stream Analytics job, the data can be reviewed in the SQL database.
5. Constraints

5.1 Environmental Needs

When testing that the data is good enough to make predictions on, the device needs a location to be placed at, preferably one with perfect conditions. The Raspberry Pi also needs to work properly with respect to power, internet and the connection to the sensor.
5.2 Applicable Releases

This test case applies from the first release in which current data is available in the product.
5.3 Preconditions

To properly test the connection to Azure and send data to the cloud, the device is required to have an internet connection. The data has to be read from the same location from the moment the device starts sending data to the database all the way to the end.
5.4 Postconditions

The script runs at all times, picking up data once every second and sending data to Azure once an hour.
6. Test Procedure

The first step, once the Raspberry Pi is up and running, has I2C open for reading, and has the sensor connected to the GPIO ports, is to check that the device can recognize the sensor on the right I2C address. This is done either through a terminal command or via the sensor library in Python, which has the possible addresses as constants (for example I2C_ADDR_PRIMARY).
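A minimal sketch of that check; the Pimoroni bme680 library is an assumption here, chosen because it exposes the I2C_ADDR_PRIMARY constant mentioned above:

```python
# Terminal alternative: `i2cdetect -y 1` lists all devices on I2C bus 1.
import bme680

try:
    # Probe the primary I2C address first, then fall back to the secondary.
    sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)
except (RuntimeError, IOError):
    sensor = bme680.BME680(bme680.I2C_ADDR_SECONDARY)

print("Sensor recognized on the I2C bus")
```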

Once we can initialize an instance of the sensor in Python, we can test the data coming from the sensor by printing it to the terminal. When we have verified that the data is reasonably close to reality, we can move on.
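Reading and printing the sensor data could then look like this, under the same library assumption:

```python
import time

import bme680

sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)

# Print one reading per second so the values can be compared to reality.
while True:
    if sensor.get_sensor_data():
        print(f"{sensor.data.temperature:.1f} C, "
              f"{sensor.data.humidity:.1f} %RH, "
              f"{sensor.data.pressure:.1f} hPa")
    time.sleep(1)
```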

The next step is initializing a connection to the Azure IoT Hub, which is done through the Azure IoT device library. For this test we need the connection string provided by the IoT Hub. To monitor the result we can wrap the call in a try/except statement in the script, or simply check the terminal for any error returned by the library function. Once a connection is established, we can monitor the endpoints of the IoT Hub via the Stream Analytics job or the VS Code Azure extension. Here we can also verify that the messages are in the right format, CSV. Once all of these steps are verified, the IoT device works as intended; we can then place it in its final position and boot it up again for the final release, letting the Stream Analytics job move the data to the database.
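A sketch of the connection test using the azure-iot-device library; the connection string is a placeholder, and the sample CSV row only illustrates the message shape:

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Device-specific connection string provided by the Azure IoT Hub.
CONNECTION_STRING = "HostName=...;DeviceId=...;SharedAccessKey=..."

try:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    # Send one CSV-formatted message so the endpoint can be monitored in
    # the Stream Analytics job or the VS Code Azure extension.
    client.send_message(Message("2022-05-25 12:00:00,21.3,45.2,1013.2"))
    print("Connection and message send succeeded")
    client.shutdown()
except Exception as err:
    print(f"IoT Hub test failed: {err}")
```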

Moreover, the Azure modules have built-in functions for testing the connections, as well as for testing the query that the Stream Analytics job uses to transfer the data.

7. Dependencies

7.1 Predecessors

The table below lists all the Test Cases that must be run prior to this test case and specifies the nature of the dependency that exists between this test case and each predecessor test case.
Test Case ID Test Case Name Dependency
None None None
7.2 Successors

The table below lists all the Test Cases that must be run after this test case and specifies the nature of the dependency that exists between this test case and each successor test case.
Test Case ID Test Case Name Dependency
None None None
8. Outcome

8.1 Expected Outcome

All tests succeed and weather data is present in the Azure SQL database.

8.2 Evaluation Criteria

When a test fails, we need to investigate why: is the test wrongly written, or is there another error to search for? No data is provided to the SQL database while a test fails.

