Assignment #2: SysTick and Mixed Interrupts

📚 Assignment 2 CMPE2250: SysTick and Mixed Interrupts 👋

📋 Overview

  • In this ICA you will explore the SysTick interrupt and you will characterize the behaviour of a program running more than one interrupt.

  • Begin by creating a new project for the STM32G0B1RET6 Nucleo board, following the steps and best practices covered in class.

  • Include your library support for GPIO, Timer, USART, and clocks.

  • Answers and findings will be recorded to the appropriate markdown document in your ICA folder in your repo. Copy the questions from this document into your markdown and add your responses below the questions. Formatting also counts.

1️⃣ Part A - Speedy Little SysTick

  • In the one-time initialization section of your code, bring the clock up to 40[MHz], initialize USART2 for operation at 9600 BAUD, and configure PD9 for scope monitoring.

  • Configure the main loop to do nothing (dead loop).

  • Print a hello message to the UART terminal (this will be enhanced in subsequent steps).

  • After the one-time initializations, but prior to entering the main loop, configure the SysTick interrupt for 1[ms] intervals. In the ISR, toggle the scope pin.
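The setup above can be sketched in portable C. This is a minimal sketch, not your library's API: the reload arithmetic assumes a 40[MHz] core clock (40,000,000 / 1000 − 1 = 39,999), and the scope pin is modeled with a plain variable since the actual GPIO register access depends on your library.

```c
#include <stdint.h>

/* Reload value for a 1 ms tick: one interrupt every clk_hz/1000 cycles.
   SysTick counts from LOAD down to 0 inclusive, hence the -1. */
uint32_t systick_reload_for_1ms(uint32_t clk_hz)
{
    return (clk_hz / 1000u) - 1u;
}

/* Stand-in for the scope pin; on hardware this would be a GPIO output bit. */
static volatile uint32_t pin_state;

/* SysTick ISR: fires every 1 ms and toggles the scope pin, producing a
   2 ms period -> 500 Hz square wave. */
void SysTick_Handler(void)
{
    pin_state ^= 1u;   /* on hardware: toggle PD9 */
}
```

On the real target, the CMSIS call `SysTick_Config(SystemCoreClock / 1000)` performs the same configuration (it applies the −1 to the reload value internally).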

Part A - Analysis

  • Validate that you are seeing a 500[Hz] waveform on your scope when probing PD9. If you are not seeing anything, or you are seeing the wrong frequency, go back and check your code.

  • What effect would adding a __WFI() call in the main loop have on the output waveform, and what explanation can you provide for this?

2️⃣ Part B - Counting Seconds

  • Alter the ISR for SysTick to modify two global, volatile variables. One variable will count 1ms intervals (every SysTick event), and one variable will count full seconds (every 1000[ms]). Limit the ms counter to the range 0-999 - it will roll over at 1s boundaries.
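The counting logic above can be sketched as follows. Variable names are illustrative, not prescribed; the scope-pin toggle from Part A is omitted for brevity.

```c
#include <stdint.h>

/* Shared with main(): volatile because they are modified in interrupt
   context and read by the main loop. */
volatile uint32_t g_ms;       /* 0..999, rolls over each second */
volatile uint32_t g_seconds;  /* free-running elapsed seconds   */

/* SysTick ISR, entered every 1 ms. */
void SysTick_Handler(void)
{
    if (++g_ms >= 1000u)   /* 1 s boundary reached */
    {
        g_ms = 0u;
        ++g_seconds;
    }
}
```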

  • Modify the main loop so that the elapsed time is displayed in the UART terminal in the format DDDDD HH:MM:SS. The time will be shown at the same position in the terminal, overwriting itself (use Tera Term or PuTTY), and will only be re-rendered when the seconds count differs from the last completed rendering.
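One way to normalize a seconds count into DDDDD HH:MM:SS is repeated division by the unit sizes. The function name below is hypothetical, and the buffer size assumes the day count stays under 100000:

```c
#include <stdint.h>
#include <stdio.h>

/* Render an elapsed-seconds count as "DDDDD HH:MM:SS" into buf.
   buf must hold at least 15 bytes (14 characters + NUL). */
void format_elapsed(char *buf, size_t len, uint32_t total_s)
{
    uint32_t days = total_s / 86400u;           /* whole days        */
    uint32_t hh   = (total_s % 86400u) / 3600u; /* hours within day  */
    uint32_t mm   = (total_s % 3600u) / 60u;    /* minutes within hr */
    uint32_t ss   = total_s % 60u;              /* leftover seconds  */
    snprintf(buf, len, "%05lu %02lu:%02lu:%02lu",
             (unsigned long)days, (unsigned long)hh,
             (unsigned long)mm, (unsigned long)ss);
}
```

To overwrite in place, prefix the transmitted string with a carriage return (`"\r"`), which returns the cursor to the start of the line in Tera Term/PuTTY without advancing to a new one.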

Part B - Analysis

  • Validate that your running program is producing a fully normalized elapsed time count in the terminal.

  • If you permit the program to run for 1 hour, how accurate is the elapsed time relative to a trusted stopwatch source? (Do this at home!)

  • Because the ms counting variable is not used by the main program, it does not need to be global. Global variables can increase risk in your program, as they can be modified by any function in the program. Could the variable use the static storage class, and be limited to the scope of the ISR? We will need to use the ms counter in subsequent parts, so if you experiment with this, ensure your code is reverted for the next parts.

  • The code that limits redundant rendering may prevent the time from being rendered initially (meaning we don’t see the time until 1[s] has elapsed). What can you do to ensure that the time is rendered immediately on startup, without duplicating the rendering code?

  • Did you use AI to figure out how to display normalized time from a single variable source (from a seconds count)? If you did, did you include a citation?

3️⃣ Part C - Performance Measurement

  • Add the code necessary to display the amount of time it takes to render the time string to the terminal. This can be done by taking a snapshot of the global ms count just prior to preparing the output string buffer, then taking a snapshot after the string is sent to the terminal. The difference between these snapshots is the transmission time in [ms] (ignoring buffering and other hardware effects).
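Because the ms counter rolls over at 1000, the two snapshots cannot simply be subtracted; one hedged sketch is to take the difference modulo 1000, which is valid as long as the measured interval is shorter than one second:

```c
#include <stdint.h>

/* Milliseconds elapsed between two snapshots of a 0..999 rolling counter.
   Correct only while the measured interval is under 1000 ms. */
uint32_t elapsed_ms(uint32_t start, uint32_t end)
{
    return (end + 1000u - start) % 1000u;
}
```

Usage: snapshot the global ms count into `start` before building the string, snapshot it into `end` after transmission completes, then report `elapsed_ms(start, end)`.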

  • Rendering the string that displays this measurement will itself take some time, but we’ll ignore that.

  • The rendering time should be displayed on a separate line, and should also overwrite itself at the same time the other rendering is performed.

You may include a __WFI() instruction in your main loop to throttle redundant instruction execution.

Part C - Analysis

  • What is the approximate average measurement that you have made? At 9600 BAUD it takes ~1ms to transmit a character, so how does your measurement compare to your expectations?

  • In order to make this measurement, the variable that keeps track of [ms] must change between snapshots. How is this possible, when your main loop is a continuous procedural block?

  • Does adding/removing the __WFI() instruction have any effect on the output or performance of the program? What unobservable effect does including it provide?