Tuesday, August 16, 2022

Automating workflows for AWS IoT Greengrass V2 components


The AWS IoT Greengrass V2 Development Kit Command-Line Interface (GDK CLI) was introduced at AWS re:Invent 2021. With the GDK CLI you can easily create AWS IoT Greengrass V2 components, flexibly define recipes, and publish those components to AWS IoT Greengrass V2. However, every time there is a change to an AWS IoT Greengrass V2 component recipe, you typically have to provision it manually. For example, each new version of the component must be re-built and re-published, resulting in redundant tasks. Moreover, if the component is part of an automated workflow, the re-building and re-publishing task becomes an obstacle to overall development efforts.

To overcome these challenges, you can create an automated workflow using AWS CodePipeline together with AWS CodeCommit and AWS CodeBuild. The automated workflow builds and publishes the components whenever a new change to the source is detected. The solution presented in this blog demonstrates this workflow with an example.

The following image shows an outline of the AWS services used in the automated workflow. AWS CodeCommit can be replaced with other Git repositories like GitHub or GitLab, mirrored into AWS CodeCommit repositories.

1. Getting Started

This section highlights the basic requirements, such as setting up AWS Identity and Access Management (IAM) policies for the different services being used. IAM policies define the access granted to a resource. For example, AWS CodeBuild needs read/write access to AWS IoT Greengrass V2 in order to publish components.


The following are requirements to proceed with the build solution:

1.1 AWS CodePipeline

AWS CodePipeline is used for creating and managing a continuous delivery service. You can use it to manage the processes by accessing AWS CodeCommit logs. Based on the changes pushed to AWS CodeCommit, the pipeline that runs AWS CodeBuild will be triggered to run the build commands as specified. To store the build artifacts, you will need Amazon S3 access, which can be granted through the IAM policies.

1.2 AWS CodeCommit

AWS CodeCommit is the source control service used to host Git repositories. This can be done in a couple of different ways:

  1. Create a Git repository in AWS CodeCommit directly – there are no additional IAM policy requirements
  2. Mirror Git repositories currently hosted in GitLab or GitHub into AWS CodeCommit – you need to configure your GitLab or GitHub repository to mirror into AWS CodeCommit, or migrate a Git repository to AWS CodeCommit

1.3 AWS CodeBuild

AWS CodeBuild uses the source in AWS CodeCommit to build the project, so you must configure the default IAM policy to allow access to AWS CodeCommit in order to perform a git pull. Additionally, access to Amazon S3 is required to store build artifacts. This is optional, but it is good practice to store the artifacts for future access. To build and publish AWS IoT Greengrass V2 components, additional permissions must be added to list and create components:
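As an illustrative sketch, the additional statements attached to the CodeBuild service role might look like the following. The action names are real AWS IoT Greengrass V2 and Amazon S3 actions, but the broad `"Resource": "*"` scoping is an assumption for brevity and should be narrowed for production use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GreengrassComponentPublish",
      "Effect": "Allow",
      "Action": [
        "greengrass:ListComponents",
        "greengrass:ListComponentVersions",
        "greengrass:CreateComponentVersion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ArtifactBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:CreateBucket"
      ],
      "Resource": "*"
    }
  ]
}
```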

1.4 AWS IoT Greengrass V2

Once the component is built and published to AWS IoT Greengrass V2, you will be able to access the component from the AWS IoT Greengrass V2 console or CLI. AWS IoT Greengrass V2 deployments can then be made as required once the correct component is published and available.

2. Managing Source and Build

To build and publish a component with GDK you can use Python and Bash scripts. This section demonstrates how a GDK Python sample can be used to accomplish building and publishing a component.

2.1 GDK

Step 2.1.1: Using a GDK Python sample

For a Python component, use GDK to create a basic sample. The command will create the following files:

- README.md - A standard Readme file
- gdk-config.json - Used to define the GDK build parameters
- recipe.yaml - Used to define the component run processes and related parameters
- main.py - An example Python script that will be run once the component is deployed
- src/ - Directory with a supporting script for main.py
- tests/ - Directory with test scripts

Commands to create a default Python-based component:

$ mkdir HelloWorldPython
$ cd HelloWorldPython/
$ gdk component init -l python -t HelloWorld

Step 2.1.2: Modifying the GDK Python sample

Next, modify the default main.py script and the src/greeter.py script as shown below, and add a sample run.sh Bash script as well. Currently, GDK provides sample templates for Python and Java. However, if your applications require running binaries, Bash scripts, or any other Terminal/CMD commands, you can use a run.sh Bash script. Hence, instead of running the main.py Python script directly, the run.sh Bash script can be used to execute it.

Here is an example of the modified main.py script:

import sys
import src.greeter as greeter

def main():
    args = sys.argv[1:]
    if len(args) == 2:
        print(greeter.get_greeting(args[0], args[1]))

if __name__ == "__main__":
    main()

Here is an example of the modified src/greeter.py script:

def get_greeting(msg1, msg2):
    """Returns greeting string

    Args:
        msg1(string): msg1 to append in the greeting.
        msg2(string): msg2 to append in the greeting.

    Returns:
        string: Returns greeting for the name
    """
    print('The message is {} and {}!'.format(msg1, msg2))
    return '{} {}!'.format(msg1, msg2)
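Before wiring the scripts into GDK, the greeter logic can be sanity-checked locally. The following standalone sketch reproduces get_greeting (so it runs without the src/ package layout) and exercises it the same way run.sh eventually will:

```python
# Local sanity check for the greeter logic shown above;
# get_greeting is reproduced here so the snippet is self-contained.

def get_greeting(msg1, msg2):
    # Log the raw inputs, then return the combined greeting.
    print('The message is {} and {}!'.format(msg1, msg2))
    return '{} {}!'.format(msg1, msg2)

greeting = get_greeting("Hello", "World")
print(greeting)  # Hello World!
```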

Here is an example of what is contained in the run.sh script:


#!/bin/bash

print_usage() { printf "Usage: run.sh -a message1 -b message2\n"; }

while getopts a:b: flag; do
    case "${flag}" in
        a) message1=${OPTARG} ;;
        b) message2=${OPTARG} ;;
        *) print_usage
           exit 1 ;;
    esac
done

echo "Message #1 = $message1"
echo "Message #2 = $message2"

echo "Running main.py script ..."
python3 -u main.py $message1 $message2

Here is an example of what is contained in the updated gdk-config.json file:

  "part": {
    "com.instance.HelloWorldPython": {
      "creator": "<PLACEHOLDER_AUTHOR>",
      "model": "0.0.1",
      "construct": {
        "build_system": "zip"
      "publish": {
        "bucket": "<PLACEHOLDER FOR BUCKET>",
        "area": "<PLACEHOLDER FOR REGION>"
  "gdk_version": "1.0.0"

Here is an example of what is contained in the updated recipe.yaml file:

RecipeFormatVersion: "2020-01-25"
ComponentName: "{COMPONENT_NAME}"
ComponentVersion: "0.0.1"
ComponentDescription: "This is a simple Hello World component written in Python."
ComponentPublisher: "{COMPONENT_AUTHOR}"
ComponentConfiguration:
  DefaultConfiguration:
    configMessage1: "Hello"
    configMessage2: "World"
Manifests:
  - Platform:
      os: all
    Artifacts:
      - URI: "s3://BUCKET_NAME/COMPONENT_NAME/COMPONENT_VERSION/HelloWorldPythonComponent.zip"
        Unarchive: ZIP
    Lifecycle:
      Run: "/bin/bash {artifacts:decompressedPath}/HelloWorldPythonComponent/run.sh -a {configuration:/configMessage1} -b {configuration:/configMessage2}"

Add a buildspec.yml file that will be used by AWS CodeBuild to run commands for the pre-build, build, and post-build processes. Here is an example of a buildspec.yml file with the required commands:

version: 0.2

phases:
  install:
    commands:
      - apt-get update && apt-get install -y zip unzip build-essential wget git curl software-properties-common python3.7 python3-pip
      - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install && rm awscliv2.zip
      - python3 -m pip install -U git+https://github.com/aws-greengrass/aws-greengrass-gdk-cli.git@v1.1.0
      - export PATH=$PATH:~/.local/bin
  build:
    commands:
      - CURRDIR=$(basename "$PWD")
      - cd ../ && mv $CURRDIR HelloWorldPythonComponent && cd HelloWorldPythonComponent
      - gdk component build
      - gdk component publish
  post_build:
    commands:
      - mkdir package && cp -r greengrass-build package/. && cp -r zip-build package/.
      - pwd && ls -al && ls -al ..
artifacts:
  files:
    - package/**/*
  name: gg-component-$(date +%Y-%m-%d-%H-%M-%S).zip

2.2. AWS CodeCommit

To create the AWS CodeCommit repository, from Developer Tools, select CodeCommit and Create repository. This will prompt for details like Repository name, Tags, etc. Once created, you can push the code that was previously created.
The following image shows an example of an AWS CodeCommit repository with the files required for the GDK CLI component build and publish commands. It also contains the modified scripts, namely run.sh, main.py, src/greeter.py, recipe.yaml, gdk-config.json, and buildspec.yml.
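Assuming a repository named HelloWorldPythonComponent in us-east-1 and Git credentials configured for CodeCommit HTTPS access (region, repository name, and file list are placeholders here), pushing the code might look like:

```
git init
git add main.py src/greeter.py run.sh recipe.yaml gdk-config.json buildspec.yml
git commit -m "Initial GDK component with buildspec"
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/HelloWorldPythonComponent
git push origin master
```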

2.3. AWS CodeBuild

The next step is to set up AWS CodeBuild to use the above AWS CodeCommit repository as a source and use the build commands provided in the buildspec.yml file to run the build process. For this, select CodeBuild from Developer Tools and create a project. The process to set up AWS CodeBuild is as follows:

Step 2.3.1: Setting up the build environment

To set the build environment for AWS CodeBuild, use the Amazon Elastic Container Registry (ECR) service with Ubuntu 18.04: public.ecr.aws/ubuntu/ubuntu:18.04. The following shows how the build environment is set up:

Step 2.3.2: Selecting the source for the build

For the source, connect the AWS CodeCommit repository and point it to the correct Branch/Tag/Commit ID. In this example it will be connected to the master branch. Select the IAM policies that were provisioned earlier:

Step 2.3.3: Selecting the commands to run the build

Next, define the Buildspec, which contains the commands to run the actual build. This buildspec is defined in the buildspec.yml file, which is part of the source, so you need to provide the file name here. Optionally, build commands can be entered here directly if you are not using a buildspec.yml file.

Step 2.3.4: Storing the build artifacts

In order to store the build artifacts, connect the correct Amazon S3 bucket. Select zip as an option to save the build artifacts as a compressed package in the Amazon S3 location:

2.4 Creating the Pipeline

To manage the artifacts, the GDK build, and publishing changes, you can create the build pipeline and automate the build processes.

Step 2.4.1: Choosing pipeline settings

From Developer Tools, select CodePipeline and create a new Pipeline. For the service role, select the role that was defined earlier.

Step 2.4.2: Add source stage

Next, choose the AWS CodeCommit repository and branch that was created earlier. Select Amazon CloudWatch Events in this section, which will trigger the pipeline to start whenever it detects a change to the AWS CodeCommit repository specified here.
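Behind the scenes, this option creates a CloudWatch Events (EventBridge) rule matching CodeCommit state changes. A sketch of such an event pattern is shown below; the account ID, region, repository ARN, and branch name are placeholders:

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "resources": ["arn:aws:codecommit:us-east-1:111122223333:HelloWorldPythonComponent"],
  "detail": {
    "event": ["referenceCreated", "referenceUpdated"],
    "referenceType": ["branch"],
    "referenceName": ["master"]
  }
}
```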

Step 2.4.3: Add build stage

Now, connect the AWS CodeBuild project in this stage, which will trigger the build from the source AWS CodeCommit changes.

Step 2.4.4: Add deploy stage

If you are connecting the pipeline with AWS CodeDeploy, you can use this section to add that part. Skip the AWS CodeDeploy stage here, as it is not demonstrated in this example.

Step 2.4.5: Review the pipeline

Now that all the pieces are connected, you can create your pipeline. When invoked by Amazon CloudWatch Events, the pipeline triggers the build. The following image shows the flow defined in AWS CodePipeline. Here, the source is connected to the build, so the pipeline pulls from the source first and then runs the build commands specified in the AWS CodeBuild buildspec.yml.

3. Deploy the component

3.1. Check logs of AWS CodePipeline

  • Once AWS CodePipeline runs successfully, the component will be built and published.
  • To check the logs, go to the AWS CodeBuild project and select the Build logs from the Build history.
  • By checking the logs, you can confirm that the component is saved in the Amazon S3 bucket and is published to AWS IoT Greengrass V2 Components.

3.2. Checking the component in AWS IoT Greengrass V2

  • Once the component is available in AWS IoT Greengrass V2, it can be deployed to IoT Things. You can do this by revising existing deployments to use the updated component, or by creating a new deployment for the IoT Thing or Thing groups.

  • When checking the component in the AWS IoT Greengrass V2 console, you can see details like the ‘Default configuration,’ ‘Lifecycle’ details, and the ‘Artifacts’ location in the Amazon S3 bucket, all of which are based on the recipe.yaml script.
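A deployment can also be created from the CLI. The following is a sketch using the real aws greengrassv2 create-deployment command; the Thing group ARN, deployment name, and account ID are placeholders to adapt to your environment:

```
aws greengrassv2 create-deployment \
  --target-arn "arn:aws:iot:us-east-1:111122223333:thinggroup/MyGreengrassGroup" \
  --deployment-name "HelloWorldPythonDeployment" \
  --components '{"com.example.HelloWorldPython": {"componentVersion": "0.0.1"}}'
```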

4. Cleanup

AWS CodePipeline is provisioned to listen to Amazon CloudWatch Events, so every small update to the AWS CodeCommit repository will trigger the pipeline, and it will build and publish components. Hence, the pipeline can be stopped by selecting Stop execution.
This will also prevent updating the build artifacts as well as the component artifacts in Amazon S3.

5. Conclusion

AWS IoT Greengrass V2 services are usually used in an automated framework where deployments of components are provisioned based on certain events. The GDK CLI offers more flexibility for creating AWS IoT Greengrass V2 components using Python/Java/Bash. However, instead of manually provisioning the build and publish tasks every time there is a change to the component, automation is ideal. This build solution highlights the use of AWS CodePipeline for building and publishing AWS IoT Greengrass V2 components, reducing development effort as well as manual intervention. Moreover, for continuous integration and continuous deployment (CI/CD), versioning is an important aspect that can be simplified and automated by this build solution.

To learn more about AWS IoT Greengrass V2, visit the Documentation and Development Tools. To get started with automated workflows, visit the Blog.

About the Authors

Romil Shah is an IoT Edge Data Scientist in AWS Professional Services. Romil has over 6 years of industry experience in Computer Vision, Machine Learning, and IoT edge devices. He is involved in helping customers optimize and deploy their Machine Learning models for edge devices.
Fabian Benitez-Quiroz is an IoT Edge Data Scientist in AWS Professional Services. He holds a PhD in Computer Vision and Pattern Recognition from The Ohio State University. Fabian is involved in helping customers run their Machine Learning models with low latency on IoT devices.



