Now that the hard work is out of the way, we’re all clear to install some libraries. Common libraries with C bindings that you may want to use are psycopg2, python-mysql, yaml, and all or most of the data science packages (numpy, etc.).
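For example, a requirements.txt that pulls in a few of the libraries mentioned above might look like this (the exact version pins are purely illustrative):
psycopg2==2.7.4
PyYAML==3.12
numpy==1.14.2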
Add whatever you need into requirements.txt. Then, from within the container, in the same directory as the Makefile (which happens to be /code), run:
root@f513331941bc:/code# make libs
Looking at the Makefile, you’ll see (again) that there isn’t much magic to this. The key here is that we’re building our C bindings on the same platform that Lambda uses to run your functions, namely Linux.
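The details of the target may vary, but conceptually it boils down to installing the requirements into the local lib directory using pip's --target flag. A minimal sketch (not necessarily the exact Makefile used here) looks like this:
libs:
	pip install -r requirements.txt -t lib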
If you shut down your container, you’ll notice that your lib directory is still there. This is nice, and it’s on purpose: using the -v (volume) argument to docker run, we map our host’s directory into the container. Any packages we install are built inside the Linux container but ultimately written to our host’s filesystem. You’ll only need to run make libs when you add to or update your requirements.txt file. There is also a "make clean" command, which can be used to start over.
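The exact invocation lives in the Makefile, but the important piece is that volume flag. A minimal sketch of such a docker run, with a placeholder image name, would be:
$ docker run -it -v "$(pwd)":/code some-python-build-image bash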
Now that we have all our libraries, we need to tell our Python code how to find them. At the top of handler.py, I always have these first four lines of code (two imports plus two lines to deal with `sys.path`):
# begin magic four lines
import os
import sys
CWD = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(CWD, "lib"))
# end magic four lines

# now it's ok to import extra libraries
import numpy as np

def handler(event, context):
    pass
Another very useful convention: using a single handler.py file as the entry point for all my functions. The handler does nothing more than the basic bootstrapping of the path, importing my own modules, and handing the work off to those modules. In the end, the file structure looks something like this:
$ tree -L 2
│   ├── dev
│   └── production
├── handler.py
├── lib
├── serverless.yml
└── very
    ├── aws.py
    ├── constants.py
    └── feed.py
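The serverless.yml in that tree is also where each function points back at the single entry point. With the Serverless framework that looks roughly like this (the function name below is illustrative):
functions:
  process_feed:
    handler: handler.handler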
handler.py will import my other modules — which happen to be inside the very directory in this example — and rely on them to execute my business logic. Using this convention, you can be sure that the system path is already set up so that importing your extra modules will work as you’d expect, without needing to alter the path again.
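To make that concrete, here is a sketch of what the hand-off can look like; the feed module comes from the tree above, but process_feed is a made-up function name for illustration:
# handler.py (sketch; process_feed is a hypothetical function name)
import os
import sys
CWD = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(CWD, "lib"))

from very import feed

def handler(event, context):
    # path bootstrapping is done above; hand the real work off to the business-logic module
    return feed.process_feed(event)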
Docker, along with this Makefile, makes it extremely easy to manage different deployments of your Serverless stack and to iterate quickly on your code. Still, there are a few gotchas that take a little time to learn and master. Organizing my Serverless projects like this has saved me quite a bit of time. I can spin up a new project in a matter of minutes and deploy code changes within seconds, all while keeping my host system clean and free of any installation of the Serverless framework. Changing versions of Serverless is a one-line change in the Makefile.
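As a hypothetical sketch of what that one line can look like (the variable name, target name, and version number here are made up for illustration), the Makefile simply pins the framework version in one place and installs it inside the container with npm:
SERVERLESS_VERSION ?= 1.26.1
install:
	npm install -g serverless@$(SERVERLESS_VERSION)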