# How to Use
## Option 1: Docker Compose (easiest)
Run `docker-compose up` in this directory. If `docker-compose` is not installed, install it first with `sudo apt install docker-compose`.
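
For reference, a minimal compose file along these lines is enough to build the image and expose the server. This is a sketch, not the repo's actual file: the service name is illustrative, and the image tag reuses the example from Option 2 below; check the `docker-compose.yml` in this directory for the real values.

```yaml
# Minimal sketch -- service name and image tag are assumptions;
# see the docker-compose.yml in this directory for the real values.
version: "3"
services:
  inference:
    build: .
    image: adipu24/project02   # example tag from Option 2
    ports:
      - "5000:5000"            # host:container, matching the inference server port
```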
## Option 2: Build Docker Image
Run `docker build -t {username}/{imagename} .` in this directory (the tag used here was `adipu24/project02`). All required files are included in this directory, so just make sure the GitHub repo is cloned completely.
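
As a sketch, building and running the image by hand might look like the following; the tag reuses the example above, and the port mapping matches the compose setup described under "Other info":

```shell
# Build the image from this directory (tag follows the example above)
docker build -t adipu24/project02 .

# Run it, publishing container port 5000 on the host so the
# endpoints below are reachable at http://localhost:5000
docker run --rm -p 5000:5000 adipu24/project02
```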
## Example requests
A `GET` request to the `/summary` endpoint returns a model summary, including the model's purpose, its version, and its number of parameters.

A `POST` request to the `/inference` endpoint performs classification. The image is sent as a binary payload under the `files` attribute of the request. The response is JSON of the form `{"prediction": "damage"}` or `{"prediction": "no_damage"}`.
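
Concretely, the two endpoints can be exercised with `curl` as sketched below. The form field name `image` is an assumption, so match it to whatever key the server actually reads from the uploaded files:

```shell
# Retrieve the model summary (purpose, version, parameter count)
curl http://localhost:5000/summary

# Send an image for classification; the field name "image" is a guess --
# use the key the server expects in the request's files
curl -X POST -F "image=@example.jpg" http://localhost:5000/inference
```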
# Other info
The inference server listens on port 5000, and the docker-compose.yml file maps host port 5000 to the container. This setup was tested with the grader code and functions as expected.