This tutorial explains how to use Server-Sent Events (SSE) in a Python-based pipeline and how to serve the processed query results over an endpoint using FastAPI with an asynchronous, non-blocking solution.
The post describes a workaround approach: create a pipeline task, set synchronous streaming callbacks that hand chunks to the event loop for collection, and yield the collected chunks as server-sent events.
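As a minimal sketch of this workaround (the names and the stand-in pipeline below are illustrative, not the post's actual code), a synchronous streaming callback can hand each chunk to an `asyncio.Queue` on the running event loop via `loop.call_soon_threadsafe`, while the blocking pipeline runs in a worker thread:

```python
import asyncio

async def main() -> str:
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()

    def sync_streaming_callback(chunk: str) -> None:
        # Called from the synchronous pipeline in a worker thread;
        # thread-safe hand-off to the event loop that owns the queue.
        loop.call_soon_threadsafe(queue.put_nowait, chunk)

    def run_sync_pipeline() -> None:
        # Stand-in for a blocking pipeline.run(...) that invokes the
        # streaming callback once per generated chunk.
        for chunk in ["Hello", ", ", "world"]:
            sync_streaming_callback(chunk)
        loop.call_soon_threadsafe(queue.put_nowait, None)  # sentinel: done

    # Run the blocking pipeline in a thread so the event loop stays free.
    task = asyncio.create_task(asyncio.to_thread(run_sync_pipeline))

    collected = []
    while (chunk := await queue.get()) is not None:
        collected.append(chunk)
    await task
    return "".join(collected)

print(asyncio.run(main()))
```

The same pattern applies when the chunks come from a real generator component: the callback stays synchronous, and only the hand-off crosses into the event loop.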
The pipeline is designed synchronously, and components can be added to it dynamically. The API key is passed through the endpoint, and the OpenAI generator is used as a pipeline component to generate responses to user input.
An AsyncPipeline is defined to run the pipeline, and the generated answers are streamed as server-sent events.
A ChunkCollector is defined to queue the generated answers and yield them in SSE format from an endpoint.
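A minimal sketch of such a collector, assuming the interface described above (the class and method names here are assumptions, not the post's exact code): it wraps an `asyncio.Queue` and exposes an async generator that formats each chunk as an SSE `data:` event, suitable for a streaming response body.

```python
import asyncio

class ChunkCollector:
    """Queues generated chunks and yields them as SSE 'data:' events."""

    def __init__(self) -> None:
        self.queue: asyncio.Queue = asyncio.Queue()

    async def generator(self):
        # Async generator suitable for a FastAPI StreamingResponse
        # served with media_type="text/event-stream".
        while True:
            chunk = await self.queue.get()
            if chunk is None:  # sentinel marks the end of the stream
                break
            yield f"data: {chunk}\n\n"  # SSE wire format: one event per chunk

async def demo() -> list:
    collector = ChunkCollector()
    for chunk in ("first", "second", None):
        await collector.queue.put(chunk)
    return [event async for event in collector.generator()]

print(asyncio.run(demo()))
```

In the endpoint, the streaming callback would feed `collector.queue` while the response iterates `collector.generator()`.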
The endpoint can be consumed with fetch-event-source on the frontend to display the stream of generated answers.
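fetch-event-source is a JavaScript client, but as a rough illustration of the wire format it consumes, here is a simplified SSE parser in Python. A real parser also handles `event:`, `id:`, and `retry:` fields and multi-line `data` payloads, which this sketch omits.

```python
raw_stream = (
    "data: The answer\n\n"
    "data: continues here\n\n"
)

def parse_sse(body: str) -> list:
    """Extract 'data:' payloads from an SSE body (simplified)."""
    events = []
    for block in body.split("\n\n"):  # events are separated by blank lines
        for line in block.split("\n"):
            if line.startswith("data: "):
                events.append(line[len("data: "):])
    return events

print(parse_sse(raw_stream))
```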
The post concludes by suggesting that sockets would be a useful alternative when performance matters while handling a large volume of data.
The packages required for the tutorial include fastapi, uvicorn, haystack-ai, haystack-experimental, and pydantic, with Python above version 3.10 and below 3.13.
The complete code, snippets, and a full explanation of each function are provided above.
The tutorial is meant for readers already familiar with FastAPI and Python, as it does not provide a guide to the FastAPI setup itself.