Conversation

@SilasMarvin (Contributor)

No description provided.

@SilasMarvin SilasMarvin changed the title Initial streaming working Add streaming Nov 2, 2023
@montanalow (Contributor) left a comment:

Very exciting!

Ok(serde_json::from_str(&results)?)
}

pub fn transform_stream(
Contributor:

Why not replace the current transform implementation with the streaming version?

Contributor Author:

I guess if the user calls it without a limit or cursor, the two do exactly the same thing. I'm not sure what the overhead of dipping into Python for every token is. If that isn't an issue, I totally can!

Contributor:

I love the focus on performance. I think we can keep the two different implementations under a single API function, with a fork in the logic depending on the params. The caller shouldn't have to think about that as a high-level split, and we shouldn't have to duplicate all the documentation for the function, just describe the change in behavior based on those arguments.
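The suggested shape could be sketched like this (a minimal illustration only; the function and helper names here are hypothetical placeholders, not the actual postgresml API):

```python
# Hypothetical sketch of a single documented entry point that forks to the
# streaming implementation based on its arguments. The helpers below stand
# in for the real batch and streaming paths.

def _transform_batch(inputs, args):
    # placeholder for the existing non-streaming implementation
    return [f"result:{i}" for i in inputs]

def _transform_stream(inputs, args):
    # placeholder for the streaming implementation: yield tokens one at a time
    for i in inputs:
        yield f"token:{i}"

def transform(inputs, args=None, stream=False):
    """One public function; behavior forks on the `stream` parameter."""
    args = args or {}
    if stream:
        return _transform_stream(inputs, args)
    return _transform_batch(inputs, args)
```

The caller sees one function and one docstring; only the return type (list vs. iterator) changes with the arguments.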

let args = serde_json::to_string(args)?;
let inputs = serde_json::to_string(&inputs)?;

Python::with_gil(|py| -> Result<Py<PyAny>> {
Contributor:

Can probably move all this into the transformers module just for organization.

        return self.done_data.pop(0)
    elif self.done:
        raise StopIteration
    time.sleep(0.1)
Contributor:

Why does this need to sleep?

Contributor Author:

If I don't have it sleep I get:
ERROR: RecursionError: maximum recursion depth exceeded

Contributor:

That's weird...

Contributor:

This is because we hit the recursion depth limit before a new token is added to done_data. I'd suggest moving to a while loop rather than recursing, since Python does not support tail recursion.
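The while-loop version could look like this (a minimal sketch following the attribute names in the snippet above; the producer side that fills done_data and sets done is omitted):

```python
import time

class StreamingIterator:
    """Sketch of the iterator with a while loop instead of recursion,
    so waiting for the next token cannot exhaust Python's recursion
    limit. The producer (whatever appends to done_data and sets done)
    is assumed to run elsewhere and is not shown."""

    def __init__(self):
        self.done_data = []  # tokens produced so far
        self.done = False    # set by the producer when generation ends

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            if self.done_data:
                return self.done_data.pop(0)
            if self.done:
                raise StopIteration
            # NOTE: this spin locks the thread while waiting for tokens;
            # the short sleep trades a little latency for much less CPU.
            time.sleep(0.1)
```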

Contributor:

After moving to a while loop, I think it's worth a big fat comment warning that we are spin locking in this thread. Sleeping for some amount of time on each iteration, in a pure else case, may be a better cost/latency tradeoff.

Contributor:

Is it possible to lock/wait/notify with postgres in this case?
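As an in-process illustration of the wait/notify idea (this uses Python's `threading.Condition`, not Postgres; whether Postgres primitives such as LISTEN/NOTIFY fit here is the open question above), the consumer can block until the producer signals instead of spin-sleeping:

```python
import threading

class NotifyingStream:
    """Hypothetical sketch of a lock/wait/notify variant of the iterator:
    the consumer blocks on a condition variable until the producer pushes
    a token or signals completion. Class and method names are illustrative."""

    def __init__(self):
        self.cond = threading.Condition()
        self.queue = []
        self.done = False

    def push(self, token):
        # producer side: add a token and wake any waiting consumer
        with self.cond:
            self.queue.append(token)
            self.cond.notify()

    def finish(self):
        # producer side: signal that no more tokens are coming
        with self.cond:
            self.done = True
            self.cond.notify()

    def __iter__(self):
        return self

    def __next__(self):
        with self.cond:
            # wait() releases the lock while sleeping, so push()/finish()
            # can run; no CPU is burned while waiting
            while not self.queue and not self.done:
                self.cond.wait()
            if self.queue:
                return self.queue.pop(0)
            raise StopIteration
```

The tradeoff versus the sleep loop: zero wasted wakeups and minimal latency, at the cost of requiring the producer to cooperate by calling notify.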


@SilasMarvin SilasMarvin marked this pull request as ready for review November 8, 2023 21:04
@SilasMarvin SilasMarvin merged commit 3e8cc28 into master Nov 9, 2023
@SilasMarvin SilasMarvin deleted the silas-streaming branch November 9, 2023 18:08