I'm working on an AI text-generation project.
I'm using the coroutine-style workerman/http-client to call the service's API (the API is OpenAI-compatible and responds normally). I want to stream the output while the request is still in flight; the code that reads the content looks like this:
```php
public function getIterator(): Generator
{
    // Read the streamed body line by line and yield each line as it arrives
    while (!$this->response->getBody()->eof()) {
        $line = $this->readLine($this->response->getBody());
        dump($line);
        yield $line;
        // yield $this->responseClass::from($line);
    }
}

private function readLine(StreamInterface $stream): string
{
    $buffer = '';
    while (!$stream->eof()) {
        // read() returning '' means no more data is available right now
        if ('' === ($byte = $stream->read(1))) {
            return $buffer;
        }
        $buffer .= $byte;
        if ($byte === "\n") {
            break;
        }
    }
    return $buffer;
}
```
But I've found that the workerman/http-client client seems to only emit the response after it has been read in full.
I switched to the Guzzle client and setting stream => true gives the behavior I want, but Guzzle requests appear to be blocking, so I can't serve multiple responses at the same time.
Is there a better way to solve this?
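For context, the Guzzle variant I mean is roughly this (a minimal sketch of the blocking approach described above; the API key is a placeholder):

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client();
$response = $client->request('POST', 'https://api.openai.com/v1/chat/completions', [
    'stream'  => true, // don't buffer the whole body before returning
    'headers' => [
        'Authorization' => 'Bearer sk-xx',
    ],
    'json' => [
        'model'    => 'gpt-3.5-turbo',
        'stream'   => true,
        'messages' => [['role' => 'user', 'content' => 'hello']],
    ],
]);

$body = $response->getBody();
while (!$body->eof()) {
    // Each read blocks the current process until data arrives,
    // which is why concurrent requests don't work here.
    echo $body->read(1024);
}
```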
Use version 2.1.0 or later:
`composer require workerman/http-client ^2.1.0`
Usage is roughly as follows:
```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use Workerman\Connection\TcpConnection;
use Workerman\Http\Client;
use Workerman\Protocols\Http\Chunk;
use Workerman\Protocols\Http\Request;
use Workerman\Protocols\Http\Response;
use Workerman\Worker;

$worker = new Worker('http://0.0.0.0:1234');
$worker->onMessage = function (TcpConnection $connection, Request $request) {
    $http = new Client();
    $http->request('https://api.openai.com/v1/chat/completions', [
        'method' => 'POST',
        'data' => json_encode([
            'model' => 'gpt-3.5-turbo',
            'temperature' => 1,
            'stream' => true,
            'messages' => [['role' => 'user', 'content' => 'hello']],
        ]),
        'headers' => [
            'Content-Type' => 'application/json',
            'Authorization' => 'Bearer sk-xx',
        ],
        // Fires each time a chunk of the upstream response arrives;
        // forward it to the client right away as a chunked-encoding chunk.
        'progress' => function ($buffer) use ($connection) {
            $connection->send(new Chunk($buffer));
        },
        // Fires once the upstream response is complete; an empty Chunk
        // terminates the chunked response.
        'success' => function ($response) use ($connection) {
            $connection->send(new Chunk(''));
        },
    ]);
    // Send the response headers first; the body is streamed via Chunk above.
    $connection->send(new Response(200, [
        //"Content-Type" => "application/octet-stream",
        "Transfer-Encoding" => "chunked",
    ], ' '));
};

Worker::runAll();
```
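If you want to hand complete lines to a generator-style consumer (as the getIterator() in your question does) instead of forwarding raw chunks, you can buffer the data inside the progress callback and split it on newlines yourself. This is only a sketch of that idea, intended as a drop-in replacement for the inline 'progress' closure in the example above; whether you forward each "data: " payload as-is or json_decode() it first is up to your application:

```php
// Sketch only: accumulate partial data and emit complete SSE lines.
$lineBuffer = '';
$onProgress = function ($buffer) use (&$lineBuffer, $connection) {
    $lineBuffer .= $buffer;
    // Keep extracting complete lines; anything after the last "\n" stays buffered
    // until the next progress call.
    while (false !== ($pos = strpos($lineBuffer, "\n"))) {
        $line = substr($lineBuffer, 0, $pos + 1);
        $lineBuffer = substr($lineBuffer, $pos + 1);
        if (strpos($line, 'data: ') === 0) {
            // A complete OpenAI-style SSE line; forward or parse it here.
            $connection->send(new Chunk($line));
        }
    }
};
```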